Document Properties
Kbid: 2Y9665
Last Modified: 27-Nov-2023
Added to KB: 20-Oct-2020
Public Access: Everyone
Status: Online
Doc Type: Guidelines
Product:
  • IOM 3.0
  • IOM 3.1
  • IOM 3.2
  • IOM 3.3
  • IOM 3.4
  • IOM 3.5
  • IOM 3.6
  • IOM 3.7
  • IOM 4.0
  • IOM 4.1
  • IOM 4.2
  • IOM 4.3
  • IOM 4.4
  • IOM 4.5
  • IOM 4.6
  • IOM 4.7
Guide - Intershop Order Management - Technical Overview (3.0 - 4.7)

Introduction

The Intershop Order Management System (IOM) is an e-commerce middleware that consolidates the order processes of all sales channels. It takes incoming orders from all available channels and connects them with the selected order fulfillment processes. Depending on the configuration, these processes can be managed individually for each combination of channels. In addition, IOM provides customers with greater transparency on product availability, order status, and returns. Thus, it supports call center employees, warehouse employees, and business managers in their respective fields of work.

This guide gives a technical overview of the Intershop Order Management System as well as the applied technical concepts and latest technology updates.

The main target group of this document is system administrators.

Glossary

Term | Description
GDPR | General Data Protection Regulation
HA | High availability
HTTP | Hypertext Transfer Protocol
HTTPS | Hypertext Transfer Protocol Secure
IOM | The abbreviation for Intershop Order Management
JMS | Java Message Service
JSON | JavaScript Object Notation
OMS | The abbreviation for Order Management System, the technical name of the IOM
OMT | The abbreviation for Order Management Tool, the graphical management tool of the IOM
REST | Representational State Transfer
RMA | Return Merchandise Authorization
SMTP | Simple Mail Transfer Protocol
SOAP | Simple Object Access Protocol
Spring MVC | A web framework implementing the Model-View-Controller pattern

Overall Architecture

Overview

The following figure provides a very high-level overview of the components of IOM, of other components required by IOM, and of their relations. Going from top to bottom, the following components can be found:

All incoming communication has to go through an HTTP-Proxy, which has the following purposes and requirements:

  • Load-balance the incoming HTTP/HTTPS traffic to the IOM application servers.
  • Provide session-affinity (sticky sessions) for requests to OMT.
  • Terminate HTTPS since IOM application-servers are only able to handle HTTP traffic directly.
  • Observe health-status of IOM application servers in order to send HTTP requests to healthy servers only.

There are IOM application servers of two different types. Every IOM installation needs exactly one application server that runs both IOM scalable applications and IOM singleton applications. For high availability and scalability, many more IOM application servers may be part of the IOM cluster, but these must run IOM scalable applications only. More information can be found in the section IOM Application Server.

The IOM application servers use two different kinds of persistent storage: a shared file system and a database. In fact, an IOM installation is defined by its data, persisted at these two places. Therefore, the persistent data have to be managed very carefully, and both storages have to be backed up in order to be able to restore an IOM installation.

Finally, the IOM applications need to be able to access an SMTP server in order to send e-mails as part of implemented business processes.

HTTP-Proxy

Any HTTP proxy that meets the requirements listed in the Architecture Overview can be used. The section Ingress and Ingress Controller provides more information about the actual implementation.

IOM Application Server

IOM applications run inside the Wildfly Application Server. This application server provides many base technologies used by IOM, e.g. JMS, JPA, and EJB.

For further information please see https://docs.wildfly.org/.

Shared File System

The following table gives an overview of the directory structure of IOM's shared file system.

Sub-Directory | Description
archive/ | Folder used to archive old data, especially sensitive data, before deleting them from the database.
importarticle/ | Import/export of all kinds of data (products, stocks, dispatches, returns)
communication-messages/ | Exchange of data, e.g., for orders.xml
media/ | Media data, e.g., product images
pdf/ | PDF documents created by the IOM application server, e.g., invoices, credit notes, delivery notes
jobs/ | Reserved for projects: working files and archived files for scheduled jobs of projects

PostgreSQL Database Server

IOM requires a PostgreSQL Database Server. For further information see https://www.postgresql.org. The section PostgreSQL Database Server will give more information on how to use the PostgreSQL Database Server along with IOM.

SMTP Server

An SMTP server is required to send different kinds of e-mails. The section SMTP Server provides information on how to use the SMTP server along with IOM.

Helm Charts for IOM

Overview

IOM is delivered in the form of Docker images that are designed to run in Kubernetes. Intershop also provides Helm Charts for IOM, which make it very easy to operate IOM.

A general view of the concept of operating IOM with the help of Helm charts is shown in the following figure. The project owners (e.g. implementation partners) have to define a set of values that controls the behavior of the IOM installation. Using the Helm command-line tool along with these values and the IOM Helm Charts, they are able to execute all tasks required to run IOM in a Kubernetes environment.
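This workflow can also be scripted, e.g. in a CI/CD pipeline. The following minimal sketch (Python) shows such an automated Helm run; the chart reference intershop/iom, the release name ci, the namespace iom, and the values file iom-values.yaml are illustrative assumptions, not fixed names.

Example
# Minimal sketch: install or upgrade an IOM Helm release from a values file.
# Chart reference, release name, namespace, and values file are illustrative.
import subprocess

def deploy_iom(release: str, namespace: str, values_file: str) -> None:
    """Run 'helm upgrade --install' for the given IOM release."""
    subprocess.run(
        ["helm", "upgrade", "--install", release, "intershop/iom",
         "--namespace", namespace, "--create-namespace",
         "--values", values_file],
        check=True,
    )

deploy_iom("ci", "iom", "iom-values.yaml")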

For more information on how to use Helm Charts for IOM see Operate Intershop Order Management (GitHub).

Inside Helm Charts for IOM

The main goal of the IOM Helm Charts is to provide a running IOM system. For this reason, the Helm Charts for IOM have to cover all components that are required to operate IOM, as described in the section above. These are:

  • HTTP-Proxy for load-balancing, HTTPS-termination, and sticky sessions,
  • IOM application servers in two different flavors: with singleton applications and without,
  • SMTP-Server,
  • PostgreSQL Database-Server with persistent storage of database data,
  • and a shared file system.

These components have to be translated into the world of Kubernetes.

Ingress and Ingress Controller

Within a Kubernetes environment, an Ingress object defines how HTTP access to the underlying service (IOM in this case) has to be made. In fact, the Ingress object holds the configuration for load-balancing, HTTPS termination, and sticky sessions. However, the Ingress object is only a configuration snippet. This configuration has to be applied to an Ingress controller, which is the software that actually runs. Different implementations of Ingress controllers exist; the most common one is the NGINX Ingress controller. Within professional Kubernetes clusters, usually only one global Ingress controller exists, which is used by all Ingress objects.

There are two problems the Helm Charts for IOM have to deal with:

  • An Ingress controller is not an integral part of a Kubernetes cluster. It always has to be installed as an extra component.
  • The configuration of sticky sessions within the Ingress object is specific to the NGINX Ingress controller implementation. If a Kubernetes cluster uses a different implementation of the Ingress controller, IOM will not work as expected.

To solve these problems, the Helm Charts for IOM provide an integrated NGINX Ingress controller, which can be used if necessary. This is the case if no Ingress controller exists at all, or if the existing global Ingress controller is not an NGINX implementation. In the latter case, the internal NGINX Ingress controller has to be looped in as a proxy between the global Ingress controller and IOM.

The following table shows use cases and the corresponding settings of values to properly control Helm Charts for IOM. See Operate Intershop Order Management (GitHub) for examples.

# | Use Case | nginx.enabled | nginx.proxy.enabled | ingress.className
1 | Global NGINX Ingress controller available | false | - | -
2 | Global Ingress controller available, but it is not an NGINX implementation | true | true | -
3 | No global Ingress controller available at all | true | false | nginx-iom

nginx.enabled and nginx.proxy.enabled in the header of the table above are two parameters that control the integrated NGINX Ingress controller of the IOM Helm Charts. These parameters are explained in detail in Operate Intershop Order Management (GitHub). ingress.className in the table header is a parameter that has to be set at Ingress object definitions. The description of this parameter is also covered by Operate Intershop Order Management (GitHub).
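The decision logic behind the table can be summarized in a few lines. The following sketch (Python) is purely illustrative: the dictionary keys mirror the Helm values named above, while the function itself is hypothetical.

Example
# Minimal sketch of the decision logic behind the table above.
def ingress_values(global_controller_exists: bool, controller_is_nginx: bool) -> dict:
    """Map the cluster situation to the Helm values shown in the table."""
    if global_controller_exists and controller_is_nginx:
        # Use case 1: reuse the global NGINX Ingress controller.
        return {"nginx.enabled": False}
    if global_controller_exists:
        # Use case 2: loop the integrated NGINX controller in as a proxy.
        return {"nginx.enabled": True, "nginx.proxy.enabled": True}
    # Use case 3: the integrated controller serves requests directly and has
    # to be selected explicitly via its Ingress class name.
    return {"nginx.enabled": True, "nginx.proxy.enabled": False,
            "ingress.className": "nginx-iom"}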

IOM Application Servers

According to the section Overall Architecture above, two different types of IOM application servers exist: exactly one application server per installation that runs both IOM singleton and IOM scalable applications, and zero or more application servers that run IOM scalable applications only.

The IOM Helm Charts use a StatefulSet to realize the required behavior. Within a StatefulSet, the pods have fixed names, all ending with a number. The first pod (ending with 0) is the only one that runs the IOM singleton applications.
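Since the pod names are stable, it is easy to determine from the outside which pod runs the singleton applications. A minimal sketch (Python), assuming pod names like ci-iom-0:

Example
# Minimal sketch: within the StatefulSet, only the pod with ordinal 0
# runs the IOM singleton applications.
def runs_singleton_apps(pod_name: str) -> bool:
    return pod_name.rsplit("-", 1)[-1] == "0"

assert runs_singleton_apps("ci-iom-0")
assert not runs_singleton_apps("ci-iom-1")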

Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).

SMTP Server

Different types of IOM installations require different types of SMTP servers. A production environment of IOM requires access to a professional e-mail service to deliver the e-mails sent by IOM. Of course, this professional e-mail service is not part of the Helm Charts for IOM.

The SMTP server that is part of the IOM Helm Charts is dedicated to test, demo, and pre-production installations only. MailHog is an SMTP server that was created especially for testing purposes. For this reason, MailHog is part of the IOM Helm Charts.

MailHog is not only an SMTP server: it also provides a REST API and a web GUI to access the received e-mails. The Helm Charts for IOM provide the required Ingress object, which has to be configured in the same way as described in the section Ingress and Ingress Controller.

Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).

PostgreSQL Database Server

As with the SMTP server, different types of IOM installations have different requirements for the PostgreSQL Database Server. A production system of IOM requires a professionally managed PostgreSQL Database Server that provides HA (high availability) features, full backup and restore capabilities, etc.

The Helm Charts for IOM do not cover a PostgreSQL Database Server that is suitable for a production environment. For production environments, the PostgreSQL Database Server should either be consumed as a managed service that provides all the required features, or it must be set up and operated manually.

The PostgreSQL Database Server that is part of the IOM Helm Charts does not provide any HA or backup/restore features. It should be used for test and non-critical demo installations only. The integrated PostgreSQL Database Server allows deciding whether to store the database data in memory or to persist them in the file system.

Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).

Shared File System

The shared file system is as important for IOM as the database data. Therefore, production systems of IOM must use an external storage provider (e.g. Azure Files) instead of the one built into the Helm Charts for IOM. For the same reason, the external storage provider needs to have very good backup and restore capabilities.

Non-critical IOM installations, like test and demo installations, may use the storage provider that is defined by the Helm Charts for IOM.

Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).

Customization

An IOM installation without customization cannot do anything; it is only an empty shell that needs to be filled.

Customization of IOM consists basically of the following things:

  • An SQL configuration, defining the behavior of IOM,
  • Mail and XSL templates for creation of mails and PDF documents,
  • Deployment artifacts for various purposes, e.g. to modify business processes or to adapt customer systems,
  • Any sort of files needed otherwise, e.g. certificates.

All these data have to be provided by implementation partners in a specific structure, which is described in Guide - IOM Standard Project Structure 3.X. The easiest way to provide artifacts in this standard project structure is to use the IOM Project Archetype. Projects created this way also make it easy to create the project-specific IOM Docker image, which can then be rolled out by the IOM Helm Charts.

Monitoring

IOM provides standard metrics in Prometheus format at http://<pod-name>:9990/metrics.
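The endpoint can be scraped by Prometheus or requested manually. A minimal sketch (Python) of a manual request, assuming a pod named ci-iom-0 that is reachable under this name:

Example
# Minimal sketch: fetch the Prometheus metrics of one application server.
# The pod name "ci-iom-0" is illustrative.
from urllib.request import urlopen

with urlopen("http://ci-iom-0:9990/metrics") as response:
    for line in response.read().decode("utf-8").splitlines():
        if not line.startswith("#"):  # skip HELP/TYPE comment lines
            print(line)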

Logging

IOM writes all logging information to stdout. Each log entry is written as a single line in JSON format. There are different sources of log messages, all using different formats. To enable automated processing of such log lines, all log formats provide uniform meta-data that give more information about the content and structure of the actual log content.
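Since every line is a self-contained JSON document, automated processing can simply parse each line and dispatch on the logType meta-data key. A minimal sketch (Python), assuming the log stream is piped to stdin; the selected keys are taken from the tables below:

Example
# Minimal sketch: parse IOM log lines from stdin and dispatch by logType.
import json
import sys

for raw_line in sys.stdin:
    entry = json.loads(raw_line)
    log_type = entry["logType"]  # "access", "message", or "script"
    if log_type == "access":
        print(entry["requestLine"], entry["responseCode"])
    elif log_type == "message":
        print(entry["level"], entry["message"])
    else:  # "script"
        print(entry["processName"], entry["message"])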

Meta-Data

Key | Description
tenant | The name of the tenant, e.g. Intershop. Note: Deprecated since IOM 3.4.0.0. Datadog will inject the corresponding information in the future, without the need to loop it through IOM.
environment | The name of the environment, e.g. prod, pre-prod, etc. Note: Deprecated since IOM 3.4.0.0. Datadog will inject the corresponding information in the future, without the need to loop it through IOM.
logHost | The hostname of the pod that has written the log line
logVersion | The log-type-specific version of the log format
logType | One of: access, message, script. access: access log of Wildfly's Undertow sub-system. message: log messages written by the Wildfly Application Server, its sub-systems, and IOM. script: messages written by shell scripts, e.g. to start Wildfly, initialize the database, etc.
appName | The name of the container and of the customization if available, e.g. iom, iom-app, iom-config, iom-app+ci, iom+ci
appVersion | The version of IOM and of the customization if available, e.g. 3.0.0.0+1.2.0.0
configName | Name of the project configuration that was selected, e.g. ci.

access-log

Version 1.0 of access-log provides the following information:

Key | Description
eventSource | Default entry, made by the Undertow subsystem: the source of the event in the request. There is redundancy with the meta-data, but the Undertow subsystem should not define the content of IOM's meta-data.
hostName | Default entry, made by the Undertow subsystem: the Wildfly host that processed the request. There is redundancy with the meta-data, but the Undertow subsystem should not define the content of IOM's meta-data.
bytesSent | The number of bytes sent in the body of the response.
dateTime | Date and time of the request, using the format string "yyyy-MM-dd'T'HH:mm:ss.SSSXXX" (see the Javadoc of SimpleDateFormat).
remoteHost | IP/hostname of the host that sent the request.
requestLine | The complete request line of the HTTP request.
responseHeaderContent-Type | Content of the response header Content-Type.
responseHeaderSet-Cookie | Content of the response header Set-Cookie.
responseCode | HTTP response code.
remoteUser | Name of the user sending the request.
localIp | IP of the Wildfly Application Server that received the request.
localPort | Port of the Wildfly Application Server that received the request.
requestProtocol | The protocol of the request.
responseTime | Response time in milliseconds.
requestScheme | The URI scheme of the request.
requestHeaderReferer | Content of the request header Referer.
requestHeaderUser-Agent | Content of the request header User-Agent.
requestHeaderHost | Content of the request header Host.
requestHeaderCookie | Content of the request header Cookie.
requestHeaderX-Forwarded-For | Content of the request header X-Forwarded-For.
requestHeaderX-Real-IP | Content of the request header X-Real-IP.
requestHeaderX-Forwarded-Host | Content of the request header X-Forwarded-Host.
requestHeaderX-Forwarded-Proto | Content of the request header X-Forwarded-Proto.

For better readability, the log line in the following example was formatted. The IOM application server always writes such a log entry to a single line.

Example
{
  "eventSource": "web-access",
  "hostName": "default-host",
  "tenant": "Intershop",
  "environment": "aks-ci",
  "logHost": "ci-iom-0",
  "logVersion": "1.0",
  "appVersion": "3.0.0.0-SNAPSHOT@19137+1.2.0.0-SNAPSHOT",
  "appName": "iom-app+ci",
  "logType": "access",
  "configName": "ci",
  "bytesSent": 0,
  "dateTime": "2020-09-15T16:56:20.697Z",
  "localIp": "10.244.1.7",
  "localPort": 8080,
  "remoteHost": "10.244.0.148",
  "remoteUser": null,
  "requestHeaderReferer": "http://global-ingress-nginx-controller.iom-3-0-0-0-nginx.svc.cluster.local/omt/app/roleAssignment/userManagement",
  "requestHeaderUser-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0",
  "requestHeaderHost": "global-ingress-nginx-controller.iom-3-0-0-0-nginx.svc.cluster.local",
  "requestHeaderCookie": "route=1600188975.597.41.791971; JSESSIONID=stHSCG0HI0eWUgLOThNGBm-pYEtfSohEh6js3jdZ.ci-iom-0; org.springframework.web.servlet.i18n.CookieLocaleResolver.LOCALE=en",
  "requestHeaderX-Forwarded-For": "10.244.0.148",
  "requestHeaderX-Real-IP": "10.244.0.148",
  "requestHeaderX-Forwarded-Host": "global-ingress-nginx-controller.iom-3-0-0-0-nginx.svc.cluster.local",
  "requestHeaderX-Forwarded-Proto": "http",
  "requestLine": "GET /omt/static/amCharts/js/amCharts.js?version=3.0.0.0-SNAPSHOT HTTP/1.1",
  "requestProtocol": "HTTP/1.1",
  "requestScheme": "http",
  "responseCode": 304,
  "responseHeaderContent-Type": null,
  "responseHeaderSet-Cookie": null,
  "responseTime": 0
}

message-log

Version 1.0 of message-log provides the following information:

Key | Description
timestamp | The timestamp of the log message.
sequence | The sequence number of the message.
loggerClassName | The class name of the logger.
loggerName | The name of the logger.
level | The level of the logged message.
message | The simple unformatted message without stack trace.
threadName | The name of the caller's thread.
threadId | The ID of the caller's thread.
mdc | The mapped diagnostic context entries.
ndc | The nested diagnostic context entries.
hostName | The hostname of the IOM application server that has written the message.
processName | The name of the process.
processId | The ID of the process.
stackTrace | The exception stack trace (formatting characters are present, but quoted).
sourceClassName | The class of the code calling the log method.
sourceFileName | The source file of the code calling the log method.
sourceMethodName | The caller's method name.
sourceLineNumber | The line number of the caller.
sourceModuleName | The name of the module the log message came from.
sourceModuleVersion | The version of the module the log message came from.

For better readability, the log line in the following example was formatted. The IOM application server always writes such a log entry to a single line. Additionally, the stack trace was shortened and line breaks were added, so the example does not show valid JSON.

Example
{
  "timestamp": "2020-09-14T09:38:14.852Z",
  "sequence": 1933085,
  "loggerClassName": "org.jboss.as.ejb3.logging.EjbLogger_$logger",
  "loggerName": "org.jboss.as.ejb3.invocation",
  "level": "ERROR",
  "message": "WFLYEJB0034: EJB Invocation failed on component CreateOrderTransmissionPTBean for method public abstract bakery.logic.valueobject.ProcessContainer bakery.logic.service.controller.Executable.execute(bakery.logic.valueobject.ProcessContainer) throws java.lang.Exception",
  "threadName": "Thread-20 (ActiveMQ-client-global-threads)",
  "threadId": 1406,
  "mdc": {},
  "ndc": "",
  "hostName": "ci-iom-0",
  "processName": "jboss-modules.jar",
  "processId": 222,
  "stackTrace": ": javax.ejb.EJBTransactionRolledbackException:  (orderPU) exception found for object 'class bakery.persistence.dataobject.configuration.connections.CommunicationPartnerDO'\n\tat
org.jboss.as.ejb3@17.0.0.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:203)\n\tat
org.jboss.as.ejb3@17.0.0.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.required(CMTTxInterceptor.java:364)\n\tat
...
org.hibernate@5.3.10.Final//org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1537)\n\tat org.hibernate@5.3.10.Final//org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1505)\n\tat
org.hibernate@5.3.10.Final//org.hibernate.query.Query.getResultList(Query.java:132)\n\tat
deployment.bakery.base-app-3.0.0.0-SNAPSHOT.ear.bakery.persistence-core-3.0.0.0-SNAPSHOT.jar//bakery.persistence.bean.configuration.connections.CommunicationPartnerPersistenceBean.getCommunicationPartnerList(CommunicationPartnerPersistenceBean.java:1087)\n\t
... 321 more\n",
  "sourceClassName": "org.jboss.as.ejb3.component.interceptors.LoggingInterceptor",
  "sourceFileName": "LoggingInterceptor.java",
  "sourceMethodName": "processInvocation",
  "sourceLineNumber": 77,
  "sourceModuleName": "org.jboss.as.ejb3",
  "sourceModuleVersion": "17.0.0.Final",
  "tenant": "Intershop",
  "environment": "aks-ci",
  "logHost": "ci-iom-0",
  "logVersion": "1.0",
  "appVersion": "3.0.0.0-SNAPSHOT@19129+1.2.0.0-SNAPSHOT",
  "appName": "iom-app+ci",
  "logType": "message",
  "configName": "ci"
}

script-log

Version 1.0 of script-log provides the following information:

Key | Description
timestamp | The timestamp of the log message, formatted by "date --iso-8601=seconds".
level | The level of the logged message.
processName | The name of the process that has written the message.
message | The simple unformatted message.
additionalInfo | Additional info belonging to the message.

For better readability, the log line in the following example was formatted. The IOM scripts always write such a log entry to a single line.

Example
{
  "tenant": "Intershop",
  "environment": "aks-ci",
  "logHost": "ci-iom-0",
  "logVersion": "1.0",
  "appName": "iom-config+ci",
  "appVersion": "3.0.0.0-SNAPSHOT@19129+1.2.0.0-SNAPSHOT",
  "logType": "script",
  "timestamp": "2020-09-14T09:26:31+00:00",
  "level": "INFO",
  "processName": "apply_json_config.sh",
  "message": "success",
  "configName": "ci",
  "additionalInfo": "462 files were processed."
}

Configuration

Logging can be configured by a set of parameters, all having the prefix log. A detailed description of these parameters can be found in Operate Intershop Order Management (GitHub).

IOM Application Server

Overview

IOM is a system that is mainly event-driven. Events trigger business processes that run asynchronously within the IOM application servers.

These business processes may send other events, triggering other business processes, and so on. Initial sources of events may be HTTP requests coming from outside, or jobs and schedules triggered by timers.

Technically, events are realized by Java Message Service (JMS), and business processes are realized by message-driven beans.

Deployment Artifacts

The following list shows the deployment artifacts of IOM and the order in which they are loaded.

bakery.base-app-${version}.ear

The application Base contains the essential (and crucial) functionality of the IOM and provides several functions used by the other applications.

Customization

Right after the base application, the applications provided by project customizations are loaded.

process-app-${version}.ear

The application Process contains message-driven beans and is the starting point of the business processes of the IOM. Typical business processes are the announcement of an order, the routing of ordered articles to one or more suppliers, the creation of invoice documents, or the creation of payment notifications. Processes are triggered by the Control application, by other locally running business processes, and by incoming HTTP requests. Messages are sent and received locally only, not from other application servers.

bakery.control-app-${version}.war

The application Control is responsible for all processes that should be triggered periodically (scheduled). Scheduled processes are for example:

  • Continue processing of business objects in an abnormal state
  • Import and export

The application Control is a singleton application and must be deployed on one IOM application server only.

bakery.impex-app-${version}.war

The application Impex is responsible for the import and export of selected business objects. Impex can be used to exchange data with the connected actors as required. Possible business objects can be orders, customers, or products, for example.
The application Impex is one of the singleton applications and must be deployed on one IOM application server only.

bakery.communication-app-${version}.ear

The application Communication is responsible for communication with external applications. Intended external applications are mostly shops and suppliers. Offered services include general order handling, return management, stock reservation, and more. Services are offered as SOAP and REST.


bakery.omt-app-${version}.war

The application OMT is the standard graphical user interface of the Intershop Order Management System.

It can be used to manage the IOM in a more comfortable way using a common Internet browser. This tool provides functionality to manage customers, items, and orders. Due to the sensitive data, a login is needed. For this purpose, the OMT comes with user and role management.

For the frontend functionality, the application uses several frameworks, e.g. Bootstrap and jQuery. The backend of the OMT is based on frameworks such as Spring, Spring MVC, and Hibernate.

oms.rest.communication-app-${version}.war

The application Communication is responsible for communication with external applications. Intended external applications are mostly shops and suppliers. Offered services include general order handling, return management, stock reservation, and more. Services are offered as SOAP and REST.


gdpr-app-${version}.war

The application GDPR offers functionality, including REST interfaces, to support the General Data Protection Regulation for the IOM as well as for other external systems that can be connected.

Also, see Reference - IOM REST API.

rma-app-${version}.war

The application RMA offers functionality, including REST interfaces, to support the Return Merchandise Authorization process for the IOM as well as for other external systems that can be connected.

For further information see Overview - IOM Return Merchandise Authorization and Overview - Intershop Order Management REST API.

transmission-app-${version}.war

The application Transmission offers functionality, including REST interfaces, to support the message transmission handling of the IOM.

For further information see Overview - Intershop Order Management REST API.

schedule-app-${version}.war

The application Schedule provides the functionality of customizable, timer-triggered jobs.

For further information see Cookbook - IOM Job Framework.

order-state-app-${version}.war

The application Order State is a replacement for the SOAP OrderState service and offers a REST interface to get the status of one or more orders for given search criteria.

For further information see Overview - Intershop Order Management REST API.

order-app-${version}.war

The application Order is a replacement for the SOAP Order service and offers a REST interface to create orders.

For further information see Overview - Intershop Order Management REST API.

oms.monitoring-app-${version}.war

The application Monitoring supports the monitoring of application servers.

Please see section Health Check Requests for more information.

Quartz Jobs

The message-driven beans defined by the application Control are triggered by Quartz jobs, defined in quartz-jobs-cluster.xml. Since the application Control is rolled out to one IOM application server only, the Quartz configuration becomes effective only on the IOM application server where the application Control is deployed. However, the Quartz sub-system and the corresponding job configuration are active on all IOM application servers. Please keep this in mind when customizing Quartz jobs.

See also Reference - IOM Quartz Jobs 2.2.

Schedules

There is a second concept that allows jobs to be triggered time-based: Schedules.

This concept is mainly intended to be used by projects to define custom jobs. The respective application Schedule is rolled out to all IOM application servers. Hence, the jobs triggered by Schedules run on all IOM application servers, too.

Also, see Cookbook - IOM Job Framework.

JMS

In contrast to traditional JMS-driven applications, JMS messages in IOM are sent and received locally only. There is no distribution of JMS events across different IOM application servers.

High Availability

High availability (HA) can be defined as follows:

The system is designed for the highest requirements in terms of performance and reliability. Several platform capabilities allow easy scaling without downtimes.

The following sections describe a tested and working approach to enabling IOM to be highly available.

IOM Application Servers

High availability can be provided by using multiple IOM application servers running in parallel. As shown in the section Overall Architecture, there are two different types of IOM application servers. Each IOM cluster needs to have exactly one IOM application server that runs the IOM singleton applications. However, these IOM singleton applications do not process any requests coming from outside; they only process jobs asynchronously.

From the standpoint of high availability, all IOM application servers are identical in their ability to process requests synchronously. Hence, if there is at least one running IOM application server, the application will be available.

For high availability, Helm value replicaCount has to be set to 2 or higher, see Operate Intershop Order Management (GitHub).

During updates of IOM, the system cannot be available since database migration cannot be processed while application servers are still running. For more information see Operate Intershop Order Management (GitHub).

Shared File System

Some directories of the IOM servers contain stateful runtime data that have to be shared by all IOM servers. These directories are placed as sub-directories within the shared file system of IOM.

Intershop recommends using an HA-ready storage service for the shared file system; otherwise, IOM cannot be highly available. If the shared file system is not available, all IOM servers are affected and will not be able to answer any requests.

For high availability, the Helm value persistence.storageClass has to be set to a highly available storage class. See Operate Intershop Order Management (GitHub).

Load Balancer / Ingress Controller

According to the section Ingress and Ingress Controller, the load balancer / Ingress controller can be provided globally within the Kubernetes cluster or by the IOM Helm Charts themselves. If the global Ingress controller is used, it has to be ensured that it is highly available. If the integrated NGINX Ingress controller is used, at least two instances of the controller must run. This is controlled by the parameter ingress-nginx.controller.replicaCount, see Operate Intershop Order Management (GitHub).

Sticky Sessions / Session Failover

Session stickiness and session failover are realized by the Ingress controller. Depending on the capabilities of the Ingress controller, this is realized by the global or by the integrated NGINX Ingress controller, see section Ingress and Ingress Controller.

If one of the IOM application servers is not available, e.g. due to an upgrade or technical issues, the load balancer / Ingress controller has to be able to send all incoming requests dedicated to this IOM application server to one of the remaining IOM application servers. To do so, IOM provides a REST API for health check requests, which informs the Kubernetes cluster about the state of each IOM application server. The corresponding configuration is already part of the IOM Helm Charts. The health check requests support the load balancer / Ingress controller in deciding which IOM application servers to use and which not.

Health Check Requests

All IOM application servers provide a health check that can be requested using the URL /monitoring/services/health/status. It responds with HTTP status code 200 if the application server is healthy. Otherwise, it responds with 5XX.

To ease error analysis, the content delivered by the health check URL contains further information about processed checks. This information is provided in a machine-readable format (JSON), which can also easily be understood by humans.
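A minimal sketch (Python) of such a health check request; the host name ci-iom-0 and the port 8080 are illustrative:

Example
# Minimal sketch: request the health status of one application server.
# Host name and port are illustrative.
from urllib.error import HTTPError
from urllib.request import urlopen

try:
    with urlopen("http://ci-iom-0:8080/monitoring/services/health/status") as response:
        print("healthy:", response.status, response.read().decode("utf-8"))
except HTTPError as error:  # a 5XX code indicates an unhealthy server
    print("unhealthy:", error.code)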

See Concept - IOM Server Health Check for more information.

High Availability Database

All IOM application servers connect to the PostgreSQL database.

For high availability of the whole system, the database and the connection to it also have to support HA:

  • A database reconnect has to be possible.
  • A PostgreSQL HA cluster must be usable.

DB Reconnect

To provide HA, the application servers are able to reconnect to the database without a restart.

To work properly, invalid connections must be recognized and evicted from the pool. The xa-datasource configuration defines how this happens.

IOM uses the background validation checker rather than the validate-on-match method to reduce validation overhead. Moreover, the timeout configuration parameters may influence the reconnect behavior (old connections might not be evicted as long as the timeouts are not reached). For more information about the data source configuration, see Datasource Parameters in the Wildfly documentation.

The current configuration of IOM xa-datasource looks like this:

Exemplary Configuration of XA-Datasource
# The pool size depends on the number of application servers and the database server resources.
# It can be set at runtime by corresponding environment variables.
/subsystem=datasources/xa-data-source=OmsDB: min-pool-size="${env.JBOSS_XA_POOLSIZE_MIN}"
/subsystem=datasources/xa-data-source=OmsDB: max-pool-size="${env.JBOSS_XA_POOLSIZE_MAX}"
/subsystem=datasources/xa-data-source=OmsDB: pool-prefill="true"

# timeouts
/subsystem=datasources/xa-data-source=OmsDB: set-tx-query-timeout="true"
/subsystem=datasources/xa-data-source=OmsDB: query-timeout="3600"
/subsystem=datasources/xa-data-source=OmsDB: blocking-timeout-wait-millis="3000"
/subsystem=datasources/xa-data-source=OmsDB: idle-timeout-minutes="60"

# connection validation
/subsystem=datasources/xa-data-source=OmsDB: validate-on-match="false"
/subsystem=datasources/xa-data-source=OmsDB: background-validation="true"
/subsystem=datasources/xa-data-source=OmsDB: background-validation-millis="20000"
/subsystem=datasources/xa-data-source=OmsDB: exception-sorter-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"
/subsystem=datasources/xa-data-source=OmsDB: valid-connection-checker-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"

# features, that are not used or not supported by IOM
/subsystem=datasources/xa-data-source=OmsDB: interleaving="false"
/subsystem=datasources/xa-data-source=OmsDB: pad-xid="false"
/subsystem=datasources/xa-data-source=OmsDB: wrap-xa-resource="false"
/subsystem=datasources/xa-data-source=OmsDB: same-rm-override="false"
/subsystem=datasources/xa-data-source=OmsDB: share-prepared-statements="false"

# required by metrics
/subsystem=datasources/xa-data-source=OmsDB: statistics-enabled="true"

PostgreSQL HA Cluster

IOM supports access to PostgreSQL HA clusters, but always has to be connected to the master database.

A PostgreSQL HA cluster usually consists of one master server and one or more hot-standby servers. The master server is the only one that is allowed to change data. The failover process requires an additional witness server if the total number of servers (masters + standbys) is even.

During the failover, the IOM application must be redirected to the new master. One solution is to add a proxy layer between the IOM application servers and the PostgreSQL HA cluster. This proxy layer can be realized by PgBouncer, which has to be reconfigured on the fly (without restart) whenever the current master changes. Being a connection pool, PgBouncer can also be used to limit the number of connections to PostgreSQL. More than one instance of PgBouncer should be deployed to avoid a single point of failure.

The IOM database connection address is defined by the Helm parameter oms.db.hostlist, which supports a list of one or more database host addresses. For more information see Operate Intershop Order Management (GitHub).
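At the client level, a similar failover idea can be illustrated with libpq's multi-host support. The following sketch (Python/psycopg2) is illustrative only and is not necessarily how IOM implements oms.db.hostlist; host names, port, and credentials are assumptions.

Example
# Minimal sketch: connect to the current master of a PostgreSQL HA cluster.
# libpq tries the listed hosts in order; target_session_attrs="read-write"
# skips hot-standby servers. All connection values are illustrative.
import psycopg2

connection = psycopg2.connect(
    host="pgbouncer-1,pgbouncer-2",
    port="6432,6432",
    dbname="oms_db",
    user="oms_user",
    password="secret",
    target_session_attrs="read-write",
)
print(connection.get_dsn_parameters())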

For more information about PostgreSQL HA clusters, see http://repmgr.org and https://pgbouncer.github.io.

Clock and Time Zone Synchronization

All IOM application servers and the IOM database require synchronized clocks. Additionally, all IOM application servers and the IOM database have to use the same time zone. Currently, the time zone is fixed and set to Etc/UTC for the IOM application servers and the database.

Transregional Installation


Preconditions

Transregional installations require IOM v.3.5.0.0 or newer in combination with IOM Helm charts v.1.4.0 or newer.

Overview

A transregional installation of IOM spans over different Kubernetes clusters in different regions or at least in different locations. The goal of this type of setup is to guarantee continued availability, even if a whole location has failed.

This document covers IOM Helm releases only (the red boxes in the drawing below). All other infrastructure and processes that are required for a transregional installation of IOM are not in the scope of this document.

Additionally, it is important to know that IOM application servers do not communicate with each other. Hence, IOM itself does not require communication paths between Kubernetes clusters. Only the communication paths displayed in the drawing are required.

IOM Helm Release

An IOM Helm release is an IOM installation running in a Kubernetes cluster, operated (installed/updated) by the IOM Helm charts. To make an IOM Helm release part of a transregional installation of IOM, some aspects have to be considered in more detail:

  • Every IOM application server has to have a unique ID. Normally this ID is identical to the hostname of the corresponding pod. The hostname is built as <helm-release>-iom-<number>, where <helm-release> is the name of the Helm release and <number> is defined by the StatefulSet that is used to run IOM in Kubernetes. In order to have unique hostnames in both clusters, the names of the Helm releases have to differ.
    There is also another, preferred way to reach that goal: the Helm parameter jboss.nodePrefix can be used to decouple the ID of an IOM application server from the hostname. If this parameter is set, the ID is built as <nodePrefix>-<number>, where <number> is again defined by the StatefulSet. Hence, it is possible to use identical names for the Helm releases in both clusters (which might ease processes) and use different values for jboss.nodePrefix instead (see the sketch after this list).
  • Exactly one IOM application server of an installation must execute the IOM singleton applications. Normally this is controlled within the scope of a single Kubernetes cluster. However, in a transregional installation several Kubernetes clusters are involved, which would lead to one IOM application server executing the IOM singleton applications in each cluster. To solve this issue, the Helm parameter oms.execBackendApps has to be used to control whether the IOM singleton applications are executed within a Kubernetes cluster or not.
    If the Kubernetes cluster running the backend applications (IOM singleton applications) has failed, you have to reconfigure a remaining cluster to have an application server executing the IOM singleton applications again. However, since the IOM singleton applications are not required for general request processing, it is not necessary to reconfigure the cluster for short outages.
  • The IOM Helm charts provide different possibilities to access the shared file system: the Helm parameters persistence.storageClass or persistence.pvc can be used. If persistence.pvc is set, any other persistence setting will be ignored.
  • All the logic concerning the database (replication, switching between master and replica) is outside the scope of the IOM Helm charts. PgBouncer (or any other database load balancer) has to be configured in the Helm parameters as the database that IOM has to use.
  • Updates of IOM that require a downtime have to be orchestrated. The scope of the Helm parameter downtime is a single Kubernetes cluster only. Hence, when updating IOM and a downtime is required, you have to shut down all IOM Helm releases except one, which must then be updated with downtime set to true. After that, the other clusters can be started again with the updated configuration.
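The following minimal sketch (Python) summarizes the resulting per-cluster Helm values for a two-cluster setup; all concrete values are illustrative:

Example
# Minimal sketch: per-cluster Helm values of a two-cluster transregional setup.
# All concrete values are illustrative.
cluster_a = {
    "jboss.nodePrefix": "iom-region-a",  # server IDs: iom-region-a-0, iom-region-a-1, ...
    "oms.execBackendApps": True,         # this cluster runs the singleton applications
}
cluster_b = {
    "jboss.nodePrefix": "iom-region-b",  # server IDs: iom-region-b-0, iom-region-b-1, ...
    "oms.execBackendApps": False,        # no singleton applications in this cluster
}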