Document Properties

  • Kbid: F27728
  • Last Modified: 30-May-2023
  • Added to KB: 20-Jul-2016
  • Public Access: Everyone
  • Status: Online
  • Doc Type: Guidelines
  • Product: ICM 7.10, ICM 11

Guide - Application Logging Administration

Introduction

This document approaches the logging system of Intershop Commerce Management from the configuration perspective. It is addressed to administrators or DevOps who configure and maintain Intershop Commerce Management instances.

This document does not describe the logging framework. For details about the framework implementation, refer to Concept - Logging.

Info

Prior to Intershop version 7.7, the information provided in this document was part of the Administration and Configuration Guide, which can be found in the Knowledge Base.

Note

All relevant setup options must be configured in advance via the dedicated deployment script files, before the deployment is actually executed. Be aware that if you modify the Intershop Commerce Management configuration after it has been deployed, the next deployment will override all changes with the settings specified for your deployment.

The Intershop Commerce Management logging framework is used to log application events. Intershop Commerce Management sends logging messages via SLF4J, which uses logback as the underlying logging system. The combination of SLF4J and logback allows Intershop Commerce Management to produce detailed log information and to integrate third-party logging APIs (log4j, Jakarta Commons Logging, java.util.logging, etc.) into its logging framework.

The application log files are located in the directory <IS_SHARE>/system/log/. The available options are controlled via logback configuration files logback-*.xml and the Intershop Commerce Management-specific logging configuration files.

Glossary

The main configurable components of the logging framework include:

Logger

Generally, loggers are the central framework objects that provide the methods for creating and sending log events. However, "logger" is also used to name the log categories (see below).

Category

Loggers are categorized by their name, based on a hierarchical naming rule. They are usually named according to their corresponding classes. Thus, for instance, the logger com.intershop is the parent of com.intershop.adapter. At the top of the hierarchy, there is always the root logger (or root category).

Categories are used to filter the log output. As the logger name corresponds to the class name, it indicates the code location that produces a log message.

Appender

Appenders are responsible for sending the log message to a target, i.e., an output destination.

A logging request will be forwarded to all appenders specified for a logger as well as to any appenders higher in the hierarchy, i.e., appenders are inherited additively from the logger hierarchy. To limit this appender inheritance in a certain category, you can explicitly set the additivity flag of that logger to false, converting this category to a root category in the given context.
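
For illustration, limiting additivity could look like the following minimal logback sketch; the appender name PaymentLog is purely illustrative and not an appender shipped with Intershop Commerce Management:

Additivity Sketch (illustrative names)
<!-- Placed inside the <configuration> element of a logback-*.xml file.          -->
<!-- Messages of com.intershop.adapter go only to PaymentLog; additivity="false" -->
<!-- stops forwarding to appenders attached to parent loggers such as root.      -->
<logger name="com.intershop.adapter" level="DEBUG" additivity="false">
  <appender-ref ref="PaymentLog"/>
</logger>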

Level

Levels are used to hierarchically categorize log messages by severity. Supported levels include ERROR, WARN, INFO, DEBUG and TRACE.

The root category should always be assigned the TRACE level to enable child category appenders to capture messages of any level.

Filter

Adding filters to appenders allows you to select log events according to various criteria (a configuration sketch follows the list). With respect to filters, consider the following aspects:

  • LevelFilter vs. ThresholdFilter
    A LevelFilter accepts or denies events of exactly the specified level, whereas a ThresholdFilter compares against the specified level and filters out all events below it.
  • EvaluatorFilter
    Evaluator filters decide whether to accept or deny the log request based on Java expressions that may check the logging event for specific criteria, like levels, logger names, MDC contents, message texts, etc.
    Evaluation expressions can be Java blocks, see http://logback.qos.ch/manual/filters.html#evalutatorFilter.
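
The following minimal sketch contrasts the two filter types on two file appenders; the appender names, file names, and pattern are illustrative only:

Filter Sketch (illustrative names)
<!-- LevelFilter: accept exactly WARN, deny everything else -->
<appender name="WarnOnly" class="ch.qos.logback.core.FileAppender">
  <file>warn-only.log</file>
  <filter class="ch.qos.logback.classic.filter.LevelFilter">
    <level>WARN</level>
    <onMatch>ACCEPT</onMatch>
    <onMismatch>DENY</onMismatch>
  </filter>
  <encoder>
    <pattern>%date %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>

<!-- ThresholdFilter: let WARN and above pass, drop lower levels -->
<appender name="WarnAndAbove" class="ch.qos.logback.core.FileAppender">
  <file>warn-and-above.log</file>
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>WARN</level>
  </filter>
  <encoder>
    <pattern>%date %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>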

Turbo Filter

As opposed to filters assigned to appenders, turbo filters apply globally. They pre-select the output before the actual logging event is created. Intershop Commerce Management generates a TurboThresholdFilter automatically based on the lowest level defined for any appender.
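
Intershop Commerce Management generates its turbo filter automatically, so no manual configuration is required. For illustration only, a generic logback turbo filter (here the standard DuplicateMessageFilter, not related to the Intershop-generated TurboThresholdFilter) is configured as a direct child of the <configuration> element:

Turbo Filter Sketch (generic logback example)
<!-- Suppresses a message after it has been repeated more than twice. -->
<turboFilter class="ch.qos.logback.classic.turbo.DuplicateMessageFilter">
  <allowedRepetitions>2</allowedRepetitions>
</turboFilter>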

Layout

Layouts format the log output and return a string. In the initial configuration, Intershop Commerce Management uses PatternLayout, which allows for customizing the output string based on a configurable conversion pattern.

<layout> must be enclosed in <encoder>, see http://logback.qos.ch/manual/layouts.html.
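
A minimal sketch of a PatternLayout wrapped in an encoder; the pattern shown is the fallback pattern listed later in this document, used here only as an example:

Encoder and Layout Sketch
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
  <layout class="ch.qos.logback.classic.PatternLayout">
    <pattern>[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n</pattern>
  </layout>
</encoder>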

Mapped Diagnostic Context (MDC)

MDCs can be seen as additional logging context definitions that enrich the logging events and, consequently, allow for further filtering options.

Intershop Commerce Management provides a configurable mechanism to enrich the MDC. These customizations are configured in <IS_SHARE>/system/config/cluster/logging.properties. By default, the following MDC enhancements are available (a pattern sketch follows the list):

  • Request: com.intershop.beehive.core.capi.request.Request
  • Session: com.intershop.beehive.core.capi.request.Session
  • User: com.intershop.beehive.core.capi.user.User
  • JobConfiguration: com.intershop.beehive.core.capi.job.JobConfiguration
  • Stack: java.util.Stack
  • Process: com.intershop.beehive.core.capi.locking.Process
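
Once set, MDC values can be referenced in appender configurations, for example via the generic logback %X (alias %mdc) converter in a conversion pattern. The key name used below is purely illustrative; the actual keys depend on the MDC enrichment configured in logging.properties:

MDC Pattern Sketch (illustrative key name)
<encoder>
  <!-- %X{requestid} inserts the MDC value stored under the assumed key "requestid", or an empty string if unset -->
  <pattern>[%date] %-5level %X{requestid} %logger{36} - %msg%n</pattern>
</encoder>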

References

  • Concept - Logging

Application Logging: Configuration

The logging framework options are controlled via a global logback configuration file and cartridge-specific logback configuration files, as well as a global Intershop Commerce Management logging configuration file and server-specific Intershop Commerce Management logging configuration files.

Logback Configuration Files

Upon application server startup, Intershop Commerce Management dynamically creates the internal main logback configuration file. This file sets some basic properties and defines a list of <include> elements, which pull in the logback-*.xml files described below.

The internal logback configuration could look, for example, like this:

Logback Configuration
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
  <include file=".../<cartridge_name>/release/logback/logback-bc_auditing.xml"/>
  <include file=".../<cartridge_name>/release/logback/logback-bc_pricing.xml"/>
</configuration>

The central logback configuration file <IS_SHARE>/system/config/cluster/logback-main.xml controls the cluster-wide appenders and loggers. Any cartridge-specific configuration is passed via logback-<cartridgename>.xml files located in <CARTRIDGE_DIRECTORY>/<CARTRIDGE_NAME>/release/logback. In addition, Intershop Commerce Management provides a dedicated DBinit log configuration file (<IS_SHARE>/system/config/cluster/logback-dbinit.xml), which is added to the dynamically generated configuration if the application server is started in dbinit mode.

Basically, the appender definitions in the logback-*.xml files specify the following (a configuration sketch follows this list):

  • an appender name for internal reference,
  • a type (file or console) via the class attribute,
  • a filter and an according level or filter expression,
  • a rolling policy, e.g., for time-based or file size-based log file rolling,
  • a layout and an according pattern to be used.
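
The following minimal sketch outlines such an appender definition; the appender name, file paths, and rolling settings are illustrative only and do not reflect the shipped configuration:

Appender Definition Sketch (illustrative names and paths)
<appender name="ExampleWarn" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>.../system/log/example-warn.log</file>
  <!-- threshold filter: let WARN and above pass -->
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>WARN</level>
  </filter>
  <!-- time-based rolling policy: one log file per day -->
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>.../system/log/example-warn.%d{yyyy-MM-dd}.log</fileNamePattern>
  </rollingPolicy>
  <encoder>
    <pattern>[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>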

In addition, the logback-main.xml file defines the following (see the sketch below):

  • levels for common logger categories, such as the root category, which is assigned the TRACE level (and should not be changed),
  • assignments of appenders to categories.
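
A minimal sketch of such assignments, reusing the illustrative ExampleWarn appender from the previous sketch:

Category Assignment Sketch (illustrative names)
<!-- root stays at TRACE so that child categories and their appenders can capture messages of any level -->
<root level="TRACE">
  <appender-ref ref="ExampleWarn"/>
</root>

<!-- an explicit level for a specific category -->
<logger name="com.intershop.adapter" level="DEBUG"/>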

For information about creating project-specific appenders or customizing existing ones, refer to the logback documentation.

The System Management application provides an interface to upload and manage additional logback configuration files, which allow for changing the logging details at server run time, e.g., for maintenance reasons. For details, see the System Management online help.

Intershop Commerce Management Logging Configuration Files

The file <IS_SHARE>/system/config/cluster/logging.properties controls the global, i.e., cluster-wide Intershop Commerce Management-specific logging settings. Intershop Commerce Management-specific logging settings can be defined locally, i.e., on application server level, using <IS_HOME>/config/appserver#.properties.

Basically, the global logging.properties file defines

  • MDC enrichment for certain object types used in the current Intershop Commerce Management instance, using Java expressions

    MDC enrichment
    intershop.logging.mdc.types=<type>
    intershop.logging.mdc.<type>.class=<fully_qualified_class_name>
    intershop.logging.mdc.<type>.attr.<attr1_ID>=<Java_expression>
    intershop.logging.mdc.<type>.attr.<attr2_ID>=<Java_expression>
  • default categories, encoding and pattern for any dynamically created appenders, to be used as fallback if not defined explicitly, for instance

    Default categories, encoding and pattern
    intershop.logging.dynamictarget.categories=root
    intershop.logging.dynamicfiletarget.encoding=
    intershop.logging.dynamicfiletarget.pattern=[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n
    If no encoding is specified, Intershop Commerce Management uses the default platform encoding when writing log output.
  • default JDK logging adapter settings, namely

    JDK
    intershop.logging.javaloggingadapter.enable=true
    intershop.logging.javaloggingadapter.exclusive=true
  • level filters and category assignments for existing appenders as changed via the System Management application

  • appender settings that control specific appender behavior, for example intershop.logging.appender.buffer.flushInterval, which defines the interval at which appenders with <immediateFlush>false</immediateFlush> (as set, for example, in logback-main.xml) are flushed

  • the level of logback status messages (ERROR, WARN, INFO) that are passed to System.err and automatically appended to the application server log files, as well as the number of status messages stored with each application server, for example:

    Level and number of status messages
    intershop.logging.engine.statuslogging.level=WARN
    intershop.logging.engine.statusbuffer.size=500

Application Logging: Default Log Output

The following table lists the columns of the log output as provided by the default error and warn appenders defined in logback-main.xml. This format is expected, for instance, by Intershop Commerce Insight (ICI).

  • Column 1: Date, time and time zone (in square brackets), e.g., [2008-04-16 12:34:55.501 CEST+0200]
  • Column 2: Log level
  • Column 3: Host name or IP address
  • Column 4: Intershop Commerce Management instance, e.g., ES1
  • Column 5: Application server ID, e.g., appserver0
  • Column 6: [Site name]
  • Column 7: [Request application URL identifier]
  • Column 8: Log category, i.e., the class name
  • Column 9: [Marker], set by the log message author
  • Column 10: Request type, e.g., storefront, job, back office, etc.
  • Column 11: [Session ID]
  • Column 12: [Request ID]
  • Column 13: Java thread name (in double quotation marks)
  • Column 14: Log message

Hence, a default log output could look as follows:

[2012-07-16 12:34:55.501 CEST+0200] WARN  127.0.0.1 ES1 appserver0 [] [] com.intershop.adapter.saferpay.AcSaferpayCartridge [] [] [] [] "main" cartridge property: 'intershop.cartridges.ac_saferpay.sac.installation.path' is *not* found!

Application Logging: Simple Commerce Management Auditing

To support PA-DSS compliance for Intershop Commerce Management-based e-commerce applications, Intershop Commerce Management features a simple logging system accessible to users in Commerce Management. It records payment-relevant Commerce Management user operations, including

  • logging in,
  • changing passwords,
  • creating or deleting users,
  • capturing or cancelling payments

into the dedicated log file audits-PCI-<IS.AS.HOSTNAME>-<IS.INSTANCE.ID>-appserver<ID>.log, located in <IS_SHARE>/system/log. Each log message states the result of the performed operation (success or error), as well as any additional data, if available.

The Commerce Management auditing log is enabled by default: the standard logging configuration file logback-main.xml includes a dedicated appender definition and the corresponding category assignments.

Troubleshooting: Custom Logging Appenders and Rolling Log Files

Note

This section replaces the outdated article with the ID 23989Y and the title Problem in Rolling Log Files.

Sometimes systems run into a problem with rolling log files.

With the out-of-the-box logging configuration, the issue does not occur. However, when custom logging appenders are configured that, accidentally or intentionally, log into the same file, multiple handles to that file are kept open. As a result, the file cannot be renamed and rolling fails.

Note

In general, each log file should have only one defined appender.

Symptoms of this Behavior

  • The application server comes close to a halt.
  • The affected log file keeps growing.
  • Copies of the log file with the .tmp extension are left behind.
  • You find messages like these in the appserver.log (example for more than one appender writing into a rolling job log):

    Log Messages
    [2013-01-21 17:40:00.204 CET]: 17:40:00,204 |-WARN in c.q.l.co.rolling.helper.RenameUtil - Failed to rename file [d:\eserver9\share\system\log\<job>-appserver0.log] to [d:\eserver9\share\system\log\<job>-appserver0.log3655490412870351.tmp].
    [2013-01-21 17:40:00.204 CET]: 17:40:00,204 |-WARN in c.q.l.co.rolling.helper.RenameUtil - Attempting to rename by copying.
    [2013-01-21 17:40:00.204 CET]: 17:40:00,204 |-WARN in c.q.l.co.rolling.helper.RenameUtil - Could not delete d:/eserver9/share/system/log/<job>-appserver0.log
  • These errors are also displayed in Intershop System Management (as long as the server is not yet completely unavailable).
  • The logging status icon in the list of servers (Logging menu) changes (yellow for warnings, red for errors).
  • The server details also display the messages as text in the Logging Status tab.

Solution

To solve this issue, review all logging configurations in the system and find the appenders that log into the same file.

Reconfigure them to write into different files and restart the system.

In case it is intended that different appenders write into the same file (possibly even from different servers), Logback provides some support for this, although with some restrictions (e.g., the prudent mode of file appenders); see Logback Project | Chapter 4: Appenders.
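
A minimal prudent-mode sketch; the appender name and file path are illustrative, and the restrictions described in the referenced chapter apply:

Prudent Mode Sketch (illustrative name and path)
<appender name="SharedFile" class="ch.qos.logback.core.FileAppender">
  <file>.../system/log/shared.log</file>
  <!-- prudent mode serializes writes so that several appenders or JVMs can safely append to the same file -->
  <prudent>true</prudent>
  <encoder>
    <pattern>[%date] %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>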

  • Logging configurations can be found here: logback*.xml files in <IS_SHARE>/system/config/cluster
  • Logback configurations that come with cartridges can be found here: <IS_SHARE>/system/config/cartridges/
  • The directory for uploaded log configuration files is: <IS_SHARE>/system/config/cluster/loggingextension
  • The directory for files specifically uploaded to one application server is: <IS_HOME>/config/appserver0/loggingextension

    Note

    Up to ICM 7.4.5, the directory for files specifically uploaded to one application server is: <IS_SHARE>/system/config/servers/<ip>/<Instance>/<appservername>/loggingextension (for example, on Windows: D:\eserver9\share\system\config\servers\10.0.56.111\ES9\appserver0\loggingextension)
