unLHCLoggingDB 9.0.0
UNICOS LHC LOGGING DB

The master copy of this document is in the git repository UNICOS/SCADA/unLoggingDB. The last released version is at EDMS:1730733

INTRODUCTION

PURPOSE OF THIS DOCUMENT

Documentation of the UNICOS LHC LOGGING DB component.

  • General description
  • User interface & Libraries
  • Common use cases and procedures
  • Errors and warnings explained

DEFINITIONS, ACRONYMS and ABBREVIATIONS

  • WinCC OA: WinCC Open Architecture, previously PVSS
  • UNICOS: UNified Industrial COntrol System
  • DS: UNICOS Data Server, a computer running a WinCC OA project
  • OWS: UNICOS Operator WorkStation, a computer running a WinCC OA UI
  • Application: set of DS and WinCC OA projects
  • DP: WinCC OA data point
  • DPT: WinCC OA data point type
  • DPE: WinCC OA data point element
  • API: Application Programming Interface
  • DB: Database
  • RDB Archive: WinCC OA value archive for high-performance data storage
  • LDB: Central Logging Database of CERN

Description

The UNICOS LHC Logging DB component is a configuration and monitoring tool used to set up data transfer from the RDB Archive to the central logging database (LDB).

Data is transferred by up to 16 Java transfer processes directly from Oracle to NXCALS, so no additional API manager (as in the previous logging solution) is needed.

To configure and monitor data transfer from the RDB Archive to LDB, the LHC Logging DB component needs a permanent database connection. Specifically, it must be connected to the database schema RDB2NXCALS on the dedicated database server.

The LHC Logging DB component heavily depends on CtrlRDBAccess (for database access) and RDB Archiver (for pushing data to RDB Archive).

The LHC Logging DB component is composed of:

  • WinCC OA panels to set up and monitor data transfer
  • WinCC OA scripts used during installation and for system integrity checking
  • WinCC OA libraries containing shared functions used during run-time.

The source files of the component can be found at:

https://gitlab.cern.ch/UNICOS/SCADA/unLoggingDB

LOGGING DB CONCEPT

Figure 1: LHC Logging DB architecture

Figure 1 illustrates the architecture of the full logging chain. Data is fed into the WinCC OA system via drivers and inserted into the Oracle RDB Archive by the RDB Archive Manager. Data transfer to LDB is done via a DB link from Oracle DB to Oracle DB. The LHC Logging DB UI panels are used to configure and monitor the data transfer between the RDB Archive and LDB.

Figure 2 gives more insight into the data transfer, which is carried out by up to 16 transfer jobs.

Each transfer job is responsible for transferring a specific set of signals to LDB. Signals are assigned to a transfer job via data categories, which are managed by the Logging DB administrator.

Transfer jobs execute every 5 minutes and transfer chunks of data (one chunk per signal). Chunks have a maximum size, which can be changed by the Logging DB administrator in agreement with the BE/CO logging team.

To keep track of already transferred data, each transfer job keeps statistics (timestamps and number of transferred values for each signal) in a dedicated table. Each time a job executes, it looks up the timestamp of the last logged value and resumes the transfer from this timestamp.

Data transfer is being done in 3 steps:

  1. Query new data from RDB Archive
  2. Push values to long term storage (LDB)
  3. Store timestamp of last logged value

Since many WinCC OA projects need to push data to LDB, each project has a dedicated database schema on the database server for storing its data.

**Important note: When setting up new production projects, stick to the convention of only one project per schema.**

The logging configuration is centrally stored on the same database server; it contains a list of signals that need to be transferred to LDB plus additional metadata.

The most important parts of the logging configuration are:

  • VARIABLE_NAME: Unique name of the signal on LDB side
  • ELEMENT_ID: Id of the data point element to be transferred
  • RDB_SOURCE: Name of the data source (schema and archive group)
  • DATA_CATEGORY_ID: Assignment to a specific transfer job
  • OWNER: User responsible for this signal
  • DESCRIPTION: Purpose of the signal
  • ...

The logging configuration also contains additional hierarchy information, which is needed to display data in hierarchical form in TIMBER.

To configure a set of signals to be transferred to LDB, the operator has to provide the necessary configuration data in the form of an import file.

IMPORT FILE FORMAT

LHC Logging DB import files contain information that is necessary to register a set of signals or data points for logging.

They include the following fields:

  • Name: Name of the signal in LDB, internally called VARIABLE_NAME
  • DPE: Name of the DP Element, internally resolved to ELEMENT_ID
  • Alias: Name of the DP Alias, internally resolved to ELEMENT_ID
  • Description: Description of the signal
  • Hierarchy: Hierarchy of the signal in LDB

DPE and Alias can be specified in different ways:

  • DPE only: Full data point element name
  • Alias only: Alias of the data point element
  • Alias+Dpe: Alias of the data point + element extension
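For example, with a hypothetical device DP Dev1 carrying the alias DEV_1, the three addressing modes look as follows (the attribute layout depends on the chosen file format, described below):

  • DPE only: dpe=Dev1.ProcessInput.evStsReg02 (alias left empty)
  • Alias only: alias=DEV_1_STATUS, where the alias points directly to a DPE (dpe left empty)
  • Alias+Dpe: alias=DEV_1 combined with dpe=ProcessInput.evStsReg02 (element extension)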

Important note:

  • Names must not exceed 50 characters
  • Hierarchies must not exceed 255 characters
  • Description must not exceed 259 characters
  • Attribute texts should not contain any special characters;
    stick to ASCII encoding to avoid automatic replacements

Other metadata needed for the logging configuration (e.g. RDB_SOURCE, OWNER) is automatically extracted from the WinCC OA project, so there is no need to put this information into the import file.

TEXT IMPORT FILES

Text import files were introduced to preserve backward compatibility with the old LHC Logging solution, so that import files from the old logging can still be used to configure LHC Logging DB. An example:

[_RDBArchive]
dpe=_unSystemAlarm_DS_Comm__unPlc_CFP_2865_FSSF8.alarm
alias=
name=CFP_2865_FSSF8:COMMUNICATION
description=Communication CFP_2865_FSSF8 to DS driver
hierarchy=<node name="CRYO"><node name="DIST"></node></node>
format=

dpe=ProcessInput.evStsReg02
alias=TEST_DEV_1
name=TEST_DEV_1.evStsReg02
description=SU8 COOLING PLANT
hierarchy=<node name="CRYO"><node name="DIST"></node></node>
format=

Since there is only one RDB Archive, the archive tag [_RDBArchive] no longer has any meaning, but it still has to be present in the file. The attribute "format" is likewise ignored, but it still has to be specified without any value. In general, if an attribute is not used, it still has to be present with an empty value.

Important note:

  • Text import file format is deprecated in favor of XML import file format
  • It will not receive any new features
  • Support for it will eventually be removed entirely

XML IMPORT FILES

XML import files contain the same information as text import files, but the information is structured differently. One advantage of XML import files is that all information of a signal can be displayed on a single line. It is also possible to add XML comments (<!-- ... -->) to the file, or to comment out specific signals that need to be ignored during import.

<?xml version="1.0" encoding="ISO-8859-1" ?>
<configFile version="1" application="unLHCLoggingDB">
  <configData group="default">
    <signal name="" dpe="" alias="" description="" hierarchy="" />
    <signal name="" dpe="" alias="" description="" hierarchy="" />
    ...
  </configData>
</configFile>

When using XML import files, the representation of hierarchies is different: the node names are joined using the delimiter "/".

E.g.: hierarchy="CRYO/DIST"
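For example, the second signal from the text import example above becomes a single line in XML:

<signal name="TEST_DEV_1.evStsReg02" dpe="ProcessInput.evStsReg02" alias="TEST_DEV_1" description="SU8 COOLING PLANT" hierarchy="CRYO/DIST" />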

User Interface

The LHC Logging DB user interface contains panels to set up, configure and monitor data transfer to LDB. From the UNICOS main panel, LHC Logging DB can be accessed via (1.) UNICOS button > Configuration > LHCLoggingDB, as shown in the figure below. Note that the menu entry for LHC Logging DB might not be the first one in the menu.

Figure 4: LHC Logging DB menu

Setup

Make sure that RDB Archiving is already installed and properly set up (including system integrity checking) before setting up LHC Logging DB.

To establish a database connection, database credentials must be present in the settings panel (Figure 5). A convenient way to enter the credentials is to install the fwRDBSettingsSCADAR component, which takes care of this step.

Figure 5: Database settings

Before a project can be used it has to be ensured that:

  • A schema has been created for the project in RDB
  • RDB Archiving is installed and properly set up
  • unLHCLoggingDB is installed and properly set up
  • Data Categories for the project have been created
  • Logging Users for the project have been created and configured
  • Data Category Mappings have been created

The panel 'Admin Tools' provides functionality for these tasks.

Figure 6: Admin Tools

Each data category has its dedicated transfer job number, which can be changed. This allows the administrator to balance the load by assigning sets of projects to certain transfer jobs.

Data Category Mappings allow usernames and data categories to be pre-selected depending on the name of the DB schema. They connect 'Schema Name', 'User Name' and 'Data Category' to provide a convenient way to limit the number of possible user names and data categories to a small set. This feature was implemented to prevent users from accidentally choosing wrong user names or data categories during import.

For more details see [Procedure to setup RDB Archiving and LHC Logging DB](https://wikis.web.cern.ch/wikis/display/EN/WinCC+OA+Oracle+Archiving+for+SAS).

Logging Configuration

The LHC Logging DB component provides a set of panels and tools for logging configuration. The following chapters will explain how and when to use them.

Manage Configurations

The concept of configurations has been introduced to define the content of an import file on a higher level of abstraction. It allows specifying certain rules, which can later be used to generate import files in a reproducible way.

Figure 7: Manage configurations

A project can have multiple configurations (e.g. one production configuration and some test configurations, or many production configurations where each configuration represents a different application within the project).

If there is no configuration entry, click button 1 in the figure above to create one.

A configuration contains the following settings:

  • 2. Configuration: A unique name to identify the configuration (no spaces). The name of an existing configuration cannot be modified; the field is editable only when creating a new configuration. A check is in place to ensure that the configuration name is unique.
  • 3. Import file: Name of the import file that belongs to this configuration.
    The .xml extension must be added manually if it is an XML file. Each configuration requires a unique file name, and a check is in place to enforce this uniqueness. The file is always located in the data folder of the project.
  • 4. Usage: Indicates the purpose of the configuration: 'production' or 'test'. Test configurations will not be automatically checked for consistency.
  • 4bis. Manually maintained: Disables the use of a user function. A manually maintained configuration must be edited by hand and cannot be generated automatically with a user function.
  • 4ter. File format: Format of the import files created by the user function ('text' or 'xml').
  • 5. User function: Users who want to use the import file generator need to specify the name of the "user function", which derives the 'name' and 'hierarchy' from the full data point element name. The existence of this function can be checked using the check button (8.) to the left of the form field. More details on the definition of the user function are given below.
  • 6. Data retrieval: Specifies whether internal data points, that is data points starting with an underscore, should be included or not.
  • 7. Reduction: Include devices, system alarms and other data points. Other data points are any data points that are neither devices nor system alarms.
  • 8. Naming: Specify data points as ALIAS+Extension or force DPE usage.
  • 9. Filter patterns: List of patterns used to filter the DPEs returned by the criteria Data retrieval and Reduction.
  • 10. Free text: Define a free text pattern used internally by the function patternMatch().
  • 11. UNICOS application: Automatically creates the pattern for the selected UNICOS application. Only local UNICOS applications are available. If the option Include internal DPs is selected, an additional pattern is automatically added matching the internal DPs, i.e. the ones starting with an underscore.

Note: If the built-in import file generator is used, a user function with the following signature needs to be provided, and the containing library needs to be included in the project's configuration file (sections [ui] and [ctrl]):

/**
  @param sFullDpeName input, full data point element name
  @param sName output, signal name
  @param sDescription output, description
  @param sHierarchy output, Activity/Location/Appl./Nature/Domain
  @param sDpAlias output, DP alias
  @param sDpeExtension output, element extension
*/
public void THE_NAME_OF_THE_USER_FUNCTION(
  string sFullDpeName,
  string &sName,
  string &sDescription,
  string &sHierarchy,
  string &sDpAlias,
  string &sDpeExtension)
{
  ...
}

Note 1: Set sHierarchy to "ERROR" if the function cannot retrieve the output information. Set it to "IGNORE" if a certain type of signals should be explicitly ignored (e.g. where description matches SPARE or RESERVE)
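As an illustration, a minimal sketch of such a user function in WinCC OA CTRL could look as follows. The parsing logic, the naming scheme and the fixed hierarchy are assumptions for a hypothetical application, not the component's defaults; a real implementation must follow the application's own conventions:

// Hypothetical example of a user function for the import file generator.
public void myApp_LHCLoggingUserFunction(string sFullDpeName, string &sName,
                                         string &sDescription, string &sHierarchy,
                                         string &sDpAlias, string &sDpeExtension)
{
  // Split e.g. "System1:Device.ProcessInput.evStsReg02" into DP and element extension.
  int iDot = strpos(sFullDpeName, ".");
  if (iDot < 0)
  {
    sHierarchy = "ERROR";  // cannot parse the DPE name (see Note 1)
    return;
  }
  string sDp = substr(sFullDpeName, 0, iDot);
  sDpeExtension = substr(sFullDpeName, iDot + 1);

  // Strip the system name prefix, if present.
  int iColon = strpos(sDp, ":");
  if (iColon >= 0)
    sDp = substr(sDp, iColon + 1);

  // Explicitly ignore spare signals (see Note 1).
  if (patternMatch("*SPARE*", sFullDpeName) || patternMatch("*RESERVE*", sFullDpeName))
  {
    sHierarchy = "IGNORE";
    return;
  }

  sDpAlias = "";                      // force DPE usage in this example
  sName = sDp + "." + sDpeExtension;  // hypothetical naming scheme
  sDescription = "Signal " + sName;   // placeholder description
  sHierarchy = "CRYO/DIST";           // fixed hierarchy (slash-delimited form)
}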

Note 2: If the built-in import file generator is not used because the user provides the import files in a different way, a manually maintained configuration should still be created for each import file. Configurations help to keep track of the import files, and they also allow making use of some convenient features concerning consistency checking.

Datapoint List Filtering

Figure 19: Datapoint list filtering

Figure 19 shows how a list of datapoints for long-term archiving is filtered depending on the options selected in the configuration panel. Note that the different filters are cascaded, i.e. each filter is applied to the set of DPEs returned by the previous filter. Finally, each datapoint from the final list (right side) is passed to the user function for processing. Lists marked with a star are the ones that contain, or might contain, internal datapoints.
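Conceptually, each stage of the cascade keeps only the DPEs matching at least one of its patterns, as in the following CTRL sketch (an illustration only; the function name is hypothetical, and the actual filtering is performed internally by the generator):

// Keeps only the DPEs that match at least one of the given patterns.
dyn_string applyPatternFilter(dyn_string dsDpes, dyn_string dsPatterns)
{
  dyn_string dsResult;
  int i, j;

  for (i = 1; i <= dynlen(dsDpes); i++)       // CTRL arrays are 1-indexed
  {
    for (j = 1; j <= dynlen(dsPatterns); j++)
    {
      if (patternMatch(dsPatterns[j], dsDpes[i]))
      {
        dynAppend(dsResult, dsDpes[i]);
        break;                                // keep the DPE once and move on
      }
    }
  }
  return dsResult;
}

Chaining the stages, i.e. feeding the result of one filter into the next, yields the final list that is passed to the user function.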

Generate Import Files

Note: Generating an import file will overwrite the existing import file specified. Always make a backup before generation.

Import files can be generated with the Generate panel, which provides the following functionality:

  • Generate an import file by retrieving data, marking signals for logging (all except ignored) and saving the result as a file.
  • Retrieve data from RDB and display signals that match the criteria given by the specified configuration by clicking the "Refresh" button.
    • Signals with hierarchy "ERROR" will be highlighted in red, which is useful for debugging the user function.
    • Signals with hierarchy "IGNORE" will be highlighted in grey, which indicates that they will be not included in the import file.
    • The info column will provide useful information about invalid characters and character replacements in column 'Description'. This column is not visible in the screenshot below, but can be seen by scrolling to the right in the panel.
  • Mark signals for logging: 'Mark all', 'Mark visible' or 'Mark from DB' buttons will select the signals to be included in the import file. They will not select signals that are ignored.
  • Save will create an import file containing all marked signals overwriting the existing file. Please be sure to always make a backup.

Figure 8: Generate Import File

The labels 'Marked', 'Ignored', 'Infos' and 'Errors' provide useful metrics about the state of the import file that is going to be created.

Note: You may right-click on any row and enable additional columns for the full DPE name, alias and DPE extension, which can help when debugging errors.

Import Logging Configuration

The import panel is used to append signals to, and modify signals in, the central logging configuration in the database. Operators need to authenticate with a dedicated username to be allowed to change the logging configuration.

New signals are stored in import files, which are held by configurations. After choosing a configuration for import, the name of its import file is shown in the text box above.

Before an import file can be appended to the central configuration in the DB, it needs to be checked for errors. The check reports name clashes, duplicated entries, forbidden characters, and data points that do not exist in the project or that have mismatching aliases and data point elements specified.

Please see chapter 5.2 for conflict types and their resolutions.

Figure 9: Import Logging Configuration

Note: Invalid characters in attribute Description will be replaced according to a substitution list. If a character is not present in the list or in the whitelist, it will be completely removed. For the substitution table see REPLACED CHARACTERS. For the whitelist see Invalid Characters.

Note: When appending signals that are already registered for logging, their metadata (DPE, description, and data category) will be updated in the central logging configuration, but the changes will not be propagated to LDB. When appending signals having different hierarchies, the old hierarchies will remain and must be deleted manually.

Data Consistency Check

The data consistency check panel was designed to compare the logging configuration in the DB with the logging configuration in one or more import files.

Figure 10: Data Consistency Check

By default all production configurations will be marked for checking, but the user can also mark test configurations to simulate the changes that would result from importing certain files.

If the checkbox 'Generate files' is enabled, temporary import files will be generated for all marked configurations and used for the comparison. This option only makes sense if the configuration uses a user function to generate the file. If the checkbox is not ticked, only the signals that are specified in the import file will be checked for consistency.

Hint: Make sure that the library containing the user function for import file generation has been added to the project's config file in sections [ui] and [ctrl], otherwise the option 'Generate files' will not work.

Note: The panel is only used to trigger the integrity check and display the result in the table. The actual check is executed in the system integrity control manager (usually manager number 41).

Delete Logging Configuration

In some cases it is necessary to delete signals or hierarchies from the logging configuration on the database of a specific project. Deletion of signals or hierarchies will not be propagated to LDB.

Note: Deletion of signals or hierarchies only marks them as deprecated. To delete or obsolete signals or hierarchies in LDB, you must contact acc-logging-support@cern.ch.

When does deletion of signals or hierarchies make sense?

  • Delete signals and hierarchies if the new configuration contains fewer signals than the previous one, to preserve data consistency.
  • Delete signals and hierarchies if the new configuration contains renamed signals and there is no need to keep a continuous history.
  • Delete only hierarchies if there are multiple hierarchies for the same signal (note that this will not be propagated to the hierarchies on LDB).

Figure 11: Delete Logging Configuration

Diagnostics

Signal Overview

The signal overview panel provides information about all signals, which are registered for logging in a certain database schema. This panel is accessible from the Import panel by clicking the View Signals... button.

By default the database schema of the current project will be selected, but since multiple schemas share the same database it is also possible to view signals from other systems or to connect to another database and view the schemas and signals registered there.

Figure 12: Signal Overview

Columns shown by default:

  • Signal Name: Name of the signal in LDB
  • Data Category: Category that was chosen before import
  • Logging Required: Indicates whether the signal should be logged
  • Registered for Logging: Indicates whether the signal has been picked up by a transfer job
  • Transfer Group: Id of the dedicated transfer job
  • Records Logged: Number of records transferred to logging during the last 5 minutes. The number of records that can be considered normal depends on the signal's frequency.
  • Time Logged: Timestamp of the last value transferred to logging

Additional columns and options can be accessed by right-clicking a signal in the table.

Transfer History

The transfer history panel provides a historical listing of data transfers to LDB for a specific signal. The user can specify the time range, but beware of specifying too long a range: such queries put a heavy load on the database and can take a considerably long time.

Table columns:

  • Time Created: Timestamp when data chunk was processed by the transfer job
  • First Checked: Timestamp of first value in the chunk
  • Last Checked: Timestamp of last value in the chunk
  • Last Logged: Timestamp of last value transferred to LDB
  • Records Checked: Number of records found between timestamps
  • Records Logged: Number of records transferred to logging

Figure 13: Transfer History

Note: The numbers of records checked and records logged can differ, depending on the timestamps of Last Checked and Last Logged. Discrepancies occurring while the load on the transfer jobs is normal and no system integrity alarms are raised indicate a problem.

Transfer Job Monitoring

The transfer job monitoring panel shows the current status of all transfer jobs.

Figure 14: Transfer Job Monitoring

  • Job Name: Name of the transfer job (the trailing number is the transfer group)
  • Job Enabled: Indicates whether the job is enabled
  • Job State: Current operation stage of the job, most commonly SCHEDULED or RUNNING. See below for a table of possible states and their descriptions.
  • Last Start: Timestamp of the last job execution
  • Run Duration: How long the last job execution took. This depends on the load and on the amount of data that was transferred.
  • Next Run: Timestamp of the next job execution
  • Job Load: Ratio between run duration and job run interval. If the load gets too high, the job does not have time to finish before the next run is due. It is important to balance the load between jobs as equally as possible.

When selecting a certain transfer job in the table, details will be displayed below:

  • The left text box shows information about the actual work the transfer job is doing. Possible entries are 'collect data', 'log data', 'register variables' or 'register hierarchies'.
  • The right graphic shows historical load information for the selected job.
    • Query time: How long it took to query value changes
    • Transfer time: How long it took to transfer values to LDB
    • Row count: How many values were transferred to LDB

Each transfer job can be in one of the following states:

  • DISABLED: The job is disabled.
  • SCHEDULED: The job is scheduled to be executed.
  • RUNNING: The job is currently running.
  • COMPLETED: The job has completed, and is not scheduled to run again.
  • STOPPED: The job was scheduled to run once and was stopped while it was running.
  • BROKEN: The job is broken.
  • FAILED: The job was scheduled to run once and failed.
  • RETRY SCHEDULED: The job has failed at least once and a retry has been scheduled to be executed.
  • SUCCEEDED: The job was scheduled to run once and completed successfully.

System Integrity

Configuration

System integrity checking for LHC Logging DB was designed to answer two questions:

  • Does the data transfer from RDB-Archive to LDB work properly?
  • Is the logging configuration still up to date?

Figure 15: System Integrity Configuration

There are five different integrity checks available to answer these questions:

  • Transfer Jobs: Checks whether transfer jobs are enabled and are being executed within the given time span. The threshold for the delay is specified by the setting 'Max. age jobs'.
  • Test Datapoints: Creates a test signal to check the full logging chain (increments a value, stores it in the RDB Archive and transfers it to logging).
  • Data Consistency: Checks that every registered signal has an existing data point with an archive config, and that RDB_SOURCE matches ARCHIVE_GROUP.
  • Full Consistency: (optional) Regenerates temporary config files and compares them with the logging configuration in the DB, triggering an error if differences are found. If the built-in import file generator is not used, the check uses the existing import files for comparison. This check is only executed once a day; the time can be specified using the setting 'Start full check at'. The value is the hour of the day: 8h means 8 a.m., 15h means 3 p.m. The check can be started at any time by clicking the button 'Start Now'.

The interval for running system integrity checks can be modified by changing the setting 'Run checks every ___ [s]'.

If a system integrity check fails an alarm will be triggered and shown in the system integrity alert screen.

If the reason for the alarm is a mismatching logging configuration ('Data Consistency' or 'Full Consistency' check failed), the problem can be fixed by the operator by updating the logging configuration in DB (e.g. generate and re-import config files).

If the reason for the alarm is a problem on the database side ('Transfer Jobs' or 'Transfer History' check failed), it can only be treated by the LHCLoggingDB admin or acc-logging-support.

Checks can be added to the list of 'Activated LHCLoggingDB integrity classes' by selecting them from the list of 'Available LHCLoggingDB integrity classes' and then adding them via the 'arrow down' button.

Checks can be removed from the list of 'Activated LHCLoggingDB integrity classes' by selecting them and then removing them via the 'arrow up' button.

Already added checks can be disabled by unticking the checkbox 'Enabled'. Disabled checks will not be executed and their alarms will be masked.

Note: System integrity checking for LHC Logging DB is done by a dedicated control manager. Usually this manager has number 41.

Note: To be able to use system integrity checking for LHC Logging DB, first set up system integrity checking for RDB. Otherwise checks involving test data points will not work.

Important note: When the database load is very high, system integrity alarms for the database transfer jobs of LHC Logging DB can flip to error state. Try increasing the settings 'Max. age jobs' and 'Max. age history' to make the checks less sensitive to load peaks on the database. Note that these thresholds exist for a reason and should be kept low enough to catch real problems. Certain applications can be quite susceptible to delays, as they both write to and read from LDB; for example, PIC can accept at most five minutes of delay between a value change and its availability in LDB.

Diagnostics

The system integrity diagnostics panel shows the current status of all added system integrity checks for LHC Logging DB.

Figure 16: System Integrity Diagnostics

To keep track of created test signals, a list of test data points, including the current value and the timestamp of the last value change, is also shown in the second table.

Using the button 'Data Consistency' the operator can open the data consistency check panel and run a consistency check for the specified system.

Hint: This panel can be opened from the main UNICOS menu via (1.) UNICOS button > Management > Diagnostic > System integrity, as shown in the figure to the right.

Note: The menu entry for System integrity might not be in the same position in the Management sub menu as in the figure.

Scripts and Libs

Scripts:

  • unLHCLoggingDB.postInstall: contains post-installation steps

Libs:

Important Tables on DB Side

The LHC Logging DB component makes use of the DB schema RDB2NXCALS.

  • META_VARIABLES: Contains all signals registered for logging
  • LDB_TRANSFER_VARIABLE_STATUSES: Contains last logged time stamps
  • META_VARIABLE_CATEGORIES: Contains data categories
  • META_USERS: Contains logging users
  • META_HIERARCHIES: Contains hierarchy definitions
  • META_HIERARCHY_VARIABLES: Maps variables to hierarchies
  • META_CATEGORY_MAPPINGS: Maps Schema to User to Data Category
  • GTT_STAGE_VARIABLES: Temp. table for signal registration
  • GTT_STAGE_HIERARCHIES: Temp. table for hierarchy registration
  • GTT_TO_MARK_VARS_OBSOLETE: Temp. table to unregister signals
  • GTT_TO_RENAME_VARS: Temp. table to rename signals
  • AUDIT_LDB_CATEGORY_TRANSFERS: Transfer statistics per data category
  • AUDIT_LDB_VARIABLE_TRANSFERS: Transfer statistics per signal
  • AUDIT_OPERATIONS: Transfer statistics per operation
  • AUDIT_SESSIONS: Transfer statistics per session

API functions for variable registration can be found in PL/SQL package PVSS_DATA_TRANSFER_MGR.

API functions for variable renaming can be found in PL/SQL package VAR_METADATA_MGR.

COMMON USE CASES AND PROCEDURES

How to Setup LHC Logging DB

Setting up LHC Logging is a complex operation that requires actions from three departments. Please contact Icecontrols.Support@cern.ch well in advance, as the steps to be followed will take at the very least a few days.

Before a project can be used make sure that:

  • A schema has been created for the project in RDB
  • RDB Archiving is installed an properly set up
  • unLHCLoggingDB is installed and properly set up
  • Data Categories for the project have been created
  • Logging Users for the project have been created and configured
  • Data Category Mappings have been created

For more details see [Procedure to setup RDB Archiving and LHC Logging DB](https://wikis.web.cern.ch/wikis/display/EN/WinCC+OA+Oracle+Archiving+for+SAS).

Getting Username for import

To be allowed to modify the LHC Logging DB configuration you need to authenticate with a dedicated username.

If you cannot find a suitable username in the list of proposed users, ask Icecontrols.Support@cern.ch for help (don't forget to provide the name of your project, machine and database schema).

Setting up system integrity

Five steps to enable system integrity checking:

  1. Make sure that system integrity checking for RDB Archiving is set-up and running properly.
  2. Go to Configuration > LHCLoggingDB > System Integrity and add the desired integrity classes (select them and add them by clicking the arrow down button)
  3. Go to Configuration > Application > Alarms > Add class: LHCLoggingDB and add alarm classes so that alarms are visible in the system status panel.
  4. Double click the System Status box in the top right corner of the main UNICOS panel and check if LHC Logging DB alarm classes are being displayed
  5. After enabling system integrity for LHC Logging DB it can take up to 20 minutes until all checks turn green.

How to modify logging configuration

To modify the logging configuration use the panels at Configuration > LHC Logging DB.

Registering new signals for logging

  1. Go to Configuration > LHC Logging DB > Import
  2. Authenticate with the dedicated username using "Auth..." button in the top right corner of the panel.
  3. Select (double click) the desired configuration from the list for import
  4. Click Check to verify the import file
  5. Resolve conflicts and repeat the previous step until the check reports no errors
  6. Choose an appropriate Data Category (usually there is only one)
  7. Click Append to add the configuration to the database

Unregistering signals from logging

  1. Go to Configuration > LHC Logging DB > Delete
  2. Authenticate with the dedicated username
  3. Reload the panel content using the button in the lower left part of the panel.
  4. Select the signals you want to unregister from logging
  5. Click on Delete > Signals+Hierarchies
  6. Wait until the signals are deleted and the list is reloaded. The time this takes depends on the number of signals and hierarchies being deleted, and on the database load.

Deleting Hierarchies

If a new import file contains different hierarchies than the old one, you need to delete the old hierarchies before importing the new file, otherwise the logging configuration in the DB will contain multiple hierarchies for a signal.

Note: Duplicated hierarchies are reported by the data consistency panel, and they will also cause the system integrity check "Full Consistency" to fail.

Steps to delete hierarchies from logging configuration:

  1. Go to Configuration > LHC Logging DB > Delete
  2. Select the signals whose hierarchies you want to delete
  3. Click on Delete > Hierarchies only

**Note: Deletion of hierarchies using the delete panel will not be propagated to LDB. If you also want to delete hierarchies visible in TIMBER, send a request to acc-logging-support@cern.ch**

Changing Signals Hierarchy

As mentioned in the Deleting Hierarchies paragraph, if a new import file contains different hierarchies than the old ones, the signal will then be visible in both hierarchies. In other words, hierarchies are not replaced but added to the signals. This situation may also cause the consistency check to fail, as only one hierarchy is considered for the checks.

When changing the hierarchies of a signal, one needs to either:

  • explicitly delete the signal hierarchy as described in Deleting Hierarchies and then import the signals with the new hierarchies,
  • or delete the hierarchies only of the problematic signals when they are reported by the full consistency check panel.

Renaming Signals

When checking import files, renamed signals will be found and reported in the status table of the import panel. The user must take further actions to resolve the conflicts.

Resolve by Delete and Re-Import

One solution for resolving conflicts caused by renamed signals is to delete and re-import:

  1. Remove old Signal_1 (select signals > right click > resolve conflict > delete signals)
  2. Re-Import import file containing new Signal_2

Since deletion of signals will not be propagated to LDB, we will end up having both Signal_1 and Signal_2 on LDB, but new values will only be stored into Signal_2.

Note: If you don't need one signal with a continuous history, this solution will do.

Note: If you try to re-import without deleting first the import will fail.

Reconnecting Signals

Usually each registered signal has an assigned DPE: Signal_1 > DPE_1

Sometimes other DPEs need to be assigned to the signal: Signal_1 > DPE_2

When checking import files, reconnected signals will be found and reported.

How to treat reconnected signals:

  1. Update import file (so that it contains reconnected signals)
  2. Check and append the new import file to logging.

Note: You can check the consistency between import files and central logging configuration in DB with the data consistency check panel.

How to manage logging configuration per application

By using different configurations per UNICOS application, it is possible to register only a subset of the devices present in the project.

  1. Go to Configuration > LHC Logging DB > Manage Configurations
  2. Create a new configuration.
  3. Name the configuration uniquely, e.g. following the application name.
  4. Set the user function to the user specific one if needed.
  5. In the advanced filter, select the desired application from the drop-down menu and add it to the list of patterns. It is up to the user to include the internal DPs within each configuration or to have a dedicated configuration for them. If a dedicated configuration is chosen, the user will have to add the internal DP pattern manually, and the configuration will have to be re-generated and imported every time a new system alarm, i.e. front end, is added. An example pattern list is sketched below.
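For a hypothetical UNICOS application named APP1, the resulting pattern list could look as follows (the exact pattern strings are assumptions; the first one is added by selecting the application, the second is the automatically added internal DP pattern):

APP1*
_*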

Introducing and modifying the user function

The user function is used to filter and select data point elements, and to define the signals and hierarchies. Should the application need custom hierarchy and signal definitions that are different from the generic user function, a custom user function can be used instead.

How to add a custom user function:

  1. Add the library to the project.
  2. Include it in any [ctrl] section of the project config:

LoadCtrlLibs = "libraryWithTheUserFunction.ctl"

  3. Restart the UI and the System Integrity manager (manager number 41).
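Since the panels also need the function, the library must be loaded in the [ui] section as well (see the notes in chapters Manage Configurations and Data Consistency Check). In the project config file this looks roughly as follows; the library name is a placeholder:

[ui]
LoadCtrlLibs = "libraryWithTheUserFunction.ctl"

[ctrl]
LoadCtrlLibs = "libraryWithTheUserFunction.ctl"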

How to modify a custom user function:

  1. After modifying the function, restart the UI and the System Integrity manager (manager number 41).

How to check if Logging configuration is up to date

Go to Configuration > LHC Logging DB > Data Consistency and click on the check button. If the check result does not contain any errors, the logging configuration in the DB is in sync with your import files.

If you are using the built-in import file generator, you can also check the option "Generate files". New import files will then be generated from your current devices and used for the comparison. These files are temporary and will not overwrite your configuration.

Hint: When using the option "Generate files" the consistency check might take much longer (a couple of minutes).

How to request a retransfer

Data retransfers are very special and complex operations that require actions from two groups (BE-ICS and BE-CSS). They should be avoided whenever possible.

If there is no way to avoid a data retransfer, it can be requested via acc-logging-support@cern.ch or Icecontrols.Support@cern.ch, which will redirect the request to acc-logging-support.

Note: by default, when adding a new signal, its data for the last 3 days will automatically be transferred from the WinCC OA RDB schema to NXCALS. Therefore a retransfer request should only apply to time ranges older than 3 days.

The retransfer request should contain the following information:

  • List of signals to be retransferred, if single signals can be specified
  • Name of the DB schema, if all signals of a schema need to be retransferred
  • Time range of the retransfer (start time and end time)

Understanding errors and warnings

System Integrity

For more details see [LHC Logging DB Problem Management](https://wikis.web.cern.ch/wikis/display/EN/LHC+Logging+DB+Problem+Management).

Transfer jobs

  • 70 - Query Error

Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails at any given point in the function call chain.

Possible reasons:

  1. Import file used is not consistent with what is generated automatically by the user function
    • If you have modified the user function, restart the UI and the system integrity manager (manager number 41)
    • Regenerate and import the new file
  2. User function not in the system integrity manager's memory
    • Check that the library containing the function is included in the project config file's [ctrl] section
    • Restart the system integrity manager (manager number 41)
  3. RDB database connection lost or misconfigured
    • Check and fix the RDB configuration
  4. LHC Logging database connection lost or misconfigured
    • Check and fix the LHC Logging database connection
  • 20 - Transfer jobs delayed

Error value 20 indicates a delay in the database-to-database transfer jobs. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB".

Possible reason:

  1. Database is under heavy load and transfers are taking longer than expected
    • Increase the transfer job max age
  • 10 - Transfer jobs stopped

Error value 10 indicates a stop in the database-to-database transfer jobs. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB".

Possible reason:

  1. Transfer jobs are stopped or corrupted
    • Check that transfer jobs are running and have no errors

Test Data Points

  • 70 - Query Error

Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails at any given point in the function call chain.

Possible reasons:

  1. Import file used is not consistent with what is generated automatically by the user function
    • If you have modified the user function, restart the UI and the system integrity manager (manager number 41)
    • Regenerate and import the new file
  2. User function not in the system integrity manager's memory
    • Check that the library containing the function is included in the project config file's [ctrl] section
    • Restart the system integrity manager (manager number 41)
  3. RDB database connection lost or misconfigured
    • Check and fix the RDB configuration
  4. LHC Logging database connection lost or misconfigured
    • Check and fix the LHC Logging database connection
  • 40 - Registration error

Error value 40 is raised when the registration of test data points was not successful.

Contact your Application Responsible for assistance.

Possible reasons:

  1. Test data point elements do not exist
    • Create missing test data points
  2. Test data point configuration is corrupted
    • Fix test data point configuration
    • Restart the system integrity manager (41)
  • 30 - Transfer error

Error value 30 indicates an error in the database-to-database transfer jobs.

Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"

Possible reason:

  1. Transfer jobs are corrupted
    • Check on RDB2NXCALS that transfer jobs are running and have no errors
  2. Database connection is not working
    • Check on RDB2NXCALS that the connection to LDB is valid and restart transfer jobs

Data Consistency

  • 70 - Query Error

Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails at any given point in the function call chain.

Possible reasons:

  1. Import file used is not consistent with what is generated automatically by the user function
    • If you have modified the user function, restart the UI and the system integrity manager (manager number 41)
    • Regenerate and import the new file
  2. User function not in the system integrity manager's memory
    • Check that the library containing the function is included in the project config file's [ctrl] section
    • Restart the system integrity manager (manager number 41)
  3. RDB database connection lost or misconfigured
    • Check and fix the RDB configuration
  4. LHC Logging database connection lost or misconfigured
    • Check and fix the LHC Logging database connection
  • 10 - Signals with missing DPEs

Error value 10 indicates that the current logging configuration contains signals without data point elements. Contact your Application Responsible for assistance.

Possible reasons:

  1. Application responsible has modified devices and didn't update the LHC Logging DB configuration
    • Update the LHC Logging DB configuration
  • 5 - Signals with inconsistent archive group

Error value 5 indicates that the current logging configuration contains signals with inconsistent archive groups. Contact your Application Responsible for assistance.

Possible reasons:

  1. Application responsible has modified devices and didn't update the LHC Logging DB configuration
    • Update the LHC Logging DB configuration

Full Consistency

  • 70 - Query Error

Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails at any given point in the function call chain.

Possible reasons:

  1. Import file used is not consistent with what is generated automatically by the user function
    • If you have modified the user function, restart the UI and the system integrity manager (manager number 41)
    • Regenerate and import the new file
  2. User function not in the system integrity manager's memory
    • Check that the library containing the function is included in the project config file's [ctrl] section
    • Restart the system integrity manager (manager number 41)
  3. RDB database connection lost or misconfigured
    • Check and fix the RDB configuration
  4. LHC Logging database connection lost or misconfigured
    • Check and fix the LHC Logging database connection
  • 10 - Inconsistent logging configuration

Error value 10 indicates that the current logging configuration is inconsistent.

Contact your Application Responsible for assistance.

Possible reasons:

  1. Application responsible has modified devices and didn't update the LHC Logging DB configuration
    • Update the LHC Logging DB configuration

Warnings

REPLACED CHARACTERS

  • Invalid characters replaced in 'Description': (x,y,z)

Checking automatically replaces a set of invalid characters in the description. This is done to support legacy configurations containing non-ASCII characters. The characters which are replaced are shown below.

Note: If a character is not present in the above list or on the whitelist, it will be completely removed. For the whitelist see Invalid Characters.

RECONNECTED SIGNAL

  • Name 'XXXX' using Dpe 'YYYY' will be connected to Alias 'ZZZZ' Dpe 'WWWW'

Signal XXXX used to be connected to data point element YYYY, but the imported file connects it to alias ZZZZ, data point element WWWW. Data point elements might have been deleted and recreated, or the signal could legitimately have been reassigned. Please check with the responsible person that this is expected. See Reconnecting Signals for more details.

Note: if the application is not using RDB API, continuous trends cannot be provided after reconnection.

RENAMED SIGNAL

  • Old Name: 'XXXX' new Name: 'YYYY' using Alias: 'ZZZZ' Dpe: 'WWWW'

Alias ZZZZ with data point element WWWW used to be connected to the signal with name XXXX. The imported file renames the signal to YYYY. Make sure this is intended. See Renaming Signals for more details.

Errors

Invalid Characters

Name, description and hierarchy must all consist of a whitelisted subset of ASCII characters. The accepted characters are A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, _, -, +, =, ., :, ;, /, , (, ), [, ] and comma (,).
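As an illustration, an equivalent check can be sketched in CTRL as follows (a hypothetical helper, not the component's actual implementation; the ambiguous entry between '/' and '(' in the list above is omitted here):

// Returns the first forbidden character found in sText, or "" if the text is clean.
string findForbiddenChar(string sText)
{
  string sWhitelist = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-+=.:;/()[],";
  int i;

  for (i = 0; i < strlen(sText); i++)
  {
    string sChar = substr(sText, i, 1);
    if (strpos(sWhitelist, sChar) < 0)
      return sChar; // first forbidden character
  }
  return "";        // all characters are allowed
}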

  • Alias 'XXXX' Dpe 'YYYY' Attribute 'Name': Forbidden character 'z' found.

The name attribute of the signal with alias XXXX and DPE YYYY contains the forbidden character z. All characters in the name must be in the above whitelist.

  • Alias 'XXXX' Dpe 'YYYY' Attribute 'Description': Forbidden character 'z' found.

The description attribute of the signal with alias XXXX and DPE YYYY contains the forbidden character z. All characters in the description must be in the above whitelist.

  • Alias 'XXXX' Dpe 'YYYY' Attribute 'Hierarchy': Forbidden character 'z' found.

The hierarchy attribute of the signal with alias XXXX and DPE YYYY contains the forbidden character z. All characters in the hierarchy must be in the above whitelist.

Signal Name

  • Name 'XXXX' is too long

Signal names must not exceed 50 characters. This is a limitation of the LDB and cannot be exceeded. Please make the name shorter.

  • Name 'XXXX' defined more than once

The name 'XXXX' has been used multiple times. Signal names must be unique. Please make sure that no two signal definitions specify the same name.

Hierarchy

  • DPEs with Hierarchy 'ERROR' detected
  1. If import file is manually managed, there are signals where the hierarchy is specified as ERROR. Please check and either correct the hierarchies, or remove these signals.
  2. If user function is used to generate import files, the user function is not able to parse hierarchy from the full DPE name. Check the hierarchy parsing function in the UserLib.
  • DPEs with Hierarchy 'IGNORE' detected
  1. If the import file is manually managed, there exist signals where the hierarchy is specified as IGNORE. Please check and either correct the hierarchies, or remove these signals.
  2. If user function is used to generate import files, the user function is finding data point elements that match the pattern to be ignored, commonly SPARE and RESERVE. These data point elements will be ignored in consistency checks. If there should be no ignored data point elements, check the parsing function in UserLib.

Archive

  • Alias 'XXXX' Dpe 'YYYY' not found in RDB-Archive. Check archive config and archive manager.

Check that the DPE has an archive config for the RDB Archiver (99) and that the RDB Archive manager (99) is configured, running and connected.

Parsing

  • Line=XX Column=YY

The XML import file contains a syntax error at line XX, column YY. Please check that the XML file is valid and fix any errors.

  • Root node does not exist

The XML import file is missing the root node. See chapter XML IMPORT FILES for the correct format.

  • Next node does not exist

The parser failed to find the next expected node in the XML import file. See chapter XML IMPORT FILES for the correct format.

  • Wrong config file format

The import file is not for unLHCLoggingDB or has a different version than expected. Check that the attributes application and version are correctly defined in the root node. See chapter XML IMPORT FILES for the correct format.

  • While reading child nodes

An error was encountered when parsing either config nodes or signals. Either the XML is badly formed, your import file contains no configData node, or one of your configData nodes contains no signals. See chapter XML IMPORT FILES for the correct format.

  • Alias 'XXXX' Dpe 'XXXX.YYYY' has no attribute: 'Name'

All signals must have the attribute name specified. Please check your import file for the signal with data point element XXXX.YYYY or alias XXXX.

  • Alias 'XXXX' Dpe 'XXXX.YYYY' has no attribute: 'Hierarchy'

All signals must have the attribute hierarchy specified. Please check your import file for the signal with data point element XXXX.YYYY or alias XXXX.

  • Cannot open config file XXXX

The parser was not able to open the specified file. Make sure the file exists and that you have read rights to it.

Datapoint

  • Element Y, neither Dpe nor Alias specified

The DP consistency check was called without a data point element or alias specified. Please contact icecontrol.support@cern.ch.

  • Dpe 'XXXX' does not exist

The specified data point does not exist. It might have been deleted or might never have existed. The import file must be updated, and any signal using this data point de-registered if it exists.

  • Alias 'XXXX' does not exist

The specified alias does not exist. It might have been deleted or might never have existed. The import file must be updated, and any signal using this alias de-registered if it exists.

  • Alias 'XXXX' belongs to DP 'YYYY', please specify the Dpe

Aliases used for signals must point to data point elements, not data points. Please specify the exact data point element.

  • Wrong Alias or Dpe. Alias 'XXXX' resolved to DP 'YYYY' but Dpe is 'ZZZZ'

The alias points to a different DP than the one the specified DPE is an element of. Please check and correct the import file.

  • Alias: 'XXXX' and Dpe: 'YYYY' do not match

The alias and DPE attributes point to different elements. Both must point to the same element. Please check and correct the import file.

Data Category

  • Getting data categories from database

Error getting data categories from the RDB2NXCALS database. Make sure the unLHCLoggingDB database connection has been configured (UNICOS Menu > Configuration > LHC Logging DB > Settings) and is active.

  • Alias 'XXXX' Dpe 'YYYY' data category 'ZZZZ' does not exist

The specified data category does not exist. Please check that you have spelled the category correctly. If the category is new, contact Icecontrols.Support@cern.ch and request that the category be created.

Pop-ups

  • Error during full consistency check or during manual check of data consistency

Possible solutions:

  1. Import file is malformed or empty. If no signals were selected, an empty file will result in this error. A file is considered empty even if it contains the [_RDBArchive] tag.
    • Regenerate import file to get the latest state
  2. Library containing the user function is not included in project config file section [ctrl].
    • Add line LoadCtrlLibs = "libraryWithTheUserFunction.ctl"
    • Restart the UI and the System Integrity manager (Manager number 41)
  3. User function has been introduced or modified but UI or System Integrity manager have not been restarted
    • Restart the UI and the System Integrity manager (Manager number 41)
  • Create a configuration for config file generation!

This error is raised when opening the Generate panel with no configuration present.

Possible solution:

  1. No configuration exists
    • Create a configuration in Manage Configurations panel
  • Database schema 'XXXX' is not registered in database

This error is raised when opening the delete panel when the RDB schema XXXX has not been declared in the RDB2NXCALS database. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB".

Possible solutions:

  1. Schema XXXX not in RDB
    • Connect to correct schema
  2. Schema XXXX not accessible from RDB2NXCALS
    • Contact IT
  • Database schema 'XXXX' of this system is not registered in database 'YYYY'

This error is raised when opening the LHC Logging DB Signal Overview panel when the RDB schema XXXX is not registered in the YYYY database.

Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"

Possible solutions:

  1. Schema XXXX not in RDB
    • Connect to correct schema
  2. Database YYYY is not the correct database with RDB2NXCALS
    • Connect to the correct database for RDB2NXCALS
  3. Database YYYY is the correct one and schema XXXX not accessible from it
    • Contact IT
  • Loading database settings failed

This error is raised when no LHC Logging DB connection has been configured.

Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"

Possible solutions:

  1. No connection exists
    • Configure the connection, activate and apply.
  • DB Error - unLHCLoggingDBSql_queryDataWithParameterMapping: See log file for details

This error is raised when opening the Delete panel and no LHC Logging DB connection has been configured. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB".

Possible solutions:

  1. No connection exists
    • Configure the connection, activate and apply.

References & useful links

  1. UNICOS
  2. LHC Logging DB
  3. Logging Configuration
  4. Procedure to setup RDB Archiving and LHC Logging DB
  5. LHC Logging DB Problem Management
  6. System Integrity