unLHCLoggingDB 9.0.0
The master copy of this document is in the git repository UNICOS/SCADA/unLoggingDB. The last released version is at EDMS 1730733.
Documentation of the UNICOS LHC LOGGING DB component.
The UNICOS LHC Logging DB component is a configuration and monitoring tool used to set up data transfer from the RDB Archive to the central logging database (LDB).
Data is transferred by up to 16 Java transfer processes directly from Oracle to NXCALS, so no additional API-Manager (as in the previous logging solution) is needed.
In order to configure and monitor data transfer from the RDB Archive to LDB, the LHC Logging DB component needs a permanent database connection. Concretely, it has to be connected to the database schema RDB2NXCALS on the dedicated database server.
The LHC Logging DB component heavily depends on CtrlRDBAccess (for database access) and RDB Archiver (for pushing data to RDB Archive).
The LHC Logging DB component is composed of:
The source files of the component can be found at:
https://gitlab.cern.ch/UNICOS/SCADA/unLoggingDB
Figure 1: LHC Logging DB architecture
Figure 1 illustrates the architecture of the full logging chain. Data is fed into the WinCC OA system via drivers and then inserted into the Oracle RDB Archive by the RDB Archive Manager. Data transfer to LDB is done via a DB link from Oracle DB to Oracle DB. The LHC Logging DB UI panels are used to configure and monitor the data transfer between the RDB Archive and LDB.
Figure 2 gives more insight into the data transfer, which is performed by up to 16 transfer jobs.
Each transfer job is responsible for transferring a specific set of signals to LDB. Signals are assigned to a transfer job via data categories, which are managed by the Logging DB administrator.
Transfer jobs execute every 5 minutes and transfer chunks of data (one chunk per signal). Chunks have a maximum size, which can be changed by the Logging DB administrator in agreement with the BE/CO logging team.
To keep track of already transferred data, each transfer job keeps statistics (timestamps and number of transferred values for each signal) in a dedicated table. Each time a job executes, it looks up the timestamp of the last logged value and resumes the transfer from there.
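The resume logic described above can be sketched as follows. This is a minimal Python illustration with invented in-memory stand-ins (`Source`, `Target`, the chunk size); the real transfer jobs run as Java/DB processes on the database server.

```python
CHUNK_SIZE = 2  # illustrative; the real maximum chunk size is set by the administrator

class Source:
    """Stand-in for the RDB Archive: signal -> list of (timestamp, value) rows."""
    def __init__(self, data):
        self.data = data

    def fetch(self, signal, after, limit):
        rows = [r for r in self.data.get(signal, []) if r[0] > after]
        return rows[:limit]

class Target:
    """Stand-in for LDB: just collects inserted rows."""
    def __init__(self):
        self.rows = []

    def insert(self, signal, chunk):
        self.rows += [(signal,) + r for r in chunk]

def run_transfer_job(signals, source, target, stats):
    """Transfer one chunk per signal, resuming from the last logged timestamp."""
    for signal in signals:
        last_ts = stats.get(signal, 0)  # timestamp of the last logged value
        chunk = source.fetch(signal, after=last_ts, limit=CHUNK_SIZE)
        if chunk:
            target.insert(signal, chunk)
            stats[signal] = chunk[-1][0]  # remember the newest transferred timestamp
```

Running the job twice in a row transfers the first chunk and then resumes from the stored timestamp, never re-transferring already logged values.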
Data transfer is done in 3 steps:
Since many WinCC OA projects need to push data to LDB, each project has a dedicated database schema on the database server for storing its data.
**Important note: When setting up new production projects stick to the convention to only use one project per schema.**
The logging configuration is centrally stored on the same database server and it contains a list of signals that need to be transferred to LDB plus additional metadata.
The most important parts of the logging configuration are:
The logging configuration also contains additional hierarchy information, which is needed to display data in hierarchical form in TIMBER.
In order to configure a set of signals to be transferred to LDB, the operator has to provide the necessary configuration data in the form of an import file.
LHC Logging DB import files contain information that is necessary to register a set of signals or data points for logging.
They include the following fields:
DPE and Alias can be specified in different ways:
Important note:
Other metadata that is needed for logging configuration (e.g.: RDB_SOURCE, OWNER) is automatically extracted from the WinCC OA project, so there is no need to put this information into the import file.
Text import files have been introduced to preserve backward compatibility with the old LHC Logging solution, so that import files from the old logging can still be used to configure LHC Logging DB.
```
[_RDBArchive]
dpe=_unSystemAlarm_DS_Comm__unPlc_CFP_2865_FSSF8.alarm alias= name=CFP_2865_FSSF8:COMMUNICATION description=Communication CFP_2865_FSSF8 to DS driver hierarchy=<node name="CRYO"><node name="DIST"></node></node> format=
dpe=ProcessInput.evStsReg02 alias=TEST_DEV_1 name=TEST_DEV_1.evStsReg02 description=SU8 COOLING PLANT hierarchy=<node name="CRYO"><node name="DIST"></node></node> format=
```
Since there is only one RDB Archive, the archive tag [_RDBArchive] no longer has any meaning, but it still has to be present in the file. The attribute "format" is also ignored, but it still has to be specified without a value. In general, if an attribute is not used, it still has to be present with an empty value.
Important note:
XML import files contain the same information as text import files, but the information is structured differently. One advantage of XML import files is that all information for a signal can be displayed on a single line. It is also possible to add XML comments (<!-- ... -->) to the file or to comment out specific signals that need to be ignored during import.
```xml
<?xml version="1.0" encoding="ISO-8859-1" ?>
<configFile version="1" application="unLHCLoggingDB">
  <configData group="default">
    <signal name="" dpe="" alias="" description="" hierarchy="" />
    <signal name="" dpe="" alias="" description="" hierarchy="" />
    ...
  </configData>
</configFile>
```
When using XML import files, the representation of hierarchies is different: the node names are joined using the delimiter "/".
E.g.: hierarchy="CRYO/DIST"
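The two hierarchy representations are equivalent. Converting the nested `<node name="...">` form used in text import files into the "/"-delimited form used in XML import files can be sketched like this (an illustrative Python snippet, not part of the component):

```python
import xml.etree.ElementTree as ET

def hierarchy_to_path(xml_hierarchy):
    """Convert a nested <node name="..."> hierarchy into an 'A/B' path."""
    names = []
    node = ET.fromstring(xml_hierarchy)
    while node is not None:
        names.append(node.attrib["name"])
        node = node.find("node")  # descend into the (single) child node, if any
    return "/".join(names)
```

For example, `hierarchy_to_path('<node name="CRYO"><node name="DIST"></node></node>')` yields the string used in the XML form above.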
The LHC Logging DB user interface contains panels to set up, configure and monitor data transfer to LDB. From the UNICOS main panel, LHC Logging DB can be accessed via (1.) UNICOS button > Configuration > LHCLoggingDB, as shown in the figure below. Note that the menu entry for LHC Logging DB might not be the first one in the menu.
Figure 4: LHC Logging DB menu
Make sure that RDB Archiving is already installed and properly set up (including system integrity checking) before setting up LHC Logging DB.
In order to establish a database connection, database credentials must be present in the settings panel (Figure 5). A convenient way to enter the credentials is to install the fwRDBSettingsSCADAR component, which takes care of this step.
Figure 5: Database settings
Before a project can be used it has to be ensured that:
The panel 'Admin Tools' provides functionality for these tasks.
Figure 6: Admin Tools
Each data category has its dedicated transfer job number, which can be changed. This allows the administrator to balance the load by assigning sets of projects to certain transfer jobs.
Data Category Mappings allow usernames and data categories to be pre-selected depending on the name of the DB schema. They connect 'Schema Name', 'User Name' and 'Data Category' to conveniently limit the possible user names and data categories to a small set. This feature was implemented to prevent users from accidentally choosing wrong user names or data categories during import.
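The mapping lookup described above can be sketched as follows. This is a minimal Python illustration; the schema, user and category names are invented, and the real mappings are stored in the RDB2NXCALS database.

```python
# Hypothetical mapping table: schema name -> allowed (user name, data category) pairs.
MAPPINGS = {
    "CRYO_LHC": [("cryo_op", "CRYO"), ("cryo_op", "CRYO_TEST")],
}

def allowed_choices(schema):
    """Return the pre-selected user names and data categories for a schema."""
    pairs = MAPPINGS.get(schema, [])
    users = sorted({u for u, _ in pairs})
    categories = sorted({c for _, c in pairs})
    return users, categories
```

A schema with no mapping yields empty lists, so no wrong user name or data category can be picked by accident.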
For more details see [Procedure to setup RDB Archiving and LHC Logging DB](https://wikis.web.cern.ch/wikis/display/EN/WinCC+OA+Oracle+Archiving+for+SAS).
The LHC Logging DB component provides a set of panels and tools for logging configuration. The following chapters will explain how and when to use them.
The concept of configurations has been introduced to define the content of an import file on a higher level of abstraction. It allows specifying certain rules, which can later be used to generate import files in a reproducible way.
Figure 7: Manage configurations
A project can have multiple configurations (e.g. one production configuration and some test configurations, or many production configurations where each configuration represents a different application within the project).
If there is no configuration entry, click button 1. on the above figure to create one.
A configuration contains the following settings:
Note: If the built-in import file generator is used, a user-function with the following signature needs to be provided and the containing library needs to be included in the project's configuration file (sections [ui] and [ctrl])
```
/**
  @param sFullDpeName  input,  Full data point element name
  @param sName         output, Signal name
  @param sDescription  output, Description
  @param sHierarchy    output, Activity/Location/Appl./Nature/Domain
  @param sDpAlias      output, DP Alias
  @param sDpeExtension output, element extension
*/
public void THE_NAME_OF_THE_USER_FUNCTION(
  string sFullDpeName,
  string &sName,
  string &sDescription,
  string &sHierarchy,
  string &sDpAlias,
  string &sDpeExtension)
{
  ...
}
```
Note 1: Set sHierarchy to "ERROR" if the function cannot retrieve the output information. Set it to "IGNORE" if a certain type of signal should be explicitly ignored (e.g. where the description matches SPARE or RESERVE).
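The contract of the user function, including the ERROR/IGNORE convention of Note 1, can be illustrated with a Python sketch. The real function is a WinCC OA CTRL function as shown above; the description lookup and the hierarchy value here are placeholders, not part of the component.

```python
def example_user_function(full_dpe_name, descriptions):
    """Illustrative sketch of the user-function contract.

    Returns (name, description, hierarchy, alias, extension). The hierarchy
    is set to "ERROR" when metadata cannot be resolved and to "IGNORE" for
    signals that should be skipped (e.g. spares).
    """
    desc = descriptions.get(full_dpe_name)
    if desc is None:
        return ("", "", "ERROR", "", "")   # cannot retrieve output information
    if "SPARE" in desc or "RESERVE" in desc:
        return ("", "", "IGNORE", "", "")  # explicitly ignore this signal
    dp, _, ext = full_dpe_name.partition(".")
    return (dp + "." + ext, desc, "CRYO/DIST", dp, ext)  # placeholder hierarchy
```

In the real CTRL function the outputs are reference parameters rather than a return tuple, but the decision logic follows the same pattern.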
Note 2: If the built-in import file generator is not used because the user provides the import files in a different way, a manually maintained configuration should still be created for each import file. Configurations help keep track of the import files and also enable some useful consistency-checking features.
Figure 19: Datapoint list filtering
Figure 19 shows how the list of datapoints for long-term archiving is filtered depending on the options selected in the configuration panel. Note that the filters are cascaded, i.e. each filter is applied to the set of DPEs returned by the previous one. Finally, each datapoint from the final list (right side) is passed to the user function for processing. Lists marked with a star contain, or might contain, internal datapoints.
Note: Generating an import file will overwrite the existing import file specified. Always make a backup before generation.
Import files can be generated with the Generate panel, which provides following functionality:
Figure 8: Generate Import File
The labels 'Marked', 'Ignored', 'Infos' and 'Errors' provide useful metrics about the state of the import file that is going to be created.
Note: You may right-click on any row and enable columns for the full DPE name, alias and DPE extension, which can help when debugging errors.
The import panel is used to append signals to the central logging configuration in the database and to modify them. Operators need to authenticate with a dedicated username to be allowed to change the logging configuration.
New signals are stored in import files, which are held by configurations. After choosing a configuration for import, the name of its import file is shown in the text box above.
Before an import file can be appended to the central configuration in the DB, it needs to be checked for errors. The check reports name clashes, duplicated entries, forbidden characters, and data points that do not exist in the project or whose specified aliases and data point elements mismatch.
Please see chapter 5.2 for conflict types and their resolutions.
Figure 9: Import Logging Configuration
Note: Invalid characters in attribute Description will be replaced according to a substitution list. If a character is not present in the list or in the whitelist, it will be completely removed. For the substitution table see REPLACED CHARACTERS. For the whitelist see Invalid Characters.
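The replace-or-remove behaviour of the description check can be sketched as follows. This is a Python illustration only: the substitution table here is invented (the real one is in the REPLACED CHARACTERS chapter), and the whitelist approximates the one in Invalid Characters, with the space character assumed allowed.

```python
# Hypothetical substitution table; see the REPLACED CHARACTERS chapter for the real one.
SUBSTITUTIONS = {"é": "e", "°": "deg", "%": "pct"}

# Approximation of the accepted-character whitelist (space assumed allowed).
WHITELIST = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
    "_-+=.:;/ ()[],"
)

def clean_description(text):
    """Keep whitelisted characters, substitute known invalid ones, drop the rest."""
    out = []
    for ch in text:
        if ch in WHITELIST:
            out.append(ch)
        elif ch in SUBSTITUTIONS:
            out.append(SUBSTITUTIONS[ch])
        # any other character is removed entirely
    return "".join(out)
```

This mirrors the documented rule: characters in the substitution list are replaced, and characters in neither list are removed completely.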
Note: When appending signals that are already registered for logging, their metadata (DPE, description, and data category) will be updated in the central logging configuration, but the changes will not be propagated to LDB. When appending signals with different hierarchies, the old hierarchies will remain and must be deleted manually.
The data consistency check panel compares the logging configuration in the DB with the logging configuration in one or more import files.
Figure 10: Data Consistency Check
By default, all production configurations will be marked for checking, but the user can also mark test configurations to simulate the changes that would result from importing certain files.
If the checkbox 'Generate files' is enabled, temporary import files will be generated for all marked configurations and used for the comparison. This option only makes sense if the configuration uses a user function to generate the file. If the checkbox is not checked, only the signals specified in the import file will be checked for consistency.
Hint: Make sure that the library containing the user function for import file generation has been added to the project's config file in sections [ui] and [ctrl], otherwise the option 'Generate files' will not work.
Note: The panel is only used to trigger the integrity check and display the result in the table. The actual check is being executed in the system integrity control manager (usually manager number 41)
In some cases it is necessary to delete signals or hierarchies from the logging configuration on the database of a specific project. Deletion of signals or hierarchies will not be propagated to LDB.
Note: Deletion of signals or hierarchies only marks them deprecated. To delete or obsolete signals or hierarchies in LDB, you must contact acc-logging-support@cern.ch.
When does deletion of signals or hierarchies make sense?
Figure 11: Delete Logging Configuration
The signal overview panel provides information about all signals, which are registered for logging in a certain database schema. This panel is accessible from the Import panel by clicking the View Signals... button.
By default the database schema of the current project will be selected, but since multiple schemas share the same database it is also possible to view signals from other systems or to connect to another database and view the schemas and signals registered there.
Figure 12: Signal Overview
Columns shown by default:
Additional columns and options can be opened by right click on a signal in the table.
The transfer history panel provides a historical listing of data transfers to LDB for a specific signal. The user can specify the time range, but avoid specifying too long a range: such queries can put a heavy load on the database and take a long time.
Table columns:
Figure 13: Transfer History
Note: The number of records checked and records logged can differ depending on the timestamps of Last Checked and Last Logged. A discrepancy while the load on the transfer jobs is normal and no system integrity alarms are raised indicates a problem.
The transfer job monitoring panel shows the current status of all transfer jobs.
Figure 14: Transfer Job Monitoring
When selecting a certain transfer job in the table, details will be displayed below:
Each transfer job can be in one of the following states:
System integrity checking for LHC Logging DB was designed to answer two questions:
Figure 15: System Integrity Configuration
There are five different integrity checks available to answer these questions:
The interval for running system integrity checks can be modified by changing the setting 'Run checks every ___ [s]'.
If a system integrity check fails an alarm will be triggered and shown in the system integrity alert screen.
If the reason for the alarm is a mismatching logging configuration ('Data Consistency' or 'Full Consistency' check failed), the problem can be fixed by the operator by updating the logging configuration in DB (e.g. generate and re-import config files).
If the reason for the alarm is a problem on database ('Transfer Jobs' or 'Transfer History' check failed) it can only be treated by the LHCLoggingDB admin or acc-logging-support.
Checks can be added to the list of 'Activated LHCLoggingDB integrity classes' by selecting them from the list of 'Available LHCLoggingDB integrity classes' and then adding them via the 'arrow down' button.
Checks can be removed from the list of 'Activated LHCLoggingDB integrity classes' by selecting them in that list and then removing them via the 'arrow up' button.
Already added checks can be disabled by unticking the checkbox 'Enabled'. Disabled checks will not be executed, and their alarms will be masked.
Note: System integrity checking for LHC Logging DB will be done by a dedicated control manager. Usually this manager has the number 41.
Note: To be able to use system integrity checking for LHC Logging DB first set up system integrity checking for RDB. Otherwise checks involving test data points will not work.
Important note: When the database load is very high, system integrity alarms for the database transfer jobs of LHC Logging DB can flip to the error state. Try increasing the settings 'Max. age jobs' and 'Max. age history' to make the checks less sensitive to load peaks on the database. Note that these checks are there for a reason, and their thresholds should stay low enough to catch real problems. Certain applications can be quite susceptible to transfer delays, as they both write to and read from LDB; for example, PIC can accept at most five minutes of delay from a value change to its availability in LDB.
The system integrity diagnostics panel shows the current status of all added system integrity checks for LHC Logging DB.
Figure 16: System Integrity Diagnostics
To keep track of created test signals a list of test data points including the current value and timestamp of the last value change is also shown in the second table.
Using the button 'Data Consistency' the operator can open the data consistency check panel and run a consistency check for the specified system.
Hint: This panel can be opened from the main UNICOS menu via (1.) UNICOS button > Management > Diagnostic > System integrity as shown in figure to the right.
Note: The menu entry for System integrity might not be in the same position in the Management sub menu as in the figure.
Scripts:
Libs:
The LHC Logging DB component makes use of the DB schema RDB2NXCALS.
API functions for variable registration can be found in PL/SQL package PVSS_DATA_TRANSFER_MGR.
API functions for variable renaming can be found in PL/SQL package VAR_METADATA_MGR.
Setting up LHC Logging is a complex operation that requires actions from three departments. Please contact Icecontrols.Support@cern.ch well in advance, as the steps to be followed will take at the very least a few days.
Before a project can be used make sure that:
For more details see [Procedure to setup RDB Archiving and LHC Logging DB](https://wikis.web.cern.ch/wikis/display/EN/WinCC+OA+Oracle+Archiving+for+SAS).
To be allowed to modify the LHC Logging DB configuration you need to authenticate with a dedicated username.
If you cannot find a suitable username in the list of proposed users, ask Icecontrols.Support@cern.ch for help (don't forget to provide the name of your project, machine and database schema).
Five steps to enable system integrity checking:
To modify the logging configuration use the panels at Configuration > LHC Logging DB.
If a new import file contains different hierarchies than the old one, you need to delete old hierarchies before importing the new file, otherwise the logging configuration on DB will contain multiple hierarchies for a signal.
Note: Duplicated hierarchies are reported by the data consistency panel and will also cause the system integrity check "Full Consistency" to fail.
Steps to delete hierarchies from logging configuration:
**Note: Deletion of hierarchies using the delete panel will not be propagated to LDB. If you also want to delete hierarchies visible in TIMBER, send a request to acc-logging-support@cern.ch.**
As mentioned in the [Deleting Hierarchies]{#deleting-hierarchies} paragraph, if a new import file contains hierarchies different from the old ones, the signal will be visible in both hierarchies. In other words, hierarchies are not replaced but added to the signal. This situation may also cause the consistency check to fail, as only one hierarchy is considered for the checks.
When changing the hierarchies of a signal, one needs to either:
When checking import files, renamed signals will be found and reported in the status table of the import panel. The user must take further actions to resolve the conflicts.
One solution for resolving conflicts caused by renamed signals is delete and re-import:
Since deletion of signals is not propagated to LDB, we end up with both Signal_1 and Signal_2 in LDB, but new values will only be stored in Signal_2.
Note: If you don't need to have one signal with continuous history this solution will do.
Note: If you try to re-import without deleting first the import will fail.
Usually each registered signal has an assigned DPE: Signal_1 > DPE_1
Sometimes other DPEs need to be assigned to the signal: Signal_1 > DPE_2
When checking import files, reconnected signals will be found and reported.
How to treat reconnected signals:
Note: You can check the consistency between import files and central logging configuration in DB with the data consistency check panel.
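The detection of renamed and reconnected signals described above amounts to comparing the alias-to-(name, DPE) mapping in the central configuration with the one in the import file. A hypothetical Python sketch of that comparison (not the component's actual check):

```python
def classify_changes(db_config, import_config):
    """Compare two configurations, each mapping alias -> (signal name, DPE).

    Returns lists of renamed signals (same alias, different name) and
    reconnected signals (same alias, different DPE).
    """
    renamed, reconnected = [], []
    for alias, (name, dpe) in import_config.items():
        if alias not in db_config:
            continue  # a brand-new signal, not a conflict
        old_name, old_dpe = db_config[alias]
        if name != old_name:
            renamed.append((alias, old_name, name))
        if dpe != old_dpe:
            reconnected.append((alias, old_dpe, dpe))
    return renamed, reconnected
```

Both conflict types are reported during the import file check and must be resolved by the user as described in the sections above.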
By using different configurations per UNICOS application, it is possible to register only a subset of the devices present in the project.
The user function is used to filter and select data point elements, and to define the signals and hierarchies. Should the application need custom hierarchy and signal definitions that are different from the generic user function, a custom user function can be used instead.
How to add a custom user function:
```
LoadCtrlLibs = "libraryWithTheUserFunction.ctl"
```
How to modify a custom user function:
manager (Manager number 41)
Go to Configuration > LHC Logging DB > Data Consistency and click on the check button. If the check result does not contain any errors the logging configuration on DB is in sync with your import files.
If you are using the built-in import file generator, you can also check the option "Generate files". New import files will then be generated from your current devices and used for the comparison. These files are temporary and will not overwrite your configuration.
Hint: When using the option "Generate files" the consistency check might take much longer (a couple of minutes).
Data retransfers are very special and complex operations that require actions from two groups (BE-ICS and BE-CSS). They should be avoided whenever possible.
If there is no way to avoid a data retransfer it can be requested via acc-logging-support@cern.ch or Icecontrols.Support@cern.ch which will redirect the request to acc-logging-support.
Note: By default, when adding a new signal, its data for the last 3 days will automatically be transferred from the WinCC OA RDB schema to NXCALS. Therefore a retransfer request should only apply to time ranges older than 3 days.
The retransfer request should contain the following information:
For more details see [LHC Logging DB Problem Management](https://wikis.web.cern.ch/wikis/display/EN/LHC+Logging+DB+Problem+Management).
Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails, at any given point in the function call chain.
Possible reasons:
Error value 20 indicates a delay in the Database to Database transfer jobs. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"
Possible reason:
Error value 10 indicates a stop in the Database to Database transfer jobs. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"
Possible reason:
Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails, at any given point in the function call chain.
Possible reasons:
Error value 40 is raised when registration of test data points was not successful.
Contact your Application Responsible for assistance.
Possible reasons:
Error value 30 indicates an error in the Database to Database transfer jobs.
Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"
Possible reason:
Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails, at any given point in the function call chain.
Possible reasons:
Error value 10 indicates that the current logging configuration contains signals without data point elements. Contact your Application Responsible for assistance.
Possible reasons:
Error value 5 indicates that the current logging configuration contains signals with inconsistent archive groups. Contact your Application Responsible for assistance.
Possible reasons:
Error value 70 is a generic error that results when a query to either RDB or LoggingDB fails, at any given point in the function call chain.
Possible reasons:
Error value 10 indicates that the current logging configuration is inconsistent.
Contact your Application Responsible for assistance.
Possible reasons:
Checking automatically replaces a set of invalid characters in the description. This supports legacy configurations containing non-ASCII characters. The characters that are replaced are shown below.
Note: If a character is not present in the above list or on the whitelist, it will be completely removed. For the whitelist see Invalid Characters.
Signal XXXX used to be connected to data point element YYYY, but the imported file connects it to alias ZZZZ and data point element WWWW. The data point elements might have been deleted and recreated, or the signal could have been legitimately reassigned. Please check with the responsible person whether this is expected. See Reconnecting Signals for more details.
Note: if the application is not using RDB API, continuous trends cannot be provided after reconnection.
Alias ZZZZ with data point element WWWW used to be connected to signal with name XXXX. The imported file renames the signal to ZZZZ. Make sure this is intended. See Renaming Signals for more details.
Name, description and hierarchy must all consist of a whitelisted subset of ASCII characters. The accepted characters are A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, _, -, +, =, ., :, ;, /, , (, ), [, ] and comma (,).
The name attribute of the signal with alias XXXX and DPE YYYY contains forbidden character z. All characters in the name must be in the above whitelist.
The description attribute of the signal with alias XXXX and DPE YYYY contains forbidden character z. All characters in the description must be in the above whitelist.
The hierarchy attribute of the signal with alias XXXX and DPE YYYY contains forbidden character z. All characters in the hierarchy must be in the above whitelist.
Signal name must be less than 51 characters long. This is a limitation on the LDB and cannot be exceeded. Please make the name shorter.
The name 'XXXX' has been used multiple times. Signal names must be unique. Please make sure no two signal definitions specify the same name.
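The whitelist, length and uniqueness rules above can be sketched as one combined check. This is an illustrative Python sketch, not the component's actual code; the character set follows the whitelist stated in this chapter (with the space character assumed allowed), and the length limit follows the less-than-51-characters rule.

```python
import re

# Accepted characters per the whitelist chapter (space assumed allowed).
ALLOWED = re.compile(r"^[A-Za-z0-9_\-+=.:;/ ()\[\],]*$")
MAX_NAME_LEN = 50  # names must be less than 51 characters (LDB limitation)

def check_names(names):
    """Return error strings for whitelist, length and uniqueness violations."""
    errors, seen = [], set()
    for name in names:
        if not ALLOWED.match(name):
            errors.append("forbidden character in '%s'" % name)
        if len(name) > MAX_NAME_LEN:
            errors.append("'%s' exceeds %d characters" % (name, MAX_NAME_LEN))
        if name in seen:
            errors.append("duplicate name '%s'" % name)
        seen.add(name)
    return errors
```

An import file whose signal names pass all three rules produces an empty error list.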
Check that the DPE has an archive config for RDB Archiver (99) and that RDB Archive manager (99) is configured, running and connected.
The XML import file contains syntax error on line XX column YY. Please check that the xml file is valid and fix any errors.
The XML import file is missing the root node. See chapter XML IMPORT FILES for correct format.
The XML import file failed to find the root node. See chapter XML IMPORT FILES for correct format.
The import file is not for unLHCLoggingDB or is different version than expected. Check that attributes application and version are correctly defined in the root node. See chapter XML IMPORT FILES for correct format.
Error encountered when parsing either config nodes or signals. Either the XML is badly formed, you have no configData node in your import file, or you have no signals in one of your configData nodes. See chapter XML IMPORT FILES for correct format.
All signals must have the attribute name specified. Please check your import file for signal with data point element XXXX.YYYY or alias XXXX
All signals must have the attribute hierarchy specified. Please check your import file for signal with data point element XXXX.YYYY or alias XXXX
The parser was not able to open the specified file. Make sure the file exists and that you have read rights to the file.
DP Consistency check was called without data point element or alias specified. Please contact icecontrol.support@cern.ch.
The specified data point does not exist. It might have been deleted or could have never existed. The import file must be updated and signal using this data point de-registered if existing.
The specified alias does not exist. It might have been deleted or could have never existed. The import file must be updated and signal using this alias de-registered if existing.
Aliases used for signals must point to data point elements, not data points. Please specify the exact data point element.
The alias points to a different DP than the specified DPE is an element of. Please check and correct the import file.
The alias and DPE attributes point to different elements. Both must point to the same element. Please check and correct the import file.
Error getting data categories from the RDB2NXCALS database. Make sure the unLHCLoggingDB database connection has been configured (UNICOS Menu > Configuration > LHC Logging DB > Settings) and is active.
The specified data category does not exist. Please check that you have spelled the category correctly. If the category is new, contact Icecontrols.Support@cern.ch and request for the category to be created.
Possible solutions:
This error is raised when opening the Generate panel with no configuration present.
Possible solution:
This error is raised when opening the delete panel and the RDB schema XXXX has not been declared in the RDB2NXCALS database. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"
Possible solutions:
This error is raised when opening the LHC Logging DB Signal Overview panel and the RDB schema XXXX is not registered in the YYYY database.
Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"
Possible solutions:
This error is raised when no LHC Logging DB connection has been configured.
Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"
Possible solutions:
This error is raised when opening the Delete panel and no LHC Logging DB connection has been configured. Contact the SCADA expert for assistance by sending an email to icecontrol.support@cern.ch with subject line "SCADA SERVICE - LHCLoggingDB"
Possible solutions: