Loading Data Manually into the beqom Rapide Template
The beqom Rapide template provides a foundation for a faster implementation of the beqom TCM solution, based on a standard data model and a set of tools. As part of the beqom Rapide template ETL (Extract, Transform and Load) process, you can load data into staging tables either manually or through ADF (Azure Data Factory).
This article describes how to load data into the Rapide staging tables through a manual data load (i.e. by uploading Excel files), and how to run the ETL process via the ETL Launcher in order to load data from the staging tables into the target (production) tables.
Configuring Global Settings
The first step to perform a manual load of data into the Rapide template is to configure the global ETL settings. To do so, proceed as follows:
Open the Web Application interface of the TCM application, and then open the (Undefined variable: CompoVariables_WA.Data).
Navigate to Global Settings > Global Setting.
In the Global Setting grid, configure the parameters as follows:
Staging data archiving: determines whether staging data should be archived in the database. When this setting is enabled (i.e. set to Yes), data loaded from the staging tables is archived in the database for the number of days indicated in the configuration of the related parameter.
Logging data errors: determines whether data errors should be logged. When this setting is enabled, data loaded from the staging tables that was moved into the error tables following ETL validation is retained in the database for the number of days indicated in the configuration of the related parameter. Please note that this setting is enabled by default and cannot be disabled.
Format of the date: the date format used when loading data into the staging tables. Possible values are yyyymmdd, mm/dd/yyyy, dd/mm/yyyy and yyyy/mm/dd. Please note that the separator used in the drop-down list is /, but - and . can also be used. Dates provided in ISO format (yyyymmdd) are properly converted according to the selected format.
Failure email notification: determines whether an email should be sent in case of a data load failure. When this setting is enabled, in the event of a data load failure, an email is sent to the recipients specified in the ETL parameters, with the email subject indicated in the related parameter. Please note that for this to work properly, the email scheduler needs to be created and set up according to your preferences. For more information about the email scheduler, refer to Setting up the Email Sender for the Rapide Data Model.
Success email notification: determines whether an email should be sent upon a successful data load. When this setting is enabled, after a successful load, an email is sent to the recipients specified in the ETL parameters, with the email subject indicated in the related parameter. Please note that for this to work properly, the email scheduler needs to be created and set up according to your preferences. For more information about the email scheduler, refer to Setting up the Email Sender for the Rapide Data Model.
Click Save in the lower-right corner of the grid.
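As an illustration of the Format of the date setting described above, the following sketch (plain Python, not beqom code; the helper name and format map are hypothetical) shows how an ISO yyyymmdd date can be rendered in each of the supported formats, with the / separator optionally replaced by - or .:

```python
from datetime import datetime

# Hypothetical mapping from the "Format of the date" drop-down values
# to strftime patterns. The separator in the list is '/', but '-' and
# '.' are also accepted, so we substitute it on demand.
FORMAT_MAP = {
    "yyyymmdd": "%Y%m%d",
    "mm/dd/yyyy": "%m/%d/%Y",
    "dd/mm/yyyy": "%d/%m/%Y",
    "yyyy/mm/dd": "%Y/%m/%d",
}

def convert_iso_date(value: str, target: str, sep: str = "/") -> str:
    """Parse an ISO yyyymmdd date and render it in the selected format."""
    parsed = datetime.strptime(value, "%Y%m%d")
    fmt = FORMAT_MAP[target].replace("/", sep)
    return parsed.strftime(fmt)

# convert_iso_date("20240131", "dd/mm/yyyy")      -> "31/01/2024"
# convert_iso_date("20240131", "yyyy/mm/dd", "-") -> "2024-01-31"
```

Dates that cannot be parsed or converted this way are the ones flagged by the date-related ETL validation described later in this article.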
Configuring Global ETL Settings
After configuring the global parameters, you need to perform further configuration steps specific to the ETL process in the Global Setting ETL grid, depending on the parameters that you have enabled. To do so, proceed as follows:
Make sure you are in the (Undefined variable: CompoVariables_WA.Data) of the TCM Web Application interface, and then navigate to Data Integration > Data Integration Settings > Global Setting ETL. The grid contains all the parameters enabled in the Global Settings grid.
In the grid, locate and select the parameters that you want to configure. A child grid opens upon clicking each of the rows.
Configure the required parameters as follows (depending on the configuration made in the Global Settings grid):
Staging data archiving: if you have enabled this setting, enter in the Value Int column the retention period (in days) of archived data in the database. Please note that the maximum number of days is 30. If required, enter more information in the Value Text field.
Logging Data Errors: enter in the Value Int column the retention period of data error logs in the database. The minimum value is 5 days.
Failure email notification: if you have enabled the failure email notification setting in the Global Settings, you need to specify the recipient(s) and the subject of the email. In the Failure Email Subject row, enter in the Value Text field the subject of the email. In the Failure Email Address row, enter in the Value Text field the address(es) of the recipients. Use commas to separate values if you are entering multiple recipients.
Success email notification: if you have enabled the success email notification setting in the Global Settings, you need to specify the recipient(s) and the subject of the email. In the Success Email Subject row, enter in the Value Text field the subject of the email. In the Success Email Address row, enter in the Value Text field the address(es) of the recipients. Use commas to separate values if you are entering multiple recipients.
Click Save in the lower-right corner of the grid panel. Your ETL settings are now saved.
Configuring the "Table Setting" Grid
The Table Setting grid contains a full list of payee tables and referential tables, for which you need to define the method used to load data into the staging tables, as well as the appropriate data load type (i.e. the method used to move data from the staging tables into the target tables).
To configure those settings, proceed as follows:
Make sure you are in the (Undefined variable: CompoVariables_WA.Data) of the TCM Web Application interface, and then navigate to Data Integration > Data Integration Settings > Table Setting.
Locate in the grid the table(s) that you want to configure.
In the ETL Type column, select the Manual Load option for the manual data load.
In the Data Load Type column, select the type of data load that you want to use:
Complete load: when this option is selected, files are fully loaded every time, using a truncate and insert method. This option is only available for payee tables.
Full delta: when this option is selected, the full history of the employee(s) affected by the changes is loaded. Use this load type to reload data for given employees. The loaded file can include data for certain employees only, but it must contain all historical rows for the employees whose data is to be updated. The data for all unique employees present in the loaded file is then replaced in the destination table. This option is only available for payee tables.
Upsert: when this option is selected, records whose key already exists in the target table are updated, and new records are inserted. This option is only available for referential tables.
Check the Is Enabled? flag for the rows corresponding to the table(s) to be configured, in order to indicate whether the table is used or not.
Check the Is Load Enabled? flag for the rows corresponding to the table(s) to be configured, in order to indicate whether the file should be loaded.
Check that the load sequence is properly set up in the Load Sequence column.
Click Save in the lower-right corner of the grid. The tables are now configured.
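To make the three data load types concrete, here is a minimal sketch of their semantics (plain Python operating on in-memory lists of row dictionaries, not beqom code; the function and column names are illustrative assumptions):

```python
def complete_load(target, staging):
    """Complete load: truncate the target, then insert everything."""
    target.clear()
    target.extend(staging)

def full_delta(target, staging, key="employee_code"):
    """Full delta: replace the full history of every employee present
    in the staging data; rows for other employees are left untouched."""
    affected = {row[key] for row in staging}
    target[:] = [row for row in target if row[key] not in affected]
    target.extend(staging)

def upsert(target, staging, key="code"):
    """Upsert (referential tables): update rows whose key already
    exists in the target, insert the rest."""
    index = {row[key]: i for i, row in enumerate(target)}
    for row in staging:
        if row[key] in index:
            target[index[row[key]]] = row
        else:
            target.append(row)
```

The real implementation runs as SQL stored procedures against the staging and target tables; the sketch only illustrates which rows each load type touches.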
Configuring the "Table Field Setting" Grid
In the Table Field Setting grid, you need to configure the settings for the fields of the target tables for which you want to run an ETL process. To do so, proceed as follows:
Make sure you are in the (Undefined variable: CompoVariables_WA.Data) of the TCM Web Application interface, and then navigate to Data Integration > Data Integration Settings > Table Field Setting.
In the grid, locate the table whose fields you want to configure and then click the corresponding row. A child grid that contains the fields corresponding to the selected table is opened.
In the child grid, configure the parameters of the fields as required:
Check the Is Mandatory flag for all the fields that you want to mark as mandatory to be filled during the data load. This flag is the basis for performing the mandatory column validation when the data is loaded from the staging tables into the target tables as part of the ETL process. For more information about ETL validation, refer to Default ETL Validation Mechanisms and Potential Errors.
Check the Is business key flag for all the fields that you want to treat as unique during the data load. This flag is the basis for the unique column data validation in referential tables when data is loaded from the staging tables into the target tables as part of the ETL process. For more information about ETL validation, refer to Default ETL Validation Mechanisms and Potential Errors.
Click Save in the lower-right corner of the grid panel. The changes to the field configuration are saved.
Repeat the operation for all the tables that you want to configure.
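To illustrate what the Is Mandatory and Is business key flags drive downstream, the sketch below (plain Python, not the actual stored procedures; names are illustrative) shows how rows failing either check would be diverted to an error list, mirroring how the ETL moves them to the error tables:

```python
def validate_rows(rows, mandatory, business_keys):
    """Split rows into (valid, errors) based on two checks:
    - mandatory columns must not be empty;
    - the combination of business key columns must be unique."""
    valid, errors, seen = [], [], set()
    for row in rows:
        missing = [c for c in mandatory if row.get(c) in (None, "")]
        if missing:
            errors.append((row, f"Column {missing[0]} is mandatory"))
            continue
        key = tuple(row.get(c) for c in business_keys)
        if key in seen:
            errors.append((row, f"Duplicate business key {key}"))
            continue
        seen.add(key)
        valid.append(row)
    return valid, errors
```

These two checks correspond to the mandatory column validation and unique column validation described in Default ETL Validation Mechanisms and Potential Errors.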
Importing Data into a Staging Grid
To import data into a staging grid, proceed as follows:
Open the staging data grid into which you want to import data. Staging grids are located under (Undefined variable: CompoVariables_WA_Alternate."Data") > Data Integration > Staging > Staging Ref/Staging Payee.
Click Export Template in the lower-left corner of the grid panel. The system starts generating the template, and the status of the process is displayed in a progress bar located in the upper-right corner of the application window. Once the template has been generated, a download link is displayed in the file download box.
Click the download link in the file download box, as illustrated in the following figure:

The file is downloaded locally on your computer.
Open the downloaded template in Excel (or any other spreadsheet editor) and insert your data.
Go back to the data grid that you want to populate in the beqom application, and then click Import in the lower-left corner of the grid panel. A file browser window is opened.
Locate and select the template that you have populated in step 4. The import process is started. Depending on the quantity of data in the file, the import process can take some time.
Once the import is finished, you can click Show Details to see more information about the import or click See Results to refresh the grid.
Triggering the ETL Process using the Task Launcher
To trigger the ETL process, which transfers data from the staging tables into the target tables and applies a range of data validation mechanisms, you need to run the ETL Task Launcher. Once you have set up the task launcher, the ETL process is executed by Rapide based on the parameters configured in the Table Setting grid (see Configuring the "Table Setting" Grid) and the Table Field Setting grid (see Configuring the "Table Field Setting" Grid).
Each time an ETL process is run using the ETL Launcher, the system automatically performs the following steps:
Delete archived data (if the corresponding setting is enabled) according to the retention period defined in the Global Setting ETL grid for the Staging data archiving parameter (for more information, see Configuring Global Settings and Configuring Global ETL Settings).
Archive data based on the configuration of the Staging data archiving setting (for more information, see Configuring Global Settings).
Call the pre-load stored procedures, if any.
Call validation mechanisms before the data load:
Call the mandatory fields validation stored procedure
Call the data type validation stored procedure
Call the referenced data and foreign key validation stored procedure
Call the unique column validation stored procedure
Call the py_Payee employee validation stored procedure
Call the overlapping dates validation stored procedure
For a detailed description of the validation mechanisms run when executing an ETL process, see Default ETL Validation Mechanisms and Potential Errors.
Clear the lines identified as errors in the corresponding tables in the core_staging schema.
Load data into the target tables:
Load data into the target tables based on the data load type selected in the Table Setting grid.
Populate the py_Payee table based on the core.payee_personal_data table, and populate the dimension tables with the payees (Dim_Structure, Dim_Department, Dim_Territory).
Run post-load stored procedures, if any.
Update id_payee in the payee tables.
Truncate the staging tables after the load.
Call a refresh of the Payee Situation grid.
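The fixed sequence above can be sketched as an ordered step list (illustrative Python; the step names are paraphrases of the stages described in this article, not the real identifiers of beqom's stored procedures):

```python
# Simplified sketch of the sequence the ETL Launcher runs. In the
# actual product, each step corresponds to stored procedures executed
# against the staging and target tables.
ETL_STEPS = [
    "delete_expired_archives",        # honors the archiving retention period
    "archive_staging_data",           # if Staging data archiving is enabled
    "pre_load_procedures",
    "validate_mandatory_fields",
    "validate_data_types",
    "validate_referenced_data",
    "validate_unique_columns",
    "validate_py_payee_employees",
    "validate_overlapping_dates",
    "clear_error_rows",               # core_staging error cleanup
    "load_target_tables",             # per the configured data load type
    "populate_py_payee_and_dimensions",
    "post_load_procedures",
    "update_id_payee",
    "truncate_staging_tables",
    "refresh_payee_situation",
]

def run_etl(handlers):
    """Run every step in order; steps without a handler are skipped."""
    for step in ETL_STEPS:
        handler = handlers.get(step)
        if handler:
            handler()
```

The key point the sketch captures is the ordering: archiving and validation always happen before the load, and staging truncation always happens after it.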
To run an ETL process and move data from the staging tables into the target tables, proceed as follows:
Make sure that you are in the (Undefined variable: CompoVariables_WA.Data) of the beqom TCM Web App interface and then navigate to Data Integration > Task Launcher - ETL.
In the Task Launcher - ETL grid, click the Run ETL row. A child grid called Action is opened.
In the Run ETL row of the child grid, check the Requested box.
Click Save in the lower-right corner of the application window. The system will run the ETL process.
Default ETL Validation Mechanisms and Potential Errors
As part of the standard Rapide ETL process, a number of validation mechanisms are provided. These mechanisms validate whether the data loaded into the target tables meets specific, predefined requirements. Should any record in the staging tables fail validation, that record is not loaded into the target tables. Instead, it is moved to the corresponding error table with an error message corresponding to the failed validation.
Detailed information about the errors that occurred during a given ETL process is provided in the data load summary. For more information, refer to Viewing the Data Load Summary.
The following table details the validation mechanisms run by the system when an ETL process is executed:
| ETL Validation Name | Description | Error Message |
|---|---|---|
| Data Type Validation | This validation mechanism checks whether the loaded data meets the requirements regarding data type, i.e. whether each row from the staging tables can be loaded into the target tables (with the target data type) without a SQL conversion error. It prevents the loading of all values that cannot be successfully converted into the corresponding data types in the target tables. This mechanism also checks that provided dates can be successfully converted into the date format selected in the Global Setting ETL (see Configuring Global ETL Settings). | Data Type Conversion Error for [Column Name].<br>Incorrect Date Format (not aligned with Global Setting) or value provided is not DATE type for [Column Name]. |
| Referenced Data Validation | This validation mechanism checks that the data marked as referenced already exists in the related referential table(s). If the data present in the staging tables and marked as referenced does not exist in the corresponding referential table, then all records containing this data are identified by the validation mechanism and are not loaded into the target tables. This mechanism is closely linked to the configuration of the Table Field Setting grid, which contains a standard, predefined setup to link specific table columns to referential tables (see Configuring the "Table Field Setting" Grid). | Value for [Column Name] does not exist in the related referential table. |
| Mandatory Column Validation | This validation mechanism checks that the columns marked as mandatory for the target table in the Table Field Setting grid (see Configuring the "Table Field Setting" Grid) contain data in the corresponding staging tables. If no data exists in the staging table for a field marked as mandatory, then the record is not loaded into the target table. | Column [Column Name] is mandatory and none of the values can be NULL. |
| Unique Column Validation | This validation mechanism checks that the combination of values in the columns marked as business keys in the Table Field Setting grid for the target table is unique (see Configuring the "Table Field Setting" Grid). If duplicated values exist in the staging data for fields marked as business keys, then the records are not loaded into the target tables. | Unique key validations for business key columns ([Column Name]). |
| Overlapping Dates Validation | This validation mechanism checks that all the requirements regarding the start and end dates are met in the staging tables before loading into the target tables. An error is returned if any of these requirements is not met. | |
| py_Payee Employee Validation | This validation mechanism checks that the unique code for each payee in the loaded data exists in the Payee Personal Data and py_Payee tables. This is specifically used when an employee is referenced from another employee (in the case of a direct report/manager relationship, for instance). If the staging data contains records for employees with codes that do not exist in the Payee Personal Data table, then those records are not loaded into the target table. | Employee does not exist in Payee Personal Data. |
| Code Parent Validation | This validation mechanism checks whether, in the referential tables, there are records for which the code parent is not equal to the code. In addition, it checks that the code parent exists in either the target referential table or the staging referential table (and that it passed prior validations). | |

ETL validation checks
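As an illustration of the overlapping dates validation, here is a minimal sketch (plain Python, not the actual stored procedure; the column names are assumptions) that flags rows whose date range intersects an earlier range for the same employee code:

```python
from datetime import date

def find_overlaps(rows):
    """Return rows whose [start, end] range overlaps a previously
    accepted range for the same employee code."""
    seen = {}   # employee code -> list of accepted (start, end) ranges
    bad = []
    for row in rows:
        ranges = seen.setdefault(row["code"], [])
        # Two closed ranges overlap when each starts before the other ends.
        if any(row["start"] <= e and row["end"] >= s for s, e in ranges):
            bad.append(row)
        else:
            ranges.append((row["start"], row["end"]))
    return bad
```

Rows flagged this way would, in the real ETL, be moved to the corresponding error table instead of being loaded.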
Viewing the Data Load Summary
All the details regarding the data load process are available in the Data Load Summary audit grid. To view the results of the data load, proceed as follows:
Make sure that you are in the (Undefined variable: CompoVariables_WA.Data) of the beqom TCM Web App interface and then navigate to Data Integration > Data Load Summary.
Locate in the grid the recently executed ETL process whose details you want to view. You can use the quick filters to identify the process that you have triggered more quickly, for instance by filtering the Insert By column on your own name. To help you identify the relevant row, keep in mind that ETL processes triggered manually are identified as "[MANUALLY TRIGGERED]" in the Name Data Load column (whereas ETL processes triggered by an ADF upload are identified by "[ADF TRIGGERED]" and ETL processes triggered via API are identified by "[API TRIGGERED]").
Click the row corresponding to the execution that you want to view. A child grid called Data Load Detail is opened. This grid contains several detail columns, as described in the table below, and each row corresponds to an action, broken down by table.
If the Execution Result column indicates Executed with error, open the corresponding error child grid using the drop-down list located above the child grid panel. There are a number of error grids:
Payee Personal Data Error: lists errors that occurred for the Payee Personal Data table during the execution of the ETL process.
Payee Job Assignment Error: lists errors that occurred for the Payee Job Assignment table during the execution of the ETL process.
Payee Org Assignment Error: lists errors that occurred for the Payee Org Assignment table during the execution of the ETL process.
Payee Compensation Error: lists errors that occurred for the Payee Compensation table during the execution of the ETL process.
Payee Absence Error: lists errors that occurred for the Payee Absence table during the execution of the ETL process.
Payee Address Error: lists errors that occurred for the Payee Address table during the execution of the ETL process.
Payee Rating Error: lists errors that occurred for the Payee Rating table during the execution of the ETL process.
Payee Tables Error: lists errors that occurred in all payee tables during the execution of the ETL process.
Referential Tables Error: lists errors that occurred in referential tables during the execution of the ETL process.