Loading Data into the Rapide Data Model Using Azure Data Factory
Loading Excel files manually into the beqom Rapide template, according to the procedure described in Loading Data Manually into the beqom Rapide Template, is a solution well suited to small companies. However, for larger solutions and/or cases where HR data changes frequently, a more automated approach is preferable. This document describes the automated data loading solution using Azure Data Factory that you can use to feed business data to the Rapide model.
Assumptions
The procedure outlined below is based on the following assumptions:
The source of the data is an SFTP server; any other data source requires adjustments to the ADF pipeline and linked services.
By default, the PGP encryption/decryption is provided by the pipeline.
A part of the setup must be performed by the beqom Platform Operations team (for more information, see Setting up the Infrastructure).
The configuration of files and file fields is performed in data grids and ADF pipelines dynamically retrieve this configuration.
Setting up the Infrastructure
In order to make sure that the ADF load and ETL processes operate as expected, the following structure needs to be in place:
Azure Data Factory
Tenant database
SFTP server
Blob storage with two default containers. beqom recommends that these containers be called "working" and "archive", since this is the default naming convention used in the Rapide template ETL process
PGP function. The beqom Platform Operations team must provide the URL and a JSON code sample. Once this is done, the proper access rights need to be set up to guarantee an adequate use of the function
Key vault secrets used directly in the pipeline, as outlined below:
Tenant - PGP - Storage - ConnectionString
Tenant - PGP - Private - Key
Tenant - PGP - Public - Key
Tenant - PGP - Passphrase
Tenant - Storage - Connection
Tenant - Db - ConnectionString
SFTP - Username
SFTP - Password
Once this structure is in place, the beqom Platform Operations team must configure the linked services in Azure Data Factory in order to establish the proper links between the various parts listed above. The beqom Rapide pipelines use certain standard linked services created based on KeyVault secrets. Consequently, the creation of those secrets in the customer KeyVault must be requested using the same naming convention as the one used in Rapide, and then these secrets must be linked to the relevant service in Azure Data Factory. The following linked services are required to use the Rapide pipelines (an illustrative definition follows the list):
SFTP: link to SFTP-URL, SFTP-Username and SFTP-Password
Blob: link to the Tenant-Storage-Connection secret
Database: link to the Tenant-Db-ConnectionString secret
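For reference, a linked service that reads its connection string from the customer KeyVault looks roughly like the following sketch, which mirrors the JSON shown in the ADF Studio code view. The names LS_KeyVault and LS_Blob are hypothetical placeholders; the secret name Tenant-Storage-Connection comes from the list above, and the exact definitions used by the Rapide pipelines may differ.

```python
import json

# Hypothetical blob linked service definition. "LS_KeyVault" and "LS_Blob" are
# placeholder names; the secret name must match the one created in the customer KeyVault.
blob_linked_service = {
    "name": "LS_Blob",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "LS_KeyVault",        # Key Vault linked service
                    "type": "LinkedServiceReference",
                },
                "secretName": "Tenant-Storage-Connection",  # secret listed above
            }
        },
    },
}

print(json.dumps(blob_linked_service, indent=2))
```

The SFTP and database linked services follow the same pattern, each pointing at the corresponding secrets (SFTP-Username and SFTP-Password, and Tenant-Db-ConnectionString).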
Configuring Global Settings
Once the infrastructure has been put in place, you can proceed to the configuration of the ADF load within the beqom TCM application. To do so, proceed as follows:
Open the Web Application interface of the TCM application, and then open the Data section.
Navigate to Global Settings > Global Setting.
In the Global Setting grid, configure the parameters as follows:
Blob Used: determines whether a blob should be used for ETL. In the case of an ADF load, a blob is always used and this parameter cannot be set to No.
Failure email notification: determines whether an email should be sent in case of a failure of the data load. When this setting is enabled, in the event of a data load failure, an email is sent to the recipients specified in ETL parameters, with the email subject indicated in the parameter. Please note that for this to work properly, the email scheduler needs to be created and set up according to your preferences. For more information about the email scheduler, refer to Setting up the Email Sender for the Rapide Data Model.
File archiving: determines whether files should be archived in the storage blob. When this parameter is enabled (i.e. set to Yes), loaded files will be archived in the blob for the number of days specified in the configuration of the related parameter.
Format of the date: date format used when loading data into the staging tables. Possible values are yyyymmdd, mm/dd/yyyy, dd/mm/yyyy and yyyy/mm/dd. Please note that the separator used in the drop-down list is /, but - and . can also be used. Dates provided in ISO format (yyyymmdd) are properly converted according to the selected format (an illustrative parsing sketch follows this procedure).
Key Vault Used: determines whether the key vault is used in ADF. This parameter is always set to Yes and cannot be modified.
Logging data errors: determines whether data errors should be logged. When this setting is enabled, data loaded from the staging tables that was moved into the error tables following ETL validation is retained in the database for the number of days indicated in the configuration of the related parameter. Please note that this setting is enabled by default and cannot be disabled.
PGP decryption: determines whether PGP decryption is used for at least one file.
PGP encryption: determines whether PGP encryption is used for at least one file.
SFTP Used: determines whether SFTP is used as a file source. When using ADF for file load, SFTP is always enabled and the setting cannot be disabled.
Staging data archiving: determines whether data should be archived in the database. When this setting is enabled, data loaded from the staging tables will be archived in the database for the number of days indicated in the configuration of the related parameter.
Success email notification: determines whether an email should be sent upon a successful data load. When this setting is enabled, after a successful load, an email is sent to the recipients specified in ETL parameters, with the email subject indicated in the parameter. Please note that for this to work properly, the email scheduler needs to be created and set up according to your preferences. For more information about the email scheduler, refer to Setting up the Email Sender for the Rapide Data Model.
Click Save in the lower-right corner of the grid.
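To make the behavior of the Format of the date setting concrete, the following minimal sketch shows how a loaded date value could be interpreted. It is illustrative only (the helper name and logic are assumptions) and is not the code actually run by the Rapide ETL.

```python
from datetime import datetime

# Hypothetical mapping from the "Format of the date" setting to Python format strings.
FORMAT_MAP = {
    "yyyymmdd": "%Y%m%d",
    "mm/dd/yyyy": "%m/%d/%Y",
    "dd/mm/yyyy": "%d/%m/%Y",
    "yyyy/mm/dd": "%Y/%m/%d",
}

def parse_loaded_date(value: str, date_format_setting: str) -> datetime:
    """Parse a date string from a loaded file according to the global setting."""
    value = value.strip()
    # ISO values (yyyymmdd) are accepted regardless of the selected format,
    # mirroring the behavior described for the "Format of the date" setting.
    if len(value) == 8 and value.isdigit():
        return datetime.strptime(value, "%Y%m%d")
    # The drop-down list uses "/", but "-" and "." are also accepted as separators.
    fmt = FORMAT_MAP[date_format_setting]
    for sep in ("-", "."):
        if sep in value:
            return datetime.strptime(value, fmt.replace("/", sep))
    return datetime.strptime(value, fmt)

print(parse_loaded_date("31/12/2024", "dd/mm/yyyy"))  # 2024-12-31 00:00:00
print(parse_loaded_date("20241231", "dd/mm/yyyy"))    # ISO input is still accepted
```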
Configuring Global ETL Settings
After having configured the global parameters, depending on the parameters that you have enabled, you then need to perform further configuration steps specific to the ETL process in the Global Setting ETL grid. To do this, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > Data Integration Settings > Global Setting ETL. The grid contains all the parameters enabled in the Global Settings grid.
In the grid, locate and select the parameters that you want to configure. A child grid opens upon clicking each of the rows.
Configure the required parameters as follows (depending on the configuration made in the Global Settings grid):
Logging Data Errors: enter in the Value Int column the retention period of data error logs in the database. The minimum value is 5 days.
Blob Used: in the Blob archive folder container row, enter in the Value Text field the name of the blob container used to store archive files. In the Blob working folder container row, enter in the Value Text field the name of the blob container used as a working space. beqom recommends that you use the names "archive" and "working" respectively, since these are aligned with the naming convention used within the Rapide template.
SFTP Used: in the SFTP folder name for outgoing files row, enter in the Value Text field the name of the SFTP folder in which outgoing files are uploaded. In the SFTP folder name for incoming files row, enter in the Value Text field the name of the folder in which incoming files are uploaded. Both names should reflect the names of the folders created on the SFTP server for outgoing and incoming files.
Key Vault Used: enter in the Value Text field the Key Vault URL variable value of the main pipeline.
PGP encryption: in the PGP Encryption URL row, enter in the Value Text field the URL provided by the Platform Operations team in the ticket related to the setup of the PGP function. In the PGP File Extension Encryption row, enter in the Value Text field the encryption file extension used by the PGP function. Options are ".gpg" or ".pgp".
PGP decryption: in the PGP Decryption URL row, enter in the Value Text field the URL provided by the Platform Operations team in the ticket related to the setup of the PGP function. In the PGP File Extension Decryption row, enter in the Value Text field the decryption file extension used by the PGP function. Options are ".gpg" or ".pgp".
Staging data archiving: if you have enabled this parameter, enter in the Value Int column the retention period of archived data in the database. Please note that the maximum number of days is 30. If required, enter more information in the Value Text field.
File archiving: in the Archive Files - Number of Days row, enter in the Value Int field the duration, in days, for which archived files should be kept in the blob storage. The maximum value is 30 days.
Failure email notification: if you have enabled the failure email notification setting in the Global Settings, you then need to specify the recipient(s) and the subject of the email. In the Failure Email Subject row, enter in the Value Text field the subject of the email. In the Failure Email Address row, enter in the Value Text the address(es) of the recipients. Use commas to separate values if you are entering multiple recipients.
Success email notification: if you have enabled the success email notification setting in the Global Settings, you then need to specify the recipient(s) and the subject of the email. In the Success Email Subject row, enter in the Value Text field the subject of the email. In the Success Email Address row, enter in the Value Text the address(es) of the recipients. Use commas to separate values if you are entering multiple recipients.
Click Save in the lower-right corner of the grid panel. Your ETL settings are now saved.
Configuring the "Table Setting" Grid
The Table Setting grid contains the full list of payee tables and referential tables for which you must specify the method according to which data is loaded into the staging tables, as well as the proper data load type (i.e. the method according to which you want to move data from the staging tables into the target tables).
To configure those settings, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > Data Integration Settings > Table Setting.
In the ETL Type column, select the Files Load option (the same option used for the manual data load).
Locate in the grid the table(s) that you want to configure.
In the Data Load Type column, select the type of data load that you want to use (an illustrative SQL sketch follows this procedure):
Complete load: when this option is selected, files are fully loaded every time, using a truncate and insert method. This option is only available for payee tables.
Full delta: when this option is selected, the full history of the employee(s) affected by the changes is loaded. Use this load type to reload data for given employees. The requirement behind this load type is that the loaded file can include data for certain employees only, but they should contain all historical rows for the employees whose data is supposed to be updated. In this case, the data for all unique employees in the loaded file will be replaced in the destination table. This option is only available for payee tables.
Upsert: when this option is selected, the full history of the employee(s) affected by the changes is loaded. This option is available only for referential tables.
Check the Is Enabled? flag for the rows corresponding to the table(s) to be configured, in order to indicate whether the table is used or not.
Check the Is Load Enabled? flag for the rows corresponding to the table(s) to be configured, in order to indicate whether the file should be loaded.
Check that the load sequence is properly set up in the Load Sequence column.
Click Save in the lower-right corner of the grid. The tables are now configured.
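As an illustration of the difference between the load types, the following sketch shows the kind of SQL each option corresponds to. The table names stg.payee_compensation and dbo.payee_compensation and the employee_code key are hypothetical; the actual statements are generated by the Rapide ETL and may differ.

```python
# Hypothetical staging and target tables, used for illustration only.
COMPLETE_LOAD = """
-- Complete load: the target table is fully reloaded (truncate and insert).
TRUNCATE TABLE dbo.payee_compensation;
INSERT INTO dbo.payee_compensation
SELECT * FROM stg.payee_compensation;
"""

FULL_DELTA = """
-- Full delta: only the employees present in the loaded file are affected,
-- but their full history is replaced in the destination table.
DELETE t
FROM dbo.payee_compensation AS t
WHERE t.employee_code IN (SELECT DISTINCT employee_code FROM stg.payee_compensation);

INSERT INTO dbo.payee_compensation
SELECT * FROM stg.payee_compensation;
"""

print(COMPLETE_LOAD)
print(FULL_DELTA)
```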
Configuring the "Table Field Setting" Grid
In the Table Field Setting grid, you need to configure the settings for the fields of the target tables for which you want to run an ETL process. To do so, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > Data Integration Settings > Table Field Setting.
In the grid, locate the table whose fields you want to configure and then click the corresponding row. A child grid that contains the fields corresponding to the selected table is opened.
In the child grid, configure the parameters of the fields as required:
Check the Is Mandatory flag for all the fields that you want to mark as mandatory to be filled during the data load. This flag is the basis for performing the mandatory column validation when the data is loaded from the staging tables into the target tables as part of the ETL process.
Check the Is business key flag for all the fields that you want to treat as unique during the data load. This flag is the basis for the unique column data validation in referential tables when data is loaded from the staging tables into the target tables as part of the ETL process.
Click Save in the lower-right corner of the grid panel. The changes to the field configuration are saved.
Repeat the operation for all the tables that you want to configure.
Configuring the ADF Standard Data Load Mapping
The next step in the preparation of a data load into the Rapide template via ADF consists in configuring files and file fields. This operation is partly done in data grids in the beqom TCM application; the ADF pipelines then retrieve those settings dynamically.
Configuring the "File Table Mapping" Grid
To configure the data load mapping, you first need to specify the mapping between the loaded files and the standard staging tables. To do so, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > ADF Config > File Table Mapping. This table lists the staging tables and their mapping to uploaded files.
Define, for each of the relevant tables listed in the grid, the mapping settings as described in the table below.
Click Save in the lower-right corner of the application window.
| Column Name | Description | Available Options |
|---|---|---|
| Table Name | Name of the table impacted by the configuration. | - |
| Mask File | Mask file definition; indicates the file name. | In this column, you can use the * character at the beginning and at the end of the string to define a pattern indicating the file name, rather than specifying the exact file name. Example: *ref-absence-reason* (an illustrative matching sketch follows this table). |
| Description File | Description of the file. | Optional; free text input. |
| File Extension | Extension of the file to be uploaded. | |
| Column Delimiter | Column delimiter used in the file to be uploaded. | |
| Row Delimiter | Row delimiter used in the file to be uploaded. | |
| Text Qualifier | Text qualifier used in the file to be uploaded. | |
| Is Pgp Used | Flag indicating whether the file is encrypted using PGP. | Enabled/Disabled |
| Load Sequence | Position of the file in the file load sequence. | Free integer input |
| Is Used | Flag indicating whether the file is used or not. | Please note that the state of this flag cannot be changed from this grid; it is automatically populated based on the settings specified in the Table Setting grid, and two conditions defined there must be met in order for this flag to be enabled. |
ADF mapping settings
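The following minimal sketch illustrates how a mask such as *ref-absence-reason* selects incoming file names. The file names are made up, and the actual matching is performed by the ADF pipeline rather than by this code.

```python
import fnmatch

# Hypothetical mask from the File Table Mapping grid and sample incoming file names.
mask = "*ref-absence-reason*"

incoming_files = [
    "2024-06-01_ref-absence-reason.csv",
    "ref-absence-reason_full.csv",
    "payee-compensation.csv",
]

matching = [name for name in incoming_files if fnmatch.fnmatch(name, mask)]
print(matching)  # only the two ref-absence-reason files match the mask
```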
Configuring the "File Table Field Mapping" Grid
Once the File Table Mapping grid has been configured for ADF load, you then need to configure the File Table Field Mapping grid accordingly. The purpose of this step is to define the file field configuration and mapping with the fields of the standard staging tables.
To perform this configuration, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > ADF Config > File Table Field Mapping. This table lists the tables for which the Is Used flag was enabled in the previous mapping step (for more information, see Configuring the "File Table Mapping" Grid).
Click the row corresponding to the table that you want to configure. A child grid containing the fields included in that table is opened.
Define the settings for each row of the child grid, which correspond to the fields of the selected table, as follows:
In the File Field column, enter the name of the field from the file to be uploaded.
In the Description File Field column, you can optionally enter a description for the field.
Enable the flag in the Is Used? column to indicate whether or not the field is present in the data load. If the flag is disabled, the field will be excluded from the data load.
Click Save in the lower-right corner of the application window.
Configuring a Custom Data Load Mapping
The custom data load mechanism handles the loading of data from the staging tables to the target tables for any custom table (i.e. tables that are not part of the Rapide data model or that are not available in existing customizations; for more information about the customization of the Rapide data model, refer to Extending the beqom Rapide Data Model).
This mechanism is called by a standard Rapide stored procedure entitled [etl].[call_data_load], which includes a placeholder in which dedicated code can be included in order to load/edit data for non-Rapide tables.
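As a purely illustrative example, the dedicated code placed in that placeholder typically moves validated rows from a custom staging table to its custom target table. The table and column names below (stg.custom_bonus_plan, dbo.custom_bonus_plan, employee_code, plan_code, amount) are hypothetical; the real code is written in T-SQL inside the stored procedure and must match your custom tables.

```python
# Hypothetical T-SQL of the kind placed in the [etl].[call_data_load] placeholder,
# shown here as a Python string purely for illustration.
CUSTOM_TABLE_LOAD = """
-- Move validated rows from a custom staging table to its custom target table.
INSERT INTO dbo.custom_bonus_plan (employee_code, plan_code, amount)
SELECT employee_code, plan_code, amount
FROM stg.custom_bonus_plan;
"""

print(CUSTOM_TABLE_LOAD)
```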
The procedure to configure the file and file field mapping is very similar to the one for standard tables and is performed in the Web Application interface, via data grids.
Configuring the "Custom File Table Mapping" Grid
To configure the mapping between loaded files and custom staging tables, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > ADF Custom Config > Custom File Table Mapping.
Click the Add a Row button, located in the lower-left corner of the grid panel. A new row is added to the grid.
Fill in the parameters and the mapping of the custom staging table as described in the table below.
Click Save in the lower-right corner of the application window.
Repeat steps 2-4 for each custom file table that you want to configure.
The following table details the fields available in the Custom File Table Mapping grid and the information that you need to input in those fields:
| Field Name | Description | Available Options |
|---|---|---|
| Table Staging Schema Name | Name of the database schema corresponding to the custom staging table | Free text input |
| Table Staging Name | Name of the related custom staging table | Free text input |
| Name File | Name of the file to be uploaded | Free text input |
| Mask File | Definition of the file mask which indicates the file name | In this column, you can use the * character at the beginning and at the end of the string to define the pattern indicating the file name, rather than specifying the exact file name. |
| Description File | Description of the file to be uploaded. | Free text input; optional |
| File Extension | Extension used in the file to be uploaded. | |
| Column Delimiter | Column delimiter used in the file to be uploaded. | |
| Row Delimiter | Row delimiter used in the file to be uploaded. | |
| Text Qualifier | Text qualifier used in the file to be uploaded. | |
| Is Load Enabled? | Flag indicating whether the file should be uploaded. | Enabled/disabled |
| Is Pgp Used | Flag indicating whether PGP encryption/decryption is used in the file to be uploaded. | Enabled/disabled |
| Load sequence | Order of the file to be uploaded in the data upload sequence. | Free integer input |
Custom file load mapping
Configuring the "Custom Field Table Field Mapping" Grid
Once you have set up the correct parameters for custom files in the Custom File Table Mapping grid, you then need to configure the Custom File Table Field Mapping grid accordingly.
To do so, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > ADF Custom Config > Custom File Table Field Mapping. The parent grid opens with the list of files that have been mapped to custom staging tables (see Configuring the "Custom File Table Mapping" Grid).
Click the row corresponding to the file whose fields you want to configure. A child grid is opened.
Click the Add a Row button, located in the lower-left corner of the child grid panel. A row is added to the child grid to enable you to configure one field of the custom table.
Configure the parameters of the field as follows:
In the File Field column, enter the name of the field.
In the Table Field Staging column, enter the name of the corresponding field in the custom staging table.
In the Description File Field column, you can optionally enter a description for the custom field.
In the Field Order column, enter the order of the field in the field load sequence.
Check the Is Used? flag in order to indicate whether the field is used or not. If the flag is not checked, then the field will be excluded from the upload.
Click Save in the lower-right corner of the application window.
Repeat steps 3-5 for each custom file field that you want to map.
Running the ADF Pipeline
Once all the ADF and ETL configuration steps have been performed, you can trigger an execution of the ADF pipelines.
In Azure Data Factory, you have three options to run pipelines:
You can trigger test runs of a pipeline without publishing your changes to the service. This is the debug mode.
You can trigger a pipeline on demand.
You can schedule a pipeline execution.
Running the Pipeline in Debug Mode
To run the pipeline in debug mode, proceed as follows:
Open ADF and open the proper tenant.
Open the Factory Resources component.
Expand the Pipelines section, and then select the "Main" pipeline.
Click the Debug button, located in the toolbar above the diagram. The pipeline is run in debug mode.
Running the Pipeline Immediately
To trigger an immediate execution of the pipeline, proceed as follows (a programmatic alternative is sketched after the procedure):
Open ADF and open the proper tenant.
Open the Factory Resources component.
Expand the Pipelines section, and then select the "Main" pipeline.
Click the Add Trigger button, located in the toolbar above the diagram. A contextual menu is opened.
Click Trigger Now in the contextual menu.
The execution of the pipeline is started.
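If you prefer to trigger the run from a script rather than from ADF Studio, a minimal sketch using the azure-mgmt-datafactory Python SDK is shown below. The subscription, resource group and factory names are placeholders; "Main" is the pipeline selected in the steps above.

```python
# Minimal sketch: trigger the "Main" pipeline programmatically (placeholders must be
# replaced with your own Azure identifiers and require appropriate access rights).
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
factory_name = "<data-factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Equivalent to Add Trigger > Trigger Now on the "Main" pipeline.
run = client.pipelines.create_run(resource_group, factory_name, "Main")
print("Pipeline run started, run id:", run.run_id)
```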
Scheduling the Execution of the Pipeline
You can schedule the execution of the pipeline using a calendar-based trigger or an event trigger. To schedule the execution of the pipeline, proceed as described below; a programmatic equivalent is sketched after the procedure.
Open ADF and open the proper tenant.
Open the Factory Resources component.
Expand the Pipelines section, and then select the "Main" pipeline.
Click the Add Trigger button, located in the toolbar above the diagram. A contextual menu is opened.
Click New/Edit in the contextual menu. The trigger configuration window is opened.
In the New Trigger window, specify the parameters of the trigger:
In the Name field, specify the name of the trigger.
In the Description field, optionally enter a description for the trigger.
In the Type drop-down list, select the type of trigger that you want to create. For the purpose of this procedure, select Schedule.
In the Start Date field, select the effective date for the trigger.
In the Time Zone field, specify the time zone with which you want to coordinate the trigger. UTC is used by default.
In the Recurrence section, define the frequency of execution of the pipeline.
If required, check the Specify an End Date option, and then select the end date.
Click the Yes radio button for the Activated option.
Click OK in the lower-left corner of the trigger creation window. The trigger is now created with the specified parameters.
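The same schedule trigger can be created with the azure-mgmt-datafactory Python SDK, as in the minimal sketch below. The trigger name DailyMainTrigger, the resource identifiers and the recurrence values are placeholder assumptions; the result is equivalent to what the New Trigger window produces.

```python
# Minimal sketch: create and activate a daily schedule trigger for the "Main" pipeline.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
resource_group, factory_name = "<resource-group>", "<data-factory-name>"

recurrence = ScheduleTriggerRecurrence(
    frequency="Day",                                        # run once a day
    interval=1,
    start_time=datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc),
    time_zone="UTC",                                        # UTC is the default time zone
)
trigger = TriggerResource(
    properties=ScheduleTrigger(
        description="Daily run of the Main pipeline",
        recurrence=recurrence,
        pipelines=[
            TriggerPipelineReference(pipeline_reference=PipelineReference(reference_name="Main"))
        ],
    )
)

client.triggers.create_or_update(resource_group, factory_name, "DailyMainTrigger", trigger)
# Recent SDK versions expose the long-running start operation as begin_start.
client.triggers.begin_start(resource_group, factory_name, "DailyMainTrigger").result()
```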
Viewing the Details of a Pipeline Execution
To monitor the execution(s) of the pipelines, proceed as follows (a programmatic status check is sketched after this procedure):
Open ADF and open the proper tenant.
Open the Monitor component in the left navigation bar. The monitoring interface is opened. By default, the monitoring interface lists the triggered pipeline runs in the selected time period. You can however change the time range, and filter by status, pipeline name or annotation.
Click the name of the pipeline in the list. The list of executions for that pipeline is displayed. Click the icon located at the right of an execution row to view run-specific information, such as the JSON input and output.
In the event that the run activity failed, click the icon in the Status column to view the detailed error message.
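The same run information can be retrieved programmatically. The following minimal sketch uses the azure-mgmt-datafactory Python SDK to check the status of a single run; all identifiers are placeholders, and the run id is the value returned by create_run (or visible in the Monitor view).

```python
# Minimal sketch: check the status of a pipeline run by its run id.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = client.pipeline_runs.get("<resource-group>", "<data-factory-name>", "<run-id>")
print(run.pipeline_name, run.status)   # e.g. Main Succeeded / Failed / InProgress
if run.status == "Failed":
    print(run.message)                 # detailed error message, as shown in the Status column
```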
Viewing the ADF Pipeline Execution Summary
Contrary to the manual data load process, in which you need to explicitly request the execution of the ETL process, when the ADF pipeline is run the ETL process is executed directly as the last step of the pipeline, with all the same validation checks as when you request it manually.
All details regarding the ADF pipeline run are available in the ADF Pipeline Execution Summary grid. In this grid, each data load is divided into three parts:
Standard Files Load: corresponds to the execution of the standard ADF data load, for standard tables which are part of the beqom Rapide data model, either out of the box or existing customizations.
Custom Files Load: corresponds to the execution of the custom ADF data load for custom tables.
Post ETL: corresponds to the transfer of the data from the staging tables to the target (production) tables.
For each operation, various data is available such as data load name, pipeline name, start and end time of the execution, execution time, execution status, execution result and number of loaded files. In addition, more details are available in a child grid upon clicking a row in the parent grid.
To view those details, proceed as follows:
Make sure you are in the Data section of the TCM Web Application interface, and then navigate to Data Integration > ADF Pipeline Execution Summary. The execution summary parent grid is opened.
Click the row whose details you want to see. A child grid called Detail is displayed. This grid contains details about the execution of the ETL process, broken down by table. In addition, information about the refresh of the Payee Situation table (which is performed as part of the ETL process) is provided.
Default ETL Validation Mechanisms and Potential Errors
As part of the standard Rapide ETL process, a number of validation mechanisms are provided. These mechanisms validate whether the data loaded into the target tables meets specific, predefined requirements. Should any of the records in the staging tables fail validation, the records are not loaded into the target tables. Instead, they are moved to the corresponding error table with an error message corresponding to the failed validation.
Detailed information about the errors that occurred during a given ETL process is provided in the data load summary. For more information refer to Viewing the Data Load Summary.
The following table details the validation mechanisms run by the system when an ETL process is executed (an illustrative sketch of the mandatory and unique column checks follows the table):
| ETL Validation Name | Description | Error Message |
|---|---|---|
| Data Type Validation | This validation mechanism checks whether the loaded data meets the requirements regarding data type. In addition, it checks whether each row from the staging tables was loaded into the target tables (with an already used target data type) without SQL conversion error. The validation mechanism prevents the loading of all values that cannot be successfully converted into the corresponding data types in the target tables. | Data Type Conversion Error for [Column Name]. |
| | This mechanism also checks that provided dates can be successfully converted into the date format selected in the Global Setting ETL grid. For more information, refer to Configuring Global ETL Settings. | Incorrect Date Format (not aligned with Global Setting) or value provided is not DATE type for [Column Name]. |
| Referenced Data Validation | This validation mechanism checks that the data marked as referenced already exists in the related referential table(s). If the data present in the staging tables and marked as referenced does not exist in the corresponding referential table, then all records containing this data are identified by the validation mechanism and are not loaded into the target tables. This validation mechanism is closely linked to the configuration of the Table Field Setting grid, which contains a standard, predefined setup to link specific table columns to referential tables. For more information, refer to Configuring the "Table Field Setting" Grid. | Value for [Column Name] does not exist in the related referential table. |
| Mandatory Column Validation | This validation mechanism checks that the columns marked as mandatory for the target table in the Table Field Setting grid (see Configuring the "Table Field Setting" Grid) contain data in the corresponding staging tables. If no data exists in the staging table for a field marked as mandatory, then the record is not loaded into the target table. | Column [Column Name] is mandatory and none of the values can be NULL. |
| Unique Column Validation | This validation mechanism checks that the combination of values in the columns marked as business keys in the Table Field Setting grid for the target table is unique. For more information, see Configuring the "Table Field Setting" Grid. If duplicated values exist in the staging data for fields marked as business keys, then the records are not loaded into the target tables. | Unique key validations for business key columns ([Column Name]). |
| Overlapping Dates Validation | This validation mechanism checks the start and end dates in the staging tables against various requirements and ensures that all of them are met before the data is loaded into the target tables. An error is returned if any of these requirements is not met. | |
| py_Payee Employee Validation | This validation mechanism checks that the unique code for each payee in the loaded data exists in the Payee Personal Data and in the py_Payee table. This is specifically used when an employee is referenced from another employee (in the case of a direct report/manager relationship, for instance). If there are, in the staging data, records for employees with codes that do not exist in the Payee Personal Data table, then those records are not loaded into the target table. | Employee does not exists in Payee Personal Data. |
| Code Parent Validation | This validation mechanism checks whether, in the referential tables, there are records for which the code parent is not equal to the code. In addition, it checks, in the referential tables, that the code parent exists in either the target referential table or in the staging referential table (and that it passed prior validations). | |
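As an illustration only, the following sketch mimics the mandatory-column and unique-business-key checks in plain Python. The sample rows and field names are made up; the real validations run in the database as part of the Rapide ETL, driven by the flags set in the Table Field Setting grid, and failing records are moved to the error tables with the messages listed above.

```python
from collections import Counter

# Hypothetical staged rows for a referential table (e.g. absence reasons).
rows = [
    {"code": "SICK", "name": "Sick leave"},
    {"code": "SICK", "name": "Sick leave (duplicate)"},
    {"code": None,   "name": "Unnamed reason"},
]
mandatory_fields = ["code", "name"]   # fields flagged "Is Mandatory"
business_key = ("code",)              # fields flagged "Is business key"

errors = []

# Mandatory Column Validation: every mandatory field must have a value.
for row in rows:
    for field in mandatory_fields:
        if row[field] in (None, ""):
            errors.append((row, f"Column {field} is mandatory and none of the values can be NULL."))

# Unique Column Validation: the business key combination must be unique.
key_counts = Counter(tuple(row[f] for f in business_key) for row in rows)
for row in rows:
    if key_counts[tuple(row[f] for f in business_key)] > 1:
        errors.append((row, f"Unique key validations for business key columns ({', '.join(business_key)})."))

for row, message in errors:
    print(row, "->", message)   # these rows would be moved to the error table
```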
Viewing the Data Load Summary
The details regarding the data load process are available in the Data Load Summary audit grid. To view the results of the data load, proceed as follows:
Make sure that you are in the Data section of the beqom TCM Web App interface, and then navigate to Data Integration > Data Load Summary.
Locate in the grid the recently executed ETL process whose details you want to view. You can use the quick filters to identify the process that you triggered more quickly, for instance by filtering the Insert By column on your own name. To help you identify the relevant row, keep in mind that ETL processes triggered via an ADF pipeline are identified as "[ADF TRIGGERED]" in the Name Data Load column (whereas ETL processes triggered manually are identified by "[MANUALLY TRIGGERED]" and ETL processes triggered via API are identified by "[API TRIGGERED]").
Click the row corresponding to the execution that you want to view. A child grid called Data Load Detail is opened. This grid contains several detail columns, as described in the table below and each row corresponds to an action, broken down by table.
If the Execution Result column indicates Executed with error, open the corresponding error child grid using the drop-down list located above the child grid panel. There are a number of error grids:
Payee Personal Data Error: lists errors that occurred for the Payee Personal Data table during the execution of the ETL process.
Payee Job Assignment Error: lists errors that occurred for the Payee Job Assignment table during the execution of the ETL process.
Payee Org Assignment Error: lists errors that occurred for the Payee Org Assignment table during the execution of the ETL process.
Payee Compensation Error: lists errors that occurred for the Payee Compensation table during the execution of the ETL process.
Payee Absence Error: lists errors that occurred for the Payee Absence table during the execution of the ETL process.
Payee Address Error: lists errors that occurred for the Payee Address table during the execution of the ETL process.
Payee Rating Error: lists errors that occurred for the Payee Rating table during the execution of the ETL process.
Payee Tables Error: lists errors that occurred in all payee tables during the execution of the ETL process.
Referential Tables Error: lists errors that occurred in referential tables during the execution of the ETL process.