# xMart Pipeline Script Guide

Markdown version v4.32.10, generated at 2026-04-16T15:32:29.

Links to user documentation should point at https://extranet.who.int/xmart4/pipeline-script-guide#{CommandName}

## XmartPipeline template

A pipeline should have the following structure:

```xml
<!-- Skeleton sketch only: the original template was not preserved in this
     generated version of the guide. Section names follow the command
     sections documented below. -->
<XmartPipeline>
  <Context>
    <Inputs />
    <OriginValues />
  </Context>
  <Extract />
  <Transform />
  <Validate />
  <Approvals />
  <Load />
  <PostRun />
</XmartPipeline>
```

## Table of Contents of commands by section

- [Context](#context) section
  - [Add](#context-add)
  - [Add](#context-add)
- [Extract](#extract) section
  - [GetAccess](#extract-getaccess)
  - [GetDb](#extract-getdb)
  - [GetEsriFeatures](#extract-getesrifeatures)
  - [GetExcel](#extract-getexcel)
  - [GetFtpFile](#extract-getftpfile)
  - [GetJson](#extract-getjson)
  - [GetJsonApi](#extract-getjsonapi)
  - [GetMart](#extract-getmart)
  - [GetOData](#extract-getodata)
  - [GetRecfile](#extract-getrecfile)
  - [GetText](#extract-gettext)
  - [GetWebService](#extract-getwebservice)
  - [GetWordForm](#extract-getwordform)
  - [GetZip](#extract-getzip)
  - [OneOnly](#extract-oneonly)
- [Transform](#transform) section
  - [AddColumn](#transform-addcolumn)
  - [AggregateTable](#transform-aggregatetable)
  - [ChangeCase](#transform-changecase)
  - [CleanTable](#transform-cleantable)
  - [CopyColumn](#transform-copycolumn)
  - [CopyTable](#transform-copytable)
  - [CustomScript](#transform-customscript)
  - [DateAdd](#transform-dateadd)
  - [DateDiff](#transform-datediff)
  - [DbLookup](#transform-dblookup)
  - [FindReplace](#transform-findreplace)
  - [FormatDate](#transform-formatdate)
  - [GenerateHash](#transform-generatehash)
  - [GenerateID](#transform-generateid)
  - [HierarchyLookup](#transform-hierarchylookup)
  - [JoinTables](#transform-jointables)
  - [MartLookup](#transform-martlookup)
  - [MergeTables](#transform-mergetables)
  - [ParseJson](#transform-parsejson)
  - [PivotColumns](#transform-pivotcolumns)
  - [RemoveColumn](#transform-removecolumn)
  - [RemoveDuplicates](#transform-removeduplicates)
  - [RemoveRowsByFilter](#transform-removerowsbyfilter)
  - [RemoveRowsByRowNumber](#transform-removerowsbyrownumber)
  - [RemoveTable](#transform-removetable)
  - [RenameColumn](#transform-renamecolumn)
  - [RenameColumnsByLookup](#transform-renamecolumnsbylookup)
  - [Round](#transform-round)
  - [SetColumnDataType](#transform-setcolumndatatype)
  - [SetContext](#transform-setcontext)
  - [SplitCellsToRows](#transform-splitcellstorows)
  - [SplitColumnByRegEx](#transform-splitcolumnbyregex)
  - [SplitDate](#transform-splitdate)
  - [SynonymLookup](#transform-synonymlookup)
  - [TableLookup](#transform-tablelookup)
  - [TransposeTable](#transform-transposetable)
  - [UnpivotColumns](#transform-unpivotcolumns)
  - [UnpivotSimple](#transform-unpivotsimple)
  - [ValuesToColumns](#transform-valuestocolumns)
- [Validate](#validate) section
  - [TestColumnsExist](#validate-testcolumnsexist)
  - [TestNotEmpty](#validate-testnotempty)
  - [TestPattern](#validate-testpattern)
  - [TestRow](#validate-testrow)
  - [TestValueInRange](#validate-testvalueinrange)
- [Approvals](#approvals) section
  - [AddStep](#approvals-addstep)
- [Load](#load) section
  - [AutoLoad](#load-autoload)
  - [ColumnMapping](#load-columnmapping)
  - [LoadTable](#load-loadtable)
  - [SysIDLookup](#load-sysidlookup)
- [PostRun](#postrun) section
  - [CallWebService](#postrun-callwebservice)
  - [Flow](#postrun-flow)
  - [PostRun](#postrun-postrun)
  - [RunPipeline](#postrun-runpipeline)
- [Flow](#flow) section
  - [CallWebService](#flow-callwebservice)
  - [Flow](#flow-flow)
  - [RunPipeline](#flow-runpipeline)

## Context

Section to define pipeline input values.

### Add [In the Context / Inputs section]

Declares a new variable, provided either by the origin or by the user at load time.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `DefaultValue` | Attribute | - | [For Inputs] Default value for the input, set when the origin is started; can be changed by the user in the UI. |
| `FilterExpression` | Attribute | - | [For Inputs] DataTable Select expression |
| `Key` | Attribute | - | Name of the value to provide |
| `Label` | Attribute | - | Label of the value; optional. If not set, Key is used as the label |
| `Max` | Attribute | - | [For Inputs] Mapped to the HTML input type=number/datetime attribute |
| `MaxLength` | Attribute | - | [For Inputs] Mapped to the HTML input/textarea attribute |
| `Min` | Attribute | - | [For Inputs] Mapped to the HTML input type=number/datetime attribute |
| `Pattern` | Attribute | - | [For Inputs] Mapped to the HTML input type=text attribute (RegExp format) |
| `Placeholder` | Attribute | - | [For Inputs] Mapped to the HTML input type=text attribute |
| `PropertyTitleCode` | Attribute | - | [For Inputs] Which property of the table to display in the dropdown list for each item of the martQuery. Ex: COUNTRY_FRIENDLY_NAME |
| `PropertyValueCode` | Attribute | - | [For Inputs] Which property of the table should be used as the value of the dropdown for each item of the martQuery. Ex: COUNTRY_CODE |
| `Source` | Attribute | - | [For Inputs] Used with "select". Can be martQuery, tableCode or csvList. Possible values: empty, csvList, martQuery, tableCode, userSelectedSource |
| `TableCode` | Attribute | - | [For Inputs] Use with Source="martQuery" |
| `Type` | Attribute | - | For Context > Inputs > Add, a Type is required to know how to request the value from the user in the UI. For Context > OriginValues > Origin > Add, Type defaults to "origin". Possible values: text, date, number, select, origin |
| `date` | Element | - | An input date should be used |
| `number` | Element | - | An input number should be used |
| `origin` | Element | - | Identifies values already provided at upload time by the origin |
| `select` | Element | - | A dropdown should be used |
| `text` | Element | - | A standard input text should be used |
| `Values` | Attribute | - | [For Inputs] Use with Source="csvList" |

### Add [In the Context / OriginValues / Origin element]

Provides a default value for a particular origin; the user of this origin will no longer be prompted for this value.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `Key` | Attribute | - | Name of the value to provide |
| `Label` | Attribute | - | Label of the value; optional. If not set, Key is used as the label |
| `Type` | Attribute | - | For Context > Inputs > Add, a Type is required to know how to request the value from the user in the UI. For Context > OriginValues > Origin > Add, Type defaults to "origin". Possible values: text, date, number, select, origin |
| `date` | Element | - | An input date should be used |
| `number` | Element | - | An input number should be used |
| `origin` | Element | - | Identifies values already provided at upload time by the origin |
| `select` | Element | - | A dropdown should be used |
| `text` | Element | - | A standard input text should be used |
| `Value` | Attribute | - | [For OriginValues / Origin] Sets the value of this input for the origin. |

## Extract

One or many getters to fetch the input data from different sources.

### GetAccess

Loads data from an MS Access database file. Supported formats: mdb (pre-2000 and 2002-2003) and accdb (2007 to present).

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `AsStrings` | Attribute | - | If true, all data is imported as strings regardless of data type. If false (default), the source data types are retained. |
| `OutputTableName` | Attribute | - | Only relevant if Sql is defined; this defines the name of the in-memory table after extraction using the Sql statement. |
| `Sql` | Attribute | - | A single SQL SELECT query used to retrieve a single table of data from the MS Access database. If provided, TableName should be empty. |
| `TableName` | Attribute | - | Name of the table or query in the MS Access file to extract. Multiple table/query names can be provided, separated by commas. After extraction, the names of the in-memory tables match the original MS Access names. If this is provided, the value of Sql should be empty. |

### GetDb

Loads data from a database table (ConnectionName may be empty). Selects data from a mart's objects using a SELECT statement. Objects include tables and views in the {martcode} and {martcode}_c schemas. As opposed to the GetMart command, custom views do not need to be registered to be queried in the SELECT statement.
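A GetDb extraction can be sketched as follows; the element and attribute names come from this guide, while the connection name, query and table name are illustrative, not taken from it:

```xml
<!-- Minimal GetDb sketch: connection name, query and table name are illustrative -->
<GetDb ConnectionName="MY_SQL_CONNECTION"
       Sql="SELECT COUNTRY_CODE, COUNTRY_NAME FROM REF_COUNTRY"
       TableName="COUNTRIES" />
```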
In principle there is no limit on the complexity of the SELECT statement, but it is advisable to use relatively simple SELECT statements in GetDb and put complex SELECT statements in proper custom SQL views stored in the database.

#### Remarks

Currently, ODBC databases are supported.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `CommandTimeout` | Attribute | - | The time (in seconds) to wait for the command to execute. The default is 30 seconds. |
| `ConnectionName` | Attribute | - | Name of the database connection created in the Pipelines > Connections > SQL Server section |
| `Sql` | Attribute | - | An SQL SELECT query used to retrieve data from a database. Required. |
| `StrictTypes` | Attribute | - | Defaults to true: the source types from the database are kept. Set to false to stringify all the values. |
| `TableName` | Attribute | - | Once retrieved, the name given to the table in the script context. Optional |

### GetEsriFeatures

Retrieves data from an ESRI feature service.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `ConnectionName` | Attribute | - | If the web service is secured, a connection is needed to get the access token: the connection code used to retrieve the credentials needed to authenticate against the remote web service |
| `TableName` | Attribute | - | Once retrieved, the name given to the table in the script context. |
| `Url` | Attribute | - | Url to download the data from. |
| `Body` | Element | - | Defines the body parameters of the POST request |
| `Type` | Attribute | - | Gets or sets the Body request content type. Possible values: raw, x-www-form-urlencoded, graphQL |
| `graphQL` | Element | - | GraphQL query JSON content |
| `raw` | Element | - | String content without a particular media type (use Headers to set Content-Type) |
| `x-www-form-urlencoded` | Element | - | x-www-form-urlencoded media type, one key:value per line using the convention key1:value1 key2:value2 |
| `Headers` | Element | - | Additional headers to add to the request. Contains repeatable element(s): add |
| `Name` | Attribute | - | Gets or sets the name. |
| `Value` | Attribute | - | Gets or sets the value. |

### GetExcel

Reads one or more worksheets or named ranges from an Excel workbook, with options to merge tables and perform common cleaning operations.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `ColumnSuffixIfDuplicate` | Attribute | - | Optional. If provided, when some column names are duplicated, this value is inserted between the raw name and the incremented duplicate index. Ex: if the worksheet has 2 columns named "MyColum", the following ones are renamed "MyColumMY_SUFFIX2", "MyColumMY_SUFFIX3", ... if ColumnSuffixIfDuplicate="MY_SUFFIX". Defaults to empty string. |
| `ExtractCellHyperlinks` | Attribute | - | For cells containing hyperlinks, by default only the value (displayed text) is extracted. When ExtractCellHyperlinks is set to true, the hyperlink value is appended to the cell value (separated by " \| "). Defaults to false. Only works for Excel 2007 and above (xlsx). |
| `FindStartingRow` | Attribute | - | Comma-separated list of parts of field names that identify the header row. This is an alternative to specifying the StartingRow integer (FindStartingRow takes precedence if defined). The fields can be in any order. A search for the first row containing all of these fields occurs, which allows a variable number of rows before the header row. The values given in FindStartingRow can be part of the field names (a "contains" comparison is performed) and are whitespace-trimmed. For example, FindStartingRow = "Name, Phone, Address" will find a row in the Excel table with columns "Full Name", "Telephone", and "Street Address". |
| `OutputTableName` | Attribute | - | Optional. If provided, the original name of the worksheet/named range is changed to this after reading into memory. For example, if the original worksheet is called "Sheet1" and OutputTableName is "MyData", the name of the table after getting the data will be "MyData". |
| `RemoveBlankCols` | Attribute | - | If true, blank columns in the table are removed. Default is false. |
| `RemoveBlankRows` | Attribute | - | If true, blank rows in the table are removed. Default is true. |
| `RepeatMergedCellsValue` | Attribute | - | For merged cells, by default only the first cell contains the displayed value. Set to true to repeat the first cell's value in every cell of the merged range. Defaults to false. |
| `StartingRow` | Attribute | - | One-based row index of the row with the column headers of the data in the Excel document, i.e. the row number visible in Excel. The header row supports merged cells: if merged cells are used in the row headers, specify the last row index that is part of the header. If FindStartingRow is defined, StartingRow is ignored. If both StartingRow and FindStartingRow are blank, the first non-blank row is used. |
| `StrictTypes` | Attribute | - | Default is false. If true, the Excel extractor extracts values as their underlying values, preserving their data types, rather than the formatted values as they appear in Excel. Also, for .xls files, the first row of the data must contain the headers; currently this means that the StartingRow and FindStartingRow settings have no effect here (these require that data is first extracted as strings and the header is then searched for in the data). Under the hood, Excel.IMEX is set to 2 (guess types, no type intermixing) and Excel.Headers is set to true. If false, values are extracted as the formatted strings as they appear in the Excel workbook. Which to use? It's a tradeoff. Consider an Excel date field containing the value 2099-1-1 (Jan 1, 2099). If StrictTypes is false and the date is formatted in Excel as "1-1-99", the retrieved value will be Jan 1, 1999, not 2099. If StrictTypes is true, the retrieved value will be Jan 1, 2099. However, consider a column in Excel of single numbers, except that on row 20 a cell contains "2 - 5", implying a range. If StrictTypes is false, "2 - 5" is retrieved and displayed to the end user so they can correct the error; any data type validation that ensues will correctly flag this as a problem, which is good. If StrictTypes is true, "2 - 5" is retrieved as an empty value (in older versions of the OLE driver, this would raise an exception preventing any data upload). The user would not be made aware of the error if NULLs are allowed, which is bad if the intent is to bring the situation to the user's attention. |
| `SystemHeaders` | Attribute | - | Default is false. If true, Excel will assign its own column headers (F1, F2, F3, etc.). If false, column headers will be based on the data. |
| `TableName` | Attribute | - | Name of the sheet or named range to extract. Multiple sheet/range names can be provided, separated by commas. Use a single asterisk character (*) to load all sheets and all ranges. Extract a specific range of cells with a dollar sign after the worksheet name: TableName="worksheet$A2:B5". If MergeTables.TableNames is defined, TableName may be left blank or be defined; if defined, it will be merged with the tables listed in MergeTables.TableNames. If not found and multiple worksheets are extracted, the user should be prompted. |
| `TableTypes` | Attribute | - | When TableNames="*", TableTypes indicates what type of Excel tables should be loaded. The options are "All" (default), "Worksheets" (only worksheets), "Tables" (preferred if the data is formatted as an Excel named table) or "NamedRanges" (only named ranges). Possible values: All, Worksheets, NamedRanges, Tables |
| `All` | Element | - | Default: all types of Excel objects (worksheets, named tables or ranges) |
| `NamedRanges` | Element | - | Only retrieves named ranges |
| `Tables` | Element | - | Only retrieves named tables |
| `Worksheets` | Element | - | Only retrieves worksheets |
| `TrimValues` | Attribute | - | If true, all loaded cell values are whitespace-trimmed. Default is true. |
| `WorksheetNameToColumn` | Attribute | - | If defined, the Excel worksheet name of all extracted tables is put into this column, so it can be treated like data. The column must not already exist. If MergeTables is defined and the same attribute isn't specified there, it will be used in the MergeTables processing. |
| `RemoveTrailingBlankCols` | Element | `True` | If defined, trailing blank columns at the end of the table are removed. Prevents a huge column count if some rogue empty cells are merged at column 16384. Default is true. |

### GetFtpFile

Retrieves a file from an FTP server.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `ConnectionName` | Attribute | - | If the service is secured, a connection is needed to get the access token: the connection code used to retrieve the credentials needed to authenticate against the remote service |
| `TableName` | Attribute | - | Once retrieved, the name given to the table in the script context. Obsolete: use the underlying Getter's OutputTableName instead. |
| `Url` | Attribute | - | Url to download the file from. Should start with ftp:// or ftps:// |
| `Getter` | Element | - | How to process the response based on the expected response type. Contains element(s): GetJson, GetText, GetExcel, GetAccess, GetZip |

### GetJson

Loads data from a JSON file. The Path element is required to transform the data into a flat table.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `OutputTableName` | Attribute | - | Name of the in-memory table after extraction. |
| `Path` | Element | - | Defines the JSONPath query for values (see JSONPath syntax). Ex: for the Google Sheets API v4, use `values[1:]` to get all rows of the values array except the first |
| `PathHeader` | Element | - | Optional: use only when the data is not formatted as objects with named properties, e.g. in array/grid format with the first row containing the column names. The expression should return a flat array of strings to use as headers. Ex: for the Google Sheets API v4, use `values[0][*]` to get the first row of the values array |

#### Examples

##### Example 1

```xml
<!-- Reconstructed around the surviving JSONPath fragments of the
     original example; the wrapping elements are taken from this guide -->
<GetJson>
  <PathHeader>values[*][0]</PathHeader>
  <Path>values[1:]</Path>
</GetJson>
```

### GetJsonApi

Retrieves data from a jsonapi.org-compliant web service. This command supports paginated loading.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `ConnectionName` | Attribute | - | If the web service is secured, a connection is needed to get the access token: the connection code used to retrieve the credentials needed to authenticate against the remote web service |
| `FollowNextLink` | Attribute | - | If the response contains a links.next property, those records are downloaded as well, recursively. |
| `PageSize` | Attribute | - | Adds the page[size] parameter to the query; returns only the first n rows of the table |
| `RootUrl` | Attribute | - | Url to the root of the JSON:API web service; it should be the path just before the table level (TableName will be appended). If provided, the Url attribute is not needed. If you need to use special operators like $select or $filter, use the Url attribute instead. |
| `TableName` | Attribute | - | Tables to extract from the JSON:API web service. If used with RootUrl, you can use a comma-separated list of tables like TableName="Table1, Table2"; each table will be appended to RootUrl and downloaded. If used with Url, it will be the name of the output table. |
| `Url` | Attribute | - | Url to download the data from. |
| `Body` | Element | - | Defines the body parameters of the POST request |
| `Type` | Attribute | - | Gets or sets the Body request content type. Possible values: raw, x-www-form-urlencoded, graphQL |
| `graphQL` | Element | - | GraphQL query JSON content |
| `raw` | Element | - | String content without a particular media type (use Headers to set Content-Type) |
| `x-www-form-urlencoded` | Element | - | x-www-form-urlencoded media type, one key:value per line using the convention key1:value1 key2:value2 |
| `Headers` | Element | - | Additional headers to add to the request. Contains repeatable element(s): add |
| `Name` | Attribute | - | Gets or sets the name. |
| `Value` | Attribute | - | Gets or sets the value. |

### GetMart

Extracts data from a storage table in xMart. The table may be located in the current mart or another mart (security applies). SinceDays can be used for incremental data extractions: only data inserted, modified or soft-deleted since this number of days ago will be retrieved.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `Columns` | Attribute | - | Comma-separated list of field codes to extract. If not provided, all fields are extracted. Square brackets are supported in column names: "FIELD_1" and "[FIELD_1]" are both valid. Fields may be renamed upon extraction using AS syntax (as in SQL), for example "FIELD_1 AS NEW_Name". |
| `ExcludeSysColumns` | Attribute | - | Set to true to exclude system columns |
| `IncludeDeleted` | Attribute | - | Set to true to also return soft-deleted records, and the property _Delete for each record. |
| `MartCode` | Attribute | - | If defined, the table is fetched from the target mart. Note: the service account of this mart must have the VIEW_DATA permission on the given TableCode |
| `NoLock` | Attribute | - | If true, the GetMart command does not lock the table and will read a table while it is being updated, including partially committed data |
| `OutputTableName` | Attribute | - | Name of the in-memory table after extraction. If empty, the TableCode is used. |
| `SinceDays` | Attribute | - | Select only data changed (inserted, modified or soft-deleted) since this number of days ago. The comparison is with Sys_CommitDateUtc. If not provided, all data is extracted. |
| `StrictTypes` | Attribute | - | Defaults to false: all values are stringified so xMart can properly cast them during analysis. Set to true to keep the source types from the database if numeric operations are done in the Transform section. |
| `TableCode` | Attribute | - | Code of the table or view to extract data from; required. |
| `Top` | Attribute | - | Limits the number of rows |
| `Filter` | Element | - | Gets or sets the filter of the source data. |
| `InheritFromDataView` | Attribute | - | Loads filters from the data view if true. Used when the origin is attached to a data view page. |
| `Operator` | Attribute | - | Determines whether terms are ANDed (default) or ORed. Possible values: OR, AND |
| `Terms` | Element | - | List of filter terms. Contains repeatable element(s): Term, Group |

### GetOData

Retrieves data from an OData web service.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `ConnectionName` | Attribute | - | If the web service is secured, a connection is needed to get the access token: the connection code used to retrieve the credentials needed to authenticate against the remote web service |
| `Expand` | Attribute | - | Adds the $expand parameter to the OData query; pass "*" to select every column, or a comma-separated list to pass directly to the OData query |
| `Filter` | Attribute | - | Adds the $filter parameter to the OData query |
| `FollowNextLink` | Attribute | - | If the OData response contains the odata.nextLink property, those records are downloaded as well, recursively |
| `PreserveODataSpecialAttributes` | Attribute | - | Default is false. If true, does not remove the @odata.XXX columns like @odata.type or @odata.context |
| `RootUrl` | Attribute | - | Url to the root of the OData web service; it should be the path just before the table level (TableName will be appended). If provided, the Url attribute is not needed. If you need to use special operators like $select or $filter, use the Url attribute instead. |
| `Select` | Attribute | - | Adds the $select parameter to the OData query; pass "*" to select every column, or a comma-separated list to pass directly to the OData query |
| `Skip` | Attribute | - | Adds the $skip parameter to the OData query; skips the first n rows of the table |
| `TableName` | Attribute | - | Tables to extract from the OData web service. If used with RootUrl, you can use TableName="*" to retrieve every table from the OData endpoint, or a comma-separated list of tables like TableName="Table1, Table2". If used with Url, it will be the name of the output table. |
| `Top` | Attribute | - | Adds the $top parameter to the OData query; returns only the first n rows of the table |
| `Url` | Attribute | - | Url to download the data from. |
| `Body` | Element | - | Defines the body parameters of the POST request |
| `Type` | Attribute | - | Gets or sets the Body request content type. Possible values: raw, x-www-form-urlencoded, graphQL |
| `graphQL` | Element | - | GraphQL query JSON content |
| `raw` | Element | - | String content without a particular media type (use Headers to set Content-Type) |
| `x-www-form-urlencoded` | Element | - | x-www-form-urlencoded media type, one key:value per line using the convention key1:value1 key2:value2 |
| `Headers` | Element | - | Additional headers to add to the request. Contains repeatable element(s): add |
| `Name` | Attribute | - | Gets or sets the name. |
| `Value` | Attribute | - | Gets or sets the value. |

### GetRecfile

Parses data from a .REC file generated by the EPI INFO application. Generates a table with the columns extracted from the header of the .REC file. It uses the short name as the column name (not the label of the column) and uses the data type declared in the header to better parse some fields (like US and EUR dates). Also extracts an extra column "REC_TERMINATOR" with "!" for normal records, "?" for deleted records, or "^" for verified records.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `ConvertDateToIso` | Attribute | - | Defaults to true: US and EUR dates are converted to ISO format (understood by xMart) based on the type set in the REC file (2 = US, 11 = EUR). Set to false to keep the original values. |
| `OutputTableName` | Attribute | - | Name of the in-memory table after extraction. |

### GetText

Loads data from a character-delimited text file. Supported formats: any single-character-delimited format such as csv or tab-separated-values text files. Values which contain delimiters (a comma in a csv file or a tab in a tsv file) can be imported if they are fully enclosed by double-quote characters.

#### Properties

| Name | Type | Default Value | Description |
|------|------|--------------|-------------|
| `Columns` | Attribute | - | An optional comma-separated list of column names to assign to the imported data. If not provided, the first data row is used as column names. |
| `ColumnSuffixIfDuplicate` | Attribute | - | Optional. If provided, when some column names are duplicated, this value is inserted between the raw name and the incremented duplicate index. Ex: if the file has 2 columns named "MyColum", the following ones are renamed "MyColumMY_SUFFIX2", "MyColumMY_SUFFIX3", ... if ColumnSuffixIfDuplicate="MY_SUFFIX". Defaults to empty string. |
| `Delimiter` | Attribute | - | Defines the single-character delimiter used to separate values in a record. The character itself should be entered ("," for comma, ";" for semi-colon, etc.). However, specify the following delimiters using keywords (case-insensitive): "tab". If the values in the source file contain delimiters, line breaks or carriage returns, the values should be wrapped in double quotes, just like comma- or tab-separated files generated by Excel. |
| `Encoding` | Attribute | - | Default: "UTF-8". Possible values: "detect" (auto-detection algorithm based on character statistics), "Windows-1252" (Western European), "UTF-16" (alias for little-endian), "us-ascii", "iso-8859-15" (Latin-9, adds €), "windows-1253" (Greek), ... |
| `FindStartingRow` | Attribute | - | Comma-separated list of parts of field names that identify the header row. This is an alternative to specifying the StartingRow integer (FindStartingRow takes precedence if defined). The fields can be in any order. A search for the first row containing all of these fields occurs, which allows a variable number of rows before the header row. The values given in FindStartingRow can be part of the field names (a "contains" comparison is performed) and are whitespace-trimmed. For example, FindStartingRow = "Name, Phone, Address" will find a row with columns "Full Name", "Telephone", and "Street Address". |
| `IgnoreBadData` | Attribute | - | A flag to acknowledge that the file you are trying to load is badly formed but you want to try to load it anyway (produces a Warning rather than a Critical Error) |
| `IgnoreQuotes` | Attribute | - | A flag that tells the CSV parser to ignore quotes as an escape character and treat them like any other character. |
| | `OutputTableName` | Attribute | - | Name of the in-memory table after extraction. | | `StartingRow` | Attribute | - | One-based row index of the row of the data in the Text document.
StartingRow and FindStartingRow should not both be specified.
If both StartingRow and FindStartingRow are blank, the first row will be used.
If Columns is not provided, StartingRow indicates the row with column headers
If Columns is provided, StartingRow should indicate the first data row | ### GetWebService Retrieves data from a Web service. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ConnectionName` | Attribute | - | If the webservice is secured, we need a connection to get the accessToken Connection Code to use to retrieve the Credentials needed to authenticate against the remote WebService | | `Url` | Attribute | - | Url to download the data from. | | `Body` | Element | - | Defines the body parameters of the POST request | | `Type` | Attribute | - | Gets or sets the Body request content type.
Possible values: raw, x-www-form-urlencoded, graphQL | | `graphQL` | Element | - | GraphQL query json content | | `raw` | Element | - | String Content without particular media type (use Headers to set Content-Type) | | `x-www-form-urlencoded` | Element | - | x-www-form-urlencoded media type, one key:value per line using this convention key1:value1 key2:value2 | | `Getter` | Element | - | How to process the response based on the expected response type
Contains element(s): GetJson, GetText, GetExcel, GetZip, GetWordForm | | `Headers` | Element | - | Additional headers to add to the request
Contains repetable element(s): add | | `Name` | Attribute | - | Gets or sets the name. | | `Value` | Attribute | - | Gets or sets the value. | #### Examples ##### Example 1 ```xml $[0:10] ``` ##### Example 2 ```xml goal:1 areaCodes:4 timePeriodStart:2007 timePeriodEnd:2008 ``` ### GetWordForm Extracts data from MS Word documents containing form fields. Either a single MS Word document may be uploaded or a zip file of MS Word documents having a similar structure. Each document's form field data is extracted as a single record with column names matching the names of the form fields. The file name of each document is extracted to a field called MSWORD_FILE_NAME. If a form field exists in one document but not in another, the field will be created in the extracted table and have NULL values for any document missing that form field. Learn about Content controls in Word https://learn.microsoft.com/en-us/office/client-developer/word/content-controls-in-word #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ColumnNamesFrom` | Attribute | - | Generated table will have columns named from the Content Control's Tag (as shown in Form control's Properties). Switch to "Title" to use the user-friendly name as displayed in Word instead.
Possible values: Tag, Title | `ColumnSuffixIfDuplicate` | Attribute | - | Optional. If provided, when some column names are duplicated, this value will be inserted between the raw name and the incremented duplicate index. Ex: if the worksheet has 2 columns named "MyColum", the second one will be renamed "MyColumMY_SUFFIX2", "MyColumMY_SUFFIX3", ... if ColumnSuffixIfDuplicate="MY_SUFFIX" Defaults to empty string. | | `OutputTableName` | Attribute | - | Name of the in-memory table after extraction | ### GetZip Extracts a zip file uploaded by the user or fetched via GetWebService, and processes the extracted file with the provided Getter. GetZip command can be used multiple times to extract different files from the same zip. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `FileNameToExtractPattern` | Attribute | - | Optional if only one file in the zip. In case the zip file contains multiple files, this attribute is required to target which file in the extracted archive should be processed by the getter. If more than one file matches the pattern, the files will be processed by the same command and output tables concatenated if they have the same name (use $1 to refer to the first capture group of the regex pattern to generate distinct table names). | | `Getter` | Element | - | How to process the response based on the expected response type. To extract multiple tables from different text files (GetText, GetJson, GetRecfile), use a regex matching group in FileNameToExtractPattern and "$1" in OutputTableName will be replaced with the matched value.
Contains element(s): GetJson, GetText, GetExcel, GetRecfile | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ##### Example 3 ```xml ``` ### OneOnly OneOnly allows a pipeline to support different file types (such as Excel, Access, csv, etc.) when the internal data structure is the same regardless of the type. For example, a user may have a table in MS Access and then switch to Excel, while the table structure itself has not changed (same column names).
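The Access-or-Excel scenario above can be sketched as follows. This is a minimal, hypothetical fragment: the getter attributes shown (such as OutputTableName) are assumptions for illustration, not taken from this guide.

```xml
<OneOnly>
  <!-- first applicable getter wins: Access file uses GetAccess, Excel workbook uses GetExcel -->
  <GetAccess OutputTableName="DATA" />
  <GetExcel OutputTableName="DATA" />
</OneOnly>
```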

OneOnly is a container element into which a list of Getter commands are placed. When a user uploads a file, the first applicable Getter in the list is used to extract the file. If no applicable Getter is found, the load fails. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Getters` | Element (Multiple) | - | Gets or sets the getters.
Contains element(s): GetDb, GetExcel, GetAccess, GetText, GetMart, GetWebService, GetEsriFeatures, GetJson, GetOData, GetZip, GetFtpFile, GetRecfile | ## Transform Sequence of transformers to apply on the input table(s) ### AddColumn Adds a new column to the active table. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ActionIfAlreadyExists` | Attribute | - | New in 4.26. What to do if the column already exists. In 4.25 and before, it used to do nothing, except if ErrorIfColumnAlreadyExists was true.
Possible values: DoNothing, Override, Warning, Error | | `DoNothing` | Element | - | Do nothing, does not change existing values. Useful to make sure the column exists after this point. Used to be the default behavior in 4.25 and before, or if FillWith or FillWithExpression is empty. | | `Error` | Element | - | Stops the batch with a critical issue if the column already exists. | | `Override` | Element | - | Changes the values inside the column using the provided FillWith or FillWithExpression, even if the column already existed. | | `Warning` | Element | - | New default in 4.26. Warns the user that the column already exists if either FillWith or FillWithExpression is set, but does not change existing values. (Same behavior as in 4.25 and before, except for the new warning.) For the exact same old behavior, use DoNothing. | | `Before` | Attribute | - | Name of column before which the new column will be added. If blank, the column is added at the end. | | `CopyAsValues` | Attribute | - | Set to true to copy the computed values rather than the formula (like Excel's copy-as-values feature). By default it is false, meaning the values will be dynamically computed every time. | | `DataType` | Attribute | - | List of supported datatypes: BIG_INTEGER, BOOLEAN, DATE, DATE_TIME, DECIMAL, GUID, INTEGER, TEXT
Possible values: BIG_INTEGER, BOOLEAN, DATE, DATE_TIME, DECIMAL, GUID, INTEGER, TEXT | | `ErrorIfColumnAlreadyExists` | Attribute | - | Obsolete. Use ActionIfAlreadyExists instead. Kept for backward compatibility with existing batches. Set to true to create a critical issue if the column already exists. By default it is false, meaning if the column is already there, the batch will continue without warning/error | | `FillWith` | Attribute | - | A value to fill all rows of this column with. Ex: to fill with current date, use
FillWith="${Now.TimeUtc}"
| | `FillWithExpression` | Attribute | - | An expression to fill all rows of this column with. Corresponds to DataColumn.Expression syntax, reference available at: https://learn.microsoft.com/en-us/dotnet/fundamentals/runtime-libraries/system-data-datacolumn-expression | | `Name` | Attribute | - | Name of column to Add. | #### Examples ##### Example 1 ```xml ``` ### AggregateTable Aggregate data in a table using common aggregation functions like SUM, COUNT, MAX, MIN, AVG or STRING_AGG. The aggregated result is output as a new pipeline table. Note that it is necessary to set column data types accordingly for the aggregation to be successful. For example, AVG and SUM require numeric data types; MAX and MIN behave differently for text, numeric or date field types, etc. To set column data types, either use the SetColumnDataType transform or, for the GetMart and GetDb commands (which are strongly typed data sources), set the StrictTypes property to true. Otherwise all data is extracted as text, by default. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Columns` | Attribute | - | Required. Comma-separated list of column names to group by and the aggregation function to apply. Ex: "COUNTRY, PROVINCE, SUM(POPULATION) as POPULATION, AVG(INCOME) as AVG_INCOME" to group by (country,province) and compute sum of population by province and average of income. Only columns declared here will be included in the output table. All other columns will be ignored. Supported aggregate functions: SUM(), AVG(), MEDIAN(), COUNT(), COUNT_DISTINCT(), MAX(), MIN(), STRING_AGG(). Make sure the values in SUM/AVG/MEDIAN columns are numeric. Use SetColumnDataType command to convert them if needed. STRING_AGG will do a "distinct" by default. | | `Filter` | Attribute | - | Optional. Row filter indicating which rows should be aggregated. Default is no filter (all rows aggregated). 
Corresponds to DataColumn.Expression syntax, reference available at: https://learn.microsoft.com/en-us/dotnet/fundamentals/runtime-libraries/system-data-datacolumn-expression | | `OutputTableName` | Attribute | - | Optional. If not provided (default), the input table is replaced by the aggregated table. If provided, name of the new table to be generated. If the output table already exists, an error will be raised. | | `TableName` | Attribute | - | Optional. Name of pipeline table to aggregate. If provided, this is the table to aggregate. If not provided, the table which is currently in context will be aggregated ("in context" = the pipeline table of the current LoadTable section or the active table set by SetContext). | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### ChangeCase Changes the case of text values in a column. Supported cases: upper, lower, title (proper), camelize or slugify. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of column to change text values in. | | `TitleCasePreserveUpper` | Attribute | - | Relevant only if To = "Title" (title case). If true and if input string is completely in uppercase, preserve the uppercase but titlecase all other values. This will preserve text such as acronyms. If false, all values will be titlecased. Default is false. | | `To` | Attribute | - | Case change options: "upper", "lower", "title", "camelize" or "slugify". | #### Examples ##### Example 1 ```xml ``` ### CleanTable Performs common cleaning operations on all columns and rows of a table such as whitespace trimming and removal of empty columns and rows. This is especially useful for cleaning Excel data, which often has empty rows and/or columns. #### Remarks The ExcelGetter, by default, automatically performs all of these cleaning operations. 
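The cleaning operations described in the remarks can be sketched as a single step; this minimal fragment uses only the attributes documented for CleanTable, with illustrative boolean values:

```xml
<!-- trim whitespace and drop fully-empty rows and columns in one pass -->
<CleanTable TrimValues="true" RemoveBlankRows="true" RemoveBlankColumns="true" />
```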
#### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `RemoveBlankColumns` | Attribute | - | Removes columns which consist entirely of whitespace or NULL values. | | `RemoveBlankRows` | Attribute | - | Removes rows which consist entirely of whitespace or NULL values. | | `TrimValues` | Attribute | - | Removes leading and trailing whitespace from all values in the table. | #### Examples ##### Example 1 ```xml ``` ### CopyColumn Creates a new column that is a copy (duplicate) of an existing one. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Before` | Attribute | - | If To does not exist, a new column will be created and positioned before the column named Before. If blank, the column will be added at the end. | | `From` | Attribute | - | Name of column to copy from. | | `To` | Attribute | - | Name of column to copy to. | #### Examples ##### Example 1 ```xml ``` ### CopyTable The CopyTable command copies structure and/or data from an existing table into a new table. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Columns` | Attribute | - | Optional comma-separated list of column names to include in the copy operation. An empty value (default) indicates that all columns will be copied. | | `ExcludeColumns` | Attribute | - | Optional comma-separated list of column names to exclude from the copy operation. Only relevant when the Columns value is empty. | | `Filter` | Attribute | - | Optional row filter indicating which rows should be copied. Default is no filter (all rows copied). Corresponds to DataColumn.Expression syntax, reference available at: https://learn.microsoft.com/en-us/dotnet/fundamentals/runtime-libraries/system-data-datacolumn-expression | | `OutputTableName` | Attribute | - | Name of new table (required).
If the table exists, an error will be raised (RemoveTable can be used to remove a table). | | `StructureOnly` | Attribute | - | If true, only the structure of the table is copied; data is not transferred. Default = false. | | `TableName` | Attribute | - | Name of existing table to copy (required). | ### CustomScript Command that allows executing an arbitrary piece of code to transform the active table(s) or their rows in any way that can be written in the proposed languages. The custom script will be executed on each active table (if multiple) separately. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ContextColumns` | Attribute | - | For debugging purposes, or if an error occurs, controls which columns are displayed to the user. No quoting should be used. Multiple columns should be separated by a comma with no spaces. Example: ContextColumns=FIELD_TYPE_CODE,FK_TABLE_CODE | | `Language` | Attribute | - | Script language used: only C# for now | | `Expression` | Element | - | Code fragment in C# that takes the active table as parameter. Double click on <Expression> tag to open the side editor. Expression text can be surrounded with <![CDATA[ EXPRESSION ]]> if the Expression contains operators like >= that need XML escaping. The expression should finish with return table;. Built-in "using" statements are: using System; using System.Data; using Newtonsoft.Json; using Newtonsoft.Json.Linq; so you can do Guid myId = row.Field<Guid>("column_name"); to cast to the expected C# type. See code example below/in guide.
| #### Examples ##### Example 1

```xml
<CustomScript>
  <Expression><![CDATA[
foreach (DataRow row in table.Rows)
{
    string json = row.Field<string>("FieldExtensions");
    var parsed = JArray.Parse(json);
    foreach (DataColumn col in table.Columns)
    {
        if (row[col] == DBNull.Value && table.Columns.Contains(col.ColumnName + "@field.id"))
        {
            string fieldId = row.Field<string>(col.ColumnName + "@field.id");
            var nullReasonType = parsed.SelectToken($"$[?(@.key=='{fieldId}')].Value.nullReasonType").ToString();
            row[col] = nullReasonType;
        }
    }
}
table.Columns.Remove("FieldExtensions");
return table;
]]></Expression>
</CustomScript>
```

##### Example 2

```xml
<CustomScript>
  <Expression><![CDATA[
return table.AsEnumerable()
    .GroupBy(r => new { Col1 = r["ISO3"] })
    .Select(g =>
    {
        var sum = g.Select((a, i) => int.Parse(a["poP_LT15"].ToString())).Sum();
        var row = g.First(); // take first row for the group
        row["poP_LT15"] = sum; // replace value with the sum of all grouped rows
        return row;
    })
    .CopyToDataTable();
]]></Expression>
</CustomScript>
```

### DateAdd Add time to or subtract time from date values in 1 or more columns #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `DatePart` | Attribute | - | Date part to be added. Either 'Year', 'Month', etc.
Possible values: Year, Month, Day, Hour, Minute, Second | | `InputColumn` | Attribute | - | Name of the input column. Supports a comma-separated list of columns | | `IssueLevel` | Attribute | - | Turn on / off issues generation. None: ignore errors; Warning: warning value issues will be generated; Error: error value issue will be generated. Default is Error.
Possible values: None, Warning, Error | | `Error` | Element | - | Generate error value issue | | `None` | Element | - | Ignore any errors | | `Warning` | Element | - | Generate warning issue | | `OutputColumn` | Attribute | - | Name of the output column. Supports a comma-separated list of columns. If empty, InputColumn will be overwritten | | `ValueToAdd` | Attribute | - | Option 1: Value to be added to the date. To subtract a value, use a negative number. | | `ValueToAddFromColumn` | Attribute | - | Option 2 (New): Column name whose value will be added to the date. | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ##### Example 3 ```xml ``` ### DateDiff Calculates the difference between dates. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Date1` | Attribute | - | Reference Date column | | `Date2` | Attribute | - | Date column to subtract from Date1 | | `DiffUnit` | Attribute | - | Difference unit to calculate. Default value is "Day"
Possible values: Day, DayOfYear, Hour, Minute, Month, Quarter, Second, Weekday, WeekOfYear, Year | | `Name` | Attribute | - | Name of the output column to create with the calculated differences. | #### Examples ##### Example 1 ```xml ``` ### DbLookup Translates values in a column similar to the way Excel VLOOKUP works, except that the lookup uses an SQL Database. The translated value is put into ResultColumn, which can be a new or existing column (it can also be the same as LookupColumn, which overwrites the original values). If a value is not found in the database table, NULL is returned and a Validation issue is created in the context's issues list. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `DbConnectionName` | Attribute | - | Name of the database connection string stored in the web.config file. | | `DbFilter` | Attribute | - | Filter to apply to select statement when retrieving data. | | `DbLookupColumn` | Attribute | - | Name of the column in the database table in which the lookup is performed. | | `DbResultColumn` | Attribute | - | Name of the column or comma-separated columns in the database table containing the resulting value to be returned. | | `DbTableName` | Attribute | - | Name of the database schema.table in which the lookup is performed. | | `InCaseInputEmpty` | Attribute | - | In case the input value from LookupColumn is empty, return this value instead | | `InCaseNotFound` | Attribute | - | If the lookup was unsuccessful, return this value instead | | `LookupColumn` | Attribute | - | Name of the column in the table being transformed containing the value to be looked up. Currently, only a single key is supported (ie only a single field should be specified, not a list of fields). | | `RegisterMissingAsIssues` | Attribute | - | Whether to generate Process Issues. Default is true.
| | `ResultColumn` | Attribute | - | Name of the column or comma-separated columns in the table being transformed in which to put the resulting value. Existing values, if any, are overwritten. If the column does not exist, it will be created (as type String). | | `Tag` | Attribute | - | Tag is a key to group process issues together | #### Examples ##### Example 1 ```xml ``` ### FindReplace Find and replace text in a column or whole table. The Find text is looked for in the Columns and replaced with ReplaceWith. The search is case insensitive. By default, the find is a wild-card search which will look for the Find text anywhere in the data's values. This can be changed with the Mode setting. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of the column to perform the find-replace operation in. Supports a comma-separated list of column names. If not defined, all columns of the table are processed. | | `Find` | Attribute | - | Value to find. Find="" will match null or empty cells.
To replace the NewLine character, use xml encoded Find="&#10;". | | `Mode` | Attribute | - | Find mode. One of the following values (anywhere is the default)
Possible values: ANYWHERE, ALL, START, END | | `ALL` | Element | - | Match all of a value. | | `ANYWHERE` | Element | - | Match all or part of a value (wildcard match). | | `END` | Element | - | Match at end of value. | | `START` | Element | - | Match at start of value. | | `ReplaceWith` | Attribute | - | New value to replace by. | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### FormatDate Given a column of type Date or a string representation of a date, this transformer will create a new column with a representation of the date in a custom string format. For example, for an input column of type Date, a new column called DateISO can be created containing an ISO string representation of the date. If neither the InputFormat nor InputCulture is specified, the following automatic fixes are attempted: extracting dates with non-standard separators (expected order either day month year or year month day), converting Excel dates stored as numbers to proper dates, converting Hijri dates to Gregorian (if year is below 1600), and heuristically switching date and month (by considering months greater than 12 to be days instead). #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `InputColumn` | Attribute | - | Name of input column serving as the source of the data to provide the Format operation on. | | `InputCulture` | Attribute | - | Optional, but useful when data is loaded as strings, as from Excel workbooks, and the dates contain culture-specific month strings such as "june" or "juin". The value of this property should be a culture string as listed in the "Language Culture Name" column on this reference page: https://msdn.microsoft.com/en-us/library/ee825488(v=cs.20).aspx For example, for English month names a value of "en-gb" or "en-us" could be provided. For French month names like "juin" a value of "fr-fr" or "fr-ch" could be provided.

| | `InputFormat` | Attribute | - | Optional. If not provided the input format is automatically determined for each value, thus adapting to a mixture of date formats in a column. Hijri dates are also automatically determined. Leaving InputFormat blank is best when the input format is not guaranteed. If provided, all values in the InputColumn are expected to have this format and any that do not are considered a warning/error. For example "yyyy-MM-dd" for ISO date format, "UnixTimeSeconds" for Unix timestamps. See the Microsoft learning guide on date formats: https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings . | | `OutputColumn` | Attribute | - | Name of the output column into which the custom formatted string will be stored. If it exists, it will be overwritten. If it does not exist, it will be created. | | `OutputCulture` | Attribute | - | Useful when month strings such as "june" (English) or "juin" (French) are needed in the output date format. By using both InputCulture and OutputCulture date strings can be translated. For example, if the input month is "june" we can translate that to French by providing values of "en-gb" for InputCulture and "fr-fr" for OutputCulture.

A list of culture strings is available here: https://msdn.microsoft.com/en-us/library/ee825488(v=cs.20).aspx | | `OutputFormat` | Attribute | - | Format string used to format the date. For example "yyyy-MM-dd" for ISO date format, "UnixTimeSeconds" for Unix timestamps. See the Microsoft learning guide on date formats: https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings. | #### Examples ##### Example 1 ```xml ``` ### GenerateHash Generates a hash of the specified columns and stores it in a new column. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Algo` | Attribute | - | Defines the algorithm to use to compute the hash.
Possible values: md5 | | `md5` | Element | - | md5 checksum | | `Columns` | Attribute | - | Required. Comma separated list of columns to concatenate and hash. Use "*" to hash all columns of the row. | | `OutputColumn` | Attribute | - | Required. The name of the column to store the hash in. If the column does not exist, it is added to the table. If it exists, it will be overwritten. | #### Examples ##### Example 1 ```xml ``` ### GenerateID Generates unique IDs in a column. The IDs can either be integers or GUIDs. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of column in which to create unique IDs. If the column does not exist, it is added to the table. If the column exists, any existing data is overwritten with the new IDs. | | `IDType` | Attribute | - | Defines the type of ID to generate. Possible values (case insensitive): "Integer", "Guid" (default)
Possible values: GUID, INTEGER | #### Examples ##### Example 1 ```xml ``` ### HierarchyLookup Allows a lookup on multiple columns where a hierarchical link exists between values (like finding a province name within a particular country only). See the documentation article on how to use it. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `FuzzyMatching` | Attribute | - | When true, special characters and whitespaces are ignored when matching source values to values in the lookup table. Special characters will be canonized to a space. Literal characters with diacritics will be canonized to their version without accents. Values will be trimmed. | | `LookupTable` | Attribute | - | REF table from the same mart to look up input values. | | `RegisterMissingAsIssues` | Attribute | - | Issues are raised when non-empty source values (at any level) could not be resolved. Possible impacts: None (do not generate issues), Warning (default), ErrorRemoveRows (generate error issue and row discarded). Default to "Warning".
Possible values: None, Warning, ErrorRemoveRows | `Tag` | Attribute | - | Tag is a key to group lookup issues together | | `Lookups` | Element | - | List of Lookup elements. | | `LookupResult` | Attribute | - | Name of the REF table column value ID (ex: PLACE_ID) | | `ParentFilterUseFullHierarchy` | Attribute | - | Whether to use all parent levels in the generated parent filter to scope the children rows. Default false is enough if the LookupResult parent identifier is unique (e.g. GUID or 'PAK/BALOCHISTAN/BARKHAN'). Set to true if the resolved parent identifier is not unique (e.g. ADM1_CODE = '004' where '004' exists in multiple countries). Then the country code will be appended in the parent filter. | | `SourceFilter` | Attribute | - | Filter to apply to source table. | | `SourceResult` | Attribute | - | Optional: Deepest resolved Lookup result | | `TryIfEmptyParent` | Attribute | - | For example, if missing Admin0, allow to resolve using the child Admin1 | | `Lookup` | Element (Multiple) | - | Each LookupElement can be resolved separately from top to bottom | | `Lookup` | Attribute | - | Reference data column to match with (ex: REF_PLACES.Admin0_Name) | | `LookupFilter` | Attribute | - | Optional filtering of REF data ex: Level=0 | | `LookupResult` | Attribute | - | Name of the REF table column value ID (ex: PLACE_ID) | | `Source` | Attribute | - | Raw text to resolve | | `SourceResult` | Attribute | - | Optional: will replace parent section attribute Result Name of the result column in which the resolved Id will be stored | | `LookupInterval` | Element | - | Allows to filter matched lookup rows against a value defined in the source row. Useful to check that lookup.StartDate < row.Source < lookup.EndDate (dates must be in ISO format yyyy-MM-dd to be sorted alphabetically) LookupInterval can only be placed as the first child of <Lookups> element and there can only be one node of type LookupInterval.
| | `LookupFrom` | Attribute | - | Reference data column to match with (ex: REF_PLACES.StartDate) | | `LookupFromCoalesce` | Attribute | - | Value to use if LookupFrom is null (ex: "1970-01-01") | | `LookupTo` | Attribute | - | Reference data column to match with (ex: REF_PLACES.EndDate) | | `LookupToCoalesce` | Attribute | - | Value to use if LookupTo is null (ex: "9999-01-01") | | `SkipIntervalInIssues` | Attribute | - | Temp workaround while waiting for https://github.com/WorldHealthOrganization/xmart/issues/4849 Group all issues with a different interval in a single issue in the UI. Disclaimer: the resolution result could be wrong for some rows, use with caution! Set to False to roll back to the previous behavior. | | `Source` | Attribute | - | Raw text to resolve | | `SynonymTable` | Element | - | SynonymTable is an optional element. If SynonymTable is defined, then Name, FromColumn and ToColumn are required (Filter is optional). | | `Code` | Attribute | - | Default to "REF_Synonyms" | | `Filter` | Attribute | - | Can be "Category = 'Geoplaces'" | | `FromColumn` | Attribute | - | Default to "In" | | `MartCode` | Attribute | - | Default to "SameAsLookup"
Possible values: SameAsLookup, CurrentMart | | `MultipleMatchesBehavior` | Attribute | - | New behavior in 4.24.18. Defaults to UseClosestMatchToDisambiguate: the records with the least number of synonyms used will be selected as a match. Will still trigger an "Ambiguous" issue if multiple records with the same number of synonyms are found. Set to AlwaysGenerateAmbiguousIssue to always generate a lookup issue if multiple reference records could be matched thanks to synonym permutations.
Possible values: UseClosestMatchToDisambiguate, AlwaysGenerateAmbiguousIssue | | `ToColumn` | Attribute | - | Default to "Out" | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### JoinTables The JoinTables command performs an inner join between two tables. It returns the result from LeftTable join RightTable on OnLeftColumn = OnRightColumn. To join on multiple columns, new columns can be created in the tables with AddColumn FillWithExpression to concatenate the join keys, in the desired order, into a single column. The original tables are deleted after the join command (to keep them, set KeepTables to true). #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `JoinType` | Attribute | - | Optional: whether to perform an INNER JOIN or a LEFT JOIN. Inner: rows from LeftTable that don't have a match in RightTable are removed. Left: rows from LeftTable that don't have a match in RightTable are kept but values in RightColumns are NULL. Defaults to "Inner".
Possible values: Inner, Left | | `KeepTables` | Attribute | - | If true, will not remove the Left and Right tables from the context. Defaults to "false". | | `LeftColumns` | Attribute | - | Optional: comma-separated list of column names to include in the result output table. Defaults to "*", ie all columns from LeftTable. | | `LeftTable` | Attribute | - | Required: Name of the existing main table to consider | | `OnLeftColumn` | Attribute | - | Required: column from LeftTable to join on LeftColumn = RightColumn | | `OnRightColumn` | Attribute | - | Required: column from RightTable to join on LeftColumn = RightColumn | | `OutputTableName` | Attribute | - | Name of new table (required). If the table exists, an error will be raised (RemoveTable can be used to remove a table). | | `RightColumns` | Attribute | - | Optional: comma-separated list of column names to include in the result output table. If a column from RightTable has the same name as a column from LeftTable, it will be suffixed by 2 in the OutputTable. Defaults to "*", ie all columns from RightTable. | | `RightTable` | Attribute | - | Required: Name of table to join to LeftTable. If a column from RightTable has the same name as a column from LeftTable, it will be suffixed by 2 in the OutputTable | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### MartLookup Translates values in a column similar to the way Excel VLOOKUP works, except that the lookup uses an existing mart table. The translated value is put into SourceResultColumn, which can be a new or existing column (if existing, values will be overwritten). If a value is not found in the lookup table, NULL is returned and a Validation issue is created in the context's issues list.
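A minimal sketch of the lookup described above, assuming a hypothetical REF_COUNTRY reference table; the table and column names here are illustrative, while the attribute names follow the MartLookup properties documented in this section:

```xml
<!-- resolve COUNTRY text in the source table to a reference row identifier -->
<MartLookup LookupTable="REF_COUNTRY"
            LookupColumns="COUNTRY_NAME"
            SourceColumns="COUNTRY"
            SourceResultColumn="COUNTRY_ID"
            RegisterMissingAsIssues="true"
            Tag="country-lookup" />
```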
#### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `FuzzyMatching` | Attribute | - | If true, will try to match values by replacing characters to their normalized version (e.g: é -> e) and special characters to spaces. | | `LookupColumns` | Attribute | - | Columns in the lookup table that will be matched with the SourceColumns | | `LookupFilter` | Attribute | - | Filter to apply to select statement when retrieving data. Use [lookup] and [source] to refer to the lookup and source table respectively. | | `LookupResultColumn` | Attribute | - | Name of the column or comma-separated columns in the database table containing the resulting value to be returned. Override the default value Sys_Id | | `LookupTable` | Attribute | - | Table containing the Sys_ID. Generally a REF table. | | `MartCode` | Attribute | - | If defined, will fetch data from the target mart. | | `RegisterMissingAsIssues` | Attribute | - | Whether to generate Process Issues for unresolved source values. | | `SourceColumns` | Attribute | - | Columns in the source table to use to match the LookupColumns | | `SourceFilter` | Attribute | - | Filter to apply to source table. | | `SourceResultColumn` | Attribute | - | Where the found result should be set to in the current in-memory table. Supports comma-separated columns | | `Tag` | Attribute | - | Tag is a key to group lookup issues together | | `SynonymTable` | Element | - | SynonymTable is optional element. If SynonymTable is defined, then Name, FromColumn and ToColumn are required (Filter is optional). | | `Code` | Attribute | - | Default to "REF_Synonyms" | | `Filter` | Attribute | - | Can be "Category = 'Geoplaces'" | | `FromColumn` | Attribute | - | Default to "In" | | `MartCode` | Attribute | - | Default to "SameAsLookup"
Possible values: SameAsLookup, CurrentMart | | `MultipleMatchesBehavior` | Attribute | - | New behavior in 4.24.18. Defaults to UseClosestMatchToDisambiguate: the record with the fewest synonyms used will be selected as the match. An "Ambiguous" issue is still triggered if multiple records with the same number of synonyms are found. Set to AlwaysGenerateAmbiguousIssue to always generate a lookup issue whenever multiple reference records could be matched through synonym permutations.
Possible values: UseClosestMatchToDisambiguate, AlwaysGenerateAmbiguousIssue | | `ToColumn` | Attribute | - | Default to "Out" | #### Examples ##### Example 1 ```xml ``` ### MergeTables Merges multiple tables into one in a UNION-like manner. Columns that exist in some tables but not in others are created in the final table. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `KeepTables` | Attribute | - | If true, will not drop the merged tables | | `MergedTableName` | Attribute | - | Table name given to the resulting merged table. | | `SkipTableNames` | Attribute | - | If TableNames is "*" (all tables), then SkipTableNames can be set to a comma-separated list of tablenames to skip (not merge). A "*" character throws an exception. Otherwise enter each table separated by a comma. WhiteSpace will be trimmed. Braces ([]) may be used to protect spaces and commas in table names. | | `TableNames` | Attribute | - | Comma-separated list of table names to merge. An empty string indicates that no tables should be merged (default). An "*" character indicates that all tables should be merged. Otherwise enter each table separated by a comma. WhiteSpace will be trimmed. Braces ([]) may be used to protect spaces and commas in table names. | | `WorksheetNameToColumn` | Attribute | - | If defined, the name of all extracted tables/worksheets is put into this column, so it can be treated like data. The column must not already exist. | ### ParseJson When a cell contains some JSON data, parses the JSON object and retrieves the desired property. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Column of the table that contains some JSON | | `ErrorIfColumnDoesNotExist` | Attribute | - | Critical error if the column is not found in the active table if the table is not empty. Default is true. Set to false to ignore the column if it does not exist. 
| | `ErrorIfInvalidJson` | Attribute | - | Critical error if the column is not a valid JSON object. Default is true. Set to false to return the invalid value in the output column if it cannot be parsed. | | `OutputColumnName` | Attribute | - | Optional: if only one property is extracted, use this name for the produced column | | `Prefix` | Attribute | - | Optional: how to name the produced column(s). Defaults to the Column + underscore + the property name | | `Properties` | Attribute | - | Comma-separated list of properties to extract, or * (star) for all | | `RemoveColumnAfterParse` | Attribute | - | Optional: set to false to keep the original column containing the JSON, or true to remove the column when no longer needed. Default is true. | #### Examples ##### Example 1 ```xml complexTypes ``` ### PivotColumns Reshapes long data to a wide format by pivoting a column containing column headers and a column containing column values. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `HeadersColumn` | Attribute | - | Column containing the column headers to be created. Only a single column can be specified. | | `ValuesColumn` | Attribute | - | Column containing the values to be put in the newly created columns. Only a single column can be specified. | #### Examples ##### Example 1 ```xml ``` ### RemoveColumn Removes a column from the active table(s). #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Name` | Attribute | - | Name of column to remove. Supports comma-separated values to remove multiple columns in one step. | #### Examples ##### Example 1 ```xml ``` ### RemoveDuplicates Removes duplicate rows in a table based on all or a subset of key columns.
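As an illustrative sketch only (the column names are hypothetical, using the `Columns` attribute documented below), deduplicating on a subset of key columns could look like:

```xml
<!-- Hypothetical sketch: rows sharing the same COUNTRY and YEAR are treated as duplicates -->
<RemoveDuplicates Columns="COUNTRY, YEAR" />
```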
#### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Columns` | Attribute | - | Optional comma-separated list of column names to include in the deduplication operation as 'key' columns. An empty value (default) indicates that all columns will be used. | ### RemoveRowsByFilter Removes rows from the active table(s) based on a filter. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `FilterExpression` | Element | - | Filter expression (ie, row filter, where clause) which selects rows to remove. This must be of the same syntax as described by Microsoft on this page: https://learn.microsoft.com/en-us/dotnet/fundamentals/runtime-libraries/system-data-datacolumn-expression

Note that strings must be enclosed in single quotation marks (not double). Column names that contain spaces must be wrapped by brackets ([ and ]). | #### Examples ##### Example 1 ```xml Site IS NULL AND [Epid. No.] IS NULL ``` ##### Example 2 ```xml Date < '${Now.DateUtc}' ``` ### RemoveRowsByRowNumber Removes rows from the active table(s) based on range of row numbers. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `FromRow` | Attribute | - | Removes rows beginning with (including) this row number. The first row of the table begins with 0 (zero-based). Default is 0 (the first row). | | `ToRow` | Attribute | - | Removes rows up to (and including) this row number. The first row of the table begins with 0 (zero-based). Default is -1 which indicates the last row of the table. | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ##### Example 3 ```xml ``` ### RemoveTable Removes an in-memory table from the processing context, freeing up memory. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `TableName` | Attribute | - | Name of table to remove. Supports comma-separated values to remove multiple tables in one step. | ### RenameColumn Renames an existing column. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ErrorIfColumnDoesNotExist` | Attribute | - | Critical error if the column is not found in the active table. Default is true. Set to false to ignore the column if it does not exist. | | `From` | Attribute | - | Option 1: Name of the existing column to rename. | | `FromPattern` | Attribute | - | Option 2: Find a column to rename using a regex pattern in the existing columns of the table. | | `To` | Attribute | - | New name of the column. 
| #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### RenameColumnsByLookup Renames columns in a data-driven way based on mappings in a lookup table. The lookup table is any normal table that has one column with the input/original column names and a second column containing the output/standard column names. By default, all columns of the currently active table are potentially renamed. Columns are not renamed if they are not found in the lookup table. In case multiple lookup values are found for one column, the column will be duplicated along with its original values. To prevent renaming of some columns, use the SkipColumns property. To limit which rows in the lookup table should be used for the operation, use the LookupFilter property. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `InputNameColumn` | Attribute | - | Column in the lookup table containing the input/original column names. | | `LookupFilter` | Attribute | - | SQL filter to be applied on the lookup table to limit which rows should be used. | | `LookupTable` | Attribute | - | Table containing the mapping columns. Generally a REF table. | | `MartCode` | Attribute | - | If defined, will use the lookup table from the target mart. | | `OutputNameColumn` | Attribute | - | Column in the lookup table containing the output/standard column names. | | `RegisterMissingAsIssues` | Attribute | - | If a source column is not found in the lookup table, an issue will be generated
Possible values: None, Warning, Error | | `Error` | Element | - | Generate error value issue | | `None` | Element | - | Ignore any errors | | `Warning` | Element | - | Generate warning issue | | `SkipColumns` | Attribute | - | Comma-separated list of input/original column names to preserve. | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ##### Example 3 ```xml ``` ### Round Rounds numerical data, very similar to Excel's ROUND function. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of column with numbers to round. Null values are ignored and non-numerical values (ie text values) are also ignored. Rounding occurs in this column (use CopyColumn to make a copy of the original values if needed). | | `Digits` | Attribute | - | Behaves like Excel's "Num_digits" parameter. Positive integer: use a positive integer (such as 1, 2, and so on) to specify the number of digits after the decimal place to which you want to round. For example, entering "3" would round to three places after the decimal point. Zero: enter "0" to round to the nearest integer. Negative integer: use a negative integer (such as -1, -2, and so on) to round to the left of the decimal place. For example, if you were rounding the number 328.25 and input "-1" here, it would round your number to 330. | ### SetColumnDataType Strictly sets the datatype of one or more columns so that invalid values are not allowed. By default, all columns of in-memory pipeline tables are of type string so that invalid data can still be processed. Using this command can simplify later operations in the pipeline because casting and, optionally, null-checking are no longer necessary. For example, you can eliminate casting and optionally null-checking when using AddColumn.FillWithExpression.
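As an illustrative sketch only (the column names are hypothetical, using the attributes documented below):

```xml
<!-- Hypothetical sketch: enforce an integer YEAR and a decimal VALUE;
     values that cannot be converted raise a Warning issue -->
<SetColumnDataType ColumnNames="YEAR" DataType="INTEGER" RegisterInvalidAsIssues="Warning" />
<SetColumnDataType ColumnNames="VALUE" DataType="DECIMAL" RegisterInvalidAsIssues="Warning" />
```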
#### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ColumnNames` | Attribute | - | Names of 1 or more columns to set the DataType of. Separate multiple columns by a comma. | | `DataType` | Attribute | - | List of supported datatypes: BIG_INTEGER, BOOLEAN, DATE, DATE_TIME, DECIMAL, GUID, INTEGER, TEXT
Possible values: BIG_INTEGER, BOOLEAN, DATE, DATE_TIME, DECIMAL, GUID, INTEGER, TEXT | | `OutputColumnNames` | Attribute | - | (optional) Names of new columns to create corresponding in sequence to the values in ColumnNames. If not provided, the data type of the existing columns is set in place, rather than creating new columns. | | `RegisterInvalidAsIssues` | Attribute | - | Issue will be raised when invalid value conversion
Possible values: None, Warning, Error | | `Error` | Element | - | Generate error value issue | | `None` | Element | - | Ignore any errors | | `Warning` | Element | - | Generate warning issue | | `ValueIfInvalid` | Attribute | - | (optional) A valid value to replace the invalid value by. Default is null. | | `ValueIfNull` | Attribute | - | (optional) Value to replace null values by. If not provided, null values remain null. If ValueIfInvalid is defined, null values will be replaced by that value. | #### Examples ##### Example 1 ```xml ``` ### SetContext A "transformer" that doesn't transform, but sets the context of the DataCleaner transformations. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ActiveTables` | Attribute | - | Sets the active table if there is more than one table. This may also be set to "*" to indicate that all tables are active. "Active" means that they will be transformed by the transformers following this SetContext statement in the config file. | ### SplitCellsToRows Creates new rows based on a newline (default) or a defined separator #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Column containing values to be split by the separator | | `Filter` | Attribute | - | Optional: DataColumn expression to select a subset of rows of the active table. Only rows that match this filter will be split. Rows that don't match will be left untouched. | | `Separator` | Attribute | - | Separator to split values in a cell. Special case: Use Separator="json_array" to parse a json array (arrays of raw strings or objects are supported). Default is the newline character (generates a new row for each line) | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### SplitColumnByRegEx Splits a column based on capture groups (named or indexed) in a regular expression.
The named groups may correspond to existing or new columns. If the columns exist, their contents will be overwritten. The column containing the data to be split may not also be a name of a group. #### Remarks There are many RegEx language references available such as this one: http://regexstorm.net/reference

A good dotnet regular expression tester is invaluable, such as this one: http://regexstorm.net/tester

If editing the Xml datacleaner config file directly, the regular expression needs to be Xml Encoded so that all the '<' and '>' characters, which are common in regular expressions, do not confuse the xml. This is easy using a tool such as this online at: http://coderstoolbox.net/string/#!encoding=xml&action=encode&charset=us_ascii #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of the column serving as the source of the data to perform the split operation on. If not defined, all columns of the table are processed. | | `RegEx` | Attribute | - | A regular expression containing named groups. If the regular expression does not contain any named groups, an exception is thrown. No named group can be the same as the name of Column. Note that unnamed capture groups also become columns, named "1", "2", etc. | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### SplitDate Splits a date field into separate Year, Month and Day columns #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of existing column containing the date. If not defined, all columns of the table are processed. | | `Day` | Attribute | - | Optional. Name of the column to accept the day (default = "Day"). If it doesn't exist, it will be created. | | `ImpactOnError` | Attribute | - | Optional. By default, any invalid date will trigger a critical error (aka Error_RemoveBatch). Change the impact to keep processing and choose what to do with invalid dates: Error_RemoveRow: remove the row that contains the invalid date and log a process issue with severity Error. Error_RemoveValue: fill the new columns with NULL and log a process issue with severity Error. Warning: fill the new columns with NULL and log a process issue with severity Warning
Possible values: Warning, Error_RemoveValue, Error_RemoveRow, Error_RemoveColumn, Error_RemoveTable, Error_RemoveBatch | | `Error_RemoveBatch` | Element | - | All tables removed. | | `Error_RemoveColumn` | Element | - | Column removed. | | `Error_RemoveRow` | Element | - | Record removed. | | `Error_RemoveTable` | Element | - | Table removed. | | `Error_RemoveValue` | Element | - | Value replaced by NULL. | | `Warning` | Element | - | Raises a warning but no change made to data. | | `Month` | Attribute | - | Optional. Name of the column to accept the month (default = "Month"). If it doesn't exist, it will be created. | | `Year` | Attribute | - | Optional. Name of the column to accept the year (default = "Year"). If it doesn't exist, it will be created. | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### SynonymLookup Obsolete command. Config item to perform a synonym lookup. It is a shorthand notation for an AddColumn and a DbLookup #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `DbConnectionName` | Attribute | - | The database connection used for the loading | | `DbLookupColumn` | Attribute | - | Search in this column in the DbTableName | | `DbResultColumn` | Attribute | - | Return the value from this column if the lookup was successful | | `DbTableName` | Attribute | - | The table to load from | | `Dimension` | Attribute | - | Only Xmart uses this field to populate the DbTableName property if that is empty | | `InCaseInputEmpty` | Attribute | - | In case the input value is empty, return this value instead | | `InCaseNotFound` | Attribute | - | If the lookup was unsuccessful, return this value instead | | `LookupColumn` | Attribute | - | Search for the value in this field | | `ResultColumn` | Attribute | - | Store the lookup result in this column The column will be added to the datatable. 
| ### TableLookup Translates values in a column similar to the way Excel VLOOKUP works, except that the lookup happens in an in-memory pipeline table (before anything is written to the database). The translated value is put into SourceResult, which can be a new or existing column (it can also be the same as the source column, in which case the original values are overwritten). If a value is not found in the table, NULL is returned (no Validation issue generated). #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `InCaseInputEmpty` | Attribute | - | Not supported for TableLookup | | `InCaseNotFound` | Attribute | - | If the lookup was unsuccessful, return this value instead | | `LookupColumns` | Attribute | - | Name of the columns in the in-memory lookup table containing the values to be looked up. | | `LookupFilter` | Attribute | - | Optional: Filter to apply to the in-memory table. | | `LookupResult` | Attribute | - | Name of the column or comma-separated columns in the in-memory table containing the resulting value to be returned. | | `LookupTable` | Attribute | - | Name of the in-memory table in which the lookup is performed. | | `RegisterMissingAsIssues` | Attribute | - | Not supported yet. Whether to generate Process Issues. Default is false. | | `SourceColumns` | Attribute | - | Name of the columns in the table being transformed that are matched against the LookupColumns. | | `SourceFilter` | Attribute | - | Filter to apply to source table. | | `SourceResult` | Attribute | - | Name of the column or comma-separated columns in the table being transformed in which to put the resulting value. Existing values, if any, are overwritten. If the column does not exist, it will be created (as type String). | | `Tag` | Attribute | - | Tag is a key to group lookup issues together | ### TransposeTable Transpose a table by switching rows and columns. Rows become columns, columns become rows.

If using the GetExcel command, it is recommended to use the SystemHeaders="true" option, to prevent the first row from becoming column headers and potentially being renamed to enforce column uniqueness. #### Examples ##### Example 1 ```xml ``` ### UnpivotColumns Unpivots one or more columns in a table (wide to long). This command will also "disaggregate" column headers into multiple new columns. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Mode` | Attribute | - | Behavior when a column to unpivot is not found. One of the following values (ERROR is the default).
  • ERROR = Raise a critical error and stop immediately.
  • WARN = Raise a warning and continue.
  • NOTHING = Silently continue.
| | `UnpivotNulls` | Attribute | - | If true, null values will be unpivoted, ie a row will be created where the value in ValueToColumn will be Null. Default is false. | | `UnpivotWhitespace` | Attribute | - | If true, whitespace values will be unpivoted, ie a row will be created where the value in ValueToColumn will be whitespace. Default is false. | | `Column` | Element (Multiple) | - | List of columns to be unpivoted. | | `Name` | Attribute | - | Name of the column to unpivot. | | `ValuesToColumn` | Attribute | - | New column added to table in which the values will be put. | | `Unpivot` | Element (Multiple) | - | List of unpivoting details. | | `ToColumn` | Attribute | - | Unpivot to this column | | `Value` | Attribute | - | Put this value in the ToColumn | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### UnpivotSimple Unpivots one or more columns in a table (wide to long). The values in a given set of columns will be put into a single column. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Columns` | Attribute | - | Comma-separated list of column names to unpivot. Each column name extracted from this list will be white-space trimmed. If a column to unpivot is not present in the table, no error will be produced. This provides flexibility for variable column names, such as a list of years. | | `FixedColumns` | Attribute | - | Comma-separated list of fixed column names which specifies a column list to not pivot. All other columns would be pivoted. Each column name extracted from this list will be white-space trimmed. | | `HeaderToColumn` | Attribute | - | Name of column to put the column header text into. For example, if Columns are years like "2015, 2014, 2013, 2012" HeaderToColumn could be set to "Year" to accept values 2015, 2014, 2013, etc. If the column does not exist it will be created. If it exists its data will be overwritten. 
| | `UnpivotNulls` | Attribute | - | If true, null values will be unpivoted, ie a row will be created where the value in ValueToColumn will be Null. Default is false. | | `UnpivotWhitespace` | Attribute | - | If true, whitespace values will be unpivoted, ie a row will be created where the value in ValueToColumn will be whitespace. Default is false. | | `ValueToColumn` | Attribute | - | Name of column to put pivoted cell values into. If the column does not exist it will be created. If it exists it will be overwritten. | #### Examples ##### Example 1 ```xml ``` ##### Example 2 ```xml ``` ### ValuesToColumns Gets a column name from the row data and populates it with a value from the row data. Example: with original columns ColName, ColVal and data:

- ColA, Val1
- ColA, Val2
- ColB, Val1

ValuesToColumns would produce ColName, ColVal, ColA, ColB:

- ColA, Val1, Val1, NULL
- ColA, Val2, Val2, NULL
- ColB, Val1, NULL, Val1

Multiple instances of ValuesToColumns can exist to populate multiple columns for a given row. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ColumnNamesInColumn` | Attribute | - | The original column that contains the name of the column to populate. If it doesn't exist, it will be added to the DataTable. The data type will be string | | `ValuesInColumn` | Attribute | - | The original column that contains the value to put into the column being populated. If it doesn't exist, it will be added to the DataTable. The data type will be string | #### Examples ##### Example 1 ```xml ``` ## Validate List of Test commands to run on the staging data before being inserted into the store table ### TestColumnsExist Verifies that a list of columns exists in the table. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Columns` | Attribute | - | Comma-separated list of column names to verify the existence of.
| | `ContextColumns` | Attribute | - | If an error occurs, controls which columns are displayed to the user. Business primary keys (IS_PRIMARY_KEY=True) are automatically displayed. No quoting should be used. Multiple columns should be separated by a comma with no spaces. Example: ContextColumns=FIELD_TYPE_CODE,FK_TABLE_CODE | | `Impact` | Attribute | - | Impact of failed validation.
Possible values: Warning, Error_RemoveValue, Error_RemoveRow, Error_RemoveColumn, Error_RemoveTable, Error_RemoveBatch | | `Error_RemoveBatch` | Element | - | All tables removed. | | `Error_RemoveColumn` | Element | - | Column removed. | | `Error_RemoveRow` | Element | - | Record removed. | | `Error_RemoveTable` | Element | - | Table removed. | | `Error_RemoveValue` | Element | - | Value replaced by NULL. | | `Warning` | Element | - | Raises a warning but no change made to data. | | `MessageIfInvalid` | Attribute | - | User-friendly message to explain the expected format if pattern does not match. Can contain {0} to be replaced by invalid value | | `Tag` | Attribute | - | Missing columns | ### TestNotEmpty Tests for the existence of required values. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of column to validate | | `ContextColumns` | Attribute | - | If an error occurs, controls which columns are displayed to the user. Business primary keys (IS_PRIMARY_KEY=True) are automatically displayed. No quoting should be used. Multiple columns should be separated by a comma with no spaces. Example: ContextColumns=FIELD_TYPE_CODE,FK_TABLE_CODE | | `Impact` | Attribute | - | Impact of failed validation.
Possible values: Warning, Error_RemoveValue, Error_RemoveRow, Error_RemoveColumn, Error_RemoveTable, Error_RemoveBatch | | `Error_RemoveBatch` | Element | - | All tables removed. | | `Error_RemoveColumn` | Element | - | Column removed. | | `Error_RemoveRow` | Element | - | Record removed. | | `Error_RemoveTable` | Element | - | Table removed. | | `Error_RemoveValue` | Element | - | Value replaced by NULL. | | `Warning` | Element | - | Raises a warning but no change made to data. | | `MessageIfInvalid` | Attribute | - | User-friendly message to explain the expected format if pattern does not match. Can contain {0} to be replaced by invalid value | | `Mode` | Attribute | - | How "missing" values are defined. Possible values:
  • "Nulls", only NULL values are considered missing.
  • "NullOrEmpty", NULL or empty strings are considered missing
  • "NullOrWhitespace", NULL, empty strings or strings containing only whitespace (spaces, tabs, carriage-returns, etc.) are considered missing.

Possible values: Nulls, NullOrEmpty, NullOrWhitespace | | `NullOrEmpty` | Element | - | NULL or empty strings are considered missing | | `NullOrWhitespace` | Element | - | NULL, empty strings or whitespace-only strings are considered missing | | `Nulls` | Element | - | Only NULL values are considered missing | | `Tag` | Attribute | - | Test commands can be tagged with a label, to support meaningful groupings of issues in the user interface. | #### Examples ##### Example 1 ```xml ``` ### TestPattern Verifies that a value matches a regular expression pattern #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Column` | Attribute | - | Name of column to validate | | `ContextColumns` | Attribute | - | If an error occurs, controls which columns are displayed to the user. Business primary keys (IS_PRIMARY_KEY=True) are automatically displayed. No quoting should be used. Multiple columns should be separated by a comma with no spaces. Example: ContextColumns=FIELD_TYPE_CODE,FK_TABLE_CODE | | `Impact` | Attribute | - | Impact of failed validation.
Possible values: Warning, Error_RemoveValue, Error_RemoveRow, Error_RemoveColumn, Error_RemoveTable, Error_RemoveBatch | | `Error_RemoveBatch` | Element | - | All tables removed. | | `Error_RemoveColumn` | Element | - | Column removed. | | `Error_RemoveRow` | Element | - | Record removed. | | `Error_RemoveTable` | Element | - | Table removed. | | `Error_RemoveValue` | Element | - | Value replaced by NULL. | | `Warning` | Element | - | Raises a warning but no change made to data. | | `MessageIfInvalid` | Attribute | - | User-friendly message to explain the expected format if pattern does not match. Can contain {0} to be replaced by invalid value | | `Pattern` | Attribute | - | Regular expression pattern that is matched against each value in the column. | | `Tag` | Attribute | - | Test commands can be tagged with a label, to support meaningful groupings of issues in the user interface. | ### TestRow Checks that a row is valid based on either matching a filter expression or using a C# code fragment that returns true (to keep) or false (to remove).

To match using a filter expression, use the FilterTerms element inside TestRow. To match using a C# fragment, use the Expression element inside TestRow. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ContextColumns` | Attribute | - | If an error occurs, controls which columns are displayed to the user. Business primary keys (IS_PRIMARY_KEY=True) are automatically displayed. No quoting should be used. Multiple columns should be separated by a comma with no spaces. Example: ContextColumns=FIELD_TYPE_CODE,FK_TABLE_CODE | | `Impact` | Attribute | - | Impact of failed validation.
Possible values: Warning, Error_RemoveValue, Error_RemoveRow, Error_RemoveColumn, Error_RemoveTable, Error_RemoveBatch | | `Error_RemoveBatch` | Element | - | All tables removed. | | `Error_RemoveColumn` | Element | - | Column removed. | | `Error_RemoveRow` | Element | - | Record removed. | | `Error_RemoveTable` | Element | - | Table removed. | | `Error_RemoveValue` | Element | - | Value replaced by NULL. | | `Warning` | Element | - | Raises a warning but no change made to data. | | `MessageIfInvalid` | Attribute | - | User-friendly message to explain the expected format if pattern does not match. Can contain {0} to be replaced by invalid value | | `Tag` | Attribute | - | Test commands can be tagged with a label, to support meaningful groupings of issues in the user interface. | | `Expression` | Element | - | Code fragment in C# that must evaluate to either true (to keep) or false (to remove). Use row["ColumnName"] to get a reference to the cell, then use row["ColumnName"].Value or .StringValue or .DateValue or .NumberValue to use the value with the expected type operators. Expression text can be surrounded with <![CDATA[ EXPRESSION ]]> if the Expression contains operators like >= that need XML escaping. The expression may optionally begin with the word 'return'. | | `FilterTerms` | Element | - | A group of terms that must return true for the row to be kept. e.g.: <Terms Operator="AND"> <Terms> <Term Field="SEQUENCE" Rule="GreaterOrEqualTo" Operator="AND"> <Value>123</Value> </Term> </Terms> </Terms> | #### Examples ##### Example 1 ```xml 0 3 ``` ##### Example 2 ```xml row["Price"].NumberValue * row["Quantity"].NumberValue > 100 ``` ##### Example 3 ```xml ``` ### TestValueInRange Checks that a value is above, below or within a certain range of limits. The CompareType setting determines how values are compared.
A data conversion may be needed to perform the comparison (for example, to convert a string to a number); if the conversion fails, a warning or error issue is created (a warning if Impact = Warning, an error if Impact = Error_*).
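As an illustrative sketch only (the column name and message text are hypothetical, using the attributes documented below), a percentage range check might look like:

```xml
<!-- Hypothetical sketch: values outside 0-100 are replaced by NULL and logged as errors -->
<TestValueInRange Column="COVERAGE_PCT"
                  CompareType="Number"
                  LowLimit="0"
                  HighLimit="100"
                  Impact="Error_RemoveValue"
                  MessageIfInvalid="Coverage value {0} is outside the expected 0 to 100 range" />
```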
Possible values: Auto, String, Number, Date | | `ContextColumns` | Attribute | - | If an error occurs, controls which columns are displayed to the user. Business primary keys (IS_PRIMARY_KEY=True) are automatically displayed. No quoting should be used. Multiple columns should be separated by a comma with no spaces. Example: ContextColumns=FIELD_TYPE_CODE,FK_TABLE_CODE | | `HighLimit` | Attribute | - | Upper limit of the tested range. If empty, there is no upper limit. | | `HighLimitColumnName` | Attribute | - | Upper limit column of the tested range. Value of this column is used as the upper limit of the tested range. | | `Impact` | Attribute | - | Impact of failed validation.
Possible values: Warning, Error_RemoveValue, Error_RemoveRow, Error_RemoveColumn, Error_RemoveTable, Error_RemoveBatch | | `Error_RemoveBatch` | Element | - | All tables removed. | | `Error_RemoveColumn` | Element | - | Column removed. | | `Error_RemoveRow` | Element | - | Record removed. | | `Error_RemoveTable` | Element | - | Table removed. | | `Error_RemoveValue` | Element | - | Value replaced by NULL. | | `Warning` | Element | - | Raises a warning but no change made to data. | | `Inclusive` | Attribute | - | If true (default), the value may equal LowLimit or HighLimit. If false, the value is not allowed to equal LowLimit nor HighLimit. | | `LowLimit` | Attribute | - | Lower limit value of the tested range. If empty, there is no lower limit. | | `LowLimitColumnName` | Attribute | - | Lower limit column of the tested range. Value of this column is used as the lower limit of the tested range. | | `MessageIfInvalid` | Attribute | - | User-friendly message to explain the expected format if pattern does not match. Can contain {0} to be replaced by invalid value | | `Tag` | Attribute | - | Test commands can be tagged with a label, to support meaningful groupings of issues in the user interface. | ## Approvals One or multiple approvals steps required before allowing a batch to be committed ### AddStep Allows to configure the order of required approvers, for example if a batch needs approvals from the Country first, then the Region, then the HQ. Steps order will be used to request approval from the first AddStep, then the next if previous is approved. Any rejection won't trigger the following steps. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ChangeTypes` | Attribute | - | Comma-separated list of record change types which should trigger approval. For an approval to be triggered, at least one of the records in the batch must have one of the change types. 
For example, if ChangeType="NEW, SOFT_DELETE", only batches containing any new or any soft-deleted records will require approval. Possible values are: NEW, UPDATED, SOFT_DELETE, REPLACE. | | `Message` | Attribute | - | The text that will be displayed to Approvers when viewing the batch preview. | | `Mode` | Attribute | - | Notification mode for the approvers Some approvers may want to be notified for all batches, even if the batch have no changes.
Possible values: CHANGES_ONLY, ALWAYS | | `RoleName` | Attribute | - | Role whose members will be notified. | ## Load Table loaders configuration: write data from an input table into the target store table. ### AutoLoad virtual command insert as a placeholder when LoadSection.AutoLoad is true, should not be serialized ### ColumnMapping Maps a column from the active table to the target store table. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `Optional` | Attribute | - | Set to true to not raise error if the Source column does not exist in Source Table Default is false | | `Source` | Attribute | - | Column name of the Source table to map. | | `Target` | Attribute | - | Column name of the store target table to fill. | ### LoadTable Each LoadTable should target one store table to fill with the input data. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `DeleteNotInSource` | Attribute | - | Only applies for MERGE strategy.Default is false. If true, the system performs a match of existing records in target tables with source tables, within the data range specified by the DataScope filter. If a target record exists for which there is no corresponding source record, the target record is flagged for soft-deletion. The records flagged for deletion appear as soft-deleted records visually in the batch preview. The DataScope defines the range of data in both the source and target to be considered for loading.For example, if a target table has data for regions A, B and C but the data scope defines "Region = 'A'" , and the source file only has data for Region A, the B and C records should not be flagged for deletion even though they do not exist in the source file because they are out of data scope. | | `DropIfNoSourceData` | Attribute | - | For REPLACE strategy only: if true, and there are 0 records to load, proceed with dropping database data. 
If false (default), no data will be dropped if there are no records to load. This prevents accidental dropping of data if the wrong data is loaded, if the source file is empty, etc. | | `LoadStrategy` | Attribute | - | How the data will be processed.
Possible values: MERGE, REPLACE, APPEND, OFF | | `Optional` | Attribute | - | Default: false. If true, the system will not throw an error if the source table does not exist. Useful for data entry forms with subforms (child table can be empty), or with getters that extract multiple tables using wildcards. | | `SourceTable` | Attribute | - | The in-memory table to select as a source | | `TargetTable` | Attribute | - | The store table to fill data into. | | `ColumnMappings` | Element | - | Map the columns from the source table with the target table. | | `Auto` | Attribute | - | Set to true to have the mappings be populated automatically from those that exist in both the source and target tables. Note that columns mapped to FOREIGN KEY columns will automatically have a sysid_lookup generated using the given field and the Sys_RowTitle of the targeted tables (FK_TABLE_CODE). Default is false | | `IfSourceColumnNotMapped` | Attribute | - | Warnings will be added for any column present in a pipeline table (a source table) which is not mapped.
Possible values: Warning, SayNothing | | `ColumnMapping` | Element (Multiple) | - | Gets the mappings. | | `Optional` | Attribute | - | Set to true to not raise error if the Source column does not exist in Source Table Default is false | | `Source` | Attribute | - | Column name of the Source table to map. | | `Target` | Attribute | - | Column name of the store target table to fill. | | `DataScope` | Element | - | Provides a filter to only REPLACE a part of the store rows in the target table and filters out data from the source table that does not match this filter. | | `Operator` | Attribute | - | Determines whether terms are ANDed (default) or ORed.
Possible values: OR, AND | | `Origins` | Attribute | - | Optional: comma separated list of existing origins codes for which the command should apply. Provide code of origins for which this command will apply (Ex:
Origins="MY_ORIGIN_AFRO, MY_ORIGIN_EMRO"
) Use !CODE_ORIGIN to run this command to exclude a particular origin (Ex:
Origins="!MY_ORIGIN_EURO, !MY_ORIGIN_SEARO"
) | | `Terms` | Element | - | List of filter terms.
Contains repetable element(s): Term, Group | | `LookupIDs` | Element | - | Declare the lookup ids. to populate FK columns.
Contains repetable element(s): SysIDLookup | | `RLSScope` | Element | - | System generated filter to only REPLACE a part of the store rows in the target table and filters out data from the source table that does not match this filter based on RLS settings. | | `Transform` | Element | - | Sets the transformers to apply to the source table. | | `Transformers` | Element (Multiple) | - | Gets or sets the transformers.
Contains element(s): FindReplace, CopyColumn, AggregateTable, CopyTable, AddColumn, RemoveColumn, RenameColumn, RemoveRowsByFilter, RemoveDuplicates, DbLookup, HierarchyLookup, MartLookup, MergeTables, SetContext, SplitColumnByRegEx, SplitDate, ValuesToColumns, FormatDate, UnpivotSimple, ChangeCase, CleanTable, DateDiff, RemoveRowsByRowNumber, UnpivotColumns, TransposeTable, GenerateID, GenerateHash, Round, RemoveTable, PivotColumns, ParseJson, CustomScript, SplitCellsToRows, TableLookup, JoinTables, DateAdd, RenameColumnsByLookup, SetColumnDataType | | `Validate` | Element | - | Sets some optional Tests to generate some validation errors. | | `Validators` | Element (Multiple) | - | Gets or sets the validators.
Contains element(s): TestNotEmpty, TestColumnsExist, TestValueInRange, TestRow, TestPattern | ### SysIDLookup Used to resolve foreign key using a Lookup Table than can be either in memory or a store table #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `LookupColumns` | Attribute | - | Columns in the lookup table that will be matched with the SourceColumns | | `LookupFilter` | Attribute | - | Filter to apply to select statement when retrieving data. Use [lookup] and [source] to refer to the lookup and source table respectively. | | `LookupTable` | Attribute | - | Table containing the Sys_ID. Generally a REF table. | | `RegisterMissingAsIssues` | Attribute | - | Whether to generate Process Issues for unresolved source values. | | `SourceColumns` | Attribute | - | Columns in the source table to use to match the LookupColumns | | `SourceFilter` | Attribute | - | Filter to apply to source table. | | `SourceResultColumn` | Attribute | - | Where the found result should be set to in the current in-memory table. | | `Tag` | Attribute | - | Tag is a key to group lookup issues together | ## PostRun Describe a flow of postRun commands to run after the initial pipeline batch completes ### CallWebService Calls a remote Web service API after a batch completes. Useful to trigger a synchronisation right after data updates. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ConnectionName` | Attribute | - | If the webservice is secured, we need a connection to get the accessToken Connection Code to use to retrieve the Credentials needed to authenticate against the remote WebService | | `ExecuteOnBatchStatuses` | Attribute | - | Comma-separated list of batch statuses following which the next command can be executed. Possible values: SUCCESS (default), CANCELED, SYSTEM_ERROR, REJECTED, INVALID, TIMEOUT_CANCELED | | `ResultStatus` | Attribute | - | Read-only. 
This is populated by the system after the flow step has run and indicates the result of the triggered batch or webservice call. It is available in the generated pipeline xml attached to the batch on the load page. | | `RunEvenIfNoChanges` | Attribute | - | True to run pipeline if previous batch has no changes (only works in Mode="SEQUENTIAL", not "PARALLEL") need to use enums as XmlAttribute does not support nullable "bool?" type
Possible values: default, False, True | | `Url` | Attribute | - | Url to download the data from. | | `Body` | Element | - | Defines the body parameters of the POST request | | `Type` | Attribute | - | Gets or sets the Body request content type.
Possible values: raw, x-www-form-urlencoded, graphQL | | `graphQL` | Element | - | GraphQL query json content | | `raw` | Element | - | String Content without particular media type (use Headers to set Content-Type) | | `x-www-form-urlencoded` | Element | - | x-www-form-urlencoded media type, one key:value per line using this convention key1:value1 key2:value2 | | `Headers` | Element | - | Additional headers to add to the request
Contains repetable element(s): add | | `Name` | Attribute | - | Gets or sets the name. | | `Value` | Attribute | - | Gets or sets the value. | | `ResultResponse` | Element | - | Set by the system from the http response | ### Flow Pipeline flow section: configure RunPipeline or CallWebService launch order from a single place. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ExecuteOnBatchStatuses` | Attribute | - | Comma-separated list of batch statuses following which the next command can be executed. Possible values: SUCCESS (default), CANCELED, SYSTEM_ERROR, REJECTED, INVALID, TIMEOUT_CANCELED | | `Mode` | Attribute | - | PARALLEL (default) or SEQUENCE. How to start the flow commands: in PARALLEL, each command will be started without waiting for the result of the previous command. In SEQUENCE, each command is processed individually, and awaited for before triggering the next commands, useful when the 2nd step requires the output of the 1st step.
Possible values: PARALLEL, SEQUENTIAL | | `ResultStatus` | Attribute | - | Read-only. This is populated by the system after the flow step has run and indicates the result of the triggered batch or webservice call. It is available in the generated pipeline xml attached to the batch on the load page. | | `RunEvenIfNoChanges` | Attribute | - | True to run pipeline if previous batch has no changes (only works in Mode="SEQUENTIAL", not "PARALLEL") need to use enums as XmlAttribute does not support nullable "bool?" type
Possible values: default, False, True | | `RunBaseCommands` | Element (Multiple) | - |
Contains element(s): RunPipeline, CallWebService, Flow | ### PostRun Describe a flow of postRun commands to run after the initial pipeline batch completes #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ExecuteOnBatchStatuses` | Attribute | - | Comma-separated list of batch statuses following which the next command can be executed. Possible values: SUCCESS (default), CANCELED, SYSTEM_ERROR, REJECTED, INVALID, TIMEOUT_CANCELED | | `Mode` | Attribute | - | PARALLEL (default) or SEQUENCE. How to start the flow commands: in PARALLEL, each command will be started without waiting for the result of the previous command. In SEQUENCE, each command is processed individually, and awaited for before triggering the next commands, useful when the 2nd step requires the output of the 1st step.
Possible values: PARALLEL, SEQUENTIAL | | `ResultStatus` | Attribute | - | Read-only. This is populated by the system after the flow step has run and indicates the result of the triggered batch or webservice call. It is available in the generated pipeline xml attached to the batch on the load page. | | `RunEvenIfNoChanges` | Attribute | - | True to run pipeline if previous batch has no changes (only works in Mode="SEQUENTIAL", not "PARALLEL") need to use enums as XmlAttribute does not support nullable "bool?" type
Possible values: default, False, True | | `RunBaseCommands` | Element (Multiple) | - |
Contains element(s): RunPipeline, CallWebService, Flow | ### RunPipeline In a PostRun, automatically triggers the run of a pipeline after the current Load is finished. By default, will be triggered only if parent batch has changes. In a Flow, triggers the run of a pipeline. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `BatchComment` | Attribute | - | Optional: Comment to be added to the batch when the pipeline is triggered. Pipeline variables like ${BATCH_ID} can be used to identify the parent batch. Default: "PostRun Batch ID: ${BATCH_ID}" | | `BatchIDs` | Attribute | - | Populated by the system, list of generated batch IDs started by the command | | `ExecuteOnBatchStatuses` | Attribute | - | Comma-separated list of batch statuses following which the next command can be executed. Possible values: SUCCESS (default), CANCELED, SYSTEM_ERROR, REJECTED, INVALID, TIMEOUT_CANCELED | | `OriginCode` | Attribute | - | Required: Code of the pipeline origin to run. | | `ResultStatus` | Attribute | - | Read-only. This is populated by the system after the flow step has run and indicates the result of the triggered batch or webservice call. It is available in the generated pipeline xml attached to the batch on the load page. | | `RunEvenIfNoChanges` | Attribute | - | True to run pipeline if previous batch has no changes (only works in Mode="SEQUENTIAL", not "PARALLEL") need to use enums as XmlAttribute does not support nullable "bool?" type
Possible values: default, False, True | | `Values` | Element | - | Optional: Set the values to pass to next pipeline as inputs. The target pipeline should have those inputs declared in its "Context > Inputs" section | | `Add` | Element (Multiple) | - | Gets or sets the adders. | ## Flow Pipeline flow section: configure RunPipeline or CallWebService launch order from a single place. ### CallWebService Calls a remote Web service API after a batch completes. Useful to trigger a synchronisation right after data updates. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ConnectionName` | Attribute | - | If the webservice is secured, we need a connection to get the accessToken Connection Code to use to retrieve the Credentials needed to authenticate against the remote WebService | | `ExecuteOnBatchStatuses` | Attribute | - | Comma-separated list of batch statuses following which the next command can be executed. Possible values: SUCCESS (default), CANCELED, SYSTEM_ERROR, REJECTED, INVALID, TIMEOUT_CANCELED | | `ResultStatus` | Attribute | - | Read-only. This is populated by the system after the flow step has run and indicates the result of the triggered batch or webservice call. It is available in the generated pipeline xml attached to the batch on the load page. | | `RunEvenIfNoChanges` | Attribute | - | True to run pipeline if previous batch has no changes (only works in Mode="SEQUENTIAL", not "PARALLEL") need to use enums as XmlAttribute does not support nullable "bool?" type
Possible values: default, False, True | | `Url` | Attribute | - | Url to download the data from. | | `Body` | Element | - | Defines the body parameters of the POST request | | `Type` | Attribute | - | Gets or sets the Body request content type.
Possible values: raw, x-www-form-urlencoded, graphQL | | `graphQL` | Element | - | GraphQL query json content | | `raw` | Element | - | String Content without particular media type (use Headers to set Content-Type) | | `x-www-form-urlencoded` | Element | - | x-www-form-urlencoded media type, one key:value per line using this convention key1:value1 key2:value2 | | `Headers` | Element | - | Additional headers to add to the request
Contains repetable element(s): add | | `Name` | Attribute | - | Gets or sets the name. | | `Value` | Attribute | - | Gets or sets the value. | | `ResultResponse` | Element | - | Set by the system from the http response | ### Flow Pipeline flow section: configure RunPipeline or CallWebService launch order from a single place. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `ExecuteOnBatchStatuses` | Attribute | - | Comma-separated list of batch statuses following which the next command can be executed. Possible values: SUCCESS (default), CANCELED, SYSTEM_ERROR, REJECTED, INVALID, TIMEOUT_CANCELED | | `Mode` | Attribute | - | PARALLEL (default) or SEQUENCE. How to start the flow commands: in PARALLEL, each command will be started without waiting for the result of the previous command. In SEQUENCE, each command is processed individually, and awaited for before triggering the next commands, useful when the 2nd step requires the output of the 1st step.
Possible values: PARALLEL, SEQUENTIAL | | `ResultStatus` | Attribute | - | Read-only. This is populated by the system after the flow step has run and indicates the result of the triggered batch or webservice call. It is available in the generated pipeline xml attached to the batch on the load page. | | `RunEvenIfNoChanges` | Attribute | - | True to run pipeline if previous batch has no changes (only works in Mode="SEQUENTIAL", not "PARALLEL") need to use enums as XmlAttribute does not support nullable "bool?" type
Possible values: default, False, True | | `RunBaseCommands` | Element (Multiple) | - |
Contains element(s): RunPipeline, CallWebService, Flow | ### RunPipeline In a PostRun, automatically triggers the run of a pipeline after the current Load is finished. By default, will be triggered only if parent batch has changes. In a Flow, triggers the run of a pipeline. #### Properties | Name | Type | Default Value | Description | |------|------|--------------|-------------| | `BatchComment` | Attribute | - | Optional: Comment to be added to the batch when the pipeline is triggered. Pipeline variables like ${BATCH_ID} can be used to identify the parent batch. Default: "PostRun Batch ID: ${BATCH_ID}" | | `BatchIDs` | Attribute | - | Populated by the system, list of generated batch IDs started by the command | | `ExecuteOnBatchStatuses` | Attribute | - | Comma-separated list of batch statuses following which the next command can be executed. Possible values: SUCCESS (default), CANCELED, SYSTEM_ERROR, REJECTED, INVALID, TIMEOUT_CANCELED | | `OriginCode` | Attribute | - | Required: Code of the pipeline origin to run. | | `ResultStatus` | Attribute | - | Read-only. This is populated by the system after the flow step has run and indicates the result of the triggered batch or webservice call. It is available in the generated pipeline xml attached to the batch on the load page. | | `RunEvenIfNoChanges` | Attribute | - | True to run pipeline if previous batch has no changes (only works in Mode="SEQUENTIAL", not "PARALLEL") need to use enums as XmlAttribute does not support nullable "bool?" type
Possible values: default, False, True | | `Values` | Element | - | Optional: Set the values to pass to next pipeline as inputs. The target pipeline should have those inputs declared in its "Context > Inputs" section | | `Add` | Element (Multiple) | - | Gets or sets the adders. |
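
To illustrate how the Load commands described in this guide combine, here is a minimal sketch of a Load section using LoadTable, ColumnMappings, a SysIDLookup and a DataScope. It is a hedged sketch, not taken from a real pipeline: the table, column and origin names (DATA_IN, FACT_MEASURE, REF_COUNTRY, etc.) are hypothetical, and the Term `Rule` value `EqualTo` is assumed by analogy with the documented `GreaterOrEqualTo`; only elements and attributes documented above are used.

```xml
<Load>
  <!-- Hypothetical names throughout; structure follows the LoadTable properties above -->
  <LoadTable SourceTable="DATA_IN" TargetTable="FACT_MEASURE"
             LoadStrategy="MERGE" DeleteNotInSource="true">
    <ColumnMappings Auto="false" IfSourceColumnNotMapped="Warning">
      <ColumnMapping Source="YEAR" Target="YEAR" />
      <ColumnMapping Source="VALUE" Target="VALUE" Optional="true" />
    </ColumnMappings>
    <LookupIDs>
      <!-- Resolves COUNTRY_CODE against a REF table to populate the FK column -->
      <SysIDLookup LookupTable="REF_COUNTRY"
                   LookupColumns="CODE"
                   SourceColumns="COUNTRY_CODE"
                   SourceResultColumn="COUNTRY_FK"
                   RegisterMissingAsIssues="true" />
    </LookupIDs>
    <DataScope Operator="AND">
      <Terms>
        <!-- Rule="EqualTo" is an assumption, not confirmed by this guide -->
        <Term Field="REGION" Rule="EqualTo">
          <Value>AFRO</Value>
        </Term>
      </Terms>
    </DataScope>
  </LoadTable>
</Load>
```

With `LoadStrategy="MERGE"` and `DeleteNotInSource="true"`, target records inside the REGION = AFRO scope that are missing from the source would be flagged for soft-deletion, as described under DeleteNotInSource.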
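
Similarly, a sketch of the PostRun and Flow commands above: a sequential flow that runs a downstream pipeline and then calls a web service. Again hedged rather than authoritative: the origin code, URL, connection name and input name are hypothetical, and the attribute shape of `Add` inside `Values` is assumed to mirror the inputs declared in the target pipeline's Context section.

```xml
<PostRun>
  <Flow Mode="SEQUENTIAL" ExecuteOnBatchStatuses="SUCCESS">
    <!-- Step 1: trigger a downstream pipeline, passing an input value -->
    <RunPipeline OriginCode="MY_DOWNSTREAM_ORIGIN"
                 BatchComment="PostRun Batch ID: ${BATCH_ID}"
                 RunEvenIfNoChanges="False">
      <Values>
        <!-- Attribute names here are assumptions -->
        <Add Name="REPORT_YEAR" Value="2024" />
      </Values>
    </RunPipeline>
    <!-- Step 2: only runs after step 1 completes, because Mode="SEQUENTIAL" -->
    <CallWebService Url="https://example.org/api/refresh"
                    ConnectionName="MY_API_CONNECTION">
      <Body Type="x-www-form-urlencoded">
        <x-www-form-urlencoded>scope:all</x-www-form-urlencoded>
      </Body>
    </CallWebService>
  </Flow>
</PostRun>
```

In `Mode="PARALLEL"` (the default) both commands would be started without waiting; SEQUENTIAL is the mode to choose when the web service call depends on the downstream batch having finished.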