Why is the Metadata tab not executed?

Product:

Planning Analytics 2.0.9x

Issue:

When is the Metadata tab executed?

Solution:

If the data source view contains calculated cells, you may need to add this to the Prolog:

ViewExtractSkipCalcsSet(sCubeName, sViewName, 0);

so that the consolidated (calculated) values are included when the Metadata and Data tabs read the view.

The Metadata and Data tabs are only processed if the data source actually returns records into the process variables.
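As a rough sketch, the related Prolog settings that control which cells the view returns look like this (the cube and view names are placeholders; ViewExtractSkipZeroesSet and ViewExtractSkipRuleValuesSet are the companion skip flags):

sCubeName = 'SalesCube';
sViewName = 'TI_LoadView';
ViewExtractSkipZeroesSet(sCubeName, sViewName, 1);     # skip zero and empty cells
ViewExtractSkipCalcsSet(sCubeName, sViewName, 0);      # include consolidated values
ViewExtractSkipRuleValuesSet(sCubeName, sViewName, 0); # include rule-calculated values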

 

From Alan Kirk at https://www.tm1forum.com/viewtopic.php?t=15670

These are the roles of the tabs in a TI process:

Prolog
This runs ONCE, before the data source (if any) is connected to, read or even looked at by the process.
You can use it to:

  • Create a data source by defining a view and its subsets;
  • Check for the existence of a file and run a batch file to rename or move it;
  • Change the data source from a view to a text file to a dimension subset or whatever you need it to be;
  • Dynamically assign a different data source (a different file name, a new view name or whatever) to the process at run time;
  • Define any constants;
  • Write information to a log file or to the server log;
  • Insert new elements into a dimension if you have ones that do not come from your data source;
  • Pretty much anything that you need to do before the process even looks at the data source, if any.
  • The fact that it runs before you connect to the data source is what allows you to change the data source on this tab as mentioned above, using TurboIntegrator variables like DataSourceType and DatasourceNameForServer.

There is one proviso with this: the one thing that you can’t do within a TI process itself is change its own variable list or the data types of its variables. Normally you would define those by using an example file or view at the time that you write the process.
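For example, a minimal Prolog that builds its own view data source might look like the sketch below; every object name here is an assumption for illustration, not something from the original post:

sCubeName = 'SalesCube';
sViewName = 'TI_TempView';
sDimName = 'Month';
sSubName = 'TI_TempSub';

# Rebuild the temporary view and subset from scratch
If(ViewExists(sCubeName, sViewName) = 1);
  ViewDestroy(sCubeName, sViewName);
EndIf;
If(SubsetExists(sDimName, sSubName) = 1);
  SubsetDestroy(sDimName, sSubName);
EndIf;
ViewCreate(sCubeName, sViewName);
SubsetCreate(sDimName, sSubName);
SubsetElementInsert(sDimName, sSubName, 'Jan', 1);
ViewSubsetAssign(sCubeName, sViewName, sDimName, sSubName);

# Point the process at the view just built
DataSourceType = 'VIEW';
DatasourceNameForServer = sCubeName;
DatasourceCubeview = sViewName;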

Metadata
This loops through the data source (if any) ONCE. That is, for every record in the data source, the values in each column will be read into the corresponding process variables, then the code on the Metadata tab will be executed, then the process will move on to the next row of the data source and the whole cycle repeats.

The purpose of the tab is to create any metadata (cubes, dimensions etc) that you will need to store the data that you upload on the Data tab.
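A minimal Metadata tab sketch along those lines, assuming the process variables vProduct and vProductGroup and a Product dimension that already exists:

# Metadata tab - runs once per data source record
sDim = 'Product';
If(DIMIX(sDim, vProductGroup) = 0);
  DimensionElementInsert(sDim, '', vProductGroup, 'C');
EndIf;
If(DIMIX(sDim, vProduct) = 0);
  DimensionElementInsert(sDim, '', vProduct, 'N');
EndIf;
DimensionElementComponentAdd(sDim, vProductGroup, vProduct, 1);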

When you use functions like DimensionElementInsert, changes are made to a copy of the dimension.

After the last record has been processed on the Metadata tab, the real dimension will be overwritten by the copy. If you did any insertions on the Prolog tab, these will also be added at that point.

Typically you will be using element names from your data source’s variables to do the element insertion. If you have a hard-coded element name, as in the example code in the original forum thread, the TI will add the element on the first pass if it needs to, and spend every other pass saying “Nope, it’s already there, Nope, it’s already there, Nope, it’s already there, etc”. This is not what we call “optimum code efficiency”. That’s why insertions like that are generally done on the Prolog tab.

There are also newer functions like DimensionElementInsertDirect which push the elements straight into the dimension without creating a copy. Information about such functions can be found in the Reference Guide.
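As a sketch of the difference (element and dimension names are assumptions): a fixed element is better inserted once in the Prolog, while the Direct variant writes straight to the real dimension instead of to the copy:

# Prolog - insert a hard-coded element once, not once per record
If(DIMIX('Product', 'Unassigned') = 0);
  DimensionElementInsert('Product', '', 'Unassigned', 'N');
EndIf;

# Metadata tab - Direct variant updates the dimension immediately, no copy involved
DimensionElementInsertDirect('Product', '', vProduct, 'N');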

IMPORTANT NOTE FOR NEW PLAYERS: If you don’t have a data source, or if you have a data source which has no records, then nothing that you have written in the Metadata tab will ever be executed. Ever.

Data
This will again loop through each row in the data source one at a time, assigning the values in each column to variables, and doing whatever you tell it to. This may be loading values into a TM1 cube, or it may be writing values from a TM1 cube to a text file, or to another database. If you are loading values into a cube it’s assumed that you have created any necessary elements in the Metadata tab.
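A minimal Data tab sketch for the load case, assuming a SalesCube dimensioned by Product, Month and Version, and the variables vProduct, vMonth and vAmount:

# Data tab - runs once per record, after the Metadata pass has finished
CellPutN(vAmount, 'SalesCube', vProduct, vMonth, 'Actual');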

Note that any attributes of elements (alias names, etc) are regarded as being data for the purposes of this exercise and need to be written on the Data tab (unless you used the Direct functions mentioned above).
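For instance, writing a description attribute would also sit on the Data tab, assuming the attribute itself was created beforehand (e.g. with AttrInsert in the Prolog) and a vProductDesc variable exists:

# Data tab - attributes count as data, so write them here
AttrPutS(vProductDesc, 'Product', vProduct, 'Description');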

Epilog
This is run AFTER the last Data record is processed. It is usually used to clean up whatever needs cleaning up, and perhaps to write results into a control cube, according to taste.
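A typical Epilog sketch, again with assumed object names, relying on the fact that variables defined in the Prolog remain in scope here:

# Epilog - runs once, after the last Data record
ViewDestroy(sCubeName, sViewName);
SubsetDestroy(sDimName, sSubName);

# Optionally record the outcome in a control cube (assumed to exist)
CellPutS('Load complete', 'TI_ControlCube', 'Sales Load', 'Status');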

 

More Information:

https://blogs.perficient.com/2015/04/29/ibm-cognos-tm1-updating-metadata-in-ti-submit-time-explore/

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=tf-odbc-turbointegrator-functions

https://exploringtm1.com/viewextractskiprulevaluesset/

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=vmtf-viewextractskipcalcsset