Product:

Planning Analytics 2.0.9.13 TM1_version=TM1-AW64-ML-RTM-11.0.913.10-0

Microsoft Windows 2019 server

Microsoft SQL Server 2012 Native Client ODBC driver version 11.2.5058.0

Issue:

Cannot open a connection to a Microsoft SQL Server database via the ODBCOpen command. You can open the connection in the TM1 data source tab and read the data, but not open it with an ODBCOpen call in the Prolog tab. When you test the ODBC connection in Windows, it works fine with your SQL login.

You get a similar error whether you try a 32-bit or a 64-bit ODBC connection.

In the Windows event log you get this error:

Faulting application name: tm1odbcproxy32.exe, version: 0.0.0.0, time stamp: 0x62586410
Faulting module name: tm1odbcproxy32.exe, version: 0.0.0.0, time stamp: 0x62586410
Exception code: 0xc0000005
Fault offset: 0x000058f8
Faulting process id: 0xfc8
Faulting application start time: 0x01d93a5fe5481324
Faulting application path: C:\Program Files\ibm\cognos\tm1_64\bin64\tm1odbcproxy32.exe
Faulting module path: C:\Program Files\ibm\cognos\tm1_64\bin64\tm1odbcproxy32.exe
Report Id: cadfaa7a-2357-4370-9ecf-3c6500029586
Faulting package full name:
Faulting package-relative application ID:

Solution:

The connection needs to use Unicode. Use ODBCOpenEx instead.

Format is: ODBCOpenEx( dataset name, dataset client name, client password, use-Unicode-interface flag )

https://www.ibm.com/docs/en/cognos-tm1/10.2.2?topic=functions-odbcopenex

In the TM1 Architect data source tab, you get the same error if you uncheck the “use unicode” checkbox.

In the Prolog tab, enter this code:

 

#--- debug setup
sDEBUGfile = 'debugfile3.txt';
sDEBUGpath = 'c:\temp\';
sDEBUG1 = sDEBUGpath | sDEBUGfile ;

#-- setup the database connection
sODBCname = 'windows2016';
sUser = 'donald';
sPassword = 'Password!';

#-- open the connection to the database with Unicode by adding the 1 parameter
ODBCOpenEx( sODBCname, sUser, sPassword, 1 );

#-- create the SQL statement - adjust for your test database
sSQL = 'TRUNCATE TABLE AdventureWorksLT2019.dbo.Lista';

ASCIIOutput( sDEBUG1, sSQL, 'check sql syntax');

#-- execute the SQL statement
ODBCOutput( sODBCname, sSQL );
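The snippet above leaves the connection open. As a small sketch (reusing the sODBCname variable defined above), it is good practice to also close the connection when the process is done, for example in the Epilog tab:

```
#-- close the ODBC connection that was opened in the prolog
ODBCClose( sODBCname );
```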

You can open the SQL ODBC connection in the Prolog, even when other data sources are in use.

sODBCname = 'windows2016'; should be set to the name of the ODBC connection you want to use; note that the name can be case-sensitive.

If the problem persists, try changing the ODBC driver in the Windows Control Panel on your Windows server. A newer ODBC driver can help.

 

More Information:

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=tv-turbointegrator-local-variables

https://www.bi4all.pt/en/news/en-blog/cognos-tm1-dynamic-management-of-data-sources-and-connections-through-configuration-cubes/

https://www.tm1forum.com/viewtopic.php?t=11978

https://www.tm1forum.com/viewtopic.php?t=10947

https://www.tm1forum.com/viewtopic.php?t=16234

https://quebit.com/askquebit/IBM/how-to-speed-up-tm1-odbcoutput/

Product:

Planning Analytics 2.0.9x

Issue:

When is the Metadata tab executed?

Solution:

If the data view contains calculated cells, then you may need to add this in the Prolog:

ViewExtractSkipCalcsSet ( sCubeName, sViewName, 0 );

to make the Metadata tab process the calculated values as well.

The Metadata and Data tabs are only processed if the data source returns any records to the variables.

 

From Alan Kirk at https://www.tm1forum.com/viewtopic.php?t=15670

These are the roles of the tabs in a TI process:

Prolog
This runs ONCE, before the data source (if any) is connected to, read or even looked at by the process.
You can use it to:

  • Create a data source by defining a view and its subsets;
  • Check for the existence of a file and run a batch file to rename or move it;
  • Change the data source from a view to a text file to a dimension subset or whatever you need it to be;
  • Dynamically assign a different data source (a different file name, a new view name or whatever) to the process at run time;
  • Define any constants;
  • Write information to a log file or to the server log;
  • Insert new elements to a dimension if you have ones that do not come from your data source;
  • Pretty much anything that you need to do before the process even looks at the data source, if any.
  • The fact that it runs before you connect to the data source is what allows you to change the data source on this tab as mentioned above using functions like DataSourceType and DatasourceNameForServer.

There is one proviso with this; one thing that you can’t do within a TI itself is to change its variable list or the data types. Normally you would define those by using an example file or view at the time that you write the process.

Metadata
This loops through the data source (if any) ONCE. That is, for every record in the data source, the values in each column will be read into the corresponding process variables, then the code on the metadata tab will be executed, then the process will move onto the next row of the data source and the whole process repeats.

The purpose of the tab is to create any metadata (cubes, dimensions etc) that you will need to store the data that you upload on the Data tab.

When you use functions like DimensionElementInsert, changes are made to a copy of the dimension.

After the last record has been processed on the Metadata tab, the real dimension will be overwritten by the copy. If you did any insertions on the Prolog tab, these will also be added at that point.

Typically you will be using element names from your data source’s variables to do the element insertion. If you have a hard coded element name as you have in your example code, the TI will add the element on the first pass if it needs to, and spend every other pass saying “Nope, it’s already there, Nope, it’s already there, Nope, it’s already there, etc”. This is not what we call “optimum code efficiency”. That’s why insertions like that are generally done on the Prolog tab.

There are also newer functions like DimensionElementInsertDirect which will push the elements straight into the dimension without creating a copy. Information about such functions will be found in the Reference Guide.

IMPORTANT NOTE FOR NEW PLAYERS: If you don’t have a data source, or if you have a data source which has no records, then nothing that you have written in the Metadata tab will ever be executed. Ever.

Data
This will again loop through each row in the data source one at a time, assigning the values in each column to variables, and doing whatever you tell it to. This may be loading values into a TM1 cube, or it may be writing values from a TM1 cube to a text file, or to another database. If you are loading values into a cube it’s assumed that you have created any necessary elements in the Metadata tab.

Note that any attributes of elements (alias names, etc) are regarded as being data for the purposes of this exercise and need to be written on the Data tab (unless you used the Direct functions mentioned above).

Epilog
This is run AFTER the last Data record is processed. It is usually used to clean up whatever needs cleaning up, and maybe writing results into a control cube, according to taste.

 

More Information:

https://blogs.perficient.com/2015/04/29/ibm-cognos-tm1-updating-metadata-in-ti-submit-time-explore/

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=tf-odbc-turbointegrator-functions

https://exploringtm1.com/viewextractskiprulevaluesset/

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=vmtf-viewextractskipcalcsset

Product:

Microsoft Azure SQL

Issue:

How do you list the size of a database?

Solution:

Log in with SSMS, go to your database, and enter this command:

EXEC sp_spaceused;
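As an alternative, you can query sys.database_files yourself; this should also work in Azure SQL. The size column counts 8 KB pages, so divide by 128 to get MB:

```sql
-- list the data and log files of the current database, in MB
SELECT name, size / 128.0 AS size_mb
FROM sys.database_files;
```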

 

Not all commands are allowed in Azure SQL, as it is a PaaS solution. See here for the differences:

https://www.codeguru.com/csharp/the-difference-between-sql-server-and-sql-azure/ 

https://learn.microsoft.com/en-us/azure/azure-sql/database/features-comparison?view=azuresql 

Permissions comparison between on-premises SQL Server and Azure SQL Database

 

The Azure SQL engine is always on the latest version and is upgraded automatically, but you need to upgrade the compatibility level of your databases in Azure SQL manually. Check the compatibility level of your databases with this command:

SELECT name, compatibility_level FROM sys.databases;
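To upgrade a database, you can then raise its compatibility level with ALTER DATABASE. The database name MyDb and the level 150 (SQL Server 2019 behavior) below are only examples; pick the level that matches your target version:

```sql
-- example: raise database MyDb to compatibility level 150
ALTER DATABASE [MyDb] SET COMPATIBILITY_LEVEL = 150;
```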

Azure SQL Database Compatibility with SQL Server

The version of the SQL Server database engine run by Azure SQL Database, Azure SQL Managed Instance and Azure Synapse Analytics is always ahead of the on-premises version of SQL Server, and includes the latest security fixes. This means that the patch level is always on par with or ahead of the on-premises version of SQL Server, and that the latest features available in SQL Server are available in these services.

https://en.wikipedia.org/wiki/Microsoft_Azure_SQL_Database 

More Information:

https://database.guide/6-ways-to-check-the-size-of-a-database-in-sql-server-using-t-sql/ 

https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server?view=azuresql

Product:

Microsoft Azure SQL

Issue:

Cannot change the default database for a SQL user in Azure SQL. The user cannot log in to the database if it is not possible to select a database in the login dialog (like in SSMS). You get this error when you try to change the default database for an Azure SQL user. If you create a SQL user that only resides in a single database, he can only log in to that database. With Azure Data Studio you can select the database name in the login dialog.

Solution:

Add the user to the master database too, with this command:

create user pbi_reader for login pbi_reader

Before that, you may have run the command below to create the login:

create login pbi_reader with password = 'Password!'

Switch to the user database, and enter the code below to create a user there:

create user pbi_reader for login pbi_reader

grant select on schema::DM to pbi_reader

The commands above give the SQL user pbi_reader access to the tables that have the schema DM.

Azure SQL does not support the DEFAULT_DATABASE, DEFAULT_LANGUAGE, CHECK_EXPIRATION, and CHECK_POLICY options.

 

If you are db_owner in the Azure SQL database, you can give an Azure AD account access by using a command like this:

CREATE USER [donald.duck@domain.com] FROM EXTERNAL PROVIDER WITH DEFAULT_SCHEMA = DM;

To add a user as db_datareader to a specific database, use this command:

EXEC sp_addrolemember 'db_datareader', 'pbi_reader'

To list users in database, use this command:

SELECT * FROM SYS.DATABASE_PRINCIPALS
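The view above also returns roles and system principals. A variant that lists only users, filtering on the principal type (S = SQL user, E = external/Azure AD user, X = external group):

```sql
-- list SQL users and Azure AD users/groups in the current database
SELECT name, type_desc, create_date
FROM sys.database_principals
WHERE type IN ('S', 'E', 'X');
```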

To be able to connect from Power BI to Azure SQL, you need to open the firewall in the database.

https://www.mssqltips.com/sqlservertip/5953/create-power-bi-connection-to-azure-sql-database/

https://hevodata.com/learn/azure-to-power-bi/

Power BI in the cloud uses a range of IP addresses that changes every week. If your database, in the cloud or on-prem, needs to whitelist IP addresses in the firewall to grant access, it is best to try to use a URL/domain rule in the firewall to allow Power BI in the cloud to have access. You can also create a support ticket with the Microsoft Azure team to get a list of the IP addresses that are used by Power BI.

For an on-prem database, the best solution is to install the Azure on-premises data gateway; this box keeps the connection between on-prem and the Azure-based Power BI service, and your on-prem databases talk to the on-prem gateway.

The on-premises data gateway acts as a bridge. It provides quick and secure data transfer between on-premises data, which is data that isn’t in the cloud, and several Microsoft cloud services.

For Azure SQL in the cloud, it can be simpler to allow all cloud-based services access: inside https://portal.azure.com, go to your SQL server page, then to Networking, and enable “Allow Azure services and resources to access this server”. This can make it possible for Power BI in the cloud to access your Azure SQL database.

You may also be able to set an Azure SQL firewall rule with a command like this (in SSMS):

exec sp_set_firewall_rule N'Allow DB Connections', '10.0.0.2', '10.0.0.2';

 

More information:

https://www.mssqltips.com/sqlservertip/5953/create-power-bi-connection-to-azure-sql-database/ 

https://www.microsoft.com/en-us/download/details.aspx?id=56519 

https://learn.microsoft.com/en-us/power-bi/admin/power-bi-allow-list-urls 

https://radacad.com/the-power-bi-gateway-all-you-need-to-know

https://learn.microsoft.com/en-us/azure/analysis-services/analysis-services-gateway

https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-onprem

https://chartio.com/learn/databases/grant-sql-server-table-permissions/

https://www.guru99.com/sql-server-create-user.html

Azure SQL Limitations compared with a SQL Server Enterprise

https://learn.microsoft.com/en-us/sql/azure-data-studio/quickstart-sql-server?view=sql-server-ver16

https://learn.microsoft.com/en-us/sql/azure-data-studio/quickstart-sql-database?view=sql-server-ver16

https://learn.microsoft.com/en-us/sql/azure-data-studio/download-azure-data-studio?view=sql-server-ver16&tabs=redhat-install%2Credhat-uninstall#install-azure-data-studio

https://devblogs.microsoft.com/premier-developer/secure-access-to-azure-sql-servers-for-power-bi/

Product:

Planning Analytics 2.0.9.13

Microsoft Windows 2019 server

Issue:

If I get a text file with the AD usernames that should be inside my TM1 application, how do I remove the users from TM1 that are not in the list?

This is for a TM1 application that uses CAM security and is connected to CA11 for login.

If you have an AD group called “Tm1people”, you can create such a list with one of these commands:

IN CMD

dsquery group -name "Tm1people" | dsget group -members > D:\user_handling\keepusers.txt

IN POWERSHELL

Get-ADGroupMember -identity "Tm1people" | select SamAccountName
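The PowerShell line above only prints the account names to the console. A variant (assuming the ActiveDirectory module from RSAT is installed) that writes them to the same file as the CMD example:

```powershell
Get-ADGroupMember -Identity "Tm1people" |
    Select-Object -ExpandProperty SamAccountName |
    Out-File -Encoding ascii D:\user_handling\keepusers.txt
```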

 

Suggested Solution:

Create two TI processes: one that loads the list of users into a temp dimension, and a second process that checks every user in the TM1 application against that temp dimension list. Log in to TM1 Architect as administrator and create a new TI process.

In process ONE, go to the data source tab, select Text, and select the file D:\user_handling\keepusers.txt

Click Preview, and you should get a list of the users’ Active Directory account names.

Change the variable name to vUser and set the Contents column to Other. This makes the variable accessible on the Advanced tab.

In the Parameters tab, create a pSure parameter that you need to answer JA to, to make the process run.

In the Advanced – Prolog tab, enter this code:

 

# only execute if parameter pSure is JA
if (pSure @<> 'JA');
ProcessQuit;
endif;


#--- variables section ---
sCube = '}ClientGroups';
sDimName1 = '}Clients';

#-- enter the domain name of your company
sDomainName = 'adname/' ;
sDimName2 = 'TempUsersList';

#--- file name setup for debug text file
sFileName= 'debugfile1.txt';
sFilePath = 'd:\temp\';
sDEBUGFILE = sFilePath | sFileName ;

# create a temporary dimension to hold all users to keep
if (DimensionExists( sDimName2 ) =1);
DimensionDestroy(sDimName2);
endif;
DimensionCreate( sDimName2 );

 

In the Metadata tab, enter this code:

 

#-- add domain name to username
sFullUser = sDomainName | vUser;
# add users from list to the temp dimension
DimensionElementInsertDirect( sDimName2, '' , sFullUser, 'S' );

 

In the Epilog tab, enter this command to start the second process:

# call the other process to clean out users
ExecuteProcess( 'second processname');

 

Save the process, and create a new process, the second process TWO.

In Advanced – prolog tab, enter this code:

 

#--- variables section ---
sCube = '}ClientGroups';
sDimName1 = '}Clients';
#-- enter the domain name of your company
sDomainName = 'adname/' ;
sDimName2 = 'TempUsersList';

#--- file name setup for debug text file
sFileName= 'debugfile2.txt';
sFilePath = 'd:\temp\';
sDEBUGFILE = sFilePath | sFileName ;


#-- in a while loop, check if the user clients exist in the dimension list --

#-- get number of users in clients dimension
ElmCount = DIMSIZ(sDimName1);
#-- loop backwards, so that deleting a client does not shift the index
#-- of the clients that have not been checked yet
iElm = ElmCount;
#-- do this for each user in client dimension
WHILE(iElm >= 1);
#-- get the user alias from the list
sElement = DIMNM( sDimName1, iElm );
sElementAlias = ATTRS( sDimName1, sElement, '}TM1_DefaultDisplayValue' );

#-- check if he is not in the list from the file
nResult = DIMIX( sDimName2, sElementAlias );
if( nResult = 0 );
#-- check if the user is admin
vGroupAdmin = CellGetS( sCube , sElement , 'ADMIN' );
If (vGroupAdmin @<> 'ADMIN' );
sPrincipalName = DimensionElementPrincipalName( sDimName1, sElementAlias );

# check that name is not blank
if (sPrincipalName @<> '');
#--- remove user from security cube in tm1
#--- only if user is not part of ADMIN group

#--- debug only: write out data to file
# ASCIIOUTPUT ( sDEBUGFILE, sPrincipalName, sElement, sElementAlias );

DeleteClient ( sPrincipalName );
endif;

endif;

endif;

iElm = iElm - 1;
END;

 

In the epilog tab, enter this code:

#-- Remove the temp list of users to prevent accidental runs
DimensionDestroy(sDimName2);

 

Save the process.

Create the keepusers.txt file and test run the process in a sample application with many users.

You may need to add the command below in the Epilog:

SECURITYREFRESH;

More Information:

https://blog.technotesdesk.com/export-a-list-of-members-from-an-active-directory-group-to-a-text-file 

https://everanalytics.wordpress.com/2015/09/15/manage-user-sessions-on-your-ibm-cognos-tm1-server/ 

DIMNM TM1 Function: Use and Syntax

https://exploringtm1.com/dimsiz-tm1-function-use-and-syntax/ 

https://www.wimgielis.com/tm1_clientsgroups_EN.htm

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=stf-deleteclient

How list users in TM1 with python:

https://github.com/wimgielis/Count-and-list-TM1-users 

https://www.tm1forum.com/viewtopic.php?t=15531

Product:

Planning Analytics 2.0.9.13

Microsoft Windows 2019 server

Problem:

If I remove a user from the TM1 Security dialog, will he come back if he has access in Cognos Analytics, when we use CAM security in TM1?

Solution:

Yes. If a user can log in to CA11, which is used as the validator for TM1 access (with the IntegratedSecurityMode=5 parameter in the tm1s.cfg file), then the user will pop up inside the TM1 Architect security Clients/Groups dialog.

The AD group he is part of will not be listed, but if you have previously added an AD group into TM1, then he will be marked with an X to show that he belongs to that AD group.

This means that you can remove a user from the security Clients/Groups dialog in TM1 Architect, and the next time he logs in to TM1 he will come back. He will also get back the security he had, as he belongs to the same AD groups as before.

You cannot remove the AD groups from the security Clients/Groups dialog, because then the security you have set on that group in TM1 is gone.

If you set security directly on the user, that security is also gone when you remove the user from TM1.

 

To remove several users at once from a TM1 application with CAM security, create a text file with each AD account name (that you want to be removed) on its own row. Save the file as D:\user_handling\removeuser.txt on the TM1 server.

Log in to TM1 Architect directly on the TM1 server, and create a new TM1 process.

Select Text as data source, and select the D:\user_handling\removeuser.txt file.

Click Preview, and you should see a list of the names.

Go to the Variables tab, and change the name of the first (and only) variable to vUser.

Change the Contents column to Other.

Go to the Advanced – Prolog tab, and enter the code below:

sCube = '}ClientGroups';
sDimName = '}Clients';

#-- enter the name of the company domain as it looks in CA11 below --
sDomainName = 'CompanyAD/' ;

#-- file name setup for debug text file --
sFileName= 'debugfile1.txt';
sFilePath = 'd:\temp\';
sDEBUGFILE = sFilePath | sFileName ;

In the Metadata tab, enter the code below:

 

#--- add domain name to username
sFullUser = sDomainName | vUser;

#--- find the user's alias in the client dimension and get the tm1 internal name
sPrincipalName = DimensionElementPrincipalName( sDimName, sFullUser );
#--- check that user is not admin
vGroupAdmin = CellGetS( sCube , sPrincipalName , 'ADMIN' );
#--- remove the remark below to debug: write out data to file
# ASCIIOUTPUT ( sDEBUGFILE, sPrincipalName, vGroupAdmin );

If (vGroupAdmin @<> 'ADMIN' );
#--- remove user from security cube in tm1
#--- only if user is not part of ADMIN group
DeleteClient ( sPrincipalName );
endif;

Save the TM1 TI process, and test run it in a sample database first.

More information:

https://www.ibm.com/docs/en/cognos-tm1/10.2.2?topic=groups-deleting-user#DeletingaUser_N1704BD 

https://succeedium.com/teamone/doc/users.html#deleting-users 

 

Product:

Microsoft SQL Azure

Issue:

How do you create a user in Azure SQL, as you do not have a GUI for this in SSMS?

Solution:

Connect to Azure SQL with SSMS as administrator.

Connect to your user database, and enter the SQL statements below to create the user Roger and give it datareader rights:

CREATE USER Roger
WITH PASSWORD = 'Password!'

ALTER ROLE db_datareader ADD MEMBER Roger

This user only exists in the database selected.

To create a user that needs access to more than one database, select the master database and enter the SQL below:

CREATE LOGIN Roger
WITH PASSWORD = 'Password!' 

CREATE USER Roger FOR LOGIN Roger

(You need to create the user in the master database too, so he can log in when master is the default database.)

Then switch to your user database and enter this SQL statement:

CREATE USER Roger FOR LOGIN Roger

ALTER ROLE db_datareader ADD member Roger

 

The best approach is to use an AD group instead, and give that SQL login the needed rights in the database.
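A sketch of the AD group approach, where the group name DM_Readers is only an example: create the group as an external user in the database and grant the role to the group, so that membership is managed in Azure AD instead of in SQL:

```sql
-- create a database user for an Azure AD group (example name)
CREATE USER [DM_Readers] FROM EXTERNAL PROVIDER;
-- give the whole group read access
ALTER ROLE db_datareader ADD MEMBER [DM_Readers];
```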

 

More Information:

https://stackoverflow.com/questions/55230724/set-default-database-to-user-sql-server-azure 

https://learn.microsoft.com/en-us/sql/relational-databases/security/contained-database-users-making-your-database-portable?view=sql-server-ver16 

-- =======================================================================================
-- Create User as DBO template for Azure SQL Database and Azure Synapse Analytics Database
-- =======================================================================================
-- For login login_name, create a user in the database
CREATE USER <user_name, sysname, user_name>
FOR LOGIN <login_name, sysname, login_name>
WITH DEFAULT_SCHEMA = <default_schema, sysname, dbo>
GO

-- =======================================================================================
-- Create Azure Active Directory User for Azure SQL Database and Azure Synapse Analytics Database
-- =======================================================================================
-- For login <login_name, sysname, login_name>, create a user in the database
-- CREATE USER <Azure_Active_Directory_Principal_User, sysname, user_name>
-- [ { FOR | FROM } LOGIN <Azure_Active_Directory_Principal_Login, sysname, login_name> ]
-- | FROM EXTERNAL PROVIDER
-- [ WITH DEFAULT_SCHEMA = <default_schema, sysname, dbo> ]
-- GO

-- Add user to the database owner role
EXEC sp_addrolemember N'db_owner', N'<user_name, sysname, user_name>'
GO

Product:
Planning Analytics 2.0.9.13  ( IBM_PAfE_x64_2.0.82.5.xll )
Microsoft Windows 2019 server

Issue:
How do you publish an Excel sheet with a dynamic report to TM1WEB?

Solution:

You publish the Excel sheet to PAW, and it will be available in both TM1WEB and PAW.

To be able to create an Action Button inside Excel for Planning Analytics, you need to enable trust for VBA.

Inside Excel, go to File – Options – Trust Center – Trust Center Settings button – Macro Settings.

Mark “Trust access to the VBA project object model”. Click OK.

In many cases you also need to have “Optimize for compatibility” marked.

Go to Excel Options – General – User Interface options, mark “Optimize for compatibility“.

Click OK.

To be able to publish an Excel dynamic report, you must save the report before you publish it. It is recommended to save it with the same name as you want it shown under in TM1WEB.

Click on the Publish icon in the Excel ribbon.

Select the application folder where you want your websheet, and enter a name. Above we have expanded the Planning Sample applications.

Mark “make public” if you want other users to see the websheet in TM1WEB. Press the Publish button.

 

If you get a blank page in PAW when you insert an Excel sheet into a workbook, while the sheet works fine in TM1WEB, you need to ensure that the firewall ports between the TM1 web server and the Planning Analytics Workspace server are open on ports 9510 and 9511. The websheet that shows inside PAW is rendered from TM1WEB.

Access to TM1WEB, for both the end user and the PAW server, is essential to make it (PAfE) work.

More information:

https://www.ibm.com/support/pages/blank-white-page-rendered-while-opening-websheets-within-planning-analytics-workspace

https://www.ibm.com/support/pages/planning-analytics-excel-task-pane-empty-and-grayed-out-when-connecting-cam-secured-tm1-server

https://qmetrix.com.sg/ibm-planning-analytics-for-excel-pax-vs-tm1-perspectives/

Product:

Planning Analytics 2.0.9.13
Microsoft Windows 2019 server

Issue:

How do you install the new PAfE (Planning Analytics for Excel) for the end user? Previously known as PAX.

Solution:

Check which version of Excel you have. Go to File – Account. Click on About Excel. Check the bold line:

Microsoft® Excel® for Microsoft 365 MSO (Version 2212 Build 16.0.15928.20196) 64-bit

If it says 64-bit, then you need to download the 64-bit xll file; otherwise you need to download and use the 32-bit xll file.

Go to IBM Fix Central and search for release level: BA-PAXL-2.0.82 IBM Planning Analytics for Microsoft Excel 2.0.82

Download the version that matches the version of PAW you have installed. Check here what works well: https://www.ibm.com/support/pages/node/6519826

Copy the file IBM_PAfE_x64_2.0.82.5.xll to a folder on your laptop – like c:\program files.

Right-click on the file IBM_PAfE_x64_2.0.82.5.xll and select “Create shortcut”.

Click Yes on the question above, and place the shortcut on your desktop.

You should now have an icon like the one above – click on this icon to start PAfE (the new PAX) on your laptop.

If you get a sheet with garbage characters inside Excel, then you have downloaded the wrong bit version, for example the 64-bit version in a 32-bit Excel.

Inside Excel click on Planning Analytics and options.

Click on IBM and then ADD button.

Enter the full URL to your PAW server. Check with your company for the server name of your PAW server. This is what you need to get PAfE to work.

Click Test Connection and then Save.

Inside Excel, you may also need to allow the xll file to run via “Enable all code published by this publisher” under File – Options – Trust Center.

Before you can publish an Excel workbook with buttons to TM1WEB, you need to save it as a .xlsm file. Then you can publish it to a TM1 application folder.

After you publish it, go back into the publish dialog, right-click your TM1 websheet, and select “public” to make it accessible to all users.

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=analysis-publish-workbook-tm1-server-application-folder

More information:

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=icpame-downloading-upgrading-planning-analytics-microsoft-excel-single-xll-add-in-versions-2065-later

https://www.ibm.com/support/pages/download-fixes-fix-central

https://quebit.com/askquebit/IBM/demystifying-the-pa-pafe-paw-passl-install-files-from-ibm-fix-central/

https://exploringtm1.com/install-planning-analytics-workspace-to-windows-2016/

https://exploringtm1.com/how-to-install-planning-analytics-workspace-to-windows-server-2019/

https://exploringtm1.com/how-to-upgrade-install-planning-analytics-for-excel-pax/

Product:
Planning Analytics 2.0.9.13

TM1_version=TM1-AW64-ML-RTM-11.0.913.10-0
Microsoft Windows 2019 server

Issue:

Cannot read data from an ODBC connection in a TI process.

Solution:

Check that the ODBC connection still exists, and that the TM1 service account has access to the ODBC connection.

 

How to set up an ODBC connection for TM1:

On the server, go to Control Panel – Administrative Tools – ODBC Data Sources (32-bit).
Click on the System DSN tab.
Click on the Add button.
Select SQL Server Native Client 11.0 and press the Finish button.

Fill in a name, and the server name that you can connect to the database on.

Enter a SQL native login that you can use to test the connection.

Click Next – Next – Finish.

Click the Test Data Source button; you should get this if you did it correctly:

Microsoft SQL Server Native Client Version 11.00.5058

Running connectivity tests…

Attempting connection
Connection established
Verifying option settings
Disconnecting from server

TESTS COMPLETED SUCCESSFULLY!

Now go into your TM1 application, and create a new TI process.

Select ODBC as data source, and pick your ODBC connection from the list.

Enter a working SQL native username and password.
Enter a SELECT SQL statement that you have tested in SSMS, so you know it brings some data back.
Press Preview to check that the SQL connection works – you should see your data.

Go to the Variables tab.

The variable name suggestions are created from the first data line; you can change them to vNames that are more descriptive. Do not use spaces in variable names.
For the lines of data that you want to use, change the Contents column to ‘Other’ as shown above. Click on the Advanced tab.

Go to the Prolog tab, and enter some code to test this process:

# variables setup - set them all here as empty at least

sAccount = '';

# file name setup for debug text file
sFileName= 'debugfile2.txt';
sFilePath = 'c:\temp\';
sDEBUGFILE = sFilePath | sFileName ;

# get the date and time to set a stamp in the log file
# https://edu.cubewise.com/tm1-function-for-rules-timst/

sNowTime = NOW();
sDATETIME = TIMST( sNowTime , '\Y-\m-\d \h:\i' );

Then go to the Data tab, and enter code to write out the data to a file for debugging.

# set the variable from the data source to a variable in the code and print it out
sAccount = AccountDescription;

ASCIIOUTPUT ( sDEBUGFILE, sAccount, sDATETIME );

Save the process with a name that describe the function.

Run the process.

Then, to check that it works, go to c:\temp and open the text file.

The value from the database column [AccountDescription] comes first, and then a date-time stamp from the code.

As we use the TM1 data source tab, we do not need to use the ODBCOpen command to get the data in the Data tab.

Planning Analytics should work with both 32-bit and 64-bit ODBC connections on the Windows server, as the parameter below is on by default.

EnableODBCProxy makes 32-bit data source names available to TurboIntegrator processes on 64-bit machines.

Parameter type: optional, dynamic

EnableODBCProxy is useful if a 64-bit driver is not available. Each proxied connection creates a 32-bit tm1odbcproxy.exe process for the duration of the connection. EnableODBCProxy is true by default, but you can disable the feature by adding EnableODBCProxy=false to tm1s.cfg.

This means that your 64-bit TM1 server can use 32-bit ODBC connections. But for best results you should use a 64-bit connection in Windows server.
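If you do want to turn the proxy off, the line in tm1s.cfg would look like this (the parameter is dynamic, so a restart should not be needed):

```
EnableODBCProxy=false
```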

More information:

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=tf-odbc-turbointegrator-functions

https://code.cubewise.com/odbc-connect

https://quebit.com/askquebit/IBM/how-to-speed-up-tm1-odbcoutput/

https://www.mssqltips.com/sqlservertip/5030/import-and-export-data-between-sql-server-and-cognos-tm1-cube/

Sending Cube Data into a Database