Product:
Cognos Controller 11.0.1200.28

Microsoft Windows 2022 Server

Issue:

Most users experience that the Cognos Controller client hangs when they open certain selection dialogs. For example, go to Maintain > Account Structure Reports, select codes, and click the … button for the account field to get a list of all the accounts. At this point the Cognos Controller client hangs, and you have to stop Cognos Controller from Task Manager.

Possible Solution:

Run a database optimize from inside the Cognos Controller client:

  1. To run the optimization immediately, the system must be in single user mode. Go to Maintain > User > Single Mode and set the system to single mode.
  2. Click Maintain > Database > Optimize. The Optimize Database window opens.
  3. Select the Rebuild Structure Tables check box to rebuild the structure tables.
  4. Select Rebuild Indexes (advanced option).
  5. Click Run

 

Other things you can try to solve the issue are:

Clear the cache in Cognos Controller:

Inside the Controller client program, go to Maintain > Special Utilities > Clear Local Cache.

 

Change the cache folder for the Cognos Controller client:

  1. Launch Windows Explorer.
  2. Open the folder where Controller is installed.

  • By default this is: C:\Program Files\ibm\IBM Cognos Controller Local Client

  3. Open the file "CCR.exe.config" in Notepad++.
  4. Search for an entry similar to the following:
<add key="CacheDir" value=""/>
  5. Modify this entry appropriately, for example similar to the following:
<add key="CacheDir" value="D:\Controller\Cache" />
  6. Save and test.

Then start the Cognos Controller client and try to open the same account list.

 

There are three types of cache folders used by the Cognos Controller client:
(1) The Microsoft Internet Explorer cache, at C:\Documents and Settings\%USERNAME%\Local Settings\Temporary Internet Files
(2) The Microsoft .NET assembly cache, at C:\Documents and Settings\%USERNAME%\Local Settings\Application Data\assembly
(3) The Controller user cache files, at C:\Documents and Settings\%USERNAME%\Application Data\Cognos\ccr\
(On newer Windows versions these folders are under C:\Users\%USERNAME%\ instead of C:\Documents and Settings\%USERNAME%\.)

If there are damaged files in these folders, you can get strange errors inside the Cognos Controller client program.

 

More information:

Optimize the Database – IBM Documentation

FAQ – What does an ‘internal’ database optimise do, and how often should I run a database optimize?

https://www.ibm.com/support/pages/how-modify-controller-use-different-location-its-temporary-cache-files 

https://www.ibm.com/support/pages/error-systemxml-root-element-missing-inside-multiple-menus-controller-caused-appdata-redirected-unc-path 

https://www.ibm.com/support/pages/explanation-why-cognos-controller-client-side-cache-file-folders-can-be-very-large-and-how-housekeep-them

https://www.ibm.com/support/pages/how-automatically-clear-local-cache-when-closing-controller

https://www.ibm.com/support/pages/controller-microsoft-sql-database-optimisation-and-maintenance-proven-practice-best-practice 

https://www.ibm.com/support/pages/massive-sql-transaction-logs-caused-rebuilding-sql-indexes

https://www.ibm.com/support/pages/reducing-size-controller-database-deleting-period-locking-company-records-table-xacclock

 

You can use the Cognos Controller database optimize function to make the following database changes:

  • Remove Period Zero Values – clears the value 0 from selected period tables.
  • Rebuild Structure Tables – gives quicker access to data.
  • Rebuild Indexes (advanced option) – rebuilds all indexes in the database to enhance performance and give quicker access to data.
  • Analyze Schema (advanced option) – reanalyzes the schema for Oracle databases.

Product:
Planning Analytics Workspace version 96
Microsoft Windows 2019 server

Issue:

How do I schedule a backup of PAW to disk?

Solution:

Use the Task Scheduler that is built into Windows Server.

Go to the Control Panel.

Go to Task Scheduler.

Click Create Basic Task.

Click Next.

Select Weekly (if that is the schedule you want) and click Next.

Set a time and day of week, and click Next.

Select "Start a program" and click Next.

Enter "powershell.exe" as the program, and in the arguments field enter the script you want to run, after the -file parameter:

-file   D:\PAW96\scripts\backup.ps1

The path above can be different in your environment. Click Next.

Click Finish.

The task should now stop and restart the PAW services and make a backup for you, in a folder such as D:\PAW96\backup\backup_2024_10_18_23_30_00.
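The content of backup.ps1 depends on your environment. Below is only a rough sketch of what such a script could look like; the paths, the backup folder naming, and the assumption that scripts\paw.ps1 stop / start is used to stop and start PAW are all examples that you must adapt and test yourself:

# Hypothetical backup.ps1 sketch - adjust all paths to your own PAW installation
$pawHome = 'D:\PAW96'
$stamp   = Get-Date -Format 'yyyy_MM_dd_HH_mm_ss'
$target  = Join-Path $pawHome ('backup\backup_' + $stamp)

# Stop the PAW services before copying files (assumes the standard scripts\paw.ps1 helper)
& (Join-Path $pawHome 'scripts\paw.ps1') stop

# Create the backup folder and copy the folders you want to protect (example: config)
New-Item -ItemType Directory -Path $target -Force | Out-Null
Copy-Item -Path (Join-Path $pawHome 'config') -Destination $target -Recurse

# Start the PAW services again
& (Join-Path $pawHome 'scripts\paw.ps1') start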

To get it to work, you may have to go back into the task's Properties and change the task to run under a system account.

You should also limit the task so that it does not run for too long.
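If you prefer to create the task from a command prompt instead of the wizard, the schtasks.exe tool can do the same thing. The task name, schedule, and script path below are only examples to adapt:

schtasks /create /tn "PAW weekly backup" /tr "powershell.exe -File D:\PAW96\scripts\backup.ps1" /sc weekly /d SAT /st 23:30 /ru SYSTEM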

 

 

More Information:

https://blog.netwrix.com/how-to-automate-powershell-scripts-with-task-scheduler 

https://exploringtm1.com/how-to-upgrade-planning-analytics-workspace/ 

https://blog.atwork.at/medium.aspx?id=9a0dc8a0-5b51-40d4-8dd7-c0cab4e5564c&date=/post/2020/06/25/ 

https://blog.danskingdom.com/Run-PowerShell-as-another-user/

Product:
Planning Analytics Workspace 96
Microsoft Windows 2019 Server

Issue:
How do I synchronize two objects in PAW?

Solution:

Go into PAW, and create a new Book.
Expand the cube view you want to show.
Add the view to the book.

If you want the plan_report dimension (budget) to be selectable, click on the dots (…) for the dimension and choose Add widget as > Selector list.

Place the selector box above the view, in a nice format.

Select Open set editor to choose which elements should be possible to select.

After you have selected the elements the users should be able to select, click Apply.

Now you need to set up the synchronization. First, with no item selected, go to Properties > Synchronize and check that the synchronization group is Sheet.

Then click on the budget widget and go to Properties and Synchronize

Ensure that Synchronize set is turned on.

Then click on the view and go to Properties and Synchronize

Ensure that the dimension plan_report is selected, and none of the others. Then activate Synchronize hierarchies.

 

Save the book with a good name. It should now be possible to select "budget" in the selector to display those numbers.

 

More Information:

https://revelwood.com/ibm-planning-analytics-tips-tricks-how-to-set-up-synchronizations-in-ibm-planning-analytics-workspace/ 

Quick Tips – Selection Widget Synchronization

Product:

Microsoft Power BI portal

Issue:

When refreshing a Power BI report (e.g. a semantic model), we got an error even though the report previously worked in the Power BI portal workspace. The error is like:

Data source error{"error":{"code":"DMTS_OAuthTokenRefreshFailedError","pbi.error":{"code":"DMTS_OAuthTokenRefreshFailedError","details":[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage","detail":{"type":1,"value":" Device is not in required device state: compliant. Conditional Access policy requires a compliant device, and the device is not compliant. The user must enroll their device with an approved MDM provider like Intune…

 

Solution:

You must use the Microsoft Edge web browser to make this change; if you make the change in Chrome, it will not work.

On your report's semantic model, click the three dots and select Settings.

Go to Data source credentials and click Edit credentials on the data source that is marked as not working.

Enter your Windows account credentials again.

It should now show green, and this login affects all your reports, if the access point is the same.

Now click the Refresh icon to update your Power BI semantic model in the portal.

 

 

If the above does not work, try the following.

Download the semantic model to your computer.

Restart your computer.

Log in to your company Windows account.

Open the Power BI .pbix file and refresh the report.

You will be prompted to log in to the data source.

Enter the credentials for your Windows account.

When the report is refreshed and working in Power BI on your computer, save the report.

Publish the report to the Power BI portal, overwriting the previous report.

 

More Information:

https://learn.microsoft.com/en-us/entra/identity/conditional-access/concept-conditional-access-conditions

Blood, Sweat, and built-in compliance policy

There is a known issue with the Chrome browser that can cause this error to occur. If there is no device information sent in the sign-in logs, this might be the problem. Device information is sent when there is a PRT and the user is logged onto the browser. If the user is using Chrome, the Windows 10 accounts extension is needed.

If this is the case, you can test by asking the user to logon to the Edge browser or install the Windows 10 accounts extension to see if the issue is resolved.

https://learn.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-conditions

If they are signing in using Edge, they cannot use an incognito window because it will not pass the device state.

Is the Microsoft Authenticator app installed on the device? Microsoft Authenticator is a broker app for iOS and is needed to pass MFA and device claims to Azure AD.

Sign-ins from legacy authentication clients also do not pass device state information to Azure AD.

 

Some years ago Microsoft stated that the in-app browser must be a supported browser such as Edge; however, the Windows Store uses Edge, and it also does not pass the device ID, so the conditional access policy cannot be satisfied.

Many 3rd party applications use in-app browsers that are “not supported,” and it appears that Microsoft doesn’t offer the appropriate developer documentation that would allow 3rd parties to include this conditional access device information in their in-app browsers, even if they used Edge.

These are the links that were provided as dev resources:

https://www.graber.cloud/en/aadsts50131-device-not-required-state/ 

https://cloudbrothers.info/entra-id-azure-ad-signin-errors/ 

You may need to do the following in Chrome to get it to work.

These are the exact requirements (Chrome-side; your Azure AD setup has its own requirements):

Latest “Chrome Enterprise Policy List”: https://support.google.com/chrome/a/answer/187202?hl=en

GPO Settings
User Configuration\Policies\Administrative Templates\Google\Google Chrome\HTTP Authentication
-Kerberos delegation server whitelist
autologon.microsoftazuread-sso.com,aadg.windows.net.nsatc.net
-Authentication server Whitelist
autologon.microsoftazuread-sso.com,aadg.windows.net.nsatc.net
# Needed if you’re blocking extensions from being installed to whitelist this one
User Configuration\Policies\Administrative Templates\Google\Google Chrome\Extensions
-Configure the list of force-installed apps and extensions (Enabled)
ppnbnpeolgkicgegkbkbjmhlideopiji
-Configure extension installation allow list (Enabled)
ppnbnpeolgkicgegkbkbjmhlideopiji

Note: That extension ID I pulled from https://chrome.google.com/webstore/detail/windows-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji (Windows Accounts)

 

Product:
Planning Analytics 2.0.9.19

Microsoft Windows 2019 server

Issue:

How do I use variables in ASCIIOUTPUT on the Data tab, from a string built in the Prolog tab?

Solution:

Create the string with the variable names surrounded by % signs.

On the Data tab, use the EXPAND function to convert the variables to their correct values.

 

To export cube content, first build a view of the cube. You then need to create the variables; it is good to name them v1, v2, and so on.

In the Prolog, build up your string like this:

# add above subset to the view
cCounter2 =1;
WHILE(cCounter2 <= nSteps); 
sDimName = TABDIM(Cube, cCounter2) ;
ViewSubsetAssign ( Cube, ViewName, sDimName, SubName );
# build up the string to print out in data tab
IF (cCounter2 = 1) ;
stexttoexecute = ('%v' | numbertostring(cCounter2) | '%');
ELSE; 
stexttoexecute = stexttoexecute | ',' | ('%v' | numbertostring(cCounter2) | '%' );
ENDIF;
cCounter2 = cCounter2 + 1;
END;

 

Then print it out in the Data tab like this:

WHILE (cCounter3 > 0 ) ; 
# write the header in the file
ASCIIOutput( vFilePath, sHeaderLine ) ;
cCounter3 = 0;
END ;

ASCIIOutput ( vFilePath, EXPAND (stexttoexecute) , NumberToString( NValue ) ) ;

 

In this example we export the data from the revenue cube in the sample 24retail database.

Create a dummy cube with 15 dimensions that you use as the process data source.

Create prompts that determine what data to export; you need to enter the correct element names and measures.

Enter the code below in the PROLOG tab:

#****Begin: Generated Statements***
#****End: Generated Statements****

# set variables
Cube = pCube;

sName = GetProcessName();
ViewName = sName ;
SubName = sName ;

sHeaderLine = '';
cCounter1 = 1;
cCounter2 = 1;
cCounter3 = 1;

# erase view from before
ViewDestroy ( Cube, ViewName );

# create the path - you can get the values from a cube or made the filename up from the prompts
sPath = 'D:\temp\';
# create file name
vFile = 'Aexportdatafromcube.csv';
vFilePath = sPath | vFile;

# check number of dimensions
nSteps = CubeDimensionCountGet (cube) ;

# destroy the view first
ViewDestroy ( Cube, ViewName );
# erase all subset to the view
cCounter2 =1;
WHILE (cCounter2 <= nSteps); 
sDimName = TABDIM(Cube, cCounter2) ;
SubsetDestroy ( sDimName, SubName );
cCounter2 = cCounter2 + 1;
END;


# Create the new view and subset of selected data
ViewCreate ( Cube, ViewName );

WHILE(cCounter1 < nSteps);
# get the dimension name
sDimName = TABDIM(Cube, cCounter1) ;
IF ( sDimName @= 'Year' );
SubsetCreate ( sDimName, SubName );
SubsetElementInsert ( sDimName, SubName, pYear, 1 );
ELSEIF ( sDimName @= 'Month' );
IF (pMonth @= 'AllMonths');
SubsetCreateByMDX ( SubName, '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [' | sDimName | '] )}, 0)}' );
ELSE;
SubsetCreate ( sDimName, SubName );
SubsetElementInsert ( sDimName, SubName, pMonth, 1 );
ENDIF;

ELSEIF ( sDimName @= 'Version' );
SubsetCreate ( sDimName, SubName );
SubsetElementInsert ( sDimName, SubName, pVersion, 1 );
ELSE;
SubsetCreateByMDX ( SubName, '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [' | sDimName | '] )}, 0)}' );
ENDIF;
# create the header line to the file with dim names
IF (cCounter1 = 1);
sHeaderLine = sDimName;
ELSE; 
sHeaderLine = sHeaderLine | ',' | sDimName ; 
ENDIF;

cCounter1 = cCounter1 + 1;
END;

# add the measures to the last dimension
IF ( cCounter1 = nSteps) ;
sDimName = TABDIM(Cube, cCounter1) ;
SubsetCreate ( sDimName, SubName );
sHeaderLine = sHeaderLine | ',' | sDimName ;

# find the measure in last parameter
pMeasurePart = pMeasure ;
vScan_1 = SCAN ( ':' , pMeasurePart);
sMeasure = pMeasure ;
IF (vScan_1 = 0) ;
sMeasure = pMeasurePart ;
SubsetElementInsert ( sDimName, SubName, sMeasure, 1 );
ELSE;

WHILE ( vScan_1 > 0 );
# find the next colon 
vScan_1 = SCAN ( ':' , pMeasurePart);
IF (vScan_1 = 0) ;
sMeasure = pMeasurePart ;
ELSE ;
sMeasure = SUBST ( pMeasurePart, 1, vScan_1-1 );
# truncate the text to be only part after the colon
pMeasurePart= SUBST ( pMeasurePart, vScan_1+1, LONG (pMeasurePart ) - vScan_1);
ENDIF ;


# extra check if value is blank - do not insert the sMeasure
IF ( sMeasure @<> '' );
SubsetElementInsert ( sDimName, SubName, sMeasure, 1 );
ENDIF;
END;
ENDIF;
ENDIF;

# add the last column value 
sHeaderLine = sHeaderLine | ',' | pMeasureText ;

# add above subset to the view
cCounter2 =1;
WHILE(cCounter2 <= nSteps); 
sDimName = TABDIM(Cube, cCounter2) ;
ViewSubsetAssign ( Cube, ViewName, sDimName, SubName );
# build up the string to print out in data tab
IF (cCounter2 = 1) ;
stexttoexecute = ('%v' | numbertostring(cCounter2) | '%');
ELSE; 
stexttoexecute = stexttoexecute | ',' | ('%v' | numbertostring(cCounter2) | '%' );
ENDIF;
cCounter2 = cCounter2 + 1;
END;

###  set what to exclude from the view 
ViewExtractSkipCalcsSet ( Cube, ViewName, 1);
ViewExtractSkipRuleValuesSet ( Cube, ViewName, 0 );
ViewExtractSkipZeroesSet ( Cube, ViewName, 1 );


# Set source 
DataSourceType='VIEW';
DataSourceNameForServer=Cube;
DataSourceCubeview=ViewName;

# Change to get comma in the text file
DatasourceASCIIQuoteCharacter='';
DatasourceASCIIDelimiter = ',';

 

 

Enter the code below in the DATA tab:

WHILE (cCounter3 > 0 ) ; 
# write the header in the file
ASCIIOutput( vFilePath, sHeaderLine ) ;
cCounter3 = 0;
END ;

# print out the values from the cube - check if text or number on value
IF (Value_Is_String = 0);
ASCIIOutput ( vFilePath, EXPAND (stexttoexecute) , NumberToString( NValue ) ) ;
ELSE;
ASCIIOutput ( vFilePath, EXPAND (stexttoexecute) , ( SValue ) ) ;
ENDIF;

 

This should give you a CSV file in the D:\temp folder.
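The exact columns depend on the cube you export, but as a purely made-up illustration of the layout: the first line is the header built in the Prolog (the dimension names plus the measure column text), followed by one line per exported cell, for example:

Year,Month,Version,Product,Amount
2024,Jan,Budget,ProductA,123.45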

 

 

More Information:

https://community.ibm.com/community/user/businessanalytics/blogs/declan-rodger/2021/09/10/time-to-stop-using-expand 

The Expand Function in TI processes is used to convert Variable names enclosed in % symbols to the contents of the variable at run time.

The code below (on the Metadata or Data tab) exports the Country variable for the data record that is being processed:

TextOutput ( cFileName, Expand ( '%vsCountry%' ) );

The Expand function also converts its entire contents into one concatenated string; the two examples below would do the same thing:

TextOutput ( cFileName, vsCountry, vsStore, vsProduct, NumberToString ( vnVolume), NumberToString ( vnRevenue), NumberToString ( vnCost ), NumberToString ( vnProfit ) );

TextOutput ( cFileName, Expand ( '%vsCountry%, %vsStore%, %vsProduct%, %vnVolume%, %vnRevenue%, %vnCost%, %vnProfit%' ) );

 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=functions-expand

https://www.wimgielis.com/tm1_reworkingexcellayoutII_EN.htm

https://quebit.com/askquebit/debugging-a-turbointegrator-process/

Product:
Microsoft Windows 2019 server

Issue:

We need to check which server our server contacts to synchronize its time with.

Solution:

Log in to the Windows server.

Start a command prompt (CMD).

Enter the command:

w32tm /query /status

This gives you information such as:

stratum
leap indicator
precision
last sync
NTP server
poll interval
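Two related commands from the same w32tm tool can also be useful: /source shows only the name of the configured time source, and /peers lists all configured time peers and their status.

w32tm /query /source
w32tm /query /peers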

 

More Information:

https://learn.microsoft.com/en-us/windows-server/networking/windows-time-service/windows-time-service-tools-and-settings?tabs=config 

https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/ff799054(v=ws.11)

https://www.meinbergglobal.com/english/info/ntp-w32time.htm

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server

Issue:

What do the .cub$ files in the data folder mean? They increase the size of the backup zip file quite a lot.

Solution:
They are left over from a crashed SaveDataAll function. Did you have a hung TM1 instance recently?

They’re generated when you do a data save. The cube in memory is initially saved to the .cub$ file. If the save is successful then the .cub file is replaced by the .cub$ file and the latter is deleted.

If the file is still there then either:
– You’re currently doing a data save in which case deleting the file would be highly inadvisable; or
– It's left over from a past failed data save, in which case it can be deleted. (You might want to take a backup of the files before you delete them; the example below shows how to list them first.)
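To get an overview of any leftover .cub$ files before you decide what to delete, you can list them with PowerShell. The data directory path below is just an example; point it to your own TM1 database data folder:

# List leftover .cub$ files in the TM1 data directory (example path)
Get-ChildItem -Path 'D:\TM1\Data' -Filter '*.cub$' -Recurse | Select-Object FullName, Length, LastWriteTime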

More Information:

https://www.ibm.com/support/pages/savedataall-best-practices 

https://exploringtm1.com/tm1-file-extensions-understand-the-files-in-your-tm1-model/

https://www.ibm.com/docs/en/planning-analytics/2.1.0?topic=smtf-savedataall

 

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server

Issue:

Does the STR function round up the number before converting it to a string?

Solution:

Create a cube with some test values, and a TM1 TI process that uses STR on those values (a minimal sketch is shown below).

When you run it, the output shows that a 5 or 6 in the fifth decimal increases the value in the result, so STR rounds rather than truncates.
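A minimal TI sketch to test this yourself; the values and the output file path are made up, and the code is assumed to run in the Prolog of a process with no data source:

# Hypothetical test values - adjust the output path to a folder that exists
nValue1 = 2.54324;
nValue2 = 2.54326;
# STR ( number, length, decimal ) - here length 8 and 4 decimals
ASCIIOutput ( 'D:\temp\str_test.txt', STR ( nValue1, 8, 4 ), STR ( nValue2, 8, 4 ) );
# Based on the behaviour described above, the first value should come out as 2.5432
# and the second as 2.5433 (padded with leading spaces to length 8).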

More Information:

https://exploringtm1.com/str-tm1-function-syntax-use/

https://exploringtm1.com/dimnm-tm1-function-use-syntax/

https://exploringtm1.com/dimsiz-tm1-function-use-and-syntax/

 

STR converts a number to a string.

This is a TM1® rules function, valid in both TM1 rules and TurboIntegrator processes.

The number passed to the STR function must use . (period) as the decimal separator and , (comma) as the thousand separator. Any other decimal/thousand separators will cause incorrect results.

Syntax

STR(number, length, decimal)

  • number – The number being converted to a string.
  • length – The length of the string. If necessary, the function inserts leading blank spaces to attain this length.
  • decimal – The number of decimal places to include in the function result.

 

Product:

Planning Analytics Workspace version 96

Planning Analytics 2.0.9.19

Kate Agent Version 2.0.96.1686

Issue:

When you try to use the File Manager function in PAW Administration for a TM1 instance, you get an error like this:

The following administration agent error occurred – {"errorMessage":"Illegal char <:> at index 31: D:\\TM1\\donalds budget\\Data\\;D:\\TM1\\donalds budget\\Data_SysSettings\\model_upload","query":"/pakate-agent/v0/filemanager/donalds%20budget/files","httpStatusCode":500}

This happens when you use….

Multiple data directories

You can specify that you want IBM TM1 Server to use multiple data directories by separating the directory names with semicolons. When you specify multiple data directories, TM1 does the following.
  • Accesses cubes and dimensions from each of the specified directories. If there is a duplicate object, TM1 accesses the object from the first directory specified.
  • Writes changes to the directory where the object is located. When you create a new object, TM1 writes to the first directory you had specified.

For example, suppose you want to store dimensions in a directory called tm1dims, and cubes in a directory called tm1cubes. You would specify the following in the Tm1s.cfg file:

DatabaseDirectory="c:\tm1dims;c:\tm1cubes"

By concatenating the two directories, you can access these objects through Server Explorer as if they were in a single location.

 

 

Solution:

Downgrade to a previous version of the PAA agent; older versions do not seem to have this problem.

Ask IBM Support for help.

It looks like this function is not supported in newer versions of PAW.

 

 

More Information:

https://www.ibm.com/community/101/ibm-planning-analytics/planning-analytics-technotes/

https://www.ibm.com/docs/sv/planning-analytics/2.0.0?topic=SSD29G_2.0.0/com.ibm.swg.ba.cognos.tm1_inst.2.0.0.doc/c_datadirectory_n140458.htm 

From the 2.1.0 documentation:

Specifies the data directory from which the database loads cubes, dimensions, and other objects.

This parameter is not applicable to Planning Analytics Engine.

You can list multiple data directories by separating them with semicolons.

Important: The Planning Analytics database supports multiple data directories, but Planning Analytics Administration in Planning Analytics Workspace Local does not. If you are configuring your database for Planning Analytics Workspace Local, you must specify a single database.

 

Product:
Planning Analytics 2.0.9.19

Microsoft Windows 2019 Server

Issue:
After creating a new Content Store database for Cognos Analytics 11, you cannot start or create new TM1 instances. You get an error like:

[Cryptography]
[ ERROR ] CAM-CRP-1315 Current configuration points to a different Trust Domain than originally configured.

[Cryptography]
[ ERROR ] The cryptography information was not generated.

or you get error

The Cryptographic information cannot be encrypted. Do you want to save the configuration in plain text?

Solution:

This happens when Planning Analytics Local is a single-server installation, where the Admin Server and the Application Service are installed on the same server (in the same directory).

 

Note: Be sure to back up any files and directories before deletion

1) From the Cognos Configuration menu bar, click File > Export As.
*Save in decrypted format. Click Yes and save the file in a location outside the Planning Analytics installation directory.
**This step keeps a backup of the config file (cogstartup.xml).

2) Close Cognos Configuration

3) In Windows Services, stop all Planning Analytics services

4) From the <Planning Analytics install dir>\configuration\certs\ directory, delete the files CAMKeystore and CAMKeystore.lock

5) From the <Planning Analytics install dir>\configuration\ directory, delete the \csk directory

6) From the <Planning Analytics install dir>\configuration\ directory, delete the cogstartup.xml

7) From the <Planning Analytics install dir>\configuration\ directory, delete the file ‘caserial’ (if this file does not exist, continue with next step)

8) From the <Planning Analytics install dir>\temp\cam\ directory, delete the file ‘freshness’ (if this file does not exist, continue with next step)

9) Copy the cogstartup.xml file backup taken in Step 1 and paste in <Planning Analytics install dir>\configuration\ directory

10) Launch the Cognos Configuration

11) Save the configuration and start the Planning Analytics services
*The first save might take a few minutes to complete

 

More Information:

https://www.ibm.com/support/pages/node/5692682?mhsrc=ibmsearch_a&mhq=regenerate%20planning%20analytics%20keys 

Unable to save changes in Cognos Configuration.  It fails during the step Generate cryptographic information with an error.
ERROR:
[Cryptography]
[ERROR ] CAM-CRP-1093 Unable to read the contents of the keystore ‘E:/ibm/cognos/tm1_64/configuration/certs\CAMKeystore’. Reason: java.io.IOException: error construct MAC: java.security.NoSuchAlgorithmException: no such algorithm: 2.16.840.1.101.3.4.2.2 for provider BC.

Resolving The Problem

  • Re-create the cryptographic keys as described above.
  • Make a backup of the "<Planning Analytics install>/tm1_64/configuration/certs" folder.
  • Launch iKeyman from "<Planning Analytics install>/tm1_64/jre/bin" and perform the following steps in iKeyman:
  1. Click the Open button.
  2. Select configuration/certs/CAMKeystore, and use the password.
  3. Delete all the certificates.
  4. Select Personal Certificates and click Import.
  5. Import the pfx file.
  6. Rename the label to "encryption".
  7. From the drop-down, select Signer Certificates.
  8. Import the ibmtm1.arm file from the "bin64/ssl" location, name it ibmtm1, and click OK.
  • Start the "IBM Cognos TM1" service from services.msc.

https://www.ibm.com/support/pages/node/7100898