Planning Analytics


How do I create a simple check on input values in a TI process?


Add an IF statement in the Prolog tab that checks the input values against expected bounds. This is the simplest way to ensure that the values entered are realistic and of the correct size.

When you have parameters the user should enter, you need a simple check that the values entered are realistic.

ProcessQuit will terminate the process; the Metadata and Data tabs will not be executed.

ProcessBreak will stop processing the source and proceed directly to the Epilog tab (without returning an error from the TI).

More Information: 

This example simply checks whether an element entered as a parameter exists in a dimension, using DIMIX; if it does not, the process puts a message in the log and quits.

IF ( DIMIX ( 'Location', pLocation ) = 0 );
  sErrorMessage = 'The Location entered does not exist';
  ItemReject ( sErrorMessage );
ENDIF;
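The pattern can be illustrated outside TM1 with a small Python sketch that mimics DIMIX (the location names here are made up for illustration; DIMIX returns the element's index within the dimension, or 0 when the element does not exist):

```python
def dimix(dimension, element):
    """Mimic TM1's DIMIX: 1-based index of the element, or 0 if absent."""
    return dimension.index(element) + 1 if element in dimension else 0

# Stand-in for the Location dimension (element names are made up).
locations = ["Stockholm", "Oslo", "Copenhagen"]

def check_location(p_location):
    """Return an error message for an unknown location, like the Prolog check."""
    if dimix(locations, p_location) == 0:
        return "The Location entered does not exist"
    return ""

print(check_location("Oslo"))    # "" (valid input)
print(check_location("Berlin"))  # The Location entered does not exist
```

The same idea extends to any parameter: validate it in the Prolog before the Metadata and Data tabs run.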


Planning Analytics


The calculated cell does not contain a zero value; instead it contains an old number from before. Trace Feeders also shows that the empty cell has numbers that are used in the consolidation. The cube does not have FEEDERS itself, but it is fed from a different cube.

Running CubeProcessFeeders ( 'cubename' ) solves the problem until the next restart of the TM1 instance.


IBM is aware of a defect in some Planning Analytics versions: rule-derived cells are not calculated after a TM1 database restart when the cube does not have a feeder statement. This defect is addressed in a later Planning Analytics release, available 10 July 2023.

If we add a feeder to the rules file for the cube, the values are correct in the consolidated cell.

You can also create a TM1 TI process with:

CubeProcessFeeders ( 'thecubename' );

Then create a TM1 chore that runs it every week, but do not activate it.

Click "Create new chore" in TM1 Architect.

Select the TM1 TI process you created for "run cubefeeders" above.

In the next step, select that the chore should run every 7 days, and save it with the name "S. 1010 run cubefeeders".

Do not activate the chore.

Go to the TM1S.CFG file for your TM1 instance.

Add this line:

StartupChores=S. 1010 run cubefeeders

This means that at the start of the TM1 instance, the TI process in that chore is run before users can log in to the TM1 application.

In tm1server.log

21620 [] INFO 2023-07-13 14:12:59.743 TM1.Chore Executing Startup Chore "S. 1010 Run cubefeeders"
21620 [] INFO 2023-07-13 14:12:59.744 TM1.Process Process "S. 1111 run cubefeeders" executed by chore "S. 1010 run cubefeeders"


More Information: 

In this example we have cube A and cube B. Cube B is very similar to cube A, but it has one more dimension.

Cube A

Cube B

Say you have a sales value in cube A and you want to split it across the locations in cube B. The rule in cube B might look something like:

['sales'] = N: DB('cube A', !Time, !Measures) * DB('Location Split', !Location, !Time); (the latter being a lookup cube)

You will then have a feeder from A to B. It will look something like this:

['Sales'] => DB('Cube B', 'All Locations', !Time, !Measures);

In the example above, the Location dimension does not exist in cube A. In a case like this you need to select one item within the extra dimension. Here I have selected a consolidation called 'All Locations', which will in turn feed all its children. If you do not have a consolidation like this, create one and add all the children to it.
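The arithmetic that the rule performs can be sketched outside TM1. This Python example uses made-up sales and split percentages (all numbers and location names are assumptions for illustration):

```python
# Sales value in cube A (cube A has no Location dimension).
sales_cube_a = 1000.0

# Stand-in for the 'Location Split' lookup cube: fraction of sales per location.
location_split = {"North": 0.5, "South": 0.3, "East": 0.2}

# The rule in cube B computes, per leaf location:
#   ['sales'] = N: DB('cube A', ...) * DB('Location Split', !Location, ...)
cube_b_sales = {loc: sales_cube_a * pct for loc, pct in location_split.items()}

# 'All Locations' consolidates its children, so the total matches cube A,
# which is why feeding the consolidation feeds every leaf that needs a value.
all_locations = sum(cube_b_sales.values())
print(cube_b_sales)   # {'North': 500.0, 'South': 300.0, 'East': 200.0}
print(all_locations)  # 1000.0
```

Feeding the 'All Locations' consolidation from cube A marks every leaf location as fed, which is exactly what the feeder statement above achieves.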

Planning Analytics
Microsoft Windows 2019 server


How do I see the memory usage of feeders and other cube data?

Possible solution:

The }StatsByCube control cube captures information about the cubes in your TM1 application. You will need to enable the Performance Monitor, either by right-clicking your TM1 instance in Architect and choosing Start Performance Monitor, or by adding PerformanceMonitorOn=T to your TM1s.cfg file; the latter change requires a service restart to take effect.

After Performance Monitor starts, it takes a few seconds for the counters to begin writing to the }StatsByCube cube.

When you open }StatsByCube, the most useful view when starting out is to put the LATEST member from the }TimeIntervals dimension in the title area, with the }StatsStatsByCube and }PerfCubes dimensions in the rows and columns. The counters to focus on are Memory Used for Views and Number of Stored Views. These represent the amount of RAM (in bytes) used for Stargate views and the number of Stargate views created for each specific cube.

1 Megabyte = 1,048,576 bytes

The Memory Used for Feeders value below, 458,865,488 bytes, then corresponds to roughly 438 MB of RAM for feeders alone. In most cases the feeder memory does not change.
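The conversion is a single division using the byte count from the cube and the megabyte definition given above; a quick Python check:

```python
# Bytes reported by the Memory Used for Feeders counter in }StatsByCube.
feeder_bytes = 458_865_488

# 1 MB = 1,048,576 bytes (as stated above).
feeder_mb = feeder_bytes / 1_048_576

print(round(feeder_mb, 1))  # 437.6
```

The same division works for any of the memory counters in }StatsByCube, since they all report bytes.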

More Information: 

Planning Analytics Workspace 88
Microsoft Windows 2019 server


A new installation of PAW gives an error when you try to log in/connect to the site, when you use CAM security and are connected to a Cognos Analytics 11 installation.

The PMHub service parameter was not specified or is not one of the configured locations


Update the pmhub.html file on the CA gateway server, in the folder D:\Program Files\ibm\cognos\analytics\webcontent\bi, to include the PAW server name without port number, on the line:

// Update the following to point to the location of the pmhub service(s)
var pmhubURLs = ["","","","http://pawservername"];


Also check that the "pmhub.html" file name is all lowercase, with no uppercase characters (for example, "PMHub.html" would not be found).

- If Cognos Analytics is accessed through a gateway URL (for example by using IIS, like "http://<CAgateway>:80/ibmcognos/bi/v1/disp"), then the pmhub.html interoperability file must be placed in "<CA_Gateway_machine>/webcontent/bi/".
- If Cognos Analytics is accessed directly through an application tier or dispatcher (like "http://<CAdispatcher>:9300/bi/v1/disp"), then the pmhub.html file must be placed in each "<CA_Dispatcher_machine>/webcontent/" folder.


More information:


Microsoft Azure Blob storage


Put a file on blob storage with POSTMAN.

You get the error: 404 The specified blob does not exist



Change from GET to PUT in Postman to place a file on the blob storage.

Since the file you are asking for does not exist (you have not put it there yet), you get a 404 error.

To put a file on the blob storage you need to add headers like:

x-ms-blob-type  = BlockBlob

x-ms-date = 2022-11-02


An error like 400 Value for one of the query parameters specified in the request URI is invalid can mean that you are missing the container name and only have the base URL.

An error like 400 The requested URI does not represent any resource on the server can mean that you have not listed the file in the URL, only the container name; for a GET, the URL must include the blob to read.

An error like 403 This request is not authorized to perform this operation can mean that you are not logged in to the VPN that gives you access to the Azure storage.

An error like 400 An HTTP header that's mandatory for this request is not specified can mean that you are missing the x-ms-blob-type header.
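As a sketch of the request Postman sends, the same PUT can be assembled in Python with the standard library. The account, container, and blob names below are placeholders, and a real request would also need an authorization header or SAS token; the request is only built here, not sent:

```python
import urllib.request

# Placeholder URL: account, container and blob name are assumptions.
url = "https://myaccount.blob.core.windows.net/mycontainer/backup.zip"

# x-ms-blob-type is the mandatory header whose absence causes the 400 above.
headers = {
    "x-ms-blob-type": "BlockBlob",
    "x-ms-date": "2022-11-02",
}

# Build (but do not send) the PUT request. A GET against a blob that was
# never uploaded is what produces the 404 described above.
req = urllib.request.Request(url, data=b"file contents",
                             headers=headers, method="PUT")

print(req.get_method())                  # PUT
print(req.get_header("X-ms-blob-type"))  # BlockBlob
```

Sending the request would be a call to urllib.request.urlopen(req), once valid credentials are attached.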


More Information:


Microsoft PowerShell
Planning Analytics
Microsoft Windows 2019 server

I have backup zip files in a folder that I want to move to a different drive, to save space on the first drive.



Create a d:\script folder and, in Notepad++, create a copyoldfile.ps1 file with this content:

# PowerShell script to move old files
# older than a month from today's date

# set common start values
$abortFlag = 0

# get input parameters
# the from folder and to folder are passed as arguments to the script
[String]$FromFolder = $Args[0]
[String]$ToFolder = $Args[1]

# debug lines
Write-Host [$FromFolder]
Write-Host [$ToFolder]

# check if the args are empty and then stop the script
if ( $FromFolder -eq '' ) {
    Write-Host "Fromfolder is missing" -ForegroundColor Red
    $abortFlag = 1
}

if ( $ToFolder -eq '' ) {
    Write-Host "Tofolder is missing" -ForegroundColor Red
    $abortFlag = 1
}

if ( $abortFlag -eq 1 ) {
    Write-Host "The abort flag has been tripped. Exit script." -ForegroundColor Red
    Exit
}

# for each zip file in the folder, check the date and move files older than 30 days
Get-ChildItem -File -Path $FromFolder -Filter *.zip | Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } | Move-Item -Destination $ToFolder

The above script takes two values: the from folder and the target folder. It checks the last write time of each file in the from folder and moves the files that are older than 30 days. It only moves files that end with .zip.
When testing a PowerShell script, you can add -Verbose -WhatIf at the end of the line that performs the action.
The script checks that the values entered are not empty, and stops if they are. The output is not shown if you call the ps1 script from another program. To call this script from a TM1 process, enter something like this in the Prolog:
DatasourceASCIIQuoteCharacter = Char(39);

sExec = 'Powershell -ExecutionPolicy ByPass -file ' | SC_QUOTE_COMMAND | 'd:\script\copyoldfile.ps1' | SC_QUOTE_COMMAND | ' ' ;
sSourceDir = SC_QUOTE_COMMAND | 'D:\TM1 Data\Back Up' | SC_QUOTE_COMMAND | ' ' ;
sTargetDir = SC_QUOTE_COMMAND | 'd:\arkiv' | SC_QUOTE_COMMAND | ' ' ;

sCommand = sExec | ' ' | sSourceDir | ' ' | sTargetDir ;

## uncomment to debug the string sent to cmd
## ASCIIOUTPUT ( 'd:\temp\result.txt' , sCommand );

ExecuteCommand ( sCommand, 0 );


Since PowerShell only accepts double quotes (") around its file paths, we have to change the TI quote character to a single quote ('). Change the folders in the TI process to match your backup folders. Do your development and testing on your TM1 development server first.
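For comparison, the same move logic can be sketched in Python (the folder paths are whatever you pass in; the PowerShell version above is what the TI process actually calls):

```python
import shutil
import time
from pathlib import Path

def move_old_zips(from_folder, to_folder, days=30):
    """Move *.zip files whose last write time is older than `days` days."""
    cutoff = time.time() - days * 86400
    dest = Path(to_folder)
    dest.mkdir(parents=True, exist_ok=True)
    moved = []
    for zip_file in Path(from_folder).glob("*.zip"):
        # st_mtime is the last write time, matching the PowerShell filter.
        if zip_file.stat().st_mtime < cutoff:
            shutil.move(str(zip_file), str(dest / zip_file.name))
            moved.append(zip_file.name)
    return moved
```

Like the PowerShell script, it filters on the *.zip extension first and on file age second, so recent backups stay on the fast drive.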


More Information: 



Microsoft PowerShell

Microsoft Windows 2019 Server


How do I get the month value from the datetime function in PowerShell?


Enter a ps1 script like this:

# get today's day number: Get-Date -UFormat "%d"
# set month value in a variable: [String]$Manaden = (Get-Date -UFormat "%m")

[String]$Dagen = (Get-Date -UFormat "%d")
[String]$Manaden = (Get-Date -UFormat "%m")

# debug lines 
Write-Host [$Manaden]


The -UFormat specifiers format the DateTime result of Get-Date in PowerShell. They return a formatted string, not a date object.
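For comparison only (not part of the PowerShell solution), the same %d and %m specifiers exist in Python's strftime, which likewise returns a zero-padded string rather than a date object; a fixed date is used here so the output is predictable, whereas Get-Date uses the current date:

```python
from datetime import datetime

# Fixed example date; Get-Date would use today's date instead.
d = datetime(2023, 7, 13)

dagen = d.strftime("%d")    # day of month, zero-padded string
manaden = d.strftime("%m")  # month, zero-padded string

print(dagen, manaden)  # 13 07
```

Note that the zero-padding means "07", not "7", so string comparisons against unpadded month numbers will fail in both languages.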


More information:

Get-Date – How to get and use Dates in PowerShell


Planning Analytics Workspace 88

Microsoft Windows 2019 server


During the first ./start.ps1 you get errors and it does not work, after you have configured the settings in the admin tool and clicked Update.

"fatal: failed to start daemon: Error initializing network controller: failed during hnsCallRawResponse: hnsCall failed in Win32: The dependency service or group failed to start. (0x42c)"


Check if there is any anti-virus software on the server, like the Trellix Agent Service, which is "C:\Program Files\McAfee\Agent\masvc.exe" /ServiceStart

or the Trellix Validation Trust Protection Service, which is "C:\Program Files\Common Files\McAfee\SystemCore\mfemms.exe"

(remove them first as a test).


Try to start the docker containers one by one, from the powershell prompt:

docker start glass
docker start plan-service 
docker start redis 
docker start prism-app 
docker start share-proxy 
docker start couchdb-init 
docker start share-platform 
docker start user-admin 
docker start mongo
docker start pa-predict


After you have started these ten, try to start the rest with the command: ./paw.ps1

If you get an error like this:

ERROR: for bss-init (232, 'WriteFile', 'The pipe is being closed.')
The current Compose file version is not compatible with your engine version. Please upgrade your Compose file to a more recent version, or set a COMPOSE_API_VERSION in your environment.
Execution failed with exit code 1: The current Compose file version is not compatible with your engine version. Please
upgrade your Compose file to a more recent version, or set a COMPOSE_API_VERSION in your environment

do not give up; try again to start that container by itself. It can work.

If you have temporary containers with numbers at the beginning of their names, try the steps below in PowerShell:

cd <your_PAW_folder>
./scripts/paw.ps1 down

# if there are temporary containers, then uncomment and run this additional command too:
# docker rm $(docker ps -a -q)

stop-service docker
start-service docker


When all containers are up, do a ./paw.ps1 stop.

Then reboot the Windows server and wait a long time.

Do all containers for PAW start successfully? Then it should be up.

If it does not work, try migrating from McAfee to MS Defender on the server, and then try with PowerShell commands:

.\paw.ps1 stop

If PAW then starts with all needed services, the anti-virus software was likely the problem.


Check log files like D:\PAW\log\share-platform\messages.log and D:\PAW\log\prism-platform\messages.log for more information.

Errors like:

Caused by: CWNEN1003E: The server was unable to find the concurrent/biExecSvc binding with the javax.enterprise.concurrent.ManagedExecutorService type for the java:comp/env/ reference.

found in D:\PAW\log\predict\messages.log must be assessed; you have to decide if they are harmful to the system.


The cause can be that the underlying hard disk is too slow, so Docker cannot create the containers fast enough during the first installation, since ./start.ps1 tries to start all PAW containers at the same time.

Sadly, the most common issue for PAW is the anti-virus software on the Windows server.


More Information:

Search the internet for "Container was too slow to start because the computer was overloaded". For example, a Windows Server LTSC 2019 VM backed by a VHDX on a spinning disk has terrible I/O performance and can become overloaded and see startup failures.


Planning Analytics

Microsoft Windows 2019 server


How do I find the Java version?


Find the Java bin folder for your Cognos product; it can be in a folder such as:

D:\Program Files\ibm\cognos\analytics\ibm-jre\jre\bin  or

D:\Program Files\ibm\cognos\tm1_64\jre\bin

then start a cmd or PowerShell prompt in that folder.

Enter ./java -version to find the version.

The first line is: java version "1.8.0_331"

More information: