Product:
Planning Analytics version 2.0.9.19 (PAL)
Microsoft Windows 2019 Server

Issue:
How do you upgrade to Planning Analytics Workspace version 100 from version 96 on a Microsoft Windows 2019 server?

Solution:

Download the correct file from IBM Fix Central:  https://www.ibm.com/support/pages/download-ibm-planning-analytics-local-v20-planning-analytics-workspace-release-100-fix-central

ipa_workspace_win_2019_2.0.100.219.zip

Copy the zip file to the Windows server where you run PAW. Unzip the file to a folder named d:\running\paw100; after the installation it should look like the picture below.

Ensure that the Windows server has 25 GB of RAM (the new version of PAW demands more memory).
Ensure you have at least 100 GB of free disk space on the server (the installation takes around 12 GB for the new version).

This version uses a new Mongo database, so before you upgrade you need to back up the content of your existing installation.

Configure the anti-virus software on the Windows server so that it does not scan the newly created folder and its subfolders.

Start PowerShell as administrator and go to the old PAW installation folder.
Run backup.ps1 from the scripts folder.

This will restart PAW and create a backup in a folder named with today's date, as below:

Now, from the \paw96\scripts folder, enter .\paw.ps1 stop to stop the old version from running.

Enter docker ps to check that no containers are running before you start the installation.
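In summary, the shutdown of the old version looks like this in PowerShell (a sketch, assuming the old installation is in d:\paw96):

cd d:\paw96\scripts
.\backup.ps1        # restarts PAW and writes the backup to a folder named with today's date
.\paw.ps1 stop      # stops all PAW containers of the old version
docker ps           # should list no running containers before you continue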

Copy the d:\paw96\config\paw.ps1 file from your current installation to the new installation location, e.g. d:\running\paw100\config\.

Copy the d:\paw96\config\certs directory from your current installation to the new installation location.

If you have configured SSL, copy the <paw_install_location>/config/ssl directory from your current installation to the new installation location.
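These copy steps can be done in the same PowerShell window; a sketch, assuming the paths used above:

Copy-Item d:\paw96\config\paw.ps1 d:\running\paw100\config\
Copy-Item d:\paw96\config\certs d:\running\paw100\config\certs -Recurse
# only if you have configured SSL:
Copy-Item d:\paw96\config\ssl d:\running\paw100\config\ssl -Recurse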

Inside the PowerShell window, go to the new folder d:\running\paw100 and enter .\start.ps1

After the checks have passed, enter y to start the installation.

Enter yes to let the installation continue, after you have confirmed that you made a backup.

Enter y to start the administration tool. If you have Chrome on the server, it will open, and you have to accept the license agreements for all third-party components.

You can see the version number above; check it to ensure you are installing version 100.

When you get to the configuration page, click Validate to ensure that all ports are still open. The configuration URLs should be unchanged, as they are read from the paw.ps1 file you copied earlier.

 

The TM1 Application Server Gateway URI is not used anymore, so this error is OK. Click Update to restart the PAW containers.

On the status page you can see which process takes the most CPU at the moment. The startup of PAW takes time; you have to wait.

If you get a lot of errors, you may need a faster CPU and more memory on the server.

When the server settles and you have checked that you can browse to Planning Analytics Workspace from your Chrome web browser, it is time to restore the databases.

In a PowerShell window on the server, copy the path to your backup folder, then go to the new scripts folder at d:\running\paw100\scripts and enter .\restore.ps1 "path to your backup folder", as the picture below shows:
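For example, if backup.ps1 wrote the backup to d:\paw96\backup\2024-05-17 (the folder name on your server will differ):

cd d:\running\paw100\scripts
.\restore.ps1 "d:\paw96\backup\2024-05-17"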

This will stop PAW and restore the database, so you have all your books and reports back in PAW.

When everything is running, you should have these containers on your PAW server:

Now go to the administration page in PAW and click on Agents to get to the page where you can download the new PAA agent file to install on your TM1 servers.

Download the file paa-agent-pkg-2.0.100.1800.zip

Copy the file to the TM1 server and unzip it to a folder like d:\temp\paa

Check the path to your existing PAA agent, e.g. D:\Program Files\ibm\cognos\tm1_64\paa_agent.
Make a backup of the file D:\Program Files\ibm\cognos\tm1_64\paa_agent\wlp\usr\servers\kate-agent\bootstrap.properties

Start a command prompt as administrator and go to the folder where you unzipped the PAA agent files. Enter updatepaaagent.bat "d:\program files\ibm\cognos\tm1_64", as sketched below.
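The agent update can look like this in the command prompt (a sketch using the paths from above; the .bak file name is only a suggestion):

cd /d d:\temp\paa
copy "D:\Program Files\ibm\cognos\tm1_64\paa_agent\wlp\usr\servers\kate-agent\bootstrap.properties" bootstrap.properties.bak
updatepaaagent.bat "d:\program files\ibm\cognos\tm1_64"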

When it is done, start the web browser on your laptop, go to the PAW site, and open Administration > Agents to test the mail function by clicking Test email.

Email will only work if you have configured it.

 

Check that you have your folders and books back in PAW. If everything works, the upgrade was a success.

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=2wnj12-enable-email-notifications-in-planning-analytics-workspace-local-1

 

More Information:

https://exploringtm1.com/planning-analytics-workspace-scripts/ 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=local-upgrade-planning-analytics-workspace 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=ipawl-install-planning-analytics-workspace-local-windows-server-2019-2022 

https://www.ibm.com/support/pages/important-upgrade-requirement-planning-analytics-workspace-local-20100-and-217 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=local-backup-restore-planning-analytics-workspace 

https://www.ibm.com/support/pages/after-each-reboot-some-planning-analytics-workspace-containers-are-failing-start 

 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=components-planning-analytics-administration-agent-local-only

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=integrations-configure-integration-watsonxai-support-decision-optimization 

https://cloud.ibm.com/apidocs/watson-data-api#creating-an-iam-bearer-token 

https://cubewise.com/blog/ibm-planning-analytics-workspace-paw-2-0-100-2-1-7/

Product:

Microsoft Power BI desktop
Microsoft Windows

Issue:

You get an error when you try to add a measure in the Visualizations field list.

 

Solution:

Try to add a new column instead. The icons show what type of value each field is.

More Information:

Another important difference between measures and calculated columns is that measures are evaluated in the filter context of the visual in which they are used. The filter context is defined by the filters applied in the report, such as row selection, column selection, report filters, and slicers. Measures are only evaluated at the level of granularity they are plotted at. Because calculated columns are computed when you first define them or when you refresh your dataset, they do not have access to the filter context. Calculated columns are calculated outside the filter context and do not depend on user interaction in the report.

When you write a calculated column, you need to rely only on the row context. The row context is simply the notion of a current row. It specifies which row the values are being calculated for, so that DAX can look at other values in the same row. In a calculated column, the row context is implied: when you write an expression in a calculated column, the expression is evaluated for each row of the table, and the column has knowledge of the current row.

By contrast, measures implicitly do not have a row context, because by default they work at the aggregate level. So you cannot refer to a column directly in a DAX measure; you will get an error, because no row context exists and the measure does not know which row in the table to choose.
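To illustrate the difference, here is a minimal DAX sketch; the Sales table with Quantity and UnitPrice columns is hypothetical:

-- Calculated column: evaluated row by row when the dataset is refreshed (row context)
LineAmount = Sales[Quantity] * Sales[UnitPrice]

-- Measure: evaluated at query time in the filter context of the visual (SUMX creates its own row context)
Total Amount = SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )

-- This measure fails: a bare column reference has no row context to pick a row from
Broken Measure = Sales[Quantity]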

https://endjin.com/blog/2022/04/measures-vs-calculated-columns-in-dax

Measures vs. Calculated Columns in Power BI

https://www.thedataschool.com.au/mipadmin/the-differences-between-new-measure-and-new-column-in-power-bi/ 

https://biinsight.com/define-measure-table-power-bi-desktop/ 

https://exceleratorbi.com.au/conditional-formatting-using-icons-in-power-bi/

 

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server

Issue:
How do you export a cube to a file?

Suggested solution:

If you have a cube that looks like the one below, in the Planning Sample database, you can export it with a TM1 TI process.

You can write a process to export it that should look like this:

#Section Prolog

# -- get the time --
LastProcessStart = TIMST(NOW, '\Y-\m-\d \h:\i:\s');

# -- set the process as a subset name --
sName = GetProcessName();
ViewName = sName ;
SubName = sName ;

# -- check the folder to save the environment file 
DimName1='SYS_ServerParameters';
Element1 = 'ServerName' ;
Element2 = 'LastUpdated' ;

# -- find the data folder from the path of the logs folders---
sLogDirName1 = LOWER ('Logfiles\' ) ;
sLogDirName2 = LOWER ('Logs\' ) ;
sDataDirName = LOWER ( 'Data\' ) ;
sBackupDirName = LOWER ( 'Config\' );
sEnvFileName = 'environment.txt' ;
sLogDirPath = LOWER( GetProcessErrorFileDirectory );

nLyckadScan1 = SCAN (sLogDirName1, sLogDirPath) ;
nLyckadScan2 = SCAN (sLogDirName2, sLogDirPath) ;
IF ( nLyckadScan1 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sEnvFileName;
ELSEIF ( nLyckadScan2 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sEnvFileName;
ELSE;
# -- the log folder does not have one of these names; fall back to this fixed path --
sEnvDirPath = 'D:\Program Files\ibm\cognos\tm1_64\samples\tm1\PlanSamp\' | sEnvFileName ;
ENDIF;

# -- check that the cube exists --
cube = 'Systemparameters';
DimName3 = 'Sysparameters' ;
DimName4 = 'Measure Parameter';
DimName5= 'Text' ;

IF ( CubeExists( Cube ) = 0 ) ;
# -- if cube does not exist - stop the process --
ProcessQuit ;
ENDIF;

# build the view - but first destroy it
ViewDestroy ( Cube, ViewName );
SubsetDestroy ( DimName3, SubName );
SubsetDestroy ( DimName4, SubName );

#-- create a view --
ViewCreate ( Cube, ViewName );

#-- create the subsets --
SubsetCreateByMDX ( SubName, '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [' | DimName3 | '] )}, 0)}' );
SubsetCreateByMDX ( SubName, '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [' | DimName4 | '] )}, 0)}' );

ViewSubsetAssign ( Cube, ViewName, DimName3, SubName );
ViewSubsetAssign ( Cube, ViewName, DimName4, SubName );

# -- set the source to skip calculated values --
ViewExtractSkipCalcsSet ( Cube, ViewName, 1);
ViewExtractSkipRuleValuesSet ( Cube, ViewName, 0 );
ViewExtractSkipZeroesSet ( Cube, ViewName, 1 );

# -- Set source --
DataSourceType='VIEW';
DataSourceNameForServer=Cube;
DataSourceCubeview=ViewName;

# -- Change to get comma in the text file
DatasourceASCIIQuoteCharacter='"';
DatasourceASCIIDelimiter = ',';

#Section Metadata

# -- get the servername from dimension --
aServerName = ATTRS( DimName1, Element1 , 'value') ;
vApplication = aServerName | ':' | DimName3 ;

# -- export the values to the file (v1, v2, v3 are the view data source variables) --
ASCIIOutput( sEnvDirPath, vApplication, v1, v2, v3 );


#Section Data



#Section Epilog

ViewDestroy ( Cube, ViewName );
SubsetDestroy ( DimName3, SubName );
SubsetDestroy ( DimName4, SubName );

 

The export TI process needs to have a data source like this:
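In summary, the source settings should be as follows (the variable names must match the v1, v2, v3 used in the Metadata tab above):

Datasource Type: TM1 cube view
Data Source Name: Systemparameters
Cube View: the view created in the prolog, named after the process
Variables: v1 = the Sysparameters element, v2 = the Measure Parameter element, v3 = the cell value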

The file will look like the example below:

 

You can write a process to import the csv file that looks like this (you have to adjust the code to your environment):

#Section Prolog

# -- get the time --
LastProcessStart = TIMST(NOW, '\Y-\m-\d \h:\i:\s');

# -- check the folder to load the environment file from
DimName1='SYS_ServerParameters';
Element1 = 'ServerName' ;
Element2 = 'LastUpdated' ;

# -- try to find the data folder from the logs folder ---
sLogDirName1 = LOWER ('Logfiles\' ) ;
sLogDirName2 = LOWER ('Logs\' ) ;
sDataDirName = LOWER ( 'Data\' ) ;
sBackupDirName = LOWER ( 'Config\' );
sEnvFileName = 'environment.txt' ;
sLogDirPath = LOWER( GetProcessErrorFileDirectory );

nLyckadScan1 = SCAN (sLogDirName1, sLogDirPath) ;
nLyckadScan2 = SCAN (sLogDirName2, sLogDirPath) ;
IF ( nLyckadScan1 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sEnvFileName;
ELSEIF ( nLyckadScan2 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sEnvFileName;
ELSE;
# -- the log folder does not have one of these names; fall back to this fixed path --
sEnvDirPath = 'D:\Program Files\ibm\cognos\tm1_64\samples\tm1\PlanSamp\' | sEnvFileName ;
ENDIF;


# -- setup the parameter cube --
cube2 = 'Systemparameters';
DimName3 = 'Sysparameters' ;
DimName4 = 'Measure Parameter';
Element5= 'Text' ;

# -- check that the cube exists; if not, create it --
IF ( CubeExists( Cube2 ) = 0 ) ;
IF (DimensionExists( DimName3 ) = 0 ) ;
DimensionCreate (DimName3 ) ;
ENDIF;
IF (DimensionExists( DimName4 ) = 0 ) ;
DimensionCreate (DimName4 ) ;
ENDIF;
# -- create a cube
CubeCreate(Cube2, DimName3 , DimName4 );
ENDIF;

DimensionElementInsertDirect (DimName4, '', Element5 ,'S') ;


# -- set the source to be the file
DatasourceNameForServer = sEnvDirPath ;
DatasourceNameForClient = sEnvDirPath ;

#Section Metadata

#-- check that the application is correct from the file
# SCAN(substring, string)
vLocation = SCAN( ':', vApplication ) ;
vServerName = SUBST( vApplication, 1, vLocation - 1 ) ;
aServerName = ATTRS( DimName1, Element1 , 'value') ;
IF (aServerName @<> vServerName);
ProcessQuit;
ENDIF;

# -- add the dimension element from vParameter
# DimensionElementInsertDirect(DimName, InsertionPoint, ElName,ElType);
DimensionElementInsertDirect (DimName3, '', vParameter ,'S') ;

# -- add the lastupdated parameter
DimensionElementInsertDirect (DimName3, '', Element2 ,'S') ;

#Section Data

# -- load the data to the cube 
CellPutS( vValue, cube2, vParameter, vColumn );


#Section Epilog

# -- write the time to cube
CellPutS( LastProcessStart, cube2, Element2, vColumn );

The import TI process needs to have the text file as its data source, similar to this:
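In summary, the source settings should be as follows (the prolog overrides the file name with sEnvDirPath at run time):

Datasource Type: Text
Data Source Name: the exported environment.txt file
Delimiter: comma; Quote character: "
Variables: vApplication, vParameter, vColumn, vValue, all of type String, matching the names used in the Metadata and Data tabs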

Both processes require a dimension (SYS_ServerParameters) that holds the name of the application.

This can be done with the process SYS.GET_ServerParameters.
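That process is not shown here, but a minimal sketch could look like this (the body is an assumption, and the instance name is passed in a hypothetical string parameter pServerName):

# -- create the dimension and the 'value' attribute if they are missing --
DimName1 = 'SYS_ServerParameters';
IF ( DimensionExists( DimName1 ) = 0 );
DimensionCreate( DimName1 );
DimensionElementInsertDirect( DimName1, '', 'ServerName', 'S' );
DimensionElementInsertDirect( DimName1, '', 'LastUpdated', 'S' );
AttrInsert( DimName1, '', 'value', 'S' );
ENDIF;
# -- store the application name so the processes above can read it with ATTRS --
AttrPutS( pServerName, DimName1, 'ServerName', 'value' );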

More Information:

https://www.ibm.com/docs/ru/planning-analytics/2.0.0?topic=data-exporting-from-cube

https://www.ibm.com/docs/en/cognos-tm1/10.2.2?topic=file-parameters-in-tm1scfg 

https://exploringtm1.com/how-to-create-a-tm1-model-a-best-practice-guide/

https://cubewise.com/blog/adjusting-server-configuration-planning-analytics/

https://bihints.com/book/export/html/100

https://exploringtm1.com/text-file-export-tm1-asciioutput-functions/

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server
Google Chrome

Issue:
The background image in TM1WEB has been changed on the server, but the change does not show up in the end users' web browsers.

Solution:

On many browsers, pressing Control-F5 will reload the page and bypass the cache. The Wikipedia “Bypass your cache” page has details for all modern web browsers.

Or you can rename the file PA_header.svg in the folder D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\standalone\images\login to something else. Then you need to update all code that references that file; these can be the following CSS/HTML files for tm1web (see the sketch after the list):

D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\carbon.css

D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\standalone\all.css

D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\standalone\LoginDialog.css
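A PowerShell sketch of the rename and of the search for references (the new file name is only an example; adjust the install path to your environment):

$theme = 'D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon'
Rename-Item "$theme\standalone\images\login\PA_header.svg" 'PA_header_v2.svg'
# list every css/html file under the theme folder that still references the old name
Get-ChildItem $theme -Recurse -Include *.css,*.html | Select-String 'PA_header' | Select-Object Path -Unique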

 

Or clear the web browser cache on the end users' laptops.

More Information:

https://medium.com/@gulshan_n/overcoming-persistent-image-caching-how-to-ensure-visitors-see-your-latest-website-changes-cd89a1434e1a 

https://www.avast.com/c-how-to-clear-cache

Resurrecting images from a web browser cache can be a useful way to recover lost or deleted images that you previously viewed online. Here’s a step-by-step guide on how to do this for various web browsers:

Google Chrome

  1. Access Cache:
    – In older versions of Chrome you could type chrome://cache in the address bar to list cached files; this internal page has been removed from current versions.
  2. Find Images:
    – Even where a cache list is available, you can’t view the images directly, only their URLs. To find images, use a tool like ChromeCacheView (a third-party NirSoft utility) that allows you to view and extract images from the cache more easily.
  3. Extract Images:
    – If using ChromeCacheView, download and run it. It will display cached files, including images. You can select the images you want to save and extract them to your computer.

Mozilla Firefox

  1. Access Cache:
    – Type about:cache in the address bar and press Enter. This will show you the disk and memory cache information.
  2. Find Images:
    – Look for the section that lists the cache entries. You can find images by checking the file type and URL.
  3. Extract Images:
    – You can use Mozilla’s cache viewer or a third-party tool like Firefox Cache Viewer to extract images more conveniently.

Microsoft Edge

  1. Access Cache:
    – Edge is Chromium-based, so current versions no longer expose a cache listing page in the address bar either.
  2. Find Images:
    – Similar to Chrome, you might need a third-party tool like EdgeCacheView to find and extract images.

Safari

  1. Access Cache:
    – For Safari, the cache is not easily accessible like in other browsers. You can use Terminal commands or third-party tools.
  2. Find Images:
    – You can also look in the ~/Library/Caches/com.apple.Safari/ directory to find cached files.
  3. Extract Images:
    – Use a third-party application like SafariCache to help retrieve images from the cache.

General Tips

  • Look for File Types: When browsing the cache, look specifically for file types like .jpg, .png, or .gif.
  • Use Third-Party Tools: Tools like WebCacheImage, ChromeCacheView, and others can simplify the process.
  • Browser Extensions: Some extensions can help manage and view cached files directly from the browser.

https://www.linkedin.com/pulse/11-ways-fix-google-chrome-loading-images-benjamin-ladegbaye 

How to refresh cached images and files in Chrome

 

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server

Issue:
I would like to turn off the DEV TM1 servers during the weekend; can I do it with a chore?

Solution:

Create a parameter cube, something like the one below. Add the TM1 instance (Windows service) name in the Application column, and set a Y in the Stop column if you would like the chore to stop the service, or in the Start column if it should start it. In the last two columns, the processes record the time when the service was last stopped or started.
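For example, the first rows of the cube could look like this (the values are illustrative and match the test data loaded by the process below):

sApplication  Application        Stop  Start  Last Stopped         Last Started
1             planning sample    Y     Y      2024-05-17 23:00:05  2024-05-20 03:00:04
2             proven_techniques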

# -- setup the parameter cube --

Cube = 'ScheduleCube' ;
DimName1 = 'sApplication' ;
DimName2 = 'ScheduleMeasures' ;
sLoadtestdata = 'NO' ;
# sLoadtestdata = 'YES' ;

DimensionDestroy (DimName1 ) ;
DimensionCreate (DimName1 ) ;

# -- add elements
nNum = 1;
While( nNum <= 15 );
sNum = numbertostring (nNum) ;
DimensionElementInsertDirect(DimName1 ,'', sNum , 'N' ) ;
nNum = nNum + 1;
End;



DimensionDestroy (DimName2);
DimensionCreate (DimName2 );

# -- add elements
DimensionElementInsertDirect (DimName2 ,'', 'Application' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Text' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'StartRun' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'StopRun' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Stop' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Start' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Last Stopped' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Last Started' , 'S' ) ;


# -- create a cube
CubeCreate(Cube, DimName1 , DimName2 );

# -- add data to cube --

IF ( sLoadtestdata @= 'YES' ) ;
cellputs ( 'planning sample' , Cube , '1' , 'application' ) ;
cellputs ( 'Y' , Cube , '1' , 'Start' ) ;
cellputs ( 'Y' , Cube , '1' , 'Stop' ) ;
cellputs ( 'proven_techniques' , Cube , '2' , 'application' ) ;

# -- add your default test data here -- 
ENDIF;

Then create a TM1 TI process to stop the services:

# --- stop a service --

Cube = 'ScheduleCube' ;
DimName1 = 'sApplication' ;
DimName2 = 'ScheduleMeasures' ;

# -- check number of elements --
nLong = DIMSIZ (DimName1) ;
nNum = 1 ;
WHILE ( nNum <= nLong ) ;
sNum = numbertostring (nNum) ;
# -- get the application name --
sApp = CELLGETS ( cube , sNum, 'Application' ) ;
# -- check that it is not empty --
IF (LONG (sApp) <> 0 );
# -- check that it is supposed to be stopped --
IF ('Y' @= CELLGETS ( cube, sNum, 'Stop' ) ) ;

# -- get the time and put in the cube --
sTime = TIMST(now, '\Y-\m-\d \h:\i:\s');
CELLPUTS ( sTime, cube, sNum, 'Last Stopped' ) ;
# -- make the call to stop (the 0 tells ExecuteCommand not to wait for the command to finish) --

sBatchFile = 'NET STOP "' | sApp | '"' ;
ExecuteCommand( 'cmd /c ' | sBatchFile, 0 );

ENDIF;
ENDIF;
nNum = nNum +1 ;
END;


Create a TM1 TI process to start services:

# -- start a service --

Cube = 'ScheduleCube' ;
DimName1 = 'sApplication' ;
DimName2 = 'ScheduleMeasures' ;

# -- check number of elements --
nLong = DIMSIZ (DimName1) ;
nNum = 1 ;
WHILE ( nNum <= nLong ) ;
sNum = numbertostring (nNum) ;
# -- get the application name --
sApp = CELLGETS ( cube , sNum, 'Application' ) ;
# -- check that it is not empty
IF (LONG (sApp) <> 0 );
# -- check that it is supposed to be started --
IF ('Y' @= CELLGETS ( cube, sNum, 'Start' ) ) ;

# -- get the time and put in the cube --
sTime = TIMST(now, '\Y-\m-\d \h:\i:\s');
CELLPUTS ( sTime, cube, sNum, 'Last Started' ) ;
# -- make the call to start --

sBatchFile = 'NET START "' | sApp |'"' ;

ExecuteCommand('cmd /c ' | sBatchFile, 0);

ENDIF;
ENDIF;
nNum = nNum +1 ;
END;

 

Schedule them in chores, to run on Friday at 23:00 and on Monday at 03:00, to stop and then start the services.

I recommend that you have a "SaveDataAll" process in the TM1 applications that runs every Friday before the stop chore above.
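Such a process can be a single line of TI code in the prolog, scheduled in its own chore:

# -- write all changes made since the last save to disk for the TM1 instance --
SaveDataAll;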

Set the chore to start on the weekday when it should stop the service, and set it to run every 7 days.

More Information:

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=smtf-savedataall

https://www.mci.co.za/business-performance-management/learning-mdx-views-in-ibm-planning-analytics/

Primary sales forecasting using PA, SPSS Modeler and python

https://www.wimgielis.com/tm1_createatestcube_EN.htm