Product:
IBM Controller 11.1 (formerly named IBM Cognos Controller)

Microsoft Windows 2022 server

Issue:

When you run a standard report inside Cognos Controller Client, you get an error like this:

The PDF report is not shown.

If you check the Windows Application log on the Controller server, you see this message:

Error occured at 2025-03-09 15:13:11 in IBM Cognos Controller, Error No=5, Source=FrangoDirect.StandardReportD.GetCCRReportSystem.Web.Services, Description=System.Web.Services.Protocols.SoapException: Server was unable to process request. —> System.ArgumentException: Unable to connect to the remote server
at Microsoft.VisualBasic.ErrObject.Raise(Int32 Number, Object Source, Object Description, Object HelpFile, Object HelpContext)
at ControllerServerCommon.RaiseErrSrv.RaiseError(String sUser, Int32 lErrNo, String sErrSource, String sErrDesc, String sErrHelpFile, Int32 lErrHelpContext)…..

Solution:

The report service is not started. Start the Windows service IBM Controller Reports (FCMREPORTS).
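If you want to confirm the service state from a script, you can query it with the Windows `sc` tool and parse the output. The parsing helper below is our own illustration, not an IBM tool; only the service name FCMREPORTS is taken from above:

```python
import os
import subprocess

def service_state(sc_output: str) -> str:
    """Extract the state name (RUNNING, STOPPED, ...) from `sc query` output."""
    for line in sc_output.splitlines():
        if "STATE" in line:
            # a state line looks like: "        STATE              : 4  RUNNING"
            return line.split()[-1]
    return "UNKNOWN"

def query_service(name: str = "FCMREPORTS") -> str:
    """Ask Windows for the current state of a service via `sc query`."""
    out = subprocess.run(["sc", "query", name], capture_output=True, text=True).stdout
    return service_state(out)

if os.name == "nt":  # `sc` only exists on Windows
    print("IBM Controller Reports service is:", query_service("FCMREPORTS"))
```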

 

More information:

https://www.ibm.com/docs/en/controller/11.1.0?topic=options-configuring-controller-embedded-report-library

https://www.ibm.com/docs/en/controller/11.1.0?topic=SS9S6B_11.1.0/com.ibm.swg.ba.cognos.ctrl_inst.doc/t_installfixpacks.htm

https://www.ibm.com/docs/en/controller/11.1.0?topic=icclo-installing-configuring-controller-one-computer

Product:
Microsoft Power BI Desktop

Issue:

A column in a table is not sorted correctly: it is sorted alphabetically instead of by date, even though the column is displayed in a date format.

 

Solution:

Inside Power BI Desktop, go to the Table view.

Select the column you want to change the sort order for, and click the “Sort by column” icon.

In the dropdown menu, select the column that has the correct format; in our example, the date column.

Then check the table in Report view.

Repeat this for every affected column in your Power BI Desktop report.
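The steps above work because Power BI sorts text labels alphabetically unless a sort key column is assigned. A small plain-Python sketch (an analogy, not Power BI code) shows the same idea:

```python
# Month labels stored as text sort alphabetically, not chronologically.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

alphabetical = sorted(months)
# -> ['Apr', 'Aug', 'Dec', 'Feb', ...]  which is the wrong order for a report

# "Sort by column": pair each label with a numeric sort key and sort by that.
sort_key = {m: i for i, m in enumerate(months, start=1)}
chronological = sorted(alphabetical, key=lambda m: sort_key[m])
# -> back to Jan, Feb, Mar, ...
```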

 

More Information:

https://learn.microsoft.com/en-us/power-bi/create-reports/desktop-sort-by-column

https://www.techrepublic.com/article/how-to-sort-by-column-power-bi/ 

https://help.zebrabi.com/kb/power-bi/add-a-sorting-column/ 

 

Product:
Planning Analytics 2.0.9.19
PAfE version 100 (IBM_PAfE_x86_2.0.100.3.xll)
Planning Analytics Workspace 100, or earlier versions.

Issue:
When you connect in PAfE (Planning Analytics for Excel) to a TM1 server, you get an error for some of the TM1 instances on your server, but not all:

System.Net.WebException: The remote server returned an error: (502) Bad Gateway.
at System.Net.HttpWebRequest.GetResponse()

Solution:

Check the TM1S.CFG file for differences between a working TM1 application and a non-working one.

If you use the tlsCipherList parameter in the TM1S.CFG file, its value must be on ONE line.

A value wrapped onto a second line gives the error above; the same value on a single line does not.

You need to ensure that every parameter in the file occupies one line only.
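A quick way to spot wrapped values is to flag lines in TM1S.CFG that neither start a section, nor are comments, nor contain an `=`; such lines are usually the tail of a parameter split over two lines. This is a hypothetical helper script, not an IBM tool:

```python
def find_wrapped_lines(cfg_text: str) -> list[int]:
    """Return 1-based line numbers that look like the tail of a wrapped parameter."""
    suspects = []
    for no, line in enumerate(cfg_text.splitlines(), start=1):
        s = line.strip()
        if not s:                          # blank line
            continue
        if s.startswith(("[", "#", ";")):  # section header or comment
            continue
        if "=" not in s:                   # a parameter line should contain '='
            suspects.append(no)
    return suspects

# Example: a tlsCipherList value split over two lines is flagged.
cfg = ("[TM1S]\n"
       "tlsCipherList=ECDHE-RSA-AES128-GCM-SHA256:\n"
       "ECDHE-RSA-AES256-GCM-SHA384\n"
       "PortNumber=12345\n")
print(find_wrapped_lines(cfg))  # [3]
```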

 

Another possible cause is that the firewall does not have the needed ports open between the PAW server and the TM1 server.

Check from the PAW server that you can reach the TM1 server on all needed TCP ports.
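From the PAW server you can test TCP reachability without extra tools. A minimal sketch; the host name and port numbers are placeholders, so substitute your TM1 server and the ports from your tm1s.cfg (e.g. HTTPPortNumber):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host/ports; replace with your TM1 server and its configured ports.
for port in (12354, 5495):
    print(port, "open" if port_open("tm1server.invalid", port, timeout=1.0) else "closed")
```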

 

More Information:

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=pa2f82-disable-des-3des-ciphers-in-planning-analytics-mitigate-false-positive-security-scans-1 

 

Product:
Cognos Controller 11
Microsoft Windows 2022 server

Issue:
I want to upgrade Cognos Controller; which files should I download?

 

Solution:
Check the various IBM web pages to find the latest version. Because IBM changed the name of IBM Cognos Controller to IBM Controller, old links may no longer be valid.

First you need to download the “base” RTM files from the IBM Passport Advantage portal.

Then you need to find the latest fix pack for the product; check the “What’s new” section to find the latest versions.

https://www.ibm.com/docs/en/announcements/controller-111-introduces-new-look-new-features-providing-benefits 

Or you can check the fix list to find the latest version:  https://www.ibm.com/support/pages/node/7173282 


Then search the IBM Fix central for the latest fix pack to download.

In https://www.ibm.com/support/fixcentral  you can browse for the latest fixes for your product.

Select the product and click Continue to get to a page where you can select “Browse for fixes”; you then get a list of all the fixes.

In the list, select the latest fix and click Continue. Select a download method; the simplest is “Download using your browser (HTTPS)”. Then click Continue.

Click on the blue file name to download the fix pack for your IBM application.

At the time of writing, the file up_cntrl_winx64h_11.1.1002.5_ml.tar.gz (2.38 GB) is the one you should download; a newer fix pack may appear within a few weeks.

This page can also help find the latest version of IBM Controller:

https://www.ibm.com/support/pages/ibm-controller-and-ibm-cognos-controller-builds-ccr-name-and-database-version 

The latest fix pack for Cognos Analytics can be found via:

https://www.ibm.com/docs/en/cognos-analytics/12.0.0?topic=120x-release-1204-october-2024 

https://www.ibm.com/support/pages/ibm%C2%AE-cognos-analytics-fix-lists 

Search Fix Central for the above fix (interim fix 12.0.4-BA-CA-Win64-IF002) for your OS; for Microsoft Windows you should download these files:

analytics-installer-3.7.38-win.exe (234.48 MB)

casrv-12.0.4-2501300500-winx64h.zip (7.23 GB)

The latest fix pack for Planning Analytics can be found via:

https://www.ibm.com/docs/en/planning-analytics/2.1.0?topic=features-whats-new-in-planning-analytics 

https://www.ibm.com/support/pages/ibm-planning-analytics-21-fix-lists 

 

Search Fix Central for the above; for use with Windows Server 2019 you should download these Planning Analytics files:

tm1_winx64h_2.1.7.1_ml.tar.gz (1.42 GB) = the Planning Analytics (TM1) server.

IBM_PAfE_x86_2.1.7.2.xll (7.83 MB) = Planning Analytics for Excel (PAfE), the TM1 Perspectives replacement.

tm1web-11.0.100-24120513-2.1.7-winx64h_bundle.zip (706.64 MB) = for the TM1 Web installation.

ipa_workspace_win_2019_2.1.7.219.zip (4.78 GB) = for the PAW installation.

(If you run PAW on a Windows 2022 server, download ipa_workspace_win_2022_2.1.7.49.zip (4.69 GB) instead.)

Then check the supported environments, to ensure that the downloaded products can work with each other.

Cognos Controller 11.1 supports these versions of the other products:

IBM Cognos Analytics 12.0.4 supports these versions of the other products (as data sources):

Planning Analytics 2.1.7 supports these versions of the other products (this is for authentication):

If you need to import data from a SQL database, Planning Analytics 2.1.7 supports:

 

Please check that Cognos Controller and Cognos Analytics support the version of the SQL database you are using.

 

More information:

https://www.ibm.com/support/pages/ibm-planning-analytics-local-217-now-available-download-fix-central-0 

https://www.ibm.com/software/reports/compatibility/clarity/index.html 

Product:
Planning Analytics version 2.0.9.19 (PAL)
Microsoft Windows 2019 Server

Issue:
How do you upgrade from Planning Analytics Workspace version 96 to version 100 on a Microsoft Windows 2019 server?

Solution:

Download the correct file from IBM Fix Central.  https://www.ibm.com/support/pages/download-ibm-planning-analytics-local-v20-planning-analytics-workspace-release-100-fix-central

ipa_workspace_win_2019_2.0.100.219.zip

Copy the zip file to the Windows server where you run PAW. Unzip the file to a folder named d:\running\paw100; after the installation it should look like below.

Ensure that the Windows server has 25 GB of RAM (the new version of PAW demands more memory).
Ensure you have at least 100 GB of free disk space on the server (the installation takes around 12 GB for the new version).
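The disk-space requirement can be checked with a small preflight script using only Python's standard library; the 100 GB figure comes from above, and the drive path is a placeholder:

```python
import os
import shutil

REQUIRED_GB = 100  # free disk space recommended before the PAW upgrade

def has_free_space(path: str, required_gb: float = REQUIRED_GB) -> bool:
    """Check whether the volume holding `path` has at least `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb

# Placeholder path; point this at the drive that will hold d:\running\paw100.
target = "d:\\" if os.name == "nt" else "/"
print("enough free space:", has_free_space(target))
```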

This version uses a new Mongo database; before you upgrade, you need to back up the content of your existing installation.

Configure the anti-virus software on the Windows server so it does not scan the newly created folder and its subfolders.

Start PowerShell as administrator, and go to the old PAW installation folder.
Run backup.ps1 from the scripts folder.

This will restart PAW and create a backup in a folder named with today’s date, as below:

Then, in the \paw96\scripts folder, enter paw.ps1 stop to stop the old version from running.

Enter docker ps to check that no containers are running, before you start the installation.

Copy the d:\paw96\config\paw.ps1 file from your current installation to the new installation location, e.g. d:\running\paw100\config\.

Copy the d:\paw96\config\certs directory from your current installation to the new installation location.

If you have configured SSL, copy the <paw_install_location>/config/ssl directory from your current installation to the new installation location.

In the PowerShell window, go to the new folder d:\running\paw100 and enter start.ps1.

After the checks have passed, enter y to start the installation.

Enter yes to let the installation continue, after you have confirmed that you made a backup.

Enter y to start the administration console. If you have Chrome on the server, it will start, and you have to accept the license terms for all third-party components.

The version number shown confirms that you are installing version 100.

When you get to the configuration page, click Validate to ensure that all ports are still open. The configuration URLs should be the same, as they are read from the paw.ps1 file you copied earlier.

 

The TM1 Application Server Gateway URI is not used anymore, so this error is OK. Click Update to restart the PAW containers.

On the status page, you can see which process takes the most CPU at the moment. The start of PAW takes time; you have to wait.

If you get a lot of errors, you may need a faster CPU and more memory on the server.

When the server settles, and you have checked that you can browse to your Planning Analytics Workspace from your Chrome web browser, it is time to restore the databases.

In a PowerShell window on the server, copy the path to your backup folder, then go to the new scripts folder at d:\running\paw100\scripts and enter restore.ps1 “path to your backup folder”, as the picture below shows:

This will stop PAW and restore the database, so you have all your books and reports back in PAW.

When everything is running, you should have these containers on your PAW server:

Now go to the administration page in PAW and click on Agents, to reach the page where you can download the new PAA agent file to install on your TM1 servers.

Download the file paa-agent-pkg-2.0.100.1800.zip

Copy the file to the TM1 server, and unzip it to a folder such as d:\temp\paa.

Check the path to your existing PAA agent – D:\Program Files\ibm\cognos\tm1_64\paa_agent..
Make a backup of file D:\Program Files\ibm\cognos\tm1_64\paa_agent\wlp\usr\servers\kate-agent\bootstrap.properties

Start a command prompt as administrator, and go to the folder where you unzipped the PAA agent files. Enter updatepaaagent.bat “d:\program files\ibm\cognos\tm1_64”

When it is done, start the web browser on your laptop, go to the PAW site, and go to Administration, Agents to test the mail function: click Test email.

Email will only work if you have configured it.

 

Check that you have your folders and books back in PAW. If everything works, the upgrade was a success.

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=2wnj12-enable-email-notifications-in-planning-analytics-workspace-local-1

 

More Information:

https://exploringtm1.com/planning-analytics-workspace-scripts/ 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=local-upgrade-planning-analytics-workspace 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=ipawl-install-planning-analytics-workspace-local-windows-server-2019-2022 

https://www.ibm.com/support/pages/important-upgrade-requirement-planning-analytics-workspace-local-20100-and-217 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=local-backup-restore-planning-analytics-workspace 

https://www.ibm.com/support/pages/after-each-reboot-some-planning-analytics-workspace-containers-are-failing-start 

 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=components-planning-analytics-administration-agent-local-only

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=integrations-configure-integration-watsonxai-support-decision-optimization 

https://cloud.ibm.com/apidocs/watson-data-api#creating-an-iam-bearer-token 

https://cubewise.com/blog/ibm-planning-analytics-workspace-paw-2-0-100-2-1-7/

Product:

Microsoft Power BI Desktop
Microsoft Windows

Issue:

Error when you try to add a measure in the Visualizations pane’s list of fields.

 

Solution:

Try adding a new column instead. The icons indicate what type of value each field holds.

More Information:

Another important difference between measures and calculated columns is that measures are evaluated in the filter context of the visual in which they are applied. The filter context is defined by the filters applied in the report such as row selection, column selection, report filters and slicers applied. Measures are only evaluated at the level of granularity they are plotted at. As calculated columns are computed when you first define them/ when you refresh your dataset, they do not have access to the filter context. Calculated columns are calculated outside of the filter context and do not depend on user interaction in the report.

When you write a calculated column, you need to rely only on the row context. The row context is simply the notion of a current row: it specifies which row we are calculating the values for, so that DAX can look at other values in the same row. In a calculated column, the row context is implied. When you write an expression in a calculated column, the expression is evaluated for each row of the table. The calculated column has knowledge of the current row.

By contrast, measures implicitly do not have a row context, because by default they work at the aggregate level. So you cannot refer to columns directly in a DAX measure; you will get an error because no row context exists, and the measure will not know which row in the table to choose.
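The distinction can be illustrated outside DAX with a plain-Python analogy (this is an analogy, not Power BI code): a calculated column is computed once per row when the data is loaded, while a measure is an aggregation evaluated against whatever subset the filter context leaves:

```python
# Rows of a small sales table.
rows = [
    {"region": "North", "qty": 2, "price": 10.0},
    {"region": "North", "qty": 1, "price": 25.0},
    {"region": "South", "qty": 4, "price": 10.0},
]

# "Calculated column": evaluated per row at refresh time (row context).
for r in rows:
    r["amount"] = r["qty"] * r["price"]

# "Measure": evaluated per query over the filtered rows (filter context).
def total_sales(table, region=None):
    subset = [r for r in table if region is None or r["region"] == region]
    return sum(r["amount"] for r in subset)

print(total_sales(rows))           # 85.0 - all rows
print(total_sales(rows, "North"))  # 45.0 - same measure, narrower filter context
```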

https://endjin.com/blog/2022/04/measures-vs-calculated-columns-in-dax

Measures vs. Calculated Columns in Power BI

https://www.thedataschool.com.au/mipadmin/the-differences-between-new-measure-and-new-column-in-power-bi/ 

https://biinsight.com/define-measure-table-power-bi-desktop/ 

https://exceleratorbi.com.au/conditional-formatting-using-icons-in-power-bi/

 

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server

Issue:
How do you export a cube to a file?

Suggested solution:

If you have a cube that looks like this in Planning Sample, you can export it with a TM1 TI process.

You can write an export process that looks like this:

#Section Prolog

# -- get the time --
LastProcessStart = TIMST(NOW, '\Y-\m-\d \h:\i:\s');

# -- set the process as a subset name --
sName = GetProcessName();
ViewName = sName ;
SubName = sName ;

# -- check the folder to save the environment file 
DimName1='SYS_ServerParameters';
Element1 = 'ServerName' ;
Element2 = 'LastUpdated' ;

# -- find the data folder from the path of the logs folders---
sLogDirName1 = LOWER ('Logfiles\' ) ;
sLogDirName2 = LOWER ('Logs\' ) ;
sDataDirName = LOWER ( 'Data\' ) ;
sBackupDirName = LOWER ( 'Config\' );
sEnvFileName = 'environment.txt' ;
sLogDirPath = LOWER( GetProcessErrorFileDirectory );

nLyckadScan1 = SCAN (sLogDirName1, sLogDirPath) ;
nLyckadScan2 = SCAN (sLogDirName2, sLogDirPath) ;
IF ( nLyckadScan1 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sEnvFileName;
ELSEIF ( nLyckadScan2 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sEnvFileName;
ELSE;
# -- the log folder does not have one of these names; fall back to a fixed path --
sEnvDirPath = 'D:\Program Files\ibm\cognos\tm1_64\samples\tm1\PlanSamp\' | sEnvFileName ;
ENDIF;

# -- check that the cube exist
cube = 'Systemparameters';
DimName3 = 'Sysparameters' ;
DimName4 = 'Measure Parameter';
DimName5= 'Text' ;

IF ( CubeExists( Cube ) = 0 ) ;
# -- if cube does not exist - stop the process --
ProcessQuit ;
ENDIF;

# build the view - but first destroy it
ViewDestroy ( Cube, ViewName );
SubsetDestroy ( DimName3, SubName );
SubsetDestroy ( DimName4, SubName );

#-- create a view --
ViewCreate ( Cube, ViewName );

#-- create the subsets --
SubsetCreateByMDX ( SubName, '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [' | DimName3 | '] )}, 0)}' );
SubsetCreateByMDX ( SubName, '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [' | DimName4 | '] )}, 0)}' );

ViewSubsetAssign ( Cube, ViewName, DimName3, SubName );
ViewSubsetAssign ( Cube, ViewName, DimName4, SubName );

# -- set the source to skip calculated values --
ViewExtractSkipCalcsSet ( Cube, ViewName, 1);
ViewExtractSkipRuleValuesSet ( Cube, ViewName, 0 );
ViewExtractSkipZeroesSet ( Cube, ViewName, 1 );

# -- Set source --
DataSourceType='VIEW';
DataSourceNameForServer=Cube;
DataSourceCubeview=ViewName;

# -- Change to get comma in the text file
DatasourceASCIIQuoteCharacter='"';
DatasourceASCIIDelimiter = ',';

#Section Metadata

# -- get the servername from dimension --
aServerName = ATTRS( DimName1, Element1 , 'value') ;
vApplication = aServerName | ':' | DimName3 ;

# -- export the values to file (v1, v2, v3 are the data source variables) --
ASCIIOutput( sEnvDirPath, vApplication, v1, v2, v3 );


#Section Data



#Section Epilog

ViewDestroy ( Cube, ViewName );
SubsetDestroy ( DimName3, SubName );
SubsetDestroy ( DimName4, SubName );

 

The TI process needs to have a data source like this:

The file will look like below:

 

You can write a process to import the CSV file that can look like this (you have to adjust the code to your environment):

#Section Prolog

# -- get the time --
LastProcessStart = TIMST(NOW, '\Y-\m-\d \h:\i:\s');

# -- check the folder to load the environment file from
DimName1='SYS_ServerParameters';
Element1 = 'ServerName' ;
Element2 = 'LastUpdated' ;

# -- try to find the data folder from the logs folder ---
sLogDirName1 = LOWER ('Logfiles\' ) ;
sLogDirName2 = LOWER ('Logs\' ) ;
sDataDirName = LOWER ( 'Data\' ) ;
sBackupDirName = LOWER ( 'Config\' );
sEnvFileName = 'environment.txt' ;
sLogDirPath = LOWER( GetProcessErrorFileDirectory );

nLyckadScan1 = SCAN (sLogDirName1, sLogDirPath) ;
nLyckadScan2 = SCAN (sLogDirName2, sLogDirPath) ;
IF ( nLyckadScan1 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan1, LONG (sLogDirName1)) | sEnvFileName;
ELSEIF ( nLyckadScan2 <> 0 );
sDataDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sDataDirName;
sEnvDirPath = DELET (sLogDirPath, nLyckadScan2, LONG (sLogDirName2)) | sEnvFileName;
ELSE;
# -- assume the log and data folders are the same --
sEnvDirPath = 'D:\Program Files\ibm\cognos\tm1_64\samples\tm1\PlanSamp\' | sEnvFileName ;
ENDIF;


# -- setup the parameter cube --
cube2 = 'Systemparameters';
DimName3 = 'Sysparameters' ;
DimName4 = 'Measure Parameter';
Element5= 'Text' ;

# -- if the cube does not exist, create it (and its dimensions)
IF ( CubeExists( Cube2 ) = 0 ) ;
IF (DimensionExists( DimName3 ) = 0 ) ;
DimensionCreate (DimName3 ) ;
ENDIF;
IF (DimensionExists( DimName4 ) = 0 ) ;
DimensionCreate (DimName4 ) ;
ENDIF;
# -- create a cube
CubeCreate(Cube2, DimName3 , DimName4 );
ENDIF;

DimensionElementInsertDirect (DimName4, '', Element5 ,'S') ;


# -- set the source to be the file
DatasourceNameForServer = sEnvDirPath ;
DatasourceNameForClient = sEnvDirPath ;

#Section Metadata

#-- check that the application is correct from the file
# SCAN(substring, string)
vLocation = SCAN(':',vApplication ) ;
vServerName = SUBST( vApplication, 1, vlocation - 1 ) ;
aServerName = ATTRS( DimName1, Element1 , 'value') ;
IF (aServerName @<> vServerName);
ProcessQuit;
ENDIF;

# -- add the dimension element from vParameter
# DimensionElementInsertDirect(DimName, InsertionPoint, ElName,ElType);
DimensionElementInsertDirect (DimName3, '', vParameter ,'S') ;

# -- add the lastupdated parameter
DimensionElementInsertDirect (DimName3, '', Element2 ,'S') ;

#Section Data

# -- load the data to the cube 
CellPutS( vValue, cube2, vParameter, vColumn );


#Section Epilog

# -- write the time to cube
CellPutS( LastProcessStart, cube2, Element2, vColumn );

The import TI process needs a file as its data source, similar to this:

Both processes require that you have a dimension (SYS_ServerParameters) that holds the name of the application.

This can be done with the process SYS.GET_ServerParameters.

More Information:

https://www.ibm.com/docs/ru/planning-analytics/2.0.0?topic=data-exporting-from-cube

https://www.ibm.com/docs/en/cognos-tm1/10.2.2?topic=file-parameters-in-tm1scfg 

https://exploringtm1.com/how-to-create-a-tm1-model-a-best-practice-guide/

https://cubewise.com/blog/adjusting-server-configuration-planning-analytics/

https://bihints.com/book/export/html/100

https://exploringtm1.com/text-file-export-tm1-asciioutput-functions/

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server
Google Chrome

Issue:
The background image in TM1WEB has been changed on the server, but the change is not shown in end users’ web browsers.

Solution:

On most browsers, pressing Ctrl-F5 reloads the page and bypasses the cache. The Wikipedia “Bypass your cache” page has details for all modern web browsers.

Or you can rename the file PA_header.svg in folder D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\standalone\images\login to something else. Then you need to update all code that references that file; it can be these CSS files for TM1 Web:

D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\carbon.css

D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\standalone\all.css

D:\Program Files\ibm\cognos\tm1web\webapps\tm1web\scripts\tm1web\themes\carbon\standalone\LoginDialog.css

 

Or clear the web browser cache on the end user’s laptop.
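Renaming PA_header.svg works because the browser treats the new name as a brand-new resource; this is manual cache-busting. The same idea can be automated by deriving the file name (or a ?v= query string) from the file's content, as in this generic sketch (our own illustration, not part of TM1 Web):

```python
import hashlib

def busted_name(filename: str, content: bytes) -> str:
    """Append a short content hash so any change in content produces a new URL."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

# Same content -> same name; changed content -> a new name the browser must fetch.
print(busted_name("PA_header.svg", b"old image bytes"))
print(busted_name("PA_header.svg", b"new image bytes"))
```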

More Information:

https://medium.com/@gulshan_n/overcoming-persistent-image-caching-how-to-ensure-visitors-see-your-latest-website-changes-cd89a1434e1a 

https://www.avast.com/c-how-to-clear-cache

Resurrecting images from a web browser cache can be a useful way to recover lost or deleted images that you previously viewed online. Here’s a step-by-step guide on how to do this for various web browsers:

Google Chrome

  1. Access Cache:
    – In older versions of Chrome, typing chrome://cache in the address bar listed the cached files; modern Chrome has removed this page, so a third-party tool (see below) is needed.
  2. Find Images:
    – You can’t view the images directly, but you can see the URLs. To find images, you can also use a tool like ChromeCacheView (a third-party utility) that allows you to view and extract images from the cache more easily.
  3. Extract Images:
    – If using ChromeCacheView, download and run it. It will display cached files, including images. You can select the images you want to save and extract them to your computer.

Mozilla Firefox

  1. Access Cache:
    – Type about:cache in the address bar and press Enter. This will show you the disk and memory cache information.
  2. Find Images:
    – Look for the section that lists the cache entries. You can find images by checking the file type and URL.
  3. Extract Images:
    – You can use Mozilla’s cache viewer or a third-party tool like Firefox Cache Viewer to extract images more conveniently.

Microsoft Edge

  1. Access Cache:
    – Like modern Chrome, Chromium-based Edge no longer exposes a cache listing page at edge://cache.
  2. Find Images:
    – Use a third-party tool like EdgeCacheView to find and extract images from the cache.

Safari

  1. Access Cache:
    – For Safari, the cache is not easily accessible like in other browsers. You can use Terminal commands or third-party tools.
  2. Find Images:
    – You can also look in the ~/Library/Caches/com.apple.Safari/ directory to find cached files.
  3. Extract Images:
    – Use a third-party application like SafariCache to help retrieve images from the cache.

General Tips

  • Look for File Types: When browsing the cache, look specifically for file types like .jpg, .png, or .gif.
  • Use Third-Party Tools: Tools like WebCacheImage, ChromeCacheView, and others can simplify the process.
  • Browser Extensions: Some extensions can help manage and view cached files directly from the browser.

https://www.linkedin.com/pulse/11-ways-fix-google-chrome-loading-images-benjamin-ladegbaye 

How to refresh cached images and files in Chrome

 

Product:
Planning Analytics 2.0.9.19
Microsoft Windows 2019 server

Issue:
I would like to turn off the DEV TM1 servers during the weekend; can I do it with a chore?

Solution:

Create a parameter cube, something like below, and add the TM1 instance names in the application column. Set a Y in the Stop column if you want the chore to stop the service. In the last two columns, we record the time when the service was stopped or started.

# -- setup the parameter cube --

Cube = 'ScheduleCube' ;
DimName1 = 'sApplication' ;
DimName2 = 'ScheduleMeasures' ;
sLoadtestdata = 'NO' ;
# sLoadtestdata = 'YES' ;

DimensionDestroy (DimName1 ) ;
DimensionCreate (DimName1 ) ;

# -- add elements
nNum = 1;
While( nNum <= 15 );
sNum = numbertostring (nNum) ;
DimensionElementInsertDirect(DimName1 ,'', sNum , 'N' ) ;
nNum = nNum + 1;
End;



DimensionDestroy (DimName2);
DimensionCreate (DimName2 );

# -- add elements
DimensionElementInsertDirect (DimName2 ,'', 'Application' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Text' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'StartRun' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'StopRun' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Stop' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Start' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Last Stopped' , 'S' ) ;
DimensionElementInsertDirect (DimName2 ,'', 'Last Started' , 'S' ) ;


# -- create a cube
CubeCreate(Cube, DimName1 , DimName2 );

# -- add data to cube --

IF ( sLoadtestdata @= 'YES' ) ;
cellputs ( 'planning sample' , Cube , '1' , 'application' ) ;
cellputs ( 'Y' , Cube , '1' , 'Start' ) ;
cellputs ( 'Y' , Cube , '1' , 'Stop' ) ;
cellputs ( 'proven_techniques' , Cube , '2' , 'application' ) ;

# -- add your default test data here -- 
ENDIF;

Create then a TM1 TI process to stop services:

# --- stop a service --

Cube = 'ScheduleCube' ;
DimName1 = 'sApplication' ;
DimName2 = 'ScheduleMeasures' ;

# -- check number of elements --
nLong = DIMSIZ (DimName1) ;
nNum = 1 ;
WHILE ( nNum <= nLong ) ;
sNum = numbertostring (nNum) ;
# -- get the application name --
sApp = CELLGETS ( cube , sNum, 'Application' ) ;
# -- check that it is not empty --
IF (LONG (sApp) <> 0 );
# -- check that it is supposed to be stopped --
IF ('Y' @= CELLGETS ( cube, sNum, 'Stop' ) ) ;

# -- get the time and put in the cube --
sTime = TIMST(now, '\Y-\m-\d \h:\i:\s');
CELLPUTS ( sTime, cube, sNum, 'Last Stopped' ) ;
# -- make the call to stop --

sBatchFile = 'NET STOP "' | sApp |'"' ;
ExecuteCommand('cmd /c ' | sBatchFile, 0);

ENDIF;
ENDIF;
nNum = nNum +1 ;
END;


Create a TM1 TI process to start services:

# -- start a service --

Cube = 'ScheduleCube' ;
DimName1 = 'sApplication' ;
DimName2 = 'ScheduleMeasures' ;

# -- check number of elements --
nLong = DIMSIZ (DimName1) ;
nNum = 1 ;
WHILE ( nNum <= nLong ) ;
sNum = numbertostring (nNum) ;
# -- get the application name --
sApp = CELLGETS ( cube , sNum, 'Application' ) ;
# -- check that it is not empty
IF (LONG (sApp) <> 0 );
# -- check that it is supposed to be started --
IF ('Y' @= CELLGETS ( cube, sNum, 'Start' ) ) ;

# -- get the time and put in the cube --
sTime = TIMST(now, '\Y-\m-\d \h:\i:\s');
CELLPUTS ( sTime, cube, sNum, 'Last Started' ) ;
# -- make the call to start --

sBatchFile = 'NET START "' | sApp |'"' ;

ExecuteCommand('cmd /c ' | sBatchFile, 0);

ENDIF;
ENDIF;
nNum = nNum +1 ;
END;

 

Schedule them in chores, to run on Friday at 23:00 and on Monday at 03:00, so the service is stopped and started.

I recommend that you have a “SaveDataAll” process in the TM1 applications that runs every Friday before the above.

Set the chore to start on the weekday it should stop the service, and set it to run every 7 days.

More Information:

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=smtf-savedataall

https://www.mci.co.za/business-performance-management/learning-mdx-views-in-ibm-planning-analytics/

Primary sales forecasting using PA, SPSS Modeler and python

https://www.wimgielis.com/tm1_createatestcube_EN.htm

Product:

Planning Analytics 2.0.9.19
Microsoft Windows 2019 server

Issue:

TI processes take longer to run, and TM1 Architect is unresponsive. If you check the tm1server.log file, you find text like this:

TM1.Server sf_Rename: Failed to rename (d:\tm1data\products}subs\nameoftheprocesshaveingissus.sub) to (d:\tm1data\products}subs\nameoftheprocesshaveingissus.sub$$$). Error: error code:5 reason:”Access is denied.” file..
….could not rename file to intermediate file, “nameoftheprocesshaveingissus.sub” to “nameoftheprocesshaveingissus.sub$$$” after 10 retrys in 20 seconds…

 

Solution:

Go into Windows Services and change the user that runs the Cognos TM1 server instance to Local System instead of a specific user or service account.

Security requirements on the server’s NTFS file system may have changed through a new Windows policy.

Make certain you are logged into Windows with a user account that has permission to access the location (the folder where the TM1 application is). If you run any process that creates subsets or views, you also need to make sure that the user account that runs the TM1 service has permission to save files to the software’s data folder and its subfolders.
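The failing operation in the log is a file rename in the data folder's `}subs` directory, so you can verify permissions directly by attempting the same kind of rename. This is a hypothetical checker script; the folder path is a placeholder for your own data directory:

```python
import os
import uuid

def can_rename_in(folder: str) -> bool:
    """Create a scratch file in `folder` and try the rename pattern TM1 uses."""
    src = os.path.join(folder, f"permcheck_{uuid.uuid4().hex}.sub")
    dst = src + "$$$"
    try:
        with open(src, "w") as f:
            f.write("test")
        os.rename(src, dst)   # the operation that fails with "Access is denied"
        return True
    except OSError:
        return False
    finally:
        for p in (src, dst):
            try:
                os.remove(p)
            except OSError:
                pass

# Placeholder path; run this as the TM1 service account against the real folder.
folder = r"d:\tm1data\products}subs"
if os.path.isdir(folder):
    print("rename allowed:", can_rename_in(folder))
```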