Product:

Planning Analytics 2.0.9.13 TM1_version=TM1-AW64-ML-RTM-11.0.913.10-0

Microsoft Windows 2019 server

Problem:

How do you make a drill process?

Solution:

Inside the sample application proven_techniques, log in as ADMIN and create a drill from the cube "Drivers Alloc" to the cube "employeedetails" (not the best example, as they do not contain much information).

First, right-click the cube "Drivers Alloc" and select Drill – Create Drill Process.

Click Next, select Cube View as the source, and select any view from the target cube.

Click Finish and enter a name like "TOemployees". The hidden process }Drill_TOemployees is created.
Right-click the cube "Drivers Alloc" again and select Drill – Create Drill Assignment Rule.

Enter the rule below to make all cells drillable via the TI process TOemployees:

[]=s:'TOemployees' ;

Then save it.
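
If only some cells should be drillable, you restrict the area definition of the rule instead. A minimal sketch, assuming you only want drill on Budget cells (the element name is only an illustration):

['Budget'] = S: 'TOemployees' ;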

Then open the TI process to edit it. The first tabs are automatically populated from the cube used at creation.

In the Prolog, add this code:

#****Begin: Generated Statements***
#****End: Generated Statements****

ByPass = 'YES';

# set the name of the target cube where the view is built
pCube = 'employeedetails' ;

# Specify dimension name
DimName1= 'employees' ;
DimName2 = 'budget' ;
DimName3 = 'departments' ;
DimName4 = 'employeedetails' ;

#Set Constants for Inserting into View

# Element1 = 'Total Employees' ; we get only the lowest-level members via MDX instead
Element2 = 'Budget' ;

# set the parameter we get from the drill function to make a selection in the view
Element3 = Receivers ;

# Element4 = 'Current Salary' ; we get all members by not including it in the code

# set the view name to a unique value
sView = 'Drill__' | pCube ;
sSubset = sView;

If( ViewExists( pCube, sView ) = 1 );
ViewDestroy( pCube, sView );
EndIf;
ViewCreate( pCube, sView );

# delete the subset if it exists
If( SubsetExists( DimName1 , sSubset ) = 1 );
SubsetDestroy( DimName1 , sSubset );
EndIf;

# add all leaf elements to the dimension
sMDX = '{TM1FILTERBYLEVEL( {TM1SUBSETALL( [ ' | DimName1 | ' ] )}, 0)}' ;
SubsetCreateByMDX( sSubset, sMDX );

ViewSubsetAssign( pCube, sView, DimName1, sSubset );

# only include one element of the dimension in the view
If( SubsetExists( DimName2 , sSubset ) = 1 );
SubsetDeleteAllElements( DimName2 , sSubset );
Else;
SubsetCreate( DimName2 , sSubset );
EndIf;
SubsetElementInsert( DimName2, sSubset, Element2, 1 );

ViewSubsetAssign( pCube, sView, DimName2, sSubset );

# only include the element from the drill parameter in the view
If( SubsetExists( DimName3 , sSubset ) = 1 );
SubsetDeleteAllElements( DimName3 , sSubset );
Else;
SubsetCreate( DimName3 , sSubset );
EndIf;
SubsetElementInsert( DimName3, sSubset, Element3, 1 );

ViewSubsetAssign( pCube, sView, DimName3, sSubset );

# as DimName4 does not have a total element,
# we leave it out and get all values in the view for that dimension.

# set which dimensions should be on rows and columns
ViewRowDimensionSet( pCube, sView, DimName1 , 1 );
ViewColumnDimensionSet( pCube, sView, DimName4 , 1 );
ViewTitleDimensionSet( pCube, sView, DimName3 );

# set standard values to exclude some cells
ViewExtractSkipCalcsSet( pCube, sView, 1 );
ViewExtractSkipRuleValuesSet( pCube, sView, 1 );
ViewExtractSkipZeroesSet( pCube, sView, 1 );

In the Epilog, add this code. Because ByPass is set to 'YES' in the Prolog, the ELSE branch runs and returns the custom view instead of the generated one:

IF (ByPass @= 'NO' ) ;

#****Begin: Generated Statements***
RETURNVIEWHANDLE('employeedetails','employeedetails');
#****End: Generated Statements****

ELSE;
RETURNVIEWHANDLE( pCube, sView ) ;
ENDIF;


Click on a lowest-level cell in the cube and select Drill from the pop-up menu.

The view built by the TI process is shown – the only value that changes is the receivers dimension, as that parameter is used to build the view.

You can build a better solution by checking out the links below.

More Information:

https://www.wimgielis.com/tm1_neatmdxtricks_EN.htm

http://yinsolutions.blogspot.com/search/label/7.%20Planning%20Analytics%20%2F%20TM1

https://pmsquare.com/analytics-blog/2020/4/13/pa-drilling-through-to-greater-detail

https://quebit.com/askquebit/ibm-planning-analytics-setting-up-basic-drill-through-functionality/

https://lodestarsolutions.com/tm1-drill-through-from-cube-to-cube-not-working/

https://blogs.perficient.com/2011/10/13/cognos-tm1-drill-through-3/

https://quebit.com/askquebit/creating-effective-source-and-zero-out-views-in-planning-analytics/

https://code.cubewise.com/ti-helper

https://www.bihints.com/book/export/html/68

https://exploringtm1.com/drill-through-processes/

Product:
Microsoft Azure Data Factory
Issue:
How do you schedule the ADF main pipeline to run every hour?

Solution:

Inside ADF ( https://adf.azure.com/en/authoring/pipeline/Main ), under Factory Resources, select your pipeline named "main".
Click the Trigger (1) icon to get the drop-down that shows "New/Edit", and select it. (The number shows how many triggers you have running.)

Click the arrow at Choose trigger to get the New option to select.

Enter a name for your trigger.
Edit the start date to some time the next day – it is easiest to type just the numbers in this field. The trigger starts at the time specified, so set it to 1:00:00 AM to make it run on every whole hour. If you set it to 1:30:00, it will run at 1:30, then 2:30, and so on.
Select the time zone you are in – otherwise the trigger will run at a time you do not expect.

Set the Recurrence to 1 hour if you want it to run every hour.
Note: ensure that it is set to Started.
Then click OK.

You need to click OK one more time.

You need to click the Publish All button in ADF to upload and activate the change.

To check your trigger the next day, go to the Monitor icon and Trigger runs. There you will see a list of the last 24 hours' trigger runs and whether they were successful.

To edit an existing trigger, go to the pipeline and again open the Trigger drop-down and the New/Edit link.

Click on your trigger name to open it in the Edit dialog. Here you can change the name and the time it will run – select a start date and time in the future.

Ensure it is started before you press OK.
Then you have to publish your changes to ADF again to make them take effect.

As Azure changes the layout all the time, the dialogs may look different when you read this.
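
Behind the UI, a schedule trigger is stored as a JSON definition. A minimal sketch of an hourly trigger for the main pipeline (the name, start time, and time zone are assumptions):

{
  "name": "HourlyTrigger",
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "Hour",
        "interval": 1,
        "startTime": "2024-01-02T01:00:00",
        "timeZone": "W. Europe Standard Time"
      }
    },
    "pipelines": [
      {
        "pipelineReference": {
          "type": "PipelineReference",
          "referenceName": "Main"
        }
      }
    ]
  }
}

The referenceName must match your pipeline name, and timeZone should match what you selected in the dialog.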

More Information:

https://www.mssqltips.com/sqlservertip/6062/create-schedule-trigger-in-azure-data-factory-adf/ 

https://www.mssqltips.com/sqlservertutorial/9402/azure-data-factory-scheduling-and-monitoring/ 

https://www.sqlshack.com/how-to-schedule-azure-data-factory-pipeline-executions-using-triggers/ 

https://www.serverlessnotes.com/docs/schedule-azure-data-factory-pipeline-executions 

https://learn.microsoft.com/en-us/azure/data-factory/how-to-create-schedule-trigger?tabs=data-factory 

Product:
Azure Data Factory (ADF)

Issue:
Cannot connect to the SQL server from ADF in the same subscription.

Cannot connect to SQL Database: 'databaseservername.database.windows.net', Database: 'databasename', Reason: Connection was denied since Deny Public Network Access is set to Yes. To connect to this server,
1. If you persist public network access disabled, please use Managed Virtual Network IR and create private endpoint. https://docs.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint; https://docs.microsoft.com/en-us/azure/data-factory/tutorial-copy-data-portal-private;
2. Otherwise you can enable public network access, set "Public network access" option to "Selected networks" on Azure SQL Networking setting.

Solution:

Promote yourself to Owner in the subscription and ADF.
Inside ADF, you first need to ensure that the integration runtime uses a Managed Virtual Network. Create a new integration runtime:
select Azure, Self-Hosted,
then select Azure,
set the region to your needs, and click Create.

When this uses a Managed Virtual Network, you can go on and create the linked service.
Select Azure SQL database.
In the connect-via drop-down, select the integration runtime created above (integrationruntime2) that has the managed virtual network.
Select your Azure Subscription.
Select your database server name from the drop-down.
Select the database name.
Set the Authentication type to "System Assigned Managed Identity".
Click Test connection, and if it is OK, click Create.

Then, inside SSMS, you add the ADF managed identity to the database with commands similar to these:

In master database:

CREATE LOGIN [adf_user] FROM EXTERNAL PROVIDER;

CREATE USER [adf_user] FROM LOGIN [adf_user] WITH DEFAULT_SCHEMA=[dbo];

In user database:

CREATE USER [adf_user] FROM LOGIN [adf_user];

ALTER ROLE [db_owner] ADD MEMBER [adf_user];
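
To verify that the user was created, you can run a query like this in the user database (a sketch; adjust the name to your data factory's managed identity):

SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE name = 'adf_user';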


More Information:

https://learn.microsoft.com/en-us/azure/data-factory/managed-virtual-network-private-endpoint

https://lazyadmin.nl/office-365/how-to-use-azure-managed-identity/ 

https://www.inthecloud247.com/configure-a-user-assigned-managed-identity-the-basics/ 

Product:

Planning Analytics Workspace 88
Microsoft Windows 2019 server

Issue:

How do you set up the SMTP mail function?

Solution:

First, check that port 25 is open from the TM1 server to the SMTP Exchange server.
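
A quick way to test this from the TM1 server is PowerShell's Test-NetConnection (the host name is a placeholder):

# TcpTestSucceeded : True means port 25 is reachable
Test-NetConnection -ComputerName smtp.servername.domain.com -Port 25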

Check that the SMTP Exchange server accepts mail from the TM1 server – talk to the e-mail team at the company.

Update the bootstrap.properties file located in the folder D:\Program Files\ibm\cognos\tm1_64\paa_agent\wlp\usr\servers\kate-agent


Set these properties for unencrypted mail internally at the company:

SMTP_EMAIL_PORT=25

SMTP_EMAIL_AUTH=FALSE

SMTP_EMAIL_HOST=smtp.servername.domain.com

SMTP_EMAIL_START_TLS_ENABLE=FALSE

SMTP_EMAIL_USERNAME=

SMTP_EMAIL_PASSWORD=

PAA_EMAIL_ADDRESS=donald@domain.com

SEND_GRID_API_KEY=""

SEND_GRID_EMAIL_URL=https\://api.sendgrid.com/v3/mail/send

After the changes, you need to restart the Windows service: IBM Planning Analytics Administration Agent.

Then go to PAW, open the administration agent tab, and click the Test email button – enter your company mail address.

Then you can set up alerts, so you get an e-mail when the disk is full.

To get the share-book function to have an e-mail option, you need to update the paw.ps1 file on the PAW server with information like this:

$env:ENABLE_EMAIL="true"
$env:SENDGRID_API_KEY=""
$env:EMAIL_FROM="donald@domain.com"
$env:EMAIL_SMTP_URL="smtp://servername.domain.com:25"


You need to stop and start PAW for the changes in paw.ps1 to take effect: run .\paw.ps1 stop, then .\paw.ps1, in the scripts folder.

If you get an error, check the log file (app.log) in the folder D:\PAW88\log\share-app on the PAW server.

The error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames can mean that the EMAIL_SMTP_URL server name does not match the certificate's DNS altnames. Change to another server name.

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=2wnj12-enable-email-notifications-in-planning-analytics-workspace-local

https://www.ibm.com/support/pages/planning-analytics-agent-messagelog-email-configuration-properties-missing 

https://pmsquare.com/analytics-blog/2020/11/8/ibm-planning-analytics-system-alerts-via-email

From Declan, TM1 champion:

Setup on the PAW server

The obvious starting point is to gather details of your SMTP server and the port it uses for sending email – it can often have multiple ports available, in which case find out the "preferred" one. Generally speaking:

  • 587 – for TLS
  • 465 – For SSL
  • 25 – This might work (it does for office 365 at least) but general consensus is to not use it as it’s often blocked due to SPAM.

Also, gather the relevant account’s email and password.

Relevant Material

Based on googling the examples provided in the documentation:

I believe that node.js is being used to send the emails, and nodemailer.com/smtp has documentation covering most of the parameters used in the examples (e.g. ?Secure=true)

This doesn’t cover the tls.rejectUnauthorized=false that is used in the example, but looking that up suggests that in the event that your server can’t be verified it will go ahead and send the email anyway… so that sounds like a parameter that I would be cautious about using!

What Worked

Gmail:

  • $env:EMAIL_SMTP_URL="smtp://<MyUserName>:<MyPassword>@smtp.gmail.com:587"
  • $env:EMAIL_SMTP_URL="smtps://<MyUserName>:<MyPassword>@smtp.gmail.com:465"
  • $env:EMAIL_SMTP_URL="smtps://<MyUserName>:<MyPassword>@smtp.gmail.com:465?secure=true&tls.rejectUnauthorized=false"

In all cases:

  • $env:ENABLE_EMAIL="true"
  • $env:SENDGRID_API_KEY=""
  • $env:EMAIL_FROM="<An email address that the credentials used in $env:EMAIL_SMTP_URL have access to send from>"

Product:
Planning Analytics 2.0.9.18
Microsoft Windows 2019 server

Issue:
How do you make a TM1 TI process that reads values from a file and adds them as aliases to a dimension?

Solution:

Create a text file where the first column contains the element names of the dimension, and the following columns contain the text you want in the aliases.

Here we have two aliases that we want to add to each element.
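
A semicolon-separated file could look like this (the element names and alias texts are made up):

E001;Donald Duck;DD
E002;Daisy Duck;DaD
E003;Mickey Mouse;MM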

Create a new TM1 TI process and load the file. (You have to upload the file first, and then change the location the file is read from in the TI data source.)

In the data source tab, enter the variable names and change their type to String.

Enter the parameters you need, so the process works with any dimension and alias name you need in the future. When you run the process, you change the filename, path, etc. to the values that are correct for you.

Enter a script similar to the sketch below. You can add code that checks if the alias already exists, so you do not delete the attribute but only update the elements listed in the text file. Run the process and enter your filename.
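
A minimal sketch of such a process, assuming parameters pDimName, pAlias1, pAlias2 and the string variables vElement, vAlias1, vAlias2 from the data source (all names are assumptions):

In the Prolog:

# create the alias attributes if they do not already exist
IF( DIMIX( '}ElementAttributes_' | pDimName, pAlias1 ) = 0 );
AttrInsert( pDimName, '', pAlias1, 'A' );
ENDIF;
IF( DIMIX( '}ElementAttributes_' | pDimName, pAlias2 ) = 0 );
AttrInsert( pDimName, '', pAlias2, 'A' );
ENDIF;

In the Data tab:

# skip rows where the element does not exist in the dimension
IF( DIMIX( pDimName, vElement ) = 0 );
ItemSkip;
ENDIF;
# write the alias values for the element
AttrPutS( vAlias1, pDimName, vElement, pAlias1 );
AttrPutS( vAlias2, pDimName, vElement, pAlias2 );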

The alias attributes are then listed in the dimension.

You can view an icon indicating the attribute type in the column headers. Click the Settings icon and enable the Attribute type option to show the attribute type on column headers; disable the option to hide it.

More Information:

https://everanalytics.wordpress.com/2017/08/10/tm1_alias_attributes/ 

Accomplish Attribute Update in Cognos TM1 Smartly!

https://everanalytics.wordpress.com/2015/07/26/create-a-very-large-dimension-in-cognos-tm1-using-turbo-integrator-ti/ 

AttrPutS in TM1: How to Use and Syntax

https://cubewise.com/blog/tm1-attributes-things-to-be-aware/ 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=cdipawc-import-members-attributes-into-dimension-in-planning-analytics-workspace 

Product:

Microsoft SQL 2019 server standard
Microsoft Windows 2022 server datacenter

Issue:

How do you take a backup that does not affect the nightly differential backups, etc.?

Solution:

Take a full backup and select the Copy-only backup option on the General page.

In this example, a copy-only backup of the test database is backed up to disk at the default backup location.

  1. In Object Explorer, connect to an instance of the SQL Server Database Engine and then expand that instance.
  2. Expand Databases, right-click test, point to Tasks, and then select Back Up….
  3. On the General page in the Source section, check the Copy-only backup checkbox.
  4. Select OK.
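
The same copy-only backup can be taken with T-SQL. A minimal sketch, assuming the database test and a backup folder that exists on the server:

BACKUP DATABASE [test]
TO DISK = N'D:\Backup\test_CopyOnly.bak'
WITH COPY_ONLY, INIT, STATS = 10;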


More Information:

https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/copy-only-backups-sql-server?view=sql-server-ver16

https://www.mssqltips.com/sqlservertip/1772/copy-only-backup-for-sql-server/ 

https://www.ninjaone.com/blog/copy-only-backup-in-sql-servers-explained/ 

Product:
Planning Analytics 2.0.9.18
Microsoft Windows 2019 server

Issue:

Errors in a TI process when it runs at random times.

Suggested solution:

Error: Prolog procedure line (9): Unable to register subset
This can happen when the destroy of the subset did not work, because a view that uses the subset still exists.
Check what views you have open.

Error: Prolog procedure line (103): Subset "xxx" not found in dimension "Period"
This can happen when the subset no longer exists; it has already been deleted.
It can also happen if you use SubsetCreate('Region', 'Northern Europe', 1);
Change the last argument to 0 (zero) in the TI process and see if that helps.

SubsetCreate(DimName, SubName, [AsTemporary]);

AsTemporary
This is an optional argument that specifies whether the subset being created is temporary. 1 indicates a temporary subset, 0 indicates a permanent subset. If this argument is omitted, the subset is permanent.
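
A defensive pattern in the Prolog is to destroy any view that uses the subset before destroying the subset, and to create the subset as temporary. A sketch with assumed cube, view, and subset names:

# destroy the view first, so the subset is no longer in use
IF( ViewExists( pCube, sView ) = 1 );
ViewDestroy( pCube, sView );
ENDIF;
# then destroy the old subset, if it still exists
IF( SubsetExists( sDimName, sSubset ) = 1 );
SubsetDestroy( sDimName, sSubset );
ENDIF;
# recreate the subset as temporary (1), so it is removed when the process ends
SubsetCreate( sDimName, sSubset, 1 );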


More Information:

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=smtf-subsetcreate

https://www.ibm.com/support/pages/unable-register-subset-error-when-creating-dynamic-subset-data-tab-ti 


Product:
Planning Analytics 2.0.9.18
Microsoft Windows 2019

Issue:
The time value in the log file is not correct.

Solution:

Add

# Specify GMT or Local timezone
log4j.appender.S1.TimeZone=Local

to your tm1s-log.properties file that should be in the same folder as your tm1s.cfg file.

This should give you log files with the same time stamp as the clock on the server.

To test, create a TI process with this in the Prolog:

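A minimal sketch, using the TI LogOutput function to write an INFO line to the server log (the message text matches the log lines below):

LogOutput( 'INFO', 'This is the end of the process !!!' );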

In the log file you will see lines like this:

13972 [4a] INFO 2023-11-03 15:17:41.836 TM1.TILogOutput This is the end of the process !!!
13972 [4a] INFO 2023-11-03 15:17:41.836 TM1.Process Process “S. 10 Test of time value”: finished executing normally, elapsed time 0.02 seconds
When GMT is set, the time in the log file will be GMT, and depending on the time difference it will differ from the server time. Check https://24timezones.com/difference/gmt/stockholm
15296 [4a] INFO 2023-11-03 16:27:43.826 TM1.TILogOutput This is the end of the process !!!
15296 [4a] INFO 2023-11-03 16:27:43.826 TM1.Process Process “S. 10 Test of time value”: finished executing normally, elapsed time 0.00 seconds

With Local set in the file, the time in the log file should be the same as the Windows server clock.

In the text file, the time is shown as above – this is always the computer time of the server. The function NOW gives the server time.

A chore runs at local time when you select Local. If UTC is selected, it runs at the GMT time shown.

18032 [4] INFO 2023-11-04 07:14:29.923 TM1.Chore Registering chore: “S. 10 test of time” Start time: 2023/11/04 06:15:45 Frequency: 01:00:00:00
15736 [] INFO 2023-11-04 07:15:45.050 TM1.Chore Chore “S. 10 test of time” executed by scheduler

15736 [] INFO 2023-11-04 07:15:45.050 TM1.Process Process “S. 10 Test of time value” executed by chore “S. 10 test of time”
15736 [] INFO 2023-11-04 07:15:45.053 TM1.TILogOutput This is the end of the process !!!
15736 [] INFO 2023-11-04 07:15:45.053 TM1.Process Process “S. 10 Test of time value”: finished executing normally, elapsed time 0.00 seconds
15736 [] INFO 2023-11-04 07:15:45.053 TM1.Chore Chore “S. 10 test of time” time = 0.00 seconds
15736 [] INFO 2023-11-04 07:15:45.054 TM1.Chore Chore “S. 10 test of time” finished executing

We recommend using the Local time setting in chores if you do not have servers in different time zones.

The line in the log file for when you set the start time will always be in GMT, like Registering chore: "S. 10 test of time" Start time: 2023/11/04 06:25:45 Frequency: 01:00:00:00, even though you have selected local server time in the chore and chosen to start at 7:25 AM.

To setup a chore:

  1. In the Server Explorer, select the Chores icon beneath the server on which you want to create the chore.
  2. Choose Chores > Create New Chore.

    The Chore Setup Wizard opens.

  3. In the Available list, select the process for which you want to create a chore.
  4. Click the right arrow icon.
  5. Click Next.
  6. Click a date on the calendar to specify a start date for the initial execution of the chore.
  7. Enter a time to specify the start time for the initial execution of the chore.
  8. Set the fields in the Chore Execution Frequency box to define the interval at which the chore is executed.
  9. Select a Run Chore Time option.
    • Local Server Time – Runs at the local server time, including during Daylight Saving Time/Summer Time periods.
    • UTC Time – Always runs at UTC, regardless of local Daylight Saving Time/Summer Time.
  10. Select the Chore Schedule is Active box.
  11. Click Finish.

    The Save Chore As dialog box opens.

  12. Enter a name for the chore and click Save.


More Information:

https://exploringtm1.com/changing-tm1-server-log-time-zone-stamp/ 

https://everanalytics.wordpress.com/2015/07/23/write-to-tm1server-log-file-from-turbo-intergrator-ti-process-in-cognos-tm1/ 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=dtrf-timst 

https://exploringtm1.com/date-and-time-functions-for-easy-date-manipulation/ 

https://www.ibm.com/docs/el/planning-analytics/2.0.0?topic=turbointegrator-scheduling-process-automatic-execution-chores 

https://tm1up.com/9-chores-in-tm1.html 

How to set up Turbo Integrator Chores to run Advanced Scheduling

https://www.bihints.com/pushing-data-iseries-tm1 

Product:
Cognos Analytics 11.1.7
Microsoft Windows 2019

Problem:

Users cannot log in to CA11; they get a blank page or a spinning icon.

Possible Solution:

Ensure the URL the user enters in the web browser ends with / and not only /bi.

Valid URL for Cognos:

http://servername.domain.com/ibmcognos/bi/

http://servername.domain.com/ibmcognos/bi/v1/disp/

NOT valid URL:

http://servername.domain.com/ibmcognos/bi

If the URL does not end with /, the last word in the URL is considered a file that needs to exist in that folder (D:\Program Files\ibm\cognos\analytics\webcontent).


Clear the cache in the user's web browser, or try a different web browser like Chrome.

If that does not help, check the Cognos server log file for more information: D:\Program Files\ibm\cognos\analytics\logs\cognosserver.log

More Information:

https://www.ibm.com/support/pages/how-set-sample-custom-login-page-ca-112x 

https://www.ibm.com/docs/en/cognos-analytics/11.1.0?topic=settings-defining-authentication-parameters-login-urls 

https://pmsquare.com/analytics-blog/2023/6/13/the-ultimate-guide-to-cognos

https://www.ibm.com/docs/en/cognos-analytics/11.1.0?topic=gateway-configure-cognos-analytics-your-web-server

https://www.ibm.com/support/pages/blank-screen-when-logging-https-enabled-cognos-analytics-1117-environment

https://www.ibm.com/docs/en/cognos-analytics/11.1.0?topic=services-configuring-iis-in-cognos-analytics

Product:
Planning Analytics Workspace version 88

Issue:

How do you select the lowest (leaf) members in the subset editor in Workspace?

Suggested Solution:

On the left side, select "All leaves" to list only the leaf members.

Click the replace icon to move everything selected on the left over to the right, replacing any previous values there.

Click Apply to update the view with the new subset.


More Information:

https://pmsquare.com/analytics-blog/2022/5/5/new-cube-viewer-set-editor-in-planning-analytics 

https://revelwood.com/ibm-planning-analytics-tips-tricks-new-parameters-for-turbo-integrator/ 

https://community.ibm.com/community/user/businessanalytics/blogs/stuart-king1/2021/12/13/planning-analytics-workspace-new-cube-viewer-and-s