Product:
Planning Analytics Local 2.0.9.18
TM1_version=TM1-AW64-ML-RTM-11.0.918.3-0
Microsoft Windows 2019 server

Problem:

After an upgrade of the Planning Analytics Local (PAL) installation, TM1 Application Web does not show the applications.

Suggested Solution:

Stop the service IBM Cognos TM1

Go to folder D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\configuration

Rename fpmsvc_config.xml to fpmsvc_config.xml.old.txt

Rename fpmsvc_config.xml.new to fpmsvc_config.xml

Copy the lines between the tm1 markers that contain the names of the applications:

<tm1>
 </tm1>

from the file fpmsvc_config.xml.old.txt to the file fpmsvc_config.xml

Now the applications and the gateway URI should be listed in the new file.

Start the service IBM Cognos TM1
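
The same steps can be scripted. Below is a minimal PowerShell sketch, assuming the service display name "IBM Cognos TM1" and the configuration folder mentioned above; adjust the paths to your own installation:

$cfg = 'D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\configuration'

# Stop the TM1 application service before touching the configuration files.
Stop-Service -DisplayName 'IBM Cognos TM1'

# Keep the old file for reference and promote the new template.
Rename-Item -Path (Join-Path $cfg 'fpmsvc_config.xml') -NewName 'fpmsvc_config.xml.old.txt'
Rename-Item -Path (Join-Path $cfg 'fpmsvc_config.xml.new') -NewName 'fpmsvc_config.xml'

# Manually copy the application entries between the <tm1> and </tm1> markers from
# fpmsvc_config.xml.old.txt into the new fpmsvc_config.xml before starting the service again.

Start-Service -DisplayName 'IBM Cognos TM1'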

More information:

https://www.ibm.com/support/pages/tm1-application-does-not-appear-tm1-applications-list-after-deployment

Product:

Planning Analytics Workspace 88  (file version.ps1 in folder paw\config contains a version number)
$env:PAW_BUILD="121"
$env:PAW_RELEASE="2.0.88"
Microsoft Windows 2019 server

Issue:

What containers should be running in a working PAW installation?

Solution:

Start PowerShell as administrator and enter the command:

docker ps

This will list all running containers; it should show these:

CONTAINER ID IMAGE                                                               PORTS                        NAMES
b6874749d0a5 127.0.0.1:5000/planninganalytics/prism-platform:3.0.2365.2-ltsc2019 9080/tcp                     prism-platform
5f104714f851 127.0.0.1:5000/planninganalytics/bss:1.0.1397-ltsc2019              8082/tcp                     bss
a946de8db063 127.0.0.1:5000/planninganalytics/pa-gateway:1.0.1098-ltsc2019       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp pa-gateway
83e22e0be1f8 127.0.0.1:5000/planninganalytics/neo-provision:1.0.392-ltsc2019     8083/tcp                     neo-provision
6d11be2a5fd7 127.0.0.1:5000/planninganalytics/neo-idviz:9.0.982-ltsc2019         9060/tcp                     neo-idviz
e8c3bd52ca54 127.0.0.1:5000/planninganalytics/monitor:2.0.88.3035-ltsc2019       9082/tcp                     monitor
bfa3ef090459 127.0.0.1:5000/planninganalytics/wa-proxy:1.0.1020-ltsc2019         1338/tcp                     wa-proxy
fb7bd169d5b3 127.0.0.1:5000/planninganalytics/async-service:1.0.387-ltsc2019     9666/tcp                     async-service
fef9418814e5 127.0.0.1:5000/planninganalytics/share-platform:1.0.425-ltsc2019    9110/tcp                     share-platform
afaeb58c897c 127.0.0.1:5000/planninganalytics/pa-plan-service:1.0.2023051901-ltsc2019 9080/tcp                plan-service
b84bc681967b 127.0.0.1:5000/planninganalytics/paw-ui-api:1.0.260-ltsc2019        3000/tcp                     paw-ui-api
31ac3574ea06 127.0.0.1:5000/planninganalytics/user-admin:1.0.737-ltsc2019        3333/tcp                     user-admin
5e55f009d40f 127.0.0.1:5000/planninganalytics/pa-glass:3.0.9788-ltsc2019         9080/tcp                     glass
8dc5e074265d 127.0.0.1:5000/planninganalytics/atlas-service:125-ltsc2019         9076/tcp                     atlas
5d482449ac97 127.0.0.1:5000/planninganalytics/prism-proxy:3.0.11099-ltsc2019     9090/tcp, 9100/tcp           prism-proxy
59b615950e2b 127.0.0.1:5000/planninganalytics/couchdb:234-ltsc2019               5984/tcp                     couchdb
1c0305d7d945 127.0.0.1:5000/planninganalytics/share-app:1.0.605-ltsc2019         9700/tcp                     share-app
010ac1fac8db 127.0.0.1:5000/planninganalytics/mongo:1.0.205-ltsc2019             27017/tcp                    mongo
481b8cb26b29 127.0.0.1:5000/planninganalytics/prism-app:3.0.5536-ltsc2019        9600/tcp                     prism-app
78709cb2d0d1 127.0.0.1:5000/planninganalytics/palm-service:1.0.340-ltsc2019      9085/tcp                     palm-service
545cf6eb986f 127.0.0.1:5000/planninganalytics/pa-content-service:1.0.310-ltsc2019 9191/tcp                    pa-content
61093630f145 127.0.0.1:5000/planninganalytics/redis:1.0.206-ltsc2019             6379/tcp                     redis
4e149a0ae6ea 127.0.0.1:5000/planninganalytics/pa-cdn:3.0.9788-ltsc2019           8080/tcp                     cdn
927748e22277 127.0.0.1:5000/planninganalytics/share-proxy:1.0.605-ltsc2019       9070/tcp                     share-proxy
8004ed0c651d 127.0.0.1:5000/planninganalytics/pa-predict-svc:1.0.1054-ltsc2019   9610/tcp                     pa-predict
0b084764c655 127.0.0.1:5000/planninganalytics/tm1proxy:1.0.411-ltsc2019          1339/tcp                     tm1proxy
394e473a354d 127.0.0.1:5000/planninganalytics/admintool:1.0.340-ltsc2019         8888/tcp                     admintool
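
If you want a scripted check, here is a minimal PowerShell sketch that compares the running container names against the expected list; the names in $expected are taken from the NAMES column above and may differ between PAW releases:

# List the names of the running containers and report any expected name that is missing.
$expected = 'prism-platform','bss','pa-gateway','neo-provision','neo-idviz','monitor',
            'wa-proxy','async-service','share-platform','plan-service','paw-ui-api',
            'user-admin','glass','atlas','prism-proxy','couchdb','share-app','mongo',
            'prism-app','palm-service','pa-content','redis','cdn','share-proxy',
            'pa-predict','tm1proxy','admintool'
$running = docker ps --format '{{.Names}}'
$missing = $expected | Where-Object { $_ -notin $running }
if ($missing) { Write-Warning "Not running: $($missing -join ', ')" }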

The command docker images will list all installed images; normally these should be present (a small counting check follows the list):

REPOSITORY                                       TAG                  IMAGE ID     CREATED      SIZE
127.0.0.1:5000/planninganalytics/monitor         2.0.88.3035-ltsc2019 cdd1118734aa 7 weeks ago  5.35GB
127.0.0.1:5000/planninganalytics/pa-glass        3.0.9788-ltsc2019    eda31db65292 2 months ago 5.27GB
127.0.0.1:5000/planninganalytics/pa-cdn          3.0.9788-ltsc2019    e30ce3e584eb 2 months ago 5.34GB
127.0.0.1:5000/planninganalytics/prism-proxy     3.0.11099-ltsc2019   ad0700244d5f 2 months ago 4.77GB
127.0.0.1:5000/planninganalytics/share-proxy     1.0.605-ltsc2019     7c0f57f0564c 2 months ago 4.71GB
127.0.0.1:5000/planninganalytics/share-app       1.0.605-ltsc2019     30028e59100d 2 months ago 4.72GB
127.0.0.1:5000/planninganalytics/pa-gateway      1.0.1098-ltsc2019    bc188fbdba7e 2 months ago 4.71GB
127.0.0.1:5000/planninganalytics/prism-platform  3.0.2365.2-ltsc2019  1af1c76c5ebe 2 months ago 5.66GB
127.0.0.1:5000/planninganalytics/prism-app       3.0.5536-ltsc2019    149080a8fc2d 2 months ago 4.73GB
127.0.0.1:5000/planninganalytics/palm-service    1.0.340-ltsc2019     5870fb15710e 2 months ago 4.76GB
127.0.0.1:5000/planninganalytics/tm1proxy        1.0.411-ltsc2019     301bf315ca8b 2 months ago 4.63GB
127.0.0.1:5000/planninganalytics/bss             1.0.1397-ltsc2019    16a9f3403a03 2 months ago 5.32GB
127.0.0.1:5000/planninganalytics/wa-proxy        1.0.1020-ltsc2019    d365a00fbcb0 2 months ago 4.73GB
127.0.0.1:5000/planninganalytics/paw-ui-api      1.0.260-ltsc2019     9e3f8ca98062 2 months ago 4.82GB
127.0.0.1:5000/planninganalytics/async-service   1.0.387-ltsc2019     f7e111569a61 2 months ago 4.82GB
127.0.0.1:5000/planninganalytics/pa-predict-svc  1.0.1054-ltsc2019    6dabd0bfa755 2 months ago 5.34GB
127.0.0.1:5000/planninganalytics/user-admin      1.0.737-ltsc2019     e4ea0b9f71cf 2 months ago 4.82GB
127.0.0.1:5000/planninganalytics/neo-provision   1.0.392-ltsc2019     7c04066c0fed 2 months ago 5.35GB
127.0.0.1:5000/planninganalytics/couchdb         234-ltsc2019         7832c5ecc13c 2 months ago 4.93GB
127.0.0.1:5000/planninganalytics/neo-idviz       9.0.982-ltsc2019     6847d7d6725d 2 months ago 5.55GB
127.0.0.1:5000/planninganalytics/pa-plan-service 1.0.2023051901-ltsc2019 04c8517f2f3a 2 months ago 5.28GB
127.0.0.1:5000/planninganalytics/admintool       1.0.340-ltsc2019     428093815025 2 months ago 4.72GB
127.0.0.1:5000/planninganalytics/pa-content-service 1.0.310-ltsc2019  bfa8b7dcb3f4 2 months ago 4.73GB
127.0.0.1:5000/planninganalytics/atlas-service   125-ltsc2019         f012ea094ed4 2 months ago 5.31GB
127.0.0.1:5000/planninganalytics/share-platform  1.0.425-ltsc2019     9ff87c1b417a 2 months ago 5.29GB
127.0.0.1:5000/planninganalytics/bss-init        1.0.342-ltsc2019     caf9f295cdcb 2 months ago 4.71GB
127.0.0.1:5000/planninganalytics/redis           1.0.206-ltsc2019     c5f67bbdafc5 2 months ago 4.64GB
127.0.0.1:5000/planninganalytics/mongo           1.0.205-ltsc2019     852817ecb6fe 2 months ago 4.85GB
127.0.0.1:5000/planninganalytics/ibm-java8       225-ltsc2019         7d2f0cc5bd1f 2 months ago 5.11GB
127.0.0.1:5000/planninganalytics/couchdb-init    1.0.835-ltsc2019     85873eaa9a40 2 months ago 4.62GB
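
A quick way to count the installed Planning Analytics images and compare with the list above (30 images in this release); a small PowerShell sketch:

# Count images pulled from the local planninganalytics registry.
(docker images --format '{{.Repository}}' | Select-String 'planninganalytics').Count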

 

If you are missing an image, the installation has failed. If not all containers are running, you may have issues with your anti-virus software.

Uninstall Trellix/McAfee, reboot the server, and try to install PAW again.

 

More Information:

https://www.ibm.com/support/pages/download-ibm-planning-analytics-local-v20-planning-analytics-workspace-release-88-fix-central

https://exploringtm1.com/how-to-install-planning-analytics-workspace-to-windows-server-2019/

https://www.ibm.com/support/pages/after-each-reboot-some-planning-analytics-workspace-containers-are-failing-start

https://circleci.com/blog/docker-and-cicd-tutorial-a-deep-dive-into-containers/

https://gist.github.com/danijeljw/a7a2553bd06742648172363ce3983a9a

Product:

Planning Analytics 2.0.9.18

Issue:

How do I create a simple check on input values in a TI process?

Solution:

Add an IF statement in the Prolog tab that checks the input values, for example against a maximum length, to ensure that the values entered are realistic.

When you have parameters the user should enter, you need some simple checks that the entered values make sense.

ProcessQuit will terminate that process. Metadata and Data will not be executed.

ProcessBreak will stop processing the source and proceed directly to the Epilog tab (without returning an error handle to the TI).

More Information:

https://www.wimgielis.com/tm1_break_EN.htm 

https://exploringtm1.com/processquit-tm1-command-use-and-syntax/ 

https://exploringtm1.com/itemreject-tm1-function-syntax-use/ 

This example simply uses DIMIX to check whether the element entered in a parameter exists in a dimension; if it does not, it puts a message in the log and quits the process.

# Prolog: validate the pLocation parameter before any data is processed.
IF ( DIMIX ( 'Location', pLocation ) = 0 );
  # Element not found in the Location dimension: log an error message and stop.
  sErrorMessage = 'The Location entered does not exist';
  ItemReject ( sErrorMessage );
ENDIF;

https://www.ibm.com/docs/en/cognos-tm1/10.2.2?topic=basics-string-length-limit-in-turbointegrator#StringLengthLimitinTurboIntegrator_NB5009
https://code.cubewise.com/blog/tm1-hidden-features

Product:

Planning Analytics 2.0.9.17

Problem:

The calculated cell does not contain a zero value; instead it contains an old number from before. Trace Feeder also shows that the empty cell has numbers that are used in the consolidation. The cube does not have FEEDERS of its own, but it is fed from a different cube.

Running CubeProcessFeeders ( 'cubename' ) solves the problem until the next restart of the TM1 instance.

Solution:

IBM is aware of a defect in Planning Analytics versions 2.0.9.15 IF2 through 2.0.9.17. Rule-derived cells are not calculating after a TM1 database restart when the cube doesn’t have a feeder statement. This defect is addressed in Planning Analytics 2.0.9.18, available 10 July 2023.

If we add a feeder to the rules file for the cube, then the values are correct in the consolidation cell.

You can also create a TM1 TI process with:

CubeProcessFeeders ( 'thecubename' );

Then create a TM1 chore that will run that every week, but do not activate it.

Click “create new chore” in TM1 Architect.

Select the TM1 TI process you created for “run cubefeeders” above.

In the next step, select that the chore should run every 7 days, and save it with a name such as "S. 1010 run cubefeeders".

Do not activate the chore.

Go to the TM1S.CFG file for your TM1 instance.

Add this line:

StartupChores=S. 1010 run cubefeeders

This means that at startup of the TM1 instance, the TI process in that chore is run before users can log in to the TM1 application.

In tm1server.log

21620 [] INFO 2023-07-13 14:12:59.743 TM1.Chore Executing Startup Chore "S. 1010 Run cubefeeders"
21620 [] INFO 2023-07-13 14:12:59.744 TM1.Process Process "S. 1111 run cubefeeders" executed by chore "S. 1010 run cubefeeders"
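
To confirm that the setting is in place and that the chore actually ran at startup, you can search the configuration file and the server log. A small PowerShell sketch; the paths below are only examples, so use the data and logging folders of your own instance:

# Check that the StartupChores line exists in tm1s.cfg (example path).
Select-String -Path 'D:\TM1\data\tm1s.cfg' -Pattern '^StartupChores='

# Show the latest startup chore entries in tm1server.log (example path).
Select-String -Path 'D:\TM1\log\tm1server.log' -Pattern 'Executing Startup Chore' | Select-Object -Last 5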

 

More Information:

https://blogs.perficient.com/2014/09/28/cognos-tm1-server-on-start-up/ 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=feeders-feeding-one-cube-from-another 

In this example we have cube A and cube B. Cube B is very similar to cube A, but it has one more dimension:

Cube A
Time
Measures

Cube B
Locations
Time
Measures

Say you have a sales value in cube A and you want to split it across the locations in cube B. The rule in cube B might look something like:

['Sales'] = N: DB('Cube A', !Time, !Measures) * DB('Location Split', !Location, !Time); (the latter being a lookup cube)

You will then need a feeder from cube A to cube B, written in the rules of cube A. It will look something like this:

['Sales'] => DB('Cube B', 'All Locations', !Time, !Measures);

In the example above the Location dimension does not exist in cube A. When you have a case like this, you need to select one item within the dimension that is missing from the source cube. In this case I have selected a consolidation called 'All Locations', which will in turn feed all its children. If you do not have a consolidation like this, create one and add all the children to it.

https://www.tm1forum.com/viewtopic.php?t=274 

https://www.ibm.com/support/pages/tm1-server-feeders-guidelines-and-best-practices 

https://www.decisionsystems.com/wp-content/uploads/2016/05/IBM-Cognos-TM1-Feeders-Best-Practices.pdf 

https://code.cubewise.com/blog/mastering-conditional-feeders-in-tm1

https://blog.octanesolutions.com.au/using-rules-and-feeders 

https://www.ibm.com/support/pages/node/7005575 

Product:
Planning Analytics 2.0.9.17
Microsoft Windows 2019 server

Problem:

How do I see the memory usage of feeders and other cube data?

Possible solution:

The }StatsByCube control cube captures information about the cubes in your TM1 application. You’ll need to enable Performance Monitor either by right-clicking on your TM1 instance in Architect and choosing Start Performance Monitor or by adding PerformanceMonitorOn=T to your TM1s.cfg file – the latter change requires a service restart to take effect.

After Performance Monitor starts, it takes a few seconds for the counters to begin writing to the }StatsByCube cube.

When you open }StatsByCube, the most useful view when starting out is to put the LATEST member from the }TimeIntervals in the title and the }StatsStatsByCube and }PerfCubes dimensions in the rows and columns. The counters you want to focus on are Memory Used for Views and Number of Stored Views. These represent the amount of RAM (in bytes) used for Stargate Views and the number of Stargate Views created for this specific cube.

1 Megabyte = 1,048,576 bytes
https://www.gbmb.org/gb-to-mb

The Memory Used for Feeders counter in this example is 458 865 488 bytes, which is roughly 438 MB of RAM for feeders alone (using 1 MB = 1,048,576 bytes). In most cases this value does not change much over time.
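
A quick way to do the conversion in PowerShell (the 1MB constant in PowerShell equals 1,048,576 bytes):

$feederBytes = 458865488
'{0:N0} bytes = {1:N1} MB' -f $feederBytes, ($feederBytes / 1MB)   # prints: 458,865,488 bytes = 437.6 MB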

More Information:

https://www.ironsidegroup.com/blog/tm1-stargate-views-explained/ 

https://blog.octanesolutions.com.au/performance-enhancements-using-tm1-cached-views

https://everanalytics.wordpress.com/2016/03/07/vmm_vmt_explained/

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=cubes-performance-monitoring-control 

Product:
Planning Analytics Workspace 88
Microsoft Windows 2019 server

Problem:

A new installation of PAW gives an error when you try to log in/connect to the site, where you use CAM security and are connected to a CA11 installation.

The PMHub service parameter was not specified or is not one of the configured locations

Solution:

Update the pmhub.html file on the CA gateway server, in the folder D:\Program Files\ibm\cognos\analytics\webcontent\bi, to include the PAW server name without port numbers, on this line:

// Update the following to point to the location of the pmhub service(s)
var pmhubURLs = ["http://palservername.domain.com:9510","http://palservername.domain.com","http://pawservername.domain.com","http://pawservername"];

 

Also check that the pmhub.html file name is in lowercase only, with no uppercase characters (for example "PMHub.html" would not be found).

-If Cognos Analytics is accessed through a gateway URL (for example by using IIS, like "http://<CAgateway>:80/ibmcognos/bi/v1/disp"), then the pmhub.html interoperability file must be placed in "<CA_Gateway_machine>/webcontent/bi/".
-If Cognos Analytics is accessed directly through an application tier or dispatcher (like "http://<CAdispatcher>:9300/bi/v1/disp"), then the pmhub.html file must be placed in each "<CA_Dispatcher_machine>/webcontent/" folder.

 

More information:

https://www.ibm.com/support/pages/planning-analytics-workspace-cam-security-either-http-404-or-blank-page-or-pmhub-errors 

https://community.ibm.com/community/user/businessanalytics/blogs/nadine-mnch/2018/03/07/cognosanalytics-and-planninganalytics-integration-walkthrough-part-3

Product:

Planning Analytics Workspace 88

Microsoft Windows 2019 server

Issue:

During the first ./start.ps1 you get errors saying that it does not work, after you have configured the settings in the admin tool and clicked Update.

“fatal: failed to start daemon: Error initializing network controller: failed during hnsCallRawResponse: hnsCall failed in Win32: The dependency service or group failed to start. (0x42c)”

 

Check if there is any anti-virus software on the server, like the Trellix Agent Service, which runs "C:\Program Files\McAfee\Agent\masvc.exe" /ServiceStart, or the Trellix Validation Trust Protection Service, which runs "C:\Program Files\Common Files\McAfee\SystemCore\mfemms.exe".

(Remove them first as a test.)

Solution:

Try to start the Docker containers one by one from the PowerShell prompt (a loop version of the same commands is sketched after the list below):

docker start glass
docker start plan-service 
docker start redis 
docker start prism-app 
docker start share-proxy 
docker start couchdb-init 
docker start share-platform 
docker start user-admin 
docker start mongo
docker start pa-predict
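
The same containers can be started in a loop; a small PowerShell sketch using the names from the commands above:

# Start each container and warn if docker reports a failure.
$containers = 'glass','plan-service','redis','prism-app','share-proxy','couchdb-init',
              'share-platform','user-admin','mongo','pa-predict'
foreach ($c in $containers) {
    docker start $c | Out-Null
    if ($LASTEXITCODE -ne 0) { Write-Warning "Failed to start container $c" }
}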

 

After you have started these ten, try to start the rest with the command: ./paw.ps1

If you get an error like this:

ERROR: for bss-init (232, 'WriteFile', 'The pipe is being closed.')
The current Compose file version is not compatible with your engine version. Please upgrade your Compose file to a more recent version, or set a COMPOSE_API_VERSION in your environment.
Execution failed with exit code 1: The current Compose file version is not compatible with your engine version. Please
upgrade your Compose file to a more recent version, or set a COMPOSE_API_VERSION in your environment

do not give up – try again to start that service by itself. It can work.

If you have temporary containers with names that begin with numbers, try the steps below in PowerShell:

cd <your_PAW_folder>
./scripts/paw.ps1 down

# if there are temporary containers, then uncomment and run this additional command too:
# docker rm $(docker ps -a -q)

stop-service docker
start-service docker
./scripts/paw.ps1

 

When everything is up, do a ./paw.ps1 stop.

Then reboot the Windows server and wait; the containers can take a long time to start.

Do all the PAW containers start successfully after the reboot? Then the installation should be up.

If it does not work, try migrating from McAfee to MS Defender on the server, and then try these PowerShell commands:

.\paw.ps1 stop
.\paw.ps1

If this starts all the needed services, the anti-virus software was probably the problem.

 

Check log files like D:\PAW\log\share-platform\messages.log and D:\PAW\log\prism-platform\messages.log for more information.

Errors like:

Caused by: com.ibm.wsspi.injectionengine.InjectionException: CWNEN1003E: The server was unable to find the concurrent/biExecSvc binding with the javax.enterprise.concurrent.ManagedExecutorService type for the java:comp/env/com.ibm.bi.platform.commons.shared.BIContextListener/execSvc reference.

found in D:\PAW\log\predict\messages.log have to be evaluated to decide whether they are harmful to the system.
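
A quick way to scan all the PAW container logs for this kind of message is a PowerShell sketch like the one below, assuming the D:\PAW\log folder used above:

# List the first 20 ERROR/CWNEN hits across all messages.log files.
Get-ChildItem 'D:\PAW\log' -Recurse -Filter 'messages.log' |
    Select-String -Pattern 'ERROR|CWNEN' |
    Select-Object Path, LineNumber -First 20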

 

The cause can be that the underlying hard disk is too slow, so the Docker containers cannot be created fast enough during the first installation, as ./start.ps1 tries to start all PAW containers at the same time.

Sadly, the most common issue for PAW is the anti-virus software on the Windows server.

 

More Information:

https://exploringtm1.com/how-to-upgrade-planning-analytics-workspace/ 

https://docs.docker.com/compose/compose-file/compose-versioning/

Search the internet for "Container was too slow to start because the computer was overloaded". For example, a Windows Server LTSC 2019 VM backed by a VHDX on a spinning disk has terrible IO performance, and can be overloaded and see startup failures.

https://www.ibm.com/docs/en/was-liberty/base?topic=manually-configuring-managed-executors

https://www.ibm.com/support/pages/after-each-reboot-some-planning-analytics-workspace-containers-are-failing-start

https://docs.trellix.com/bundle/agent-5.5.0-product-guide-epolicy-orchestrator/page/GUID-FA9DA1DC-CAB1-4E7F-8E28-6BA8CF12A4F5.html

Product:

Planning Analytics 2.0.9.17

Microsoft Windows 2019 server

Issue:

How do I find the Java version?

Solution:

Find the Java bin folder for your Cognos product; it can be a folder such as:

D:\Program Files\ibm\cognos\analytics\ibm-jre\jre\bin  or

D:\Program Files\ibm\cognos\tm1_64\jre\bin

Then start a cmd or PowerShell prompt in that folder.

Enter ./java -version to find the version.

The first line shows the version, for example: java version "1.8.0_331"
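
You can also run it directly from PowerShell without changing folder; a sketch using the TM1 path above (java -version writes to the error stream, hence the 2>&1 redirect):

& 'D:\Program Files\ibm\cognos\tm1_64\jre\bin\java.exe' -version 2>&1 | Select-Object -First 1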

More information:

https://www.ibm.com/support/pages/how-find-jre-java-version-used-cognos-analytics 

https://www.ibm.com/support/pages/sdk-java-versions-be-used-ibm-cognos-software-products

https://www.ibm.com/support/pages/how-determine-version-ibm-planning-analytics-spreadsheet-services-installed

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=pa2o12-upgrade-java-8-removal-java-7 

https://www.ibm.com/support/pages/node/6856457

Product:
Planning Analytics 2.0.9.17
Microsoft Windows Server 2019

Problem:

I want to copy a CSV file from a folder on the TM1 server to Azure file storage. How do I do it?

Solution:

Create a folder where you store both your azcopy.exe and the ps1 script (d:\script).  https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10

Create a folder where you store the file you want to send (d:\temp).

In the AZURE FILE STORAGE account you need to create a SAS key. https://adamtheautomator.com/azure-sas-token/ 

With that key, create the PowerShell script below and save it as uploadfile.ps1 in your d:\script folder.

# Anonymous credential type: the SAS token in the URL authenticates the request.
$env:AZCOPY_CRED_TYPE = "Anonymous";
$env:AZCOPY_CONCURRENCY_VALUE = "AUTO";
# Write the azcopy log to d:\temp instead of the profile of the account running the script.
$env:AZCOPY_LOG_LOCATION="d:\temp\";
# Copy the local file to the Azure file share; replace the storage account, share and SAS token with your own values.
d:\script\azcopy.exe copy "d:\Temp\testfile1.txt" "https://yourstorageaccountname.file.core.windows.net/input/testfile1.txt?sv=2022-12-03&ss=f&srt=sco&sp=rwlc&se=2023-08-09T19%3A35%3A07Z&st=2023-06-29T11%3A35%3A07Z&spr=https&sig=yoursaskeytokenandtheotherstuff3D" --overwrite=prompt --from-to=LocalFile --follow-symlinks --put-md5 --preserve-smb-info=true --disable-auto-decoding=false --recursive --log-level=INFO;
# Clear the environment variables again.
$env:AZCOPY_CRED_TYPE = "";
$env:AZCOPY_CONCURRENCY_VALUE = "";

You need to replace some parts above, with your values from the SAS key.

AZCOPY_LOG_LOCATION will write the log file from azcopy to that temp folder, instead of a folder under the user account.

You must provide the full path to all folders and executables, e.g. d:\script\azcopy.exe; otherwise the TI process will not find the files, and you will not get any errors.

Replace testfile1.txt with the name of the file you want to upload. You need to provide the file name in both the source path and the target URL.

Then you need to create a TM1 TI process with the line below in the Prolog tab:

 ExecuteCommand( 'Powershell -ExecutionPolicy ByPass -file "D:\script\uploadfile.ps1" ', 0 );

 

If you have issues, you can try this instead:

ExecuteCommand( 'Powershell -noprofile -noninteractive -ExecutionPolicy ByPass -file "D:\script\uploadfile.ps1" ', 0 );

To see which account is running PowerShell, create a PowerShell file with this content (d:\script\system.ps1):

try {
    # Capture the account that is executing the script.
    $Who = whoami.exe
}
catch {
    $Who = Write-Output "Error Message: $($_.Exception.Message) Error on line $($_.InvocationInfo.ScriptLineNumber)"
}
finally {
    # Write the result to a text file.
    $Who | Out-File -FilePath 'd:\Script\SystemWhoami.txt'
}

Then in TM1 call that ps1 file with this command:

ExecuteCommand( 'Powershell -ExecutionPolicy ByPass -file "D:\script\system.ps1" ', 0 );

This will create a file in the d:\script\ folder that contains the name of the account that ran the script.

If the TM1 application is run by the system account, that is the account you will see. If the TM1 service is run by a Windows user, that name will show in the file.

The TM1 process call to a PowerShell file with an azcopy command works when the system account runs the TM1 application.

Browse to your AZURE file storage account and check if the file was uploaded.

If the file already exists in the Azure file storage, the transfer will not succeed, because the script above uses --overwrite=prompt.
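
If you instead want azcopy to replace a file that already exists on the share, you can change the overwrite flag in the script above; a sketch with the same placeholders as before:

d:\script\azcopy.exe copy "d:\Temp\testfile1.txt" "https://yourstorageaccountname.file.core.windows.net/input/testfile1.txt?yourSAStokenhere" --overwrite=true --from-to=LocalFile --put-md5 --log-level=INFO;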

 

If it still does not work, check whether the TM1 Windows server has execution of PowerShell scripts disabled.

In some cases when PowerShell does not work, you may need to add two folders on the C: drive (a sketch that creates them follows the list):

(32Bit, always)
C:\Windows\System32\config\systemprofile\Desktop

(64Bit)
C:\Windows\SysWOW64\config\systemprofile\Desktop
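
A small PowerShell sketch that creates the two folders listed above (run PowerShell as administrator):

New-Item -ItemType Directory -Force -Path 'C:\Windows\System32\config\systemprofile\Desktop'
New-Item -ItemType Directory -Force -Path 'C:\Windows\SysWOW64\config\systemprofile\Desktop'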

 

Please note that Azure file storage does not support all the authentication methods that other Azure storage types do:

  • local <-> Azure Blob (SAS or OAuth authentication)
  • local <-> Azure Files (Share/directory SAS authentication)
  • local <-> Azure Data Lake Storage Gen2 (SAS, OAuth, or SharedKey authentication)

For Azure file storage you must use a SAS token or the storage account key for access.

 

More Information:

https://powershelltoolbox.tech/testing-powershell-scripts-as-the-system-account/ 

https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-copy?source=recommendations 

https://www.aramar.co.uk/using-powershell-in-planning-analytics/ 

https://www.tm1forum.com/viewtopic.php?t=14763 

https://sahayagodson.medium.com/azcopy-sync-with-task-scheduler-8a59fba1755a

https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview

https://learn.microsoft.com/en-us/azure/cognitive-services/translator/document-translation/how-to-guides/create-sas-tokens?tabs=Containers

https://www.51sec.org/2022/08/12/using-azcopy-to-migrate-local-files-to-azure-blob-storage/

https://bluexp.netapp.com/blog/azure-cvo-blg-how-to-upload-files-to-azure-blob-storage#H_H2 

PowerShell script: Download and extract the latest AzCopy (Windows 64-bit) executable into a specified folder

https://gist.github.com/DaveRuijter/b530c84020950c729cb9e15efbfe04ce 

https://www.smikar.com/using-azcopy-with-azure-storage/ 

Usage:
azcopy copy [source] [destination] [flags]

Aliases:
copy, cp, c

Examples:
Upload a single file by using OAuth authentication. If you have not yet logged into AzCopy, please run the azcopy login command before you run the following command.

- azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"

Same as above, but this time also compute MD5 hash of the file content and save it as the blob’s Content-MD5 property:

- azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --put-md5

Upload a single file by using a SAS token:

- azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Upload a single file by using a SAS token and piping (block blobs only):

- cat "/path/to/file.txt" | azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" --from-to PipeBlob

Upload a single file by using OAuth and piping (block blobs only):

- cat "/path/to/file.txt" | azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --from-to PipeBlob

Upload an entire directory by using a SAS token:

- azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true
or
- azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --put-md5

Upload a set of files by using a SAS token and wildcard (*) characters:

- azcopy cp "/path/*foo/*bar/*.pdf" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]"

Upload files and directories by using a SAS token and wildcard (*) characters:

- azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Upload files and directories to Azure Storage account and set the query-string encoded tags on the blob.

- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax :
- azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"
- Keys and values are URL encoded and the key-value pairs are separated by an ampersand('&')
- https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-index-how-to?tabs=azure-portal
- While setting tags on the blobs, there are additional permissions('t' for tags) in SAS without which the service will give authorization error back.

Download a single file by using OAuth authentication. If you have not yet logged into AzCopy, please run the azcopy login command before you run the following command.

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" "/path/to/file.txt"

Download a single file by using a SAS token:

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "/path/to/file.txt"

Download a single file by using a SAS token and then piping the output to a file (block blobs only):

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" --from-to BlobPipe > "/path/to/file.txt"

Download a single file by using OAuth and then piping the output to a file (block blobs only):

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --from-to BlobPipe > "/path/to/file.txt"

Download an entire directory by using a SAS token:

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "/path/to/dir" --recursive=true

A note about using a wildcard character (*) in URLs:

There are only two supported ways to use a wildcard character in a URL.
- You can use one just after the final forward slash (/) of a URL. This copies all of the files in a directory directly to the destination without placing them into a subdirectory.
- You can also use one in the name of a container as long as the URL refers only to a container and not to a blob. You can use this approach to obtain files from a subset of containers.

Download the contents of a directory without copying the containing directory itself.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/folder]/*?[SAS]" "/path/to/dir"

Download an entire storage account.

- azcopy cp "https://[srcaccount].blob.core.windows.net/" "/path/to/dir" --recursive

Download a subset of containers within a storage account by using a wildcard symbol (*) in the container name.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container*name]" "/path/to/dir" --recursive

Download all the versions of a blob from Azure Storage to local directory. Ensure that source is a valid blob, destination is a local folder and versionidsFile which takes in a path to the file where each version is written on a separate line. All the specified versions will get downloaded in the destination folder specified.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]/[blobname]" "/path/to/dir" --list-of-versions="/another/path/to/dir/[versionidsFile]"

Copy a subset of files within a flat container by using a wildcard symbol (*) in the container name without listing all files in the container.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]/*" "/path/to/dir" --include-pattern="1*"

Copy a single blob to another blob by using a SAS token.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Copy a single blob to another blob by using a SAS token and an OAuth token. You have to use a SAS token at the end of the source account URL if you do not have the right permissions to read it with the identity used for login.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]"

Copy one blob virtual directory to another by using a SAS token:

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Copy all blob containers, directories, and blobs from storage account to another by using a SAS token:

- azcopy cp "https://[srcaccount].blob.core.windows.net?[SAS]" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy a single object to Blob Storage from Amazon Web Services (AWS) S3 by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Copy an entire directory to Blob Storage from AWS S3 by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Please refer to https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html to better understand the [folder] placeholder.

Copy all buckets to Blob Storage from Amazon Web Services (AWS) by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy all buckets to Blob Storage from an Amazon Web Services (AWS) region by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3-[region].amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy a subset of buckets by using a wildcard symbol (*) in the bucket name. Like the previous examples, you’ll need an access key and a SAS token. Make sure to set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy blobs from one blob storage to another and preserve the tags from source. To preserve tags, use the following syntax :

- azcopy cp "https://[account].blob.core.windows.net/[source_container]/[path/to/directory]?[SAS]" "https://[account].blob.core.windows.net/[destination_container]/[path/to/directory]?[SAS]" --s2s-preserve-blob-tags=true

Transfer files and directories to Azure Storage account and set the given query-string encoded tags on the blob.

- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax :
- azcopy cp "https://[account].blob.core.windows.net/[source_container]/[path/to/directory]?[SAS]" "https://[account].blob.core.windows.net/[destination_container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"
- Keys and values are URL encoded and the key-value pairs are separated by an ampersand('&')
- https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-index-how-to?tabs=azure-portal
- While setting tags on the blobs, there are additional permissions('t' for tags) in SAS without which the service will give authorization error back.

Copy a single object to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.

- azcopy cp "https://storage.cloud.google.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Copy an entire directory to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.

- azcopy cp "https://storage.cloud.google.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Copy an entire bucket to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.

- azcopy cp "https://storage.cloud.google.com/[bucket]" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true

Copy all buckets to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variables GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT=<project-id> for GCS source

- azcopy cp "https://storage.cloud.google.com/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true

Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from Google Cloud Storage (GCS) by using a service account key and a SAS token for destination. First, set the environment variables GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT=<project-id> for GCS source

- azcopy cp "https://storage.cloud.google.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true

Flags:
–as-subdir True by default. Places folder sources as subdirectories under the destination. (default true)
–backup Activates Windows’ SeBackupPrivilege for uploads, or SeRestorePrivilege for downloads, to allow AzCopy to see read all files, regardless of their file system permissions, and to restore all permissions. Requires that the account running AzCopy already has these permissions (e.g. has Administrator rights or is a member of the ‘Backup Operators’ group). All this flag does is activate privileges that the account already has
–blob-tags string Set tags on blobs to categorize data in your storage account
–blob-type string Defines the type of blob at the destination. This is used for uploading blobs and when copying between accounts (default ‘Detect’). Valid values include ‘Detect’, ‘BlockBlob’, ‘PageBlob’, and ‘AppendBlob’. When copying between accounts, a value of ‘Detect’ causes AzCopy to use the type of source blob to determine the type of the destination blob. When uploading a file, ‘Detect’ determines if the file is a VHD or a VHDX file based on the file extension. If the file is either a VHD or VHDX file, AzCopy treats the file as a page blob. (default “Detect”)
–block-blob-tier string upload block blob to Azure Storage using this blob tier. (default “None”)
–block-size-mb float Use this block size (specified in MiB) when uploading to Azure Storage, and downloading from Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25).
–cache-control string Set the cache-control header. Returned on download.
–check-length Check the length of a file on the destination after the transfer. If there is a mismatch between source and destination, the transfer is marked as failed. (default true)
–check-md5 string Specifies how strictly MD5 hashes should be validated when downloading. Only available when downloading. Available options: NoCheck, LogOnly, FailIfDifferent, FailIfDifferentOrMissing. (default ‘FailIfDifferent’) (default “FailIfDifferent”)
–content-disposition string Set the content-disposition header. Returned on download.
–content-encoding string Set the content-encoding header. Returned on download.
–content-language string Set the content-language header. Returned on download.
–content-type string Specifies the content type of the file. Implies no-guess-mime-type. Returned on download.
–cpk-by-name string Client provided key by name let clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. Provided key name will be fetched from Azure Key Vault and will be used to encrypt the data
–cpk-by-value Client provided key by name let clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. Provided key and its hash will be fetched from environment variables
–decompress Automatically decompress files when downloading, if their content-encoding indicates that they are compressed. The supported content-encoding values are ‘gzip’ and ‘deflate’. File extensions of ‘.gz’/’.gzip’ or ‘.zz’ aren’t necessary, but will be removed if present.
–disable-auto-decoding False by default to enable automatic decoding of illegal chars on Windows. Can be set to true to disable automatic decoding.
–dry-run Prints the file paths that would be copied by this command. This flag does not copy the actual files.
–exclude-attributes string (Windows only) Exclude files whose attributes match the attribute list. For example: A;S;R
–exclude-blob-type string Optionally specifies the type of blob (BlockBlob/ PageBlob/ AppendBlob) to exclude when copying blobs from the container or the account. Use of this flag is not applicable for copying data from non azure-service to service. More than one blob should be separated by ‘;’.
–exclude-path string Exclude these paths when copying. This option does not support wildcard characters (*). Checks relative path prefix(For example: myFolder;myFolder/subDirName/file.pdf). When used in combination with account traversal, paths do not include the container name.
–exclude-pattern string Exclude these files when copying. This option supports wildcard characters (*)
–exclude-regex string Exclude all the relative path of the files that align with regular expressions. Separate regular expressions with ‘;’.
–follow-symlinks Follow symbolic links when uploading from local file system.
–force-if-read-only When overwriting an existing file on Windows or Azure Files, force the overwrite to work even if the existing file has its read-only attribute set
–from-to string Specified to nudge AzCopy when resource detection may not work (e.g. piping/emulator/azure stack); Valid FromTo are pairs of Source-Destination words (e.g. BlobLocal, BlobBlob) that specify the source and destination resource types. All valid FromTos are: BlobBlob, BlobFSLocal, BlobFile, BlobLocal, BlobPipe, FileBlob, FileFile, FileLocal, FilePipe, GCPBlob, LocalBlob, LocalBlobFS, LocalFile, PipeBlob, PipeFile, S3Blob
-h, –help help for copy
–include-after string Include only those files modified on or after the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. E.g. ‘2020-08-19T15:04:00Z’ for a UTC time, or ‘2020-08-19’ for midnight (00:00) in the local timezone. As of AzCopy 10.5, this flag applies only to files, not folders, so folder properties won’t be copied when using this flag with –preserve-smb-info or –preserve-smb-permissions.
–include-attributes string (Windows only) Include files whose attributes match the attribute list. For example: A;S;R
–include-before string Include only those files modified before or on the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. E.g. ‘2020-08-19T15:04:00Z’ for a UTC time, or ‘2020-08-19’ for midnight (00:00) in the local timezone. As of AzCopy 10.7, this flag applies only to files, not folders, so folder properties won’t be copied when using this flag with –preserve-smb-info or –preserve-smb-permissions.
–include-directory-stub False by default to ignore directory stubs. Directory stubs are blobs with metadata ‘hdi_isfolder:true’. Setting value to true will preserve directory stubs during transfers.
–include-path string Include only these paths when copying. This option does not support wildcard characters (*). Checks relative path prefix (For example: myFolder;myFolder/subDirName/file.pdf).
–include-pattern string Include only these files when copying. This option supports wildcard characters (*). Separate files by using a ‘;’.
–include-regex string Include only the relative path of the files that align with regular expressions. Separate regular expressions with ‘;’.
–list-of-versions string Specifies a file where each version id is listed on a separate line. Ensure that the source must point to a single blob and all the version ids specified in the file using this flag must belong to the source blob only. AzCopy will download the specified versions in the destination folder provided.
–metadata string Upload to Azure Storage with these key-value pairs as metadata.
–no-guess-mime-type Prevents AzCopy from detecting the content-type based on the extension or content of the file.
–overwrite string Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default ‘true’) Possible values include ‘true’, ‘false’, ‘prompt’, and ‘ifSourceNewer’. For destinations that support folders, conflicting folder-level properties will be overwritten this flag is ‘true’ or if a positive response is provided to the prompt. (default “true”)
–page-blob-tier string Upload page blob to Azure Storage using this blob tier. (default ‘None’). (default “None”)
–preserve-last-modified-time Only available when destination is file system.
–preserve-owner Only has an effect in downloads, and only when –preserve-smb-permissions is used. If true (the default), the file Owner and Group are preserved in downloads. If set to false, –preserve-smb-permissions will still preserve ACLs but Owner and Group will be based on the user running AzCopy (default true)
–preserve-permissions False by default. Preserves ACLs between aware resources (Windows and Azure Files, or ADLS Gen 2 to ADLS Gen 2). For Hierarchical Namespace accounts, you will need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you will also need the –backup flag to restore permissions where the new Owner will not be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (e.g. include-pattern).
–preserve-posix-properties ‘Preserves’ property info gleaned from stat or statx into object metadata.
–preserve-smb-info Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). On windows, this flag will be set to true by default. If the source or destination is a volume mounted on Linux using SMB protocol, this flag will have to be explicitly set to true. Only the attribute bits supported by Azure Files will be transferred; any others will be ignored. This flag applies to both files and folders, unless a file-only filter is specified (e.g. include-pattern). The info transferred for folders is the same as that for files, except for Last Write Time which is never preserved for folders. (default true)
–preserve-symlinks If enabled, symlink destinations are preserved as the blob content, rather than uploading the file/folder on the other end of the symlink
–put-md5 Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading.
–recursive Look into sub-directories recursively when uploading from local file system.
–s2s-detect-source-changed Detect if the source file/blob changes while it is being read. (This parameter only applies to service to service copies, because the corresponding check is permanently enabled for uploads and downloads.)
–s2s-handle-invalid-metadata string Specifies how invalid metadata keys are handled. Available options: ExcludeIfInvalid, FailIfInvalid, RenameIfInvalid. (default ‘ExcludeIfInvalid’). (default “ExcludeIfInvalid”)
–s2s-preserve-access-tier Preserve access tier during service to service copy. Please refer to [Azure Blob storage: hot, cool, and archive access tiers](https://docs.microsoft.com/azure/storage/blobs/storage-blob-storage-tiers) to ensure destination storage account supports setting access tier. In the cases that setting access tier is not supported, please use s2sPreserveAccessTier=false to bypass copying access tier. (default true). (default true)
–s2s-preserve-blob-tags Preserve index tags during service to service transfer from one blob storage to another
–s2s-preserve-properties Preserve full properties during service to service copy. For AWS S3 and Azure File non-single file source, the list operation doesn’t return full properties of objects and files. To preserve full properties, AzCopy needs to send one additional request per object or file. (default true)

Flags Applying to All Commands:
–cap-mbps float Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn’t capped.
–log-level string Define the log verbosity for the log file, available levels: INFO(all requests/responses), WARNING(slow responses), ERROR(only failed requests), and NONE(no output logs). (default ‘INFO’). (default “INFO”)
–output-level string Define the output verbosity. Available levels: essential, quiet. (default “default”)
–output-type string Format of the command’s output. The choices include: text, json. The default value is ‘text’. (default “text”)
–skip-version-check Do not perform the version check at startup. Intended for automation scenarios & airgapped use.
–trusted-microsoft-suffixes string Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is ‘*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net’. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.

 

 

https://ecapitaladvisors.com/blog/implementing-email-notifications-in-a-day/

 

Product:

Planning Analytics 2.0.9.13

Problem:

How do I log user logins to TM1?

Solution:

Create a file named tm1s-log.properties in the same folder as tm1s.cfg, with this content:

# Main TM1 server log appenders
log4j.logger.TM1=INFO, S1
log4j.logger.TM1.Lock=ERROR, S1
log4j.appender.S1=org.apache.log4j.SharedMemoryAppender
log4j.appender.S1.MemorySize=10 MB
log4j.appender.S1.MaxFileSize=100 MB
log4j.appender.S1.MaxBackupIndex=20
log4j.appender.S1.TimeZone=local

# Separate DEBUG logger for login events, written to TM1login.log
log4j.logger.TM1.Login=DEBUG, S2
log4j.additivity.TM1.Login=false
log4j.appender.S2=org.apache.log4j.SharedMemoryAppender
log4j.appender.S2.File=TM1login.log
log4j.appender.S2.MaxFileSize=100 MB
log4j.appender.S2.MaxBackupIndex=20
log4j.appender.S2.TimeZone=local

This will create a TM1login.log file in your log folder. It will contain the user name when IntegratedSecurityMode=1 is used.

If it is a CAM user, you may have to read the }TM1_DefaultDisplayValue attribute in the }Clients dimension to translate the CAM ID to a user name.
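
To quickly look at the latest login events you can tail the file from PowerShell; a sketch where the log folder path is an assumption, so replace it with the LoggingDirectory of your instance:

Get-Content 'D:\TM1\log\TM1login.log' -Tail 20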

 

 

Then you can use this code to load the log file into a cube:

https://github.com/ykud/sample_tm1_tis/blob/main/SYS%20TM1Login%20Update.pro

More Information:

https://exploringtm1.com/using-tm1-log-files/

https://blog.octanesolutions.com.au/changing-password-in-multiple-tm1-instances

https://code.cubewise.com/blog/ibm-tm1-and-planning-analytics-cheatsheet

https://everanalytics.wordpress.com/2021/02/17/tm1-rest-api-example/

https://github.com/cubewise-code/tm1py/issues/393

https://github.com/cubewise-code/tm1py-samples/blob/master/Administration/list_tm1_users