Product:

Microsoft SQL Azure server

Issue:

We need to change a column from nvarchar(200) to nvarchar(20). How do you change the nvarchar length of a column in a table?

Solution:

First check the data length of the values in the column (nvarchar is Unicode, which uses 2 bytes per character):

SELECT DATALENGTH(Column_Name) AS FIELDSIZE, Column_Name
FROM Table_Name
ORDER BY FIELDSIZE DESC;
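
Before altering anything, it can help to count how many rows would actually be truncated. A minimal sketch using the same placeholder names (note that LEN counts characters, while DATALENGTH counts bytes):

-- Rows whose character length exceeds the new limit of 20
SELECT COUNT(*) AS RowsOverLimit
FROM Table_Name
WHERE LEN(Column_Name) > 20;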

Truncate the column values and then alter the column:

-- Truncate existing values so the ALTER cannot fail on oversized data
UPDATE Table_Name SET Column_Name = LEFT(COALESCE(Column_Name, ''), 20);
-- Keep the NOT NULL clause only if the column is already NOT NULL
ALTER TABLE Table_Name ALTER COLUMN Column_Name nvarchar(20) NOT NULL;

If you have constraints or indexes on the column, you need to drop them first: DROP INDEX index_name ON table_name;
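
To find which indexes actually reference the column before dropping them, a sketch against the system catalog views (Table_Name and Column_Name are the same placeholders as above):

-- List indexes that include the column
SELECT i.name AS index_name
FROM sys.indexes i
INNER JOIN sys.index_columns ic
    ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c
    ON ic.object_id = c.object_id AND ic.column_id = c.column_id
WHERE i.object_id = OBJECT_ID('dbo.Table_Name')
  AND c.name = 'Column_Name';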

https://www.w3schools.com/sql/sql_ref_drop_constraint.asp

Otherwise, rename the existing table:

EXEC sp_rename 'old_table_name', 'new_table_name';

Create a new table with the old name. In SSMS, select the table and from the menu choose "Script Table as" – CREATE To – New Query Editor Window. Adjust the script to use the new nvarchar length, as in the example below:

CREATE TABLE [dbo].[table2](
[index] [int] NOT NULL,
[Name] [nvarchar](20) NULL,
[Country] [nvarchar](50) NULL,
[Employess] [int] NULL
) ON [PRIMARY]

Copy the data over to the new table:

INSERT INTO table2 (column1, column2, column3, ...)
SELECT column1, column2, column3, ...
FROM table1
WHERE condition; 

For the table above, the example is shown below; LEFT(COALESCE([Name], ''), 20) ensures that we only copy the first 20 characters.

INSERT INTO table2 ([index], [Name], [Country], [Employess])
SELECT [index], LEFT(COALESCE([Name], ''), 20), [Country], [Employess]
FROM table1;

If you do not use the LEFT function, you may get an error like "String or binary data would be truncated".

More Information:

https://javarevisited.blogspot.com/2016/03/how-to-increase-length-of-existing-VARCHAR-column-in-SQL-Server.html 

https://www.sqlservertutorial.net/sql-server-basics/sql-server-nvarchar/

https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-rename-transact-sql?view=sql-server-ver16 

How to rename tables in SQL Server with the sp_rename command

https://www.w3schools.com/sql/sql_insert_into_select.asp

https://www.tutorialrepublic.com/sql-tutorial/sql-cloning-tables.php

https://www.w3schools.com/sql/sql_ref_insert_into_select.asp

https://www.w3schools.com/sql/func_sqlserver_left.asp

Because index is a reserved keyword in SQL, you need to enclose it in brackets [ ] so SQL Server understands it is a column name.

Product:
Microsoft SQL Azure

Issue:
System is slow, and it looks like TEMPDB is working hard. What can we do?

Solution:

Check what is going on in SQL TEMPDB with this query:

SELECT [Source] = 'database_transactions',
[session_id] = ST.session_id,
[transaction_id] = ST.transaction_id,
[login_name] = S.login_name,
[database_id] = S.database_id,
[program_name] = S.program_name,
[host_name] = S.host_name,
[database_id] = DT.database_id,
[database_name] = CASE
WHEN D.name IS NULL AND DT.database_id = 2 THEN 'TEMPDB'
ELSE D.name
END,

[log_reuse_wait_desc] = D.log_reuse_wait_desc,
[database_transaction_log_used_Kb] = CONVERT(numeric(18,2), DT.database_transaction_log_bytes_used / 1024.0 ),
[database_transaction_begin_time] = DT.database_transaction_begin_time,
[transaction_type_desc] = CASE DT.database_transaction_type
WHEN 1 THEN 'Read/write transaction'
WHEN 2 THEN 'Read-only transaction'
WHEN 3 THEN 'System transaction'
WHEN 4 THEN 'Distributed transaction'
END,

[transaction_state_desc] = CASE DT.database_transaction_state
WHEN 1 THEN 'The transaction has not been initialized.'
WHEN 2 THEN 'The transaction is active'
WHEN 3 THEN 'The transaction has been initialized but has not generated any log records.'
WHEN 4 THEN 'The transaction has generated log records.'
WHEN 5 THEN 'The transaction has been prepared.'
WHEN 10 THEN 'The transaction has been committed.'
WHEN 11 THEN 'The transaction has been rolled back.'
WHEN 12 THEN 'The transaction is being committed. (The log record is being generated, but has not been materialized or persisted.)'
END,

[active_transaction_type_desc] = CASE AT.transaction_type
WHEN 1 THEN 'Read/write transaction'
WHEN 2 THEN 'Read-only transaction'
WHEN 3 THEN 'System transaction'
WHEN 4 THEN 'Distributed transaction'
END,

[active_transaction_state_desc] = CASE AT.transaction_state
WHEN 0 THEN 'The transaction has not been completely initialized yet.'
WHEN 1 THEN 'The transaction has been initialized but has not started.'
WHEN 2 THEN 'The transaction is active'
WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
WHEN 4 THEN 'The commit process has been initiated on the distributed transaction.'
WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
WHEN 6 THEN 'The transaction has been committed.'
WHEN 7 THEN 'The transaction is being rolled back.'
WHEN 8 THEN 'The transaction has been rolled back.'
END

FROM sys.dm_tran_database_transactions DT
INNER JOIN sys.dm_tran_session_transactions ST ON DT.transaction_id = ST.transaction_id
INNER JOIN sys.dm_tran_active_transactions AT ON DT.transaction_id = AT.transaction_id
INNER JOIN sys.dm_exec_sessions S ON ST.session_id = S.session_id
LEFT JOIN sys.databases D ON DT.database_id = D.database_id
WHERE DT.database_id = 2 -- tempdb
ORDER BY ST.session_id, DT.database_id;
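
The query above shows open transactions; to see which sessions have allocated the most tempdb space, a sketch against sys.dm_db_session_space_usage (the page counters are 8 KB pages, so dividing by 128 gives MB):

-- Sessions ranked by the tempdb pages they have allocated
SELECT session_id,
       (user_objects_alloc_page_count + internal_objects_alloc_page_count) / 128.0 AS allocated_MB
FROM tempdb.sys.dm_db_session_space_usage
ORDER BY allocated_MB DESC;
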
More Information:

https://techcommunity.microsoft.com/t5/azure-database-support-blog/azure-sql-db-and-tempdb-usage-tracking/ba-p/1573220 

https://techcommunity.microsoft.com/t5/azure-database-support-blog/resolve-tempdb-related-errors-in-azure-sql-database/ba-p/3597944 

Product:
Microsoft Azure SQL Database

Issue:
When running a stored procedure (SP), you get an error after a long time:

SQL error number: 40544. Error message: The database 'tempdb' has reached its size quota.

Suggested solution:

If possible, increase your DTU level; scaling the database resets tempdb, which frees the space so you can start over.

In SSMS, check which SQL Azure service tier you are on:

SELECT Edition = DATABASEPROPERTYEX('databasename', 'EDITION'),
       ServiceObjective = DATABASEPROPERTYEX('databasename', 'ServiceObjective'),
       MaxSizeInBytes = DATABASEPROPERTYEX('databasename', 'MaxSizeInBytes');

In SSMS, run this to see the current file size of TEMPDB:

SELECT [Source] = 'database_files',
[TEMPDB_max_size_MB] = SUM(max_size) * 8 / 1024.0, -- size columns are in 8 KB pages
[TEMPDB_current_size_MB] = SUM(size) * 8 / 1024.0,
[FileCount] = COUNT(FILE_ID)
FROM tempdb.sys.database_files
WHERE type = 0 --ROWS

Here we had used up the 13.9 GB TEMPDB file size limit that applies to the lower tiers. With the SQL query below you can see how much of the space is used inside the files:

SELECT 
(SUM(unallocated_extent_page_count)*1.0/128) AS [Free space(MB)]
,(SUM(version_store_reserved_page_count)*1.0/128) AS [Used Space by VersionStore(MB)]
,(SUM(internal_object_reserved_page_count)*1.0/128) AS [Used Space by InternalObjects(MB)]
,(SUM(user_object_reserved_page_count)*1.0/128) AS [Used Space by UserObjects(MB)]
FROM tempdb.sys.dm_db_file_space_usage;
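
If the version store is what is growing, long-running snapshot transactions are the usual cause. A sketch to find the oldest ones (the DMV only returns rows while snapshot-based isolation is in use):

-- Longest-running snapshot transactions, which keep the version store from shrinking
SELECT transaction_id, session_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;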

Service-level objective   Max tempdb data file size (GB)   Number of tempdb data files   Max tempdb data size (GB)
Basic                     13.9                             1                             13.9
S0                        13.9                             1                             13.9
S1                        13.9                             1                             13.9
S2                        13.9                             1                             13.9
S3                        32                               1                             32

The recommended solution is to find what is causing the large use of TEMPDB space, by checking your query plans in SSMS.

Then tighten your table column definitions to only what you need: use nvarchar(50) instead of nvarchar(max), and so on.

Check the queries in your stored procedures, and add an index on the columns that you think will narrow the selection the most (see the sketch below).
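
As an illustration only (dbo.Orders and CustomerId are hypothetical names, not from this article), such an index might look like:

-- Hypothetical example: index a column that the WHERE clauses filter on
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId);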

DTU in Azure is a combination of CPU usage per second and read/write I/O per second to disk. When you have used up your quota, there is a limit on how many bytes you can write to disk per second, so your process will still succeed, but it takes much longer because only a small amount of data is processed each second.

A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. Service tiers in the DTU-based purchasing model are differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period for backups, and fixed price.

https://blog.atwork.at/post/Azure-Subscription-and-Service-Limits 

https://www.spotlightcloud.io/blog/what-is-dtu-in-azure-sql-database-and-how-much-do-we-need

More information:

https://knowledge-base.havit.eu/2018/02/19/azure-sql-the-database-tempdb-has-reached-its-size-quota-partition-or-delete-data-drop-indexes-or-consult-the-documentation-for-possible-resolutions-microsoft-sql-server-error-40544/ 

https://techcommunity.microsoft.com/t5/azure-database-support-blog/resolve-tempdb-related-errors-in-azure-sql-database/ba-p/3597944 

https://sqlcoffee.com/Azure_0013.htm 

https://www.brentozar.com/archive/2018/02/memory-grants-sql-servers-public-toilet/

Top five considerations for SQL Server index design

https://learn.microsoft.com/en-us/azure/azure-sql/database/resource-limits-dtu-single-databases?view=azuresql#tempdb-sizes 

https://www.brentozar.com/archive/2023/09/oops-i-lost-my-indexes-in-azure-sql-db/ 

https://www.sqlshack.com/sql-index-overview-and-strategy/ 

https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-index-design-guide?view=sql-server-ver16  

https://datasimantics.com/2018/08/24/sql-servers-nvarcharmax-and-how-to-wield-it/ 

https://www.ibm.com/support/pages/only-first-1024-characters-nvarcharmax-column-are-presented-report-based-dqm-package

https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tiers-dtu?view=azuresql

https://learn.microsoft.com/en-us/azure/azure-sql/database/resource-limits-dtu-single-databases?view=azuresql 

To see sessions that use TEMPDB, run the same "Sessions with open transactions in tempdb" query ('database_transactions') shown in the TEMPDB article above.

Product:
Microsoft SQL Azure

Issue:
How many rows are there in a database table?

Solution:

Enter in SSMS:

EXEC sp_spaceused 'dbo.tablename';

This gives you both the number of rows and the space used by the table.
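
If you only need the row count, an alternative sketch that reads it from partition metadata instead of scanning the table (dbo.tablename is the same placeholder as above):

-- Fast row count from partition metadata
SELECT SUM(p.row_count) AS total_rows
FROM sys.dm_db_partition_stats p
WHERE p.object_id = OBJECT_ID('dbo.tablename')
  AND p.index_id IN (0, 1); -- heap or clustered index only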

More Information:

https://www.brentozar.com/archive/2014/02/count-number-rows-table-sql-server/

https://www.sqlshack.com/capture-databases-usage-stats-using-sp_spaceused-powershell/ 

Product:

Planning Analytics 2.0.9.18

Microsoft Windows 2019 server

Issue:

A TM1 TI process that has worked before now crashes with this error:

Error: Data procedure line (0): Exception Occurred during process execution: TM1MemoryException: Temporary pool exceeded

Possible Solution:

In the Tm1s.cfg file for your application, set a bigger value for MaximumViewSize, for example MaximumViewSize=1000.

Restart the TM1 service, and run your process again.

MaximumViewSize is a per-thread limitation.

More information:

https://www.ibm.com/support/pages/warning-noted-tm1-server-log-memorytemppoolexceeded 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=pitf-applymaximumviewsizetoentiretransaction#concept_ur1_bbp_rq 

By default MaximumViewSize checks individual view processing. For example, if 10 views are processed in a single transaction, the threshold is crossed only if the processing of any single view crosses the threshold. See MaximumViewSize.

With ApplyMaximumViewSizeToEntireTransaction=T parameter set to True, the cumulative memory usage of all views processed in a single transaction is compared against the threshold value. This allows the memory size threshold to catch more transactions that consume large amounts of memory.
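
Putting the two parameters together, a Tm1s.cfg excerpt might look like this (the values are examples, not recommendations):

[TM1S]
# Per-thread limit, in MB, on the memory a single view may consume
MaximumViewSize=1000
# Compare the cumulative memory of all views in a transaction against the limit
ApplyMaximumViewSizeToEntireTransaction=T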

Product:
Planning Analytics Local 2.0.9.18
TM1_version=TM1-AW64-ML-RTM-11.0.918.3-0
Microsoft Windows 2019 server

Problem:

After an upgrade of the Planning Analytics Local (PAL) installation, TM1 App Web does not show the applications.

Suggested Solution:

Stop the service IBM Cognos TM1

Go to folder D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\configuration

Rename fpmsvc_config.xml to fpmsvc_config.xml.old.txt

Rename fpmsvc_config.xml.new to fpmsvc_config.xml

Copy the lines between the tm1 markers, which contain the names of the applications:

<tm1>
 </tm1>

from the file fpmsvc_config.xml.old.txt to the file fpmsvc_config.xml

Now the applications and the gateway URI should be listed in the new file.

Start the service IBM Cognos TM1
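
The two rename steps can also be done from an elevated PowerShell prompt; a sketch using the paths from this article:

# Run this after stopping the IBM Cognos TM1 service
Set-Location "D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\configuration"
Rename-Item "fpmsvc_config.xml" "fpmsvc_config.xml.old.txt"
Rename-Item "fpmsvc_config.xml.new" "fpmsvc_config.xml"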

More information:

https://www.ibm.com/support/pages/tm1-application-does-not-appear-tm1-applications-list-after-deployment

Product:

Planning Analytics Workspace 88 (the file version.ps1 in folder paw\config contains the version number)
$env:PAW_BUILD="121"
$env:PAW_RELEASE="2.0.88"
Microsoft Windows 2019 server

Issue:

What containers should be running in a working PAW installation?

Solution:

Start PowerShell as administrator and enter the command:

docker ps

This will list all running containers; they should be these:

CONTAINER ID IMAGE                                                               PORTS                        NAMES
b6874749d0a5 127.0.0.1:5000/planninganalytics/prism-platform:3.0.2365.2-ltsc2019 9080/tcp                     prism-platform
5f104714f851 127.0.0.1:5000/planninganalytics/bss:1.0.1397-ltsc2019              8082/tcp                     bss
a946de8db063 127.0.0.1:5000/planninganalytics/pa-gateway:1.0.1098-ltsc2019       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp pa-gateway
83e22e0be1f8 127.0.0.1:5000/planninganalytics/neo-provision:1.0.392-ltsc2019     8083/tcp                     neo-provision
6d11be2a5fd7 127.0.0.1:5000/planninganalytics/neo-idviz:9.0.982-ltsc2019         9060/tcp                     neo-idviz
e8c3bd52ca54 127.0.0.1:5000/planninganalytics/monitor:2.0.88.3035-ltsc2019       9082/tcp                     monitor
bfa3ef090459 127.0.0.1:5000/planninganalytics/wa-proxy:1.0.1020-ltsc2019         1338/tcp                     wa-proxy
fb7bd169d5b3 127.0.0.1:5000/planninganalytics/async-service:1.0.387-ltsc2019     9666/tcp                     async-service
fef9418814e5 127.0.0.1:5000/planninganalytics/share-platform:1.0.425-ltsc2019    9110/tcp                     share-platform
afaeb58c897c 127.0.0.1:5000/planninganalytics/pa-plan-service:1.0.2023051901-ltsc2019 9080/tcp                plan-service
b84bc681967b 127.0.0.1:5000/planninganalytics/paw-ui-api:1.0.260-ltsc2019        3000/tcp                     paw-ui-api
31ac3574ea06 127.0.0.1:5000/planninganalytics/user-admin:1.0.737-ltsc2019        3333/tcp                     user-admin
5e55f009d40f 127.0.0.1:5000/planninganalytics/pa-glass:3.0.9788-ltsc2019         9080/tcp                     glass
8dc5e074265d 127.0.0.1:5000/planninganalytics/atlas-service:125-ltsc2019         9076/tcp                     atlas
5d482449ac97 127.0.0.1:5000/planninganalytics/prism-proxy:3.0.11099-ltsc2019     9090/tcp, 9100/tcp           prism-proxy
59b615950e2b 127.0.0.1:5000/planninganalytics/couchdb:234-ltsc2019               5984/tcp                     couchdb
1c0305d7d945 127.0.0.1:5000/planninganalytics/share-app:1.0.605-ltsc2019         9700/tcp                     share-app
010ac1fac8db 127.0.0.1:5000/planninganalytics/mongo:1.0.205-ltsc2019             27017/tcp                    mongo
481b8cb26b29 127.0.0.1:5000/planninganalytics/prism-app:3.0.5536-ltsc2019        9600/tcp                     prism-app
78709cb2d0d1 127.0.0.1:5000/planninganalytics/palm-service:1.0.340-ltsc2019      9085/tcp                     palm-service
545cf6eb986f 127.0.0.1:5000/planninganalytics/pa-content-service:1.0.310-ltsc2019 9191/tcp                    pa-content
61093630f145 127.0.0.1:5000/planninganalytics/redis:1.0.206-ltsc2019             6379/tcp                     redis
4e149a0ae6ea 127.0.0.1:5000/planninganalytics/pa-cdn:3.0.9788-ltsc2019           8080/tcp                     cdn
927748e22277 127.0.0.1:5000/planninganalytics/share-proxy:1.0.605-ltsc2019       9070/tcp                     share-proxy
8004ed0c651d 127.0.0.1:5000/planninganalytics/pa-predict-svc:1.0.1054-ltsc2019   9610/tcp                     pa-predict
0b084764c655 127.0.0.1:5000/planninganalytics/tm1proxy:1.0.411-ltsc2019          1339/tcp                     tm1proxy
394e473a354d 127.0.0.1:5000/planninganalytics/admintool:1.0.340-ltsc2019         8888/tcp                     admintool
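
If some of these are missing from docker ps, a quick way to check for containers that exist but have stopped:

# Show only containers whose state is exited
docker ps --all --filter "status=exited"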

The command docker images will list all installed images; they should normally be these:

REPOSITORY                                       TAG                  IMAGE ID     CREATED      SIZE
127.0.0.1:5000/planninganalytics/monitor         2.0.88.3035-ltsc2019 cdd1118734aa 7 weeks ago  5.35GB
127.0.0.1:5000/planninganalytics/pa-glass        3.0.9788-ltsc2019    eda31db65292 2 months ago 5.27GB
127.0.0.1:5000/planninganalytics/pa-cdn          3.0.9788-ltsc2019    e30ce3e584eb 2 months ago 5.34GB
127.0.0.1:5000/planninganalytics/prism-proxy     3.0.11099-ltsc2019   ad0700244d5f 2 months ago 4.77GB
127.0.0.1:5000/planninganalytics/share-proxy     1.0.605-ltsc2019     7c0f57f0564c 2 months ago 4.71GB
127.0.0.1:5000/planninganalytics/share-app       1.0.605-ltsc2019     30028e59100d 2 months ago 4.72GB
127.0.0.1:5000/planninganalytics/pa-gateway      1.0.1098-ltsc2019    bc188fbdba7e 2 months ago 4.71GB
127.0.0.1:5000/planninganalytics/prism-platform  3.0.2365.2-ltsc2019  1af1c76c5ebe 2 months ago 5.66GB
127.0.0.1:5000/planninganalytics/prism-app       3.0.5536-ltsc2019    149080a8fc2d 2 months ago 4.73GB
127.0.0.1:5000/planninganalytics/palm-service    1.0.340-ltsc2019     5870fb15710e 2 months ago 4.76GB
127.0.0.1:5000/planninganalytics/tm1proxy        1.0.411-ltsc2019     301bf315ca8b 2 months ago 4.63GB
127.0.0.1:5000/planninganalytics/bss             1.0.1397-ltsc2019    16a9f3403a03 2 months ago 5.32GB
127.0.0.1:5000/planninganalytics/wa-proxy        1.0.1020-ltsc2019    d365a00fbcb0 2 months ago 4.73GB
127.0.0.1:5000/planninganalytics/paw-ui-api      1.0.260-ltsc2019     9e3f8ca98062 2 months ago 4.82GB
127.0.0.1:5000/planninganalytics/async-service   1.0.387-ltsc2019     f7e111569a61 2 months ago 4.82GB
127.0.0.1:5000/planninganalytics/pa-predict-svc  1.0.1054-ltsc2019    6dabd0bfa755 2 months ago 5.34GB
127.0.0.1:5000/planninganalytics/user-admin      1.0.737-ltsc2019     e4ea0b9f71cf 2 months ago 4.82GB
127.0.0.1:5000/planninganalytics/neo-provision   1.0.392-ltsc2019     7c04066c0fed 2 months ago 5.35GB
127.0.0.1:5000/planninganalytics/couchdb         234-ltsc2019         7832c5ecc13c 2 months ago 4.93GB
127.0.0.1:5000/planninganalytics/neo-idviz       9.0.982-ltsc2019     6847d7d6725d 2 months ago 5.55GB
127.0.0.1:5000/planninganalytics/pa-plan-service 1.0.2023051901-ltsc2019 04c8517f2f3a 2 months ago 5.28GB
127.0.0.1:5000/planninganalytics/admintool       1.0.340-ltsc2019     428093815025 2 months ago 4.72GB
127.0.0.1:5000/planninganalytics/pa-content-service 1.0.310-ltsc2019  bfa8b7dcb3f4 2 months ago 4.73GB
127.0.0.1:5000/planninganalytics/atlas-service   125-ltsc2019         f012ea094ed4 2 months ago 5.31GB
127.0.0.1:5000/planninganalytics/share-platform  1.0.425-ltsc2019     9ff87c1b417a 2 months ago 5.29GB
127.0.0.1:5000/planninganalytics/bss-init        1.0.342-ltsc2019     caf9f295cdcb 2 months ago 4.71GB
127.0.0.1:5000/planninganalytics/redis           1.0.206-ltsc2019     c5f67bbdafc5 2 months ago 4.64GB
127.0.0.1:5000/planninganalytics/mongo           1.0.205-ltsc2019     852817ecb6fe 2 months ago 4.85GB
127.0.0.1:5000/planninganalytics/ibm-java8       225-ltsc2019         7d2f0cc5bd1f 2 months ago 5.11GB
127.0.0.1:5000/planninganalytics/couchdb-init    1.0.835-ltsc2019     85873eaa9a40 2 months ago 4.62GB

If you are missing an image, the installation has failed. If not all containers are running, you may have issues with your anti-virus software.

Uninstall Trellix/McAfee, reboot the server, and try installing PAW again.

More Information:

https://www.ibm.com/support/pages/download-ibm-planning-analytics-local-v20-planning-analytics-workspace-release-88-fix-central

https://exploringtm1.com/how-to-install-planning-analytics-workspace-to-windows-server-2019/

https://www.ibm.com/support/pages/after-each-reboot-some-planning-analytics-workspace-containers-are-failing-start

https://circleci.com/blog/docker-and-cicd-tutorial-a-deep-dive-into-containers/

https://gist.github.com/danijeljw/a7a2553bd06742648172363ce3983a9a

Product:

Planning Analytics 2.0.9.18

Issue:

How do you create a simple check on input values in a TI process?

Solution:

Add an IF statement on the Prolog tab that checks the input values. When you have parameters the user should enter, you need some simple check that the entered values are realistic; the simplest is to check that the size is correct.

ProcessQuit will terminate the process; the Metadata and Data tabs will not be executed.

ProcessBreak will stop processing the source and proceed directly to the Epilog tab (without raising an error in the TI).
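
A minimal Prolog sketch of such a check (pLocation and the 20-character limit are assumptions for illustration):

# Stop the process if the parameter is empty or unrealistically long
IF ( LONG ( pLocation ) = 0 % LONG ( pLocation ) > 20 );
  ProcessQuit;
ENDIF;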

More Information:

https://www.wimgielis.com/tm1_break_EN.htm 

https://exploringtm1.com/processquit-tm1-command-use-and-syntax/ 

https://exploringtm1.com/itemreject-tm1-function-syntax-use/ 

This example simply checks, using DIMIX, whether an element entered into a parameter exists in a dimension; if it does not, it writes a message to the log and rejects the record.

IF ( DIMIX ( 'Location', pLocation) = 0);
sErrorMessage = 'The Location entered does not exist';
ItemReject ( sErrorMessage );
ENDIF;

https://www.ibm.com/docs/en/cognos-tm1/10.2.2?topic=basics-string-length-limit-in-turbointegrator#StringLengthLimitinTurboIntegrator_NB5009
https://code.cubewise.com/blog/tm1-hidden-features

Product:

Planning Analytics 2.0.9.17

Problem:

The calculated cell does not contain a zero value; instead it contains an old number from before. Trace Feeders also shows that the empty cell has numbers that are used in the consolidation. The cube does not have FEEDERS itself, but it is fed from a different cube.

Running CubeProcessFeeders('cubename') solves the problem until the next restart of the TM1 instance.

Solution:

IBM is aware of a defect in Planning Analytics versions 2.0.9.15 IF2 through 2.0.9.17. Rule-derived cells are not calculating after a TM1 database restart when the cube doesn’t have a feeder statement. This defect is addressed in Planning Analytics 2.0.9.18, available 10 July 2023.

If we add a feeder to the rules file for the cube, then the values are correct in the consolidation cell.

You can also create a TM1 TI process with:

CubeProcessFeeders ( 'thecubename' );

Then create a TM1 chore that will run it every week, but do not activate it.

Click “create new chore” in TM1 Architect.

Select the TM1 TI process you created for “run cubefeeders” above.

Next, select that the chore should run every 7 days, and save it with the name "S. 1010 run cubefeeders".

Do not activate the chore.

Go to the TM1S.CFG file for your TM1 instance.

Add this line:

StartupChores=S. 1010 run cubefeeders

This makes the TM1 instance run the TI process in that chore at startup, before users can log in to the TM1 application.

In tm1server.log you will see:

21620 [] INFO 2023-07-13 14:12:59.743 TM1.Chore Executing Startup Chore "S. 1010 Run cubefeeders"
21620 [] INFO 2023-07-13 14:12:59.744 TM1.Process Process "S. 1111 run cubefeeders" executed by chore "S. 1010 run cubefeeders"

More Information:

https://blogs.perficient.com/2014/09/28/cognos-tm1-server-on-start-up/ 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=feeders-feeding-one-cube-from-another 

In this example we have cube A and cube B. Cube B is very similar to cube A, but it has one more dimension:

Cube A
Time
Measures

Cube B
Locations
Time
Measures

Say you have a sales value in cube A and you want to split it across the locations in cube B. The rule in cube B might look something like:

['sales'] = N: DB('cube A', !Time, !Measures) * DB('Location Split', !Location, !Time); (the latter being a lookup cube)

You will then have a feeder from A to B. It will look something like this:

['Sales'] => DB('Cube B', 'All Locations', !Time, !Measures);

In the example above, the Location dimension does not exist in cube A. When you have a case like this, you need to select one item within the unknown dimension. In this case I have selected a consolidation called 'All Locations'. This will then in turn feed all its children. If you do not have a consolidation like this, create one and add all the children to it.

https://www.tm1forum.com/viewtopic.php?t=274 

https://www.ibm.com/support/pages/tm1-server-feeders-guidelines-and-best-practices 

https://www.decisionsystems.com/wp-content/uploads/2016/05/IBM-Cognos-TM1-Feeders-Best-Practices.pdf 

https://code.cubewise.com/blog/mastering-conditional-feeders-in-tm1

https://blog.octanesolutions.com.au/using-rules-and-feeders 

https://www.ibm.com/support/pages/node/7005575 

Product:
Planning Analytics 2.0.9.17
Microsoft Windows 2019 server

Problem:

How do you see the memory usage of feeders and other cube data?

Possible solution:

The }StatsByCube control cube captures information about the cubes in your TM1 application. You need to enable Performance Monitor, either by right-clicking your TM1 instance in Architect and choosing Start Performance Monitor, or by adding PerformanceMonitorOn=T to your TM1s.cfg file; the latter change requires a service restart to take effect.

After Performance Monitor starts, it takes a few seconds for the counters to begin writing to the }StatsByCube cube.

When you open }StatsByCube, the most useful starting view is to put the LATEST member from the }TimeIntervals dimension in the title, and the }StatsStatsByCube and }PerfCubes dimensions on the rows and columns. The counters to focus on are Memory Used for Views and Number of Stored Views. These represent the amount of RAM (in bytes) used for Stargate views and the number of Stargate views created for this specific cube.

1 Megabyte = 1,048,576 bytes
https://www.gbmb.org/gb-to-mb

Memory Used for Feeders in our case was 458,865,488 bytes, which is about 437 MB of RAM (458,865,488 / 1,048,576 ≈ 437.6) for feeders alone. Feeder memory in most cases does not change.

More Information:

https://www.ironsidegroup.com/blog/tm1-stargate-views-explained/ 

https://blog.octanesolutions.com.au/performance-enhancements-using-tm1-cached-views

https://everanalytics.wordpress.com/2016/03/07/vmm_vmt_explained/

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=cubes-performance-monitoring-control