Microsoft Azure SQL Database

When running a stored procedure (SP), you get an error after a long time:

Sql error number: 40544. Error Message: The database ‘tempdb’ has reached its size quota.

Suggested solution:

Increase your DTU level, if possible. This resets tempdb, freeing space so you can start over.
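A minimal T-SQL sketch of scaling up (run against the master database; the database name and the target tier 'S3' are placeholders):

```sql
-- Move the database to a higher service objective (placeholder values).
-- The scale operation moves the database to new compute, which resets tempdb.
ALTER DATABASE [databasename] MODIFY (SERVICE_OBJECTIVE = 'S3');
```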

In SSMS, check which Azure SQL service level you are on:

SELECT
        ServiceObjective = DATABASEPROPERTYEX('databasename', 'ServiceObjective'),
        MaxSizeInBytes   = DATABASEPROPERTYEX('databasename', 'MaxSizeInBytes');


Run this in SSMS to see the current file size of tempdb:

SELECT [Source] = 'database_files', 
[TEMPDB_max_size_MB] = SUM(max_size) * 8 / 1024.0, 
[TEMPDB_current_size_MB] = SUM(size) * 8 / 1024.0, 
[FileCount] = COUNT(FILE_ID)
FROM tempdb.sys.database_files
WHERE type = 0; --ROWS

Here we had used up the 13.9 GB tempdb file size limit that applies to the lower service tiers. With the SQL query below you can see how much of that space is used inside tempdb:

SELECT
(SUM(unallocated_extent_page_count)*1.0/128) AS [Free space(MB)]
,(SUM(version_store_reserved_page_count)*1.0/128) AS [Used Space by VersionStore(MB)]
,(SUM(internal_object_reserved_page_count)*1.0/128) AS [Used Space by InternalObjects(MB)]
,(SUM(user_object_reserved_page_count)*1.0/128) AS [Used Space by UserObjects(MB)]
FROM tempdb.sys.dm_db_file_space_usage;


Service-level objective   Maximum tempdb data file size (GB)   Number of tempdb data files   Maximum tempdb data size (GB)
Basic                     13.9                                 1                             13.9
S0                        13.9                                 1                             13.9
S1                        13.9                                 1                             13.9
S2                        13.9                                 1                             13.9
S3                        32                                   1                             32


The recommended solution is to find what is causing the large tempdb usage by checking your query plans in SSMS.

Then trim your table column types to only what you need; use nvarchar(50) instead of nvarchar(max), etc.

Check the queries in your stored procedures, and add an index on the columns that you think will narrow the selection the most.
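For example, a hypothetical covering index for a query that filters on CustomerId and OrderDate (all table and column names here are made up for illustration):

```sql
-- Hypothetical table/columns: an index that lets the WHERE clause seek
-- instead of scan, so less intermediate data spills into tempdb.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (TotalAmount);
```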

DTU in Azure is a combination of CPU usage per second and read/write I/O per second to disk. When you have used up your quota, there is a limit on how many bytes you can write to disk per second, so your process will still succeed, but it will take much longer as only a small amount of data is processed each second.

A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. Service tiers in the DTU-based purchasing model are differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period for backups, and fixed price.

More information:

Top five considerations for SQL Server index design 

To see sessions that use TEMPDB:

-- Sessions with open transactions in tempdb
SELECT [Source] = 'database_transactions', 
[session_id] = ST.session_id, 
[transaction_id] = ST.transaction_id, 
[login_name] = S.login_name, 
[database_id] = S.database_id, 
[program_name] = S.program_name, 
[host_name] = S.host_name, 
[database_id] = DT.database_id, 
[database_name] = CASE
    WHEN D.name IS NULL AND DT.database_id = 2 THEN 'tempdb'
    ELSE D.name
END, 
[log_reuse_wait_desc] = D.log_reuse_wait_desc, 
[database_transaction_log_used_Kb] = CONVERT(numeric(18,2), DT.database_transaction_log_bytes_used / 1024.0 ), 
[database_transaction_begin_time] = DT.database_transaction_begin_time, 
[transaction_type_desc] = CASE DT.database_transaction_type
    WHEN 1 THEN 'Read/write transaction'
    WHEN 2 THEN 'Read-only transaction'
    WHEN 3 THEN 'System transaction'
    WHEN 4 THEN 'Distributed transaction'
END, 
[transaction_state_desc] = CASE DT.database_transaction_state
    WHEN 1 THEN 'The transaction has not been initialized.'
    WHEN 2 THEN 'The transaction is active.'
    WHEN 3 THEN 'The transaction has been initialized but has not generated any log records.'
    WHEN 4 THEN 'The transaction has generated log records.'
    WHEN 5 THEN 'The transaction has been prepared.'
    WHEN 10 THEN 'The transaction has been committed.'
    WHEN 11 THEN 'The transaction has been rolled back.'
    WHEN 12 THEN 'The transaction is being committed. (The log record is being generated, but has not been materialized or persisted.)'
END, 
[active_transaction_type_desc] = CASE AT.transaction_type
    WHEN 1 THEN 'Read/write transaction'
    WHEN 2 THEN 'Read-only transaction'
    WHEN 3 THEN 'System transaction'
    WHEN 4 THEN 'Distributed transaction'
END, 
[active_transaction_state_desc] = CASE AT.transaction_state
    WHEN 0 THEN 'The transaction has not been completely initialized yet.'
    WHEN 1 THEN 'The transaction has been initialized but has not started.'
    WHEN 2 THEN 'The transaction is active.'
    WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
    WHEN 4 THEN 'The commit process has been initiated on the distributed transaction.'
    WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
    WHEN 6 THEN 'The transaction has been committed.'
    WHEN 7 THEN 'The transaction is being rolled back.'
    WHEN 8 THEN 'The transaction has been rolled back.'
END
FROM sys.dm_tran_database_transactions DT
INNER JOIN sys.dm_tran_session_transactions ST ON DT.transaction_id = ST.transaction_id
INNER JOIN sys.dm_tran_active_transactions AT ON DT.transaction_id = AT.transaction_id
INNER JOIN sys.dm_exec_sessions S ON ST.session_id = S.session_id
LEFT JOIN sys.databases D ON DT.database_id = D.database_id
WHERE DT.database_id = 2 -- tempdb
ORDER BY ST.session_id, DT.database_id;


Microsoft SQL Azure

How many rows are there in a database table?


Enter in SSMS:

sp_spaceused 'dbo.tablename';

This will give you both number of rows and space used by the table.
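If you only need the row count, a quick alternative that avoids scanning the table is to read the partition statistics DMV (replace dbo.tablename with your table):

```sql
SELECT SUM(p.row_count) AS [rows]
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = OBJECT_ID('dbo.tablename')
  AND p.index_id IN (0, 1); -- heap or clustered index only
```

Note that the count can be slightly out of date while concurrent transactions are in flight.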


More Information: 


Planning Analytics

Microsoft Windows 2019 server


A TM1 TI process that has worked before now crashes with the error:

Error: Data procedure line (0): Exception Occurred during process execution: TM1MemoryException: Temporary pool exceeded

Possible Solution:

In the Tm1s.cfg file for your application, set a bigger value for MaximumViewSize (in MB), for example MaximumViewSize=1000.

Restart the TM1 service, and run your process again.

MaximumViewSize is a per thread limitation.

More information: 

By default, MaximumViewSize applies to each view individually. For example, if 10 views are processed in a single transaction, the threshold is crossed only if the processing of any single view exceeds it. See MaximumViewSize.

With the ApplyMaximumViewSizeToEntireTransaction parameter set to T (true), the cumulative memory usage of all views processed in a single transaction is compared against the threshold value. This lets the memory threshold catch more transactions that consume large amounts of memory.
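A minimal Tm1s.cfg sketch combining the two parameters (the 1000 MB value is just an example; tune it to your server's RAM):

```
MaximumViewSize=1000
ApplyMaximumViewSizeToEntireTransaction=T
```

Remember that the TM1 service must be restarted for Tm1s.cfg changes like these to take effect.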

Planning Analytics Local
Microsoft Windows 2019 server


After an upgrade of a Planning Analytics Local (PAL) installation, TM1 App Web is not showing the applications.

Suggested Solution:

Stop the service IBM Cognos TM1

Go to folder D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\configuration

Rename fpmsvc_config.xml to fpmsvc_config.xml.old.txt

Rename the new configuration file created by the upgrade to fpmsvc_config.xml

Copy the lines between the tm1 markers that contain the names of the applications from the file fpmsvc_config.xml.old.txt to the file fpmsvc_config.xml

Now the applications and gateway uri should be listed in the new file.

Start the service IBM Cognos TM1

More information:


Planning Analytics Workspace 88  (file version.ps1 in folder paw\config contains a version number)
Microsoft Windows 2019 server


What containers should be running in a working PAW installation?


Start PowerShell as administrator and enter the command:

docker ps

This will list all running containers – should be these:

CONTAINER ID   PORTS                NAMES
b6874749d0a5   9080/tcp             prism-platform
5f104714f851   8082/tcp             bss
a946de8db063   >80/tcp, >443/tcp    pa-gateway
83e22e0be1f8   8083/tcp             neo-provision
6d11be2a5fd7   9060/tcp             neo-idviz
e8c3bd52ca54   9082/tcp             monitor
bfa3ef090459   1338/tcp             wa-proxy
fb7bd169d5b3   9666/tcp             async-service
fef9418814e5   9110/tcp             share-platform
afaeb58c897c   9080/tcp             plan-service
b84bc681967b   3000/tcp             paw-ui-api
31ac3574ea06   3333/tcp             user-admin
5e55f009d40f   9080/tcp             glass
8dc5e074265d   9076/tcp             atlas
5d482449ac97   9090/tcp, 9100/tcp   prism-proxy
59b615950e2b   5984/tcp             couchdb
1c0305d7d945   9700/tcp             share-app
010ac1fac8db   27017/tcp            mongo
481b8cb26b29   9600/tcp             prism-app
78709cb2d0d1   9085/tcp             palm-service
545cf6eb986f   9191/tcp             pa-content
61093630f145   6379/tcp             redis
4e149a0ae6ea   8080/tcp             cdn
927748e22277   9070/tcp             share-proxy
8004ed0c651d   9610/tcp             pa-predict
0b084764c655   1339/tcp             tm1proxy
394e473a354d   8888/tcp             admintool

The command docker images will list all installed images. A complete installation normally has around 30 images, with tags such as 3.0.9788-ltsc2019 and 1.0.605-ltsc2019, each roughly 4.6 GB to 5.7 GB in size.


If an image is missing, the installation has failed. If not all containers are running, you may have issues with your anti-virus software.

Uninstall Trellix/McAfee, reboot the server, and try installing PAW again.


More Information:


Planning Analytics


How do you create a simple check on input values in a TI process?


Add an IF statement on the Prolog tab that checks the input values; at its simplest, ensure that the values entered are realistic and of the correct size.

When you have parameters that the user should enter, you need some simple checks that the entered values are realistic.

ProcessQuit will terminate the process; the Metadata and Data tabs will not be executed.

ProcessBreak will stop processing the source and proceed directly to the Epilog tab (without returning an error to the TI).
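A Prolog sketch of such a guard, assuming a hypothetical pYear string parameter that should be a four-digit year:

```
# Prolog tab: validate the pYear parameter before doing any work.
# pYear and the 2000-2100 range are illustrative assumptions.
IF ( NUMBR ( pYear ) < 2000 % NUMBR ( pYear ) > 2100 );
  LogOutput ( 'ERROR', 'Invalid year entered: ' | pYear );
  ProcessQuit;
ENDIF;
```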

More Information: 

This example simply checks, using DIMIX, whether an element entered into a parameter exists in a dimension; if it doesn't, it puts a message in the error log and rejects the item.

IF ( DIMIX ( 'Location', pLocation ) = 0 );
  sErrorMessage = 'The Location entered does not exist';
  ItemReject ( sErrorMessage );
ENDIF;


Planning Analytics


The calculated cell does not contain a zero value; instead it contains an old number from before. Trace Feeder also shows that the empty cell holds numbers that are used in the consolidation. The cube does not have FEEDERS, but it is fed from a different cube.

Running CubeProcessFeeders ( 'cubename' ) solves the problem until the next restart of the TM1 instance.


IBM is aware of a defect in some Planning Analytics versions: rule-derived cells are not calculated after a TM1 database restart when the cube doesn't have a feeder statement. The defect is addressed in the Planning Analytics release available 10 July 2023.

If we add a feeder to the rules file for the cube, then the values are correct in the consolidation cell.

You can also create a TM1 TI process with:

CubeProcessFeeders ( 'thecubename' );

Then create a TM1 chore that will run it every week, but do not activate it.

Click “create new chore” in TM1 Architect.

Select the TM1 TI process you created for “run cubefeeders” above.

In the next step, select that the chore should run every 7 days, and save it with the name "S. 1010 run cubefeeders".

Do not activate the chore.

Go to the TM1S.CFG file for your TM1 instance.

Add this line:

StartupChores=S. 1010 run cubefeeders

This means that at startup of the TM1 instance, the TI process in that chore is run before users can log in to the TM1 application.

In tm1server.log

21620 [] INFO 2023-07-13 14:12:59.743 TM1.Chore Executing Startup Chore “S. 1010 Run cubefeeders”
21620 [] INFO 2023-07-13 14:12:59.744 TM1.Process Process “S. 1111 run cubefeeders” executed by chore “S. 1010 run cubefeeders”


More Information: 

In this example we have cube A and cube B. Cube B is very similar to cube A, but it has one more dimension.

Cube A

Cube B

Say you have a sales value in cube A and you want to split it across the locations in cube B. The rule in cube B might look something like:

['sales'] = N: DB('cube A', !Time, !Measures) * DB('Location Split', !Location, !Time); (the latter being a lookup cube)

You will then have a feeder from A to B. It will look something like this:

['Sales'] => DB('Cube B', 'All Locations', !Time, !Measures);

In the example above the Location dimension does not exist in cube A. When you have a case like this you need to select one item within the unknown dimension. In this case I have selected a consolidation called 'All Locations'. This will then in turn feed all its children. If you do not have a consolidation like this, create one and add all the children to it.

Planning Analytics
Microsoft Windows 2019 server


How do you see the memory usage of feeders and other cube data?

Possible solution:

The }StatsByCube control cube captures information about the cubes in your TM1 application. You’ll need to enable Performance Monitor either by right-clicking on your TM1 instance in Architect and choosing Start Performance Monitor or by adding PerformanceMonitorOn=T to your TM1s.cfg file – the latter change requires a service restart to take effect.

After Performance Monitor starts, it takes a few seconds for the counters to begin writing to the }StatsByCube cube.

When you open }StatsByCube, the most useful starting view is to put the LATEST member from the }TimeIntervals dimension in the title, and the }StatsStatsByCube and }PerfCubes dimensions in the rows and columns. The counters to focus on are Memory Used for Views and Number of Stored Views. These represent the amount of RAM (in bytes) used for Stargate views and the number of Stargate views created for this specific cube.
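As a sketch, the same counters can be read from a TI process with CellGetN, assuming the dimension order }TimeIntervals, }PerfCubes, }StatsStatsByCube and that Performance Monitor is running ('YourCube' is a placeholder cube name):

```
# Read the feeder memory counter for one cube from }StatsByCube.
nFeederBytes = CellGetN ( '}StatsByCube', 'LATEST', 'YourCube', 'Memory Used for Feeders' );
LogOutput ( 'INFO', 'Feeder memory (bytes): ' | NumberToString ( nFeederBytes ) );
```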

1 Megabyte = 1,048,576 bytes

The memory used for feeders below, 458 865 488 bytes, is then about 438 MB of RAM for feeders alone. Feeder memory in most cases does not change.

More Information: 

Planning Analytics Workspace 88
Microsoft Windows 2019 server


A new installation of PAW gives an error when you try to log in/connect to the site, when you use CAM security against a CA11 installation.

The PMHub service parameter was not specified or is not one of the configured locations


Update the pmhub.html file on the CA gateway server, in folder D:\Program Files\ibm\cognos\analytics\webcontent\bi, to include the PAW server name (without port number) on the line:

// Update the following to point to the location of the pmhub service(s)
var pmhubURLs = ["","","","http://pawservername"];


Also check that the pmhub.html file name is in lowercase only, with no uppercase characters (for example "PMHub.html" would not be found).

-If Cognos Analytics is accessed through a gateway URL (for example by using IIS, like “http://<CAgateway>:80/ibmcognos/bi/v1/disp“) then the pmhub.html interoperability file must be placed in “<CA_Gateway_machine>/webcontent/bi/
-If Cognos Analytics is accessed directly through an application tier or dispatcher (like “http://<CAdispatcher>:9300/bi/v1/disp“) then the pmhub.html file must be placed in each “<CA_Dispatcher_machine>/webcontent/”  folder.


More information:


Microsoft Azure Blob storage


Put a file on blob storage with Postman.

You get the error: 404 The specified blob does not exist

Suggested solution:

Change the request method from GET to PUT in Postman to place a file on the blob storage.

Since the file you are asking for does not exist – you have not put it there yet – GET returns a 404 error.

To put a file on the blob storage you need to add headers like:

x-ms-blob-type  = BlockBlob

x-ms-date = 2022-11-02
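Put together, the raw request Postman sends looks roughly like this (account, container, file name, and authorization are placeholders):

```
PUT https://youraccount.blob.core.windows.net/yourcontainer/file.csv?<sas-token>
x-ms-blob-type: BlockBlob
x-ms-date: 2022-11-02
Content-Length: <size of body>

<file contents as request body>
```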


An error like 400 Value for one of the query parameters specified in the request URI is invalid can mean that the container name is missing and the URL only has the account.

An error like 400 The requested URI does not represent any resource on the server can mean that the URL only names the container and not the blob file (this applies to both PUT and GET requests).

An error like 403 This request is not authorized to perform this operation can mean that you are not logged in to the VPN that gives you access to the Azure storage.

An error like 400 An HTTP header that's mandatory for this request is not specified can mean that you are missing the x-ms-blob-type header.


More Information: