Product:

TM1

Issue:

Set security with code?

Solution:

Check out this guide from https://exploringtm1.com/cell-security-in-tm1-the-complete-guide/

TM1’s security can be as simple or as complex as you need. We tend to start with the broadest possible definition of security and then refine it down to the specific, to the cell level if required. TM1 cell security adds an overhead for your administrator to manage, as it can get complex, not only within a cube but also in its interaction with element or dimension security. This guide will take you through how to create cell security the right way, minimising the overhead on your server and administrator.


Standard “Create Cell Security Cube” Method

When you right-click on a cube and select Security, you are prompted to Create a Cell Security Cube. If you do this, it creates a cube that replicates the primary cube, with the addition of the }Groups dimension. Let’s say you have a GL cube with Time, Version, Entity, Cost Centre, Account, and Measure. The automated method will give you those plus }Groups.

This is a dead-simple way to create a Cell Security cube, and from it you can assign cell-level security. However, if you have, say, six dimensions in the underlying cube, then you’ll have seven dimensions in the resulting cube. That gives great flexibility, because you can assign security to any intersection corresponding to the primary cube. It is also a huge overhead, though, because you have to maintain all those intersections, and if you go down the path of having rules manage the cell security, it could have a big hit on performance.


Customised Cell Security Cube

So what do we do? We want a cell security cube that has only the dimensions you need to assign security for the primary cube. If, from our GL cube above, we only need Time, Version, and Account for administering security, then we create a security cube with only those plus the }Groups dimension. Administering a four-dimension cube is much easier than a seven-dimension cube!
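The reduction in administration can be made concrete with rough numbers. A small sketch in Python (the element counts per dimension and the number of security groups are made up for illustration):

```python
from math import prod

# Hypothetical element counts per dimension (illustrative only)
dims = {"Time": 60, "Version": 5, "Entity": 40,
        "Cost Centre": 120, "Account": 400, "Measure": 10}
groups = 8  # assumed number of security groups in }Groups

# Full auto-generated cell security cube: every dimension plus }Groups
full_cube = prod(dims.values()) * groups

# Custom cube keeping only Time, Version, Account plus }Groups
custom_cube = prod(dims[d] for d in ("Time", "Version", "Account")) * groups

# The ratio is simply the product of the dropped dimensions' sizes
print(full_cube // custom_cube)  # → 48000
```

With these assumed sizes, the automated cube has 48,000 times more intersections to manage than the custom one.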


Create a Custom Cell Security Cube

The only way to create a customised cell security cube is via a special TI process. This TI contains just two lines, namely:

CubeName = 'General Ledger';
CellSecurityCubeCreate ( CubeName, '1:1:0:0:1:0');

This is a set of simple binary switches that enable or disable each dimension from the primary cube. Our primary cube, the General Ledger cube, has six dimensions here; they are referred to in the exact order they appear in the primary cube and are separated by colons. Finally, the whole string of zero and one switches is enclosed in single quotes.

Running this TI will then create a cell security cube with only the required dimensions. So, with our dimensions above, we would end up with Time, Version, Account, and }Groups in the new cube. This corresponds to the 1’s in the command in the TI.
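The switch string simply mirrors the primary cube’s dimension order. A small Python sketch (illustrative only, not part of TM1) of deriving it from a keep-list:

```python
# Dimensions of the primary cube, in their exact cube order
dims = ["Time", "Version", "Entity", "Cost Centre", "Account", "Measure"]

# Dimensions we want to keep in the cell security cube
keep = {"Time", "Version", "Account"}

# 1 = keep the dimension, 0 = drop it; switches joined with colons
switches = ":".join("1" if d in keep else "0" for d in dims)
print(switches)  # → 1:1:0:0:1:0
```

This reproduces the '1:1:0:0:1:0' string passed to CellSecurityCubeCreate above.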


Rules

Once the Cell Security cube has been created, we can assign rules to it. In the rule below we have six blocks; you can read the annotations in the rule comments. Note the last one is a catch-all with the scope []. This sets security for all remaining intersections not caught by the rules above.

# Set Actuals to have Write access to future Weeks only
['Actual'] = S: IF ( ELLEV ( 'Time', !Time) = 0
	,IF ( ATTRN ('Time', !Time, 'FY Week No') <= DB('System Control','Current Week','Value'), STET,'WRITE')
	,CONTINUE
	);

# Set Actuals to be Read for historic Periods (months) up until the most recent completed Month End
['Actual'] = S: IF ( ATTRS ('Time', !Time, 'Monthend Completed') @= 'Yes', STET,'WRITE');

# Set Budget to be all Read only
['Budget'] = S: 'READ';

# Set WEEKS for Active Forecast Versions to Write from Forecast Start Week onwards
[] = S: IF ( ELISANC ( 'Version', 'Active Forecast Versions', !Version ) = 1 & ELLEV ( 'Time', !Time) = 0 
	,IF ( ATTRN ('Time', !Time, 'FY Week No') < DB('System Control','Forecast Start Year-Week','Value'), STET,'WRITE')
	,CONTINUE 
	);

# Set PERIODS for Active Forecast Versions to Write from Forecast Start Week onwards
[] = S:  IF ( ELISANC ( 'Version', 'Active Forecast Versions', !Version ) = 1
	,IF ( ATTRN ('Time', !Time, 'FY Period No') < ATTRN ( 'Time' ,ATTRS ( 'Time', DB('System Control','Forecast Start Year-Week','String'), 'Current Period'), 'FY Period No'), STET,'WRITE')
	,CONTINUE 
	);

# Set ALL else to READ
[] = S: 'READ';

Deleting a Control Cube

Like other control objects, these cubes are special and cannot be deleted by the normal right-click method. They must be deleted via a TI process as well. The TI just needs to contain the following:

CubeName = '}CellSecurity_General Ledger';
CubeDestroy ( CubeName );

Obviously, if you run this, it will delete any data in the cube and any rules you have written against it. So copy the rules out first if you want to re-use them!

Product:
Microsoft SQL Server 2019
Issue:
How to activate encryption on SQL Server databases?

Solution:

You need a folder on the SQL Server to store the certificate. Create a folder like e:\key and give only local administrators and the SQL service account access to it.

You can use the same certificate for a group of SQL Servers. Then it is possible to restore a database backup to any of the other servers in that group that use the same certificate.

On the first SQL server:

USE Master;
GO
CREATE MASTER KEY ENCRYPTION
BY PASSWORD='InsertStrongPasswordHere12!';
GO

CREATE CERTIFICATE TDE_Cert
WITH
SUBJECT='Database_Encryption';
GO

BACKUP CERTIFICATE TDE_Cert
TO FILE = 'e:\key\TDE_Cert.cer'
WITH PRIVATE KEY (file='e:\key\TDE_CertKey.pvk',
ENCRYPTION BY PASSWORD='InsertStrongPasswordHere12!')

 

Then on every other SQL server in the group, copy the files above to the e:\key folder on that server, and run this to import the certificate:

USE Master;
GO
CREATE MASTER KEY ENCRYPTION
BY PASSWORD='InsertStrongPasswordHere12!';
GO

CREATE CERTIFICATE TDE_Cert
FROM FILE = 'e:\key\TDE_Cert.cer'
WITH PRIVATE KEY (FILE = 'e:\key\TDE_CertKey.pvk',
DECRYPTION BY PASSWORD = 'InsertStrongPasswordHere12!' );

 

Then to enable the encryption, you need to run this on every database:

USE <DB>
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert;
GO

ALTER DATABASE <DB>
SET ENCRYPTION ON;
GO

Replace <DB> with your database name.

 

Then the database and its subsequent backup files are encrypted. The backup can only be restored on a server with the same certificate.

 

If you get an error like this:

The certificate, asymmetric key, or private key file is not valid or does not exist; or you do not have permissions for it.

Try changing the path from file='e:\key\TDE_CertKey.pvk' to file='e:/key/TDE_CertKey.pvk' (forward slashes).

 

To see what databases are encrypted:

SELECT name,is_encrypted,* FROM sys.databases WHERE is_encrypted = 1

To check if the certificate is installed:

SELECT * FROM sys.certificates WHERE name = 'TDE_Cert'

 

Important: Keep your password and a backup of the certificate files in a secure location. If you need to restore a database to a new SQL server, these keys need to be restored first.

To remove encryption from a database (wait for the decryption scan to finish before dropping the encryption key):

ALTER DATABASE [RecoveryWithTDE]
  SET ENCRYPTION OFF;
GO
USE [RecoveryWithTDE]
GO 
DROP DATABASE ENCRYPTION KEY;

To backup the master key:

USE master;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'InsertStrongPasswordHere12!';
BACKUP MASTER KEY TO FILE = 'e:\key\MasterKeyName.key'
    ENCRYPTION BY PASSWORD = 'InsertStrongPasswordHere12!';
GO

To restore the master key to the database server:

USE master;
RESTORE MASTER KEY
    FROM FILE = 'e:\key\MasterKeyName.key'
    DECRYPTION BY PASSWORD = 'InsertStrongPasswordHere12!'
    ENCRYPTION BY PASSWORD = 'InsertStrongPasswordHere12!';

 

The tempdb database is encrypted automatically as soon as any user database on the instance uses TDE, and it then stays encrypted.

Note that you cannot encrypt a system database: “Database encryption operations cannot be performed for ‘master’, ‘model’, ‘tempdb’, ‘msdb’, or ‘resource’ databases.”

More Information:

https://www.databasejournal.com/ms-sql/suspending-and-resuming-transparent-data-encryption-tde/

https://www.sqlshack.com/how-to-configure-transparent-data-encryption-tde-in-sql-server/

https://www.sqlservertutorial.net/sql-server-administration/sql-server-tde/

https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-ver16

 

Product:

Microsoft Windows 2019 server

Issue:

How to open the Windows firewall to allow the server to respond to ping requests?

Solution:

  1. Open Control Panel, open Administrative Tools.
  2. Open Windows Defender Firewall.
  3. From the left pane of the resulting window, click ‘Inbound Rules’.
  4. In the right pane, find the rules titled ‘File and Printer Sharing (Echo Request – ICMPv4-In)’.
  5. Right-click each rule and choose Enable Rule.

More Information:

https://www.rootusers.com/how-to-enable-ping-in-windows-server-2019-firewall/

Product:

Microsoft SQL server 2016

Issue:

How to test-import only 10 rows from a text file with the BULK INSERT command, to check that it works?

Solution:

BULK INSERT Salestable
FROM 'C:\temp\data.txt'
WITH (LASTROW = 10,
    BATCHSIZE=250000,
    MAXERRORS=2);

Set LASTROW = 10 to read only the first ten rows of data from your data.txt file. Then you can check that you get the correct type of data in your table.

LASTROW = last_row

Specifies the number of the last row to load. The default is 0, which indicates the last row in the specified data file.

MAXERRORS = max_errors

Specifies the maximum number of syntax errors allowed in the data before the bulk-import operation is canceled. Each row that can’t be imported by the bulk-import operation is ignored and counted as one error. If max_errors isn’t specified, the default is 10.

BATCHSIZE = batch_size

Specifies the number of rows in a batch. Each batch is copied to the server as one transaction. If this fails, SQL Server commits or rolls back the transaction for every batch. By default, all data in the specified data file is one batch.

If you cancel a BULK INSERT, SQL Server will try to roll back the current transaction, which can take time.
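As a rough illustration of how BATCHSIZE splits a load into separate transactions (Python sketch; the total row count is assumed):

```python
import math

total_rows = 1_000_000   # assumed number of rows in data.txt
batch_size = 250_000     # BATCHSIZE from the example above

# Each batch is copied to the server and committed as its own transaction,
# so a cancel only rolls back the batch in flight, not committed batches.
batches = math.ceil(total_rows / batch_size)
print(batches)  # → 4
```

With the default (no BATCHSIZE), the whole file is one transaction, which is why a cancelled load can take a long time to roll back.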

 

More information:

https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver16#performance-considerations

https://www.mssqltips.com/sqlservertip/6109/bulk-insert-data-into-sql-server/

Product:
Microsoft SQL server 2016

Issue:
Can I see if I should improve anything on my SQL Server?

Solution:

Brent Ozar has developed a set of stored procedures (the First Responder Kit) that can give you more information. Get the kit from

https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit#writing-sp_blitz-output-to-a-table

Install this one in your master database:

Install-Core-Blitz-With-Query-Store.sql

Now you have a set of SP to use.

First run this (if you have performance issues right now):

sp_BlitzFirst

Run this to get the most common wait issues you need to check on your server:

sp_BlitzFirst  @sincestartup = 1, @outputtype = 'Top10'

 

Set up a database named LOGSQL and schedule this every month:

EXEC sp_Blitz @OutputDatabaseName = 'LOGSQL', @OutputSchemaName = 'dbo', @OutputTableName = 'BlitzResults';

This will give you a BlitzResults table to look at, showing the status of your SQL server.

Run this manually to see what can be done with indexes:

EXEC sp_BlitzIndex @GetAllDatabases = 1, @BringThePain = 1;

Copy the result to Excel for further analysis; remove unused and duplicate indexes.

 

Run this to see the biggest queries:

EXEC sp_BlitzCache @Top = 20,  @BringThePain = 1;

Run this to see the size of index in one database:

SELECT OBJECT_NAME(IX.OBJECT_ID) Table_Name
  ,IX.name AS Index_Name
  ,IX.type_desc Index_Type
  ,SUM(PS.[used_page_count]) * 8 IndexSizeKB
  ,IXUS.user_seeks AS NumOfSeeks
  ,IXUS.user_scans AS NumOfScans
  ,IXUS.user_lookups AS NumOfLookups
  ,IXUS.user_updates AS NumOfUpdates
  ,IXUS.last_user_seek AS LastSeek
  ,IXUS.last_user_scan AS LastScan
  ,IXUS.last_user_lookup AS LastLookup
  ,IXUS.last_user_update AS LastUpdate
FROM sys.indexes IX
INNER JOIN sys.dm_db_index_usage_stats IXUS ON IXUS.index_id = IX.index_id AND IXUS.OBJECT_ID = IX.OBJECT_ID AND IXUS.database_id = DB_ID()
INNER JOIN sys.dm_db_partition_stats PS ON PS.object_id = IX.object_id AND PS.index_id = IX.index_id
WHERE OBJECTPROPERTY(IX.OBJECT_ID,'IsUserTable') = 1
GROUP BY OBJECT_NAME(IX.OBJECT_ID) ,IX.name ,IX.type_desc ,IXUS.user_seeks ,IXUS.user_scans ,IXUS.user_lookups,IXUS.user_updates ,IXUS.last_user_seek ,IXUS.last_user_scan ,IXUS.last_user_lookup ,IXUS.last_user_update

Check the size of the index and its usage: has it been used?

https://www.sqlshack.com/boost-sql-server-performance-with-wait-statistics/

https://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/

What is the most worrying wait type?

 

More Information:

https://www.brentozar.com/archive/2018/07/cxconsumer-is-harmless-not-so-fast-tiger/ 

 

 

Product:

Microsoft SQL server 2016

Issue:

Should I make an index?

Solution:

Yes, all tables should have an index, at least a clustered index.

But a data warehouse table should perhaps not have indexes while you are loading a lot of data.

Indexes can be created after you have loaded the data, to speed up the load process.

Correctly designed indexes make SELECTs faster. Any index on the target table makes INSERTs slower.

You should have neither too few nor too many indexes on a table to get the best performance.

To create an index:

CREATE TABLE dbo.TestTable 
(TestCol1 int NOT NULL, 
TestCol2 nchar(10) NULL, 
TestCol3 nvarchar(50) NULL);

CREATE CLUSTERED INDEX IX_TestTable_TestCol1 
ON dbo.TestTable (TestCol1);

You can also create an index by right-clicking on the table name, then Indexes, then New Index.

Check if the index exists before you drop it:

IF EXISTS (SELECT name FROM sys.indexes  
    WHERE name = N'IX_ProductVendor_VendorID')   
    DROP INDEX IX_ProductVendor_VendorID ON Purchasing.ProductVendor;   

CREATE NONCLUSTERED INDEX IX_ProductVendor_VendorID   
    ON Purchasing.ProductVendor (BusinessEntityID); 

Use ONLINE=ON to keep the table readable while the index is created:

CREATE NONCLUSTERED INDEX [IX_NonClusteredIndexDemo_StudentName] 
ON [dbo].[NonClusteredIndexDemo] ([StudentName],[STDAddress]) 
WITH ( ONLINE=ON, FILLFACTOR=90) 

To get more information about query processing, use:

SET STATISTICS TIME ON
SET STATISTICS IO ON

SELECT * FROM [dbo].[DimAccount] 
WHERE [AccountType] ='Assets'

More Information:

https://www.sqlshack.com/tracing-and-tuning-queries-using-sql-server-indexes/

https://dataschool.com/sql-optimization/how-indexing-works/

  • Columns with text, image, ntext, varchar(max), nvarchar(max) and varbinary(max) cannot be used in the index key columns.
  • It is recommended to use an integer data type for the index key column. It has a low space requirement and works efficiently. Because of this, you’ll usually want to create the primary key column with an integer data type.
  • You should consider creating a primary key for the column with unique values. If a table does not have any unique value columns, you might define an identity column for an integer data type. A primary key also creates a clustered index for the row distribution.
  • You can consider a column with the Unique and Not NULL values as a useful index key candidate.
  • You should build an index based on the predicates in the Where clause. For example, you can consider columns used in the Where clause, SQL joins, like, order by, group by predicates, and so on.
  • You should join tables in a way that reduces the number of rows for the rest of the query. This will help query optimizer prepare the execution plan with minimum system resources.
  • If you use multiple columns for an index key, it is also essential to consider their position in the index key.
  • You should also consider using included columns in your indexes.
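The space argument for integer keys above can be made concrete. A Python sketch, assuming SQL Server’s documented storage sizes (int = 4 bytes; nvarchar = 2 bytes per character plus 2 bytes of overhead); the row count and key length are made up for illustration:

```python
INT_BYTES = 4  # storage size of a SQL Server int

def nvarchar_bytes(chars: int) -> int:
    # nvarchar stores 2 bytes per character plus 2 bytes of overhead
    return 2 * chars + 2

rows = 1_000_000  # assumed table size

int_key_total = rows * INT_BYTES                  # key storage with an int key
name_key_total = rows * nvarchar_bytes(30)        # e.g. a 30-character name key

print(int_key_total, name_key_total)  # → 4000000 62000000
```

The narrower key is repeated in every index entry, which is why integer keys keep indexes small and fast.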

https://blog.quest.com/11-sql-server-index-best-practices-for-improved-performance-tuning/

https://hungdoan.com/2017/04/13/nvarcharn-vs-nvarcharmax-performance-in-ms-sql-server/

The clustered index defines the order in which the table data will be sorted and stored. As mentioned before, a table without indexes will be stored in an unordered structure. When you define a clustered index on a column, it will sort data based on that column values and store it. Thus, it helps in faster retrieval of the data.

There can be only one clustered index on a table because the data rows can be stored in only one order.

When you create a Primary Key constraint on a table, a unique clustered index is automatically created on the table.

The non-clustered index does not sort the data rows physically. It creates a separate key-value structure from the table data where the key contains the column values (on which a non-clustered index is declared) and each value contains a pointer to the data row that contains the actual value. It is similar to a textbook having an index at the back of the book with page numbers pointing to the actual information.
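The textbook analogy can be sketched in Python (illustrative only): the clustered “index” is the sorted storage order itself, while the non-clustered index is a separate key-to-position map whose values point back at the stored rows:

```python
# Table rows, stored sorted by the clustered key (id) - this IS the clustered order
rows = [(1, "Alice"), (3, "Carol"), (7, "Bob")]

# Non-clustered index on name: key -> position of the row in storage
name_index = {name: pos for pos, (_id, name) in enumerate(rows)}

# A lookup by name follows the pointer back to the actual row
row = rows[name_index["Bob"]]
print(row)  # → (7, 'Bob')
```

Note the rows stay physically ordered by id; only the side structure is keyed by name, which is why a table can have many non-clustered indexes but only one clustered index.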

https://www.tutorialsteacher.com/sqlserver/indexes

https://www.midnightdba.com/Jen/2014/12/create-a-clustered-index-and-a-primary-key-on-existing-tables/

https://www.sqlshack.com/designing-effective-sql-server-clustered-indexes/

https://developer.ibm.com/articles/i-sql-indexs-and-native-io/

https://tprojects.schneider-electric.com/GeoSCADAHelp/ClearSCADA%202017%20R2/Content/SQLGuide/IntroductiontoSQLQueryStructure.htm

https://www.brentozar.com/archive/2021/06/learn-fundamentals-of-index-tuning-for-1/

https://www.sqlshack.com/designing-effective-sql-server-non-clustered-indexes/

Product:

Microsoft SQL server 2016

Issue:

How to allow a domain group only to execute all stored procedures in a database?

Solution:

Create a role with only the correct rights:

CREATE ROLE db_executor;

GRANT EXECUTE TO db_executor;

Add an existing database user to that role:

ALTER ROLE db_executor ADD MEMBER [company\username]

 

To add a domain group to the sysadmin role (note: sp_addsrvrolemember is deprecated; ALTER SERVER ROLE [sysadmin] ADD MEMBER works as well):

exec sp_addsrvrolemember 'whatDomain\Domain Admins', 'sysadmin';

To add a domain user and add it to a predefined role in the database:

CREATE LOGIN [company\User] FROM WINDOWS;
CREATE USER [company\User] FOR LOGIN [company\User];
ALTER ROLE [db_datareader] ADD MEMBER [company\User];

 

More Information:

https://www.sqlmatters.com/Articles/Adding%20a%20db_executor%20role.aspx

https://kimconnect.com/how-to-add-domain-admins-to-sql-server/

https://help.genesys.com/pureconnect/mergedprojects/wh_tr/mergedprojects/wh_tr_active_directory_sql/desktop/run_sql_server_script_to_add_ad_accounts.htm

Product:

Microsoft SQL server 2016

Issue:

How to list the size of the tables in a database?

Solution:

Try this SQL query:

SELECT 
    t.NAME AS TableName,
    i.name as indexName,
    sum(p.rows) as RowCounts,
    sum(a.total_pages) as TotalPages, 
    sum(a.used_pages) as UsedPages, 
    sum(a.data_pages) as DataPages,
    (sum(a.total_pages) * 8) / 1024 as TotalSpaceMB, 
    (sum(a.used_pages) * 8) / 1024 as UsedSpaceMB, 
    (sum(a.data_pages) * 8) / 1024 as DataSpaceMB
FROM 
    sys.tables t
INNER JOIN      
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN 
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN 
    sys.allocation_units a ON p.partition_id = a.container_id
WHERE 
    t.NAME NOT LIKE 'dt%' AND
    i.OBJECT_ID > 255 AND   
    i.index_id <= 1
GROUP BY 
    t.NAME, i.object_id, i.index_id, i.name 
ORDER BY 
    SUM(a.total_pages) DESC
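SQL Server stores data in 8 KB pages, which is what the query’s `* 8 / 1024` converts to MB. The same arithmetic in Python (sample page count assumed):

```python
def pages_to_mb(pages: int) -> int:
    # Each SQL Server page is 8 KB; 1024 KB per MB (integer division, as in the query)
    return (pages * 8) // 1024

print(pages_to_mb(12800))  # → 100
```

So a table reporting 12,800 total pages occupies about 100 MB.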

 

More Information:

https://ss64.com/sql/syntax-largest-table.html

Useful T-SQL queries and scripts to work in SQL Server

https://gist.github.com/hlsupe/d38b10ffdf34eb620f428100257a2695

Product:

Cognos Analytics 11.1.7

Microsoft Windows 2016 server

Issue:

Cannot create new Planning Analytics data source connections inside CA11.

If it worked before, you may have changed the security (certificates) on the Planning Analytics Local installation.

When you test the data connection in Cognos Analytics administration page, you get an error like this:

Dynamic Failed XTR-ERR-0005 A request to TM1 resulted in error: “[400] javax.net.ssl.SSLHandshakeException: com.ibm.jsse2.util.j: PKIX path building failed: com.ibm.security.cert.IBMCertPathBuilderException: unable to find valid certification path to requested target”.

Solution:

Add the TM1 default certificate to the CA11 Java certificate store.

  1. Stop the IBM Cognos service.
  2. Go to D:\Program Files\ibm\cognos\analytics\ibm-jre\jre\bin and start Ikeyman.exe as administrator.
  3. Click the open icon.
  4. Select the cacerts file in folder D:\Program Files\ibm\cognos\analytics\ibm-jre\jre\lib\security.
  5. Set the type to JKS and click OK.
  6. Enter the password: changeit
  7. Select Signer Certificates from the drop-down menu.
  8. Click Add.
  9. Select the ibmtm1.arm file (that you copied over from the Planning Analytics server) and click OK.
  10. Give it a name like TM1ServerCert.
  11. Close the IKEYMAN program.
  12. Start the IBM Cognos service.

If you test the connection, the message should be similar to this:

Dynamic Succeeded XQE-DS-0015 TM1 Server Name: tm1ServerName: “11.8.01000.6”.

 

More information:

https://www.ibm.com/support/pages/xtr-err-0005-error-when-testing-ibm-planning-analytics-20x-datasource-ibm-cognos-analytics-1112

Product:
Cognos Controller Web 10.4.2
Microsoft Windows 2016 Server

Issue:
Users cannot log in to Cognos Controller Web after an update to HTTPS.

In folder D:\Program Files\ibm\cognos\ccr_64\frontend\logs, the error in the log file can be:

error from web proxy: { Error: Client network socket disconnected before secure TLS connection was established at TLSSocket.onConnectEnd (_tls_wrap.js:1088:19)
at Object.onceWrapper (events.js:277:13)
at TLSSocket.emit (events.js:194:15)

Solution:

Check that the local Windows firewall allows connections on port 3443.

Add an inbound rule that allows communication on ports 9080, 9081, 3443, and 3000.

The Controller Web ports are set in the config.js and server.xml files.

<httpEndpoint host="*" httpPort="3000" httpsPort="3443" id="defaultHttpEndpoint">
<httpOptions removeServerHeader="true" />
</httpEndpoint>

More information:

https://www.ibm.com/support/pages/server-not-reachable-or-still-initializing-handshake-failure-signer-might-need-be-added-local-trust-store-ccr-ws-api-not-initialized-yet-errors-controller-web-caused-keystore-file-keyjks-containing-wrong-information

https://www.ibm.com/support/pages/node/883036

https://www.ibm.com/support/pages/controller-web-does-not-work-when-installed-same-server-cognos-analytics

https://www.ibm.com/support/pages/how-configure-controller-web-use-ssl-https