Product:
Planning Analytics 2.0.9.17
Microsoft Windows Server 2019

Problem:

I want to copy a CSV file from a folder on the TM1 server to Azure File Storage. How do I do that?

Solution:

Create a folder where you store both azcopy.exe and the ps1 script (d:\script). Download AzCopy from https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10

Create a folder where you store the file you want to send (d:\temp).

In the Azure File Storage account you need to create a SAS token. https://adamtheautomator.com/azure-sas-token/

With that token, create the PowerShell script below and save it as uploadfile.ps1 in your d:\script folder.

$env:AZCOPY_CRED_TYPE = "Anonymous";
$env:AZCOPY_CONCURRENCY_VALUE = "AUTO";
$env:AZCOPY_LOG_LOCATION="d:\temp\";
d:\script\azcopy.exe copy "d:\Temp\testfile1.txt" "https://yourstorageaccountname.file.core.windows.net/input/testfile1.txt?sv=2022-12-03&ss=f&srt=sco&sp=rwlc&se=2023-08-09T19%3A35%3A07Z&st=2023-06-29T11%3A35%3A07Z&spr=https&sig=yoursaskeytokenandtheotherstuff3D" --overwrite=prompt --from-to=LocalFile --follow-symlinks --put-md5 --preserve-smb-info=true --disable-auto-decoding=false --recursive --log-level=INFO;
$env:AZCOPY_CRED_TYPE = "";
$env:AZCOPY_CONCURRENCY_VALUE = "";

You need to replace parts of the URL above with your own storage account name and SAS token values.

AZCOPY_LOG_LOCATION makes azcopy write its log file to that temp folder instead of to a folder under the user profile.

You must provide the full path to all files and folders, e.g. d:\script\azcopy.exe; otherwise the TI process will not find them, and you will not get any error message.

Replace testfile1.txt with the name of the file you want to upload. You need to provide the file name in both the source path and the target URL.

Then create a TM1 TI process with the line below in the Prolog tab:

 ExecuteCommand( 'Powershell -ExecutionPolicy ByPass -file "D:\script\uploadfile.ps1" ', 0 );

 

If you have issues, you can try this instead:

ExecuteCommand( 'Powershell -noprofile -noninteractive -ExecutionPolicy ByPass -file "D:\script\uploadfile.ps1" ', 0 );
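If you need to upload files with different names from the same TI process, a parameterized variant of the script can be used. This is a minimal sketch (the script name uploadfile_param.ps1 and its parameter are my own examples; the SAS URL is shortened and must be replaced with your full URL from the SAS token):

# d:\script\uploadfile_param.ps1 - upload the file name given as an argument
param(
    [Parameter(Mandatory = $true)]
    [string]$FileName
)
$env:AZCOPY_LOG_LOCATION = "d:\temp\"
$sasUrl = "https://yourstorageaccountname.file.core.windows.net/input/${FileName}?sv=...&sig=..."
d:\script\azcopy.exe copy "d:\temp\$FileName" $sasUrl --from-to=LocalFile --put-md5 --log-level=INFO

# Called from the TI prolog like:
# ExecuteCommand( 'Powershell -ExecutionPolicy ByPass -file "D:\script\uploadfile_param.ps1" -FileName "testfile1.txt"', 0 );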

To see which account is running PowerShell, create a PowerShell file with this content (d:\script\system.ps1):

try {
    $Who = whoami.exe
}
catch {
    $Who = "Error Message: $($_.Exception.Message) Error on line $($_.InvocationInfo.ScriptLineNumber)"
}
finally {
    $Who | Out-File -FilePath 'd:\Script\SystemWhoami.txt'
}

Then in TM1 call that ps1 file with this command:

ExecuteCommand( 'Powershell -ExecutionPolicy ByPass -file "D:\script\system.ps1" ', 0 );

This will create a file in the d:\script\ folder that contains the account that ran the script.

If the TM1 service is run by the local system account, the file will contain "nt authority\system". If the TM1 service is run by a Windows user, that user name will show in the file.

The TM1 process call to a PowerShell file with an azcopy command works even when the TM1 service is running under the system account.

Browse to your Azure File Storage account and check if the file was uploaded.

If the file already exists in Azure File Storage, the transfer will not succeed; the script above uses --overwrite=prompt, so an existing file is not replaced automatically (change the flag to --overwrite=true if you want to overwrite).

 

If it still does not work, check whether execution of PowerShell scripts is disabled on the TM1 Windows server (the PowerShell execution policy).

In some cases, when PowerShell does not work under the system account, you may need to add two folders on the C drive:

(used by 64-bit processes)
C:\Windows\System32\config\systemprofile\Desktop

(used by 32-bit processes on 64-bit Windows)
C:\Windows\SysWOW64\config\systemprofile\Desktop
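The folders can be created from an elevated PowerShell prompt, for example (a small sketch of the same workaround):

# Create the Desktop folders under systemprofile if they do not already exist
New-Item -ItemType Directory -Force -Path 'C:\Windows\System32\config\systemprofile\Desktop'
New-Item -ItemType Directory -Force -Path 'C:\Windows\SysWOW64\config\systemprofile\Desktop'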

 

Please note that Azure File Storage does not support all the authentication methods that other Azure storage services do.

  • local <-> Azure Blob (SAS or OAuth authentication)
  • local <-> Azure Files (Share/directory SAS authentication)
  • local <-> Azure Data Lake Storage Gen2 (SAS, OAuth, or SharedKey authentication)

For Azure File Storage you must use a SAS token or the storage account key to get access.

 

More Information:

https://powershelltoolbox.tech/testing-powershell-scripts-as-the-system-account/ 

https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-copy?source=recommendations 

https://www.aramar.co.uk/using-powershell-in-planning-analytics/ 

https://www.tm1forum.com/viewtopic.php?t=14763 

https://sahayagodson.medium.com/azcopy-sync-with-task-scheduler-8a59fba1755a

https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview

https://learn.microsoft.com/en-us/azure/cognitive-services/translator/document-translation/how-to-guides/create-sas-tokens?tabs=Containers

https://www.51sec.org/2022/08/12/using-azcopy-to-migrate-local-files-to-azure-blob-storage/

https://bluexp.netapp.com/blog/azure-cvo-blg-how-to-upload-files-to-azure-blob-storage#H_H2 

PowerShell script: Download and extract the latest AzCopy (Windows 64-bit) executable into a specified folder

https://gist.github.com/DaveRuijter/b530c84020950c729cb9e15efbfe04ce 
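A rough sketch of that idea (assuming the aka.ms link still redirects to the current Windows 64-bit zip):

# Download the latest AzCopy v10 for Windows (64-bit) and place azcopy.exe in d:\script
Invoke-WebRequest -Uri 'https://aka.ms/downloadazcopy-v10-windows' -OutFile "$env:TEMP\azcopy.zip"
Expand-Archive -Path "$env:TEMP\azcopy.zip" -DestinationPath "$env:TEMP\azcopy" -Force
Copy-Item -Path "$env:TEMP\azcopy\azcopy_windows_amd64_*\azcopy.exe" -Destination 'd:\script\azcopy.exe' -Force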

https://www.smikar.com/using-azcopy-with-azure-storage/ 

Usage:
azcopy copy [source] [destination] [flags]

Aliases:
copy, cp, c

Examples:
Upload a single file by using OAuth authentication. If you have not yet logged into AzCopy, please run the azcopy login command before you run the following command.

- azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]"

Same as above, but this time also compute MD5 hash of the file content and save it as the blob’s Content-MD5 property:

- azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --put-md5

Upload a single file by using a SAS token:

- azcopy cp "/path/to/file.txt" "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Upload a single file by using a SAS token and piping (block blobs only):

- cat "/path/to/file.txt" | azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" --from-to PipeBlob

Upload a single file by using OAuth and piping (block blobs only):

- cat "/path/to/file.txt" | azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --from-to PipeBlob

Upload an entire directory by using a SAS token:

- azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true
or
- azcopy cp "/path/to/dir" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true --put-md5

Upload a set of files by using a SAS token and wildcard (*) characters:

- azcopy cp "/path/*foo/*bar/*.pdf" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]"

Upload files and directories by using a SAS token and wildcard (*) characters:

- azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Upload files and directories to Azure Storage account and set the query-string encoded tags on the blob.

- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax:
- azcopy cp "/path/*foo/*bar*" "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"
- Keys and values are URL encoded and the key-value pairs are separated by an ampersand ('&')
- https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-index-how-to?tabs=azure-portal
- While setting tags on the blobs, there are additional permissions ('t' for tags) in SAS without which the service will give an authorization error back.

Download a single file by using OAuth authentication. If you have not yet logged into AzCopy, please run the azcopy login command before you run the following command.

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" "/path/to/file.txt"

Download a single file by using a SAS token:

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "/path/to/file.txt"

Download a single file by using a SAS token and then piping the output to a file (block blobs only):

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" --from-to BlobPipe > "/path/to/file.txt"

Download a single file by using OAuth and then piping the output to a file (block blobs only):

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/blob]" --from-to BlobPipe > "/path/to/file.txt"

Download an entire directory by using a SAS token:

- azcopy cp "https://[account].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "/path/to/dir" --recursive=true

A note about using a wildcard character (*) in URLs:

There’s only two supported ways to use a wildcard character in a URL.
- You can use one just after the final forward slash (/) of a URL. This copies all of the files in a directory directly to the destination without placing them into a subdirectory.
- You can also use one in the name of a container as long as the URL refers only to a container and not to a blob. You can use this approach to obtain files from a subset of containers.

Download the contents of a directory without copying the containing directory itself.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/folder]/*?[SAS]" "/path/to/dir"

Download an entire storage account.

- azcopy cp "https://[srcaccount].blob.core.windows.net/" "/path/to/dir" --recursive

Download a subset of containers within a storage account by using a wildcard symbol (*) in the container name.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container*name]" "/path/to/dir" --recursive

Download all the versions of a blob from Azure Storage to local directory. Ensure that source is a valid blob, destination is a local folder and versionidsFile which takes in a path to the file where each version is written on a separate line. All the specified versions will get downloaded in the destination folder specified.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]/[blobname]" "/path/to/dir" --list-of-versions="/another/path/to/dir/[versionidsFile]"

Copy a subset of files within a flat container by using a wildcard symbol (*) in the container name without listing all files in the container.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]/*" "/path/to/dir" --include-pattern="1*"

Copy a single blob to another blob by using a SAS token.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Copy a single blob to another blob by using a SAS token and an OAuth token. You have to use a SAS token at the end of the source account URL if you do not have the right permissions to read it with the identity used for login.

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]"

Copy one blob virtual directory to another by using a SAS token:

- azcopy cp "https://[srcaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Copy all blob containers, directories, and blobs from storage account to another by using a SAS token:

- azcopy cp "https://[srcaccount].blob.core.windows.net?[SAS]" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy a single object to Blob Storage from Amazon Web Services (AWS) S3 by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Copy an entire directory to Blob Storage from AWS S3 by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Please refer to https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html to better understand the [folder] placeholder.

Copy all buckets to Blob Storage from Amazon Web Services (AWS) by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy all buckets to Blob Storage from an Amazon Web Services (AWS) region by using an access key and a SAS token. First, set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3-[region].amazonaws.com/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy a subset of buckets by using a wildcard symbol (*) in the bucket name. Like the previous examples, you’ll need an access key and a SAS token. Make sure to set the environment variable AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for AWS S3 source.

- azcopy cp "https://s3.amazonaws.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net?[SAS]" --recursive=true

Copy blobs from one blob storage to another and preserve the tags from source. To preserve tags, use the following syntax :

- azcopy cp "https://[account].blob.core.windows.net/[source_container]/[path/to/directory]?[SAS]" "https://[account].blob.core.windows.net/[destination_container]/[path/to/directory]?[SAS]" --s2s-preserve-blob-tags=true

Transfer files and directories to Azure Storage account and set the given query-string encoded tags on the blob.

- To set tags {key = "bla bla", val = "foo"} and {key = "bla bla 2", val = "bar"}, use the following syntax:
- azcopy cp "https://[account].blob.core.windows.net/[source_container]/[path/to/directory]?[SAS]" "https://[account].blob.core.windows.net/[destination_container]/[path/to/directory]?[SAS]" --blob-tags="bla%20bla=foo&bla%20bla%202=bar"
- Keys and values are URL encoded and the key-value pairs are separated by an ampersand ('&')
- https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-index-how-to?tabs=azure-portal
- While setting tags on the blobs, there are additional permissions ('t' for tags) in SAS without which the service will give an authorization error back.

Copy a single object to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.

- azcopy cp "https://storage.cloud.google.com/[bucket]/[object]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/blob]?[SAS]"

Copy an entire directory to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.

- azcopy cp "https://storage.cloud.google.com/[bucket]/[folder]" "https://[destaccount].blob.core.windows.net/[container]/[path/to/directory]?[SAS]" --recursive=true

Copy an entire bucket to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variable GOOGLE_APPLICATION_CREDENTIALS for GCS source.

- azcopy cp "https://storage.cloud.google.com/[bucket]" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true

Copy all buckets to Blob Storage from Google Cloud Storage (GCS) by using a service account key and a SAS token. First, set the environment variables GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT=<project-id> for GCS source

- azcopy cp "https://storage.cloud.google.com/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true

Copy a subset of buckets by using a wildcard symbol (*) in the bucket name from Google Cloud Storage (GCS) by using a service account key and a SAS token for destination. First, set the environment variables GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT=<project-id> for GCS source

- azcopy cp "https://storage.cloud.google.com/[bucket*name]/" "https://[destaccount].blob.core.windows.net/?[SAS]" --recursive=true

Flags:
–as-subdir True by default. Places folder sources as subdirectories under the destination. (default true)
–backup Activates Windows’ SeBackupPrivilege for uploads, or SeRestorePrivilege for downloads, to allow AzCopy to see read all files, regardless of their file system permissions, and to restore all permissions. Requires that the account running AzCopy already has these permissions (e.g. has Administrator rights or is a member of the ‘Backup Operators’ group). All this flag does is activate privileges that the account already has
–blob-tags string Set tags on blobs to categorize data in your storage account
–blob-type string Defines the type of blob at the destination. This is used for uploading blobs and when copying between accounts (default ‘Detect’). Valid values include ‘Detect’, ‘BlockBlob’, ‘PageBlob’, and ‘AppendBlob’. When copying between accounts, a value of ‘Detect’ causes AzCopy to use the type of source blob to determine the type of the destination blob. When uploading a file, ‘Detect’ determines if the file is a VHD or a VHDX file based on the file extension. If the file is either a VHD or VHDX file, AzCopy treats the file as a page blob. (default “Detect”)
–block-blob-tier string upload block blob to Azure Storage using this blob tier. (default “None”)
–block-size-mb float Use this block size (specified in MiB) when uploading to Azure Storage, and downloading from Azure Storage. The default value is automatically calculated based on file size. Decimal fractions are allowed (For example: 0.25).
–cache-control string Set the cache-control header. Returned on download.
–check-length Check the length of a file on the destination after the transfer. If there is a mismatch between source and destination, the transfer is marked as failed. (default true)
–check-md5 string Specifies how strictly MD5 hashes should be validated when downloading. Only available when downloading. Available options: NoCheck, LogOnly, FailIfDifferent, FailIfDifferentOrMissing. (default ‘FailIfDifferent’) (default “FailIfDifferent”)
–content-disposition string Set the content-disposition header. Returned on download.
–content-encoding string Set the content-encoding header. Returned on download.
–content-language string Set the content-language header. Returned on download.
–content-type string Specifies the content type of the file. Implies no-guess-mime-type. Returned on download.
–cpk-by-name string Client provided key by name let clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. Provided key name will be fetched from Azure Key Vault and will be used to encrypt the data
–cpk-by-value Client provided key by name let clients making requests against Azure Blob storage an option to provide an encryption key on a per-request basis. Provided key and its hash will be fetched from environment variables
–decompress Automatically decompress files when downloading, if their content-encoding indicates that they are compressed. The supported content-encoding values are ‘gzip’ and ‘deflate’. File extensions of ‘.gz’/’.gzip’ or ‘.zz’ aren’t necessary, but will be removed if present.
–disable-auto-decoding False by default to enable automatic decoding of illegal chars on Windows. Can be set to true to disable automatic decoding.
–dry-run Prints the file paths that would be copied by this command. This flag does not copy the actual files.
–exclude-attributes string (Windows only) Exclude files whose attributes match the attribute list. For example: A;S;R
–exclude-blob-type string Optionally specifies the type of blob (BlockBlob/ PageBlob/ AppendBlob) to exclude when copying blobs from the container or the account. Use of this flag is not applicable for copying data from non azure-service to service. More than one blob should be separated by ‘;’.
–exclude-path string Exclude these paths when copying. This option does not support wildcard characters (*). Checks relative path prefix(For example: myFolder;myFolder/subDirName/file.pdf). When used in combination with account traversal, paths do not include the container name.
–exclude-pattern string Exclude these files when copying. This option supports wildcard characters (*)
–exclude-regex string Exclude all the relative path of the files that align with regular expressions. Separate regular expressions with ‘;’.
–follow-symlinks Follow symbolic links when uploading from local file system.
–force-if-read-only When overwriting an existing file on Windows or Azure Files, force the overwrite to work even if the existing file has its read-only attribute set
–from-to string Specified to nudge AzCopy when resource detection may not work (e.g. piping/emulator/azure stack); Valid FromTo are pairs of Source-Destination words (e.g. BlobLocal, BlobBlob) that specify the source and destination resource types. All valid FromTos are: BlobBlob, BlobFSLocal, BlobFile, BlobLocal, BlobPipe, FileBlob, FileFile, FileLocal, FilePipe, GCPBlob, LocalBlob, LocalBlobFS, LocalFile, PipeBlob, PipeFile, S3Blob
-h, –help help for copy
–include-after string Include only those files modified on or after the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. E.g. ‘2020-08-19T15:04:00Z’ for a UTC time, or ‘2020-08-19’ for midnight (00:00) in the local timezone. As of AzCopy 10.5, this flag applies only to files, not folders, so folder properties won’t be copied when using this flag with –preserve-smb-info or –preserve-smb-permissions.
–include-attributes string (Windows only) Include files whose attributes match the attribute list. For example: A;S;R
–include-before string Include only those files modified before or on the given date/time. The value should be in ISO8601 format. If no timezone is specified, the value is assumed to be in the local timezone of the machine running AzCopy. E.g. ‘2020-08-19T15:04:00Z’ for a UTC time, or ‘2020-08-19’ for midnight (00:00) in the local timezone. As of AzCopy 10.7, this flag applies only to files, not folders, so folder properties won’t be copied when using this flag with –preserve-smb-info or –preserve-smb-permissions.
–include-directory-stub False by default to ignore directory stubs. Directory stubs are blobs with metadata ‘hdi_isfolder:true’. Setting value to true will preserve directory stubs during transfers.
–include-path string Include only these paths when copying. This option does not support wildcard characters (*). Checks relative path prefix (For example: myFolder;myFolder/subDirName/file.pdf).
–include-pattern string Include only these files when copying. This option supports wildcard characters (*). Separate files by using a ‘;’.
–include-regex string Include only the relative path of the files that align with regular expressions. Separate regular expressions with ‘;’.
–list-of-versions string Specifies a file where each version id is listed on a separate line. Ensure that the source must point to a single blob and all the version ids specified in the file using this flag must belong to the source blob only. AzCopy will download the specified versions in the destination folder provided.
–metadata string Upload to Azure Storage with these key-value pairs as metadata.
–no-guess-mime-type Prevents AzCopy from detecting the content-type based on the extension or content of the file.
–overwrite string Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default ‘true’) Possible values include ‘true’, ‘false’, ‘prompt’, and ‘ifSourceNewer’. For destinations that support folders, conflicting folder-level properties will be overwritten this flag is ‘true’ or if a positive response is provided to the prompt. (default “true”)
–page-blob-tier string Upload page blob to Azure Storage using this blob tier. (default ‘None’). (default “None”)
–preserve-last-modified-time Only available when destination is file system.
–preserve-owner Only has an effect in downloads, and only when –preserve-smb-permissions is used. If true (the default), the file Owner and Group are preserved in downloads. If set to false, –preserve-smb-permissions will still preserve ACLs but Owner and Group will be based on the user running AzCopy (default true)
–preserve-permissions False by default. Preserves ACLs between aware resources (Windows and Azure Files, or ADLS Gen 2 to ADLS Gen 2). For Hierarchical Namespace accounts, you will need a container SAS or OAuth token with Modify Ownership and Modify Permissions permissions. For downloads, you will also need the –backup flag to restore permissions where the new Owner will not be the user running AzCopy. This flag applies to both files and folders, unless a file-only filter is specified (e.g. include-pattern).
–preserve-posix-properties ‘Preserves’ property info gleaned from stat or statx into object metadata.
–preserve-smb-info Preserves SMB property info (last write time, creation time, attribute bits) between SMB-aware resources (Windows and Azure Files). On windows, this flag will be set to true by default. If the source or destination is a volume mounted on Linux using SMB protocol, this flag will have to be explicitly set to true. Only the attribute bits supported by Azure Files will be transferred; any others will be ignored. This flag applies to both files and folders, unless a file-only filter is specified (e.g. include-pattern). The info transferred for folders is the same as that for files, except for Last Write Time which is never preserved for folders. (default true)
–preserve-symlinks If enabled, symlink destinations are preserved as the blob content, rather than uploading the file/folder on the other end of the symlink
–put-md5 Create an MD5 hash of each file, and save the hash as the Content-MD5 property of the destination blob or file. (By default the hash is NOT created.) Only available when uploading.
–recursive Look into sub-directories recursively when uploading from local file system.
–s2s-detect-source-changed Detect if the source file/blob changes while it is being read. (This parameter only applies to service to service copies, because the corresponding check is permanently enabled for uploads and downloads.)
–s2s-handle-invalid-metadata string Specifies how invalid metadata keys are handled. Available options: ExcludeIfInvalid, FailIfInvalid, RenameIfInvalid. (default ‘ExcludeIfInvalid’). (default “ExcludeIfInvalid”)
–s2s-preserve-access-tier Preserve access tier during service to service copy. Please refer to [Azure Blob storage: hot, cool, and archive access tiers](https://docs.microsoft.com/azure/storage/blobs/storage-blob-storage-tiers) to ensure destination storage account supports setting access tier. In the cases that setting access tier is not supported, please use s2sPreserveAccessTier=false to bypass copying access tier. (default true). (default true)
–s2s-preserve-blob-tags Preserve index tags during service to service transfer from one blob storage to another
–s2s-preserve-properties Preserve full properties during service to service copy. For AWS S3 and Azure File non-single file source, the list operation doesn’t return full properties of objects and files. To preserve full properties, AzCopy needs to send one additional request per object or file. (default true)

Flags Applying to All Commands:
–cap-mbps float Caps the transfer rate, in megabits per second. Moment-by-moment throughput might vary slightly from the cap. If this option is set to zero, or it is omitted, the throughput isn’t capped.
–log-level string Define the log verbosity for the log file, available levels: INFO(all requests/responses), WARNING(slow responses), ERROR(only failed requests), and NONE(no output logs). (default ‘INFO’). (default “INFO”)
–output-level string Define the output verbosity. Available levels: essential, quiet. (default “default”)
–output-type string Format of the command’s output. The choices include: text, json. The default value is ‘text’. (default “text”)
–skip-version-check Do not perform the version check at startup. Intended for automation scenarios & airgapped use.
–trusted-microsoft-suffixes string Specifies additional domain suffixes where Azure Active Directory login tokens may be sent. The default is ‘*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net’. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.

 

 

https://ecapitaladvisors.com/blog/implementing-email-notifications-in-a-day/

 

Product:

Microsoft SQL server 2016 database

Issue:

How can I let the user select which rows should be shown in the view?

Solution:

This is not possible directly in a SQL view, because a view cannot take parameters.

https://www.mssqltips.com/sqlservertip/5147/limitations-when-working-with-sql-server-views/

A workaround is to have a table that the user updates with the value they want to filter on, and then join on that table in the view and filter on that particular value.

If you want the ID column to match a PERIOD value, create a parameter table first:

CREATE TABLE [admin].[Parameter](
    [key_Parameter] [int] IDENTITY(1,1) NOT NULL,
    [Period] [nvarchar](200) NULL
) ON [PRIMARY]

Add the value to select on into the table.

Then create a view – you have to update the view code below to reflect your own table and column names:

CREATE VIEW [DM].[Konto-Period]
AS
SELECT
    a.Kontonummer,
    a.ID
FROM DM.kontotabell a
JOIN [admin].[Parameter] b ON a.[ID] = b.[Period]
-- WHERE a.[ID] = b.[Period]

When the view is queried, it will filter against the current value in the parameter table.

Then you have to provide a way for the end user to update the parameter table, for example with a small script as sketched below.
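A minimal PowerShell sketch of such an update (assuming the SqlServer module is installed; the server, database, and period value are placeholders):

# Update the parameter table with the period the view should filter on
Import-Module SqlServer
$period = '202312'   # the value the user wants to filter on
Invoke-Sqlcmd -ServerInstance 'yoursqlserver' -Database 'yourdatabase' `
    -Query "UPDATE [admin].[Parameter] SET [Period] = '$period' WHERE [key_Parameter] = 1"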

More Information:

https://www.sqltutorial.org/sql-date-functions/sql-convert-string-to-date-functions/ 

https://learn.microsoft.com/en-us/sql/relational-databases/stored-procedures/specify-parameters?view=sql-server-ver16 

https://learn.microsoft.com/en-us/sql/relational-databases/tables/use-table-valued-parameters-database-engine?view=sql-server-ver16

Product:

Planning Analytics 2.0.9.13

Problem:

How do I log user logins to TM1?

Solution:

Create a file named tm1s-log.properties in the same folder as tm1s.cfg, with this content:

log4j.logger.TM1=INFO, S1
log4j.logger.TM1.Lock=ERROR, S1
log4j.appender.S1=org.apache.log4j.SharedMemoryAppender
log4j.appender.S1.MemorySize=10 MB
log4j.appender.S1.MaxFileSize=100 MB
log4j.appender.S1.MaxBackupIndex=20
log4j.appender.S1.TimeZone=local

log4j.logger.TM1.Login=DEBUG, S2
log4j.additivity.TM1.Login=false
log4j.appender.S2=org.apache.log4j.SharedMemoryAppender
log4j.appender.S2.File=TM1login.log
log4j.appender.S2.MaxFileSize=100 MB
log4j.appender.S2.MaxBackupIndex=20
log4j.appender.S2.TimeZone=local

This will create a TM1login.log file in your log folder. It will contain the user names when IntegratedSecurityMode=1 is used.

If it is a CAM user, you may have to read the }TM1_DefaultDisplayValue attribute in the }Clients dimension to translate the CAM ID to a readable name.

 

 

Then you can use this TI code to load the log file into a cube:

https://github.com/ykud/sample_tm1_tis/blob/main/SYS%20TM1Login%20Update.pro
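If you only want a quick look at recent logins without loading a cube, a small PowerShell sketch like this can be used (the log folder path is an example – use your own TM1 log folder):

# Show the last 50 lines of the TM1 login log
Get-Content 'D:\TM1Data\Logs\TM1login.log' -Tail 50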

More Information:

https://exploringtm1.com/using-tm1-log-files/

https://blog.octanesolutions.com.au/changing-password-in-multiple-tm1-instances

https://code.cubewise.com/blog/ibm-tm1-and-planning-analytics-cheatsheet

https://everanalytics.wordpress.com/2021/02/17/tm1-rest-api-example/

https://github.com/cubewise-code/tm1py/issues/393

https://github.com/cubewise-code/tm1py-samples/blob/master/Administration/list_tm1_users

Product:

Planning analytics 2.0.9
Microsoft Windows server

Problem:

What ports are used by PAL?

Solution:

 

Check this article from IBM: https://www.ibm.com/support/pages/what-are-port-numbers-used-ibm-planning-analytics-products

Ports used by Planning Analytics
5495   The unsecured TCP/IP port number on which the Admin Server listens for client requests (if allowed in Cognos Configuration).

 

5498   The secured TCP/IP port number on which all TM1 components communicate with the Cognos TM1 Admin Server using Transport Layer Security (TLS).

 

5895     Admin Server to REST API unsecured communication port number (tm1AdminHTTPPortNumber). Specifies the HTTP port number that is used by TM1 Admin Server to communicate with the TM1 REST API for unsecured communication (if allowed). The default value is 5895. This default value cannot be changed using IBM Cognos Configuration. For more information, see “Appendix 1: TM1 Admin Host” in the TM1 REST API documentation.

 

5898   Admin Server to REST API secured communication port number (tm1AdminHTTPSPortNumber). Specifies the HTTPS port number that is used by TM1 Admin Server to communicate with the TM1 REST API for secured (SSL) communication. The default value is 5898. This value cannot be changed using IBM Cognos Configuration.
For more information, see “Appendix 1: TM1 Admin Host” in the TM1 REST API documentation.

 

<portNumber> The port on which the TM1 server runs. This parameter is used to distinguish multiple TM1 servers running on the same computer. Valid port values fall between 5000 and 65535. The default value is 12345 (already used by Planning Sample). This parameter is to be set in the Tm1s.cfg server configuration file.
12345  Default portNumber for Planning Sample
12346  Default portNumber for SData
45557  Default portNumber for GO_New_Stores
12347  Default portNumber for PData
5333   Default portNumber for Proven_Techniques
44321  Default portNumber for GO_scorecards
50909  Default portNumber for 24Retail
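To verify from a client machine that a TM1 instance is reachable through the firewall, you can test the ports with PowerShell (a quick sketch – replace the host name and port numbers with your own values):

# Test the secured TM1 Admin Server port and a TM1 instance port
Test-NetConnection -ComputerName 'tm1server.domain.com' -Port 5498
Test-NetConnection -ComputerName 'tm1server.domain.com' -Port 12345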

 

<HTTPPortNumber> Port number on which the TM1 Server listens for incoming HTTP(S) requests. The IBM Planning Analytics TM1 Server services the REST API using this HTTP(S) channel. The server accepts either standard HTTP or SSL secured HTTPS connections depending on the UseSSL parameter: If UseSSL is set to T, switching the use of SSL on, then the server will accept only HTTPS connections, if UseSSL is set to F then the server will accept unsecured HTTP connections. If HTTPPortNumber is not defined in your tm1s.cfg file, then port number “5001” will be assigned automatically. This parameter is to be set in the Tm1s.cfg server configuration file.
12354  Default HTTPportNumber for Planning Sample
8010   Default HTTPportNumber for SData
5010   Default HTTPportNumber for GO_New_Stores
8011   Default HTTPportNumber for PData
5011   Default HTTPportNumber for Proven_Techniques
44312  Default HTTPportNumber for GO_scorecards
52670  Default HTTPportNumber for 24Retail

 

<ClientMessagePortNumber>  TM1 Client Message port number. This port number establishes a secondary port for client progress messages to use when a lengthy operation is waiting to be canceled. This parameter is to be set in the Tm1s.cfg server configuration file. The default value is blank. By default, this port number is automatically and dynamically assigned when the TM1 server starts. You do not have to set ClientMessagePortNumber to a specific number unless firewalls or other network issues require the listener port to be a well-known number.
CAUTION: If you choose to set a specific value for the ClientMessagePortNumber parameter, instead of having it dynamically assigned, be sure to assign unique port numbers for all the TM1 server and client message ports you are using. If you have two servers running on the same machine using the same port number, the message activity may cause a system conflict or hang.
17469  Default ClientMessagePortNumber for 24Retail

 

<LDAPPort>    Port that IBM TM1 Server uses to bind to an LDAP server. It is used if PasswordSource=LDAP in tm1s.cfg. The default LDAPPort is 389 (unsecured). Usually, in production, secured port 636 is used instead (LDAPS).

 

9510    Default port for both TM1 Application Server (depending on “IBM Cognos TM1” service) and IBM Planning Analytics Spreadsheet Services (the new TM1 Web that depends on “IBM Planning Analytics Spreadsheet Services” service). If both services are still needed on the same machine, then this default port has to be changed for one of them, and TM1 Application Web (pmpsvc) may have to be reconfigured to connect to TM1 Web. Follow this document to achieve this: “How to Configure TM1 Application Web to connect to TM1 Web since 2.0.9.2 ? (IBM Planning Analytics Spreadsheet Services)

 

53      Docker daemon port.

 

80         Planning Analytics Workspace’s PAGatewayHTTPPort (to be defined in /config/paw.ps1). This is an HTTP port that is mapped to the host by pa-gateway. The default value is 80.

 

443     Planning Analytics Workspace’s PAGatewayHTTPSPort (to be defined in /config/paw.ps1). This is an HTTPS port that is mapped to the host by pa-gateway. The default value is 443.

 

9012     To use IBM Planning Analytics Administration on Planning Analytics Workspace Local, you install and configure the Planning Analytics Administration agent wherever you install IBM TM1 Server. The default port of the Planning Analytics Administration agent is 9012. This port cannot be easily changed because it is hardcoded in the docker image of the container that is accessing the Administration Agent.

 

8888     Default admintool port. By default, the Planning Analytics Workspace administration tool is accessible on http://127.0.0.1:8888.
In IBM Planning Analytics Workspace Local version 2.0.44 or later, if port 8888 is not free, you can configure Planning Analytics Workspace Local to access the Planning Analytics Workspace administration tool remotely on Windows Server. For more information, see “Access the Planning Analytics Workspace administration tool remotely on Windows Server“.
If you can’t run a browser on the localhost interface, you can configure Planning Analytics Workspace Local to access the Planning Analytics Workspace administration tool on another IP address. For more information, see “Access the Planning Analytics Workspace administration tool remotely on Linux

 

 

More Information:

https://www.ibm.com/support/pages/blank-white-page-rendered-while-opening-websheets-within-planning-analytics-workspace 

https://www.ibm.com/support/pages/node/6985967

Product:

Azure Data Factory

Issue:

How do I connect to a SQL private endpoint with a managed identity?

Solution:

In your Azure subscription, ensure that both ADF and the SQL server are connected to the same virtual network/subnet; then there should not be any firewalls that need to be opened between them.

On your Azure SQL server, set Networking – Public network access to Disabled. Ensure you have created private endpoints for your Azure SQL resource.

Set up a managed identity for your Azure Data Factory. This creates an identity that other Azure resources can use to grant it access.

    • In the Azure portal, go to your Azure Data Factory resource.
    • Under the "Settings" section, select "Managed identity".
    • Enable the system-assigned managed identity for your Data Factory.

Grant the Managed Identity access to the SQL Azure Database:

    • Go to your SQL Azure Database resource.
    • Under the “Settings” section, select “Access control (IAM)”.
    • Click on “+ Add” and add a role assignment.
    • Select the appropriate role (e.g., “Contributor” or “SQL Server Contributor”) and search for the name of your Azure Data Factory.
    • Select the Data Factory name from the search results and click “Save”.
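The same role assignment can also be scripted with Azure PowerShell, if you prefer that over the portal (a sketch assuming the Az modules are installed; names and IDs are placeholders):

# Find the data factory's managed identity and grant it a role on the SQL server
$adf = Get-AzADServicePrincipal -DisplayName 'adf-name'
New-AzRoleAssignment -ObjectId $adf.Id -RoleDefinitionName 'Contributor' `
    -Scope '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<sql-name>'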

You can also grant the ADF managed identity access inside the SQL server by running these commands in SSMS:

-- run in master database

CREATE LOGIN [adf-name] FROM EXTERNAL PROVIDER

CREATE USER [adf-name] FROM LOGIN [adf-name] WITH DEFAULT_SCHEMA=[dbo]

-- run in sql database

CREATE USER [adf-name] FROM LOGIN [adf-name]

ALTER ROLE [db_owner] ADD MEMBER [adf-name]

 

Configure the Linked Service in Azure Data Factory:

    • Open your Azure Data Factory resource in the Azure portal.
    • Click on launch Studio
    • Go to the “manage” section.
    • Click on the “Linked service” tab and select “New”.
    • Choose the appropriate SQL Server connector (e.g., “Azure SQL Database”).
    • Provide the required connection details such as server name, database name, authentication type like:
      integrationRuntime2 (Managed Virtual Network)
      connection string
      Account selection method – Enter manually.
      Enter SQL server name (Fully qualified domain name) like: sql-name.database.windows.net
      Enter database name
      For authentication type, under "Managed private endpoint", select System Assigned Managed Identity – then all values should come up automatically.
    • Click on “Test Connection” to validate the connection.

 

Use the Linked Service in Azure Data Factory:

      • Now you can use the configured Linked Service to connect to the Azure SQL Database private endpoint in your datasets, that is, in the integration pipelines within Azure Data Factory.

By following these steps, you’ll be able to establish a connection to a SQL Azure Database private endpoint from Azure Data Factory using a managed identity.

More information:

https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql 

https://techcommunity.microsoft.com/t5/azure-sql-blog/private-endpoints-for-azure-sql-managed-instance/ba-p/3782015 

https://learn.microsoft.com/en-us/azure/data-factory/tutorial-copy-data-portal-private

Product:

Microsoft Azure Storage Account

Issue:

When doing the Microsoft learn section:

https://learn.microsoft.com/en-us/training/modules/connect-an-app-to-azure-storage/9-initialize-the-storage-account-model?pivots=csharp

and you try to list the content of a blob container, you get this message:

There are no credentials provided in your command and environment, we will query for account key for your storage account.
It is recommended to provide –connection-string, –account-key or –sas-token in your command as credentials.

You also can add `–auth-mode login` in your command to use Azure Active Directory (Azure AD) for authorization if your login account is assigned required RBAC roles.
For more information about RBAC roles in storage, visit https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-cli.

In addition, setting the corresponding environment variables can avoid inputting credentials in your command. Please use –help to get more information about environment variable usage.

 

Solution:

The photos blob container has been created, which you can check directly in the storage account in your subscription. When you try to verify it with the command:

az storage container list \
--account-name <name>

you get the warning above. The <name> should be replaced with your unique storage account name.

The result returned is verbose, but if you find the lines below in the output, the container was created successfully. Keep in mind that you need to enter the CLI command on one line.

 },
"immutableStorageWithVersioningEnabled": false,
"metadata": null,
"name": "photos",
"properties": {

As this message is a warning and not an error, you can add --only-show-errors to suppress warnings, like this:

az storage container list --only-show-errors --account-key  <your key> --account-name <your account>

The --auth-mode key option will be deprecated in the future, so try to use another method.

As this warning message is new, it may break your scripts if they do not expect the message.

You can get more information by adding --debug to the command, like:

az storage container list --debug --account-name  <name>
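As the warning text also says, you can put the credentials in environment variables so they do not have to be typed in every command. A sketch in PowerShell (replace the placeholders with your own values):

# Set the storage credentials once for the session, then run the command without them
$env:AZURE_STORAGE_ACCOUNT = '<your account>'
$env:AZURE_STORAGE_KEY = '<your key>'
az storage container list --only-show-errors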

 

More Information:

https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-cli 

 

When you don't specify the authentication type, it will try to get the access key of the storage account. This requires the Microsoft.Storage/storageAccounts/listkeys/action permission. If you have the Contributor role on the storage account, you have the required permission.

--auth-mode login means it will use AAD auth to connect to the storage. You can use one of the built-in roles to access the storage (see documentation):

  • Storage Table Data Contributor
  • Storage Table Data Reader

When using AAD Auth, you could also disable access key authentication.

There is a good article related to the RBAC management and data plane model: Assign an Azure role for access to blob data.

Product:
Microsoft Azure File share

Issue:

How do I use POSTMAN to upload a file to an Azure file share with the REST API?

Solution:

Download POSTMAN program from https://www.postman.com/downloads/

Go into your Azure subscription and to your storage account to get a shared access signature (SAS); this is a URI that grants restricted access rights to Azure Storage resources.

As this is a file share, you should select Allowed services = File and Allowed resource types = Object.

Set the end date for expiry to a year from now.

Leave the Allowed IP addresses blank to allow any computer to access the account. (Keep the DNS and firewall setup so that only computers from your company can reach the Azure area.)

Allowed protocols should be HTTPS only.

Click on Generate SAS and connection string. Copy these strings and save them in Notepad; you cannot display them again after you have left this Azure page.

The connection string contains all the info you need. In Notepad, split it up so you keep only the part you need for the file share: copy the text after FileEndpoint=.

You should get something like this:

https://xyz.file.core.windows.net/?sv=2022-11-02&ss=f&srt=o&sp=rwdlc&se=2025-07-31T22:45:45Z&st=2023-05-30T14:45:45Z&spr=https&sig=xyzsecretkeyxyz

sv= is the REST API version; you may need to add this value as a header, like x-ms-version: 2022-11-02

se= is the end date for the connection key to work, like 2025-07-31T22:45:45Z

st= is the start date for the connection key to work, like 2023-05-30T14:45:45Z

sig= is the key value, that gives you full access to the area. Do not share it with others.

sp= is what kind of rights you have given, e.g. read write delete list create.

In your storage account file share, you may have created some subfolders like testfiles. Click on File Shares to find the name and any sub-folders you have underneath it. Click on file share and click on browse to find the folder name where you have Authentication method: Access key. This is the folder you can access.

Update your url to contain the path and the filename of the file you want to create. Like https://xyz.file.core.windows.net/testfiles/test.txt?sv=2022-11-02……

Start POSTMAN. Do not log in; skip that. Create a new HTTP request.

Select PUT and paste in your URL. POSTMAN will then interpret your values and list them as parameters.

With a file share, you must do the REST API call in two steps: first create the file with the correct size, and then do a second call to fill the file with data. This is different from blob storage, where you can do it in one REST API call.

In POSTMAN go to the Headers tab and add two keys:

x-ms-type = file

x-ms-content-length = 1

Here we set the length of the file to 1 character (1 byte). (This works as long as you only use a-z characters with UTF-8 encoding, where each character is one byte.)

Click on the SEND button, and if all is correct you should get: 201 Created.

Browse to your Azure file storage and check that the file was created, with a size of 1.

To write to the file, add these two keys in the Headers tab:

x-ms-write = update

x-ms-range = bytes=0-0

The x-ms-range should always start at 0 and end at one less than the total number of bytes in your file. If the file is 42 bytes, the value should be bytes=0-41.

Important: in the Params tab you must add the key below (this activates the range function; otherwise the x-ms-range header is not used):

comp = range

Then we need to add some data in POSTMAN to write to the file: go to the Body tab, select raw – Text, and enter a letter.

The text must be the same size as the file you have created; the file size and the text you put into the file must match exactly to the byte.

Click on SEND, and you should get a Status: 201 Created if all is fine.
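The same two calls can also be scripted outside POSTMAN, for example with PowerShell. A minimal sketch that mirrors the headers above (the URL and SAS token are placeholders, and the file content is one byte):

# Step 1: create the (empty) file with the correct size. Step 2: write the data into that range.
$sas  = '?sv=2022-11-02&ss=f&srt=o&sp=rwdlc&se=...&sig=...'   # your SAS token
$uri  = "https://xyz.file.core.windows.net/testfiles/test.txt$sas"
$data = [System.Text.Encoding]::UTF8.GetBytes('A')

Invoke-RestMethod -Method Put -Uri $uri `
    -Headers @{ 'x-ms-type' = 'file'; 'x-ms-content-length' = "$($data.Length)"; 'x-ms-version' = '2022-11-02' }

Invoke-RestMethod -Method Put -Uri "$uri&comp=range" `
    -Headers @{ 'x-ms-write' = 'update'; 'x-ms-range' = "bytes=0-$($data.Length - 1)"; 'x-ms-version' = '2022-11-02' } `
    -Body $data -ContentType 'application/octet-stream'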

Common errors you can see in POSTMAN are:

Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. Authentication scheme Bearer for files is not supported in this version.

This is solved by adding the correct version, like: x-ms-version: 2022-11-02

You should also try to have headers like:

Authorization: Bearer
x-ms-type:file
<Error><Code>UnsupportedHttpVerb</Code><Message>The resource doesn't support specified Http Verb.

This is solved by using PUT instead of POST.

<Error><Code>ShareNotFound</Code><Message>The specified share does not exist.

This is solved by entering the correct share name, path, and file name in the URL.

HTTP Error 400.  The request host name is invalid.

Solved by entering the correct host name.

HTTP Error 411. The request must be chunked or have a content length

Solved by adding the Content-Length header.

HTTP Error 404 The specified resource does not exist.

HTTP Error 400 Value for one of the query parameters specified in the request URI is invalid

HTTP Error 404 The specified parent path does not exist
Solved by entering the correct path to the files in Azure.

One of the HTTP headers specified in the request is not supported.
<HeaderName>x-ms-range</HeaderName>

Solved by adding the comp = range key in the Params tab.

An HTTP header that's mandatory for this request is not specified.
<HeaderName>x-ms-content-length</HeaderName>

Solved by adding the key x-ms-content-length.

An HTTP header that's mandatory for this request is not specified.
<HeaderName>x-ms-write</HeaderName>

Solved by adding the key x-ms-write.

The value for one of the HTTP headers is not in the correct format.
<HeaderName>Content-Length</HeaderName>
<HeaderValue>1</HeaderValue>
This is solved by entering the correct value in the x-ms-range header, or by adding comp = range in the Params tab.

 

More Information:

https://www.mikaelsand.se/2019/11/simple-how-to-upload-a-file-to-azure-storage-using-rest-api/
https://www.petecodes.co.uk/uploading-files-to-azure-blob-storage-using-the-rest-api-and-postman/
https://www.serverless360.com/blog/azure-blob-storage-vs-file-storage
https://raaviblog.com/how-to-use-azure-blob-storage-service-rest-api-operations-using-postman/
http://www.mikaelsand.se/2020/06/oauth-with-azure-the-just-make-it-work-edition/

https://azureops.org/articles/connect-azure-sql-from-data-factory-using-managed-identity/ 
https://www.datahai.co.uk/power-bi/connecting-power-bi-to-azure-sql-database-using-private-endpoints/ 

https://en.wikipedia.org/wiki/Spy_vs._Spy

 

Product:
Cognos Controller 10.4.2
Microsoft Windows Server 2022

Issue:

Suddenly users cannot log in to Cognos Controller.

They get an error like this:

System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.ArgumentNullException: Value cannot be null.
Parameter name: uriString
at System.Uri..ctor(String uriString)
at System.Web.Services.Protocols.WebClientProtocol.set_Url(String value)
at Cognos.Controller.Common.CRNBridge.CRNBridge.set_EndPointURL(String sURL)
at Cognos.Controller.Proxy.CCRWS.GetUserInfo(String sGuid, String sUser, String passportId)
--- End of inner exception stack trace ---
at Cognos.Controller.Forms.Common.Main.DoLoginCognos8(Form& frm, Boolean runtimCheck)
at CCR.AppContext.DoLogin()
at CCR.AppContext.Login()

Solution:

Restart the IIS service on the Cognos Controller server.
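The restart can be done from an elevated PowerShell prompt on the Controller server, and at the same time you can check that the IBM Cognos services are up (a small sketch):

# Restart IIS and list the status of the IBM Cognos services
iisreset /restart
Get-Service -DisplayName 'IBM Cognos*' | Select-Object Status, DisplayName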

 

Steps to check the issue:

Log in to the Cognos Controller server (via remote desktop).

Check that all IBM Cognos services are running.

Start a web browser and browse to CA11 – does it work?

http://controllerserver.domain.com/ibmcognos

Start IIS Manager and check that all application pools are running.

Go to Component Services from Control Panel – Administrative Tools. Expand the COM+ applications.

Ensure that "IBM Cognos Controller Consolidation" is running. If not, restart IIS from inside the IIS Manager program.

Check the Windows event log for any error messages that can explain why any of the above processes have stopped.

More Information:

https://blog.ittoby.com/2014/07/why-schannel-eventid-36888-36874-occurs.html 

https://allthingscognos.wordpress.com/category/cognos-controller/

Product:

Planning Analytics 2.0.9.16

Microsoft Windows 2019 server

Issue:

How do I change TM1 Application Web (pmpsvc) to use CAM SSO security when it has been set up to use native TM1 security before?

Solution:

This assumes you only have one TM1 application connected to TM1 App Web (common in new installations), like Planning Sample in our example.

Stop the “IBM Cognos TM1” service (pmpsvc web server).

Open the file fpmsvc_config.xml from D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\configuration folder.

Remove the Planning Sample server line from inside the <servers> section.

So it looks like this:

<servers>
</servers>
</admin_host>
</tm1>

Save the file.

Change your Planning Sample tm1s.cfg file to have the correct values, like below:

IntegratedSecurityMode=5

ServerCAMURI=http://cognosserver.domain.com:9300/p2pd/servlet/dispatch
ClientCAMURI=http://cognosserver.domain.com:80/ibmcognos/bi/v1/disp

Save the tm1s.cfg file and restart the Planning Sample service.

Test logging in to Planning Sample in TM1 Architect; it should work with CAM SSO if all is correct.

Then browse to your tm1 app web on:

http://planninganalyticsserver.domain.com:9510/pmpsvc

If all works well, you should get to the configuration page where you can select a TM1 instance. Select a TM1 instance that will be running and that uses CAM security. All TM1 applications that are used inside TM1 Application Web (Contributor) must have the same security setting; the most common is IntegratedSecurityMode=5.

Save the settings and you should get into the IBM Cognos Tm1 Applications portal. If you need to edit the configuration later, click on the tools icon.

On the Cognos TM1 Applications Configuration web page, click the Edit link below the selected server names.

Then fill out the fields for any change needed.

Admin Host = the server name of the Windows server where the TM1 Admin service is running (normally the TM1 server itself).

Server Name = the selected TM1 instance that the user will first be authenticated against. It should be using CAM SSO, as described above.

Cognos BI Gateway URI = should point to the gateway, like http://cognosserver.domain.com:80/ibmcognos/bi/v1/disp

Cognos BI Dispatcher URI = should point to the CA11 server on port 9300, like http://cognosserver.domain.com:9300/p2pd/servlet/dispatch

Click OK to save; there should not be any errors. If there are errors, check in a browser on the server whether you can browse to the above URLs for Cognos BI.

 

If you stop the Planning Sample application and change it to IntegratedSecurityMode=5 without changing the fpmsvc_config.xml file, you will get a TM1 login dialog in TM1 App Web when you connect, and you cannot log in. This is because pmpsvc is still set up for native security. You would need to change Planning Sample back to IntegratedSecurityMode=1 to be able to log in to TM1 App Web again.

To clear a TM1 application from the TM1 App Web connections, so you can work with the TM1 instance in TM1 Web, you need to run this TI process:

}tp_admin_delete_all

 

More Information:

https://www.ibm.com/docs/sr/planning-analytics/2.0.0?topic=web-configuring-tm1-application 

https://allthingscognos.wordpress.com/2014/08/26/configuring-performance-modeller-and-tm1-web-with-cam-security-for-tm1-10-2-n/ 

You can edit the pmpsvc_path\WEB-INF\configuration\log4j.properties file to change the log level to INFO or DEBUG and get a lot more info in WEB-INF\logs\pmpsvc.log, or check the folder D:\Program Files\ibm\cognos\tm1_64\wlp\usr\servers\tm1\logs for messages.log files.

https://www.ibm.com/support/pages/how-manually-reset-deployed-tm1-applications 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=mctaip-resetting-application-in-portal 

https://www.ibm.com/docs/en/planning-analytics/2.0.0?topic=security-standard-cognos-tm1-authentication