Product:
Microsoft Azure Blob storage

Issue:

The Azure Data Factory (ADF) does not run the job.

Operation on target LookupFileNames failed: ErrorCode=MICredentialUnderSyncing,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The Managed Identity is not ready. This could happen if it is newly assigned or inactive for more than 90 days. The system is updating it now. Please try again after 10 minutes.,Source=Microsoft.DataTransfer.MsiStoreServiceClient,'Type=Microsoft.Rest.Azure.CloudException,Message=Acquire MI token from AAD failed with credential under syncing issue. ErrorCode: invalid_client…

Solution:

Wait ten minutes. If the ADF has not run anything for 90 days, the managed identity is deactivated ("turned off") and it takes some time for Microsoft to bring it up and running again.
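If the pipeline runs on a schedule, you can also let the failing activity retry itself after the sync window. A minimal sketch of an activity retry policy, assuming a Lookup activity named as in the error above (only the relevant properties are shown; retry and retryIntervalInSeconds are standard ADF activity policy settings, the values here are assumptions):

{
    "name": "LookupFileNames",
    "type": "Lookup",
    "policy": {
        "retry": 3,
        "retryIntervalInSeconds": 600
    }
}

With retryIntervalInSeconds set to 600, each retry waits the ten minutes the error message asks for.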

 

More Information:

https://learn.microsoft.com/en-us/azure/automation/troubleshoot/managed-identity 

https://docs.uipath.com/automation-cloud/automation-cloud/latest/admin-guide/azure-ad-integration

https://azure.status.microsoft/en-us/status/history/ 

https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory 

https://learn.microsoft.com/en-us/fabric/data-factory/connector-azure-blob-storage 

https://k21academy.com/microsoft-azure/data-engineer/connect-azure-data-lake-to-azure-data-factory-and-load-data/

 

Product:
Microsoft SQL Server

Issue:
How do you make a view that only lists the data rows for the last 3 years?

Solution:

You need to have a date column in your fact table. If the date column in your fact table is an int, you have to join it with a date conversion table or use CAST/CONVERT.

We have used a date table (DM.dimDate).

The key_dimDate column is an integer, and the date column is in the SQL date format. The date format makes it easy to compare against a date condition in SQL.

Create a SQL query similar to this:

SELECT a.[Customer]
,a.[Account]
,a.[Order]
,a.[key_dimDate]
,a.[key_dimVersion]
,a.[Amount]
,a.[startDate] AS [startDate]
,a.[endDate] AS [endDate]
,a.[IsActive] AS [IsActive]
FROM [DM].[facttable] a
INNER JOIN [DM].[dimDate] f ON a.[key_dimDate] = f.[key_dimDate]
WHERE 1=1
AND a.[IsActive] = 'Y'
-- AND DATEADD(year, -3, GETDATE()) < f.[Date] -- will list 3 years
AND DATEADD(quarter, -13, GETDATE()) < f.[Date] -- will list 39 months
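Since the question asks for a view, you can wrap the query in a CREATE VIEW statement. A minimal sketch, assuming the tables and columns above (the view name is hypothetical):

CREATE VIEW [DM].[v_facttable_last3years] AS
SELECT a.[Customer]
,a.[Account]
,a.[Order]
,a.[key_dimDate]
,a.[Amount]
FROM [DM].[facttable] a
INNER JOIN [DM].[dimDate] f ON a.[key_dimDate] = f.[key_dimDate]
WHERE a.[IsActive] = 'Y'
AND DATEADD(quarter, -13, GETDATE()) < f.[Date];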

 

If you get an error like Column 'id' in field list is ambiguous, you have forgotten to put the alias letter in front of all the column references in the SQL query.

If you are getting the error "Arithmetic overflow error converting expression to data type datetime.", it means that f.[Date] in the SQL above is an int, and you have to convert it to a date.

Using only CONVERT(DATETIME, [key_dimDate], 103) can give an overflow error, because SQL Server then interprets the integer as a number of days since 1900-01-01 rather than as a YYYYMMDD date.
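A minimal sketch of a safe conversion, assuming key_dimDate stores dates as YYYYMMDD integers: convert the int to an 8-character string first, then use style 112 (ISO yyyymmdd) to convert the string to a date:

SELECT CONVERT(date, CONVERT(varchar(8), a.[key_dimDate]), 112) AS [ConvertedDate]
FROM [DM].[facttable] a;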

Change the SQL to reflect your columns and tables.

 

More Information:

https://www.w3schools.com/sql/func_sqlserver_convert.asp 

https://www.sqlshack.com/sql-server-functions-for-converting-string-to-date/ 

https://www.w3schools.com/sql/func_sqlserver_dateadd.asp 

Let's say you need to add five months to the current date; use this:

SELECT * FROM YourTable
WHERE YourDate < DATEADD(month, 5, GETDATE())

The GETDATE() function returns the current date and time.

If you need to subtract time, just use a negative second parameter:

SELECT * FROM YourTable
WHERE YourDate < DATEADD(month, -5, GETDATE())

https://koz.tv/sql-query-for-todays-date-minus-year-month-day-or-minute/ 

https://www.mssqltips.com/sqlservertip/2509/add-and-subtract-dates-using-dateadd-in-sql-server/ 

https://www.sqlshack.com/how-to-add-or-subtract-dates-in-sql-server/ 

 

To see the SQL Server Agent job log for a time period, try this in SQL Server (it will not work in Azure SQL Database, which lacks SQL Server Agent):

SELECT
@@SERVERNAME AS 'Server', j.name AS 'Job Name',
jh.run_date AS 'Run Date', jh.run_status AS 'Job Status'
FROM msdb.dbo.sysjobs j
LEFT OUTER JOIN (
SELECT ROW_NUMBER() OVER(PARTITION BY jh.job_id ORDER BY jh.run_date DESC) AS row_num, jh.*
FROM msdb.dbo.sysjobhistory jh
WHERE
jh.step_id = 0 AND -- step 0 is the row for the job outcome
jh.run_date >= CONVERT(varchar(8), DATEADD(DAY, -7, GETDATE()), 112) AND -- run_date is an int in yyyymmdd format
jh.run_date <= CONVERT(varchar(8), GETDATE() + 1, 112)
-- ORDER BY jh.run_date DESC
) AS jh ON jh.job_id = j.job_id
WHERE j.enabled = 1 AND jh.run_status = 0 -- run_status 0 = failed, 1 = succeeded
ORDER BY j.name, jh.run_date;

 

https://www.w3schools.com/sql/sql_join.asp 

Here are the different types of JOINs in SQL (a small example follows the list):

  • (INNER) JOIN: Returns records that have matching values in both tables
  • LEFT (OUTER) JOIN: Returns all records from the left table, and the matched records from the right table
  • RIGHT (OUTER) JOIN: Returns all records from the right table, and the matched records from the left table
  • FULL (OUTER) JOIN: Returns all records when there is a match in either left or right table
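A minimal sketch of the difference between INNER JOIN and LEFT JOIN, using hypothetical Customers and Orders tables:

-- customers without orders get OrderID = NULL; an INNER JOIN would drop them
SELECT c.CustomerName, o.OrderID
FROM Customers c
LEFT JOIN Orders o ON o.CustomerID = c.CustomerID;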

Product:

Linux Mint
Microsoft Windows 10

Problem:

How do you make a bootable USB stick to install an OS on an old computer? The old computer only has a CD-ROM drive, and a modern OS does not fit on a CD; it needs several GB of space.

Solution:

Download the ISO you need, for example Linux Mint, from here:

https://www.linuxmint.com/download.php 

Download software to create the bootable USB drive, like:

https://etcher.balena.io/#download-etcher 

https://rufus.ie/en/ 

How to do it:

https://youtu.be/764JLB13GjE?si=Gby6iZeC79FGZLll 

Start the program balenaEtcher.

Select the ISO file to use.

Select the USB stick to use.

Click on Flash to create the USB media.

If you get an error, reboot your laptop and try again.

Ensure that the computer you are going to install Linux on is set up in its boot configuration to start from the USB stick before the hard drive.

https://www.zdnet.com/article/how-to-install-linux-on-an-old-laptop/ 

https://itsfoss.com/install-linux-mint/ 

 

For old computers with a 32-bit processor, you need to install the 32-bit version of Linux.

https://www.linuxmint.com/edition.php?id=308 

https://www.debugpoint.com/32-bit-linux-distributions/

 

What does the name of a Windows OS ISO file mean?

This post is not meant to list every possible filename; it only summarizes what others have written about the subject. The ISO codes may vary slightly between products and editions.

We believe that J_CPRA_X64FRE is Windows 10 Pro 64 bit.

The coding does not seem to be formally or comprehensively documented, but you can assemble hints from various scattered observations around the web.

J_CPRA_X64F looks like a short form designed to fit within the legacy 11-character disk label limit from the old FAT volume label rules. J is the Windows 10 release. CPRA is the Pro edition, where "C" is for "client" (as opposed to "server"), "PR" is for "Professional", and "A" is the variant of the Professional edition (most times there is only one). X64 is the CPU architecture. F is short for FRE, a free (released) build, as opposed to a CHK (checked) build used for debugging. (source: https://forums.whirlpool.net.au/archive/2468594)

The typical full version of an ISO file name is longer, including the language and region, and sometimes the target medium. For example, JM1_CCSA_X64FRE_EN-US_DV9 was the Windows 10 Technical Preview install DVD (dual layer, 8.5GB aka DVD9) and JM1_CCSA_X64FRE_EN-US_DV5 is a Windows 10 install DVD (single layer, 4.7GB, aka DVD5). EN-US is US English. X64 is for the x86 64-bit version. JM1 is a pre-release of “Redstone” (Windows 10); CCSA is the “Windows Technical Preview Edition.”

Other Windows 10 edition examples include:

CENA (Client, ENterprise, “A”)
CEDA (Client, EDucational, “A”)

You can see some historic product-to-volume-label mappings at https://support.microsoft.com/en-us/help/889713/how-to-determine-the-channel-that-your-copy-of-windows-server-2003-was

and also

Original CD/DVD Volume Labels for Windows

Some of the Windows 7 media labels can be found at "Get Windows 7 SP1-U Media Refresh MSDN/TechNet ISO (Download or Convert)" and "Official Windows 7 SP1 ISO from Digital River" on My Digital Life.

Windows 7 releases started with the letter “G”.

Windows 8 releases started with the letter “H”. (Windows 8 Enterprise x86 Volume Licensing ISO Leaked « My Digital Life)

Windows 10 uses the letter “J”. I would presume that “I” was skipped to avoid confusion with “1”.

https://www.quora.com/How-do-I-determine-what-version-of-Windows-installation-is-on-my-USB-drive-like-J_CPRA_X64F 

The two Windows 7 ISO file names can be explained like this:
[Image: annotated Windows 7 ISO file name; the numbered parts are explained below]

  1. Language of the OS. Always two letters (with one exception). Example: en, en-gb, cs, da, no, sv.
  2. Minor version build of the OS.
  3. Compile date of the OS (not of the ISO). Indicates YYMMDD-HHMM.
  4. Platform architecture and target. x86 = 32-bit 8086-based, x64 = 32-bit 8086-based with 64-bit AMD64 extensions. fre = Free, for end users. chk = Checked, debug version. Examples: x86fre, x64chk.
  5. SKU version. Examples: enterprise, enterprisen, professional, starter.
  6. Full language tag of the OS. Must match (1). Examples: en-us, en-gb, sv-se.
  7. Volume license identifier. Optional.
  8. Upgrade identifier. Optional.
  9. Original equipment manufacturer identifier. Optional.
  10. Matches (5).
  11. Volume label of the mounted ISO volume. Contains various codes to identify language, SKU, volume/OEM and media.

[Image: annotated MSDN/TechNet ISO file name; the numbered parts are explained below]

  1. Language of the OS. Always two letters (with one exception). Example: en, en-gb, cs, da, no, sv.
  2. Product name. Examples: windows_7, windows_8, windows_server_2012.
  3. SKU version. Examples: enterprise, enterprisen, professional, starter.
  4. Integrated service pack level. Optional.
  5. Platform architecture and target. x86 = 32-bit 8086-based, x64 = 32-bit 8086-based with 64-bit AMD64 extensions. chk = Checked, debug version. Examples: x86, x64chk.
  6. Storage media
  7. Update flag. The ISO was updated with some critical patch.
  8. Unique MSDN/Technet image number ID.

[Image: annotated physical installation media label; the numbered parts are explained below]

  1. Language of the OS. Always two letters. Example: en, cs, da, no, sv.
  2. ?
  3. Single or double layer DVD. DVD5 = 4.7GB, DVD9 = 8.5GB.
  4. Product name. Win = Windows.
  5. SKU version. Examples: Pro, Pro KN, Ent.
  6. "With". Means it is an integrated installation with a service pack. Combined with (7) for full service pack level information. Optional.
  7. Integrated service pack level. Optional.
  8. Platform target. 32BIT = x86, 64BIT = x64.
  9. Language of the OS.
  10. Updated ISO. Base ISO with added KB update.
  11. Microsoft Licensing Fulfillment
  12. Microsoft Part Number as it appears on the physical installation media.

https://www.betaarchive.com/forum/viewtopic.php?t=26026 

More information:

https://itsfoss.com/watch-tv-channels-on-ubuntu-or-linux-mint/ 

Downloads

https://kodi.tv/ 

https://www.tecmint.com/linux-media-center-distros/ 

https://www.computerhope.com/history/processor.htm 

Have fun!

Product:
Microsoft Power BI service (portal)

Problem:
How do you import a CSV file into a dataflow from a SharePoint area?

If you use the SharePoint connection, you may get an error like this:

an exception occurred: DataSource.Error: Microsoft.Mashup.Engine1.Library.Resources.HttpResource: Request failed:
OData Version: 3 and 4, Error: The remote server returned an error: (404) Not Found. (Not Found)
OData Version: 4, Error: The remote server returned an error: (404) Not Found. (Not Found)

Solution:

There are different solutions to this issue; the one that works for you can depend on how your company has set up security.

Go to your power bi portal https://app.powerbi.com/home?experience=power-bi

Open up your workspace area where you have administrator rights.

Click NEW DATAFLOW

Click Add new tables

Select CSV file, not SharePoint file.

From the SharePoint folder where you have stored your file, copy the link.

Then edit the link in Notepad: remove /:x:/r/ between .sharepoint.com/ and /teams, and also remove everything after the ?.

You then get a "clean" URL path that will work, like this (replace with your company info):

https://company.sharepoint.com/teams/powerbiworkspacename/foldername/General/enkel.csv

Paste the adjusted URL into the "Link to file" field.

You may need to set the data gateway to "none".

Select "Organizational account" as the authentication kind. If your SharePoint is part of your company, you will be prompted with your company Azure login, unless you are already logged in to Azure in your web browser.

If all works, you get a preview of the file. Change the file origin to ensure that special characters in the file are handled correctly; UTF-8 (code page 65001, which appears in the generated code below) handles Swedish characters.

Click on Transform data.

You now get a view similar to the Power BI Desktop transform experience, where you can change the data before it is loaded into the cache.

The generated code is similar to this:

Csv.Document(Web.Contents("https://company.sharepoint.com/teams/powerbiworkspacename/foldername/General/enkel.csv"), [Delimiter = ",", Columns = 2, Encoding = 65001, QuoteStyle = QuoteStyle.None])

More Information:

https://www.linkedin.com/pulse/analytics-tips-connecting-data-from-sharepoint-folder-diane-zhu

https://learn.microsoft.com/en-us/power-query/connectors/sharepoint-folder 

https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-create 

https://www.phdata.io/blog/how-and-when-to-use-dataflows-in-power-bi/ 

Dataflow Gen2 (Fabric) is indeed an enhancement over the original Dataflow. One of the key improvements is the ability to separate your extract, transform, load (ETL) logic from the destination storage, providing more flexibility. Gen2 also comes with a more streamlined authoring experience and improved performance.

For a more detailed comparison, you can refer to this link:

Differences between Dataflow Gen1 and Dataflow Gen2 – Microsoft Fabric | Microsoft Learn

A datamart primarily uses dataflow technology to import data into an Azure SQL Database. The datamart then automatically generates and links a dataset. You can then create dataflows that connect to the datamart, which can be used for DirectQuery or import if the enhanced compute engine is enabled.

For a more detailed comparison, you can refer to this link:

Introduction to datamarts – Power BI | Microsoft Learn

Product:
Microsoft Azure

Issue:
You get a mail saying that some of your subscription roles expire in a few days, like "Your Storage File Data Privileged Contributor role in the prod subscription will expire in 1 day(s)".

Solution:

The exact steps depend on how your Azure accounts are set up and handled by your organisation, but it should be similar to this:

(you need to be an owner of the subscription to be able to extend the role for yourself)

Go to Azure: Home - Microsoft Azure

Search for PIM. Click on Privileged Identity Management.

Expand Tasks and click on My roles.

Go to Azure resources, and for the subscriptions you need to extend, go to the Owner line and activate yourself as owner.

Click on Activate. After someone has approved, go to the next step.

Click on Privileged Identity Management to get back to the start page of PIM.

Now click on Azure resources under Manage, and in the drop-down on the new page select the subscription you want to handle.

Click on the Manage button after you have selected a subscription.

Click on Assignments under Manage. Then browse the lines to the right to find which roles you can extend, and click on the Extend link.

Set a date at least a year in the future and press Save. Repeat for all roles you need to extend.

If you do not find your role, check under the Expired assignments tab to see if it is there; you can extend it from there.

Repeat for all subscriptions you have in Azure.

 

More Information:

Renew Azure resource role assignments in PIM – Microsoft Entra ID Governance | Microsoft Learn

Extend or renew PIM for groups assignments – Microsoft Entra ID Governance | Microsoft Learn

What is Privileged Identity Management? – Microsoft Entra ID Governance | Microsoft Learn

 

Product:
Microsoft Windows Server 2019

Issue:

Your security scan software reports that you have a cipher issue on the Windows server.

TLS/SSL Birthday attacks on 64-bit block ciphers (SWEET32)
Negotiated with the following insecure cipher suites:
* TLS 1.2 ciphers:
* TLS_RSA_WITH_3DES_EDE_CBC_SHA

Configure the server to disable support for the 3DES suite.

Solution:

Check that the application software you use on the server does not need this cipher.

Log in to the Windows server as a local administrator.

Then run this PowerShell command to remove support for the cipher:

Disable-TlsCipherSuite -Name 'TLS_RSA_WITH_3DES_EDE_CBC_SHA'

 

Reboot the Windows server for the change to take effect.
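If an application on the server later turns out to need the suite, it can be re-enabled with the counterpart cmdlet (a sketch; position 0 places the suite first in the priority list):

# re-enable the 3DES suite at the top of the cipher suite priority list
Enable-TlsCipherSuite -Name 'TLS_RSA_WITH_3DES_EDE_CBC_SHA' -Position 0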

Get a list of ciphers that use 3DES:

Get-TlsCipherSuite -Name "3DES"

 

Get a list of all ciphers on the server:

Get-TlsCipherSuite | Format-Table Name

Get a list of elliptic curves:

certutil.exe -displayEccCurve

 

More Information:

A cipher suite is a set of cryptographic algorithms. The schannel SSP implementation of the TLS/SSL protocols use algorithms from a cipher suite to create keys and encrypt information. A cipher suite specifies one algorithm for each of the following tasks:

  • Key exchange
  • Bulk encryption
  • Message authentication

Key exchange algorithms protect information required to create shared keys. These algorithms are asymmetric (public key algorithms) and perform well for relatively small amounts of data.

Bulk encryption algorithms encrypt messages exchanged between clients and servers. These algorithms are symmetric and perform well for large amounts of data.

Message authentication algorithms generate message hashes and signatures that ensure the integrity of a message.

Developers specify these elements by using ALG_ID data types. For more information, see Specifying Schannel Ciphers and Cipher Strengths.

In earlier versions of Windows, TLS cipher suites and elliptical curves were configured by using a single string:

[Diagram: a single string for a cipher suite]

 

https://learn.microsoft.com/en-us/windows/win32/secauthn/cipher-suites-in-schannel 

https://learn.microsoft.com/en-us/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order 

https://learn.microsoft.com/en-us/powershell/module/tls/?view=windowsserver2022-ps

https://learn.microsoft.com/en-us/powershell/module/tls/disable-tlsciphersuite?view=windowsserver2022-ps 

https://rdr-it.io/en/windows-server-disable-a-cipher-suite/ 

https://learn.microsoft.com/en-us/windows-server/security/tls/tls-registry-settings?tabs=diffie-hellman

Product:

Microsoft Power BI desktop

Issue:

You would like to add a step inside an existing list of applied steps in Power BI Desktop.

Solution:

Go to Transform data. Select the query whose steps you want to change.

In the Applied steps pane on the right, mark the row below which you want to insert a blank step.

Click on the fx icon to add a step.

Now you can paste code from another step into the formula field (it starts out as = #"Replaced Value", a reference to the previous step), or do the transforms you need.
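A minimal sketch of what such an inserted step can look like, assuming a previous step named #"Replaced Value" and a hypothetical Amount column; the new step takes the previous step as its input:

= Table.TransformColumnTypes(#"Replaced Value", {{"Amount", type number}})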

Save the report.

More Information:

https://learn.microsoft.com/en-us/power-query/applied-steps

https://learn.microsoft.com/en-us/power-bi/

https://learn.microsoft.com/en-us/power-bi/transform-model/dataflows/dataflows-configure-consume 

https://learn.microsoft.com/en-us/power-bi/create-reports/desktop-buttons?tabs=powerbi-desktop

https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-common-query-tasks

Product:
Microsoft Power BI

Issue:

What are "Enable load" and "Include in report refresh" in the Power BI Transform data view for each query?

 

Solution:

"Include in report refresh" means the query is automatically refreshed when you press the Refresh button on the ribbon. "Enable load" means the query results are made available to the report builder. Without it, you can still use the query in your other queries (for example to merge data), but it is not shown in the report builder.

If you have a query that uses another query that has "Include in report refresh" unchecked, both queries are still evaluated when you refresh the first query.

As of 2022 this setting works in Power BI Desktop but not in the Power BI service (this may have changed since). In other words, unchecking "Include in report refresh" only has an effect in Power BI Desktop; if you refresh the report in the Power BI service, all queries are still refreshed even though "Include in report refresh" is unchecked.
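A minimal sketch of a query that references another query (the names Staging and Amount are hypothetical); refreshing this query also evaluates Staging, regardless of Staging's load settings:

let
    // Staging is another query in the same file; referencing it pulls it into this refresh
    Source = Staging,
    Filtered = Table.SelectRows(Source, each [Amount] > 0)
in
    Filtered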

 

More Information:

https://www.purplefrogsystems.com/2021/04/power-bi-enable-load/ 

Keep The Existing Data In Your Power BI Dataset And Add New Data To It Using Incremental Refresh

 

Product:
Microsoft Power BI desktop

Issue:

How do you select all data from a given year, when you have a date column in your table?

Solution:

Inside Power BI you can enter formulas like these:

To select last year:

= Table.SelectRows(#"Changed Type1", each Date.IsInPreviousYear([Datum]))

 

To select a specific year:

= Table.SelectRows(#"Changed Type1", each Date.Year([Datum])=2022)

 

To select all dates before 2022:


= Table.SelectRows(#"Changed Type1", each [Datum] < #date(2022, 1, 1))
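To select only the rows from the last 3 years (a sketch, assuming the same [Datum] column as above):

= Table.SelectRows(#"Changed Type1", each [Datum] >= Date.AddYears(Date.From(DateTime.LocalNow()), -3))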

 

 

More Information:

https://learn.microsoft.com/en-us/powerquery-m/date-isinpreviousyear

How to Calculate Year to Date and Prior YTD in Power BI

https://thedatalabs.org/how-to-get-the-same-period-last-year-to-date-in-power-bi/

Product:

Microsoft Power BI service (in the cloud)

https://learn.microsoft.com/en-us/power-bi/connect-data/service-azure-and-power-bi 

Issue:

When I check my report, it shows old numbers. Why?

Solution:

It may be that you need to refresh your web browser. Press F5 to see if that helps.

Data is first updated at the source, which can be your SQL database server.

Then (if you use one) the Azure dataflow needs to be refreshed.

When that is finished, you can refresh your semantic model. That will give you an updated Power BI report.

But the web-based report can be cached, so also reload the page in your web browser to ensure you see the latest numbers.

(If you use DirectQuery in your Power BI reports, there are other implications that may cause issues.)

 

More Information:

https://fabricdigital.co.nz/blog/how-to-hard-refresh-your-browser-and-clear-cache 

If the dataflow is standard, then the data is stored in Dataverse. Dataverse is like a database system; it has the concept of tables, views, and so on. Dataverse is a structured data storage option used by standard dataflows.

However, when the dataflow is analytical, the data is stored in Azure Data Lake Storage. A dataflow’s data and metadata is stored in a Common Data Model folder. Since a storage account might have multiple dataflows stored in it, a hierarchy of folders and subfolders has been introduced to help organize the data. Depending on the product the dataflow was created in, the folders and subfolders may represent workspaces (or environments), and then the dataflow’s Common Data Model folder. Inside the Common Data Model folder, both schema and data of the dataflow tables are stored. This structure follows the standards defined for Common Data Model.

https://learn.microsoft.com/en-us/power-query/dataflows/what-is-the-cdm-storage-structure-for-analytical-dataflows 

A dataflow stores the data for each table in a subfolder with the table’s name. Data for a table might be split into multiple data partitions, stored in CSV format.

https://learn.microsoft.com/en-us/power-query/dataflows/configuring-storage-and-compute-options-for-analytical-dataflows 

In Power BI, in addition to the standard dataflow engine, an enhanced compute engine is available for the dataflows created in Power BI Premium workspaces. You can configure this setting in the Power BI admin portal, under the Premium capacity settings. The enhanced compute engine is available in Premium P1 or A3 capacities and above. The enhanced compute engine reduces the refresh time required for long-running extract, transform, load (ETL) steps over computed tables, such as joins, distinct, filters, and group by. It also provides the ability to perform DirectQuery over tables from the Power BI semantic model. More information: The enhanced compute engine

https://10senses.com/blog/azure-synapse-vs-azure-data-factory-vs-power-bi-dataflows-what-are-the-similarities-and-differences/

Power Platform dataflows are data transformation services empowered by the Power Query engine and hosted in the cloud. These dataflows get data from different data sources and, after applying transformations, store it either in Dataverse or in Azure Data Lake Storage.

Dataflows are created using Power Query Online. Once you create them, the "M" scripts are available for review or for changes, but you do not need to write any line of code yourself. It makes creating dataflows in Power BI a code-free solution, just like Azure Synapse and ADF.

With Power BI dataflows, you can develop ETL processes which can be used to connect with business data from multiple data sources. Data imported by Power BI dataflows is stored in Azure Data Lake (Gen2), which is known for having massive scalability.

https://debbiesmspowerbiazureblog.home.blog/2019/12/04/use-data-lake-storage-v2-as-data-flow-storage/ 

Power BI semantic models can store data in a highly compressed in-memory cache for optimized query performance, enabling fast user interactivity. With Premium capacities, large semantic models beyond the default limit can be enabled with the Large semantic model storage format setting. When enabled, semantic model size is limited by the Premium capacity size or the maximum size set by the administrator.

Large semantic models in the service don’t affect the Power BI Desktop model upload size, which is still limited to 10 GB.

https://learn.microsoft.com/en-us/power-bi/enterprise/service-premium-large-models 

While a semantic model can be built using Power BI Desktop (in a .pbix file), it does not need to contain any visuals. Think of a semantic model as the last stop in the data pipeline before reports and dashboards are built. Thereafter, once you share a semantic model with other members of the organization, they can build any number of reports and dashboards from just that one semantic model.

Semantic models hide the complex technical details behind reports so that both technical and non-technical users can concentrate on analyzing the data and answering business questions. Sharing and reusability are two stand-out features of semantic models.

 A Power BI Desktop model is effectively an Analysis Services tabular model.

https://www.datacamp.com/blog/what-are-power-bi-semantic-models 

https://learn.microsoft.com/en-us/power-bi/connect-data/service-datasets-understand 

https://kyligence.io/plp/what-is-a-semantic-layer-in-power-bi/ 

https://www.analyticscreator.com/blog/power-bi-dataset-content-type-renamed-to-semantic-model 

  • Larger model sizes may not be supported by your capacity. Shared capacity can host models up to 1 GB in size, while Premium capacities can host larger models depending on the SKU. For further information, read the Power BI Premium support for large semantic models article. (Semantic models were previously known as datasets.)
  • Smaller model sizes reduce contention for capacity resources, in particular memory. It allows more models to be concurrently loaded for longer periods of time, resulting in lower eviction rates.
  • Smaller models achieve faster data refresh, resulting in lower latency reporting, higher semantic model refresh throughput, and less pressure on source system and capacity resources.
  • Smaller table row counts can result in faster calculation evaluations, which can deliver better overall query performance.

https://azure.microsoft.com/en-us/blog/data-models-within-azure-analysis-services-and-power-bi/ 

https://learn.microsoft.com/en-us/power-bi/guidance/import-modeling-data-reduction 

Power BI’s Data Compression: Large Data Imports in Power BI

https://community.fabric.microsoft.com/t5/Service/Maximum-Data-that-be-Consumed-by-Power-BI-from-Azure-Data-Lake/m-p/2031983 

https://learn.microsoft.com/en-us/power-bi/connect-data/service-live-connect-dq-datasets