v3.1 docs update
dgosbell committed Jul 31, 2024
1 parent 4362dda commit 4a165c0
Showing 20 changed files with 184 additions and 28 deletions.
@@ -14,42 +14,66 @@ Today we are happy to announce the release of version 3.1.0 of DAX Studio which

### DSCMD - DAX Studio command line utility
DAX Studio now ships with a new utility called `dscmd.exe`. This is a command line tool that lets you run a subset of common operations from the command line.
This means that you can schedule tasks or run them as part of automated build pipelines.
This means that you can schedule tasks or run them as part of automated build pipelines. [Learn more](/docs/features/command-line)
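For example, you could export a table on a schedule. A hedged sketch (the command shape follows the new command line docs; the server, database, and file names here are placeholders):

```shell
dscmd csv c:\temp\sales.csv -c "Data Source=localhost;Initial Catalog=AdventureWorks" -q "EVALUATE 'Sales'"
```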

![](../../docs/features/command-line/dscmd.png)

### Capture Diagnostics
This feature provides an easy way to capture Server Timings, Query Plan and Model Metrics with a single click.
This works for a single query in the editor, or you can run it over the current set of queries in an All Queries trace or from imported Power BI Performance data.
If you are using the All Queries or Performance Data option, only queries matching the filter conditions are captured, so you can do things like only capture diagnostics
for queries that run longer than 100ms.
for queries that run longer than 100ms. [Learn more](/docs/features/capture-diagnostics)

![](../../docs/features/capture-diagnostics-all-queries.png)

### Database dialog
When connecting to a server with a lot of databases, like a Power BI workspace or Fabric, it can be frustrating to wait for the first database in the list to populate its metadata
and then have to change to one of the other databases and wait again for the metadata. Now, if DAX Studio detects multiple databases, it will display a list of them so that
you can select the one you wish to connect to. The list is sorted alphabetically and has a search box to help you quickly locate a specific semantic model. [Learn more](/docs/features/database-dialog)

![](../../docs/features/database-dialog.png)

### EvaluateAndLog Trace
The EvaluateAndLog function in DAX provides a way to get visibility of intermediate result sets that are used when evaluating DAX expressions. These can be helpful in diagnosing logic issues.
[Learn more](/docs/features/traces/evaluateandlog-trace)

![](../../docs/tutorials/eval-and-log-3.png)

### Execution Metrics added to Server Timings
The new ExecutionMetrics events are now visible in Server Timings if your data source is capable of emitting those events. [Learn more](/docs/features/traces/server-timings-trace/#execution-metrics-events)

![](../../docs/features/traces/server-timings-executionmetrics.png)

## Full Change List

## New Features
### New Features
* Added [Capture Diagnostics](/docs/features/capture-diagnostics)
* Added [Evaluate and Log trace](/docs/features/traces/evaluateandlog-trace)
* Added [Database dialog](/docs/features/database-dialog) when connecting
* Added [command line](/docs/features/command-line) support
* Added Model Metrics / Vertipaq Analyzer [Options dialog](/docs/features/model-metrics/#metric-options-dialog)
* Added support for [obfuscated](/docs/features/model-metrics/#obfuscated-vertipaq-analyzer-files) model metrics
* Added support to Server Timings for the new [ExecutionMetrics event](https://powerbi.microsoft.com/en-in/blog/new-executionmetrics-event-in-azure-log-analytics-for-power-bi-semantic-models/)
* Added support to Server Timings for the new [ExecutionMetrics event](/docs/features/traces/server-timings-trace/#execution-metrics-events)

## Improvements
### Improvements
* <Issue id="1204" /> Made the listview selected row color lighter to improve the contrast
* <Issue id="1124" /> Improved labelling of the zoom level
* <Issue id="1241"/> Added "learn more" link to connection dialog
* Added storage mode column to the Partitions tab in View Metrics
* Updated TOM, ADOMD and other 3rd party dependencies
* Fixed the images for the Server Timings event type filters
* Moved [Server FE Benchmark](/docs/features/server-fe-benchmark/) out of preview status

## Fixes
### Fixes
* Fixed a crash trying to show fonts in the option dialog
* Fixed <Issue id="1213" /> Formatted file export not applying formatting
* Fixed loading of AggregateRewrite events in saved Server Timings
* Fixed occassional crash when using publish functions
* Fixed occasional crash when using publish functions
* Fixed <Issue id="1228" /> Query Builder not respecting delimiter setting
* Fixed <<Issue id="1179" /> reconnect active traces on connection retries
* Fixed <Issue id="1179" /> reconnect active traces on connection retries
* Fixed an issue in QueryBuilder when loading a saved query containing a hierarchy
* Fixed an issue in QueryBuilder when trying to filter on a query scoped measure
* Fixed an issue where the Ribbon buttons get stuck in a disabled state after an error while a trace is active
* Fixed <Issue id="1262 /> ViewAs not working with "Other User" option against the Power BI Service
* Fixed <Issue id="1262"/> ViewAs not working with "Other User" option against the Power BI Service
* Fixed <Issue id="1264"/> Status bar timer stopped too early
* Fixed <Issue id="1268"/> where the View As dialog did not work properly with a large number of roles
Binary file added docs/features/capture-diagnostics-all-queries.png
Binary file added docs/features/capture-diagnostics-ribbon.png
23 changes: 17 additions & 6 deletions docs/features/capture-diagnostics.md
@@ -4,16 +4,27 @@ title: Capture Diagnostics

Capture Diagnostics is designed as a way to capture diagnostic information about one or more queries in a single file. This file can then be used for later analysis.

Capture Diagnostics can be triggered in a number of different ways.

It is available as a button in the Advanced Ribbon. In this scenario it will attempt to capture diagnostics against the query in the editor window. If there is no text in the editor window it will check the clipboard contents for a possible DAX query and use that.

It is also available in both the All Queries Trace window and in the Power BI Performance Data window. In both of these cases you can use the filter option to further refine the list of queries. For example you can filter for only those that took longer than a given duration. The Capture Diagnostics option will then run all of the selected queries and save the results in a single .zip file.

When you click Capture Diagnostics it automates the following actions:
1. Running **View Metrics**
1. Starting a **Server Timings** trace
1. Starting a **Query Plan** trace
1. Running a **Clear Cache** command
1. Saving the results as a .daxx file (or saving multiple .daxx files to a single .zip file)

Capture Diagnostics can be triggered in a number of different ways.

It is available as a button in the Advanced Ribbon. In this scenario it will attempt to capture diagnostics against the query in the editor window. If there is no text in the editor window it will check the clipboard contents for a possible DAX query and use that.

![](./capture-diagnostics-ribbon.png)

It is also available in both the All Queries Trace window:

![](./capture-diagnostics-all-queries.png)

And in the Power BI Performance Data window:

![](./capture-diagnostics-pbi-performance.png)

In both of these cases you can use the filter option to further refine the list of queries. For example you can filter for only those that took longer than a given duration. The Capture Diagnostics option will then run all of the selected queries and save the results in a single .zip file.


1 change: 1 addition & 0 deletions docs/features/command-line/commands/csv-command.md
@@ -28,6 +28,7 @@ DSCMD CSV <OutputFile> [OPTIONS]
| -c, --connectionstring &lt;CONNECTIONSTRING> | The connection string for the data source |
| -f, --file &lt;FILE> | A file containing a DAX query to be executed. Could be a text file or .dax or .daxx |
| -q, --query &lt;QUERY> | A string with the DAX query to be executed |
| -t, --filetype | Can be one of the following values { UTF8CSV, UNICODECSV, JSON, TAB }; if omitted the file extension is used <ul><li>.csv - a UTF-8 csv file is generated</li><li>.txt - a tab delimited file is generated</li><li>.json - a json file is generated</li></ul>|

## Examples
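As a hedged sketch of the new `-t` option (the server and file names are placeholders), this would force tab-delimited output even though the output file has a `.csv` extension:

```shell
dscmd csv c:\temp\products.csv -c "Data Source=localhost" -q "EVALUATE 'Product'" -t TAB
```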

Binary file added docs/features/command-line/dscmd.png
2 changes: 2 additions & 0 deletions docs/features/command-line/index.md
@@ -3,6 +3,8 @@ title: DSCMD - DAX Studio Command Line
---
This section documents the DAX Studio command line utility, `dscmd.exe`.

![](./dscmd.png)

## Syntax
The different [commands](commands) all use the same [syntax](syntax) conventions.

2 changes: 2 additions & 0 deletions docs/features/intro.md
@@ -6,3 +6,5 @@ sidebar_position: 1

This section lists all of the features in DAX Studio.


<iframe src="/home-ribbon.html" height="1050px" width="1200px" scrolling="no" />
13 changes: 13 additions & 0 deletions docs/features/server-fe-benchmark.md
@@ -0,0 +1,13 @@
---
title: Server FE Benchmark
---

The FE Benchmark feature runs a standard formula engine only query against the current data source. It produces an FE Benchmark number that should help in estimating the expected performance in production.

If your laptop runs a query in 2 seconds and its index is 200, then running the same query on a server with an index of 100 should take roughly 4 seconds, i.e. twice as long, since the benchmark score is half.
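The estimate above is just a ratio. As a sketch, using the hypothetical numbers from the example (and assuming the query is CPU-bound):

```shell
# Estimated duration on the target = local duration x (local index / target index)
local_ms=2000       # the query takes 2 seconds on the laptop
local_index=200     # the laptop's FE Benchmark index
server_index=100    # the server's FE Benchmark index
echo "$(( local_ms * local_index / server_index )) ms"   # prints "4000 ms"
```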

:::note
This assumes that the queries are largely bound by CPU time; differences in performance can also be affected by factors such as:
* memory access speed
* other processes consuming resources on the machine
:::
23 changes: 17 additions & 6 deletions docs/features/traces/evaluateandlog-trace.md
@@ -2,20 +2,31 @@
title: EvaluateAndLog Trace
---

This trace captures the output from the `EVALUATEANDLOG()` function

:::info
The `EVALUATEANDLOG()` function only works in models hosted in **Power BI Desktop**; if a model is hosted in the Power BI service this function will not produce any output.
:::

This trace captures the output from the [`EVALUATEANDLOG()`](https://learn.microsoft.com/en-us/dax/evaluateandlog-function-dax) function, which is useful for viewing intermediate values and tables that are involved in evaluating DAX expressions. The output of this function will show you the **input** values coming from the filter context and the resulting **output** values. The output can be a scalar value or a table.

## The User Interface

1. EvaluateAndLog calls from your query are captured and displayed in the list on the left. A single query can have multiple calls to `EVALUATEANDLOG()`
2. The expression for the currently selected trace event is shown here
3. Columns with a blue underline are **input** columns generated by the current filter context
4. Columns with a yellow underline are **output** columns
5. The remaining columns are metadata columns showing the table number, row number and total count of rows in the result

![](./evaluateandlog-trace.png)

## Learn More

The [EvaluateAndLog tutorial](/docs/tutorials/evaluateandlog-trace) has a number of examples showing how this feature can be used to gain more insights into your query evaluations.

## Further Reading

The following series of blog articles are from one of the senior developers who built the EVALUATEANDLOG() function and are an excellent source of detailed information about this function.

* https://pbidax.wordpress.com/2022/08/16/introduce-the-dax-evaluateandlog-function/
* https://pbidax.wordpress.com/2022/08/22/understand-the-output-of-evaluateandlog-function-of-scalar-expressions/
* https://pbidax.wordpress.com/2022/08/29/understand-the-output-of-evaluateandlog-function-of-table-expressions/
* https://pbidax.wordpress.com/2022/09/06/evaluateandlog-and-dax-engine-optimizations/
Binary file added docs/features/traces/evaluateandlog-trace.png
19 changes: 18 additions & 1 deletion docs/features/traces/server-timings-trace.md
@@ -22,4 +22,21 @@ This button causes an extra tab to be displayed which shows the total time the s
| SE Queries | this is the number of Storage Engine queries that were performed during the processing of the query |
| SE Cache | this is the number of Storage Engine cache hits |

You may also wonder what that “SQL like” query is that is captured by the scan event when running a query against an import mode model. This is called xmSQL and is a textual representation of the requests that the Formula Engine sent to the Storage Engine. There is no way of executing these queries; they are merely a textual representation of the requests sent to the Storage Engine, to help people understand what operations the Storage Engine was performing. When your data model is in Direct Query mode you will see a generic T-SQL query, which may be transformed into a more data source specific query before it is executed against the actual data source.

### Execution Metrics events

Some data sources, like the XMLA endpoint for the Fabric / Power BI service, can now emit ExecutionMetrics events. These events output a collection of metrics that can vary based on the type of event. For query events like those captured by Server Timings, these events show information about the query execution, such as:

* Number of rows processed for queries and refreshes.
* Time a request was delayed due to capacity throttling.
* Time to establish a connection to the data source.
* Approximate memory and CPU consumption.

These events were announced in the following [blog post](https://powerbi.microsoft.com/en-in/blog/new-executionmetrics-event-in-azure-log-analytics-for-power-bi-semantic-models/).

:::note
As of July 2024 only the XMLA endpoint on the Fabric / Power BI cloud service exposes these events
:::

![](server-timings-executionmetrics.png)
71 changes: 66 additions & 5 deletions docs/tutorials/evaluateandlog-trace.md
@@ -1,14 +1,30 @@
---
title: Using the EvaluateAndLog Trace
sidebar_position: 3
---

## Prerequisites

If you want to follow along and try out these queries yourself, all you need is:
* DAX Studio
* Power BI Desktop
* the [Adventure Works 2020](https://aka.ms/dax-docs-sample-file) sample file

The simplest way to get started after installing both DAX Studio and Power BI Desktop is to open the **Adventure Works 2020.pbix** file, then click on External Tools and launch DAX Studio from there.

:::tip
For more details on how you can connect to your particular data model check out the tutorial on [Getting Connected](../getting-connected/)
:::

## About the EvaluateAndLog function

The EvaluateAndLog trace is a great tool for helping debug logic issues with DAX measures since it helps provide insights into the context of your calculations.

:::info
The `EVALUATEANDLOG()` function is only enabled in Power BI Desktop; in order for it to work it sometimes has to disable some internal engine optimizations. DAX Studio is aware of this and will only enable the trace button when you are connected to a model hosted in Power BI Desktop.
:::

The [`EvaluateAndLog()`](https://learn.microsoft.com/en-us/dax/evaluateandlog-function-dax) function takes 3 arguments, the last 2 of which are optional using the following syntax
The `EvaluateAndLog()` function takes 3 arguments, the last 2 of which are optional using the following syntax
```
EVALUATEANDLOG(<Value>, [Label], [MaxRows])
```
@@ -18,19 +34,51 @@ Where:
* `[Label]` is a string value which you can use to identify a specific instance of the EvaluateAndLog event, and
* `[MaxRows]` is the maximum number of rows to return (defaults to 10)

The full documentation for this function can be found on [Microsoft Learn](https://learn.microsoft.com/en-us/dax/evaluateandlog-function-dax)

:::note
The output from the `EVALUATEANDLOG()` function can potentially get very large, so the engine will truncate any results over 1 million characters.
:::

## Scalar Values
The COUNT function returns a scalar number. In this example we are just evaluating a single figure based on the count of the distinct ProductKey values in the Product table, which will return a value of 397.

```
DEFINE
MEASURE Sales[Measure1] =
EVALUATEANDLOG ( COUNT ( 'Product'[ProductKey] ), "Measure1" )
EVALUATE
{ [Measure1] }
```

![](./eval-and-log-1.png)

If we change the query slightly to return the count of products per color, we can see that the data scanned by the storage engine now returns a row per color.

```
DEFINE
MEASURE Sales[Measure1] =
EVALUATEANDLOG ( COUNT ( 'Product'[ProductKey] ), "Measure1v2" )
EVALUATE
SUMMARIZECOLUMNS ( 'Product'[Color], "Measure1", [Measure1] )
```
![](./eval-and-log-2.png)

But notice when we introduce the `Customer[Country-Region]` column that the output from `EVALUATEANDLOG()` has not changed and we still have the same intermediate results logged as before.

```
DEFINE
MEASURE Sales[Measure1] =
EVALUATEANDLOG ( COUNT ( 'Product'[ProductKey] ), "Measure1v3" )
EVALUATE
SUMMARIZECOLUMNS (
'Product'[Color],
Customer[Country-Region],
"Measure1", [Measure1]
)
```

![](./eval-and-log-3.png)

@@ -40,6 +88,19 @@ We can see this reflected in the results, the Customer table does not filter the

## Table Values

![](./eval-and-log-4.png)
Up until now we've seen examples of `EVALUATEANDLOG()` producing a single output value for a single input, but it can also output tables of results.

In the following example the function is outputting all the rows from the **Sales** table for each color in the current filter context, so you can see which rows are contributing to the row count for each color.

```
DEFINE
MEASURE sales[Measure4] =
COUNTROWS ( EVALUATEANDLOG ( Sales, "Sales Table" ) )
EVALUATE
SUMMARIZECOLUMNS( 'Product'[color], "Measure4", [Measure4] )
```

![](./eval-and-log-5.png)
1 change: 1 addition & 0 deletions docs/tutorials/getting-connected.md
@@ -1,5 +1,6 @@
---
title: Getting Connected
sidebar_position: 1
---

There are a number of different connection options in DAX Studio; the following guide will run through all the different data sources that you can connect to.
1 change: 1 addition & 0 deletions docs/tutorials/writing-dax-queries.md
@@ -1,5 +1,6 @@
---
title: Writing DAX Queries
sidebar_position: 2
---

DAX Queries have quite a simple structure. Microsoft describes the query syntax in their documentation [here](https://docs.microsoft.com/en-us/dax/dax-queries). But in this guide we are going to take a very practical, example-based approach.
3 changes: 2 additions & 1 deletion src/css/custom.css
@@ -16,7 +16,8 @@
--ifm-code-font-size: 95%;
--docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.1);
--ifm-badge-background-color: gray;

--ifm-container-width-xl: 1600px;
--ifm-container-width: 1280px;
}

/* For readability concerns, you should choose a lighter palette in dark mode. */
11 changes: 11 additions & 0 deletions static/home-ribbon.html
@@ -0,0 +1,11 @@
<html>
<body>
<img src="img/DAXStudio-UI.png" usemap="#image-map" width="1200"/>

<map name="image-map">
<area target="_self" alt="Run Button" title="Run Button" href="/docs/features/run-modes/" coords="63,129,10,48" shape="rect"/>
<area target="_self" alt="Cancel" title="Cancel" href="/docs/features/cancel/" coords="115,127,69,48" shape="rect"/>
<area target="_self" alt="Query Builder" title="Query Builder" href="/docs/features/query-builder/" coords="161,124,120,50" shape="rect"/>
</map>
</body>
</html>
Binary file added static/img/DAXStudio-UI.png
