
I need to filter two separate tabular models and return a table of summarized results from each, using values retrieved from a third semantic model to set the CALCULATE filter context. The process builds a DATATABLE thousands of rows long using M:

let
    // Dynamically builds a text value per row that is later inserted into each query as a DATATABLE literal
    // Summarized values are added to it with ADDCOLUMNS
    _filter_table = let src = AnalysisServices.Query("myFirstSource", "schedule", [Query="evaluate addcolumns(
            summarizecolumns(calendar[Day], times[EasternTime], ""startTime"", min(schedule[...
            , ""row_string"", ""{"" & format([Day], ""Short Date"") & "", "" & format([EasternTime], ""hh:mm"")
            & "... etc. etc. ..."]) in Table.Buffer(src),

    fact1 = let qrytext = "define table filter_table = datatable(""ColumnName"", STRING, ""Date"", DATETIME
       , ""StartTime"", DATETIME, ""EndTime"", DATETIME, {"
       & Text.Combine(_filter_table[row_string], ",")
       & "}) evaluate ADDCOLUMNS( filter_table
       , ""Measurement"", CALCULATE([SomeMeasure]
                          , local_table[Column]=[ColumnName]
                          , time_table[Period]>=[StartTime]
                          , time_table[Period]<=[EndTime]
                         , calendar[Day]=[Date]))",
        ...
        result = AnalysisServices.Query("my source", "widgets", [Query=qrytext]) in result,

    fact2 = let qrytext = "define table filter_table = datatable(""ColumnName"", STRING, ""Date"", DATETIME
       , ""StartTime"", DATETIME, ""EndTime"", DATETIME, {"
       & Text.Combine(_filter_table[row_string], ",")
       & "}) evaluate ADDCOLUMNS( filter_table
       , ""Measurement"", CALCULATE([DifferentMeasure]
                          , local_table[Column]=[ColumnName]
                          , time_table[Period]>=[StartTime]
                          , time_table[Period]<=[EndTime]
                         , calendar[Day]=[Date]))",
        ...
        result = AnalysisServices.Query("my second source", "whatsits", [Query=qrytext]) in result,

    finished_product = Table.Combine({fact1, fact2})
in
    finished_product

I can use the result of one analysis service query to perform calculations in two disparate semantic models. This performs badly as the number of rows in the filter table grows.

What is a better pattern when I have hundreds of thousands of filter rows that are only available dynamically from a query to an unrelated semantic model?

  • Why don't you use Direct Query and DAX to query your other semantic models? Commented Jul 29 at 16:56
  • The preference is for an import flow due to capacity utilization - we'll have multiple users looking at arbitrary date ranges from a single day up to / exceeding a year for a large number of individuals, expecting to summarize by business unit relations etc. Commented Jul 29 at 17:21

1 Answer


"use Direct Query and DAX to query your other semantic models"

Start by building a composite model in Power BI Desktop, referencing your other models. https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-composite-models

Once you build the composite model, then you can send it simple DAX queries and do whatever you want with the results, including loading them into an import mode semantic model.
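A hedged sketch of such a query against the composite model, reusing the question's placeholder names (filter_table imported, the other tables reached over DirectQuery); the variables capture the outer row-context values so the CALCULATE filters stay simple single-column predicates:

EVALUATE
ADDCOLUMNS(
    filter_table,
    "Measurement",
        VAR _col   = filter_table[ColumnName]
        VAR _day   = filter_table[Date]
        VAR _start = filter_table[StartTime]
        VAR _end   = filter_table[EndTime]
        RETURN
            CALCULATE(
                [SomeMeasure],
                local_table[Column] = _col,
                calendar[Day] = _day,
                time_table[Period] >= _start && time_table[Period] <= _end
            )
)

No string concatenation is needed here, because filter_table exists as a real table in the composite model instead of a DATATABLE literal baked into query text.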

Or you could refactor the source models to store results in a database, and build all the semantic models from that database.


2 Comments

@DavidBrowne-Microsoft I built a composite model which has the 'filter_table' from above as an import, and then set up DQ to the other two models. I then ran ADDCOLUMNS( filter_table , "Measure", CALCULATE([Measure], calendar[day]=earlier(filter_table[date])... ) but it returned an error that a resultset had exceeded 1,000,000 rows and so it was canceled. Rowcount from filter_Table is 10,080, and when I limit to a single individual I do not see unexpected rows. Is there some kind of internal table creation to process the measure which might be crossjoining or something to exceed 1M rows?
That's a limitation for all DirectQuery models, which this is a kind of. The limit is on the rows returned from the remote model to the local model in the middle of your DAX calculations. See learn.microsoft.com/en-us/power-bi/guidance/… You can increase this limit in the service, but it's better to find a way to limit the rows returned from the remote model.
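One hedged way to limit those rows (assuming the remote measure can be pre-aggregated at the grain you actually need) is to summarize inside the remote model first, so only aggregated rows cross the DirectQuery boundary, and then match them to filter_table locally:

EVALUATE
SUMMARIZECOLUMNS(
    calendar[Day],
    local_table[Column],
    "Measurement", [SomeMeasure]
)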

