

I'm building a dataflow using a SharePoint folder as the datasource. I'm connecting to a roughly 1 GB folder that holds 60 files. The combine-and-load step is required because I need to combine all 60 files in order to have the required data.

Now that the connection to the folder is established, and I have performed only SIMPLE transformations, the evaluation keeps getting cancelled, and I have no idea why. I'm attaching a screenshot from the dataflow Power Query editor showing the total execution time and memory; after that execution, the evaluation gets cancelled.

Here is the code in this entity, which is not loading. Since I connect to SharePoint, I had to filter the folder and files before starting to analyze them; that's why there are some filters.

#"Linhas filtradas" = Table.SelectRows(Origem, each = ".txt"),
#"Linhas filtradas 1" = Table.SelectRows(#"Linhas filtradas", each Text.Contains(, "company_to_be_analyzed")),
#"Arquivos ocultos filtrados" = Table.SelectRows(#"Linhas filtradas 1", each ? true),
#"Invocar a função personalizada" = Table.AddColumn(#"Arquivos ocultos filtrados", "Transformar o arquivo", each #"Transformar o arquivo"()),
#"Colunas renomeadas" = Table.RenameColumns(#"Invocar a função personalizada", )
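Because the square-bracketed field references and the start of the query were lost when the code was posted, here is a minimal sketch of what the full entity typically looks like for this standard SharePoint-folder combine pattern. The site URL, the restored column references, the inlined "Transformar o arquivo" helper (normally generated as a separate query), its delimiter and encoding, and the rename list are all assumptions:

let
  // Connect to the SharePoint site that contains the folder (URL is a placeholder)
  Origem = SharePoint.Files("https://contoso.sharepoint.com/sites/YourSite", [ApiVersion = 15]),
  // Keep only the .txt files
  #"Linhas filtradas" = Table.SelectRows(Origem, each [Extension] = ".txt"),
  // Keep only the files for the company being analyzed
  // (filtering on [Name] is an assumption; it could just as well be [Folder Path])
  #"Linhas filtradas 1" = Table.SelectRows(#"Linhas filtradas", each Text.Contains([Name], "company_to_be_analyzed")),
  // Standard "combine files" step: drop hidden/system files
  #"Arquivos ocultos filtrados" = Table.SelectRows(#"Linhas filtradas 1", each [Attributes]?[Hidden]? <> true),
  // Helper that parses each file; inlined here so the sketch is self-contained
  // (tab delimiter and UTF-8 encoding are assumptions)
  #"Transformar o arquivo" = (Arquivo as binary) as table =>
    Csv.Document(Arquivo, [Delimiter = "#(tab)", Encoding = 65001, QuoteStyle = QuoteStyle.None]),
  // Invoke the helper against each file's binary content
  #"Invocar a função personalizada" = Table.AddColumn(#"Arquivos ocultos filtrados", "Transformar o arquivo", each #"Transformar o arquivo"([Content])),
  // Typical auto-generated rename; the actual rename list was not visible in the post
  #"Colunas renomeadas" = Table.RenameColumns(#"Invocar a função personalizada", {{"Name", "Source.Name"}})
in
  #"Colunas renomeadas"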

Hi, there are many possible causes of slow performance.

You are using flat files as the datasource, so there is no query folding at the source; the mashup engine needs to load all the files at the same time and then apply the transformations, which leads to high memory use.

Are you using the dataflow on a Premium capacity? You can try enabling the Enhanced Dataflows Compute Engine and enlarging the Max Memory setting in the workload. Using a computed entity is also an option, since complex transformations lead to high CPU use.
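To illustrate the computed entity approach (Premium only): keep one staging entity that just connects to SharePoint and combines the 60 files, then build a second, computed entity that references it and carries the heavy transformations, so those run on the Enhanced Compute Engine rather than in the mashup engine. A rough sketch, with the entity names and the sample aggregation made up:

// Computed entity "Dados tratados", referencing a staging entity named "Arquivos combinados"
let
  Fonte = #"Arquivos combinados",
  // Heavy transformations go here; the grouping below is just an arbitrary example
  #"Linhas agrupadas" = Table.Group(Fonte, {"Source.Name"}, {{"Contagem", each Table.RowCount(_), Int64.Type}})
in
  #"Linhas agrupadas"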

When you use join and expand steps, the number of HTTP calls increases by one for each row in your datasource, so the expand steps could also be the cause of the slow performance. Here is an official document for your reference.
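For clarity, "join and expand" refers to step pairs like the following (the tables and columns here are made up purely to show the shape); against a web-based source, each expanded row can translate into an extra call:

let
  // Tiny inline tables just to demonstrate the pattern
  Vendas = #table({"ClienteId", "Valor"}, {{1, 100}, {2, 250}}),
  Clientes = #table({"ClienteId", "Nome"}, {{1, "Alfa"}, {2, "Beta"}}),
  // The join: produces a nested table column
  #"Consultas mescladas" = Table.NestedJoin(Vendas, {"ClienteId"}, Clientes, {"ClienteId"}, "Clientes", JoinKind.LeftOuter),
  // The expand: flattens the nested column; this is the step the reply warns about
  #"Clientes expandido" = Table.ExpandTableColumn(#"Consultas mescladas", "Clientes", {"Nome"}, {"Clientes.Nome"})
in
  #"Clientes expandido"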
