I need help converting complex nested JSON into a flat table structure using M code in Power Query. I have data from an API that returns deeply nested JSON objects and I want to transform them into a CSV-like format for dashboard visualization.
My Current Setup
I have two main functions working:
FetchAPIData(tableId as text, authToken as text) - This pulls JSON data from the API.
FlattenJSONColumn(SourceTable as table, FieldName as text) - This converts individual columns from nested JSON to simple values.
The Issues I’m Facing
Problem 1: I need to apply the flattening function to every column automatically instead of doing it one by one. With multiple databases having dozens of columns each, manual processing is not practical.
Problem 2: I can’t figure out how to handle array-type properties where one record might have multiple values. I want to convert these to comma-separated strings.
Sample Code Structure
let
    // getPropertyType, expandSimpleProperty, and expandArrayProperty
    // are helper functions (not shown).
    processNestedField = (DataSet as table, PropertyName as text) =>
        let
            fieldType = getPropertyType(DataSet, PropertyName),
            result =
                if fieldType = "text_field" then
                    expandSimpleProperty(DataSet, PropertyName)
                else if fieldType = "list_field" then
                    expandArrayProperty(DataSet, PropertyName)
                else
                    DataSet
        in
            result
in
    processNestedField
Questions
1. How can I automatically apply the flattening function to all columns in my table?
2. What’s the best way to convert array properties into comma-separated text values?
Any help with the M code logic would be greatly appreciated!
I’ve dealt with similar JSON flattening challenges in Power Query and found a recursive approach works best for your situation. Instead of trying to detect field types upfront, you can use Table.TransformColumns with a custom function that handles both scenarios dynamically.
For automatically processing all columns, get the field list with Table.ColumnNames and build one transform per column to feed into Table.TransformColumns. The key is using try ... otherwise expressions to handle different data types gracefully: if a column contains nested records, expand them; if it contains lists, convert them to text with Text.Combine.
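A minimal sketch of that pattern (the FetchAPIData arguments and the record handling are placeholders; substitute your own flattening logic):

let
    Source = FetchAPIData("your-table-id", "your-token"),
    // One {column name, transform} pair per column, applied in a
    // single Table.TransformColumns call. try ... otherwise leaves
    // anything unexpected untouched instead of erroring.
    Transforms = List.Transform(
        Table.ColumnNames(Source),
        (col) => {col, (v) =>
            try
                if v is record then
                    // Placeholder: taking the first field value; swap in
                    // your real record-flattening logic here.
                    Record.FieldValues(v){0}
                else if v is list then
                    Text.Combine(List.Transform(v, Text.From), ", ")
                else
                    v
            otherwise v}
    ),
    Flattened = Table.TransformColumns(Source, Transforms)
in
    Flattened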
For converting arrays to comma-separated values, I typically use if Value.Is(_, type list) then Text.Combine(List.Transform(_, Text.From), ", ") else _ inside the column transformation. This handles null values and mixed data types without throwing errors, since Text.Combine skips nulls and non-list values fall through the else branch unchanged.
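Spelled out as a full step, with a hypothetical list-typed column named Tags and an extra guard for nulls inside the list:

ListsAsText = Table.TransformColumns(
    Source,
    {{"Tags", each
        if Value.Is(_, type list) then
            // List.RemoveNulls drops null items before joining;
            // a null cell falls through to the else branch.
            Text.Combine(List.Transform(List.RemoveNulls(_), Text.From), ", ")
        else
            _}}
)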
One gotcha I encountered was that some API responses have inconsistent nesting levels between records. Consider adding a normalization step before flattening to ensure all records have the same structure depth.
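A hedged sketch of one such normalization step, assuming you are working from the list of record values before converting to a table (normalizeRecords is a hypothetical helper). Note this pads missing fields with nulls; truly uneven nesting depth would need a recursive variant:

normalizeRecords = (records as list) as list =>
    let
        // Union of every field name seen across all records.
        AllFields = List.Distinct(List.Combine(List.Transform(records, Record.FieldNames))),
        // Null defaults for each field; merging the record on the
        // right means its own values win over the defaults.
        Defaults = Record.FromList(List.Repeat({null}, List.Count(AllFields)), AllFields),
        Padded = List.Transform(records, (r) => Defaults & r)
    in
        Padded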
Been wrestling with this exact issue lately and found that List.Accumulate works really well for bulk column processing: you feed it your column list and let it apply your flattening function iteratively (rough sketch below). For the array stuff, I just do Text.Combine(List.Transform([field], each Text.From(_)), ",") and it handles most edge cases pretty nicely without breaking on nulls.
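Here's roughly what that fold looks like, assuming Source is your raw table:

// Start from the raw table and apply FlattenJSONColumn once per column.
Flattened = List.Accumulate(
    Table.ColumnNames(Source),
    Source,
    (state, currentColumn) => FlattenJSONColumn(state, currentColumn)
)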
The dynamic column processing challenge you’re facing becomes much more manageable when you leverage Table.Schema() to inspect your data structure first. This function returns metadata about each column including data types, which eliminates the guesswork in your current approach.
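For example, a sketch of pulling the list-typed column names out of the schema (keep in mind that columns Power Query hasn't typed yet report Kind = "any", so this works best after an explicit typing step):

SchemaInfo = Table.Schema(Source),
// Names of columns whose values are lists; everything else can
// pass through untouched.
ListColumns = Table.SelectRows(SchemaInfo, each [Kind] = "list")[Name]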
For bulk transformation across all columns, I’ve had success using Record.TransformFields() combined with Table.FromRecords(). You can build a transformation record where each field name maps to your flattening logic, then apply it in one operation rather than column by column.
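A sketch of that record-based route, reusing the same list-to-text logic (FieldTransforms is a hypothetical step name):

// One {field name, transform} pair per column, applied row by row.
FieldTransforms = List.Transform(
    Table.ColumnNames(Source),
    (col) => {col, (v) =>
        if v is list then Text.Combine(List.Transform(v, Text.From), ", ") else v}
),
Transformed = Table.FromRecords(
    List.Transform(
        Table.ToRecords(Source),
        (r) => Record.TransformFields(r, FieldTransforms)
    )
)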
Regarding array handling, nested Table.ExpandListColumn() calls work well when you need more control than simple comma separation. Sometimes you want to preserve the array structure for certain analyses while flattening others. I usually create a parameter in my function to toggle between expansion modes.
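A small sketch of that toggle (expandOrJoin is a hypothetical name):

expandOrJoin = (tbl as table, col as text, expandToRows as logical) as table =>
    if expandToRows then
        // One output row per list element.
        Table.ExpandListColumn(tbl, col)
    else
        // Keep one row, joining the list into a single text value.
        Table.TransformColumns(tbl, {{col, each
            if _ is list then Text.Combine(List.Transform(_, Text.From), ", ") else _}})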
One technique that saved me considerable debugging time was adding intermediate steps that output the schema at each transformation stage. Power Query’s preview makes it easy to spot where your logic breaks down with unexpected data types. Your processNestedField function looks solid, but consider adding logging for field types that don’t match your expected patterns.
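One way to add that logging, assuming Diagnostics.Trace is available in your environment, is to swap the silent else branch of processNestedField for a traced one:

else
    // Surface the unexpected field type in the trace log and still
    // return the table unchanged.
    Diagnostics.Trace(TraceLevel.Warning, "Unhandled field type: " & fieldType, DataSet)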