I’m trying to build a system that reads XML data and transforms it into CSV format for Facebook advertising. The main issue is that my XML source has different field names than what Facebook expects for their required columns.
I need to create a mapping solution that can take the XML data from a URL (which updates daily with new inventory information) and reformat it to match Facebook’s specifications.
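To make the problem concrete, the mapping I have in mind looks roughly like this (the field names on both sides are invented examples, not my real ones):

```python
# Illustrative only: invented XML field names on the left,
# Facebook catalog column names on the right.
FIELD_MAP = {
    "sku": "id",
    "product_name": "title",
    "long_desc": "description",
    "product_url": "link",
    "image_url": "image_link",
    "retail_price": "price",
    "stock_status": "availability",
}
```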
A Zapier support person suggested this workflow:
- Start with the Email app as the trigger when a new file arrives
- Use a Filter step to verify the XML format
- Upload the file to Dropbox for processing
- Add a custom code step to parse the XML and transform the data (this is the part that requires outside help)
- Update a Google Sheet with the formatted data
- Publish the sheet as a web CSV for the Facebook feed
The missing piece is the code that handles the XML parsing and field mapping. Has anyone implemented something similar for Facebook product feeds? I’m looking for guidance on the transformation logic that would convert my XML structure into a Facebook-compatible format.
The XML updates daily, so the solution also needs to handle automatic refresh cycles.
I’ve done similar XML-to-CSV transformations for Facebook feeds - field mapping is always the trickiest part. Skip the custom Zapier code step and use something like Altova MapForce or a Python script on a cloud function instead. MapForce gives you visual mapping rules, and either option is way easier to update when Facebook inevitably changes its requirements.
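If you go the Python route, the core transform is short. Here’s a rough sketch, assuming a feed URL, an `<item>` element, and a field mapping that you’d swap for your real ones:

```python
# Rough sketch, not production code: fetch the XML, remap fields,
# and write a Facebook-style CSV. All names below are placeholders.
import csv
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/inventory.xml"  # placeholder URL

# placeholder mapping: your-XML-field -> Facebook column
FIELD_MAP = {
    "sku": "id",
    "name": "title",
    "desc": "description",
    "url": "link",
    "image": "image_link",
    "price": "price",
    "stock": "availability",
}

def build_feed(out_path="facebook_feed.csv"):
    root = ET.fromstring(urllib.request.urlopen(FEED_URL).read())
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(FIELD_MAP.values()))
        writer.writeheader()
        for item in root.iter("item"):  # adjust to your item element name
            writer.writerow({fb: item.findtext(src, "")
                             for src, fb in FIELD_MAP.items()})

if __name__ == "__main__":
    build_feed()
```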
For daily refreshes, ditch the email notifications and set up a webhook that triggers when your XML source updates. Much more reliable and no delays or missed updates.
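The receiving end of that webhook can be tiny. A minimal sketch, assuming Flask - the endpoint path and secret header are made up:

```python
# Minimal webhook receiver sketch (Flask assumed; the path and secret
# header are invented placeholders).
from flask import Flask, abort, request

app = Flask(__name__)
SHARED_SECRET = "change-me"  # placeholder shared secret

def regenerate_feed():
    # stub: call your XML -> CSV transform here
    pass

@app.route("/feed-updated", methods=["POST"])
def feed_updated():
    # reject requests that don't carry the agreed secret
    if request.headers.get("X-Feed-Secret") != SHARED_SECRET:
        abort(403)
    regenerate_feed()
    return "", 204
```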
Facebook’s Product Catalog API has validation rules that aren’t in their docs, so build in error handling that logs failed transformations. And if you’ve got a large inventory, that Google Sheets step will slow you down - generate the CSV directly and host it on a CDN instead.
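For the logging, a per-item check like this sketch catches most feed rejections early (the REQUIRED list roughly matches Facebook’s documented required columns - verify it against their current docs):

```python
# Sketch of per-item validation with logging. REQUIRED approximates
# Facebook's documented required product columns; double-check it.
import logging

logging.basicConfig(filename="feed_errors.log", level=logging.WARNING)

REQUIRED = ("id", "title", "description", "availability",
            "condition", "price", "link", "image_link")

def validate_row(row: dict) -> bool:
    """Return True if the row can go in the feed; log and skip otherwise."""
    missing = [col for col in REQUIRED if not row.get(col)]
    if missing:
        logging.warning("Skipping item %s: missing %s",
                        row.get("id", "<no id>"), ", ".join(missing))
        return False
    return True
```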
You might want to scrap that workflow. I did a similar Facebook feed project last year and Google Sheets just creates bottlenecks, especially with daily updates. XSLT stylesheets work way better for XML-to-CSV conversions - that’s what they’re built for.
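If you haven’t used XSLT this way before, here’s a toy example of the idea, applied from Python with lxml - the stylesheet and element names are invented:

```python
# Toy XSLT-driven XML -> CSV conversion via lxml. Element names and
# columns are invented; a real feed would also need CSV quoting.
from lxml import etree

XSLT = etree.XML(b"""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/catalog">
    <xsl:text>id,title,price&#10;</xsl:text>
    <xsl:for-each select="item">
      <xsl:value-of select="sku"/><xsl:text>,</xsl:text>
      <xsl:value-of select="name"/><xsl:text>,</xsl:text>
      <xsl:value-of select="price"/><xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
""")

doc = etree.XML(b"""\
<catalog>
  <item><sku>A1</sku><name>Widget</name><price>9.99 USD</price></item>
</catalog>
""")

transform = etree.XSLT(XSLT)
print(str(transform(doc)))  # -> "id,title,price\nA1,Widget,9.99 USD\n"
```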
Here’s what worked for me: set up an AWS Lambda function that grabs the XML from your URL, handles the field mapping, and dumps the CSV straight into S3. Facebook pulls from a static URL that’s always current.
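A stripped-down version of that handler looks like this - the bucket, URL, and field names below are placeholders, not my real setup:

```python
# Stripped-down Lambda sketch: fetch XML, remap fields, write CSV to S3.
# FEED_URL, BUCKET, and all tag/column names are placeholders.
import csv
import io
import urllib.request
import xml.etree.ElementTree as ET

import boto3

FEED_URL = "https://example.com/inventory.xml"  # placeholder
BUCKET = "my-feed-bucket"                       # placeholder

def handler(event, context):
    root = ET.fromstring(urllib.request.urlopen(FEED_URL).read())

    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["id", "title", "description", "link",
                     "image_link", "price", "availability", "condition"])
    for item in root.iter("item"):  # adjust to your XML structure
        writer.writerow([
            item.findtext("sku", ""),
            item.findtext("name", ""),
            item.findtext("desc", ""),
            item.findtext("url", ""),
            item.findtext("image", ""),
            item.findtext("price", ""),
            "in stock",  # placeholder; derive from your stock field
            "new",       # placeholder condition
        ])

    # Facebook then pulls the feed from the S3 (or CloudFront) URL.
    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key="facebook_feed.csv",
        Body=buf.getvalue().encode("utf-8"),
        ContentType="text/csv",
    )
    return {"statusCode": 200}
```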
Biggest win? No more Google Sheets publishing delays or formatting issues that break your Facebook catalog sync. You also get real control over transformations when your XML structure changes (and it will). Lambda’s dirt cheap since you only pay for processing time.
zapier’s probably overkill for this. I just run a basic php script daily through cron - it grabs the xml, maps fields with xpath, and outputs csv. facebook pulls it straight from my server. way simpler than the dropbox/sheets setup, plus no api limits or third-party outages to deal with.
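rough python version of the same idea if php isn’t your thing - url, fields, and output path are all made up, swap in your own:

```python
# same idea as the php script: grab the xml, map fields with xpath,
# dump csv where the web server can serve it. everything is a placeholder.
# cron it daily, e.g.: 0 6 * * * /usr/bin/python3 /opt/feed/build_feed.py
import csv
import urllib.request

from lxml import etree

root = etree.fromstring(urllib.request.urlopen(
    "https://example.com/inventory.xml").read())

with open("/var/www/html/facebook_feed.csv", "w", newline="",
          encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "title", "price", "availability"])
    for item in root.xpath("//item"):
        writer.writerow([
            item.findtext("sku", ""),
            item.findtext("name", ""),
            item.findtext("price", ""),
            "in stock" if item.findtext("stock") == "1" else "out of stock",
        ])
```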