Is it possible to link an external PostgreSQL database with the Cloud SaaS Plus plan?

Hello everyone! My team is considering LangGraph for our AI agent system and we’re looking at the Cloud SaaS Plus plan for fast testing and deployment.

We need to extract LangSmith trace information for our own custom analytics dashboard. Our plan is to save this data in our own PostgreSQL database. However, based on what I’ve read in the documentation, it seems like we need the Hybrid setup, which requires an Enterprise subscription.

Can anyone confirm if this is accurate? Are there alternative methods to extract data from the Cloud SaaS version or establish a connection to an external PostgreSQL instance?

Any suggestions would be greatly appreciated!

Yep, Cloud SaaS Plus doesn’t support direct external database connections. We hit the same issue trying to connect LangSmith traces to our PostgreSQL analytics stack.

API polling works, but webhooks are way more efficient. LangSmith lets you configure webhooks that push trace data to your endpoints in near real time - no polling overhead or rate-limiting headaches. Just set up a lightweight service to catch the webhooks and batch-insert everything into PostgreSQL. The webhook payload has all the trace metadata you need: run IDs, timestamps, input/output data, performance metrics.

One heads-up: make your webhook handler idempotent, because LangSmith sometimes sends duplicate events when there are network issues. We added a simple duplicate check using trace IDs before inserting.

This scales way better than polling and gives you sub-second freshness without paying for Enterprise.
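Here’s roughly what the receiver side looks like, as a minimal sketch: a FastAPI endpoint with an idempotent insert. The payload field names (run_id, start_time), the batched-vs-single shape, the DSN, and the table are all placeholders - check what your actual webhook rule sends.

```python
# Minimal webhook receiver sketch: FastAPI + psycopg2, deduping on run ID.
# The payload fields (run_id, start_time) and the single-vs-batched shape are
# assumptions - check what your LangSmith webhook rule actually sends.
import json

import psycopg2
from fastapi import FastAPI, Request

app = FastAPI()
conn = psycopg2.connect("dbname=analytics user=analytics")  # hypothetical DSN

@app.post("/langsmith-webhook")
async def receive_trace(request: Request):
    payload = await request.json()
    runs = payload.get("runs", [payload])  # handle batched or single-run events
    with conn, conn.cursor() as cur:
        for run in runs:
            # ON CONFLICT DO NOTHING keeps the insert idempotent, so duplicate
            # deliveries of the same run are silently skipped (dedup by run ID).
            cur.execute(
                """
                INSERT INTO langsmith_runs (run_id, started_at, payload)
                VALUES (%s, %s, %s)
                ON CONFLICT (run_id) DO NOTHING
                """,
                (run.get("run_id") or run.get("id"),
                 run.get("start_time"),
                 json.dumps(run)),
            )
    return {"ok": True}
```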

Yeah, you’re right about those Cloud SaaS Plus limitations. Hit the same wall last year trying to get LangSmith data into our analytics warehouse.

Enterprise hybrid’s way overkill unless you’ve got other reasons to upgrade. We just treated it like any API integration.

Built a simple Python service that polls LangSmith’s REST API every few minutes, grabs trace data, and dumps it into PostgreSQL. The API’s got everything - execution traces, token counts, latency, error logs.
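Rough shape of the poller, using the langsmith SDK’s Client.list_runs plus psycopg2. The project name, table, and which run fields you keep are placeholders:

```python
# Rough poller sketch: pull recent runs from LangSmith and upsert into PostgreSQL.
# Project name, table, and which Run attributes you keep are placeholders.
import json
from datetime import datetime, timedelta, timezone

import psycopg2
from langsmith import Client

client = Client()  # picks up LANGSMITH_API_KEY from the environment
conn = psycopg2.connect("dbname=analytics user=analytics")

def sync_recent_runs(minutes: int = 5) -> None:
    since = datetime.now(timezone.utc) - timedelta(minutes=minutes)
    runs = client.list_runs(project_name="my-agent-project", start_time=since)
    with conn, conn.cursor() as cur:  # single transaction per polling cycle
        for run in runs:
            cur.execute(
                """
                INSERT INTO langsmith_runs
                    (run_id, name, started_at, ended_at, error, inputs, outputs)
                VALUES (%s, %s, %s, %s, %s, %s, %s)
                ON CONFLICT (run_id) DO UPDATE
                    SET ended_at = EXCLUDED.ended_at, outputs = EXCLUDED.outputs
                """,
                (str(run.id), run.name, run.start_time, run.end_time, run.error,
                 json.dumps(run.inputs, default=str),
                 json.dumps(run.outputs, default=str)),
            )

if __name__ == "__main__":
    sync_recent_runs()  # run on a schedule (cron, APScheduler, etc.)
```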

Watch out for rate limits though. We got throttled hard during high-volume periods and had to add exponential backoff.
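Something like this wrapper is enough; the bare Exception is a placeholder, so narrow it to whatever your client actually raises on a 429:

```python
# Simple exponential backoff with jitter around any LangSmith API call.
# The bare Exception here is a placeholder - narrow it to the rate-limit
# error (HTTP 429) your client actually raises.
import random
import time

def with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))

# Usage:
# runs = with_backoff(lambda: list(client.list_runs(project_name="my-agent-project")))
```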

Also think about data freshness. API polling adds delay, so if you need real-time analytics, this won’t cut it. Most cases can handle a 5-minute lag fine.

Whole thing took us two days to get solid. Way cheaper than upgrading just for database connectivity.

Been there with SaaS data extraction headaches. The docs are usually spot-on about these limits - Cloud SaaS plans block direct external DB connections for security.

Skip fighting the plan restrictions. Build a middleware layer that grabs trace data through LangSmith’s API and dumps it into your PostgreSQL. Works no matter what tier you’re on.

I’ve automated this setup dozens of times with Latenode. Set up scenarios to pull trace data from LangSmith’s endpoints, transform it for your analytics, and feed it straight to PostgreSQL. Everything runs automatically without infrastructure nightmares.

You get custom analytics without Enterprise pricing or messy hybrid configs. Bonus: add data validation, filtering, or enrichment right in the flow.

Latenode handles API calls, transformations, and database ops seamlessly. Way cleaner than wrestling with SaaS roadblocks.

You’re right - Cloud SaaS Plus won’t let you connect directly to the database. We hit the same wall building our LangGraph monitoring dashboard.

Sure, APIs and webhooks work, but here’s what we did instead: set up PostgreSQL foreign data wrappers with a staging layer. We built a small service that authenticates with LangSmith, pulls trace data in batches, and dumps it into a temp schema. Our analytics queries run against views that merge this trace data with our app metrics.

Big lesson: design your data pipeline around your analytics needs, don’t just dump everything. LangSmith traces are nested JSON nightmares that need proper normalization if you want decent query performance. We spent time upfront designing our PostgreSQL schema for our most common dashboard queries - agent performance trends, error clustering, token usage by workflow.

Keeps costs reasonable and gives you the analytical flexibility you actually need. Data freshness works fine for most BI stuff.
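The flattening step looks roughly like this. Which columns you keep (latency, token counts, workflow name) and where token usage lives inside the payload are design choices and assumptions, not a fixed LangSmith schema:

```python
# Sketch of the normalization step: flatten one raw LangSmith run payload into
# the flat columns the dashboards actually query. The chosen fields and the
# location of token usage are assumptions - adjust to what you actually receive.
from datetime import datetime

def _parse_ts(value):
    # Assumes ISO-8601 timestamp strings; tolerate a trailing "Z".
    return datetime.fromisoformat(value.replace("Z", "+00:00")) if value else None

def flatten_run(raw: dict) -> dict:
    start, end = _parse_ts(raw.get("start_time")), _parse_ts(raw.get("end_time"))
    latency_ms = (end - start).total_seconds() * 1000 if start and end else None
    usage = (raw.get("extra") or {}).get("usage", {})  # token usage location is assumed
    return {
        "run_id": raw.get("id"),
        "parent_run_id": raw.get("parent_run_id"),
        "workflow": raw.get("name"),
        "run_type": raw.get("run_type"),
        "started_at": start,
        "latency_ms": latency_ms,
        "error": raw.get("error"),
        "prompt_tokens": usage.get("prompt_tokens"),
        "completion_tokens": usage.get("completion_tokens"),
    }
```

Insert those flat rows into a purpose-built table instead of one giant JSONB column and the trend and error-clustering queries stay fast.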

Cloud SaaS Plus definitely blocks external DB connections - ran into the same issue a while back. Don’t waste your time fighting it. Just make use of LangSmith’s export feature: schedule bulk exports of your trace data as JSON, then pull them into PostgreSQL with a cron job. Not real-time like webhooks, but it’s way easier to maintain and avoids API rate limits.
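The import half of the cron job is simple - something like this, assuming the exports land as newline-delimited JSON files in a local folder (adjust to however your bulk export is actually delivered):

```python
# Import half of the cron job: load exported trace files into PostgreSQL.
# Assumes newline-delimited JSON files dropped into a local folder - the
# export mechanism, folder, and table are all placeholders.
import json
import pathlib

import psycopg2

EXPORT_DIR = pathlib.Path("/data/langsmith-exports")  # hypothetical drop folder

def load_exports() -> None:
    conn = psycopg2.connect("dbname=analytics user=analytics")
    with conn, conn.cursor() as cur:
        for path in sorted(EXPORT_DIR.glob("*.jsonl")):
            with path.open() as fh:
                for line in fh:
                    run = json.loads(line)
                    cur.execute(
                        """
                        INSERT INTO langsmith_runs (run_id, payload)
                        VALUES (%s, %s)
                        ON CONFLICT (run_id) DO NOTHING
                        """,
                        (run.get("id"), json.dumps(run)),
                    )
            path.rename(path.with_suffix(".done"))  # mark the file as processed
    conn.close()

if __name__ == "__main__":
    load_exports()  # schedule with cron, e.g. */30 * * * *
```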