Setting JSON schema for structured responses in langchain-go library

I’m currently using langchain-go and trying to determine how to specify a precise JSON schema for my responses. In the openai-go library, it’s straightforward to configure structured JSON output by including a schema in the parameters:

params := openai.ChatCompletionNewParams{
    Model: openai.F(openai.ChatModelGPT4oMini),
    Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
        openai.UserMessage(userInput),
    }),
    MaxTokens: openai.Int(150),
    Temperature: openai.Float(0.5),
    ResponseFormat: openai.F[openai.ChatCompletionNewParamsResponseFormatUnion](
        openai.ResponseFormatJSONSchemaParam{
            Type: openai.F(openai.ResponseFormatJSONSchemaTypeJSONSchema),
            JSONSchema: openai.F(mySchema),
        },
    ),
}

In contrast, with langchain-go, I can only turn on JSON mode without the ability to specify the exact JSON structure:

ctx := context.Background()

model, err := openai.New()
if err != nil {
    return "", fmt.Errorf("failed to create model: %w", err)
}
result, err := llms.GenerateFromSinglePrompt(
    ctx,
    model,
    "Provide a list of dog breeds in JSON format",
    llms.WithJSONMode(),
    llms.WithTemperature(0.1),
)

Is it possible to define the exact JSON structure I require while using langchain-go instead of merely enabling the basic JSON mode?

Been dealing with this exact issue at work and honestly, the back and forth between libraries gets old fast.

I built an automation workflow that handles all the JSON schema validation and response formatting outside the Go code entirely.

Set up a Latenode workflow that takes my prompts, sends them to OpenAI with proper schema enforcement, validates responses, and returns clean structured data to my Go app through a simple API call.

No more worrying about langchain-go limitations or switching between different Go libraries. The workflow handles retries when JSON doesn’t match my schema, error handling, and logs everything for debugging.

I can easily modify the schema or add new validation rules without touching my Go code. Just update the workflow and everything keeps running.

Much cleaner than hacking together prompt engineering or writing custom validation layers in Go.

I’ve hit this same wall using langchain-go in production. The library doesn’t expose the structured output features from OpenAI’s raw API yet. Here’s my workaround though - build a custom chain that puts detailed JSON schema descriptions right in your prompt template, then add validation after you get the response back. Just tell it “Return JSON matching this exact structure: {your schema here}”. It’s not as bulletproof as native schema enforcement, but works pretty well if you handle errors properly. You could also write a wrapper using openai-go directly for structured outputs and keep langchain-go for everything else.
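To illustrate that workaround: embed the expected structure in the prompt text, then validate the reply by unmarshalling it into a typed struct. A minimal sketch; the `Breed`/`BreedList` types and the hard-coded `raw` reply are illustrative assumptions, not part of either library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Breed and BreedList mirror the JSON structure we want back.
type Breed struct {
	Name   string `json:"name"`
	Origin string `json:"origin"`
}

type BreedList struct {
	Breeds []Breed `json:"breeds"`
}

// buildPrompt appends the schema description to the question,
// as the answer suggests.
func buildPrompt(question string) string {
	return question + `

Return JSON matching this exact structure:
{"breeds": [{"name": "string", "origin": "string"}]}`
}

// validate checks the model's raw reply against the expected structure.
func validate(raw string) (*BreedList, error) {
	var out BreedList
	if err := json.Unmarshal([]byte(raw), &out); err != nil {
		return nil, fmt.Errorf("response is not valid JSON: %w", err)
	}
	if len(out.Breeds) == 0 {
		return nil, fmt.Errorf("response JSON has no breeds")
	}
	return &out, nil
}

func main() {
	// In real use, raw would come from llms.GenerateFromSinglePrompt
	// called with buildPrompt(...) and llms.WithJSONMode().
	raw := `{"breeds": [{"name": "Border Collie", "origin": "Scotland"}]}`
	list, err := validate(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(list.Breeds[0].Name)
}
```

It's not bulletproof (the model can still ignore the instruction), which is why the validation step and error handling matter.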

totally agree, langchain-go lacks flexibility here. openai-go is much better for structured output. you could try specifying your desired format directly in the prompt, but you’ll end up tweaking it by hand for now.

I dealt with this same issue for months and finally just forked langchain-go to add custom structured output support. The changes weren’t that hard - I extended the options interface to accept schema parameters and tweaked the client calls to pass through ResponseFormat config. You can also dig into the langchain wrapper’s internals and access the raw OpenAI client to manually set response format before calling. What I’ve been doing lately is using langchain-go for chain management and preprocessing, then switching to direct openai-go calls when I need structured JSON. Just make sure you keep the same context and config between both libraries.