Dynamic language variable in LangChain JS ConversationalRetrievalQAChain template

I’m working with LangChain JS and trying to experiment with ConversationalRetrievalQAChain using custom prompts.

I want to add a dynamic variable named language to my prompt template and pass its value when making the call, but I keep getting this error:

throw new Error(`Missing value for input ${node.name}`);

The weird thing is that when I remove the {language} placeholder and hardcode something like ‘French’ directly in the template, everything works perfectly.

const ASSISTANT_PROMPT = `You are a helpful Assistant that communicates exclusively in {language}. Always respond in {language}. Use the provided context below to answer the user's question.
If you cannot find the answer in the context, simply state that you don't know. Never fabricate an answer.
If the question is unrelated to the provided context, politely explain that you can only help with context-related questions.

{context}

User Question: {question}
Response in {language}:`;

const qaChain = await ConversationalRetrievalQAChain.fromLLM(
    llmModel,
    vectorDB.asRetriever(),
    {
        qaTemplate: ASSISTANT_PROMPT,
        returnSourceDocuments: true,
    }
);

let output = "";
const result = await qaChain.call({
    question: "What are the main topics discussed?",
    chat_history: [],
    language: "Spanish"
}, [{
    handleLLMNewToken(chunk) {
        output += chunk;
        console.clear();
        console.log(output);
    }
}]);

I expected to be able to pass a custom language parameter to make the assistant respond in different languages dynamically. I’ve tried various approaches to set up the prompt template for ConversationalRetrievalQAChain but it seems to only accept string templates.

Had this exact problem! ConversationalRetrievalQAChain won’t automatically pass custom variables to your template. Don’t pass the string directly to qaTemplate - use a proper PromptTemplate object instead. Create it with PromptTemplate.fromTemplate() and make sure you specify all your input variables, including ‘language’.

The problem is that ConversationalRetrievalQAChain doesn’t know about your custom variables unless you declare them upfront in the prompt template - you can’t just toss them into the call parameters. I hit this exact issue building multi-language support last year.

You need to create a prompt template that registers your language variable. Import PromptTemplate from langchain and wrap your ASSISTANT_PROMPT in a PromptTemplate constructor with inputVariables: ['context', 'question', 'language']. Pass that template object to qaTemplate instead of the raw string. The chain has to know about all variables when it initializes, not when it runs.

You could also just replace the language placeholder in your prompt string before feeding it to the chain, but that’s messier than doing it right with template registration.
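To see why declaring variables upfront matters, here is a minimal pure-JS sketch of the idea (this is NOT LangChain’s actual implementation - just an illustration of how a template that knows its input variables can fail fast, which is where the “Missing value for input” error comes from):

```javascript
// Minimal sketch of upfront variable declaration. A template formats only
// the variables it has registered, and throws if a registered one is missing.
function makeTemplate(template, inputVariables) {
  return {
    format(values) {
      return template.replace(/\{(\w+)\}/g, (match, name) => {
        if (!inputVariables.includes(name)) return match; // undeclared: left as-is
        if (!(name in values)) {
          throw new Error(`Missing value for input ${name}`);
        }
        return values[name];
      });
    },
  };
}

const tmpl = makeTemplate("Respond in {language}: {question}", [
  "language",
  "question",
]);

console.log(tmpl.format({ language: "Spanish", question: "Hola?" }));
// → "Respond in Spanish: Hola?"
// tmpl.format({ question: "Hola?" }) would throw
// "Missing value for input language" - the same failure mode you’re seeing.
```

The takeaway: if the chain’s template never registered `language`, the value you pass at call time has nowhere to go.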

ConversationalRetrievalQAChain has fixed input variables and doesn’t recognize your custom language parameter.

I hit the same wall building multilingual chatbots. The chain expects specific inputs and ignores everything else you pass in.

I solved it by building a custom flow that’s way cleaner. Instead of fighting LangChain’s rigid setup, I made a workflow that:

  1. Takes user input and language preference
  2. Builds the prompt dynamically with the language variable
  3. Calls the LLM with proper formatting
  4. Returns response in the requested language

You get full control over prompt templating and variable injection. Easy to add more dynamic variables later without framework headaches.
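The steps above can be sketched in plain JS without any framework. Note that `callLLM` here is a hypothetical stub standing in for your real model client - the point is that prompt construction and variable injection live entirely in your own code:

```javascript
// Step 1-2: take user input + language preference and build the prompt.
function buildPrompt({ question, language, context }) {
  return [
    `You are a helpful Assistant that communicates exclusively in ${language}.`,
    `Use the provided context below to answer the user's question.`,
    ``,
    context,
    ``,
    `User Question: ${question}`,
    `Response in ${language}:`,
  ].join("\n");
}

// Hypothetical stub - replace with your actual LLM client call.
async function callLLM(prompt) {
  return `(model response to: ${prompt.slice(0, 40)}...)`;
}

// Step 3-4: call the LLM and return the response.
async function answer({ question, language, context }) {
  const prompt = buildPrompt({ question, language, context });
  return callLLM(prompt);
}
```

Adding another dynamic variable later is just another parameter to `buildPrompt`.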

I built this using Latenode’s visual workflow builder. It handles prompt construction, LLM calls, and response formatting automatically. Way simpler than hacking around ConversationalRetrievalQAChain’s limits.

You’re hitting this because ConversationalRetrievalQAChain creates its own prompt templates and won’t merge your custom variables with the built-in ones. It only sees predefined inputs like question and chat_history.

I’ve found a few ways around this:

  1. Extend the qaTemplate config to include your custom variables - create a PromptTemplate instance instead of just passing a string, but you’ll also need to modify how the chain handles inputs.
  2. Preprocess the template string before you initialize the chain. Just replace {language} with the actual value during setup rather than execution. You’ll need a new chain instance for each language, which sucks but works.
  3. Subclass ConversationalRetrievalQAChain and override the _call method to handle custom variables. More work upfront, but it gives you full control. This was the cleanest fix I found.