I’m having trouble with my LangChain setup where it keeps showing the exact same prompt template every time I create a new chain. The template contains some weird hardcoded examples about legal agreements and Michael Jackson that I never added. I expected it to be empty or use my custom prompts instead.
I’ve tried restarting my Python kernel multiple times but the issue persists. Here’s my code:
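It’s essentially this (simplified - language_model and vector_store are my already-configured LLM and vector store):

from langchain.chains import RetrievalQAWithSourcesChain

qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
    language_model,
    chain_type="map_reduce",
    retriever=vector_store.as_retriever(),
)

# this always dumps the same template with the legal/Michael Jackson examples
print(qa_chain)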
When I print this chain object, it always outputs the same configuration with a fixed template about contract law and random content examples. How can I make it use a clean template or my own custom one?
That hardcoded prompt template is just LangChain’s default for RetrievalQAWithSourcesChain. It’s totally normal - the framework ships with pre-built templates, and those legal/Michael Jackson snippets are few-shot examples included to guide the model’s responses. Want to use your own template? Pass a custom prompt when you create the chain. One gotcha: with chain_type="map_reduce" the kwargs are question_prompt (and optionally combine_prompt), not a single prompt - that only works for the "stuff" chain type:

from langchain.prompts import PromptTemplate

custom_prompt = PromptTemplate(
    template="Your custom template here with {context} and {question} variables",
    input_variables=["context", "question"],
)

qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
    language_model,
    chain_type="map_reduce",
    retriever=vector_store.as_retriever(),
    # map_reduce expects question_prompt / combine_prompt, not "prompt"
    chain_type_kwargs={"question_prompt": custom_prompt},
)

Those default templates aren’t bugs - they’re there to provide structure for common use cases. You just override them when you want different behavior.
same thing happened to me lol. those weird examples are built into langchain’s default templates - you didn’t mess up. quick fix: just switch to basic RetrievalQA instead of the WithSources version if you don’t need citations. its default prompts are simpler and way easier to override.
Yeah, that’s totally normal with LangChain’s RetrievalQAWithSourcesChain. The framework comes with default prompt templates that have those hardcoded examples baked in as few-shot reference material for the LLM. When you print the chain, you’re just seeing its internal config with these defaults. The bigger issue is that RetrievalQAWithSourcesChain has a more rigid template structure than the basic RetrievalQA chain. If you don’t actually need source citations, just switch to standard RetrievalQA - you’ll get way more flexibility. If you want custom templates with the WithSources version, make sure your template has all the variables the chain expects: its “stuff” chain fills {summaries} rather than {context}, and for map_reduce the question prompt needs {context} and {question} while the combine prompt needs {summaries} and {question}. LangChain validates these internally and will raise an error if a required placeholder is missing.
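Rough sketch of the plain RetrievalQA route, reusing the language_model and vector_store names from your snippet - its “stuff” chain uses {context}, so a prompt like yours just works:

from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

# plain RetrievalQA fills {context}, so no variable-name surprises
prompt = PromptTemplate(
    template="Use this context to answer:\n{context}\n\nQuestion: {question}\nAnswer:",
    input_variables=["context", "question"],
)

qa = RetrievalQA.from_chain_type(
    language_model,
    chain_type="stuff",
    retriever=vector_store.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
)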
Been dealing with LangChain quirks like this for years. Those default templates are hardcoded and honestly pretty annoying when you’re trying to build something clean.
Skip wrestling with LangChain’s template system - automate the whole chain creation instead. Build a workflow that generates custom prompts, handles different document types, and A/B tests templates.
I built something similar last month. The automation pulls docs from various sources, creates vector embeddings, and builds QA chains with custom prompts based on content type. No more hardcoded Michael Jackson examples.
It handles everything from document preprocessing to chain config. You can add logic to optimize prompts based on response quality too.
Way cleaner than manually configuring each chain and fighting LangChain’s defaults every time.
Check out Latenode for building this kind of automated pipeline: https://latenode.com
Hit this exact issue 6 months ago building a document QA system. Super frustrating - thought I’d broken something.
LangChain ships its default templates as module-level constants inside each chain’s package. Those Michael Jackson and legal-agreement snippets are few-shot examples hardcoded into the default combine prompt for the with-sources chains.
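You can print the default yourself to confirm - this module path is from the legacy langchain package, so it may differ in your version:

from langchain.chains.qa_with_sources import map_reduce_prompt

# the Michael Jackson / legal-agreement examples live in the combine prompt
print(map_reduce_prompt.COMBINE_PROMPT.template)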
Here’s what fixed it:
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.prompts import PromptTemplate

# Create your own template first - the WithSources "stuff" chain fills a
# {summaries} variable, not {context} like plain RetrievalQA
my_template = """
Use the following pieces of context to answer the question.
Context: {summaries}
Question: {question}
Answer:"""

my_prompt = PromptTemplate(
    template=my_template,
    input_variables=["summaries", "question"]
)

# Then build the chain with your template
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
    language_model,
    chain_type="stuff",  # try stuff instead of map_reduce
    retriever=vector_store.as_retriever(),
    chain_type_kwargs={"prompt": my_prompt}
)
Also try “stuff” chain type if your docs aren’t massive. Way cleaner output and easier to debug.
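Then just call it - the WithSources chain takes a “question” key and returns “answer” plus “sources” by default:

result = qa_chain({"question": "Which law governs the agreement?"})
print(result["answer"])
print(result["sources"])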