I’m trying to update my LangChain code to stop seeing deprecation warnings. My current setup works correctly but keeps printing the warnings.
Here’s the working version of my code:
from langchain_community.llms import HuggingFacePipeline
from transformers import AutoTokenizer
import transformers
import torch

model_name = "microsoft/DialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(model_name)

text_pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="auto",
    max_length=800,
    do_sample=True,
    top_k=5,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

my_llm = HuggingFacePipeline(pipeline=text_pipeline, model_kwargs={"temperature": 0.1})
from langchain.prompts import PromptTemplate

my_prompt = PromptTemplate(
    input_variables=["movie_title"],
    template="Give me a brief review of the movie {movie_title}",
)
This section works using the old method:
from langchain.chains import LLMChain
my_chain = LLMChain(llm=my_llm, prompt=my_prompt, verbose=True)
result = my_chain.run("Inception")
print(result)
However, when I switch to the new format to eliminate deprecation warnings:
my_chain = my_prompt | my_llm
result = my_chain.invoke("Interstellar")
print(result)
I encounter this error:
TypeError: Expected a Runnable, callable or dict. Instead got an unsupported type: <class 'str'>
I’ve tried adding StrOutputParser, but the same error keeps appearing. It seems specific to HuggingFacePipeline, since similar code works fine with HuggingFaceEndpoint. How can I resolve this?
Check your LangChain version - this error usually happens when there’s a mismatch between HuggingFacePipeline and LCEL. I hit this same issue mixing an old community package with newer core components. HuggingFacePipeline doesn’t fully support the Runnable protocol in some versions. Skip the workarounds and just update langchain-community:
pip install --upgrade langchain-community langchain-core
Your original syntax should work fine after that:
my_chain = my_prompt | my_llm
result = my_chain.invoke({"movie_title": "Interstellar"})
I’ve seen this TypeError vanish after updating dependencies. HuggingFacePipeline got major LCEL compatibility updates in recent releases. If you’re stuck on older versions for compatibility, the RunnableLambda approach works, but updating is cleaner long-term.
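If you want to confirm which versions you’re actually running before upgrading, a quick check with the standard library works (no extra dependencies, and it won’t crash if a package is missing):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Return a mapping of package name -> installed version string, or None if absent."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None  # not installed in this environment
    return found

print(installed_versions(["langchain-core", "langchain-community"]))
```

Anything that prints as None (or anything clearly older than your langchain-core version) is a candidate for the upgrade above.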
Wrap your Hugging Face pipeline with RunnablePassthrough first. I had the same TypeError last week - in some versions HuggingFacePipeline doesn’t implement the Runnable interface properly for LCEL chains. Try chain = my_prompt | RunnablePassthrough() | my_llm instead of the lambda approach.
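For context on where that exact error message comes from: when you use |, LCEL coerces anything that isn’t already a Runnable - callables and dicts get wrapped, and anything else raises this TypeError. Here’s a minimal plain-Python mock of that coercion step (illustrative only, not LangChain’s real implementation):

```python
class Runnable:
    """Toy stand-in for LangChain's Runnable protocol (illustrative only)."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Right-hand side of | gets coerced before composition.
        nxt = coerce(other)
        return Runnable(lambda x: nxt.invoke(self.invoke(x)))

    def __ror__(self, other):
        # Left-hand side of | gets coerced too.
        return coerce(other) | self

def coerce(obj):
    """Mimics LCEL coercion: Runnables pass through, callables are wrapped,
    everything else (e.g. a bare string) raises the TypeError from the question."""
    if isinstance(obj, Runnable):
        return obj
    if callable(obj):
        return Runnable(obj)
    raise TypeError(
        "Expected a Runnable, callable or dict. "
        f"Instead got an unsupported type: {type(obj)}"
    )

prompt = Runnable(lambda d: f"Give me a brief review of the movie {d['movie_title']}")
chain = prompt | str.upper  # a plain callable is coerced automatically
print(chain.invoke({"movie_title": "Inception"}))
# "hello" | prompt would raise the TypeError from the question
```

So if a raw string ends up on either side of a |, you get exactly the error in the question, regardless of which LLM wrapper is involved.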
Skip the manual fixes and automate the whole pipeline integration instead.
I dealt with similar issues migrating legacy LangChain code at work; patching each compatibility problem one by one turns into a maintenance nightmare. Build an automation layer that handles input/output transformations in one place - input preprocessing, model inference, output formatting - and you stop caring whether you’re on HuggingFacePipeline, HuggingFaceEndpoint, or some other implementation.
I’ve built movie review systems where automation handles model switching, input formatting, and error recovery, so a TypeError doesn’t surface every time an API changes. What you want is a workflow engine that adapts to different model types without rewriting code every time LangChain updates its interfaces.
Check out Latenode for building automated LLM pipelines. It handles the integration complexity so you can focus on business logic instead of debugging input formatters: https://latenode.com
The problem is HuggingFacePipeline’s tokenizer config with DialoGPT. I’ve hit this before - DialoGPT expects conversation format, but your setup treats it like a regular completion model. The pad_token probably isn’t set right, which breaks the runnable interface.
Try adding tokenizer.pad_token = tokenizer.eos_token before creating your pipeline. DialoGPT also works way better with chat templates than single prompts.
Honestly, for movie reviews you’d be better off using GPT2 instead. Or if you want to stick with DialoGPT, restructure your prompt to match its conversation format. The TypeError goes away once the tokenizer config matches what the model expects.
Had the exact same problem switching from LLMChain to LCEL with HuggingFacePipeline. It’s how the pipeline handles input formatting internally. This fixed it for me - convert the input to the dictionary format the pipeline expects:
from langchain_core.runnables import RunnableLambda

def format_input(x):
    # Coerce a bare string into the dict the prompt template expects.
    if isinstance(x, str):
        return {"movie_title": x}
    return x

my_chain = RunnableLambda(format_input) | my_prompt | my_llm
result = my_chain.invoke("Interstellar")
This structures the input properly before it hits the prompt template. HuggingFacePipeline is way pickier about input types than other LLM implementations - that’s why you didn’t see this with HuggingFaceEndpoint. The wrapper function bridges the gap between your string input and what the pipeline wants.
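Since the normalization is plain Python, you can sanity-check it on its own before wiring it into the chain (no LangChain import needed):

```python
def format_input(x):
    """Coerce a bare string into the dict shape the prompt template expects."""
    if isinstance(x, str):
        return {"movie_title": x}
    return x

# Both invocation styles now produce identical prompt inputs.
print(format_input("Interstellar"))                   # {'movie_title': 'Interstellar'}
print(format_input({"movie_title": "Interstellar"}))  # dict passes through unchanged
```

Once both call styles collapse to the same dict, the rest of the chain behaves identically no matter how callers invoke it.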
This is simpler than everyone’s making it out to be. Your chain setup is fine; the problem is what you’re passing to invoke.
I hit this exact issue when migrating our chatbot last year. With the pipe operator, the prompt template receives whatever you pass to invoke directly, so it needs its input variables as a dict rather than a bare string.
Try this:
from langchain_core.output_parsers import StrOutputParser
output_parser = StrOutputParser()
my_chain = my_prompt | my_llm | output_parser
# Use invoke with a dict, not a string
result = my_chain.invoke({"movie_title": "Interstellar"})
print(result)
The key is passing a dictionary to invoke instead of a string. Your prompt template wants the “movie_title” variable, so give it exactly that.
HuggingFaceEndpoint works differently because it’s got better LCEL integration built in. HuggingFacePipeline is more bare bones and needs explicit parameter mapping.
This worked for our production systems and handles the deprecated LLMChain replacement cleanly.
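To see why the dict matters, here’s a plain-Python stand-in for what the prompt template does with its input (a hypothetical render_prompt, not LangChain’s actual code):

```python
TEMPLATE = "Give me a brief review of the movie {movie_title}"

def render_prompt(variables):
    """Mimics PromptTemplate.invoke: requires a dict keyed by the template's
    input variables; anything else is rejected up front."""
    if not isinstance(variables, dict):
        raise TypeError(f"Expected a dict of input variables, got {type(variables)}")
    return TEMPLATE.format(**variables)

print(render_prompt({"movie_title": "Interstellar"}))
# render_prompt("Interstellar") would raise TypeError, just like the LCEL chain
```

With LLMChain, .run("Inception") worked because the chain mapped a lone positional argument onto the single input variable for you; the LCEL pipe does no such mapping, which is why the dict is now on you.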