Hub pull function returning false instead of ChatPromptTemplate object

I’m working on a Langchain agent implementation and running into an issue where the hub pull function isn’t functioning as expected. When I try to pull a prompt template from the hub, it returns false instead of the actual ChatPromptTemplate object.

Here’s my code:

import { GoogleSearchAPI } from "@langchain/community/tools/google_search";
import { ChatOpenAI } from "@langchain/openai";
import { pull } from "langchain/hub";
import { createOpenAIFunctionsAgent } from "langchain/agents";
import { AgentExecutor } from "langchain/agents";
import {
    ChatPromptTemplate,
    PromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
} from "@langchain/core/prompts";
import {
    AIMessage,
    HumanMessage,
    SystemMessage,
} from "@langchain/core/messages";

const searchAPI = new GoogleSearchAPI();

const searchResult = await searchAPI.invoke("current weather in New York");

console.log(searchResult);

const availableTools = [searchAPI];

const chatModel = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0,
});

const templatePrompt = await pull<ChatPromptTemplate>(
  "hwchase17/openai-functions-agent"
);
console.log("Template Results");
console.log(templatePrompt);

const newAgent = await createOpenAIFunctionsAgent({
    llm: chatModel,
    tools: availableTools,
    prompt: templatePrompt,
});

const executor = new AgentExecutor({
    agent: newAgent,
    tools: availableTools,
});

const response = await executor.invoke({
    input: "hi there!",
});

console.log(response);

The error I get is:

Template Results
false
file:///Users/developer/Projects/langchain-test/node_modules/langchain/dist/agents/openai_functions/index.js:218
    if (!templatePrompt.inputVariables.includes("agent_scratchpad")) {
                               ^

TypeError: Cannot read properties of undefined (reading 'includes')
    at createOpenAIFunctionsAgent (file:///Users/developer/Projects/langchain-test/node_modules/langchain/dist/agents/openai_functions/index.js:218:32)
    at file:///Users/developer/Projects/langchain-test/main.js:43:21
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

Node.js v20.10.0

The pull function returns false instead of a valid prompt template object. Has anyone encountered this before? I can’t figure out why the hub pull isn’t working properly.

Yeah, this false return from hub pull is way too common. Hit this exact issue running a production agent - turns out it was DNS problems with the Langchain hub endpoint. The pull function’s garbage at handling network failures and just returns false instead of throwing a proper error you can actually catch.

Here’s what fixed it for me: check your DNS settings first, then try switching networks temporarily to test connectivity. Corporate DNS loves blocking or redirecting hub requests. If you’re in Docker or a VM, that’s probably your culprit - I’ve seen those cause silent failures all the time.

Also verify your Node version. Had weird hub pull issues on Node 18.x that disappeared after upgrading to 20.x. The Langchain hub client’s dependencies are picky and hate older runtimes.

Hit this exact issue 6 months back building an internal chatbot. When hub pull returns false, it’s usually a silent timeout or network hiccup.

I added retry logic with exponential backoff - sometimes the hub just needs a couple tries:

const pullWithRetry = async (templateName, maxRetries = 3) => {
  for (let i = 0; i < maxRetries; i++) {
    try {
      const result = await pull(templateName);
      if (result) return result; // got a real template object back
    } catch (err) {
      // a thrown network error shouldn't abort the remaining retries
      console.warn(`hub pull attempt ${i + 1} failed:`, err);
    }
    // exponential backoff: 1s, 2s, 4s, ...
    await new Promise(resolve => setTimeout(resolve, 1000 * Math.pow(2, i)));
  }
  return null;
};

After dealing with this flakiness in prod repeatedly, I just made a local template matching the hub structure:

import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

const fallbackTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
  // agent_scratchpad holds a list of messages at runtime, so it needs a
  // MessagesPlaceholder, not a plain assistant message with a string variable
  new MessagesPlaceholder("agent_scratchpad"),
]);

Try the retry first, fall back to local if it bombs. Your agents stay running even when the hub craps out. I’ve seen this false return way too much to trust it for anything mission-critical.
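To show what I mean by "retry first, fall back to local", here's a rough sketch of the combined flow. It's deliberately generic so it runs without langchain installed - `loadWithFallback` and its parameters are names I made up; in your code `loader` would be `() => pull<ChatPromptTemplate>("hwchase17/openai-functions-agent")` and `fallback` would be the locally built template:

```typescript
// Hypothetical helper: retry a flaky async loader with exponential backoff,
// then fall back to a locally constructed value if the remote source never
// delivers. Treats both a falsy result and a thrown error as a failed attempt.
async function loadWithFallback<T>(
  loader: () => Promise<T | false | null>,
  fallback: T,
  maxRetries = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const result = await loader();
      if (result) return result; // got a real template back
    } catch {
      // network error: fall through and retry
    }
    if (attempt < maxRetries - 1) {
      // exponential backoff: baseDelayMs, then 2x, 4x, ...
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt),
      );
    }
  }
  return fallback; // hub never answered; keep the agent alive with the local copy
}
```

Either way your agent construction always receives a usable template instead of `false`.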

Classic hub connectivity issue. That false return from pull means the request failed silently instead of throwing an error. Wrap your pull call in a try-catch block to catch any exceptions that might be getting swallowed, and double-check you're using the right hub identifier - templates sometimes get moved or renamed. Add a timeout to your pull request, and verify whether that template needs authentication. Quick test: pull a different public template first to see if it's your setup or just that specific prompt. If nothing works, inspect the network requests in your dev environment to see what's actually happening when pull runs.
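For the timeout part, one way to do it (since pull doesn't take a timeout option as far as I know) is to race the pull against a timer. `withTimeout` is a made-up helper, not a langchain API - you'd pass it `pull<ChatPromptTemplate>("hwchase17/openai-functions-agent")`:

```typescript
// Hypothetical helper: reject if `promise` hasn't settled within `ms`
// milliseconds, using Promise.race. The timer is cleared either way so it
// doesn't keep the process alive.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```

A hanging hub request then surfaces as a catchable timeout error instead of a silent stall.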

hit the same bug a few weeks back. langchain hub’s unreliable - sometimes it works, sometimes just returns false with no error. i fixed it by adding a check after the pull and manually creating the template when it fails. just hardcode the system prompt with the agent_scratchpad variable. way more reliable than depending on external calls every time.

Had this exact problem last month with the same setup. Your pull function’s returning false because it can’t reach the Langchain hub - usually network or auth issues. First, check your connection and try pulling a different template to see if it’s just that specific one being down. I fixed it by adding error handling around the pull function and creating a fallback. When the hub pull failed, I just made a local version of the OpenAI functions agent template. You can build your own ChatPromptTemplate with the input variables you need, including that “agent_scratchpad” from your error. Way more control and you’re not stuck waiting on external fetches. Also check if you’re behind a corporate firewall blocking the hub requests.
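To make the "check after the pull" concrete, here's a minimal guard I'd put between pull and createOpenAIFunctionsAgent. It's a sketch, not a langchain API - `isUsableAgentPrompt` and the `PromptLike` shape are my own names, and the check mirrors the `inputVariables.includes("agent_scratchpad")` lookup from the stack trace:

```typescript
// Minimal shape of what the agent constructor actually reads off the prompt.
interface PromptLike {
  inputVariables: string[];
}

// Returns true only if the pulled value is a real object exposing an
// inputVariables array that contains "agent_scratchpad" - i.e. it would
// survive the check that currently crashes with the TypeError.
function isUsableAgentPrompt(value: unknown): value is PromptLike {
  return (
    typeof value === "object" &&
    value !== null &&
    Array.isArray((value as PromptLike).inputVariables) &&
    (value as PromptLike).inputVariables.includes("agent_scratchpad")
  );
}
```

If the guard fails you can log the raw pulled value and swap in your local fallback template instead of letting the TypeError take the process down.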

Been dealing with this exact headache for years. The hub pull fails silently all the time - network issues, rate limits, or just the hub being flaky.

I stopped wrestling with external dependencies and automated the whole prompt management flow using Latenode. Built a workflow that pulls templates from multiple sources, validates them, and serves them locally with fallbacks.

When the hub’s down, my Latenode setup automatically switches to cached versions or generates whatever prompt structure I need. No more false returns breaking agent creation.

For your case, I’d set up a Latenode scenario that monitors hub availability and keeps local copies of common templates like the OpenAI functions agent one. Plus it logs failed pulls so you can track patterns.

Way cleaner than wrapping everything in try-catch blocks or hardcoding fallback templates. Your agents keep working regardless of external service issues.