Initialization issue with DataAPI client in Next.js app: fetch-h2 client loading error with httpOptions setting

I’m running into a server error when using the DataAPI client in my Next.js RAG app. The error says it fails to load the fetch-h2 client and suggests setting httpOptions.client to 'fetch'. Below is the error message I’m receiving:

{
  "props": {
    "pageProps": {
      "statusCode": 500
    }
  },
  "page": "/_error",
  "query": {
    "__NEXT_PAGE": "/api/messages"
  },
  "buildId": "development",
  "isFallback": false,
  "err": {
    "name": "Error",
    "source": "server",
    "message": "Error loading the fetch-h2 client for the DataAPIClient... try setting httpOptions.client to 'fetch'"
  }
}

Here’s the API route that appears to be problematic:

import { NextRequest, NextResponse } from 'next/server';
import { getDataStore } from '@/lib/database';
import { AIMessage, HumanMessage } from '@langchain/core/messages';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { ChatOpenAI } from '@langchain/openai';
import { Redis } from '@upstash/redis';
import { Ratelimit } from '@upstash/ratelimit';
import { LangChainStream, StreamingTextResponse } from 'ai';
import { UpstashRedisCache } from '@langchain/community/caches/upstash_redis';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { createHistoryAwareRetriever } from 'langchain/chains/history_aware_retriever';
import { createRetrievalChain } from 'langchain/chains/retrieval';
import https from 'https';

const rateLimiter = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.fixedWindow(10, '60s'),
});

export async function POST(request: NextRequest) {
  try {
    const clientIP = request.ip ?? 'unknown';
    const { success } = await rateLimiter.limit(clientIP);

    if (!success) {
      return new Response('Rate limit exceeded', { status: 429 });
    }

    const requestBody = await request.json();
    const userMessages = requestBody.messages;

    const previousMessages = userMessages
      .slice(0, -1)
      .map((msg) =>
        msg.role === 'user'
          ? new HumanMessage(msg.content)
          : new AIMessage(msg.content)
      );

    const latestMessage = userMessages[userMessages.length - 1].content;

    const redisCache = new UpstashRedisCache({
      client: Redis.fromEnv({
        agent: new https.Agent({ keepAlive: true }),
      }),
    });

    const { stream, handlers } = LangChainStream();

    const mainChatModel = new ChatOpenAI({
      apiKey: process.env.OPENAI_API_KEY,
      modelName: 'gpt-3.5-turbo',
      streaming: true,
      callbacks: [handlers],
      cache: redisCache,
    });

    const queryModel = new ChatOpenAI({
      apiKey: process.env.OPENAI_API_KEY,
      modelName: 'gpt-3.5-turbo',
      cache: redisCache,
    });

    const vectorRetriever = (await getDataStore()).asRetriever();

    const queryTemplate = ChatPromptTemplate.fromMessages([
      new MessagesPlaceholder('chat_history'),
      ['user', '{input}'],
      [
        'user',
        'Based on our conversation, create a search query to find relevant information for the current question.',
      ],
    ]);

    const contextRetriever = await createHistoryAwareRetriever({
      llm: queryModel,
      retriever: vectorRetriever,
      rephrasePrompt: queryTemplate,
    });

    const systemPrompt = ChatPromptTemplate.fromMessages([
      [
        'system',
        'You are an AI assistant for BetSmart platform. Answer questions using the provided context.',
      ],
      new MessagesPlaceholder('chat_history'),
      ['user', '{input}'],
    ]);

    const documentChain = await createStuffDocumentsChain({
      llm: mainChatModel,
      prompt: systemPrompt,
    });

    const fullChain = await createRetrievalChain({
      combineDocsChain: documentChain,
      retriever: contextRetriever,
    });

    // Not awaited: tokens are streamed back through the LangChainStream handlers
    fullChain.invoke({
      input: latestMessage,
      chat_history: previousMessages,
    });

    return new StreamingTextResponse(stream);
  } catch (err) {
    console.error('API Error:', err);
    return NextResponse.json({ error: 'Server error occurred' }, { status: 500 });
  }
}

Finally, here’s the setup for my database:

import { DataAPIClient } from '@datastax/astra-db-ts';
import { AstraDBVectorStore } from '@langchain/community/vectorstores/astradb';
import { OpenAIEmbeddings } from '@langchain/openai';

const dbEndpoint = process.env.ASTRA_DB_ENDPOINT || '';
const authToken = process.env.ASTRA_DB_APPLICATION_TOKEN || '';
const collectionName = process.env.ASTRA_DB_COLLECTION || '';

if (!authToken || !dbEndpoint || !collectionName) {
  throw new Error('Missing required environment variables for Astra DB');
}

export async function getDataStore() {
  return AstraDBVectorStore.fromExistingIndex(
    new OpenAIEmbeddings({ modelName: 'text-embedding-3-small' }),
    {
      token: authToken,
      endpoint: dbEndpoint,
      collection: collectionName,
      collectionOptions: {
        vector: {
          dimension: 1536,
          metric: 'cosine',
        },
      }
    }
  );
}

const apiClient = new DataAPIClient(authToken);
const database = apiClient.db(dbEndpoint);

export async function getCollection() {
  return database.collection(collectionName);
}

What confuses me is that this same code works flawlessly in another project of mine. I’m using Next.js with the Vercel AI SDK (the 'ai' package) for streaming, Upstash for caching, and DataStax Astra DB for vector embeddings. Has anyone faced this fetch-h2 client loading issue before? Any tips on how to resolve it would be greatly appreciated.

sounds like a runtime env issue. the Next.js Edge runtime doesn’t support all the Node modules fetch-h2 needs. even if your API route isn’t Edge, middleware or Edge functions can cause conflicts. check your next.config.js for runtime configs. also try deleting node_modules and package-lock.json, then reinstalling; that sometimes fixes conflicting dependencies.
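If something is opting the route into the Edge runtime, one way to pin it to Node.js (assuming the App Router layout implied by the next/server imports, and a route file path guessed from the error's "/api/messages") is Next.js's route segment config at the top of the route file:

```typescript
// app/api/messages/route.ts (path assumed from the error output)
// Pin this route to the Node.js runtime so fetch-h2 can use Node's built-in
// http2 module, which the Edge runtime does not provide.
export const runtime = 'nodejs';
```

This is a sketch, not a guaranteed fix: it only helps if the route was actually being compiled for the Edge runtime.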

I’ve hit this exact issue before. It’s usually a compatibility problem between your projects’ environments. The fetch-h2 dependency that the DataAPI client uses doesn’t play well with Next.js server environments, especially when there are version differences in your Node runtime or packages.

The quickest fix is what the error suggests: set httpOptions.client to 'fetch' when you initialize your DataAPIClient:

const apiClient = new DataAPIClient(authToken, { httpOptions: { client: 'fetch' } });

You can also pass it through the AstraDBVectorStore config. This usually fixes it without any performance hit.

Why does it work in your other project? Probably different package or Node.js versions. Compare the package.json files between projects and look for version differences in @datastax/astra-db-ts and related dependencies.

Had this exact problem last month in production. The fetch-h2 issue pops up when Next.js can’t load the native HTTP/2 client in certain server environments.

Beyond Samuel’s httpOptions fix, check if you’re running this on serverless (Vercel, AWS Lambda). These environments sometimes strip out native modules that fetch-h2 needs.
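To confirm which runtime a route actually executes in, here’s a small diagnostic sketch: the Edge runtime defines a global `EdgeRuntime` string, while plain Node.js does not (the helper name is mine, not from any library).

```typescript
// Returns true when running in an Edge runtime, which defines a global
// `EdgeRuntime` string; under plain Node.js it is undefined.
function isEdgeRuntime(): boolean {
  return typeof (globalThis as { EdgeRuntime?: string }).EdgeRuntime === 'string';
}

// Log once at the top of the handler to see where the route really runs.
console.log(isEdgeRuntime() ? 'Edge runtime' : 'Node.js runtime');
```

If this logs "Edge runtime" for your messages route, that alone explains the fetch-h2 load failure.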

I see you’re using https.Agent in your Redis config but not setting any HTTP client options for DataAPI. Try adding httpOptions to both your DataAPIClient and AstraDBVectorStore:

const apiClient = new DataAPIClient(authToken, {
  httpOptions: { client: 'fetch' }
});

// And in getDataStore:
return AstraDBVectorStore.fromExistingIndex(
  new OpenAIEmbeddings({ modelName: 'text-embedding-3-small' }),
  {
    token: authToken,
    endpoint: dbEndpoint,
    collection: collectionName,
    httpOptions: { client: 'fetch' },
    collectionOptions: {
      vector: {
        dimension: 1536,
        metric: 'cosine',
      },
    }
  }
);

Your working project might be using an older datastax package version that defaults to fetch instead of trying fetch-h2 first.