How do I properly execute asynchronous operations in LangChain?

I’m working on a simple chain that evaluates text difficulty using language proficiency levels. I want to compare the performance between synchronous chain.invoke and asynchronous chain.ainvoke methods, but I’m running into issues with the async version.

Can someone help me figure out what’s going wrong?

import os
import asyncio
from time import time

import openai
from dotenv import load_dotenv, find_dotenv
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

_ = load_dotenv(find_dotenv())
openai.api_key = os.getenv('OPENAI_API_KEY')

model = ChatOpenAI(temperature=0)

template = ChatPromptTemplate.from_template(
    'Evaluate the following text and assign a language proficiency level '
    'according to CEFR standards. Return only the level: {input_text}',
)
processor = LLMChain(llm=model, prompt=template)

sample_texts = [
    {'input_text': 'Bonjour, je suis étudiant.'},
    {'input_text': 'Comment allez-vous aujourd\'hui?'},
    {'input_text': 'J\'aime beaucoup jouer au tennis le weekend.'}
]

start_time = time()
result_sync = processor.invoke(sample_texts)
print(result_sync)
print(f"sync time: {time() - start_time:.2f} seconds")
print()

start_time = time()
result_async = processor.ainvoke(sample_texts)
print(result_async)
print(f"async time: {time() - start_time:.2f} seconds")

The output shows that the async method returns a coroutine object instead of the actual results, and I get a warning about the coroutine never being awaited. How should I properly handle async methods in langchain?

You’re missing the key step: you need to actually execute the async code. The ainvoke method returns a coroutine that has to be awaited in an async context, but you’re calling it from synchronous code. I’ve run into this same LangChain async problem before.

Here’s the fix: create separate functions for each approach and use asyncio.run() to bridge sync and async. Also, your current test won’t show async benefits since you’re processing the whole sample_texts array as one batch. Instead, invoke each text separately and use asyncio.gather() to run them concurrently.

The real performance gains happen when you’ve got multiple independent API calls, not batch processing. And double-check that your OpenAI client actually supports async - otherwise you’ll still get blocking calls under the hood.
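The pattern described above can be sketched end to end. To keep it runnable without an API key, `fake_ainvoke` below is a hypothetical stand-in that simulates one network-bound `ainvoke` call; in real code you would await `processor.ainvoke(text)` instead:

```python
import asyncio
from time import time

# Hypothetical stand-in for processor.ainvoke: simulates one
# network-bound API call that takes about 0.2 seconds.
async def fake_ainvoke(payload):
    await asyncio.sleep(0.2)
    return f"evaluated: {payload['input_text']}"

sample_texts = [
    {'input_text': 'Bonjour, je suis étudiant.'},
    {'input_text': "Comment allez-vous aujourd'hui?"},
    {'input_text': "J'aime beaucoup jouer au tennis le weekend."},
]

async def run_concurrent():
    # Schedule one call per text, then wait for all of them together.
    tasks = [fake_ainvoke(text) for text in sample_texts]
    return await asyncio.gather(*tasks)

start = time()
results = asyncio.run(run_concurrent())
elapsed = time() - start
print(results)
# The three 0.2 s calls overlap, so the total is ~0.2 s, not ~0.6 s.
print(f"concurrent time: {elapsed:.2f} seconds")
```

Because the calls overlap instead of running back to back, the wall-clock time stays close to the latency of a single call.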

You need to await the async call. Right now you’re just creating a coroutine object without running it.

Try this:

async def run_async_test():
    start_time = time()
    result_async = await processor.ainvoke(sample_texts)
    print(result_async)
    print(f"async time: {time() - start_time:.2f} seconds")

# Run it
asyncio.run(run_async_test())

I’ve hit this exact issue before when I started using langchain async methods. The ainvoke method returns a coroutine that needs awaiting inside an async function.

For proper performance comparison, run multiple operations concurrently to see async’s real benefit:

async def run_concurrent():
    # ainvoke expects a single input dict, so pass each text directly
    tasks = [processor.ainvoke(text) for text in sample_texts]
    results = await asyncio.gather(*tasks)
    return results

results = asyncio.run(run_concurrent())

This shows the actual performance difference since async shines with multiple IO operations running simultaneously.

The problem’s simple: you’re treating an async operation as if it were synchronous. I hit this exact issue in my own projects and found that async operations need proper event loop handling. Your code creates the coroutine but never actually runs it.

Here’s what fixed it for me: completely separate the async and sync parts into different functions. Don’t mix sync and async calls in the same scope. Write an async function that handles all the awaiting, then use asyncio.run() to execute it from your main code.

Just remember: async really pays off when you’re processing multiple requests at once, not single operations. If you’re only testing one text at a time, you won’t see much performance gain over regular synchronous calls. Async shines when you need multiple API calls running simultaneously without blocking each other.
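The separation described above can be shown with a minimal sketch. `evaluate` is a hypothetical placeholder for the real awaited chain call; the point is the structure: all awaiting lives inside async functions, and the synchronous entry point crosses into the event loop exactly once:

```python
import asyncio

# Hypothetical stand-in for an async chain call; real code would
# await processor.ainvoke(...) here instead.
async def evaluate(text):
    await asyncio.sleep(0.1)  # simulates I/O latency
    return text.upper()

async def main():
    # All awaiting happens inside this async function...
    return await asyncio.gather(*(evaluate(t) for t in ["a", "b", "c"]))

# ...while the sync entry point only bridges into the event loop once.
results = asyncio.run(main())
print(results)  # ['A', 'B', 'C']
```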

You’re not executing the coroutine. Wrap it in asyncio.run() or await it inside an async function. Currently you’re just printing the coroutine object instead of running it. That’s why you’re seeing that warning!