I’m building a Python application that uses AI to conduct interviews with candidates and analyze their behavior. I have a basic structure in place, and I’d like to enhance it by incorporating LangGraph for better conversational flow.
Here’s an example of my current code:
import os
import random
import openai
from personality_data import PERSONALITY_TRAITS
# Function to save chat history
def store_chat_log(chat_data):
    file_id = ''.join(random.choices('abcdefghijklmnopqrstuvwxyz0123456789', k=10)) + '.txt'
    with open(file_id, 'w') as f:
        for msg in chat_data:
            if len(msg) == 3:  # Context, Query, Answer
                f.write(f"Context: {msg[0]}\nQuery: {msg[1]}\nAnswer: {msg[2]}\n\n")
            else:  # Query, Answer
                f.write(f"Query: {msg[0]}\nAnswer: {msg[1]}\n\n")
# Function to obtain candidate information
def get_candidate_info():
    client = openai.OpenAI()
    intro_prompt = "Create a professional greeting to ask for the candidate's name. Make it sound natural and welcoming."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": intro_prompt}]
    )
    greeting = response.choices[0].message.content
    name = input(f"\nAI: {greeting}\nYou: ").strip()
    print(f"\nAI: Nice to meet you, {name}. Let's start the interview.\n")
    return name
# Function to gather candidate response
def get_response(question):
    while True:
        answer = input(f"\nAI: {question}\n\nYou: ").strip()
        if answer:
            return answer
        else:
            print("AI: I didn't catch that. Could you please respond?")
# Start the interview process
candidate = get_candidate_info()
I tried Crew AI, but it was too complex and not suited to my needs. I’ve heard that LangGraph might be more manageable, but I’m unsure how to integrate it into my existing system.
What I need assistance with:
- How do I correctly implement LangGraph with my current interview bot?
- Are there simpler alternatives that can effectively handle conversational management?
- Best practices for structuring the code so that the AI can make more informed decisions during interviews?
Since I am relatively new to this, any examples or step-by-step guidance would be very much appreciated. Thank you!
Been working on conversational AI for recruitment platforms and hit the same issues. Don’t force LangGraph integration - just add a conversation orchestrator between your existing functions.
Here’s what I learned: interview flows need contextual memory way more than rigid state management. I built a middleware layer that captures conversation context and feeds it back to OpenAI calls. Keeps your current code intact while adding the smarts you want.
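A minimal sketch of that middleware idea, under assumed names (`InterviewOrchestrator` and `build_messages` are hypothetical, not an established API): buffer the question/answer pairs and replay them as chat messages so each OpenAI call sees the prior context.

```python
class InterviewOrchestrator:
    def __init__(self, system_prompt="You are a professional interviewer."):
        self.system_prompt = system_prompt
        self.history = []  # list of (question, answer) tuples

    def record(self, question, answer):
        # Capture each exchange as it happens
        self.history.append((question, answer))

    def build_messages(self, next_instruction):
        # Replay the full exchange so the model sees prior context,
        # then append the instruction for the next turn
        messages = [{"role": "system", "content": self.system_prompt}]
        for q, a in self.history:
            messages.append({"role": "assistant", "content": q})
            messages.append({"role": "user", "content": a})
        messages.append({"role": "user", "content": next_instruction})
        return messages
```

You'd pass the returned list straight into `client.chat.completions.create(...)` in place of the single-message list the current code builds.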
Skip the rewrite. Add a conversation buffer that holds interview context across questions. Your store_chat_log function is already halfway there - just tweak it to return processed context for generating the next question. Boom, you’ve got dynamic follow-ups without learning a whole new framework.
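One way to do that tweak (a sketch, not the only way): split the formatting out of store_chat_log so the same text is both written to disk and handed back as context. `format_context` is a hypothetical helper name.

```python
import random

def format_context(chat_data):
    # Same Context/Query/Answer layout the original writes to disk,
    # returned as a string so it can also seed the next prompt
    parts = []
    for msg in chat_data:
        if len(msg) == 3:  # Context, Query, Answer
            parts.append(f"Context: {msg[0]}\nQuery: {msg[1]}\nAnswer: {msg[2]}")
        else:  # Query, Answer
            parts.append(f"Query: {msg[0]}\nAnswer: {msg[1]}")
    return "\n\n".join(parts)

def store_chat_log(chat_data):
    text = format_context(chat_data)
    file_id = ''.join(random.choices('abcdefghijklmnopqrstuvwxyz0123456789', k=10)) + '.txt'
    with open(file_id, 'w') as f:
        f.write(text + "\n")
    return text  # hand the processed context back for the next question
```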
For interview decisions, stick with prompt engineering using conversation history instead of complex graph structures. Most automation wins come from understanding what candidates said before, not fancy state transitions. Your foundation’s solid - build on it instead of starting over.
Your code’s actually closer to working than you think. Don’t jump into LangGraph yet - you might not need that complexity. I built something similar for our HR team and a simple conversation manager beat the full graph frameworks. The trick? Treat interviews as contextual exchanges, not rigid state machines. Here’s what I’d change:
class InterviewManager:
    def __init__(self):
        self.context = []
        self.client = openai.OpenAI()

    def ask_contextual_question(self, base_question, previous_responses):
        context_prompt = f"Previous responses: {previous_responses}\nBase question: {base_question}\nAdapt this question based on the context."
        response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": context_prompt}]
        )
        return response.choices[0].message.content
This lets your AI adapt questions on the fly without managing complex graph states. It handles follow-ups and behavioral analysis way more naturally than rigid workflows. The real game-changer? Adding response analysis between questions to pick the next best one.
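That between-question analysis can start small. Here's a toy keyword-based picker just to show the shape (the keyword table and questions are made up); in practice you'd swap the keyword check for an OpenAI classification call.

```python
# Illustrative follow-up table - keywords and questions are invented
FOLLOW_UPS = {
    "team": "Tell me more about a team conflict you resolved.",
    "deadline": "How do you prioritize when deadlines slip?",
}
DEFAULT_NEXT = "What attracted you to this role?"

def pick_next_question(last_answer):
    # Naive analysis: match keywords in the candidate's last answer
    lowered = last_answer.lower()
    for keyword, question in FOLLOW_UPS.items():
        if keyword in lowered:
            return question
    return DEFAULT_NEXT
```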
Honestly, LangGraph’s probably overkill here. I had a similar setup and just used a decorator to wrap my existing functions. Worked way better than expected:
import functools

def interview_flow(func):
    @functools.wraps(func)  # keep the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        # capture context before/after each step
        return func(*args, **kwargs)
    return wrapper
build conversation memory into what you’ve got instead of starting over. your code handles the basics fine - just add a context buffer that tracks sentiment and topic shifts between questions.
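A sketch of what that context buffer could look like. The topic tagging here is a deliberately naive keyword match just to show the mechanism (the keyword table is invented); for real sentiment or topic detection you'd call a model instead.

```python
from collections import deque

# Invented keyword table - replace with a model call for real use
TOPIC_KEYWORDS = {
    "leadership": {"team", "lead", "manage"},
    "technical": {"python", "code", "deploy"},
}

def tag_topic(answer):
    words = set(answer.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return topic
    return "general"

class ContextBuffer:
    def __init__(self, maxlen=5):
        # Rolling window of recent answers with a crude topic tag
        self.entries = deque(maxlen=maxlen)

    def add(self, answer):
        self.entries.append({"answer": answer, "topic": tag_topic(answer)})

    def topic_shifted(self):
        # True when the last two answers landed on different topics
        if len(self.entries) < 2:
            return False
        return self.entries[-1]["topic"] != self.entries[-2]["topic"]
```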
I’ve built automated screening systems at my company, so I’ve been exactly where you are. LangGraph works really well for interview flows.
Here’s how to modify your code to use LangGraph. You’ll create states and transitions for your interview process:
from langgraph.graph import StateGraph, END
from typing import TypedDict, List
import openai

class InterviewState(TypedDict):
    candidate_name: str
    current_question: str
    responses: List[dict]
    question_index: int
    conversation_history: List[str]

def greeting_node(state: InterviewState):
    # Your existing get_candidate_info logic here
    name = input("What's your name?")
    return {
        "candidate_name": name,
        "question_index": 0,
        "responses": [],
        "conversation_history": [f"Candidate: {name}"]
    }

def ask_question_node(state: InterviewState):
    questions = ["Tell me about yourself", "What's your biggest strength?", "Why this role?"]
    if state["question_index"] < len(questions):
        question = questions[state["question_index"]]
        response = input(f"AI: {question}\nYou: ")
        state["responses"].append({"question": question, "answer": response})
        state["question_index"] += 1
        state["conversation_history"].append(f"Q: {question}")
        state["conversation_history"].append(f"A: {response}")
    return state

def should_continue(state: InterviewState):
    return "continue" if state["question_index"] < 3 else "end"

# Build the graph
workflow = StateGraph(InterviewState)
workflow.add_node("greeting", greeting_node)
workflow.add_node("ask_question", ask_question_node)
workflow.set_entry_point("greeting")
workflow.add_edge("greeting", "ask_question")
workflow.add_conditional_edges(
    "ask_question",
    should_continue,
    {"continue": "ask_question", "end": END}
)
app = workflow.compile()
This gives you proper state management and makes it easy to add dynamic question routing based on previous answers.
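The routing function you hand to `add_conditional_edges` is plain Python, so dynamic routing can be as simple as inspecting the last answer. A sketch (the `probe_deeper` branch name is hypothetical - you'd register such a node with `workflow.add_node` like the others):

```python
def route_after_answer(state):
    # Short answers get routed to a follow-up probe; otherwise keep
    # the same continue/end logic as should_continue above
    last = state["responses"][-1]["answer"] if state["responses"] else ""
    if last and len(last.split()) < 5:
        return "probe_deeper"
    return "continue" if state["question_index"] < 3 else "end"
```

You'd then map all three labels in the `add_conditional_edges` dict, e.g. `{"continue": "ask_question", "probe_deeper": "probe_deeper", "end": END}`.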
Honestly though, if you want something simpler, just stick with your current approach but add a basic state machine using Python’s enum module. Sometimes simple wins.
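For reference, that enum-based alternative can be this small (stage names here are illustrative, not prescribed):

```python
from enum import Enum, auto

class Stage(Enum):
    GREETING = auto()
    QUESTIONING = auto()
    WRAP_UP = auto()
    DONE = auto()

# Explicit transition table - no framework required
TRANSITIONS = {
    Stage.GREETING: Stage.QUESTIONING,
    Stage.QUESTIONING: Stage.WRAP_UP,
    Stage.WRAP_UP: Stage.DONE,
}

def advance(stage):
    # Terminal stages stay put
    return TRANSITIONS.get(stage, Stage.DONE)
```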
The biggest improvement you can make right now? Add context awareness. Store previous responses and pass them to your OpenAI calls so the AI can ask follow-up questions based on what the candidate already said.