Hi there!
I’m wondering what platforms and frameworks everyone is working with when creating AI agent projects. Could be anything from simple bots to complex automated systems with multiple agents working together.
- What about development frameworks - are you using CrewAI, LlamaIndex, or maybe building from scratch?
- For the AI models, do you go with Anthropic, Google Cloud AI, or roll your own solutions?
- How do you handle deployment - Docker containers, Google Cloud, local servers?
- Got any favorite tech combinations or development processes that work really well?
Really interested to learn from your experiences and see what’s working for different people!
Honestly, been experimenting with AutoGen lately and it's pretty solid for multi-agent setups. Using the OpenAI API but thinking about switching to local Llama models for cost reasons. I deploy everything on DigitalOcean droplets because it's cheap and simple, and Docker makes it easy to move stuff around if needed.
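Roughly the kind of two-agent setup I mean, as a minimal sketch against pyautogen's classic API (model name and key are placeholders; adjust for your installed version):

```python
# Minimal two-agent AutoGen sketch (classic pyautogen API; config values are placeholders).
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_OPENAI_KEY"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",      # run fully automated, no human in the loop
    code_execution_config=False,   # keep local code execution off for safety
)

# The proxy drives the conversation; the assistant answers via the LLM.
user_proxy.initiate_chat(assistant, message="Outline a plan to summarize a web page.")
```

Switching to local Llama later should mostly be a matter of pointing the config_list at an OpenAI-compatible endpoint instead of the hosted API.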
In my experience with AI agents, simplicity is key. I primarily use LangChain as the main framework, since it handles agent orchestration and memory well. For models, OpenAI's GPT-4 through their API has been reliable, though I sometimes use Claude for tasks that require deeper reasoning. For deployment I use AWS, mainly Lambda for lighter agents and EC2 for those needing persistent state. A major takeaway has been to avoid overengineering; it's more effective to perfect one agent before adding more complexity.
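To illustrate the "perfect one agent first" point, here's a hedged sketch of a single LangChain agent with one tool (the older initialize_agent API; exact imports vary by LangChain version, and the word_count tool is just a stand-in):

```python
# Single-agent LangChain sketch (older initialize_agent API; imports vary by version).
from langchain.agents import AgentType, Tool, initialize_agent
from langchain_openai import ChatOpenAI

def word_count(text: str) -> str:
    """Toy tool so the agent has something concrete to call."""
    return str(len(text.split()))

llm = ChatOpenAI(model="gpt-4", temperature=0)  # assumes OPENAI_API_KEY is set
tools = [
    Tool(name="word_count", func=word_count,
         description="Counts the words in a piece of text."),
]

# ReAct-style agent: the LLM decides when (and whether) to call the tool.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How many words are in the sentence 'simplicity is key'?")
```

One agent, one tool, no orchestration layers; once that's solid, bolting on memory or more tools is straightforward.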
I've been working with CrewAI for the past few months, and it strikes a good balance between flexibility and structure. The role-based agent system makes it easier to manage complex workflows compared to building everything from scratch. For models, I've settled on a hybrid approach: Anthropic's Claude for reasoning-heavy tasks and OpenAI for general operations. The cost difference is noticeable but worth it for specific use cases. Deployment has been mostly through Google Cloud Run, since it handles scaling automatically and you only pay for what you use. One thing I learned the hard way is to implement proper logging and monitoring from day one; debugging multi-agent interactions without visibility into each step is a nightmare. The combination of CrewAI with proper observability tools has made development much smoother.
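For anyone curious what the role-based pattern looks like, a minimal sketch (field names follow CrewAI's documented Agent/Task/Crew API, but verify against your installed version; the roles and tasks are just examples):

```python
# Minimal role-based CrewAI sketch (Agent/Task/Crew API; roles and tasks are illustrative).
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Collect key facts about a topic",
    backstory="A diligent analyst who checks sources.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short summary",
    backstory="A concise technical writer.",
)

research = Task(
    description="Research the trade-offs of local vs. hosted LLMs.",
    expected_output="A bullet list of trade-offs",
    agent=researcher,
)
summarize = Task(
    description="Summarize the research into one paragraph.",
    expected_output="One short paragraph",
    agent=writer,
)

# The crew runs tasks in order, passing each task's output as context to the next.
crew = Crew(agents=[researcher, writer], tasks=[research, summarize], verbose=True)
print(crew.kickoff())
```

Turning on verbose output (and adding real logging on top of it) is the cheap version of the observability point above.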
Been through quite a few setups over the years and honestly the local approach has been a game changer for me lately. Running everything on my own hardware gives you way more control and debugging becomes much easier when you can see exactly what’s happening.
I’m using Ollama for local model hosting - works great with Llama 2 and Mistral models. For the framework side, I’ve been building mostly custom solutions because the existing frameworks tend to add overhead I don’t need. When you understand the core concepts, rolling your own agent logic isn’t that complex.
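For a flavor of what "rolling your own" looks like, here's a bare-bones loop against Ollama's local REST API (the /api/chat endpoint on the default port 11434; the model name and prompt are just examples):

```python
# Bare-bones custom agent call against a local Ollama server (default port 11434).
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

def ask(messages: list[dict]) -> str:
    """Send the chat history to the local model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama2", "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# The "agent" is just a loop that keeps appending to the history.
history = [{"role": "user", "content": "List the steps to summarize a web page."}]
history.append({"role": "assistant", "content": ask(history)})
print(history[-1]["content"])
```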
Deployment-wise, I containerize everything but keep it running on dedicated hardware instead of the cloud. It saves money in the long run, and latency is much better. The only downside is you need to handle scaling yourself, but for most agent applications that's not really an issue.
The key thing I learned is to start simple and add complexity only when you actually need it. Most agent applications don't require the heavy frameworks everyone talks about.
For AI agent projects, many developers pick Anthropic or Google Cloud AI models depending on the task's complexity. On the framework side, CrewAI is a good fit for scalable multi-agent systems, especially enterprise automation with prebuilt workflows, while LlamaIndex is a solid choice for more specialized, retrieval-heavy needs.
When it comes to deployment, Docker containers offer portability, while Google Cloud or AWS are common for larger-scale solutions. Hybrid deployment, combining cloud and on-premise systems, can balance scalability with security and control. For smaller applications with stricter security needs, local servers are also an option.
Platforms like Agentra aim to manage hybrid deployments by integrating multiple AI agents into shared workflows, which can simplify running agents consistently across different environments.