Best practices for connecting two separate Rails applications via API

I’m working on a project that involves two separate Rails applications that need to communicate with each other. The first application serves as a data provider that returns JSON responses containing nutritional information about food items. The second application handles the user interface where people can search for ingredients and see the nutritional data retrieved from the first app.

We decided to split these into separate applications because we plan to open up the data service as a public API later on. This way the backend can handle multiple clients without affecting the main web interface performance.

What are some recommended approaches for architecting this kind of setup? I’m particularly interested in how to handle the communication between these two systems effectively. Should I be worried about latency issues or are there specific patterns that work well for this type of integration?

I’ve had a similar setup, so I get it! Definitely use JWT tokens for secure communication between your apps. Rate limiting is a must if you plan to go public. Also, Faraday is awesome for handling requests without too much hassle. Hope it helps!
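To make that concrete, here’s a rough sketch. The env var name and host are made up, and the HS256 signing is hand-rolled with the standard library just so the snippet stands alone — in a real app you’d use the `jwt` gem’s `JWT.encode`:

```ruby
require "json"
require "base64"
require "openssl"

# Shared secret between the two apps; the env var name is hypothetical,
# and the fallback is for local development only.
SECRET = ENV.fetch("NUTRITION_API_SECRET", "dev-only-secret")

# Hand-rolled HS256 JWT so the sketch runs with the standard library
# alone; in production use the `jwt` gem instead.
def issue_token(issuer, ttl: 300)
  encode  = ->(h) { Base64.urlsafe_encode64(JSON.generate(h), padding: false) }
  header  = encode.call(alg: "HS256", typ: "JWT")
  payload = encode.call(iss: issuer, exp: Time.now.to_i + ttl)
  signing_input = "#{header}.#{payload}"
  signature = Base64.urlsafe_encode64(
    OpenSSL::HMAC.digest("SHA256", SECRET, signing_input), padding: false
  )
  "#{signing_input}.#{signature}"
end

# The consumer app then sends the token on each request, e.g. with Faraday
# (hypothetical host):
#
#   conn = Faraday.new(url: "https://nutrition-api.example.com")
#   conn.get("/api/v1/ingredients", { q: "oats" }) do |req|
#     req.headers["Authorization"] = "Bearer #{issue_token('ui-app')}"
#   end
```

The provider app verifies the signature and the `exp` claim on every request, and rate limiting (e.g. rack-attack) can key off the `iss` claim once the API goes public.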

I’ve worked with similar microservices setups. Timeout configs are absolutely critical - don’t use the defaults. Most HTTP clients ship with timeouts that are way too long for user-facing apps. I usually go with 5 seconds for connections and 10-15 seconds for reads, depending on how complex the data is.

Circuit breakers saved my butt when the data service went down. Check out semian - it’s pretty straightforward to implement. One thing that blindsided me was database connections getting maxed out when traffic spiked, so keep an eye on your connection pools.

For caching, hit it from both angles - cache in your consumer app AND use HTTP-level caching on the API responses. That cuts down on duplicate requests for the same nutritional data.
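Roughly what I mean - a toy example with explicit timeouts and a minimal circuit breaker. The host is made up, and semian does the breaker part properly; this is just to show the idea:

```ruby
require "net/http"

# Explicit, short timeouts - the Net::HTTP defaults (60s each) are far
# too long for a user-facing request path. Host is hypothetical.
def nutrition_http(host = "nutrition-api.example.com")
  http = Net::HTTP.new(host, 443)
  http.use_ssl = true
  http.open_timeout = 5   # seconds to establish the TCP connection
  http.read_timeout = 10  # seconds to wait for the response
  http
end

# Toy circuit breaker illustrating what semian does for you: after
# `threshold` consecutive failures, reject calls for `cooldown` seconds
# instead of letting every request sit through a full timeout.
class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 3, cooldown: 30)
    @threshold, @cooldown = threshold, cooldown
    @failures, @opened_at = 0, nil
  end

  def call
    raise OpenError, "circuit open" if open?
    result = yield
    @failures = 0
    result
  rescue OpenError
    raise
  rescue StandardError
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise
  end

  private

  def open?
    return false unless @opened_at
    if Time.now - @opened_at < @cooldown
      true
    else
      @opened_at, @failures = nil, 0  # half-open: allow one retry
      false
    end
  end
end
```

Usage is just `breaker.call { nutrition_http.get("/api/v1/ingredients?q=oats") }` - when the data service is down, requests fail fast instead of piling up and exhausting your worker pool.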

In my experience with similar Rails architectures, HTTP caching headers can significantly reduce response times for frequently accessed nutritional data. Connection pooling and keep-alive connections help avoid the overhead of repeatedly opening new connections. If you run into latency issues, consider composite endpoints that consolidate the necessary data into a single response rather than relying on multiple sequential calls. Robust error handling and retry logic are vital, especially when network reliability between the services is an issue. Overall, this separation can enhance performance and allow multiple clients to interact with the API without severe slowdowns.
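As an illustration of the retry logic, here is a hypothetical helper with exponential backoff - in a Faraday stack the faraday-retry middleware covers this, and on the provider side Rails conditional GET support handles the caching headers:

```ruby
# Toy retry helper with exponential backoff for cross-service calls.
# All names here are illustrative, not from any particular library.
def with_retries(attempts: 3, base_delay: 0.2, retry_on: [StandardError])
  tries = 0
  begin
    yield
  rescue *retry_on => e
    tries += 1
    raise if tries >= attempts
    sleep(base_delay * (2**(tries - 1)))  # 0.2s, 0.4s, 0.8s, ...
    retry
  end
end

# On the provider side, a Rails controller can set validation headers so
# repeat lookups return 304 Not Modified (hypothetical controller):
#
#   def show
#     item = FoodItem.find(params[:id])
#     fresh_when etag: item, last_modified: item.updated_at, public: true
#   end
```

The backoff spacing gives a briefly unavailable service time to recover instead of hammering it with immediate retries, and capping the attempt count keeps a hard outage from blocking the user-facing request indefinitely.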
