Hey everyone,
I’m working on a big project to redo an app that has over 150,000 users. We’re using a multi-tenant setup where each user gets their own database. But we’ve run into a problem with the table_definition_cache limit.
I’m wondering if it’s okay to just crank this limit way up? It’s usually set to around 2,000 or 3,000, but we need it much higher.
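For context on what we're looking at: the current limit and actual usage can be checked, and the limit raised, roughly like this (the 20000 is just an illustrative value I'm considering, not a recommendation):

```sql
-- Current limit and how many definitions are actually cached
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
SHOW GLOBAL STATUS LIKE 'Open_table_definitions';

-- Raise it at runtime (would also go in my.cnf to survive restarts)
SET GLOBAL table_definition_cache = 20000;
```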
Has anyone dealt with this before? I’m worried about how it might affect how well the app runs. Is there a better way to handle this many users and databases?
Any tips or experiences would be super helpful. Thanks!
I’ve been in a similar situation with a large-scale multi-tenant app, and increasing the table_definition_cache can definitely help, but it’s not without risks. In our case, we found that cranking it up solved the immediate problem but led to increased memory usage and slower query performance over time.
Instead, we eventually moved to a sharded architecture, distributing tenants across multiple database servers. This approach allowed us to maintain reasonable cache sizes on each server while scaling horizontally. It took some work to implement, but it solved our scaling issues more elegantly in the long run.
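The routing layer such a sharded setup needs can be sketched in a few lines. Everything here is illustrative (the shard DSNs and `shard_for_tenant` are hypothetical names, not from our actual system), but the core idea is a stable hash from tenant to shard:

```python
import hashlib

# Hypothetical shard map; real entries would be full DSNs.
SHARDS = [
    "mysql://db-shard-0.internal/",
    "mysql://db-shard-1.internal/",
    "mysql://db-shard-2.internal/",
    "mysql://db-shard-3.internal/",
]

def shard_for_tenant(tenant_id: str) -> str:
    """Map a tenant to a shard with a stable hash.

    md5 (rather than Python's built-in hash()) keeps the mapping
    identical across processes and restarts, so a tenant always
    lands on the same shard.
    """
    digest = hashlib.md5(tenant_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# A tenant always resolves to the same shard:
assert shard_for_tenant("tenant-42") == shard_for_tenant("tenant-42")
```

One caveat with plain modulo hashing: adding a shard remaps most tenants. A lookup table or consistent hashing avoids that, at the cost of more moving parts.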
Another strategy we considered was a hybrid approach: keeping frequently accessed tenants in dedicated databases and consolidating less active ones onto shared servers. This might be worth exploring if full sharding seems too complex initially.
Remember, there’s no one-size-fits-all solution. It really depends on your specific usage patterns and infrastructure. I’d recommend doing some load testing with increased cache sizes before making any production changes.
Having dealt with similar scaling challenges, I’d caution against simply increasing the table_definition_cache as a long-term solution. While it might offer temporary relief, it could lead to performance degradation and memory issues down the line.
One approach that worked well for us was implementing a connection pooling solution like ProxySQL. This allowed us to efficiently manage connections and reduce the overall load on the MySQL server. We also optimized our schema design, consolidating tables where possible and using more efficient data types.
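To make the pooling idea concrete, here's a minimal sketch of what a pool does (in production a proxy like ProxySQL or your driver's pool handles this; the lambda factory below is a stand-in for a real database connection):

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: hand out existing connections
    instead of opening a new one per request."""

    def __init__(self, factory, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=5.0):
        # Blocks until a connection is free, so total connections
        # to the server never exceed `size`.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Usage with a dummy factory standing in for a real driver:
pool = ConnectionPool(factory=lambda: object(), size=2)
conn = pool.acquire()
pool.release(conn)
```

The key property is the hard cap: no matter how many tenants are active, the server only ever sees `size` connections from this process.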
Additionally, we found that implementing a caching layer (e.g., Redis) for frequently accessed data significantly reduced the load on our databases. This, combined with query optimization and proper indexing, helped us manage a large multi-tenant system without resorting to extreme configuration changes.
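The cache-aside pattern we used looks roughly like this (a dict stands in for the Redis client here so the sketch is self-contained, and `loader` is a hypothetical database fetch):

```python
import time

cache = {}       # stand-in for a Redis client
CACHE_TTL = 60   # seconds

def get_tenant_settings(tenant_id, loader):
    """Cache-aside: try the cache first, fall back to the DB,
    then populate the cache for the next caller."""
    entry = cache.get(tenant_id)
    if entry is not None:
        value, expires_at = entry
        if time.monotonic() < expires_at:
            return value              # cache hit
    value = loader(tenant_id)         # cache miss: hit the database
    cache[tenant_id] = (value, time.monotonic() + CACHE_TTL)
    return value
```

With a real Redis client you'd replace the dict with `SET`/`GET` plus a TTL, but the control flow is identical, and it's what kept our read load off the databases.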
Ultimately, the best solution will depend on your specific use case and resource constraints. It might be worth consulting with a database specialist to review your current setup and recommend tailored optimizations.
hey, i feel your pain. we hit similar issues with our app. have you considered a database proxy? pgbouncer is postgres-only, but ProxySQL does the same job for MySQL and helped us manage connections better and reduce load on the server. also, maybe look into caching frequently accessed data with redis or memcached. could take some load off your dbs. good luck!