I set up a MySQL RDS database instance and configured it with a security group that permits external connections. The security group rules allow all traffic types from any IP address. Both my RDS instance and EC2 server are assigned to the same security group.
The connection works perfectly when I SSH into my EC2 instance first, then use the mysql command line tool to connect to the RDS database from there. But when I try to connect directly from my local machine using a MySQL client, it fails completely. I’m using identical connection parameters (hostname, username, credentials) in both scenarios.
I’ve researched this issue and most solutions point to security group restrictions blocking external access. However, this doesn’t make sense in my case since I’ve verified multiple times that the security group allows public traffic from all sources. Plus, both the database and EC2 instance share the same security group, and I can definitely connect to the EC2 server.
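For reference, the command I run is essentially the same in both places; only where I run it from changes (endpoint and user below are placeholders, not my real values):

```bash
# From the EC2 instance after SSHing in - this works
mysql -h mydb.abc123.us-east-1.rds.amazonaws.com -P 3306 -u admin -p

# From my local machine with the exact same parameters - this fails
mysql -h mydb.abc123.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
```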
Check if your RDS instance has ‘Publicly Accessible’ enabled in the database config. Even with proper security groups and public subnets, RDS blocks external connections unless this is set to true. Had the exact same problem last year - everything looked right but I missed this one checkbox buried in the RDS settings. You can change it in the AWS console under instance modifications, but it needs a quick restart. This setting’s separate from security groups and subnets, which is why your SSH tunnel works but direct connections don’t.
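If you'd rather check and flip it from the AWS CLI instead of digging through the console, something like this works (the instance identifier is a placeholder):

```bash
# Check whether the instance is currently publicly accessible
aws rds describe-db-instances \
  --db-instance-identifier mydb \
  --query 'DBInstances[0].PubliclyAccessible'

# Turn it on - this goes through an instance modification, as described above
aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --publicly-accessible \
  --apply-immediately
```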
Sounds like you're missing DB parameter group settings. I had the same problem before - bind-address might still be set to localhost even if your security groups are fine. Also, make sure you're using the right endpoint, since reader and writer endpoints behave differently. Try telnet to port 3306 from your local machine to see whether it's even reachable before you worry about the MySQL login.
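The reachability check looks like this from the local machine (the endpoint is a placeholder); netcat does the same job if telnet isn't installed:

```bash
# A timeout or "connection refused" means a network problem, not a credentials problem
telnet mydb.abc123.us-east-1.rds.amazonaws.com 3306

# Equivalent check with netcat
nc -vz mydb.abc123.us-east-1.rds.amazonaws.com 3306
```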
This sounds like a subnet issue, not security groups. Your RDS is probably in a private subnet without proper routing to an internet gateway - that’s why direct connections fail regardless of security group settings. When you SSH to EC2 first, you’re using it as a bridge to reach RDS in the private network. Check your RDS subnet group config and make sure it includes public subnets if you want direct internet access. Or better yet, keep RDS in private subnets for security and always access through EC2 or set up a VPN.
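One way to confirm which subnets the instance actually sits in and whether those subnets route to an internet gateway, assuming you have the AWS CLI set up (identifiers below are placeholders):

```bash
# List the subnets in the instance's DB subnet group
aws rds describe-db-instances \
  --db-instance-identifier mydb \
  --query 'DBInstances[0].DBSubnetGroup.Subnets[].SubnetIdentifier'

# For each subnet, check whether its route table includes an igw- route
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=subnet-0abc123 \
  --query 'RouteTables[].Routes[].GatewayId'
```

If none of the GatewayIds start with igw-, that subnet is private and direct connections from the internet can't reach it no matter what the security group says.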
I’ve hit this same problem tons of times. Usually it’s network ACLs and route tables screwing things up in ways that aren’t obvious.
Your RDS might be sitting in a subnet where network ACLs block port 3306, even if your security groups say it's fine. ACLs are evaluated at the subnet boundary, so a deny rule there blocks traffic before your security groups ever see it. They're also stateless, so return traffic on ephemeral ports has to be allowed too, not just inbound 3306. Check what ACL is attached to your RDS subnets.
Also double-check your route tables. If you think your RDS is in public subnets, make sure those subnets actually route to an internet gateway.
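To rule out the ACL side quickly, you can dump the inbound ACL entries attached to the RDS subnets and look for anything denying 3306 (the subnet ID is a placeholder):

```bash
# Show the inbound network ACL entries for a given RDS subnet
aws ec2 describe-network-acls \
  --filters Name=association.subnet-id,Values=subnet-0abc123 \
  --query 'NetworkAcls[].Entries[?Egress==`false`]'
```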
Honestly, all these AWS networking layers are a pain to manage. I ended up automating our whole database connection setup with Latenode. It handles SSH tunneling, connection pooling, and failover automatically. No more debugging security groups or ACLs.
Just set up a workflow that creates the tunnel and manages your database queries through it. Much cleaner than exposing RDS to the internet anyway.
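If you'd rather wire the tunnel up by hand instead, the classic approach is to forward a local port through the EC2 box (hosts and key path below are placeholders):

```bash
# Forward local port 3307 to the RDS endpoint through the EC2 instance
ssh -i ~/.ssh/mykey.pem -N \
  -L 3307:mydb.abc123.us-east-1.rds.amazonaws.com:3306 \
  ec2-user@ec2-public-hostname

# Then point the local client at the tunnel
mysql -h 127.0.0.1 -P 3307 -u admin -p
```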
DNS issues are the usual culprit here. Your machine probably can’t resolve the RDS endpoint properly - happens a lot with corporate firewalls or custom DNS setups. I’ve seen RDS endpoints resolve to internal AWS IPs that you can’t reach from outside the VPC. Run nslookup or dig on your RDS endpoint locally, then compare it with results from inside EC2. Switching DNS to 8.8.8.8 often fixes this instantly. Also check your local firewall isn’t blocking MySQL on port 3306.
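A quick way to compare (placeholder endpoint): run the same lookup locally and from inside EC2, and check whether you get a public IP in both places:

```bash
# From your local machine, using your default resolver
dig +short mydb.abc123.us-east-1.rds.amazonaws.com

# Same lookup against Google DNS to bypass local/corporate resolvers
dig +short @8.8.8.8 mydb.abc123.us-east-1.rds.amazonaws.com
```

If the local lookup returns a private address (10.x or 172.16-31.x) while EC2 gets the same thing, the endpoint is only resolvable/reachable inside the VPC, which usually points back at the Publicly Accessible setting mentioned above.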