<VirtualHost *:80>
    ServerName mysite.com
    ProxyPreserveHost On
    ProxyRequests Off
    # Proxying to an https:// backend requires SSLProxyEngine at the vhost level
    SSLProxyEngine On
    <Location /dashboard/>
        ProxyPass https://MYUSERNAME.github.io/MY-REPOSITORY/
        ProxyPassReverse https://MYUSERNAME.github.io/MY-REPOSITORY/
    </Location>
</VirtualHost>
After building and deploying everything, I keep getting 404 errors when trying to access the subdirectory. Am I approaching this correctly or did I miss something important in the configuration?
Your Apache config looks correct, but there’s a key mismatch. GitHub Pages serves content from the root of its configured domain, while you are proxying to a subfolder that doesn’t exist in GitHub’s structure. That hybrid setup creates routing conflicts, which is what’s producing the 404s you’re seeing. To resolve it, you can either change your GitHub Pages custom-domain setting to match the structure you want, or adjust your proxy target accordingly. I ran into a similar issue in the past and found that GitHub Pages doesn’t support subfolder deployments with a custom domain; my fix was either to remove the custom domain and let the site serve from username.github.io/repository, or to abandon GitHub Pages and host the static files directly from EC2.
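If you go the EC2 route, something like the following inside the same <VirtualHost *:80> block ought to do it; the /var/www/dashboard path is just a placeholder for wherever you copy the built site:

    # Serve the pre-built static files straight from the EC2 instance
    # instead of proxying to GitHub Pages (path below is hypothetical)
    Alias /dashboard/ /var/www/dashboard/
    <Directory /var/www/dashboard/>
        Options -Indexes
        Require all granted
    </Directory>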
You’ve mixed up the root domain with the Apache setup. GitHub Pages can’t serve from a subfolder on a custom domain! Try pointing a subdomain like dashboard.mysite.com at it and update your DNS; it’s way simpler than maintaining a proxy.
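Roughly (hostnames here are placeholders), that’s one CNAME record pointing the subdomain at your GitHub Pages host:

    ; DNS zone record (placeholder values): point the subdomain at GitHub Pages
    dashboard.mysite.com.    3600    IN    CNAME    MYUSERNAME.github.io.

Then set dashboard.mysite.com as the custom domain in the repository’s Pages settings, and GitHub serves the site at that subdomain’s root with no Apache involved.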
Your problem is that GitHub Pages isn’t serving content where you think it is. When you set a custom domain on GitHub Pages, it serves everything from that domain’s root, not from username.github.io/repository. So your Apache config is proxying to a GitHub URL that no longer exists. Since you’ve got mysite.com set as your custom domain, GitHub serves your content directly from there, and you’re basically creating a proxy loop. Two ways to fix this: either ditch the custom domain in GitHub Pages and proxy to the actual username.github.io/repository URL, or just host your static files on EC2 instead. I hit this same issue last year and ended up moving everything to an S3 bucket served through my EC2 instance; way more control over routing that way.
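If you take the first option, a minimal sketch of the proxy side (assuming the custom domain has been removed from the Pages settings) could look like this; note ProxyPreserveHost is left Off because GitHub Pages routes requests by Host header, so forwarding mysite.com instead of github.io will typically get you a 404 from their end:

    <VirtualHost *:80>
        ServerName mysite.com
        ProxyRequests Off
        # Let Apache send the github.io hostname, not mysite.com
        ProxyPreserveHost Off
        # Required because the backend is https://
        SSLProxyEngine On
        <Location /dashboard/>
            ProxyPass https://MYUSERNAME.github.io/MY-REPOSITORY/
            ProxyPassReverse https://MYUSERNAME.github.io/MY-REPOSITORY/
        </Location>
    </VirtualHost>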