Bump neurohackademy to g4dn.2xlarge #6537
Conversation
They need the same amount of GPU but more RAM.

Replaces #6536
Let's do 4x (see recent change on the other PR).
Merging this PR will trigger the following deployment actions.

Support deployments: no support upgrades will be triggered.
Staging deployments
Production deployments
@arokem per https://aws.amazon.com/ec2/instance-types/g4/, 2x has 32G of RAM and 1 GPU, while 4x has 64G of RAM and 1 GPU. Given there's only 1 GPU and we don't have GPU sharing enabled, do you want 2x with 30G of RAM or 4x with 61G of RAM? We shouldn't do 4x with 30G of RAM.
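To make the trade-off concrete, here is a minimal sketch (not 2i2c's deployer code) of the arithmetic in the comment above. The instance figures come from the AWS page linked there; the overhead reserve is an assumed, illustrative number, and the exact guarantee that fits on a node comes from its reported allocatable memory, which is why the 30G figure gets a follow-up fix further down the thread.

```python
# Rough sketch: how much RAM could a single-user pod be guaranteed on a
# g4dn node, once some memory is set aside for the OS, kubelet and
# daemonsets? The overhead value is a hypothetical placeholder; the real
# number comes from the node's reported allocatable memory.

G4DN = {
    # instance type: (GPUs, RAM in GiB), per the AWS g4 instance page
    "g4dn.2xlarge": (1, 32),
    "g4dn.4xlarge": (1, 64),
}

ASSUMED_OVERHEAD_GIB = 2  # illustrative reserve, not a measured value


def guaranteeable_ram_gib(instance_type: str) -> int:
    """Rough upper bound on the RAM a single-user pod could be guaranteed."""
    _gpus, ram = G4DN[instance_type]
    return ram - ASSUMED_OVERHEAD_GIB


for itype, (gpus, ram) in G4DN.items():
    print(f"{itype}: {gpus} GPU, {ram} GiB RAM, "
          f"~{guaranteeable_ram_gib(itype)} GiB guaranteeable")
```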
Oh - OK, let's do 2x with 30G then. Thanks!
@arokem great, already deployed! I did provision (but not use) 4xlarge as well, so if you want more RAM (~61G) you can bump up with a straightforward PR (the eksctl changes require manual work, which I've just done).
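As a rough illustration of the kind of single-user profile such a bump touches, here is a hypothetical KubeSpawner snippet in jupyterhub_config.py style, where `c` is the config object JupyterHub provides at load time. 2i2c's actual configuration lives in YAML and is structured differently; the node label, GPU count, and memory figures below are assumptions for illustration, not the change in this PR. The eksctl side (creating the g4dn nodegroup itself) is the part that needs manual work, which is why only the hub-config bump is a straightforward PR.

```python
# Hypothetical KubeSpawner profile pinning user servers to a g4dn.2xlarge
# node with one GPU and a memory guarantee below the node's 32 GiB.

c.KubeSpawner.profile_list = [
    {
        "display_name": "GPU server (g4dn.2xlarge)",
        "description": "1 NVIDIA T4 GPU, ~30 GB RAM",
        "kubespawner_override": {
            # Schedule onto the g4dn.2xlarge nodegroup via the well-known
            # Kubernetes instance-type label.
            "node_selector": {"node.kubernetes.io/instance-type": "g4dn.2xlarge"},
            # Claim the node's single GPU.
            "extra_resource_limits": {"nvidia.com/gpu": "1"},
            # Leave headroom below the 32 GiB node for system overhead.
            "mem_guarantee": "30G",
            "mem_limit": "30G",
        },
    },
]
```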
🎉🎉🎉🎉 Monitor the deployment of the hubs here 👉 https://github.com/2i2c-org/infrastructure/actions/runs/16815832626
Hang on, turns out 30G is not right. Fixing.
@arokem should be fixed now.