memory_max to take client total memory into consideration with placement #26360

Description

@dmclf

Nomad version

Nomad v1.9.3
BuildDate 2024-11-11T16:35:41Z
Revision d92bf1014886c0ff9f882f4a2691d5ae8ad8131c

Operating system and Environment details

Ubuntu 22

Issue

When defining a job with a memory_max that is higher than any client can provide, the job still gets allocated:

+/- Job: "job"
+/- Task Group: "group" ( 1 create/destroy update )
+/- Task: "task" ( forces create/destroy update )
+/- Resources {
CPU:	"300"
Cores:	"0"
DiskMB:	"0"
IOPS:	"0"
MemoryMB:	"300"
+/- MemoryMaxMB:	"3000" => "3221225472000"
SecretsMB:	"0"
}

Scheduler dry-run
All tasks successfully allocated.

Reproduction steps

Schedule a job with any kind of high memory_max.
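
A minimal jobspec sketch that reproduces this; the job, group, and task names and the Docker image are illustrative:

job "job" {
  datacenters = ["dc1"]

  group "group" {
    task "task" {
      driver = "docker"

      config {
        # illustrative image; any workload reproduces the behavior
        image = "redis:7"
      }

      resources {
        cpu        = 300
        memory     = 300
        memory_max = 3221225472000   # far larger than any client's total memory
      }
    }
  }
}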

Expected Result

Job placement failure, as I do not have clients with 3221225472000 MB of memory (yet);
that works out to 3,072,000 TiB, roughly 3 exabytes (~2.9 EiB).

Some kind of 'basic' protection would be nice.

I.e., maybe some clients have 128G and some have 256G;

a job with a memory_max of 160G should then only be placed on the 256G machine, not on the 128G one.

Yes, there are constraints, and the base 'memory' value, that can also affect placement (see the constraint sketch below),
but it would be nice if Nomad could take memory_max into consideration as well.
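
In the meantime, something like a group-level constraint on total client memory can approximate this; a sketch, assuming the memory.totalbytes node attribute, with 160 GiB (171798691840 bytes) as an illustrative threshold:

constraint {
  # only consider clients whose total memory is at least 160 GiB
  attribute = "${attr.memory.totalbytes}"
  operator  = ">="
  value     = "171798691840"
}

This only filters on a client's total memory, though; it does not account for memory already committed to other allocations, which is what memory_max-aware placement would actually need to do.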

Actual Result

Scheduler dry-run
All tasks successfully allocated.

The job gets placed fine, even though the client only has ~8 GB of memory:

# head -n1 /proc/meminfo  
MemTotal:        8091536 kB

docker inspect

# docker inspect a49784f6486d |grep -i memory
            "Memory": 3377699720527872000,
            "MemoryReservation": 314572800,
            "MemorySwap": 3377699720527872000,
            "MemorySwappiness": null,
                "NOMAD_MEMORY_LIMIT=300",
                "NOMAD_MEMORY_MAX_LIMIT=3221225472000",
