Description
Is this a bug report or feature request?
- Bug Report
Deviation from expected behavior:
Go is not cgroup-aware, which means Rook will be CPU-throttled in containerized Linux environments (Kubernetes): the runtime sets GOMAXPROCS to the host's CPU count rather than the container's CPU limit, so it schedules more runnable threads than the CFS quota allows.
The image below shows two halves, the first with a 13600K (20 threads) and the second with an EPYC 7763 (128 threads). The first half throttles at about 5ms, and the second at about 40ms.
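For illustration, a minimal sketch (not from the original report) of observing the mismatch from inside a CPU-limited pod; it assumes a Go runtime without container-aware GOMAXPROCS:

```go
// Run inside a pod with a CPU limit (e.g. limits.cpu: "2"). On a Go runtime
// that is not cgroup-aware, GOMAXPROCS reflects the host's core count, not
// the container's quota, so surplus runnable threads get CFS-throttled.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Println("NumCPU (host cores visible):", runtime.NumCPU())
	fmt.Println("GOMAXPROCS (parallel threads):", runtime.GOMAXPROCS(0))
}
```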
Expected behavior:
Rook should not be throttled when it has sufficient CPU. automaxprocs can be used to set GOMAXPROCS from the container's CPU quota automatically, as shown in the sketch below.
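A minimal sketch of wiring this in, assuming the upstream `go.uber.org/automaxprocs` package; its documented usage is a blank import whose init caps GOMAXPROCS at the cgroup CPU quota:

```go
package main

import (
	// Imported for its side effect: init() reads the container's cgroup CPU
	// quota and lowers GOMAXPROCS to match it.
	_ "go.uber.org/automaxprocs"
)

func main() {
	// Operator startup proceeds as usual; GOMAXPROCS is already capped.
}
```

If logging or error handling is preferred over the blank import, the package also exposes an explicit entry point, `maxprocs.Set(maxprocs.Logger(log.Printf))`, in `go.uber.org/automaxprocs/maxprocs`.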
How to reproduce it (minimal and precise):
Set CPU limits on the operator pod and observe that the container is throttled (e.g. via cAdvisor's container_cpu_cfs_throttled_periods_total metric).
File(s) to submit:
- Cluster CR (custom resource), typically called `cluster.yaml`, if necessary
Logs to submit:
- Operator's logs, if necessary
- Crashing pod(s) logs, if necessary

To get logs, use `kubectl -n <namespace> logs <pod name>`.
When pasting logs, always surround them with backticks or use the `insert code` button from the GitHub UI.
Read GitHub documentation if you need help.
Cluster Status to submit:
- Output of kubectl commands, if necessary

To get the health of the cluster, use `kubectl rook-ceph health`.
To get the status of the cluster, use `kubectl rook-ceph ceph status`.
For more details, see the Rook kubectl Plugin.
Environment:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Cloud provider or hardware configuration:
- Rook version (use `rook version` inside of a Rook Pod):
- Storage backend version (e.g. for Ceph do `ceph -v`):
- Kubernetes version (use `kubectl version`):
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
- Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox):