289 changes: 146 additions & 143 deletions demo-notebooks/guided-demos/5_submit_rayjob_cr.ipynb
{
"cells": [
{
"cell_type": "markdown",
"id": "9259e514",
"metadata": {},
"source": [
"# Submitting a RayJob CR\n",
"\n",
"In this notebook, we will go through the basics of using the SDK to:\n",
" * Define a RayCluster configuration\n",
" * Use this configuration alongside a RayJob definition\n",
" * Submit the RayJob and let the KubeRay operator manage the RayCluster lifecycle for the job"
]
},
{
"cell_type": "markdown",
"id": "18136ea7",
"metadata": {},
"source": [
"## Defining and Submitting the RayJob\n",
"First, we'll import the relevant CodeFlare SDK packages by executing the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51e18292",
"metadata": {},
"outputs": [],
"source": [
"from codeflare_sdk import RayJob, ManagedClusterConfig"
]
},
{
"cell_type": "markdown",
"id": "649c5911",
"metadata": {},
"source": [
"Run the `oc login` command below using your token and server URL. Ensure the command is prefixed with `!` rather than `%`. This works both when running locally and within RHOAI."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc364888",
"metadata": {},
"outputs": [],
"source": [
"!oc login --token=<your-token> --server=<your-server-url>"
]
},
{
"cell_type": "markdown",
"id": "5581eca9",
"metadata": {},
"source": [
"Next, we'll define the ManagedClusterConfig. KubeRay uses this to spin up a short-lived RayCluster that exists only for the duration of the job."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3094c60a",
"metadata": {},
"outputs": [],
"source": [
"cluster_config = ManagedClusterConfig(\n",
" head_memory_requests=6,\n",
" head_memory_limits=8,\n",
" num_workers=2,\n",
" worker_cpu_requests=1,\n",
" worker_cpu_limits=1,\n",
" worker_memory_requests=4,\n",
" worker_memory_limits=6,\n",
" head_accelerators={'nvidia.com/gpu': 0},\n",
" worker_accelerators={'nvidia.com/gpu': 0},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "02a2b32b",
"metadata": {},
"source": [
"Lastly, we can pass the ManagedClusterConfig into the RayJob and submit it. You don't need to tear down the cluster when the job completes; that is handled for you."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e905ccea",
"metadata": {},
"outputs": [],
"source": [
"job = RayJob(\n",
" job_name=\"demo-rayjob\",\n",
" entrypoint=\"python -c 'print(\\\"Hello from RayJob!\\\")'\",\n",
" cluster_config=cluster_config,\n",
" namespace=\"your-namespace\",\n",
" # local_queue is optional. If omitted, the SDK will auto-detect a default\n",
" # Kueue LocalQueue. If Kueue is not installed, the job runs without it.\n",
" # local_queue=\"my-queue\",\n",
")\n",
"\n",
"job.submit()"
]
},
{
"cell_type": "markdown",
"id": "f3612de2",
"metadata": {},
"source": [
"We can check the status of the job by executing the cell below. The status may appear as `unknown` for a time while the RayCluster spins up."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96d92f93",
"metadata": {},
"outputs": [],
"source": [
"job.status()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
 "nbformat": 4,
 "nbformat_minor": 5
}
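The ManagedClusterConfig in the notebook pairs each `*_requests` value with a `*_limits` value, and Kubernetes rejects a pod whose request exceeds its limit. A minimal sketch of that sanity check, using plain dicts and a hypothetical `requests_within_limits` helper (not part of the codeflare_sdk API):

```python
# The same values the notebook passes to ManagedClusterConfig, as a plain
# dict so the request/limit arithmetic can be checked without a cluster.
# NOTE: hypothetical helper -- not the codeflare_sdk API.
cluster_config = {
    "head_memory_requests": 6,
    "head_memory_limits": 8,
    "num_workers": 2,
    "worker_cpu_requests": 1,
    "worker_cpu_limits": 1,
    "worker_memory_requests": 4,
    "worker_memory_limits": 6,
}

def requests_within_limits(cfg):
    """True if every *_requests key is <= its matching *_limits key."""
    pairs = [(key, key.replace("_requests", "_limits"))
             for key in cfg if key.endswith("_requests")]
    return all(cfg[req] <= cfg[lim] for req, lim in pairs)

print(requests_within_limits(cluster_config))  # → True
```

Running this kind of check before `job.submit()` surfaces an invalid request/limit pair locally instead of as a pod scheduling failure.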
26 changes: 13 additions & 13 deletions src/codeflare_sdk/ray/rayjobs/rayjob.py
@@ -267,31 +267,31 @@ def _build_rayjob_cr(self) -> Dict[str, Any]:
         if self.local_queue:
             labels["kueue.x-k8s.io/queue-name"] = self.local_queue
         else:
-            # Auto-detect default queue for new clusters
+            # Auto-detect default queue for new clusters.
+            # If no default queue is found (e.g. Kueue not installed),
+            # skip the label entirely so the job can run without Kueue.
+            # This matches the interactive Cluster behavior in build_ray_cluster.py.
             default_queue = get_default_kueue_name(self.namespace)
             if default_queue:
                 labels["kueue.x-k8s.io/queue-name"] = default_queue
             else:
-                # No default queue found, use "default" as fallback
-                labels["kueue.x-k8s.io/queue-name"] = "default"
-                logger.warning(
+                logger.info(
                     f"No default Kueue LocalQueue found in namespace '{self.namespace}'. "
-                    f"Using 'default' as the queue name. If a LocalQueue named 'default' "
-                    f"does not exist, the RayJob submission will fail. "
-                    f"To fix this, please explicitly specify the 'local_queue' parameter."
+                    f"Submitting RayJob without Kueue queue management. "
+                    f"To use Kueue, specify the 'local_queue' parameter or "
+                    f"annotate a LocalQueue with 'kueue.x-k8s.io/default-queue: true'."
                 )

         if self.priority_class:
             labels["kueue.x-k8s.io/priority-class"] = self.priority_class
 
-        # Apply labels to metadata
+        # Apply labels to metadata.
+        # We intentionally do NOT set suspend=true here. Kueue's mutating
+        # webhook will set it automatically when it sees the queue label.
+        # This way, if Kueue isn't installed, the label is harmless metadata
+        # and the job runs immediately without hanging.
         if labels:
             rayjob_cr["metadata"]["labels"] = labels
 
-        # When using Kueue with lifecycled clusters, start with suspend=true
-        # Kueue will unsuspend the job once the workload is admitted
-        if labels.get("kueue.x-k8s.io/queue-name"):
-            rayjob_cr["spec"]["suspend"] = True
-        else:
             if self.local_queue or self.priority_class:
                 logger.warning(
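The queue-label resolution introduced in this hunk can be summarized as a small pure function. This is an illustrative rewrite, not the SDK's actual code: an explicit `local_queue` wins, then an auto-detected default LocalQueue, and with neither the label is omitted entirely so the job runs without Kueue.

```python
# Illustrative sketch of the new precedence rules -- not the SDK's code.
def resolve_queue_label(local_queue, default_queue):
    """Return the Kueue label to apply, or an empty dict for no Kueue."""
    if local_queue:  # user explicitly chose a queue
        return {"kueue.x-k8s.io/queue-name": local_queue}
    if default_queue:  # a LocalQueue annotated as the namespace default
        return {"kueue.x-k8s.io/queue-name": default_queue}
    return {}  # no label: KubeRay runs the job immediately, no Kueue gating

print(resolve_queue_label(None, None))  # → {}
```

To make auto-detection succeed, a cluster admin can mark a LocalQueue as the namespace default, e.g. `kubectl annotate localqueue my-queue kueue.x-k8s.io/default-queue=true`.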