KubeRay integration with Volcano

Volcano is a batch scheduling system built on Kubernetes. It provides a suite of mechanisms (such as gang scheduling, job queues, and fair-share scheduling policies) that Kubernetes currently lacks but that many classes of batch and elastic workloads commonly require. KubeRay's Volcano integration makes it possible to schedule Ray pods more efficiently in multi-tenant Kubernetes environments.

Setup

Step 1: Create a Kubernetes cluster with KinD

Run the following command in a terminal:

kind create cluster

Step 2: Install Volcano

You need to successfully install Volcano on your Kubernetes cluster before enabling the Volcano integration with KubeRay. Refer to the Quick Start Guide for Volcano installation instructions.

Step 3: Install the KubeRay operator with batch scheduling

Deploy the KubeRay operator with the --enable-batch-scheduler flag to enable Volcano batch scheduling support.

When installing the KubeRay operator via Helm, use one of the following two options:

  • In your values.yaml file, set batchScheduler.enabled to true:

# values.yaml file
batchScheduler:
    enabled: true
  • Pass the --set batchScheduler.enabled=true flag when running on the command line:

# Install the Helm chart with batchScheduler.enabled set to true
helm install kuberay-operator kuberay/kuberay-operator --version 1.0.0 --set batchScheduler.enabled=true

Step 4: Install a RayCluster with the Volcano scheduler

The RayCluster custom resource must include the ray.io/scheduler-name: volcano label so that the cluster's pods are submitted to Volcano for scheduling.
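As a sketch of where the label goes, the following minimal RayCluster fragment places it under metadata.labels (the cluster name and container image here are illustrative placeholders, not part of the sample manifest):

```yaml
apiVersion: ray.io/v1alpha1
kind: RayCluster
metadata:
  name: test-cluster-0
  labels:
    ray.io/scheduler-name: volcano   # submit this cluster's pods to Volcano
spec:
  headGroupSpec:
    template:
      spec:
        containers:
        - name: ray-head
          image: rayproject/ray:2.9.0  # illustrative image tag
```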

# Path: kuberay/ray-operator/config/samples
# Includes label `ray.io/scheduler-name: volcano` in the metadata.labels
curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.0.0/ray-operator/config/samples/ray-cluster.volcano-scheduler.yaml
kubectl apply -f ray-cluster.volcano-scheduler.yaml

# Check the RayCluster
kubectl get pod -l ray.io/cluster=test-cluster-0
# NAME                                 READY   STATUS    RESTARTS   AGE
# test-cluster-0-head-jj9bg            1/1     Running   0          36s

You can also provide the following labels in the RayCluster metadata:

  • ray.io/priority-class-name: the cluster priority class, as defined by Kubernetes

    • This label only takes effect after you create the PriorityClass resource

    • labels:
        ray.io/scheduler-name: volcano
        ray.io/priority-class-name: <replace with correct PriorityClass resource name>
      
  • volcano.sh/queue-name: the name of the Volcano Queue that the cluster is submitted to.

    • This label only takes effect after you create the Queue resource

    • labels:
        ray.io/scheduler-name: volcano
        volcano.sh/queue-name: <replace with correct Queue resource name>
      

If autoscaling is enabled, gang scheduling uses minReplicas; otherwise, it uses the desired replicas.
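For example, with autoscaling enabled, a worker group like the following sketch contributes its minReplicas (here 1) to the gang's minimum size rather than its replicas (here 2); the field values are illustrative:

```yaml
workerGroupSpecs:
- groupName: worker
  replicas: 2       # desired size; counted for gang scheduling when autoscaling is off
  minReplicas: 1    # counted for gang scheduling when autoscaling is on
  maxReplicas: 4
```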

Step 5: Use Volcano for batch scheduling

For guidance, see the example.

Example

Before proceeding with the example, remove any running Ray clusters to ensure a successful run of the example below.

kubectl delete raycluster --all

Gang scheduling

This example walks through how gang scheduling works with Volcano and KubeRay.

First, create a queue with a capacity of 4 CPUs and 6Gi of RAM:

kubectl create -f - <<EOF
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: kuberay-test-queue
spec:
  weight: 1
  capability:
    cpu: 4
    memory: 6Gi
EOF

The weight in the definition above indicates the relative weight of a queue in cluster resource division. This parameter is used when the total capability of all queues in the cluster exceeds the total available resources, forcing queues to share among themselves. Queues with higher weight receive a proportionally larger share of the total resources.

The capability is a hard constraint on the maximum resources the queue supports at any given time. You can update it as needed to allow more or fewer workloads to run at a time.
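To make the weight-based division concrete, the following sketch computes each queue's proportional share for two hypothetical queues (weights 1 and 3) on an 8-CPU cluster; the queue names and numbers are illustrative, not part of this example:

```shell
#!/bin/sh
# Hypothetical overcommitted cluster: queue-a (weight 1) and queue-b (weight 3)
# competing for 8 CPUs. Each queue's share is proportional to its weight.
total_cpu=8
weight_a=1
weight_b=3
share_a=$(( total_cpu * weight_a / (weight_a + weight_b) ))
share_b=$(( total_cpu * weight_b / (weight_a + weight_b) ))
echo "queue-a: ${share_a} CPUs, queue-b: ${share_b} CPUs"
```

Running this prints "queue-a: 2 CPUs, queue-b: 6 CPUs", matching the 1:3 weight ratio.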

Next, create a RayCluster with a head node (1 CPU + 2Gi of RAM) and two workers (1 CPU + 1Gi of RAM each), for a total of 3 CPUs and 4Gi of RAM:
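A quick sanity check of that arithmetic (pure shell, no cluster required) confirms the cluster fits inside the queue's 4 CPU / 6Gi capability:

```shell
#!/bin/sh
# RayCluster from the sample: 1 head (1 CPU, 2Gi) + 2 workers (1 CPU, 1Gi each)
head_cpu=1;   head_mem_gi=2
worker_cpu=1; worker_mem_gi=1
workers=2
total_cpu=$(( head_cpu + worker_cpu * workers ))
total_mem_gi=$(( head_mem_gi + worker_mem_gi * workers ))
echo "cluster needs ${total_cpu} CPUs and ${total_mem_gi}Gi; the queue offers 4 CPUs and 6Gi"
```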

# Path: kuberay/ray-operator/config/samples
# Includes the `ray.io/scheduler-name: volcano` and `volcano.sh/queue-name: kuberay-test-queue` labels in the metadata.labels
curl -LO https://raw.githubusercontent.com/ray-project/kuberay/v1.0.0/ray-operator/config/samples/ray-cluster.volcano-scheduler-queue.yaml
kubectl apply -f ray-cluster.volcano-scheduler-queue.yaml

Because the queue has a capacity of 4 CPUs and 6Gi of RAM, this resource should schedule successfully without any issues. You can verify this by checking the status of the cluster's Volcano PodGroup and confirming that the phase is Running and the last status is Scheduled:

kubectl get podgroup ray-test-cluster-0-pg -o yaml

# apiVersion: scheduling.volcano.sh/v1beta1
# kind: PodGroup
# metadata:
#   creationTimestamp: "2022-12-01T04:43:30Z"
#   generation: 2
#   name: ray-test-cluster-0-pg
#   namespace: test
#   ownerReferences:
#   - apiVersion: ray.io/v1alpha1
#     blockOwnerDeletion: true
#     controller: true
#     kind: RayCluster
#     name: test-cluster-0
#     uid: 7979b169-f0b0-42b7-8031-daef522d25cf
#   resourceVersion: "4427347"
#   uid: 78902d3d-b490-47eb-ba12-d6f8b721a579
# spec:
#   minMember: 3
#   minResources:
#     cpu: "3"
#     memory: 4Gi
#   queue: kuberay-test-queue
# status:
#   conditions:
#   - lastTransitionTime: "2022-12-01T04:43:31Z"
#     reason: tasks in the gang are ready to be scheduled
#     status: "True"
#     transitionID: f89f3062-ebd7-486b-8763-18ccdba1d585
#     type: Scheduled
#   phase: Running

Check the status of the queue to see 1 running job:

kubectl get queue kuberay-test-queue -o yaml

# apiVersion: scheduling.volcano.sh/v1beta1
# kind: Queue
# metadata:
#   creationTimestamp: "2022-12-01T04:43:21Z"
#   generation: 1
#   name: kuberay-test-queue
#   resourceVersion: "4427348"
#   uid: a6c4f9df-d58c-4da8-8a58-e01c93eca45a
# spec:
#   capability:
#     cpu: 4
#     memory: 6Gi
#   reclaimable: true
#   weight: 1
# status:
#   reservation: {}
#   running: 1
#   state: Open

Next, add an additional RayCluster with the same head and worker node configuration, but a different name:

# Path: kuberay/ray-operator/config/samples
# Includes the `ray.io/scheduler-name: volcano` and `volcano.sh/queue-name: kuberay-test-queue` labels in the metadata.labels
# Replaces the name to test-cluster-1
sed 's/test-cluster-0/test-cluster-1/' ray-cluster.volcano-scheduler-queue.yaml | kubectl apply -f-
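If you want to preview what that sed substitution does before applying anything, the following self-contained sketch runs the same rename against a stand-in metadata snippet (not the full sample manifest) and prints the result:

```shell
#!/bin/sh
# Stand-in for the downloaded sample; only the fields relevant to the rename
cat > /tmp/ray-snippet.yaml <<'EOF'
metadata:
  name: test-cluster-0
EOF
# Same substitution as the kubectl pipeline above, writing to stdout only
renamed=$(sed 's/test-cluster-0/test-cluster-1/' /tmp/ray-snippet.yaml)
echo "$renamed"
```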

Check the status of its PodGroup to confirm that its phase is Pending and the last status is Unschedulable:

kubectl get podgroup ray-test-cluster-1-pg -o yaml

# apiVersion: scheduling.volcano.sh/v1beta1
# kind: PodGroup
# metadata:
#   creationTimestamp: "2022-12-01T04:48:18Z"
#   generation: 2
#   name: ray-test-cluster-1-pg
#   namespace: test
#   ownerReferences:
#   - apiVersion: ray.io/v1alpha1
#     blockOwnerDeletion: true
#     controller: true
#     kind: RayCluster
#     name: test-cluster-1
#     uid: b3cf83dc-ef3a-4bb1-9c42-7d2a39c53358
#   resourceVersion: "4427976"
#   uid: 9087dd08-8f48-4592-a62e-21e9345b0872
# spec:
#   minMember: 3
#   minResources:
#     cpu: "3"
#     memory: 4Gi
#   queue: kuberay-test-queue
# status:
#   conditions:
#   - lastTransitionTime: "2022-12-01T04:48:19Z"
#     message: '3/3 tasks in gang unschedulable: pod group is not ready, 3 Pending,
#       3 minAvailable; Pending: 3 Undetermined'
#     reason: NotEnoughResources
#     status: "True"
#     transitionID: 3956b64f-fc52-4779-831e-d379648eecfc
#     type: Unschedulable
#   phase: Pending

Because the new cluster requires more CPU and RAM than the queue allows, even though one of its pods would fit in the remaining 1 CPU and 2Gi of RAM, none of the cluster's pods are placed until there is enough room for all of them. Without Volcano performing gang scheduling in this way, one of the pods would ordinarily be placed, leading to the cluster being partially allocated and some jobs (such as Horovod training) getting stuck waiting for resources to become available.
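To make the arithmetic behind that decision concrete, this sketch reproduces the check the gang effectively fails here, using the numbers from this example (it is an illustration, not Volcano's actual scheduler code):

```shell
#!/bin/sh
# Queue capability vs. what test-cluster-0 already uses and test-cluster-1 needs
cap_cpu=4;  cap_mem_gi=6      # kuberay-test-queue capability
used_cpu=3; used_mem_gi=4     # test-cluster-0 (already Running)
need_cpu=3; need_mem_gi=4     # test-cluster-1: the whole gang, minMember=3
free_cpu=$(( cap_cpu - used_cpu ))
free_mem_gi=$(( cap_mem_gi - used_mem_gi ))
# The gang is placed only if ALL of its pods fit at once
if [ "$free_cpu" -lt "$need_cpu" ] || [ "$free_mem_gi" -lt "$need_mem_gi" ]; then
  echo "gang does not fit: only ${free_cpu} CPU and ${free_mem_gi}Gi free; all pods stay Pending"
fi
```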

Look at the impact on the scheduling of the pods of the new RayCluster, which are listed as Pending:

kubectl get pods

# NAME                                            READY   STATUS         RESTARTS   AGE
# test-cluster-0-worker-worker-ddfbz              1/1     Running        0          7m
# test-cluster-0-head-vst5j                       1/1     Running        0          7m
# test-cluster-0-worker-worker-57pc7              1/1     Running        0          6m59s
# test-cluster-1-worker-worker-6tzf7              0/1     Pending        0          2m12s
# test-cluster-1-head-6668q                       0/1     Pending        0          2m12s
# test-cluster-1-worker-worker-n5g8k              0/1     Pending        0          2m12s

Looking at the pod details shows that Volcano cannot schedule the gang:

kubectl describe pod test-cluster-1-head-6668q | tail -n 3

# Type     Reason            Age   From     Message
# ----     ------            ----  ----     -------
# Warning  FailedScheduling  4m5s  volcano  3/3 tasks in gang unschedulable: pod group is not ready, 3 Pending, 3 minAvailable; Pending: 3 Undetermined

Delete the first RayCluster to free up space in the queue:

kubectl delete raycluster test-cluster-0

The second cluster's PodGroup changes to the Running state, because there are now enough resources available to schedule the entire set of pods:

kubectl get podgroup ray-test-cluster-1-pg -o yaml

# apiVersion: scheduling.volcano.sh/v1beta1
# kind: PodGroup
# metadata:
#   creationTimestamp: "2022-12-01T04:48:18Z"
#   generation: 9
#   name: ray-test-cluster-1-pg
#   namespace: test
#   ownerReferences:
#   - apiVersion: ray.io/v1alpha1
#     blockOwnerDeletion: true
#     controller: true
#     kind: RayCluster
#     name: test-cluster-1
#     uid: b3cf83dc-ef3a-4bb1-9c42-7d2a39c53358
#   resourceVersion: "4428864"
#   uid: 9087dd08-8f48-4592-a62e-21e9345b0872
# spec:
#   minMember: 3
#   minResources:
#     cpu: "3"
#     memory: 4Gi
#   queue: kuberay-test-queue
# status:
#   conditions:
#   - lastTransitionTime: "2022-12-01T04:54:04Z"
#     message: '3/3 tasks in gang unschedulable: pod group is not ready, 3 Pending,
#       3 minAvailable; Pending: 3 Undetermined'
#     reason: NotEnoughResources
#     status: "True"
#     transitionID: db90bbf0-6845-441b-8992-d0e85f78db77
#     type: Unschedulable
#   - lastTransitionTime: "2022-12-01T04:55:10Z"
#     reason: tasks in the gang are ready to be scheduled
#     status: "True"
#     transitionID: 72bbf1b3-d501-4528-a59d-479504f3eaf5
#     type: Scheduled
#   phase: Running
#   running: 3

Check the pods again to confirm that the second cluster is now up and running:

kubectl get pods

# NAME                                            READY   STATUS         RESTARTS   AGE
# test-cluster-1-worker-worker-n5g8k              1/1     Running        0          9m4s
# test-cluster-1-head-6668q                       1/1     Running        0          9m4s
# test-cluster-1-worker-worker-6tzf7              1/1     Running        0          9m4s

Finally, clean up the remaining cluster and queue:

kubectl delete raycluster test-cluster-1
kubectl delete queue kuberay-test-queue