kubeflow/spark-operator
Forks: 1376 | Stars: 2789 (updated 2024-11-10 06:06:10)
License: Apache-2.0
Language: Go
Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
Latest release: v2.0.0-rc.0 (2024-08-12 23:17:17)
Kubeflow Spark Operator
What is Spark Operator?
The Kubernetes Operator for Apache Spark aims to make specifying and running Spark applications as easy and idiomatic as running other workloads on Kubernetes. It uses Kubernetes custom resources for specifying, running, and surfacing status of Spark applications.
Quick Start
For a more detailed guide, please refer to the Getting Started guide.
# Add the Helm repository
helm repo add spark-operator https://kubeflow.github.io/spark-operator
helm repo update
# Install the operator into the spark-operator namespace and wait for deployments to be ready
helm install spark-operator spark-operator/spark-operator \
--namespace spark-operator --create-namespace --wait
# Create an example application in the default namespace
kubectl apply -f https://raw.githubusercontent.com/kubeflow/spark-operator/refs/heads/master/examples/spark-pi.yaml
# Get the status of the application
kubectl get sparkapp spark-pi
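The spark-pi.yaml file applied above is an ordinary Kubernetes manifest describing a SparkApplication custom resource. A minimal sketch of what such a manifest looks like follows; the image tag, jar path, and service account name are illustrative and may not match the example currently in the repository:

apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster                    # run the driver as a pod inside the cluster
  image: spark:3.5.2               # illustrative image tag
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples.jar   # illustrative path
  sparkVersion: 3.5.2
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark-operator-spark   # assumes the service account created by the Helm chart
  executor:
    instances: 1
    cores: 1
    memory: 512m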
Overview
For a complete reference of the custom resource definitions, please refer to the API Definition. For details on its design, please refer to the Architecture. The operator requires Spark 2.3 or above, the versions that support Kubernetes as a native scheduler backend.
The Kubernetes Operator for Apache Spark currently supports the following list of features:
- Supports Spark 2.3 and up.
- Enables declarative application specification and management of applications through custom resources.
- Automatically runs spark-submit on behalf of users for each SparkApplication eligible for submission.
- Provides native cron support for running scheduled applications (see the sketch after this list).
- Supports customization of Spark pods beyond what Spark natively is able to do through the mutating admission webhook, e.g., mounting ConfigMaps and volumes, and setting pod affinity/anti-affinity.
- Supports automatic application re-submission for updated SparkApplication objects with updated specification.
- Supports automatic application restart with a configurable restart policy (see the sketch after this list).
- Supports automatic retries of failed submissions with optional linear back-off.
- Supports mounting local Hadoop configuration as a Kubernetes ConfigMap automatically via sparkctl.
- Supports automatically staging local application dependencies to Google Cloud Storage (GCS) via sparkctl.
- Supports collecting and exporting application-level metrics and driver/executor metrics to Prometheus.
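As an illustration of the cron, restart-policy, and retry features listed above, here is a hedged sketch of a ScheduledSparkApplication; the schedule, image tag, and retry counts are example choices rather than recommendations:

apiVersion: sparkoperator.k8s.io/v1beta2
kind: ScheduledSparkApplication
metadata:
  name: spark-pi-scheduled
  namespace: default
spec:
  schedule: "@every 10m"            # standard cron syntax such as "*/10 * * * *" also works
  concurrencyPolicy: Allow
  template:                         # an ordinary SparkApplication spec
    type: Scala
    mode: cluster
    image: spark:3.5.2              # illustrative image tag
    mainClass: org.apache.spark.examples.SparkPi
    mainApplicationFile: local:///opt/spark/examples/jars/spark-examples.jar
    sparkVersion: 3.5.2
    restartPolicy:
      type: OnFailure               # Never | Always | OnFailure
      onFailureRetries: 3
      onFailureRetryInterval: 10    # seconds between retries of a failed run
      onSubmissionFailureRetries: 3
      onSubmissionFailureRetryInterval: 20
    driver:
      cores: 1
      memory: 512m
      serviceAccount: spark-operator-spark
    executor:
      instances: 1
      cores: 1
      memory: 512m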
Project Status
Project status: beta
Current API version: v1beta2
If you are currently using the v1beta1 version of the APIs in your manifests, please update them to use the v1beta2 version by changing apiVersion: "sparkoperator.k8s.io/<version>" to apiVersion: "sparkoperator.k8s.io/v1beta2". You will also need to delete the previous version of the CustomResourceDefinitions named sparkapplications.sparkoperator.k8s.io and scheduledsparkapplications.sparkoperator.k8s.io, and replace them with the v1beta2 version either by installing the latest version of the operator or by running kubectl create -f config/crd/bases.
Prerequisites
- Version >= 1.13 of Kubernetes to use the subresource support for CustomResourceDefinitions, which became beta in 1.13 and is enabled by default in 1.13 and higher.
- Version >= 1.16 of Kubernetes to use the MutatingWebhook and ValidatingWebhook of apiVersion: admissionregistration.k8s.io/v1.
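The mutating webhook mentioned above is what enables the pod customizations described in the feature list, such as mounting ConfigMaps and volumes into driver and executor pods. A minimal sketch, assuming a ConfigMap named my-spark-config already exists (the ConfigMap name and mount path are hypothetical):

apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi-with-config
spec:
  type: Scala
  mode: cluster
  image: spark:3.5.2                # illustrative image tag
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples.jar
  sparkVersion: 3.5.2
  volumes:
    - name: config-vol
      configMap:
        name: my-spark-config       # hypothetical ConfigMap
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark-operator-spark
    volumeMounts:
      - name: config-vol
        mountPath: /etc/spark/extra-conf   # hypothetical mount path
  executor:
    instances: 1
    cores: 1
    memory: 512m
    volumeMounts:
      - name: config-vol
        mountPath: /etc/spark/extra-conf

Because the webhook is what injects these mounts into the pods created by spark-submit, such customizations generally take effect only when the webhook is enabled.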
Getting Started
For getting started with Spark operator, please refer to Getting Started.
User Guide
For detailed user guide and API documentation, please refer to User Guide and API Specification.
If you are running Spark operator on Google Kubernetes Engine (GKE) and want to use Google Cloud Storage (GCS) and/or BigQuery for reading/writing data, also refer to the GCP guide.
Version Matrix
The following table lists the most recent few versions of the operator.
Operator Version | API Version | Kubernetes Version | Base Spark Version
---|---|---|---
v2.0.x | v1beta2 | 1.16+ | 3.5.2
v1beta2-1.6.x-3.5.0 | v1beta2 | 1.16+ | 3.5.0
v1beta2-1.5.x-3.5.0 | v1beta2 | 1.16+ | 3.5.0
v1beta2-1.4.x-3.5.0 | v1beta2 | 1.16+ | 3.5.0
v1beta2-1.3.x-3.1.1 | v1beta2 | 1.16+ | 3.1.1
v1beta2-1.2.3-3.1.1 | v1beta2 | 1.13+ | 3.1.1
v1beta2-1.2.2-3.0.0 | v1beta2 | 1.13+ | 3.0.0
v1beta2-1.2.1-3.0.0 | v1beta2 | 1.13+ | 3.0.0
v1beta2-1.2.0-3.0.0 | v1beta2 | 1.13+ | 3.0.0
v1beta2-1.1.x-2.4.5 | v1beta2 | 1.13+ | 2.4.5
v1beta2-1.0.x-2.4.4 | v1beta2 | 1.13+ | 2.4.4
Developer Guide
For developing with Spark Operator, please refer to Developer Guide.
Contributor Guide
For contributing to Spark Operator, please refer to Contributor Guide.
Community
- Join the CNCF Slack Channel and then join the #kubeflow-spark-operator channel.
- Check out our blog post Announcing the Kubeflow Spark Operator: Building a Stronger Spark on Kubernetes Community.
- Join our monthly community meeting Kubeflow Spark Operator Meeting Notes.
Adopters
Check out adopters of Spark Operator.
Recent releases (data updated 2024-09-19 17:59:20):
2024-08-12 23:17:17 v2.0.0-rc.0
2024-07-26 16:40:21 spark-operator-chart-1.4.6
2024-07-22 13:10:29 spark-operator-chart-1.4.5
2024-07-22 12:03:24 spark-operator-chart-1.4.4
2024-07-03 15:36:18 spark-operator-chart-1.4.3
2024-06-18 00:37:16 spark-operator-chart-1.4.2
2024-06-16 01:42:59 spark-operator-chart-1.4.1
2024-06-05 22:39:34 spark-operator-chart-1.4.0
2024-06-05 10:11:35 spark-operator-chart-1.3.2
2024-06-01 03:39:07 spark-operator-chart-1.3.1
Topics:
apache-spark, google-cloud-dataproc, kubernetes, kubernetes-controller, kubernetes-crd, kubernetes-operator, spark