OpenSource Name: GoogleCloudPlatform/spark-on-k8s-operator
OpenSource URL: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator
OpenSource Language: Go 95.8%

This is not an officially supported Google product.

## Community
## Project Status

**Project status:** *beta*

**Current API version:** …

Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod affinity, is implemented using a Kubernetes Mutating Admission Webhook, which became beta in Kubernetes 1.9. The mutating admission webhook is disabled by default if you install the operator using the Helm chart. Check out the Quick Start Guide for how to enable the webhook.

## Prerequisites
## Installation

The easiest way to install the Kubernetes Operator for Apache Spark is to use the Helm chart.

```
$ helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
$ helm install my-release spark-operator/spark-operator --namespace spark-operator --create-namespace
```

This will install the Kubernetes Operator for Apache Spark into the namespace `spark-operator`.
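To confirm the operator came up, a quick sketch (assuming the release name `my-release` and namespace `spark-operator` from the command above; pod names vary by chart version):

```shell
# List the operator pods in the namespace the chart installed into
$ kubectl get pods -n spark-operator

# Inspect the Helm release status and revision
$ helm status my-release -n spark-operator
```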
For configuration options available in the Helm chart, please refer to the chart's README.

## Version Matrix

The following table lists the most recent few versions of the operator.

When installing using the Helm chart, you can choose to use a specific image tag instead of the default one, using the following option:
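For example, assuming the chart exposes the conventional `image.tag` value (check the chart's `values.yaml` to confirm the exact key), pinning a specific operator image might look like:

```shell
# Install a specific operator image tag instead of the chart default.
# NOTE: the value key `image.tag` is an assumption based on common Helm
# chart conventions, and the tag below is an illustrative placeholder
# taken from the version-matrix naming scheme; verify both against the
# chart's values.yaml and the table above.
$ helm install my-release spark-operator/spark-operator \
    --namespace spark-operator \
    --create-namespace \
    --set image.tag=v1beta2-1.3.8-3.1.1
```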
## Get Started

Get started quickly with the Kubernetes Operator for Apache Spark using the Quick Start Guide.

If you are running the Kubernetes Operator for Apache Spark on Google Kubernetes Engine and want to use Google Cloud Storage (GCS) and/or BigQuery for reading/writing data, also refer to the GCP guide.

For more information, check the Design, API Specification, and detailed User Guide.

## Overview

The Kubernetes Operator for Apache Spark aims to make specifying and running Spark applications as easy and idiomatic as running other workloads on Kubernetes. It uses Kubernetes custom resources for specifying, running, and surfacing the status of Spark applications. For a complete reference of the custom resource definitions, please refer to the API Definition. For details on its design, please refer to the design doc. It requires Spark 2.3 or above, which supports Kubernetes as a native scheduler backend.

The Kubernetes Operator for Apache Spark currently supports the following list of features:
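As an illustration of the custom-resource model described above, submitting a minimal `SparkApplication` might look like the following. The image, class, and jar path are placeholders patterned after the project's bundled `spark-pi` example, and the field names follow the `v1beta2` API; check both against the API Definition and the examples in the repository:

```shell
# Submit a minimal SparkApplication custom resource.
# The image/class/jar/serviceAccount values below are illustrative placeholders.
$ cat <<EOF | kubectl apply -f -
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: gcr.io/spark-operator/spark:v3.1.1   # placeholder image
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark
  executor:
    instances: 1
    cores: 1
    memory: 512m
EOF

# The operator surfaces status on the custom resource itself:
$ kubectl get sparkapplications spark-pi -o yaml
```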
## Contributing

Please check out CONTRIBUTING.md and the Developer Guide.