Commit c240b72
use kubeflow repo
sigmarkarl committed Aug 8, 2024
1 parent 848e664 commit c240b72
Showing 101 changed files with 481 additions and 1,603 deletions.
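Nearly all of these changes are the same edit repeated: the old repository path `github.com/GoogleCloudPlatform/spark-on-k8s-operator` becomes `github.com/kubeflow/spark-operator`. A hedged sketch of how such a mass rename can be scripted (the file name and `sed` invocation here are illustrative, GNU `sed` assumed — the commit itself may have been produced differently):

```shell
# Illustrative only: rewrite the old module path to the new one in a sample file.
old='github.com/GoogleCloudPlatform/spark-on-k8s-operator'
new='github.com/kubeflow/spark-operator'
printf 'REPO=%s/pkg\n' "$old" > /tmp/rename_demo.mk

# A repo-wide version of this commit would run the same substitution
# over every tracked file; '#' is used as the s-command delimiter so
# the slashes in the paths need no escaping.
sed -i "s#$old#$new#g" /tmp/rename_demo.mk
cat /tmp/rename_demo.mk
```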
2 changes: 1 addition & 1 deletion .github/workflows/main.yaml
@@ -178,7 +178,7 @@ jobs:
 docker build -t gcr.io/spark-operator/spark-operator:local .
 minikube image load gcr.io/spark-operator/spark-operator:local
-# The integration tests are currently broken see: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/1416
+# The integration tests are currently broken see: https://github.com/kubeflow/spark-operator/issues/1416
 # - name: Run chart-testing (integration test)
 # run: make integation-test

2 changes: 1 addition & 1 deletion Dockerfile
@@ -16,7 +16,7 @@

 ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v3.1.1

-FROM golang:1.20-alpine as builder
+FROM golang:1.22-alpine as builder

 RUN apk update && apk add --no-cache libcap

6 changes: 3 additions & 3 deletions Makefile
@@ -2,11 +2,11 @@
 .SILENT:
 .PHONY: clean-sparkctl

-SPARK_OPERATOR_GOPATH=/go/src/github.com/GoogleCloudPlatform/spark-on-k8s-operator
+SPARK_OPERATOR_GOPATH=/go/src/github.com/kubeflow/spark-operator
 DEP_VERSION:=`grep DEP_VERSION= Dockerfile | awk -F\" '{print $$2}'`
 BUILDER=`grep "FROM golang:" Dockerfile | awk '{print $$2}'`
 UNAME:=`uname | tr '[:upper:]' '[:lower:]'`
-REPO=github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg
+REPO=github.com/kubeflow/spark-operator/pkg

 all: clean-sparkctl build-sparkctl install-sparkctl

@@ -40,7 +40,7 @@ build-api-docs:
 docker run -v $$(pwd):/repo/ temp-api-ref-docs \
 sh -c "cd /repo/ && /go/gen-crd-api-reference-docs/gen-crd-api-reference-docs \
 -config /repo/hack/api-docs/api-docs-config.json \
--api-dir github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2 \
+-api-dir github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2 \
 -template-dir /repo/hack/api-docs/api-docs-template \
 -out-file /repo/docs/api-docs.md"

2 changes: 1 addition & 1 deletion README.md
@@ -1,4 +1,4 @@
-[![Go Report Card](https://goreportcard.com/badge/github.com/GoogleCloudPlatform/spark-on-k8s-operator)](https://goreportcard.com/report/github.com/GoogleCloudPlatform/spark-on-k8s-operator)
+[![Go Report Card](https://goreportcard.com/badge/github.com/kubeflow/spark-operator)](https://goreportcard.com/report/github.com/kubeflow/spark-operator)

 **This is not an officially supported Google product.**

2 changes: 1 addition & 1 deletion charts/spark-operator-chart/Chart.yaml
@@ -5,7 +5,7 @@ version: 1.1.27
 appVersion: v1beta2-1.3.8-3.1.1
 keywords:
 - spark
-home: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator
+home: https://github.com/kubeflow/spark-operator
 maintainers:
 - name: yuchaoran2011
   email: [email protected]
6 changes: 3 additions & 3 deletions charts/spark-operator-chart/README.md
@@ -4,7 +4,7 @@ A Helm chart for Spark on Kubernetes operator

 ## Introduction

-This chart bootstraps a [Kubernetes Operator for Apache Spark](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator) deployment using the [Helm](https://helm.sh) package manager.
+This chart bootstraps a [Kubernetes Operator for Apache Spark](https://github.com/kubeflow/spark-operator) deployment using the [Helm](https://helm.sh) package manager.

 ## Prerequisites

@@ -91,7 +91,7 @@ All charts linted successfully
 | ingressUrlFormat | string | `""` | Ingress URL format. Requires the UI service to be enabled by setting `uiService.enable` to true. |
 | istio.enabled | bool | `false` | When using `istio`, spark jobs need to run without a sidecar to properly terminate |
 | labelSelectorFilter | string | `""` | A comma-separated list of key=value, or key labels to filter resources during watch and list based on the specified labels. |
-| leaderElection.lockName | string | `"spark-operator-lock"` | Leader election lock name. Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability. |
+| leaderElection.lockName | string | `"spark-operator-lock"` | Leader election lock name. Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability. |
 | leaderElection.lockNamespace | string | `""` | Optionally store the lock in another namespace. Defaults to operator's namespace |
 | logLevel | int | `2` | Set higher levels for more verbose logging |
 | metrics.enable | bool | `true` | Enable prometheus metric scraping |
@@ -113,7 +113,7 @@ All charts linted successfully
 | rbac.createClusterRole | bool | `true` | Create and use RBAC `ClusterRole` resources |
 | rbac.createRole | bool | `true` | Create and use RBAC `Role` resources |
 | replicaCount | int | `1` | Desired number of pods, leaderElection will be enabled if this is greater than 1 |
-| resourceQuotaEnforcement.enable | bool | `false` | Whether to enable the ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled by setting `webhook.enable` to true. Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement. |
+| resourceQuotaEnforcement.enable | bool | `false` | Whether to enable the ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled by setting `webhook.enable` to true. Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement. |
 | resources | object | `{}` | Pod resource requests and limits Note, that each job submission will spawn a JVM within the Spark Operator Pod using "/usr/local/openjdk-11/bin/java -Xmx128m". Kubernetes may kill these Java processes at will to enforce resource limits. When that happens, you will see the following error: 'failed to run spark-submit for SparkApplication [...]: signal: killed' - when this happens, you may want to increase memory limits. |
 | resyncInterval | int | `30` | Operator resync interval. Note that the operator will respond to events (e.g. create, update) unrelated to this setting |
 | securityContext | object | `{}` | Operator container security context |
@@ -5,7 +5,7 @@ kind: CustomResourceDefinition
 metadata:
   annotations:
     controller-gen.kubebuilder.io/version: (unknown)
-    api-approved.kubernetes.io: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1298
+    api-approved.kubernetes.io: https://github.com/kubeflow/spark-operator/pull/1298
   name: scheduledsparkapplications.sparkoperator.k8s.io
 spec:
   group: sparkoperator.k8s.io
@@ -5,7 +5,7 @@ kind: CustomResourceDefinition
 metadata:
   annotations:
     controller-gen.kubebuilder.io/version: (unknown)
-    api-approved.kubernetes.io: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1298
+    api-approved.kubernetes.io: https://github.com/kubeflow/spark-operator/pull/1298
   name: sparkapplications.sparkoperator.k8s.io
 spec:
   group: sparkoperator.k8s.io
4 changes: 2 additions & 2 deletions charts/spark-operator-chart/values.yaml
@@ -165,12 +165,12 @@ batchScheduler:
 resourceQuotaEnforcement:
   # -- Whether to enable the ResourceQuota enforcement for SparkApplication resources.
   # Requires the webhook to be enabled by setting `webhook.enable` to true.
-  # Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement.
+  # Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-resource-quota-enforcement.
   enable: false

 leaderElection:
   # -- Leader election lock name.
-  # Ref: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability.
+  # Ref: https://github.com/kubeflow/spark-operator/blob/master/docs/user-guide.md#enabling-leader-election-for-high-availability.
   lockName: "spark-operator-lock"
   # -- Optionally store the lock in another namespace. Defaults to operator's namespace
   lockNamespace: ""
4 changes: 2 additions & 2 deletions docs/api-docs.md
@@ -2576,7 +2576,7 @@ ApplicationState
 <code>executorState</code><br/>
 <em>
 <a href="#sparkoperator.k8s.io/v1beta2.ExecutorState">
-map[string]github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.ExecutorState
+map[string]github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.ExecutorState
 </a>
 </em>
 </td>
@@ -2800,7 +2800,7 @@ Deprecated. Consider using <code>env</code> instead.</p>
 <code>envSecretKeyRefs</code><br/>
 <em>
 <a href="#sparkoperator.k8s.io/v1beta2.NameKey">
-map[string]github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.NameKey
+map[string]github.com/kubeflow/spark-operator/pkg/apis/sparkoperator.k8s.io/v1beta2.NameKey
 </a>
 </em>
 </td>
2 changes: 1 addition & 1 deletion docs/developer-guide.md
@@ -28,7 +28,7 @@ If you'd like to build/test the spark-operator locally, follow the instructions
 ```bash
 $ mkdir -p $GOPATH/src/github.com/GoogleCloudPlatform
 $ cd $GOPATH/src/github.com/GoogleCloudPlatform
-$ git clone git@github.com:GoogleCloudPlatform/spark-on-k8s-operator.git
+$ git clone git@github.com:kubeflow/spark-operator.git
 $ cd spark-on-k8s-operator
 ```

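Note that only the `git clone` line changes in this hunk; the surrounding `mkdir`/`cd` context lines still reference the old `GoogleCloudPlatform` path. A hedged sketch of a GOPATH layout consistent with the renamed repository (the directory names are an assumption inferred from the new module path, not taken from the upstream docs; the clone itself is commented out so the sketch runs offline):

```shell
# Illustrative GOPATH layout for the renamed repo.
GOPATH="${GOPATH:-$HOME/go}"
mkdir -p "$GOPATH/src/github.com/kubeflow"
cd "$GOPATH/src/github.com/kubeflow"
# git clone git@github.com:kubeflow/spark-operator.git
# cd spark-operator
pwd
```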
4 changes: 2 additions & 2 deletions docs/user-guide.md
@@ -842,6 +842,6 @@ To customize the operator, you can follow the steps below:
 1. Compile Spark distribution with Kubernetes support as per [Spark documentation](https://spark.apache.org/docs/latest/building-spark.html#building-with-kubernetes-support).
 2. Create docker images to be used for Spark with [docker-image tool](https://spark.apache.org/docs/latest/running-on-kubernetes.html#docker-images).
-3. Create a new operator image based on the above image. You need to modify the `FROM` tag in the [Dockerfile](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/Dockerfile) with your Spark image.
+3. Create a new operator image based on the above image. You need to modify the `FROM` tag in the [Dockerfile](https://github.com/kubeflow/spark-operator/blob/master/Dockerfile) with your Spark image.
 4. Build and push your operator image built above.
-5. Deploy the new image by modifying the [/manifest/spark-operator.yaml](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/manifest/spark-operator.yaml) file and specifying your operator image.
+5. Deploy the new image by modifying the [/manifest/spark-operator.yaml](https://github.com/kubeflow/spark-operator/blob/master/manifest/spark-operator.yaml) file and specifying your operator image.
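Step 3 in the list above can be scripted rather than done by hand. The sketch below is illustrative only: it writes a made-up Dockerfile fragment mirroring the `ARG SPARK_IMAGE` line from the repository's Dockerfile (shown earlier in this commit), then swaps in a hypothetical custom registry path; the file path and image names are assumptions, and GNU `sed` is assumed:

```shell
# Write a minimal Dockerfile fragment with the default Spark base image.
cat > /tmp/Dockerfile.demo <<'EOF'
ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v3.1.1
FROM ${SPARK_IMAGE}
EOF

# Point SPARK_IMAGE at a custom Spark build (hypothetical registry/tag).
sed -i 's#gcr.io/spark-operator/spark:v3.1.1#registry.example.com/my-spark:v3.1.1#' /tmp/Dockerfile.demo
grep '^ARG SPARK_IMAGE=' /tmp/Dockerfile.demo
```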