Second Commit after Make Charts

pull/232/head
Ravi Lachhman 2021-11-02 07:28:45 -10:00
parent 197d045d23
commit d645bf3e4b
79 changed files with 4294 additions and 0 deletions



@ -0,0 +1,6 @@
dependencies:
- name: mongodb-replicaset
repository: https://charts.helm.sh/stable
version: 3.11.3
digest: sha256:d567aabf719102e5090b7d7cc0b8d7fd32e8959e51ec4977b6534147531649b8
generated: "2021-10-08T08:10:33.698603543Z"


@ -0,0 +1,29 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Shipa
catalog.cattle.io/namespace: shipa-system
catalog.cattle.io/release-name: shipa
apiVersion: v2
appVersion: 1.4.0
dependencies:
- name: mongodb-replicaset
repository: file://./charts/mongodb-replicaset
tags:
- defaultDB
description: A Helm chart for Kubernetes to install the Shipa Control Plane
home: https://www.shipa.io
icon: https://cdn.opsmatters.com/sites/default/files/logos/shipa-logo.png
keywords:
- shipa
- deployment
- aac
kubeVersion: '>= 1.16.0-0'
maintainers:
- email: rlachhman@shipa.io
name: ravi
name: shipa
sources:
- https://github.com/shipa-corp
- https://github.com/shipa-corp/helm-chart
type: application
version: 1.4.0


@ -0,0 +1,25 @@
Copyright (c) 2020, shipa authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the Globo.com nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -0,0 +1,122 @@
**Note:** The master branch is the main development branch. Please use releases instead of the master branch in order to get stable versions.
# Documentation
Documentation for Shipa can be found at https://learn.shipa.io
# Installation Requirements
1. Kubernetes 1.14+
2. Helm v3
# Defaults
We create a LoadBalancer service to expose Shipa to the internet:
1. 2379 -> etcd
1. 8080 -> Shipa API over HTTP
1. 8081 -> Shipa API over HTTPS
By default we use a dynamic public IP assigned by the cloud provider, but there is a parameter to use a static IP (if you have one):
```bash
--set service.nginx.loadBalancerIP=35.192.15.168
```
# Installation
Users can install Shipa on any existing Kubernetes cluster (version 1.10.x and newer), and Shipa leverages Helm charts for the install.
> ⚠️ NOTE: Installing or upgrading Shipa may require downtime in order to perform database migrations.
Below are the steps required to have Shipa installed in your existing Kubernetes cluster:
Create a namespace where the Shipa services should be installed
```bash
NAMESPACE=shipa-system
kubectl create namespace $NAMESPACE
```
Create a values.override.yaml file with the admin user and password that will be used for Shipa
```bash
cat > values.override.yaml << EOF
auth:
  adminUser: <your email here>
  adminPassword: <your admin password>
EOF
```
Add Shipa helm repo
```bash
helm repo add shipa-charts https://shipa-charts.storage.googleapis.com
```
Install Shipa
```bash
helm install shipa shipa-charts/shipa -n $NAMESPACE --timeout=1000s -f values.override.yaml
```
## Upgrading the Shipa Helm chart
```bash
helm upgrade shipa . --timeout=1000s --namespace=$NAMESPACE -f values.override.yaml
```
## Upgrading the Shipa Helm chart if you have a Pro license
There are two general ways to run helm upgrade if you have a Pro license:
* Pass a license file to helm upgrade
```bash
helm upgrade shipa . --timeout=1000s --namespace=$NAMESPACE -f values.override.yaml -f license.yaml
```
* Merge the license key from your license file into values.override.yaml (the merged file is sketched below) and run helm upgrade as usual
```bash
cat license.yaml | grep "license:" >> values.override.yaml
```
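After the merge, values.override.yaml would look roughly like the following sketch (the license value is a placeholder taken from your license.yaml):
```yaml
auth:
  adminUser: <your email here>
  adminPassword: <your admin password>
license: <license key copied from license.yaml>
```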
# CI/CD
Packaging and signing helm charts is automated using Github Actions
Charts are uploaded to one of several buckets based on the following conditions:
1. `shipa-charts-dev`, `push` to `master`, `push` to PR opened against `master`
2. `shipa-charts-cloud`, `tag` containing `cloud`
3. `shipa-charts`, `tag` not containing `cloud`
The chart name is composed of:
`{last_tag}-{commit_hash}`
For on-prem releases, if the tag is not a pre-release (i.e. it follows semantic versioning without an RC suffix, e.g. 1.3.0, not 1.3.0-rc1), the chart name is just `{last_tag}`; otherwise Helm would treat it as a development version.
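For illustration, with a hypothetical tag and commit hash the published versions would look like this:
```
1.4.0-rc1-a1b2c3d   # pre-release tag plus commit hash (development version)
1.4.0               # stable on-prem release tag, no commit hash
```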
### Usage
```
# only first time
helm repo add shipa-dev https://shipa-charts-dev.storage.googleapis.com
helm repo add shipa-cloud https://shipa-charts-cloud.storage.googleapis.com
helm repo add shipa-onprem https://shipa-charts.storage.googleapis.com
# refresh available charts
helm repo update
# check available versions
helm search repo shipa --versions
# check available versions with development versions
helm search repo shipa --versions --devel
# check per repo
helm search repo shipa-dev --versions --devel
helm search repo shipa-cloud --versions --devel
helm search repo shipa-onprem --versions --devel
# helm install
helm install shipa shipa-dev/shipa --version 1.x.x -n shipa-system --timeout=1000s -f values.override.yaml
```
# Shipa client
If you are looking to operate Shipa from your local machine, we provide binaries of the Shipa client: https://learn.shipa.io/docs/downloading-the-shipa-client
# Collaboration/Contributing
We welcome all feedback and pull requests. If you have any questions, feel free to reach us at info@shipa.io.


@ -0,0 +1,39 @@
# Shipa
[Shipa](http://www.shipa.io/) is an Application-as-Code (AaC) provider designed to give developers a cleaner experience while letting guardrails be created easily. The "platform engineering dilemma" is how to allow innovation while keeping control. Shipa is application focused, so developers who are not experienced in Kubernetes can run through critical tasks such as deploying, managing, and iterating on their applications without detailed Kubernetes knowledge, while operators and admins can easily enforce rules and conventions without building multiple abstraction layers.
## Install Shipa - Helm Chart
The [Installation Requirements](https://learn.shipa.io/docs/installation-requirements) specify up-to-date cluster and ingress requirements. Installing the chart is straightforward.
Initially you will need to set an admin user and admin password/secret to first access Shipa.
```
helm repo add shipa-charts https://shipa-charts.storage.googleapis.com
helm repo update
helm upgrade --install shipa shipa-charts/shipa \
--set auth.adminUser=admin@acme.com --set auth.adminPassword=admin1234 \
--namespace shipa-system --create-namespace --timeout=1000s --wait
```
## Install Shipa - ClusterIP
By default Shipa will install Traefik as the load balancer.
If this creates a conflict or there is a cluster limitation, you can also leverage ClusterIP for routing, which is the
second set of optional prompts in the Rancher UI.
[Installing Shipa with ClusterIP on K3d](https://shipa.io/2021/10/k3d-and-shipa-deploymnet/)
```
helm install shipa shipa-charts/shipa -n shipa-system --create-namespace \
--timeout=15m \
--set=metrics.image=gcr.io/shipa-1000/metrics:30m \
--set=auth.adminUser=admin@acme.com \
--set=auth.adminPassword=admin1234 \
--set=shipaCluster.serviceType=ClusterIP \
--set=shipaCluster.ip=10.43.10.20 \
--set=service.nginx.serviceType=ClusterIP \
--set=service.nginx.clusterIP=10.43.10.10
```


@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
install


@ -0,0 +1,16 @@
apiVersion: v1
appVersion: "3.6"
description: NoSQL document-oriented database that stores JSON-like documents with
dynamic schemas, simplifying the integration of data in content-driven applications.
home: https://github.com/mongodb/mongo
icon: https://webassets.mongodb.com/_com_assets/cms/mongodb-logo-rgb-j6w271g1xn.jpg
maintainers:
- email: unguiculus@gmail.com
name: unguiculus
- email: ssheehy@firescope.com
name: steven-sheehy
name: mongodb-replicaset
sources:
- https://github.com/mongodb/mongo
- https://github.com/percona/mongodb_exporter
version: 3.11.3


@ -0,0 +1,6 @@
approvers:
- unguiculus
- steven-sheehy
reviewers:
- unguiculus
- steven-sheehy


@ -0,0 +1,434 @@
# MongoDB Helm Chart
## Prerequisites Details
* Kubernetes 1.9+
* Kubernetes beta APIs enabled only if `podDisruptionBudget` is enabled
* PV support on the underlying infrastructure
## StatefulSet Details
* https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
## StatefulSet Caveats
* https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/#limitations
## Chart Details
This chart implements a dynamically scalable [MongoDB replica set](https://docs.mongodb.com/manual/tutorial/deploy-replica-set/)
using Kubernetes StatefulSets and Init Containers.
## Installing the Chart
To install the chart with the release name `my-release`:
``` console
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install --name my-release stable/mongodb-replicaset
```
## Configuration
The following table lists the configurable parameters of the mongodb chart and their default values.
| Parameter | Description | Default |
| ----------------------------------- | ------------------------------------------------------------------------- | --------------------------------------------------- |
| `replicas` | Number of replicas in the replica set | `3` |
| `replicaSetName` | The name of the replica set | `rs0` |
| `skipInitialization` | If `true` skip replica set initialization during bootstrapping | `false` |
| `podDisruptionBudget` | Pod disruption budget | `{}` |
| `port` | MongoDB port | `27017` |
| `imagePullSecrets` | Image pull secrets | `[]` |
| `installImage.repository` | Image name for the install container | `unguiculus/mongodb-install` |
| `installImage.tag` | Image tag for the install container | `0.7` |
| `installImage.pullPolicy` | Image pull policy for the init container that establishes the replica set | `IfNotPresent` |
| `copyConfigImage.repository` | Image name for the copy config init container | `busybox` |
| `copyConfigImage.tag` | Image tag for the copy config init container | `1.29.3` |
| `copyConfigImage.pullPolicy` | Image pull policy for the copy config init container | `IfNotPresent` |
| `image.repository` | MongoDB image name | `mongo` |
| `image.tag` | MongoDB image tag | `3.6` |
| `image.pullPolicy` | MongoDB image pull policy | `IfNotPresent` |
| `podAnnotations` | Annotations to be added to MongoDB pods | `{}` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `999` |
| `securityContext.runAsUser` | User ID for the container | `999` |
| `securityContext.runAsNonRoot` | Require the container to run as a non-root user | `true` |
| `resources` | Pod resource requests and limits | `{}` |
| `persistentVolume.enabled` | If `true`, persistent volume claims are created | `true` |
| `persistentVolume.storageClass` | Persistent volume storage class | `` |
| `persistentVolume.accessModes` | Persistent volume access modes | `[ReadWriteOnce]` |
| `persistentVolume.size` | Persistent volume size | `10Gi` |
| `persistentVolume.annotations` | Persistent volume annotations | `{}` |
| `terminationGracePeriodSeconds` | Duration in seconds the pod needs to terminate gracefully | `30` |
| `tls.enabled` | Enable MongoDB TLS support including authentication | `false` |
| `tls.mode` | Set the SSL operation mode (disabled, allowSSL, preferSSL, requireSSL) | `requireSSL` |
| `tls.cacert` | The CA certificate used for the members | Our self signed CA certificate |
| `tls.cakey` | The CA key used for the members | Our key for the self signed CA certificate |
| `init.resources` | Pod resource requests and limits (for init containers) | `{}` |
| `init.timeout` | The amount of time in seconds to wait for bootstrap to finish | `900` |
| `metrics.enabled` | Enable Prometheus compatible metrics for pods and replicasets | `false` |
| `metrics.image.repository` | Image name for metrics exporter | `bitnami/mongodb-exporter` |
| `metrics.image.tag` | Image tag for metrics exporter | `0.9.0-debian-9-r2` |
| `metrics.image.pullPolicy` | Image pull policy for metrics exporter | `IfNotPresent` |
| `metrics.port` | Port for metrics exporter | `9216` |
| `metrics.path` | URL path to expose metrics | `/metrics` |
| `metrics.resources` | Metrics pod resource requests and limits | `{}` |
| `metrics.securityContext.enabled` | Enable security context | `true` |
| `metrics.securityContext.fsGroup` | Group ID for the metrics container | `1001` |
| `metrics.securityContext.runAsUser` | User ID for the metrics container | `1001` |
| `metrics.socketTimeout` | Time to wait for a non-responding socket | `3s` |
| `metrics.syncTimeout` | Time an operation with this session will wait before returning an error | `1m` |
| `metrics.prometheusServiceDiscovery`| Adds annotations for Prometheus ServiceDiscovery | `true` |
| `auth.enabled` | If `true`, keyfile access control is enabled | `false` |
| `auth.key` | Key for internal authentication | `` |
| `auth.existingKeySecret` | If set, an existing secret with this name for the key is used | `` |
| `auth.adminUser` | MongoDB admin user | `` |
| `auth.adminPassword` | MongoDB admin password | `` |
| `auth.metricsUser` | MongoDB clusterMonitor user | `` |
| `auth.metricsPassword` | MongoDB clusterMonitor password | `` |
| `auth.existingMetricsSecret` | If set, an existing secret with this name is used for the metrics user | `` |
| `auth.existingAdminSecret` | If set, an existing secret with this name is used for the admin user | `` |
| `serviceAnnotations` | Annotations to be added to the service | `{}` |
| `configmap` | Content of the MongoDB config file | `` |
| `initMongodStandalone` | If set, initContainer executes script in standalone mode | `` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Node/pod affinities | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `priorityClassName` | Pod priority class name | `` |
| `livenessProbe.failureThreshold` | Liveness probe failure threshold | `3` |
| `livenessProbe.initialDelaySeconds` | Liveness probe initial delay seconds | `30` |
| `livenessProbe.periodSeconds` | Liveness probe period seconds | `10` |
| `livenessProbe.successThreshold` | Liveness probe success threshold | `1` |
| `livenessProbe.timeoutSeconds` | Liveness probe timeout seconds | `5` |
| `readinessProbe.failureThreshold` | Readiness probe failure threshold | `3` |
| `readinessProbe.initialDelaySeconds`| Readiness probe initial delay seconds | `5` |
| `readinessProbe.periodSeconds` | Readiness probe period seconds | `10` |
| `readinessProbe.successThreshold` | Readiness probe success threshold | `1` |
| `readinessProbe.timeoutSeconds` | Readiness probe timeout seconds | `1` |
| `extraVars` | Set environment variables for the main container | `{}` |
| `extraLabels` | Additional labels to add to resources | `{}` |
*MongoDB config file*
All options that depend on the chart configuration are supplied as command-line arguments to `mongod`. By default, the chart creates an empty config file. Entries may be added via the `configmap` configuration value.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
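For example (the parameter values here are purely illustrative):
```console
helm install --name my-release --set replicas=5,persistentVolume.size=20Gi stable/mongodb-replicaset
```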
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
``` console
helm install --name my-release -f values.yaml stable/mongodb-replicaset
```
> **Tip**: You can use the default [values.yaml](values.yaml)
Once you have all 3 nodes running, you can run the "test.sh" script in this directory, which will insert a key into the primary and check the secondaries for output. This script requires that the `$RELEASE_NAME` environment variable be set in order to access the pods.
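A minimal invocation might look like this, assuming the release is named `my-release` and the script is run from the chart directory against your current kubeconfig context:
```console
RELEASE_NAME=my-release ./test.sh
```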
## Authentication
By default, this chart creates a MongoDB replica set without authentication. Authentication can be
enabled using the parameter `auth.enabled`. Once enabled, keyfile access control is set up and an
admin user with root privileges is created. User credentials and keyfile may be specified directly.
Alternatively, existing secrets may be provided. The secret for the admin user must contain the
keys `user` and `password`; the one for the key file must contain `key.txt`. The user is created with
full `root` permissions but is restricted to the `admin` database for security purposes. It can be
used to create additional users with more specific permissions.
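As an illustration, such secrets could be created ahead of time and then referenced via `auth.existingAdminSecret` and `auth.existingKeySecret` (the secret names below are hypothetical):
```console
kubectl create secret generic my-mongodb-admin \
  --from-literal=user=admin \
  --from-literal=password='s3cretpassword'
kubectl create secret generic my-mongodb-keyfile \
  --from-file=key.txt=./key.txt
```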
To connect to the mongo shell with authentication enabled, use a command similar to the following (substituting values as appropriate):
```shell
kubectl exec -it mongodb-replicaset-0 -- mongo mydb -u admin -p password --authenticationDatabase admin
```
## TLS support
To enable full TLS encryption set `tls.enabled` to `true`. It is recommended to create your own CA by executing:
```console
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
```
After that, paste the base64-encoded (`cat ca.key | base64 -w0`) cert and key into the fields `tls.cacert` and
`tls.cakey`. Adapt the configmap for the replica set as follows:
```yml
configmap:
storage:
dbPath: /data/db
net:
port: 27017
ssl:
mode: requireSSL
CAFile: /data/configdb/tls.crt
PEMKeyFile: /work-dir/mongo.pem
# Set to false to require mutual TLS encryption
allowConnectionsWithoutCertificates: true
replication:
replSetName: rs0
security:
authorization: enabled
# # Uncomment to enable mutual TLS encryption
# clusterAuthMode: x509
keyFile: /keydir/key.txt
```
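Instead of pasting the encoded values into values.yaml by hand, they could also be supplied on the command line; a sketch, assuming a Helm version that supports `--set-string`:
```console
helm install --name my-release stable/mongodb-replicaset \
  --set tls.enabled=true \
  --set-string tls.cacert="$(base64 -w0 ca.crt)" \
  --set-string tls.cakey="$(base64 -w0 ca.key)"
```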
To access the cluster you need one of the certificates generated during cluster setup, found in `/work-dir/mongo.pem` of the
respective container, or you can generate your own via:
```console
$ cat >openssl.cnf <<EOL
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $HOSTNAME1
DNS.2 = $HOSTNAME2
EOL
$ openssl genrsa -out mongo.key 2048
$ openssl req -new -key mongo.key -out mongo.csr -subj "/CN=$HOSTNAME" -config openssl.cnf
$ openssl x509 -req -in mongo.csr \
-CA $MONGOCACRT -CAkey $MONGOCAKEY -CAcreateserial \
-out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
$ rm mongo.csr
$ cat mongo.crt mongo.key > mongo.pem
$ rm mongo.key mongo.crt
```
Please ensure that you replace `$HOSTNAME` with your actual hostname and `$HOSTNAME1`, `$HOSTNAME2`, etc. with the
alternative hostnames you want to allow to access the MongoDB replica set. You should now be able to authenticate to
MongoDB with your `mongo.pem` certificate:
```console
mongo --ssl --sslCAFile=ca.crt --sslPEMKeyFile=mongo.pem --eval "db.adminCommand('ping')"
```
## Prometheus metrics
Enabling metrics as follows will allow each replica set pod to export Prometheus-compatible metrics
on server status, individual replica set information, the replication oplog, and the storage engine.
```yaml
metrics:
enabled: true
image:
repository: ssalaues/mongodb-exporter
tag: 0.6.1
pullPolicy: IfNotPresent
port: 9216
path: "/metrics"
socketTimeout: 3s
syncTimeout: 1m
prometheusServiceDiscovery: true
resources: {}
```
More information on the available [MongoDB Exporter](https://github.com/percona/mongodb_exporter) metrics can be found in its repository.
## Deep dive
Because the pod names depend on the release name chosen, the following examples use the
environment variable `RELEASE_NAME`. For example, if the Helm release name is `messy-hydra`, one would need to set the following before proceeding. The example scripts below assume 3 pods only.
```console
export RELEASE_NAME=messy-hydra
```
### Cluster Health
```console
for i in 0 1 2; do kubectl exec $RELEASE_NAME-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(db.serverStatus())"'; done
```
### Failover
One can check the roles being played by each node by using the following:
```console
$ for i in 0 1 2; do kubectl exec $RELEASE_NAME-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(rs.isMaster())"'; done
MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
{
"hosts" : [
"messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
"messy-hydra-mongodb-1.messy-hydra-mongodb.default.svc.cluster.local:27017",
"messy-hydra-mongodb-2.messy-hydra-mongodb.default.svc.cluster.local:27017"
],
"setName" : "rs0",
"setVersion" : 3,
"ismaster" : true,
"secondary" : false,
"primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
"me" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
"electionId" : ObjectId("7fffffff0000000000000001"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2016-09-13T01:10:12.680Z"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
"ok" : 1
}
```
This lets us see which member is primary.
Let us now test persistence and failover. First, we insert a key (in the below example, we assume pod 0 is the master):
```console
$ kubectl exec $RELEASE_NAME-mongodb-replicaset-0 -- mongo --eval="printjson(db.test.insert({key1: 'value1'}))"
MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
{ "nInserted" : 1 }
```
Watch existing members:
```console
$ kubectl run --attach bbox --image=mongo:3.6 --restart=Never --env="RELEASE_NAME=$RELEASE_NAME" -- sh -c 'while true; do for i in 0 1 2; do echo $RELEASE_NAME-mongodb-replicaset-$i $(mongo --host=$RELEASE_NAME-mongodb-replicaset-$i.$RELEASE_NAME-mongodb-replicaset --eval="printjson(rs.isMaster())" | grep primary); sleep 1; done; done';
Waiting for pod default/bbox2 to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-1 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
```
Kill the primary and watch as a new primary gets elected.
```console
$ kubectl delete pod $RELEASE_NAME-mongodb-replicaset-0
pod "messy-hydra-mongodb-0" deleted
```
Delete all pods and let the StatefulSet controller bring them back up.
```console
$ kubectl delete po -l "app=mongodb-replicaset,release=$RELEASE_NAME"
$ kubectl get po --watch-only
NAME READY STATUS RESTARTS AGE
messy-hydra-mongodb-0 0/1 Pending 0 0s
messy-hydra-mongodb-0 0/1 Pending 0 0s
messy-hydra-mongodb-0 0/1 Pending 0 7s
messy-hydra-mongodb-0 0/1 Init:0/2 0 7s
messy-hydra-mongodb-0 0/1 Init:1/2 0 27s
messy-hydra-mongodb-0 0/1 Init:1/2 0 28s
messy-hydra-mongodb-0 0/1 PodInitializing 0 31s
messy-hydra-mongodb-0 0/1 Running 0 32s
messy-hydra-mongodb-0 1/1 Running 0 37s
messy-hydra-mongodb-1 0/1 Pending 0 0s
messy-hydra-mongodb-1 0/1 Pending 0 0s
messy-hydra-mongodb-1 0/1 Init:0/2 0 0s
messy-hydra-mongodb-1 0/1 Init:1/2 0 20s
messy-hydra-mongodb-1 0/1 Init:1/2 0 21s
messy-hydra-mongodb-1 0/1 PodInitializing 0 24s
messy-hydra-mongodb-1 0/1 Running 0 25s
messy-hydra-mongodb-1 1/1 Running 0 30s
messy-hydra-mongodb-2 0/1 Pending 0 0s
messy-hydra-mongodb-2 0/1 Pending 0 0s
messy-hydra-mongodb-2 0/1 Init:0/2 0 0s
messy-hydra-mongodb-2 0/1 Init:1/2 0 21s
messy-hydra-mongodb-2 0/1 Init:1/2 0 22s
messy-hydra-mongodb-2 0/1 PodInitializing 0 25s
messy-hydra-mongodb-2 0/1 Running 0 26s
messy-hydra-mongodb-2 1/1 Running 0 30s
...
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-1 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
```
Check the previously inserted key:
```console
$ kubectl exec $RELEASE_NAME-mongodb-replicaset-1 -- mongo --eval="rs.slaveOk(); db.test.find({key1:{\$exists:true}}).forEach(printjson)"
MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
{ "_id" : ObjectId("57b180b1a7311d08f2bfb617"), "key1" : "value1" }
```
### Scaling
Scaling should be managed with `helm upgrade`, which is the recommended way.
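For example, to scale the replica set to 5 members (release name and count are illustrative):
```console
helm upgrade my-release stable/mongodb-replicaset --set replicas=5
```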
### Indexes and Maintenance
You can run Mongo in standalone mode and execute JavaScript code on each replica at initContainer time using `initMongodStandalone`.
This allows you to create indexes on replicasets following [best practices](https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/).
#### Example: Creating Indexes
```js
initMongodStandalone: |+
db = db.getSiblingDB("mydb")
db.my_users.createIndex({email: 1})
```
Tail the logs to debug running index builds or to follow their progress:
```sh
kubectl exec -it $RELEASE-mongodb-replicaset-0 -c bootstrap -- tail -f /work-dir/log.txt
```
### Migrate existing ReplicaSets into Kubernetes
If you have an existing ReplicaSet that is currently deployed outside of Kubernetes and want to move it into a cluster, you can do so by using the `skipInitialization` flag.
First set the `skipInitialization` variable to `true` in values.yaml and install the Helm chart. That way you end up with uninitialized MongoDB pods that can be added to the existing ReplicaSet.
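A minimal values.yaml sketch for this step:
```
skipInitialization: true
```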
Next, make sure that all ReplicaSet members can be resolved correctly via DNS. In Kubernetes you can, for example, use an `ExternalName` service.
```
apiVersion: v1
kind: Service
metadata:
name: mongodb01
namespace: mongo
spec:
type: ExternalName
externalName: mongodb01.mydomain.com
```
If you also put each StatefulSet member behind a load balancer, the ReplicaSet members outside of the cluster will also be able to reach the pods inside the cluster.
```
apiVersion: v1
kind: Service
metadata:
name: mongodb-0
namespace: mongo
spec:
selector:
statefulset.kubernetes.io/pod-name: mongodb-0
ports:
- port: 27017
targetPort: 27017
type: LoadBalancer
```
Now all that is left to do is to put the LoadBalancer IPs into the `/etc/hosts` file (or set up the DNS resolution some other way):
```
1.2.3.4 mongodb-0
5.6.7.8 mongodb-1
```
With a setup like this, each replica set member can resolve the DNS entries of the others, and you can add the new pods to your existing MongoDB cluster as if they were just normal nodes.
Of course you need to make sure to get your security settings right. Enforced TLS is a good idea in a setup like this. Also make sure that you activate auth and get the firewall settings right.
Once you have fully migrated, remove the old nodes from the replica set, as sketched below.
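A sketch of removing a migrated-out member from the mongo shell (the primary host and member address are illustrative):
```console
mongo --host <primary> --eval "rs.remove('mongodb01.mydomain.com:27017')"
```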


@ -0,0 +1 @@
# No config change. Just use defaults.


@ -0,0 +1,10 @@
auth:
enabled: true
adminUser: username
adminPassword: password
metricsUser: metrics
metricsPassword: password
key: keycontent
metrics:
enabled: true


@ -0,0 +1,10 @@
tls:
# Enable or disable MongoDB TLS support
enabled: true
# Please generate your own TLS CA by generating it via:
# $ openssl genrsa -out ca.key 2048
# $ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
# After that you can base64 encode it and paste it here:
# $ cat ca.key | base64 -w0
cacert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNxakNDQVpJQ0NRQ1I1aXNNQlRmQzdUQU5CZ2txaGtpRzl3MEJBUXNGQURBWE1SVXdFd1lEVlFRRERBeHQKZVdSdmJXRnBiaTVqYjIwd0hoY05NVGt4TVRFeU1EZ3hOakUwV2hjTk5EY3dNek13TURneE5qRTBXakFYTVJVdwpFd1lEVlFRRERBeHRlV1J2YldGcGJpNWpiMjB3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNwM0UrdVpWanhaY3BYNUFCbEtRa2crZjFmSnJzR1JJNVQrMzcyMkIvYnRyTVo4M3FyRTg2RFdEYXEKN0k1YTdlOGFVTGt2ZVpsaW02aWxsUW5CTHJPVUtVZ3R1OWZINlZydlBuMTl3UDFibEMvU0NWZHoxemNSUWlJWQpOMVVWN2VGaWUzdjhiNXVRM2RFcVBPV2FMM0w2N0Q1T0lDb043Z21QL2QwVVBaWjNHdDJLNTZsNXBzY1h4OGYwCkd3ZWdSRGpiVnZmc2dUSW50dEJ6SGh6c0JENUxON054aDd5RWVacW5admtuTDg5S2JZUEFPUk82N3NKUlBhWHMKUDhuVDhqalFJaGlRSUZDNTVXN3JrZ1hid1Znajdwb0kyby9XSDM4WXZ6TG1OVnMyOThYUDZmUXhCQ0NwMmFjRgpkOTVQRjZmbFVJeW9RNGRuOUF5UlpRa0owdlpMQWdNQkFBRXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS21XCjY2SlB4V0E4MVlYTEZtclFrdmM2NE9ycFJXTHJtNHFqaFREREtvVzY1V291MzNyOEVVQktzQ2FQOHNWWXhobFQKaWhGcDhIemNqTXpWRjFqU3ZiT0c5UnhrSG16ZEIvL3FwMDdTVFp0S2R1cThad2RVdnh1Z1FXSFNzOHA4YVNHUAowNDlkSDBqUnpEZklyVGp4Z3ZNOUJlWmorTkdqT1FyUGRvQVBKeTl2THowZmYya1gzVjJadTFDWnNnbDNWUXFsCjRsNzB3azFuVk5tTXY4Nnl5cUZXaWFRTWhuSXFjKzBwYUJaRjJFSGNpSExuZWcweVVTZVN4REsrUkk4SE9mT3oKNVFpUHpqSGs1b3czd252NDhQWVJMODdLTWJtRzF0eThyRHMxUlVGWkZueGxHd0t4UmRmckt3aHJJbVRBT2N4Vwo5bVhCU3ZzY3RjM2tIZTRIVFdRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
cakey: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcWR4UHJtVlk4V1hLVitRQVpTa0pJUG45WHlhN0JrU09VL3QrOXRnZjI3YXpHZk42CnF4UE9nMWcycXV5T1d1M3ZHbEM1TDNtWllwdW9wWlVKd1M2emxDbElMYnZYeCtsYTd6NTlmY0Q5VzVRdjBnbFgKYzljM0VVSWlHRGRWRmUzaFludDcvRytia04zUktqemxtaTl5K3V3K1RpQXFEZTRKai8zZEZEMldkeHJkaXVlcAplYWJIRjhmSDlCc0hvRVE0MjFiMzdJRXlKN2JRY3g0YzdBUStTemV6Y1llOGhIbWFwMmI1SnkvUFNtMkR3RGtUCnV1N0NVVDJsN0QvSjAvSTQwQ0lZa0NCUXVlVnU2NUlGMjhGWUkrNmFDTnFQMWg5L0dMOHk1alZiTnZmRnorbjAKTVFRZ3FkbW5CWGZlVHhlbjVWQ01xRU9IWi9RTWtXVUpDZEwyU3dJREFRQUJBb0lCQVFDVWM3eWNBVzFMaEpmawpXcHRSemh4eFdxcnJSeEU3ZUIwZ0h2UW16bHFCanRwVyt1bWhyT3pXOC9qTFIzVmUyUVlZYktaOGJIejJwbTR0ClVPVTJsaGRTalFYTkdwZUsyMUtqTjIwN3c3aHFHa2YwL0Q4WE9lZWh5TGU5akZacmxQeGZNdWI0aDU1aGJNdUsKYTdDTElaOE8xL3ZZRWRwUFZGTzlLYlRYSk1CbEZJUERUaFJvR2RCTEFkREZNbzcrUnZYSFRUcXdyWmxDbWRDbgp5eld3WkhIQUZhdEdGWU9ybXcxdlZZY3h0OXk5c0FVZDBrVTQza05jVHVHR0MwMGh1QlZMcW9JZU9mMG12TDB0Ckg0S0d6LzBicGp4NFpoWlNKazd3ZkFsQ0xGL1N5YzVJOEJXWWNCb05Jc0RSbDdDUmpDVUoxYVNBNVNYNzZ2SVoKSlhnRWEyV3hBb0dCQU50M0pDRGtycjNXRmJ3cW1SZ2ZhUVV0UE1FVnZlVnJvQmRqZTBZVFFNbENlNTV2QU1uNQpadEFKQTVKTmxKN2VZRkVEa0JrVURJSDFDM3hlZHVWbEREWXpESzRpR0V1Wk8wVDNERFN3aks2cExsZ3JBN0QyCmZnS29ubVdGck5JdTI4UW1MNHhmcjUrWW9SNUo0L05QdFdWOWwwZk1NOHEwSTd5SVRNODlsZWlqQW9HQkFNWWoKTHk3VER1MWVJVWkrQXJFNTJFMEkwTVJPNWNLS2hxdGxJMnpCZkZlWm5LYWdwbnhCMTMxbi9wcGg0Wm1IYnMzZQpxOXBSb0RJT0h4cm5NOWFCa1JBTHJHNjBMeXF3eU5NNW1JemkvQytJK2RVOG55ZXIvZVNNRTNtdlFzbmpVcEhtClRtTjRrM0l4RWtqRnhCazVndFNlNlA5U0UyOFd6eVZoOGlkZHRjNDVBb0dBYzcwWFBvbWJaZDM3UkdxcXBrQWEKWUhLRThjY0hpSEFEMDVIUk54bDhOeWRxamhrNEwwdnAzcGlDVzZ1eVR6NHpTVVk1dmlBR29KcWNYaEJyWDNxMAp2L2lZSFZVNXZ0U21ueTR5TDY5VDRlQ3k0aWg5SDl3K2hDUnN0Rm1VMUp1RnBxSUV2V0RRKzdmQWNIckRUbE9nCjlFOFJjdm5MN29DbHdBMlpoRW1VUDBVQ2dZQWFhdUtGbWJwcHg1MGtkOEVnSkJoRTNTSUlxb1JUMWVoeXZiOWwKWnI3UFp6bk50YW04ODRKcHhBM2NRNlN5dGEzK1lPd0U1ZEU0RzAzbVptRXcvb0Y2NURPUFp4TEszRnRLWG1tSwpqMUVVZld6aUUzMGM2ditsRTFBZGIxSzJYRXJNRFNyeWRFY2tlSXA1alhUQjhEc1RZa1NxbGlUbE1PTlpscCtVCnhCZlRjUUtCZ0RoZHo4VjU1TzdNc0dyRVlQeGhoK0U0bklLb3BRc0RlNi9QdWRRVlJMRlNwVGpLNWlKcTF2RnIKajFyNDFCNFp0cjBYNGd6MzhrSUpwZGNvNUFxU25zVENreHhnYXh3RTNzVmlqNGZZRWlteDc3TS84VkZVbDZwLwphNmdBbFh2WHFaYmFvTGU3ekM2RXVZWjFtUzJGMVd4UE9KRzZpakFiMVNIQjVPOGFWdFR3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg=="


@ -0,0 +1,226 @@
#!/usr/bin/env bash
# Copyright 2018 The Kubernetes Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -eo pipefail
port=27017
replica_set="$REPLICA_SET"
script_name=${0##*/}
SECONDS=0
timeout="${TIMEOUT:-900}"
tls_mode="${TLS_MODE}"
if [[ "$AUTH" == "true" ]]; then
admin_user="$ADMIN_USER"
admin_password="$ADMIN_PASSWORD"
admin_creds=(-u "$admin_user" -p "$admin_password")
if [[ "$METRICS" == "true" ]]; then
metrics_user="$METRICS_USER"
metrics_password="$METRICS_PASSWORD"
fi
auth_args=("--auth" "--keyFile=/data/configdb/key.txt")
fi
log() {
local msg="$1"
local timestamp
timestamp=$(date --iso-8601=ns)
echo "[$timestamp] [$script_name] $msg" 2>&1 | tee -a /work-dir/log.txt 1>&2
}
retry_until() {
local host="${1}"
local command="${2}"
local expected="${3}"
local creds=("${admin_creds[@]}")
# Don't need credentials for admin user creation and pings that run on localhost
if [[ "${host}" =~ ^localhost ]]; then
creds=()
fi
until [[ $(mongo admin --host "${host}" "${creds[@]}" "${ssl_args[@]}" --quiet --eval "${command}" | tail -n1) == "${expected}" ]]; do
sleep 1
if (! ps "${pid}" &>/dev/null); then
log "mongod shutdown unexpectedly"
exit 1
fi
if [[ "${SECONDS}" -ge "${timeout}" ]]; then
log "Timed out after ${timeout}s attempting to bootstrap mongod"
exit 1
fi
log "Retrying ${command} on ${host}"
done
}
shutdown_mongo() {
local host="${1:-localhost}"
local args='force: true'
log "Shutting down MongoDB ($args)..."
if (! mongo admin --host "${host}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "db.shutdownServer({$args})"); then
log "db.shutdownServer() failed, sending the terminate signal"
kill -TERM "${pid}"
fi
}
init_mongod_standalone() {
if [[ ! -f /init/initMongodStandalone.js ]]; then
log "Skipping init mongod standalone script"
return 0
elif [[ -z "$(ls -1A /data/db)" ]]; then
log "mongod standalone script currently not supported on initial install"
return 0
fi
local port="27018"
log "Starting a MongoDB instance as standalone..."
mongod --config /data/configdb/mongod.conf --dbpath=/data/db "${auth_args[@]}" "${ssl_server_args[@]}" --port "${port}" --bind_ip=0.0.0.0 2>&1 | tee -a /work-dir/log.txt 1>&2 &
export pid=$!
trap shutdown_mongo EXIT
log "Waiting for MongoDB to be ready..."
retry_until "localhost:${port}" "db.adminCommand('ping').ok" "1"
log "Running init js script on standalone mongod"
mongo admin --port "${port}" "${admin_creds[@]}" "${ssl_args[@]}" /init/initMongodStandalone.js
shutdown_mongo "localhost:${port}"
}
my_hostname=$(hostname)
log "Bootstrapping MongoDB replica set member: $my_hostname"
log "Reading standard input..."
while read -ra line; do
if [[ "${line}" == *"${my_hostname}"* ]]; then
service_name="$line"
fi
peers=("${peers[@]}" "$line")
done
# Generate the ca cert
ca_crt=/data/configdb/tls.crt
if [ -f "$ca_crt" ]; then
log "Generating certificate"
ca_key=/data/configdb/tls.key
pem=/work-dir/mongo.pem
ssl_args=(--ssl --sslCAFile "$ca_crt" --sslPEMKeyFile "$pem")
ssl_server_args=(--sslMode "$tls_mode" --sslCAFile "$ca_crt" --sslPEMKeyFile "$pem")
# Move into /work-dir
pushd /work-dir
cat >openssl.cnf <<EOL
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $(echo -n "$my_hostname" | sed s/-[0-9]*$//)
DNS.2 = $my_hostname
DNS.3 = $service_name
DNS.4 = localhost
DNS.5 = 127.0.0.1
EOL
# Generate the certs
openssl genrsa -out mongo.key 2048
openssl req -new -key mongo.key -out mongo.csr -subj "/OU=MongoDB/CN=$my_hostname" -config openssl.cnf
openssl x509 -req -in mongo.csr \
-CA "$ca_crt" -CAkey "$ca_key" -CAcreateserial \
-out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
rm mongo.csr
cat mongo.crt mongo.key > $pem
rm mongo.key mongo.crt
fi
init_mongod_standalone
if [[ "${SKIP_INIT}" == "true" ]]; then
log "Skipping initialization"
exit 0
fi
log "Peers: ${peers[*]}"
log "Starting a MongoDB replica"
mongod --config /data/configdb/mongod.conf --dbpath=/data/db --replSet="$replica_set" --port="${port}" "${auth_args[@]}" "${ssl_server_args[@]}" --bind_ip=0.0.0.0 2>&1 | tee -a /work-dir/log.txt 1>&2 &
pid=$!
trap shutdown_mongo EXIT
log "Waiting for MongoDB to be ready..."
retry_until "localhost" "db.adminCommand('ping').ok" "1"
log "Initialized."
# try to find a master
for peer in "${peers[@]}"; do
log "Checking if ${peer} is primary"
# Check rs.status() first since it could be in primary catch up mode which db.isMaster() doesn't show
if [[ $(mongo admin --host "${peer}" "${admin_creds[@]}" "${ssl_args[@]}" --quiet --eval "rs.status().myState") == "1" ]]; then
retry_until "${peer}" "db.isMaster().ismaster" "true"
log "Found primary: ${peer}"
primary="${peer}"
break
fi
done
if [[ "${primary}" = "${service_name}" ]]; then
log "This replica is already PRIMARY"
elif [[ -n "${primary}" ]]; then
if [[ $(mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --quiet --eval "rs.conf().members.findIndex(m => m.host == '${service_name}:${port}')") == "-1" ]]; then
log "Adding myself (${service_name}) to replica set..."
if (mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "rs.add('${service_name}')" | grep 'Quorum check failed'); then
log 'Quorum check failed, unable to join replicaset. Exiting prematurely.'
exit 1
fi
fi
sleep 3
log 'Waiting for replica to reach SECONDARY state...'
retry_until "${service_name}" "rs.status().myState" "2"
log '✓ Replica reached SECONDARY state.'
elif (mongo "${ssl_args[@]}" --eval "rs.status()" | grep "no replset config has been received"); then
log "Initiating a new replica set with myself ($service_name)..."
mongo "${ssl_args[@]}" --eval "rs.initiate({'_id': '$replica_set', 'members': [{'_id': 0, 'host': '$service_name'}]})"
sleep 3
log 'Waiting for replica to reach PRIMARY state...'
retry_until "localhost" "db.isMaster().ismaster" "true"
primary="${service_name}"
log '✓ Replica reached PRIMARY state.'
if [[ "${AUTH}" == "true" ]]; then
log "Creating admin user..."
mongo admin "${ssl_args[@]}" --eval "db.createUser({user: '${admin_user}', pwd: '${admin_password}', roles: [{role: 'root', db: 'admin'}]})"
fi
fi
# User creation
if [[ -n "${primary}" && "$AUTH" == "true" && "$METRICS" == "true" ]]; then
metric_user_count=$(mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "db.system.users.find({user: '${metrics_user}'}).count()" --quiet)
if [[ "${metric_user_count}" == "0" ]]; then
log "Creating clusterMonitor user..."
mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "db.createUser({user: '${metrics_user}', pwd: '${metrics_password}', roles: [{role: 'clusterMonitor', db: 'admin'}, {role: 'read', db: 'local'}]})"
fi
fi
log "MongoDB bootstrap complete"
exit 0


@ -0,0 +1,14 @@
1. After the statefulset is created completely, one can check which instance is primary by running:
$ for ((i = 0; i < {{ .Values.replicas }}; ++i)); do kubectl exec --namespace {{ .Release.Namespace }} {{ template "mongodb-replicaset.fullname" . }}-$i -- sh -c 'mongo --eval="printjson(rs.isMaster())"'; done
2. One can insert a key into the primary instance of the mongodb replica set by running the following:
MASTER_POD_NAME must be replaced with the name of the master found from the previous step.
$ kubectl exec --namespace {{ .Release.Namespace }} MASTER_POD_NAME -- mongo --eval="printjson(db.test.insert({key1: 'value1'}))"
3. One can fetch the keys stored in the primary or any of the slave nodes in the following manner.
POD_NAME must be replaced by the name of the pod being queried.
$ kubectl exec --namespace {{ .Release.Namespace }} POD_NAME -- mongo --eval="rs.slaveOk(); db.test.find().forEach(printjson)"


@ -0,0 +1,78 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "mongodb-replicaset.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mongodb-replicaset.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mongodb-replicaset.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name for the admin secret.
*/}}
{{- define "mongodb-replicaset.adminSecret" -}}
{{- if .Values.auth.existingAdminSecret -}}
{{- .Values.auth.existingAdminSecret -}}
{{- else -}}
{{- template "mongodb-replicaset.fullname" . -}}-admin
{{- end -}}
{{- end -}}
{{- define "mongodb-replicaset.metricsSecret" -}}
{{- if .Values.auth.existingMetricsSecret -}}
{{- .Values.auth.existingMetricsSecret -}}
{{- else -}}
{{- template "mongodb-replicaset.fullname" . -}}-metrics
{{- end -}}
{{- end -}}
{{/*
Create the name for the key secret.
*/}}
{{- define "mongodb-replicaset.keySecret" -}}
{{- if .Values.auth.existingKeySecret -}}
{{- .Values.auth.existingKeySecret -}}
{{- else -}}
{{- template "mongodb-replicaset.fullname" . -}}-keyfile
{{- end -}}
{{- end -}}
{{- define "mongodb-replicaset.connection-string" -}}
{{- $string := "" -}}
{{- if .Values.auth.enabled }}
{{- $string = printf "mongodb://$METRICS_USER:$METRICS_PASSWORD@localhost:%s" (.Values.port|toString) -}}
{{- else -}}
{{- $string = printf "mongodb://localhost:%s" (.Values.port|toString) -}}
{{- end -}}
{{- if .Values.tls.enabled }}
{{- printf "%s?ssl=true&tlsCertificateKeyFile=/work-dir/mongo.pem&tlsCAFile=/ca/tls.crt" $string -}}
{{- else -}}
{{- printf $string -}}
{{- end -}}
{{- end -}}


@ -0,0 +1,18 @@
{{- if and (.Values.auth.enabled) (not .Values.auth.existingAdminSecret) -}}
apiVersion: v1
kind: Secret
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.adminSecret" . }}
type: Opaque
data:
user: {{ .Values.auth.adminUser | b64enc }}
password: {{ .Values.auth.adminPassword | b64enc }}
{{- end -}}


@ -0,0 +1,18 @@
{{- if .Values.tls.enabled -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-ca
data:
tls.key: {{ .Values.tls.cakey }}
tls.crt: {{ .Values.tls.cacert }}
{{- end -}}


@ -0,0 +1,20 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-init
data:
on-start.sh: |
{{ .Files.Get "init/on-start.sh" | indent 4 }}
{{- if .Values.initMongodStandalone }}
initMongodStandalone.js: |
{{ .Values.initMongodStandalone | indent 4 }}
{{- end }}


@ -0,0 +1,17 @@
{{- if and (.Values.auth.enabled) (not .Values.auth.existingKeySecret) -}}
apiVersion: v1
kind: Secret
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.keySecret" . }}
type: Opaque
data:
key.txt: {{ .Values.auth.key | b64enc }}
{{- end -}}


@ -0,0 +1,18 @@
{{- if and (.Values.auth.enabled) (not .Values.auth.existingMetricsSecret) (.Values.metrics.enabled) -}}
apiVersion: v1
kind: Secret
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.metricsSecret" . }}
type: Opaque
data:
user: {{ .Values.auth.metricsUser | b64enc }}
password: {{ .Values.auth.metricsPassword | b64enc }}
{{- end -}}


@ -0,0 +1,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-mongodb
data:
mongod.conf: |
{{ toYaml .Values.configmap | indent 4 }}


@ -0,0 +1,20 @@
{{- if .Values.podDisruptionBudget -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}
{{ toYaml .Values.podDisruptionBudget | indent 2 }}
{{- end -}}


@ -0,0 +1,32 @@
# A headless service for client applications to use
apiVersion: v1
kind: Service
metadata:
annotations:
{{- if .Values.serviceAnnotations }}
{{ toYaml .Values.serviceAnnotations | indent 4 }}
{{- end }}
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-client
spec:
type: ClusterIP
clusterIP: None
ports:
- name: mongodb
port: {{ .Values.port }}
{{- if .Values.metrics.enabled }}
- name: metrics
port: {{ .Values.metrics.port }}
targetPort: metrics
{{- end }}
selector:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}


@ -0,0 +1,25 @@
# A headless service to create DNS records for discovery purposes. Use the -client service to connect applications
apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}
spec:
type: ClusterIP
clusterIP: None
ports:
- name: mongodb
port: {{ .Values.port }}
publishNotReadyAddresses: true
selector:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}


@ -0,0 +1,354 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}
serviceName: {{ template "mongodb-replicaset.fullname" . }}
replicas: {{ .Values.replicas }}
template:
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/mongodb-mongodb-configmap.yaml") . | sha256sum }}
{{- if and (.Values.metrics.prometheusServiceDiscovery) (.Values.metrics.enabled) }}
prometheus.io/scrape: "true"
prometheus.io/port: {{ .Values.metrics.port | quote }}
prometheus.io/path: {{ .Values.metrics.path | quote }}
{{- end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.imagePullSecrets }}
- name: {{ . }}
{{- end}}
{{- end }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
initContainers:
- name: copy-config
image: "{{ .Values.copyConfigImage.repository }}:{{ .Values.copyConfigImage.tag }}"
imagePullPolicy: {{ .Values.copyConfigImage.pullPolicy | quote }}
command:
- "sh"
args:
- "-c"
- |
set -e
set -x
cp /configdb-readonly/mongod.conf /data/configdb/mongod.conf
{{- if .Values.tls.enabled }}
cp /ca-readonly/tls.key /data/configdb/tls.key
cp /ca-readonly/tls.crt /data/configdb/tls.crt
{{- end }}
{{- if .Values.auth.enabled }}
cp /keydir-readonly/key.txt /data/configdb/key.txt
chmod 600 /data/configdb/key.txt
{{- end }}
volumeMounts:
- name: workdir
mountPath: /work-dir
- name: config
mountPath: /configdb-readonly
- name: configdir
mountPath: /data/configdb
{{- if .Values.tls.enabled }}
- name: ca
mountPath: /ca-readonly
{{- end }}
{{- if .Values.auth.enabled }}
- name: keydir
mountPath: /keydir-readonly
{{- end }}
resources:
{{ toYaml .Values.init.resources | indent 12 }}
- name: install
image: "{{ .Values.installImage.repository }}:{{ .Values.installImage.tag }}"
args:
- --work-dir=/work-dir
imagePullPolicy: "{{ .Values.installImage.pullPolicy }}"
volumeMounts:
- name: workdir
mountPath: /work-dir
resources:
{{ toYaml .Values.init.resources | indent 12 }}
- name: bootstrap
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
command:
- /work-dir/peer-finder
args:
- -on-start=/init/on-start.sh
- "-service={{ template "mongodb-replicaset.fullname" . }}"
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: REPLICA_SET
value: {{ .Values.replicaSetName }}
- name: TIMEOUT
value: "{{ .Values.init.timeout }}"
- name: SKIP_INIT
value: "{{ .Values.skipInitialization }}"
- name: TLS_MODE
value: {{ .Values.tls.mode }}
{{- if .Values.auth.enabled }}
- name: AUTH
value: "true"
- name: ADMIN_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: user
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: password
{{- if .Values.metrics.enabled }}
- name: METRICS
value: "true"
- name: METRICS_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: user
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: password
{{- end }}
{{- end }}
volumeMounts:
- name: workdir
mountPath: /work-dir
- name: init
mountPath: /init
- name: configdir
mountPath: /data/configdb
- name: datadir
mountPath: /data/db
resources:
{{ toYaml .Values.init.resources | indent 12 }}
containers:
- name: {{ template "mongodb-replicaset.name" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
{{- if .Values.extraVars }}
env:
{{ toYaml .Values.extraVars | indent 12 }}
{{- end }}
ports:
- name: mongodb
containerPort: 27017
resources:
{{ toYaml .Values.resources | indent 12 }}
command:
- mongod
args:
- --config=/data/configdb/mongod.conf
- --dbpath=/data/db
- --replSet={{ .Values.replicaSetName }}
- --port=27017
- --bind_ip=0.0.0.0
{{- if .Values.auth.enabled }}
- --auth
- --keyFile=/data/configdb/key.txt
{{- end }}
{{- if .Values.tls.enabled }}
- --sslMode={{ .Values.tls.mode }}
- --sslCAFile=/data/configdb/tls.crt
- --sslPEMKeyFile=/work-dir/mongo.pem
{{- end }}
livenessProbe:
exec:
command:
- mongo
{{- if .Values.tls.enabled }}
- --ssl
- --sslCAFile=/data/configdb/tls.crt
- --sslPEMKeyFile=/work-dir/mongo.pem
{{- end }}
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
readinessProbe:
exec:
command:
- mongo
{{- if .Values.tls.enabled }}
- --ssl
- --sslCAFile=/data/configdb/tls.crt
- --sslPEMKeyFile=/work-dir/mongo.pem
{{- end }}
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
volumeMounts:
- name: datadir
mountPath: /data/db
- name: configdir
mountPath: /data/configdb
- name: workdir
mountPath: /work-dir
{{ if .Values.metrics.enabled }}
- name: metrics
image: "{{ .Values.metrics.image.repository }}:{{ .Values.metrics.image.tag }}"
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command:
- sh
- -c
- >-
/bin/mongodb_exporter
--mongodb.uri {{ template "mongodb-replicaset.connection-string" . }}
--mongodb.socket-timeout={{ .Values.metrics.socketTimeout }}
--mongodb.sync-timeout={{ .Values.metrics.syncTimeout }}
--web.telemetry-path={{ .Values.metrics.path }}
--web.listen-address=:{{ .Values.metrics.port }}
volumeMounts:
{{- if and (.Values.tls.enabled) }}
- name: ca
mountPath: /ca
readOnly: true
{{- end }}
- name: workdir
mountPath: /work-dir
readOnly: true
env:
{{- if .Values.auth.enabled }}
- name: METRICS_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: user
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: password
{{- end }}
ports:
- name: metrics
containerPort: {{ .Values.metrics.port }}
resources:
{{ toYaml .Values.metrics.resources | indent 12 }}
{{- if .Values.metrics.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.metrics.securityContext.runAsUser }}
{{- end }}
livenessProbe:
exec:
command:
- sh
- -c
- >-
/bin/mongodb_exporter
--mongodb.uri {{ template "mongodb-replicaset.connection-string" . }}
--test
initialDelaySeconds: 30
periodSeconds: 10
{{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "mongodb-replicaset.fullname" . }}-mongodb
- name: init
configMap:
defaultMode: 0755
name: {{ template "mongodb-replicaset.fullname" . }}-init
{{- if .Values.tls.enabled }}
- name: ca
secret:
defaultMode: 0400
secretName: {{ template "mongodb-replicaset.fullname" . }}-ca
{{- end }}
{{- if .Values.auth.enabled }}
- name: keydir
secret:
defaultMode: 0400
secretName: {{ template "mongodb-replicaset.keySecret" . }}
{{- end }}
- name: workdir
emptyDir: {}
- name: configdir
emptyDir: {}
{{- if .Values.persistentVolume.enabled }}
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
{{- range $key, $value := .Values.persistentVolume.annotations }}
{{ $key }}: "{{ $value }}"
{{- end }}
spec:
accessModes:
{{- range .Values.persistentVolume.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistentVolume.size | quote }}
{{- if .Values.persistentVolume.storageClass }}
{{- if (eq "-" .Values.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
{{- else }}
- name: datadir
emptyDir: {}
{{- end }}

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "mongodb-replicaset.fullname" . }}-tests
data:
mongodb-up-test.sh: |
{{ .Files.Get "tests/mongodb-up-test.sh" | indent 4 }}

View File

@ -0,0 +1,79 @@
apiVersion: v1
kind: Pod
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "mongodb-replicaset.fullname" . }}-test
annotations:
"helm.sh/hook": test-success
spec:
initContainers:
- name: test-framework
image: dduportal/bats:0.4.0
command:
- bash
- -c
- |
set -ex
# copy bats to tools dir
cp -R /usr/local/libexec/ /tools/bats/
volumeMounts:
- name: tools
mountPath: /tools
containers:
- name: mongo
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
command:
- /tools/bats/bats
- -t
- /tests/mongodb-up-test.sh
env:
- name: FULL_NAME
value: {{ template "mongodb-replicaset.fullname" . }}
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: REPLICAS
value: "{{ .Values.replicas }}"
{{- if .Values.auth.enabled }}
- name: AUTH
value: "true"
- name: ADMIN_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: user
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: password
{{- end }}
volumeMounts:
- name: tools
mountPath: /tools
- name: tests
mountPath: /tests
{{- if .Values.tls.enabled }}
- name: tls
mountPath: /tls
{{- end }}
volumes:
- name: tools
emptyDir: {}
- name: tests
configMap:
name: {{ template "mongodb-replicaset.fullname" . }}-tests
{{- if .Values.tls.enabled }}
- name: tls
secret:
secretName: {{ template "mongodb-replicaset.fullname" . }}-ca
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
{{- end }}
restartPolicy: Never

View File

@ -0,0 +1,48 @@
#! /bin/bash
# Copyright 2016 The Kubernetes Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
NS="${RELEASE_NAMESPACE:-default}"
POD_NAME="${RELEASE_NAME:-mongo}-mongodb-replicaset"
MONGOCACRT=/ca/tls.crt
MONGOPEM=/work-dir/mongo.pem
if [ -f $MONGOPEM ]; then
MONGOARGS="--ssl --sslCAFile $MONGOCACRT --sslPEMKeyFile $MONGOPEM"
fi
for i in $(seq 0 2); do
pod="${POD_NAME}-$i"
kubectl exec --namespace $NS $pod -- sh -c 'mongo '"$MONGOARGS"' --eval="printjson(rs.isMaster())"' | grep '"ismaster" : true'
if [ $? -eq 0 ]; then
echo "Found master: $pod"
MASTER=$pod
break
fi
done
kubectl exec --namespace $NS $MASTER -- mongo $MONGOARGS --eval='printjson(db.test.insert({"status": "success"}))'
# TODO: find maximum duration to wait for slaves to be up-to-date with master.
sleep 2
for i in $(seq 0 2); do
pod="${POD_NAME}-$i"
if [[ $pod != $MASTER ]]; then
echo "Reading from slave: $pod"
    kubectl exec --namespace $NS $pod -- mongo $MONGOARGS --eval='rs.slaveOk(); db.test.find().forEach(printjson)'
fi
done

View File

@ -0,0 +1,120 @@
#!/usr/bin/env bash
set -ex
CACRT_FILE=/work-dir/tls.crt
CAKEY_FILE=/work-dir/tls.key
MONGOPEM=/work-dir/mongo.pem
MONGOARGS="--quiet"
if [ -e "/tls/tls.crt" ]; then
# log "Generating certificate"
mkdir -p /work-dir
cp /tls/tls.crt /work-dir/tls.crt
cp /tls/tls.key /work-dir/tls.key
# Move into /work-dir
pushd /work-dir
cat >openssl.cnf <<EOL
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $(echo -n "$(hostname)" | sed s/-[0-9]*$//)
DNS.2 = $(hostname)
DNS.3 = localhost
DNS.4 = 127.0.0.1
EOL
# Generate the certs
openssl genrsa -out mongo.key 2048
openssl req -new -key mongo.key -out mongo.csr -subj "/OU=MongoDB/CN=$(hostname)" -config openssl.cnf
openssl x509 -req -in mongo.csr \
-CA "$CACRT_FILE" -CAkey "$CAKEY_FILE" -CAcreateserial \
-out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
cat mongo.crt mongo.key > $MONGOPEM
MONGOARGS="$MONGOARGS --ssl --sslCAFile $CACRT_FILE --sslPEMKeyFile $MONGOPEM"
fi
if [[ "${AUTH}" == "true" ]]; then
MONGOARGS="$MONGOARGS --username $ADMIN_USER --password $ADMIN_PASSWORD --authenticationDatabase admin"
fi
pod_name() {
local full_name="${FULL_NAME?Environment variable FULL_NAME not set}"
local namespace="${NAMESPACE?Environment variable NAMESPACE not set}"
local index="$1"
echo "$full_name-$index.$full_name.$namespace.svc.cluster.local"
}
replicas() {
echo "${REPLICAS?Environment variable REPLICAS not set}"
}
master_pod() {
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS "--host=$(pod_name "$i")" "--eval=rs.isMaster().ismaster")
if [[ "$response" == "true" ]]; then
pod_name "$i"
break
fi
done
}
setup() {
local ready=0
until [[ "$ready" -eq $(replicas) ]]; do
echo "Waiting for application to become ready" >&2
sleep 1
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS "--host=$(pod_name "$i")" "--eval=rs.status().ok" || true)
if [[ "$response" -eq 1 ]]; then
ready=$((ready + 1))
fi
done
done
}
@test "Testing mongodb client is executable" {
mongo -h
[ "$?" -eq 0 ]
}
@test "Connect mongodb client to mongodb pods" {
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS "--host=$(pod_name "$i")" "--eval=rs.status().ok")
if [[ ! "$response" -eq 1 ]]; then
exit 1
fi
done
}
@test "Write key to primary" {
response=$(mongo $MONGOARGS --host=$(master_pod) "--eval=db.test.insert({\"abc\": \"def\"}).nInserted")
if [[ ! "$response" -eq 1 ]]; then
exit 1
fi
}
@test "Read key from slaves" {
# wait for slaves to catch up
sleep 10
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS --host=$(pod_name "$i") "--eval=rs.slaveOk(); db.test.find({\"abc\":\"def\"})")
if [[ ! "$response" =~ .*def.* ]]; then
exit 1
fi
done
# Clean up a document after test
mongo $MONGOARGS --host=$(master_pod) "--eval=db.test.deleteMany({\"abc\": \"def\"})"
}
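
The test Pod above is registered through the `helm.sh/hook: test-success` annotation, so the bats suite it mounts can be run on demand against a live release. A minimal sketch, assuming the release is named `shipa` and installed into `shipa-system` (both names are placeholders):

```bash
# Run the chart's test hooks -- the bats-based MongoDB checks defined above.
# Release name and namespace are assumptions, not chart defaults.
helm test shipa --namespace shipa-system

# On failure, read the test pod's output; its name follows the
# "<fullname>-test" pattern used by the Pod template above.
kubectl logs shipa-mongodb-replicaset-test --namespace shipa-system
```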

View File

@ -0,0 +1,167 @@
# Override the name of the chart, which in turn changes the name of the containers, services etc.
nameOverride: ""
fullnameOverride: ""
replicas: 3
port: 27017
## Setting this will skip the replicaset and user creation process during bootstrapping
skipInitialization: false
replicaSetName: rs0
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 2
auth:
enabled: false
existingKeySecret: ""
existingAdminSecret: ""
existingMetricsSecret: ""
# adminUser: username
# adminPassword: password
# metricsUser: metrics
# metricsPassword: password
# key: keycontent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - myRegistryKeySecretName
# Specs for the Docker image for the init container that establishes the replica set
installImage:
repository: unguiculus/mongodb-install
tag: 0.7
pullPolicy: IfNotPresent
# Specs for the Docker image for the copyConfig init container
copyConfigImage:
repository: busybox
tag: 1.29.3
pullPolicy: IfNotPresent
# Specs for the MongoDB image
image:
repository: mongo
tag: 3.6
pullPolicy: IfNotPresent
# Additional environment variables to be set in the container
extraVars: {}
# - name: TCMALLOC_AGGRESSIVE_DECOMMIT
# value: "true"
# Prometheus Metrics Exporter
metrics:
enabled: false
image:
repository: bitnami/mongodb-exporter
tag: 0.10.0-debian-9-r71
pullPolicy: IfNotPresent
port: 9216
path: "/metrics"
socketTimeout: 3s
syncTimeout: 1m
prometheusServiceDiscovery: true
resources: {}
securityContext:
enabled: true
runAsUser: 1001
# Annotations to be added to MongoDB pods
podAnnotations: {}
securityContext:
enabled: true
runAsUser: 999
fsGroup: 999
runAsNonRoot: true
init:
resources: {}
timeout: 900
resources: {}
# limits:
# cpu: 500m
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 256Mi
## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
priorityClassName: ""
persistentVolume:
enabled: true
## mongodb-replicaset data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: ""
accessModes:
- ReadWriteOnce
size: 10Gi
annotations: {}
# Annotations to be added to the service
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
# Enable or disable MongoDB TLS support
enabled: false
# Set the SSL operation mode (disabled|allowSSL|preferSSL|requireSSL)
mode: requireSSL
  # Please generate your own TLS CA, for example:
# $ openssl genrsa -out ca.key 2048
# $ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
# After that you can base64 encode it and paste it here:
# $ cat ca.key | base64 -w0
# cacert:
# cakey:
# Entries for the MongoDB config file
configmap: {}
# Javascript code to execute on each replica at initContainer time
# This is the recommended way to create indexes on replicasets.
# Below is an example that creates indexes in the foreground on each replica in standalone mode.
# ref: https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/
# initMongodStandalone: |+
# db = db.getSiblingDB("mydb")
# db.my_users.createIndex({email: 1})
initMongodStandalone: ""
# Readiness probe
readinessProbe:
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
# Liveness probe
livenessProbe:
initialDelaySeconds: 30
timeoutSeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
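
In practice the defaults above are overridden from a small values file rather than edited in place. A minimal sketch, assuming the chart is consumed as the `mongodb-replicaset` dependency of a release named `shipa`; the storage class, credentials, and key material shown are placeholders:

```bash
# Enable authentication and pin persistent storage for the replica set;
# keys are nested under the dependency name when set from the parent chart.
cat > mongodb-overrides.yaml <<'EOF'
mongodb-replicaset:
  auth:
    enabled: true
    adminUser: admin
    adminPassword: change-me
    key: use-a-long-random-keyfile-string-here
  persistentVolume:
    storageClass: standard
    size: 20Gi
EOF

# Apply the overrides to the release (run from the chart directory).
helm upgrade --install shipa . --namespace shipa-system -f mongodb-overrides.yaml
```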

View File

@ -0,0 +1,9 @@
apiVersion: v1
kind: LimitRange
metadata:
name: limits
spec:
limits:
- defaultRequest:
cpu: 40m
type: Container

View File

@ -0,0 +1,45 @@
questions:
- variable: auth.adminUser
default: ""
required: true
type: string
    label: Initial Admin User Name, e.g. acme@yourorg.com
group: "Initial Settings - Required"
- variable: auth.adminPassword
default: ""
type: password
required: true
label: Initial Admin Password/Secret
group: "Initial Settings - Required"
- variable: shipaCluster.serviceType
default: ""
type: enum
required: false
    label: Cluster Service Type, e.g. ClusterIP [shipaCluster.serviceType]
group: "Shipa Cluster - Optional"
options:
- "ClusterIP"
- "NodePort"
- "LoadBalancer"
- variable: shipaCluster.ip
default: ""
type: string
required: false
label: Cluster IP if using ClusterIP Service Type [shipaCluster.ip]
group: "Shipa Cluster - Optional"
- variable: service.nginx.serviceType
default: ""
type: enum
required: false
    label: Override Nginx with a Service Type like ClusterIP [service.nginx.serviceType]
group: "Shipa Cluster - Optional"
options:
- "ClusterIP"
- "NodePort"
- "LoadBalancer"
- variable: service.nginx.clusterIP
default: ""
type: string
required: false
label: Cluster IP for Nginx [service.nginx.clusterIP]
group: "Shipa Cluster - Optional"

View File

@ -0,0 +1,146 @@
#!/bin/sh
set -euxo pipefail
is_shipa_initialized() {
  # By default we create the secret with empty certificates
  # and populate it during the first run of bootstrap.sh
CA=$(kubectl get secret/shipa-certificates -o json | jq ".data[\"ca.pem\"]")
LENGTH=${#CA}
if [ "$LENGTH" -gt "100" ]; then
return 0
fi
return 1
}
echo "Waiting for nginx ingress to be ready"
# This helper gets an IP address or DNS name of NGINX_SERVICE and prints it to /tmp/nginx-ip
/bin/bootstrap-helper --service-name=$NGINX_SERVICE --namespace=$POD_NAMESPACE --timeout=600 --filename=/tmp/nginx-ip
NGINX_ADDRESS=$(cat /tmp/nginx-ip)
HOST_ADDRESS=$(cat /tmp/nginx-ip)
# If target CNAMEs are set by user in values.yaml, then use the first CNAME from the list as HOST_ADDRESS
# since Shipa host can be only one in the shipa.conf
if [ ! -z "$SHIPA_MAIN_TARGET" -a "$SHIPA_MAIN_TARGET" != " " ]; then
HOST_ADDRESS=$SHIPA_MAIN_TARGET
fi
echo "Prepare shipa.conf"
cp -v /etc/shipa-default/shipa.conf /etc/shipa/shipa.conf
sed -i "s/SHIPA_PUBLIC_IP/$HOST_ADDRESS/g" /etc/shipa/shipa.conf
sed -ie "s/SHIPA_ORGANIZATION_ID/$SHIPA_ORGANIZATION_ID/g" /etc/shipa/shipa.conf
echo "shipa.conf: "
cat /etc/shipa/shipa.conf
if is_shipa_initialized; then
echo "Skip bootstrapping because shipa is already initialized"
exit 0
fi
CERTIFICATES_DIRECTORY=/tmp/certs
mkdir $CERTIFICATES_DIRECTORY
# certificate generation for default domain
sed "s/SHIPA_PUBLIC_IP/$NGINX_ADDRESS/g" /scripts/csr-shipa-ca.json > $CERTIFICATES_DIRECTORY/csr-shipa-ca.json
sed "s/SHIPA_PUBLIC_IP/$NGINX_ADDRESS/g" /scripts/csr-docker-cluster.json > $CERTIFICATES_DIRECTORY/csr-docker-cluster.json
sed "s/SHIPA_PUBLIC_IP/$NGINX_ADDRESS/g" /scripts/csr-etcd.json > $CERTIFICATES_DIRECTORY/csr-etcd.json
sed "s/SHIPA_PUBLIC_IP/$NGINX_ADDRESS/g" /scripts/csr-api-config.json > $CERTIFICATES_DIRECTORY/csr-api-config.json
sed "s/SHIPA_PUBLIC_IP/$NGINX_ADDRESS/g" /scripts/csr-api-server.json > $CERTIFICATES_DIRECTORY/csr-api-server.json
sed "s/ETCD_SERVICE/$ETCD_SERVICE/g" --in-place $CERTIFICATES_DIRECTORY/csr-etcd.json
# certificate generation for CNAMES
sed "s/SHIPA_API_CNAMES/$SHIPA_API_CNAMES/g" --in-place $CERTIFICATES_DIRECTORY/csr-docker-cluster.json
sed "s/SHIPA_API_CNAMES/$SHIPA_API_CNAMES/g" --in-place $CERTIFICATES_DIRECTORY/csr-etcd.json
sed "s/SHIPA_API_CNAMES/$SHIPA_API_CNAMES/g" --in-place $CERTIFICATES_DIRECTORY/csr-api-server.json
jq 'fromstream(tostream | select(length == 1 or .[1] != ""))' $CERTIFICATES_DIRECTORY/csr-docker-cluster.json > file.tmp && mv file.tmp $CERTIFICATES_DIRECTORY/csr-docker-cluster.json
jq 'fromstream(tostream | select(length == 1 or .[1] != ""))' $CERTIFICATES_DIRECTORY/csr-etcd.json > file.tmp && mv file.tmp $CERTIFICATES_DIRECTORY/csr-etcd.json
jq 'fromstream(tostream | select(length == 1 or .[1] != ""))' $CERTIFICATES_DIRECTORY/csr-api-server.json > file.tmp && mv file.tmp $CERTIFICATES_DIRECTORY/csr-api-server.json
cp /scripts/csr-etcd-client.json $CERTIFICATES_DIRECTORY/csr-etcd-client.json
cp /scripts/csr-client-ca.json $CERTIFICATES_DIRECTORY/csr-client-ca.json
cfssl gencert -initca $CERTIFICATES_DIRECTORY/csr-shipa-ca.json | cfssljson -bare $CERTIFICATES_DIRECTORY/ca
cfssl gencert -initca $CERTIFICATES_DIRECTORY/csr-client-ca.json | cfssljson -bare $CERTIFICATES_DIRECTORY/client-ca
cfssl gencert \
-ca=$CERTIFICATES_DIRECTORY/ca.pem \
-ca-key=$CERTIFICATES_DIRECTORY/ca-key.pem \
-profile=server \
$CERTIFICATES_DIRECTORY/csr-docker-cluster.json | cfssljson -bare $CERTIFICATES_DIRECTORY/docker-cluster
cfssl gencert \
-ca=$CERTIFICATES_DIRECTORY/ca.pem \
-ca-key=$CERTIFICATES_DIRECTORY/ca-key.pem \
-profile=server \
$CERTIFICATES_DIRECTORY/csr-etcd.json | cfssljson -bare $CERTIFICATES_DIRECTORY/etcd-server
cfssl gencert \
-ca=$CERTIFICATES_DIRECTORY/ca.pem \
-ca-key=$CERTIFICATES_DIRECTORY/ca-key.pem \
-profile=client \
$CERTIFICATES_DIRECTORY/csr-etcd-client.json | cfssljson -bare $CERTIFICATES_DIRECTORY/etcd-client
cfssl gencert \
-ca=$CERTIFICATES_DIRECTORY/ca.pem \
-ca-key=$CERTIFICATES_DIRECTORY/ca-key.pem \
-config=$CERTIFICATES_DIRECTORY/csr-api-config.json \
-profile=server \
$CERTIFICATES_DIRECTORY/csr-api-server.json | cfssljson -bare $CERTIFICATES_DIRECTORY/api-server
rm -f $CERTIFICATES_DIRECTORY/*.csr
rm -f $CERTIFICATES_DIRECTORY/*.json
CA_CERT=$(cat $CERTIFICATES_DIRECTORY/ca.pem | base64)
CA_KEY=$(cat $CERTIFICATES_DIRECTORY/ca-key.pem | base64)
CLIENT_CA_CERT=$(cat $CERTIFICATES_DIRECTORY/client-ca.pem | base64)
CLIENT_CA_KEY=$(cat $CERTIFICATES_DIRECTORY/client-ca-key.pem | base64)
DOCKER_CLUSTER_CERT=$(cat $CERTIFICATES_DIRECTORY/docker-cluster.pem | base64)
DOCKER_CLUSTER_KEY=$(cat $CERTIFICATES_DIRECTORY/docker-cluster-key.pem | base64)
ETCD_SERVER_CERT=$(cat $CERTIFICATES_DIRECTORY/etcd-server.pem | base64)
ETCD_SERVER_KEY=$(cat $CERTIFICATES_DIRECTORY/etcd-server-key.pem | base64)
ETCD_CLIENT_CERT=$(cat $CERTIFICATES_DIRECTORY/etcd-client.pem | base64)
ETCD_CLIENT_KEY=$(cat $CERTIFICATES_DIRECTORY/etcd-client-key.pem | base64)
API_SERVER_CERT=$(cat $CERTIFICATES_DIRECTORY/api-server.pem | base64)
API_SERVER_KEY=$(cat $CERTIFICATES_DIRECTORY/api-server-key.pem | base64)
# FIXME: name of secret
kubectl get secrets shipa-certificates -o json \
| jq ".data[\"ca.pem\"] |= \"$CA_CERT\"" \
| jq ".data[\"ca-key.pem\"] |= \"$CA_KEY\"" \
| jq ".data[\"client-ca.crt\"] |= \"$CLIENT_CA_CERT\"" \
| jq ".data[\"client-ca.key\"] |= \"$CLIENT_CA_KEY\"" \
| jq ".data[\"cert.pem\"] |= \"$DOCKER_CLUSTER_CERT\"" \
| jq ".data[\"key.pem\"] |= \"$DOCKER_CLUSTER_KEY\"" \
| jq ".data[\"etcd-server.crt\"] |= \"$ETCD_SERVER_CERT\"" \
| jq ".data[\"etcd-server.key\"] |= \"$ETCD_SERVER_KEY\"" \
| jq ".data[\"etcd-client.crt\"] |= \"$ETCD_CLIENT_CERT\"" \
| jq ".data[\"etcd-client.key\"] |= \"$ETCD_CLIENT_KEY\"" \
| jq ".data[\"api-server.crt\"] |= \"$API_SERVER_CERT\"" \
| jq ".data[\"api-server.key\"] |= \"$API_SERVER_KEY\"" \
| kubectl apply -f -
echo "CA:"
openssl x509 -in $CERTIFICATES_DIRECTORY/ca.pem -text -noout
echo "Docker cluster:"
openssl x509 -in $CERTIFICATES_DIRECTORY/docker-cluster.pem -text -noout
echo "Etcd server:"
openssl x509 -in $CERTIFICATES_DIRECTORY/etcd-server.pem -text -noout
echo "Etcd client:"
openssl x509 -in $CERTIFICATES_DIRECTORY/etcd-client.pem -text -noout
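
bootstrap.sh writes every generated certificate back into the `shipa-certificates` secret, and `is_shipa_initialized` above skips the whole procedure once `ca.pem` is populated. A small sketch for checking that state by hand, assuming the default `shipa-system` namespace:

```bash
# Confirm the CA landed in the secret that is_shipa_initialized inspects;
# a populated ca.pem means bootstrapping is skipped on subsequent runs.
kubectl get secret shipa-certificates --namespace shipa-system \
  -o jsonpath='{.data.ca\.pem}' | base64 -d | openssl x509 -noout -subject -dates

# The API server certificate produced by the last cfssl call can be
# inspected the same way.
kubectl get secret shipa-certificates --namespace shipa-system \
  -o jsonpath='{.data.api-server\.crt}' | base64 -d | openssl x509 -noout -subject -dates
```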

View File

@ -0,0 +1,6 @@
#!/bin/sh
/bin/shipad root user create $USERNAME --ignore-if-exists << EOF
$PASSWORD
$PASSWORD
EOF

View File

@ -0,0 +1,17 @@
{
"signing": {
"default": {
"expiry": "168h"
},
"profiles": {
"server": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
}
}
}
}

View File

@ -0,0 +1,16 @@
{
"CN": "Shipa",
"hosts": [
"SHIPA_PUBLIC_IP",
"SHIPA_API_CNAMES"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "shipa"
}
]
}

View File

@ -0,0 +1,12 @@
{
"CN": "Shipa",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "shipa"
}
]
}

View File

@ -0,0 +1,16 @@
{
"CN": "Shipa docker cluster",
"hosts": [
"SHIPA_PUBLIC_IP",
"SHIPA_API_CNAMES"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "Shipa"
}
]
}

View File

@ -0,0 +1,15 @@
{
"CN": "Shipa etcd",
"hosts": [
""
],
"key": {
"algo": "ecdsa",
"size": 256
},
"names": [
{
"O": "Shipa"
}
]
}

View File

@ -0,0 +1,17 @@
{
"CN": "Shipa etcd",
"hosts": [
"SHIPA_PUBLIC_IP",
"ETCD_SERVICE",
"SHIPA_API_CNAMES"
],
"key": {
"algo": "ecdsa",
"size": 256
},
"names": [
{
"O": "Shipa"
}
]
}

View File

@ -0,0 +1,12 @@
{
"CN": "Shipa",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "shipa"
}
]
}

View File

@ -0,0 +1,103 @@
#!/bin/sh
echo "Waiting for shipa api"
until curl --output /dev/null --silent "http://$SHIPA_ENDPOINT:$SHIPA_ENDPOINT_PORT"; do
echo "."
sleep 1
done
SHIPA_CLIENT="/bin/shipa"
$SHIPA_CLIENT target add -s local $SHIPA_ENDPOINT --insecure --port=$SHIPA_ENDPOINT_PORT --disable-cert-validation
$SHIPA_CLIENT login << EOF
$USERNAME
$PASSWORD
EOF
$SHIPA_CLIENT team create shipa-admin-team
$SHIPA_CLIENT team create shipa-system-team
$SHIPA_CLIENT framework add /scripts/default-framework-template.yaml
# we need this delay because it takes some time to initialize etcd
sleep 10
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
ADDR=$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
if [[ -z $ISTIO_INGRESS_IP ]]; then
$SHIPA_CLIENT cluster add shipa-cluster --framework=shipa-framework \
--cacert=$CACERT \
--addr=$ADDR \
--ingress-service-type="traefik:$INGRESS_SERVICE_TYPE" \
--ingress-ip="traefik:$INGRESS_IP" \
--ingress-debug="traefik:$INGRESS_DEBUG" \
--token=$TOKEN
else
$SHIPA_CLIENT cluster add shipa-cluster --framework=shipa-framework \
--cacert=$CACERT \
--addr=$ADDR \
--ingress-service-type="traefik:$INGRESS_SERVICE_TYPE" \
--ingress-ip="traefik:$INGRESS_IP" \
--ingress-debug="traefik:$INGRESS_DEBUG" \
--ingress-service-type="istio:$ISTIO_INGRESS_SERVICE_TYPE" \
--ingress-ip="istio:$ISTIO_INGRESS_IP" \
--token=$TOKEN
fi
$SHIPA_CLIENT role add TeamAdmin team
$SHIPA_CLIENT role permission add TeamAdmin team
$SHIPA_CLIENT role permission add TeamAdmin app
$SHIPA_CLIENT role permission add TeamAdmin cluster
$SHIPA_CLIENT role permission add TeamAdmin service
$SHIPA_CLIENT role permission add TeamAdmin service-instance
$SHIPA_CLIENT role add FrameworkAdmin framework
$SHIPA_CLIENT role permission add FrameworkAdmin framework
$SHIPA_CLIENT role permission add FrameworkAdmin node
$SHIPA_CLIENT role permission add FrameworkAdmin cluster
$SHIPA_CLIENT role add ClusterAdmin cluster
$SHIPA_CLIENT role permission add ClusterAdmin cluster
$SHIPA_CLIENT role add ServiceAdmin service
$SHIPA_CLIENT role add ServiceInstanceAdmin service-instance
$SHIPA_CLIENT role default add --team-create TeamAdmin
$SHIPA_CLIENT role default add --framework-add FrameworkAdmin
$SHIPA_CLIENT role default add --cluster-add ClusterAdmin
$SHIPA_CLIENT role default add --service-add ServiceAdmin
$SHIPA_CLIENT role default add --service-instance-add ServiceInstanceAdmin
if [ "x$DASHBOARD_ENABLED" != "xtrue" ]; then
echo "The dashboard is disabled"
exit 0
fi
echo "Creating the dashboard app"
$SHIPA_CLIENT app create dashboard \
--framework=shipa-framework \
--team=shipa-admin-team \
-e SHIPA_ADMIN_USER=$USERNAME \
-e SHIPA_CLOUD=$SHIPA_CLOUD \
-e SHIPA_TARGETS=$SHIPA_TARGETS \
-e SHIPA_PAY_API_HOST=$SHIPA_PAY_API_HOST \
-e GOOGLE_RECAPTCHA_SITEKEY=$GOOGLE_RECAPTCHA_SITEKEY \
-e SMARTLOOK_PROJECT_KEY=$SMARTLOOK_PROJECT_KEY
echo "Setting private envs for dashboard"
$SHIPA_CLIENT env set -a dashboard \
SHIPA_PAY_API_TOKEN=$SHIPA_PAY_API_TOKEN \
GOOGLE_RECAPTCHA_SECRET=$GOOGLE_RECAPTCHA_SECRET \
LAUNCH_DARKLY_SDK_KEY=$LAUNCH_DARKLY_SDK_KEY -p
COUNTER=0
until $SHIPA_CLIENT app deploy -a dashboard -i $DASHBOARD_IMAGE
do
echo "Deploy dashboard failed with $?, waiting 30 seconds then trying again"
sleep 30
let COUNTER=COUNTER+1
if [ $COUNTER -gt 3 ]; then
echo "Failed to deploy dashboard three times, giving up"
exit 1
fi
done

View File

@ -0,0 +1,34 @@
****************************************** Thanks for choosing Shipa! *********************************************
1. Configured default user:
Username: {{ .Values.auth.adminUser }}
Password: {{ .Values.auth.adminPassword }}
2. If this is a production cluster, please configure persistent volumes.
The default reclaimPolicy for dynamically provisioned persistent volumes is "Delete" and
users are advised to change it for production
The code snippet below can be used to set reclaimPolicy to "Retain" for all volumes:
PVCs=$(kubectl --namespace={{ .Release.Namespace }} get pvc -l release={{ .Release.Name }} -o name)
for pvc in $PVCs; do
volumeName=$(kubectl -n {{ .Release.Namespace }} get $pvc -o template --template=\{\{.spec.volumeName\}\})
kubectl -n {{ .Release.Namespace }} patch pv $volumeName -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done
3. Set default target for shipa-client:
export SHIPA_HOST=$(kubectl --namespace={{ .Release.Namespace }} get svc {{ template "shipa.fullname" . }}-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}") && if [[ -z $SHIPA_HOST ]]; then export SHIPA_HOST=$(kubectl --namespace={{ .Release.Namespace }} get svc {{ template "shipa.fullname" . }}-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}") ; fi
shipa target-add {{ .Release.Name }} $SHIPA_HOST -s
shipa login {{ .Values.auth.adminUser }}
shipa node-list
shipa app-list
************************************************************************************************************************
**** PLEASE BE PATIENT: Installing or upgrading Shipa may require downtime in order to perform database migrations. ****
************************************************************************************************************************

View File

@ -0,0 +1,77 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "shipa.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "shipa.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "shipa.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "shipa.labels" -}}
helm.sh/chart: {{ include "shipa.chart" . }}
{{ include "shipa.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
release: {{ .Release.Name }}
app: {{ include "shipa.name" . }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "shipa.selectorLabels" -}}
app.kubernetes.io/name: {{ include "shipa.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "shipa.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "shipa.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
If target CNAMEs are set by user in values.yaml, then use the first CNAME from
the list as main target since Shipa host can be only one in the shipa.conf
*/}}
{{- define "shipa.GetMainTarget" -}}
{{- if .Values.shipaApi.cnames }}
{{- index .Values.shipaApi.cnames 0 | quote -}}
{{- else -}}
{{- printf " " | quote -}}
{{- end -}}
{{- end -}}
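
`shipa.GetMainTarget` only ever returns the first entry of `shipaApi.cnames`, since shipa.conf can carry a single host. A brief sketch of supplying CNAMEs at install time; the hostnames are placeholders:

```bash
# The first CNAME becomes the main target (and therefore the host written
# into shipa.conf); the remaining names still end up in the generated
# certificates and in SHIPA_TARGETS.
helm upgrade --install shipa . --namespace shipa-system \
  --set shipaApi.cnames='{shipa.example.com,api.shipa.example.com}'
```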

View File

@ -0,0 +1,82 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-clair-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
config.template.yaml: |-
#
# This file is mounted to /clair-config/config.template.yaml and then processed by /entrypoint.sh
#
clair:
database:
# Database driver
type: pgsql
options:
# PostgreSQL Connection string
# https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING
source: host={{ template "shipa.fullname" . }}-postgres.{{ .Release.Namespace }} port=5432 user=postgres sslmode=disable statement_timeout=60000 password=$POSTGRES_PASSWORD
# Number of elements kept in the cache
      # Values unlikely to change (e.g. namespaces) are cached in order to prevent needless roundtrips to the database.
cachesize: 16384
# 32-bit URL-safe base64 key used to encrypt pagination tokens
# If one is not provided, it will be generated.
# Multiple clair instances in the same cluster need the same value.
paginationkey:
api:
# v3 grpc/RESTful API server address
addr: "0.0.0.0:6060"
# Health server address
      # This is an unencrypted endpoint useful for load balancers to check the healthiness of the clair server.
healthaddr: "0.0.0.0:6061"
# Deadline before an API request will respond with a 503
timeout: 900s
# Optional PKI configuration
# If you want to easily generate client certificates and CAs, try the following projects:
# https://github.com/coreos/etcd-ca
# https://github.com/cloudflare/cfssl
servername:
cafile:
keyfile:
certfile:
updater:
# Frequency the database will be updated with vulnerabilities from the default data sources
# The value 0 disables the updater entirely.
interval: 2h
enabledupdaters:
- debian
- ubuntu
- rhel
- oracle
- alpine
- suse
notifier:
# Number of attempts before the notification is marked as failed to be sent
attempts: 3
# Duration before a failed notification is retried
renotifyinterval: 2h
http:
# Optional endpoint that will receive notifications via POST requests
endpoint:
# Optional PKI configuration
# If you want to easily generate client certificates and CAs, try the following projects:
# https://github.com/cloudflare/cfssl
# https://github.com/coreos/etcd-ca
servername:
cafile:
keyfile:
certfile:
# Optional HTTP Proxy: must be a valid URL (including the scheme).
proxy:

View File

@ -0,0 +1,55 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-clair
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-clair
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-clair
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: clair
image: shipasoftware/clair:v2.1.7
imagePullPolicy: Always
ports:
- name: clair
containerPort: 6060
protocol: TCP
- name: health
containerPort: 6061
protocol: TCP
volumeMounts:
- name: {{ template "shipa.fullname" . }}-clair-config
mountPath: /clair-config/
- name: config-dir
mountPath: /etc/clair/
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: postgres-password
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: config-dir
emptyDir: {}
- name: {{ template "shipa.fullname" . }}-clair-config
configMap:
name: {{ template "shipa.fullname" . }}-clair-config
items:
- key: config.template.yaml
path: config.template.yaml

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-clair
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
name: {{ template "shipa.fullname" . }}-clair
ports:
- port: 6060
targetPort: 6060
protocol: TCP
name: clair
- port: 6061
targetPort: 6061
protocol: TCP
name: health

View File

@ -0,0 +1,63 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-etcd
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-etcd
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-etcd
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: etcd
image: "quay.io/coreos/etcd:v3.3.22"
command: ['/usr/local/bin/etcd',
{{- if .Values.etcd.debug }}
'--debug',
{{- end }}
'--listen-client-urls', 'https://0.0.0.0:2379',
'--data-dir=/var/etcd/data',
'--advertise-client-urls', 'https://0.0.0.0:2379',
'--client-cert-auth',
'--max-request-bytes', '10485760',
'--trusted-ca-file', '/certs/shipa-ca.crt',
'--cert-file', '/certs/etcd-server.crt',
'--key-file', '/certs/etcd-server.key' ]
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 2379
protocol: TCP
env:
- name: ETCDCTL_API
value: "3"
volumeMounts:
- name: data
mountPath: /var/etcd/data
subPath: etcd
- name: certificates
mountPath: /certs/
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ template "shipa.fullname" . }}-etcd-pvc
- name: certificates
secret:
secretName: shipa-certificates
items:
- key: ca.pem
path: shipa-ca.crt
- key: etcd-server.crt
path: etcd-server.crt
- key: etcd-server.key
path: etcd-server.key

View File

@ -0,0 +1,18 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}-etcd-pvc
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.etcd.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.etcd.persistence.size | quote }}
{{- if .Values.etcd.persistence.storageClass }}
{{- if (eq "-" .Values.etcd.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.etcd.persistence.storageClass }}"
{{- end }}
{{- end }}

View File

@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-etcd
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
name: {{ template "shipa.fullname" . }}-etcd
ports:
- port: 2379
targetPort: http
protocol: TCP
name: http

View File

@ -0,0 +1,36 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-metrics-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
prometheus.yml: |-
#
    # DO NOT EDIT. This file may be overwritten by the Shipa Helm chart.
#
global:
scrape_interval: 1m
scrape_configs:
- job_name: "pushgateway"
honor_labels: true
scheme: http
static_configs:
- targets: ['127.0.0.1:9093']
labels:
source: pushgateway
- job_name: "traefik"
honor_labels: true
scheme: http
static_configs:
- targets: ['{{ template "shipa.fullname" . }}-traefik-internal.{{ .Release.Namespace }}:9095']
{{- if .Values.metrics.extraPrometheusConfiguration }}
#
# User defined extra configuration
#
{{- range $line, $value := ( split "\n" .Values.metrics.extraPrometheusConfiguration ) }}
{{ $value }}
{{- end }}
{{- end }}
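
`metrics.extraPrometheusConfiguration` is appended verbatim, line by line, after the built-in scrape configs. A minimal sketch of adding one extra scrape job through a values file; the job name and target are placeholders:

```bash
# The extra configuration is a plain multi-line string; the chart prefixes
# each line with a fixed indent, so write it as if it continued the
# scrape job list rendered above.
cat > metrics-overrides.yaml <<'EOF'
metrics:
  extraPrometheusConfiguration: |
    - job_name: "my-app"
      scheme: http
      static_configs:
        - targets: ['my-app.default:8080']
EOF

helm upgrade --install shipa . --namespace shipa-system -f metrics-overrides.yaml
```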

View File

@ -0,0 +1,55 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-metrics
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-metrics
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-metrics
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
      # Please do not scale the metrics container: it runs without a storage lock (--storage.tsdb.no-lockfile)
- name: metrics
image: {{ .Values.metrics.image }}
imagePullPolicy: {{ .Values.metrics.pullPolicy }}
env:
- name: PROMETHEUS_ARGS
value: "--web.enable-admin-api {{ default ("--storage.tsdb.retention.time=1d") .Values.metrics.prometheusArgs }}"
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: metrics-password
ports:
- name: prometheus
containerPort: 9090
protocol: TCP
- name: pushgateway
containerPort: 9091
protocol: TCP
volumeMounts:
- name: "{{ template "shipa.fullname" . }}-metrics-config"
mountPath: /etc/prometheus/config
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: {{ template "shipa.fullname" . }}-metrics-config
configMap:
name: {{ template "shipa.fullname" . }}-metrics-config
items:
- key: prometheus.yml
path: prometheus.yml

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-metrics
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
name: {{ template "shipa.fullname" . }}-metrics
ports:
- port: 9090
targetPort: 9090
protocol: TCP
name: prometheus
- port: 9091
targetPort: 9091
protocol: TCP
name: pushgateway

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-nginx
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
data:
{{- if .Values.service.nginx.config }}
{{- range $key, $value := .Values.service.nginx.config }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
proxy-body-size: "512M"
proxy-read-timeout: "300"
proxy-connect-timeout: "300"
proxy-send-timeout: "300"
upstream-keepalive-timeout: "300"
{{- end }}
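
Setting `service.nginx.config` replaces the default block above wholesale, so any of the proxy timeouts still needed must be repeated. A small sketch; the values are placeholders, and note that the template renders them unquoted, so purely numeric entries need embedded quotes to stay valid ConfigMap strings:

```bash
cat > nginx-overrides.yaml <<'EOF'
service:
  nginx:
    config:
      proxy-body-size: "1024M"
      proxy-read-timeout: '"600"'   # embedded quotes keep the rendered value a string
EOF

helm upgrade --install shipa . --namespace shipa-system -f nginx-overrides.yaml
```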

View File

@ -0,0 +1,84 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
annotations:
sidecar.istio.io/inject: "false"
spec:
replicas: 1
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-nginx-ingress
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-nginx-ingress
annotations:
sidecar.istio.io/inject: "false"
spec:
# wait up to 30 seconds for the drain of connections
terminationGracePeriodSeconds: 30
serviceAccountName: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
nodeSelector:
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master
args:
- /nginx-ingress-controller
- --election-id={{ template "shipa.fullname" . }}-leader
- --configmap=$(POD_NAMESPACE)/{{ template "shipa.fullname" . }}-nginx
- --tcp-services-configmap=$(POD_NAMESPACE)/{{ template "shipa.fullname" . }}-nginx-tcp-services
- --publish-service=$(POD_NAMESPACE)/{{ template "shipa.fullname" . }}-ingress-nginx
- --http-port={{ .Values.shipaApi.port }}
- --ingress-class=shipa-nginx-ingress
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: shipa
containerPort: {{ .Values.shipaApi.port }}
protocol: TCP
- name: etcd
containerPort: 2379
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown

View File

@ -0,0 +1,131 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-clusterrole
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resourceNames:
- {{ template "shipa.fullname" . }}-leader-shipa-nginx-ingress
resources:
- configmaps
verbs:
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-role
labels: {{- include "shipa.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- "{{ template "shipa.fullname" . }}-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-role-nisa-binding
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "shipa.fullname" . }}-nginx-ingress-role
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-clusterrole-nisa-binding
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "shipa.fullname" . }}-nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
namespace: {{ .Release.Namespace }}

View File

@ -0,0 +1,44 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-ingress-nginx
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
spec:
type: "{{ .Values.service.nginx.serviceType }}"
{{- if .Values.service.nginx.loadBalancerIP }}
loadBalancerIP: "{{ .Values.service.nginx.loadBalancerIP }}"
{{- end }}
{{- if .Values.service.nginx.clusterIP }}
clusterIP: "{{ .Values.service.nginx.clusterIP }}"
{{- end }}
selector:
name: {{ template "shipa.fullname" . }}-nginx-ingress
ports:
- port: {{ .Values.shipaApi.securePort }}
name: shipa-secure
targetPort: {{ .Values.shipaApi.securePort }}
protocol: TCP
{{- if eq .Values.service.nginx.serviceType "NodePort" }}
{{- if .Values.service.nginx.secureApiNodePort }}
nodePort: {{ .Values.service.nginx.secureApiNodePort }}
{{- end }}
{{- end }}
- port: {{ .Values.shipaApi.port }}
name: shipa
targetPort: {{ .Values.shipaApi.port }}
protocol: TCP
{{- if eq .Values.service.nginx.serviceType "NodePort" }}
{{- if .Values.service.nginx.apiNodePort }}
nodePort: {{ .Values.service.nginx.apiNodePort }}
{{- end }}
{{- end }}
- port: {{ .Values.shipaApi.etcdPort }}
name: etcd
targetPort: 2379
protocol: TCP
{{- if eq .Values.service.nginx.serviceType "NodePort" }}
{{- if .Values.service.nginx.etcdNodePort }}
nodePort: {{ .Values.service.nginx.etcdNodePort }}
{{- end }}
{{- end }}
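
The nodePort values above are only honored when the service type is NodePort. A brief sketch of exposing the API and etcd on fixed node ports; the port numbers are placeholders within the default NodePort range:

```bash
# Switch the ingress service to NodePort and pin each exposed port.
helm upgrade --install shipa . --namespace shipa-system \
  --set service.nginx.serviceType=NodePort \
  --set service.nginx.apiNodePort=30080 \
  --set service.nginx.secureApiNodePort=30443 \
  --set service.nginx.etcdNodePort=32379
```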

View File

@ -0,0 +1,6 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"

View File

@ -0,0 +1,9 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}-nginx-tcp-services
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
data:
{{ .Values.shipaApi.securePort }}: "{{ .Release.Namespace }}/{{ include "shipa.fullname" . }}-api:{{ .Values.shipaApi.securePort }}"
2379: "{{ .Release.Namespace }}/{{ include "shipa.fullname" . }}-etcd:2379"

View File

@ -0,0 +1,46 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-postgres
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-postgres
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-postgres
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: postgres
image: postgres:13
imagePullPolicy: IfNotPresent
ports:
- name: postgres
containerPort: 5432
protocol: TCP
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
subPath: postgres
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: postgres-password
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ template "shipa.fullname" . }}-postgres-pvc

View File

@ -0,0 +1,18 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}-postgres-pvc
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.postgres.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.postgres.persistence.size | quote }}
{{- if .Values.postgres.persistence.storageClass }}
{{- if (eq "-" .Values.postgres.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.postgres.persistence.storageClass }}"
{{- end }}
{{- end }}

View File

@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-postgres
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
name: {{ template "shipa.fullname" . }}-postgres
ports:
- port: 5432
targetPort: 5432
protocol: TCP
name: postgres

View File

@ -0,0 +1,143 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-api-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
shipa.conf: |-
shipaVersion: {{ .Chart.Version }}
tls-listen: "0.0.0.0:{{ .Values.shipaApi.securePort }}"
listen: "0.0.0.0:{{ .Values.shipaApi.port }}"
host: https://SHIPA_PUBLIC_IP:{{ .Values.shipaApi.securePort }}
use-tls: true
shipaCloud:
enabled: {{ .Values.shipaCloud.enabled }}
tls:
server-cert: /certs/api-server.crt
server-key: /certs/api-server.key
database:
{{- if not .Values.tags.defaultDB }}
url: {{ .Values.externalMongodb.url}}
tls: {{ .Values.externalMongodb.tls.enable }}
{{ else }}
url: {{ .Release.Name }}-mongodb-replicaset:27017
tls: false
{{- end }}
name: shipa
username: $DB_USERNAME
password: $DB_PASSWORD
license: {{ .Values.license }}
organization:
id: SHIPA_ORGANIZATION_ID
dashboard:
enabled: $DASHBOARD_ENABLED
image: $DASHBOARD_IMAGE
envs:
SHIPA_ADMIN_USER: {{ .Values.auth.adminUser | quote }}
SHIPA_CLOUD: {{ .Values.shipaCloud.enabled | quote }}
SHIPA_TARGETS: {{ join "," .Values.shipaApi.cnames }}
SHIPA_PAY_API_HOST: {{ .Values.shipaCloud.shipaPayApi.host }}
SHIPA_PAY_API_TOKEN: {{ .Values.shipaCloud.shipaPayApi.token }}
GOOGLE_RECAPTCHA_SITEKEY: {{ .Values.shipaCloud.googleRecaptcha.sitekey }}
GOOGLE_RECAPTCHA_SECRET: {{ .Values.shipaCloud.googleRecaptcha.secret }}
SMARTLOOK_PROJECT_KEY: {{ .Values.shipaCloud.smartlook.projectKey }}
LAUNCH_DARKLY_SDK_KEY: {{ .Values.shipaCloud.launchDarkly.sdkKey }}
auth:
admin-email: {{ .Values.auth.adminUser | quote }}
dummy-domain: {{ .Values.auth.dummyDomain | quote }}
token-expire-days: 2
hash-cost: 4
user-registration: true
user-activation:
cert: LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF6TXIwd3hETklDcm9JN3VEVkdoTgpFZytVbTdkQzk3NVZpM1l1NnJHUUdlc3ZwZTY5T2NhT0VxZHFML0NNWGVRMW1oTVFtUnplQnlxWEJ1Q2xOemphCjlEbjV2WTBlVnNIZUhuVTJ4bkkyV1dSR3JjUE1mRGJuRzlDSnNZQmdHd3A2eDcrYVR2RXZCRFBtS3YrcjdOcysKUXhhNzBFZEk4NTZLMWQyTTQ1U3RuZW1hcm51cjdOTDdGb2VsS1FWNGREd1hxU2EvVW1tdHdOOGNSTENUQ0N4NQpObkVya2UrTWo1RFFqTW5TUlRHbjFxOE91azlOUXRxNDlrbFMwMUhIQTJBWnR6ZExteTMrTktXRVZta3Z0cGgxClJseHBtZVQ5SERNbHI5aFI3U3BidnRHeVZVUG1pbXVYWFA4cXdOcHZab01Ka3hWRm4zbWNRVHRMbk8xa0Jjb1cKZVFJREFRQUIKLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==
provisioner: kubernetes
metrics:
host: {{ template "shipa.fullname" . }}-metrics
password: $METRICS_PASSWORD
# section contains configuration of Prometheus Metrics Exporter
prometheus-metrics-exporter:
image: shipasoftware/prometheus-metrics-exporter:v0.0.3
docker:
cluster:
storage: mongodb
mongo-database: cluster
collection: docker
registry-scheme: https
repository-namespace: shipa
router: traefik
deploy-cmd: /var/lib/shipa/deploy
run-cmd:
bin: /var/lib/shipa/start
port: "8888"
tls:
root-path: /certs
auto-scale:
enabled: true
run-interval: $DOCKER_AUTOSCALE_RUN_INTERVAL
routers:
traefik:
type: traefik
domain: shipa.cloud
kv:
endpoint: {{ template "shipa.fullname" . }}-etcd:2379
username: root
password: $ETCD_PASSWORD
ca: /certs/ca.pem
client-key: /certs/etcd-client.key
client-cert: /certs/etcd-client.crt
istio:
type: istio
queue:
mongo-database: queuedb
quota:
units-per-app: 4
apps-per-user: 8
log:
disable-syslog: true
use-stderr: true
clair:
server: http://{{ template "shipa.fullname" . }}-clair:6060
disabled: false
kubernetes:
      # pod name is used by leader election as an identifier for the current shipa-api instance
pod-name: $POD_NAME
pod-namespace: $POD_NAMESPACE
core-services-address: SHIPA_PUBLIC_IP
etcd-port: {{ .Values.shipaApi.etcdPort }}
use-pool-namespaces: true
remote-cluster-ingress:
http-port: 80
https-port: 443
protected-port: 31567
service-type: LoadBalancer
cluster-update:
        # default value that specifies whether cluster-update operations are allowed to restart ingress controllers
ingress-restart-is-allowed: {{ .Values.shipaApi.allowRestartIngressControllers }}
app-auto-discovery:
enabled: {{ .Values.shipaApi.appAutoDiscoveryEnabled }}
debug: {{ .Values.shipaApi.debug }}
node-traefik:
image: {{ .Values.shipaNodeTraefik.image }}
user: {{ .Values.shipaNodeTraefik.user }}
password: $NODE_TRAEFIK_PASSWORD
certificates:
root: /certs/
ca: ca.pem
ca-key: ca-key.pem
client-ca: client-ca.crt
client-ca-key: client-ca.key
shipa-controller:
image: {{ .Values.shipaController.image }}
busybody:
image: {{ .Values.busybody.image }}
socket: /var/run/docker.sock
signatures: single # multiple/single
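
The database block above switches between the bundled replica set and an external MongoDB based on `tags.defaultDB`. A minimal sketch of pointing the API at an external instance; the host, credentials, and TLS setting are placeholders:

```bash
# Disable the bundled mongodb-replicaset dependency and use an external
# database; the username and password feed the DB_USERNAME / DB_PASSWORD
# environment variables consumed by the API deployment.
helm upgrade --install shipa . --namespace shipa-system \
  --set tags.defaultDB=false \
  --set externalMongodb.url=mongodb.example.com:27017 \
  --set externalMongodb.tls.enable=true \
  --set externalMongodb.auth.username=shipa \
  --set externalMongodb.auth.password='change-me'
```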

View File

@ -0,0 +1,206 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "shipa.fullname" . }}-api
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
{{- if .Values.shipaApi.allowMigrationDowntime }}
strategy:
type: Recreate
{{- end }}
selector:
matchLabels:
{{- include "shipa.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "shipa.selectorLabels" . | nindent 8 }}
annotations:
        timestamp: "{{ now | date "20060102150405" }}"
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
{{- if .Values.rbac.enabled }}
serviceAccountName: {{ template "shipa.fullname" . }}
{{- else }}
serviceAccountName: default
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: bootstrap
image: {{ .Values.cli.image }}
command:
- /scripts/bootstrap.sh
imagePullPolicy: {{ .Values.cli.pullPolicy }}
volumeMounts:
- name: scripts
mountPath: /scripts
- name: shipa-conf
mountPath: /etc/shipa-default/
- name: config-dir
mountPath: /etc/shipa/
env:
- name: NGINX_SERVICE
value: {{ template "shipa.fullname" . }}-ingress-nginx
- name: ETCD_SERVICE
value: {{ template "shipa.fullname" . }}-etcd
- name: SHIPA_PORT
value: {{ .Values.shipaApi.port | quote }}
- name: SHIPA_API_CNAMES
value: {{ join "\",\"" .Values.shipaApi.cnames | quote }}
- name: SHIPA_ORGANIZATION_ID
valueFrom:
configMapKeyRef:
name: {{ template "shipa.fullname" . }}-defaults-configmap
key: shipa-org-id
- name: SHIPA_MAIN_TARGET
value: {{ template "shipa.GetMainTarget" . }}
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: init
image: {{ .Values.shipaApi.image }}
command:
- /scripts/create-root-user.sh
imagePullPolicy: {{ .Values.shipaApi.pullPolicy }}
volumeMounts:
- name: scripts
mountPath: /scripts
- name: config-dir
mountPath: /etc/shipa/
- name: certificates
mountPath: /certs/
env:
- name: USERNAME
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: username
- name: PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: password
{{- if not .Values.tags.defaultDB }}
{{- if and ( .Values.externalMongodb.auth.username ) ( .Values.externalMongodb.auth.password ) }}
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: password
{{- end }}
{{- end }}
containers:
- name: shipa
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: {{ .Values.shipaApi.image }}
imagePullPolicy: {{ .Values.shipaApi.pullPolicy }}
env:
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: metrics-password
- name: ETCD_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: etcd-password
- name: NODE_TRAEFIK_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: node-traefik-password
- name: DASHBOARD_IMAGE
value: {{ .Values.dashboard.image }}
- name: DASHBOARD_ENABLED
value: "{{ .Values.dashboard.enabled }}"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if not .Values.tags.defaultDB }}
{{- if and ( .Values.externalMongodb.auth.username ) ( .Values.externalMongodb.auth.password ) }}
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: password
{{- end }}
{{- end }}
ports:
- name: shipa
containerPort: {{ .Values.shipaApi.port }}
protocol: TCP
- name: shipa-secure
containerPort: {{ .Values.shipaApi.securePort }}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: {{ .Values.shipaApi.port }}
periodSeconds: 2
failureThreshold: 4
startupProbe:
httpGet:
path: /
port: {{ .Values.shipaApi.port }}
failureThreshold: 90
periodSeconds: 2
readinessProbe:
httpGet:
path: /
port: {{ .Values.shipaApi.port }}
periodSeconds: 3
initialDelaySeconds: 5
failureThreshold: 50
successThreshold: 1
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: config-dir
mountPath: /etc/shipa/
- name: certificates
mountPath: /certs/
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: config-dir
emptyDir: {}
- name: shipa-conf
configMap:
name: {{ template "shipa.fullname" . }}-api-config
items:
- key: shipa.conf
path: shipa.conf
- name: certificates
secret:
secretName: shipa-certificates
- name: scripts
configMap:
defaultMode: 0755
name: {{ template "shipa.fullname" . }}-api-init-config
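
`shipaApi.allowMigrationDowntime` switches the API deployment to the Recreate strategy, so the old pod is stopped before the new one starts and runs its database migrations; leaving it unset keeps the default rolling update. A brief sketch of enabling it for an upgrade (the flag is the only value changed):

```bash
# Accept a short API outage in exchange for running migrations from a
# single pod at a time.
helm upgrade shipa . --namespace shipa-system \
  --reuse-values \
  --set shipaApi.allowMigrationDowntime=true
```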

View File

@ -0,0 +1,43 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-api-init-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
create-root-user.sh: |
{{ .Files.Get "scripts/create-root-user.sh" | indent 4 }}
init-job.sh: |
{{ .Files.Get "scripts/init-job.sh" | indent 4 }}
bootstrap.sh: |
{{ .Files.Get "scripts/bootstrap.sh" | indent 4 }}
csr-docker-cluster.json: |
{{ .Files.Get "scripts/csr-docker-cluster.json" | indent 4 }}
csr-etcd.json: |
{{ .Files.Get "scripts/csr-etcd.json" | indent 4 }}
csr-etcd-client.json: |
{{ .Files.Get "scripts/csr-etcd-client.json" | indent 4 }}
csr-shipa-ca.json: |
{{ .Files.Get "scripts/csr-shipa-ca.json" | indent 4 }}
csr-client-ca.json: |
{{ .Files.Get "scripts/csr-client-ca.json" | indent 4 }}
csr-api-config.json: |
{{ .Files.Get "scripts/csr-api-config.json" | indent 4 }}
csr-api-server.json: |
{{ .Files.Get "scripts/csr-api-server.json" | indent 4 }}
default-framework-template.yaml: |
shipaFramework: shipa-framework
resources:
general:
setup:
force: false
default: true
public: true
provisioner: kubernetes
kubeNamespace: {{ .Release.Namespace }}
security:
disableScan: true
scanPlatformLayers: true
access:
append:
- shipa-admin-team
- shipa-system-team

View File

@ -0,0 +1,99 @@
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ template "shipa.fullname" . }}-init-job-{{ .Release.Revision }}"
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "post-install"
sidecar.istio.io/inject: "false"
spec:
backoffLimit: 5
template:
metadata:
name: "{{ template "shipa.fullname" . }}-init-job-{{ .Release.Revision }}"
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
terminationGracePeriodSeconds: 3
{{- if .Values.rbac.enabled }}
serviceAccountName: {{ template "shipa.fullname" . }}
{{- else }}
serviceAccountName: default
{{- end }}
restartPolicy: Never
containers:
- name: migrations
image: {{ .Values.cli.image }}
command:
- /scripts/init-job.sh
imagePullPolicy: {{ .Values.cli.pullPolicy }}
env:
- name: SHIPA_ENDPOINT
value: "{{ template "shipa.fullname" . }}-api"
- name: SHIPA_ENDPOINT_PORT
value: "{{ .Values.shipaApi.port }}"
- name: USERNAME
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: username
- name: PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: password
- name: METRICS_SERVICE
value: {{ template "shipa.fullname" . }}-metrics
- name: INGRESS_SERVICE_TYPE
value: {{ default ( "LoadBalancer" ) .Values.shipaCluster.serviceType | quote }}
- name: INGRESS_IP
value: {{ default ( "" ) .Values.shipaCluster.ip | quote }}
- name: INGRESS_DEBUG
value: {{ default ( "false" ) .Values.shipaCluster.debug | quote }}
- name: ISTIO_INGRESS_SERVICE_TYPE
value: {{ default ( "LoadBalancer" ) .Values.shipaCluster.istioServiceType | quote }}
- name: ISTIO_INGRESS_IP
value: {{ default ( "" ) .Values.shipaCluster.istioIp | quote }}
- name: DASHBOARD_IMAGE
value: {{ .Values.dashboard.image }}
- name: DASHBOARD_ENABLED
value: "{{ .Values.dashboard.enabled }}"
- name: SHIPA_CLOUD
value: {{ .Values.shipaCloud.enabled | quote }}
- name: SHIPA_PAY_API_HOST
value: {{ .Values.shipaCloud.shipaPayApi.host | quote }}
- name: SHIPA_PAY_API_TOKEN
value: {{ .Values.shipaCloud.shipaPayApi.token | quote }}
- name: GOOGLE_RECAPTCHA_SITEKEY
value: {{ .Values.shipaCloud.googleRecaptcha.sitekey | quote }}
- name: GOOGLE_RECAPTCHA_SECRET
value: {{ .Values.shipaCloud.googleRecaptcha.secret | quote }}
- name: SMARTLOOK_PROJECT_KEY
value: {{ .Values.shipaCloud.smartlook.projectKey | quote }}
- name: LAUNCH_DARKLY_SDK_KEY
value: {{ .Values.shipaCloud.launchDarkly.sdkKey | quote }}
- name: SHIPA_TARGETS
value: {{ join "," .Values.shipaApi.cnames | quote }}
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: metrics-password
volumeMounts:
- name: scripts
mountPath: /scripts
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: scripts
configMap:
defaultMode: 0755
name: {{ template "shipa.fullname" . }}-api-init-config
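Because this is a `post-install` hook, a failing bootstrap surfaces as a stuck or failed `helm install` rather than an obvious pod error. A minimal sketch for inspecting the hook, assuming a release named `shipa` in the `shipa-system` namespace (so the Job created by revision 1 is `shipa-init-job-1`):

```bash
# Check the hook Job status and read its logs (names assume release "shipa", revision 1)
kubectl -n shipa-system get job shipa-init-job-1
kubectl -n shipa-system logs job/shipa-init-job-1 --tail=100
```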

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ template "shipa.fullname" . }}-api-init-secret
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
username: {{ required "Admin username is required! Use --set=auth.adminUser=..." .Values.auth.adminUser | b64enc }}
password: {{ required "Admin password is required! Use --set=auth.adminPassword=..." .Values.auth.adminPassword | b64enc }}
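The two `required` calls above make the initial admin credentials mandatory, so every install has to pass them explicitly. A minimal sketch, assuming the chart is installed from its local directory as a release named `shipa` (the username and password values are placeholders):

```bash
helm install shipa . \
  --namespace shipa-system --create-namespace \
  --set auth.adminUser=admin@example.com \
  --set auth.adminPassword='choose-a-strong-password'
```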

View File

@ -0,0 +1,84 @@
{{- if .Values.rbac.enabled }}
kind: ServiceAccount
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
- services
- extensions
- rbac.authorization.k8s.io
- apiextensions.k8s.io
- networking.k8s.io
- core
- apps
- shipa.io
- config.istio.io
- networking.istio.io
- rbac.istio.io
- authentication.istio.io
- cert-manager.io
- admissionregistration.k8s.io
- coordination.k8s.io
resources: ["*"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["*"]
verbs:
- list
- get
- watch
- nonResourceURLs: ["*"]
verbs:
- list
- get
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "shipa.fullname" . }}-role
labels: {{- include "shipa.labels" . | nindent 4 }}
rules:
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "shipa.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}
namespace: {{ .Release.Namespace }}
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "shipa.fullname" . }}-role
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "shipa.fullname" . }}-api
labels:
{{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
{{- include "shipa.selectorLabels" . | nindent 4 }}
ports:
- port: {{ .Values.shipaApi.port }}
targetPort: {{ .Values.shipaApi.port }}
protocol: TCP
name: shipa
- port: {{ .Values.shipaApi.securePort }}
targetPort: {{ .Values.shipaApi.securePort }}
protocol: TCP
name: shipa-secure

View File

@ -0,0 +1,21 @@
apiVersion: v1
kind: Secret
metadata:
name: shipa-certificates
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
ca.pem: ""
ca-key.pem: ""
cert.pem: ""
key.pem: ""
etcd-server.crt: ""
etcd-server.key: ""
etcd-client.crt: ""
etcd-client.key: ""
api-server.crt: ""
api-server.key: ""
client-ca.crt: ""
client-ca.key: ""

View File

@ -0,0 +1,14 @@
{{- if not .Values.tags.defaultDB }}
{{- if and ( .Values.externalMongodb.auth.username ) ( .Values.externalMongodb.auth.password ) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "shipa.fullname" . }}-db-auth-secret
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
data:
username: {{ required "Database username is required! Use --set=externalMongodb.auth.username=..." .Values.externalMongodb.auth.username | b64enc }}
password: {{ required "Database password is required! Use --set=externalMongodb.auth.password=..." .Values.externalMongodb.auth.password | b64enc }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-defaults-configmap
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
shipa-org-id: {{ uuidv4 | replace "-" "" | quote }}

View File

@ -0,0 +1,36 @@
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ template "shipa.fullname" . }}-http-ingress
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
kubernetes.io/ingress.class: "shipa-nginx-ingress"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ template "shipa.fullname" . }}-api
port:
number: {{ .Values.shipaApi.port }}
{{ else }}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: {{ template "shipa.fullname" . }}-http-ingress
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
kubernetes.io/ingress.class: "shipa-nginx-ingress"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: {{ template "shipa.fullname" . }}-api
servicePort: {{ .Values.shipaApi.port }}
{{ end -}}
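The `Capabilities.APIVersions.Has` check above selects between the `networking.k8s.io/v1` Ingress schema and the legacy `v1beta1` one. To see which branch a live cluster will take, its API discovery output is usually enough; a minimal sketch:

```bash
# "networking.k8s.io/v1" in this list means the first branch of the template is rendered;
# clusters older than roughly 1.19 only serve v1beta1 and fall through to the legacy Ingress.
kubectl api-versions | grep networking.k8s.io
```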

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ template "shipa.fullname" . }}-secret
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
metrics-password: {{ default (randAlphaNum 15) .Values.metrics.password | b64enc | quote }}
etcd-password: {{ default (randAlphaNum 15) .Values.etcd.password | b64enc | quote }}
postgres-password: {{ randAlphaNum 15 | b64enc | quote }}
node-traefik-password: {{ default (randAlphaNum 15) .Values.shipaNodeTraefik.password | b64enc | quote }}
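Every entry above falls back to a random value generated at render time, so generated credentials usually have to be read back from the cluster rather than from a values file. A minimal sketch, assuming a release named `shipa` in `shipa-system` (so the Secret is `shipa-secret`):

```bash
# Decode the generated metrics password (secret name assumes release "shipa")
kubectl -n shipa-system get secret shipa-secret \
  -o jsonpath='{.data.metrics-password}' | base64 -d; echo
```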

View File

@ -0,0 +1,50 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded
sidecar.istio.io/inject: "false"
spec:
template:
metadata:
name: "{{ template "shipa.fullname" . }}-uninstall-job-{{ .Release.Revision }}"
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
{{- if .Values.rbac.enabled }}
serviceAccountName: {{ template "shipa.fullname" . }}-uninstall
{{- else }}
serviceAccountName: default
{{- end }}
restartPolicy: Never
containers:
- name: cleanup
image: {{ .Values.cli.image }}
command: ["/bin/sh", "-c"]
args:
- /usr/local/bin/kubectl delete ds --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true;
/usr/local/bin/kubectl delete deployment --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true;
/usr/local/bin/kubectl delete pod --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true;
/usr/local/bin/kubectl delete services --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true;
/usr/local/bin/kubectl delete sa --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true;
/usr/local/bin/kubectl delete secrets --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true;
/usr/local/bin/kubectl delete crd apps.shipa.io --ignore-not-found=true;
/usr/local/bin/kubectl delete configmap {{ template "shipa.fullname" . }}-leader-nginx --ignore-not-found=true;
/usr/local/bin/kubectl delete namespaces --selector=$SELECTOR --ignore-not-found=true;
/usr/local/bin/kubectl delete clusterrolebindings --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD;
/usr/local/bin/kubectl delete clusterrole --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD;
/usr/local/bin/kubectl delete ingress --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD;
/usr/local/bin/kubectl delete endpoints --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD;
/usr/local/bin/kubectl delete netpol --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD;
imagePullPolicy: IfNotPresent
env:
- name: SELECTOR
value: "shipa.io/is-shipa=true"
- name: NAMESPACE_MOD
value: "-A"

View File

@ -0,0 +1,52 @@
{{- if .Values.rbac.enabled }}
kind: ServiceAccount
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": "1"
"helm.sh/hook": post-delete
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": "1"
"helm.sh/hook": post-delete
rules:
- apiGroups:
- ""
- services
- extensions
- rbac.authorization.k8s.io
- networking.k8s.io
- apiextensions.k8s.io
- core
- apps
- shipa.io
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": "1"
"helm.sh/hook": post-delete
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "shipa.fullname" . }}-uninstall
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}-uninstall
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,204 @@
# Default values for shipa.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
auth:
dummyDomain: "@shipa.io"
shipaApi:
port: 8080
securePort: 8081
etcdPort: 2379
image: shipasoftware/api:v1.4.0
pullPolicy: Always
debug: false
cnames: []
allowRestartIngressControllers: true
allowMigrationDowntime: true
appAutoDiscoveryEnabled: true
license: ""
shipaCluster:
# use debug logs in traefik ingress controller
debug: false
# kubernetes service type for traefik ingress controller (LoadBalancer/ClusterIP)
serviceType: LoadBalancer
# override traefik ingress controller ip address
# ip: 10.100.10.11
# use debug logs in istio ingress controller
istioDebug: false
# kubernetes service type for istio ingress controller (LoadBalancer/ClusterIP)
istioServiceType: LoadBalancer
# override istio ingress controller ip address
# istioIp: 10.100.10.11
  # name of an existing image pull secret (e.g. for Docker Hub) used to pull Shipa images as an authenticated user.
  # The secret itself should be created in the cluster outside of the Shipa helm chart (see the commented example below).
  # imagePullSecrets: ""
service:
nginx:
enabled: true
# kubernetes service type for nginx ingress (LoadBalancer/ClusterIP)
serviceType: LoadBalancer
# the following *NodePort values will be used only if serviceType is "NodePort"
# apiNodePort specifies "nodePort" for shipa-api over http
#apiNodePort: 32200
# secureNodePort specifies "nodePort" for shipa-api over https
#secureApiNodePort: 32201
# etcdNodePort specifies "nodePort" for etcd
#etcdNodePort: 32202
# override nginx ingress controller ip address if its service type is ClusterIP
#clusterIP: 10.100.10.10
# override nginx ingress controller ip address if its service type is LoadBalancer
#loadBalancerIP: 35.202.88.71
# If set, defines nginx configuration as described in the manual:
# https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap
# there are default values, take a look at templates/nginx-configmap.yaml
#config:
# proxy-body-size: "128M"
dashboard:
enabled: true
image: shipasoftware/dashboard:v1.4.0
etcd:
debug: false
persistence:
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
## storageClass: ""
accessMode: "ReadWriteOnce"
size: 10Gi
postgres:
persistence:
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
## storageClass: ""
accessMode: "ReadWriteOnce"
size: 10Gi
cli:
image: shipasoftware/cli:v1.4.0
pullPolicy: Always
metrics:
image: shipasoftware/metrics:v0.0.7
pullPolicy: Always
# Extra configuration to add to prometheus.yaml
# extraPrometheusConfiguration: |
# remote_read:
# - url: http://localhost:9268/read
# remote_write:
# - url: http://localhost:9268/write
extraPrometheusConfiguration:
#password: hardcoded
prometheusArgs: "--storage.tsdb.retention.time=1d"
busybody:
image: shipasoftware/bb:v0.0.10
shipaController:
image: shipasoftware/image-controller:v0.0.16
shipaNodeTraefik:
user: admin
# --------------------------------------------------------------------------
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
rbac:
enabled: true
# Connect your own instance of mongodb
externalMongodb:
# url must follow Standard Connection String Format as described here: https://docs.mongodb.com/manual/reference/connection-string/#standard-connection-string-format
# For a sharded cluster it should be a comma separated list of hosts:
# e.g. "mongos0.example.com:27017,mongos1.example.com:27017,mongos2.example.com:27017"
  # Due to limitations in the chart's dependencies, URLs in the 'DNS Seed List Connection Format' (mongodb+srv://) are currently not supported.
url: < database url >
auth:
username: < username >
password: < password >
  # Enable/Disable TLS when connecting to the external DB instance.
tls:
enable: true
# tags are standard way to handle chart dependencies.
tags:
  # Set defaultDB to 'false' when using an external DB so the bundled MongoDB is not installed.
  # This also prevents the chart from creating Persistent Volumes for it (see the commented example below).
defaultDB: true
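# A hedged example of pointing the chart at an external MongoDB instead of the bundled
# replica set (host name and credentials are purely illustrative):
#   helm install shipa . \
#     --set tags.defaultDB=false \
#     --set externalMongodb.url=mongodb0.example.com:27017 \
#     --set externalMongodb.auth.username=shipa \
#     --set externalMongodb.auth.password='a-strong-password' \
#     --set externalMongodb.tls.enable=true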
# Default DB config
mongodb-replicaset:
replicaSetName: rs0
replicas: 1
port: 27017
nodeSelector:
kubernetes.io/os: linux
auth:
enabled: false
installImage:
name: k8s.gcr.io/mongodb-install
tag: 0.6
pullPolicy: IfNotPresent
image:
name: mongo
tag: latest
pullPolicy: IfNotPresent
persistentVolume:
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
## storageClass: ""
enabled: true
size: 10Gi
tls:
enabled: false
configmap:
shipaCloud:
enabled: false
shipaPayApi:
host: ""
token: ""
googleRecaptcha:
sitekey: ""
secret: ""
smartlook:
projectKey: ""
launchDarkly:
sdkKey: ""

View File

@ -1857,6 +1857,40 @@ entries:
urls:
- assets/portworx/portworx-2.8.0.tgz
version: 2.8.0
shipa:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Shipa
catalog.cattle.io/namespace: shipa-system
catalog.cattle.io/release-name: shipa
apiVersion: v2
appVersion: 1.4.0
created: "2021-11-02T07:22:28.305068-10:00"
dependencies:
- name: mongodb-replicaset
repository: file://./charts/mongodb-replicaset
tags:
- defaultDB
description: A Helm chart for Kubernetes to install the Shipa Control Plane
digest: f47c64376ac5972b4d324beb0ef3b96f10e06b00abe5f322e98bfafe9d64cf2c
home: https://www.shipa.io
icon: https://cdn.opsmatters.com/sites/default/files/logos/shipa-logo.png
keywords:
- shipa
- deployment
- aac
kubeVersion: '>= 1.16.0-0'
maintainers:
- email: rlachhman@shipa.io
name: ravi
name: shipa
sources:
- https://github.com/shipa-corp
- https://github.com/shipa-corp/helm-chart
type: application
urls:
- assets/shipa/shipa-1.4.0.tgz
version: 1.4.0
sysdig:
- annotations:
catalog.cattle.io/certified: partner