Second Commit after Make Charts

pull/366/head
Ravi Lachhman 2022-03-10 11:17:22 -05:00
parent 84673cb594
commit b99bc5906f
76 changed files with 4945 additions and 0 deletions

Binary file not shown.

View File

@ -0,0 +1,6 @@
dependencies:
- name: mongodb-replicaset
repository: https://charts.helm.sh/stable
version: 3.11.3
digest: sha256:d567aabf719102e5090b7d7cc0b8d7fd32e8959e51ec4977b6534147531649b8
generated: "2022-03-08T21:57:32.877642724Z"

View File

@ -0,0 +1,29 @@
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Shipa
catalog.cattle.io/namespace: shipa-system
catalog.cattle.io/release-name: shipa
apiVersion: v2
appVersion: 1.6.3
dependencies:
- name: mongodb-replicaset
repository: file://./charts/mongodb-replicaset
tags:
- defaultDB
description: A Helm chart for Kubernetes to install the Shipa Control Plane
home: https://www.shipa.io
icon: https://www.shipa.io/wp-content/uploads/2020/11/Shipa-banner-768x307.png
keywords:
- shipa
- deployment
- aac
kubeVersion: '>= 1.16.0-0'
maintainers:
- email: rlachhman@shipa.io
name: ravi
name: shipa
sources:
- https://github.com/shipa-corp
- https://github.com/shipa-corp/helm-chart
type: application
version: 1.6.300

View File

@ -0,0 +1,25 @@
Copyright (c) 2020, shipa authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the Globo.com nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -0,0 +1,121 @@
**Note:** The master branch is the main development branch. Please use releases instead of the master branch in order to get stable versions.
# Documentation
Documentation for Shipa can be found at https://learn.shipa.io
# Installation Requirements
1. Kubernetes 1.18 - 1.22. See the [documentation](https://learn.shipa.io/docs/installation-requirements#kubernetes-clusters) for current details
2. Helm v3
# Defaults
We create a LoadBalancer service to expose Shipa to the internet:
1. 8080 -> Shipa API over HTTP
1. 8081 -> Shipa API over HTTPS
By default we use a dynamic public IP assigned by the cloud provider, but there is a parameter to use a static IP (if you have one):
```bash
--set shipaCluster.ingress.ip=35.192.15.168
```
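Once the chart is installed, you can check the external IP assigned by the cloud provider (or the static IP above) with a generic `kubectl` query; the exact service name depends on your install:
```bash
# List services in the Shipa namespace and look at the EXTERNAL-IP column
kubectl get svc -n shipa-system
```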
# Installation
Users can install Shipa on any existing Kubernetes cluster, and Shipa leverages Helm charts for the install.
> ⚠️ NOTE: Installing or upgrading Shipa may require downtime in order to perform database migrations.
Below are the steps required to have Shipa installed in your existing Kubernetes cluster:
Create a namespace where the Shipa services should be installed
```bash
NAMESPACE=shipa-system
kubectl create namespace $NAMESPACE
```
Create a values.override.yaml file with the admin user and password that will be used for Shipa
```bash
cat > values.override.yaml << EOF
auth:
adminUser: <your email here>
adminPassword: <your admin password>
EOF
```
Add Shipa helm repo
```bash
helm repo add shipa-charts https://shipa-charts.storage.googleapis.com
```
Install Shipa
```bash
helm install shipa shipa-charts/shipa -n $NAMESPACE --timeout=1000s -f values.override.yaml
```
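You can then watch the Shipa components come up with a standard `kubectl` check, for example:
```bash
# Wait until all pods in the namespace report Running or Completed
kubectl get pods -n $NAMESPACE --watch
```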
## Upgrading the Shipa Helm chart
```bash
helm upgrade shipa . --timeout=1000s --namespace=$NAMESPACE -f values.override.yaml
```
## Upgrading the Shipa Helm chart if you have a Pro license
There are two general ways to run `helm upgrade` if you have a Pro license:
* Pass a license file to helm upgrade
```bash
helm upgrade shipa . --timeout=1000s --namespace=$NAMESPACE -f values.override.yaml -f license.yaml
```
* Merge the license key from the license file into values.override.yaml, then run `helm upgrade` as usual, as shown below
```bash
cat license.yaml | grep "license:" >> values.override.yaml
```
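After the license key has been appended to values.override.yaml, the upgrade is run exactly as above, for example:
```bash
helm upgrade shipa . --timeout=1000s --namespace=$NAMESPACE -f values.override.yaml
```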
# CI/CD
Packaging and signing Helm charts is automated using GitHub Actions.
Charts are uploaded to one of several buckets, depending on the trigger:
1. `shipa-charts-dev`: `push` to `master`, or `push` to a PR opened against `master`
2. `shipa-charts-cloud`: `tag` containing `cloud`
3. `shipa-charts`: `tag` not containing `cloud`
The chart name is composed of:
`{last_tag}-{commit_hash}`
For on-prem releases, if the tag is not a pre-release (i.e. semantic versioning without an RC suffix, e.g. 1.3.0 rather than 1.3.0-rc1), the chart name is just `{last_tag}`; otherwise Helm would treat the chart as a development version.
### Usage
```
# only first time
helm repo add shipa-dev https://shipa-charts-dev.storage.googleapis.com
helm repo add shipa-cloud https://shipa-charts-cloud.storage.googleapis.com
helm repo add shipa-onprem https://shipa-charts.storage.googleapis.com
# refresh available charts
helm repo update
# check available versions
helm search repo shipa --versions
# check available versions with development versions
helm search repo shipa --versions --devel
# check per repo
helm search repo shipa-dev --versions --devel
helm search repo shipa-cloud --versions --devel
helm search repo shipa-onprem --versions --devel
# helm install
helm install shipa shipa-dev/shipa --version 1.x.x -n shipa-system --timeout=1000s -f values.override.yaml
```
# Shipa client
If you are looking to operate Shipa from your local machine, binaries of the Shipa client are available: https://learn.shipa.io/docs/downloading-the-shipa-client
# Collaboration/Contributing
We welcome all feedback and pull requests. If you have any questions, feel free to reach us at info@shipa.io.

View File

@ -0,0 +1,39 @@
# Shipa
[Shipa](http://www.shipa.io/) is an Application-as-Code (AaC) provider designed to give developers a cleaner experience while making guardrails easy to create. The "platform engineering dilemma" is how to allow innovation while keeping control. Shipa is application focused: developers who are not experienced in Kubernetes can run through critical tasks such as deploying, managing, and iterating on their applications without detailed Kubernetes knowledge, while operators and admins can enforce rules and conventions without building multiple abstraction layers.
## Install Shipa - Helm Chart
The [Installation Requirements](https://learn.shipa.io/docs/installation-requirements) specify up-to-date cluster and ingress requirements. Installing the chart is straightforward.
Initially, you will need to set an admin user and admin password/secret to first access Shipa.
```
helm repo add shipa-charts https://shipa-charts.storage.googleapis.com
helm repo update
helm upgrade --install shipa shipa-charts/shipa \
--set auth.adminUser=admin@acme.com --set auth.adminPassword=admin1234 \
--namespace shipa-system --create-namespace --timeout=1000s --wait
```
## Install Shipa - ClusterIP
By default, Shipa installs Traefik as the load balancer.
If this creates a conflict or there is a cluster limitation, you can also leverage ClusterIP for routing, which is the
second set of optional prompts in the Rancher UI.
[Installing Shipa with ClusterIP on K3](https://shipa.io/2021/10/k3d-and-shipa-deploymnet/)
```
helm install shipa shipa-charts/shipa -n shipa-system --create-namespace \
--timeout=15m \
--set=metrics.image=gcr.io/shipa-1000/metrics:30m \
--set=auth.adminUser=admin@acme.com \
--set=auth.adminPassword=admin1234 \
--set=shipaCluster.serviceType=ClusterIP \
--set=shipaCluster.ip=10.43.10.20 \
--set=service.nginx.serviceType=ClusterIP \
--set=service.nginx.clusterIP=10.43.10.10
```

View File

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
install

View File

@ -0,0 +1,16 @@
apiVersion: v1
appVersion: "3.6"
description: NoSQL document-oriented database that stores JSON-like documents with
dynamic schemas, simplifying the integration of data in content-driven applications.
home: https://github.com/mongodb/mongo
icon: https://webassets.mongodb.com/_com_assets/cms/mongodb-logo-rgb-j6w271g1xn.jpg
maintainers:
- email: unguiculus@gmail.com
name: unguiculus
- email: ssheehy@firescope.com
name: steven-sheehy
name: mongodb-replicaset
sources:
- https://github.com/mongodb/mongo
- https://github.com/percona/mongodb_exporter
version: 3.11.3

View File

@ -0,0 +1,6 @@
approvers:
- unguiculus
- steven-sheehy
reviewers:
- unguiculus
- steven-sheehy

View File

@ -0,0 +1,434 @@
# MongoDB Helm Chart
## Prerequisites Details
* Kubernetes 1.9+
* Kubernetes beta APIs enabled only if `podDisruptionBudget` is enabled
* PV support on the underlying infrastructure
## StatefulSet Details
* https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
## StatefulSet Caveats
* https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/#limitations
## Chart Details
This chart implements a dynamically scalable [MongoDB replica set](https://docs.mongodb.com/manual/tutorial/deploy-replica-set/)
using Kubernetes StatefulSets and Init Containers.
## Installing the Chart
To install the chart with the release name `my-release`:
``` console
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install --name my-release stable/mongodb-replicaset
```
## Configuration
The following table lists the configurable parameters of the mongodb chart and their default values.
| Parameter | Description | Default |
| ----------------------------------- | ------------------------------------------------------------------------- | --------------------------------------------------- |
| `replicas` | Number of replicas in the replica set | `3` |
| `replicaSetName` | The name of the replica set | `rs0` |
| `skipInitialization`                | If `true`, skip replica set initialization during bootstrapping            | `false`                                              |
| `podDisruptionBudget` | Pod disruption budget | `{}` |
| `port` | MongoDB port | `27017` |
| `imagePullSecrets` | Image pull secrets | `[]` |
| `installImage.repository` | Image name for the install container | `unguiculus/mongodb-install` |
| `installImage.tag` | Image tag for the install container | `0.7` |
| `installImage.pullPolicy` | Image pull policy for the init container that establishes the replica set | `IfNotPresent` |
| `copyConfigImage.repository` | Image name for the copy config init container | `busybox` |
| `copyConfigImage.tag` | Image tag for the copy config init container | `1.29.3` |
| `copyConfigImage.pullPolicy` | Image pull policy for the copy config init container | `IfNotPresent` |
| `image.repository` | MongoDB image name | `mongo` |
| `image.tag` | MongoDB image tag | `3.6` |
| `image.pullPolicy` | MongoDB image pull policy | `IfNotPresent` |
| `podAnnotations` | Annotations to be added to MongoDB pods | `{}` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `999` |
| `securityContext.runAsUser` | User ID for the container | `999` |
| `securityContext.runAsNonRoot` | | `true` |
| `resources` | Pod resource requests and limits | `{}` |
| `persistentVolume.enabled` | If `true`, persistent volume claims are created | `true` |
| `persistentVolume.storageClass` | Persistent volume storage class | `` |
| `persistentVolume.accessModes` | Persistent volume access modes | `[ReadWriteOnce]` |
| `persistentVolume.size` | Persistent volume size | `10Gi` |
| `persistentVolume.annotations` | Persistent volume annotations | `{}` |
| `terminationGracePeriodSeconds` | Duration in seconds the pod needs to terminate gracefully | `30` |
| `tls.enabled` | Enable MongoDB TLS support including authentication | `false` |
| `tls.mode` | Set the SSL operation mode (disabled, allowSSL, preferSSL, requireSSL) | `requireSSL` |
| `tls.cacert` | The CA certificate used for the members | Our self signed CA certificate |
| `tls.cakey` | The CA key used for the members | Our key for the self signed CA certificate |
| `init.resources` | Pod resource requests and limits (for init containers) | `{}` |
| `init.timeout` | The amount of time in seconds to wait for bootstrap to finish | `900` |
| `metrics.enabled` | Enable Prometheus compatible metrics for pods and replicasets | `false` |
| `metrics.image.repository` | Image name for metrics exporter | `bitnami/mongodb-exporter` |
| `metrics.image.tag` | Image tag for metrics exporter | `0.9.0-debian-9-r2` |
| `metrics.image.pullPolicy` | Image pull policy for metrics exporter | `IfNotPresent` |
| `metrics.port` | Port for metrics exporter | `9216` |
| `metrics.path`                      | URL path to expose metrics                                                 | `/metrics`                                           |
| `metrics.resources` | Metrics pod resource requests and limits | `{}` |
| `metrics.securityContext.enabled` | Enable security context | `true` |
| `metrics.securityContext.fsGroup` | Group ID for the metrics container | `1001` |
| `metrics.securityContext.runAsUser` | User ID for the metrics container | `1001` |
| `metrics.socketTimeout` | Time to wait for a non-responding socket | `3s` |
| `metrics.syncTimeout` | Time an operation with this session will wait before returning an error | `1m` |
| `metrics.prometheusServiceDiscovery`| Adds annotations for Prometheus ServiceDiscovery | `true` |
| `auth.enabled` | If `true`, keyfile access control is enabled | `false` |
| `auth.key` | Key for internal authentication | `` |
| `auth.existingKeySecret` | If set, an existing secret with this name for the key is used | `` |
| `auth.adminUser` | MongoDB admin user | `` |
| `auth.adminPassword` | MongoDB admin password | `` |
| `auth.metricsUser` | MongoDB clusterMonitor user | `` |
| `auth.metricsPassword` | MongoDB clusterMonitor password | `` |
| `auth.existingMetricsSecret`        | If set, an existing secret with this name is used for the metrics user     | ``                                                   |
| `auth.existingAdminSecret`          | If set, an existing secret with this name is used for the admin user       | ``                                                   |
| `serviceAnnotations` | Annotations to be added to the service | `{}` |
| `configmap` | Content of the MongoDB config file | `` |
| `initMongodStandalone` | If set, initContainer executes script in standalone mode | `` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Node/pod affinities | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `priorityClassName` | Pod priority class name | `` |
| `livenessProbe.failureThreshold` | Liveness probe failure threshold | `3` |
| `livenessProbe.initialDelaySeconds` | Liveness probe initial delay seconds | `30` |
| `livenessProbe.periodSeconds` | Liveness probe period seconds | `10` |
| `livenessProbe.successThreshold` | Liveness probe success threshold | `1` |
| `livenessProbe.timeoutSeconds` | Liveness probe timeout seconds | `5` |
| `readinessProbe.failureThreshold` | Readiness probe failure threshold | `3` |
| `readinessProbe.initialDelaySeconds`| Readiness probe initial delay seconds | `5` |
| `readinessProbe.periodSeconds` | Readiness probe period seconds | `10` |
| `readinessProbe.successThreshold` | Readiness probe success threshold | `1` |
| `readinessProbe.timeoutSeconds` | Readiness probe timeout seconds | `1` |
| `extraVars` | Set environment variables for the main container | `{}` |
| `extraLabels` | Additional labels to add to resources | `{}` |
*MongoDB config file*
All options that depend on the chart configuration are supplied as command-line arguments to `mongod`. By default, the chart creates an empty config file. Entries may be added via the `configmap` configuration value, as in the sketch below.
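For example, a minimal sketch of adding `mongod` settings through the `configmap` value (the file name `my-values.yaml` is just an example):
```console
cat > my-values.yaml <<EOF
configmap:
  storage:
    dbPath: /data/db
  net:
    port: 27017
EOF
helm install --name my-release -f my-values.yaml stable/mongodb-replicaset
```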
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
``` console
helm install --name my-release -f values.yaml stable/mongodb-replicaset
```
> **Tip**: You can use the default [values.yaml](values.yaml)
Once all 3 nodes are running, you can run the `test.sh` script in this directory, which will insert a key into the primary and check the secondaries for output. This script requires that the `$RELEASE_NAME` environment variable be set in order to access the pods.
## Authentication
By default, this chart creates a MongoDB replica set without authentication. Authentication can be
enabled using the parameter `auth.enabled`. Once enabled, keyfile access control is set up and an
admin user with root privileges is created. User credentials and keyfile may be specified directly.
Alternatively, existing secrets may be provided. The secret for the admin user must contain the
keys `user` and `password`; the secret for the key file must contain `key.txt`. The user is created with
full `root` permissions but is restricted to the `admin` database for security purposes. It can be
used to create additional users with more specific permissions.
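As a sketch, existing secrets in the expected shape can be pre-created with `kubectl` and then referenced via `auth.existingAdminSecret` and `auth.existingKeySecret` (the secret names below are hypothetical):
```console
# Admin secret must contain the keys "user" and "password"
kubectl create secret generic my-mongodb-admin \
  --from-literal=user=admin --from-literal=password='changeme'
# Keyfile secret must contain the key "key.txt"
kubectl create secret generic my-mongodb-keyfile \
  --from-file=key.txt=./key.txt
```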
To connect to the mongo shell with authentication enabled, use a command similar to the following (substituting values as appropriate):
```shell
kubectl exec -it mongodb-replicaset-0 -- mongo mydb -u admin -p password --authenticationDatabase admin
```
## TLS support
To enable full TLS encryption set `tls.enabled` to `true`. It is recommended to create your own CA by executing:
```console
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
```
After that paste the base64 encoded (`cat ca.key | base64 -w0`) cert and key into the fields `tls.cacert` and
`tls.cakey`. Adapt the configmap for the replicaset as follows:
```yml
configmap:
storage:
dbPath: /data/db
net:
port: 27017
ssl:
mode: requireSSL
CAFile: /data/configdb/tls.crt
PEMKeyFile: /work-dir/mongo.pem
# Set to false to require mutual TLS encryption
allowConnectionsWithoutCertificates: true
replication:
replSetName: rs0
security:
authorization: enabled
# # Uncomment to enable mutual TLS encryption
# clusterAuthMode: x509
keyFile: /keydir/key.txt
```
To access the cluster you need one of the certificates generated during cluster setup, found in `/work-dir/mongo.pem` of
the respective container, or you can generate your own via:
```console
$ cat >openssl.cnf <<EOL
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $HOSTNAME1
DNS.2 = $HOSTNAME2
EOL
$ openssl genrsa -out mongo.key 2048
$ openssl req -new -key mongo.key -out mongo.csr -subj "/CN=$HOSTNAME" -config openssl.cnf
$ openssl x509 -req -in mongo.csr \
-CA $MONGOCACRT -CAkey $MONGOCAKEY -CAcreateserial \
-out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
$ rm mongo.csr
$ cat mongo.crt mongo.key > mongo.pem
$ rm mongo.key mongo.crt
```
Please ensure that you replace `$HOSTNAME` with your actual hostname and `$HOSTNAME1`, `$HOSTNAME2`, etc. with the
alternative hostnames you want to allow access to the MongoDB replica set. You should now be able to authenticate to
MongoDB with your `mongo.pem` certificate:
```console
mongo --ssl --sslCAFile=ca.crt --sslPEMKeyFile=mongo.pem --eval "db.adminCommand('ping')"
```
## Prometheus metrics
Enabling metrics as follows allows each replica set pod to export Prometheus-compatible metrics
on server status, individual replica set information, replication oplog, and the storage engine.
```yaml
metrics:
enabled: true
image:
repository: ssalaues/mongodb-exporter
tag: 0.6.1
pullPolicy: IfNotPresent
port: 9216
path: "/metrics"
socketTimeout: 3s
syncTimeout: 1m
prometheusServiceDiscovery: true
resources: {}
```
More information on the available metrics can be found in the [MongoDB Exporter](https://github.com/percona/mongodb_exporter) repository.
## Deep dive
Because the pod names depend on the release name, the following examples use the
environment variable `RELEASE_NAME`. For example, if the Helm release name is `messy-hydra`, one would need to set the following before proceeding. The example scripts below assume 3 pods only.
```console
export RELEASE_NAME=messy-hydra
```
### Cluster Health
```console
for i in 0 1 2; do kubectl exec $RELEASE_NAME-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(db.serverStatus())"'; done
```
### Failover
One can check the roles being played by each node by using the following:
```console
$ for i in 0 1 2; do kubectl exec $RELEASE_NAME-mongodb-replicaset-$i -- sh -c 'mongo --eval="printjson(rs.isMaster())"'; done
MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
{
"hosts" : [
"messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
"messy-hydra-mongodb-1.messy-hydra-mongodb.default.svc.cluster.local:27017",
"messy-hydra-mongodb-2.messy-hydra-mongodb.default.svc.cluster.local:27017"
],
"setName" : "rs0",
"setVersion" : 3,
"ismaster" : true,
"secondary" : false,
"primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
"me" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
"electionId" : ObjectId("7fffffff0000000000000001"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2016-09-13T01:10:12.680Z"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
"ok" : 1
}
```
This lets us see which member is primary.
Let us now test persistence and failover. First, we insert a key (in the example below, we assume pod 0 is the primary):
```console
$ kubectl exec $RELEASE_NAME-mongodb-replicaset-0 -- mongo --eval="printjson(db.test.insert({key1: 'value1'}))"
MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
{ "nInserted" : 1 }
```
Watch existing members:
```console
$ kubectl run --attach bbox --image=mongo:3.6 --restart=Never --env="RELEASE_NAME=$RELEASE_NAME" -- sh -c 'while true; do for i in 0 1 2; do echo $RELEASE_NAME-mongodb-replicaset-$i $(mongo --host=$RELEASE_NAME-mongodb-replicaset-$i.$RELEASE_NAME-mongodb-replicaset --eval="printjson(rs.isMaster())" | grep primary); sleep 1; done; done';
Waiting for pod default/bbox2 to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-1 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
```
Kill the primary and watch as a new primary is elected.
```console
$ kubectl delete pod $RELEASE_NAME-mongodb-replicaset-0
pod "messy-hydra-mongodb-0" deleted
```
Delete all pods and let the StatefulSet controller bring them back up.
```console
$ kubectl delete po -l "app=mongodb-replicaset,release=$RELEASE_NAME"
$ kubectl get po --watch-only
NAME READY STATUS RESTARTS AGE
messy-hydra-mongodb-0 0/1 Pending 0 0s
messy-hydra-mongodb-0 0/1 Pending 0 0s
messy-hydra-mongodb-0 0/1 Pending 0 7s
messy-hydra-mongodb-0 0/1 Init:0/2 0 7s
messy-hydra-mongodb-0 0/1 Init:1/2 0 27s
messy-hydra-mongodb-0 0/1 Init:1/2 0 28s
messy-hydra-mongodb-0 0/1 PodInitializing 0 31s
messy-hydra-mongodb-0 0/1 Running 0 32s
messy-hydra-mongodb-0 1/1 Running 0 37s
messy-hydra-mongodb-1 0/1 Pending 0 0s
messy-hydra-mongodb-1 0/1 Pending 0 0s
messy-hydra-mongodb-1 0/1 Init:0/2 0 0s
messy-hydra-mongodb-1 0/1 Init:1/2 0 20s
messy-hydra-mongodb-1 0/1 Init:1/2 0 21s
messy-hydra-mongodb-1 0/1 PodInitializing 0 24s
messy-hydra-mongodb-1 0/1 Running 0 25s
messy-hydra-mongodb-1 1/1 Running 0 30s
messy-hydra-mongodb-2 0/1 Pending 0 0s
messy-hydra-mongodb-2 0/1 Pending 0 0s
messy-hydra-mongodb-2 0/1 Init:0/2 0 0s
messy-hydra-mongodb-2 0/1 Init:1/2 0 21s
messy-hydra-mongodb-2 0/1 Init:1/2 0 22s
messy-hydra-mongodb-2 0/1 PodInitializing 0 25s
messy-hydra-mongodb-2 0/1 Running 0 26s
messy-hydra-mongodb-2 1/1 Running 0 30s
...
messy-hydra-mongodb-0 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-1 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
messy-hydra-mongodb-2 "primary" : "messy-hydra-mongodb-0.messy-hydra-mongodb.default.svc.cluster.local:27017",
```
Check the previously inserted key:
```console
$ kubectl exec $RELEASE_NAME-mongodb-replicaset-1 -- mongo --eval="rs.slaveOk(); db.test.find({key1:{\$exists:true}}).forEach(printjson)"
MongoDB shell version: 3.6.3
connecting to: mongodb://127.0.0.1:27017
{ "_id" : ObjectId("57b180b1a7311d08f2bfb617"), "key1" : "value1" }
```
### Scaling
Scaling should be managed by `helm upgrade`, which is the recommended way.
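For example, a minimal sketch (the release name and replica count are hypothetical):
```console
helm upgrade my-release stable/mongodb-replicaset --set replicas=5
```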
### Indexes and Maintenance
You can run Mongo in standalone mode and execute JavaScript code on each replica at initContainer time using `initMongodStandalone`.
This allows you to create indexes on replicasets following [best practices](https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/).
#### Example: Creating Indexes
```js
initMongodStandalone: |+
db = db.getSiblingDB("mydb")
db.my_users.createIndex({email: 1})
```
Tail the logs to debug running index builds or to follow their progress:
```sh
kubectl exec -it $RELEASE-mongodb-replicaset-0 -c bootstrap -- tail -f /work-dir/log.txt
```
### Migrate existing ReplicaSets into Kubernetes
If you have an existing replica set that is currently deployed outside of Kubernetes and want to move it into a cluster, you can do so by using the `skipInitialization` flag.
First set the `skipInitialization` variable to `true` in values.yaml and install the Helm chart (a sketch is shown below). That way you end up with uninitialized MongoDB pods that can be added to the existing replica set.
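A minimal sketch of that first step, setting the flag on the command line instead of in values.yaml (release name is hypothetical):
```console
helm install --name my-release stable/mongodb-replicaset --set skipInitialization=true
```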
Now make sure that DNS resolution works correctly for all replica set members. In Kubernetes you can, for example, use an `ExternalName` service.
```
apiVersion: v1
kind: Service
metadata:
name: mongodb01
namespace: mongo
spec:
type: ExternalName
externalName: mongodb01.mydomain.com
```
If you also put each StatefulSet member behind a load balancer, the replica set members outside of the cluster will also be able to reach the pods inside the cluster.
```
apiVersion: v1
kind: Service
metadata:
name: mongodb-0
namespace: mongo
spec:
selector:
statefulset.kubernetes.io/pod-name: mongodb-0
ports:
- port: 27017
targetPort: 27017
type: LoadBalancer
```
Now all that is left to do is to put the LoadBalancer IPs into the `/etc/hosts` file (or set up DNS resolution in another way):
```
1.2.3.4 mongodb-0
5.6.7.8 mongodb-1
```
With a setup like this, each replica set member can resolve the DNS entries of the others, and you can add the new pods to your existing MongoDB cluster as if they were just normal nodes.
Of course you need to make sure to get your security settings right: enforced TLS is a good idea in a setup like this, and make sure that you activate auth and get the firewall settings right.
Once you have fully migrated, remove the old nodes from the replica set.
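For example, an old member can be removed from the primary with the standard `rs.remove()` shell helper (the hostname below is hypothetical; add credentials if auth is enabled):
```console
kubectl exec $RELEASE_NAME-mongodb-replicaset-0 -- mongo --eval "rs.remove('mongodb01.mydomain.com:27017')"
```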

View File

@ -0,0 +1 @@
# No config change. Just use defaults.

View File

@ -0,0 +1,10 @@
auth:
enabled: true
adminUser: username
adminPassword: password
metricsUser: metrics
metricsPassword: password
key: keycontent
metrics:
enabled: true

View File

@ -0,0 +1,10 @@
tls:
# Enable or disable MongoDB TLS support
enabled: true
# Please generate your own TLS CA by generating it via:
# $ openssl genrsa -out ca.key 2048
# $ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
# After that you can base64 encode it and paste it here:
# $ cat ca.key | base64 -w0
cacert: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNxakNDQVpJQ0NRQ1I1aXNNQlRmQzdUQU5CZ2txaGtpRzl3MEJBUXNGQURBWE1SVXdFd1lEVlFRRERBeHQKZVdSdmJXRnBiaTVqYjIwd0hoY05NVGt4TVRFeU1EZ3hOakUwV2hjTk5EY3dNek13TURneE5qRTBXakFYTVJVdwpFd1lEVlFRRERBeHRlV1J2YldGcGJpNWpiMjB3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNwM0UrdVpWanhaY3BYNUFCbEtRa2crZjFmSnJzR1JJNVQrMzcyMkIvYnRyTVo4M3FyRTg2RFdEYXEKN0k1YTdlOGFVTGt2ZVpsaW02aWxsUW5CTHJPVUtVZ3R1OWZINlZydlBuMTl3UDFibEMvU0NWZHoxemNSUWlJWQpOMVVWN2VGaWUzdjhiNXVRM2RFcVBPV2FMM0w2N0Q1T0lDb043Z21QL2QwVVBaWjNHdDJLNTZsNXBzY1h4OGYwCkd3ZWdSRGpiVnZmc2dUSW50dEJ6SGh6c0JENUxON054aDd5RWVacW5admtuTDg5S2JZUEFPUk82N3NKUlBhWHMKUDhuVDhqalFJaGlRSUZDNTVXN3JrZ1hid1Znajdwb0kyby9XSDM4WXZ6TG1OVnMyOThYUDZmUXhCQ0NwMmFjRgpkOTVQRjZmbFVJeW9RNGRuOUF5UlpRa0owdlpMQWdNQkFBRXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBS21XCjY2SlB4V0E4MVlYTEZtclFrdmM2NE9ycFJXTHJtNHFqaFREREtvVzY1V291MzNyOEVVQktzQ2FQOHNWWXhobFQKaWhGcDhIemNqTXpWRjFqU3ZiT0c5UnhrSG16ZEIvL3FwMDdTVFp0S2R1cThad2RVdnh1Z1FXSFNzOHA4YVNHUAowNDlkSDBqUnpEZklyVGp4Z3ZNOUJlWmorTkdqT1FyUGRvQVBKeTl2THowZmYya1gzVjJadTFDWnNnbDNWUXFsCjRsNzB3azFuVk5tTXY4Nnl5cUZXaWFRTWhuSXFjKzBwYUJaRjJFSGNpSExuZWcweVVTZVN4REsrUkk4SE9mT3oKNVFpUHpqSGs1b3czd252NDhQWVJMODdLTWJtRzF0eThyRHMxUlVGWkZueGxHd0t4UmRmckt3aHJJbVRBT2N4Vwo5bVhCU3ZzY3RjM2tIZTRIVFdRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
cakey: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcWR4UHJtVlk4V1hLVitRQVpTa0pJUG45WHlhN0JrU09VL3QrOXRnZjI3YXpHZk42CnF4UE9nMWcycXV5T1d1M3ZHbEM1TDNtWllwdW9wWlVKd1M2emxDbElMYnZYeCtsYTd6NTlmY0Q5VzVRdjBnbFgKYzljM0VVSWlHRGRWRmUzaFludDcvRytia04zUktqemxtaTl5K3V3K1RpQXFEZTRKai8zZEZEMldkeHJkaXVlcAplYWJIRjhmSDlCc0hvRVE0MjFiMzdJRXlKN2JRY3g0YzdBUStTemV6Y1llOGhIbWFwMmI1SnkvUFNtMkR3RGtUCnV1N0NVVDJsN0QvSjAvSTQwQ0lZa0NCUXVlVnU2NUlGMjhGWUkrNmFDTnFQMWg5L0dMOHk1alZiTnZmRnorbjAKTVFRZ3FkbW5CWGZlVHhlbjVWQ01xRU9IWi9RTWtXVUpDZEwyU3dJREFRQUJBb0lCQVFDVWM3eWNBVzFMaEpmawpXcHRSemh4eFdxcnJSeEU3ZUIwZ0h2UW16bHFCanRwVyt1bWhyT3pXOC9qTFIzVmUyUVlZYktaOGJIejJwbTR0ClVPVTJsaGRTalFYTkdwZUsyMUtqTjIwN3c3aHFHa2YwL0Q4WE9lZWh5TGU5akZacmxQeGZNdWI0aDU1aGJNdUsKYTdDTElaOE8xL3ZZRWRwUFZGTzlLYlRYSk1CbEZJUERUaFJvR2RCTEFkREZNbzcrUnZYSFRUcXdyWmxDbWRDbgp5eld3WkhIQUZhdEdGWU9ybXcxdlZZY3h0OXk5c0FVZDBrVTQza05jVHVHR0MwMGh1QlZMcW9JZU9mMG12TDB0Ckg0S0d6LzBicGp4NFpoWlNKazd3ZkFsQ0xGL1N5YzVJOEJXWWNCb05Jc0RSbDdDUmpDVUoxYVNBNVNYNzZ2SVoKSlhnRWEyV3hBb0dCQU50M0pDRGtycjNXRmJ3cW1SZ2ZhUVV0UE1FVnZlVnJvQmRqZTBZVFFNbENlNTV2QU1uNQpadEFKQTVKTmxKN2VZRkVEa0JrVURJSDFDM3hlZHVWbEREWXpESzRpR0V1Wk8wVDNERFN3aks2cExsZ3JBN0QyCmZnS29ubVdGck5JdTI4UW1MNHhmcjUrWW9SNUo0L05QdFdWOWwwZk1NOHEwSTd5SVRNODlsZWlqQW9HQkFNWWoKTHk3VER1MWVJVWkrQXJFNTJFMEkwTVJPNWNLS2hxdGxJMnpCZkZlWm5LYWdwbnhCMTMxbi9wcGg0Wm1IYnMzZQpxOXBSb0RJT0h4cm5NOWFCa1JBTHJHNjBMeXF3eU5NNW1JemkvQytJK2RVOG55ZXIvZVNNRTNtdlFzbmpVcEhtClRtTjRrM0l4RWtqRnhCazVndFNlNlA5U0UyOFd6eVZoOGlkZHRjNDVBb0dBYzcwWFBvbWJaZDM3UkdxcXBrQWEKWUhLRThjY0hpSEFEMDVIUk54bDhOeWRxamhrNEwwdnAzcGlDVzZ1eVR6NHpTVVk1dmlBR29KcWNYaEJyWDNxMAp2L2lZSFZVNXZ0U21ueTR5TDY5VDRlQ3k0aWg5SDl3K2hDUnN0Rm1VMUp1RnBxSUV2V0RRKzdmQWNIckRUbE9nCjlFOFJjdm5MN29DbHdBMlpoRW1VUDBVQ2dZQWFhdUtGbWJwcHg1MGtkOEVnSkJoRTNTSUlxb1JUMWVoeXZiOWwKWnI3UFp6bk50YW04ODRKcHhBM2NRNlN5dGEzK1lPd0U1ZEU0RzAzbVptRXcvb0Y2NURPUFp4TEszRnRLWG1tSwpqMUVVZld6aUUzMGM2ditsRTFBZGIxSzJYRXJNRFNyeWRFY2tlSXA1alhUQjhEc1RZa1NxbGlUbE1PTlpscCtVCnhCZlRjUUtCZ0RoZHo4VjU1TzdNc0dyRVlQeGhoK0U0bklLb3BRc0RlNi9QdWRRVlJMRlNwVGpLNWlKcTF2RnIKajFyNDFCNFp0cjBYNGd6MzhrSUpwZGNvNUFxU25zVENreHhnYXh3RTNzVmlqNGZZRWlteDc3TS84VkZVbDZwLwphNmdBbFh2WHFaYmFvTGU3ekM2RXVZWjFtUzJGMVd4UE9KRzZpakFiMVNIQjVPOGFWdFR3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg=="

View File

@ -0,0 +1,226 @@
#!/usr/bin/env bash
# Copyright 2018 The Kubernetes Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -eo pipefail
port=27017
replica_set="$REPLICA_SET"
script_name=${0##*/}
SECONDS=0
timeout="${TIMEOUT:-900}"
tls_mode="${TLS_MODE}"
if [[ "$AUTH" == "true" ]]; then
admin_user="$ADMIN_USER"
admin_password="$ADMIN_PASSWORD"
admin_creds=(-u "$admin_user" -p "$admin_password")
if [[ "$METRICS" == "true" ]]; then
metrics_user="$METRICS_USER"
metrics_password="$METRICS_PASSWORD"
fi
auth_args=("--auth" "--keyFile=/data/configdb/key.txt")
fi
log() {
local msg="$1"
local timestamp
timestamp=$(date --iso-8601=ns)
echo "[$timestamp] [$script_name] $msg" 2>&1 | tee -a /work-dir/log.txt 1>&2
}
retry_until() {
local host="${1}"
local command="${2}"
local expected="${3}"
local creds=("${admin_creds[@]}")
# Don't need credentials for admin user creation and pings that run on localhost
if [[ "${host}" =~ ^localhost ]]; then
creds=()
fi
until [[ $(mongo admin --host "${host}" "${creds[@]}" "${ssl_args[@]}" --quiet --eval "${command}" | tail -n1) == "${expected}" ]]; do
sleep 1
if (! ps "${pid}" &>/dev/null); then
log "mongod shutdown unexpectedly"
exit 1
fi
if [[ "${SECONDS}" -ge "${timeout}" ]]; then
log "Timed out after ${timeout}s attempting to bootstrap mongod"
exit 1
fi
log "Retrying ${command} on ${host}"
done
}
shutdown_mongo() {
local host="${1:-localhost}"
local args='force: true'
log "Shutting down MongoDB ($args)..."
if (! mongo admin --host "${host}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "db.shutdownServer({$args})"); then
log "db.shutdownServer() failed, sending the terminate signal"
kill -TERM "${pid}"
fi
}
init_mongod_standalone() {
if [[ ! -f /init/initMongodStandalone.js ]]; then
log "Skipping init mongod standalone script"
return 0
elif [[ -z "$(ls -1A /data/db)" ]]; then
log "mongod standalone script currently not supported on initial install"
return 0
fi
local port="27018"
log "Starting a MongoDB instance as standalone..."
mongod --config /data/configdb/mongod.conf --dbpath=/data/db "${auth_args[@]}" "${ssl_server_args[@]}" --port "${port}" --bind_ip=0.0.0.0 2>&1 | tee -a /work-dir/log.txt 1>&2 &
export pid=$!
trap shutdown_mongo EXIT
log "Waiting for MongoDB to be ready..."
retry_until "localhost:${port}" "db.adminCommand('ping').ok" "1"
log "Running init js script on standalone mongod"
mongo admin --port "${port}" "${admin_creds[@]}" "${ssl_args[@]}" /init/initMongodStandalone.js
shutdown_mongo "localhost:${port}"
}
my_hostname=$(hostname)
log "Bootstrapping MongoDB replica set member: $my_hostname"
log "Reading standard input..."
while read -ra line; do
if [[ "${line}" == *"${my_hostname}"* ]]; then
service_name="$line"
fi
peers=("${peers[@]}" "$line")
done
# Generate the ca cert
ca_crt=/data/configdb/tls.crt
if [ -f "$ca_crt" ]; then
log "Generating certificate"
ca_key=/data/configdb/tls.key
pem=/work-dir/mongo.pem
ssl_args=(--ssl --sslCAFile "$ca_crt" --sslPEMKeyFile "$pem")
ssl_server_args=(--sslMode "$tls_mode" --sslCAFile "$ca_crt" --sslPEMKeyFile "$pem")
# Move into /work-dir
pushd /work-dir
cat >openssl.cnf <<EOL
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $(echo -n "$my_hostname" | sed s/-[0-9]*$//)
DNS.2 = $my_hostname
DNS.3 = $service_name
DNS.4 = localhost
DNS.5 = 127.0.0.1
EOL
# Generate the certs
openssl genrsa -out mongo.key 2048
openssl req -new -key mongo.key -out mongo.csr -subj "/OU=MongoDB/CN=$my_hostname" -config openssl.cnf
openssl x509 -req -in mongo.csr \
-CA "$ca_crt" -CAkey "$ca_key" -CAcreateserial \
-out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
rm mongo.csr
cat mongo.crt mongo.key > $pem
rm mongo.key mongo.crt
fi
init_mongod_standalone
if [[ "${SKIP_INIT}" == "true" ]]; then
log "Skipping initialization"
exit 0
fi
log "Peers: ${peers[*]}"
log "Starting a MongoDB replica"
mongod --config /data/configdb/mongod.conf --dbpath=/data/db --replSet="$replica_set" --port="${port}" "${auth_args[@]}" "${ssl_server_args[@]}" --bind_ip=0.0.0.0 2>&1 | tee -a /work-dir/log.txt 1>&2 &
pid=$!
trap shutdown_mongo EXIT
log "Waiting for MongoDB to be ready..."
retry_until "localhost" "db.adminCommand('ping').ok" "1"
log "Initialized."
# try to find a master
for peer in "${peers[@]}"; do
log "Checking if ${peer} is primary"
# Check rs.status() first since it could be in primary catch up mode which db.isMaster() doesn't show
if [[ $(mongo admin --host "${peer}" "${admin_creds[@]}" "${ssl_args[@]}" --quiet --eval "rs.status().myState") == "1" ]]; then
retry_until "${peer}" "db.isMaster().ismaster" "true"
log "Found primary: ${peer}"
primary="${peer}"
break
fi
done
if [[ "${primary}" = "${service_name}" ]]; then
log "This replica is already PRIMARY"
elif [[ -n "${primary}" ]]; then
if [[ $(mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --quiet --eval "rs.conf().members.findIndex(m => m.host == '${service_name}:${port}')") == "-1" ]]; then
log "Adding myself (${service_name}) to replica set..."
if (mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "rs.add('${service_name}')" | grep 'Quorum check failed'); then
log 'Quorum check failed, unable to join replicaset. Exiting prematurely.'
exit 1
fi
fi
sleep 3
log 'Waiting for replica to reach SECONDARY state...'
retry_until "${service_name}" "rs.status().myState" "2"
log '✓ Replica reached SECONDARY state.'
elif (mongo "${ssl_args[@]}" --eval "rs.status()" | grep "no replset config has been received"); then
log "Initiating a new replica set with myself ($service_name)..."
mongo "${ssl_args[@]}" --eval "rs.initiate({'_id': '$replica_set', 'members': [{'_id': 0, 'host': '$service_name'}]})"
sleep 3
log 'Waiting for replica to reach PRIMARY state...'
retry_until "localhost" "db.isMaster().ismaster" "true"
primary="${service_name}"
log '✓ Replica reached PRIMARY state.'
if [[ "${AUTH}" == "true" ]]; then
log "Creating admin user..."
mongo admin "${ssl_args[@]}" --eval "db.createUser({user: '${admin_user}', pwd: '${admin_password}', roles: [{role: 'root', db: 'admin'}]})"
fi
fi
# User creation
if [[ -n "${primary}" && "$AUTH" == "true" && "$METRICS" == "true" ]]; then
metric_user_count=$(mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "db.system.users.find({user: '${metrics_user}'}).count()" --quiet)
if [[ "${metric_user_count}" == "0" ]]; then
log "Creating clusterMonitor user..."
mongo admin --host "${primary}" "${admin_creds[@]}" "${ssl_args[@]}" --eval "db.createUser({user: '${metrics_user}', pwd: '${metrics_password}', roles: [{role: 'clusterMonitor', db: 'admin'}, {role: 'read', db: 'local'}]})"
fi
fi
log "MongoDB bootstrap complete"
exit 0

View File

@ -0,0 +1,14 @@
1. After the statefulset is created completely, one can check which instance is primary by running:
$ for ((i = 0; i < {{ .Values.replicas }}; ++i)); do kubectl exec --namespace {{ .Release.Namespace }} {{ template "mongodb-replicaset.fullname" . }}-$i -- sh -c 'mongo --eval="printjson(rs.isMaster())"'; done
2. One can insert a key into the primary instance of the mongodb replica set by running the following:
MASTER_POD_NAME must be replaced with the name of the master found from the previous step.
$ kubectl exec --namespace {{ .Release.Namespace }} MASTER_POD_NAME -- mongo --eval="printjson(db.test.insert({key1: 'value1'}))"
3. One can fetch the keys stored in the primary or any of the slave nodes in the following manner.
POD_NAME must be replaced by the name of the pod being queried.
$ kubectl exec --namespace {{ .Release.Namespace }} POD_NAME -- mongo --eval="rs.slaveOk(); db.test.find().forEach(printjson)"

View File

@ -0,0 +1,78 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "mongodb-replicaset.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "mongodb-replicaset.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "mongodb-replicaset.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name for the admin secret.
*/}}
{{- define "mongodb-replicaset.adminSecret" -}}
{{- if .Values.auth.existingAdminSecret -}}
{{- .Values.auth.existingAdminSecret -}}
{{- else -}}
{{- template "mongodb-replicaset.fullname" . -}}-admin
{{- end -}}
{{- end -}}
{{- define "mongodb-replicaset.metricsSecret" -}}
{{- if .Values.auth.existingMetricsSecret -}}
{{- .Values.auth.existingMetricsSecret -}}
{{- else -}}
{{- template "mongodb-replicaset.fullname" . -}}-metrics
{{- end -}}
{{- end -}}
{{/*
Create the name for the key secret.
*/}}
{{- define "mongodb-replicaset.keySecret" -}}
{{- if .Values.auth.existingKeySecret -}}
{{- .Values.auth.existingKeySecret -}}
{{- else -}}
{{- template "mongodb-replicaset.fullname" . -}}-keyfile
{{- end -}}
{{- end -}}
{{- define "mongodb-replicaset.connection-string" -}}
{{- $string := "" -}}
{{- if .Values.auth.enabled }}
{{- $string = printf "mongodb://$METRICS_USER:$METRICS_PASSWORD@localhost:%s" (.Values.port|toString) -}}
{{- else -}}
{{- $string = printf "mongodb://localhost:%s" (.Values.port|toString) -}}
{{- end -}}
{{- if .Values.tls.enabled }}
{{- printf "%s?ssl=true&tlsCertificateKeyFile=/work-dir/mongo.pem&tlsCAFile=/ca/tls.crt" $string -}}
{{- else -}}
{{- printf $string -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,18 @@
{{- if and (.Values.auth.enabled) (not .Values.auth.existingAdminSecret) -}}
apiVersion: v1
kind: Secret
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.adminSecret" . }}
type: Opaque
data:
user: {{ .Values.auth.adminUser | b64enc }}
password: {{ .Values.auth.adminPassword | b64enc }}
{{- end -}}

View File

@ -0,0 +1,18 @@
{{- if .Values.tls.enabled -}}
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-ca
data:
tls.key: {{ .Values.tls.cakey }}
tls.crt: {{ .Values.tls.cacert }}
{{- end -}}

View File

@ -0,0 +1,20 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-init
data:
on-start.sh: |
{{ .Files.Get "init/on-start.sh" | indent 4 }}
{{- if .Values.initMongodStandalone }}
initMongodStandalone.js: |
{{ .Values.initMongodStandalone | indent 4 }}
{{- end }}

View File

@ -0,0 +1,17 @@
{{- if and (.Values.auth.enabled) (not .Values.auth.existingKeySecret) -}}
apiVersion: v1
kind: Secret
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.keySecret" . }}
type: Opaque
data:
key.txt: {{ .Values.auth.key | b64enc }}
{{- end -}}

View File

@ -0,0 +1,18 @@
{{- if and (.Values.auth.enabled) (not .Values.auth.existingMetricsSecret) (.Values.metrics.enabled) -}}
apiVersion: v1
kind: Secret
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.metricsSecret" . }}
type: Opaque
data:
user: {{ .Values.auth.metricsUser | b64enc }}
password: {{ .Values.auth.metricsPassword | b64enc }}
{{- end -}}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-mongodb
data:
mongod.conf: |
{{ toYaml .Values.configmap | indent 4 }}

View File

@ -0,0 +1,20 @@
{{- if .Values.podDisruptionBudget -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}
{{ toYaml .Values.podDisruptionBudget | indent 2 }}
{{- end -}}

View File

@ -0,0 +1,32 @@
# A headless service for client applications to use
apiVersion: v1
kind: Service
metadata:
annotations:
{{- if .Values.serviceAnnotations }}
{{ toYaml .Values.serviceAnnotations | indent 4 }}
{{- end }}
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}-client
spec:
type: ClusterIP
clusterIP: None
ports:
- name: mongodb
port: {{ .Values.port }}
{{- if .Values.metrics.enabled }}
- name: metrics
port: {{ .Values.metrics.port }}
targetPort: metrics
{{- end }}
selector:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}

View File

@ -0,0 +1,25 @@
# A headless service to create DNS records for discovery purposes. Use the -client service to connect applications
apiVersion: v1
kind: Service
metadata:
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}
spec:
type: ClusterIP
clusterIP: None
ports:
- name: mongodb
port: {{ .Values.port }}
publishNotReadyAddresses: true
selector:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}

View File

@ -0,0 +1,354 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ template "mongodb-replicaset.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "mongodb-replicaset.fullname" . }}
spec:
selector:
matchLabels:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}
serviceName: {{ template "mongodb-replicaset.fullname" . }}
replicas: {{ .Values.replicas }}
template:
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
release: {{ .Release.Name }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 8 }}
{{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/mongodb-mongodb-configmap.yaml") . | sha256sum }}
{{- if and (.Values.metrics.prometheusServiceDiscovery) (.Values.metrics.enabled) }}
prometheus.io/scrape: "true"
prometheus.io/port: {{ .Values.metrics.port | quote }}
prometheus.io/path: {{ .Values.metrics.path | quote }}
{{- end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.imagePullSecrets }}
- name: {{ . }}
{{- end}}
{{- end }}
{{- if .Values.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
initContainers:
- name: copy-config
image: "{{ .Values.copyConfigImage.repository }}:{{ .Values.copyConfigImage.tag }}"
imagePullPolicy: {{ .Values.copyConfigImage.pullPolicy | quote }}
command:
- "sh"
args:
- "-c"
- |
set -e
set -x
cp /configdb-readonly/mongod.conf /data/configdb/mongod.conf
{{- if .Values.tls.enabled }}
cp /ca-readonly/tls.key /data/configdb/tls.key
cp /ca-readonly/tls.crt /data/configdb/tls.crt
{{- end }}
{{- if .Values.auth.enabled }}
cp /keydir-readonly/key.txt /data/configdb/key.txt
chmod 600 /data/configdb/key.txt
{{- end }}
volumeMounts:
- name: workdir
mountPath: /work-dir
- name: config
mountPath: /configdb-readonly
- name: configdir
mountPath: /data/configdb
{{- if .Values.tls.enabled }}
- name: ca
mountPath: /ca-readonly
{{- end }}
{{- if .Values.auth.enabled }}
- name: keydir
mountPath: /keydir-readonly
{{- end }}
resources:
{{ toYaml .Values.init.resources | indent 12 }}
- name: install
image: "{{ .Values.installImage.repository }}:{{ .Values.installImage.tag }}"
args:
- --work-dir=/work-dir
imagePullPolicy: "{{ .Values.installImage.pullPolicy }}"
volumeMounts:
- name: workdir
mountPath: /work-dir
resources:
{{ toYaml .Values.init.resources | indent 12 }}
- name: bootstrap
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
command:
- /work-dir/peer-finder
args:
- -on-start=/init/on-start.sh
- "-service={{ template "mongodb-replicaset.fullname" . }}"
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: REPLICA_SET
value: {{ .Values.replicaSetName }}
- name: TIMEOUT
value: "{{ .Values.init.timeout }}"
- name: SKIP_INIT
value: "{{ .Values.skipInitialization }}"
- name: TLS_MODE
value: {{ .Values.tls.mode }}
{{- if .Values.auth.enabled }}
- name: AUTH
value: "true"
- name: ADMIN_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: user
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: password
{{- if .Values.metrics.enabled }}
- name: METRICS
value: "true"
- name: METRICS_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: user
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: password
{{- end }}
{{- end }}
volumeMounts:
- name: workdir
mountPath: /work-dir
- name: init
mountPath: /init
- name: configdir
mountPath: /data/configdb
- name: datadir
mountPath: /data/db
resources:
{{ toYaml .Values.init.resources | indent 12 }}
containers:
- name: {{ template "mongodb-replicaset.name" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
{{- if .Values.extraVars }}
env:
{{ toYaml .Values.extraVars | indent 12 }}
{{- end }}
ports:
- name: mongodb
containerPort: 27017
resources:
{{ toYaml .Values.resources | indent 12 }}
command:
- mongod
args:
- --config=/data/configdb/mongod.conf
- --dbpath=/data/db
- --replSet={{ .Values.replicaSetName }}
- --port=27017
- --bind_ip=0.0.0.0
{{- if .Values.auth.enabled }}
- --auth
- --keyFile=/data/configdb/key.txt
{{- end }}
{{- if .Values.tls.enabled }}
- --sslMode={{ .Values.tls.mode }}
- --sslCAFile=/data/configdb/tls.crt
- --sslPEMKeyFile=/work-dir/mongo.pem
{{- end }}
livenessProbe:
exec:
command:
- mongo
{{- if .Values.tls.enabled }}
- --ssl
- --sslCAFile=/data/configdb/tls.crt
- --sslPEMKeyFile=/work-dir/mongo.pem
{{- end }}
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
readinessProbe:
exec:
command:
- mongo
{{- if .Values.tls.enabled }}
- --ssl
- --sslCAFile=/data/configdb/tls.crt
- --sslPEMKeyFile=/work-dir/mongo.pem
{{- end }}
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
volumeMounts:
- name: datadir
mountPath: /data/db
- name: configdir
mountPath: /data/configdb
- name: workdir
mountPath: /work-dir
{{ if .Values.metrics.enabled }}
- name: metrics
image: "{{ .Values.metrics.image.repository }}:{{ .Values.metrics.image.tag }}"
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command:
- sh
- -c
- >-
/bin/mongodb_exporter
--mongodb.uri {{ template "mongodb-replicaset.connection-string" . }}
--mongodb.socket-timeout={{ .Values.metrics.socketTimeout }}
--mongodb.sync-timeout={{ .Values.metrics.syncTimeout }}
--web.telemetry-path={{ .Values.metrics.path }}
--web.listen-address=:{{ .Values.metrics.port }}
volumeMounts:
{{- if and (.Values.tls.enabled) }}
- name: ca
mountPath: /ca
readOnly: true
{{- end }}
- name: workdir
mountPath: /work-dir
readOnly: true
env:
{{- if .Values.auth.enabled }}
- name: METRICS_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: user
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.metricsSecret" . }}"
key: password
{{- end }}
ports:
- name: metrics
containerPort: {{ .Values.metrics.port }}
resources:
{{ toYaml .Values.metrics.resources | indent 12 }}
{{- if .Values.metrics.securityContext.enabled }}
securityContext:
runAsUser: {{ .Values.metrics.securityContext.runAsUser }}
{{- end }}
livenessProbe:
exec:
command:
- sh
- -c
- >-
/bin/mongodb_exporter
--mongodb.uri {{ template "mongodb-replicaset.connection-string" . }}
--test
initialDelaySeconds: 30
periodSeconds: 10
{{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{ toYaml . | indent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: config
configMap:
name: {{ template "mongodb-replicaset.fullname" . }}-mongodb
- name: init
configMap:
defaultMode: 0755
name: {{ template "mongodb-replicaset.fullname" . }}-init
{{- if .Values.tls.enabled }}
- name: ca
secret:
defaultMode: 0400
secretName: {{ template "mongodb-replicaset.fullname" . }}-ca
{{- end }}
{{- if .Values.auth.enabled }}
- name: keydir
secret:
defaultMode: 0400
secretName: {{ template "mongodb-replicaset.keySecret" . }}
{{- end }}
- name: workdir
emptyDir: {}
- name: configdir
emptyDir: {}
{{- if .Values.persistentVolume.enabled }}
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
{{- range $key, $value := .Values.persistentVolume.annotations }}
{{ $key }}: "{{ $value }}"
{{- end }}
spec:
accessModes:
{{- range .Values.persistentVolume.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistentVolume.size | quote }}
{{- if .Values.persistentVolume.storageClass }}
{{- if (eq "-" .Values.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
{{- else }}
- name: datadir
emptyDir: {}
{{- end }}

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "mongodb-replicaset.fullname" . }}-tests
data:
mongodb-up-test.sh: |
{{ .Files.Get "tests/mongodb-up-test.sh" | indent 4 }}

View File

@ -0,0 +1,79 @@
apiVersion: v1
kind: Pod
metadata:
labels:
app: {{ template "mongodb-replicaset.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "mongodb-replicaset.fullname" . }}-test
annotations:
"helm.sh/hook": test-success
spec:
initContainers:
- name: test-framework
image: dduportal/bats:0.4.0
command:
- bash
- -c
- |
set -ex
# copy bats to tools dir
cp -R /usr/local/libexec/ /tools/bats/
volumeMounts:
- name: tools
mountPath: /tools
containers:
- name: mongo
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
command:
- /tools/bats/bats
- -t
- /tests/mongodb-up-test.sh
env:
- name: FULL_NAME
value: {{ template "mongodb-replicaset.fullname" . }}
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: REPLICAS
value: "{{ .Values.replicas }}"
{{- if .Values.auth.enabled }}
- name: AUTH
value: "true"
- name: ADMIN_USER
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: user
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: "{{ template "mongodb-replicaset.adminSecret" . }}"
key: password
{{- end }}
volumeMounts:
- name: tools
mountPath: /tools
- name: tests
mountPath: /tests
{{- if .Values.tls.enabled }}
- name: tls
mountPath: /tls
{{- end }}
volumes:
- name: tools
emptyDir: {}
- name: tests
configMap:
name: {{ template "mongodb-replicaset.fullname" . }}-tests
{{- if .Values.tls.enabled }}
- name: tls
secret:
secretName: {{ template "mongodb-replicaset.fullname" . }}-ca
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
{{- end }}
restartPolicy: Never

View File

@ -0,0 +1,48 @@
#! /bin/bash
# Copyright 2016 The Kubernetes Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
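# Smoke-test sketch: locate the replica set primary, write a test document to it,
# then read the document back from each secondary.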
NS="${RELEASE_NAMESPACE:-default}"
POD_NAME="${RELEASE_NAME:-mongo}-mongodb-replicaset"
MONGOCACRT=/ca/tls.crt
MONGOPEM=/work-dir/mongo.pem
if [ -f $MONGOPEM ]; then
MONGOARGS="--ssl --sslCAFile $MONGOCACRT --sslPEMKeyFile $MONGOPEM"
fi
for i in $(seq 0 2); do
pod="${POD_NAME}-$i"
kubectl exec --namespace $NS $pod -- sh -c 'mongo '"$MONGOARGS"' --eval="printjson(rs.isMaster())"' | grep '"ismaster" : true'
if [ $? -eq 0 ]; then
echo "Found master: $pod"
MASTER=$pod
break
fi
done
kubectl exec --namespace $NS $MASTER -- mongo "$MONGOARGS" --eval='printjson(db.test.insert({"status": "success"}))'
# TODO: find maximum duration to wait for slaves to be up-to-date with master.
sleep 2
for i in $(seq 0 2); do
pod="${POD_NAME}-$i"
if [[ $pod != $MASTER ]]; then
echo "Reading from slave: $pod"
kubectl exec --namespace $NS $pod -- mongo "$MONGOARGS" --eval='rs.slaveOk(); db.test.find().forEach(printjson)'
fi
done

View File

@ -0,0 +1,120 @@
#!/usr/bin/env bash
set -ex
CACRT_FILE=/work-dir/tls.crt
CAKEY_FILE=/work-dir/tls.key
MONGOPEM=/work-dir/mongo.pem
MONGOARGS="--quiet"
if [ -e "/tls/tls.crt" ]; then
# log "Generating certificate"
mkdir -p /work-dir
cp /tls/tls.crt /work-dir/tls.crt
cp /tls/tls.key /work-dir/tls.key
# Move into /work-dir
pushd /work-dir
cat >openssl.cnf <<EOL
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = $(echo -n "$(hostname)" | sed s/-[0-9]*$//)
DNS.2 = $(hostname)
DNS.3 = localhost
DNS.4 = 127.0.0.1
EOL
# Generate the certs
openssl genrsa -out mongo.key 2048
openssl req -new -key mongo.key -out mongo.csr -subj "/OU=MongoDB/CN=$(hostname)" -config openssl.cnf
openssl x509 -req -in mongo.csr \
-CA "$CACRT_FILE" -CAkey "$CAKEY_FILE" -CAcreateserial \
-out mongo.crt -days 3650 -extensions v3_req -extfile openssl.cnf
cat mongo.crt mongo.key > $MONGOPEM
MONGOARGS="$MONGOARGS --ssl --sslCAFile $CACRT_FILE --sslPEMKeyFile $MONGOPEM"
fi
if [[ "${AUTH}" == "true" ]]; then
MONGOARGS="$MONGOARGS --username $ADMIN_USER --password $ADMIN_PASSWORD --authenticationDatabase admin"
fi
pod_name() {
local full_name="${FULL_NAME?Environment variable FULL_NAME not set}"
local namespace="${NAMESPACE?Environment variable NAMESPACE not set}"
local index="$1"
echo "$full_name-$index.$full_name.$namespace.svc.cluster.local"
}
replicas() {
echo "${REPLICAS?Environment variable REPLICAS not set}"
}
master_pod() {
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS "--host=$(pod_name "$i")" "--eval=rs.isMaster().ismaster")
if [[ "$response" == "true" ]]; then
pod_name "$i"
break
fi
done
}
setup() {
local ready=0
until [[ "$ready" -eq $(replicas) ]]; do
echo "Waiting for application to become ready" >&2
sleep 1
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS "--host=$(pod_name "$i")" "--eval=rs.status().ok" || true)
if [[ "$response" -eq 1 ]]; then
ready=$((ready + 1))
fi
done
done
}
@test "Testing mongodb client is executable" {
mongo -h
[ "$?" -eq 0 ]
}
@test "Connect mongodb client to mongodb pods" {
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS "--host=$(pod_name "$i")" "--eval=rs.status().ok")
if [[ ! "$response" -eq 1 ]]; then
exit 1
fi
done
}
@test "Write key to primary" {
response=$(mongo $MONGOARGS --host=$(master_pod) "--eval=db.test.insert({\"abc\": \"def\"}).nInserted")
if [[ ! "$response" -eq 1 ]]; then
exit 1
fi
}
@test "Read key from slaves" {
# wait for slaves to catch up
sleep 10
for ((i = 0; i < $(replicas); ++i)); do
response=$(mongo $MONGOARGS --host=$(pod_name "$i") "--eval=rs.slaveOk(); db.test.find({\"abc\":\"def\"})")
if [[ ! "$response" =~ .*def.* ]]; then
exit 1
fi
done
# Clean up a document after test
mongo $MONGOARGS --host=$(master_pod) "--eval=db.test.deleteMany({\"abc\": \"def\"})"
}

View File

@ -0,0 +1,167 @@
# Override the name of the chart, which in turn changes the name of the containers, services etc.
nameOverride: ""
fullnameOverride: ""
replicas: 3
port: 27017
## Setting this will skip the replicaset and user creation process during bootstrapping
skipInitialization: false
replicaSetName: rs0
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 2
auth:
enabled: false
existingKeySecret: ""
existingAdminSecret: ""
existingMetricsSecret: ""
# adminUser: username
# adminPassword: password
# metricsUser: metrics
# metricsPassword: password
# key: keycontent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
imagePullSecrets: []
# - myRegistryKeySecretName
# Specs for the Docker image for the init container that establishes the replica set
installImage:
repository: unguiculus/mongodb-install
tag: 0.7
pullPolicy: IfNotPresent
# Specs for the Docker image for the copyConfig init container
copyConfigImage:
repository: busybox
tag: 1.29.3
pullPolicy: IfNotPresent
# Specs for the MongoDB image
image:
repository: mongo
tag: 3.6
pullPolicy: IfNotPresent
# Additional environment variables to be set in the container
extraVars: {}
# - name: TCMALLOC_AGGRESSIVE_DECOMMIT
# value: "true"
# Prometheus Metrics Exporter
metrics:
enabled: false
image:
repository: bitnami/mongodb-exporter
tag: 0.10.0-debian-9-r71
pullPolicy: IfNotPresent
port: 9216
path: "/metrics"
socketTimeout: 3s
syncTimeout: 1m
prometheusServiceDiscovery: true
resources: {}
securityContext:
enabled: true
runAsUser: 1001
# Annotations to be added to MongoDB pods
podAnnotations: {}
securityContext:
enabled: true
runAsUser: 999
fsGroup: 999
runAsNonRoot: true
init:
resources: {}
timeout: 900
resources: {}
# limits:
# cpu: 500m
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 256Mi
## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
priorityClassName: ""
persistentVolume:
enabled: true
## mongodb-replicaset data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: ""
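# Example (assumption: the cluster provides a StorageClass named "standard"):
# storageClass: "standard"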
accessModes:
- ReadWriteOnce
size: 10Gi
annotations: {}
# Annotations to be added to the service
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
# Enable or disable MongoDB TLS support
enabled: false
# Set the SSL operation mode (disabled|allowSSL|preferSSL|requireSSL)
mode: requireSSL
# Please generate your own TLS CA by generating it via:
# $ openssl genrsa -out ca.key 2048
# $ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=mydomain.com"
# After that you can base64 encode it and paste it here:
# $ cat ca.key | base64 -w0
# cacert:
# cakey:
# Entries for the MongoDB config file
configmap: {}
# Javascript code to execute on each replica at initContainer time
# This is the recommended way to create indexes on replicasets.
# Below is an example that creates indexes in foreground on each replica in standalone mode.
# ref: https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/
# initMongodStandalone: |+
# db = db.getSiblingDB("mydb")
# db.my_users.createIndex({email: 1})
initMongodStandalone: ""
# Readiness probe
readinessProbe:
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
# Liveness probe
livenessProbe:
initialDelaySeconds: 30
timeoutSeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1

View File

@ -0,0 +1,9 @@
apiVersion: v1
kind: LimitRange
metadata:
name: limits
spec:
limits:
- defaultRequest:
cpu: 40m
type: Container

View File

@ -0,0 +1,45 @@
questions:
- variable: auth.adminUser
default: ""
required: true
type: string
label: Initial Admin User Name, e.g. acme@yourorg.com
group: "Initial Settings - Required"
- variable: auth.adminPassword
default: ""
type: password
required: true
label: Initial Admin Password/Secret
group: "Initial Settings - Required"
- variable: shipaCluster.serviceType
default: ""
type: enum
required: false
label: Cluster Service Type, e.g. ClusterIP [shipaCluster.serviceType]
group: "Shipa Cluster - Optional"
options:
- "ClusterIP"
- "NodePort"
- "LoadBalancer"
- variable: shipaCluster.ip
default: ""
type: string
required: false
label: Cluster IP if using ClusterIP Service Type [shipaCluster.ip]
group: "Shipa Cluster - Optional"
- variable: service.nginx.serviceType
default: ""
type: enum
required: false
label: Override Nginx with a Service Type like ClusterIP [service.nginx.serviceType]
group: "Shipa Cluster - Optional"
options:
- "ClusterIP"
- "NodePort"
- "LoadBalancer"
- variable: service.nginx.clusterIP
default: ""
type: string
required: false
label: Cluster IP for Nginx [service.nginx.clusterIP]
group: "Shipa Cluster - Optional"

View File

@ -0,0 +1,162 @@
#!/bin/sh
set -euxo pipefail
is_shipa_initialized() {
# By default we create the secret with empty certificates
# and populate it with real certificates on the first run of bootstrap.sh
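# Returns 0 (already initialized) when the stored CA value is longer than 100 characters,
# i.e. real certificates were written on a previous run.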
CA=$(kubectl get secret/shipa-certificates -o json | jq ".data[\"ca.pem\"]")
LENGTH=${#CA}
if [ "$LENGTH" -gt "100" ]; then
return 0
fi
return 1
}
echo "Waiting for nginx ingress to be ready"
if [[ $WAIT_FOR_NGINX == "true" ]]; then
# This helper gets an IP address or DNS name of NGINX_SERVICE and prints it to /tmp/nginx-ip
/bin/bootstrap-helper --service-name=$NGINX_SERVICE --namespace=$POD_NAMESPACE --timeout=600 --filename=/tmp/nginx-ip
MAIN_INGRESS_IP=$(cat /tmp/nginx-ip)
HOST_ADDRESS=$(cat /tmp/nginx-ip)
else
MAIN_INGRESS_IP=$INGRESS_IP
HOST_ADDRESS=$INGRESS_IP
fi
# If target CNAMEs are set by user in values.yaml, then use the first CNAME from the list as HOST_ADDRESS
# since Shipa host can be only one in the shipa.conf
if [ ! -z "$SHIPA_MAIN_TARGET" -a "$SHIPA_MAIN_TARGET" != " " ]; then
HOST_ADDRESS=$SHIPA_MAIN_TARGET
fi
echo "Prepare shipa.conf"
cp -v /etc/shipa-default/shipa.conf /etc/shipa/shipa.conf
sed -i "s/SHIPA_PUBLIC_IP/$HOST_ADDRESS/g" /etc/shipa/shipa.conf
sed -ie "s/SHIPA_ORGANIZATION_ID/$SHIPA_ORGANIZATION_ID/g" /etc/shipa/shipa.conf
echo "shipa.conf: "
cat /etc/shipa/shipa.conf
CERTIFICATES_DIRECTORY=/tmp/certs
mkdir $CERTIFICATES_DIRECTORY
if is_shipa_initialized; then
# migration from before the API was accessible over any ingress controller
if [[ $INGRESS_TYPE == "nginx" ]]; then
echo "Refreshing API secrets"
# previously we used TCP streaming on ports 8080 and 8081 and the Shipa API performed certificate checks itself
# today we let nginx handle certificate checks
# because ports 80 and 443 are reserved for Ingress and cannot be used for TCP streaming, we need to create a secret for nginx
# this dedicated nginx secret is derived from the shipa-certificates secret
if [[ $WAIT_FOR_NGINX == "true" ]]; then
kubectl get secrets -n "$POD_NAMESPACE" shipa-certificates -o json | jq ".data[\"api-server.crt\"]" | xargs echo | base64 -d > $CERTIFICATES_DIRECTORY/api-server.pem
kubectl get secrets -n "$POD_NAMESPACE" shipa-certificates -o json | jq ".data[\"api-server.key\"]" | xargs echo | base64 -d > $CERTIFICATES_DIRECTORY/api-server-key.pem
API_SERVER_CERT=$(cat $CERTIFICATES_DIRECTORY/api-server.pem | base64)
API_SERVER_KEY=$(cat $CERTIFICATES_DIRECTORY/api-server-key.pem | base64)
kubectl -n $POD_NAMESPACE create secret tls $RELEASE_NAME-api-ingress-secret --key=$CERTIFICATES_DIRECTORY/api-server-key.pem --cert=$CERTIFICATES_DIRECTORY/api-server.pem --dry-run -o yaml | kubectl apply -f -
fi
fi
echo "Skip bootstrapping because shipa is already initialized"
exit 0
fi
echo "Cert For: $MAIN_INGRESS_IP"
echo "Cert For: $SHIPA_API_CNAMES"
# certificate generation for default domain
sed "s/SHIPA_PUBLIC_IP/$MAIN_INGRESS_IP/g" /scripts/csr-shipa-ca.json > $CERTIFICATES_DIRECTORY/csr-shipa-ca.json
sed "s/SHIPA_PUBLIC_IP/$MAIN_INGRESS_IP/g" /scripts/csr-docker-cluster.json > $CERTIFICATES_DIRECTORY/csr-docker-cluster.json
sed "s/SHIPA_PUBLIC_IP/$MAIN_INGRESS_IP/g" /scripts/csr-api-config.json > $CERTIFICATES_DIRECTORY/csr-api-config.json
sed "s/SHIPA_PUBLIC_IP/$MAIN_INGRESS_IP/g" /scripts/csr-api-server.json > $CERTIFICATES_DIRECTORY/csr-api-server.json
# certificate generation for CNAMES
sed "s/SHIPA_API_CNAMES/$SHIPA_API_CNAMES/g" --in-place $CERTIFICATES_DIRECTORY/csr-docker-cluster.json
sed "s/SHIPA_API_CNAMES/$SHIPA_API_CNAMES/g" --in-place $CERTIFICATES_DIRECTORY/csr-api-server.json
jq 'fromstream(tostream | select(length == 1 or .[1] != ""))' $CERTIFICATES_DIRECTORY/csr-docker-cluster.json > $CERTIFICATES_DIRECTORY/file.tmp && mv $CERTIFICATES_DIRECTORY/file.tmp $CERTIFICATES_DIRECTORY/csr-docker-cluster.json
jq 'fromstream(tostream | select(length == 1 or .[1] != ""))' $CERTIFICATES_DIRECTORY/csr-api-server.json > $CERTIFICATES_DIRECTORY/file.tmp && mv $CERTIFICATES_DIRECTORY/file.tmp $CERTIFICATES_DIRECTORY/csr-api-server.json
cp /scripts/csr-client-ca.json $CERTIFICATES_DIRECTORY/csr-client-ca.json
cfssl gencert -initca $CERTIFICATES_DIRECTORY/csr-shipa-ca.json | cfssljson -bare $CERTIFICATES_DIRECTORY/ca
cfssl gencert -initca $CERTIFICATES_DIRECTORY/csr-client-ca.json | cfssljson -bare $CERTIFICATES_DIRECTORY/client-ca
cfssl gencert \
-ca=$CERTIFICATES_DIRECTORY/ca.pem \
-ca-key=$CERTIFICATES_DIRECTORY/ca-key.pem \
-profile=server \
$CERTIFICATES_DIRECTORY/csr-docker-cluster.json | cfssljson -bare $CERTIFICATES_DIRECTORY/docker-cluster
cfssl gencert \
-ca=$CERTIFICATES_DIRECTORY/ca.pem \
-ca-key=$CERTIFICATES_DIRECTORY/ca-key.pem \
-config=$CERTIFICATES_DIRECTORY/csr-api-config.json \
-profile=server \
$CERTIFICATES_DIRECTORY/csr-api-server.json | cfssljson -bare $CERTIFICATES_DIRECTORY/api-server
rm -f $CERTIFICATES_DIRECTORY/*.csr
rm -f $CERTIFICATES_DIRECTORY/*.json
CA_CERT=$(cat $CERTIFICATES_DIRECTORY/ca.pem | base64)
CA_KEY=$(cat $CERTIFICATES_DIRECTORY/ca-key.pem | base64)
CLIENT_CA_CERT=$(cat $CERTIFICATES_DIRECTORY/client-ca.pem | base64)
CLIENT_CA_KEY=$(cat $CERTIFICATES_DIRECTORY/client-ca-key.pem | base64)
DOCKER_CLUSTER_CERT=$(cat $CERTIFICATES_DIRECTORY/docker-cluster.pem | base64)
DOCKER_CLUSTER_KEY=$(cat $CERTIFICATES_DIRECTORY/docker-cluster-key.pem | base64)
API_SERVER_CERT=$(cat $CERTIFICATES_DIRECTORY/api-server.pem | base64)
API_SERVER_KEY=$(cat $CERTIFICATES_DIRECTORY/api-server-key.pem | base64)
# each ingress controller requires a different type of secret to work with self-signed certificates
if [[ $INGRESS_TYPE == "nginx" ]]; then
kubectl -n $POD_NAMESPACE create secret tls $RELEASE_NAME-api-ingress-secret --key=$CERTIFICATES_DIRECTORY/api-server-key.pem --cert=$CERTIFICATES_DIRECTORY/api-server.pem --dry-run -o yaml | kubectl apply -f -
# restart nginx to reload secret
if [[ $WAIT_FOR_NGINX == "true" ]]; then
kubectl -n $POD_NAMESPACE rollout restart deployment $NGINX_DEPLOYMENT_NAME
fi
fi
if [[ $INGRESS_TYPE == "traefik" ]]; then
openssl x509 -in $CERTIFICATES_DIRECTORY/api-server.pem -out $CERTIFICATES_DIRECTORY/api-server.crt
openssl pkey -in $CERTIFICATES_DIRECTORY/api-server-key.pem -out $CERTIFICATES_DIRECTORY/api-server.key
kubectl -n $POD_NAMESPACE create secret generic $RELEASE_NAME-api-ingress-secret --from-file=tls.crt=$CERTIFICATES_DIRECTORY/api-server.crt --from-file=tls.key=$CERTIFICATES_DIRECTORY/api-server.key --dry-run -o yaml | kubectl apply -f -
fi
if [[ $INGRESS_TYPE == "istio" ]]; then
openssl x509 -in $CERTIFICATES_DIRECTORY/api-server.pem -out $CERTIFICATES_DIRECTORY/api-server.crt
openssl pkey -in $CERTIFICATES_DIRECTORY/api-server-key.pem -out $CERTIFICATES_DIRECTORY/api-server.key
kubectl -n istio-system create secret tls $RELEASE_NAME-api-ingress-secret --key=$CERTIFICATES_DIRECTORY/api-server.key --cert=$CERTIFICATES_DIRECTORY/api-server.crt --dry-run -o yaml | kubectl apply -f -
fi
# FIXME: name of secret
kubectl get secrets shipa-certificates -o json \
| jq ".data[\"ca.pem\"] |= \"$CA_CERT\"" \
| jq ".data[\"ca-key.pem\"] |= \"$CA_KEY\"" \
| jq ".data[\"client-ca.crt\"] |= \"$CLIENT_CA_CERT\"" \
| jq ".data[\"client-ca.key\"] |= \"$CLIENT_CA_KEY\"" \
| jq ".data[\"cert.pem\"] |= \"$DOCKER_CLUSTER_CERT\"" \
| jq ".data[\"key.pem\"] |= \"$DOCKER_CLUSTER_KEY\"" \
| jq ".data[\"api-server.crt\"] |= \"$API_SERVER_CERT\"" \
| jq ".data[\"api-server.key\"] |= \"$API_SERVER_KEY\"" \
| kubectl apply -f -
echo "CA:"
openssl x509 -in $CERTIFICATES_DIRECTORY/ca.pem -text -noout
echo "Docker cluster:"
openssl x509 -in $CERTIFICATES_DIRECTORY/docker-cluster.pem -text -noout

View File

@ -0,0 +1,17 @@
{
"signing": {
"default": {
"expiry": "168h"
},
"profiles": {
"server": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
}
}
}
}

View File

@ -0,0 +1,16 @@
{
"CN": "Shipa",
"hosts": [
"SHIPA_PUBLIC_IP",
"SHIPA_API_CNAMES"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "shipa"
}
]
}

View File

@ -0,0 +1,12 @@
{
"CN": "Shipa",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "shipa"
}
]
}

View File

@ -0,0 +1,16 @@
{
"CN": "Shipa docker cluster",
"hosts": [
"SHIPA_PUBLIC_IP",
"SHIPA_API_CNAMES"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "Shipa"
}
]
}

View File

@ -0,0 +1,12 @@
{
"CN": "Shipa",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"O": "shipa"
}
]
}

View File

@ -0,0 +1,103 @@
#!/bin/sh
echo "Waiting for shipa api"
until $(curl --output /dev/null --silent http://$SHIPA_ENDPOINT:$SHIPA_ENDPOINT_PORT); do
echo "."
sleep 1
done
SHIPA_CLIENT="/bin/shipa"
$SHIPA_CLIENT target add -s local $SHIPA_ENDPOINT --insecure --port=$SHIPA_ENDPOINT_PORT --disable-cert-validation
$SHIPA_CLIENT login <<EOF
$USERNAME
$PASSWORD
EOF
$SHIPA_CLIENT team create shipa-admin-team
$SHIPA_CLIENT team create shipa-system-team
$SHIPA_CLIENT framework add /scripts/default-framework-template.yaml
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
ADDR=$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
cp -v /scripts/default-cluster-template.yaml /etc/shipa/default-cluster-template.yaml
# replace vars in shipa-cluster yaml
sed -i "s/CLUSTER_TOKEN/$TOKEN/g" /etc/shipa/default-cluster-template.yaml
sed -i "s/CLUSTER_ADDR/$ADDR/g" /etc/shipa/default-cluster-template.yaml
# append yaml head, before CLUSTER_CACERT
grep "CLUSTER_CACERT" /etc/shipa/default-cluster-template.yaml -B 10000 | head -n -1 > /etc/shipa/default-cluster-template-final.yaml
# append ca.crt with indentation
sed 's/^/ /g' /var/run/secrets/kubernetes.io/serviceaccount/ca.crt >> /etc/shipa/default-cluster-template-final.yaml
# append yaml tail, after CLUSTER_CACERT
grep "CLUSTER_CACERT" /etc/shipa/default-cluster-template.yaml -A 10000 | awk 'FNR>1' >> /etc/shipa/default-cluster-template-final.yaml
$SHIPA_CLIENT cluster add --from-file /etc/shipa/default-cluster-template-final.yaml
$SHIPA_CLIENT role add TeamAdmin team
$SHIPA_CLIENT role permission add TeamAdmin team
$SHIPA_CLIENT role permission add TeamAdmin app
$SHIPA_CLIENT role permission add TeamAdmin cluster
$SHIPA_CLIENT role permission add TeamAdmin service
$SHIPA_CLIENT role permission add TeamAdmin service-instance
$SHIPA_CLIENT role add FrameworkAdmin framework
$SHIPA_CLIENT role permission add FrameworkAdmin framework
$SHIPA_CLIENT role permission add FrameworkAdmin node
$SHIPA_CLIENT role permission add FrameworkAdmin cluster
$SHIPA_CLIENT role add ClusterAdmin cluster
$SHIPA_CLIENT role permission add ClusterAdmin cluster
$SHIPA_CLIENT role default add --team-create TeamAdmin
$SHIPA_CLIENT role default add --framework-add FrameworkAdmin
$SHIPA_CLIENT role default add --cluster-add ClusterAdmin
if [ "x$DASHBOARD_ENABLED" != "xtrue" ]; then
echo "The dashboard is disabled"
exit 0
fi
COUNTER=0
echo "Creating the dashboard app"
until $SHIPA_CLIENT app create dashboard \
--framework=shipa-framework \
--team=shipa-admin-team \
-e SHIPA_ADMIN_USER=$USERNAME \
-e SHIPA_CLOUD=$SHIPA_CLOUD \
-e SHIPA_TARGETS=$SHIPA_TARGETS \
-e SHIPA_PAY_API_HOST=$SHIPA_PAY_API_HOST \
-e GOOGLE_RECAPTCHA_SITEKEY=$GOOGLE_RECAPTCHA_SITEKEY \
-e SHIPA_API_INTERNAL_URL=http://$SHIPA_ENDPOINT:$SHIPA_ENDPOINT_PORT \
-e SMARTLOOK_PROJECT_KEY=$SMARTLOOK_PROJECT_KEY; do
echo "Create dashboard failed with $?, waiting 15 seconds then trying again"
sleep 15
let COUNTER=COUNTER+1
if [ $COUNTER -gt 3 ]; then
echo "Failed to create dashboard three times, giving up"
exit 1
fi
done
echo "Setting private envs for dashboard"
$SHIPA_CLIENT env set -a dashboard \
SHIPA_PAY_API_TOKEN=$SHIPA_PAY_API_TOKEN \
GOOGLE_RECAPTCHA_SECRET=$GOOGLE_RECAPTCHA_SECRET \
LAUNCH_DARKLY_SDK_KEY=$LAUNCH_DARKLY_SDK_KEY -p
COUNTER=0
until $SHIPA_CLIENT app deploy -a dashboard -i $DASHBOARD_IMAGE; do
echo "Deploy dashboard failed with $?, waiting 30 seconds then trying again"
sleep 30
let COUNTER=COUNTER+1
if [ $COUNTER -gt 3 ]; then
echo "Failed to deploy dashboard three times, giving up"
exit 1
fi
done
# we need to restart api because of sidecar injection
if [[ $INGRESS_TYPE == "istio" ]]; then
kubectl rollout restart deployments $RELEASE_NAME-api -n $POD_NAMESPACE
fi

View File

@ -0,0 +1,45 @@
#!/bin/sh
set -o xtrace
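# Cleanup sketch: SELECTOR, NAMESPACE_MOD, RELEASE_NAME and RELEASE_NAMESPACE are expected
# to be provided by the chart; everything matching SELECTOR is removed except the Helm
# release secret and the namespace the release itself lives in.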
kubectl delete crd apps.shipa.io --ignore-not-found=true;
kubectl delete crd frameworks.shipa.io --ignore-not-found=true
kubectl delete crd jobs.shipa.io --ignore-not-found=true
kubectl delete crd autodiscoveredapps.shipa.io --ignore-not-found=true
kubectl delete ds --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true
kubectl delete deployment --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true
kubectl delete jobs --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD
kubectl delete daemonsets --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD
kubectl delete services --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true
kubectl delete sa --selector=$SELECTOR $NAMESPACE_MOD --ignore-not-found=true
kubectl delete configmap {{ template "shipa.fullname" . }}-leader -n {{ .Release.Namespace }} --ignore-not-found=true
kubectl delete clusterrolebindings --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD
kubectl delete clusterrole --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD
kubectl delete ingress --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD
kubectl delete endpoints --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD
kubectl delete netpol --selector=$SELECTOR --ignore-not-found=true $NAMESPACE_MOD
NAMESPACES=$(kubectl get ns --no-headers -o custom-columns=":metadata.name" --selector=$SELECTOR)
for NAMESPACE in $NAMESPACES; do
echo "Removing for namespace $NAMESPACE"
SECRETS=$(kubectl -n $NAMESPACE get secrets --selector=$SELECTOR -o name)
for SECRET in $SECRETS; do
echo "Removing secret $SECRET"
# remove all secrets, except secret for helm release
if [[ $SECRET != "secret/sh.helm.release.*.$RELEASE_NAME.*" ]]; then
kubectl -n $NAMESPACE delete $SECRET
fi
done
kubectl delete secret $RELEASE_NAME-api-ingress-secret
echo "Removing namespace $NAMESPACE"
# remove all namespaces, except namespace of helm installation
if [[ $NAMESPACE != $RELEASE_NAMESPACE ]]; then
kubectl delete ns $NAMESPACE
fi
done

View File

@ -0,0 +1,34 @@
****************************************** Thanks for choosing Shipa! *********************************************
1. Configured default user:
Username: {{ .Values.auth.adminUser }}
Password: {{ .Values.auth.adminPassword }}
2. If this is a production cluster, please configure persistent volumes.
The default reclaimPolicy for dynamically provisioned persistent volumes is "Delete", and
users are advised to change it to "Retain" for production.
The code snippet below can be used to set reclaimPolicy to "Retain" for all volumes:
PVCs=$(kubectl --namespace={{ .Release.Namespace }} get pvc -l release={{ .Release.Name }} -o name)
for pvc in $PVCs; do
volumeName=$(kubectl -n {{ .Release.Namespace }} get $pvc -o template --template=\{\{.spec.volumeName\}\})
kubectl -n {{ .Release.Namespace }} patch pv $volumeName -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done
3. Set default target for shipa-client:
export SHIPA_HOST=$(kubectl --namespace={{ .Release.Namespace }} get svc {{ template "shipa.fullname" . }}-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}") && if [[ -z $SHIPA_HOST ]]; then export SHIPA_HOST=$(kubectl --namespace={{ .Release.Namespace }} get svc {{ template "shipa.fullname" . }}-ingress-nginx -o jsonpath="{.status.loadBalancer.ingress[0].hostname}") ; fi
shipa target-add {{ .Release.Name }} $SHIPA_HOST -s
shipa login {{ .Values.auth.adminUser }}
shipa node list
shipa app list
************************************************************************************************************************
**** PLEASE BE PATIENT: Installing or upgrading Shipa may require downtime in order to perform database migrations. ****
************************************************************************************************************************

View File

@ -0,0 +1,120 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "shipa.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "shipa.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "shipa.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "shipa.labels" -}}
helm.sh/chart: {{ include "shipa.chart" . }}
{{ include "shipa.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
release: {{ .Release.Name }}
app: {{ include "shipa.name" . }}
shipa.io/is-shipa: "true"
{{- end -}}
{{/*
Uninstall labels
*/}}
{{- define "shipa.uninstall-labels" -}}
helm.sh/chart: {{ include "shipa.chart" . }}
{{ include "shipa.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
release: {{ .Release.Name }}
app: {{ include "shipa.name" . }}
{{- end -}}
{{/*
Selector labels
*/}}
{{- define "shipa.selectorLabels" -}}
app.kubernetes.io/name: {{ include "shipa.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "shipa.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (include "shipa.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
If target CNAMEs are set by the user in values.yaml, use the first CNAME from
the list as the main target, since shipa.conf can hold only one Shipa host.
*/}}
{{- define "shipa.GetMainTarget" -}}
{{- if not (empty (splitList "," (trimPrefix "\n" (include "shipa.cnames" .)))) }}
{{- index (splitList "," (trimPrefix "\n" (include "shipa.cnames" .))) 0 | quote -}}
{{- else -}}
{{- printf " " | quote -}}
{{- end -}}
{{- end -}}
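{{/*
Illustration (assumption): if "shipa.cnames" renders "api.example.com,api2.example.com",
then "shipa.GetMainTarget" resolves to "api.example.com".
*/}}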
{{/*
shipa.cnames returns all CNAMEs defined in values.yaml, plus api.<.Values.shipaCluster.ingress.ip>.shipa.cloud.
It should be used instead of shipaApi.cnames, as we always want this default address included.
*/}}
{{- define "shipa.cnames" -}}
{{- if has (printf "api.%s.shipa.cloud" .Values.shipaCluster.ingress.ip) .Values.shipaApi.cnames }}
{{ join "," .Values.shipaApi.cnames }}
{{- else }}
{{- if .Values.shipaCluster.ingress.ip }}
{{ join "," (append .Values.shipaApi.cnames (printf "api.%s.shipa.cloud" .Values.shipaCluster.ingress.ip)) }}
{{- else }}
{{ join "," .Values.shipaApi.cnames }}
{{- end }}
{{- end }}
{{- end }}
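{{/*
Illustration (assumption): with shipaApi.cnames = ["api.example.com"] and
shipaCluster.ingress.ip = "35.192.15.168", this renders
"api.example.com,api.35.192.15.168.shipa.cloud".
*/}}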
{{/*
For Shipa-managed nginx we use shipa-nginx-ingress as the ingress class name.
For user-managed nginx the default is nginx, but it can be overridden through values.yaml.
*/}}
{{- define "shipa.defaultNginxClassName" }}
{{ if and (eq .Values.shipaCluster.ingress.type "nginx") (not .Values.shipaCluster.ingress.ip)}}
shipa-nginx-ingress
{{- else }}
nginx
{{- end }}
{{- end }}

View File

@ -0,0 +1,292 @@
{{ if eq .Values.shipaCluster.ingress.type "istio" }}
{{- if not .Values.shipaApi.secureIngressOnly }}
{{- range $i, $servicePort := .Values.shipaApi.servicePorts }}
{{- if $.Values.shipaCluster.ingress.apiAccessOnIngressIp }}
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
annotations:
kubernetes.io/ingress.class: {{ default ( "istio" ) $.Values.shipaCluster.ingress.className }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-http-virutal-service-{{ $i }}
spec:
gateways:
- {{ template "shipa.fullname" $ }}-api-http-gateway-{{ $i }}
hosts:
- "*"
http:
- route:
- destination:
host: {{ template "shipa.fullname" $ }}-api
port:
number: {{ $servicePort }}
weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-http-gateway-{{ $i }}
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- "*"
port:
name: http
number: {{ $servicePort }}
protocol: HTTP
---
{{- if empty $.Values.shipaApi.serviceSecurePorts }}
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-rule-{{ $i }}
spec:
host: {{ template "shipa.fullname" $ }}-api
subsets:
- labels:
app: {{ template "shipa.fullname" $ }}-api
version: "1"
name: v1
---
{{- end }}
{{- end }}
{{- range $j, $cname := splitList "," (trimPrefix "\n" (include "shipa.cnames" $)) }}
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
annotations:
kubernetes.io/ingress.class: {{ default ( "istio" ) $.Values.shipaCluster.ingress.className }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-http-virutal-service-cname-{{ $i }}-{{ $j }}
spec:
gateways:
- {{ template "shipa.fullname" $ }}-api-http-gateway-cname-{{ $i }}-{{ $j }}
hosts:
- {{ $cname }}
http:
- route:
- destination:
host: {{ template "shipa.fullname" $ }}-api
port:
number: {{ $servicePort }}
weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-http-gateway-cname-{{ $i }}-{{ $j }}
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- {{ $cname }}
port:
name: http
number: {{ $servicePort }}
protocol: HTTP
---
{{- if empty $.Values.shipaApi.serviceSecurePorts }}
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-rule-cname-{{ $i }}-{{ $j }}
spec:
host: {{ template "shipa.fullname" $ }}-api
subsets:
- labels:
app: {{ template "shipa.fullname" $ }}-api
version: "1"
name: v1
---
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $i, $servicePort := .Values.shipaApi.serviceSecurePorts }}
{{- if $.Values.shipaCluster.ingress.apiAccessOnIngressIp }}
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
annotations:
kubernetes.io/ingress.class: {{ default ( "istio" ) $.Values.shipaCluster.ingress.className }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-https-virutal-service-{{ $i }}
spec:
gateways:
- {{ template "shipa.fullname" $ }}-api-https-gateway-{{ $i }}
hosts:
- "*"
http:
- route:
- destination:
host: {{ template "shipa.fullname" $ }}-api
port:
number: {{ $servicePort }}
weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-https-gateway-{{ $i }}
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- "*"
port:
name: https
number: {{ $servicePort }}
protocol: HTTPS
tls:
mode: SIMPLE
{{ if $.Values.shipaApi.customSecretName}}
credentialName: {{ $.Values.shipaApi.customSecretName }}
{{- else }}
credentialName: {{ template "shipa.fullname" $ }}-api-ingress-secret
{{- end }}
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-rule-{{ $i }}
spec:
host: {{ template "shipa.fullname" $ }}-api
subsets:
- labels:
app: {{ template "shipa.fullname" $ }}-api
version: "1"
name: v1
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: {{ $servicePort }}
tls:
mode: SIMPLE
---
{{- end }}
{{- range $j, $cname := splitList "," (trimPrefix "\n" (include "shipa.cnames" $)) }}
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
annotations:
kubernetes.io/ingress.class: {{ default ( "istio" ) $.Values.shipaCluster.ingress.className }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-https-virutal-service-cname-{{ $i }}-{{ $j }}
spec:
gateways:
- {{ template "shipa.fullname" $ }}-api-https-gateway-cname-{{ $i }}-{{ $j }}
hosts:
- {{ $cname }}
http:
- route:
- destination:
host: {{ template "shipa.fullname" $ }}-api
port:
number: {{ $servicePort }}
weight: 100
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-https-gateway-cname-{{ $i }}-{{ $j }}
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- {{ $cname }}
port:
name: https
number: {{ $servicePort }}
protocol: HTTPS
tls:
mode: SIMPLE
{{ if $.Values.shipaApi.customSecretName}}
credentialName: {{ $.Values.shipaApi.customSecretName }}
{{- else }}
credentialName: {{ template "shipa.fullname" $ }}-api-ingress-secret
{{- end }}
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
labels: {{- include "shipa.labels" $ | nindent 4 }}
name: {{ template "shipa.fullname" $ }}-api-rule-cname-{{ $i }}-{{ $j }}
spec:
host: {{ template "shipa.fullname" $ }}-api
subsets:
- labels:
app: {{ template "shipa.fullname" $ }}-api
version: "1"
name: v1
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
portLevelSettings:
- port:
number: {{ $servicePort }}
tls:
mode: SIMPLE
---
{{- end }}
{{- end }}
{{- if .Values.tags.defaultDB }}
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: {{ template "shipa.fullname" $ }}-mongodb-peer
spec:
mtls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: {{ template "shipa.fullname" $ }}-mongo-rule
spec:
host: "{{ template "shipa.fullname" $ }}-mongodb-replicaset.{{ .Release.Namespace }}.svc.{{ .Values.shipaCluster.clusterDomain }}"
trafficPolicy:
tls:
mode: DISABLE
{{ else }}
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: mongo
spec:
hosts:
{{- range $mongoShard := (splitList "," $.Values.externalMongodb.url) }}
- {{ trimSuffix ":27017" $mongoShard }}
{{- end }}
ports:
- number: 27017
name: tls
protocol: TLS
resolution: DNS
{{- end }}
{{- end }}

View File

@ -0,0 +1,107 @@
{{ if eq .Values.shipaCluster.ingress.type "nginx" }}
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}-nginx-tcp-services
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
data:
{{- if not .Values.shipaApi.secureIngressOnly }}
{{- range $servicePort := without (.Values.shipaApi.servicePorts | toStrings) "80" }}
{{ $servicePort }}: "{{ $.Release.Namespace }}/{{ include "shipa.fullname" $ }}-api:{{ $servicePort }}"
{{- end }}
{{- end }}
{{- range $secureContainerPort := without (.Values.shipaApi.serviceSecurePorts | toStrings) "443" }}
{{ $secureContainerPort }}: "{{ $.Release.Namespace }}/{{ include "shipa.fullname" $ }}-api:{{ $secureContainerPort }}"
{{- end }}
---
{{- if has "80" (.Values.shipaApi.servicePorts | toStrings) }}
{{- if .Values.shipaCluster.ingress.apiAccessOnIngressIp }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ template "shipa.fullname" . }}-api-http-ingress
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
{{- if $.Values.shipaApi.customIngressAnnotations }}
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
kubernetes.io/ingress.class: {{ default (include "shipa.defaultNginxClassName" . | trim) .Values.shipaCluster.ingress.className }}
nginx.org/websocket-services: "{{ template "shipa.fullname" . }}-api"
{{- if and $.Values.shipaApi.secureIngressOnly (has "443" ($.Values.shipaApi.serviceSecurePorts | toStrings)) }}
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/ssl-redirect: "true"
{{- else }}
nginx.ingress.kubernetes.io/ssl-redirect: "false"
ingress.kubernetes.io/ssl-redirect: "false"
{{- end }}
spec:
rules:
- http:
paths:
- backend:
service:
name: {{ template "shipa.fullname" . }}-api
port:
number: 80
path: /
pathType: Prefix
{{ if has "443" (.Values.shipaApi.serviceSecurePorts | toStrings) }}
tls:
- secretName: {{ template "shipa.fullname" . }}-api-ingress-secret
{{- end }}
{{- end }}
{{- end }}
---
{{ if not (empty (trimPrefix "\n" (include "shipa.cnames" .))) }}
{{- range $i, $cname := splitList "," (trimPrefix "\n" (include "shipa.cnames" .)) }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ template "shipa.fullname" $ }}-api-http-ingress-cname-{{ $i }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
annotations:
{{- if $.Values.shipaApi.customIngressAnnotations }}
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
kubernetes.io/ingress.class: {{ default ( include "shipa.defaultNginxClassName" $ | trim) $.Values.shipaCluster.ingress.className }}
{{- if and $.Values.shipaApi.secureIngressOnly (has "443" ($.Values.shipaApi.serviceSecurePorts | toStrings)) }}
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/ssl-redirect: "true"
{{- else }}
nginx.ingress.kubernetes.io/ssl-redirect: "false"
ingress.kubernetes.io/ssl-redirect: "false"
{{- end }}
nginx.org/websocket-services: "{{ template "shipa.fullname" $ }}-api"
spec:
rules:
- host: {{ $cname }}
http:
paths:
- backend:
service:
name: {{ template "shipa.fullname" $ }}-api
port:
number: 80
path: /
pathType: ImplementationSpecific
{{ if has "443" ($.Values.shipaApi.serviceSecurePorts | toStrings) }}
{{ if $.Values.shipaApi.customSecretName}}
tls:
- secretName: {{ $.Values.shipaApi.customSecretName }}
hosts:
- {{ $cname }}
{{- else }}
tls:
- secretName: {{ template "shipa.fullname" $ }}-api-ingress-secret
hosts:
- {{ $cname }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,108 @@
{{ if eq .Values.shipaCluster.ingress.type "traefik" }}
{{- if not .Values.shipaApi.secureIngressOnly }}
{{- range $i, $servicePort := .Values.shipaApi.servicePorts }}
{{- if $.Values.shipaCluster.ingress.apiAccessOnIngressIp }}
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: {{ template "shipa.fullname" $ }}-api-http-ingress-{{ $i }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
annotations:
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
spec:
entryPoints:
- web
routes:
- match: PathPrefix(`/`)
kind: Rule
services:
- name: {{ template "shipa.fullname" $ }}-api
port: {{ $servicePort }}
scheme: http
---
{{- end }}
{{- range $j, $cname := splitList "," (trimPrefix "\n" (include "shipa.cnames" $)) }}
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: {{ template "shipa.fullname" $ }}-api-http-ingress-cname-{{ $i }}-{{ $j }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
annotations:
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
spec:
entryPoints:
- web
routes:
- match: Host(`{{ $cname }}`)
kind: Rule
services:
- name: {{ template "shipa.fullname" $ }}-api
port: {{ $servicePort }}
scheme: http
---
{{- end }}
{{- end }}
{{- end }}
{{- if $.Values.shipaCluster.ingress.apiAccessOnIngressIp }}
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: {{ template "shipa.fullname" $ }}-api-https-ingress
labels: {{- include "shipa.labels" $ | nindent 4 }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
annotations:
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4 }}
{{- end }}
spec:
entryPoints:
- websecure
routes:
- match: PathPrefix(`/`)
kind: Rule
services:
- name: {{ template "shipa.fullname" $ }}-api
port: {{ first .Values.shipaApi.servicePorts }}
scheme: http
tls:
{{ if $.Values.shipaApi.customSecretName}}
secretName: {{ $.Values.shipaApi.customSecretName }}
{{- else }}
secretName: {{ template "shipa.fullname" $ }}-api-ingress-secret
{{- end }}
---
{{- end }}
{{- range $i, $cname := splitList "," (trimPrefix "\n" (include "shipa.cnames" $)) }}
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: {{ template "shipa.fullname" $ }}-api-https-ingress-cname-{{ $i }}
labels: {{- include "shipa.labels" $ | nindent 4 }}
{{- if $.Values.shipaApi.customIngressAnnotations }}
annotations:
{{ toYaml $.Values.shipaApi.customIngressAnnotations | indent 4}}
{{- end }}
spec:
entryPoints:
- websecure
routes:
- match: Host(`{{ $cname }}`)
kind: Rule
services:
- name: {{ template "shipa.fullname" $ }}-api
port: {{ first $.Values.shipaApi.servicePorts }}
scheme: http
tls:
{{ if $.Values.shipaApi.customSecretName}}
secretName: {{ $.Values.shipaApi.customSecretName }}
{{- else }}
secretName: {{ template "shipa.fullname" $ }}-api-ingress-secret
{{- end }}
domains:
- main: {{ $cname }}
---
{{- end }}
{{- end }}

View File

@ -0,0 +1,86 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-clair-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
config.template.yaml: |-
#
# This file is mounted to /clair-config/config.template.yaml and then processed by /entrypoint.sh
#
clair:
database:
# Database driver
type: pgsql
options:
# PostgreSQL Connection string
# https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING
{{- $host := (default (printf "%s-postgres.%s" (include "shipa.fullname" .) .Release.Namespace) .Values.postgres.source.host) }}
{{- $port := .Values.postgres.source.port }}
{{- $user := .Values.postgres.source.user }}
{{- $sslmode := .Values.postgres.source.sslmode }}
source: host={{ $host }} port={{ $port }} user={{ $user }} sslmode={{ $sslmode }} statement_timeout=60000 password=$POSTGRES_PASSWORD
# Number of elements kept in the cache
# Values unlikely to change (e.g. namespaces) are cached in order to prevent needless roundtrips to the database.
cachesize: 16384
# 32-bit URL-safe base64 key used to encrypt pagination tokens
# If one is not provided, it will be generated.
# Multiple clair instances in the same cluster need the same value.
paginationkey:
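# Example (assumption): a key can be generated with "head -c 32 /dev/urandom | base64 | tr '+/' '-_'";
# the same value must be shared by every clair instance in the cluster.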
api:
# v3 grpc/RESTful API server address
addr: "0.0.0.0:6060"
# Health server address
# This is an unencrypted endpoint useful for load balancers to check the healthiness of the clair server.
healthaddr: "0.0.0.0:6061"
# Deadline before an API request will respond with a 503
timeout: 900s
# Optional PKI configuration
# If you want to easily generate client certificates and CAs, try the following projects:
# https://github.com/coreos/etcd-ca
# https://github.com/cloudflare/cfssl
servername:
cafile:
keyfile:
certfile:
updater:
# Frequency the database will be updated with vulnerabilities from the default data sources
# The value 0 disables the updater entirely.
interval: 2h
enabledupdaters:
- debian
- ubuntu
- rhel
- oracle
- alpine
- suse
notifier:
# Number of attempts before the notification is marked as failed to be sent
attempts: 3
# Duration before a failed notification is retried
renotifyinterval: 2h
http:
# Optional endpoint that will receive notifications via POST requests
endpoint:
# Optional PKI configuration
# If you want to easily generate client certificates and CAs, try the following projects:
# https://github.com/cloudflare/cfssl
# https://github.com/coreos/etcd-ca
servername:
cafile:
keyfile:
certfile:
# Optional HTTP Proxy: must be a valid URL (including the scheme).
proxy:

View File

@ -0,0 +1,59 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-clair
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-clair
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-clair
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: clair
{{- if .Values.clair.image }}
image: "{{ .Values.clair.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.clair.repositoryBasename }}:{{ .Values.clair.tag }}"
{{- end }}
imagePullPolicy: IfNotPresent
ports:
- name: clair
containerPort: 6060
protocol: TCP
- name: health
containerPort: 6061
protocol: TCP
volumeMounts:
- name: {{ template "shipa.fullname" . }}-clair-config
mountPath: /clair-config/
- name: config-dir
mountPath: /etc/clair/
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: postgres-password
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: config-dir
emptyDir: {}
- name: {{ template "shipa.fullname" . }}-clair-config
configMap:
name: {{ template "shipa.fullname" . }}-clair-config
items:
- key: config.template.yaml
path: config.template.yaml

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-clair
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
name: {{ template "shipa.fullname" . }}-clair
ports:
- port: 6060
targetPort: 6060
protocol: TCP
name: clair
- port: 6061
targetPort: 6061
protocol: TCP
name: health

View File

@ -0,0 +1,36 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-metrics-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
prometheus.yml: |-
#
# DO NOT EDIT. Can be updated by shipa helm chart
#
global:
scrape_interval: 1m
scrape_configs:
- job_name: "pushgateway"
honor_labels: true
scheme: http
static_configs:
- targets: ['127.0.0.1:9093']
labels:
source: pushgateway
- job_name: "traefik"
honor_labels: true
scheme: http
static_configs:
- targets: ['{{ template "shipa.fullname" . }}-traefik-internal.{{ .Release.Namespace }}:9095']
{{- if .Values.metrics.extraPrometheusConfiguration }}
#
# User defined extra configuration
#
{{- range $line, $value := ( split "\n" .Values.metrics.extraPrometheusConfiguration ) }}
{{ $value }}
{{- end }}
{{- end }}
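{{- /*
Example values.yaml snippet (assumption) for metrics.extraPrometheusConfiguration:
  metrics:
    extraPrometheusConfiguration: |
      - job_name: "node"
        static_configs:
        - targets: ['node-exporter:9100']
*/}}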

View File

@ -0,0 +1,59 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-metrics
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-metrics
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-metrics
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
# Please do not scale the metrics container; it does not use a storage lock (--storage.tsdb.no-lockfile)
- name: metrics
{{- if .Values.metrics.image }}
image: "{{ .Values.metrics.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.metrics.repositoryBasename }}:{{ .Values.metrics.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.metrics.pullPolicy }}
env:
- name: PROMETHEUS_ARGS
value: "--web.enable-admin-api {{ default ("--storage.tsdb.retention.time=1d") .Values.metrics.prometheusArgs }}"
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: metrics-password
ports:
- name: prometheus
containerPort: 9090
protocol: TCP
- name: pushgateway
containerPort: 9091
protocol: TCP
volumeMounts:
- name: "{{ template "shipa.fullname" . }}-metrics-config"
mountPath: /etc/prometheus/config
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: {{ template "shipa.fullname" . }}-metrics-config
configMap:
name: {{ template "shipa.fullname" . }}-metrics-config
items:
- key: prometheus.yml
path: prometheus.yml

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-metrics
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
name: {{ template "shipa.fullname" . }}-metrics
ports:
- port: 9090
targetPort: 9090
protocol: TCP
name: prometheus
- port: 9091
targetPort: 9091
protocol: TCP
name: pushgateway

View File

@ -0,0 +1,20 @@
{{ if and (eq .Values.shipaCluster.ingress.type "nginx") (not .Values.shipaCluster.ingress.ip) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-nginx
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
data:
{{- if .Values.shipaCluster.ingress.config }}
{{- range $key, $value := .Values.shipaCluster.ingress.config }}
{{ $key }}: {{ $value }}
{{- end }}
{{- else }}
proxy-body-size: "512M"
proxy-read-timeout: "300"
proxy-connect-timeout: "300"
proxy-send-timeout: "300"
upstream-keepalive-timeout: "300"
{{- end }}
{{- end }}
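{{- /*
Example values.yaml override (assumption) that replaces the defaults above:
  shipaCluster:
    ingress:
      config:
        proxy-body-size: "1024M"
        proxy-read-timeout: "600"
*/}}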

View File

@ -0,0 +1,94 @@
{{ if and (eq .Values.shipaCluster.ingress.type "nginx") (not .Values.shipaCluster.ingress.ip) }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
annotations:
sidecar.istio.io/inject: "false"
spec:
replicas: 1
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-nginx-ingress
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-nginx-ingress
annotations:
sidecar.istio.io/inject: "false"
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
# wait up to 30 seconds for connections to drain
terminationGracePeriodSeconds: 30
serviceAccountName: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
nodeSelector:
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: {{ .Values.shipaCluster.ingress.image }}
args:
- /nginx-ingress-controller
- --election-id={{ template "shipa.fullname" . }}-leader
- --configmap=$(POD_NAMESPACE)/{{ template "shipa.fullname" . }}-nginx
- --tcp-services-configmap=$(POD_NAMESPACE)/{{ template "shipa.fullname" . }}-nginx-tcp-services
- --publish-service=$(POD_NAMESPACE)/{{ template "shipa.fullname" . }}-ingress-nginx
- --ingress-class={{ default ( include "shipa.defaultNginxClassName" . | trim) .Values.shipaCluster.ingress.className }}
- --default-ssl-certificate={{ .Release.Namespace }}/{{ template "shipa.fullname" . }}-api-ingress-secret
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
{{- if not .Values.shipaApi.secureIngressOnly }}
{{ range $i, $servicePort := .Values.shipaApi.servicePorts }}
- name: shipa-{{ $i }}
containerPort: {{ $servicePort }}
protocol: TCP
{{- end }}
{{- end }}
{{ range $i, $servicePort := .Values.shipaApi.serviceSecurePorts }}
- name: shipa-secure-{{ $i }}
containerPort: {{ $servicePort }}
protocol: TCP
{{- end }}
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
{{- end }}

View File

@ -0,0 +1,130 @@
{{ if and (eq .Values.shipaCluster.ingress.type "nginx") (not .Values.shipaCluster.ingress.ip) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-clusterrole
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-role
labels: {{- include "shipa.labels" . | nindent 4 }}
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- "{{ template "shipa.fullname" . }}-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-role-nisa-binding
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "shipa.fullname" . }}-nginx-ingress-role
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-clusterrole-nisa-binding
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "shipa.fullname" . }}-nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,55 @@
{{ if and (eq .Values.shipaCluster.ingress.type "nginx") (not .Values.shipaCluster.ingress.ip) }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-ingress-nginx
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
spec:
type: "{{ .Values.shipaCluster.ingress.serviceType }}"
{{- if eq .Values.shipaCluster.ingress.serviceType "LoadBalancer" }}
{{- if .Values.shipaCluster.ingress.loadBalancerIp }}
loadBalancerIP: "{{ .Values.shipaCluster.ingress.loadBalancerIp }}"
{{- end }}
{{- end }}
{{- if eq .Values.shipaCluster.ingress.serviceType "ClusterIP" }}
{{- if .Values.shipaCluster.ingress.clusterIp }}
clusterIP: "{{ .Values.shipaCluster.ingress.clusterIp }}"
{{- end }}
{{- end }}
selector:
name: {{ template "shipa.fullname" . }}-nginx-ingress
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
- name: https
port: 443
protocol: TCP
targetPort: 443
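# expose any additional API ports from shipaApi.servicePorts / shipaApi.serviceSecurePorts below;
# 80 and 443 are skipped because they are already defined above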
{{- if not .Values.shipaApi.secureIngressOnly }}
{{- range $i, $servicePort := without (.Values.shipaApi.servicePorts | toStrings) "80" }}
- port: {{ $servicePort }}
name: shipa-{{ $i }}
targetPort: {{ $.Values.shipaApi.port }}
protocol: TCP
{{- if eq $.Values.shipaCluster.ingress.serviceType "NodePort" }}
{{- if $.Values.shipaCluster.ingress.nodePort }}
nodePort: {{ $.Values.shipaCluster.ingress.nodePort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $i, $servicePort := without (.Values.shipaApi.serviceSecurePorts | toStrings) "443" }}
- port: {{ $servicePort }}
name: shipa-secure-{{ $i }}
targetPort: {{ $.Values.shipaApi.securePort }}
protocol: TCP
{{- if eq $.Values.shipaCluster.ingress.serviceType "NodePort" }}
{{- if $.Values.shipaCluster.ingress.nodePort }}
nodePort: {{ $.Values.shipaCluster.ingress.nodePort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,8 @@
{{ if and (eq .Values.shipaCluster.ingress.type "nginx") (not .Values.shipaCluster.ingress.ip) }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "shipa.fullname" . }}-nginx-ingress-serviceaccount
labels: {{- include "shipa.labels" . | nindent 4 }}
shipa.io/shipa-api-ingress-controller: "true"
{{- end }}

View File

@ -0,0 +1,48 @@
{{- if .Values.postgres.create }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "shipa.fullname" . }}-postgres
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: "false"
spec:
selector:
matchLabels:
name: {{ template "shipa.fullname" . }}-postgres
template:
metadata:
labels:
name: {{ template "shipa.fullname" . }}-postgres
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: postgres
image: {{ .Values.postgres.image }}
imagePullPolicy: IfNotPresent
ports:
- name: postgres
containerPort: 5432
protocol: TCP
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
subPath: postgres
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: postgres-password
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: data
persistentVolumeClaim:
claimName: {{ template "shipa.fullname" . }}-postgres-pvc
{{- end }}

View File

@ -0,0 +1,20 @@
{{- if .Values.postgres.create }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}-postgres-pvc
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
accessModes:
- {{ .Values.postgres.persistence.accessMode | quote }}
resources:
requests:
storage: {{ .Values.postgres.persistence.size | quote }}
{{- if .Values.postgres.persistence.storageClass }}
{{- if (eq "-" .Values.postgres.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.postgres.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,16 @@
{{- if .Values.postgres.create }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "shipa.fullname" . }}-postgres
labels: {{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
name: {{ template "shipa.fullname" . }}-postgres
ports:
- port: 5432
targetPort: 5432
protocol: TCP
name: postgres
{{- end }}

View File

@ -0,0 +1,169 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-api-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
shipa.conf: |-
shipaVersion: {{ .Chart.Version }}
tls-listen: "0.0.0.0:{{ .Values.shipaApi.securePort }}"
listen: "0.0.0.0:{{ .Values.shipaApi.port }}"
host: https://SHIPA_PUBLIC_IP:{{ first .Values.shipaApi.serviceSecurePorts }}
use-tls: true
shipaCloud:
enabled: {{ .Values.shipaCloud.enabled }}
tls:
server-cert: /certs/api-server.crt
server-key: /certs/api-server.key
database:
{{- if not .Values.tags.defaultDB }}
url: {{ .Values.externalMongodb.url }}
tls: {{ .Values.externalMongodb.tls.enable }}
{{ else }}
{{- if eq .Values.shipaCluster.ingress.type "istio" }}
url: {{ .Release.Name }}-mongodb-replicaset.{{ .Release.Namespace }}.svc.{{ .Values.shipaCluster.clusterDomain }}:27017
{{ else }}
url: {{ .Release.Name }}-mongodb-replicaset:27017
{{- end }}
tls: false
{{- end }}
name: shipa
username: $DB_USERNAME
password: $DB_PASSWORD
license: {{ .Values.license }}
organization:
id: SHIPA_ORGANIZATION_ID
dashboard:
enabled: $DASHBOARD_ENABLED
image: $DASHBOARD_IMAGE
envs:
SHIPA_ADMIN_USER: {{ .Values.auth.adminUser | quote }}
SHIPA_CLOUD: {{ .Values.shipaCloud.enabled | quote }}
SHIPA_TARGETS: {{ trimPrefix "\n" (include "shipa.cnames" .) }}
SHIPA_PAY_API_HOST: {{ .Values.shipaCloud.shipaPayApi.host }}
SHIPA_PAY_API_TOKEN: {{ .Values.shipaCloud.shipaPayApi.token }}
GOOGLE_RECAPTCHA_SITEKEY: {{ .Values.shipaCloud.googleRecaptcha.sitekey }}
GOOGLE_RECAPTCHA_SECRET: {{ .Values.shipaCloud.googleRecaptcha.secret }}
SMARTLOOK_PROJECT_KEY: {{ .Values.shipaCloud.smartlook.projectKey }}
LAUNCH_DARKLY_SDK_KEY: {{ .Values.shipaCloud.launchDarkly.sdkKey }}
SHIPA_API_INTERNAL_URL: http://{{ template "shipa.fullname" . }}-api.{{ .Release.Namespace }}.svc.{{ .Values.shipaCluster.clusterDomain }}:{{ first .Values.shipaApi.servicePorts }}
auth:
admin-email: {{ .Values.auth.adminUser | quote }}
dummy-domain: {{ .Values.auth.dummyDomain | quote }}
token-expire-days: 2
hash-cost: 4
user-registration: true
user-activation:
cert: LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF6TXIwd3hETklDcm9JN3VEVkdoTgpFZytVbTdkQzk3NVZpM1l1NnJHUUdlc3ZwZTY5T2NhT0VxZHFML0NNWGVRMW1oTVFtUnplQnlxWEJ1Q2xOemphCjlEbjV2WTBlVnNIZUhuVTJ4bkkyV1dSR3JjUE1mRGJuRzlDSnNZQmdHd3A2eDcrYVR2RXZCRFBtS3YrcjdOcysKUXhhNzBFZEk4NTZLMWQyTTQ1U3RuZW1hcm51cjdOTDdGb2VsS1FWNGREd1hxU2EvVW1tdHdOOGNSTENUQ0N4NQpObkVya2UrTWo1RFFqTW5TUlRHbjFxOE91azlOUXRxNDlrbFMwMUhIQTJBWnR6ZExteTMrTktXRVZta3Z0cGgxClJseHBtZVQ5SERNbHI5aFI3U3BidnRHeVZVUG1pbXVYWFA4cXdOcHZab01Ka3hWRm4zbWNRVHRMbk8xa0Jjb1cKZVFJREFRQUIKLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==
provisioner: kubernetes
metrics:
host: {{ template "shipa.fullname" . }}-metrics
password: $METRICS_PASSWORD
# this section contains the configuration of the Prometheus Metrics Exporter
prometheus-metrics-exporter:
{{- if .Values.prometheusMetricsExporter.image }}
image: "{{ .Values.prometheusMetricsExporter.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.prometheusMetricsExporter.repositoryBasename }}:{{ .Values.prometheusMetricsExporter.tag }}"
{{- end }}
docker:
cluster:
storage: mongodb
mongo-database: cluster
collection: docker
registry-scheme: https
repository-namespace: shipa
router: traefik
deploy-cmd: /var/lib/shipa/deploy
run-cmd:
bin: /var/lib/shipa/start
port: "8888"
tls:
root-path: /certs
auto-scale:
enabled: true
run-interval: $DOCKER_AUTOSCALE_RUN_INTERVAL
routers:
traefik:
type: traefik
domain: shipa.cloud
istio:
type: istio
nginx:
type: nginx
serviceType: {{ .Values.shipaCluster.ingress.serviceType }}
ip: {{ .Values.shipaCluster.ingress.ip }}
queue:
mongo-database: queuedb
quota:
units-per-app: 4
apps-per-user: 8
log:
disable-syslog: true
use-stderr: true
clair:
server: http://{{ template "shipa.fullname" . }}-clair:6060
disabled: false
kubernetes:
# the pod name is used by leader election as the identifier of the current shipa-api instance
pod-name: $POD_NAME
pod-namespace: $POD_NAMESPACE
core-services-address: SHIPA_PUBLIC_IP
use-pool-namespaces: true
remote-cluster-ingress:
http-port: 80
https-port: 443
protected-port: 31567
service-type: LoadBalancer
ketch:
enabled: true
{{- if .Values.ketch.image }}
image: "{{ .Values.ketch.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.ketch.repositoryBasename }}:{{ .Values.ketch.tag }}"
{{- end }}
metrics-address: {{ .Values.ketch.metricsAddress }}
cert-manager:
install-url: {{ .Values.certManager.installUrl }}
cluster-update:
# default value specifying whether cluster-update operations are allowed to restart ingress controllers
ingress-restart-is-allowed: {{ .Values.shipaApi.allowRestartIngressControllers }}
app-auto-discovery:
enabled: {{ .Values.shipaApi.appAutoDiscoveryEnabled }}
debug: {{ .Values.shipaApi.debug }}
node-traefik:
image: {{ .Values.shipaNodeTraefik.image }}
user: {{ .Values.shipaNodeTraefik.user }}
password: $NODE_TRAEFIK_PASSWORD
certificates:
root: /certs/
ca: ca.pem
ca-key: ca-key.pem
client-ca: client-ca.crt
client-ca-key: client-ca.key
is-ca-endpoint-disabled: {{ .Values.shipaApi.isCAEndpointDisabled }}
shipa-controller:
{{- if .Values.shipaController.image }}
image: "{{ .Values.shipaController.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.shipaController.repositoryBasename }}:{{ .Values.shipaController.tag }}"
{{- end }}
busybody:
{{- if .Values.busybody.image }}
image: "{{ .Values.busybody.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.busybody.repositoryBasename }}:{{ .Values.busybody.tag }}"
{{- end }}
socket: /var/run/docker.sock
signatures: single # multiple/single
launch-darkly:
api-key: {{ .Values.shipaCloud.launchDarkly.sdkKey }}

View File

@ -0,0 +1,239 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "shipa.fullname" . }}-api
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
sidecar.istio.io/inject: {{ eq .Values.shipaCluster.ingress.type "istio" | quote }}
checksum/config: {{ include (print $.Template.BasePath "/shipa-api-configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/shipa-secret.yaml") . | sha256sum }}
checksum/db-auth-secret: {{ include (print $.Template.BasePath "/shipa-db-auth-secrets.yaml") . | sha256sum }}
spec:
{{- if .Values.shipaApi.allowMigrationDowntime }}
strategy:
type: Recreate
{{- end }}
selector:
matchLabels:
{{- include "shipa.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "shipa.selectorLabels" . | nindent 8 }}
annotations:
timestamp: "{{ date "20060102150405" now }}"
sidecar.istio.io/inject: {{ eq .Values.shipaCluster.ingress.type "istio" | quote }}
spec:
nodeSelector:
kubernetes.io/os: linux
{{- if .Values.rbac.enabled }}
serviceAccountName: {{ template "shipa.fullname" . }}
{{- else }}
serviceAccountName: default
{{- end }}
securityContext:
runAsNonRoot: true
runAsUser: 65532
runAsGroup: 65532
initContainers:
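# bootstrap runs /scripts/bootstrap.sh with the default config mounted at /etc/shipa-default/
# and the shared /etc/shipa/ directory that is later mounted by the main shipa container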
- name: bootstrap
{{- if .Values.cli.image }}
image: "{{ .Values.cli.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.cli.repositoryBasename }}:{{ .Values.cli.tag }}"
{{- end }}
command:
- /scripts/bootstrap.sh
imagePullPolicy: {{ .Values.cli.pullPolicy }}
volumeMounts:
- name: scripts
mountPath: /scripts
- name: shipa-conf
mountPath: /etc/shipa-default/
- name: config-dir
mountPath: /etc/shipa/
env:
- name: RELEASE_NAME
value: {{ template "shipa.fullname" . }}
- name: INGRESS_TYPE
value: {{ default ( "nginx" ) .Values.shipaCluster.ingress.type | quote }}
- name: NGINX_SERVICE
value: {{ template "shipa.fullname" . }}-ingress-nginx
- name: SHIPA_PORT
value: {{ first .Values.shipaApi.servicePorts | quote }}
- name: SHIPA_API_CNAMES
value: {{ join "\",\"" (splitList "," (trimPrefix "\n" (include "shipa.cnames" .)) ) | quote }}
- name: SHIPA_ORGANIZATION_ID
valueFrom:
configMapKeyRef:
name: {{ template "shipa.fullname" . }}-defaults-configmap
key: shipa-org-id
- name: SHIPA_MAIN_TARGET
value: {{ template "shipa.GetMainTarget" . }}
- name: WAIT_FOR_NGINX
value: {{ and (eq .Values.shipaCluster.ingress.type "nginx") (not .Values.shipaCluster.ingress.ip) | quote }}
- name: INGRESS_IP
value: {{ .Values.shipaCluster.ingress.ip }}
- name: NGINX_DEPLOYMENT_NAME
value: {{ template "shipa.fullname" . }}-nginx-ingress
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: init
{{- if .Values.shipaApi.image }}
image: "{{ .Values.shipaApi.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.shipaApi.repositoryBasename }}:{{ .Values.shipaApi.tag }}"
{{- end }}
# this init container creates the admin user so that the main shipa container
# does not need ENV variables holding admin credentials
command:
- /bin/shipad
- root
- user
- create
- --ignore-if-exists
imagePullPolicy: {{ .Values.shipaApi.pullPolicy }}
volumeMounts:
- name: config-dir
mountPath: /etc/shipa/
- name: certificates
mountPath: /certs/
env:
- name: SHIPA_ADMIN_USER
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: username
- name: SHIPA_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: password
{{- if not .Values.tags.defaultDB }}
{{- if and ( .Values.externalMongodb.auth.username ) ( .Values.externalMongodb.auth.password ) }}
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: password
{{- end }}
{{- end }}
containers:
- name: shipa
{{- if .Values.shipaApi.image }}
image: "{{ .Values.shipaApi.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.shipaApi.repositoryBasename }}:{{ .Values.shipaApi.tag }}"
{{- end }}
imagePullPolicy: {{ .Values.shipaApi.pullPolicy }}
env:
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: metrics-password
- name: NODE_TRAEFIK_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: node-traefik-password
- name: DASHBOARD_IMAGE
{{- if .Values.dashboard.image }}
value: "{{ .Values.dashboard.image }}"
{{- else }}
value: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.dashboard.repositoryBasename }}:{{ .Values.dashboard.tag }}"
{{- end }}
- name: DASHBOARD_ENABLED
value: "{{ .Values.dashboard.enabled }}"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- if not .Values.tags.defaultDB }}
{{- if and ( .Values.externalMongodb.auth.username ) ( .Values.externalMongodb.auth.password ) }}
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-db-auth-secret
key: password
{{- end }}
{{- end }}
ports:
- name: shipa
containerPort: {{ .Values.shipaApi.port }}
protocol: TCP
- name: shipa-secure
containerPort: {{ .Values.shipaApi.securePort }}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: {{ .Values.shipaApi.port }}
periodSeconds: 2
failureThreshold: 4
startupProbe:
httpGet:
path: /
port: {{ .Values.shipaApi.port }}
failureThreshold: 90
periodSeconds: 2
readinessProbe:
httpGet:
path: /
port: {{ .Values.shipaApi.port }}
periodSeconds: 3
initialDelaySeconds: 5
failureThreshold: 50
successThreshold: 1
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
runAsNonRoot: true
capabilities:
drop:
- ALL
volumeMounts:
- name: config-dir
mountPath: /etc/shipa/
readOnly: true
- name: certificates
mountPath: /certs/
readOnly: true
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: config-dir
emptyDir: {}
- name: shipa-conf
configMap:
name: {{ template "shipa.fullname" . }}-api-config
items:
- key: shipa.conf
path: shipa.conf
- name: certificates
secret:
secretName: shipa-certificates
- name: scripts
configMap:
defaultMode: 0755
name: {{ template "shipa.fullname" . }}-api-init-config

View File

@ -0,0 +1,61 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-api-init-config
labels: {{- include "shipa.labels" . | nindent 4 }}
data:
init-job.sh: |
{{ .Files.Get "scripts/init-job.sh" | indent 4 }}
bootstrap.sh: |
{{ .Files.Get "scripts/bootstrap.sh" | indent 4 }}
csr-docker-cluster.json: |
{{ .Files.Get "scripts/csr-docker-cluster.json" | indent 4 }}
csr-shipa-ca.json: |
{{ .Files.Get "scripts/csr-shipa-ca.json" | indent 4 }}
csr-client-ca.json: |
{{ .Files.Get "scripts/csr-client-ca.json" | indent 4 }}
csr-api-config.json: |
{{ .Files.Get "scripts/csr-api-config.json" | indent 4 }}
csr-api-server.json: |
{{ .Files.Get "scripts/csr-api-server.json" | indent 4 }}
default-framework-template.yaml: |
shipaFramework: shipa-framework
resources:
general:
setup:
force: false
default: true
public: true
provisioner: kubernetes
kubeNamespace: {{ .Release.Namespace }}
security:
disableScan: true
router: {{ default ( "nginx" ) .Values.shipaCluster.ingress.type }}
access:
append:
- shipa-admin-team
- shipa-system-team
default-cluster-template.yaml: |
name: shipa-cluster
endpoint:
addresses:
- CLUSTER_ADDR
caCert: |
CLUSTER_CACERT
token: CLUSTER_TOKEN
resources:
frameworks:
- name: shipa-framework
ingressControllers:
- ingressip: {{ .Values.shipaCluster.ingress.ip }}
serviceType: {{ default ( "LoadBalancer" ) .Values.shipaCluster.ingress.serviceType | quote }}
type: {{ default ( "nginx" ) .Values.shipaCluster.ingress.type }}
{{ if eq .Values.shipaCluster.ingress.type "nginx" }}
className: {{ default ( include "shipa.defaultNginxClassName" . | trim ) .Values.shipaCluster.ingress.className }}
{{- end }}
{{ if eq .Values.shipaCluster.ingress.type "traefik" }}
className: {{ default ("traefik") .Values.shipaCluster.ingress.className }}
{{- end }}
{{ if eq .Values.shipaCluster.ingress.type "istio" }}
className: {{ default ("istio") .Values.shipaCluster.ingress.className }}
{{- end }}

View File

@ -0,0 +1,115 @@
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ template "shipa.fullname" . }}-init-job-{{ .Release.Revision }}"
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "post-install"
sidecar.istio.io/inject: "false"
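# post-install hook: runs /scripts/init-job.sh once against the freshly installed Shipa API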
spec:
backoffLimit: 5
template:
metadata:
name: "{{ template "shipa.fullname" . }}-init-job-{{ .Release.Revision }}"
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
terminationGracePeriodSeconds: 3
{{- if .Values.rbac.enabled }}
serviceAccountName: {{ template "shipa.fullname" . }}
{{- else }}
serviceAccountName: default
{{- end }}
restartPolicy: Never
containers:
- name: migrations
{{- if .Values.cli.image }}
image: "{{ .Values.cli.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.cli.repositoryBasename }}:{{ .Values.cli.tag }}"
{{- end }}
command:
- /scripts/init-job.sh
imagePullPolicy: {{ .Values.cli.pullPolicy }}
env:
- name: RELEASE_NAME
value: {{ template "shipa.fullname" . }}
- name: SHIPA_ENDPOINT
value: "{{ template "shipa.fullname" . }}-api.{{ .Release.Namespace }}.svc.{{ .Values.shipaCluster.clusterDomain }}"
- name: SHIPA_ENDPOINT_PORT
value: "{{ first .Values.shipaApi.servicePorts }}"
- name: USERNAME
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: username
- name: PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-api-init-secret
key: password
- name: METRICS_SERVICE
value: {{ template "shipa.fullname" . }}-metrics
- name: INGRESS_TYPE
value: {{ default ( "nginx" ) .Values.shipaCluster.ingress.type | quote }}
- name: INGRESS_SERVICE_TYPE
value: {{ default ( "LoadBalancer" ) .Values.shipaCluster.serviceType | quote }}
- name: INGRESS_IP
value: {{ default ( "" ) .Values.shipaCluster.ip | quote }}
- name: INGRESS_DEBUG
value: {{ default ( "false" ) .Values.shipaCluster.debug | quote }}
- name: ISTIO_INGRESS_SERVICE_TYPE
value: {{ default ( "LoadBalancer" ) .Values.shipaCluster.istioServiceType | quote }}
- name: ISTIO_INGRESS_IP
value: {{ default ( "" ) .Values.shipaCluster.istioIp | quote }}
- name: DASHBOARD_IMAGE
{{- if .Values.dashboard.image }}
value: "{{ .Values.dashboard.image }}"
{{- else }}
value: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.dashboard.repositoryBasename }}:{{ .Values.dashboard.tag }}"
{{- end }}
- name: DASHBOARD_ENABLED
value: "{{ .Values.dashboard.enabled }}"
- name: SHIPA_CLOUD
value: {{ .Values.shipaCloud.enabled | quote }}
- name: SHIPA_PAY_API_HOST
value: {{ .Values.shipaCloud.shipaPayApi.host | quote }}
- name: SHIPA_PAY_API_TOKEN
value: {{ .Values.shipaCloud.shipaPayApi.token | quote }}
- name: GOOGLE_RECAPTCHA_SITEKEY
value: {{ .Values.shipaCloud.googleRecaptcha.sitekey | quote }}
- name: GOOGLE_RECAPTCHA_SECRET
value: {{ .Values.shipaCloud.googleRecaptcha.secret | quote }}
- name: SMARTLOOK_PROJECT_KEY
value: {{ .Values.shipaCloud.smartlook.projectKey | quote }}
- name: LAUNCH_DARKLY_SDK_KEY
value: {{ .Values.shipaCloud.launchDarkly.sdkKey | quote }}
- name: SHIPA_TARGETS
value: {{ trimPrefix "\n" (include "shipa.cnames" .) }}
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: METRICS_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "shipa.fullname" . }}-secret
key: metrics-password
volumeMounts:
- name: scripts
mountPath: /scripts
- name: scripts-out
mountPath: /etc/shipa/
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.imagePullSecrets }}
{{- end }}
volumes:
- name: scripts
configMap:
defaultMode: 0755
name: {{ template "shipa.fullname" . }}-api-init-config
- name: scripts-out
emptyDir: {}

View File

@ -0,0 +1,16 @@
{{- if or (.Release.IsInstall) (.Values.auth.adminUser) -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "shipa.fullname" . }}-api-init-secret
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
{{- if or (lt (len .Values.auth.adminPassword) 6) (gt (len .Values.auth.adminPassword) 50) }}
{{- fail "adminPassword must be between 6 and 50 characters" }}
{{- end }}
username: {{ required "Admin username is required! Use --set=auth.adminUser=..." .Values.auth.adminUser | toString | b64enc | quote }}
password: {{ required "Admin password is required! Use --set=auth.adminPassword=..." .Values.auth.adminPassword | toString | b64enc | quote }}
{{- end }}

View File

@ -0,0 +1,98 @@
{{- if .Values.rbac.enabled }}
kind: ServiceAccount
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
rules:
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
- services
- extensions
- rbac.authorization.k8s.io
- apiextensions.k8s.io
- networking.k8s.io
- core
- apps
- shipa.io
- config.istio.io
- networking.istio.io
- rbac.istio.io
- authentication.istio.io
- cert-manager.io
- admissionregistration.k8s.io
- coordination.k8s.io
- theketch.io
- traefik.containo.us
resources: ["*"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["*"]
verbs:
- list
- get
- watch
- nonResourceURLs: ["*"]
verbs:
- list
- get
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "shipa.fullname" . }}-role
labels: {{- include "shipa.labels" . | nindent 4 }}
rules:
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "shipa.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}
namespace: {{ .Release.Namespace }}
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}
labels: {{- include "shipa.labels" . | nindent 4 }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "shipa.fullname" . }}-role
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,23 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "shipa.fullname" . }}-api
labels:
{{- include "shipa.labels" . | nindent 4 }}
spec:
type: ClusterIP
selector:
{{- include "shipa.selectorLabels" . | nindent 4 }}
ports:
{{- range $i, $servicePort := .Values.shipaApi.servicePorts }}
- targetPort: {{ $.Values.shipaApi.port }}
port: {{ $servicePort }}
protocol: TCP
name: shipa-{{ $i }}
{{- end }}
{{- range $i, $servicePort := .Values.shipaApi.serviceSecurePorts }}
- targetPort: {{ $.Values.shipaApi.securePort }}
port: {{ $servicePort }}
protocol: TCP
name: shipa-secure-{{ $i }}
{{- end }}

View File

@ -0,0 +1,17 @@
apiVersion: v1
kind: Secret
metadata:
name: shipa-certificates
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
ca.pem: ""
ca-key.pem: ""
cert.pem: ""
key.pem: ""
api-server.crt: ""
api-server.key: ""
client-ca.crt: ""
client-ca.key: ""

View File

@ -0,0 +1,14 @@
{{- if not .Values.tags.defaultDB }}
{{- if and ( .Values.externalMongodb.auth.username ) ( .Values.externalMongodb.auth.password ) }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "shipa.fullname" . }}-db-auth-secret
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
data:
username: {{ required "Database username is required! Use --set=externalMongodb.auth.username=..." .Values.externalMongodb.auth.username | toString | b64enc | quote }}
password: {{ required "Database password is required! Use --set=externalMongodb.auth.password=..." .Values.externalMongodb.auth.password | toString | b64enc | quote }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-defaults-configmap
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
shipa-org-id: {{ uuidv4 | replace "-" "" | quote }}

View File

@ -0,0 +1,12 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ template "shipa.fullname" . }}-secret
labels: {{- include "shipa.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": "pre-install"
"helm.sh/hook-delete-policy": "before-hook-creation"
data:
metrics-password: {{ default (randAlphaNum 15) .Values.metrics.password | toString | b64enc | quote }}
postgres-password: {{ default (randAlphaNum 15) .Values.postgres.source.password | toString | b64enc | quote }}
node-traefik-password: {{ default (randAlphaNum 15) .Values.shipaNodeTraefik.password | toString | b64enc | quote }}

View File

@ -0,0 +1,13 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "shipa.fullname" . }}-uninstall-job-config
labels: {{- include "shipa.uninstall-labels" . | nindent 4 }}
annotations:
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": "1"
"helm.sh/hook": post-delete
data:
uninstall-job.sh: |
{{ .Files.Get "scripts/uninstall-job.sh" | indent 4 }}

View File

@ -0,0 +1,52 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.uninstall-labels" . | nindent 4 }}
annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded
sidecar.istio.io/inject: "false"
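# post-delete hook: runs /scripts/uninstall-job.sh to clean up resources matching the SELECTOR
# label (shipa.io/is-shipa=true) across all namespaces (NAMESPACE_MOD "-A")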
spec:
template:
metadata:
name: "{{ template "shipa.fullname" . }}-uninstall-job-{{ .Release.Revision }}"
annotations:
sidecar.istio.io/inject: "false"
spec:
nodeSelector:
kubernetes.io/os: linux
{{- if .Values.rbac.enabled }}
serviceAccountName: {{ template "shipa.fullname" . }}-uninstall
{{- else }}
serviceAccountName: default
{{- end }}
restartPolicy: Never
containers:
- name: cleanup
{{- if .Values.cli.image }}
image: "{{ .Values.cli.image }}"
{{- else }}
image: "{{ .Values.images.shipaRepositoryDirname }}/{{ .Values.cli.repositoryBasename }}:{{ .Values.cli.tag }}"
{{- end }}
command:
- /scripts/uninstall-job.sh
imagePullPolicy: IfNotPresent
env:
- name: SELECTOR
value: "shipa.io/is-shipa=true"
- name: NAMESPACE_MOD
value: "-A"
- name: RELEASE_NAME
value: {{ template "shipa.fullname" . }}
- name: RELEASE_NAMESPACE
value: {{ .Release.Namespace }}
volumeMounts:
- name: scripts
mountPath: /scripts
volumes:
- name: scripts
configMap:
defaultMode: 0755
name: {{ template "shipa.fullname" . }}-uninstall-job-config

View File

@ -0,0 +1,58 @@
{{- if .Values.rbac.enabled }}
kind: ServiceAccount
apiVersion: v1
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.uninstall-labels" . | nindent 4 }}
annotations:
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": "1"
"helm.sh/hook": post-delete
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.uninstall-labels" . | nindent 4 }}
annotations:
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": "1"
"helm.sh/hook": post-delete
rules:
- apiGroups:
- ""
- apps
- batch
- services
- extensions
- rbac.authorization.k8s.io
- networking.k8s.io
- apiextensions.k8s.io
- core
- shipa.io
- clusterroles
- ingresses
- endpoints
- networkpolicies
- namespaces
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ template "shipa.fullname" . }}-uninstall
labels: {{- include "shipa.uninstall-labels" . | nindent 4 }}
annotations:
"helm.sh/hook-delete-policy": hook-succeeded
"helm.sh/hook-weight": "1"
"helm.sh/hook": post-delete
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "shipa.fullname" . }}-uninstall
subjects:
- kind: ServiceAccount
name: {{ template "shipa.fullname" . }}-uninstall
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@ -0,0 +1,250 @@
# Default values for shipa.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
auth:
dummyDomain: "@shipa.io"
images:
# The base directory for Shipa Corp images. For Shipa Corp images, repositoryBasename and the tag are appended to this value to determine the location to pull images from
# This does not affect non-Shipa Corp images, such as k8s.gcr.io/ingress-nginx/controller, docker.io/postgres, k8s.gcr.io/mongodb-install, docker.io/mongo, docker.io/busybox, and docker.io/traefik
shipaRepositoryDirname: docker.io/shipasoftware
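# For example, with the defaults below the API image resolves to
# docker.io/shipasoftware/api:<shipaApi.tag> unless shipaApi.image overrides it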
shipaApi:
port: "8080"
securePort: "8081"
servicePorts:
- "80"
serviceSecurePorts:
- "443"
repositoryBasename: api
tag: d6de93a58d76c7a48c9204051304bd8de2c90720
pullPolicy: Always
debug: false
cnames: []
allowRestartIngressControllers: true
allowMigrationDowntime: true
appAutoDiscoveryEnabled: true
isCAEndpointDisabled: false
secureIngressOnly: false
# if set, this secret will be used for API ingress controller resources instead of the default one
# customSecretName: shipa-api-secret
# if set, these annotations will be appended to API ingress resources
# customIngressAnnotations:
# aaa: "bbb"
# ccc: "ddd"
license: ""
shipaCluster:
# use debug logs in traefik ingress controller
debug: false
ingress:
# ingress controller type
# supported: (nginx, istio, traefik)
type: nginx
# NGINX ingress controller image
# If the ingress controller type is nginx and no ingress controller IP address is provided, an ingress controller will be deployed using this image
image: k8s.gcr.io/ingress-nginx/controller:v1.1.0
# ingress controller serviceType
# when using Shipa-managed nginx, we reconcile the service, looking for the right host of the LoadBalancer or ClusterIP based on what is provided here
# when using a user-managed ingress controller, this value is only stored in the DB
serviceType: LoadBalancer
# ingress controller ip address
# if provided, we assume a user-provided ingress controller should be used and create API resources for it
# ip: 10.100.10.11
# ingress controller class name.
# If undefined, we use per-type defaults in most places: nginx, traefik, or istio. If we detect Shipa-managed nginx, we default to shipa-nginx-ingress
# className: shipa-nginx-ingress
# if enabled, we will create ingress controller resources to make the API accessible on the root IP of the ingress controller
# NOTE: all ingresses require Host targeting instead of Path targeting for TLS
# also, if you use nginxinc/kubernetes-ingress, an Ingress without a host is not allowed until this is resolved: https://github.com/nginxinc/kubernetes-ingress/issues/209
apiAccessOnIngressIp: true
# shipa managed nginx only configs:
# ingress controller ClusterIp address
# if provided it will be used for shipa managed nginx ingress controller
# clusterIp: 10.100.10.11
# ingress controller LoadBalancerIp address
# if provided it will be used for shipa managed nginx ingress controller
# loadBalancerIp: 10.100.10.11
# if provided it will be used as node port for shipa managed nginx ingress controller
# nodePort: 31000
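# illustrative examples (addresses are placeholders, not defaults):
# 1) reuse an existing, user-managed nginx ingress controller instead of deploying the bundled one:
#    ip: 10.100.10.11
#    className: nginx
# 2) tune the ConfigMap of the Shipa-managed nginx controller (only applied when no ip is set):
#    config:
#      proxy-body-size: "1024M"
#      proxy-read-timeout: "600"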
clusterDomain: cluster.local
# populate with the name of an image pull secret to pull images as an authenticated Docker Hub user. The secret itself should be created in the cluster outside the Shipa Helm chart
# imagePullSecrets: ""
dashboard:
enabled: true
repositoryBasename: dashboard
tag: ed695d5a2035e1a6a06240bb85984b1ab035c755
postgres:
source:
## Leave blank to default to {{ template "shipa.fullname" . }}-postgres.{{ .Release.Namespace }} e.g. shipa-postgres.shipa-system
host:
port: 5432
user: postgres
## Leave blank to generate a random value
password:
## options for postgres.source.sslmode are "require", "verify-full", "verify-ca", or "disable"
sslmode: disable
## set postgres.create to false to avoid creating a postgres instance
create: true
## If create is set to true, this is the image that will be used
image: docker.io/postgres:13
persistence:
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
## storageClass: ""
accessMode: "ReadWriteOnce"
size: 10Gi
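## illustrative example (host and password are placeholders): use an existing PostgreSQL
## instance instead of creating one:
## create: false
## source:
##   host: postgres.example.internal
##   port: 5432
##   user: postgres
##   password: "change-me"
##   sslmode: require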
cli:
repositoryBasename: cli
tag: 735058a725bbd13b9b3d3b521063d55ec4b83cff
pullPolicy: Always
metrics:
repositoryBasename: metrics
tag: v0.0.7
pullPolicy: Always
# Extra configuration to add to prometheus.yaml
# extraPrometheusConfiguration: |
# remote_read:
# - url: http://localhost:9268/read
# remote_write:
# - url: http://localhost:9268/write
extraPrometheusConfiguration:
# password: "" # optionally hardcode the metrics password; when unset, a random value is generated
prometheusArgs: "--storage.tsdb.retention.time=1d"
busybody:
repositoryBasename: bb
tag: 0a1d6ad5aa4f849c96812a17916ca887e92be272
shipaController:
repositoryBasename: shipa-controller
tag: a7f265cd9787dd5b37cc094fe8ba16cd022af120
prometheusMetricsExporter:
repositoryBasename: prometheus-metrics-exporter
tag: b123eb79bdbe56f83812b5ad3cfb8bbb568b2e3d
clair:
repositoryBasename: clair
tag: v2.1.7
shipaNodeTraefik:
# image: docker.io/traefik:v1.7.24
user: admin
# --------------------------------------------------------------------------
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
rbac:
enabled: true
# Connect your own instance of mongodb
externalMongodb:
# url must follow Standard Connection String Format as described here: https://docs.mongodb.com/manual/reference/connection-string/#standard-connection-string-format
# For a sharded cluster it should be a comma separated list of hosts:
# e.g. "mongos0.example.com:27017,mongos1.example.com:27017,mongos2.example.com:27017"
# Due to some limitations of the dependencies, we currently do not support url with 'DNS Seed List Connection Format'.
url: < database url >
auth:
username: < username >
password: < password >
# Enable/Disable TLS when connecting to external DB instance.
tls:
enable: true
# tags are the standard way to handle chart dependencies.
tags:
# Set defaultDB to 'false' when using an external DB so the default DB is not installed (see the commented example below).
# This also prevents creating Persistent Volumes.
defaultDB: true
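# illustrative example (hosts and credentials are placeholders): connect to an external MongoDB
# and skip the bundled replica set:
# externalMongodb:
#   url: "mongos0.example.com:27017,mongos1.example.com:27017,mongos2.example.com:27017"
#   auth:
#     username: "shipa"
#     password: "change-me"
#   tls:
#     enable: true
# tags:
#   defaultDB: false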
certManager:
installUrl: https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml
# Default DB config
mongodb-replicaset:
replicaSetName: rs0
replicas: 1
port: 27017
nodeSelector:
kubernetes.io/os: linux
auth:
enabled: false
installImage:
repository: k8s.gcr.io/mongodb-install
tag: 0.6
pullPolicy: IfNotPresent
image:
repository: docker.io/mongo
tag: 5.0
pullPolicy: IfNotPresent
copyConfigImage:
repository: docker.io/busybox
tag: 1.29.3
pullPolicy: IfNotPresent
persistentVolume:
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner.
##
## storageClass: ""
enabled: true
size: 10Gi
tls:
enabled: false
configmap:
shipaCloud:
enabled: false
shipaPayApi:
host: ""
token: ""
googleRecaptcha:
sitekey: ""
secret: ""
smartlook:
projectKey: ""
launchDarkly:
sdkKey: ""
ketch:
enabled: true
repositoryBasename: ketch
tag: 4b313a63ff9efddec0f93820c02816d4f2da9b7f
metricsAddress: 127.0.0.1:8080

View File

@ -3165,6 +3165,39 @@ entries:
- assets/portworx/portworx-2.8.0.tgz
version: 2.8.0
shipa:
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Shipa
catalog.cattle.io/namespace: shipa-system
catalog.cattle.io/release-name: shipa
apiVersion: v2
appVersion: 1.6.3
created: "2022-03-10T11:11:46.734501-05:00"
dependencies:
- name: mongodb-replicaset
repository: file://./charts/mongodb-replicaset
tags:
- defaultDB
description: A Helm chart for Kubernetes to install the Shipa Control Plane
digest: e02796981c55f681bc10a1e626217dc08b9bbc6010cba9839da07002086208bc
home: https://www.shipa.io
icon: https://www.shipa.io/wp-content/uploads/2020/11/Shipa-banner-768x307.png
keywords:
- shipa
- deployment
- aac
kubeVersion: '>= 1.16.0-0'
maintainers:
- email: rlachhman@shipa.io
name: ravi
name: shipa
sources:
- https://github.com/shipa-corp
- https://github.com/shipa-corp/helm-chart
type: application
urls:
- assets/shipa/shipa-1.6.300.tgz
version: 1.6.300
- annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/display-name: Shipa