Compare commits

42 commits: dev/kubern... → feature/in...
| Author | SHA1 | Date |
|---|---|---|
|  | 67d87d7d5b |  |
|  | 3cdc07cdf3 |  |
|  | 91dab0f121 |  |
|  | ed48c92e4f |  |
|  | ae8d6a6cf3 |  |
|  | 9f4a5b452c |  |
|  | 12fc74d8ff |  |
|  | 1d69fad451 |  |
|  | f73c58d7ee |  |
|  | 68771a8e61 |  |
|  | e3de3af82f |  |
|  | 490248909b |  |
|  | e0741b4438 |  |
|  | 8050126d2d |  |
|  | b5bbc5fa34 |  |
|  | 915fd7db8a |  |
|  | bbfa3f2964 |  |
|  | 0a60d8831c |  |
|  | cc6f31b4b9 |  |
|  | 069502d056 |  |
|  | 278e6a9cd7 |  |
|  | eba722992f |  |
|  | 871d47fff8 |  |
|  | 9ef15325cc |  |
|  | 526f4b9035 |  |
|  | 43ad6f842b |  |
|  | 21238a032d |  |
|  | 34068bc7ac |  |
|  | 28b73f88a8 |  |
|  | 8a2d491e45 |  |
|  | 8073861775 |  |
|  | 78bccff685 |  |
|  | cef013a40a |  |
|  | 3692204ce4 |  |
|  | a16143a3f4 |  |
|  | c3bf234cba |  |
|  | 9e8370f577 |  |
|  | 8496b19ec5 |  |
|  | 4a43305a35 |  |
|  | 8bb6bddb00 |  |
|  | bf62157f26 |  |
|  | 0cf7ba527e |  |
3 .gitmodules (vendored, Normal file)

@@ -0,0 +1,3 @@
[submodule "vendor/ark"]
	path = vendor/ark
	url = git@github.com:heptio/ark.git
49 README.md

@@ -3,38 +3,25 @@
This repository contains configuration files and other assets that are used to
deploy and operate this Gitea instance.

To create a new image containing the customizations:

Edit `packer/custom.json` to increment the tag, then run this script (needs
[Packer](https://www.packer.io/) in your `PATH`):

```
./script/build_customizations_image
```

Then edit `kubernetes/gitea-server.yaml` to use the new tag
(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
change:

```
cd kubernetes
kubectl apply -f gitea-server.yaml
```
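The release flow above can be sketched end to end. `VERSION=0.2` is a hypothetical example value; the registry path is the one used throughout this repo:

```shell
# Sketch of the image-bump flow; 0.2 is a hypothetical next tag.
VERSION=0.2
IMAGE="eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION"
# 1. bump "tag" in packer/custom.json, then:
#    ./script/build_customizations_image
# 2. point the image line in kubernetes/gitea-server.yaml at:
echo "image: $IMAGE"
# 3. kubectl apply -f kubernetes/gitea-server.yaml
```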

Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
whatever else you want to discuss or resolve.

[open issues]: https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues

## Kubernetes

### Apply changes to resources

```
kubectl apply -f gitea-db.yaml
kubectl apply -f gitea-server.yaml
```

### Write the secrets to the local filesystem

```
./script/get_secrets
```

It writes the secrets (currently the `app.ini` file, as well as auto-generated
TLS certificates that are only used when no Let's Encrypt cert is available)
to the `kubernetes/config/` folder. These files are not in Git because they
contain credentials.

Once you have edited them locally, you need to delete the secrets stored on
Kubernetes before uploading them again. This is done by this script:

```
./script/replace_secrets
```

### Reuse a released persistent volume

https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616
11 custom/options/label/Default (Normal file)

@@ -0,0 +1,11 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed
14 custom/options/label/Kosmos (Normal file)

@@ -0,0 +1,14 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#008080 kredits-1 ; Small contribution
#008080 kredits-2 ; Medium contribution
#008080 kredits-3 ; Large contribution
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed
36 doc/backup-and-restore.md (Normal file)

@@ -0,0 +1,36 @@
# Backups

We're using [Ark][1] for backing up Kubernetes config and GKE resources. It is
available as a Git submodule in the `vendor/` folder (incl. the `ark`
executable).

In order to initialize and update submodules in your local repo, run once:

    git submodule update --init

Then, to fetch/update the modules, run:

    git submodule update

The Ark service is running on the Sidamo cluster and was set up using the
[official docs' GCP instructions and config files][4]. There's a daily backup
schedule in effect for Gitea (using the label `app=gitea`).

Please refer to Ark's [Getting Started][5] doc for all backup and restore
commands.

## Backup location

Cluster configuration (including all live resources) is backed up to [a Google
Cloud Storage container][3].

## Persistent volumes

Persistent volumes are just GCE disks. Thus, with the current config, Ark
creates volume snapshots as native [GCE disk snapshots][2].

[1]: https://heptio.github.io/ark/v0.10.0
[2]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
[3]: https://console.cloud.google.com/storage/browser/sidamo-backups?project=fluted-magpie-218106&organizationId=772167872692
[4]: https://heptio.github.io/ark/v0.10.0/gcp-config
[5]: https://heptio.github.io/ark/v0.10.0/get-started
180 doc/ingress.md (Normal file)

@@ -0,0 +1,180 @@
# HTTP(S) load balancing with Ingress

## Resources

Features of GKE Ingress from the Google Cloud docs:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress

It does hostname-aware HTTP(S) load balancing, and is billed like a regular
Load Balancer (https://cloud.google.com/compute/pricing#lb). The advantages are
that we can use one set of firewall rules (ports 80 and 443) for multiple
services, and easy Let's Encrypt certificates for services with no built-in
support for it.

This three-part article was a good resource:

https://medium.com/google-cloud/global-kubernetes-in-3-steps-on-gcp-8a3585ec8547
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-2-demo-cf587765702

I couldn't find information about setting
`ingress.kubernetes.io/rewrite-target` to `/` anywhere else; without it, only
`/` worked on a host, and all other URLs would go to the default backend and
return a 404.

cert-manager, for automated (among others) Let's Encrypt certificates:
https://docs.cert-manager.io/en/release-0.8/

## Create a global IP

Ephemeral IPs are only regional, and you lose them if you have to recreate the
Ingress:

    gcloud compute addresses create ingress-ip --global

## Create the Ingress

A ClusterIP will not work, because it allocates random ports. Explicitly
create a NodePort to expose your service. On GKE, health checks are configured
automatically:

    cat <<EOF > test-server-nodeport.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: test-server-nodeport
    spec:
      ports:
      - name: http
        port: 80
        targetPort: 3000
      type: NodePort
      selector:
        name: test-server
    EOF
    kubectl apply -f test-server-nodeport.yaml

Create the Ingress resource:

    cat <<EOF > ingress-main.yaml
    # A GCE Ingress that uses cert-manager to manage Let's Encrypt certificates
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-main
      annotations:
        # Required, otherwise only the / path works
        # https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
        ingress.kubernetes.io/rewrite-target: /
        certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
        certmanager.k8s.io/acme-challenge-type: http01
        # Created using the following command
        # gcloud compute addresses create ingress-ip --global
        kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
    spec:
      tls:
      - hosts:
        - test.kosmos.org
        secretName: test-kosmos-org-cert
      - hosts:
        - test2.kosmos.org
        secretName: test2-kosmos-org-cert
      rules:
      - host: test.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
      - host: test2.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
    EOF
    kubectl apply -f ingress-main.yaml

## cert-manager

### Create the cert-manager resources

cert-manager provides a Let's Encrypt certificate issuer, and lets you mount it
in an Ingress resource, making it possible to use HTTP ACME challenges.

Get the reserved IP you created in the first step:

    $ gcloud compute addresses list --global
    NAME        ADDRESS/RANGE   TYPE  PURPOSE  NETWORK  REGION  SUBNET  STATUS
    ingress-ip  35.244.164.133  EXTERNAL                                IN_USE

Set the DNS record for the domain you want a Let's Encrypt cert for to this IP.

Now it's time to install cert-manager:

https://docs.cert-manager.io/en/release-0.8/getting-started/install/kubernetes.html

    kubectl create namespace cert-manager

    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml --validate=false

I had to run the apply command twice for it to create all the resources. On the
first run I got these errors; running it a second time successfully created all
the resources:

    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"

We name the Ingress explicitly so the solver only runs on one. Having only one
IP to set on the DNS records makes the HTTP validation easier. Using the class
would attach the validation endpoint to all Ingresses of that class
(https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1):

    cat <<EOF > letsencrypt-staging.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-staging-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

    cat <<EOF > letsencrypt-production.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-production-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

## Add another service

To add another service behind the Ingress, you set the DNS entry for its domain
to the Ingress IP, deploy your service, create a NodePort to expose it, and
finally add its host to the Ingress config (both tls and rules, see example
above).
71 doc/kubernetes.md (Normal file)

@@ -0,0 +1,71 @@
# Kubernetes / GKE

This Gitea instance is currently hosted on Google Kubernetes Engine.

## Apply changes to resources

```
kubectl apply -f gitea-db.yaml
kubectl apply -f gitea-server.yaml
```

## Write the secrets to the local filesystem

```
./script/get_secrets
```

It writes the secrets (currently the `app.ini` file, as well as auto-generated
TLS certificates that are only used when no Let's Encrypt cert is available)
to the `kubernetes/config/` folder. These files are not in Git because they
contain credentials.

Once you have edited them locally, you need to delete the secrets stored on
Kubernetes before uploading them again. This is done by this script:

```
./script/replace_secrets
```

## Reuse a released persistent volume

> When you delete a PVC, the corresponding PV becomes `Released`. This PV can contain sensitive data (say credit card numbers) and therefore nobody can ever bind to it, even if it is a PVC with the same name and in the same namespace as the previous one - who knows who's trying to steal the data!
>
> Admin intervention is required here. He has two options:
>
> * Make the PV available to everybody - delete `PV.Spec.ClaimRef`. Such a PV can bind to any PVC (assuming that capacity, access mode and selectors match)
>
> * Make the PV available to a specific PVC - pre-fill `PV.Spec.ClaimRef` with a pointer to a PVC. Leave the `PV.Spec.ClaimRef.UID` empty, as the PVC does not need to exist at this point and you don't know the PVC's UID. This PV can be bound only to the specified PVC.
>
> @whitecolor, in your case you should be fine by clearing `PV.Spec.ClaimRef.UID` in the PV. Only the re-created PVC (with any UID) can then use the PV. And it's your responsibility that only the right person can craft an appropriate PVC so nobody can steal your data.

https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616
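A minimal sketch of the fix that comment suggests, assuming the `Released` volume's name is in `PV_NAME` (the value below is hypothetical; look up the real one with `kubectl get pv`):

```shell
PV_NAME="gitea-server-data-pv"   # hypothetical name; find yours with: kubectl get pv
# JSON patch that clears only the claim's UID, so a re-created PVC with the
# same name and namespace can bind to the volume again.
PATCH='[{"op": "remove", "path": "/spec/claimRef/uid"}]'
# kubectl patch pv "$PV_NAME" --type json -p "$PATCH"
echo "$PATCH"
```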

## Update Gitea

### Released version

Change the image for the gitea-server container
(`kubernetes/gitea-server.yaml`) to `gitea/gitea:TAG`, for example:
`gitea/gitea:1.7.0-rc2`

### Unreleased version

This is useful to deploy features that are in master but not yet in a release.

    $ docker pull gitea/gitea
    $ docker tag gitea/gitea:latest kosmosorg/gitea:production
    $ docker push kosmosorg/gitea

Set the image for the gitea-server container to `kosmosorg/gitea:latest`, or run
this command to force a deployment if it is already set to it:

    $ kubectl patch deployment gitea-server -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
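That one-liner can be unpacked for readability: it merges a throwaway annotation whose value is the current Unix time, which changes the pod template and so forces a new rollout (a sketch of the same patch):

```shell
TS="$(date +%s)"   # current Unix time; a new value changes the pod template
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$TS\"}}}}}"
echo "$PATCH"
# kubectl patch deployment gitea-server -p "$PATCH"
```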

### Build our own image

At the root of the [gitea repo](https://github.com/go-gitea/gitea):

    $ DOCKER_TAG=production DOCKER_IMAGE=kosmosorg/gitea make docker # builds and tags kosmosorg/gitea:production locally
    $ docker push kosmosorg/gitea
1791 kubernetes/cert-manager.yaml (Normal file)

File diff suppressed because it is too large
kubernetes/gitea-db.yaml

@@ -2,7 +2,8 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitea-db
  namespace: gitea
  labels:
    app: gitea
spec:
  replicas: 1
  strategy:
@@ -11,6 +12,7 @@ spec:
    metadata:
      labels:
        name: gitea-db
        app: gitea
    spec:
      containers:
      - env:
@@ -30,13 +32,19 @@ spec:
          value: gitea
        image: mariadb:10.3.10
        name: gitea-db
        resources: {}
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: gitea-db-data
        resources:
          requests:
            cpu: 250m
            memory: 150Mi
          limits:
            cpu: 500m
            memory: 300Mi
      restartPolicy: Always
      volumes:
      - name: gitea-db-data
@@ -47,9 +55,9 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-db-data
  namespace: gitea
  labels:
    name: gitea-db-data
    app: gitea
spec:
  accessModes:
  - ReadWriteOnce
@@ -61,9 +69,9 @@ apiVersion: v1
kind: Service
metadata:
  name: gitea-db
  namespace: gitea
  labels:
    service: gitea-db
    app: gitea
spec:
  selector:
    name: gitea-db
276 kubernetes/gitea-ingress.yaml (Normal file)

@@ -0,0 +1,276 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress-tcp
  namespace: default
data:
  "22": "default/gitea-server:22"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true
      nodeSelector:
        role: ingress-controller
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --tcp-services-configmap=$(POD_NAMESPACE)/haproxy-ingress-tcp
        - --sort-backends
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          value: default
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-server-nodeport
  namespace: default
  labels:
    app: gitea
    name: gitea-server
  annotations:
    # add an annotation indicating the issuer to use.
    # TODO: Switch to production when we're ready
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  - name: ssh
    port: 22
    targetPort: 22
    protocol: TCP
  type: NodePort
  selector:
    name: gitea-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitea-ingress
  namespace: default
  labels:
    name: gitea-server
    app: gitea
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    - gitea.kosmos.org
    secretName: gitea-kosmos-org-cert
  rules:
  - host: gitea.kosmos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: gitea-server-nodeport
          servicePort: 3000
@@ -1,6 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  name: gitea
  labels:
    app: gitea
kubernetes/gitea-server.yaml

@@ -2,62 +2,61 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitea-server
  namespace: gitea
  labels:
    app: gitea
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: gitea-server
        app: gitea
    spec:
      initContainers:
      - name: init-config
        image: busybox
        command: ['sh', '-c', 'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && chown -R 1000:1000 /data/gitea']
        # This is a busybox image with our gitea customizations saved to
        # /custom, built using ./script/build_customizations_image from the
        # root of the repo
        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1
        command: [
          'sh', '-c',
          'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && cp -R /custom/* /data/gitea && chown -R 1000:1000 /data/gitea'
        ]
        volumeMounts:
        - mountPath: /data
          name: gitea-server-data
        - mountPath: /root/conf
          name: config
      containers:
      # This is only used for the initial setup, it does nothing once an app.ini
      # file exists in the conf/ directory of the data directory
      # (/data/gitea/conf in our case)
      - env:
        - name: DB_HOST
          value: gitea-db:3306
        - name: DB_NAME
          value: gitea
        - name: DB_PASSWD
          valueFrom:
            secretKeyRef:
              name: gitea-mysql-pass
              key: password
        - name: DB_TYPE
          value: mysql
        - name: DB_USER
          value: gitea
        - name: ROOT_URL
          value: https://gitea.kosmos.org
        - name: RUN_MODE
          value: prod
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: gitea-secret-key
              key: password
        - name: SSH_DOMAIN
          value: gitea.kosmos.org
        image: 5apps/gitea:latest
        name: gitea-server
      - name: gitea-server
        image: gitea/gitea:1.8.1
        ports:
        - containerPort: 3000
        - containerPort: 3001
        - containerPort: 22
        resources: {}
        livenessProbe:
          httpGet:
            path: /
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /data
          name: gitea-server-data
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
      restartPolicy: Always
      volumes:
      - name: gitea-server-data
@@ -81,21 +80,22 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-server-data
  namespace: gitea
  labels:
    app: gitea
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
      storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-server
  namespace: gitea
  labels:
    name: gitea-server
    app: gitea
spec:
  type: LoadBalancer
  # preserves the client source IP
@@ -106,9 +106,6 @@ spec:
    targetPort: 22
  - name: "http"
    port: 80
    targetPort: 3001
  - name: "https"
    port: 443
    targetPort: 3000
  selector:
    name: gitea-server
20 kubernetes/letsencrypt-production.yaml (Normal file)

@@ -0,0 +1,20 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-production-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress
19 kubernetes/letsencrypt-staging.yaml (Normal file)

@@ -0,0 +1,19 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress
29 packer/custom.json (Normal file)

@@ -0,0 +1,29 @@
{
  "builders": [{
    "type": "docker",
    "image": "busybox",
    "run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/sh"],
    "commit": true
  }],
  "provisioners": [
    {
      "inline": ["mkdir /custom"],
      "type": "shell"
    },
    {
      "type": "file",
      "source": "../custom/",
      "destination": "/custom"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "eu.gcr.io/fluted-magpie-218106/gitea_custom",
        "tag": "0.1"
      },
      "docker-push"
    ]
  ]
}
7 script/build_customizations_image (Executable file)

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
# fail fast
set -e

cd packer/
packer build custom.json
cd -
1 vendor/ark (vendored, Submodule)

Submodule vendor/ark added at 0fd7872ef4