42 Commits

Author SHA1 Message Date
Greg Karékinian
67d87d7d5b Use the correct name for the ingress 2019-07-05 11:13:22 +02:00
Greg Karékinian
3cdc07cdf3 Add initial Ingress documentation 2019-07-04 17:34:19 +02:00
Greg Karékinian
91dab0f121 Explicitly set the ingress name to gitea-ingress
The old config was generating a separate nginx ingress instead of
attaching the HTTP challenge URL to our existing ingress (gitea-ingress)
2019-07-04 14:44:49 +02:00
Greg Karékinian
ed48c92e4f Update cert-manager to 0.8.1 2019-07-04 14:44:20 +02:00
Greg Karékinian
ae8d6a6cf3 WIP: Set up ingress with Let's Encrypt certificates using cert-manager
This is using haproxy-ingress to support forwarding SSH on port 22

Since we're using cert-manager with ingress to get Let's Encrypt certs,
we're not using the Let's Encrypt functionality that's part of Gitea. To
run this we need to change the config file, have Gitea run on port 3000
as HTTP and disable all the Let's Encrypt config keys. Currently the
gitea-ingress.yaml uses the letsencrypt-staging ClusterIssuer

This has been tested on a local Kubernetes cluster using Docker for Mac
2019-06-05 17:57:16 +02:00
gregkare
9f4a5b452c Merge branch 'chore/gitea_181' of kosmos/gitea.kosmos.org into master 2019-05-23 14:31:55 +00:00
12fc74d8ff Upgrade Gitea to 1.8.1 2019-05-21 15:15:04 +02:00
gregkare
1d69fad451 Merge branch 'upgrade/22-gitea' of kosmos/gitea.kosmos.org into master 2019-05-02 15:36:14 +00:00
Greg Karékinian
f73c58d7ee Merge branch 'master' into upgrade/22-gitea 2019-05-02 17:35:52 +02:00
gregkare
68771a8e61 Merge branch 'feature/4-label_sets' of kosmos/gitea.kosmos.org into master 2019-05-02 15:27:31 +00:00
gregkare
e3de3af82f Merge branch 'chore/resource_config' of kosmos/gitea.kosmos.org into master 2019-05-02 15:25:13 +00:00
490248909b Update Gitea to 1.8.0 2019-05-02 15:34:12 +01:00
Greg Karékinian
e0741b4438 Ship the customizations as a Docker image
The Docker image is used in the initialization process, to copy
everything in the custom folder to the Gitea data dir (mounted as a
persistent volume). It is built using Packer and is based on the busybox
image, so we can use its minimalist shell system to copy files and set
permissions
2019-04-01 17:01:16 +02:00
Greg Karékinian
8050126d2d Merge branch 'master' into feature/4-label_sets 2019-03-29 15:14:15 +01:00
Greg Karékinian
b5bbc5fa34 Update Gitea to 1.7.5
Running on GKE

Closes #21
2019-03-29 15:04:23 +01:00
915fd7db8a Add resource requests and limits for Gitea
Based on recent usage stats. If these are not set, the scheduler's
capacity check doesn't work and it will place new pods on nodes that are
actually not free enough for them.
2019-03-04 13:48:20 +07:00
Greg Karékinian
bbfa3f2964 Add a script to copy the content of the custom folder to a running pod
For now it is only labels, but adding anything supported will work
(robots.txt, public files, templates, etc)

The content will be copied to the /data/gitea/ folder that is a mounted
persistent volume

https://docs.gitea.io/en-us/customizing-gitea/
2019-02-27 17:47:48 +01:00
Greg Karékinian
0a60d8831c Merge branch 'master' into feature/4-label_sets 2019-02-27 12:43:45 +01:00
Greg Karékinian
cc6f31b4b9 Update Gitea to 1.7.2
Closes #18
2019-02-25 16:54:59 +01:00
Greg Karékinian
069502d056 Bump the gitea data storage to 20GB 2019-02-25 13:29:09 +01:00
Greg Karékinian
278e6a9cd7 Use a 10GB persistent storage volume for gitea data 2019-02-25 13:18:45 +01:00
Greg Karékinian
eba722992f Copy the labels to the persistent data volume
Move the custom label definitions to a custom folder in the kubernetes
folder, as well as the config files
2019-02-05 20:29:08 +01:00
Greg Karékinian
871d47fff8 Merge branch 'master' into feature/4-label_sets 2019-02-05 20:16:27 +01:00
9ef15325cc Merge branch 'chore/upgrade_gitea' of kosmos/gitea.kosmos.org into master 2019-02-03 05:29:39 +00:00
526f4b9035 Upgrade Gitea to 1.7.1 2019-02-03 12:28:21 +07:00
43ad6f842b Merge branch 'docs/update' of kosmos/gitea.kosmos.org into master 2019-01-28 22:42:17 +00:00
21238a032d Add default and Kosmos label sets
Adds custom label set configs, overriding the default set and adding a
new one for Kosmos (that includes kredits labels).

closes #4
2019-01-27 16:19:19 +08:00
Greg Karékinian
34068bc7ac Add docs about building our own images 2019-01-25 16:52:17 +01:00
28b73f88a8 Use 1.7 release of Gitea 2019-01-09 07:55:13 +08:00
Greg Karékinian
8a2d491e45 Add documentation about updating Gitea 2019-01-08 12:14:41 +01:00
gregkare
8073861775 Merge branch 'feature/5-backups' of kosmos/gitea.kosmos.org into master 2019-01-07 11:02:37 +00:00
Greg Karékinian
78bccff685 Use the git submodule update command with the --init flag in the docs 2019-01-07 12:01:49 +01:00
cef013a40a Update backup docs 2019-01-05 11:23:17 +08:00
3692204ce4 Add app label for all Gitea resources
This way one can address them all at once, like e.g. for Ark backups.
2019-01-05 11:09:25 +08:00
a16143a3f4 Add docs for Ark dependency 2019-01-05 10:22:48 +08:00
c3bf234cba Add Ark as submodule
Heptio Ark is a Kubernetes backup solution. See docs.
2019-01-05 10:14:49 +08:00
9e8370f577 Add backup doc 2019-01-02 12:50:14 +08:00
8496b19ec5 Update 'doc/kubernetes.md' 2019-01-02 04:20:49 +00:00
4a43305a35 Merge branch 'docs/kubernetes' of kosmos/gitea.kosmos.org into master 2018-12-24 08:05:03 +00:00
gregkare
8bb6bddb00 Merge branch 'feature/6-remove_init_env_vars' of kosmos/gitea.kosmos.org into master 2018-12-17 10:36:45 +00:00
Greg Karékinian
bf62157f26 Remove the init environment variables
They were never used since we create an ini config file before starting
the container

Refs #6
2018-12-17 11:34:15 +01:00
0cf7ba527e Move Kubernetes docs out of README 2018-12-14 18:12:39 +00:00
16 changed files with 2526 additions and 70 deletions

.gitmodules

@@ -0,0 +1,3 @@
[submodule "vendor/ark"]
	path = vendor/ark
	url = git@github.com:heptio/ark.git

README.md

@@ -3,38 +3,25 @@
 This repository contains configuration files and other assets, that are used to
 deploy and operate this Gitea instance.
 
+To create a new image containing the customizations:
+
+Edit `packer/custom.json` to increment the tag, then run this script (needs
+[Packer](https://www.packer.io/) in your path)
+
+```
+./script/build_customizations_image
+```
+
+Then edit `kubernetes/gitea-server.yaml` to use the new tag
+(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
+change:
+
+```
+cd kubernetes
+kubectl apply -f gitea-server.yaml
+```
+
 Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
 whatever else you want to discuss or resolve.
 
 [open issues]: https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues
-
-## Kubernetes
-
-### Apply changes to resources
-
-```
-kubectl apply -f gitea-db.yaml
-kubectl apply -f gitea-server.yaml
-```
-
-### Write the secrets to the local filesystem
-
-```
-./script/get_secrets
-```
-
-It writes the secrets (currently the app.ini file, as well as auto-generated
-TLS certificates that are only used when no Let's Encrypt cert is available)
-to the `kubernetes/config/` folder. These files are not in Git because they
-contain credentials.
-
-Once you have edited them locally, you need to delete the secrets stored on
-Kubernetes before uploading them again. This is done by this script:
-
-```
-./script/replace_secrets
-```
-
-### Reuse a released persistent volume:
-
-https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616

Default label set

@@ -0,0 +1,11 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed

Kosmos label set

@@ -0,0 +1,14 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#008080 kredits-1 ; Small contribution
#008080 kredits-2 ; Medium contribution
#008080 kredits-3 ; Large contribution
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed

doc/backup-and-restore.md

@@ -0,0 +1,36 @@
# Backups

We're using [Ark][1] for backing up Kubernetes config and GKE resources. It is
available as a Git submodule in the `vendor/` folder (incl. the `ark`
executable).

To initialize the submodule in your local repo, run once:

    git submodule update --init

Then, to fetch/update the modules, run:

    git submodule update

The Ark service is running on the Sidamo cluster and was set up using the
[official docs' GCP instructions and config files][4]. There's a daily backup
schedule in effect for Gitea (using the label `app=gitea`).

Please refer to Ark's [Getting Started][5] doc for all backup and restore
commands.
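
For orientation, the day-to-day commands look roughly like this (the backup
name is illustrative; see the Getting Started doc for the full command set):

    # list backups and schedules
    ark backup get
    ark schedule get

    # create an ad-hoc backup of everything labeled app=gitea
    ark backup create gitea-manual --selector app=gitea

    # restore from an existing backup
    ark restore create --from-backup gitea-manual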
## Backup location

Cluster configuration (including all live resources) is backed up to [a Google
Cloud Storage container][3].

## Persistent volumes

Persistent volumes are just GCE disks. Thus, with the current config, Ark
creates volume snapshots as native [GCE disk snapshots][2].
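
The snapshots can also be listed from the CLI (a sketch; the project ID is the
one from the console links below):

    gcloud compute snapshots list --project fluted-magpie-218106
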
[1]: https://heptio.github.io/ark/v0.10.0
[2]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
[3]: https://console.cloud.google.com/storage/browser/sidamo-backups?project=fluted-magpie-218106&organizationId=772167872692
[4]: https://heptio.github.io/ark/v0.10.0/gcp-config
[5]: https://heptio.github.io/ark/v0.10.0/get-started

doc/ingress.md

@@ -0,0 +1,180 @@
# HTTP(S) load balancing with Ingress

## Resources

Features of GKE Ingress from the Google Cloud docs:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress

It does hostname-aware HTTP(S) load balancing, and is billed like a regular
Load Balancer (https://cloud.google.com/compute/pricing#lb). The advantages are
that we can use one set of firewall rules (ports 80 and 443) for multiple
services, and get easy Let's Encrypt certificates for services with no
built-in support for them.

This 3-part article series was a good resource:

https://medium.com/google-cloud/global-kubernetes-in-3-steps-on-gcp-8a3585ec8547
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-2-demo-cf587765702

I couldn't find information about setting
`ingress.kubernetes.io/rewrite-target` to `/` anywhere else. Without it, only
`/` worked on a host; all other URLs would go to the default backend and
return a 404.

cert-manager, for automated (among others) Let's Encrypt certificates:
https://docs.cert-manager.io/en/release-0.8/
## Create a global IP

Ephemeral IPs are only regional, and you lose them if you have to recreate the
Ingress.

    gcloud compute addresses create ingress-ip --global

## Create the ingress

A ClusterIP will not work, because it allocates random ports. Explicitly
create a NodePort to expose your service. On GKE, health checks are configured
automatically.

    cat <<EOF > test-server-nodeport.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: test-server-nodeport
    spec:
      ports:
      - name: http
        port: 80
        targetPort: 3000
      type: NodePort
      selector:
        name: test-server
    EOF

    kubectl apply -f test-server-nodeport.yaml
Create the Ingress resource:

    cat <<EOF > ingress-main.yaml
    # A GCE Ingress that uses cert-manager to manage Let's Encrypt certificates
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-main
      annotations:
        # Required, otherwise only the / path works
        # https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
        ingress.kubernetes.io/rewrite-target: /
        certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
        certmanager.k8s.io/acme-challenge-type: http01
        # Created using the following command
        # gcloud compute addresses create ingress-ip --global
        kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
    spec:
      tls:
      - hosts:
        - test.kosmos.org
        secretName: test-kosmos-org-cert
      - hosts:
        - test2.kosmos.org
        secretName: test2-kosmos-org-cert
      rules:
      - host: test.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
      - host: test2.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
    EOF

    kubectl apply -f ingress-main.yaml
## cert-manager

### Create the cert-manager resources

cert-manager provides a Let's Encrypt certificate issuer, and lets you attach
its challenge solver to an Ingress resource, making it possible to use HTTP
ACME challenges.

Get the reserved IP you created in the first step:

    $ gcloud compute addresses list --global
    NAME        ADDRESS/RANGE   TYPE      PURPOSE  NETWORK  REGION  SUBNET  STATUS
    ingress-ip  35.244.164.133  EXTERNAL                                    IN_USE

Set the DNS record for the domain you want a Let's Encrypt cert for to this
IP.

Now it's time to install cert-manager:
https://docs.cert-manager.io/en/release-0.8/getting-started/install/kubernetes.html

    kubectl create namespace cert-manager
    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml --validate=false

I had to run the apply command twice for it to create all the resources; on
the first run I got the errors below. Running it a second time created
everything successfully.

    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"

We name the ingress explicitly so the HTTP01 solver only runs on that one
ingress. Having only one IP to set on the DNS records makes the HTTP
validation easier; using the ingress class instead would attach the validation
endpoint to all Ingresses of that class
(https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1).
    cat <<EOF > letsencrypt-staging.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-staging-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

    cat <<EOF > letsencrypt-production.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-production-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

## Add another service

To add another service behind the Ingress, set the DNS entry for its domain to
the Ingress IP, deploy your service, create a NodePort to expose it, and
finally add its host to the Ingress config (both `tls` and `rules`, see the
example above).
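
As a rough sketch, for a hypothetical `test3.kosmos.org` service (hostname,
service name, and target port are made up for illustration):

    # expose the new service's pods with a NodePort
    cat <<EOF > test3-server-nodeport.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: test3-server-nodeport
    spec:
      ports:
      - name: http
        port: 80
        targetPort: 8080
      type: NodePort
      selector:
        name: test3-server
    EOF
    kubectl apply -f test3-server-nodeport.yaml

    # then add test3.kosmos.org to ingress-main.yaml under both tls: and
    # rules: (backend test3-server-nodeport, port 80) and re-apply
    kubectl apply -f ingress-main.yaml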

doc/kubernetes.md

@@ -0,0 +1,71 @@
# Kubernetes / GKE

This Gitea instance is currently hosted on Google Kubernetes Engine.

## Apply changes to resources

```
kubectl apply -f gitea-db.yaml
kubectl apply -f gitea-server.yaml
```

## Write the secrets to the local filesystem

```
./script/get_secrets
```

It writes the secrets (currently the app.ini file, as well as auto-generated
TLS certificates that are only used when no Let's Encrypt cert is available)
to the `kubernetes/config/` folder. These files are not in Git because they
contain credentials.

Once you have edited them locally, you need to delete the secrets stored on
Kubernetes before uploading them again. This is done by this script:

```
./script/replace_secrets
```

## Reuse a released persistent volume

> When you delete a PVC, the corresponding PV becomes `Released`. This PV can contain sensitive data (say credit card numbers) and therefore nobody can ever bind to it, even if it is a PVC with the same name and in the same namespace as the previous one - who knows who's trying to steal the data!
>
> Admin intervention is required here. He has two options:
>
> * Make the PV available to everybody - delete `PV.Spec.ClaimRef`. Such a PV can be bound to any PVC (assuming that capacity, access mode and selectors match)
>
> * Make the PV available to a specific PVC - pre-fill `PV.Spec.ClaimRef` with a pointer to a PVC. Leave the `PV.Spec.ClaimRef.UID` empty, as the PVC does not need to exist at this point and you don't know the PVC's UID. This PV can be bound only to the specified PVC.
>
> @whitecolor, in your case you should be fine by clearing `PV.Spec.ClaimRef.UID` in the PV. Only the re-created PVC (with any UID) can then use the PV. And it's your responsibility that only the right person can craft an appropriate PVC so nobody can steal your data.

https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616
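
A minimal sketch of the suggested fix (`<pv-name>` is a placeholder; look it
up with `kubectl get pv`):

```
# the PV backing the old claim shows up as Released
kubectl get pv

# clear the old claim's UID so a re-created PVC with the same name can bind
kubectl patch pv <pv-name> --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
```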
## Update Gitea

### Released version

Change the image for the gitea-server container
(`kubernetes/gitea-server.yaml`) to `gitea/gitea:TAG`, for example
`gitea/gitea:1.7.0-rc2`.
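
Then apply the change from the `kubernetes/` folder; alternatively,
`kubectl set image` does the same in one step (the tag below is only an
example):

```
cd kubernetes
kubectl apply -f gitea-server.yaml

# or, in one step:
kubectl set image deployment/gitea-server gitea-server=gitea/gitea:1.8.1
```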
### Unreleased version

This is useful to deploy features that are in master but not yet in a release.

    $ docker pull gitea/gitea
    $ docker tag gitea/gitea:latest kosmosorg/gitea:production
    $ docker push kosmosorg/gitea

Set the image for the gitea-server container to `kosmosorg/gitea:production`,
or run this command to force a deployment if it is already set to it:

    $ kubectl patch deployment gitea-server -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"

### Build our own image

At the root of the [gitea repo](https://github.com/go-gitea/gitea):

    $ DOCKER_TAG=production DOCKER_IMAGE=kosmosorg/gitea make docker # builds and tags kosmosorg/gitea:production locally
    $ docker push kosmosorg/gitea

kubernetes/cert-manager.yaml

(File diff suppressed because it is too large: 1,791 lines added)

kubernetes/gitea-db.yaml

@@ -2,6 +2,8 @@ apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: gitea-db
+  labels:
+    app: gitea
 spec:
   replicas: 1
   strategy:
@@ -10,6 +12,7 @@ spec:
     metadata:
       labels:
         name: gitea-db
+        app: gitea
     spec:
       containers:
       - env:
@@ -29,13 +32,19 @@ spec:
           value: gitea
         image: mariadb:10.3.10
         name: gitea-db
-        resources: {}
         ports:
         - containerPort: 3306
           name: mysql
         volumeMounts:
         - mountPath: /var/lib/mysql
           name: gitea-db-data
+        resources:
+          requests:
+            cpu: 250m
+            memory: 150Mi
+          limits:
+            cpu: 500m
+            memory: 300Mi
       restartPolicy: Always
       volumes:
       - name: gitea-db-data
@@ -48,6 +57,7 @@ metadata:
   name: gitea-db-data
   labels:
     name: gitea-db-data
+    app: gitea
 spec:
   accessModes:
   - ReadWriteOnce
@@ -61,6 +71,7 @@ metadata:
   name: gitea-db
   labels:
     service: gitea-db
+    app: gitea
 spec:
   selector:
     name: gitea-db
kubernetes/gitea-ingress.yaml

@@ -0,0 +1,276 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress-tcp
  namespace: default
data:
  "22": "default/gitea-server:22"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true
      nodeSelector:
        role: ingress-controller
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --tcp-services-configmap=$(POD_NAMESPACE)/haproxy-ingress-tcp
        - --sort-backends
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          value: default
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-server-nodeport
  namespace: default
  labels:
    app: gitea
    name: gitea-server
  annotations:
    # add an annotation indicating the issuer to use.
    # TODO: Switch to production when we're ready
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  - name: ssh
    port: 22
    targetPort: 22
    protocol: TCP
  type: NodePort
  selector:
    name: gitea-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitea-ingress
  namespace: default
  labels:
    name: gitea-server
    app: gitea
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    - gitea.kosmos.org
    secretName: gitea-kosmos-org-cert
  rules:
  - host: gitea.kosmos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: gitea-server-nodeport
          servicePort: 3000

kubernetes/gitea-server.yaml

@@ -2,61 +2,61 @@ apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: gitea-server
+  labels:
+    app: gitea
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         name: gitea-server
+        app: gitea
     spec:
       initContainers:
       - name: init-config
-        image: busybox
-        command: ['sh', '-c', 'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && chown -R 1000:1000 /data/gitea']
+        # This is a busybox image with our gitea customizations saved to
+        # /custom, built using ./script/build_customizations_image from the
+        # root of the repo
+        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1
+        command: [
+          'sh', '-c',
+          'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && cp -R /custom/* /data/gitea && chown -R 1000:1000 /data/gitea'
+        ]
         volumeMounts:
         - mountPath: /data
           name: gitea-server-data
         - mountPath: /root/conf
           name: config
       containers:
-      # This is only used for the initial setup, it does nothing once a app.ini
-      # file exists in the conf/ directory of the data directory
-      # (/data/gitea/conf in our case)
-      - env:
-        - name: DB_HOST
-          value: gitea-db:3306
-        - name: DB_NAME
-          value: gitea
-        - name: DB_PASSWD
-          valueFrom:
-            secretKeyRef:
-              name: gitea-mysql-pass
-              key: password
-        - name: DB_TYPE
-          value: mysql
-        - name: DB_USER
-          value: gitea
-        - name: ROOT_URL
-          value: https://gitea.kosmos.org
-        - name: RUN_MODE
-          value: prod
-        - name: SECRET_KEY
-          valueFrom:
-            secretKeyRef:
-              name: gitea-secret-key
-              key: password
-        - name: SSH_DOMAIN
-          value: gitea.kosmos.org
-        image: 5apps/gitea:latest
-        name: gitea-server
+      - name: gitea-server
+        image: gitea/gitea:1.8.1
         ports:
         - containerPort: 3000
-        - containerPort: 3001
         - containerPort: 22
-        resources: {}
+        livenessProbe:
+          httpGet:
+            path: /
+            port: 3000
+            scheme: HTTP
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
+        readinessProbe:
+          httpGet:
+            path: /
+            port: 3000
+            scheme: HTTP
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
         volumeMounts:
         - mountPath: /data
           name: gitea-server-data
+        resources:
+          requests:
+            cpu: 250m
+            memory: 256Mi
+          limits:
+            cpu: 500m
+            memory: 512Mi
       restartPolicy: Always
       volumes:
       - name: gitea-server-data
@@ -80,12 +80,14 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: gitea-server-data
+  labels:
+    app: gitea
 spec:
   accessModes:
   - ReadWriteOnce
   resources:
     requests:
-      storage: 1Gi
+      storage: 20Gi
 ---
 apiVersion: v1
 kind: Service
@@ -93,6 +95,7 @@ metadata:
   name: gitea-server
   labels:
     name: gitea-server
+    app: gitea
 spec:
   type: LoadBalancer
   # preserves the client source IP
@@ -103,9 +106,6 @@ spec:
     targetPort: 22
   - name: "http"
     port: 80
-    targetPort: 3001
-  - name: "https"
-    port: 443
     targetPort: 3000
   selector:
     name: gitea-server

kubernetes/letsencrypt-production.yaml

@@ -0,0 +1,20 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-production-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress

kubernetes/letsencrypt-staging.yaml

@@ -0,0 +1,19 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress

packer/custom.json

@@ -0,0 +1,29 @@
{
  "builders": [{
    "type": "docker",
    "image": "busybox",
    "run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/sh"],
    "commit": true
  }],
  "provisioners": [
    {
      "inline": ["mkdir /custom"],
      "type": "shell"
    },
    {
      "type": "file",
      "source": "../custom/",
      "destination": "/custom"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "eu.gcr.io/fluted-magpie-218106/gitea_custom",
        "tag": "0.1"
      },
      "docker-push"
    ]
  ]
}

script/build_customizations_image

@@ -0,0 +1,7 @@
#!/usr/bin/env bash

# fail fast
set -e

cd packer/
packer build custom.json
cd -

vendor/ark (submodule)

Submodule vendor/ark added at 0fd7872ef4