Compare commits
master ... feature/in

5 Commits:

67d87d7d5b
3cdc07cdf3
91dab0f121
ed48c92e4f
ae8d6a6cf3
3 .gitmodules vendored Normal file

@@ -0,0 +1,3 @@
[submodule "vendor/ark"]
    path = vendor/ark
    url = git@github.com:heptio/ark.git
28 README.md

@@ -1,9 +1,27 @@
 # gitea.kosmos.org
 
 This repository contains configuration files and other assets, that are used to
-deploy and operate this Gitea instance. Feel free to [open
-issues](https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues) for questions,
-suggestions, bugs, to-do items, and whatever else you want to discuss or
-resolve.
-
-See `doc/` folder for some technical info.
+deploy and operate this Gitea instance.
+
+To create a new image containing the customizations:
+
+Edit `packer/custom.json` to increment the tag, then run this script (needs
+[Packer](https://www.packer.io/) in your path)
+
+```
+./script/build_customizations_image
+```
+
+Then edit `kubernetes/gitea-server.yaml` to use the new tag
+(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
+change:
+
+```
+cd kubernetes
+kubectl apply -f gitea-server.yaml
+```
+
+Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
+whatever else you want to discuss or resolve.
+
+[open issues]: https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues
@@ -1,5 +1,4 @@
 #db231d bug ; Something is not working
-#ead746 docs ; Documentation
 #76db1d enhancement ; Improving existing functionality
 #1d76db feature ; New functionality
 #db1d76 idea ; Something to consider
@@ -1,28 +1,36 @@
 # Backups
 
-We're using [Velero][1] (formerly Ark) for backing up Kubernetes config and GKE
-resources. It is available as a compiled binary for your platform [on GitHub][2]
+We're using [Ark][1] for backing up Kubernetes config and GKE resources. It is
+available as a Git submodule in the `vendor/` folder (incl. the `ark`
+executable).
 
-The Velero service is running on the Sidamo cluster and was set up using the
-[official docs' GCP instructions][3]. There's a daily backup
+In order to initialize and update submodules in your local repo, run once:
+
+    git submodule update --init
+
+Then, to fetch/update the modules, run:
+
+    git submodule update
+
+The Ark service is running on the Sidamo cluster and was set up using the
+[official docs' GCP instructions and config files][4]. There's a daily backup
 schedule in effect for Gitea (using the label `app=gitea`).
 
-Please refer to Velero's [ Getting Started ][4] doc for all backup and restore
+Please refer to Ark's [ Getting Started ][5] doc for all backup and restore
 commands.
 
 ## Backup location
 
 Cluster configuration (including all live resources) is backed up to [a Google
-Cloud Storage container][5].
+Cloud Storage container][3].
 
 ## Persistent volumes
 
-Persistent volumes are just GCE disks. Thus, with the current config, Velero
-creates volume snapshots as native [GCE disk snapshots][6].
+Persistent volumes are just GCE disks. Thus, with the current config, Ark
+creates volume snapshots as native [GCE disk snapshots][2].
 
-[1]: https://velero.io/docs/v1.0.0
-[2]: https://github.com/heptio/velero/releases/tag/v1.0.0
-[3]: https://velero.io/docs/v1.0.0/gcp-config/
-[4]: https://velero.io/docs/v1.0.0/about/
-[5]: https://console.cloud.google.com/storage/browser/sidamo-backups-new?project=fluted-magpie-218106&organizationId=772167872692
-[6]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
+[1]: https://heptio.github.io/ark/v0.10.0
+[2]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
+[3]: https://console.cloud.google.com/storage/browser/sidamo-backups?project=fluted-magpie-218106&organizationId=772167872692
+[4]: https://heptio.github.io/ark/v0.10.0/gcp-config
+[5]: https://heptio.github.io/ark/v0.10.0/get-started
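The hunk above mentions a daily backup schedule selected by the `app=gitea` label, but not how it was created. A minimal sketch with the Ark CLI, assuming the flags of the linked v0.10 docs (the schedule time and resource names here are placeholders; verify against the Getting Started doc):

    # daily schedule for everything labelled app=gitea (time is a placeholder)
    ark schedule create gitea-daily --schedule "0 3 * * *" --selector app=gitea

    # an ad-hoc backup and restore would look like this
    ark backup create gitea-manual --selector app=gitea
    ark restore create --from-backup gitea-manual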
@@ -1,20 +0,0 @@
-## Customizations image
-
-### Build
-
-To create a new Docker image containing our Gitea customizations (label sets,
-styles, page content, etc.):
-
-Edit `packer/custom.json` to increment the tag, then run this script (needs
-[Packer](https://www.packer.io/) in your path)
-
-    ./script/build_customizations_image
-
-### Deploy
-
-Edit `kubernetes/gitea-server.yaml` to use the new tag
-(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
-change:
-
-    cd kubernetes
-    kubectl apply -f gitea-server.yaml
180 doc/ingress.md Normal file

@@ -0,0 +1,180 @@
# HTTP(S) load balancing with Ingress

## Resources

Features of GKE Ingress from the Google Cloud docs:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress

It does hostname-aware HTTP(S) load balancing, and is billed like a regular
Load Balancer (https://cloud.google.com/compute/pricing#lb). The advantages are
that we can use one set of firewall rules (ports 80 and 443) for multiple
services, and easy Let's Encrypt certificates for services with no built-in
support for it

This 3 part article was a good resource:

https://medium.com/google-cloud/global-kubernetes-in-3-steps-on-gcp-8a3585ec8547
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-2-demo-cf587765702

I couldn't find information about setting
`ingress.kubernetes.io/rewrite-target` to `/` anywhere else; without it only
`/` worked on a host, all other URLs would go to the default backend and
return a 404.

cert-manager, for automated (among others) Let's Encrypt certificates:
https://docs.cert-manager.io/en/release-0.8/

## Create a global IP

Ephemeral IPs are only regional, and you lose them if you have to recreate the
Ingress

    gcloud compute addresses create ingress-ip --global

## Create the ingress

A ClusterIP will not work, because it is allocating random ports. Explicitly
create a NodePort to expose your service. On GKE, health checks are configured
automatically

    cat <<EOF > test-server-nodeport.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: test-server-nodeport
    spec:
      ports:
      - name: http
        port: 80
        targetPort: 3000
      type: NodePort
      selector:
        name: test-server
    EOF
    kubectl apply -f test-server-nodeport.yaml

Create the ingress resource

    cat <<EOF > ingress-main.yaml
    # A GCE Ingress that uses cert-manager to manage Let's Encrypt certificates
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-main
      annotations:
        # Required, otherwise only the / path works
        # https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
        ingress.kubernetes.io/rewrite-target: /
        certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
        certmanager.k8s.io/acme-challenge-type: http01
        # Created using the following command
        # gcloud compute addresses create ingress-ip --global
        kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
    spec:
      tls:
      - hosts:
        - test.kosmos.org
        secretName: test-kosmos-org-cert
      - hosts:
        - test2.kosmos.org
        secretName: test2-kosmos-org-cert
      rules:
      - host: test.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
      - host: test2.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
    EOF
    kubectl apply -f ingress-main.yaml

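Not part of the original doc, but once the Ingress is applied, plain kubectl can confirm that it picked up the reserved global IP and the configured hosts:

    kubectl get ingress ingress-main
    kubectl describe ingress ingress-main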
## cert-manager

### Create the cert-manager resources

cert-manager provides a Let's Encrypt certificate issuer, and lets you mount it
in an Ingress resource, making it possible to use HTTP ACME challenges

Get the reserved IP you created in the first step:

    $ gcloud compute addresses list --global
    NAME        ADDRESS/RANGE   TYPE      PURPOSE  NETWORK  REGION  SUBNET  STATUS
    ingress-ip  35.244.164.133  EXTERNAL                                    IN_USE

Set the DNS record for the domain you want a Let's Encrypt cert for to this IP.

Now it's time to create the cert-manager:

https://docs.cert-manager.io/en/release-0.8/getting-started/install/kubernetes.html

    kubectl create namespace cert-manager

    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml --validate=false

I had to run the apply command twice for it to create all the resources. On the
first run I got these errors. Running it a second time successfully created all
the resources

    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"

We name the ingress explicitly so it only runs on one. Having only one IP to
set on the DNS records makes the HTTP validation easier. Using the class would
attach the validation endpoint to all Ingresses of that class
(https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1)

    cat <<EOF > letsencrypt-staging.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-staging-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

    cat <<EOF > letsencrypt-production.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-production-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

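The doc stops at writing the two ClusterIssuer manifests; applying and inspecting them would presumably look like this (standard kubectl; the Certificate name is taken from the `secretName` above and may differ in practice):

    kubectl apply -f letsencrypt-staging.yaml
    kubectl apply -f letsencrypt-production.yaml

    # cert-manager should then issue certificates for the hosts listed under tls:
    kubectl get clusterissuers
    kubectl describe certificate test-kosmos-org-cert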
## Add another service

To add another service behind the Ingress, you set the DNS entry for its domain
to the Ingress IP, deploy your service, create a NodePort to expose it, and
finally add its host to the Ingress config (both tls and rules, see example
above)
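As a concrete sketch of that last step, a hypothetical third host would mean one more entry in each of the two lists in `ingress-main.yaml` (the host, secret, and service names below are placeholders):

    tls:
    - hosts:
      - test3.kosmos.org
      secretName: test3-kosmos-org-cert
    rules:
    - host: test3.kosmos.org
      http:
        paths:
        - backend:
            serviceName: test3-server-nodeport
            servicePort: 80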
1791 kubernetes/cert-manager.yaml Normal file

File diff suppressed because it is too large
276 kubernetes/gitea-ingress.yaml Normal file

@@ -0,0 +1,276 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress-tcp
  namespace: default
data:
  "22": "default/gitea-server:22"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true
      nodeSelector:
        role: ingress-controller
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --tcp-services-configmap=$(POD_NAMESPACE)/haproxy-ingress-tcp
        - --sort-backends
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          value: default
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-server-nodeport
  namespace: default
  labels:
    app: gitea
    name: gitea-server
  annotations:
    # add an annotation indicating the issuer to use.
    # TODO: Switch to production when we're ready
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  - name: ssh
    port: 22
    targetPort: 22
    protocol: TCP
  type: NodePort
  selector:
    name: gitea-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitea-ingress
  namespace: default
  labels:
    name: gitea-server
    app: gitea
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    - gitea.kosmos.org
    secretName: gitea-kosmos-org-cert
  rules:
  - host: gitea.kosmos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: gitea-server-nodeport
          servicePort: 3000
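The `haproxy-ingress-tcp` ConfigMap in this file is what forwards Gitea's SSH traffic: each key is a port the controller listens on, and each value points at `<namespace>/<service>:<port>`. Exposing another TCP service would presumably just be one more entry; the second line below is a hypothetical example:

    data:
      "22": "default/gitea-server:22"
      "2222": "default/example-server:22"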
kubernetes/gitea-server.yaml

@@ -1,4 +1,4 @@
-apiVersion: apps/v1
+apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: gitea-server
@@ -6,9 +6,6 @@ metadata:
     app: gitea
 spec:
   replicas: 1
-  selector:
-    matchLabels:
-      app: gitea
   template:
     metadata:
       labels:
@@ -20,7 +17,7 @@ spec:
         # This is a busybox image with our gitea customizations saved to
         # /custom, built using ./script/build_customizations_image from the
         # root of the repo
-        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1.2
+        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1
         command: [
           'sh', '-c',
           'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && cp -R /custom/* /data/gitea && chown -R 1000:1000 /data/gitea'
@@ -32,20 +29,33 @@ spec:
           name: config
       containers:
      - name: gitea-server
-        image: gitea/gitea:1.11.2
+        image: gitea/gitea:1.8.1
         ports:
         - containerPort: 3000
-        - containerPort: 3001
         - containerPort: 22
+        livenessProbe:
+          httpGet:
+            path: /
+            port: 3000
+            scheme: HTTP
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
+        readinessProbe:
+          httpGet:
+            path: /
+            port: 3000
+            scheme: HTTP
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
         volumeMounts:
         - mountPath: /data
           name: gitea-server-data
         resources:
           requests:
-            cpu: 150m
+            cpu: 250m
             memory: 256Mi
           limits:
-            cpu: 250m
+            cpu: 500m
             memory: 512Mi
       restartPolicy: Always
       volumes:
@@ -96,9 +106,6 @@ spec:
     targetPort: 22
   - name: "http"
     port: 80
-    targetPort: 3001
-  - name: "https"
-    port: 443
     targetPort: 3000
   selector:
     name: gitea-server
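After applying a change to this Deployment (`kubectl apply -f gitea-server.yaml`, as the README describes), the rollout and the new probes can be checked with standard kubectl; the `name=gitea-server` label is assumed from the Service selector above:

    kubectl rollout status deployment/gitea-server
    kubectl get pods -l name=gitea-server
    kubectl describe pod -l name=gitea-server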
20 kubernetes/letsencrypt-production.yaml Normal file

@@ -0,0 +1,20 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-production-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress
19 kubernetes/letsencrypt-staging.yaml Normal file

@@ -0,0 +1,19 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress
packer/custom.json

@@ -21,7 +21,7 @@
     {
       "type": "docker-tag",
       "repository": "eu.gcr.io/fluted-magpie-218106/gitea_custom",
-      "tag": "0.1.2"
+      "tag": "0.1"
     },
     "docker-push"
   ]
1 vendor/ark vendored Submodule

@@ -0,0 +1 @@
Subproject commit 0fd7872ef48ce617e561e6e45f8ccb0f11637f58