Compare commits: eba722992f...feature/in (21 commits)

67d87d7d5b
3cdc07cdf3
91dab0f121
ed48c92e4f
ae8d6a6cf3
9f4a5b452c
12fc74d8ff
1d69fad451
f73c58d7ee
68771a8e61
e3de3af82f
490248909b
e0741b4438
8050126d2d
b5bbc5fa34
915fd7db8a
bbfa3f2964
0a60d8831c
cc6f31b4b9
069502d056
278e6a9cd7
.gitignore (vendored, 2 changes)

@@ -1 +1 @@
-/kubernetes/custom/config/
+/kubernetes/config/
README.md (18 additions)

@@ -3,6 +3,24 @@
 This repository contains configuration files and other assets that are used to
 deploy and operate this Gitea instance.
 
+To create a new image containing the customizations:
+
+Edit `packer/custom.json` to increment the tag, then run this script (needs
+[Packer](https://www.packer.io/) in your path):
+
+```
+./script/build_customizations_image
+```
+
+Then edit `kubernetes/gitea-server.yaml` to use the new tag
+(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
+change:
+
+```
+cd kubernetes
+kubectl apply -f gitea-server.yaml
+```
+
 Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
 whatever else you want to discuss or resolve.
doc/ingress.md (new file, 180 lines)

@@ -0,0 +1,180 @@
# HTTP(S) load balancing with Ingress

## Resources

Features of GKE Ingress from the Google Cloud docs:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress

It does hostname-aware HTTP(S) load balancing, and is billed like a regular
Load Balancer (https://cloud.google.com/compute/pricing#lb). The advantages are
that we can use one set of firewall rules (ports 80 and 443) for multiple
services, and get easy Let's Encrypt certificates for services with no
built-in support for them.

This 3-part article was a good resource:

https://medium.com/google-cloud/global-kubernetes-in-3-steps-on-gcp-8a3585ec8547
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-2-demo-cf587765702

I couldn't find information about setting
`ingress.kubernetes.io/rewrite-target` to `/` anywhere else. Without it, only
`/` worked on a host; all other URLs would go to the default backend and
return a 404.

cert-manager, for automated certificates from (among others) Let's Encrypt:
https://docs.cert-manager.io/en/release-0.8/

## Create a global IP

Ephemeral IPs are only regional, and you lose them if you have to recreate the
Ingress:

    gcloud compute addresses create ingress-ip --global

## Create the ingress

A ClusterIP will not work, because it allocates random ports. Explicitly
create a NodePort to expose your service. On GKE, health checks are configured
automatically:

    cat <<EOF > test-server-nodeport.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: test-server-nodeport
    spec:
      ports:
      - name: http
        port: 80
        targetPort: 3000
      type: NodePort
      selector:
        name: test-server
    EOF
    kubectl apply -f test-server-nodeport.yaml

Create the ingress resource:

    cat <<EOF > ingress-main.yaml
    # A GCE Ingress that uses cert-manager to manage Let's Encrypt certificates
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-main
      annotations:
        # Required, otherwise only the / path works
        # https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
        ingress.kubernetes.io/rewrite-target: /
        certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
        certmanager.k8s.io/acme-challenge-type: http01
        # Created using the following command
        # gcloud compute addresses create ingress-ip --global
        kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
    spec:
      tls:
      - hosts:
        - test.kosmos.org
        secretName: test-kosmos-org-cert
      - hosts:
        - test2.kosmos.org
        secretName: test2-kosmos-org-cert
      rules:
      - host: test.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
      - host: test2.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
    EOF
    kubectl apply -f ingress-main.yaml
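Provisioning the GCE load balancer takes a few minutes. The status can be
watched with plain kubectl (the backends show up as healthy once the health
checks pass):

    kubectl describe ingress ingress-main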
## cert-manager

### Create the cert-manager resources

cert-manager provides a Let's Encrypt certificate issuer, and lets you mount
it in an Ingress resource, making it possible to use HTTP ACME challenges.

Get the reserved IP you created in the first step:

    $ gcloud compute addresses list --global
    NAME        ADDRESS/RANGE   TYPE      PURPOSE  NETWORK  REGION  SUBNET  STATUS
    ingress-ip  35.244.164.133  EXTERNAL                                    IN_USE

Set the DNS record for the domain you want a Let's Encrypt cert for to this IP.
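To confirm the record has propagated before requesting a certificate,
something like this works (`test.kosmos.org` standing in for your domain):

    $ dig +short test.kosmos.org
    35.244.164.133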
Now it's time to install cert-manager:

https://docs.cert-manager.io/en/release-0.8/getting-started/install/kubernetes.html

    kubectl create namespace cert-manager
    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml --validate=false

I had to run the apply command twice for it to create all the resources. On
the first run I got these errors; running it a second time successfully
created all the resources:

    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"
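One way to verify the installation is to list the pods in the cert-manager
namespace; with the 0.8 manifest this should show the controller, webhook and
cainjector pods in a Running state:

    kubectl get pods --namespace cert-manager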
We name the Ingress explicitly so the validation endpoint only runs on that
one. Having only one IP to set on the DNS records makes the HTTP validation
easier. Using the class instead would attach the validation endpoint to all
Ingresses of that class
(https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1).

    cat <<EOF > letsencrypt-staging.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-staging-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

    cat <<EOF > letsencrypt-production.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-production-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF
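The two files then need to be applied; a sketch, with `kubectl describe` as a
quick way to check that the issuers registered with the ACME server:

    kubectl apply -f letsencrypt-staging.yaml
    kubectl apply -f letsencrypt-production.yaml
    kubectl describe clusterissuer letsencrypt-production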
## Add another service

To add another service behind the Ingress, you set the DNS entry for its
domain to the Ingress IP, deploy your service, create a NodePort to expose it,
and finally add its host to the Ingress config (both tls and rules; see the
example above and the sketch below).
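For example, adding a hypothetical `test3.kosmos.org` (the hostname and
`test3-server-nodeport` service are made up for illustration) would extend the
two sections of `ingress-main.yaml` like this:

    spec:
      tls:
      # ...existing hosts...
      - hosts:
        - test3.kosmos.org              # hypothetical new host
        secretName: test3-kosmos-org-cert
      rules:
      # ...existing rules...
      - host: test3.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test3-server-nodeport
              servicePort: 80

After `kubectl apply -f ingress-main.yaml`, cert-manager should pick up the
new host and request a certificate for it.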
kubernetes/cert-manager.yaml (new file, 1791 lines)

File diff suppressed because it is too large.
@@ -32,13 +32,19 @@ spec:
           value: gitea
         image: mariadb:10.3.10
         name: gitea-db
-        resources: {}
         ports:
         - containerPort: 3306
           name: mysql
         volumeMounts:
         - mountPath: /var/lib/mysql
           name: gitea-db-data
+        resources:
+          requests:
+            cpu: 250m
+            memory: 150Mi
+          limits:
+            cpu: 500m
+            memory: 300Mi
       restartPolicy: Always
       volumes:
       - name: gitea-db-data
kubernetes/gitea-ingress.yaml (new file, 276 lines)

@@ -0,0 +1,276 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress-tcp
  namespace: default
data:
  "22": "default/gitea-server:22"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true
      nodeSelector:
        role: ingress-controller
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --tcp-services-configmap=$(POD_NAMESPACE)/haproxy-ingress-tcp
        - --sort-backends
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          value: default
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-server-nodeport
  namespace: default
  labels:
    app: gitea
    name: gitea-server
  annotations:
    # add an annotation indicating the issuer to use.
    # TODO: Switch to production when we're ready
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  - name: ssh
    port: 22
    targetPort: 22
    protocol: TCP
  type: NodePort
  selector:
    name: gitea-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitea-ingress
  namespace: default
  labels:
    name: gitea-server
    app: gitea
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    - gitea.kosmos.org
    secretName: gitea-kosmos-org-cert
  rules:
  - host: gitea.kosmos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: gitea-server-nodeport
          servicePort: 3000
@@ -14,31 +14,49 @@ spec:
     spec:
       initContainers:
       - name: init-config
-        image: busybox
+        # This is a busybox image with our gitea customizations saved to
+        # /custom, built using ./script/build_customizations_image from the
+        # root of the repo
+        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1
         command: [
           'sh', '-c',
-          'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && mkdir -p /data/gitea/options/label && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && cp /root/options/label/* /data/gitea/options/label/ && chown -R 1000:1000 /data/gitea'
+          'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && cp -R /custom/* /data/gitea && chown -R 1000:1000 /data/gitea'
         ]
         volumeMounts:
         - mountPath: /data
           name: gitea-server-data
         - mountPath: /root/conf
           name: config
-        # The labels have been created as a ConfigMap from local files using this command:
-        #
-        # kubectl create configmap gitea-options-label --from-file=custom/options/label/
-        - mountPath: /root/options/label
-          name: label
       containers:
       - name: gitea-server
-        image: gitea/gitea:1.7.1
+        image: gitea/gitea:1.8.1
         ports:
         - containerPort: 3000
+        - containerPort: 3001
         - containerPort: 22
+        livenessProbe:
+          httpGet:
+            path: /
+            port: 3000
+            scheme: HTTP
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
+        readinessProbe:
+          httpGet:
+            path: /
+            port: 3000
+            scheme: HTTP
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
         volumeMounts:
         - mountPath: /data
           name: gitea-server-data
+        resources:
+          requests:
+            cpu: 250m
+            memory: 256Mi
+          limits:
+            cpu: 500m
+            memory: 512Mi
       restartPolicy: Always
       volumes:
       - name: gitea-server-data
@@ -57,9 +75,6 @@ spec:
       - key: key.pem
         path: key.pem
         mode: 256
-      - name: label
-        configMap:
-          name: gitea-options-label
 ---
 apiVersion: v1
 kind: PersistentVolumeClaim
@@ -72,7 +87,7 @@ spec:
   - ReadWriteOnce
   resources:
     requests:
-      storage: 1Gi
+      storage: 20Gi
 ---
 apiVersion: v1
 kind: Service
@@ -91,9 +106,6 @@ spec:
     targetPort: 22
   - name: "http"
     port: 80
     targetPort: 3001
-  - name: "https"
-    port: 443
-    targetPort: 3000
   selector:
     name: gitea-server
kubernetes/letsencrypt-production.yaml (new file, 20 lines)

@@ -0,0 +1,20 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-production-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress
kubernetes/letsencrypt-staging.yaml (new file, 19 lines)

@@ -0,0 +1,19 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress
packer/custom.json (new file, 29 lines)

@@ -0,0 +1,29 @@
{
  "builders": [{
    "type": "docker",
    "image": "busybox",
    "run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/sh"],
    "commit": true
  }],
  "provisioners": [
    {
      "inline": ["mkdir /custom"],
      "type": "shell"
    },
    {
      "type": "file",
      "source": "../custom/",
      "destination": "/custom"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "eu.gcr.io/fluted-magpie-218106/gitea_custom",
        "tag": "0.1"
      },
      "docker-push"
    ]
  ]
}
script/build_customizations_image (new executable file, 7 lines)

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
# fail fast
set -e

cd packer/
packer build custom.json
cd -
@@ -7,7 +7,7 @@ secret = `kubectl get secret gitea-config -o yaml`
 yaml = YAML.load(secret)
 
 yaml['data'].each do |key, data|
-  filename = File.join('kubernetes', 'custom', 'config', key)
+  filename = File.join('kubernetes', 'config', key)
   File.open(filename, "w+") do |f|
     puts "Writing #{filename}"
     f.write Base64.decode64(data)
@@ -2,8 +2,8 @@
 
 # Delete the gitea-config secrets
 kubectl delete secret gitea-config
-# Replace it from the local files in kubernetes/custom/config/* (acquired by running
+# Replace it from the local files in kubernetes/config/* (acquired by running
 # ./script/get_secrets)
-kubectl create secret generic gitea-config --from-file=cert.pem=kubernetes/custom/config/cert.pem --from-file=key.pem=kubernetes/custom/config/key.pem --from-file=app.ini=kubernetes/custom/config/app.ini
+kubectl create secret generic gitea-config --from-file=cert.pem=kubernetes/config/cert.pem --from-file=key.pem=kubernetes/config/key.pem --from-file=app.ini=kubernetes/config/app.ini
 # Force the pod to restart by patching the deployment resource
 kubectl patch deployment gitea-server -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"