4 Commits

Author SHA1 Message Date
122cb1232c Switch to latest Drone build
Looks like the resource limit support from drone-runtime wasn't in -rc5.
2019-03-04 15:41:11 +07:00
69f62182a1 Add resource requests and limits for Drone 2019-03-04 13:38:10 +07:00
08cd2ad211 Fix rbac role
Drone is using the "default" service account.
2019-03-03 14:11:59 +07:00
30c3f47afd Initial Drone CI configs 2019-03-03 12:59:07 +07:00
15 changed files with 121 additions and 2403 deletions

View File

@@ -3,24 +3,6 @@
This repository contains configuration files and other assets that are used to
deploy and operate this Gitea instance.

To create a new image containing the customizations:

Edit `packer/custom.json` to increment the tag, then run this script (requires
[Packer](https://www.packer.io/) in your `PATH`):
```
./script/build_customizations_image
```
Then edit `kubernetes/gitea-server.yaml` to use the new tag
(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
change:
```
cd kubernetes
kubectl apply -f gitea-server.yaml
```
Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
whatever else you want to discuss or resolve.

View File

@@ -1,11 +0,0 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed

View File

@@ -1,14 +0,0 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#008080 kredits-1 ; Small contribution
#008080 kredits-2 ; Medium contribution
#008080 kredits-3 ; Large contribution
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed

View File

@@ -1,180 +0,0 @@
# HTTP(S) load balancing with Ingress

## Resources

Features of GKE Ingress, from the Google Cloud docs:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress

It does hostname-aware HTTP(S) load balancing and is billed like a regular
Load Balancer (https://cloud.google.com/compute/pricing#lb). The advantages are
that we can use one set of firewall rules (ports 80 and 443) for multiple
services, and get easy Let's Encrypt certificates for services with no
built-in support for them.
This three-part article series was a good resource:
https://medium.com/google-cloud/global-kubernetes-in-3-steps-on-gcp-8a3585ec8547
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-2-demo-cf587765702
I couldn't find information about setting
`ingress.kubernetes.io/rewrite-target` to `/` anywhere else. Without it, only
`/` worked on a host; all other URLs went to the default backend and returned
a 404.
cert-manager, which automates (among other things) Let's Encrypt certificates:
https://docs.cert-manager.io/en/release-0.8/
## Create a global IP

Ephemeral IPs are only regional, and you lose them if you have to recreate the
Ingress:

    gcloud compute addresses create ingress-ip --global
## Create the ingress

A ClusterIP will not work, because it allocates random ports. Explicitly
create a NodePort to expose your service. On GKE, health checks are configured
automatically:

    cat <<EOF > test-server-nodeport.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: test-server-nodeport
    spec:
      ports:
      - name: http
        port: 80
        targetPort: 3000
      type: NodePort
      selector:
        name: test-server
    EOF
    kubectl apply -f test-server-nodeport.yaml
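Before wiring the service into the Ingress, it can be worth confirming which
node port was allocated (service name from the example above); the PORT(S)
column shows the randomly assigned port the load balancer will target:

    kubectl get service test-server-nodeport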
Create the Ingress resource (note: the second tls entry below is reconstructed
as its own `hosts` list, which is what the flattened original appears to
intend):

    cat <<EOF > ingress-main.yaml
    # A GCE Ingress that uses cert-manager to manage Let's Encrypt certificates
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-main
      annotations:
        # Required, otherwise only the / path works
        # https://medium.com/google-cloud/global-ingress-in-practice-on-google-container-engine-part-1-discussion-ccc1e5b27bd0
        ingress.kubernetes.io/rewrite-target: /
        certmanager.k8s.io/cluster-issuer: "letsencrypt-production"
        certmanager.k8s.io/acme-challenge-type: http01
        # Created using the following command
        # gcloud compute addresses create ingress-ip --global
        kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
    spec:
      tls:
      - hosts:
        - test.kosmos.org
        secretName: test-kosmos-org-cert
      - hosts:
        - test2.kosmos.org
        secretName: test2-kosmos-org-cert
      rules:
      - host: test.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
      - host: test2.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test-server-nodeport
              servicePort: 80
    EOF
    kubectl apply -f ingress-main.yaml
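Provisioning the GCE load balancer takes several minutes. A way to watch
progress, using the resource name from above:

    # The ADDRESS column should show the reserved global IP
    kubectl get ingress ingress-main
    # On GKE this also lists the backends and their health status
    kubectl describe ingress ingress-main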
## cert-manager

### Create the cert-manager resources

cert-manager provides a Let's Encrypt certificate issuer, and lets you mount it
in an Ingress resource, making it possible to use HTTP ACME challenges.

Get the reserved IP you created in the first step:

    $ gcloud compute addresses list --global
    NAME        ADDRESS/RANGE   TYPE      PURPOSE  NETWORK  REGION  SUBNET  STATUS
    ingress-ip  35.244.164.133  EXTERNAL                                    IN_USE

Set the DNS record for the domain you want a Let's Encrypt cert for to this IP.
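If the zone happens to be managed with Cloud DNS, the record can be set with
something like the sketch below; the zone name `kosmos-org` and the host are
assumptions for illustration, not from this repo:

    gcloud dns record-sets transaction start --zone=kosmos-org
    gcloud dns record-sets transaction add "35.244.164.133" \
      --name=test.kosmos.org. --type=A --ttl=300 --zone=kosmos-org
    gcloud dns record-sets transaction execute --zone=kosmos-org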
Now it's time to install cert-manager:
https://docs.cert-manager.io/en/release-0.8/getting-started/install/kubernetes.html

    kubectl create namespace cert-manager
    kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml --validate=false

I had to run the apply command twice for it to create all the resources; on
the first run I got the errors below, and running it a second time
successfully created everything:

    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Issuer" in version "certmanager.k8s.io/v1alpha1"
    unable to recognize "cert-manager.yaml": no matches for kind "Certificate" in version "certmanager.k8s.io/v1alpha1"
We name the Ingress explicitly so the solver only runs on one of them. Having
only one IP to set on the DNS records makes the HTTP validation easier; using
the class instead would attach the validation endpoint to all Ingresses of
that class
(https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1).

    cat <<EOF > letsencrypt-staging.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-staging
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-staging-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-staging-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF

    cat <<EOF > letsencrypt-production.yaml
    apiVersion: certmanager.k8s.io/v1alpha1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-production
    spec:
      acme:
        # Let's Encrypt will use this to contact you about expiring
        # certificates, and issues related to your account.
        email: ops@kosmos.org
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          # Secret resource used to store the account's private key.
          name: letsencrypt-production-account-key
        solvers:
        - http01:
            ingress:
              name: ingress-main
    EOF
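Both issuers then still have to be applied to the cluster, presumably with:

    kubectl apply -f letsencrypt-staging.yaml
    kubectl apply -f letsencrypt-production.yaml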
## Add another service

To add another service behind the Ingress, set the DNS entry for its domain to
the Ingress IP, deploy your service, create a NodePort to expose it, and
finally add its host to the Ingress config (both tls and rules; see the
example above and the sketch below).
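A sketch of the additional entries, assuming a hypothetical third service
exposed through a `test3-server-nodeport` NodePort:

      tls:
      # ... existing entries ...
      - hosts:
        - test3.kosmos.org
        secretName: test3-kosmos-org-cert
      rules:
      # ... existing rules ...
      - host: test3.kosmos.org
        http:
          paths:
          - backend:
              serviceName: test3-server-nodeport
              servicePort: 80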

File diff suppressed because it is too large

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kosmos-drone-rbac
subjects:
- kind: ServiceAccount
  name: default
  namespace: kosmos
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
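This binding grants the "default" service account in the kosmos namespace
cluster-admin rights, which per the commit message is what Drone uses to
schedule build pods. A quick way to verify the binding took effect:

    kubectl auth can-i create pods \
      --as=system:serviceaccount:kosmos:default --namespace=kosmos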

View File

@@ -0,0 +1,91 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kosmos-drone-server
  namespace: kosmos
  labels:
    app: kosmos-drone
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kosmos-drone-server
        app: kosmos-drone
    spec:
      containers:
      - name: kosmos-drone-server
        image: drone/drone:latest
        imagePullPolicy: Always
        env:
        - name: DRONE_KUBERNETES_ENABLED
          value: "true"
        - name: DRONE_KUBERNETES_NAMESPACE
          value: kosmos
        - name: DRONE_GITEA_SERVER
          value: https://gitea.kosmos.org
        - name: DRONE_RPC_SECRET
          value: 0500c55b6ae97a7f1e7c207477698b6d
        - name: DRONE_SERVER_HOST
          value: drone.kosmos.org
        - name: DRONE_SERVER_PROTO
          value: https
        - name: DRONE_TLS_AUTOCERT
          value: "true"
        - name: DRONE_ADMIN
          value: raucao,gregkare,galfert
        - name: DRONE_LOGS_DEBUG
          value: "true"
        volumeMounts:
        - mountPath: /var/lib/drone
          name: kosmos-drone-data
        ports:
        - containerPort: 80
        - containerPort: 443
        resources:
          requests:
            cpu: 50m
            memory: 50Mi
          limits:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: kosmos-drone-data
        persistentVolumeClaim:
          claimName: kosmos-drone-data
      restartPolicy: Always
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kosmos-drone-data
  namespace: kosmos
  labels:
    app: kosmos-drone
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3000Mi
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: kosmos-drone-server
  namespace: kosmos
  labels:
    name: kosmos-drone-server
    app: kosmos-drone
spec:
  type: LoadBalancer
  ports:
  - name: "http"
    port: 80
    targetPort: 80
  - name: "https"
    port: 443
    targetPort: 443
  selector:
    name: kosmos-drone-server
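Assuming hypothetical file names for the three new manifests (the actual names
in the repo may differ), applying them namespace-first would look like:

    kubectl apply -f kubernetes/kosmos-namespace.yaml
    kubectl apply -f kubernetes/drone-rbac.yaml
    kubectl apply -f kubernetes/drone-server.yaml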

View File

@@ -32,19 +32,13 @@ spec:
          value: gitea
        image: mariadb:10.3.10
        name: gitea-db
        resources: {}
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: gitea-db-data
        resources:
          requests:
            cpu: 250m
            memory: 150Mi
          limits:
            cpu: 500m
            memory: 300Mi
      restartPolicy: Always
      volumes:
      - name: gitea-db-data

View File

@@ -1,276 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress-tcp
  namespace: default
data:
  "22": "default/gitea-server:22"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true
      nodeSelector:
        role: ingress-controller
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --tcp-services-configmap=$(POD_NAMESPACE)/haproxy-ingress-tcp
        - --sort-backends
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          value: default
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-server-nodeport
  namespace: default
  labels:
    app: gitea
    name: gitea-server
  annotations:
    # add an annotation indicating the issuer to use.
    # TODO: Switch to production when we're ready
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  - name: ssh
    port: 22
    targetPort: 22
    protocol: TCP
  type: NodePort
  selector:
    name: gitea-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitea-ingress
  namespace: default
  labels:
    name: gitea-server
    app: gitea
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  tls:
  - hosts:
    - gitea.kosmos.org
    secretName: gitea-kosmos-org-cert
  rules:
  - host: gitea.kosmos.org
    http:
      paths:
      - path: /
        backend:
          serviceName: gitea-server-nodeport
          servicePort: 3000
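The haproxy-ingress-tcp ConfigMap above forwards port 22 on the ingress nodes
to Gitea's SSH port, so once DNS points at an ingress node both protocols can
be smoke-tested (a sketch, not from the repo):

    # HTTPS, terminated by HAProxy with the cert-manager certificate
    curl -I https://gitea.kosmos.org
    # SSH, forwarded through the TCP services ConfigMap
    ssh -T git@gitea.kosmos.org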

View File

@@ -14,49 +14,26 @@ spec:
    spec:
      initContainers:
      - name: init-config
        # This is a busybox image with our gitea customizations saved to
        # /custom, built using ./script/build_customizations_image from the
        # root of the repo
        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1
        command: [
          'sh', '-c',
          'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && cp -R /custom/* /data/gitea && chown -R 1000:1000 /data/gitea'
        ]
        image: busybox
        command: ['sh', '-c', 'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && chown -R 1000:1000 /data/gitea']
        volumeMounts:
        - mountPath: /data
          name: gitea-server-data
        - mountPath: /root/conf
          name: config
      containers:
      # This is only used for the initial setup; it does nothing once an
      # app.ini file exists in the conf/ directory of the data directory
      # (/data/gitea/conf in our case)
      - name: gitea-server
        image: gitea/gitea:1.8.1
        image: gitea/gitea:1.7.1
        ports:
        - containerPort: 3000
        - containerPort: 3001
        - containerPort: 22
        livenessProbe:
          httpGet:
            path: /
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /data
          name: gitea-server-data
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
      restartPolicy: Always
      volumes:
      - name: gitea-server-data
@@ -87,7 +64,7 @@ spec:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
      storage: 1Gi
---
apiVersion: v1
kind: Service
@@ -106,6 +83,9 @@ spec:
    targetPort: 22
  - name: "http"
    port: 80
    targetPort: 3001
  - name: "https"
    port: 443
    targetPort: 3000
  selector:
    name: gitea-server

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: kosmos
  labels:
    app: kosmos

View File

@@ -1,20 +0,0 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-production-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress

View File

@@ -1,19 +0,0 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ops@kosmos.org
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using the gitea-ingress
    # https://docs.cert-manager.io/en/latest/reference/api-docs/index.html#acmechallengesolverhttp01ingress-v1alpha1
    solvers:
    - http01:
        ingress:
          name: gitea-ingress
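After applying the issuers, certificate progress can be followed with standard
kubectl queries; the Certificate name below assumes ingress-shim names it
after the gitea-kosmos-org-cert secret:

    kubectl get clusterissuers
    kubectl describe certificate gitea-kosmos-org-cert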

View File

@@ -1,29 +0,0 @@
{
  "builders": [{
    "type": "docker",
    "image": "busybox",
    "run_command": ["-d", "-i", "-t", "{{.Image}}", "/bin/sh"],
    "commit": true
  }],
  "provisioners": [
    {
      "inline": ["mkdir /custom"],
      "type": "shell"
    },
    {
      "type": "file",
      "source": "../custom/",
      "destination": "/custom"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "eu.gcr.io/fluted-magpie-218106/gitea_custom",
        "tag": "0.1"
      },
      "docker-push"
    ]
  ]
}

View File

@@ -1,7 +0,0 @@
#!/usr/bin/env bash
# fail fast
set -e
cd packer/
packer build custom.json
cd -