22 Commits

Author SHA1 Message Date
122cb1232c Switch to latest Drone build
Looks like the resource limit support from drone-runtime wasn't in -rc5.
2019-03-04 15:41:11 +07:00
69f62182a1 Add resource requests and limits for Drone 2019-03-04 13:38:10 +07:00
08cd2ad211 Fix rbac role
Drone is using the "default" service account.
2019-03-03 14:11:59 +07:00
30c3f47afd Initial Drone CI configs 2019-03-03 12:59:07 +07:00
9ef15325cc Merge branch 'chore/upgrade_gitea' of kosmos/gitea.kosmos.org into master 2019-02-03 05:29:39 +00:00
526f4b9035 Upgrade Gitea to 1.7.1 2019-02-03 12:28:21 +07:00
43ad6f842b Merge branch 'docs/update' of kosmos/gitea.kosmos.org into master 2019-01-28 22:42:17 +00:00
Greg Karékinian
34068bc7ac Add docs about building our own images 2019-01-25 16:52:17 +01:00
28b73f88a8 Use 1.7 release of Gitea 2019-01-09 07:55:13 +08:00
Greg Karékinian
8a2d491e45 Add documentation about updating Gitea 2019-01-08 12:14:41 +01:00
gregkare
8073861775 Merge branch 'feature/5-backups' of kosmos/gitea.kosmos.org into master 2019-01-07 11:02:37 +00:00
Greg Karékinian
78bccff685 Use the git submodule update command with the --init flag in the docs 2019-01-07 12:01:49 +01:00
cef013a40a Update backup docs 2019-01-05 11:23:17 +08:00
3692204ce4 Add app label for all Gitea resources
This way one can address them all at once, like e.g. for Ark backups.
2019-01-05 11:09:25 +08:00
a16143a3f4 Add docs for Ark dependency 2019-01-05 10:22:48 +08:00
c3bf234cba Add Ark as submodule
Heptio Ark is a Kubernetes backup solution. See docs.
2019-01-05 10:14:49 +08:00
9e8370f577 Add backup doc 2019-01-02 12:50:14 +08:00
8496b19ec5 Update 'doc/kubernetes.md' 2019-01-02 04:20:49 +00:00
4a43305a35 Merge branch 'docs/kubernetes' of kosmos/gitea.kosmos.org into master 2018-12-24 08:05:03 +00:00
gregkare
8bb6bddb00 Merge branch 'feature/6-remove_init_env_vars' of kosmos/gitea.kosmos.org into master 2018-12-17 10:36:45 +00:00
Greg Karékinian
bf62157f26 Remove the init environment variables
They were never used since we create an ini config file before starting
the container

Refs #6
2018-12-17 11:34:15 +01:00
0cf7ba527e Move Kubernetes docs out of README 2018-12-14 18:12:39 +00:00
10 changed files with 233 additions and 59 deletions

.gitmodules vendored Normal file (+3)

@@ -0,0 +1,3 @@
[submodule "vendor/ark"]
	path = vendor/ark
	url = git@github.com:heptio/ark.git

README.md (-31)

@@ -7,34 +7,3 @@ Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
whatever else you want to discuss or resolve.
[open issues]: https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues
## Kubernetes
### Apply changes to resources
```
kubectl apply -f gitea-db.yaml
kubectl apply -f gitea-server.yaml
```
### Write the secrets to the local filesystem
```
./script/get_secrets
```
It writes the secrets (currently the app.ini file, as well as auto-generated
TLS certificates that are only used when no Let's Encrypt cert is available)
to the `kubernetes/config/` folder. These files are not in Git because they
contain credentials.
Once you have edited them locally, you need to delete the secrets stored on
Kubernetes before uploading them again. The following script takes care of both steps:
```
./script/replace_secrets
```
### Reuse a released persistent volume
https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616

doc/backup-and-restore.md Normal file (+36)

@@ -0,0 +1,36 @@
# Backups
We're using [Ark][1] for backing up Kubernetes config and GKE resources. It is
available as a Git submodule in the `vendor/` folder (incl. the `ark`
executable).
In order to initialize and update the submodules in your local repo, run once:

    git submodule update --init

Then, to fetch/update the modules later, run:

    git submodule update
The Ark service is running on the Sidamo cluster and was set up using the
[official docs' GCP instructions and config files][4]. There's a daily backup
schedule in effect for Gitea (using the label `app=gitea`).
Please refer to Ark's [Getting Started][5] doc for all backup and restore
commands.
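For orientation, the day-to-day commands look roughly like this (backup and
schedule names are illustrative):

    # list existing backups and schedules
    ark backup get
    ark schedule get

    # create an ad-hoc backup of all Gitea resources
    ark backup create gitea-manual --selector app=gitea

    # the daily schedule was created along these lines (time is an example)
    ark schedule create gitea-daily --schedule "0 1 * * *" --selector app=gitea

    # restore from an existing backup
    ark restore create --from-backup gitea-manual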
## Backup location
Cluster configuration (including all live resources) is backed up to [a Google
Cloud Storage bucket][3].
## Persistent volumes
Persistent volumes are just GCE disks. Thus, with the current config, Ark
creates volume snapshots as native [GCE disk snapshots][2].
[1]: https://heptio.github.io/ark/v0.10.0
[2]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
[3]: https://console.cloud.google.com/storage/browser/sidamo-backups?project=fluted-magpie-218106&organizationId=772167872692
[4]: https://heptio.github.io/ark/v0.10.0/gcp-config
[5]: https://heptio.github.io/ark/v0.10.0/get-started

doc/kubernetes.md Normal file (+71)

@@ -0,0 +1,71 @@
# Kubernetes / GKE
This Gitea instance is currently hosted on Google Kubernetes Engine.
## Apply changes to resources
```
kubectl apply -f gitea-db.yaml
kubectl apply -f gitea-server.yaml
```
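To verify that a change has finished rolling out, `kubectl rollout status`
can be used:
```
kubectl rollout status deployment gitea-server
```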
## Write the secrets to the local filesystem
```
./script/get_secrets
```
It writes the secrets (currently the app.ini file, as well as auto-generated
TLS certificates that are only used when no Let's Encrypt cert is available)
to the `kubernetes/config/` folder. These files are not in Git because they
contain credentials.
Once you have edited them locally, you need to delete the secrets stored on
Kubernetes before uploading them again. The following script takes care of both steps:
```
./script/replace_secrets
```
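Under the hood this boils down to deleting and re-creating each secret. A
minimal sketch, assuming a secret named `gitea-app-ini` that holds the app.ini
file (hypothetical name; the real ones live in the script):
```
# delete the stored secret, then re-create it from the local file
kubectl delete secret gitea-app-ini
kubectl create secret generic gitea-app-ini \
  --from-file=app.ini=kubernetes/config/app.ini
```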
## Reuse a released persistent volume
> When you delete a PVC, the corresponding PV becomes `Released`. This PV can contain sensitive data (say credit card numbers) and therefore nobody can ever bind to it, even if it is a PVC with the same name and in the same namespace as the previous one - who knows who's trying to steal the data!
>
> Admin intervention is required here. He has two options:
>
> * Make the PV available to everybody - delete `PV.Spec.ClaimRef`. Such a PV can be bound to any PVC (assuming that capacity, access mode and selectors match)
>
> * Make the PV available to a specific PVC - pre-fill `PV.Spec.ClaimRef` with a pointer to a PVC. Leave `PV.Spec.ClaimRef.UID` empty, as the PVC does not need to exist at this point and you don't know the PVC's UID. This PV can be bound only to the specified PVC.
>
> @whitecolor, in your case you should be fine by clearing `PV.Spec.ClaimRef.UID` in the PV. Only the re-created PVC (with any UID) can then use the PV. And it's your responsibility that only the right person can craft the appropriate PVC so nobody can steal your data.
https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616
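In practice this means patching the PV by hand. A minimal sketch, assuming the
released volume is named `gitea-server-data-pv` (hypothetical name):
```
# clear the stale UID so a re-created PVC can bind to the volume again
kubectl patch pv gitea-server-data-pv --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
```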
## Update Gitea
### Released version
Change the image for the gitea-server container
(`kubernetes/gitea-server.yaml`) to `gitea/gitea:TAG`, for example:
`gitea/gitea:1.7.0-rc2`
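The relevant part of the container spec then reads (version shown as an
example):
```
containers:
- name: gitea-server
  image: gitea/gitea:1.7.0-rc2
```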
### Unreleased version
This is useful to deploy features that are in master but not yet in a release.

    $ docker pull gitea/gitea
    $ docker tag gitea/gitea:latest kosmosorg/gitea:production
    $ docker push kosmosorg/gitea

Set the image for the gitea-server container to `kosmosorg/gitea:production`, or
run this command to force a deployment if it is already set to it:

    $ kubectl patch deployment gitea-server -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
### Build our own image
At the root of the [gitea repo](https://github.com/go-gitea/gitea):

    # builds and tags kosmosorg/gitea:production locally
    $ DOCKER_TAG=production DOCKER_IMAGE=kosmosorg/gitea make docker
    $ docker push kosmosorg/gitea

Drone RBAC ClusterRoleBinding (new file, +12)

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kosmos-drone-rbac
subjects:
- kind: ServiceAccount
  name: default
  namespace: kosmos
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Drone server Deployment, PersistentVolumeClaim and Service (new file, +91)

@@ -0,0 +1,91 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kosmos-drone-server
  namespace: kosmos
  labels:
    app: kosmos-drone
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kosmos-drone-server
        app: kosmos-drone
    spec:
      containers:
      - name: kosmos-drone-server
        image: drone/drone:latest
        imagePullPolicy: Always
        env:
        - name: DRONE_KUBERNETES_ENABLED
          value: "true"
        - name: DRONE_KUBERNETES_NAMESPACE
          value: kosmos
        - name: DRONE_GITEA_SERVER
          value: https://gitea.kosmos.org
        - name: DRONE_RPC_SECRET
          value: 0500c55b6ae97a7f1e7c207477698b6d
        - name: DRONE_SERVER_HOST
          value: drone.kosmos.org
        - name: DRONE_SERVER_PROTO
          value: https
        - name: DRONE_TLS_AUTOCERT
          value: "true"
        - name: DRONE_ADMIN
          value: raucao,gregkare,galfert
        - name: DRONE_LOGS_DEBUG
          value: "true"
        volumeMounts:
        - mountPath: /var/lib/drone
          name: kosmos-drone-data
        ports:
        - containerPort: 80
        - containerPort: 443
        resources:
          requests:
            cpu: 50m
            memory: 50Mi
          limits:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: kosmos-drone-data
        persistentVolumeClaim:
          claimName: kosmos-drone-data
      restartPolicy: Always
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kosmos-drone-data
  namespace: kosmos
  labels:
    app: kosmos-drone
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3000Mi
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: kosmos-drone-server
  namespace: kosmos
  labels:
    name: kosmos-drone-server
    app: kosmos-drone
spec:
  type: LoadBalancer
  ports:
  - name: "http"
    port: 80
    targetPort: 80
  - name: "https"
    port: 443
    targetPort: 443
  selector:
    name: kosmos-drone-server

kubernetes/gitea-db.yaml

@@ -2,6 +2,8 @@ apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: gitea-db
+  labels:
+    app: gitea
 spec:
   replicas: 1
   strategy:
@@ -10,6 +12,7 @@ spec:
     metadata:
       labels:
         name: gitea-db
+        app: gitea
     spec:
       containers:
       - env:
@@ -48,6 +51,7 @@ metadata:
   name: gitea-db-data
   labels:
     name: gitea-db-data
+    app: gitea
 spec:
   accessModes:
   - ReadWriteOnce
@@ -61,6 +65,7 @@ metadata:
   name: gitea-db
   labels:
     service: gitea-db
+    app: gitea
 spec:
   selector:
     name: gitea-db

kubernetes/gitea-server.yaml

@@ -2,12 +2,15 @@ apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: gitea-server
+  labels:
+    app: gitea
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         name: gitea-server
+        app: gitea
     spec:
       initContainers:
       - name: init-config
@@ -22,38 +25,12 @@ spec:
       # This is only used for the initial setup, it does nothing once an app.ini
       # file exists in the conf/ directory of the data directory
       # (/data/gitea/conf in our case)
-      - env:
-        - name: DB_HOST
-          value: gitea-db:3306
-        - name: DB_NAME
-          value: gitea
-        - name: DB_PASSWD
-          valueFrom:
-            secretKeyRef:
-              name: gitea-mysql-pass
-              key: password
-        - name: DB_TYPE
-          value: mysql
-        - name: DB_USER
-          value: gitea
-        - name: ROOT_URL
-          value: https://gitea.kosmos.org
-        - name: RUN_MODE
-          value: prod
-        - name: SECRET_KEY
-          valueFrom:
-            secretKeyRef:
-              name: gitea-secret-key
-              key: password
-        - name: SSH_DOMAIN
-          value: gitea.kosmos.org
-        image: 5apps/gitea:latest
-        name: gitea-server
+      - name: gitea-server
+        image: gitea/gitea:1.7.1
         ports:
         - containerPort: 3000
         - containerPort: 3001
        - containerPort: 22
-        resources: {}
         volumeMounts:
         - mountPath: /data
           name: gitea-server-data
@@ -80,6 +57,8 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: gitea-server-data
+  labels:
+    app: gitea
 spec:
   accessModes:
   - ReadWriteOnce
@@ -93,6 +72,7 @@ metadata:
   name: gitea-server
   labels:
     name: gitea-server
+    app: gitea
 spec:
   type: LoadBalancer
   # preserves the client source IP

kosmos Namespace (new file, +6)

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: kosmos
  labels:
    app: kosmos

vendor/ark vendored Submodule (+1)

Submodule vendor/ark added at 0fd7872ef4