Compare commits: 21238a032d ... dev/kubern (1 commit)

Commit SHA1: 199b3e94cf
.gitmodules (vendored, 3 lines changed)
@@ -1,3 +0,0 @@
[submodule "vendor/ark"]
	path = vendor/ark
	url = git@github.com:heptio/ark.git
README.md (31 lines changed)
@@ -7,3 +7,34 @@ Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
whatever else you want to discuss or resolve.

[open issues]: https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues

## Kubernetes

### Apply changes to resources

```
kubectl apply -f gitea-db.yaml
kubectl apply -f gitea-server.yaml
```

### Write the secrets to the local filesystem

```
./script/get_secrets
```

It writes the secrets (currently the app.ini file, as well as auto-generated
TLS certificates that are only used when no Let's Encrypt cert is available)
to the `kubernetes/config/` folder. These files are not in Git because they
contain credentials.

Once you have edited them locally, you need to delete the secrets stored on
Kubernetes before uploading them again. This is done by this script:

```
./script/replace_secrets
```
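The contents of `./script/replace_secrets` are not part of this diff. A minimal sketch of the delete-then-recreate step it performs, assuming a hypothetical secret name `gitea-app-ini` fed from the `kubernetes/config/` folder, could look like:

```shell
#!/bin/sh
# Hypothetical sketch; the actual ./script/replace_secrets is not shown in this diff.
# A secret must be deleted before it can be re-created from the locally edited files.
set -e

# secret name and file path are assumptions for illustration
kubectl --namespace gitea delete secret gitea-app-ini --ignore-not-found
kubectl --namespace gitea create secret generic gitea-app-ini \
  --from-file=kubernetes/config/app.ini
```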

### Reuse a released persistent volume:

https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616
@@ -1,11 +0,0 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed
@@ -1,14 +0,0 @@
#db231d bug ; Something is not working
#76db1d enhancement ; Improving existing functionality
#1d76db feature ; New functionality
#db1d76 idea ; Something to consider
#db1d76 question ; Looking for an answer
#fbca04 security ; All your base are belong to us
#1dd5db ui/ux ; User interface, process design, etc.
#333333 dev environment ; Config, builds, CI, deployment, etc.
#008080 kredits-1 ; Small contribution
#008080 kredits-2 ; Medium contribution
#008080 kredits-3 ; Large contribution
#cccccc duplicate ; This issue or pull request already exists
#cccccc invalid ; Not a bug
#cccccc wontfix ; This won't be fixed
@@ -1,36 +0,0 @@
# Backups

We're using [Ark][1] for backing up Kubernetes config and GKE resources. It is
available as a Git submodule in the `vendor/` folder (incl. the `ark`
executable).

In order to initialize and update submodules in your local repo, run once:

    git submodule update --init

Then, to fetch/update the modules, run:

    git submodule update

The Ark service is running on the Sidamo cluster and was set up using the
[official docs' GCP instructions and config files][4]. There's a daily backup
schedule in effect for Gitea (using the label `app=gitea`).
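The schedule definition itself isn't shown in this diff; with Ark v0.10's CLI, a daily schedule selecting the Gitea resources could have been created roughly like this (the schedule name and cron expression are assumptions, only the `app=gitea` selector is documented above):

```shell
# assumed invocation; creates a daily backup of everything labeled app=gitea
ark schedule create gitea-daily \
  --schedule "0 0 * * *" \
  --selector app=gitea
```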
Please refer to Ark's [Getting Started][5] doc for all backup and restore
commands.

## Backup location

Cluster configuration (including all live resources) is backed up to [a Google
Cloud Storage container][3].

## Persistent volumes

Persistent volumes are just GCE disks. Thus, with the current config, Ark
creates volume snapshots as native [GCE disk snapshots][2].

[1]: https://heptio.github.io/ark/v0.10.0
[2]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
[3]: https://console.cloud.google.com/storage/browser/sidamo-backups?project=fluted-magpie-218106&organizationId=772167872692
[4]: https://heptio.github.io/ark/v0.10.0/gcp-config
[5]: https://heptio.github.io/ark/v0.10.0/get-started
@@ -1,43 +0,0 @@
# Kubernetes / GKE

This Gitea instance is currently hosted on Google Kubernetes Engine.

## Apply changes to resources

```
kubectl apply -f gitea-db.yaml
kubectl apply -f gitea-server.yaml
```

## Write the secrets to the local filesystem

```
./script/get_secrets
```

It writes the secrets (currently the app.ini file, as well as auto-generated
TLS certificates that are only used when no Let's Encrypt cert is available)
to the `kubernetes/config/` folder. These files are not in Git because they
contain credentials.

Once you have edited them locally, you need to delete the secrets stored on
Kubernetes before uploading them again. This is done by this script:

```
./script/replace_secrets
```

## Reuse a released persistent volume:

> When you delete a PVC, corresponding PV becomes `Released`. This PV can contain sensitive data (say credit card numbers) and therefore nobody can ever bind to it, even if it is a PVC with the same name and in the same namespace as the previous one - who knows who's trying to steal the data!
>
> Admin intervention is required here. He has two options:
>
> * Make the PV available to everybody - delete `PV.Spec.ClaimRef`. Such PV can be bound to any PVC (assuming that capacity, access mode and selectors match)
>
> * Make the PV available to a specific PVC - pre-fill `PV.Spec.ClaimRef` with a pointer to a PVC. Leave the `PV.Spec.ClaimRef.UID` empty, as the PVC does not need to exist at this point and you don't know PVC's UID. This PV can be bound only to the specified PVC.
>
> @whitecolor, in your case you should be fine by clearing `PV.Spec.ClaimRef.UID` in the PV. Only the re-created PVC (with any UID) can then use the PV. And it's your responsibility that only the right person can craft appropriate PVC so nobody can steal your data.

https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616
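The fix quoted above (clearing `PV.Spec.ClaimRef.UID` on the released volume) can be applied with a single `kubectl patch`; the volume name below is a placeholder:

```shell
# <pv-name> is a placeholder for the name of the Released volume
kubectl patch pv <pv-name> --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
```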
@@ -2,8 +2,7 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitea-db
  labels:
    app: gitea
  namespace: gitea
spec:
  replicas: 1
  strategy:
@@ -12,7 +11,6 @@ spec:
    metadata:
      labels:
        name: gitea-db
        app: gitea
    spec:
      containers:
      - env:
@@ -49,9 +47,9 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-db-data
  namespace: gitea
  labels:
    name: gitea-db-data
    app: gitea
spec:
  accessModes:
  - ReadWriteOnce
@@ -63,9 +61,9 @@ apiVersion: v1
kind: Service
metadata:
  name: gitea-db
  namespace: gitea
  labels:
    service: gitea-db
    app: gitea
spec:
  selector:
    name: gitea-db
kubernetes/gitea-namespace.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: gitea
  labels:
    app: gitea
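Since the other manifests place their resources in this namespace, it needs to exist before they are applied. For example:

```shell
kubectl apply -f kubernetes/gitea-namespace.yaml
# the namespace should show up as Active afterwards
kubectl get namespace gitea
```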
@@ -2,15 +2,13 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitea-server
  labels:
    app: gitea
  namespace: gitea
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: gitea-server
        app: gitea
    spec:
      initContainers:
      - name: init-config
@@ -25,12 +23,38 @@ spec:
      # This is only used for the initial setup, it does nothing once a app.ini
      # file exists in the conf/ directory of the data directory
      # (/data/gitea/conf in our case)
      - name: gitea-server
        image: gitea/gitea:1.7
      - env:
        - name: DB_HOST
          value: gitea-db:3306
        - name: DB_NAME
          value: gitea
        - name: DB_PASSWD
          valueFrom:
            secretKeyRef:
              name: gitea-mysql-pass
              key: password
        - name: DB_TYPE
          value: mysql
        - name: DB_USER
          value: gitea
        - name: ROOT_URL
          value: https://gitea.kosmos.org
        - name: RUN_MODE
          value: prod
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: gitea-secret-key
              key: password
        - name: SSH_DOMAIN
          value: gitea.kosmos.org
        image: 5apps/gitea:latest
        name: gitea-server
        ports:
        - containerPort: 3000
        - containerPort: 3001
        - containerPort: 22
        resources: {}
        volumeMounts:
        - mountPath: /data
          name: gitea-server-data
@@ -57,8 +81,7 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-server-data
  labels:
    app: gitea
  namespace: gitea
spec:
  accessModes:
  - ReadWriteOnce
@@ -70,9 +93,9 @@ apiVersion: v1
kind: Service
metadata:
  name: gitea-server
  namespace: gitea
  labels:
    name: gitea-server
    app: gitea
spec:
  type: LoadBalancer
  # preserves the client source IP
vendor/ark (vendored, 1 line changed)
Submodule vendor/ark deleted from 0fd7872ef4