29 Commits

Author SHA1 Message Date
91755e8744 Merge branch 'chore/update_gitea' of kosmos/gitea.kosmos.org into master 2020-03-08 00:00:47 +00:00
ee6ec82157 Upgrade gitea.kosmos.org to Gitea 1.11.2 2020-03-07 18:39:10 -05:00
fa11c50687 Merge branch 'chore/update_gitea' of kosmos/gitea.kosmos.org into master 2020-02-15 16:10:34 +00:00
21e158737d Update Gitea to 1.11.0
https://blog.gitea.io/2020/02/gitea-1.11.0-is-released/
2020-02-15 11:08:49 -05:00
515b4a4483 Merge branch 'chore/upgrade_gitea' of kosmos/gitea.kosmos.org into master 2020-01-28 17:38:54 +00:00
a3c1b2d1f7 Upgrade Gitea to 1.10.3 2020-01-28 12:33:00 -05:00
2ab0db6c2a Merge branch 'chore/update_gitea' of kosmos/gitea.kosmos.org into master 2019-12-19 13:10:13 +00:00
d6d70af0ad Upgrade Gitea to 1.10.1 2019-12-19 14:09:35 +01:00
75881c4c3f Merge branch 'chore/update_gitea' of kosmos/gitea.kosmos.org into master 2019-11-24 12:17:10 +00:00
3cd4b3102c Update Gitea to 1.10.0 2019-11-24 13:13:12 +01:00
11ae884dea Merge branch 'chore/update_gitea' of kosmos/gitea.kosmos.org into master 2019-10-31 11:58:21 +00:00
6785287227 Update Gitea to 1.9.5 2019-10-31 12:57:25 +01:00
d4784e9787 Merge branch 'chore/update_gitea' of kosmos/gitea.kosmos.org into master 2019-10-13 11:45:42 +00:00
da278556ed Adapt CPU resource request and limit
After the new pod couldn't be scheduled due to insufficient CPU
resources, I checked the current usage and found it well below the
requested amount. Lowering the request and limit fixed the deployment issue.
2019-10-13 13:42:34 +02:00
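
A quick way to cross-check usage against requests, assuming the cluster's metrics add-on is installed (the `app=gitea` label is the one used elsewhere in this repo):

    # actual CPU/memory consumption of the running pod
    kubectl top pod -l app=gitea

    # requests and limits currently in effect
    kubectl describe pod -l app=gitea | grep -A 2 -E 'Requests|Limits'
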
4dd0f4b844 Update Gitea to 1.9.4 2019-10-13 13:42:24 +02:00
b6894598a6 Update Gitea to 1.9.2 2019-08-31 12:01:16 +02:00
972bbedb87 Improve customizations documentation (#35) 2019-08-16 09:26:00 +00:00
db396d6ee1 Merge branch 'chore/update_gitea' of kosmos/gitea.kosmos.org into master 2019-08-15 15:33:48 +00:00
d821e02dec Update Gitea to 1.9.1 2019-08-15 17:32:38 +02:00
2f02ddb79d Merge branch 'docs_label' of kosmos/gitea.kosmos.org into master 2019-08-12 11:38:11 +00:00
Greg Karékinian
90cb219d79 Change the color for the docs label to ead746 2019-08-12 13:27:41 +02:00
Greg Karékinian
9c36ebeb14 Add the docs label to the Kosmos label set 2019-08-09 16:06:01 +02:00
gregkare
5f3b80ab9e Merge branch 'deployment_api_version' of kosmos/gitea.kosmos.org into master 2019-08-07 09:20:50 +00:00
b00931352f Improve README 2019-08-06 13:16:07 +02:00
Greg Karékinian
f8d964f8d2 Bump the api version for the Deployment resource to apps/v1
It was previously set to extensions/v1beta1. I have discovered that when
the Deployment is created as extensions/v1beta1, the existing pod is
killed immediately during a rolling update. When the Deployment is
created as apps/v1, a rolling update behaves as expected: a new pod is
created, and the old one is only terminated once the new pod is ready
to serve traffic.

The existing Deployment resource will need to be deleted and recreated:

    kubectl delete deployment gitea-server
    kubectl apply -f gitea-server.yaml

Applying the file without deleting it first will not fix the issue with
rolling updates. It will cause a short downtime.
2019-08-06 12:44:57 +02:00
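
A minimal sketch of verifying the new rollout behavior on a later image bump, once the Deployment has been recreated as apps/v1 (resource names match the manifests in this repo):

    # after changing the image tag in kubernetes/gitea-server.yaml
    kubectl apply -f kubernetes/gitea-server.yaml

    # the old pod should now stay Running until its replacement is Ready
    kubectl rollout status deployment/gitea-server
    kubectl get pods -l app=gitea -w
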
810482c755 Merge branch 'chore/30-update_1.9.0' of kosmos/gitea.kosmos.org into master 2019-08-02 15:59:38 +00:00
Greg Karékinian
4e225ab1af Update Gitea to 1.9.0
Closes #30
2019-08-02 17:34:28 +02:00
1f6e0b7d57 Merge branch 'feature/ark_to_velero' of kosmos/gitea.kosmos.org into master 2019-06-22 12:26:30 +00:00
Greg Karékinian
a3fa72bb56 Update the documentation, Ark is now Velero
Refs #27
2019-06-19 18:33:44 +02:00
12 changed files with 53 additions and 2174 deletions

.gitmodules (vendored)

@@ -1,3 +0,0 @@
[submodule "vendor/ark"]
	path = vendor/ark
	url = git@github.com:heptio/ark.git


@@ -1,27 +1,9 @@
 # gitea.kosmos.org
 
 This repository contains configuration files and other assets, that are used to
-deploy and operate this Gitea instance.
-
-To create a new image containing the customizations:
-
-Edit `packer/custom.json` to increment the tag, then run this script (needs
-[Packer](https://www.packer.io/) in your path)
-
-```
-./script/build_customizations_image
-```
-
-Then edit `kubernetes/gitea-server.yaml` to use the new tag
-(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
-change:
-
-```
-cd kubernetes
-kubectl apply -f gitea-server.yaml
-```
-
-Feel free to [open issues] for questions, suggestions, bugs, to-do items, and
-whatever else you want to discuss or resolve.
-
-[open issues]: https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues
+deploy and operate this Gitea instance. Feel free to [open
+issues](https://gitea.kosmos.org/kosmos/gitea.kosmos.org/issues) for questions,
+suggestions, bugs, to-do items, and whatever else you want to discuss or
+resolve.
+
+See `doc/` folder for some technical info.


@@ -1,4 +1,5 @@
 #db231d bug ; Something is not working
+#ead746 docs ; Documentation
 #76db1d enhancement ; Improving existing functionality
 #1d76db feature ; New functionality
 #db1d76 idea ; Something to consider


@@ -1,36 +1,28 @@
 # Backups
 
-We're using [Ark][1] for backing up Kubernetes config and GKE resources. It is
-available as a Git submodule in the `vendor/` folder (incl. the `ark`
-executable).
-
-In order to initialize and update submodules in your local repo, run once:
-
-    git submodule update --init
-
-Then, to fetch/update the modules, run:
-
-    git submodule update
-
-The Ark service is running on the Sidamo cluster and was set up using the
-[official docs' GCP instructions and config files][4]. There's a daily backup
+We're using [Velero][1] (formerly Ark) for backing up Kubernetes config and GKE
+resources. It is available as a compiled binary for your platform [on GitHub][2]
+
+The Velero service is running on the Sidamo cluster and was set up using the
+[official docs' GCP instructions][3]. There's a daily backup
 schedule in effect for Gitea (using the label `app=gitea`).
 
-Please refer to Ark's [ Getting Started ][5] doc for all backup and restore
+Please refer to Velero's [ Getting Started ][4] doc for all backup and restore
 commands.
 
 ## Backup location
 
 Cluster configuration (including all live resources) is backed up to [a Google
-Cloud Storage container][3].
+Cloud Storage container][5].
 
 ## Persistent volumes
 
-Persistent volumes are just GCE disks. Thus, with the current config, Ark
-creates volume snapshots as native [GCE disk snapshots][2].
+Persistent volumes are just GCE disks. Thus, with the current config, Velero
+creates volume snapshots as native [GCE disk snapshots][6].
 
-[1]: https://heptio.github.io/ark/v0.10.0
-[2]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
-[3]: https://console.cloud.google.com/storage/browser/sidamo-backups?project=fluted-magpie-218106&organizationId=772167872692
-[4]: https://heptio.github.io/ark/v0.10.0/gcp-config
-[5]: https://heptio.github.io/ark/v0.10.0/get-started
+[1]: https://velero.io/docs/v1.0.0
+[2]: https://github.com/heptio/velero/releases/tag/v1.0.0
+[3]: https://velero.io/docs/v1.0.0/gcp-config/
+[4]: https://velero.io/docs/v1.0.0/about/
+[5]: https://console.cloud.google.com/storage/browser/sidamo-backups-new?project=fluted-magpie-218106&organizationId=772167872692
+[6]: https://console.cloud.google.com/compute/snapshots?organizationId=772167872692&project=fluted-magpie-218106&tab=snapshots&snapshotssize=50
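
For reference, a schedule like the one described above could be created with the Velero CLI roughly like this (the schedule name and cron expression are illustrative, not taken from the cluster):

    # daily backup of everything labelled app=gitea
    velero schedule create gitea-daily --schedule="0 4 * * *" --selector app=gitea

    # inspect schedules and the backups they produce
    velero schedule get
    velero backup get
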


@@ -0,0 +1,20 @@
## Customizations image

### Build

To create a new Docker image containing our Gitea customizations (label sets,
styles, page content, etc.):

Edit `packer/custom.json` to increment the tag, then run this script (needs
[Packer](https://www.packer.io/) in your path)

    ./script/build_customizations_image

### Deploy

Edit `kubernetes/gitea-server.yaml` to use the new tag
(`image: eu.gcr.io/fluted-magpie-218106/gitea_custom:$VERSION`) and apply the
change:

    cd kubernetes
    kubectl apply -f gitea-server.yaml
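
After deploying, one way to confirm the pod picked up the new tag (a sketch; the jsonpath targets the init container that carries the customizations):

    kubectl get deployment gitea-server \
      -o jsonpath='{.spec.template.spec.initContainers[*].image}'
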

File diff suppressed because it is too large


@@ -1,276 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-controller
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: ingress-controller
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: ingress-controller
namespace: default
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-controller
subjects:
- kind: ServiceAccount
name: ingress-controller
namespace: default
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: ingress-controller
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-controller
subjects:
- kind: ServiceAccount
name: ingress-controller
namespace: default
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ingress-controller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: ingress-default-backend
name: ingress-default-backend
namespace: default
spec:
selector:
matchLabels:
run: ingress-default-backend
template:
metadata:
labels:
run: ingress-default-backend
spec:
containers:
- name: ingress-default-backend
image: gcr.io/google_containers/defaultbackend:1.0
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: ingress-default-backend
namespace: default
spec:
ports:
- port: 8080
selector:
run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
name: haproxy-ingress
namespace: default
---
apiVersion: v1
kind: ConfigMap
metadata:
name: haproxy-ingress-tcp
namespace: default
data:
"22": "default/gitea-server:22"
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
namespace: default
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
run: haproxy-ingress
template:
metadata:
labels:
run: haproxy-ingress
spec:
hostNetwork: true
nodeSelector:
role: ingress-controller
serviceAccountName: ingress-controller
containers:
- name: haproxy-ingress
image: quay.io/jcmoraisjr/haproxy-ingress
args:
- --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
- --configmap=$(POD_NAMESPACE)/haproxy-ingress
- --tcp-services-configmap=$(POD_NAMESPACE)/haproxy-ingress-tcp
- --sort-backends
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: stat
containerPort: 1936
livenessProbe:
httpGet:
path: /healthz
port: 10253
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
value: default
---
apiVersion: v1
kind: Service
metadata:
name: gitea-server-nodeport
namespace: default
labels:
app: gitea
name: gitea-server
annotations:
# add an annotation indicating the issuer to use.
# TODO: Switch to production when we're ready
certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
ports:
- name: http
port: 3000
targetPort: 3000
- name: ssh
port: 22
targetPort: 22
protocol: TCP
type: NodePort
selector:
name: gitea-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gitea-ingress
namespace: default
labels:
name: gitea-server
app: gitea
annotations:
kubernetes.io/ingress.class: "haproxy"
spec:
tls:
- hosts:
- gitea.kosmos.org
secretName: gitea-kosmos-org-cert
rules:
- host: gitea.kosmos.org
http:
paths:
- path: /
backend:
serviceName: gitea-server-nodeport
servicePort: 3000


@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: gitea-server
@@ -6,6 +6,9 @@ metadata:
     app: gitea
 spec:
   replicas: 1
+  selector:
+    matchLabels:
+      app: gitea
   template:
     metadata:
       labels:
@@ -17,7 +20,7 @@ spec:
         # This is a busybox image with our gitea customizations saved to
         # /custom, built using ./script/build_customizations_image from the
         # root of the repo
-        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1
+        image: eu.gcr.io/fluted-magpie-218106/gitea_custom:0.1.2
         command: [
           'sh', '-c',
           'mkdir -p /data/gitea/conf && mkdir -p /data/gitea/https && cp /root/conf/app.ini /data/gitea/conf/app.ini && chown 1000:1000 /data/gitea/conf/app.ini && chmod 660 /data/gitea/conf/app.ini && cp /root/conf/*.pem /data/gitea/https && chmod 600 /data/gitea/https/*.pem && cp -R /custom/* /data/gitea && chown -R 1000:1000 /data/gitea'
@@ -29,33 +32,20 @@
           name: config
       containers:
       - name: gitea-server
-        image: gitea/gitea:1.8.1
+        image: gitea/gitea:1.11.2
         ports:
         - containerPort: 3000
+        - containerPort: 3001
         - containerPort: 22
-        livenessProbe:
-          httpGet:
-            path: /
-            port: 3000
-            scheme: HTTP
-          initialDelaySeconds: 30
-          timeoutSeconds: 5
-        readinessProbe:
-          httpGet:
-            path: /
-            port: 3000
-            scheme: HTTP
-          initialDelaySeconds: 30
-          timeoutSeconds: 5
         volumeMounts:
         - mountPath: /data
          name: gitea-server-data
         resources:
           requests:
-            cpu: 250m
+            cpu: 150m
             memory: 256Mi
           limits:
-            cpu: 500m
+            cpu: 250m
             memory: 512Mi
       restartPolicy: Always
       volumes:
@@ -106,6 +96,9 @@ spec:
     targetPort: 22
   - name: "http"
     port: 80
+    targetPort: 3001
+  - name: "https"
+    port: 443
     targetPort: 3000
   selector:
     name: gitea-server


@@ -1,19 +0,0 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-production
spec:
acme:
# You must replace this email address with your own.
# Let's Encrypt will use this to contact you about expiring
# certificates, and issues related to your account.
email: ops@kosmos.org
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource used to store the account's private key.
name: letsencrypt-production-account-key
# Add a single challenge solver, HTTP01 using nginx
solvers:
- http01:
ingress:
class: nginx


@@ -1,19 +0,0 @@
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# You must replace this email address with your own.
# Let's Encrypt will use this to contact you about expiring
# certificates, and issues related to your account.
email: ops@kosmos.org
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource used to store the account's private key.
name: letsencrypt-staging-account-key
# Add a single challenge solver, HTTP01 using nginx
solvers:
- http01:
ingress:
class: nginx


@@ -21,7 +21,7 @@
     {
       "type": "docker-tag",
       "repository": "eu.gcr.io/fluted-magpie-218106/gitea_custom",
-      "tag": "0.1"
+      "tag": "0.1.2"
     },
     "docker-push"
   ]

vendor/ark (vendored)

Submodule vendor/ark deleted from 0fd7872ef4
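
In clones that still have the old submodule checked out, the leftover files can be removed by hand (paths as in this repo):

    # drop the stale working tree and cached submodule metadata
    rm -rf vendor/ark .git/modules/vendor/ark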