1 Commit

Author SHA1 Message Date
4ca5fba94e WIP: Configure Gitea commit signing with SSH key 2026-02-13 16:08:11 +04:00
71 changed files with 268 additions and 1297 deletions

View File

@@ -1,41 +0,0 @@
# AGENTS.md
Welcome, AI Agent! This file contains essential context and rules for interacting with the Kosmos Chef repository. Read this carefully before planning or executing any changes.
## 🏢 Project Overview
This repository contains the infrastructure automation code used by Kosmos to provision and configure bare metal servers (KVM hosts) and Ubuntu virtual machines (KVM guests).
We use **Chef Infra**, managed locally via **Knife Zero** (agentless Chef), and **Berkshelf** for dependency management.
## 📂 Directory Structure & Rules
* **`site-cookbooks/`**: 🟢 **EDITABLE.** This directory contains all custom, internal cookbooks written specifically for Kosmos services (e.g., `kosmos-postgresql`, `kosmos_gitea`, `kosmos-mastodon`). *Active development happens here.*
* **`cookbooks/`**: 🔴 **DO NOT EDIT.** This directory contains third-party/community cookbooks that are vendored. These are managed by Berkshelf. Modifying them directly will result in lost changes.
* **`roles/`**: 🟢 **EDITABLE.** Contains Chef roles written in Ruby (e.g., `base.rb`, `kvm_guest.rb`, `postgresql_primary.rb`). These define run-lists and role-specific default attributes for servers.
* **`environments/`**: Contains Chef environment definitions (like `production.rb`).
* **`data_bags/`**: Contains data bag configurations, often encrypted. Be cautious and do not expose secrets. (Note: Agents should not manage data bag secrets directly unless provided the `.chef/encrypted_data_bag_secret`).
* **`nodes/`**: Contains JSON state files for bootstrapped nodes. *Agents typically do not edit these directly unless cleaning up a deleted node.*
* **`Berksfile`**: Defines community cookbook dependencies.
* **`Vagrantfile` / `.kitchen/`**: Used for local virtualization and integration testing.
## 🛠️ Tooling & Workflows
1. **Dependency Management (Berkshelf)**
If a new community cookbook is required:
- Add it to the `Berksfile` at the root.
- Instruct the user to run `berks install` and `berks vendor cookbooks/ --delete` (or run it via the `bash` tool if permitted).
2. **Provisioning (Knife Zero)**
- Bootstrapping and converging nodes is done using `knife zero`.
- *Example:* `knife zero converge name:server-name.kosmos.org`
3. **Code Style & Conventions**
- Chef recipes, resources, and roles are written in **Ruby**.
- Follow standard Chef and Ruby (RuboCop) idioms. Look at neighboring files in `site-cookbooks/` or `roles/` to match formatting and naming conventions.
## 🚨 Core Directives for AI Agents
1. **Infrastructure as Code**: Manual server configurations are highly discouraged. All changes must be codified in a cookbook or role.
2. **Test Safety Nets**: Look for `.kitchen.yml` within specific `site-cookbooks/<name>` to understand if local integration tests are available.
3. **No Assumptions**: Do not assume standard test commands. Check `README.md` and repository config files first.
4. **Secret Handling**: Avoid hardcoding passwords or API keys in recipes or roles. Assume sensitive information is managed via Chef `data_bags`.

View File

@@ -24,7 +24,6 @@ cookbook 'composer', '~> 2.7.0'
 cookbook 'fail2ban', '~> 7.0.4'
 cookbook 'git', '~> 10.0.0'
 cookbook 'golang', '~> 5.3.1'
-cookbook 'homebrew', '>= 6.0.0'
 cookbook 'hostname', '= 0.4.2'
 cookbook 'hostsfile', '~> 3.0.1'
 cookbook 'java', '~> 4.3.0'

View File

@@ -8,7 +8,6 @@ DEPENDENCIES
   firewall (~> 6.2.16)
   git (~> 10.0.0)
   golang (~> 5.3.1)
-  homebrew (>= 6.0.0)
   hostname (= 0.4.2)
   hostsfile (~> 3.0.1)
   ipfs
@@ -63,7 +62,7 @@ GRAPH
   git (10.0.0)
   golang (5.3.1)
     ark (>= 6.0)
-  homebrew (6.0.2)
+  homebrew (5.4.1)
   hostname (0.4.2)
     hostsfile (>= 0.0.0)
   hostsfile (3.0.1)

View File

@@ -1,4 +1,4 @@
 {
   "name": "garage-14",
-  "public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAypINv1zTZ7+pyT0iRhik\n0W70ASYADo7qK7QyE9/3nu2sUrP1IjoNFsv/ceKwicH7Fw2Ei1o+yKZlKn7zJzY7\n93YRZndF04VH2bmqy0uOWK0Bdat7gCld5bvS6FmRflg7g64LFb33/64QIVsVGHGL\nYF2TO//x79t9JKcQDa4h5MOWzJNTFuEcUGa0gJjMYpWGVHEJSgRuIgyhXmyIJJgY\nguj6ymTm5+3VS7NzoNy2fbTt1LRpHb5UWrCR15oiLZiDSMLMx0CcGOCmrhvODi4k\n0umw+2NPd1G50s9z7KVbTqybuQ65se2amRnkVcNfaBIU5qk9bVqcmhZlEozmBZCd\ndwIDAQAB\n-----END PUBLIC KEY-----\n"
+  "public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAqNY8AuaM4byhaTZacfRJ\nv/qyHxcDJOMX/ElF1H908spdbB2ZiLXHOH1Ucw1d+NV6/QUtWk+ikKFPpasnatD7\nmjE57noH+H47Rll0nD7oT+in+fOBDHF9R0P6/qyRSdJbJkHOh0iC0MG4LcUfv0AY\nnVBW5iLZSe/PC3+PvhCv7yrx3ikSs0mg1ZWppw0ka5Ek3ZCZp5FB4L6++GYWpM+1\n6YI0CjMoRcXsaEQsJWhxHXT8/KDhW0BR8woZUGm0/Yn4teLYJzioxRfBep3lbygx\nOIsDN9IJzo2zVTGPDZQLXhVemIhzaepqTC77ibH7F0gN/1vsQBc/qf7UhbwaF4rR\ndQIDAQAB\n-----END PUBLIC KEY-----\n"
 }

View File

@@ -1,4 +0,0 @@
{
"name": "garage-15",
"public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAy14sTt5gxVZi9C3KIEBu\nDyUgbb6jc3/GR22fNPTqV6uDHhxzhE2UsYwY/7yuA1RasdwHEOBWZaoC0Om5/Zmi\n8gn6//v1ILyLNaAcw+SQcxZkCN8Sk/0atRS9HYk1agE8Mvh72Fe2z3l+92VMefy7\nJwJUNNBTbnV2WVCchChoWnfhI7bkSLSHp0M2MO2pI+lkpSdmfkJSa5z9zihgxKO8\nXfvhryDCZNvfRVHhwc+ffpap0gLF0H9riGKE4FwLy4YqbuW1Tgm6bObb9bpOIw6Q\nVfH3kC/KMK5FlnxGmYtDkhRJ/wjGInRBk9WK/QOmjyd2FVxipEQmA4RdjlznRC9I\nrwIDAQAB\n-----END PUBLIC KEY-----\n"
}

View File

@@ -1,4 +0,0 @@
{
"name": "leo",
"public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnFfQsJnREjbXTtpT6BVt\naBaUzRmCQi8Du0TzeUG0ENrY0p5Exqleye2rC6bJlB3PER1xr5zdtuXLgbcVumIb\nzroU5JPtFbQk7r/pj0atT+UEYzl16iuEpprQ/bug+f0nE514USr6YG4G+tlZ/jBI\nSHsCQF1P8ufXFLW0ewC7rdvBkgA+DwK14naRxS4jO5MSl4wmNTjs/jymTg508mQq\nf5tG52t8qFdgn9pRdBXmyTpPtwK7I4rZ+1Qn+1E5m4oQUZsxh8Ba1bGbKotVO7Ua\nYL1yCGx7zRRUvLLIdSMvlRXTJBUSQtQ8P4QUDWTY1Na2w3t9sulKg2Lwsw8tktvC\nCwIDAQAB\n-----END PUBLIC KEY-----\n"
}

View File

@@ -1,4 +0,0 @@
{
"name": "postgres-10",
"public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2oBb5omC7ZionWhudgFm\n2NGcRXsI0c7+g1+0duaxj5dziaRTltqzpRJTfiJD6R36FcvEqwGc+qQgYSMzc1Xd\nY4OTvJFIDiFAmROm/DZYgFtTDldVNJZO2bbU3COYf/Z2Poq56gC4zLLd/zf6shgb\n2Mty8PlQ82JJAY9EMI3aAifdnZ1k/g4weFC4LFg9lUcNNXOwlAjp//LJ3ku3aY1r\nwW74msSeWEjE44YZdWyMYgM7Fy1hz5giHFQtRdOLemRCWQ8h26wn/cmWld7lsLg+\nlYqxokxWXGv8r5zR8kDTBkd0dxY7ZMbo7oESY4Uhuf4UReMe2ZGHto1E7w3llSj+\n7wIDAQAB\n-----END PUBLIC KEY-----\n"
}

View File

@@ -1,4 +0,0 @@
{
"name": "postgres-11",
"public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1foYpuubS2ovlg3uHO12\nQ/ROZ8MpG+LkCAM46uVfPaoWwfY0vdfMsBOanHDgm9DGUCEBJZ6LPrvCvGXbpPy6\n9GSswK75zVWODblNjvvV4ueGFq4bBFwRuZNjyMlqgyzeU+srZL0ivelu5XEuGuoD\nPYCBKWYqGMz85/eMC7/tinTJtKPyOtXe/G8meji+r7gh3j+ypj/EWeKfcRDa4aGe\n/DmMCurIjjPAXFLMAA6fIqPWVfcPw4APNPE60Z92yPGsTbPu7bL54M5f7udmmu7H\nOgk1HjMAmXCuLDzTkfaxqHP+57yELg/YpXR1E93VmBeQuIBsyOFEk6AmUmA1Ib6e\nnQIDAQAB\n-----END PUBLIC KEY-----\n"
}

View File

@@ -1,4 +0,0 @@
{
"name": "postgres-12",
"public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1mYGrYB8keUKmXA8dhWc\ncCLzp50xR0ajSw+bWYydyRqD5wuEVKjiJu4+G9QmTVXkVgJ+AYI0Y9/WZYpDqVH6\nvLUo6BSNQaWx20q93qIdOGLy8YG3Qyznezk4l8T9u9vWZDyDpKw6gCxzikMkrXxb\n0cqOYtyud8+PtSEEMogSjOKhRURVHlVrlVH3SQO7Whke9rkiFcbXzubsK9yjkUtF\nxZafSoGorOlDsPvFTfYnkepVB+GHcgiribRYSrO+73GypC2kqMhCpWrb6a0VWsP/\nh53+q3JL3vBvdvjcv51Wpf4n6JdnXnQGn2/MdXEzw+NXgjU4/IdYtbORSbaI8F5t\nowIDAQAB\n-----END PUBLIC KEY-----\n"
}

View File

@@ -1,4 +0,0 @@
{
"name": "rsk-testnet-6",
"public_key": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAl1p4+F536/peA4XWMJtm\njggPl6yJb42V5bg3kDa8SHoIoQgXn59d3BclZ1Oz2+JhFd3Rrn4FN3Z1wzGpP+gA\nnxQOfgRG1ucahh7Nxaw3IdoHm7r/EdEOc9FrxvGJ+09YnmLfzn4iVQpsUiOiNVS7\n0LXtMXYtsjD+o6BTbOhGU8FMmGhMhQfXFVgoDdTiM/Q62zPw8Vtpa3yFpFJAu+dA\n+mm5h5W6FnaWJXM2arn3PxDOt+JQSWp5PYG4goU1FFreU9iFuoeGEfLy8unlbbXt\ne96QhNuCkOA15xqta0Z3oL7IlXWns7dLgZYlpZT9zaExIs3AEDaQcleacQPzXKSG\nswIDAQAB\n-----END PUBLIC KEY-----\n"
}

View File

@@ -3,5 +3,3 @@ config:
   line-length: false # MD013
   no-duplicate-heading: false # MD024
   reference-links-images: false # MD052
-ignores:
-  - .github/copilot-instructions.md

View File

@@ -2,48 +2,6 @@
 This file is used to list changes made in each version of the homebrew cookbook.
-## 6.0.2 - *2025-09-04*
-- Standardise files with files in sous-chefs/repo-management
-- Standardise files with files in sous-chefs/repo-management
-## 6.0.1 - *2025-03-24*
-## 6.0.0 - *2025-03-17*
-- Updated library call for new homebrew class name found in chef-client 18.6.2+ releases
-## 5.4.9 - *2024-11-18*
-- Standardise files with files in sous-chefs/repo-management
-- Standardise files with files in sous-chefs/repo-management
-- Standardise files with files in sous-chefs/repo-management
-- Standardise files with files in sous-chefs/repo-management
-- Standardise files with files in sous-chefs/repo-management
-## 5.4.8 - *2024-05-07*
-## 5.4.7 - *2024-05-06*
-- Explicitly include `Which` module from `Chef` which fixes runs on 18.x clients.
-## 5.4.6 - *2024-05-06*
-## 5.4.5 - *2023-11-01*
-- Standardise files with files in sous-chefs/repo-management
-## 5.4.4 - *2023-09-28*
-## 5.4.3 - *2023-09-04*
-## 5.4.2 - *2023-07-10*
 ## 5.4.1 - *2023-06-01*
 ## 5.4.0 - *2023-04-24*

View File

@@ -20,9 +20,8 @@
 #
 class HomebrewUserWrapper
-  require 'chef/mixin/homebrew'
-  include Chef::Mixin::Homebrew
-  include Chef::Mixin::Which
+  require 'chef/mixin/homebrew_user'
+  include Chef::Mixin::HomebrewUser
 end
 module Homebrew
@@ -60,17 +59,41 @@ module Homebrew
   def owner
     @owner ||= begin
-      HomebrewUserWrapper.new.find_homebrew_username
-    rescue Chef::Exceptions::CannotDetermineHomebrewPath
+      # once we only support 14.0 we can switch this to find_homebrew_username
+      require 'etc'
+      ::Etc.getpwuid(HomebrewUserWrapper.new.find_homebrew_uid).name
+    rescue Chef::Exceptions::CannotDetermineHomebrewOwner
+      calculate_owner
     end.tap do |owner|
       Chef::Log.debug("Homebrew owner is #{owner}")
     end
   end
+
+  private
+
+  def calculate_owner
+    owner = homebrew_owner_attr || sudo_user || current_user
+    if owner == 'root'
+      raise Chef::Exceptions::User,
+            "Homebrew owner is 'root' which is not supported. " \
+            "To set an explicit owner, please set node['homebrew']['owner']."
+    end
+    owner
+  end
+
+  def homebrew_owner_attr
+    Chef.node['homebrew']['owner']
+  end
+
+  def sudo_user
+    ENV['SUDO_USER']
+  end
+
+  def current_user
+    ENV['USER']
+  end
 end unless defined?(Homebrew)
 class HomebrewWrapper
   include Homebrew
 end
-Chef::Mixin::Homebrew.include(Homebrew)

View File

@@ -17,13 +17,13 @@
   "recipes": {
   },
-  "version": "6.0.2",
+  "version": "5.4.1",
   "source_url": "https://github.com/sous-chefs/homebrew",
   "issues_url": "https://github.com/sous-chefs/homebrew/issues",
   "privacy": false,
   "chef_versions": [
     [
-      ">= 18.6.2"
+      ">= 15.3"
     ]
   ],
   "ohai_versions": [

View File

@@ -3,9 +3,9 @@ maintainer 'Sous Chefs'
 maintainer_email 'help@sous-chefs.org'
 license 'Apache-2.0'
 description 'Install Homebrew and includes resources for working with taps and casks'
-version '6.0.2'
+version '5.4.1'
 supports 'mac_os_x'
 source_url 'https://github.com/sous-chefs/homebrew'
 issues_url 'https://github.com/sous-chefs/homebrew/issues'
-chef_version '>= 18.6.2'
+chef_version '>= 15.3'

View File

@@ -1,10 +1,9 @@
 {
   "$schema": "https://docs.renovatebot.com/renovate-schema.json",
   "extends": ["config:base"],
-  "packageRules": [
-    {
+  "packageRules": [{
     "groupName": "Actions",
-    "matchUpdateTypes": ["minor", "patch", "pin"],
+    "matchUpdateTypes": ["patch", "pin", "digest"],
     "automerge": true,
     "addLabels": ["Release: Patch", "Skip: Announcements"]
   },

View File

@@ -19,7 +19,6 @@
 # limitations under the License.
 #
-unified_mode true
 chef_version_for_provides '< 14.0' if respond_to?(:chef_version_for_provides)
 property :cask_name, String, regex: %r{^[\w/-]+$}, name_property: true

View File

@@ -19,7 +19,6 @@
 # limitations under the License.
 #
-unified_mode true
 chef_version_for_provides '< 14.0' if respond_to?(:chef_version_for_provides)
 property :tap_name, String, name_property: true, regex: %r{^[\w-]+(?:\/[\w-]+)+$}

View File

@@ -1,16 +1,23 @@
 {
   "id": "gandi_api",
+  "key": {
+    "encrypted_data": "lU7/xYTmP5Sb6SsK5TNNIyegWozzBtUzpg7oDdl6gcz9FEMmG2ft0Ljh5Q==\n",
+    "iv": "EZPQD3C+wsP/mBhF\n",
+    "auth_tag": "vF9E8Pj4Z8quJJdOMg/QTw==\n",
+    "version": 3,
+    "cipher": "aes-256-gcm"
+  },
   "access_token": {
-    "encrypted_data": "+skwxHnpAj/3d3e2u7s7B9EydbETj8b0flWahvb5gt/o4JYFWHrhIyX/0IVa\n4wgmu08eDgU51i0knGA=\n",
-    "iv": "ONKrFCt8Oj3GKIQ5\n",
-    "auth_tag": "j9Hrk8ZZFMQub4NUO+2e4g==\n",
+    "encrypted_data": "1Uw69JkNrmb8LU/qssuod1SlqxxrWR7TJQZeeivRrNzrMIVTEW/1uwJIYL6b\nM4GeeYl9lIRlMMmLBkc=\n",
+    "iv": "cc1GJKu6Cf4DkIgX\n",
+    "auth_tag": "ERem4S7ozG695kjvWIMghw==\n",
     "version": 3,
     "cipher": "aes-256-gcm"
   },
   "domains": {
-    "encrypted_data": "lGfoPHdXEYYdJmoIA9M119wjVl1v4UzIv5gHADwx0A==\n",
-    "iv": "q6XKbxhW7X9ONxNt\n",
-    "auth_tag": "ns9WJH8Oe75siWu+sOZkRg==\n",
+    "encrypted_data": "scZ5blsSjs54DlitR7KZ3enLbyceOR5q0wjHw1golQ==\n",
+    "iv": "oDcHm7shAzW97b4t\n",
+    "auth_tag": "62Zais9yf68SwmZRsmZ3hw==\n",
     "version": 3,
     "cipher": "aes-256-gcm"
   }

View File

@@ -1,287 +0,0 @@
# Migrating PostgreSQL cluster to a new major version
## Summary
1. Dump from a replica
2. Restore to fresh VM running new major version
3. Add logical replication for delta sync from current/old primary
4. Switch primary to new server
5. Remove logical replication on new server
## Runbook
* Primary host: `PRIMARY_HOST`
* Replica host: `REPLICA_HOST`
* New PG14 host: `NEW_HOST`
* PostgreSQL superuser: `postgres`
* Running locally on each machine via `sudo -u postgres`
Adjust hostnames/IPs/etc. where needed.
---
### 🟢 0. PRIMARY — Pre-checks
```bash
sudo -u postgres psql -c "SHOW wal_level;"
sudo -u postgres psql -c "SHOW max_replication_slots;"
```
If needed, edit config:
```bash
sudo -u postgres vi $PGDATA/postgresql.conf
```
Ensure:
```conf
wal_level = logical
max_replication_slots = 10
```
Restart if changed:
```bash
sudo systemctl restart postgresql
```
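After the restart, both settings can be confirmed in a single query (a sketch, not part of the original runbook; `pg_settings.pending_restart` is true when a changed value still awaits a restart):

```shell
sudo -u postgres psql -c \
  "SELECT name, setting, pending_restart FROM pg_settings
   WHERE name IN ('wal_level', 'max_replication_slots');"
```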
---
### 🔵🟡 1. Create a keypair for syncing the dump later
🔵 On NEW_HOST:
```bash
sudo mkdir -p /home/postgres/.ssh && \
sudo chown -R postgres:postgres /home/postgres && \
sudo chmod 700 /home/postgres/.ssh && \
sudo -u postgres bash -c 'ssh-keygen -t ecdsa -b 256 -f /home/postgres/.ssh/id_ecdsa -N "" -C "postgres@$(hostname)"' && \
sudo cat /home/postgres/.ssh/id_ecdsa.pub
```
Copy the public key from the output above.
🟡 On REPLICA_HOST:
```bash
sudo mkdir -p /home/postgres/.ssh && \
sudo chown -R postgres:postgres /home/postgres && \
sudo chmod 700 /home/postgres/.ssh && \
echo [public_key] | sudo tee /home/postgres/.ssh/authorized_keys > /dev/null && \
sudo chown postgres:postgres /home/postgres/.ssh/authorized_keys && \
sudo chmod 600 /home/postgres/.ssh/authorized_keys
```
---
### 🟢 2. PRIMARY — Create publications and replication slots
```bash
sudo -u postgres pg_create_replication_publications
```
or
```bash
sudo -u postgres pg_create_replication_publication [db_name]
```
Listing publications and slots:
```bash
sudo -u postgres pg_list_replication_publications
sudo -u postgres pg_list_replication_slots
```
---
### 🟡 3. REPLICA — Pause replication
```bash
sudo -u postgres psql -c "SELECT pg_wal_replay_pause();"
```
Verify:
```bash
sudo -u postgres psql -c "SELECT pg_is_wal_replay_paused();"
```
---
### 🟡 4. REPLICA — Run dump
```bash
sudo -u postgres pg_dump_all_databases
```
or
```bash
sudo -u postgres bash -c "pg_dumpall --globals-only > /tmp/globals.sql"
sudo -u postgres pg_dump_database [db_name]
```
---
### 🟡 5. REPLICA — Resume replication
```bash
sudo -u postgres psql -c "SELECT pg_wal_replay_resume();"
```
---
### 🔵 6. COPY dumps to NEW HOST
From NEW_HOST:
```bash
export REPLICA_HOST=[private_ip] && \
cd /tmp && \
sudo -u postgres scp "postgres@$REPLICA_HOST:/tmp/globals.sql" . && \
sudo -u postgres scp "postgres@$REPLICA_HOST:/tmp/dump_*.tar.zst" .
```
---
### 🔵 7. NEW HOST (PostgreSQL 14) — Restore
#### 7.1 Restore globals
```bash
sudo -u postgres psql -f /tmp/globals.sql
```
---
#### 7.2 Create databases
```bash
sudo -u postgres psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN ('postgres', 'template1')" | \
xargs -I{} sudo -u postgres createdb {}
```
or
```bash
sudo -u postgres createdb [db_name]
```
---
#### 7.3 Restore each database
```bash
sudo -u postgres pg_restore_all_databases
```
or
```bash
sudo -u postgres pg_restore_database [db_name]
```
---
### 🔵 8. NEW HOST — Create subscriptions
```bash
sudo -u postgres pg_create_replication_subscriptions
```
or
```bash
sudo -u postgres pg_create_replication_subscription [db_name]
```
---
### 🔵 9. NEW HOST — Monitor replication
```bash
sudo -u postgres pg_list_replication_subscriptions
```
---
### 🔴 11. CUTOVER
#### 11.1 Stop writes on old primary
Put app(s) in maintenance mode, stop the app/daemons.
---
#### 11.2 Wait for replication to catch up
TODO: not the best way to check, since WAL LSNs keep increasing
```bash
sudo -u postgres psql -d [db_name] -c "SELECT * FROM pg_stat_subscription;"
```
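An alternative check — an untested sketch, run on the old primary after writes have stopped — is to measure how much WAL each logical slot still has to deliver; a diff near zero means the subscriber has caught up:

```shell
# Bytes of WAL pending per logical replication slot on the old primary
sudo -u postgres psql -c \
  "SELECT slot_name,
          pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
   FROM pg_replication_slots WHERE slot_type = 'logical';"
```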
---
#### 11.3 Fix sequences
Run per DB:
```bash
sudo -u postgres pg_fix_sequences_in_all_databases
```
or
```bash
sudo -u postgres pg_fix_sequences [db_name]
```
---
#### 11.4 Point app to NEW_HOST
1. Update `pg.kosmos.local` in `/etc/hosts` on app server(s). For example:
```bash
export NEW_PG_PRIMARY=[private_ip]
knife ssh roles:ejabberd -a knife_zero.host "sudo sed -r \"s/^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+\s(pg.kosmos.local)/$NEW_PG_PRIMARY\t\1/\" -i /etc/hosts"
```
Or override node attribute(s) if necessary and/or appropriate.
2. Start the app/daemons, and deactivate maintenance mode.
---
### 🧹 12. CLEANUP NEW_HOST
```bash
sudo -u postgres pg_drop_replication_subscriptions
```
---
### 🧹 13. CLEANUP PRIMARY
TODO: Looks like slots are dropped automatically when subscriptions are dropped
```bash
sudo -u postgres pg_drop_replication_publications
```
---
### 🧹 14. CLEANUP Chef
Once all apps/databases are migrated, update the role in the node
config of the new primary to 'postgresql_primary' and converge it.
Also delete the old primary's node config from the Chef repo.
---
### ✅ DONE
---

View File

@@ -3,15 +3,15 @@
   "chef_environment": "production",
   "normal": {
     "knife_zero": {
-      "host": "10.1.1.151"
+      "host": "10.1.1.157"
     }
   },
   "automatic": {
-    "fqdn": "garage-14",
+    "fqdn": "garage-14",
     "os": "linux",
-    "os_version": "5.15.0-1095-kvm",
+    "os_version": "5.15.0-1059-kvm",
     "hostname": "garage-14",
-    "ipaddress": "192.168.122.36",
+    "ipaddress": "192.168.122.251",
     "roles": [
       "base",
       "kvm_guest",
@@ -30,7 +30,6 @@
       "timezone_iii::debian",
       "ntp::default",
       "ntp::apparmor",
-      "kosmos-base::journald_conf",
       "kosmos-base::systemd_emails",
       "apt::unattended-upgrades",
       "kosmos-base::firewall",
@@ -47,13 +46,13 @@
     "cloud": null,
     "chef_packages": {
       "chef": {
-        "version": "18.10.17",
-        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.10.17/lib",
+        "version": "18.8.54",
+        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.8.54/lib",
         "chef_effortless": null
       },
       "ohai": {
-        "version": "18.2.13",
-        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.13/lib/ohai"
+        "version": "18.2.8",
+        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.8/lib/ohai"
       }
     }
   },

View File

@@ -1,56 +0,0 @@
{
"name": "leo",
"normal": {
"knife_zero": {
"host": "leo.kosmos.org"
}
},
"automatic": {
"fqdn": "leo",
"os": "linux",
"os_version": "5.15.0-164-generic",
"hostname": "leo",
"ipaddress": "5.9.81.116",
"roles": [
"base"
],
"recipes": [
"kosmos-base",
"kosmos-base::default",
"kosmos_kvm::host",
"apt::default",
"timezone_iii::default",
"timezone_iii::debian",
"ntp::default",
"ntp::apparmor",
"kosmos-base::journald_conf",
"kosmos-base::systemd_emails",
"apt::unattended-upgrades",
"kosmos-base::firewall",
"kosmos-postfix::default",
"postfix::default",
"postfix::_common",
"postfix::_attributes",
"postfix::sasl_auth",
"hostname::default"
],
"platform": "ubuntu",
"platform_version": "22.04",
"cloud": null,
"chef_packages": {
"chef": {
"version": "18.10.17",
"chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.10.17/lib",
"chef_effortless": null
},
"ohai": {
"version": "18.2.13",
"ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.13/lib/ohai"
}
}
},
"run_list": [
"role[base]",
"recipe[kosmos_kvm::host]"
]
}

View File

@@ -1,17 +1,16 @@
 {
-  "name": "postgres-11",
-  "chef_environment": "production",
+  "name": "postgres-6",
   "normal": {
     "knife_zero": {
-      "host": "10.1.1.91"
+      "host": "10.1.1.196"
     }
   },
   "automatic": {
-    "fqdn": "postgres-11",
+    "fqdn": "postgres-6",
     "os": "linux",
-    "os_version": "5.15.0-1095-kvm",
-    "hostname": "postgres-11",
-    "ipaddress": "192.168.122.142",
+    "os_version": "5.4.0-173-generic",
+    "hostname": "postgres-6",
+    "ipaddress": "192.168.122.60",
     "roles": [
       "base",
       "kvm_guest",
@@ -22,20 +21,18 @@
       "kosmos-base::default",
       "kosmos_kvm::guest",
       "kosmos_postgresql::primary",
+      "kosmos_postgresql::firewall",
       "kosmos-akkounts::pg_db",
       "kosmos-bitcoin::lndhub-go_pg_db",
       "kosmos-bitcoin::nbxplorer_pg_db",
       "kosmos_drone::pg_db",
       "kosmos_gitea::pg_db",
       "kosmos-mastodon::pg_db",
-      "kosmos_postgresql::firewall",
-      "kosmos_postgresql::management_scripts",
       "apt::default",
       "timezone_iii::default",
       "timezone_iii::debian",
       "ntp::default",
       "ntp::apparmor",
-      "kosmos-base::journald_conf",
       "kosmos-base::systemd_emails",
       "apt::unattended-upgrades",
       "kosmos-base::firewall",
@@ -47,17 +44,17 @@
       "hostname::default"
     ],
     "platform": "ubuntu",
-    "platform_version": "22.04",
+    "platform_version": "20.04",
     "cloud": null,
     "chef_packages": {
       "chef": {
-        "version": "18.10.17",
-        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.10.17/lib",
+        "version": "18.4.2",
+        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.4.2/lib",
         "chef_effortless": null
       },
       "ohai": {
-        "version": "18.2.13",
-        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.13/lib/ohai"
+        "version": "18.1.11",
+        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.1.11/lib/ohai"
       }
     }
   },

View File

@@ -1,36 +1,34 @@
 {
-  "name": "garage-15",
+  "name": "postgres-8",
   "chef_environment": "production",
   "normal": {
     "knife_zero": {
-      "host": "10.1.1.82"
+      "host": "10.1.1.99"
     }
   },
   "automatic": {
-    "fqdn": "garage-15",
+    "fqdn": "postgres-8",
     "os": "linux",
-    "os_version": "5.15.0-1095-kvm",
-    "hostname": "garage-15",
-    "ipaddress": "192.168.122.57",
+    "os_version": "5.15.0-1059-kvm",
+    "hostname": "postgres-8",
+    "ipaddress": "192.168.122.100",
     "roles": [
       "base",
       "kvm_guest",
-      "garage_node"
+      "postgresql_replica"
     ],
     "recipes": [
       "kosmos-base",
       "kosmos-base::default",
       "kosmos_kvm::guest",
-      "kosmos_garage",
-      "kosmos_garage::default",
-      "kosmos_garage::firewall_rpc",
-      "kosmos_garage::firewall_apis",
+      "kosmos_postgresql::hostsfile",
+      "kosmos_postgresql::replica",
+      "kosmos_postgresql::firewall",
       "apt::default",
       "timezone_iii::default",
       "timezone_iii::debian",
       "ntp::default",
       "ntp::apparmor",
-      "kosmos-base::journald_conf",
       "kosmos-base::systemd_emails",
       "apt::unattended-upgrades",
       "kosmos-base::firewall",
@@ -39,27 +37,26 @@
       "postfix::_common",
       "postfix::_attributes",
       "postfix::sasl_auth",
-      "hostname::default",
-      "firewall::default"
+      "hostname::default"
     ],
     "platform": "ubuntu",
     "platform_version": "22.04",
     "cloud": null,
     "chef_packages": {
       "chef": {
-        "version": "18.10.17",
-        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.10.17/lib",
+        "version": "18.5.0",
+        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.5.0/lib",
         "chef_effortless": null
       },
       "ohai": {
-        "version": "18.2.13",
-        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.13/lib/ohai"
+        "version": "18.1.11",
+        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.1.11/lib/ohai"
       }
     }
   },
   "run_list": [
     "role[base]",
     "role[kvm_guest]",
-    "role[garage_node]"
+    "role[postgresql_replica]"
   ]
 }

View File

@@ -1,17 +1,17 @@
 {
-  "name": "postgres-12",
+  "name": "postgres-9",
   "chef_environment": "production",
   "normal": {
     "knife_zero": {
-      "host": "10.1.1.134"
+      "host": "10.1.1.3"
     }
   },
   "automatic": {
-    "fqdn": "postgres-12",
+    "fqdn": "postgres-9",
     "os": "linux",
-    "os_version": "5.15.0-1096-kvm",
-    "hostname": "postgres-12",
-    "ipaddress": "192.168.122.139",
+    "os_version": "5.15.0-1059-kvm",
+    "hostname": "postgres-9",
+    "ipaddress": "192.168.122.64",
     "roles": [
       "base",
       "kvm_guest",
@@ -24,7 +24,6 @@
       "kosmos_postgresql::hostsfile",
       "kosmos_postgresql::replica",
       "kosmos_postgresql::firewall",
-      "kosmos_postgresql::management_scripts",
       "apt::default",
       "timezone_iii::default",
       "timezone_iii::debian",
@@ -46,13 +45,13 @@
     "cloud": null,
     "chef_packages": {
       "chef": {
-        "version": "18.10.17",
-        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.10.17/lib",
+        "version": "18.8.54",
+        "chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.8.54/lib",
         "chef_effortless": null
       },
       "ohai": {
-        "version": "18.2.13",
-        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.13/lib/ohai"
+        "version": "18.2.8",
+        "ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.8/lib/ohai"
       }
     }
   },

View File

@@ -1,60 +0,0 @@
{
"name": "rsk-testnet-6",
"normal": {
"knife_zero": {
"host": "10.1.1.20"
}
},
"automatic": {
"fqdn": "rsk-testnet-6",
"os": "linux",
"os_version": "6.8.0-107-generic",
"hostname": "rsk-testnet-6",
"ipaddress": "192.168.122.231",
"roles": [
"base",
"kvm_guest",
"rskj_testnet"
],
"recipes": [
"kosmos-base",
"kosmos-base::default",
"kosmos_kvm::guest",
"kosmos_rsk::rskj",
"apt::default",
"timezone_iii::default",
"timezone_iii::debian",
"kosmos-base::journald_conf",
"kosmos-base::systemd_emails",
"apt::unattended-upgrades",
"kosmos-base::firewall",
"kosmos-postfix::default",
"postfix::default",
"postfix::_common",
"postfix::_attributes",
"postfix::sasl_auth",
"hostname::default",
"kosmos_rsk::firewall",
"firewall::default"
],
"platform": "ubuntu",
"platform_version": "24.04",
"cloud": null,
"chef_packages": {
"chef": {
"version": "18.10.17",
"chef_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/chef-18.10.17/lib",
"chef_effortless": null
},
"ohai": {
"version": "18.2.13",
"ohai_root": "/opt/chef/embedded/lib/ruby/gems/3.1.0/gems/ohai-18.2.13/lib/ohai"
}
}
},
"run_list": [
"role[base]",
"role[kvm_guest]",
"role[rskj_testnet]"
]
}

View File

@@ -1,13 +1,12 @@
 name "postgresql_primary"
-run_list [
-  "kosmos_postgresql::primary",
-  "kosmos-akkounts::pg_db",
-  "kosmos-bitcoin::lndhub-go_pg_db",
-  "kosmos-bitcoin::nbxplorer_pg_db",
-  "kosmos_drone::pg_db",
-  "kosmos_gitea::pg_db",
-  "kosmos-mastodon::pg_db",
-  "kosmos_postgresql::firewall",
-  "kosmos_postgresql::management_scripts"
-]
+run_list %w(
+  kosmos_postgresql::primary
+  kosmos_postgresql::firewall
+  kosmos-akkounts::pg_db
+  kosmos-bitcoin::lndhub-go_pg_db
+  kosmos-bitcoin::nbxplorer_pg_db
+  kosmos_drone::pg_db
+  kosmos_gitea::pg_db
+  kosmos-mastodon::pg_db
+)

View File

@@ -1,8 +1,7 @@
 name "postgresql_replica"
-run_list [
-  "kosmos_postgresql::hostsfile",
-  "kosmos_postgresql::replica",
-  "kosmos_postgresql::firewall",
-  "kosmos_postgresql::management_scripts"
-]
+run_list %w(
+  kosmos_postgresql::hostsfile
+  kosmos_postgresql::replica
+  kosmos_postgresql::firewall
+)

View File

@@ -1,8 +0,0 @@
name "postgresql_replica_logical"
run_list [
"kosmos_postgresql::hostsfile",
"kosmos_postgresql::replica_logical",
"kosmos_postgresql::firewall",
"kosmos_postgresql::management_scripts"
]

View File

@@ -230,6 +230,7 @@ systemd_unit "akkounts.service" do
     WorkingDirectory: deploy_path,
     Environment: "RAILS_ENV=#{rails_env} SOLID_QUEUE_IN_PUMA=true",
     ExecStart: "#{bundle_path} exec puma -C config/puma.rb --pidfile #{deploy_path}/tmp/puma.pid",
+    ExecStop: "#{bundle_path} exec puma -C config/puma.rb --pidfile #{deploy_path}/tmp/puma.pid stop",
     ExecReload: "#{bundle_path} exec pumactl -F config/puma.rb --pidfile #{deploy_path}/tmp/puma.pid phased-restart",
     PIDFile: "#{deploy_path}/tmp/puma.pid",
     TimeoutSec: "10",

View File

@@ -24,17 +24,11 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 # THE SOFTWARE.
 
-include_recipe "apt"
-
-directory "/etc/apt/keyrings" do
-  mode "0755"
-  action :create
-end
-
-include_recipe "timezone_iii"
-include_recipe "ntp" if node["platform"] == "ubuntu" && node["platform_version"].to_f < 24.04
-include_recipe "kosmos-base::journald_conf"
-include_recipe "kosmos-base::systemd_emails"
+include_recipe 'apt'
+include_recipe 'timezone_iii'
+include_recipe 'ntp'
+include_recipe 'kosmos-base::journald_conf'
+include_recipe 'kosmos-base::systemd_emails'
 
 node.override["apt"]["unattended_upgrades"]["enable"] = true
 node.override["apt"]["unattended_upgrades"]["mail_only_on_error"] = false
@@ -49,20 +43,20 @@ node.override["apt"]["unattended_upgrades"]["allowed_origins"] = [
 ]
 
 node.override["apt"]["unattended_upgrades"]["mail"] = "ops@kosmos.org"
 node.override["apt"]["unattended_upgrades"]["syslog_enable"] = true
-include_recipe "apt::unattended-upgrades"
+include_recipe 'apt::unattended-upgrades'
 
-package "mailutils"
-package "mosh"
-package "vim"
+package 'mailutils'
+package 'mosh'
+package 'vim'
 
 # Don't create users and rewrite the sudo config in development environment.
 # It breaks the vagrant user
 unless node.chef_environment == "development"
   # Searches data bag "users" for groups attribute "sysadmin".
   # Places returned users in Unix group "sysadmin" with GID 2300.
-  users_manage "sysadmin" do
+  users_manage 'sysadmin' do
     group_id 2300
-    action %i[remove create]
+    action [:remove, :create]
   end
 
   sudo "sysadmin" do
@@ -71,35 +65,35 @@ unless node.chef_environment == "development"
     defaults [
       # not default on Ubuntu, explicitely enable. Uses a minimal white list of
       # environment variables
-      "env_reset",
+      'env_reset',
       # Send emails on unauthorized attempts
-      "mail_badpass",
-      'secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"'
+      'mail_badpass',
+      'secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"',
     ]
   end
 
   include_recipe "kosmos-base::firewall"
-  include_recipe "kosmos-postfix"
+  include_recipe 'kosmos-postfix'
 
-  node.override["set_fqdn"] = "*"
-  include_recipe "hostname"
+  node.override['set_fqdn'] = '*'
+  include_recipe 'hostname'
 
-  package "ca-certificates"
+  package 'ca-certificates'
 
-  directory "/usr/local/share/ca-certificates/cacert" do
+  directory '/usr/local/share/ca-certificates/cacert' do
     action :create
   end
 
-  ["http://www.cacert.org/certs/root.crt", "http://www.cacert.org/certs/class3.crt"].each do |cert|
+  ['http://www.cacert.org/certs/root.crt', 'http://www.cacert.org/certs/class3.crt'].each do |cert|
     remote_file "/usr/local/share/ca-certificates/cacert/#{File.basename(cert)}" do
       source cert
       action :create_if_missing
-      notifies :run, "execute[update-ca-certificates]", :immediately
+      notifies :run, 'execute[update-ca-certificates]', :immediately
     end
   end
 
-  execute "update-ca-certificates" do
+  execute 'update-ca-certificates' do
     action :nothing
   end
 end

View File

@@ -1,86 +1,49 @@
 #!/bin/bash
-set -e
-set -o pipefail
 
 # Calculate yesterday's date in YYYY-MM-DD format
 YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)
 
 echo "Starting price tracking for $YESTERDAY" >&2
 
-# Helper function to perform HTTP requests with retries
-# Usage: make_request <retries> <method> <url> [data] [header1] [header2] ...
-make_request() {
-  local retries=$1
-  local method=$2
-  local url=$3
-  local data=$4
-  shift 4
-  local headers=("$@")
-  local count=0
-  local wait_time=3
-  local response
-
-  while [ "$count" -lt "$retries" ]; do
-    local curl_opts=(-s -S -f -X "$method")
-    if [ -n "$data" ]; then
-      curl_opts+=(-d "$data")
-    fi
-    for h in "${headers[@]}"; do
-      curl_opts+=(-H "$h")
-    done
-
-    if response=$(curl "${curl_opts[@]}" "$url"); then
-      echo "$response"
-      return 0
-    fi
-
-    echo "Request to $url failed (Attempt $((count+1))/$retries). Retrying in ${wait_time}s..." >&2
-    sleep "$wait_time"
-    count=$((count + 1))
-  done
-
-  echo "ERROR: Request to $url failed after $retries attempts" >&2
-  return 1
-}
-
 # Fetch and process rates for a fiat currency
 get_price_data() {
   local currency=$1
   local data avg open24 last
 
-  if data=$(make_request 3 "GET" "https://www.bitstamp.net/api/v2/ticker/btc${currency,,}/" ""); then
+  data=$(curl -s "https://www.bitstamp.net/api/v2/ticker/btc${currency,,}/")
+
+  if [ $? -eq 0 ] && [ ! -z "$data" ]; then
     echo "Successfully retrieved ${currency} price data" >&2
     open24=$(echo "$data" | jq -r '.open_24')
     last=$(echo "$data" | jq -r '.last')
-    avg=$(echo "$open24 $last" | awk '{printf "%.0f", ($1 + $2) / 2}')
+    avg=$(( (${open24%.*} + ${last%.*}) / 2 ))
    echo $avg
   else
     echo "ERROR: Failed to retrieve ${currency} price data" >&2
-    return 1
+    exit 1
   fi
 }
 
 # Get price data for each currency
-usd_avg=$(get_price_data "USD") || exit 1
-eur_avg=$(get_price_data "EUR") || exit 1
-gbp_avg=$(get_price_data "GBP") || exit 1
+usd_avg=$(get_price_data "USD")
+eur_avg=$(get_price_data "EUR")
+gbp_avg=$(get_price_data "GBP")
 
 # Create JSON
-json=$(jq -n \
-  --argjson eur "$eur_avg" \
-  --argjson usd "$usd_avg" \
-  --argjson gbp "$gbp_avg" \
-  '{"EUR": $eur, "USD": $usd, "GBP": $gbp}')
+json="{\"EUR\":$eur_avg,\"USD\":$usd_avg,\"GBP\":$gbp_avg}"
 
 echo "Rates: $json" >&2
 
 # PUT in remote storage
-if make_request 3 "PUT" "<%= @rs_base_url %>/$YESTERDAY" "$json" \
-  "Authorization: Bearer $RS_AUTH" \
-  "Content-Type: application/json" > /dev/null; then
+response=$(curl -X PUT \
+  -H "Authorization: Bearer $RS_AUTH" \
+  -H "Content-Type: application/json" \
+  -d "$json" \
+  -w "%{http_code}" \
+  -s \
+  -o /dev/null \
+  "<%= @rs_base_url %>/$YESTERDAY")
+
+if [ "$response" -eq 200 ] || [ "$response" -eq 201 ]; then
   echo "Successfully uploaded price data" >&2
 else
-  echo "ERROR: Failed to upload price data" >&2
+  echo "ERROR: Failed to upload price data. HTTP status: $response" >&2
   exit 1
 fi

View File

@@ -1,6 +1,2 @@
 node.default["kosmos_drone"]["domain"] = "drone.kosmos.org"
 node.default["kosmos_drone"]["upstream_port"] = 80
-node.default["kosmos_drone"]["pg_host"] = "pg.kosmos.local"
-node.default["kosmos_drone"]["pg_port"] = 5432
-node.default["kosmos_drone"]["pg_db"] = "drone"
-node.default["kosmos_drone"]["pg_user"] = "drone"

View File

@@ -9,11 +9,11 @@ credentials = data_bag_item("credentials", "drone")
 drone_credentials = data_bag_item('credentials', 'drone')
 
 postgres_config = {
-  host: node["kosmos_drone"]["pg_host"],
-  port: node["kosmos_drone"]["pg_port"],
-  database: node["kosmos_drone"]["pg_db"],
-  username: node["kosmos_drone"]["pg_user"],
-  password: drone_credentials["postgresql_password"]
+  username: "drone",
+  password: drone_credentials["postgresql_password"],
+  host: "pg.kosmos.local",
+  port: 5432,
+  database: "drone"
 }
 
 directory deploy_path do

View File

@@ -18,7 +18,6 @@ server {
   }
 
   location / {
-    add_header 'Access-Control-Allow-Origin' '*' always;
     proxy_intercept_errors on;
     proxy_cache garage_cache;
     proxy_pass http://garage_web;

View File

@@ -19,17 +19,6 @@ jwt_secret = gitea_data_bag_item["jwt_secret"]
 internal_token = gitea_data_bag_item["internal_token"]
 secret_key = gitea_data_bag_item["secret_key"]
 
-apt_repository "git-core-ppa" do
-  uri "http://ppa.launchpad.net/git-core/ppa/ubuntu"
-  components ["main"]
-  key "E1DF1F24"
-  action :add
-  only_if do
-    node['platform'] == 'ubuntu' &&
-      Gem::Version.new(node['platform_version']) < Gem::Version.new('22.04')
-  end
-end
-
 package "git"
 
 user "git" do
@@ -37,10 +26,10 @@ user "git" do
   home "/home/git"
 end
 
-directory "/home/git/.ssh" do
-  owner "git"
-  group "git"
-  mode "0700"
+directory '/home/git/.ssh' do
+  owner 'git'
+  group 'git'
+  mode '0700'
   recursive true
 end

View File

@@ -33,7 +33,7 @@ DISABLE_DOWNLOAD_SOURCE_ARCHIVES = true
 [repository.signing]
 SIGNING_KEY = <%= @git_home_directory %>/.ssh/id_ed25519.pub
 SIGNING_NAME = Gitea
-SIGNING_EMAIL = git@<%= @domain %>
+SIGNING_EMAIL = <%= @email %>
 SIGNING_FORMAT = ssh
 INITIAL_COMMIT = always
 CRUD_ACTIONS = always

View File

@@ -18,8 +18,6 @@ server {
   client_max_body_size 121M;
 
-  proxy_intercept_errors on;
-
   location ~ ^/(avatars|repo-avatars)/.*$ {
     proxy_buffers 1024 8k;
     proxy_pass http://_gitea_web;
@@ -54,18 +52,5 @@ server {
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Forwarded-Proto $scheme;
-
-    error_page 404 = @slow_404;
-  }
-
-  # Slow down 404 responses to make scraping random URLs less attractive
-  location @slow_404 {
-    internal;
-    default_type text/plain;
-    content_by_lua_block {
-      ngx.sleep(10)
-      ngx.status = 404
-      ngx.say("Not Found")
-      ngx.exit(ngx.HTTP_NOT_FOUND)
-    }
   }
 }

View File

@@ -1,9 +1,9 @@
-release = "20260321"
-img_filename = "ubuntu-24.04-server-cloudimg-amd64"
+release = "20240514"
+img_filename = "ubuntu-22.04-server-cloudimg-amd64-disk-kvm"
 
 node.default["kosmos_kvm"]["host"]["qemu_base_image"] = {
-  "url" => "https://cloud-images.ubuntu.com/releases/noble/release-#{release}/#{img_filename}.img",
-  "checksum" => "5c3ddb00f60bc455dac0862fabe9d8bacec46c33ac1751143c5c3683404b110d",
+  "url" => "https://cloud-images.ubuntu.com/releases/jammy/release-#{release}/#{img_filename}.img",
+  "checksum" => "2e7698b3ebd7caead06b08bd3ece241e6ce294a6db01f92ea12bcb56d6972c3f",
   "path" => "/var/lib/libvirt/images/base/#{img_filename}-#{release}.qcow2"
 }

View File

@@ -3,7 +3,7 @@
 # Recipe:: host
 #
 
-package %w(virtinst libvirt-daemon-system libvirt-clients)
+package %w(virtinst libvirt-daemon-system)
 
 directory "/var/lib/libvirt/images/base" do
   recursive true

View File

@@ -17,7 +17,7 @@ DISKSIZE=${4:-10} # 10GB default
 # Directory where image files will be stored
 IMAGE_DIR=/var/lib/libvirt/images
 IMAGE_PATH=$IMAGE_DIR/${VMNAME}.qcow2
-CIDATA_PATH=${IMAGE_DIR}/${VMNAME}-cloudinit
+CIDATA_PATH=${IMAGE_DIR}/cidata-${VMNAME}.iso
 BASE_FILE=<%= @base_image_path %>
 
 # Create the VM image if it does not already exist
@@ -38,8 +38,9 @@ qemu-img info "$IMAGE_PATH"
 # Check if the cloud-init metadata file exists
 # if not, generate it
 if [ ! -r $CIDATA_PATH ]; then
-  mkdir -p $CIDATA_PATH
-  pushd $CIDATA_PATH
+  pushd $(dirname $CIDATA_PATH)
+  mkdir -p $VMNAME
+  cd $VMNAME
 
   cat > user-data <<-EOS
 #cloud-config
@@ -61,19 +62,25 @@ instance-id: $VMNAME
 local-hostname: $VMNAME
 EOS
 
+  genisoimage -output "$CIDATA_PATH" -volid cidata -joliet -rock user-data meta-data
+  chown libvirt-qemu:kvm "$CIDATA_PATH"
+  chmod 600 "$CIDATA_PATH"
+
   popd
 fi
 
+# setting --os-variant to ubuntu20.04 and ubuntu18.04 breaks SSH and networking
 virt-install \
   --name "$VMNAME" \
   --ram "$RAM" \
   --vcpus "$CPUS" \
   --cpu host \
   --arch x86_64 \
-  --osinfo detect=on,name=ubuntu24.04 \
+  --os-type linux \
+  --os-variant ubuntu16.04 \
   --hvm \
   --virt-type kvm \
   --disk "$IMAGE_PATH" \
+  --cdrom "$CIDATA_PATH" \
   --boot hd \
   --network=bridge=virbr0,model=virtio \
   --graphics none \
@@ -81,5 +88,4 @@ virt-install \
   --console pty \
   --channel unix,mode=bind,path=/var/lib/libvirt/qemu/$VMNAME.guest_agent.0,target_type=virtio,name=org.qemu.guest_agent.0 \
   --autostart \
-  --import \
-  --cloud-init root-password-generate=off,disable=on,meta-data=$CIDATA_PATH/meta-data,user-data=$CIDATA_PATH/user-data
+  --import

View File

@@ -1,8 +1,3 @@
-node.default['kosmos_postgresql']['postgresql_version'] = "14"
-
 # This is set to false by default, and set to true in the server resource
 # for replicas.
 node.default['kosmos_postgresql']['ready_to_set_up_replica'] = false
-
-# Address space from which clients are allowed to connect
-node.default['kosmos_postgresql']['access_addr'] = "10.1.1.0/24"

View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -euo pipefail
DB_NAME="${1:?Usage: $0 <database_name>}"
echo "== Processing DB: $DB_NAME =="
# Create publication (idempotent)
psql -d "$DB_NAME" -v ON_ERROR_STOP=1 <<'SQL'
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_publication WHERE pubname = 'migrate_pub'
) THEN
CREATE PUBLICATION migrate_pub FOR ALL TABLES;
END IF;
END
$$;
SQL
# Create logical replication slot (idempotent-ish)
SLOT="migrate_slot_${DB_NAME}"
if ! psql -d "$DB_NAME" -Atqc "SELECT 1 FROM pg_replication_slots WHERE slot_name = '$SLOT'" | grep -q 1; then
echo " Creating slot: $SLOT"
psql -d "$DB_NAME" -c "SELECT pg_create_logical_replication_slot('$SLOT', 'pgoutput');"
else
echo " Slot already exists: $SLOT"
fi
echo "== Done =="

View File

@@ -1,34 +0,0 @@
#!/bin/bash
set -e
echo "== Creating publication in each database =="
for db in $(psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN ('template1','postgres')"); do
echo "Processing DB: $db"
# Create publication (idempotent)
psql -d "$db" -v ON_ERROR_STOP=1 <<SQL
DO \$\$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_publication WHERE pubname = 'migrate_pub'
) THEN
CREATE PUBLICATION migrate_pub FOR ALL TABLES;
END IF;
END
\$\$;
SQL
# Create logical replication slot (idempotent-ish)
SLOT="migrate_slot_${db}"
if ! psql -d "$db" -Atqc "SELECT 1 FROM pg_replication_slots WHERE slot_name = '$SLOT'" | grep -q 1; then
echo " Creating slot: $SLOT"
psql -d "$db" -c "SELECT pg_create_logical_replication_slot('$SLOT', 'pgoutput');"
else
echo " Slot already exists: $SLOT"
fi
done
echo "== Done =="

View File

@@ -1,34 +0,0 @@
#!/bin/bash
set -e
echo "== Dropping subscriptions slots and publications =="
for db in $(psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN ('template1','postgres')"); do
echo "Processing DB: $db"
SLOT="migrate_slot_${db}"
# Drop slot if exists
if psql -d "$db" -Atqc "SELECT 1 FROM pg_replication_slots WHERE slot_name = '$SLOT'" | grep -q 1; then
echo " Dropping slot: $SLOT"
psql -d "$db" -c "SELECT pg_drop_replication_slot('$SLOT');"
else
echo " Slot not found: $SLOT"
fi
# Drop publication if exists
psql -d "$db" -v ON_ERROR_STOP=1 <<SQL
DO \$\$
BEGIN
IF EXISTS (
SELECT 1 FROM pg_publication WHERE pubname = 'migrate_pub'
) THEN
DROP PUBLICATION migrate_pub;
END IF;
END
\$\$;
SQL
done
echo "== Done =="

View File

@@ -1,29 +0,0 @@
#!/usr/bin/env bash
set -e
echo "== Dropping subscriptions =="
for db in $(psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN ('template1','postgres')"); do
echo "Processing DB: $db"
SUB="migrate_sub_${db}"
# Check if subscription exists
EXISTS=$(psql -d "$db" -Atqc "SELECT 1 FROM pg_subscription WHERE subname = '$SUB'")
if [ "$EXISTS" = "1" ]; then
echo " Found subscription: $SUB"
# Disable first (good practice)
psql -d "$db" -c "ALTER SUBSCRIPTION $SUB DISABLE;"
# Drop it (must be top-level)
psql -d "$db" -c "DROP SUBSCRIPTION $SUB;"
else
echo " No subscription: $SUB"
fi
done
echo "== Done =="

View File

@@ -1,9 +0,0 @@
#!/bin/bash
cd /tmp && \
(pg_dumpall --globals-only > globals.sql) && \
psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN (''template1'',''postgres'')" | \
xargs -I{} -P4 sh -c "
pg_dump -Fd -j 4 -d \"{}\" -f dump_{} &&
tar -cf - dump_{} | zstd -19 -T0 > dump_{}.tar.zst &&
rm -rf dump_{}
"

View File

@@ -1,10 +0,0 @@
#!/bin/bash
set -euo pipefail
DB_NAME="${1:?Usage: $0 <database_name>}"
cd /tmp
pg_dump -Fd -j 4 -d "$DB_NAME" -f "dump_${DB_NAME}"
tar -cf - "dump_${DB_NAME}" | zstd -19 -T0 > "dump_${DB_NAME}.tar.zst"
rm -rf "dump_${DB_NAME}"

View File

@@ -1,35 +0,0 @@
#!/bin/bash
set -e
DB="$1"
if [ -z "$DB" ]; then
echo "Usage: $0 <database>"
exit 1
fi
echo "== Fixing sequences in database: $DB =="
SQL=$(psql -d "$DB" -Atqc "
SELECT
'SELECT setval(' ||
quote_literal(pg_get_serial_sequence(quote_ident(n.nspname)||'.'||quote_ident(c.relname), a.attname)) ||
', COALESCE(MAX(' || quote_ident(a.attname) || '), 0) + 1, false) FROM ' ||
quote_ident(n.nspname)||'.'||quote_ident(c.relname) || ';'
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relkind = 'r'
AND a.attnum > 0
AND NOT a.attisdropped
AND pg_get_serial_sequence(quote_ident(n.nspname)||'.'||quote_ident(c.relname), a.attname) IS NOT NULL;
")
if [ -z "$SQL" ]; then
echo "No sequences found in $DB"
exit 0
fi
echo "$SQL" | psql -d "$DB"
echo "== Done =="

View File

@@ -1,38 +0,0 @@
#!/bin/bash
set -e
echo "== Fixing sequences across all databases =="
for db in $(psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN ('template1','postgres')"); do
echo "---- DB: $db ----"
# Generate fix statements
SQL=$(psql -d "$db" -Atqc "
SELECT
'SELECT setval(' ||
quote_literal(pg_get_serial_sequence(quote_ident(n.nspname)||'.'||quote_ident(c.relname), a.attname)) ||
', COALESCE(MAX(' || quote_ident(a.attname) || '), 0) + 1, false) FROM ' ||
quote_ident(n.nspname)||'.'||quote_ident(c.relname) || ';'
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relkind = 'r'
AND a.attnum > 0
AND NOT a.attisdropped
AND pg_get_serial_sequence(quote_ident(n.nspname)||'.'||quote_ident(c.relname), a.attname) IS NOT NULL;
")
if [ -z "$SQL" ]; then
echo "No sequences found in $db"
continue
fi
echo "Fixing sequences in $db..."
# Execute generated statements
echo "$SQL" | psql -d "$db"
done
echo "== Done fixing sequences =="

View File

@@ -1,5 +0,0 @@
#!/bin/bash
for db in $(psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN ('template1','postgres')"); do
echo "DB: $db"
psql -d "$db" -Atqc "SELECT pubname FROM pg_publication;"
done

View File

@@ -1,5 +0,0 @@
#!/bin/bash
psql -c "
SELECT slot_name,
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
FROM pg_replication_slots;"

View File

@@ -1,16 +0,0 @@
#!/bin/bash
set -euo pipefail
psql -Atqc "
SELECT datname
FROM pg_database
WHERE datallowconn
AND datname NOT IN ('template1','postgres')
" | while read -r db; do
result=$(psql -X -At -d "$db" -c "SELECT * FROM pg_stat_subscription;" 2>/dev/null || true)
if [[ -n "$result" ]]; then
echo "==== DB: $db ===="
echo "$result"
fi
done

View File

@@ -1,12 +0,0 @@
#!/bin/bash
set -euo pipefail
cd /tmp
for f in dump_*.tar.zst; do
db=$(echo $f | sed "s/dump_\(.*\)\.tar\.zst/\1/")
echo "Restoring $db"
zstd -d "$f" -c | tar -xf -
pg_restore -j 4 -d "$db" dump_$db
rm -rf "dump_$db"
done

View File

@@ -1,14 +0,0 @@
#!/bin/bash
set -euo pipefail
DB_NAME="${1:?Usage: $0 <database_name>}"
cd /tmp
FILE="dump_${DB_NAME}.tar.zst"
DIR="dump_${DB_NAME}"
echo "Restoring $DB_NAME"
zstd -d "$FILE" -c | tar -xf -
pg_restore -j 4 -d "$DB_NAME" "$DIR"
rm -rf "$DIR"

View File

@@ -36,8 +36,10 @@ class Chef
       end
     end
 
-    def postgresql_version
-      node['kosmos_postgresql']['postgresql_version']
+    def postgresql_service_name
+      postgresql_version = "12"
+
+      "postgresql@#{postgresql_version}-main"
     end
   end
 end

View File

@@ -1,121 +0,0 @@
#
# Cookbook:: kosmos_postgresql
# Recipe:: management_scripts
#
credentials = data_bag_item('credentials', 'postgresql')
cookbook_file "/usr/local/bin/pg_dump_all_databases" do
source "dump_all_databases.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_dump_database" do
source "dump_database.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_restore_all_databases" do
source "restore_all_databases.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_restore_database" do
source "restore_database.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_create_replication_publications" do
source "create_publications.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_create_replication_publication" do
source "create_publication.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_drop_replication_publications" do
source "drop_publications.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_list_replication_publications" do
source "list_publications.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_list_replication_slots" do
source "list_replication_slots.sh"
user "postgres"
group "postgres"
mode "0744"
end
template "/usr/local/bin/pg_create_replication_subscriptions" do
source "create_subscriptions.sh.erb"
user "postgres"
group "postgres"
mode "0740"
variables pg_host: "pg.kosmos.local",
pg_port: 5432,
pg_user: "replication",
pg_pass: credentials["replication_password"]
sensitive true
end
template "/usr/local/bin/pg_create_replication_subscription" do
source "create_subscription.sh.erb"
user "postgres"
group "postgres"
mode "0740"
variables pg_host: "pg.kosmos.local",
pg_port: 5432,
pg_user: "replication",
pg_pass: credentials["replication_password"]
sensitive true
end
cookbook_file "/usr/local/bin/pg_drop_replication_subscriptions" do
source "drop_subscriptions.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_list_replication_subscriptions" do
source "list_subscriptions.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_fix_sequences_in_all_databases" do
source "fix_sequences.sh"
user "postgres"
group "postgres"
mode "0744"
end
cookbook_file "/usr/local/bin/pg_fix_sequences" do
source "fix_sequences.sh"
user "postgres"
group "postgres"
mode "0744"
end

View File

@@ -3,6 +3,31 @@
 # Recipe:: primary
 #
 
+postgresql_version = "12"
+postgresql_service = "postgresql@#{postgresql_version}-main"
+
+service postgresql_service do
+  supports restart: true, status: true, reload: true
+end
+
 postgresql_custom_server postgresql_version do
   role "primary"
 end
+
+postgresql_access "zerotier members" do
+  access_type "host"
+  access_db "all"
+  access_user "all"
+  access_addr "10.1.1.0/24"
+  access_method "md5"
+  notifies :reload, "service[#{postgresql_service}]", :immediately
+end
+
+postgresql_access "zerotier members replication" do
+  access_type "host"
+  access_db "replication"
+  access_user "replication"
+  access_addr "10.1.1.0/24"
+  access_method "md5"
+  notifies :reload, "service[#{postgresql_service}]", :immediately
+end

View File

@@ -3,34 +3,54 @@
 # Recipe:: replica
 #
 
+postgresql_version = "12"
+postgresql_service = "postgresql@#{postgresql_version}-main"
+
 postgresql_custom_server postgresql_version do
   role "replica"
 end
 
+service postgresql_service do
+  supports restart: true, status: true, reload: true
+end
+
 postgresql_data_bag_item = data_bag_item('credentials', 'postgresql')
 primary = postgresql_primary
 
-if primary.nil?
-  Chef::Log.warn("No PostgreSQL primary node found. Skipping replication setup.")
-  return
-end
-
-postgresql_service_name = "postgresql@#{postgresql_version}-main"
-postgresql_data_dir = "/var/lib/postgresql/#{postgresql_version}/main"
-
-# TODO Replace pg.kosmos.local with private IP once available
-# via proper node attribute
-# https://gitea.kosmos.org/kosmos/chef/issues/263
-execute "set up replication" do
-  command <<-EOF
-    systemctl stop #{postgresql_service_name}
-    mv #{postgresql_data_dir} #{postgresql_data_dir}.old
-    pg_basebackup -h pg.kosmos.local -U replication -D #{postgresql_data_dir} -R
-    chown -R postgres:postgres #{postgresql_data_dir}
-    systemctl start #{postgresql_service_name}
-  EOF
-  environment 'PGPASSWORD' => postgresql_data_bag_item['replication_password']
-  sensitive true
-  not_if { ::File.exist? "#{postgresql_data_dir}/standby.signal" }
-end
+unless primary.nil? # TODO
+  postgresql_data_dir = "/var/lib/postgresql/#{postgresql_version}/main"
+
+  # FIXME get zerotier IP
+  execute "set up replication" do
+    command <<-EOF
+      systemctl stop #{postgresql_service}
+      mv #{postgresql_data_dir} #{postgresql_data_dir}.old
+      pg_basebackup -h pg.kosmos.local -U replication -D #{postgresql_data_dir} -R
+      chown -R postgres:postgres #{postgresql_data_dir}
+      systemctl start #{postgresql_service}
+    EOF
+    environment 'PGPASSWORD' => postgresql_data_bag_item['replication_password']
+    sensitive true
+    not_if { ::File.exist? "#{postgresql_data_dir}/standby.signal" }
+  end
+
+  postgresql_access "zerotier members" do
+    access_type "host"
+    access_db "all"
+    access_user "all"
+    access_addr "10.1.1.0/24"
+    access_method "md5"
+    notifies :reload, "service[#{postgresql_service}]", :immediately
+  end
+
+  postgresql_access "zerotier members replication" do
+    access_type "host"
+    access_db "replication"
+    access_user "replication"
+    access_addr "10.1.1.0/24"
+    access_method "md5"
+    notifies :reload, "service[#{postgresql_service}]", :immediately
+  end
+end

View File

@@ -1,8 +0,0 @@
#
# Cookbook:: kosmos_postgresql
# Recipe:: replica_logical
#
postgresql_custom_server postgresql_version do
role "replica_logical"
end

View File

@@ -44,28 +44,25 @@ action :create do
   shared_buffers = if node['memory']['total'].to_i / 1024 < 1024 # < 1GB RAM
                      "128MB"
-                   else # >= 1GB RAM, use 25% of total RAM
-                     "#{node['memory']['total'].to_i / 1024 / 4}MB"
+                   else # >= 1GB RAM, use 50% of total RAM
+                     "#{node['memory']['total'].to_i / 1024 / 2}MB"
                    end
 
   additional_config = {
     max_connections: 200, # default
     shared_buffers: shared_buffers,
-    work_mem: "4MB",
     unix_socket_directories: "/var/run/postgresql",
     dynamic_shared_memory_type: "posix",
     timezone: "UTC", # default is GMT
     listen_addresses: "0.0.0.0",
     promote_trigger_file: "#{postgresql_data_dir}/failover.trigger",
-    wal_level: "logical",
-    wal_keep_size: 4096, # 256 segments, 16MB each
-    max_replication_slots: 16
+    wal_keep_segments: 256
   }
 
   postgresql_server_conf "main" do
     version postgresql_version
     additional_config additional_config
-    notifies :restart, "service[#{postgresql_service}]", :delayed
+    notifies :reload, "service[#{postgresql_service}]", :delayed
   end
 
   postgresql_user "replication" do
@@ -73,24 +70,6 @@ action :create do
     replication true
     password postgresql_credentials['replication_password']
   end
-
-  postgresql_access "all members" do
-    access_type "host"
-    access_db "all"
-    access_user "all"
-    access_addr node['kosmos_postgresql']['access_addr']
-    access_method "md5"
-    notifies :reload, "service[#{postgresql_service}]", :immediately
-  end
-
-  postgresql_access "replication members" do
-    access_type "host"
-    access_db "replication"
-    access_user "replication"
-    access_addr node['kosmos_postgresql']['access_addr']
-    access_method "md5"
-    notifies :reload, "service[#{postgresql_service}]", :immediately
-  end
 end
 
 action_class do
View File

@@ -1,31 +0,0 @@
#!/bin/bash
set -euo pipefail
DB_NAME="${1:?Usage: $0 <database_name>}"
echo "== Processing DB: $DB_NAME =="
SLOT="migrate_slot_${DB_NAME}"
SUB="migrate_sub_${DB_NAME}"
psql -d "$DB_NAME" -v ON_ERROR_STOP=1 <<SQL
DO \$\$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_subscription WHERE subname = '$SUB'
) THEN
CREATE SUBSCRIPTION $SUB
CONNECTION 'host=<%= @pg_host %> port=<%= @pg_port %> dbname=$DB_NAME user=<%= @pg_user %> password=<%= @pg_pass %>'
PUBLICATION migrate_pub
WITH (
slot_name = '$SLOT',
create_slot = false,
copy_data = false,
enabled = true
);
END IF;
END
\$\$;
SQL
echo "== Done =="

View File

@@ -1,34 +0,0 @@
#!/bin/bash
set -e
echo "== Creating subscriptions for all databases =="
for db in $(psql -Atqc "SELECT datname FROM pg_database WHERE datallowconn AND datname NOT IN ('template1','postgres')"); do
echo "Processing DB: $db"
SLOT="migrate_slot_${db}"
SUB="migrate_sub_${db}"
psql -d "$db" -v ON_ERROR_STOP=1 <<SQL
DO \$\$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM pg_subscription WHERE subname = '$SUB'
) THEN
CREATE SUBSCRIPTION $SUB
CONNECTION 'host=<%= @pg_host %> port=<%= @pg_port %> dbname=$db user=<%= @pg_user %> password=<%= @pg_pass %>'
PUBLICATION migrate_pub
WITH (
slot_name = '$SLOT',
create_slot = false,
copy_data = false,
enabled = true
);
END IF;
END
\$\$;
SQL
done
echo "== Done =="

View File

@@ -1,8 +1,5 @@
 source 'https://supermarket.chef.io'
 
-cookbook 'kosmos_openresty', path: '../../site-cookbooks/kosmos_openresty'
-cookbook 'kosmos-base', path: '../../site-cookbooks/kosmos-base'
-cookbook 'openresty', path: '../../site-cookbooks/openresty'
-cookbook 'kosmos-postfix', path: '../../site-cookbooks/kosmos-postfix'
+cookbook 'kosmos-nginx', path: '../../site-cookbooks/kosmos-nginx'
 
 metadata


@@ -1,4 +1,4 @@
-node.default['rskj']['version'] = "9.0.1~#{node['lsb']['codename']}"
+node.default['rskj']['version'] = '7.0.0~jammy'
 node.default['rskj']['network'] = 'testnet'
 node.default['rskj']['nginx']['domain'] = nil


@@ -34,9 +34,9 @@ verifier:
   name: inspec
 platforms:
-  - name: ubuntu-24.04
+  - name: ubuntu-22.04
     driver:
-      image: dokken/ubuntu-24.04
+      image: dokken/ubuntu-22.04
       privileged: true
       pid_one_command: /usr/lib/systemd/systemd
       intermediate_instructions:


@@ -3,7 +3,7 @@ maintainer 'Kosmos Developers'
 maintainer_email 'ops@kosmos.org'
 license 'MIT'
 description 'Installs/configures RSKj and related software'
-version '0.5.0'
+version '0.4.0'
 chef_version '>= 18.2'
 issues_url 'https://gitea.kosmos.org/kosmos/chef/issues'
 source_url 'https://gitea.kosmos.org/kosmos/chef'


@@ -20,19 +20,10 @@ apt_repository 'rskj' do
 end
 apt_package 'openjdk-17-jdk'
-apt_package 'debconf-utils'
-execute 'preseed-rskj-license' do
-  command 'echo "rskj shared/accepted-rsk-license-v1-1 boolean true" | debconf-set-selections'
-  not_if 'debconf-get-selections | grep -q "shared/accepted-rsk-license-v1-1.*true"'
-end
-execute 'preseed-rskj-config' do
-  command "echo \"rskj shared/config select #{node['rskj']['network']}\" | debconf-set-selections"
-  not_if "debconf-get-selections | grep -q \"shared/config.*#{node['rskj']['network']}\""
-end
 apt_package 'rskj' do
+  response_file 'rskj-preseed.cfg.erb'
+  response_file_variables network: node['rskj']['network']
   options '--assume-yes'
   version node['rskj']['version']
 end


@@ -1,6 +1,6 @@
 #_preseed_V1
 # Do you agree to the terms of the applicable licenses?
-rskj shared/accepted-rsk-license-v1-1 boolean true
+rskj shared/accepted-rsk-license-v1-1 select true
 # Choose a configuration environment to run your node.
 # Choices: mainnet, testnet, regtest
 rskj shared/config select <%= @network %>


@@ -9,7 +9,7 @@ end
 describe package('rskj') do
   it { should be_installed }
-  its('version') { should eq '9.0.1~noble' }
+  its('version') { should eq '7.0.0~jammy' }
 end
 describe service('rsk') do