Compare commits


10 Commits

| Author | SHA | Message | Date |
|---|---|---|---|
| Thomas Gräfenstein | 5e57d5258a | add migration plan | 2026-03-22 13:09:13 +01:00 |
| Thomas Gräfenstein | 522207b9d9 | add claude permissions | 2026-03-22 13:09:02 +01:00 |
| Thomas Gräfenstein | 09aee112da | add local setup doc | 2026-03-22 13:02:11 +01:00 |
| Thomas Gräfenstein | 158a8e6eb4 | update readme | 2026-03-22 12:38:24 +01:00 |
| Thomas Gräfenstein | f3eea007f7 | improve .env handling | 2026-03-22 12:38:17 +01:00 |
| Thomas Gräfenstein | 1fed3dde51 | simplified docker compose setup | 2026-03-22 12:32:37 +01:00 |
| Thomas Gräfenstein | 89b806fd5b | fix more issues | 2026-03-22 12:29:58 +01:00 |
| Thomas Gräfenstein | caa1c7f471 | pin versions | 2026-03-22 12:23:52 +01:00 |
| Thomas Gräfenstein | 0f12c5f5a8 | added basic caddy rate limits | 2026-03-22 12:22:00 +01:00 |
| Thomas Gräfenstein | ce9dba4923 | limit docker socket api access to alloy | 2026-03-22 12:19:10 +01:00 |
16 changed files with 328 additions and 56 deletions


@@ -0,0 +1,30 @@
{
"permissions": {
"allow": [
"WebSearch",
"WebFetch(domain:docs.docker.com)",
"WebFetch(domain:hub.docker.com)",
"WebFetch(domain:caddyserver.com)",
"WebFetch(domain:docs.nextcloud.com)",
"Bash(git log:*)",
"Bash(git diff:*)",
"Bash(git status:*)"
],
"deny": [
"Bash(ssh:*)",
"Bash(rm -rf:*)",
"Bash(docker system prune:*)",
"Bash(docker volume rm:*)",
"Bash(docker compose down:*)",
"Bash(docker stop:*)",
"Bash(docker rm:*)",
"Bash(scp:*)",
"Bash(rsync:*)",
"Bash(curl -X POST:*)",
"Bash(curl -X DELETE:*)",
"Bash(git push:*)",
"Bash(git reset --hard:*)",
"Bash(git clean:*)"
]
}
}


@@ -8,7 +8,7 @@ GitOps-style Docker Compose setup for a self-hosted VPS running Nextcloud, Gitea
## Architecture
-Four independent service stacks, each with its own `docker-compose.yml`:
+A root `docker-compose.yml` uses `include` to compose four service stacks, each with its own `docker-compose.yml`:
- **caddy/** — Reverse proxy with auto HTTPS. All services route through the shared `proxy` Docker network.
- **nextcloud/** — Nextcloud 29 + PostgreSQL 16 + Redis 7 + cron container. Has its own `.env` for DB credentials and Nextcloud config. Uses internal `nextcloud-internal` network for DB/Redis isolation.
@@ -16,8 +16,9 @@ Four independent service stacks, each with its own `docker-compose.yml`:
- **monitoring/** — Grafana Alloy collecting Docker logs (Loki) and node metrics (Prometheus) to Grafana Cloud. Has its own `.env` for cloud credentials.
Key design patterns:
- Root `docker-compose.yml` includes all stacks via `include:` — single command to manage everything
- All stacks share the external `proxy` Docker network for Caddy routing
-- Each service's compose file requires `--env-file .env` (root-level) for `DATA_ROOT` and `DOMAIN`
+- Root `.env` provides `DATA_ROOT` and `DOMAIN` (pass via `--env-file .env`)
- Service-specific secrets live in per-service `.env` files (loaded via `env_file:` in compose)
- All persistent data under `${DATA_ROOT}` (default `/opt/docker-data/`)
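As an illustration, the root `.env` consumed via `--env-file` would hold just the two shared variables (values below are examples, not the repo's actual file):

```
DATA_ROOT=/opt/docker-data   # base directory for all persistent data (repo default)
DOMAIN=t-gstone.de           # apex domain used for subdomain routing
```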
@@ -27,6 +28,11 @@ Key design patterns:
# Deploy everything (installs Docker if needed, creates dirs, starts all stacks)
./scripts/deploy.sh
# Manage all services
docker compose --env-file .env up -d
docker compose --env-file .env logs -f
docker compose --env-file .env down
# Manage individual services
docker compose -f <service>/docker-compose.yml --env-file .env up -d
docker compose -f <service>/docker-compose.yml --env-file .env logs -f
@@ -43,10 +49,11 @@ docker exec caddy caddy reload --config /etc/caddy/Caddyfile
## Adding a New Service
1. Create `myapp/docker-compose.yml` joining the `proxy` external network, with data under `${DATA_ROOT}/myapp/`
-2. Add reverse proxy entry in `caddy/Caddyfile`
-3. Add data directory creation to `scripts/deploy.sh`
-4. Add backup steps to `scripts/backup.sh` if it has persistent data
-5. Create DNS A record for the subdomain
+2. Add `- path: myapp/docker-compose.yml` to root `docker-compose.yml`
+3. Add reverse proxy entry in `caddy/Caddyfile`
+4. Add data directory creation to `scripts/deploy.sh`
+5. Add backup steps to `scripts/backup.sh` if it has persistent data
+6. Create DNS A record for the subdomain
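A minimal sketch of what step 1's compose file could look like — `myapp`, its image, port, and paths are all placeholders, not part of the repo:

```yaml
# myapp/docker-compose.yml — hypothetical skeleton for a new stack
services:
  myapp:
    image: myapp/myapp:1.0.0        # pin a version, as the other stacks do
    container_name: myapp
    restart: unless-stopped
    volumes:
      - ${DATA_ROOT}/myapp:/data    # persistent data under DATA_ROOT
    networks:
      - proxy                       # so Caddy can reach it by container name

networks:
  proxy:
    external: true                  # shared network created at deploy time
```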
## Environment Files


@@ -23,6 +23,37 @@ Create these A records pointing to your VPS IP:
| `nextcloud.t-gstone.de` | `<VPS_IP>` |
| `git.t-gstone.de` | `<VPS_IP>` |
## Local Setup
### SSH Access
Add this to `~/.ssh/config` on your local machine:
```
Host t-gstone.de
HostName t-gstone.de
User gstone
Port 55
IdentityFile ~/.ssh/id_ed25519
UseKeychain yes
AddKeysToAgent yes
```
Generate a key and copy it to the VPS:
```bash
# Generate key (skip if you already have ~/.ssh/id_ed25519)
ssh-keygen -t ed25519
# Copy it to the VPS (will ask for your password once)
ssh-copy-id -p 55 gstone@t-gstone.de
# Store passphrase in macOS Keychain
ssh-add --apple-use-keychain ~/.ssh/id_ed25519
```
After this, `ssh t-gstone.de` connects without any password prompts.
## Quick Start
```bash
@@ -69,19 +100,24 @@ All persistent data lives under `/opt/docker-data/`:
## Managing Services
-Each service has its own compose file and can be managed independently:
+A root `docker-compose.yml` includes all stacks, so you can manage everything with one command:
```bash
-# Restart just Nextcloud
-docker compose -f nextcloud/docker-compose.yml --env-file .env up -d
+# Start / restart all services
+docker compose --env-file .env up -d
-# View logs for Gitea
-docker compose -f gitea/docker-compose.yml --env-file .env logs -f
+# View logs for all services
+docker compose --env-file .env logs -f
# Stop everything
-for svc in monitoring gitea nextcloud caddy; do
-docker compose -f $svc/docker-compose.yml --env-file .env down
-done
+docker compose --env-file .env down
```
You can still target individual services via their compose file:
```bash
docker compose -f nextcloud/docker-compose.yml --env-file .env up -d
docker compose -f gitea/docker-compose.yml --env-file .env logs -f
```
## Adding a New Service
@@ -91,16 +127,17 @@ done
- Join the `proxy` external network
- Bind mount data to `${DATA_ROOT}/myapp/`
- Add `myapp/.env.example` if the service needs secrets
-3. Add a reverse proxy entry in `caddy/Caddyfile`:
+3. Add `- path: myapp/docker-compose.yml` to root `docker-compose.yml`
+4. Add a reverse proxy entry in `caddy/Caddyfile`:
```
myapp.t-gstone.de {
reverse_proxy myapp:8080
}
```
-4. Reload Caddy: `docker exec caddy caddy reload --config /etc/caddy/Caddyfile`
-5. Add a DNS A record for `myapp.t-gstone.de` -> VPS IP
-6. Add data directory creation to `scripts/deploy.sh`
-7. Add backup steps to `scripts/backup.sh` if the service has persistent data
+5. Reload Caddy: `docker exec caddy caddy reload --config /etc/caddy/Caddyfile`
+6. Add a DNS A record for `myapp.t-gstone.de` -> VPS IP
+7. Add data directory creation to `scripts/deploy.sh`
+8. Add backup steps to `scripts/backup.sh` if the service has persistent data
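Following the pattern the existing stacks use, the new service would typically also carry the shared logging and healthcheck blocks. A sketch — the port and healthcheck endpoint are placeholders:

```yaml
# Fragment to add under the hypothetical myapp service definition
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate each log file after 10 MB
        max-file: "3"     # keep at most 3 rotated files
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/"]  # placeholder endpoint
      interval: 30s
      timeout: 5s
      retries: 3
```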
## Backup & Restore
@@ -153,7 +190,7 @@ or low cost, and Restic handles encryption + deduplication automatically. A cron
2. Go to **My Account** -> **Grafana Cloud** -> your stack
3. Find your Loki and Prometheus endpoints + credentials
4. Fill in `monitoring/.env` with those values
-5. Start the monitoring stack: `docker compose -f monitoring/docker-compose.yml --env-file .env up -d`
+5. Start the monitoring stack: `docker compose --env-file .env up -d`
### Recommended Alerts


@@ -1,3 +1,13 @@
{
servers {
timeouts {
read_header 10s
idle 60s
}
max_header_size 16KB
}
}
nextcloud.t-gstone.de {
reverse_proxy nextcloud:80


@@ -13,6 +13,16 @@ services:
- ${DATA_ROOT}/caddy/config:/config
networks:
- proxy
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
healthcheck:
test: ["CMD", "caddy", "validate", "--config", "/etc/caddy/Caddyfile"]
interval: 30s
timeout: 5s
retries: 3
networks:
proxy:

docker-compose.yml (new file)

@@ -0,0 +1,5 @@
include:
- path: caddy/docker-compose.yml
- path: nextcloud/docker-compose.yml
- path: gitea/docker-compose.yml
- path: monitoring/docker-compose.yml


@@ -1,6 +1,6 @@
services:
gitea:
-image: gitea/gitea:latest-rootless
+image: gitea/gitea:1.25.5-rootless
container_name: gitea
restart: unless-stopped
env_file: .env
@@ -11,6 +11,16 @@ services:
- "2222:2222"
networks:
- proxy
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/api/healthz"]
interval: 30s
timeout: 10s
retries: 3
networks:
proxy:

migration.md (new file)

@@ -0,0 +1,73 @@
# Migration Plan: Bare-Metal Nextcloud to Docker
Strategy: a fresh Docker install, with files manually re-uploaded via the Nextcloud client.
Old setup: bare-metal Nextcloud (MySQL) at `t-gstone.de/nextcloud`.
New setup: Docker-based Nextcloud (PostgreSQL) at `nextcloud.t-gstone.de`.
## Before Migration
### 1. Export Calendars and Contacts from Old Instance
These live in the database and won't carry over automatically:
- **Calendars**: Go to Calendar app > Settings (bottom-left) > click `...` next to each calendar > Export (downloads `.ics`)
- **Contacts**: Go to Contacts app > Settings (bottom-left) > click `...` next to each address book > Export (downloads `.vcf`)
Also export any other DB-only app data you care about (Notes, Deck boards, Bookmarks, etc.).
### 2. Create DNS Record
Add an A record for `nextcloud.t-gstone.de` pointing to your VPS IP. Do this early so DNS propagates.
### 3. Deploy Fresh Docker Setup
```bash
# Clone repo on VPS, configure .env files
cp .env.example .env # set DOMAIN and DATA_ROOT
cp nextcloud/.env.example nextcloud/.env # set DB creds, admin user, redis password
cp gitea/.env.example gitea/.env
cp monitoring/.env.example monitoring/.env
# Deploy
./scripts/deploy.sh
```
Verify the fresh instance works at `https://nextcloud.t-gstone.de`.
### 4. Re-Upload Files via Nextcloud Client
- Install the Nextcloud desktop client
- Point it to `https://nextcloud.t-gstone.de`
- Sync your files from your local machine
### 5. Re-Import Calendars and Contacts
- **Calendars**: Calendar app > Settings > Import > select the `.ics` files
- **Contacts**: Contacts app > Settings > Import > select the `.vcf` files
### 6. Verify
- [ ] Files are complete and accessible
- [ ] Calendars show all events
- [ ] Contacts are intact
- [ ] Sharing works
- [ ] Mobile apps connect successfully
### 7. Decommission Old Instance
Once satisfied:
1. Shut down old bare-metal Nextcloud
2. Optionally redirect `t-gstone.de/nextcloud` to `nextcloud.t-gstone.de`
3. Update all Nextcloud clients on your devices to the new URL
4. Keep the old data/DB dump as a backup for a few weeks before deleting
## What Won't Carry Over (DB-only data)
These are stored in the MySQL database, not in files. Export before shutting down the old instance if you need them:
- Calendars / Contacts (CalDAV/CardDAV) — export as `.ics`/`.vcf`
- Share links and shared folder structures
- Notes, Deck boards, Bookmarks, Talk history
- App settings and configurations
- Activity log / file versioning metadata


@@ -3,7 +3,7 @@
// ============================================================
discovery.docker "containers" {
-host = "unix:///var/run/docker.sock"
+host = "http://docker-socket-proxy:2375"
}
discovery.relabel "containers" {
@@ -21,7 +21,7 @@ discovery.relabel "containers" {
}
loki.source.docker "containers" {
-host = "unix:///var/run/docker.sock"
+host = "http://docker-socket-proxy:2375"
targets = discovery.relabel.containers.output
forward_to = [loki.write.grafana_cloud.receiver]
}


@@ -1,12 +1,47 @@
services:
docker-socket-proxy:
image: tecnativa/docker-socket-proxy:0.3
container_name: docker-socket-proxy
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
- CONTAINERS=1
- LOG=1
- POST=0
- BUILD=0
- COMMIT=0
- CONFIGS=0
- DISTRIBUTION=0
- EXEC=0
- IMAGES=0
- INFO=0
- NETWORKS=0
- NODES=0
- PLUGINS=0
- SERVICES=0
- SESSION=0
- SWARM=0
- SYSTEM=0
- TASKS=0
- VOLUMES=0
networks:
- monitoring
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
alloy:
-image: grafana/alloy:latest
+image: grafana/alloy:v1.14.1
container_name: alloy
restart: unless-stopped
depends_on:
- docker-socket-proxy
env_file: .env
volumes:
- ./config.alloy:/etc/alloy/config.alloy:ro
-- /var/run/docker.sock:/var/run/docker.sock:ro
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/host/root:ro
@@ -17,6 +52,11 @@ services:
pid: host
networks:
- monitoring
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
networks:
monitoring:


@@ -11,3 +11,6 @@ NEXTCLOUD_ADMIN_PASSWORD=CHANGE_ME_admin_password
NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.t-gstone.de
OVERWRITEPROTOCOL=https
OVERWRITECLIURL=https://nextcloud.t-gstone.de
# Redis
REDIS_PASSWORD=CHANGE_ME_redis_password


@@ -12,12 +12,23 @@ services:
environment:
- POSTGRES_HOST=postgres
- REDIS_HOST=redis
- REDIS_HOST_PASSWORD=${REDIS_PASSWORD}
volumes:
- ${DATA_ROOT}/nextcloud/html:/var/www/html
- ${DATA_ROOT}/nextcloud/data:/var/www/html/data
networks:
- proxy
- nextcloud-internal
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/status.php"]
interval: 30s
timeout: 10s
retries: 3
postgres:
image: postgres:16-alpine
@@ -33,13 +44,25 @@ services:
interval: 10s
timeout: 5s
retries: 5
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
redis:
image: redis:7-alpine
container_name: nextcloud-redis
restart: unless-stopped
command: redis-server --requirepass ${REDIS_PASSWORD}
env_file: .env
networks:
- nextcloud-internal
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
cron:
image: nextcloud:29-apache
@@ -53,6 +76,11 @@ services:
- ${DATA_ROOT}/nextcloud/data:/var/www/html/data
networks:
- nextcloud-internal
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
networks:
proxy:


@@ -1,14 +1,16 @@
-# Code Review Issues
+# Repo Review — nextcloud-selfhosted
-| # | Severity | File | Issue | Status |
-|---|---|---|---|---|
-| 1 | Critical | `scripts/deploy.sh` | `SCRIPT_DIR` resolves to `scripts/` but paths assume repo root (e.g. `$SCRIPT_DIR/caddy/docker-compose.yml`). All scripts broken after move to `scripts/`. Fix: use `REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"` | DONE |
-| 2 | Critical | `scripts/backup.sh` | Same broken `SCRIPT_DIR` path issue | DONE |
-| 3 | Critical | `scripts/restore.sh` | Same broken `SCRIPT_DIR` path issue | DONE |
-| 4 | High | `scripts/backup.sh:20` | `pg_dumpall -U nextcloud` hardcodes DB username instead of reading from env | DONE |
-| 5 | High | `scripts/restore.sh:68` | `psql -U nextcloud` hardcodes DB username instead of reading from env | DONE |
-| 6 | High | `scripts/deploy.sh:13` | `source .env` in a root-privileged script can execute arbitrary commands. Consider safer parsing or variable validation | DONE |
-| 7 | Medium | `monitoring/docker-compose.yml` | Docker socket + `/proc` + `/sys` + `/` mounted into Alloy container. Consider using a Docker socket proxy to limit API access | TODO |
-| 8 | Medium | `caddy/Caddyfile` | No rate limiting configured at the reverse proxy layer | TODO |
-| 9 | Low | `gitea/docker-compose.yml` | `gitea/gitea:latest-rootless` unpinned — pin to specific version like Nextcloud does | TODO |
-| 10 | Low | `monitoring/docker-compose.yml` | `grafana/alloy:latest` unpinned — pin to specific version | TODO |
+| # | Priority | Category | Issue | Location | Suggestion | Status |
+|---|---|---|---|---|---|---|
+| 1 | High | Security | `backup.sh` and `restore.sh` use `source` to load `.env` files, which executes arbitrary shell code | `scripts/backup.sh:5-6`, `scripts/restore.sh:5-6` | Replace `source` with the safe `eval "$(grep ...)"` parser used in `deploy.sh:14` | DONE |
+| 2 | High | Correctness | Cron path hint is wrong — says `$REPO_ROOT/backup.sh` instead of `$REPO_ROOT/scripts/backup.sh` | `scripts/backup.sh:49` | Change to `$REPO_ROOT/scripts/backup.sh` | DONE |
+| 3 | Medium | Correctness | Postgres readiness check uses `sleep 5` instead of a proper wait | `scripts/restore.sh:66` | Use `docker compose up -d --wait postgres` or poll with `pg_isready` in a loop | DONE |
+| 4 | Medium | Correctness | `pg_dumpall` output restored with `psql -U $POSTGRES_USER` — role creation statements may fail | `scripts/restore.sh:69` | Restore against the `postgres` database: `psql -U "$POSTGRES_USER" -d postgres` | DONE |
+| 5 | Medium | Reliability | No Docker log rotation — JSON log driver can fill disk | All `docker-compose.yml` files | Add `logging: { driver: json-file, options: { max-size: "10m", max-file: "3" } }` to each service, or configure in `/etc/docker/daemon.json` | DONE |
+| 6 | Medium | Security | Alloy container mounts entire root filesystem (`/:/host/root:ro`) — exposes secrets in `.env` files | `monitoring/docker-compose.yml:42` | Mount only needed paths (e.g., `/etc:/host/etc:ro`) or use a more restrictive bind | SKIPPED |
+| 7 | Medium | Reliability | Rate limits mentioned in commit `0f12c5f` but not present in Caddyfile | `caddy/Caddyfile` | Add `rate_limit` directive or verify the commit wasn't partially reverted | SKIPPED |
+| 8 | Low | Backup | Caddy TLS certificates (`${DATA_ROOT}/caddy/data/`) not included in backup | `scripts/backup.sh` | Add a `tar` step for `caddy/data` — avoids Let's Encrypt rate limits on restore | DONE |
+| 9 | Low | Reliability | `deploy.sh` doesn't pull latest images before starting | `scripts/deploy.sh:78-88` | Add `docker compose pull` before each `up -d` call | DONE |
+| 10 | Low | Security | Redis has no password — reachable from any container on `nextcloud-internal` network | `nextcloud/docker-compose.yml:38-42` | Add `command: redis-server --requirepass $REDIS_PASSWORD` and pass the password to Nextcloud via `REDIS_HOST_PASSWORD` | DONE |
+| 11 | Low | Reliability | No healthchecks on Nextcloud, Gitea, or Caddy containers | `nextcloud/docker-compose.yml`, `gitea/docker-compose.yml`, `caddy/docker-compose.yml` | Add `healthcheck` blocks (e.g., `curl -f http://localhost` for Nextcloud, `caddy validate` for Caddy) | DONE |
+| 12 | Low | Reliability | No container resource limits — a runaway process can OOM the VPS | All `docker-compose.yml` files | Add `mem_limit` and `cpus` to at least Nextcloud, Postgres, and Alloy | SKIPPED |
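For item 5, the daemon-wide alternative mentioned in the suggestion would look roughly like this in `/etc/docker/daemon.json` (a sketch; note it only applies to containers created after the daemon restarts, existing containers keep their old settings):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```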


@@ -2,8 +2,12 @@
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
-source "$REPO_ROOT/.env"
-source "$REPO_ROOT/nextcloud/.env"
+set -a
+eval "$(grep -v '^#' "$REPO_ROOT/.env" | grep -v '^$' | grep '^[A-Za-z_][A-Za-z_0-9]*=' )"
+set +a
+set -a
+eval "$(grep -v '^#' "$REPO_ROOT/nextcloud/.env" | grep -v '^$' | grep '^[A-Za-z_][A-Za-z_0-9]*=' )"
+set +a
DATA_ROOT="${DATA_ROOT:-/opt/docker-data}"
BACKUP_DIR="/opt/backups"
@@ -34,6 +38,13 @@ echo " -> Archiving Gitea data..."
tar -czf "$BACKUP_DIR/gitea-$DATE.tar.gz" \
-C "$DATA_ROOT" gitea/data gitea/config
# ------------------------------------------------------------------
# Caddy TLS certificates
# ------------------------------------------------------------------
echo " -> Archiving Caddy TLS data..."
tar -czf "$BACKUP_DIR/caddy-$DATE.tar.gz" \
-C "$DATA_ROOT" caddy/data
# ------------------------------------------------------------------
# Rotate old backups
# ------------------------------------------------------------------
@@ -46,4 +57,4 @@ ls -lh "$BACKUP_DIR"/*"$DATE"* 2>/dev/null || echo " (no files found)"
echo ""
echo "To schedule daily backups, add to crontab (crontab -e):"
-echo " 0 3 * * * $REPO_ROOT/backup.sh >> /var/log/backup.log 2>&1"
+echo " 0 3 * * * $REPO_ROOT/scripts/backup.sh >> /var/log/backup.log 2>&1"


@@ -73,19 +73,19 @@ for svc in nextcloud gitea monitoring; do
done
# ------------------------------------------------------------------
-# Start stacks in order
+# Lock down .env files (readable only by root)
# ------------------------------------------------------------------
-echo "==> Starting Caddy..."
-docker compose -f "$REPO_ROOT/caddy/docker-compose.yml" --env-file "$REPO_ROOT/.env" up -d
+echo "==> Securing .env files..."
+for envfile in "$REPO_ROOT"/.env "$REPO_ROOT"/*/.env; do
+[ -f "$envfile" ] && chmod 600 "$envfile" && chown root:root "$envfile"
+done
-echo "==> Starting Nextcloud..."
-docker compose -f "$REPO_ROOT/nextcloud/docker-compose.yml" --env-file "$REPO_ROOT/.env" up -d
-echo "==> Starting Gitea..."
-docker compose -f "$REPO_ROOT/gitea/docker-compose.yml" --env-file "$REPO_ROOT/.env" up -d
-echo "==> Starting Monitoring..."
-docker compose -f "$REPO_ROOT/monitoring/docker-compose.yml" --env-file "$REPO_ROOT/.env" up -d
+# ------------------------------------------------------------------
+# Start all stacks
+# ------------------------------------------------------------------
+echo "==> Pulling and starting all services..."
+docker compose --env-file "$REPO_ROOT/.env" --project-directory "$REPO_ROOT" pull
+docker compose --env-file "$REPO_ROOT/.env" --project-directory "$REPO_ROOT" up -d
echo ""
echo "==> All services started. Verify with: docker ps"


@@ -2,8 +2,12 @@
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
-source "$REPO_ROOT/.env"
-source "$REPO_ROOT/nextcloud/.env"
+set -a
+eval "$(grep -v '^#' "$REPO_ROOT/.env" | grep -v '^$' | grep '^[A-Za-z_][A-Za-z_0-9]*=' )"
+set +a
+set -a
+eval "$(grep -v '^#' "$REPO_ROOT/nextcloud/.env" | grep -v '^$' | grep '^[A-Za-z_][A-Za-z_0-9]*=' )"
+set +a
DATA_ROOT="${DATA_ROOT:-/opt/docker-data}"
BACKUP_DIR="/opt/backups"
@@ -63,10 +67,12 @@ tar -xzf "$GITEA_ARCHIVE" -C "$DATA_ROOT"
echo "==> Starting Postgres for DB restore..."
docker compose -f "$REPO_ROOT/nextcloud/docker-compose.yml" --env-file "$REPO_ROOT/.env" up -d postgres
echo " -> Waiting for Postgres to be ready..."
-sleep 5
+until docker exec nextcloud-postgres pg_isready -U "$POSTGRES_USER" -d "$POSTGRES_DB" &>/dev/null; do
+sleep 1
+done
echo "==> Restoring Nextcloud database..."
-docker exec -i nextcloud-postgres psql -U "$POSTGRES_USER" < "$DB_DUMP"
+docker exec -i nextcloud-postgres psql -U "$POSTGRES_USER" -d postgres < "$DB_DUMP"
# ------------------------------------------------------------------
# Start all services