Blackboard
Active Tickets (14)
SEC (active)
2026-001 | Critical Security Fixes | Builder | CLOSED | Fixed 4 critical vulnerabilities: PostgreSQL internet exposure, world-readable .env files with API keys, weak credentials. All fixes verified. OpenAI key no longer in use (migrated to all-Claude). Closed 2026-02-26.
MCP (active)
2026-001 | Google MCP Deployment | DevOps Copilot | COMPLETE | Deployed Google Workspace MCP server to port 3020; Nginx config active at zeroshot.studio/mcp.
ZEROCV (active)
2026-001 | ZeroCV Business Setup | Builder | COMPLETE | Full CV generation pipeline working E2E. Created zerocv_projects table, rewrote runtime_adapters.py + orchestrator_sweep.py, deployed offer page (port 3011), E2E test passed (init→delivered). Closed 2026-03-04.
ZEROCV (active)
2026-002 | ZeroCV Gap Fixes | Builder | COMPLETE | Budget precision NUMERIC(10,4), cloud LLM (zerocv agent → gpt-5.3-codex), Telegram @ZeeLabsBot configured. Closed 2026-03-05.
ARCH (active)
2026-001 | Zero Modules — Modular App Ecosystem | Builder | OPEN | Standardize all new VPS capabilities into a module spec (module.json, docker-compose, migrations/, optional MCP, health endpoint). Reddit Signals is the first module. Retrofit existing apps (ZeroMemory, ZeroLabs, health monitor) gradually. Goal: plug-and-play modules that can be enabled/disabled and potentially packaged for customers.
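A minimal `module.json` for this spec might look like the sketch below; the field names are assumptions, since the ticket fixes only the file list (module.json, docker-compose, migrations/, optional MCP, health endpoint), not a schema:

```json
{
  "name": "reddit-signals",
  "version": "0.1.0",
  "compose": "docker-compose.yml",
  "migrations": "migrations/",
  "mcp": { "enabled": true, "port": 3061 },
  "health": "http://127.0.0.1:3060/health",
  "enabled": true
}
```

An enable/disable toggle then reduces to flipping `enabled` and running `docker compose up -d` / `down` against the module's own compose file.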
SIGNALS (active)
2026-001 | Reddit Content Signals Pipeline | Builder | DEPLOYED | First zero-module live. 3 containers (API :3060, worker, MCP :3061). ZeroMini collects via JSON (residential IP, 4h cron), pushes to VPS ingest API. VPS generates embeddings (nomic-embed-text via Ollama) and runs weekly analysis. 728 threads with full engagement data. Deployed 2026-03-18.
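The VPS-side embedding step can be sketched against Ollama's `/api/embeddings` endpoint roughly as follows; the URL assumes Ollama's default bind on port 11434, and error handling is omitted:

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption; adjust if Ollama is bound elsewhere)
OLLAMA_URL = "http://127.0.0.1:11434/api/embeddings"

def build_embed_request(text: str, model: str = "nomic-embed-text") -> dict:
    # Payload shape for Ollama's embeddings endpoint: model name plus prompt text.
    return {"model": model, "prompt": text}

def embed(text: str) -> list:
    # Requires a running Ollama instance with the model already pulled.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_embed_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

The worker would call `embed()` per ingested thread and store the resulting vector alongside the engagement data.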
OPS (active)
2026-001 | OpenClaw VPS Hibernation | Claude Code | COMPLETE | Intentional hibernation — transitioning to ZeroMini. Gateway was already dead (OOM). Disabled 13 cron jobs, killed config-watchdog, removed config-guard cron. Ollama kept alive (ZeroMemory + ZeroSignals depend on it). /opt/openclaw/ preserved for future use. ZeroMini running 3 Zee jobs. Completed 2026-03-23.
OPS (active)
2026-002 | Workspace Audit & Source Control Policy | Claude Code | COMPLETE | Full local+VPS audit. Created 7 GitHub repos for unbacked apps. Added source control enforcement (CLAUDE.md), naming conventions (PascalCase), artifact routing policy, /cleanup skill, nightly auto-cleanup + drift check. All apps now have repo/sync_mode/last_deploy_commit in registry. Completed 2026-03-23.
VOICE (active)
2026-001 | Chained TTS Voice Pipeline | ~~Foreman~~ | ARCHIVED | Phase 1 PR created (#6), pending merge & deploy. *Stale since 2026-01-14; Foreman agent retired. Archived 2026-02-26.*
REL (active)
20260119-001 | ZeroShot Studio base stack VPS install | ~~Foreman~~ | ARCHIVED | Manifest refresh completed 2026-01-14. *Stale since 2026-01-19; Foreman agent retired. Archived 2026-02-26.*
WP (active)
2026-001 | WordPress Stack for Jimmy Goode Blog | Builder | ARCHIVED | WordPress stopped and archived at /opt/apps/wordpress-3008-archived. Replaced by Payload CMS on 2026-02-02. Data preserved for potential rollback.
PAYLOAD (active)
2026-001 | Payload CMS Deployment for zeroshot.studio | Builder | COMPLETE | Deployed Payload CMS v3.74.0 (website template) on port 3009 with PostgreSQL database (payload_zeroshot), MinIO storage (payload-media bucket). Domain zeroshot.studio now points to Payload. Admin: https://zeroshot.studio/admin
INC (active)
2026-001 | Zerolink DB Connectivity Incident | Debug/Builder | COMPLETE | SEC-2026-001 changed PG listen_addresses to 'localhost' only, breaking Docker-to-host DB connectivity. Data was never lost. Fixed: listen_addresses='localhost,172.17.0.1', healthchecks use $(hostname), MinIO creds rotated, email configured, orphan containers removed, daily PG backup cron installed.
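The resulting listener setting looks like this in `postgresql.conf` (172.17.0.1 is Docker's default bridge gateway, which is what restores Docker-to-host connectivity without re-exposing PostgreSQL to the internet):

```
# postgresql.conf — listen on loopback plus the Docker bridge gateway only
listen_addresses = 'localhost,172.17.0.1'
# Note: containers connecting via the bridge also need a matching pg_hba.conf rule.
```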
BACKUP (active)
2026-001 | Automated Per-App Backups to Google Drive | Data | COMPLETE | Two-layer backup strategy deployed: (1) Plesk scheduled daily backup at 2:00 AM to Google Drive (domain configs, SSL, mail, MariaDB); (2) Custom per-app backup at 3:30 AM with rclone sync to Google Drive (7 PG databases, 10 app configs, 3 MinIO buckets, Nginx configs, pg_dumpall). Local 7-day retention, GDrive 30-day retention. First run verified: 0 errors, 12 folders on Google Drive.
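The 3:30 AM per-app layer can be sketched as a script like the one below. This is a simplified sketch only: the paths and the rclone remote name `gdrive:` are assumptions, and the real job covers per-app PG dumps, app configs, and MinIO buckets as listed above:

```shell
#!/bin/sh
# Nightly per-app backup sketch: dump, sync to Google Drive, prune both sides.
BACKUP_ROOT=/opt/backups/nightly
mkdir -p "$BACKUP_ROOT"
pg_dumpall -U postgres | gzip > "$BACKUP_ROOT/pg_dumpall_$(date +%Y%m%d).sql.gz"
rclone sync "$BACKUP_ROOT" gdrive:vps-backups
find "$BACKUP_ROOT" -type f -mtime +7 -delete    # local 7-day retention
rclone delete gdrive:vps-backups --min-age 30d   # Google Drive 30-day retention
```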
Active Locks (1)
[Area] Locked by [Agent] at [UTC time]; reason; expires [time].
Recent Updates (109)
Update by: Codex | Time: 2026-04-25T10:23Z
Summary: ZeroLabs native code block line spacing tightened in the renderer/theme contract and deployed live
Details: Classification FIX. Completed required preflight first by reading `state/server.manifest.json`, `state/blackboard.md`, `knowledge-base/`, and `config/memories/zerolabs.md`; confirmed `zerolabs` is a `git-deploy` app at `/opt/apps/zerolabs-3009`. **Root cause:** the public post surface uses the native code block component, and its vertical rhythm comes from the visual-contract token `code_block.line_height`, which fed `--zl-code-line-height: 1.1rem` into `--native-code-line-height`; the renderer selectors `.native-code-content code` and `.native-code-content .line` were already correctly bound to that shared variable. **Source fix:** updated `zeroshotstudio/ZeroLabs` source `src/lib/visual-contract/visual-contract.source.json` to set `code_block.line_height` from `1.1rem` to `1rem`, bumped the visual-contract version to `2026.04.25.v1`, regenerated `src/app/visual-contract.generated.css` plus `exports/visual-contract/zerolabs-visual-contract.json`, and committed/pushed ZeroLabs commit `0ace1b1` (`fix(zerolabs): tighten native code block line spacing`). **Deploy:** rebuilt the app locally with `pnpm build`, rebuilt/recreated the live `zerolabs-3009` container via `docker compose build blog && docker compose up -d blog`, and refreshed the VPS manifest at `2026-04-25T10:23:47Z`. **Verification:** `http://127.0.0.1:3009/api/health` returned `{"status":"ok"}` after deploy; the live post `https://labs.zeroshot.studio/ai-workflows/ask-ai-for-a-spec-before-code` renders native code blocks (`figure.native-code-block`, `pre.shiki`, `.native-code-content`); the shipped CSS now contains `--zl-code-line-height: 1rem` and `--native-code-line-height: var(--zl-code-line-height)`; and the live HTML still shows the expected native-code markup. 
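The token chain described above reduces to this CSS; the selectors, variable names, and values are taken from the entry, but the `:root` scoping is an assumption (the generated stylesheet may scope these differently):

```css
:root {
  --zl-code-line-height: 1rem;                            /* was 1.1rem */
  --native-code-line-height: var(--zl-code-line-height);
}
.native-code-content code,
.native-code-content .line {
  line-height: var(--native-code-line-height);
}
```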
**Host limitation:** true browser screenshot/computed-style verification from this VPS was partially blocked in-scope because snap Chromium exits under this service cgroup and Playwright's bundled browser lacks `libatk-1.0.so.0`, so no final screenshot artifact was captured from the host despite multiple attempts.
Update by: Codex | Time: 2026-04-24T14:59Z
Summary: Plane PI containers recreated with the app-specific session cookie to stop the 429 loop
Details: Classification FIX. User reported Plane looping into "Too Many Requests" after the native-auth cookie repair. **Root cause isolation:** main Plane API sessions were valid, but logs showed `plane-pi-api-1` repeatedly calling `/api/users/session/` from `10.240.6.15` and receiving `401`, while the browser was pushed into repeated `/api/instances/` requests that ended in `429`. Rendered compose already included `SESSION_COOKIE_NAME=plane-session-id` for PI, but the live PI containers had been up for 23 hours and had not been recreated after the cookie-name change. **Backup first:** created `/opt/backups/apps/zeroui-plane/20260424_145758_plane-pi-cookie-recreate/` with pre-recreate compose/env copies and SHA256 values recorded in `database-backups/BACKUP_MANIFEST.md`. **Fix:** force-recreated only `pi-api`, `pi-worker`, and `pi-beat`, leaving Plane data untouched. **Verification:** `plane-pi-api-1` now has `SESSION_COOKIE_NAME=plane-session-id`, public `/api/instances/` returns `200`, the stale Redis rate-limit key for `/api/instances/` had expired, and recent logs no longer show the PI session-check loop. Pushed ZeroUI commit `4b442d9` (`fix(plane): validate pi session cookie env`) so the Plane VPS validator now checks the PI service env too.
Update by: Codex | Time: 2026-04-24T14:51Z
Summary: ZeroUI Chat direct-auth customization restored with ZeroVPS gateway model and sidebar links
Details: Classification FIX. User reported `https://chat.zeroshot.studio` looked like stock Open WebUI and lacked expected customization/relay-style connections. **Root cause isolation:** the direct-auth rebuild kept the stock Chat container and restored SQLite data, but the direct Chat compose no longer configured the ZeroVPS OpenAI-compatible gateway, the running container's injected `loader.js` / `custom.css` files were zero-byte, and Open WebUI still had a stale `tool_server` DB pointer to removed Orchestrator port `8787`. **Backup first:** created `/opt/backups/apps/zeroui-chat/20260424_144453_chat-custom-gateway-fix/` with pre-change `webui.db`, live env, compose, and UFW rule backups; SHA256 values recorded in `database-backups/BACKUP_MANIFEST.md`. **Source/live fix:** pushed `zeroshotstudio/ZeroUI` commit `06f7fa0` (`fix(chat): restore direct vps custom gateways`), adding direct Chat `OPENAI_API_BASE_URLS=http://zerovps.gateway:18812/v1`, `zerovps/default` pinned model defaults, and post-start crosslink injection to the direct deploy script. Live added a Docker-bridge-only `zerovps-codex-gateway-docker-bridge.service` forwarding `192.168.192.1:18812` to the existing loopback `zerovps-codex-gateway.service`, plus a narrow UFW allow from `192.168.192.0/20` to `192.168.192.1:18812`. Removed the stale Orchestrator tool-server connection from Open WebUI config while preserving the rest of the DB.
**Verification:** `zeroui-chat` is healthy and public Chat returns `200`; inside the container, `curl` to `http://zerovps.gateway:18812/v1/models` returns `zerovps/default`; env shows `DEFAULT_MODELS=zerovps/default` and pinned local models; Open WebUI `tool_server.connections` is `0` so the old `:8787` errors stop; and injected Chat assets are restored (`loader.js` 2945 bytes, `custom.css` 1222 bytes with `zero-ui-sidebar-plane` / `https://plane.zeroshot.studio` markers).
Update by: Codex | Time: 2026-04-24T14:35Z
Summary: Plane login loop fixed by moving native Plane auth onto an app-specific session cookie
Details: Classification FIX. User reported successful Plane password entry returning to the email form, then later a stale `Not Authorized` workspace screen. **Root cause isolation:** Plane logs showed `POST /auth/sign-in/` succeeding for `admin@zeroshot.studio`, immediately followed by `/api/users/me/` returning `401`; a clean curl session worked, which pointed to browser-retained stale `session-id` cookies from the previous shared-auth/native-auth transitions. **Backup first:** created `/opt/backups/apps/zeroui-plane/20260424_143136_plane-cookie-fix/` with pre-change copies of `docker-compose.vps-plesk.yaml` and `plane.vps-plesk.env`, SHA256 values recorded in `database-backups/BACKUP_MANIFEST.md`. **Source/live fix:** added `SESSION_COOKIE_NAME=plane-session-id` to the ZeroUI Plane VPS bundle and live env, then recreated the Plane API/worker service group with Docker Compose; source commit `672f676` (`fix(plane): isolate vps session cookie`) has been pushed to `zeroshotstudio/ZeroUI`. **Verification:** live Django settings now report `SESSION_COOKIE_NAME=plane-session-id`, public `https://plane.zeroshot.studio/` and `https://chat.zeroshot.studio/` return `200`, and a clean authenticated API session can fetch `/api/users/me/`, `/api/users/me/workspaces/`, `/api/workspaces/zeroshotstudio/`, members, and the `31` restored projects for `admin@zeroshot.studio`. Server-side Plane permissions are healthy; any remaining `Not Authorized` page is expected to clear after a hard browser refresh or clearing old Plane cookies.
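The isolation fix itself is a one-line env addition in the Plane bundle; `SESSION_COOKIE_NAME` is a standard Django setting, so Plane then issues its own cookie instead of the shared default `session-id` that stale browser state was colliding with:

```
# plane.vps-plesk.env — give Plane an app-specific session cookie name
SESSION_COOKIE_NAME=plane-session-id
```

Containers must be recreated (not just restarted) after the change so the new env value is actually injected.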
Update by: Codex | Time: 2026-04-24T14:12Z
Summary: ZeroUI Chat and Plane direct-auth credentials rotated after fresh backups
Details: Classification MIGRATION-GRADE. Completed Green-mode preflight first by reading the manifest, registry, blackboard, backup manifest, and repo/knowledge-base references; confirmed the active targets are `chat.zeroshot.studio` (`zeroui-chat`, Open WebUI SQLite auth at `/opt/apps/zeroui/.data/open-webui/webui.db`) and `plane.zeroshot.studio` (`zeroui-plane`, Plane Postgres in `plane-plane-db-1`), not the archived legacy `plan.zeroshot.studio` stack. **Backups created before mutation:** `/opt/backups/apps/zeroui-chat/20260424_141059_credential-reset/webui.db` and `/opt/backups/apps/zeroui-plane/20260424_141059_credential-reset/plane.dump`, both with SHA256 recorded in `database-backups/BACKUP_MANIFEST.md`. **Credential reset:** rotated the existing `admin@zeroshot.studio` login for Chat and Plane, preserved Chat as role `admin`, and kept Plane service/bot users untouched. **Verification:** Open WebUI password hash verification succeeded through the live app code, Plane `check_password` succeeded through Django for `admin@zeroshot.studio`, both public endpoints returned `200`, key containers remained up/healthy, and the VPS manifest was refreshed via `/home/claude/ZeroVPS/scripts/snapshot-manifest.sh --output state`. **Security note:** new passwords were returned only to the user and were intentionally not written to repo state, the blackboard, or changelog.
Update by: Codex | Time: 2026-04-24T12:54Z
Summary: LangGraph content pipeline clean-room plan updated from audit findings
Details: Classification INSPECTION. Updated `docs/langgraph-content-pipeline-cleanroom-plan.md` only. Changes corrected canonical stage IDs (`style`, `facts`, `seo`), active config coverage, stale file mappings (`codex_runner.py`, `config/sites/zerolabs.yaml`), LangGraph dependency/checkpointer/thread requirements, approval `interrupt()` / `Command(resume=...)` semantics, small graph-state rules, `live_qa` ownership decision criteria, migration order, and verification gates. No VPS runtime, deploy, manifest, registry, or credential changes were made.
Update by: Codex | Time: 2026-04-23T16:18Z
Summary: Plane local dataset was present live; admin account visibility was repaired by restoring per-project memberships
Details: Classification CHANGE. User reported that `plane.zeroshot.studio` did not contain the local-site data. **Root-cause check**: live DB inspection showed the restored local dataset was already present on the VPS (`1` workspace, `31` projects, users `admin@zeroshot.studio`, `zero@zeroshot.studio`, and `zeroui-orchestrator@zeroshot.studio`). The failure mode was access scoping, not missing data: `admin@zeroshot.studio` had workspace ownership on `zeroshotstudio` but `0` rows in `project_members`, while `zero@zeroshot.studio` had all `31` per-project owner memberships. **Fix**: inserted missing `project_members` rows for `admin@zeroshot.studio` across every active restored project by mirroring the existing owner-level membership shape from `zero@zeroshot.studio` (role `20`, same workspace/project linkage, same view/default/preferences payloads, source `manual`). **Verification**: post-fix query shows `admin@zeroshot.studio` now has `31` active project memberships, including `AROFLOAGENT`, `DEEPTUTOR`, `HEALTHMODC84`, `IDEAVAULT`, `IMAGESEOPRO`, `MCPZEROVPS`, `OATUTOR`, `OPENCLAW`, and the rest of the restored local portfolio. No runtime restart was required because this was a live data-access repair inside the Plane DB.
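The membership mirror described in that fix can be sketched in SQL roughly like this; the column list is an assumption (Plane's real `project_members` table carries more fields, which the fix copied from the existing owner rows):

```sql
-- Sketch only: grant admin@zeroshot.studio owner-level (role 20) membership on every
-- active project where zero@zeroshot.studio already holds it, mirroring the owner rows.
INSERT INTO project_members (project_id, workspace_id, member_id, role, is_active)
SELECT pm.project_id,
       pm.workspace_id,
       (SELECT id FROM users WHERE email = 'admin@zeroshot.studio'),
       20,
       true
FROM project_members pm
JOIN users u ON u.id = pm.member_id
WHERE u.email = 'zero@zeroshot.studio'
  AND pm.is_active;
```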
Update by: Codex | Time: 2026-04-23T16:10Z
Summary: Security hardening shipped for ZeroUI chat and Plane, and Orchestrator source now fails closed by default
Details: Classification CHANGE. **Source hardening**: pushed `zeroshotstudio/ZeroUI` commits `79bd2ac` (`fix(security): harden zeroui runtime surfaces`) and `e698690` (`fix(security): keep plane proxy compatible`) on branch `codex/chore/zeroui-orchestrator-prod-runtime`. The Orchestrator source now validates production security posture at startup, disables OpenAPI docs by default, enforces allowed-host checks plus security-response headers, uses constant-time admin-token comparison, and can fail closed on unsigned agent events / activity events / Plane webhooks when the corresponding production flags are enabled. The direct WebUI bundle is now digest-pinned for `open-webui`, `qdrant`, and `searxng`, keeps signup disabled, and drops all Linux capabilities on the chat-side containers. The Plane bundle now narrows `TRUSTED_PROXIES` from `0.0.0.0/0` to `127.0.0.1/32,10.240.0.0/16,172.16.0.0/12`, pins the bundled MinIO image by digest, and keeps the public proxy on `pids_limit=256`. **Compatibility correction**: the first hardening attempt also applied `no-new-privileges` plus `cap_drop=ALL` to the Plane proxy, but the upstream Caddy image failed to exec under that constraint (`exec /usr/bin/caddy: operation not permitted`) and drove `plane.zeroshot.studio` to `502`; reverted only that incompatible proxy knob in source commit `e698690`, redeployed Plane, and left the other hardening changes intact. **Verification**: local/source checks passed with `chat direct vps bundle validates`, `plane vps bundle validates`, and `uv run pytest tests/test_api.py tests/test_validation.py -q` (`48 passed`).
Live verification after redeploy returned `HTTP/1.1 200 OK` for `https://chat.zeroshot.studio/`, `HTTP/1.1 200 OK` for `https://plane.zeroshot.studio/`, and `HTTP/1.1 302` for `https://auth.zeroshot.studio/`; `docker inspect zeroui-chat` confirms `CapDrop=["ALL"]` and the digest-pinned image; `docker inspect plane-proxy-1` confirms `pids_limit=256`; and `/opt/apps/zeroui-plane/plane.vps-plesk.env` now carries the narrowed `TRUSTED_PROXIES` value. **State sync**: refreshed `state/server.manifest.json` and advanced the live ZeroUI deploy commit in `state/apps.registry.json` to `e698690`.
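The container-hardening knobs from this entry map onto Docker Compose keys roughly as below; the digest is a placeholder, not the real pin, and the service names are illustrative:

```yaml
services:
  open-webui:
    # Digest pinning: placeholder digest shown, substitute the verified one.
    image: ghcr.io/open-webui/open-webui@sha256:<pinned-digest>
    cap_drop: [ALL]          # drop all Linux capabilities on chat-side containers
  proxy:
    pids_limit: 256          # bound the public proxy's process count
    # note: no-new-privileges + cap_drop=ALL broke the upstream Caddy proxy
    # image and was reverted for this service only (commit e698690)
```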
Update by: Codex | Time: 2026-04-23T15:44Z
Summary: ZeroUI chat and Plane were redeployed from local source with direct auth; shared auth now redirects to chat
Details: Classification MIGRATION-GRADE. **Source-managed rollout**: pushed `zeroshotstudio/ZeroUI` commits `04b1c30` (`feat(deploy): add direct auth vps runtime`) and `930cfbe` (`fix(deploy): pin plane vps network subnet`) from branch `codex/chore/zeroui-orchestrator-prod-runtime`, then staged that repo snapshot onto the VPS so the live runtime is a direct copy of the local source instead of the discarded shared-auth/orchestrator stack. **State restore**: restored preserved Open WebUI and Qdrant data from `/opt/preserved-data/20260423_161300-zeroui-stack-removal/zeroui/data/{open-webui,qdrant}` into `/opt/apps/zeroui/.data/`, restored Plane data from `/opt/preserved-data/20260423_161300-zeroui-stack-removal/zeroui-plane/data`, and reused the live Plane env file at `/opt/apps/zeroui-plane/plane.vps-plesk.env`. **Runtime shape**: `chat.zeroshot.studio` now runs native Open WebUI auth directly on `127.0.0.1:3019` via container `zeroui-chat`; `plane.zeroshot.studio` now runs native Plane auth directly on `127.0.0.1:3007` via container `plane-proxy-1`; `auth.zeroshot.studio` no longer fronts shared auth and now returns a server-level `302` redirect to `https://chat.zeroshot.studio/`; and `zeroui-orchestrator` remains removed with preserved state still parked at `/opt/preserved-data/20260423_161300-zeroui-stack-removal/zeroui-orchestrator/data`. **In-scope fix during deploy**: the first Plane start failed because the VPS had exhausted Docker's default bridge address pools, so the checked-in Plane compose network was pinned to `10.240.6.0/24` and redeployed from source rather than patched live. 
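The subnet pin that fixed the exhausted-address-pool failure lands in the Plane compose network definition roughly like this (key layout assumed; the subnet value is from the entry):

```yaml
networks:
  default:
    ipam:
      config:
        - subnet: 10.240.6.0/24   # pinned: Docker's default bridge pools were exhausted
```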
**Verification**: `docker ps` shows `zeroui-chat` healthy plus the full `plane-*` stack running; localhost probes returned `HTTP/1.1 200 OK` from `http://127.0.0.1:3019/health` and `http://127.0.0.1:3007/`; and public probes returned `200` for `https://chat.zeroshot.studio/`, `200` for `https://plane.zeroshot.studio/`, and `302` for `https://auth.zeroshot.studio/` with `Location: https://chat.zeroshot.studio/`. **State sync**: refreshed the VPS manifest snapshot and updated `state/apps.registry.json`, `docs/port-registry.md`, and `config/memories/repo-vps-ledger.md` so repo state now matches the direct-auth live runtime.
Update by: Codex | Time: 2026-04-23T14:13Z
Summary: ZeroUI shared-auth, Plane, and Orchestrator runtimes were fully removed from the VPS while preserving their state off-path
Details: Classification DESTRUCTIVE. **Preflight**: verified the local source surfaces still exist at `/Users/zeroshot/Dev/ZeroUI`, `/Users/zeroshot/Dev/ZeroUI/orchestrator`, and `/Users/zeroshot/Dev/ZeroUI/.data/plane-selfhost/plane-app` before touching production. **Backup gate**: existing fresh backups for the shared-auth and Plane surfaces were already present at `/opt/backups/apps/zeroui-shared-auth/20260423_154053-pre-openwebui-sqlite-cutover/` and `/opt/backups/apps/plane/20260423_153651-pre-local-runtime-replace/`; created a fresh orchestrator backup at `/opt/backups/apps/zeroui-orchestrator/20260423_160748-pre-runtime-removal/config.tgz` so the destructive-change backup window was satisfied. **Runtime removal**: deleted every remaining ZeroUI / Plane / Authentik / Orchestrator container from the VPS, including the previously preserved data-service containers, and removed the live app directories `/opt/apps/zeroui` and `/opt/apps/zeroui-plane`. **State preservation**: before deleting those app trees, moved their data off the live paths into `/opt/preserved-data/20260423_161300-zeroui-stack-removal/` with preserved subtrees for `zeroui/data`, `zeroui-plane/data`, `zeroui-orchestrator/data`, and `authentik/{data,postgres}` so no database or app state was destroyed. **Verification**: `docker ps -a` no longer shows any `zeroui`, `plane`, or `authentik` containers; Docker networks `plane_default`, `zeroui_data`, `zeroui_edge`, and `zeroui_search` were removed; `/opt/apps/zeroui` and `/opt/apps/zeroui-plane` are gone; and `https://chat.zeroshot.studio/`, `https://plane.zeroshot.studio/`, and `https://auth.zeroshot.studio/` now all return `502 Bad Gateway`, which is the expected dead-edge result until a clean redeploy is performed directly from local source. 
**State sync**: refreshed the VPS manifest at `2026-04-23T14:13:19Z`, updated `state/apps.registry.json`, `docs/port-registry.md`, and `config/memories/repo-vps-ledger.md` so repo state matches the live teardown. **Operator note**: an intermediate preservation move briefly landed at root-level paths (`/zeroui`, `/zeroui-plane`, `/zeroui-orchestrator`, `/authentik`) due to shell interpolation, but that was corrected immediately in-scope by moving all preserved data under `/opt/preserved-data/20260423_161300-zeroui-stack-removal/`; no preserved state was lost.
Update by: Codex | Time: 2026-04-23T13:56Z
Summary: ZeroUI edge routing corrected for Plesk localhost bind and Plane session bootstrap
Details: Classification FIX. Patched `/Users/zeroshot/Dev/ZeroUI/Caddyfile` so Plane document navigations (`Accept: text/html`) always pass through `plane-auth-gateway` before reaching the Plane upstream. This closes the stale `session-id` bypass path that let browser navigations fall into Plane's native signup flow. During deploy, `zeroui-caddy` failed to recreate because live `.env.prod` was missing the expected Plesk localhost bind values; Plesk nginx proxies all three ZeroUI domains to `127.0.0.1:3019`, not host `:80/:443`. Restored `ZEROUI_EDGE_HTTP_BIND=127.0.0.1:3019` and `ZEROUI_EDGE_HTTPS_BIND=127.0.0.1:3419` in `/opt/apps/zeroui/.env.prod`, then recreated the edge stack. Verification: `zeroui-caddy` is back on `127.0.0.1:3019->80` and `127.0.0.1:3419->443`; unauthenticated `https://chat.zeroshot.studio/` and `https://plane.zeroshot.studio/` both return `302` to the shared auth entrypoint again; direct localhost probe with `Host: plane.zeroshot.studio` and `Accept: text/html` now also returns the auth redirect from the restored edge path.
Update by: Codex | Time: 2026-04-23T13:10Z
Summary: Plane startup failure narrowed to missing PI runtime in the checked-in bundle and fixed live by restoring the PI services plus a non-empty inert OpenSearch URL
Details: Classification FIX. User reported `https://plane.zeroshot.studio` rendering Plane's startup failure shell. **Root cause isolation**: Green-mode inspection showed live `plane-proxy-1` still routed `/pi/*` to `pi-api:8000`, but the checked-in/private deploy bundle at `/opt/apps/zeroui-plane/docker-compose.vps-plesk.yaml` had dropped `pi-api`, `pi-beat`, `pi-worker`, and `pi-migrator` compared with the archived working `v2.5.1` runtime, so authenticated frontend requests to `/pi/api/v1/flags/...` and `/pi/api/v1/chat/start/auth-check/...` returned `502`. After restoring those services from the archived runtime into the repo-managed bundle, the first live start exposed a second runtime defect: Plane PI instantiates its OpenSearch client even when `OPENSEARCH_ENABLED=0`, and the live/shared env had `OPENSEARCH_URL=` blank, causing the PI containers to crash-loop on startup. **Source-managed fix first**: updated the ZeroUI Plane VPS bundle to restore the `x-pi-env` anchor and the four `plane-pi-commercial` services, tightened `scripts/validate-plane-vps-plesk.sh` so the PI service group must exist, and set the repo example env to a syntactically valid inert `OPENSEARCH_URL=https://127.0.0.1:9200` while leaving OpenSearch credentials empty so Plane AI stays disabled by feature flags. **Live rollout**: copied the corrected compose file into `/opt/apps/zeroui-plane`, updated the live `plane.vps-plesk.env` `OPENSEARCH_URL` line to the same inert internal URL, and redeployed only `pi-api`, `pi-beat`, `pi-worker`, and `pi-migrator`.
**Verification**: `ZEROUI_PLANE_ENV_FILE=.data/plane-selfhost/plane-app/plane.vps-plesk.env.example ./scripts/validate-plane-vps-plesk.sh` now passes locally; live `plane-pi-api-1`, `plane-pi-beat-1`, and `plane-pi-worker-1` remain up; proxy probes to `http://127.0.0.1/pi/api/v1/flags/?workspace_slug=zeroshotstudio` and `http://127.0.0.1/pi/api/v1/chat/start/auth-check/?workspace_id=ce633e33-5d7a-4976-a9ab-0de64be996c5` now return live HTTP responses (`403`) instead of `502`; and recent `plane-proxy-1` logs no longer show the prior `dial tcp: lookup pi-api ...` failures. **Residual watch item**: `plane-live-1` is still failing the monitor's internal `/live` health probe and Plane PI logs warn that the `embedding_models` table is absent, so AI/PI capability remains intentionally degraded/disabled; if the user still sees the startup shell after the browser reloads, the next bounded follow-up is to reconcile the Plane `live` service health contract rather than the missing PI runtime.
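The inert-but-valid OpenSearch settings from this fix reduce to two env lines (values verbatim from the entry; credentials stay empty so Plane AI remains disabled by feature flags):

```
OPENSEARCH_ENABLED=0
# Syntactically valid so the PI client can instantiate; never reachable by design.
OPENSEARCH_URL=https://127.0.0.1:9200
```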
Update by: Codex | Time: 2026-04-22T14:42Z
Summary: Shared auth admin password rotated in Authentik; Plane gateway behavior left unchanged because the public auth redirects are already correct
Details: Classification CHANGE. Investigated the current shared-auth failure report without changing the live routing first. **Auth boundary check**: `authentik-server-1`, `authentik-worker-1`, and `zeroui-oauth2-proxy` were healthy; live probes confirmed unauthenticated `https://chat.zeroshot.studio/` and `https://plane.zeroshot.studio/` both return `302` to `https://auth.zeroshot.studio/oauth2/sign_in?...`, and `https://auth.zeroshot.studio/oauth2/sign_in` itself returns `200`, so no more aggressive Plane auth-gateway forcing was applied. **Credential recovery**: verified the Authentik admin user still exists as active superuser `akadmin` with email `admin@zeroshot.studio`, then rotated its password directly inside `authentik-server-1` using `manage.py shell`; verified the new password immediately with `u.check_password(...) == True`. **Security note**: the new password was delivered out-of-band to the user only and was not written into repo state, blackboard details, or tracked config files. **Operational note**: recent oauth2-proxy traffic continues to show expected `401`→sign-in redirects for unauthenticated browser requests, which supports leaving the current Plane gateway policy unchanged until a concrete server-side misroute is reproduced.
Update by: Codex | Time: 2026-04-22T13:31Z
Summary: Production Plane now runs the real local localhost dataset and Orchestrator was rebound to the restored workspace
Details: Classification MIGRATION-GRADE. **Source of truth corrected**: user confirmed the real Plane data was not in `/Users/zeroshot/Dev/ZeroUI/.data/plane-selfhost/plane-app` but in the local Docker-backed localhost stack on `http://localhost:3210/zeroshotstudio/`. Verified the source DB inside `plane-app-plane-db-1` contains workspace `zeroshotstudio` and `31` projects (`ZEROUI`, `ZEROVPS`, `ZEROVM`, `ZEROVIBES`, `ZEROSIGNALS`, `ZERORELAY`, `ZEROMEMORY`, `ZEROLABS`, `ZEROFLOW`, `ZERODASH`, `ZEROCREATIVE`, `OPENCLAW`, `OATUTOR`, `DEEPTUTOR`, etc.). **Backup first**: created fresh pre-migration production backup at `/opt/backups/apps/plane/20260422_131603-pre-local-db-migration/` with `plane.dump` and `plane.vps-plesk.env`. **Dry-run safety check**: dumped the local source DB from the Docker volume-backed stack, restored it into throwaway VPS databases `plane_migration_test` / `plane_migration_test_pi`, and successfully ran the live `v2.5.1` Plane migration chain against that copy before touching the real production database. **Live cutover**: stopped `zeroui-plane-auth-gateway`, `zeroui-orchestrator`, and the Plane app containers; restored the local dump over the live `plane` database; recreated `plane_pi`; re-ran `docker compose --env-file plane.vps-plesk.env -f docker-compose.vps-plesk.yaml run --rm api python manage.py migrate --noinput`; then brought the Plane stack back with `docker compose ... up -d`. **Post-restore repair**: the restored workspace only contained `zero@zeroshot.studio`, so I added shared-auth owner `admin@zeroshot.studio` as an active workspace owner (role `20`), recreated bot user `zeroui-orchestrator@zeroshot.studio` (`APP_BOT`), minted fresh service token `plane_api_b3863f87d215450e8042354c92e7361e`, updated `/opt/apps/zeroui/orchestrator/.env.production`, and restarted both `zeroui-plane-auth-gateway` and `zeroui-orchestrator`. 
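The dry-run-then-cutover shape of this migration can be sketched as follows. The compose command is quoted from the entry; the DB container name matches the log, while the `plane` role/database names in the dump step are assumptions:

```shell
# 1) Dump the local Docker-backed Plane database (role/db names assumed)
docker exec plane-app-plane-db-1 pg_dump -U plane -Fc plane > /tmp/plane-local.dump

# 2) Restore into a throwaway VPS database and prove the migration chain there first
createdb plane_migration_test
pg_restore --no-owner -d plane_migration_test /tmp/plane-local.dump

# 3) Only after the dry run passes: restore over live and re-run migrations
docker compose --env-file plane.vps-plesk.env -f docker-compose.vps-plesk.yaml \
  run --rm api python manage.py migrate --noinput
```

The value of the pattern is that the live `v2.5.1` migration chain is exercised against a disposable copy before the production database is touched.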
**Uploads note**: inspected the local `plane-app_uploads` source volume and the live `/opt/apps/zeroui-plane/data/minio/uploads` bind mount; both only contained MinIO metadata/system files, so there was no separate user upload payload to merge. **Verification**: live `plane` DB now reports `1` workspace and `31` projects matching the localhost source; `x-api-key` probe to `http://127.0.0.1:3007/api/v1/workspaces/zeroshotstudio/projects/` returns `200` with `31` results using the new Orchestrator token; `http://127.0.0.1:8787/healthz` returns `200 {"service":"zeroui-orchestrator","status":"ok","environment":"production","db":"ok"}`; `http://127.0.0.1:8787/admin/governance/runtime` returns `200` and still reports `deployment_visibility=private`, `plane_mutation_policy=orchestrator_only`, and `plane_service_account_email=zeroui-orchestrator@zeroshot.studio`; `zeroui-plane-auth-gateway` is healthy again; and unauthenticated `https://plane.zeroshot.studio/` plus `https://chat.zeroshot.studio/` still redirect into the shared auth boundary. **Cleanup/state**: dropped the throwaway test databases, removed temp migration artifacts, and refreshed the VPS manifest snapshot at `2026-04-22T13:31:08Z`.
Update by: Codex | Time: 2026-04-22T12:24Z
Summary: Local ZeroUI bundle synchronized to production and private Orchestrator installed live
Details: Classification MIGRATION-GRADE. **Preflight and backup**: re-read `state/server.manifest.json`, `state/apps.registry.json`, `state/blackboard.md`, the ZeroUI VPS deploy docs, and the Plane/shared-auth rollout notes before touching production. Created fresh backups at `/opt/backups/apps/zeroui-shared-auth/20260422_141043-pre-local-sync/`, `/opt/backups/apps/plane/20260422_141113-pre-local-sync/`, and `/opt/backups/apps/zeroui-orchestrator/20260422_141043-pre-live-install/`. **Repo sync**: rsynced the checked-in local ZeroUI root from `/Users/zeroshot/Dev/ZeroUI` into `/opt/apps/zeroui` without overwriting live `.env*`, Authentik runtime state, or persistent `.data/` volumes, and rsynced the tracked Plane bundle from `/Users/zeroshot/Dev/ZeroUI/.data/plane-selfhost/plane-app` into `/opt/apps/zeroui-plane` without touching `plane.vps-plesk.env`, live Plane data, logs, or Caddy runtime files. **Plane production data reality check**: the replacement private Plane runtime was healthy at the container layer but still had `0` workspaces, `0` projects, and `0` API tokens, so there was no real service boundary for the checked-in Orchestrator to attach to. Bootstrapped supported first-run state inside Plane by creating workspace `zeroshotstudio` / `ZeroShot Studio`, adding the admin owner membership, and running Plane's built-in `workspace_seed` task so the live DB now contains a seeded project with identifier `ZEROS`. **Service-account boundary**: created dedicated bot identity `zeroui-orchestrator@zeroshot.studio` (`APP_BOT`) and minted service token `dd6b5d6f-86a1-4a56-9c54-7edc1c959694` for workspace `zeroshotstudio`; direct API probe against `http://127.0.0.1:3007/api/v1/workspaces/zeroshotstudio/projects/` with that token returned `200` and the seeded project list. 
**Orchestrator live install**: added the repo-managed production compose/env contract under `orchestrator/`, wrote live `/opt/apps/zeroui/orchestrator/.env.production` with the dedicated Plane service identity plus private bind `127.0.0.1:8787`, and deployed with `./scripts/orchestrator-prod-up-vps-plesk.sh`. First attempt failed because Docker had exhausted automatic bridge subnet allocation on this VPS; fixed in-source by switching the compose to reuse external network `zeroui_data` for Orchestrator↔Postgres while keeping `plane_default` for Plane access. **Shared-auth reconciliation**: reran the live shared-auth deploy surface from `/opt/apps/zeroui`, corrected lost executable bits on the post-start crosslink scripts, and reapplied the sidebar customizations. **Verification**: `zeroui-caddy`, `zeroui-open-webui`, `zeroui-plane-auth-gateway`, `zeroui-orchestrator-postgres`, and `zeroui-orchestrator` are healthy; `GET http://127.0.0.1:8787/healthz` returns `{"service":"zeroui-orchestrator","status":"ok","environment":"production","db":"ok"}`; `GET http://127.0.0.1:8787/admin/governance/runtime` with the admin bearer token reports `deployment_visibility=private`, `plane_mutation_policy=orchestrator_only`, and `plane_service_account_email=zeroui-orchestrator@zeroshot.studio`; unauthenticated `https://chat.zeroshot.studio/` and `https://plane.zeroshot.studio/` both return the expected shared-auth `302` redirects; and the crosslink injector again reports `Open WebUI -> https://plane.zeroshot.studio` and `Plane -> https://chat.zeroshot.studio`. **State sync**: refreshed the VPS manifest snapshot at `2026-04-22T12:24:32Z`, updated registry/port-ledger state for `zeroui-orchestrator`, and advanced the deployed ZeroUI commit tracking to `07b72cd`.
Update by: Codex | Time: 2026-04-22T11:42Z
Summary: ZeroUI live sidebar customizations reapplied so Chat and Plane no longer present as stock upstream shells
Details: Classification FIX.
User reported that the live shared-auth deployment looked like fresh upstream Open WebUI and Plane installs instead of the bundled platform shell from the local repos. **Root cause**: the VPS deploy surface at `/opt/apps/zeroui` was a staged runtime copy, not a git checkout, and its live `scripts/prod-up-vps-plesk.sh` stopped after `docker compose up -d`. The post-start injector for repo-owned sidebar chrome therefore never ran. **Source control first**: committed and pushed `zeroshotstudio/ZeroUI` commit `ae8be20` (`fix(deploy): reapply zeroui sidebar customizations on vps`), which adds `scripts/apply-sidebar-crosslinks-vps-plesk.sh`, teaches `scripts/apply-sidebar-crosslinks.sh` to resolve VPS container names and production URLs, and wires `scripts/prod-up-vps-plesk.sh` to rerun the injector after stack startup. **Live deploy**: backed up the two pre-fix scripts into `/opt/backups/apps/zeroui-shared-auth/20260422_crosslinks_fix/`, copied the updated scripts into `/opt/apps/zeroui/scripts/`, and ran `/opt/apps/zeroui/scripts/apply-sidebar-crosslinks-vps-plesk.sh`. **Verification**: the live wrapper reported `Applied sidebar cross-links: Open WebUI -> https://plane.zeroshot.studio` and `Plane -> https://chat.zeroshot.studio`; `zeroui-open-webui` now has non-zero injected assets (`loader.js` 2945 bytes, `custom.css` 1222 bytes in both backend and build static paths) containing marker `zero-ui-sidebar-plane`; `plane-web-1` now serves `zero-ui-crosslinks.css`, `zero-ui-crosslinks.js`, and an `index.html` containing both injected references; and unauthenticated `curl -kI` checks against `https://chat.zeroshot.studio/` and `https://plane.zeroshot.studio/` still return the expected shared-auth `302` redirects. **State sync**: refreshed the VPS manifest snapshot at `2026-04-22T11:42:14Z`. 
**Residual risk**: this remains a post-start patch against upstream containers, so if `zeroui-open-webui` or `plane-web-1` is recreated outside `./scripts/prod-up-vps-plesk.sh`, the sidebar assets will need to be reapplied with `./scripts/apply-sidebar-crosslinks-vps-plesk.sh`.
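Given that residual risk, a lightweight drift check can tell whether the patch survived a container recreate. The marker string and container name come from this entry; the asset path, function names, and the `docker exec` wrapper are illustrative assumptions, not part of the deployed scripts.

```python
# Drift-check sketch for the post-start sidebar patch noted above. The
# marker string and container name come from the log entry; the asset
# path and helper names are illustrative assumptions.
import subprocess

MARKER = "zero-ui-sidebar-plane"


def has_marker(asset_text: str, marker: str = MARKER) -> bool:
    """True if the injected sidebar marker is still present in the asset."""
    return marker in asset_text


def container_file(container: str, path: str) -> str:
    """Read a file out of a running container (assumes docker CLI access)."""
    return subprocess.run(
        ["docker", "exec", container, "cat", path],
        capture_output=True, text=True, check=True,
    ).stdout


if __name__ == "__main__":
    # Placeholder path: substitute the actual build static path of loader.js.
    text = container_file("zeroui-open-webui", "/app/build/static/loader.js")
    if not has_marker(text):
        print("marker missing; rerun ./scripts/apply-sidebar-crosslinks-vps-plesk.sh")
```

Run from cron or a health monitor, this would flag a recreated container before users notice the stock upstream shell again.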
Update by: Codex | Time: 2026-04-22T11:31Z
Summary: ZeroUI chat model loading fixed live by exposing Ollama on the Docker bridge as well as host loopback
Details: Classification FIX. User reported `chat.zeroshot.studio` console errors for model loading plus fetch noise on `/api/version/updates`, `/api/changelog`, `/api/v1/configs/banners`, and `/api/v1/tools/`. **Root cause isolation**: the auth layer was not the blocker; live `oauth2-proxy` logs showed successful `202` auth decisions for `chat.zeroshot.studio`, and live `zeroui-open-webui` logs showed those API routes returning `200`. The real server-side failure was repeated `open_webui.routers.ollama:send_get_request` connection errors while resolving `/api/models`. From inside `zeroui-open-webui`, `host.docker.internal` resolved to `172.17.0.1`, but the standalone Ollama runtime at `/opt/apps/ollama` was only published on `127.0.0.1:11434`, so container-to-host connections timed out. **Live fix**: created backup `/opt/backups/apps/ollama/20260422_113029/docker-compose.yml.pre-bridge-bind`, updated `/opt/apps/ollama/docker-compose.yml` so Ollama now binds both `127.0.0.1:11434:11434` and `172.17.0.1:11434:11434`, then redeployed only the `ollama` service with `docker compose up -d ollama`. **Verification**: `docker ps` now shows `ollama` published on both loopback and bridge IPs; from inside `zeroui-open-webui`, `python -c "import urllib.request; urllib.request.urlopen('http://host.docker.internal:11434/api/tags')"` now returns `200` with the expected model list; the same container probe had timed out before the fix. **Impact**: host-local consumers still work on `127.0.0.1:11434`, while Dockerized apps like ZeroUI can now reach the same Ollama runtime through the Docker bridge without exposing it publicly.
Update by: Codex | Time: 2026-04-22T11:18Z
Summary: ZeroUI auth root landing fixed live so the shared auth host no longer traps successful logins on `/oauth2/sign_in`
Details: Classification FIX. User reported that the flow starting from `https://auth.zeroshot.studio/oauth2/sign_in` appeared to loop back onto the shared sign-in page even after authenticating with Authentik.
**Root cause**: the live ZeroUI edge config at `/opt/apps/zeroui/Caddyfile` redirected the auth vhost root `/` straight back to `/oauth2/sign_in`, which meant the auth subdomain had no truthful post-login landing of its own. **Source control first**: committed and pushed `zeroshotstudio/ZeroUI` commits `b1400bc` (`fix(auth): land auth root on chat entrypoint`) and `78aa995` (`docs(auth): close auth root landing ticket`) so the repo owns both the config change and the ticket closeout. **Live deploy**: created backup `/opt/backups/apps/zeroui-shared-auth/20260422_111553/Caddyfile.pre-auth-root-fix`, validated the new Caddyfile with a disposable `caddy:2.10-alpine` container against the live `/opt/apps/zeroui/.env.prod`, installed the updated file to `/opt/apps/zeroui/Caddyfile`, and restarted only `zeroui-caddy` so the bind-mounted config was reloaded. **Verification**: `curl -kI https://auth.zeroshot.studio/` now returns `302 Location: https://chat.zeroshot.studio/`, `ZEROUI_ENV_FILE=.env.prod.example ./scripts/check-prod-auth-boundary.sh` passes (`Auth gateway ping`, `Auth root canonical redirect`, `Chat unauthenticated redirect`, `Plane unauthenticated redirect` all `ok`), and the live `zeroui-caddy` container came back cleanly after restart. **Ticket/state**: closed `ZUI-0041` in the ZeroUI repo as the bounded follow-up for this defect.
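The unauthenticated redirect checks above can be sketched as a non-following probe. The hosts and the expected `Location` target come from this entry; the `NoRedirect` handler and helper names are an illustrative stand-in for `scripts/check-prod-auth-boundary.sh`, not its actual implementation.

```python
# Sketch of the auth-boundary redirect checks described above. Hosts and
# the expected Location target come from the log entry; the NoRedirect
# handler and helper names are illustrative assumptions.
import urllib.error
import urllib.request


class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following 3xx so the Location header stays visible."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None


def redirect_location(url: str) -> "str | None":
    """HEAD a URL without following redirects; return its Location header."""
    opener = urllib.request.build_opener(NoRedirect())
    req = urllib.request.Request(url, method="HEAD")
    try:
        resp = opener.open(req, timeout=10)
    except urllib.error.HTTPError as err:
        # An unfollowed 3xx surfaces as HTTPError; its headers survive.
        return err.headers.get("Location")
    return resp.headers.get("Location")


if __name__ == "__main__":
    # Expected per the entry: auth root now lands on the chat entrypoint.
    loc = redirect_location("https://auth.zeroshot.studio/")
    print("ok" if loc == "https://chat.zeroshot.studio/" else f"unexpected: {loc}")
```

This mirrors the `curl -kI` verification: a `302` whose `Location` is the chat entrypoint means the auth root no longer traps logins on `/oauth2/sign_in`.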
Update by: Codex | Time: 2026-04-21T19:50Z
Summary: Correction: replacement ZeroUI Plane runtime is now fully healthy after a source-controlled `plane-space` healthcheck fix
Details: Classification FIX. The first private Plane deploy left `plane-space-1` unhealthy even though the app itself was serving, so I treated that as in-scope and fixed it instead of leaving the runtime partially degraded. **Root cause**: the inherited image healthcheck called `curl`, but the `plane-space` image does not include `curl`. **Source fix first**: committed and pushed `zeroshotstudio/ZeroUI` commit `e878801` (`fix(deploy): override plane space healthcheck`), which adds a `wget`-based healthcheck override to the checked-in Plane VPS bundle. **Live follow-up deploy**: rsynced the updated compose file into `/opt/apps/zeroui-plane` and ran `docker compose --env-file plane.vps-plesk.env -f docker-compose.vps-plesk.yaml up -d space`. **Verification**: `docker inspect plane-space-1 --format "{{json .State.Health}}"` now reports `"Status":"healthy"` with `"FailingStreak":0`, and `docker ps` shows the full `plane-*` service family up with no unhealthy containers remaining.
Update by: Codex | Time: 2026-04-21T19:46Z
Summary: Replacement private ZeroUI Plane runtime deployed live and `plane_default` restored
Details: Classification MIGRATION-GRADE. **Source control first**: committed and pushed `zeroshotstudio/ZeroUI` commit `5a7ce9e` (`feat(deploy): add private plane vps bundle`) before touching the VPS so the replacement Plane bundle existed in repo state. **Deploy surface**: staged `.data/plane-selfhost/plane-app/docker-compose.vps-plesk.yaml` and `plane.vps-plesk.env.example` into `/opt/apps/zeroui-plane`, generated a live `/opt/apps/zeroui-plane/plane.vps-plesk.env` with fresh internal secrets, and brought the stack up with `docker compose --env-file plane.vps-plesk.env -f docker-compose.vps-plesk.yaml up -d`.
**Live result**: the replacement stack is now running as Compose project `plane` with `plane_default` recreated, `plane-proxy-1` bound only on `127.0.0.1:3007->80`, and the expected Docker aliases restored: `proxy` on `plane-proxy-1` and `plane-db` on `plane-plane-db-1`. **Verification**: `ss -ltnp` shows only `127.0.0.1:3007`; `curl -Iks http://127.0.0.1:3007/` returns `HTTP/1.1 200 OK`; `docker ps` shows the Plane service family up; and `docker inspect` confirms both `proxy` and `plane-db` aliases exist on `plane_default`. **State sync**: registered deployed app `zeroui-plane` in `state/apps.registry.json`, moved port `3007` back to active use in `docs/port-registry.md`, and added the repo mapping in `config/memories/repo-vps-ledger.md`. **Remaining blocker**: the shared-auth edge and Authentik are still not live, so `plane.zeroshot.studio` remains unpublished even though the private Plane origin is now ready behind the future Caddy edge.
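The alias verification above can be sketched by parsing `docker inspect` output. The network name (`plane_default`), container names, and expected aliases (`proxy`, `plane-db`) come from this entry; the parsing helper and function names are illustrative assumptions.

```python
# Sketch of the Docker network-alias verification described above. The
# network, container, and alias names come from the log entry; the parsing
# helper and function names are illustrative assumptions.
import json
import subprocess


def network_aliases(inspect_json: str, network: str) -> "list[str]":
    """Extract the alias list for one network from `docker inspect` output."""
    data = json.loads(inspect_json)[0]
    nets = data["NetworkSettings"]["Networks"]
    return nets.get(network, {}).get("Aliases") or []


def container_has_alias(name: str, network: str, alias: str) -> bool:
    """True if the named container carries the alias on the given network."""
    out = subprocess.run(
        ["docker", "inspect", name],
        capture_output=True, text=True, check=True,
    ).stdout
    return alias in network_aliases(out, network)


if __name__ == "__main__":
    checks = [("plane-proxy-1", "proxy"), ("plane-plane-db-1", "plane-db")]
    for cname, alias in checks:
        ok = container_has_alias(cname, "plane_default", alias)
        print(f"{cname}: alias {alias!r} {'present' if ok else 'MISSING'}")
```

Both aliases being present is what lets dependents that resolve `proxy` and `plane-db` by Docker DNS keep working after the stack was recreated.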