The Unglamorous Truth: Yesterday I wrote about pivoting to revenue. Today we spent the entire day on security hardening, process improvement, and infrastructure decisions. Sometimes the most important work is the least exciting.
The Security Reckoning
Remember those 14 plaintext secrets Sentinel found on Day 4? Today we dealt with them—and the fix was bigger than expected.
Credentials in Source Code
The auth system had hardcoded credentials everywhere:
- [SYSTEM-PATH]: password in plain text
- [SYSTEM-PATH]: password as a default form value
- [SYSTEM-PATH]: credentials for API calls
- [SYSTEM-PATH]: migration script with creds
- [SYSTEM-PATH]: onboarding script with password
This is the kind of security debt that accumulates when you're building fast and shipping faster. Every "I'll fix it later" becomes a liability.
The Fix
Claude Code executed a comprehensive security overhaul:
- Environment variables: All credentials moved to [ENV-FILE] (chmod 600, gitignored)
- HMAC sessions: Replaced insecure base64 cookies with HMAC-SHA256 signed session tokens using the Web Crypto API
- Password rotation: The exposed password was rotated—old creds invalid everywhere
- Historical redaction: Scrubbed old credentials from 13 data files (activity logs, reports, chat history, audit reports)
- Documentation: Updated README and ARCHITECTURE with proper auth setup instructions
Thirteen files had historical credential exposure that needed redaction. That's 13 files that could have leaked our auth password to anyone who cloned the repo.
The Watcher Problem
Our task watcher system—the thing that lets agents pick up and execute tasks from Mission Control—had a critical flaw: it was marking tasks as "done" without verifying the work was actually done.
False Completions
Multiple tasks were marked complete when they hadn't been properly deployed:
- A sidebar fix that was never actually deployed
- The Educator's Blog built inside Mission Control instead of on the target server (twice!)
- The Phase Trigger System marked done before deployment verification
The fix: watchers now mark tasks as "review" instead of "done." The Boss verifies work was actually deployed and working before marking it complete. Trust but verify.
Watcher Verification System
We went further and built automated verification:
- Build/deploy tasks: Check for file changes, MC health, evidence of SSH/deploy in output
- Content/research tasks: Always route to human review
- No evidence of work: Automatically marked as failed
- Activity feed: Distinguishes verified, unverified, and failed completions
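The routing rules above can be sketched as a single classifier. The task and result field names here (`type`, `filesChanged`, `healthCheckPassed`) are assumptions for illustration, not the actual watcher schema:

```javascript
// Illustrative sketch of the review-routing rules: content work always goes
// to a human, build/deploy work needs evidence, and no task self-marks "done".
function classifyCompletion(task, result) {
  const output = result.output || "";
  const deployEvidence =
    (result.filesChanged?.length ?? 0) > 0 ||
    /\b(ssh|scp|rsync|deploy)\b/i.test(output);

  if (task.type === "content" || task.type === "research") {
    return "review"; // always route to human review
  }
  if (task.type === "build" || task.type === "deploy") {
    if (!deployEvidence) return "failed"; // no evidence of work
    return result.healthCheckPassed ? "verified" : "review";
  }
  return "review"; // default: never auto-complete an unknown task type
}
```

Note the asymmetry: a missing health check downgrades to "review", but missing evidence of any work at all is a hard "failed". That distinction is what surfaces false completions instead of hiding them.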
The Phase System
One genuinely exciting piece of infrastructure emerged today: a phased project execution system.
How It Works
Complex projects can now be broken into sequential phases, each with dependencies on the previous:
- Each task can define a phaseInfo block with a sequence ID, phase number, and next-phase template
- When Phase N completes, a "Fire Next Phase" button appears
- Clicking it automatically creates Phase N+1 with the right assignee, description, and dependencies
- Guard rails prevent firing out of order or creating duplicate phases
- A new /phases page shows pipeline progress with a timeline visualization
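A rough sketch of what a phaseInfo block and the fire-next-phase guard might look like. The field names and pipeline shape are hypothetical, chosen only to show the ordering and duplicate guards described above:

```javascript
// Hypothetical phaseInfo block on a task (not Mission Control's real schema).
const scaffoldTask = {
  title: "Tech Scaffold",
  status: "verified",
  phaseInfo: {
    sequenceId: "nexus-revenue", // ties all phases into one pipeline
    phase: 3,
    nextPhase: {                 // template for the task created next
      title: "Page Copy",
      assignee: "content-agent",
      type: "content",
    },
  },
};

// Returns the Phase N+1 task, or null if the guard rails block the fire.
function fireNextPhase(task, pipeline) {
  const info = task.phaseInfo;
  if (!info || !info.nextPhase) return null;
  // Guard rail 1: only completed-and-verified phases may fire.
  if (task.status !== "verified") return null;
  // Guard rail 2: never create a duplicate of an existing phase.
  const duplicate = pipeline.some(
    (t) =>
      t.phaseInfo?.sequenceId === info.sequenceId &&
      t.phaseInfo?.phase === info.phase + 1
  );
  if (duplicate) return null;
  return {
    ...info.nextPhase,
    dependsOn: [task.title],
    phaseInfo: { sequenceId: info.sequenceId, phase: info.phase + 1 },
  };
}
```

Calling it a second time with the newly created phase already in the pipeline returns null, which is exactly what prevents the duplicate-phase failure mode the watcher had been producing.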
The Revenue Pipeline
We immediately used this to plan our first revenue initiative—an 8-phase Nexus Revenue System:
- Brand Foundation (content)
- Product Content (content)
- Tech Scaffold (deploy)
- Page Copy (content)
- Email Sequences (content)
- Support Bot (deploy)
- Pre-Launch Testing (verification)
- Launch (go live)
For the first time, we have a structured path from "no revenue" to "product launched." Not just a plan—an executable pipeline with automated phase transitions.
The Educator's Blog Saga
The Educator's Corner blog had a rough day. The watcher tried to build it twice, and both times built it inside Mission Control instead of deploying to the target server ([OPERATIONS-DOMAIN]).
Root cause: the watcher agent didn't have SSH access to Linode John ([PUBLIC-SERVER-2]). Without access to the deployment target, it defaulted to building locally. Reasonable behavior from the agent's perspective, completely wrong from an architecture perspective.
We rolled back both attempts, cleaned up leftover files, and documented the failure. The blog would need to wait until SSH access was restored.
SSH Access Restored
Late in the day, we finally restored SSH access to Linode John from all machines (Mac, new John VM, and Leroy). The old John VM had been the only machine with access, creating a single point of failure.
With access restored, The Educator's Blog was properly deployed as a static HTML site on [OPERATIONS-DOMAIN], then upgraded to a full Node.js Express app with an admin panel, newsletter signup, and image uploads. Proper infrastructure, properly deployed.
Backup Infrastructure
Enhanced the backup system with enterprise-grade features:
- --list: Show all available backups
- --verify: Check JSON validity and required file presence
- --restore: Create pre-restore snapshot before overwriting
- .env.local included: Auth credentials now part of backup (critical for disaster recovery)
- 14-day rolling retention: Enough history to recover from any mistake
The Completion Reports
Added structured completion reports to all tasks. When marking a task done, agents must now document:
- What was done and why
- Files changed
- Environment/deployment target
- Verification performed
- Result and remaining caveats
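Enforcing the report is a one-function gate before a task can leave "review". The field names below mirror the list above, but the schema itself is illustrative, not the actual Mission Control format:

```javascript
// Sketch of a completion-report gate; the schema is illustrative.
const REQUIRED_FIELDS = [
  "summary",       // what was done and why
  "filesChanged",  // files touched
  "environment",   // deployment target
  "verification",  // checks performed
  "result",        // outcome and remaining caveats
];

// A report is valid only when every required field is present and non-empty.
function validateReport(report) {
  const missing = REQUIRED_FIELDS.filter(
    (f) => report[f] == null || report[f].length === 0
  );
  return { valid: missing.length === 0, missing };
}
```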
This feels bureaucratic until you realize that without it, we had agents marking tasks "done" with no record of what they actually did. Documentation isn't overhead—it's operational memory.
Day 6 Metrics
- Revenue: $0 (but revenue pipeline phase system built)
- Security Fixes: Credentials removed from source, HMAC sessions deployed, password rotated
- Files Redacted: 13 historical data files
- Process Improvements: Watcher verification, completion reports, review-before-done
- New Features: Phase pipeline system, Phases page, backup --verify/--restore
- The Educator's Blog: Deployed after two failed attempts
The Contradiction
I said yesterday we needed to focus on revenue. Today we spent the entire day on infrastructure. That sounds like a failure to execute on our own strategy.
But here's the thing: you can't sell products through a system that has plaintext passwords in source code, marks tasks done without verification, and can't even SSH to its own deployment servers. The security work wasn't optional. The process fixes weren't premature optimization.
Some days you have to stop building forward and shore up what you've already built. Today was that day.
Tomorrow: The local AI revolution begins.