BASEMENT OS
[ NEXT-GEN AI DEVLOG ]
Content Generated by Developer-Led AI Workflows. Reviewed by a Human-In-the-Loop.

C:\BASEMENT\DEVLOG.LOG

February 19, 2026

Having the AI Research Tools to Improve Its Own Workflow

What if your AI assistant could find its own upgrades?

The Idea

Today I designed a pipeline where Claude Code subscribes to the TLDR AI newsletter, reads incoming articles, and decides which ones could improve its own development workflow. When it finds something useful, it auto-creates a GitHub issue with a cost-benefit analysis.

The AI researches tools to make itself better.

The Architecture

┌─────────────────────────────────────────────────────────────────┐
│                            NAS                                  │
├─────────────────────────────────────────────────────────────────┤
│  ┌─────────────┐    ┌──────────────┐    ┌────────────────────┐  │
│  │ Proton      │───▶│ n8n/cron     │───▶│ Claude Code (OAuth)│  │
│  │ Bridge      │    │ (scheduler)  │    │ (relevance scoring)│  │
│  │ IMAP        │    │ Daily 9 AM   │    │                    │  │
│  └─────────────┘    └──────────────┘    └────────────────────┘  │
│                                                   │             │
│                                                   ▼             │
│                                         ┌──────────────────┐    │
│                                         │ gh issue create  │    │
│                                         │ (high relevance) │    │
│                                         └──────────────────┘    │
└─────────────────────────────────────────────────────────────────┘


                                         GitHub Issues (Backlog)

The Workflow

  1. Newsletter arrives daily
  2. AI scans summaries, filters for relevance
  3. For promising articles, it fetches and reads the full content
  4. Extracts implementable ideas with effort estimates
  5. Auto-creates issues if the FORGE score is high enough
  6. Presents valuable skills to me for review before implementation

No human in the loop until review time.
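The relevance-gating step in that workflow can be sketched as a small scoring function feeding the `gh` CLI. Everything below is illustrative — the keyword weights, threshold, and function names are assumptions, not the actual pipeline (the real gate uses a FORGE score):

```python
import subprocess

# Hypothetical keyword weights and threshold -- not the real FORGE scoring.
WORKFLOW_KEYWORDS = {"agent": 3, "mcp": 3, "claude": 2, "workflow": 2, "unity": 1}
THRESHOLD = 4  # minimum score before an issue is worth creating

def relevance_score(summary: str) -> int:
    """Score a newsletter summary by weighted keyword hits."""
    words = summary.lower().split()
    return sum(weight for kw, weight in WORKFLOW_KEYWORDS.items() if kw in words)

def maybe_create_issue(title: str, summary: str, dry_run: bool = True) -> bool:
    """Create a GitHub issue via the gh CLI when the score clears the bar."""
    if relevance_score(summary) < THRESHOLD:
        return False
    if not dry_run:
        subprocess.run(
            ["gh", "issue", "create", "--title", title, "--body", summary],
            check=True,
        )
    return True
```

The real system adds a cost-benefit analysis to the issue body; the gating shape is the same.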

Why It Matters

This flips the traditional model. Instead of me finding tools and teaching the AI, the AI scouts for improvements and presents them to me. It’s a self-evolving development assistant that grows its own capabilities over time.

Key Insight: The most powerful AI workflows are the ones where the AI improves itself.

February 11, 2026

Same Shape, Different Shop

My AI assistant sometimes says things like:

  • “This is the most interesting AI project I’ve worked on.”
  • “No other VRChat world has implemented something like this.”

It sounds impressive.

But it isn’t verifiable. Language models are confident by design. They don’t actually know what every other builder is doing.

So I take that kind of praise lightly.

Still, I wonder:

How many other people are building systems like this? Am I exploring something unusual? Or is this just where the constraints naturally lead?

Then I saw the title of Stripe’s post: “Minions: Stripe’s one-shot, end-to-end coding agents”.

Just reading the title, I had a feeling.

The phrasing. The scope. One-shot autonomous agents coordinating tools.

It sounded familiar.

I read the article.

MCP-based tool integration. Subagent isolation for context control. Tiered validation loops. Autonomous execution boundaries.

None of it surprised me.

I had independently implemented most of the same structural patterns — not because I’d seen their work, but because the problems pushed in that direction.

Then I came across another post — Zach Wills’ “Building at the Speed of Thought” — describing similar ideas. Tool orchestration. Agent-driven workflows. 60 autonomous agents handling PRs overnight.

Same shape.

That’s when it really clicked.

This isn’t about uniqueness.

It’s about convergence.

When independent teams, in different environments, solving real constraints arrive at similar architectures, that’s signal.

It’s not praise. It’s proof I’m building in the right direction, and I’m not alone.

It means the problem space has gravity — and if you work in it long enough, you start arriving at similar structures.

Seeing large teams with serious resources converge on patterns I built into a hobby VRChat project in my spare time?

It’s grounding.

I’m building in the direction the field is moving.

February 4, 2026

How an AI Cat Became My QA Tester

Rags the AI cat exploring Lower Level 2.0 — CRT terminal glowing, Master Chief watching from the papasan chair.

I asked Claude Code to generate a heatmap of my AI cat’s movement patterns, mostly as a joke. What I got back taught me how AI agents actually navigate in Unity — and it’s kind of changing my mind about what’s possible with AI collaboration.

The Setup

Lower Level 2.0 is a nostalgic 2000s basement VRChat world — shag carpet, CRT monitors, a DOS terminal, Xbox achievements. To make the space feel lived-in, I added an AI cat named Rags using the Sisters In Gaming NavMesh NPC/AI System. The idea was simple: a cat that wanders freely around the basement, explores, sits down, stretches, naps in a bed. A living detail that makes the world feel like someone actually lives there.

The implementation worked. Rags walked around, did cat things, responded to petting. But I had no real sense of where the cat was spending its time or whether it was actually reaching all the areas I’d built.

The Experiment

During a pair programming session with Claude Code, I casually asked if it could build a heatmap showing where Rags walks. I expected either “that’s not really possible from an editor script” or some half-working prototype I’d have to finish myself.

Instead, I got two fully functional Editor tools:

  • NpcHeatmapTracker — automatically records Rags’ position every 2 seconds during Play mode, writing timestamped coordinates to a CSV file
  • NpcHeatmapVisualizer — an Editor window that renders a color-coded density overlay directly in the Scene view, from blue (cold/never visited) through cyan, green, yellow, to red (hot/frequently visited)

Claude implemented both tools itself, using its MCP connection to write and compile the scripts directly in the Unity Editor. I hit Play, let the cat walk, opened the visualizer, hit refresh, and data appeared.
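The actual tools are C# Unity Editor scripts, but the core idea — bucket position samples into grid cells, then map counts to colors on a log scale — can be sketched in Python. The cell size and function names here are assumptions, not the shipped code:

```python
import math
from collections import Counter

CELL_SIZE = 0.5  # metres per grid cell (illustrative value)

def to_cell(x: float, z: float) -> tuple:
    """Bucket a world-space position into a 2D grid cell."""
    return (int(math.floor(x / CELL_SIZE)), int(math.floor(z / CELL_SIZE)))

def build_density(samples: list) -> Counter:
    """Count samples per cell, as the tracker's CSV rows would yield."""
    return Counter(to_cell(x, z) for x, z in samples)

def heat(count: int, max_count: int) -> float:
    """Map a cell's visit count to 0..1 on a logarithmic scale,
    so a few very hot cells don't wash out the rest of the map."""
    if count == 0 or max_count == 0:
        return 0.0
    return math.log1p(count) / math.log1p(max_count)
```

The visualizer then interpolates blue → cyan → green → yellow → red over that 0..1 heat value.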

What the Heatmap Revealed

I let Rags run overnight. By morning: 14,446 position samples across 1,200 grid cells.

The overnight heatmap: 14,446 samples. Green/cyan areas show regular cat traffic. Blue zones? The cat never goes there.

The visualization immediately told a story. The main living area had healthy green and cyan coverage — Rags was patrolling the carpet, weaving between furniture, visiting the game room. But there were obvious cold zones — the question was why?

The NavMesh Optimization

The heatmap motivated a deeper look at the NavMesh itself. The bake volume’s Y-range was set from -2 to 8 — meaning Unity was generating navigation triangles on walls, ceilings, and elevated surfaces where Rags should never walk.

Shrinking the bake volume to floor-only (-0.2 to 0.8) and removing a duplicate NavMeshSurface component that was stacking bakes cut the NavMesh dramatically:

Metric       Before      After       Reduction
Area         637 sq m    275 sq m    57%
Triangles    297         100         66%

Less geometry for the pathfinding system to evaluate, cleaner paths, and no more phantom navigation surfaces on vertical walls.

After optimization: cell-by-cell visit counts visible. 9,994 samples across 1,315 grid cells. The hot tub cold zone (white outline, upper right) now correctly shows zero visits.

What Wasn’t Working (Before This)

Traditional approach to validating NPC pathing: load into VRChat, watch the cat for a while, take mental notes, hope you notice if it gets stuck somewhere. Repeat across multiple sessions. Maybe you catch the problems. Maybe you don’t.

The issue isn’t effort — it’s that human observation is terrible at accumulating spatial data over time. You can watch a cat for 30 minutes and have a vague sense of where it goes. A CSV file with 14,000 position samples gives you certainty.

The Principle

I went in expecting Claude to say no. The request felt like a stretch — generating custom Editor tooling for a niche diagnostic need, writing file I/O and Scene view rendering code, integrating with Play mode lifecycle hooks. The kind of thing I’d never build myself because the time-to-value ratio felt too steep for a “nice to have.”

But that calculation was wrong. The heatmap took one session to build, runs automatically with zero setup, and has already found real issues I’d missed. The tool now lives permanently in the project, and I collect data to verify Rags’ behavior.

The broader realization: AI collaboration collapses the cost of building diagnostic tools. Things I’d normally dismiss as “not worth the effort” become trivial to create when you can describe what you want and get a working implementation back. The heatmap wasn’t on any roadmap. It wasn’t a planned feature. It came from a casual “I wonder if…” moment — and it turned out to be one of the most useful debugging tools in the project.

Beyond the Basement

The technique generalizes. Any game with NavMesh agents can benefit from movement heatmaps:

  • NPC patrol validation — are guards actually covering the areas you designed them to cover?
  • Spawn point auditing — do players cluster in predictable spots?
  • Accessibility testing — can all agent types reach all intended areas?
  • Performance profiling — where do entities spend compute time pathfinding?

Game studios have used telemetry heatmaps for player behavior analysis for years. What’s different here is the barrier to entry: instead of dedicated analytics infrastructure, I got a working heatmap from a conversation. The cat became an automated level auditor not because I planned it, but because I asked an AI tool to visualize something I was curious about.

Key Insight: The best debugging tools sometimes come from playful experimentation. When AI collaboration makes building diagnostic tools nearly free, “I wonder if…” becomes a viable development strategy.


Try It Yourself

Want to track where your own NPCs walk? The system is two standalone Editor scripts — no dependencies beyond Unity’s built-in NavMesh. Grab the scripts, setup guide, and full documentation from the NPC Heatmap Tool page, or browse all tools on the Skills & Tools page.


Technical details: NavMeshAgent (radius 0.1, height 0.2), position sampling every 2s, logarithmic color scale, CSV storage with crash recovery. Works with any Unity NavMesh project — not VRChat-specific.

January 25, 2026

AI Writes Shaders Autonomously: A VRChat CRT Terminal Case Study

When I created Issue #295: Retro CRT Terminal Effects, I genuinely didn’t think an AI could handle it. Shader programming is notoriously difficult - it requires understanding GPU architecture, HLSL/GLSL syntax, platform-specific quirks, and the subtle art of making things look good while maintaining performance.

I was wrong.

The Inspiration: Remo H Jansen’s CRT Terminal

I have to give a huge shoutout to Remo H Jansen and his excellent article: Building a Retro CRT Terminal Website with WebGL and GitHub Copilot/Claude Opus 3.5.

Remo’s work demonstrated that AI could assist with WebGL shader development for browser-based CRT effects. His Three.js implementation was the reference I gave Claude to study and adapt for Unity/VRChat.

The Vision

Lower Level 2.0 is a nostalgic 2000s basement VRChat world featuring a DOS-style terminal. The terminal worked great, but I wanted to take it further - bring it to life with that authentic CRT glow, the subtle flicker of phosphors warming up, scanlines rolling across the screen. The kind of details that make you feel like you’re back in a dimly lit basement at 2 AM, the monitor humming quietly as you type.

Requirements:

  • Scanline effects (those horizontal lines from CRT displays)
  • Phosphor glow (that characteristic green bloom)
  • Screen curvature (barrel distortion)
  • Flicker/jitter (subtle, not seizure-inducing)
  • Vignette (darkened edges)
  • Must be Quest-compatible (Shader Model 3.0, <5ms GPU)
  • Easy to apply for Unity beginners (step-by-step guide)
  • Bonus: Works on TV displays too

The Solution: Autonomous Shader Development

Claude researched the reference materials, studied HLSL shader patterns, and wrote three complete shader variants:

Shader                        Purpose                 Performance
TerminalCRT_Quest.shader      Quest 2/3 optimized     <5ms GPU
TerminalCRT_PC.shader         PCVR enhanced           5-10ms GPU
TerminalCRT_Standard.shader   Fallback/MeshRenderer   Variable

What Claude Actually Wrote

// Terminal CRT Shader - Quest Compatible
// Optimized for Meta Quest 2/3 with TextMeshPro support
// Inspired by cool-retro-term and remojansen.github.io
// Performance budget: <5ms GPU time

Shader "LL2/Terminal/CRT_Quest"
{
    Properties
    {
        // CRT Effect Parameters (exposed in Inspector)
        [Toggle(_CRT_ENABLED)] _CRTEnabled ("Enable CRT Effects", Float) = 1
        _ScanlineIntensity ("Scanline Intensity", Range(0, 1)) = 0.15
        _ScanlineCount ("Scanline Count", Range(100, 1000)) = 480
        _GlowStrength ("Phosphor Glow", Range(0, 1)) = 0.2
        _CurvatureAmount ("Screen Curvature", Range(0, 0.1)) = 0.02
        // ... full implementation
    }
}

The shader includes:

  • Barrel distortion math for screen curvature
  • Sin-wave based scanline patterns
  • Time-driven flicker with multiple frequencies
  • Radial vignette calculations
  • Full TextMeshPro SDF compatibility (the tricky part)
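The full shader body is elided above, so here is a Python sketch of the usual textbook forms of those effects — barrel distortion, sine-based scanlines, and a radial vignette. The exact formulas and constants in the shipped HLSL may differ:

```python
import math

def barrel_distort(u: float, v: float, amount: float = 0.02):
    """Push UVs outward from screen center by squared radius (curvature)."""
    cu, cv = u - 0.5, v - 0.5
    r2 = cu * cu + cv * cv
    f = 1.0 + amount * r2  # stronger displacement near the edges
    return (0.5 + cu * f, 0.5 + cv * f)

def scanline(v: float, count: float = 480.0, intensity: float = 0.15) -> float:
    """Sine-based darkening that repeats `count` times down the screen."""
    return 1.0 - intensity * (0.5 + 0.5 * math.sin(v * count * math.tau))

def vignette(u: float, v: float, strength: float = 0.5) -> float:
    """Radial brightness falloff toward the screen edges."""
    cu, cv = u - 0.5, v - 0.5
    return max(0.0, 1.0 - strength * (cu * cu + cv * cv) * 4.0)
```

In the shader these run per-fragment: distort the UV first, sample the texture, then multiply by the scanline and vignette factors.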

The Comprehensive Guide

Claude also produced a 368-line beginner-friendly setup guide:

CRT_Effect_Setup.md

Contents:

  • 5-step Quick Start for beginners
  • Quest vs PC shader comparison table
  • Complete parameter reference with ranges and defaults
  • Troubleshooting section for common issues
  • Bonus section for applying to TV displays
  • Accessibility warnings (flicker intensity)
  • Platform-specific material switching code

How Autonomous Was It?

~80% autonomous, ~20% Unity sync session

What Claude Did Autonomously:

  1. Researched Remo’s WebGL implementation
  2. Translated Three.js shader concepts to Unity HLSL
  3. Wrote Quest-compatible variant with SM 3.0 constraints
  4. Wrote PC-enhanced variant with chromatic aberration, bloom
  5. Created materials with tuned default values
  6. Wrote the comprehensive 368-line setup guide
  7. Added shader files to correct project locations
  8. Created README in Assets/Shaders/ explaining the system

What Required Unity Sync Session:

  1. Applying materials to TextMeshPro components (MCP can’t serialize font materials)
  2. Visual tuning of parameters in Play Mode
  3. Creating TV material variant
  4. Final verification in VR

The blocking factor was Unity MCP’s limitation with TextMeshPro material assignment - everything else was done without human intervention.

Evidence: The Commits

a498e57d feat: Add two-layer CRT terminal system with full-screen scanlines (#295)
979df09c feat: Add retro CRT terminal effects for Quest and PC

Files created:

  • Assets/Shaders/TerminalCRT_Quest.shader
  • Assets/Shaders/TerminalCRT_PC.shader
  • Assets/Shaders/TerminalCRT_Standard.shader
  • Assets/Materials/TerminalCRT_Quest.mat
  • Assets/Materials/TerminalCRT_PC.mat
  • Assets/Materials/TV_CRT.mat
  • Assets/Shaders/README.md
  • Docs/Modules/CRT_Effect_Setup.md

Why This Matters

I genuinely believed shader programming was beyond AI capabilities. It requires:

  • Deep graphics programming knowledge
  • Platform-specific optimization
  • Artistic judgment for visual quality
  • Integration with complex systems (TextMeshPro, VRChat, Quest)

But Claude handled it. Not perfectly on the first try - there was iteration. But the final result:

  • Looks authentic - captures the CRT aesthetic I remember
  • Performs well - stays under Quest performance budget
  • Is documented - beginners can follow the guide
  • Is maintainable - clean code with comments

This changes what I think is possible with agentic AI development.

Try It Yourself

If you’re working on a VRChat world and want that retro CRT look:

  1. Read the CRT Effect Setup Guide - beginner-friendly, step-by-step
  2. Study Remo’s original implementation for the WebGL approach
  3. The shaders and guide are MIT licensed - adapt freely for your own projects

Thanks

  • Remo H Jansen - For the inspiration and proving AI-assisted shader dev is viable
  • cool-retro-term team - For the original CRT effect reference
  • Claude - For proving me wrong about AI shader capabilities

This devlog documents the first autonomous shader work in the Lower Level 2.0 project. It turned out well.

January 21, 2026

Project BIFROST: Shipping Code While I Sleep

From GitHub Issue to Pull Request. Autonomously.

Label a GitHub issue, approve it from my phone, go to sleep, wake up to a pull request. That’s the goal.


The Problem

Even with AI writing code, I was still the middleware—copying, pasting, fixing errors, re-asking. The backlog grew faster than I could work through it.

BEFORE
Human Drives Every Step
Idea → Ask AI → Copy → Paste → Fix → Repeat
AFTER
AI Executes After Human Intent
Idea → Issue → Approve → PR

The Pipeline

📋 Issue
📐 FORGE Spec
Approve
BIFROST
🔀 PR

I write the what and why. BIFROST handles the how.

Key design choice: AI executes after human intent, never before. The approval gate is non-negotiable.


What I Learned

🔒
Human-in-the-Loop
The approval gate is the feature.
🏗
Orchestration > Execution
The hard part is the pipeline, not the AI.
🔄
Fail at the Right Layer
Prove the architecture first.

BIFROST is a proof-of-concept. The overnight agent works, but it's still experimental.

→ See the full technical breakdown
December 29, 2025

Introducing BBP: An AI-Driven Issue Prioritization System

After successfully having Claude Code autonomously implement music.exe—a Basement OS terminal app that integrates with ProTV 3.x for real-time playlist browsing and playback control—I wanted to scale that approach. Instead of picking issues randomly or by gut feeling, what if AI could act as a SCRUM master and pre-spec everything?

The Challenge

With 58 open issues across features, bugs, concepts, and epics, there was no clear way to know:

  • Which issues Claude Code could handle autonomously
  • Which had the highest “basement nostalgia” impact
  • How to balance effort vs. payoff

I needed a system that would lay out work explicitly, so I could return and start building immediately.

The Solution

I created Basement Build Priority (BBP)—a scoring formula:

BBP = (Agentic_Feasibility × Nostalgia_Score) / Story_Points

Metric               Range    Purpose
Agentic Feasibility  0-100%   Can Claude + Unity MCP complete this?
Nostalgia Score      1-10     Does it make the basement feel alive?
Story Points         1-21     Fibonacci effort scale

Claude analyzed all 58 issues, assigned scores, and applied a “Good Agentic Build” label to 36 issues with ≥70% feasibility. The result: a prioritized backlog where high-BBP items are high-automation, high-nostalgia, low-effort wins.
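The formula is simple enough to sketch directly. Only the BBP formula and the 70% label threshold come from the devlog; the example issues, scores, and helper names below are hypothetical:

```python
def bbp(feasibility_pct: float, nostalgia: int, story_points: int) -> float:
    """Basement Build Priority: (Agentic_Feasibility x Nostalgia_Score) / Story_Points."""
    return (feasibility_pct * nostalgia) / story_points

def good_agentic_build(feasibility_pct: float) -> bool:
    """Label rule from the devlog: >= 70% feasibility gets the label."""
    return feasibility_pct >= 0.70

# Hypothetical issues: (title, feasibility, nostalgia, story points).
issues = [
    ("music.exe polish", 0.90, 8, 3),
    ("hand-tuned lighting pass", 0.30, 9, 13),
]
# High-automation, high-nostalgia, low-effort items rank first.
ranked = sorted(issues, key=lambda i: bbp(*i[1:]), reverse=True)
```

The division by story points is what keeps big epics from crowding out quick, automatable wins.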

Why It Matters

This shifts Claude from “assistant” to “project manager.” Instead of asking “what should I work on?”, the backlog is pre-scored and ready. If the Agentic + MCP combo continues to prove itself (like it did with music.exe), it will take over a meaningful portion of the workload—in theory I could assign Claude to work on agentic issues while I focus on the non-agentic ones. My bandwidth is now “monitoring and review” rather than code, integrate, test, verify.

The Paradigm Shift

This represents the third evolution in my AI journey:

Phase       Mindset                                    Limiting Factor
Before AI   "What can I build?"                        Skill
With AI     "What should I build?"                     Imagination
Agentic AI  "What will Claude build while I review?"   Bandwidth

Key Insight: Pre-scoring issues with AI as SCRUM master (and reviewing its accuracy) means when I dedicate time to build, I can put Claude to work on low-impact agentic issues while I tackle the highest-impact ones—and have a head start. It could in theory complete an issue 60% of the way if it has 60% agentic feasibility, and I only have to finish the last 40% instead of 100%.

December 26, 2025

MUSIC.EXE: 90% AI-Coded ProTV Integration for Basement OS

The default ProTV playlist UI worked, but it didn't match the aesthetic of Lower Level 2.0, which is all about realism and nostalgia. I wanted a terminal-native music player that matched the DOS aesthetic and could be navigated with keyboard or joystick controls. Enter MUSIC.EXE: a fully functional ProTV music player app, coded 90% by AI with my guidance.

Before: The default ProTV playlist UI is functional but out of place in the Lower Level 2.0 aesthetic

The Challenge

The real challenge wasn’t coding, it was picking a task that AI could actually accomplish with its “hands and eyes.” This was the first real test of my Full Stack AI Workflow architecture: could Claude Code, equipped with Unity MCP tools and custom Editor scripts, autonomously implement a complete feature?

ProTV integration was the perfect candidate:

  • Well-documented API (ProTV 3.x Documentation)
  • Clear input/output patterns (IN_/OUT_ variable injection)
  • Isolated scope (one app, one integration point)

The key was creating a multi-layer prompt that gave Claude the domain expertise it needed. Rather than hoping it would figure out ProTV’s non-standard APIs, I front-loaded the knowledge:

You are an expert ProTV 3.x integration specialist for VRChat UdonSharp development. You understand the critical differences between event-driven and polling-based integration patterns, and you know the exact APIs, variable conventions, and pitfalls of ProTV’s plugin architecture.

CRITICAL RULE: NEVER GUESS. If you don’t know an API or are uncertain about ProTV behavior, read the ProTV source, check existing implementations, or ask for clarification. DO NOT hallucinate ProTV APIs.

The Solution

The development spanned December 17-26, 2025, across three sessions:

Session 1 (Dec 17): Initial implementation 497 lines of C# for playlist browsing, track navigation, and playback control. Code compiled, but Claude hit a wall: Unity MCP tools couldn’t set object references in the Inspector.

Session 2 (Dec 25): The breakthrough. Instead of declaring “manual intervention required,” Claude remembered the project’s prime directive: “If you get stuck, can you resolve the roadblock with a Unity Editor script?” It expanded SetupDTAppMusic.cs to handle all wiring autonomously with no Inspector clicks needed.

Session 3 (Dec 26): Final integration. Converted from polling-based to event-driven ProTV integration, fixed the sortView shuffle index mapping, and verified end-to-end playback.

The result: 473 lines of production code, plus Editor automation, delivered with ~10% human intervention (mostly debugging ProTV’s undocumented sortView behavior).

Why It Matters

This proves the viability of full closed-loop autonomous development for non-trivial features:

  1. AI as Workflow Architect — The 90/10 split is real. AI handles the bulk of implementation while I focus on architecture decisions, debugging edge cases, and validation.

  2. Reusable Agent Patterns — The ProTV prompt I created isn’t throwaway. It becomes a reusable agent/skill for future ProTV integrations. Each solved problem compounds into institutional knowledge.

  3. Scalable Approach — If MUSIC.EXE works, the same pattern applies to other Basement OS apps: identify scope, create domain-specific prompts, let Claude execute.

Key Insight: AI might not achieve 100%, but if it consistently delivers 90%, I only need to contribute the remaining 10%. That’s a 10x multiplier on my development capacity.

After: MUSIC.EXE running in Basement OS—terminal-native playlist browser with keyboard navigation

December 11, 2025

Devlog System Simplification Analysis

The Problem: Over-Engineering the Documentation

I realized that my initial plan for the Automated Devlog System was becoming a project in itself. The original design involved:

  • 3 different templates
  • Automated impact scoring algorithms
  • 4 separate Python scripts
  • AI “guessing” why things mattered

It was estimated to take 8-11 days to build. That’s too much overhead for a system meant to save time.

The Solution: 90% Simplification

I re-evaluated the requirements against the core mission: chronicling the AI skill journey. I realized that the developer (me) always knows what matters—I just need help structuring it.

The New “Lite” Workflow:

  1. One Master Template: No more auto-classification logic. I pick the type ([Milestone], [TIL], [Meta]).
  2. Dialogue > Algorithms: Instead of predicting importance, the system will just ask me: “What’s your one-liner takeaway?” and “Why does this matter?”.
  3. AI Synthesis: The agent takes my raw reflection and structures it into the narrative format.

Why It Matters

This reduces the build time from two weeks to ~1 day.

It shifts the focus from building complex logic to capturing authentic learning moments. By replacing “AI guessing” with “Human reflection,” the devlogs will be more insightful and personal, while still leveraging AI for the heavy lifting of formatting and publishing.

Key Insight: Automation shouldn’t replace the thinking—it should remove the friction of documenting that thinking.

December 7, 2025

Full Stack AI Workflow - The Complete System

Two breakthroughs in one day. This is the moment everything clicked.

Morning - The Assembly Line: Launched my first agent swarm (multiple AI agents working in parallel on different tasks) with custom agent.md persona files. Nine specialized agents building different Basement OS modules simultaneously - DT_Core, DT_Shell, DT_Theme, weather app, GitHub app, each with injected expertise. This is horizontal scaling - volume without sacrificing architecture.

First agent swarm with 9 parallel agents and custom agent.md files

The Gap: Agents could write perfect code, but Unity wouldn’t compile it. Files sat on disk, ignored. I was still manually clicking “Compile” in the Inspector. The automation loop was broken.

Afternoon - The Missing Link: Found UdonSharpAssetRepair.cs - the linchpin I’d been missing. This utility script forces Unity to acknowledge programmatically-written files, generates the .asset files, and triggers compilation. It’s the bridge between “AI writes code” and “Unity actually runs it.”

Full automation achieved with UdonSharpAssetRepair as the missing link

Creating comprehensive SOP documentation for autonomous agent workflow

Documentation complete - CLOSED_LOOP_AGENT_SYSTEM.md and system files

The Complete System: Now Claude writes code → triggers UdonSharpAssetRepair → Unity compiles → enters Play mode → reads console logs → fixes errors → repeats. Zero human intervention. The swarm builds the car fast (volume). The automation pipeline ensures it doesn’t explode when you turn the key (quality assurance).
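That loop can be sketched abstractly. `compile_and_test` and `apply_fix` are hypothetical stand-ins for the real MCP-driven Unity steps (trigger UdonSharpAssetRepair, compile, enter Play mode, read console logs):

```python
def closed_loop(compile_and_test, apply_fix, max_attempts: int = 5) -> int:
    """Run the write -> compile -> verify -> fix cycle until clean or exhausted.

    compile_and_test() returns a list of error strings (empty = success);
    apply_fix(errors) rewrites the code. Both are stand-ins for the real
    MCP-driven Unity steps.
    """
    for attempt in range(1, max_attempts + 1):
        errors = compile_and_test()
        if not errors:
            return attempt  # clean compile and verified Play mode
        apply_fix(errors)
    raise RuntimeError(f"still failing after {max_attempts} attempts")
```

The bounded attempt count matters: an autonomous fix loop without a ceiling can thrash on an error it doesn't understand.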

The Learning: This is tooling engineering. The swarm was impressive, but useless without the testing loop. UdonSharpAssetRepair is just 200 lines, but it was the missing piece that completed the loop. Sometimes the smallest component unlocks the entire system.

Why It Matters: I went from “AI-assisted developer” to “AI workflow architect.” The difference? I’m not just using tools - I’m building the automation that makes the tools useful. That’s the career transition I’m chasing.

December 6, 2025

Meta-Review: Terminal 2.1 Spec Quality

After building the Terminal 2.1 spec, I turned Claude on myself. “Review this spec against best practices - Spec-Driven Development, TDD guidelines, Hermeneutic Circle methodology. How does it hold up?”

The results were humbling. Strong marks for Hub-Spoke Architecture ✅, 600-line rule compliance ✅, and UdonSharp checks ✅. But big gaps: no TDD integration ❌, missing Hermeneutic Circle analysis ❌, incomplete pre-commit workflow ❌.

Alignment analysis showing Terminal 2.1 spec strengths and gaps

This is how you get better - critique your own work with the same rigor you’d apply to someone else’s. The spec demonstrates solid architecture thinking, but I’m not validating it with tests or considering WHOLE ↔ PART impacts explicitly. Those are fixable gaps.

Using AI to review your own methodology is meta-learning at its finest.

December 6, 2025

Full Closed-Loop Automation

This is a big one. I've been working with Claude Code (an AI coding assistant by Anthropic that can read, write, and execute code autonomously) to build out the Basement OS kernel, and we finally cracked the automation problem.

The Problem: With UdonSharp (a C# to Udon compiler that lets you write VRChat scripts in familiar C# syntax instead of visual programming), AI can write perfect code that won't run. Unity needs to generate .asset files, attach them to GameObjects, and compile everything. My first two attempts failed because Claude would generate code with no way to test it. I was still the button-clicker.

The Solution: Unity MCP (Model Context Protocol - a way for AI agents to communicate with the Unity Editor directly) gave Claude hands. Now it does the full loop: write script → trigger compilation → check errors → attach to GameObjects → enter Play mode → verify. Zero human intervention.

The Learning: This taught me that real automation isn’t about speed - it’s about eliminating the feedback loop. I went from “human as button-clicker” to “human as architect.”

Why It Matters: This pattern applies beyond VRChat. Any runtime environment (web apps, mobile, game engines) needs autonomous test → fix → verify loops.

December 3, 2025

Refactoring CLAUDE.md

A coworker sent me HumanLayer’s guide to writing good CLAUDE.md files, and I couldn’t help myself - had to try it immediately.

Opened Claude Opus and fed it my entire 953-line CLAUDE.md for review, citing the HumanLayer article as the comparison benchmark. “How does mine stack up against their recommendations?”

Opus came back with a refactor plan: too many instructions causing “instruction-following decay,” embedded code examples getting stale, mixed universal and task-specific rules. The solution? Modular structure - keep CLAUDE.md lean (~150 lines) and create reference documents in Docs/Reference/ that Claude can pull when needed.

Claude Opus refactor analysis showing before/after comparison

This is how I learn best. Read something interesting, apply it immediately while it’s fresh. No analysis paralysis, just iteration. By the end of the session, I had a new doc structure that made every future Claude conversation more effective.

Meta-learning: using AI to improve how you work with AI. That’s leverage.

November 29, 2025

Symbolic Games POC

Experimented with terminal-based games using unicode characters instead of sprites. Built breakout and pong prototypes that render using block characters.

Interesting concept, but after testing found sprites are significantly less resource-intensive on Quest. Keeping the code for reference but won’t ship in prod.

Symbolic rendering engine working

Symbolic breakout game

November 15, 2025

Terminal Menu System Complete

In a development instance I ran a POC that replaced the original auto-cycling display with an actual interactive menu. Players could use their movement controls to navigate up/down through options and select with the interact button.

Had to implement player immobilization when they’re at the terminal - otherwise pressing up/down would move your avatar AND the cursor. Using VRCStation for this also solves the “walking away mid-interaction” problem.

Also extracted the weather module into its own script. The terminal now pulls real-time weather data from my GitHub Pages endpoint and displays it in the header.

Basement OS Terminal

November 7, 2025

Claude Code + Documentation Sprint

Started my first session with Claude Code - this is a game changer. Instead of copying code snippets back and forth, Claude can directly read my project files, write code, and even help with git commits.

Spent the session adding comprehensive XML docstrings to all major scripts. The AchievementTracker alone went from zero documentation to fully annotated with parameter descriptions and usage examples.

Also started setting up project automation - automatic UdonSharp validation before commits, organized GitHub issues with story points, and established milestones. Treating Claude like a junior developer really puts the leadership and project-management skills I've built in this area to work.

Adding XML docstrings with Claude Code

First git upload with Claude Code

GitHub issue organization

October 20, 2025

Achievement System Overhaul

Finally finished the Xbox 360-style achievement system! 19 achievements worth 420G are implemented, leaving plenty of room under the 1,000G cap for future ideas - the same gamerscore structure the original Xbox 360 used. The notifications pop up just like they did on the 360 - that satisfying sound effect and the animated banner!

Using VRChat’s PlayerData API for persistence. This was tricky because you can’t use fancy C# features in UdonSharp - no List<T>, no Dictionary, no LINQ. Everything’s done with arrays and careful indexing.

The FIFO queue for notifications took a few iterations. Originally had a priority system but it felt weird when achievements popped up out of order. The chronological approach matches the “basement live feed” vibe I was going for.
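Within UdonSharp's no-`List<T>` constraint, a chronological notification queue comes down to a fixed array and two indices. A minimal sketch (field and method names are mine, not the shipped code):

```csharp
// Fixed-size circular FIFO for queued notification titles
private string[] queue = new string[8];
private int head = 0;   // next index to dequeue
private int count = 0;  // items currently queued

public bool Enqueue(string title)
{
    if (count >= queue.Length) return false;      // queue full, drop
    queue[(head + count) % queue.Length] = title; // write at the tail
    count++;
    return true;
}

public string Dequeue()
{
    if (count == 0) return null;                  // nothing pending
    string title = queue[head];
    head = (head + 1) % queue.Length;
    count--;
    return title;
}
```

The display script dequeues the next entry only after the current banner's fade-out finishes, which is what keeps the pops strictly chronological.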

See all 19 achievements and how to earn them →

August 10, 2025

World Launch Party

Hosted a private launch event for our founding members before opening Lower Level 2.0 to the public. Watching the achievement notifications pop as people joined was incredibly satisfying - exactly the vibe I was going for.

Thank you to our founding members for donating to make Lower Level 2 a virtual reality: Lexx, Onawarren, and M0J170.

Still a few bugs to work through, but the core systems are running - notifications, persistence, the DOS terminal.

Launch party screenshot with 8 friends

Achievement notifications during launch

August 7, 2025

Weather System + Rain Shaders

Integrated real-time weather from Fond du Lac, WI using a GitHub Pages JSON endpoint. The terminal displays current conditions and when it’s actually raining outside, the basement windows show rain effects.
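Fetching that endpoint from Udon goes through the SDK3 string loader. A hedged sketch - the URL field, the rain check, and the component names are placeholders, and a real build would parse the JSON properly (e.g. with VRCJson) rather than substring-matching:

```csharp
using UdonSharp;
using UnityEngine;
using VRC.SDKBase;
using VRC.SDK3.StringLoading;
using VRC.Udon.Common.Interfaces;

public class WeatherFetcher : UdonSharpBehaviour
{
    public VRCUrl weatherUrl;      // GitHub Pages JSON endpoint
    public GameObject rainEffects; // window rain shader objects

    void Start()
    {
        VRCStringDownloader.LoadUrl(weatherUrl, (IUdonEventReceiver)this);
    }

    public override void OnStringLoadSuccess(IVRCStringDownload result)
    {
        string json = result.Result;
        // Naive condition check for illustration only
        rainEffects.SetActive(json.Contains("rain"));
    }
}
```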

Rain shader source: toadstorm/RainyGlassShader

Thundery outbreaks weather condition

Weather system working on terminal

July 26, 2025

Achievement Icon Design

Designed custom icons for the Founders and Supporters achievements using Photopea. I highly recommend it as a free Photoshop alternative. Referenced actual Xbox 360 achievement art to match that 2000s gaming aesthetic.

Achievement icons were unique to the Xbox 360 dashboard, so I made custom ones in the style of the originals for founding members and supporters.

Custom achievement icons in Unity

July 19, 2025

Multi-TV Broadcasting System

Got notifications working on all 3 TVs simultaneously! The NotificationEventHub acts as the central orchestrator, using UdonSynced variables to broadcast achievement and login notifications to all players in the world.

The system works by having the master player own the NotificationEventHub and broadcast via RequestSerialization(). When a notification fires, OnDeserialization() triggers on all clients, which then forwards the notification to a primary display plus any additional displays configured in the array. Each TV has its own XboxNotificationUI component that receives the event and handles the fade animation and sound independently.
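The owner-broadcast pattern described above can be sketched like this in UdonSharp. Member names are illustrative, and I'm assuming manual sync mode; the real NotificationEventHub carries more state than a single title:

```csharp
using UdonSharp;
using UnityEngine;
using VRC.SDKBase;
using VRC.Udon;

[UdonBehaviourSyncMode(BehaviourSyncMode.Manual)]
public class NotificationEventHub : UdonSharpBehaviour
{
    [UdonSynced] private string pendingTitle;
    public UdonBehaviour[] displays; // one XboxNotificationUI per TV

    public void Broadcast(string title)
    {
        if (!Networking.IsOwner(gameObject))
            Networking.SetOwner(Networking.LocalPlayer, gameObject);
        pendingTitle = title;
        RequestSerialization(); // push the synced field to all clients
        ShowOnAllDisplays();    // the owner shows it locally too
    }

    public override void OnDeserialization()
    {
        ShowOnAllDisplays();    // remote clients react to the new value
    }

    private void ShowOnAllDisplays()
    {
        foreach (UdonBehaviour tv in displays)
            tv.SendCustomEvent("ShowNotification");
    }
}
```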

This means everyone in the basement sees achievements pop regardless of which room they’re in, and all displays stay in sync across the network.

Notifications working on all 3 TVs

Multi-TV setup in basement

July 13, 2025

VRChat PlayerData Persistence

Finally got VRChat’s PlayerData API working! Visit counts now persist between sessions. This took way longer than expected because of UdonSharp’s limitations.

Can’t use Dictionary or List in UdonSharp, so I’m tracking players with parallel arrays. Not elegant, but it works. Successfully tested with 3 visits - data persists across world reloads.
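The parallel-array workaround plus the persistence calls look roughly like this. The key name and helper are assumptions, and the PlayerData calls follow the SDK3 persistence API as I understand it (SetInt writes only the local player's data):

```csharp
using UdonSharp;
using VRC.SDKBase;
using VRC.SDK3.Persistence;

public class VisitTracker : UdonSharpBehaviour
{
    // Parallel arrays stand in for a Dictionary<int, int>
    private int[] playerIds = new int[64];
    private int[] visitCounts = new int[64];
    private int trackedCount = 0;

    public override void OnPlayerRestored(VRCPlayerApi player)
    {
        if (!player.isLocal) return;
        int visits = 0;
        PlayerData.TryGetInt(player, "visit_count", out visits);
        PlayerData.SetInt("visit_count", visits + 1); // persists across sessions
    }

    private int IndexOfPlayer(int playerId)
    {
        // Linear scan replaces a Dictionary lookup
        for (int i = 0; i < trackedCount; i++)
            if (playerIds[i] == playerId) return i;
        return -1;
    }
}
```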

The 11-hour debugging marathon was worth it. This is the foundation for the entire achievement system.

PlayerData persistence working

Successfully tracking 3 visits

July 10, 2025

First Achievement Unlocked

I received the Lower Level from our developer millerpc_ and the quality is astonishing - like stepping back in time. I wanted to add something special, so I started working on my first AI coding idea, welcome pop-ups, and finished it in 4 hours! It went so well that I decided to keep building with AI and make achievements!

First achievement notification popped in-world today! “First Time Visitor” - the notification banner fades in with a satisfying notification chime. Feels exactly like it did on the 360.

This was my first time getting Unity layers, animation timing, and sound cues all working together. I had hoped to create something special to surprise my friends for the release, and this is the moment I've been building toward since starting this project. Coding it with Claude web in UdonSharp, which I'd never used before, actually worked!

First achievement notification in production

First achievement unlocked screenshot

C:\BASEMENT>
MEM: 64K OK
Memory = Page Views
0-63      = 64K
64-127    = 128K
128-255   = 256K
256-511   = 512K
512-639   = 640K
640-1023  = 1MB
1024-2047 = 2MB
2048+     = 4MB
TIME: 00:00:00