Your AI
Sanctuary
Build it. Own it. Never rent again.
OpenClaw is a personal AI agent that runs on hardware you control — a cloud server, a mini PC, a home server in your closet. Your data stays yours. No subscriptions to cancel. No monthly bills to the big boys.
These guides walk you through every step. Start where you are, go at your own pace, and build something that's genuinely yours.
Get It Running
Pick your platform. Both paths lead to the same place — a running, Discord-connected OpenClaw agent you own completely. The cloud path costs nothing upfront. The local path costs once and pays for itself within the year.
A free-tier cloud instance, SSH hardened, Discord connected. No hardware needed. Start here to prove the concept before spending anything on hardware.
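The SSH hardening that guide covers mostly comes down to a few lines of server config. A minimal sketch — these are standard OpenSSH options, though the full guide goes further:

```
# /etc/ssh/sshd_config — common hardening lines
PasswordAuthentication no   # key-based logins only
PermitRootLogin no          # log in as a normal user, sudo when needed
```

Reload the SSH service after editing (for example, `sudo systemctl reload ssh` on Debian/Ubuntu; the service name varies by distro).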
Mini PC Setup
Your hardware. No monthly bills. Your data never leaves your home. Full control over everything. The setup that pays for itself.
Fortify Your Sanctuary
A running agent is just the beginning. These guides turn a generic AI into something personal — one that knows who you are, protects your credentials, follows your rules, and gets sharper over time.
Out of the box your agent knows nothing about you. Memory files change that. Build an agent with real context, real preferences, and real rules.
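As an illustration only — the filename and headings below are hypothetical; the memory guide has the exact layout OpenClaw expects — a memory file is just plain markdown:

```markdown
# Memory (hypothetical example layout)

## Who I am
- Name: Sam. Time zone: UTC-5. Prefer short, direct answers.

## Rules
- Never push to main without asking first.
- Weekly backups run Sunday night; don't schedule jobs then.
```

The point is that there's no special format to learn: you write facts and rules in prose, and the agent reads them as context.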
Skills & Slash Commands
Memory teaches your agent who you are. Skills teach it what to do. Build a health check, a deploy pre-flight — all with plain markdown files.
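A sketch of what a skill file can look like. The frontmatter fields and filename here are assumptions for illustration — check the skills guide for the exact format:

```markdown
---
name: health-check
description: Report disk, memory, and service status for the server.
---

When asked for a health check:
1. Run `df -h` and `free -h`.
2. Flag anything above 90% usage as a warning.
3. Summarize the results in three lines or fewer.
```

Because it's markdown, you can version it, diff it, and edit it with any text editor.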
Hooks
Skills tell your agent what to do. Hooks enforce what it must and must not do. Protect sensitive files, block deploys until tests pass, stay in control.
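The "block deploys until tests pass" idea is just a command gate. A minimal shell sketch — the function name and wiring here are illustrative, not OpenClaw's actual hook API:

```shell
# Illustrative gate, not OpenClaw's real hook API: run the protected
# action only if the check command succeeds; otherwise block it.
run_gated() {
  check_cmd="$1"; shift
  if ! sh -c "$check_cmd"; then
    echo "blocked: check failed" >&2
    return 1
  fi
  "$@"   # check passed: run the protected action
}

# Example: deploy (here just an echo) only when the test suite passes.
run_gated "true" echo "deploying"
```

A real hook works the same way: it sits in front of an action and returns non-zero to veto it.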
Securing Your Secrets
API keys in plaintext config files have already burned thousands of people. Move your secrets to environment variables before it matters.
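The fix is mechanical: the key lives in the environment, and your config only names the variable. A sketch — the variable name and value are placeholders:

```shell
# Put the key in the environment (e.g. via ~/.profile or a secrets
# manager), never in a checked-in config file. This value is fake.
export ANTHROPIC_API_KEY="sk-example-not-a-real-key"

# Scripts and configs then read the variable, failing loudly if unset:
key="${ANTHROPIC_API_KEY:?ANTHROPIC_API_KEY is not set}"
echo "key loaded (${#key} chars)"
```

The `:?` expansion is the useful part: a missing key stops the script with a clear error instead of silently sending an empty credential.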
Build Your Local AI Stack
The final piece. Stop renting intelligence — run real language models on your own hardware with no API fees and no data leaving your machine. Install Ollama, pull your first model, wire it into OpenClaw.
Requires local hardware — follow the Mini PC Setup guide first.
Install Ollama and run your first local language model. Smaller Llama 3.2 and Qwen2.5 models run well on modern CPUs. No API keys, and no internet required after the initial model download.
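The whole first run is three commands. The install script URL and model tag below are Ollama's at the time of writing — check ollama.com if they've changed:

```shell
# Install Ollama via its official install script, then pull and run a model.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2        # one-time model download
ollama run llama3.2 "Say hello in five words."
```

After the pull, everything runs locally; the guide covers wiring the running model into OpenClaw.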
Ollama Advanced
Tune parameters, benchmark your hardware, run multiple models simultaneously, and wire Ollama directly into OpenClaw as one unified system.
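Running multiple models simultaneously is mostly configuration. The two environment variables below are real Ollama server settings, though defaults and exact behavior vary by version — check the Ollama FAQ for yours:

```
# Environment for the Ollama server (e.g. in a systemd override file):
OLLAMA_MAX_LOADED_MODELS=2   # keep up to two models resident at once
OLLAMA_NUM_PARALLEL=2        # serve two requests per model in parallel
```

Whether two models fit depends on your RAM; the advanced guide's benchmarking section helps you find the limit for your hardware.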
Work through these in order. Understand what you're building before spending on hardware. Earn your sanctuary first — then invest in it.
If this helped, grab me a coffee. It genuinely keeps the guides coming. 🦞