Source Available · BSL 1.1

Substrate

The Intelligence Layer:
A Mind for Your Machine.

A source-available agentic system that turns your machine from an inanimate silo into something conversational, proactive, autonomous.

Model-independent by design — it dynamically loads every model tied to your API keys, runs entirely offline via your preferred model library, or takes a hybrid approach with both.

It learns your patterns and remembers your context, adapting into an agent tuned uniquely to you.

One foundational substrate, infinite potential outcomes.
You are what makes your agent truly unique.

Substrate

$ substrate

Substrate v1.2.0 — Agent ready.

you: Find all PDFs on my desktop, summarize each, and save a report to Obsidian.

agent: On it. Found 4 PDFs. Reading and summarizing...

  ▸ exec  Get-ChildItem ~\Desktop -Filter *.pdf

  ▸ read_file  quarterly_report.pdf

  ▸ obsidian  create_note "PDF Summaries"

agent: Done. Summaries saved to your Obsidian vault.


17 Built-in Tools · 5+ LLM Providers · 3 UI Interfaces · 100% Local-First

Capabilities

Everything your system
needs to be.

Persistent memory, autonomous scheduling, and full OS control — model-independent architecture that adapts to each user.

Desktop Control

Shell commands, file operations, process management, mouse/keyboard control, and native Windows UI automation.

Multi-Model LLM

Dynamically loads all available models from your API keys, runs entirely offline via your preferred model library, or takes a hybrid approach with both. Any OpenAI-compatible endpoint works too. Hot-swap mid-conversation.

Browser Automation

Full Chrome DevTools Protocol control. Navigate, click, type, submit forms, execute JavaScript, and capture screenshots of any page.

Voice I/O

Local TTS via Kokoro-82M or cloud via ElevenLabs. Speech recognition input. The agent can speak every response aloud.

Persistent Memory

Unified SQLite with FTS5 full-text search and vector embeddings. Hybrid keyword + semantic retrieval across sessions.
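Hybrid retrieval blends a keyword score with a semantic one. A minimal sketch of the idea, with a toy in-memory store and 3-dimensional vectors standing in for Substrate's SQLite/FTS5 tables and real embeddings (all names and data here are illustrative):

```python
import math

# Toy memory store: (text, embedding) pairs. In Substrate these would
# live in SQLite, with FTS5 providing the keyword side of the score.
MEMORIES = [
    ("user prefers dark mode in the editor", [0.9, 0.1, 0.0]),
    ("quarterly report summarized to Obsidian", [0.1, 0.8, 0.3]),
    ("ollama runs llama3 locally on this machine", [0.0, 0.2, 0.9]),
]

def keyword_score(query, text):
    """Fraction of query words that appear in the memory text."""
    words = query.lower().split()
    return sum(w in text.lower() for w in words) / len(words)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, query_emb, k=2, alpha=0.5):
    """Blend keyword and semantic scores; return the top-k memories."""
    scored = [
        (alpha * keyword_score(query, text) + (1 - alpha) * cosine(query_emb, emb), text)
        for text, emb in MEMORIES
    ]
    return [text for _, text in sorted(scored, reverse=True)[:k]]

print(hybrid_search("dark mode editor", [0.85, 0.15, 0.05]))
```

The `alpha` weight trades keyword precision against semantic recall; a production store would do the keyword half inside SQLite via an FTS5 `MATCH` query rather than in Python.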

Circuits & Scheduling

File-driven task scheduling via CIRCUITS.md. Recurring tasks, startup routines, and a system tray daemon that runs even when the UI is closed.
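"File-driven" means a circuit is just a Markdown entry the daemon reads on its timer. A hypothetical CIRCUITS.md entry might look like this (the field names are illustrative — the template shipped with your install defines the real schema):

```markdown
## Morning briefing
- schedule: daily 08:30
- action: Summarize unread Obsidian inbox notes and read the summary aloud
- enabled: true
```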

Image Generation

Generate images via DALL-E 3 or Google Imagen. Results render inline in the chat with click-to-zoom and download.

Plugins & MCP

Hook-based plugin architecture. Connect external MCP tool servers for any custom integration — the agent discovers and calls their tools automatically.
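A hook-based plugin architecture boils down to a registry of callables keyed by lifecycle event. A minimal sketch of the pattern (the hook name and payload shape below are hypothetical, not Substrate's actual plugin API):

```python
from collections import defaultdict

# Named hooks -> list of registered plugin callables.
HOOKS = defaultdict(list)

def on(hook_name):
    """Decorator: register a plugin function for a named hook."""
    def register(fn):
        HOOKS[hook_name].append(fn)
        return fn
    return register

def fire(hook_name, payload):
    """Run every plugin registered for the hook, collecting results."""
    return [fn(payload) for fn in HOOKS[hook_name]]

# A plugin subscribing to a (hypothetical) pre-tool-call hook.
@on("before_tool_call")
def log_tool(payload):
    return f"calling {payload['tool']}"

print(fire("before_tool_call", {"tool": "bash"}))
```

MCP servers slot into the same picture from the other direction: instead of registering Python callables, the agent discovers remote tools over the protocol and exposes them through the same registry.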

Desktop & Mobile UI

Electron desktop app with animated avatar, plus a PWA-capable WebUI accessible from any phone, tablet, or browser on your network.

Autonomy & Awareness

Configurable screen observation, camera/computer vision, and autonomous context building. Control how often the agent watches your screen, sees through your camera, and learns your workflows — all with adjustable intervals and toggles per channel.

Animated Avatar & Identity

A living, animated avatar with idle breathing, talking animations, bounce, wiggle, and squish reactions. Customize your agent's personality via editable markdown files — SUBSTRATE.md defines its core identity, PRIME.md sets startup behavior, and CIRCUITS.md schedules recurring tasks. Upload any image as the avatar face through the radial config panel.

Remote Access via ZeroTier

Access your agent from any device on your ZeroTier network — phone, tablet, or another PC. The built-in WebUI and mobile PWA connect securely over your private overlay network without exposing anything to the public internet. Chat, use voice, and control your desktop remotely.

Autonomy Settings

Every awareness channel is independently configurable — enable/disable, set intervals, customize prompts

Screen Observation

Periodic screenshots let the agent see what you're working on and build context about your workflow.

Interval: 2 – 10 min (configurable)
Default: Off

Camera / Vision

See through your phone's camera via the mobile WebUI. The agent reacts naturally to what it sees — with a special "first look" prompt when vision connects.

Interval: 30 s – 2 min
Silent chance: 50% (won't always comment)

Circuits Polling

Background system monitoring on a timer. The agent checks for events, alerts, and scheduled tasks — responds silently unless something needs attention.

Interval: 1 – 60 min (configurable)
Active hours: Configurable window

Autonomous Messages

Periodic conversational check-ins. The agent proactively comments on your work, offers suggestions, or shares observations.

Interval: 1 – 5 min
Custom prompt: Fully editable

Autonomous Notes

Automatically creates Obsidian notes summarizing key conversation points, decisions, and action items.

Interval: 10 – 30 min
Default: Off

Auto Image Gen

Periodically generates image prompts inspired by the conversation context using DALL-E or Imagen. Creative visual companion.

Interval: 5 – 15 min
Custom prompt: Fully editable

All channels have independent enable/disable toggles, configurable min/max intervals, and custom prompts. The agent builds a richer understanding of your workflow over time.
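As a mental model, each channel's settings reduce to a small record: a toggle, an interval range, and an optional prompt. A hypothetical JSON shape (key names are illustrative, not Substrate's actual config schema):

```json
{
  "screen_observation": { "enabled": false, "interval_min": 2, "interval_max": 10 },
  "autonomous_messages": {
    "enabled": true,
    "interval_min": 1,
    "interval_max": 5,
    "prompt": "Comment briefly on what I'm working on."
  }
}
```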

The Interface

Minimalist by design,
powerful by nature.

Just your avatar, a text field, and a transparent canvas. Designed to fit into whatever workflow you have without being intrusive or distracting.

What can you do in Code mode?
In Code mode I act on your instructions immediately — executing commands, writing files, browsing the web, and controlling your desktop in real time.

I can:
• Run shell commands and scripts on your OS
• Read, write, and manage files across your system
• Browse the web, scrape data, and take screenshots
• Send messages to Obsidian or any integrated app
• Chain multiple tools together in a single response

This is the default mode — tell me what to do and I'll do it.

Radial Config

Right-click the avatar to open the radial menu — settings, prompts, profiles, models, and autonomy controls all live here.

Custom Avatar

Upload any image as the agent face. It animates with idle breathing, talking lips, and reactive expressions like happy, angry, or searching.

Voice or Text

Type or speak. The agent responds in text and can read every reply aloud with local or cloud TTS voices.

Architecture

How it works

A hybrid Electron + Python architecture with bidirectional IPC, a Flask API layer, and pluggable LLM backends.

USER INTERFACES
· Electron App (Desktop UI + Avatar)
· WebUI / PWA (Mobile & Browser)
· System Tray (Background Daemon)
↓ IPC · HTTP/WS · HTTP

PYTHON BACKEND (Flask · port 8765)
· Chat Agent: OCS Loop, Prompt Builder, Context Mgmt
· Tool Registry: 17 Tools, On-demand Loading, Schema Generation
· Memory: SQLite + FTS5, Vector Embeddings, Hybrid Retrieval
· Circuits: Task Scheduling, Startup Tasks, CIRCUITS.md
↓ API calls

LLM PROVIDERS
· Cloud Providers · Local (Ollama) · OpenAI-compatible
Models discovered dynamically from API keys — or fully local via Ollama

Platform: Windows 10/11 | CDP Browser · pywinauto · PyAutoGUI · Shell

Tool Ecosystem

17 built-in tools

Every tool the agent needs to control your desktop, automate workflows, and interact with the world.

bash · text_editor · computer · browser · memory · web_search · web_fetch · generate_image · pdf · obsidian · skill · learn · media · look · notify · agent · macro · + MCP servers

Core tools (highlighted) are always loaded. On-demand tools load automatically when relevant keywords are detected.

Tools load on-demand based on conversation context — no wasted tokens.
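On-demand loading can be pictured as keyword routing over the conversation. A simplified sketch (the keyword table and function name are illustrative, not Substrate's actual registry):

```python
# Map on-demand tools to trigger keywords (illustrative table).
ON_DEMAND = {
    "pdf": {"pdf", "document"},
    "obsidian": {"obsidian", "note", "vault"},
    "generate_image": {"image", "draw", "picture"},
}
CORE = ["bash", "text_editor", "computer"]  # always loaded

def tools_for(message):
    """Core tools plus any on-demand tool whose keyword appears."""
    words = set(message.lower().split())
    loaded = list(CORE)
    for tool, triggers in ON_DEMAND.items():
        if words & triggers:
            loaded.append(tool)
    return loaded

print(tools_for("Summarize this pdf into my Obsidian vault"))
```

Only the schemas of loaded tools are sent to the model, which is where the token savings come from.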

Emergent Tools & Autonomous Skill Creation

The agent doesn't just use tools — it creates new ones.

When the agent encounters a complex multi-step workflow, it can autonomously write scripts, save them as reusable skills, and invoke them in future tasks. Your toolset grows organically from real usage — no manual configuration needed.

1

Discover

Agent encounters a complex task and writes a multi-step script or automation to solve it.

2

Draft

Saves the solution as an emergent skill in workspace/emergent/ with trigger words and documentation.

3

Promote

After user confirmation, the skill is promoted to the permanent skills/ directory — available forever.

F9 UI recording

Press F9 to record your UI actions (clicks, keystrokes, navigation). The recording is saved and can be turned into a reusable skill the agent can replay.

YAML frontmatter format

Each skill is a Markdown file with name, description, triggers, and step-by-step instructions. Easy to read, edit, and share.
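A hypothetical skill file in that shape (field names follow the description above; the concrete files in your skills/ directory are authoritative):

```markdown
---
name: export-weekly-report
description: Collect this week's notes and export a PDF report
triggers: weekly report, export report
---

1. Search memory for notes tagged this week.
2. Summarize them into a single Markdown document.
3. Convert the document to PDF and save it to the desktop.
```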

Auto-matched by trigger words

Skills are scanned at prompt build time and matched to user requests via trigger keywords. The agent checks skills before improvising.
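Trigger matching reduces to scanning each skill's frontmatter for phrases that appear in the request. A simplified sketch (a toy parser; Substrate's real loader may differ):

```python
def parse_frontmatter(text):
    """Extract key: value pairs from a '---'-delimited YAML header."""
    meta = {}
    lines = text.splitlines()
    if lines and lines[0].strip() == "---":
        for line in lines[1:]:
            if line.strip() == "---":
                break
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def matching_skills(request, skill_texts):
    """Return names of skills whose trigger phrases appear in the request."""
    hits = []
    for text in skill_texts:
        meta = parse_frontmatter(text)
        triggers = [t.strip() for t in meta.get("triggers", "").split(",")]
        if any(t and t in request.lower() for t in triggers):
            hits.append(meta.get("name"))
    return hits

SKILL = """---
name: export-report
triggers: weekly report, export report
---
Steps go here.
"""
print(matching_skills("make my weekly report please", [SKILL]))
```

Running the scan at prompt build time means matched skills are injected into context before the model answers, so the agent consults them instead of improvising.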

Download

Get Substrate

One-click installer for Windows. Python dependencies are installed automatically on first launch.

Version
v1.2.0
Latest stable release
Requirements
Python 3.10+
Windows 10/11 · 64-bit
Recommended
Ollama
For local LLM support

macOS & Linux builds coming soon. In the meantime, use the developer setup.

Developer Setup

Build from source
in 4 steps

For contributors and developers who want to modify, extend, or build Substrate from the repository.

1

Clone

$ git clone https://github.com/propagationhouse/substrate.git && cd substrate
2

Install

$ setup.bat # Creates venv, installs Python + Node dependencies
3

Configure

$ copy config.example.json config.json # Add your API keys to config.json — all available models load automatically
4

Launch

$ start.bat # Or: python proxy_server.py & open http://localhost:8765/ui

Ready to give your
desktop an AI brain?

Substrate is free for personal use, source-available, and runs entirely on your machine. Your data stays local. Your agent stays yours.