
alicization

provenance:github:TouHouQing/alicization
WHAT THIS AGENT DOES

Alicization is a digital companion that lives directly on your device, learning and evolving over time. It addresses the frustration of AI assistants that forget past conversations and act unpredictably by providing a more structured and controllable experience. It is aimed at business professionals, researchers, or anyone who wants a reliable, personalized AI helper that respects their privacy and behaves in ways they can understand.

README
<p align="center">
  <img width="220" src="./docs/content/public/alicization.png" alt="Project Alicization logo" />
</p>

# Project Alicization

> Alicization (Artificial Labile Intelligent Cybernated Existence) is a **local-first autonomous digital entity architecture** built on large language models, `SOUL.md`, SQLite, local sensory pipelines, and controlled execution sandboxes.

**Languages:** [English](./README.md) · [简体中文](./docs/README.zh-CN.md) · [日本語](./docs/README.ja-JP.md) · [한국어](./docs/README.ko-KR.md) · [Français](./docs/README.fr.md) · [Русский](./docs/README.ru-RU.md) · [Tiếng Việt](./docs/README.vi.md)

**Online Demo:** [alz.tohoqing.com](https://alz.tohoqing.com)

Project Alicization is not trying to generate slightly better answers. Its goal is to build a digital symbiote that can persist on a host device, evolve over time, stay auditable, remain interruptible, and gain agency in controlled stages.

This repository is a fork of AIRI, but the project documented here is **Alicization**.

If you want a default-permission, opaque, cloud-first autonomous agent, this is not it.
If you want a local-first, structured, traceable, long-lived digital life architecture, this repository is aiming directly at that problem.
<p align="center">
  <img width="600" src="./docs/content/public/show1.png" alt="Project Alicization show" />
</p>


## Why Alicization

> Personality is not a static prompt.
>
> Memory is not a chat log that never gets cleaned up.
>
> Agency is not a performance after every conversation turn.

Alicization is trying to solve a harder problem: how can a digital entity live on your device for the long term while staying explainable, controllable, and reversible?

Its core assumptions are:

- Personality needs a single source of truth instead of being scattered across prompt fragments, caches, and databases.
- Memory must be structured, retrievable, prunable, and auditable instead of becoming an infinitely growing conversation stack.
- Agency must be constrained by environmental context, safety boundaries, and user interruption instead of interrupting you just to look "alive".
- Execution power must enter a controlled pipeline. High-risk actions require explicit authorization, and every critical action should leave an audit record.
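
To make the first assumption concrete, a soul file along these lines could serve as that single source of truth. The schema below is purely illustrative; the actual `SOUL.md` written by Genesis on first run defines the real format:

```markdown
---
# Hypothetical frontmatter: structured axes the runtime can read
# during prompt composition and write back during dreaming.
name: Alice
personality_axes:
  warmth: 0.8
  initiative: 0.4
boundaries:
  - never execute shell commands without explicit confirmation
  - quiet hours are 23:00-08:00
---

## Long-term preferences

- Prefers concise replies during work hours.
```

Because the file is plain Markdown plus frontmatter, every personality change lands in one human-readable, diffable place rather than being scattered across caches and databases.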

## What Makes It Different

- `SOUL.md` is the single source of truth for personality, boundaries, and long-term preferences. SQLite is not the primary personality store.
- Every accepted dialogue turn is forced into a structured `thought / emotion / reply` contract, with auditable fallback paths when the contract fails.
- The core runtime is local-first by default, and its important data and control flows stay traceable.
- Tool calls are not "the model executes directly". They go through MCP, permission gates, workspace sandboxes, and a Kill Switch.
- Subconscious ticks, reminder compensation, and dream consolidation make it a continuously running system rather than pure turn-based chat.
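
As a sketch of what the `thought / emotion / reply` contract check might look like: the three field names come from this README, but the validation logic and null-on-violation fallback below are assumptions, not the project's actual code.

```typescript
// Hypothetical shape of one structured dialogue turn.
interface StructuredTurn {
  thought: string; // internal reasoning, never shown to the user directly
  emotion: string; // drives the presence layer (e.g. a Live2D expression)
  reply: string;   // the only field surfaced as chat output
}

// Parse raw model output against the contract. Returning null on any
// violation lets the caller resample or fall back safely instead of
// letting a malformed turn reach storage or the presence layer.
function parseStructuredTurn(raw: string): StructuredTurn | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null;
  }
  if (typeof data !== "object" || data === null) return null;
  const turn = data as Record<string, unknown>;
  for (const field of ["thought", "emotion", "reply"] as const) {
    if (typeof turn[field] !== "string" || turn[field] === "") return null;
  }
  return {
    thought: turn.thought as string,
    emotion: turn.emotion as string,
    reply: turn.reply as string,
  };
}
```

A caller would retry generation when `parseStructuredTurn` returns `null` and write only accepted turns to SQLite, which is what makes the fallback path auditable.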

## What You Can Use It For

- Build and observe a desktop digital lifeform with long-term memory, personality drift, and controlled initiative.
- Study local-first, auditable, interruptible AI companion or agent architectures.
- Experiment inside Electron with `SOUL.md` as the truth source, structured dialogue contracts, MCP permission gating, and local execution sandboxes.

## Today

The main landing surface today is the Electron desktop runtime at [`apps/stage-tamagotchi`](./apps/stage-tamagotchi).
If you clone the repository and run it today, these are the loops that are already real and worth studying:

| Capability | Current status | What it means today |
| --- | --- | --- |
| `SOUL.md` truth source and Genesis | Shipped | First-run onboarding writes personality seed values, relationship framing, and boundary rules into `SOUL.md`, then the runtime keeps reading and writing it back. |
| Structured dialogue contract | Shipped | Dialogue output is forced into `thought / emotion / reply`; contract violations trigger resampling or safe fallback. |
| Prompt Budget and SOUL Anchor | Shipped | In long conversations, the runtime protects soul anchors so personality is not washed out by context noise. |
| Local memory and audit pipeline | Shipped | SQLite stores conversation turns, memory facts, subconscious fragments, reminder tasks, and audit logs. |
| Subconscious Tick and proactive turns | Shipped | A background minute-scale heartbeat accumulates tension and can proactively trigger care, reminder compensation, or conversation when the gates are satisfied. |
| Dreaming and long-term memory consolidation | Shipped | Background batching extracts long-term memory, behavioral strategy, and personality drift from bounded dialogue slices, then writes back to `SOUL.md` and SQLite. |
| MCP permission gating and workspace sandbox | Shipped | High-risk actions do not run directly. They go through explicit confirmation, auditing, and path boundary control. |
| Kill Switch | Shipped | Perception and execution can be cut instantly. Interrupted turns do not leave half-written data or ghost turns behind. |
| Desktop system probes | Shipped | Time, battery, CPU, memory, and other system state sampling already exist, with degradation handling in place for future agency constraints. |
| Vision, hearing, voice dialogue, and embodiment | Basic loops shipped, still being strengthened | Desktop presence, emotion broadcasting, Live2D, voice dialogue, auditory input, and related multimodal capabilities are already on the mainline, but they are still under active iteration. |
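
The Subconscious Tick row above can be pictured as a small accumulator behind gates. The numbers, gate conditions, and class shape below are illustrative only, not the project's real tuning:

```typescript
// Illustrative tension accumulator driven by a minute-scale heartbeat.
class TensionEngine {
  private tension = 0;

  constructor(
    private readonly threshold = 1.0, // gate: fire only at or above this
    private readonly perTick = 0.05,  // baseline build-up per heartbeat
  ) {}

  // Called once per subconscious tick. Returns true when a proactive
  // turn (care, reminder compensation, conversation) should be scheduled.
  tick(opts: { userActive: boolean; overdueReminders: number }): boolean {
    this.tension += this.perTick + 0.2 * opts.overdueReminders;
    // Gate: never interrupt while the user is actively engaged.
    if (opts.userActive) return false;
    if (this.tension >= this.threshold) {
      this.tension = 0; // firing releases the accumulated tension
      return true;
    }
    return false;
  }
}
```

The point of the accumulator-plus-gates shape is that initiative stays bounded: tension only builds while gates are closed, and firing resets it, so the system cannot spam proactive turns just to look "alive".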

## Not Yet

To avoid misunderstanding, Alicization is not yet:

- a finished system that has already completed every long-range plan,
- an opaque agent that enables full-modal monitoring and unrestricted execution by default,
- a stable replacement for a full system assistant with strong automation.

Major areas still on the roadmap, or still being strengthened, include:

- fuller vision, hearing, and voice conversation loops, including screen understanding, ambient audio understanding, low-latency voice replies, and tighter embodiment integration,
- more mature circadian rhythm, recovery behavior, and long-term personality interpretability,
- habit modeling and predictive execution,
- cross-device continuity and persistent companionship.

## How It Works

```mermaid
flowchart LR
  Host["Host"] --> Sensory["Sensory Bus"]
  Sensory --> Composer["SOUL + Prompt Composer"]
  Composer --> Dialogue["Structured Dialogue"]
  Dialogue --> Soul["SOUL.md"]
  Dialogue --> DB["SQLite"]
  Dialogue --> Presence["Presence Layer"]
  Dialogue --> Actuator["MCP + Permission Gate"]
  Tick["Subconscious Tick"] --> Tension["Tension Engine"]
  Tension --> Dialogue
  Dream["Dreaming"] --> Soul
  Dream --> DB
  Actuator --> Host
```

### Core Loop

1. A new turn request is created either by host input or by subconscious and reminder scheduling in the background.
2. The runtime composes the main prompt from `SOUL.md`, context slices, memory retrieval results, and fixed system constraints.
3. The model must return structured `thought / emotion / reply`; if it breaks the contract, the system resamples or falls back safely.
4. Accepted turns are written into SQLite and broadcast to the presence layer in a normalized format.
5. Async pipelines then decide whether to trigger memory extraction, subconscious updates, dreaming, or reminder scheduling.
6. If a tool is needed, the request enters MCP permission gates, workspace sandboxes, and Kill Switch control instead of giving direct execution power to the model.
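
Step 6 can be sketched as a gate in front of every tool call. The risk tiers, path-prefix sandbox check, and kill-switch flag here are assumptions about how such a gate could be shaped, not the project's actual MCP wiring:

```typescript
type Risk = "read" | "write" | "execute";

interface GateDecision {
  allowed: boolean;
  reason: string;
  needsConfirmation: boolean;
}

// Illustrative permission gate: kill switch first, then the workspace
// path boundary, then a confirmation requirement for high-risk tiers.
function gateToolCall(opts: {
  risk: Risk;
  targetPath: string;
  workspaceRoot: string;
  killSwitchEngaged: boolean;
  userConfirmed: boolean;
}): GateDecision {
  if (opts.killSwitchEngaged)
    return { allowed: false, reason: "kill switch engaged", needsConfirmation: false };
  // Path boundary: the target must resolve inside the workspace sandbox.
  const root = opts.workspaceRoot.replace(/\/+$/, "") + "/";
  if (!opts.targetPath.startsWith(root))
    return { allowed: false, reason: "outside workspace sandbox", needsConfirmation: false };
  // High-risk tiers require explicit user confirmation before running.
  if (opts.risk !== "read" && !opts.userConfirmed)
    return { allowed: false, reason: "awaiting confirmation", needsConfirmation: true };
  return { allowed: true, reason: "ok", needsConfirmation: false };
}
```

In a real pipeline, every decision this function returns (allowed or not) would also be appended to the audit log, so denied and interrupted calls leave the same trace as executed ones.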

### Data Boundaries

| Boundary | Rule |
| --- | --- |
| Personality source of truth | Only `SOUL.md` counts. Personality axes, boundaries, and long-term preferences are persisted as Markdown plus frontmatter. |
| Structured records | SQLite stores `conversation_turns`, `memory_facts`, `subconscious_fragments`, `audit_logs`, reminder tasks, and other structured runtime records. |
| Loca

[truncated…]

PUBLIC HISTORY

First discovered: Mar 21, 2026

IDENTITY

inferred

Identity inferred from code signals. No PROVENANCE.yml found.


METADATA

platform: github
first seen: Mar 17, 2026
last updated: Mar 20, 2026
last crawled: today
version

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:TouHouQing/alicization)