
persistent-ai-memory

provenance:github:savantskie/persistent-ai-memory
WHAT THIS AGENT DOES

This system helps AI assistants remember important details over time. It works by intelligently capturing key information from conversations and storing it in a way that the AI can easily access later. This solves the problem of AI assistants forgetting past interactions, leading to more consistent and helpful responses. Businesses using AI for customer service, sales, or internal support would find this particularly valuable. The system’s tight integration with OpenWebUI and its ability to securely manage memory for different users make it a powerful and versatile tool.

README
# Persistent AI Memory System v1.5.0

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![Release](https://img.shields.io/badge/release-v1.5.0-green.svg)](https://github.com/savantskie/persistent-ai-memory)

> 🌟 **Community Call to Action**: Have you made improvements or additions to this system? Submit a pull request! Every contributor will be properly credited in the final product.

**GITHUB LINK** - https://github.com/savantskie/persistent-ai-memory.git

---

## 🆕 What's New in v1.5.0 (March 28, 2026)

**Major Architectural Rewrite: OpenWebUI-Native Integration**
- ✅ **OpenWebUI-first design** - AI Memory System now deeply integrated into OpenWebUI via plugin (primary deployment method)
- ✅ **Advanced short-term memory** - sophisticated memory extraction, filtering, and injection for chat conversations
- ✅ **User ID & Model ID isolation** - strict multi-tenant support with configurable enforcement for security and tracking
- ✅ **Complete system portability** - all hardcoded paths replaced with environment variables (works anywhere)
- ✅ **Generic class names** - removed all Friday-specific branding (FridayMemorySystem → AIMemorySystem)
- ✅ **Production-ready** - enhanced error handling, validation, and logging throughout

**Upgrade from v1.1.0:** See [CHANGELOG.md](CHANGELOG.md) for migration guide.

---

## 📚 Documentation Guide

**Choose your starting point:**

| **I want to...** | **Read this** | **Time** |
|---|---|---|
| Get started quickly | [REDDIT_QUICKSTART.md](REDDIT_QUICKSTART.md) | 5 min |
| Install the system | [INSTALL.md](INSTALL.md) | 10 min |
| Understand configuration | [CONFIGURATION.md](CONFIGURATION.md) | 15 min |
| Check system health | [TESTING.md](TESTING.md) | 10 min |
| Use the API | [API.md](API.md) | 20 min |
| Deploy to production | [DEPLOYMENT.md](DEPLOYMENT.md) | 15 min |
| Fix a problem | [TROUBLESHOOTING.md](TROUBLESHOOTING.md) | varies |
| See examples | [examples/README.md](examples/README.md) | 15 min |

---

## 🚀 Quick Start (30 seconds)

### Installation
```bash
# Works on Linux, macOS, and Windows (Command Prompt or PowerShell)
pip install git+https://github.com/savantskie/persistent-ai-memory.git
```

### First Validation
```bash
python tests/test_health_check.py
```

Expected output:
```
[✓] Imported ai_memory_core
[✓] Found embedding_config.json
[✓] System health check passed
[✓] All health checks passed! System is ready to use.
```

---

## 💡 What This System Does

**Persistent AI Memory** provides sophisticated memory management for AI assistants:

- 📝 **OpenWebUI Short-Term Memory Plugin** - Intelligent memory extraction and injection directly in chat conversations
- 🧠 **Persistent Memory Storage** - SQLite databases for structured, searchable long-term memories
- 🔍 **Semantic Search** - Vector embeddings for intelligent memory retrieval and relevance scoring
- 💬 **Conversation Tracking** - Multi-platform conversation history capture with context linking
- 🎯 **Smart Memory Filtering** - Advanced blacklist/whitelist and relevance scoring to inject only what matters
- 🧮 **Tool Call Logging** - Track and analyze AI tool usage patterns and performance
- 🔄 **Self-Reflection** - AI insights into its own behavior and memory patterns
- 📱 **Multi-Platform Support** - Works with OpenWebUI (primary), LM Studio, VS Code, and any MCP-compatible assistant
- 🎨 **MCP Server** - Standard Model Context Protocol for cross-platform integration

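The semantic search feature above relies on vector embeddings and relevance scoring. As a toy illustration of that idea (this is not the project's actual code; the real system uses a configurable embedding provider and SQLite-backed storage), memories and the query are embedded as vectors and ranked by cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_memories(query_vec, memories):
    """memories: list of (text, embedding) pairs -> (score, text), best first."""
    scored = [(cosine(query_vec, vec), text) for text, vec in memories]
    return sorted(scored, reverse=True)
```

In the real system the embeddings come from Ollama, LM Studio, or OpenAI (see the provider table below), but the retrieval principle is the same: only memories scoring above a relevance threshold are injected into the conversation.
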
---

## ⚙️ System Architecture

### Five Specialized Databases
```
~/.ai_memory/
├── conversations.db      # Chat messages and conversation history
├── ai_memories.db       # Curated long-term memories
├── schedule.db          # Appointments and reminders
├── mcp_tool_calls.db    # Tool usage logs and reflections
└── vscode_project.db    # Development session context
```

### Configuration Files
```
~/.ai_memory/
├── embedding_config.json   # Embedding provider setup
└── memory_config.json      # Memory system defaults
```

---

## 🎯 Core Features

### Memory Operations
- `store_memory()` - Save important information persistently
- `search_memories()` - Find memories using semantic search
- `list_recent_memories()` - Get recent memories without searching

### Conversation Tracking
- `store_conversation()` - Store user/assistant messages
- `search_conversations()` - Search through conversation history
- `get_conversation_history()` - Retrieve chronological conversations

### Tool Integration
- `log_tool_call()` - Record MCP tool invocations
- `get_tool_call_history()` - Analyze tool usage patterns
- `reflect_on_tool_usage()` - Get AI insights on tool patterns

### System Health
- `get_system_health()` - Check databases, embeddings, providers
- Built-in health check: `python tests/test_health_check.py`

---

## 🔌 Embedding Providers

Choose your embedding service:

| Provider | Speed | Quality | Cost |
|----------|-------|---------|------|
| **Ollama** (local) | ⚡⚡ | ⭐⭐⭐ | FREE |
| **LM Studio** (local) | ⚡ | ⭐⭐⭐⭐ | FREE |
| **OpenAI** (cloud) | ⚡⚡ | ⭐⭐⭐⭐⭐ | $$$ |

See [CONFIGURATION.md](CONFIGURATION.md) for setup instructions for each provider.

---

## 🔒 Important: User ID & Model ID Requirements

**All memory operations require `user_id` and `model_id` parameters for data isolation and tracking.**

This ensures:
- ✅ **Multi-user safety** - Each user's memories are completely isolated
- ✅ **Model tracking** - Different AI models can maintain separate memories
- ✅ **Audit trail** - All operations are traceable to the user and model

### Configuration Options

By default, `user_id` and `model_id` are **required**. You can change this in `memory_config.json`:

```json
{
  "tool_requirements": {
    "require_user_id": true,
    "require_model_id": true,
    "default_user_id": "default_user",
    "default_model_id": "default_model"
  }
}
```

- `require_user_id/require_model_id: true` → Strict mode (recommended for production, security-focused, or multi-user systems)
- `require_user_id/require_model_id: false` → Use defaults instead (simpler for single-user/single-model setups)
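
The two modes can be sketched as a small resolution function. This is illustrative only (the project's actual resolution logic lives in the package; only the `tool_requirements` keys shown in the JSON above are taken from the docs):

```python
def resolve_ids(config, user_id=None, model_id=None):
    """Resolve user_id/model_id against a memory_config.json-style dict.

    Strict mode (require_* true) rejects missing values; otherwise the
    configured defaults are substituted.
    """
    req = config.get("tool_requirements", {})
    if user_id is None:
        if req.get("require_user_id", True):
            raise ValueError("user_id is required in strict mode")
        user_id = req.get("default_user_id", "default_user")
    if model_id is None:
        if req.get("require_model_id", True):
            raise ValueError("model_id is required in strict mode")
        model_id = req.get("default_model_id", "default_model")
    return user_id, model_id
```

With `require_user_id` and `require_model_id` set to `false`, a call with no IDs resolves to `("default_user", "default_model")`; in strict mode the same call is rejected, which is the behavior you want in multi-user deployments.
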

### For AI Assistants: Auto-Fill in System Prompt

To make your AI automatically provide these values, add this to its **system prompt**:

```
When using memory system tools (store_memory, search_memories, etc.), 
ALWAYS include these parameters:
- user_id='your_user_identifier' (e.g., 'nate_user_1')
- model_id='your_model_name' (e.g., 'llama-2:7b' or 'gpt-4')

If the actual values are unknown, use safe defaults:
- user_id='default_user'
- model_id='default_model'

This isolates memories per user and tracks which AI model generated each memory.
```

### Examples

**With user_id and model_id:**
```python
# Memories are stored with full isolation
await system.store_memory(
    "User likes Python", 
    user_id="alice", 
    model_id="gpt-4"
)

# Search returns only this user's memories for this model
results = await system.search_memories(
    "programming", 
    user_id="alice", 
    model_id="gpt-4"
)
```

**Without strict requirements (if disabled):**
```python
# Uses defaults from memory_config.json
await system.store_memory("User likes Python")  # user_id="default_user", model_id="default_model"
```

See [API.md](API.md) for complete parameter documentation.

---

## 🔄 Integration Methods (Choose One)

### 1. OpenWebUI Plugin (Recommended)
**Primary deployment method** - Deep integration for sophisticated memory management:
- Deploy `ai_memory_short_term.py` as an OpenWebUI Function
- Automatically extracts memories from conversations
- Intelligently injects relevant memories before AI response
- Configurable memory scoring, filtering, and injection preferences
- No additional setup required beyond copying file into OpenWebUI Functions editor

[truncated…]

PUBLIC HISTORY

First discovered: Mar 29, 2026

IDENTITY

inferred

Identity inferred from code signals. No PROVENANCE.yml found.


METADATA

platform: github
first seen: Aug 2, 2025
last updated: Mar 28, 2026
last crawled: 19 days ago
version:

README BADGE

Add to your README:

![Provenance](https://getprovenance.dev/api/badge?id=provenance:github:savantskie/persistent-ai-memory)