# AI Research Analyst
An autonomous research agent that conducts multi-stage research on any topic using LangGraph. The system plans search strategies, gathers information from multiple sources, verifies facts, identifies contradictions, and generates structured reports with citations.
## Demo
**Try the live demo:** [https://marcin-oleszczyk.pl/ai-research-analyst](https://marcin-oleszczyk.pl/ai-research-analyst)
## Features
- **Multi-Node LangGraph Workflow** — Intake → Planning → Search → Analysis → Quality Gate → Synthesis → Report
- **Session RAG** — Sources indexed in ChromaDB per session for semantic retrieval and contradiction detection
- **Quality Loop** — Iterative research expansion until coverage thresholds are met
- **Human-in-the-Loop** — Plan approval checkpoints for deep dive research
- **Real-time Streaming** — SSE-based progress updates
- **Research Depth Modes** — Quick scan, standard, and deep dive options
## Architecture
```
┌────────┐   ┌──────────┐   ┌────────┐   ┌──────────┐   ┌──────────────┐
│ Intake │ → │ Planning │ → │ Search │ → │ Analysis │ → │ Quality Gate │
└────────┘   └──────────┘   └───▲────┘   └──────────┘   └───┬──────┬───┘
                                │                           │      │
                                │     (expand_research)     │      │ (proceed)
                                └───────────────────────────┘      ▼
                                                             ┌───────────┐   ┌────────┐
                                                             │ Synthesis │ → │ Report │
                                                             └───────────┘   └────────┘
```
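The loop in the diagram is the interesting part: after Analysis, the quality gate either routes back to Search for another round or proceeds to Synthesis. Ignoring LangGraph specifics, the control flow can be sketched in plain Python; the node names, coverage heuristic, and thresholds below are illustrative stand-ins, not the repository's actual implementation:

```python
# Sketch of the workflow's control flow. The real project wires these
# stages as LangGraph nodes; here they are plain functions so the
# routing logic is easy to follow. Thresholds are made up for the demo.
COVERAGE_THRESHOLD = 0.8
MAX_ITERATIONS = 3

def quality_gate(state: dict) -> str:
    """Decide whether to expand research or proceed to synthesis."""
    if state["coverage"] >= COVERAGE_THRESHOLD:
        return "proceed"
    if state["iterations"] >= MAX_ITERATIONS:
        return "proceed"  # stop expanding, synthesize what we have
    return "expand_research"

def run_pipeline(initial_coverage: float) -> dict:
    state = {"coverage": initial_coverage, "iterations": 0,
             "path": ["intake", "planning"]}
    while True:
        state["path"] += ["search", "analysis"]
        state["iterations"] += 1
        if quality_gate(state) == "proceed":
            break
        # Each expansion round is assumed to improve coverage somewhat.
        state["coverage"] = min(1.0, state["coverage"] + 0.3)
    state["path"] += ["synthesis", "report"]
    return state
```

With a low starting coverage, `run_pipeline(0.3)` passes through Search and Analysis several times before reporting; with high coverage it proceeds after a single pass.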
## Tech Stack
| Component | Technology |
|-----------|------------|
| Workflow Orchestration | LangGraph |
| LLM | OpenAI GPT-4o-mini |
| Web Search | Tavily API |
| Vector Store | ChromaDB |
| API | FastAPI |
| Deployment | Docker, AWS App Runner |
## Installation
### Prerequisites
- Python 3.12+
- API keys: OpenAI, Tavily
### Setup
```bash
# Clone repository
git clone https://github.com/Oleksy1121/ai-research-analyst.git
cd ai-research-analyst
# Install dependencies
pip install -e .
# Configure environment
cp .env.example .env
# Edit .env with your API keys
# Run server
uvicorn src.api.app:app --reload
```
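The `.env` file only needs the variables listed in the Configuration section below; a minimal example with placeholder values:

```
OPENAI_API_KEY=sk-...
TAVILY_API_KEY=tvly-...
# Optional
LOG_LEVEL=INFO
LLM_MODEL=gpt-4o-mini
```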
## Usage
### CLI
```bash
# Standard research
python -m src.main "What are the latest developments in quantum computing?"
# Deep dive (requires plan approval)
python -m src.main "Compare renewable energy policies in EU vs US" --depth deep_dive
# Quick scan
python -m src.main "What is LangGraph?" --depth quick_scan
```
### API
```bash
# Start research
curl -X POST http://localhost:8000/api/v1/research/start \
-H "Content-Type: application/json" \
-d '{"query": "AI trends 2024", "depth": "standard"}'
# Stream progress (SSE)
curl -N http://localhost:8000/api/v1/research/{session_id}/stream
# Get report
curl http://localhost:8000/api/v1/research/{session_id}/report
```
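The `-N` flag keeps curl's connection open so SSE events print as they arrive. If you consume the stream from Python instead, each event is a block of `data:` lines separated by a blank line; a minimal stdlib parser is shown below (the payload fields `node` and `status` are a guess at the event shape, not the API's documented schema):

```python
import json

def parse_sse(raw: str):
    """Yield decoded JSON payloads from raw SSE text.

    SSE events are blocks of lines separated by a blank line; each
    payload line starts with 'data:'. This ignores event/id fields.
    """
    for block in raw.split("\n\n"):
        data_lines = [
            line[len("data:"):].strip()
            for line in block.splitlines()
            if line.startswith("data:")
        ]
        if data_lines:
            yield json.loads("\n".join(data_lines))

# Example stream as the server might send it (fields are hypothetical):
raw = ('data: {"node": "search", "status": "running"}\n\n'
       'data: {"node": "report", "status": "done"}\n\n')
events = list(parse_sse(raw))
```

In a real client you would read the HTTP response line by line and feed completed blocks to the parser rather than buffering the whole stream.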
## API Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/v1/health` | Health check |
| POST | `/api/v1/research/start` | Start new research session |
| GET | `/api/v1/research/{id}/status` | Get session status |
| GET | `/api/v1/research/{id}/stream` | SSE progress stream |
| POST | `/api/v1/research/{id}/approve` | Approve/modify research plan |
| GET | `/api/v1/research/{id}/report` | Get final report |
| GET | `/api/v1/research/{id}/plan` | Get pending plan for approval |
## Project Structure
```
ai-research-analyst/
├── src/
│ ├── api/ # REST API (FastAPI)
│ │ ├── app.py
│ │ ├── limiter.py
│ │ ├── routes.py
│ │ ├── schemas.py
│ │ ├── session_manager.py
│ │ └── streaming.py
│ ├── graph/ # LangGraph workflow
│ │ ├── builder.py
│ │ ├── models.py
│ │ ├── state.py
│ │ ├── routing.py
│ │ └── nodes/
│ │ ├── intake.py
│ │ ├── planning.py
│ │ ├── search.py
│ │ ├── analysis.py
│ │ ├── quality.py
│ │ ├── synthesis.py
│ │ └── report.py
│ ├── rag/ # RAG / Vector Store
│ ├── tools/ # External tools (Tavily)
│ ├── utils/ # Helper functions
│ ├── config.py
│ └── main.py # CLI entry point
├── tests/
├── .github/workflows/ # CI/CD (manual deploy to AWS)
├── Dockerfile
├── pyproject.toml
└── README.md
```
## Configuration
| Variable | Description | Required |
|----------|-------------|----------|
| `OPENAI_API_KEY` | OpenAI API key | Yes |
| `TAVILY_API_KEY` | Tavily search API key | Yes |
| `LOG_LEVEL` | Logging level (INFO, DEBUG) | No |
| `LLM_MODEL` | OpenAI model name | No |
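A typical pattern for reading these variables at startup is to fail fast on the required keys and fall back to defaults for the rest. This is a sketch only; the project's `src/config.py` may differ, and the default values here are assumptions:

```python
import os

REQUIRED = ("OPENAI_API_KEY", "TAVILY_API_KEY")
DEFAULTS = {"LOG_LEVEL": "INFO", "LLM_MODEL": "gpt-4o-mini"}

def load_config(env=None):
    """Collect settings from the environment, raising immediately
    if a required key is missing and applying defaults otherwise."""
    env = os.environ if env is None else env
    missing = [key for key in REQUIRED if not env.get(key)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    config = {key: env[key] for key in REQUIRED}
    for key, default in DEFAULTS.items():
        config[key] = env.get(key, default)
    return config
```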
## Research Depth Modes
| Mode | Questions | Sources | Plan Approval |
|------|-----------|---------|---------------|
| `quick_scan` | 2 | 1-2 | No |
| `standard` | 4 | 3-5 | No |
| `deep_dive` | 6 | 5+ | Yes |
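The table above maps naturally to a small settings structure in code; the sketch below mirrors those values (the class and field names are illustrative, not the repository's actual types):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DepthSettings:
    questions: int           # research questions generated at planning
    min_sources: int         # lower bound on sources gathered
    requires_approval: bool  # human-in-the-loop plan checkpoint

# Values taken from the table above; only deep_dive pauses for approval.
DEPTH_MODES = {
    "quick_scan": DepthSettings(questions=2, min_sources=1, requires_approval=False),
    "standard":   DepthSettings(questions=4, min_sources=3, requires_approval=False),
    "deep_dive":  DepthSettings(questions=6, min_sources=5, requires_approval=True),
}
```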
## Deployment
The project includes a GitHub Actions workflow for manual deployment to AWS App Runner:
```bash
# Trigger deployment manually via GitHub Actions UI
# or use gh CLI:
gh workflow run deploy.yml
```
## License
MIT