Merge pull request #1586 from Mai-with-u/r-dev

麦麦1.0版本
This commit is contained in:
SengokuCola
2026-04-18 00:22:09 +08:00
committed by GitHub
1211 changed files with 259383 additions and 61093 deletions

.coderabbit.yaml Normal file

@@ -0,0 +1,49 @@
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
language: "zh-CN"
reviews:
profile: "chill"
request_changes_workflow: false
high_level_summary: true
high_level_summary_placeholder: "@coderabbitai summary"
poem: false
review_status: true
commit_status: true
collapse_walkthrough: false
auto_review:
enabled: true
drafts: false
base_branches:
- "main"
- "dev"
path_filters:
- "!logs/**"
- "!data/**"
- "!depends-data/**"
- "!dashboard/dist-electron/**"
- "!dashboard/node_modules/**"
- "!**/*.log"
- "!**/*.jsonl"
- "!**/*.db"
- "!**/*.db-shm"
- "!**/*.db-wal"
- "!**/*.bak"
path_instructions:
- path: "src/**/*.py"
instructions: |
This project uses Ruff for linting and formatting, with a 120-character line limit and double-quoted strings.
Please pay particular attention to:
- Correctness of async code (whether async/await is used appropriately)
- Whether exception handling covers edge cases
- Import order must follow project conventions: standard-library/third-party imports first, local modules after; same-directory local modules use relative imports, cross-directory imports use absolute imports starting with `from src`
- Avoid hard-coded sensitive information (API keys, passwords, etc.)
- path: "plugins/**/*.py"
instructions: |
Plugin directory: watch for correct use of the plugin interfaces and for dependency isolation from core modules.
- path: "*.toml"
instructions: |
Configuration files: check field validity and formatting, and take care not to leak sensitive default values.
chat:
auto_reply: true


@@ -1,6 +1,7 @@
<!-- Read before submitting -->
- ✅ Bug fixes directly related to main are accepted; submit them to the dev branch
- New-feature PRs must be discussed in an issue in advance, otherwise they will not be merged
- 🌐 i18n reminder: except for bootstrap changes or urgent fixes, do not treat non-`zh-CN` translations as a regular GitHub editing surface; routine translations flow back via Crowdin -> `l10n_*` PRs. See `docs/i18n.md`
# Please fill in the following
(Delete the space inside the brackets and replace it with a **lowercase x**)

.github/workflows/crowdin-sync.yml vendored Normal file

@@ -0,0 +1,77 @@
name: Crowdin Sync
on:
workflow_dispatch:
schedule:
- cron: "17 */6 * * *"
push:
branches:
- main
- r-dev
paths:
- "crowdin.yml"
- "locales/zh-CN/*.json"
- "prompts/zh-CN/**/*.prompt"
- "dashboard/src/i18n/locales/zh.json"
permissions:
contents: write
pull-requests: write
jobs:
sync-current-branch:
if: github.event_name != 'schedule'
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
- name: Sync translations with Crowdin
uses: crowdin/github-action@v2
with:
config: crowdin.yml
upload_sources: true
upload_translations: false
download_translations: true
localization_branch_name: l10n_${{ github.ref_name }}
create_pull_request: true
pull_request_title: "chore(i18n): sync Crowdin translations"
pull_request_body: "Automated translation sync from Crowdin."
pull_request_base_branch_name: ${{ github.ref_name }}
commit_message: "chore(i18n): sync Crowdin translations"
env:
GITHUB_TOKEN: ${{ github.token }}
CROWDIN_PROJECT_ID: ${{ secrets.CROWDIN_PROJECT_ID }}
CROWDIN_PERSONAL_TOKEN: ${{ secrets.CROWDIN_PERSONAL_TOKEN }}
sync-scheduled-branches:
if: github.event_name == 'schedule'
runs-on: ubuntu-24.04
strategy:
fail-fast: false
matrix:
base_branch:
- main
- r-dev
steps:
- uses: actions/checkout@v4
with:
ref: ${{ matrix.base_branch }}
- name: Sync scheduled translations with Crowdin
uses: crowdin/github-action@v2
with:
config: crowdin.yml
skip_ref_checkout: true
upload_sources: true
upload_translations: false
download_translations: true
localization_branch_name: l10n_${{ matrix.base_branch }}
create_pull_request: true
pull_request_title: "chore(i18n): sync Crowdin translations"
pull_request_body: "Automated translation sync from Crowdin."
pull_request_base_branch_name: ${{ matrix.base_branch }}
commit_message: "chore(i18n): sync Crowdin translations"
env:
GITHUB_TOKEN: ${{ github.token }}
CROWDIN_PROJECT_ID: ${{ secrets.CROWDIN_PROJECT_ID }}
CROWDIN_PERSONAL_TOKEN: ${{ secrets.CROWDIN_PERSONAL_TOKEN }}

.github/workflows/i18n-validate.yml vendored Normal file

@@ -0,0 +1,38 @@
name: i18n Validate
on:
pull_request:
paths:
- "locales/**/*.json"
- "prompts/**/*.prompt"
- "dashboard/src/i18n/index.ts"
- "dashboard/src/i18n/locales/*.json"
- "scripts/i18n_validate.py"
- "src/common/i18n/**/*.py"
- "src/common/prompt_i18n.py"
- "src/prompt/prompt_manager.py"
push:
branches:
- main
- r-dev
paths:
- "locales/**/*.json"
- "prompts/**/*.prompt"
- "dashboard/src/i18n/index.ts"
- "dashboard/src/i18n/locales/*.json"
- "scripts/i18n_validate.py"
- "src/common/i18n/**/*.py"
- "src/common/prompt_i18n.py"
- "src/prompt/prompt_manager.py"
jobs:
validate:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Validate locale files
run: python scripts/i18n_validate.py
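The diff does not include `scripts/i18n_validate.py` itself. A minimal sketch of what such a locale check might do, comparing every locale against the `zh-CN` source (the directory layout and function name are assumptions, not the project's actual script):

```python
import json
from pathlib import Path


def check_locales(root: Path, source: str = "zh-CN") -> list[str]:
    """Report files or keys present in the source locale but missing elsewhere."""
    problems: list[str] = []
    src_dir = root / source
    for src_file in src_dir.glob("*.json"):
        src_keys = set(json.loads(src_file.read_text(encoding="utf-8")))
        for other_dir in root.iterdir():
            if not other_dir.is_dir() or other_dir.name == source:
                continue
            other_file = other_dir / src_file.name
            if not other_file.exists():
                problems.append(f"{other_dir.name}/{src_file.name}: missing file")
                continue
            other_keys = set(json.loads(other_file.read_text(encoding="utf-8")))
            for key in sorted(src_keys - other_keys):
                problems.append(f"{other_dir.name}/{src_file.name}: missing key {key!r}")
    return problems
```

A real validator would likely also check prompt files and the dashboard locales listed in the workflow's `paths` filter; this sketch only covers the flat JSON case.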


@@ -2,30 +2,52 @@
name: PR Precheck
on: [pull_request]
permissions:
contents: read
issues: write
jobs:
conflict-check:
runs-on: [self-hosted, Windows, X64]
runs-on: ubuntu-24.04
outputs:
conflict: ${{ steps.check-conflicts.outputs.conflict }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- name: Check Conflicts
id: check-conflicts
env:
BASE_REF: ${{ github.event.pull_request.base.ref }}
run: |
git fetch origin main
$conflicts = git diff --name-only --diff-filter=U origin/main...HEAD
if ($conflicts) {
echo "conflict=true" >> $env:GITHUB_OUTPUT
Write-Host "Conflicts detected in files: $conflicts"
} else {
echo "conflict=false" >> $env:GITHUB_OUTPUT
Write-Host "No conflicts detected"
}
shell: pwsh
set -euo pipefail
git fetch origin "$BASE_REF":"refs/remotes/origin/$BASE_REF" --depth=1
git config user.email "github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
if git merge --no-commit --no-ff "origin/$BASE_REF" > /tmp/precheck-merge.log 2>&1; then
echo "conflict=false" >> "$GITHUB_OUTPUT"
echo "No conflicts detected against origin/$BASE_REF"
git merge --abort > /dev/null 2>&1 || true
exit 0
fi
if git diff --name-only --diff-filter=U | grep -q .; then
echo "conflict=true" >> "$GITHUB_OUTPUT"
echo "Conflicts detected against origin/$BASE_REF:"
git diff --name-only --diff-filter=U
else
echo "conflict=false" >> "$GITHUB_OUTPUT"
echo "Merge check returned non-zero without unmerged files against origin/$BASE_REF"
cat /tmp/precheck-merge.log
fi
git merge --abort > /dev/null 2>&1 || true
shell: bash
labeler:
runs-on: [self-hosted, Windows, X64]
runs-on: ubuntu-24.04
needs: conflict-check
if: needs.conflict-check.outputs.conflict == 'true'
steps:


@@ -0,0 +1,98 @@
name: Publish WebUI Dist
on:
push:
branches:
- main
- dev
- r-dev
paths:
- "dashboard/**"
workflow_dispatch:
permissions:
contents: read
jobs:
build-and-publish:
runs-on: ubuntu-24.04
environment: webui
steps:
- uses: actions/checkout@v4
- name: Setup Bun
uses: oven-sh/setup-bun@v2
with:
bun-version: "1.2.0"
- name: Build dashboard
working-directory: dashboard
run: |
bun install
bun run build
- name: Prepare dist package
run: |
rm -rf .webui_dist_pkg
mkdir -p .webui_dist_pkg/maibot_dashboard/dist
BASE_VERSION=$(python -c "import json; print(json.load(open('dashboard/package.json'))['version'])")
if [ "${GITHUB_REF_NAME}" = "main" ]; then
WEBUI_VERSION="${BASE_VERSION}"
else
TODAY=$(date -u +%Y%m%d)
WEBUI_VERSION="${BASE_VERSION}.dev${TODAY}${GITHUB_RUN_NUMBER}"
fi
cat > .webui_dist_pkg/pyproject.toml <<EOF
[project]
name = "maibot-dashboard"
version = "${WEBUI_VERSION}"
description = "MaiBot WebUI static assets"
readme = "README.md"
requires-python = ">=3.10"
[build-system]
requires = ["setuptools>=80.9.0", "wheel"]
build-backend = "setuptools.build_meta"
[tool.setuptools]
include-package-data = true
[tool.setuptools.packages.find]
where = ["."]
include = ["maibot_dashboard"]
exclude = ["maibot_dashboard.dist*"]
[tool.setuptools.package-data]
maibot_dashboard = ["dist/**"]
EOF
cat > .webui_dist_pkg/README.md <<'EOF'
# MaiBot WebUI Dist
This package contains only the front-end build artifacts (dist) of the MaiBot WebUI.
EOF
cat > .webui_dist_pkg/maibot_dashboard/__init__.py <<'EOF'
from .resources import get_dist_path
__all__ = ["get_dist_path"]
EOF
cat > .webui_dist_pkg/maibot_dashboard/resources.py <<'EOF'
from pathlib import Path
def get_dist_path() -> Path:
return Path(__file__).parent / "dist"
EOF
cp -a dashboard/dist/. .webui_dist_pkg/maibot_dashboard/dist/
- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Build and publish
working-directory: .webui_dist_pkg
env:
PYPI_API_TOKEN: ${{ secrets.PYPI_API_TOKEN }}
run: |
python -m pip install --upgrade build twine
python -m build
python -m twine upload -u __token__ -p "$PYPI_API_TOKEN" dist/*


@@ -1,8 +1,18 @@
name: Ruff PR Check
on: [ pull_request ]
on:
pull_request:
paths:
- "*.py"
- "**/*.py"
- "pyproject.toml"
- "ruff.toml"
- ".ruff.toml"
- "setup.cfg"
- "tox.ini"
- ".pre-commit-config.yaml"
jobs:
ruff:
runs-on: [self-hosted, Windows, X64]
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
@@ -18,4 +28,3 @@ jobs:
- name: Run Ruff Format Check
run: ruff format --check --diff
shell: pwsh


@@ -8,17 +8,12 @@ on:
# - dev-refactor # e.g. match all branches starting with feature/
# # Add any other branches that should trigger this workflow
workflow_dispatch: # Allow the workflow to be triggered manually
branches:
- main
- dev
- dev-refactor
permissions:
contents: write
jobs:
ruff:
runs-on: [self-hosted, Windows, X64]
runs-on: ubuntu-24.04
# Key change: add a conditional check
# Ensure this runs only when event_name is 'push' and the push was not caused by a pull request
if: github.event_name == 'push' && !startsWith(github.ref, 'refs/pull/')

.gitignore vendored

@@ -1,4 +1,10 @@
data/
!pytests/A_memorix_test/data/
!pytests/A_memorix_test/data/benchmarks/
!pytests/A_memorix_test/data/benchmarks/long_novel_memory_benchmark.json
!pytests/A_memorix_test/data/real_dialogues/
!pytests/A_memorix_test/data/real_dialogues/private_alice_weekend.json
pytests/A_memorix_test/data/benchmarks/results/
data1/
mongodb/
NapCat.Framework.Windows.Once/
@@ -35,9 +41,11 @@ message_queue_content.bat
message_queue_window.bat
message_queue_window.txt
queue_update.txt
start_saka.bat
.env
.env.*
.cursor
start_all.bat
config/bot_config_dev.toml
config/bot_config.toml
config/bot_config.toml.bak
@@ -45,9 +53,31 @@ config/lpmm_config.toml
config/lpmm_config.toml.bak
template/compare/bot_config_template.toml
template/compare/model_config_template.toml
CLAUDE.md
MaiBot-Dashboard/
# CLAUDE.md
cloudflare-workers/
log_viewer/
dev/
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules/
dist/
dist-ssr/
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
result.json
# Byte-compiled / optimized / DLL files
__pycache__/
@@ -69,7 +99,6 @@ develop-eggs/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
@@ -255,6 +284,8 @@ logs
.vscode
/config/*
config/mcp_config.json
!config/mcp_config.json.template
config/old/bot_config_20250405_212257.toml
temp/
@@ -319,10 +350,6 @@ run_pet.bat
!/plugins
!/plugins/hello_world_plugin
!/plugins/emoji_manage_plugin
!/plugins/take_picture_plugin
!/plugins/deep_think
!/plugins/MaiBot_MCPBridgePlugin
!/plugins/ChatFrequency/
!/plugins/__init__.py
config.toml
@@ -331,3 +358,12 @@ interested_rates.txt
MaiBot.code-workspace
*.lock
actionlint
.sisyphus/
dist-electron/
packages/
## Claude Code and OMC data
.claude/
.omc/
/.venv312
/src/A_memorix/algorithm_redesign

AGENTS.md Normal file

@@ -0,0 +1,48 @@
# Code Conventions
## Import Conventions
When importing from external libraries, follow this order:
1. For standard-library and third-party imports:
- Imports that use the `from ... import ...` syntax come first.
- Imports that use the plain `import ...` syntax come after.
- Multiple `from ... import ...` items should be sorted **alphabetically**, **provided this does not cause import errors**.
- Multiple `import ...` items should likewise be sorted **alphabetically**, **provided this does not cause import errors**.
2. For local module imports:
- Modules in the same directory use relative imports; any order is fine as long as no import errors occur.
- Modules in other directories use absolute imports starting with `from src`; where possible (and without causing import errors), group together imports whose second-level directory is the same. The order of those groups is arbitrary.
3. Standard-library and third-party imports go before local module imports.
4. Separate the import blocks with a single blank line.
5. When refactoring existing code whose import order does not follow these rules, adjust it to comply.
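The ordering rules above might look like this in practice (the commented local imports are illustrative, not real project modules):

```python
# Standard-library / third-party block:
# `from ... import ...` statements first, items alphabetized by module name...
from collections import OrderedDict
from pathlib import Path
from typing import Any, Dict

# ...then plain `import ...` statements, also alphabetized.
import json
import sys

# Local imports would follow after one blank line, e.g. (hypothetical names):
#   from .sibling_module import helper          # same directory: relative import
#   from src.common.logger import get_logger    # cross-directory: absolute import
#   from src.common.utils import truncate_text  # same second-level package, kept together
```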
## Comment Conventions
1. Keep comments in good shape wherever possible.
2. When refactoring code that already has comments, keep the comments for code with the same functionality unless that code is removed (comments may be edited for accuracy, but should not be deleted).
3. When refactoring code that has no comments, add comments to any long or logically complex functional block to explain what it does.
## Type Annotation Conventions
1. When refactoring, code with the same functionality should keep its existing type annotations (they may be adjusted for accuracy, but should not be removed).
2. When refactoring code without annotations, add them to functions that are complex or take many parameters, to improve readability and maintainability. (Simple variables do not need annotations.)
3. Use the annotations from the `typing` module for parameterized generics.
- For example, use `List[int]` for a list of integers, and `Dict[str, Any]` for a dict with string keys and values of any type.
## Variable Conventions
1. When a variable or instance is known to be of a certain type (trust the type annotation first, unless you determine the annotation is wrong), there is no need to fall back with `or`.
- For example, `bot_nickname = (global_config.bot.nickname or "").strip()` can become `bot_nickname = global_config.bot.nickname.strip()`, provided `global_config.bot.nickname` is guaranteed to be a string.
## Class Attribute Conventions
1. Minimize the use of `getattr` and `setattr`, except when handling a dynamic class or monkeypatching in pytest.
2. When refactoring code that uses `getattr` or `setattr`, check whether the class instance actually has the attribute; if it does, replace the call with direct attribute access.
- Example: once `instance` is confirmed to have a `value` attribute, `v = getattr(instance, "value", "")` should become `v = instance.value`.
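A runnable illustration of the `getattr` replacement rule (the `Config` class here is hypothetical, used only to demonstrate the before/after):

```python
class Config:
    """Hypothetical class; `value` is always defined in __init__."""

    def __init__(self) -> None:
        self.value = "hello"


instance = Config()

# Before: defensive getattr with a fallback, even though `value` always exists
v = getattr(instance, "value", "")

# After: direct attribute access, since Config is known to define `value`
v = instance.value
print(v)  # hello
```

Direct access also lets type checkers and IDEs verify the attribute, which the string-based `getattr` call cannot.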
# Run / Debug / Build / Test / Dependencies
Prefer uv.
Dependencies are defined by pyproject.toml.
# Language Conventions
The project's preferred language is Simplified Chinese: comments, log output, and the WebUI should all target Simplified Chinese first.
# Configuration File Changes
If you need to change configuration, do not edit the actual bot_config.toml or model_config.toml; just update the config template and bump its version number. There is no need to create test files for config changes.
# WebUI Changes
Do not modify anything under dashboard/; that content is built by another repository.
# MaiBot Plugin Development Docs
https://github.com/Mai-with-u/maibot-plugin-sdk/blob/main/docs/guide.md

CLAUDE.md Symbolic link

@@ -0,0 +1 @@
AGENTS.md

Plan.md Normal file

@@ -0,0 +1,9 @@
Context is parsed when a message is received, instead of being carried in MaiMessage; it is registered directly when the message is registered
- [ ] Implement the `update_chat_context` method, focusing on `format_info`
1. **We expect not to check `accept_format` at send time**; ideally every message adapter proactively drops incompatible content on receive
2. When sending a message, check `accept_format`; if incompatible content exists, drop it
- [ ] Implement status_api
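A minimal sketch of the send-time `accept_format` check described above (all names other than `accept_format` are hypothetical placeholders, not the project's actual types):

```python
from dataclasses import dataclass, field


@dataclass
class Segment:
    """One piece of an outgoing message, e.g. text, image, voice."""
    format: str
    data: str


@dataclass
class ChatContext:
    """Per-chat context registered when the adapter connects."""
    accept_format: set[str] = field(default_factory=lambda: {"text"})


def filter_outgoing(segments: list[Segment], ctx: ChatContext) -> list[Segment]:
    """Drop segments whose format the target adapter cannot accept."""
    return [seg for seg in segments if seg.format in ctx.accept_format]


ctx = ChatContext(accept_format={"text", "image"})
msg = [Segment("text", "hi"), Segment("voice", "a.ogg"), Segment("image", "b.png")]
print([seg.format for seg in filter_outgoing(msg, ctx)])  # ['text', 'image']
```

The receive-side variant in item 1 would apply the same filter inside each adapter before the message ever reaches the core.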

README.md

@@ -1,14 +1,24 @@
<a id="-双语--bilingual"></a>
<div align="center">
<h1>麦麦 MaiBot <sub><small>MaiCore</small></sub></h1>
<!-- Language Switcher -->
<a href="#-双语--bilingual">双语 / Bilingual</a> | <a href="docs/README_CN.md">中文</a> | <a href="#english">English</a>
<br>
<br>
<h1>麦麦 MaiBot <sub><small>MaiSaka</small></sub></h1>
<sub><sup>An interactive agent based on large language models.</sup></sub>
<!-- Badges Row -->
<p>
<img src="https://img.shields.io/badge/Python-3.10+-blue" alt="Python Version">
<img src="https://img.shields.io/github/license/Mai-with-u/MaiBot?label=%E5%8D%8F%E8%AE%AE" alt="License">
<img src="https://img.shields.io/badge/状态-开发中-yellow" alt="Status">
<img src="https://img.shields.io/github/contributors/Mai-with-u/MaiBot.svg?style=flat&label=%E8%B4%A1%E7%8C%AE%E8%80%85" alt="Contributors">
<img src="https://img.shields.io/github/forks/Mai-with-u/MaiBot.svg?style=flat&label=%E5%88%86%E6%94%AF%E6%95%B0" alt="Forks">
<img src="https://img.shields.io/github/stars/Mai-with-u/MaiBot?style=flat&label=%E6%98%9F%E6%A0%87%E6%95%B0" alt="Stars">
<img src="https://img.shields.io/github/license/Mai-with-u/MaiBot?label=License" alt="License">
<img src="https://img.shields.io/badge/Status-In%20Development-yellow" alt="Status">
<img src="https://img.shields.io/github/contributors/Mai-with-u/MaiBot.svg?style=flat&label=Contributors" alt="Contributors">
<img src="https://img.shields.io/github/forks/Mai-with-u/MaiBot.svg?style=flat&label=Forks" alt="Forks">
<img src="https://img.shields.io/github/stars/Mai-with-u/MaiBot?style=flat&label=Stars" alt="Stars">
<a href="https://deepwiki.com/DrSmoothl/MaiBot"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
</p>
</div>
@@ -18,24 +28,36 @@
<!-- Mascot on the Right (Float) -->
<img src="depends-data/maimai-v2.png" align="right" width="40%" alt="MaiBot Character" style="margin-left: 20px; margin-bottom: 20px;">
## 🎉 介绍
<a id="english"></a>
**🍔 MaiCore 是一个基于大语言模型的可交互智能体**
## 介绍
<sub><sup>Introduction</sup></sub>
MaiBot 不仅仅是一个机器人,她致力于成为一个活跃在 QQ 群聊中的“生命体”。她不追求完美,但追求真实。
麦麦 MaiSaka 是一个基于大语言模型的可交互智能体。
<sub><sup>MaiSaka is an interactive agent based on large language models.</sup></sub>
- 💭 **拟人构建**:使用自然语言风格构建 Prompt回复贴近人类习惯。
- 🎭 **行为规划**:懂得在合适的时间说话,使用合适的动作。
- 🧠 **表达学习**:模仿群友的说话风格,学习黑话,不断进化。
- 🔌 **插件系统**:提供强大的 API 和事件系统,无限扩展可能。
- 💝 **情感表达**:拥有独立的情绪系统和表情包互动能力。
MaiSaka 不仅仅是一个机器人,不仅仅是一个可以帮你完成任务的“有帮助的助手”,她还是一个致力于了解你,并以真实人类的风格进行交互的数字生命。她不追求完美,不追求高效,但追求亲切和真实。
<sub><sup>MaiSaka is more than just a bot, and more than a "helpful assistant" that completes tasks. She is a digital life form that tries to understand you and interact in a genuinely human style. She does not pursue perfection or efficiency above all else. She pursues warmth and authenticity.</sup></sub>
- 💭 **没有人喜欢 GPT 的语言风格**:麦麦使用了更加自然、贴合人类对话习惯的交互方式,不是长篇大论或者 markdown 格式的分点,而是或长或短的闲谈。
<sub><sup><strong>No one likes GPT-sounding dialogue</strong>: MaiSaka uses a more natural conversational style. Instead of long-winded markdown-heavy replies, she chats in a way that feels casual, varied, and human.</sup></sub>
- 🎭 **不再是傻乎乎的一问一答**:懂得在合适的时间说话,把握聊天中的气氛,在合适的时候开口,在合适的时候闭嘴。
<sub><sup><strong>No longer stuck in rigid Q&A</strong>: She knows when to speak, how to read the room, when to join a conversation, and when to stay quiet.</sup></sub>
- 🧠 **麦麦·成为人类**:在多人对话中,麦麦会模仿其他人的说话风格,还会自主理解新词或者小圈子里的黑话,不断进化。
<sub><sup><strong>MaiSaka becoming human</strong>: In group conversations, MaiSaka imitates how people around her speak, learns new slang and in-group language, and keeps evolving.</sup></sub>
- ❤️ **永远都在更加了解你**:基于心理学中人格理论,麦麦会不断积累对于你的了解,不论是你的信息、喜恶或是行为风格,她都记在心里。
<sub><sup><strong>Always learning more about you</strong>: Inspired by personality theory in psychology, MaiSaka gradually builds an understanding of your preferences, traits, habits, and behavior style.</sup></sub>
- 🔌 **插件系统**:提供强大的 API 和事件系统,拥有无限扩展可能。
<sub><sup><strong>Plugin system</strong>: Provides powerful APIs and an event system with virtually unlimited room for extension.</sup></sub>
### 快速导航
<sub><sup>Quick Navigation</sup></sub>
### 🚀 快速导航
<p>
<a href="https://www.bilibili.com/video/BV1amAneGE3P">🌟 演示视频</a> &nbsp;|&nbsp;
<a href="#-更新和安装">📦 快速入门</a> &nbsp;|&nbsp;
<a href="#-部署教程">📃 核心文档</a> &nbsp;|&nbsp;
<a href="#-讨论与社区">💬 加入社区</a>
<a href="https://www.bilibili.com/video/BV1amAneGE3P">🌟 演示视频 <sub>Demo Video</sub></a> &nbsp;|&nbsp;
<a href="#-更新和安装--updates-and-installation">📦 快速入门 <sub>Quick Start</sub></a> &nbsp;|&nbsp;
<a href="#-部署教程--deployment-guide">📃 核心文档 <sub>Core Documentation</sub></a> &nbsp;|&nbsp;
<a href="#-讨论与社区--discussion-and-community">💬 加入社区 <sub>Join Community</sub></a>
</p>
<!-- Clear float to ensure subsequent content starts below the image area if text is short -->
@@ -43,112 +65,146 @@ MaiBot 不仅仅是一个机器人,她致力于成为一个活跃在 QQ 群聊
<div align="center">
<br>
<h3>🎥 精彩演示</h3>
<a href="https://www.bilibili.com/video/BV1amAneGE3P" target="_blank">
<picture>
<source media="(max-width: 600px)" srcset="depends-data/video.png" width="100%">
<img src="depends-data/video.png" width="60%" alt="麦麦演示视频" style="border-radius: 10px; box-shadow: 0 4px 8px rgba(0,0,0,0.1);">
</picture>
<br>
<small>👆 点击观看麦麦演示视频 👆</small>
<small>前往观看麦麦演示视频 / Watch the MaiSaka demo video</small>
</a>
</div>
---
<a id="-更新和安装--updates-and-installation"></a>
## 🔥 更新和安装
<sub><sup>Updates and Installation</sup></sub>
> **最新版本: v0.12.2** ([📄 更新日志](changelogs/changelog.md))
> **最新版本: v1.0.0** ([📄 更新日志](changelogs/changelog.md))
> <sub><sup><strong>Latest Version: v1.0.0</strong> (<a href="changelogs/changelog.md">📄 Changelog</a>)</sup></sub>
- **下载**: 前往 [Release](https://github.com/MaiM-with-u/MaiBot/releases/) 页面下载最新版本
- **启动器**: [Mailauncher](https://github.com/MaiM-with-u/mailauncher/releases/) (仅支持 MacOS, 早期开发中)
- **下载**前往 [Release](https://github.com/MaiM-with-u/MaiBot/releases/) 页面下载最新版本
<sub><sup><strong>Download</strong>: Visit the <a href="https://github.com/MaiM-with-u/MaiBot/releases/">Release</a> page to get the latest version.</sup></sub>
- **启动器**[Mailauncher](https://github.com/MaiM-with-u/mailauncher/releases/)(仅支持 MacOS早期开发中
<sub><sup><strong>Launcher</strong>: <a href="https://github.com/MaiM-with-u/mailauncher/releases/">Mailauncher</a> (MacOS only, still in early development).</sup></sub>
| 分支 | 说明 |
| 分支 / Branch | 说明 / Description |
| :--- | :--- |
| `main` | ✅ **稳定发布版本 (推荐)** |
| `dev` | 🚧 开发测试版本 (不稳定) |
| `classical` | 🛑 经典版本 (停止维护) |
| `main` | ✅ **稳定发布版本推荐**<br><sub><sup>Stable release (recommended)</sup></sub> |
| `dev` | 🚧 开发测试版本,包含新功能,可能不稳定<br><sub><sup>Development testing branch with new features, may be unstable</sup></sub> |
<a id="-部署教程--deployment-guide"></a>
### 📚 部署教程
👉 **[🚀 最新版本部署教程](https://docs.mai-mai.org/manual/deployment/mmc_deploy_windows.html)**
*(注意MaiCore 新版本部署方式与旧版本不兼容)*
<sub><sup>Deployment Guide</sup></sub>
> [!WARNING]
> - ⚠️ 项目处于活跃开发阶段API 可能随时调整。
> - ⚠️ QQ 机器人存在风控风险,请谨慎使用。
> - ⚠️ AI 模型运行可能消耗较多 Token。
👉 **[🚀 最新版本部署教程](https://docs.mai-mai.org/manual/deployment/mmc_deploy_windows.html)**
<sub><sup>Latest Deployment Guide</sup></sub>
---
<a id="-讨论与社区--discussion-and-community"></a>
## 💬 讨论与社区
<sub><sup>Discussion and Community</sup></sub>
我们欢迎所有对 MaiBot 感兴趣的朋友加入!
我们欢迎所有对 MaiBot 感兴趣的朋友加入!
<sub><sup>We welcome everyone interested in MaiBot to join us.</sup></sub>
| 类别 | 群组 | 说明 |
| 类别 / Category | 群组 / Group | 说明 / Description |
| :--- | :--- | :--- |
| **技术交流** | [麦麦脑电图](https://qm.qq.com/q/RzmCiRtHEW) | 技术交流/答疑 |
| **技术交流** | [麦麦大脑磁共振](https://qm.qq.com/q/VQ3XZrWgMs) | 技术交流/答疑 |
| **技术交流** | [麦麦要当VTB](https://qm.qq.com/q/wGePTl1UyY) | 技术交流/答疑 |
| **闲聊吹水** | [麦麦之闲聊群](https://qm.qq.com/q/JxvHZnxyec) | 仅限闲聊,不答疑 |
| **插件开发** | [插件开发群](https://qm.qq.com/q/1036092828) | 进阶开发与测试 |
| **技术交流**<br><sub><sup>Technical</sup></sub> | [麦麦脑电图](https://qm.qq.com/q/RzmCiRtHEW)<br><sub><sup>MaiBrain EEG</sup></sub> | 技术交流 / 答疑<br><sub><sup>Technical discussion / Q&A</sup></sub> |
| **技术交流**<br><sub><sup>Technical</sup></sub> | [麦麦大脑磁共振](https://qm.qq.com/q/VQ3XZrWgMs)<br><sub><sup>MaiBrain MRI</sup></sub> | 技术交流 / 答疑<br><sub><sup>Technical discussion / Q&A</sup></sub> |
| **技术交流**<br><sub><sup>Technical</sup></sub> | [麦麦要当 VTB](https://qm.qq.com/q/wGePTl1UyY)<br><sub><sup>Mai Wants to Be a VTuber</sup></sub> | 技术交流 / 答疑<br><sub><sup>Technical discussion / Q&A</sup></sub> |
| **闲聊吹水**<br><sub><sup>Casual Chat</sup></sub> | [麦麦之闲聊群](https://qm.qq.com/q/JxvHZnxyec)<br><sub><sup>Mai Casual Chat Group</sup></sub> | 仅限闲聊,不答疑<br><sub><sup>Casual chat only, no support</sup></sub> |
| **插件开发**<br><sub><sup>Plugin Development</sup></sub> | [插件开发群](https://qm.qq.com/q/1036092828)<br><sub><sup>Plugin Dev Group</sup></sub> | 进阶开发与测试<br><sub><sup>Advanced development and testing</sup></sub> |
---
## 📚 文档
<sub><sup>Documentation</sup></sub>
> [!NOTE]
> 部分内容可能更新不够及时,请注意版本对应。
> 部分内容可能更新不够及时,请注意版本对应。
> <sub><sup>Some content may not be updated promptly, so please pay attention to version compatibility.</sup></sub>
- **[📚 核心 Wiki 文档](https://docs.mai-mai.org)**: 最全面的文档中心,了解麦麦的一切。
- **[📚 核心 Wiki 文档](https://docs.mai-mai.org)**最全面的文档中心,了解麦麦的一切。
<sub><sup><strong><a href="https://docs.mai-mai.org">📚 Core Wiki Documentation</a></strong>: The most comprehensive documentation hub for everything about MaiSaka.</sup></sub>
### 🧩 衍生项目
<sub><sup>Related Projects</sup></sub>
- **[MaiCraft](https://github.com/MaiM-with-u/Maicraft)**: 让麦麦陪你玩 Minecraft (早期开发中)。
- **[MoFox_Bot](https://github.com/MoFox-Studio/MoFox-Core)**: 基于 MaiCore 0.10.0 的增强型 Fork更稳定更有趣。
- **[Amaidesu](https://github.com/MaiM-with-u/Amaidesu)**让麦麦在 B 站开播。
<sub><sup>Let MaiSaka stream on Bilibili.</sup></sub>
- **[MoFox_Bot](https://github.com/MoFox-Studio/MoFox-Core)**:基于 MaiCore 0.10.0 的增强型 Fork更稳定更有趣。
<sub><sup>An enhanced fork based on MaiCore 0.10.0, with improved stability and more fun features.</sup></sub>
- **[MaiCraft](https://github.com/MaiM-with-u/Maicraft)**:让麦麦陪你玩 Minecraft暂时停止维护中
<sub><sup>Let MaiSaka accompany you in Minecraft (currently paused).</sup></sub>
---
## 💡 设计理念 (原始时代的火花)
## 💡 设计理念
<sub><sup>Design Philosophy</sup></sub>
> **千石可乐说:**
> - 这个项目最初只是为了给牛牛 bot 添加一点额外的功能,但是功能越写越多,最后决定重写。其目的是为了创造一个活跃在 QQ 群聊的"生命体"。目的并不是为了写一个功能齐全的机器人,而是一个尽可能让人感知到真实的类人存在。
> - 程序的功能设计理念基于一个核心的原则:"最像而不是好"。
> - 如果人类真的需要一个 AI 来陪伴自己,并不是所有人都需要一个完美的,能解决所有问题的"helpful assistant",而是一个会犯错的,拥有自己感知和想法的"生命形式"。
> - 代码会保持开源和开放,但个人希望 MaiMbot 的运行时数据保持封闭,尽量避免以显式命令来对其进行控制和调试。我认为一个你无法完全掌控的个体才更能让你感觉到它的自主性,而视其成为一个对话机器。
> - SengokuCola~~纯编程外行,面向 cursor 编程,很多代码写得不好多多包涵~~已得到大脑升级。
> **千石可乐说:**
> <sub><sup><strong>SengokuCola says:</strong></sup></sub>
> - 这个项目最初只是为了给牛牛 bot 添加一点额外的功能,但是功能越写越多,最后决定重写。其目的是为了创造一个活跃在 QQ 群聊的“生命体”。目的并不是为了写一个功能齐全的机器人,而是一个尽可能让人感知到真实的类人存在。
> <sub><sup>This project originally started as a few extra features for the NiuNiu bot, but it kept growing until a full rewrite became inevitable. The goal was to create a "life form" active in QQ group chats, not a feature-complete bot, but something as human-like and real-feeling as possible.</sup></sub>
> - 程序的功能设计理念基于一个核心的原则:“最像而不是好”。
> <sub><sup>The core design principle is: "more lifelike, not merely better."</sup></sub>
> - 如果人类真的需要一个 AI 来陪伴自己并不是所有人都需要一个完美的能解决所有问题的“helpful assistant”而是一个会犯错的拥有自己感知和想法的“生命形式”。
> <sub><sup>If people truly want AI companionship, not everyone needs a perfect "helpful assistant" that solves every problem. Some people may want a life form that can make mistakes and has its own perceptions and thoughts.</sup></sub>
> **xxxxx 说:**
> <sub><sup><strong>xxxxx says:</strong></sup></sub>
> *Code is open, but the soul is yours.*
---
## 🙋 贡献和致谢
<sub><sup>Contributing and Acknowledgments</sup></sub>
欢迎参与贡献!请先阅读 [贡献指南](docs-src/CONTRIBUTE.md)。
欢迎参与贡献!请先阅读 [贡献指南](docs-src/CONTRIBUTE.md)。
<sub><sup>Contributions are welcome. Please read the <a href="docs-src/CONTRIBUTE.md">Contribution Guide</a> first.</sup></sub>
### 🌟 贡献者
<sub><sup>Contributors</sup></sub>
<a href="https://github.com/MaiM-with-u/MaiBot/graphs/contributors">
<img alt="contributors" src="https://contrib.rocks/image?repo=MaiM-with-u/MaiBot" />
</a>
### ❤️ 特别致谢
<sub><sup>Special Thanks</sup></sub>
- **[略nd](https://space.bilibili.com/1344099355)**: 🎨 为麦麦绘制精美人设。
- **[NapCat](https://github.com/NapNeko/NapCatQQ)**: 🚀 现代化的基于 NTQQ 的 Bot 协议实现。
- **[萨卡班甲鱼](https://en.wikipedia.org/wiki/Sacabambaspis)**:千石可乐很喜欢的生物。
<sub><sup><strong><a href="https://en.wikipedia.org/wiki/Sacabambaspis">Sacabambaspis</a></strong>: SengokuCola's favorite creature.</sup></sub>
- **[略nd](https://space.bilibili.com/1344099355)**:为麦麦绘制早期的精美人设。
<sub><sup>Drew MaiSaka's beautiful early character design.</sup></sub>
- **[NapCat](https://github.com/NapNeko/NapCatQQ)**:现代化的基于 NTQQ 的 Bot 协议实现。
<sub><sup>A modern NTQQ-based bot protocol implementation.</sup></sub>
---
## 📊 仓库状态
<sub><sup>Repository Status</sup></sub>
![Alt](https://repobeats.axiom.co/api/embed/9faca9fccfc467931b87dd357b60c6362b5cfae0.svg "麦麦仓库状态")
### Star 趋势
<sub><sup>Star History</sup></sub>
[![Star 趋势](https://starchart.cc/MaiM-with-u/MaiBot.svg?variant=adaptive)](https://starchart.cc/MaiM-with-u/MaiBot)
---
## 📌 注意事项 & License
<sub><sup>Notice & License</sup></sub>
> [!IMPORTANT]
> 使用前请阅读 [用户协议 (EULA)](EULA.md) 和 [隐私协议](PRIVACY.md)。AI 生成内容请仔细甄别。
> 使用前请阅读 [用户协议 (EULA)](EULA.md) 和 [隐私协议](PRIVACY.md)。AI 生成内容请仔细甄别。
> <sub><sup>Please read the <a href="EULA.md">End User License Agreement (EULA)</a> and <a href="PRIVACY.md">Privacy Policy</a> before use. Please evaluate AI-generated content carefully.</sup></sub>
**License**: GPL-3.0

agentlite/CHANGELOG.md Normal file

@@ -0,0 +1,31 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.1.0] - 2025-01-30
### Added
- Initial release of AgentLite
- Core Agent class with streaming and non-streaming interfaces
- OpenAI-compatible provider implementation
- Tool system with decorator and class-based tools
- MCP client for loading tools from MCP servers
- Pydantic-based configuration system
- Multi-agent support
- Full type hints and async/await throughout
- Comprehensive documentation and examples
### Features
- **Agent**: Main agent class with tool calling loop
- **OpenAIProvider**: OpenAI API integration with streaming support
- **MCPClient**: MCP server integration for external tools
- **Tool System**: Decorator (`@tool`) and class-based (`CallableTool`, `CallableTool2`) tools
- **Configuration**: Pydantic models for providers, models, and agent settings
- **Message Types**: ContentPart, Message, ToolCall with streaming merge support
[0.1.0]: https://github.com/yourusername/agentlite/releases/tag/v0.1.0

agentlite/TEST_PLAN.md Normal file

File diff suppressed because it is too large.


@@ -0,0 +1,279 @@
# LLM Client
Simple LLM client for direct LLM calls without agent overhead.
## Overview
The `LLMClient` provides a simple interface for making direct LLM calls, reusing the agentlite configuration system. This is useful when you don't need the full agent capabilities (tools, conversation history, etc.) and just want to call an LLM.
## Features
- **Simple Interface**: Just system prompt + user prompt → response
- **Configuration Reuse**: Uses existing `AgentConfig` for provider/model setup
- **Streaming Support**: Both non-streaming and streaming interfaces
- **Flexible Usage**: Use with config, direct provider, or simple functions
## Quick Start
### Method 1: Simple Function (Quickest)
```python
import asyncio
from agentlite import llm_complete
async def main():
response = await llm_complete(
user_prompt="What is Python?",
api_key="your-api-key",
model="gpt-4",
)
print(response)
asyncio.run(main())
```
### Method 2: Using Configuration
```python
import asyncio
from agentlite import LLMClient, AgentConfig, ProviderConfig, ModelConfig
async def main():
# Create configuration
config = AgentConfig(
providers={
"openai": ProviderConfig(api_key="your-api-key")
},
models={
"gpt4": ModelConfig(provider="openai", model="gpt-4")
},
default_model="gpt4",
)
# Create client
client = LLMClient(config)
# Make a call
response = await client.complete(
system_prompt="You are a helpful assistant.",
user_prompt="What is Python?"
)
print(response.content)
print(f"Model: {response.model}")
if response.usage:
print(f"Tokens: {response.usage.total}")
asyncio.run(main())
```
### Method 3: Direct Provider
```python
import asyncio
from agentlite import LLMClient, OpenAIProvider
async def main():
# Create provider directly
provider = OpenAIProvider(
api_key="your-api-key",
model="gpt-4",
temperature=0.8,
)
# Create client
client = LLMClient(provider=provider)
# Make a call
response = await client.complete(
user_prompt="Explain async/await",
system_prompt="You are a Python expert.",
)
print(response.content)
asyncio.run(main())
```
## Streaming
### Using Client
```python
async for chunk in client.stream(
user_prompt="Write a poem about AI",
system_prompt="You are a creative writer.",
):
print(chunk, end="")
```
### Using Function
```python
async for chunk in llm_stream(
user_prompt="Write a haiku",
api_key="your-api-key",
):
print(chunk, end="")
```
## API Reference
### LLMClient
```python
class LLMClient:
def __init__(
self,
config: Optional[AgentConfig] = None,
provider: Optional[ChatProvider] = None,
model: Optional[str] = None,
)
async def complete(
self,
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> LLMResponse
async def stream(
self,
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> AsyncIterator[str]
```
### LLMResponse
```python
class LLMResponse:
content: str # The response text
usage: TokenUsage | None # Token usage stats
model: str # Model name used
```
### Convenience Functions
```python
async def llm_complete(
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
api_key: Optional[str] = None,
model: str = "gpt-4",
base_url: str = "https://api.openai.com/v1",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> str
async def llm_stream(
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
api_key: Optional[str] = None,
model: str = "gpt-4",
base_url: str = "https://api.openai.com/v1",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> AsyncIterator[str]
```
## Configuration Options
### Temperature and Max Tokens
You can override temperature and max_tokens per call:
```python
response = await client.complete(
user_prompt="Creative writing task",
temperature=0.9, # More creative
max_tokens=500, # Limit response length
)
```
### Model Switching
When using `AgentConfig`, you can switch models:
```python
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="...")},
models={
"gpt4": ModelConfig(provider="openai", model="gpt-4"),
"gpt35": ModelConfig(provider="openai", model="gpt-3.5-turbo"),
},
default_model="gpt4",
)
# Use default model (gpt4)
client = LLMClient(config)
# Use specific model
client_gpt35 = LLMClient(config, model="gpt35")
```
## Comparison with Agent
| Feature | LLMClient | Agent |
|---------|-----------|-------|
| Tools | ❌ No | ✅ Yes |
| Conversation History | ❌ No | ✅ Yes |
| System Prompt | ✅ Yes | ✅ Yes |
| Configuration | ✅ Reuses AgentConfig | ✅ AgentConfig |
| Streaming | ✅ Yes | ✅ Yes |
| Use Case | Simple LLM calls | Complex agent workflows |
## Examples
### Translation
```python
async def translate(text: str, target_language: str) -> str:
response = await llm_complete(
user_prompt=f"Translate to {target_language}: {text}",
system_prompt="You are a translator. Return only the translation.",
api_key="your-api-key",
)
return response
```
### Code Review
````python
async def review_code(code: str) -> str:
    client = LLMClient(config)
    response = await client.complete(
        user_prompt=f"Review this code:\n\n```python\n{code}\n```",
        system_prompt="You are a code reviewer. Provide constructive feedback.",
    )
    return response.content
````
### Streaming Chat
```python
async def chat_stream(user_message: str):
async for chunk in client.stream(
user_prompt=user_message,
system_prompt="You are a helpful chat assistant.",
):
yield chunk
```
## Error Handling
```python
from agentlite.provider import APIConnectionError, APITimeoutError, APIStatusError
try:
response = await client.complete(user_prompt="Hello")
except APIConnectionError:
print("Failed to connect to API")
except APITimeoutError:
print("Request timed out")
except APIStatusError as e:
print(f"API error {e.status_code}: {e.message}")
```
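Connection errors and timeouts are often transient, so it can be worth retrying with backoff. A generic sketch under stated assumptions — the `retry_async` helper is not part of AgentLite, and the demo uses built-in exceptions; in practice you would pass `APIConnectionError`/`APITimeoutError` as `retry_on`:

```python
import asyncio

async def retry_async(fn, *, retries=3, base_delay=0.5,
                      retry_on=(ConnectionError, TimeoutError)):
    """Call an async fn, retrying the listed exceptions with exponential backoff."""
    for attempt in range(retries):
        try:
            return await fn()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of attempts: propagate the last error
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo: a stub that fails twice, then succeeds.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(asyncio.run(retry_async(flaky, base_delay=0.05)))  # ok
```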

271
agentlite/docs/tools.md Normal file
View File


@@ -0,0 +1,271 @@
# AgentLite Tool Suite
A comprehensive tool suite for AgentLite, inspired by kimi-cli's tools, with configuration support for enabling/disabling individual tools.
## Overview
This tool suite provides:
- **File Operations**: Read, write, edit, search files
- **Shell Execution**: Execute shell commands
- **Web Access**: Fetch URLs and search the web
- **Multi-Agent**: Task delegation and subagent creation
- **Utilities**: Todo lists and thinking tools
- **Configuration**: Fine-grained control over which tools are available
## Installation
The tool suite is included with AgentLite. No additional installation required.
## Quick Start
```python
from agentlite.tools import ConfigurableToolset, ToolSuiteConfig
from agentlite import Agent, OpenAIProvider
# Create toolset with default config (all tools enabled)
toolset = ConfigurableToolset()
# Create agent with tools
provider = OpenAIProvider(api_key="your-key", model="gpt-4")
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant.",
tools=toolset.tools,
)
```
## Configuration
### Basic Configuration
```python
from agentlite.tools import (
ToolSuiteConfig,
FileToolsConfig,
ShellToolsConfig,
)
# Disable specific tools
config = ToolSuiteConfig(
file_tools=FileToolsConfig(
tools={"WriteFile": False, "StrReplaceFile": False}
)
)
toolset = ConfigurableToolset(config)
```
### Disable Entire Tool Groups
```python
# Disable all shell tools
config = ToolSuiteConfig(
shell_tools=ShellToolsConfig(enabled=False)
)
toolset = ConfigurableToolset(config)
```
### Custom Tool Settings
```python
config = ToolSuiteConfig(
file_tools=FileToolsConfig(
max_lines=500,
max_bytes=50 * 1024, # 50KB
allow_write_outside_work_dir=False,
),
shell_tools=ShellToolsConfig(
timeout=60,
blocked_commands=["rm -rf", "sudo"],
),
)
```
### Dynamic Configuration
```python
# Create toolset
config = ToolSuiteConfig()
toolset = ConfigurableToolset(config)
# Disable tools and reload
config.file_tools.disable_tool("WriteFile")
config.shell_tools.enabled = False
toolset.reload()
```
## Available Tools
### File Tools
| Tool | Description | Config Options |
|------|-------------|----------------|
| `ReadFile` | Read text files with line numbers | `max_lines`, `max_bytes` |
| `WriteFile` | Write or append to files | `allow_write_outside_work_dir` |
| `StrReplaceFile` | Edit files using string replacement | `allow_write_outside_work_dir` |
| `Glob` | Search files using glob patterns | `max_glob_matches` |
| `Grep` | Search file contents with regex | - |
| `ReadMediaFile` | Read images and videos | `max_size_mb` |
### Shell Tools
| Tool | Description | Config Options |
|------|-------------|----------------|
| `Shell` | Execute shell commands | `timeout`, `blocked_commands` |
### Web Tools
| Tool | Description | Config Options |
|------|-------------|----------------|
| `FetchURL` | Fetch web page content | `timeout`, `user_agent` |
| `SearchWeb` | Search the web | `timeout` |
### Multi-Agent Tools
| Tool | Description | Config Options |
|------|-------------|----------------|
| `Task` | Delegate tasks to subagents | `max_steps` |
| `CreateSubagent` | Create custom subagents | - |
### Utility Tools
| Tool | Description |
|------|-------------|
| `SetTodoList` | Manage todo lists |
| `Think` | Record thinking steps |
## Safety Features
### Path Security
- Files outside the working directory require absolute paths
- Optional restriction on writing outside working directory
- Path traversal protection
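The containment check behind these rules can be sketched with `pathlib` (illustrative only, not AgentLite's actual implementation; requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

def is_inside_work_dir(path: str, work_dir: str) -> bool:
    """Reject paths that escape work_dir, including `..` traversal and absolute paths."""
    resolved = Path(work_dir, path).resolve()  # absolute `path` replaces work_dir here
    return resolved.is_relative_to(Path(work_dir).resolve())

print(is_inside_work_dir("notes/a.txt", "/srv/work"))    # True
print(is_inside_work_dir("../etc/passwd", "/srv/work"))  # False
```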
### Shell Security
- Configurable command timeout
- Blocked command list
- No shell injection (uses `execve` style execution)
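A minimal sketch of how a blocked-command list plus shell-free execution might be combined (illustrative; the library's real checks may differ):

```python
import shlex
import subprocess

BLOCKED = ["rm -rf", "sudo"]

def run_checked(command: str, timeout: int = 60) -> str:
    """Run a command without a shell, refusing blocked patterns."""
    if any(blocked in command for blocked in BLOCKED):
        raise PermissionError(f"blocked command: {command!r}")
    argv = shlex.split(command)  # argv form, no shell: `;` and `&&` are not interpreted
    result = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    return result.stdout

print(run_checked("echo hello").strip())  # hello
```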
### Resource Limits
- File size limits
- Line count limits
- Glob match limits
- HTTP content size limits
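Limits like `max_lines` and `max_bytes` are typically enforced by truncating at read time. A generic sketch (not the library's actual code):

```python
def truncate_text(text: str, max_lines: int = 1000, max_bytes: int = 100 * 1024) -> str:
    """Clamp text to both a line-count and a UTF-8 byte budget."""
    lines = text.splitlines()[:max_lines]
    clamped = "\n".join(lines)
    encoded = clamped.encode("utf-8")[:max_bytes]
    # errors="ignore" drops a multi-byte character split by the byte cut
    return encoded.decode("utf-8", errors="ignore")

print(truncate_text("a\nb\nc\nd", max_lines=2))  # keeps only "a\nb"
```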
## Examples
### Safe Configuration for Untrusted Agents
```python
from agentlite.tools import ToolSuiteConfig, FileToolsConfig, ShellToolsConfig
# Safe config - read-only file access, no shell
safe_config = ToolSuiteConfig(
file_tools=FileToolsConfig(
allow_write_outside_work_dir=False,
),
shell_tools=ShellToolsConfig(enabled=False),
)
toolset = ConfigurableToolset(safe_config)
```
### Using Individual Tools
```python
from agentlite.tools.file import ReadFile, Glob
from pathlib import Path
# Create tools directly
read_tool = ReadFile(work_dir=Path("."))
glob_tool = Glob(work_dir=Path("."))
# Use tools
result = await read_tool.read({"path": "README.md"})
if not result.is_error:
print(result.output)
result = await glob_tool.glob({"pattern": "*.py"})
if not result.is_error:
print(result.output)
```
### Configuration from File
```python
import json
from agentlite.tools import ToolSuiteConfig
# Load config from file
with open("tool_config.json") as f:
config_dict = json.load(f)
config = ToolSuiteConfig.model_validate(config_dict)
toolset = ConfigurableToolset(config)
```
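Since the config classes are Pydantic models, the same config can also be written back to disk. A sketch of the round-trip using a stand-in model (real code would use `ToolSuiteConfig`; `model_dump_json`/`model_validate` are the Pydantic v2 API):

```python
import json

from pydantic import BaseModel

# Stand-in with the same round-trip pattern as ToolSuiteConfig.
class DemoConfig(BaseModel):
    max_lines: int = 1000
    enabled: bool = True

config = DemoConfig(max_lines=500)

text = json.dumps(json.loads(config.model_dump_json()), indent=2)  # serialize
restored = DemoConfig.model_validate(json.loads(text))             # restore
assert restored == config
```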
## API Reference
### Config Classes
#### `ToolSuiteConfig`
Main configuration class for all tools.
```python
class ToolSuiteConfig(BaseModel):
file_tools: FileToolsConfig
shell_tools: ShellToolsConfig
web_tools: WebToolsConfig
multiagent_tools: MultiAgentToolsConfig
misc_tools: ToolGroupConfig
```
#### `FileToolsConfig`
```python
class FileToolsConfig(ToolGroupConfig):
max_lines: int = 1000
max_line_length: int = 2000
max_bytes: int = 100 * 1024
allow_write_outside_work_dir: bool = False
max_glob_matches: int = 1000
```
#### `ShellToolsConfig`
```python
class ShellToolsConfig(ToolGroupConfig):
timeout: int = 60
max_timeout: int = 300
blocked_commands: list[str] = []
```
#### `WebToolsConfig`
```python
class WebToolsConfig(ToolGroupConfig):
timeout: int = 30
user_agent: str = "Mozilla/5.0 ..."
max_content_length: int = 1024 * 1024
```
### ConfigurableToolset
```python
class ConfigurableToolset(SimpleToolset):
def __init__(
self,
config: ToolSuiteConfig | None = None,
work_dir: str | None = None,
)
def reload(self, config: ToolSuiteConfig | None = None) -> None
```
## License
MIT License - same as AgentLite.



@@ -0,0 +1,80 @@
# AgentLite Examples
This directory contains examples demonstrating various features of AgentLite.
## Setup
Before running the examples, set your OpenAI API key:
```bash
export OPENAI_API_KEY="sk-..."
```
Or create a `.env` file:
```
OPENAI_API_KEY=sk-...
```
## Examples
### 1. Single Agent (`single_agent.py`)
Basic usage of a single agent with conversation history.
```bash
python examples/single_agent.py
```
### 2. Multi-Agent (`multi_agent.py`)
Multiple specialized agents working together on a task.
```bash
python examples/multi_agent.py
```
### 3. Custom Tools (`custom_tools.py`)
Defining and using custom tools with agents.
```bash
python examples/custom_tools.py
```
### 4. MCP Tools (`mcp_tools.py`)
Using tools from MCP (Model Context Protocol) servers.
**Prerequisites:**
- Node.js installed
- MCP filesystem server: `npm install -g @modelcontextprotocol/server-filesystem`
```bash
python examples/mcp_tools.py
```
## Creating Your Own
Use these examples as templates for your own applications:
```python
import asyncio
from agentlite import Agent, OpenAIProvider
async def main():
provider = OpenAIProvider(
api_key="your-api-key",
model="gpt-4",
)
agent = Agent(
provider=provider,
system_prompt="Your system prompt here.",
)
response = await agent.run("Your question here")
print(response)
asyncio.run(main())
```


@@ -0,0 +1,118 @@
"""Example: Custom Tools
This example demonstrates how to define and use custom tools with agents.
"""
import asyncio
import os
from datetime import datetime
from pydantic import BaseModel
from agentlite import Agent, OpenAIProvider, tool
from agentlite.tool import CallableTool2, ToolOk
# Define a tool using the decorator
@tool()
async def get_current_time() -> str:
"""Get the current date and time."""
return datetime.now().isoformat()
@tool()
async def calculate(expression: str) -> str:
"""Evaluate a mathematical expression.
Args:
expression: The mathematical expression to evaluate (e.g., "2 + 2").
"""
try:
        # Restricted evaluation: empty builtins plus a function whitelist (not a full sandbox)
allowed_names = {
"abs": abs,
"max": max,
"min": min,
"pow": pow,
"round": round,
}
result = eval(expression, {"__builtins__": {}}, allowed_names)
return str(result)
except Exception as e:
return f"Error: {e}"
# Define a tool using CallableTool2 (type-safe)
class WeatherParams(BaseModel):
"""Parameters for weather tool."""
city: str
units: str = "celsius"
class GetWeather(CallableTool2[WeatherParams]):
"""Get weather information for a city."""
name = "get_weather"
description = "Get the current weather for a city."
params = WeatherParams
async def __call__(self, params: WeatherParams) -> ToolOk:
# This is a mock implementation
# In a real scenario, you would call a weather API
weather_data = {
"Beijing": {"temp": 22, "condition": "Sunny"},
"Shanghai": {"temp": 25, "condition": "Cloudy"},
"New York": {"temp": 18, "condition": "Rainy"},
"London": {"temp": 15, "condition": "Overcast"},
}
city = params.city
if city in weather_data:
data = weather_data[city]
temp = data["temp"]
if params.units == "fahrenheit":
temp = temp * 9 // 5 + 32
return ToolOk(
output=f"Weather in {city}: {data['condition']}, {temp}°{params.units[0].upper()}"
)
return ToolOk(output=f"Weather data not available for {city}")
async def main():
"""Run the custom tools example."""
# Create provider
provider = OpenAIProvider(
api_key=os.getenv("OPENAI_API_KEY", "your-api-key"),
model="gpt-4o-mini",
)
# Create agent with tools
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant with access to tools.",
tools=[
get_current_time,
calculate,
GetWeather(),
],
)
# Test tools
print("=== Testing Tools ===\n")
print("User: What time is it?")
response = await agent.run("What time is it?")
print(f"Agent: {response}\n")
print("User: What is 123 * 456?")
response = await agent.run("What is 123 * 456?")
print(f"Agent: {response}\n")
print("User: What's the weather in Beijing?")
response = await agent.run("What's the weather in Beijing?")
print(f"Agent: {response}\n")
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,124 @@
"""Example demonstrating LLMClient usage.
This example shows how to use LLMClient for simple LLM calls
without the overhead of an Agent.
"""
import asyncio
from agentlite import LLMClient
from agentlite.config import AgentConfig, ProviderConfig, ModelConfig
async def main():
"""Run LLM client examples."""
# Example 1: Using simple function interface
print("=== Example 1: Simple Function ===")
print("Using llm_complete() function:")
# Note: This requires a valid API key
# response = await llm_complete(
# user_prompt="What is Python?",
# api_key="your-api-key",
# model="gpt-4",
# )
# print(response)
print("(Requires API key - uncomment to run)")
# Example 2: Using configuration-based client
print("\n=== Example 2: Configuration-Based Client ===")
config = AgentConfig(
name="simple_llm",
system_prompt="You are a helpful coding assistant.",
providers={
"openai": ProviderConfig(
type="openai",
api_key="your-api-key", # Replace with actual key
)
},
models={
"gpt4": ModelConfig(
provider="openai",
model="gpt-4",
temperature=0.7,
),
"gpt35": ModelConfig(
provider="openai",
model="gpt-3.5-turbo",
temperature=0.5,
),
},
default_model="gpt4",
)
# Create client
    client = LLMClient(config)
# Make a call
# response = await client.complete(
# user_prompt="Explain async/await in Python",
# )
# print(f"Response: {response.content}")
# print(f"Model: {response.model}")
# if response.usage:
# print(f"Tokens: {response.usage.total}")
print("(Requires API key - uncomment to run)")
# Example 3: Streaming
print("\n=== Example 3: Streaming ===")
print("Using llm_stream() function:")
# async for chunk in llm_stream(
# user_prompt="Write a haiku about programming",
# api_key="your-api-key",
# ):
# print(chunk, end="")
print("\n(Requires API key - uncomment to run)")
# Example 4: Direct provider usage
print("\n=== Example 4: Direct Provider ===")
from agentlite import OpenAIProvider
provider = OpenAIProvider(
api_key="your-api-key",
model="gpt-4",
temperature=0.8,
)
    client = LLMClient(provider=provider)
# response = await client.complete(
# user_prompt="What are the benefits of type hints?",
# system_prompt="You are a Python expert.",
# )
# print(response.content)
print("(Requires API key - uncomment to run)")
# Example 5: Model switching
print("\n=== Example 5: Model Switching ===")
# Use default model (gpt4)
# response1 = await client.complete(user_prompt="Hello!")
# Switch to different model
# client_gpt35 = LLMClient(config, model="gpt35")
# response2 = await client_gpt35.complete(user_prompt="Hello!")
print("(Requires API key - uncomment to run)")
print("\n=== Examples Complete ===")
print("To run these examples:")
print("1. Set your OpenAI API key")
print("2. Uncomment the example code")
print("3. Run: python examples/llm_client_example.py")
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,79 @@
"""Example: MCP Tools
This example demonstrates how to use MCP (Model Context Protocol) tools
with AgentLite agents.
Note: This example requires an MCP server to be available.
"""
import asyncio
import os
from agentlite import Agent, MCPClient, OpenAIProvider
async def main():
"""Run the MCP tools example."""
# Create provider
provider = OpenAIProvider(
api_key=os.getenv("OPENAI_API_KEY", "your-api-key"),
model="gpt-4o-mini",
)
# Connect to MCP server
# This example uses the filesystem MCP server
# You can install it with: npm install -g @modelcontextprotocol/server-filesystem
print("Connecting to MCP server...")
async with MCPClient() as mcp:
# Connect via stdio
await mcp.connect_stdio(
command="npx",
args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)
# Load tools from MCP server
print("Loading MCP tools...")
mcp_tools = await mcp.load_tools()
print(f"Loaded {len(mcp_tools)} tools from MCP server")
# Create agent with MCP tools
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant with access to filesystem tools.",
tools=mcp_tools,
)
# Test MCP tools
print("\n=== Testing MCP Tools ===\n")
print("User: List files in /tmp")
response = await agent.run("List files in /tmp")
print(f"Agent: {response}\n")
print("User: Create a file called test.txt with 'Hello from AgentLite!'")
response = await agent.run(
"Create a file called test.txt with content 'Hello from AgentLite!'"
)
print(f"Agent: {response}\n")
print("User: Read the test.txt file")
response = await agent.run("Read the test.txt file")
print(f"Agent: {response}\n")
if __name__ == "__main__":
# Note: This example requires Node.js and the MCP filesystem server
# npm install -g @modelcontextprotocol/server-filesystem
print("Note: This example requires Node.js and @modelcontextprotocol/server-filesystem")
print("Install with: npm install -g @modelcontextprotocol/server-filesystem\n")
try:
asyncio.run(main())
except Exception as e:
print(f"Error: {e}")
print("\nMake sure you have:")
print("1. Node.js installed")
print("2. @modelcontextprotocol/server-filesystem installed globally")
print("3. OPENAI_API_KEY environment variable set")


@@ -0,0 +1,54 @@
"""Example: Multi-Agent Usage
This example demonstrates using multiple agents working independently.
"""
import asyncio
import os
from agentlite import Agent, OpenAIProvider
async def main():
"""Run the multi-agent example."""
# Create provider
provider = OpenAIProvider(
api_key=os.getenv("OPENAI_API_KEY", "your-api-key"),
model="gpt-4o-mini",
)
# Create specialized agents
researcher = Agent(
provider=provider,
system_prompt="You are a research assistant. Provide factual, well-researched information.",
)
writer = Agent(
provider=provider,
system_prompt="You are a creative writer. Write engaging and clear content.",
)
critic = Agent(
provider=provider,
system_prompt="You are an editor. Review and improve content for clarity and accuracy.",
)
# Research phase
print("=== Research Phase ===")
topic = "artificial intelligence in healthcare"
research = await researcher.run(f"Research {topic}. Provide key points.")
print(f"Research:\n{research}\n")
# Writing phase
print("=== Writing Phase ===")
content = await writer.run(f"Write a blog post about {topic} using this research:\n{research}")
print(f"Draft:\n{content}\n")
# Review phase
print("=== Review Phase ===")
review = await critic.run(f"Review this blog post and suggest improvements:\n{content}")
print(f"Review:\n{review}\n")
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,42 @@
"""Example: Single Agent Usage
This example demonstrates basic usage of the AgentLite Agent class.
"""
import asyncio
import os
from agentlite import Agent, OpenAIProvider
async def main():
"""Run the single agent example."""
# Create provider
provider = OpenAIProvider(
api_key=os.getenv("OPENAI_API_KEY", "your-api-key"),
model="gpt-4o-mini",
)
# Create agent
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant. Be concise.",
)
# Run conversation
print("User: What is Python?")
response = await agent.run("What is Python?")
print(f"Agent: {response}\n")
print("User: What are its main features?")
response = await agent.run("What are its main features?")
print(f"Agent: {response}\n")
# Show conversation history
print("--- Conversation History ---")
for msg in agent.history:
print(f"{msg.role}: {msg.extract_text()[:100]}...")
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,68 @@
---
name: code-reviewer
description: Review code for bugs, style issues, security vulnerabilities, and best practices. Use when the user asks to review, check, or audit code.
type: standard
---
# Code Reviewer
A comprehensive code review skill that checks for common issues and provides actionable feedback.
## Review Checklist
### 1. Correctness
- Check for logical errors
- Verify edge cases are handled
- Look for off-by-one errors
- Check null/None handling
- Verify error handling paths
### 2. Style & Readability
- Naming conventions (clear, descriptive names)
- Code organization and structure
- Comments where needed (not obvious code)
- Consistent formatting
- Function/class length
### 3. Performance
- Inefficient algorithms (O(n²) when O(n) possible)
- Unnecessary object creation
- Memory leaks
- Redundant operations
### 4. Security
- SQL injection vulnerabilities
- XSS vulnerabilities (for web code)
- Hardcoded secrets/passwords
- Unsafe deserialization
- Path traversal risks
### 5. Best Practices
- DRY principle (Don't Repeat Yourself)
- SOLID principles
- Proper use of language features
- Test coverage considerations
## Output Format
Provide your review in this structure:
```
## Summary
Brief overall assessment
## Critical Issues
- Issue 1: Description and fix
- Issue 2: Description and fix
## Warnings
- Warning 1: Description and suggestion
## Suggestions
- Suggestion 1: How to improve
## Positive Notes
- What's done well
```
Be constructive and specific. Include code examples for suggested fixes.


@@ -0,0 +1,63 @@
---
name: release-process
description: Execute the release workflow including version checks, changelog updates, and PR creation. Use when the user wants to create a new release or version.
type: flow
---
# Release Process
Follow this structured workflow to create a new release.
## Flow
```mermaid
flowchart TD
BEGIN(( )) --> CHECK[Check for uncommitted changes]
CHECK --> CHANGES{Changes?}
CHANGES -->|Yes| COMMIT[Commit or stash changes]
CHANGES -->|No| VERSION{Version type?}
COMMIT --> VERSION
VERSION -->|Patch| UPDATE_PATCH[Update patch version]
VERSION -->|Minor| UPDATE_MINOR[Update minor version]
VERSION -->|Major| UPDATE_MAJOR[Update major version]
UPDATE_PATCH --> CHANGELOG[Update CHANGELOG.md]
UPDATE_MINOR --> CHANGELOG
UPDATE_MAJOR --> CHANGELOG
CHANGELOG --> BRANCH[Create release branch]
BRANCH --> PR[Create Pull Request]
PR --> END(( ))
```
## Node Details
### Check for uncommitted changes
Run `git status` and check if there are any uncommitted changes.
### Commit or stash changes
Ask the user whether to commit the changes or stash them for later.
### Version type
Ask the user what type of release this is:
- **Patch**: Bug fixes (0.0.X)
- **Minor**: New features, backward compatible (0.X.0)
- **Major**: Breaking changes (X.0.0)
### Update version
Update the version number in:
- `pyproject.toml` or `package.json`
- Any other version files
### Update CHANGELOG
Add a new section to CHANGELOG.md with:
- Version number and date
- List of changes
- Breaking changes (if any)
- Migration notes (if needed)
### Create release branch
Create a new branch: `release/vX.Y.Z`
### Create Pull Request
Open a PR with:
- Title: "Release vX.Y.Z"
- Description summarizing the changes


@@ -0,0 +1,86 @@
"""Example demonstrating the skills system for AgentLite.
This example shows how to use skills with an Agent.
"""
import asyncio
from pathlib import Path
from agentlite.skills import discover_skills, index_skills_by_name
async def main():
"""Run skills example."""
print("=" * 60)
print("AgentLite Skills Example")
print("=" * 60)
# Discover skills from examples directory
skills_dir = Path(__file__).parent / "skills"
skills = discover_skills(skills_dir)
print(f"\nDiscovered {len(skills)} skill(s):")
for skill in skills:
print(f" - {skill.name}: {skill.description}")
print(f" Type: {skill.type}")
if skill.flow:
print(f" Flow nodes: {len(skill.flow.nodes)}")
# Index skills by name
skill_index = index_skills_by_name(skills)
print(f"\nIndexed {len(skill_index)} skill(s)")
# Create agent (would need API key to actually run)
print("\n" + "-" * 40)
print("To use skills with an agent:")
print("-" * 40)
code = """
# Create provider
provider = OpenAIProvider(api_key="your-key", model="gpt-4")
# Create agent
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant with access to skills.",
)
# Create skill tool
skill_tool = SkillTool(skill_index, parent_agent=agent)
# Add skill tool to agent
agent.tools.add(skill_tool)
# Now the agent can use skills!
# The agent will see available skills in its context
# Example usage:
response = await agent.run("Review this Python code: def add(a, b): return a + b")
# The agent may choose to use the code-reviewer skill
"""
print(code)
print("\n" + "=" * 60)
print("Key Concepts:")
print("=" * 60)
print("1. Skills are defined in SKILL.md files")
print("2. YAML frontmatter specifies name, description, and type")
print("3. Standard skills load the markdown as a prompt")
print("4. Flow skills execute a structured flowchart")
print("5. Skills are discovered from directories")
print("6. SkillTool allows agents to execute skills")
print("\nSkill Format (SKILL.md):")
print(""" ---
name: skill-name
description: When to use this skill...
type: standard | flow
---
# Skill Content
Instructions for the skill...
""")
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,168 @@
"""Example demonstrating subagent usage in AgentLite.
This example shows how to create a parent agent with subagents
and delegate tasks to them using the Task tool.
"""
import asyncio
from agentlite import Agent, OpenAIProvider
from agentlite.tools.multiagent.task import Task
async def main():
"""Run subagent example."""
print("=" * 60)
print("AgentLite Subagent Example")
print("=" * 60)
# Note: This example requires a valid API key
# Replace with your actual API key to run
api_key = "your-api-key"
if api_key == "your-api-key":
print("\nNOTE: Set your API key to run this example")
print("Example code is shown below:\n")
print("-" * 40)
# Create provider
provider = OpenAIProvider(api_key=api_key, model="gpt-4")
# Example 1: Create subagents manually
print("\n=== Example 1: Manual Subagent Setup ===")
# Create parent agent with empty labor market
parent = Agent(
provider=provider,
system_prompt="You are a coordinator agent that delegates tasks to specialists.",
name="coordinator",
)
# Create subagents
coder = Agent(
provider=provider,
system_prompt="You are a coding specialist. Write clean, well-documented code.",
name="coder",
)
reviewer = Agent(
provider=provider,
system_prompt="You are a code reviewer. Provide constructive feedback.",
name="reviewer",
)
# Register subagents with parent
parent.add_subagent("coder", coder, "Writes code", dynamic=False)
parent.add_subagent("reviewer", reviewer, "Reviews code", dynamic=False)
# Add Task tool to parent
parent.tools.add(Task(labor_market=parent.labor_market))
print("Created parent agent with subagents:")
print(" - coder: Writes code")
print(" - reviewer: Reviews code")
# Example 2: Using subagents
print("\n=== Example 2: Delegating Tasks ===")
# Parent agent delegates to coder
# response = await parent.run(
# "I need a Python function to calculate fibonacci numbers. "
# "Use the coder subagent to write it."
# )
print("(Requires API key - uncomment to run)")
# Example 3: Nested subagents (hierarchy)
print("\n=== Example 3: Hierarchical Structure ===")
# Create a team lead with team members as subagents
team_lead = Agent(
provider=provider,
system_prompt="You are a team lead. Coordinate work among your team members.",
name="team_lead",
)
# Create team members
backend_dev = Agent(
provider=provider,
system_prompt="You are a backend developer. Focus on API design and database.",
name="backend_dev",
)
frontend_dev = Agent(
provider=provider,
system_prompt="You are a frontend developer. Focus on UI/UX.",
name="frontend_dev",
)
tester = Agent(
provider=provider,
system_prompt="You are a QA engineer. Write test cases and find bugs.",
name="tester",
)
# Add subagents to team lead
team_lead.add_subagent("backend", backend_dev, "Backend development")
team_lead.add_subagent("frontend", frontend_dev, "Frontend development")
team_lead.add_subagent("qa", tester, "Quality assurance")
# Add Task tool
team_lead.tools.add(Task(labor_market=team_lead.labor_market))
print("Created team hierarchy:")
print(" team_lead/")
print(" ├── backend: Backend development")
print(" ├── frontend: Frontend development")
print(" └── qa: Quality assurance")
# Example 4: Dynamic subagents
print("\n=== Example 4: Dynamic Subagents ===")
# Create subagent dynamically
specialist = Agent(
provider=provider,
system_prompt="You are a specialist for a specific task.",
name="specialist",
)
# Add as dynamic subagent
team_lead.add_subagent("specialist", specialist, "Temporary specialist", dynamic=True)
print("Added dynamic subagent 'specialist' to team_lead")
# Example 5: Agent discovery
print("\n=== Example 5: Agent Discovery ===")
print(f"Team lead's subagents: {team_lead.labor_market.list_subagents()}")
print(f"Descriptions: {team_lead.labor_market.subagent_descriptions}")
# Check if subagent exists
if "backend" in team_lead.labor_market:
print("Backend subagent is available")
# Get specific subagent
backend = team_lead.get_subagent("backend")
print(f"Backend agent name: {backend.name if backend else 'not found'}")
# Example 6: Create subagent copy
print("\n=== Example 6: Subagent Copy ===")
# Create a copy of parent for use as subagent elsewhere
parent_copy = parent.create_subagent_copy()
print(f"Created copy of parent: {parent_copy.name}")
print(f"Copy has empty labor market: {len(parent_copy.labor_market) == 0}")
print("\n" + "=" * 60)
print("Examples Complete")
print("=" * 60)
print("\nKey Concepts:")
print("1. Parent agent holds subagents in LaborMarket")
print("2. Task tool allows parent to delegate to subagents")
print("3. Subagents have independent history and context")
print("4. Fixed subagents are defined at setup")
print("5. Dynamic subagents can be added at runtime")
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,130 @@
"""Example demonstrating the configurable tool suite for AgentLite.
This example shows how to use the tool suite with configuration
to enable/disable specific tools.
"""
import asyncio
from pathlib import Path
from agentlite.tools import (
ConfigurableToolset,
ToolSuiteConfig,
FileToolsConfig,
ShellToolsConfig,
)
async def main():
"""Demonstrate the configurable tool suite."""
# Example 1: Default configuration (all tools enabled)
print("=== Example 1: Default Configuration ===")
config = ToolSuiteConfig()
toolset = ConfigurableToolset(config)
print(f"Enabled tools: {len(toolset.tools)}")
for tool in toolset.tools:
print(f" - {tool.name}")
# Example 2: Disable specific tools
print("\n=== Example 2: Disable WriteFile ===")
config = ToolSuiteConfig(
file_tools=FileToolsConfig(
tools={"WriteFile": False} # Disable WriteFile
)
)
toolset = ConfigurableToolset(config)
print(f"Enabled tools: {len(toolset.tools)}")
for tool in toolset.tools:
print(f" - {tool.name}")
# Example 3: Disable entire tool groups
print("\n=== Example 3: Disable Shell Tools ===")
config = ToolSuiteConfig(shell_tools=ShellToolsConfig(enabled=False))
toolset = ConfigurableToolset(config)
print(f"Enabled tools: {len(toolset.tools)}")
for tool in toolset.tools:
print(f" - {tool.name}")
# Example 4: Custom file tool settings
print("\n=== Example 4: Custom File Tool Settings ===")
config = ToolSuiteConfig(
file_tools=FileToolsConfig(
max_lines=500,
max_bytes=50 * 1024, # 50KB
allow_write_outside_work_dir=True,
)
)
toolset = ConfigurableToolset(config)
print("File tool settings:")
print(f" Max lines: {config.file_tools.max_lines}")
print(f" Max bytes: {config.file_tools.max_bytes}")
print(f" Allow outside work dir: {config.file_tools.allow_write_outside_work_dir}")
# Example 5: Using with an Agent
print("\n=== Example 5: Using with Agent ===")
# Create a safe configuration (no shell, no write outside work dir)
    safe_config = ToolSuiteConfig(
file_tools=FileToolsConfig(
allow_write_outside_work_dir=False,
),
shell_tools=ShellToolsConfig(enabled=False),
)
# This would require an API key to actually run
# provider = OpenAIProvider(api_key="your-api-key", model="gpt-4")
# agent = Agent(
# provider=provider,
# system_prompt="You are a helpful assistant with file access.",
# tools=ConfigurableToolset(safe_config).tools,
# )
print("Safe configuration created:")
print(" - Shell tools: DISABLED")
print(" - Write outside work dir: DISABLED")
print(" - Read file: ENABLED")
print(" - Glob/Grep: ENABLED")
# Example 6: Dynamic configuration reload
print("\n=== Example 6: Dynamic Reload ===")
config = ToolSuiteConfig()
toolset = ConfigurableToolset(config)
print(f"Initial tools: {len(toolset.tools)}")
# Disable some tools and reload
config.file_tools.disable_tool("WriteFile")
config.shell_tools.enabled = False
toolset.reload()
print(f"After reload: {len(toolset.tools)}")
for tool in toolset.tools:
print(f" - {tool.name}")
# Example 7: Using individual tools directly
print("\n=== Example 7: Direct Tool Usage ===")
from agentlite.tools.file import ReadFile, Glob
# Create tools directly
read_tool = ReadFile(work_dir=Path("."))
glob_tool = Glob(work_dir=Path("."))
# Use ReadFile
result = await read_tool.read({"path": "README.md"})
if not result.is_error:
print(f"README.md: {len(result.output)} characters")
else:
print(f"Could not read README.md: {result.message}")
# Use Glob
result = await glob_tool.glob({"pattern": "*.py"})
if not result.is_error:
files = result.output.split("\n") if result.output else []
print(f"Python files found: {len(files)}")
else:
print(f"Glob error: {result.message}")
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,116 @@
"""AgentLite - A lightweight, async-first Agent component library.
AgentLite provides clean abstractions for building LLM-powered agents with
OpenAI-compatible APIs, supporting tools (including MCP), streaming, and
multi-agent usage.
Example:
>>> import asyncio
>>> from agentlite import Agent, OpenAIProvider
>>>
>>> async def main():
... provider = OpenAIProvider(api_key="sk-...", model="gpt-4")
... agent = Agent(provider=provider, system_prompt="You are helpful.")
... response = await agent.run("Hello!")
... print(response)
>>>
>>> asyncio.run(main())
"""
__version__ = "0.1.0"
# Core types
from agentlite.message import (
ContentPart,
Message,
Role,
TextPart,
ImageURLPart,
AudioURLPart,
ToolCall,
ToolCallPart,
)
from agentlite.tool import (
Tool,
ToolResult,
ToolOk,
ToolError,
CallableTool,
CallableTool2,
SimpleToolset,
tool,
)
from agentlite.provider import (
ChatProvider,
StreamedMessage,
TokenUsage,
ChatProviderError,
APIConnectionError,
APITimeoutError,
APIStatusError,
)
# Configuration
from agentlite.config import (
ProviderConfig,
ModelConfig,
AgentConfig,
)
# Agent
from agentlite.agent import Agent
# MCP
from agentlite.mcp import MCPClient
# OpenAI Provider
from agentlite.providers.openai import OpenAIProvider
# LLM Client
from agentlite.llm_client import LLMClient, LLMResponse, llm_complete, llm_stream
__all__ = [
# Version
"__version__",
# Message types
"ContentPart",
"Message",
"Role",
"TextPart",
"ImageURLPart",
"AudioURLPart",
"ToolCall",
"ToolCallPart",
# Tool types
"Tool",
"ToolResult",
"ToolOk",
"ToolError",
"CallableTool",
"CallableTool2",
"SimpleToolset",
"tool",
# Provider types
"ChatProvider",
"StreamedMessage",
"TokenUsage",
"ChatProviderError",
"APIConnectionError",
"APITimeoutError",
"APIStatusError",
# Configuration
"ProviderConfig",
"ModelConfig",
"AgentConfig",
# Agent
"Agent",
# MCP
"MCPClient",
# Providers
"OpenAIProvider",
# LLM Client
"LLMClient",
"LLMResponse",
"llm_complete",
"llm_stream",
]


@@ -0,0 +1,452 @@
"""Main Agent class for AgentLite.
This module provides the core Agent class that orchestrates LLM interactions,
tool calling, and conversation management.
"""
from __future__ import annotations
import asyncio
from collections.abc import AsyncIterator, Sequence
from typing import TYPE_CHECKING
from agentlite.message import (
ContentPart,
Message,
TextPart,
ToolCall,
ToolCallPart,
)
from agentlite.provider import ChatProvider
from agentlite.tool import SimpleToolset, ToolResult, ToolType
from agentlite.labor_market import LaborMarket
if TYPE_CHECKING:
pass
class Agent:
"""An LLM agent that can use tools and maintain conversation history.
The Agent class is the main interface for interacting with LLMs. It handles:
- Sending messages to the LLM
- Managing tool calls and execution
- Maintaining conversation history
- Streaming responses
Attributes:
provider: The LLM provider to use.
system_prompt: The system prompt for the agent.
tools: The toolset containing available tools.
history: The conversation history.
Example:
>>> provider = OpenAIProvider(api_key="sk-...", model="gpt-4")
>>> agent = Agent(
... provider=provider,
... system_prompt="You are a helpful assistant.",
... )
>>> response = await agent.run("Hello!")
>>> print(response)
"""
def __init__(
self,
provider: ChatProvider,
system_prompt: str = "You are a helpful assistant.",
tools: Sequence[ToolType] | None = None,
max_iterations: int = 80,
labor_market: LaborMarket | None = None,
name: str = "agent",
allow_subagents: bool = False,
):
"""Initialize the agent.
Args:
provider: The LLM provider to use.
system_prompt: The system prompt for the agent.
tools: Optional sequence of tools to make available.
max_iterations: Maximum number of tool call iterations per request.
labor_market: Optional LaborMarket for managing subagents.
name: Name of the agent (for identification in subagent hierarchies).
allow_subagents: Whether this agent is allowed to register subagents.
"""
self.provider = provider
self.system_prompt = system_prompt
self.tools = SimpleToolset(tools)
self.max_iterations = max_iterations
self.labor_market = labor_market or LaborMarket()
self.name = name
self.allow_subagents = allow_subagents
self._history: list[Message] = []
@property
def history(self) -> list[Message]:
"""Get the conversation history.
Returns:
A copy of the conversation history.
"""
return self._history.copy()
def clear_history(self) -> None:
"""Clear the conversation history."""
self._history.clear()
def add_message(self, message: Message) -> None:
"""Add a message to the history.
Args:
message: The message to add.
"""
self._history.append(message)
async def run(
self,
message: str,
*,
stream: bool = False,
) -> str | AsyncIterator[str]:
"""Run the agent with a user message.
This method sends the message to the LLM and handles any tool calls
that the model requests. It continues the conversation until the
model produces a final response without tool calls.
Args:
message: The user message.
stream: Whether to stream the response.
Returns:
If stream=False: The complete response as a string.
If stream=True: An async iterator yielding response chunks.
Example:
# Non-streaming
>>> response = await agent.run("What is 2 + 2?")
>>> print(response)
# Streaming
>>> async for chunk in await agent.run("Tell me a story", stream=True):
... print(chunk, end="")
"""
# Add user message to history
self._history.append(Message(role="user", content=message))
if stream:
return self._run_streaming()
else:
return await self._run_non_streaming()
async def _run_non_streaming(self) -> str:
"""Run the agent in non-streaming mode.
Returns:
The complete response as a string.
"""
iterations = 0
tool_calls: list[ToolCall] = []
while iterations < self.max_iterations:
iterations += 1
# Generate response
stream = await self.provider.generate(
system_prompt=self.system_prompt,
tools=self.tools.tools,
history=self._history,
)
# Collect response parts
response_parts: list[ContentPart] = []
tool_calls: list[ToolCall] = []
async for part in stream:
if isinstance(part, ToolCall):
tool_calls.append(part)
elif isinstance(part, ToolCallPart):
if tool_calls:
tool_calls[-1].merge_in_place(part)
elif isinstance(part, ContentPart):
response_parts.append(part)
# Extract text from response
response_text = ""
for part in response_parts:
if isinstance(part, TextPart):
response_text += part.text
# Add assistant message to history
self._history.append(
Message(
role="assistant",
content=response_parts,
tool_calls=tool_calls if tool_calls else None,
)
)
# If no tool calls, we're done
if not tool_calls:
return response_text
# Execute tool calls
tool_results = await self._execute_tool_calls(tool_calls)
# Add tool results to history
for result in tool_results:
self._history.append(
Message(
role="tool",
content=result.output,
tool_call_id=result.tool_call_id,
)
)
# Max iterations reached
last_tools_msg = ""
try:
if tool_calls:
tool_names = [tc.function.name for tc in tool_calls if hasattr(tc, "function")]
if tool_names:
last_tools_msg = f" Last tools called: {', '.join(tool_names)}."
except Exception:
pass
return (
f"Maximum tool call iterations reached ({self.max_iterations})."
f"{last_tools_msg}"
f" Consider increasing max_iterations or breaking the task into smaller steps."
)
async def _run_streaming(self) -> AsyncIterator[str]:
"""Run the agent in streaming mode.
Yields:
Response text chunks.
"""
iterations = 0
tool_calls: list[ToolCall] = []
while iterations < self.max_iterations:
iterations += 1
# Generate response
stream = await self.provider.generate(
system_prompt=self.system_prompt,
tools=self.tools.tools,
history=self._history,
)
# Collect response parts and yield text
response_parts: list[ContentPart] = []
tool_calls: list[ToolCall] = []
async for part in stream:
if isinstance(part, ToolCall):
tool_calls.append(part)
elif isinstance(part, ToolCallPart):
if tool_calls:
tool_calls[-1].merge_in_place(part)
elif isinstance(part, ContentPart):
response_parts.append(part)
if isinstance(part, TextPart):
yield part.text
# Add assistant message to history
self._history.append(
Message(
role="assistant",
content=response_parts,
tool_calls=tool_calls if tool_calls else None,
)
)
# If no tool calls, we're done
if not tool_calls:
return
# Execute tool calls
tool_results = await self._execute_tool_calls(tool_calls)
# Add tool results to history
for result in tool_results:
self._history.append(
Message(
role="tool",
content=result.output,
tool_call_id=result.tool_call_id,
)
)
# Max iterations reached
last_tools_msg = ""
try:
if tool_calls:
tool_names = [tc.function.name for tc in tool_calls if hasattr(tc, "function")]
if tool_names:
last_tools_msg = f" Last tools called: {', '.join(tool_names)}."
except Exception:
pass
yield (
f"Maximum tool call iterations reached ({self.max_iterations})."
f"{last_tools_msg}"
f" Consider increasing max_iterations or breaking the task into smaller steps."
)
async def _execute_tool_calls(
self,
tool_calls: list[ToolCall],
) -> list[_ToolResult]:
"""Execute a list of tool calls.
Args:
tool_calls: The tool calls to execute.
Returns:
List of tool results.
"""
        results: list[_ToolResult] = []
        # Execute tool calls one at a time; handle() may return either an
        # awaitable or an immediate result, so await only when needed.
        # (Note: asyncio.isfuture() is False for bare coroutines, so we
        # check for both coroutines and futures.)
        for tc in tool_calls:
            try:
                outcome = self.tools.handle(tc)
                if asyncio.iscoroutine(outcome) or asyncio.isfuture(outcome):
                    outcome = await outcome
                results.append(
                    _ToolResult(
                        tool_call_id=tc.id,
                        output=outcome.output if isinstance(outcome, ToolResult) else str(outcome),
                        is_error=outcome.is_error if isinstance(outcome, ToolResult) else False,
                    )
                )
            except Exception as e:
                results.append(
                    _ToolResult(
                        tool_call_id=tc.id,
                        output=str(e),
                        is_error=True,
                    )
                )
        return results
async def generate(
self,
message: str,
) -> Message:
"""Generate a single response without tool calling loop.
This method sends a message to the LLM and returns the response
without executing any tool calls. This is useful when you want
to handle tool calls manually.
Args:
message: The user message.
Returns:
The assistant's response message.
"""
# Add user message to history
self._history.append(Message(role="user", content=message))
# Generate response
stream = await self.provider.generate(
system_prompt=self.system_prompt,
tools=self.tools.tools,
history=self._history,
)
# Collect response parts
response_parts: list[ContentPart] = []
tool_calls: list[ToolCall] = []
async for part in stream:
if isinstance(part, ToolCall):
tool_calls.append(part)
elif isinstance(part, ToolCallPart):
if tool_calls:
tool_calls[-1].merge_in_place(part)
elif isinstance(part, ContentPart):
response_parts.append(part)
# Create response message
response = Message(
role="assistant",
content=response_parts,
tool_calls=tool_calls if tool_calls else None,
)
# Add to history
self._history.append(response)
return response
def add_subagent(
self,
name: str,
agent: Agent,
description: str,
dynamic: bool = False,
) -> None:
"""Add a subagent to this agent's labor market.
Args:
name: Unique name for the subagent
agent: The Agent instance to add
description: Description of what the subagent does
dynamic: If True, add as dynamic subagent; otherwise fixed
"""
if not self.allow_subagents:
raise RuntimeError("Subagent delegation is disabled for this agent runtime.")
if dynamic:
self.labor_market.add_dynamic_subagent(name, agent)
else:
self.labor_market.add_fixed_subagent(name, agent, description)
def get_subagent(self, name: str) -> Agent | None:
"""Get a subagent by name.
Args:
name: Name of the subagent
Returns:
The subagent Agent if found, None otherwise
"""
return self.labor_market.get_subagent(name)
def create_subagent_copy(self) -> Agent:
"""Create a copy of this agent for use as a subagent.
The copy will have:
- Same provider
- Independent history (empty)
- Empty labor market (subagents cannot have their own subagents by default)
Returns:
A new Agent instance configured as a subagent
"""
return Agent(
provider=self.provider,
system_prompt=self.system_prompt,
tools=list(self.tools._tools.values()),
max_iterations=self.max_iterations,
labor_market=LaborMarket(), # Empty labor market
allow_subagents=False,
name=f"{self.name}_sub",
)
class _ToolResult:
"""Internal class for tool execution results."""
def __init__(self, tool_call_id: str, output: str, is_error: bool):
self.tool_call_id = tool_call_id
self.output = output
self.is_error = is_error
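The tool-execution path above accepts both plain values and awaitables from the toolset's `handle()` method. A dependency-free sketch of that dispatch pattern (the handler functions here are stand-ins, not part of agentlite):

```python
import asyncio
import inspect

def handle_sync(args: dict) -> str:
    # A handler that returns a plain value
    return f"sync:{args['x']}"

async def handle_async(args: dict) -> str:
    # A handler whose call returns an awaitable (a coroutine)
    return f"async:{args['x']}"

async def dispatch(handler, args: dict) -> str:
    """Await the outcome only when it is awaitable, mirroring the
    either-awaitable-or-value handling in tool execution."""
    outcome = handler(args)
    if inspect.isawaitable(outcome):
        outcome = await outcome
    return outcome

if __name__ == "__main__":
    print(asyncio.run(dispatch(handle_sync, {"x": 1})))   # sync:1
    print(asyncio.run(dispatch(handle_async, {"x": 2})))  # async:2
```

`inspect.isawaitable` covers both coroutines and futures, which is why it is a safer check than `asyncio.isfuture` alone (the latter is `False` for a bare coroutine).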


@@ -0,0 +1,201 @@
"""Configuration models for AgentLite.
This module provides Pydantic-based configuration models for providers,
models, and agent settings.
"""
from __future__ import annotations
from typing import Literal, Optional
from pydantic import BaseModel, Field, SecretStr, model_validator
ProviderType = Literal["openai", "anthropic", "google", "custom"]
ModelCapability = Literal[
"streaming",
"tool_calling",
"vision",
"json_mode",
"function_calling",
]
class ProviderConfig(BaseModel):
"""Configuration for an LLM provider.
Attributes:
type: The provider type (openai, anthropic, etc.)
base_url: The API base URL
api_key: The API key (stored securely)
headers: Additional headers to include in requests
timeout: Request timeout in seconds
Example:
>>> config = ProviderConfig(
... type="openai",
... base_url="https://api.openai.com/v1",
... api_key="sk-...",
... )
"""
type: ProviderType = "openai"
base_url: str = "https://api.openai.com/v1"
api_key: SecretStr
headers: dict[str, str] = Field(default_factory=dict)
timeout: float = 60.0
@model_validator(mode="after")
def validate_base_url(self) -> "ProviderConfig":
"""Validate that base_url is a valid URL."""
if not self.base_url.startswith(("http://", "https://")):
raise ValueError("base_url must start with http:// or https://")
return self
class ModelConfig(BaseModel):
"""Configuration for an LLM model.
Attributes:
provider: Name of the provider to use
model: The model name/ID
max_tokens: Maximum tokens to generate
temperature: Sampling temperature
top_p: Nucleus sampling parameter
capabilities: Set of model capabilities
Example:
>>> config = ModelConfig(
... provider="openai",
... model="gpt-4",
... temperature=0.7,
... )
"""
provider: str
model: str
max_tokens: Optional[int] = None
temperature: Optional[float] = Field(default=None, ge=0.0, le=2.0)
top_p: Optional[float] = Field(default=None, ge=0.0, le=1.0)
capabilities: set[ModelCapability] = Field(default_factory=set)
@model_validator(mode="after")
def validate_provider(self) -> "ModelConfig":
"""Validate provider is not empty."""
if not self.provider:
raise ValueError("provider must not be empty")
return self
class ToolConfig(BaseModel):
"""Configuration for tool usage.
Attributes:
max_iterations: Maximum number of tool call iterations
timeout: Timeout for tool execution in seconds
"""
max_iterations: int = Field(default=80, ge=1, le=100)
timeout: float = 60.0
class AgentConfig(BaseModel):
"""Complete configuration for an Agent.
This combines provider, model, and behavior settings into a single
configuration object.
Attributes:
name: Optional name for the agent
system_prompt: The system prompt to use
providers: Dictionary of provider configurations
models: Dictionary of model configurations
default_model: Name of the default model to use
tools: Tool configuration
max_history: Maximum number of messages to keep in history
Example:
>>> config = AgentConfig(
... name="my_agent",
... system_prompt="You are a helpful assistant.",
... providers={
... "openai": ProviderConfig(
... type="openai",
... api_key="sk-...",
... )
... },
... models={
... "gpt4": ModelConfig(
... provider="openai",
... model="gpt-4",
... )
... },
... default_model="gpt4",
... )
"""
name: str = "agent"
system_prompt: str = "You are a helpful assistant."
providers: dict[str, ProviderConfig] = Field(default_factory=dict)
models: dict[str, ModelConfig] = Field(default_factory=dict)
default_model: str = "default"
tools: ToolConfig = Field(default_factory=ToolConfig)
max_history: int = Field(default=100, ge=1)
    @model_validator(mode="after")
    def validate_default_model(self) -> "AgentConfig":
        """Validate that default_model exists in models, once any models are defined."""
        if self.models and self.default_model and self.default_model not in self.models:
            raise ValueError(f"default_model '{self.default_model}' not found in models")
        return self
@model_validator(mode="after")
def validate_model_providers(self) -> "AgentConfig":
"""Validate that all model providers exist."""
for model_name, model_config in self.models.items():
if model_config.provider not in self.providers:
raise ValueError(
f"Model '{model_name}' references unknown provider '{model_config.provider}'"
)
return self
def get_provider_config(self, model_name: Optional[str] = None) -> ProviderConfig:
"""Get the provider config for a model.
Args:
model_name: Name of the model. If None, uses default_model.
Returns:
The provider configuration for the model.
Raises:
ValueError: If the model or provider is not found.
"""
model_name = model_name or self.default_model
if model_name not in self.models:
raise ValueError(f"Model '{model_name}' not found")
model_config = self.models[model_name]
if model_config.provider not in self.providers:
raise ValueError(f"Provider '{model_config.provider}' not found")
return self.providers[model_config.provider]
def get_model_config(self, model_name: Optional[str] = None) -> ModelConfig:
"""Get the configuration for a model.
Args:
model_name: Name of the model. If None, uses default_model.
Returns:
The model configuration.
Raises:
ValueError: If the model is not found.
"""
model_name = model_name or self.default_model
if model_name not in self.models:
raise ValueError(f"Model '{model_name}' not found")
return self.models[model_name]
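The cross-checks in `AgentConfig` reduce to two dictionary lookups. A standalone sketch of the same rules, with plain dicts standing in for the Pydantic models (no agentlite import):

```python
def check_agent_config(
    providers: dict[str, dict],
    models: dict[str, dict],
    default_model: str,
) -> None:
    """Raise ValueError on the same inconsistencies AgentConfig rejects."""
    if models and default_model not in models:
        raise ValueError(f"default_model '{default_model}' not found in models")
    for name, cfg in models.items():
        if cfg["provider"] not in providers:
            raise ValueError(
                f"Model '{name}' references unknown provider '{cfg['provider']}'"
            )

# A consistent configuration passes silently...
check_agent_config(
    providers={"openai": {"type": "openai"}},
    models={"gpt4": {"provider": "openai", "model": "gpt-4"}},
    default_model="gpt4",
)
# ...while a dangling provider reference raises
try:
    check_agent_config(
        providers={},
        models={"gpt4": {"provider": "openai", "model": "gpt-4"}},
        default_model="gpt4",
    )
except ValueError as e:
    print(e)  # Model 'gpt4' references unknown provider 'openai'
```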


@@ -0,0 +1,182 @@
"""Labor Market for managing subagents in AgentLite.
This module provides the LaborMarket class for managing subagents
in a hierarchical agent architecture, similar to kimi-cli's approach.
"""
from __future__ import annotations
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
from agentlite.agent import Agent
class LaborMarket:
"""Manages subagents for a parent agent.
The LaborMarket acts as a registry for subagents, allowing a parent
agent to delegate tasks to its children. It supports both fixed
(pre-defined) and dynamic (runtime-created) subagents.
This design follows kimi-cli's architecture where:
- Fixed subagents are defined in configuration and loaded at startup
- Dynamic subagents can be created at runtime using CreateSubagent tool
- Subagents can be retrieved by name for task delegation
Example:
>>> market = LaborMarket()
>>> market.add_fixed_subagent("coder", coder_agent, "Writes code")
>>> market.add_dynamic_subagent("temp", temp_agent)
>>> agent = market.get_subagent("coder")
"""
def __init__(self):
"""Initialize an empty labor market."""
self._fixed_subagents: dict[str, Agent] = {}
self._fixed_subagent_descs: dict[str, str] = {}
self._dynamic_subagents: dict[str, Agent] = {}
@property
def subagents(self) -> dict[str, Agent]:
"""Get all subagents (both fixed and dynamic).
Returns:
Dictionary mapping subagent names to Agent instances.
"""
return {**self._fixed_subagents, **self._dynamic_subagents}
@property
def fixed_subagents(self) -> dict[str, Agent]:
"""Get fixed (pre-defined) subagents.
Returns:
Dictionary of fixed subagents.
"""
return self._fixed_subagents.copy()
@property
def dynamic_subagents(self) -> dict[str, Agent]:
"""Get dynamic (runtime-created) subagents.
Returns:
Dictionary of dynamic subagents.
"""
return self._dynamic_subagents.copy()
@property
def subagent_descriptions(self) -> dict[str, str]:
"""Get descriptions of all subagents.
Returns:
Dictionary mapping subagent names to their descriptions.
Only fixed subagents have descriptions.
"""
return self._fixed_subagent_descs.copy()
def add_fixed_subagent(self, name: str, agent: Agent, description: str) -> None:
"""Add a fixed subagent.
Fixed subagents are defined in configuration and loaded at startup.
They typically have their own LaborMarket (for isolation).
Args:
name: Unique name for the subagent
agent: The Agent instance
description: Description of what the subagent does
Raises:
ValueError: If a subagent with the same name already exists.
"""
if name in self.subagents:
raise ValueError(f"Subagent '{name}' already exists")
self._fixed_subagents[name] = agent
self._fixed_subagent_descs[name] = description
def add_dynamic_subagent(self, name: str, agent: Agent) -> None:
"""Add a dynamic subagent.
Dynamic subagents are created at runtime, typically using the
CreateSubagent tool. They share the parent's LaborMarket.
Args:
name: Unique name for the subagent
agent: The Agent instance
Raises:
ValueError: If a subagent with the same name already exists.
"""
if name in self.subagents:
raise ValueError(f"Subagent '{name}' already exists")
self._dynamic_subagents[name] = agent
def get_subagent(self, name: str) -> Optional[Agent]:
"""Get a subagent by name.
Args:
name: Name of the subagent
Returns:
The Agent instance if found, None otherwise.
"""
return self.subagents.get(name)
def has_subagent(self, name: str) -> bool:
"""Check if a subagent exists.
Args:
name: Name of the subagent
Returns:
True if the subagent exists, False otherwise.
"""
return name in self.subagents
def remove_subagent(self, name: str) -> bool:
"""Remove a subagent.
Args:
name: Name of the subagent to remove
Returns:
True if the subagent was removed, False if it didn't exist.
"""
if name in self._fixed_subagents:
del self._fixed_subagents[name]
del self._fixed_subagent_descs[name]
return True
if name in self._dynamic_subagents:
del self._dynamic_subagents[name]
return True
return False
def list_subagents(self) -> list[str]:
"""List all subagent names.
Returns:
List of subagent names.
"""
return list(self.subagents.keys())
def __contains__(self, name: str) -> bool:
"""Check if a subagent exists using 'in' operator."""
return self.has_subagent(name)
def __getitem__(self, name: str) -> Agent:
"""Get a subagent using bracket notation."""
agent = self.get_subagent(name)
if agent is None:
raise KeyError(f"Subagent '{name}' not found")
return agent
def __iter__(self):
"""Iterate over subagent names."""
return iter(self.subagents)
def __len__(self) -> int:
"""Get the number of subagents."""
return len(self.subagents)
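The registry semantics above (a merged view over both pools, with name collisions rejected across them) can be sketched without agentlite; strings stand in for `Agent` instances:

```python
class MiniMarket:
    """Minimal stand-in for LaborMarket's registry behavior."""

    def __init__(self) -> None:
        self._fixed: dict[str, str] = {}
        self._dynamic: dict[str, str] = {}

    @property
    def subagents(self) -> dict[str, str]:
        # Merged read-only view over both pools
        return {**self._fixed, **self._dynamic}

    def add(self, name: str, agent: str, dynamic: bool = False) -> None:
        # Collisions are checked against the merged view, so a fixed
        # and a dynamic subagent can never share a name
        if name in self.subagents:
            raise ValueError(f"Subagent '{name}' already exists")
        (self._dynamic if dynamic else self._fixed)[name] = agent

market = MiniMarket()
market.add("coder", "coder-agent")
market.add("temp", "temp-agent", dynamic=True)
print(sorted(market.subagents))  # ['coder', 'temp']
```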


@@ -0,0 +1,360 @@
"""Simple LLM client for direct LLM calls without agent overhead.
This module provides a simple interface for making direct LLM calls,
reusing the agentlite configuration system.
Example:
>>> from agentlite import LLMClient, AgentConfig, ProviderConfig, ModelConfig
>>>
>>> # Using configuration
>>> config = AgentConfig(
... providers={"openai": ProviderConfig(api_key="sk-...")},
... models={"gpt4": ModelConfig(provider="openai", model="gpt-4")},
... default_model="gpt4",
... )
>>> client = LLMClient(config)
>>>
>>> # Simple completion
>>> response = await client.complete(
... system_prompt="You are a helpful assistant.", user_prompt="What is Python?"
... )
>>> print(response)
>>> # Streaming
>>> async for chunk in client.stream(
... system_prompt="You are a helpful assistant.", user_prompt="Tell me a story"
... ):
... print(chunk, end="")
"""
from __future__ import annotations
from collections.abc import AsyncIterator
from typing import Optional
from agentlite.config import AgentConfig
from agentlite.message import Message, TextPart
from agentlite.provider import ChatProvider, TokenUsage
from agentlite.providers.openai import OpenAIProvider
class LLMResponse:
"""Response from an LLM call.
Attributes:
content: The complete response text
usage: Token usage statistics
model: The model name used
"""
def __init__(self, content: str, usage: TokenUsage | None = None, model: str = ""):
self.content = content
self.usage = usage
self.model = model
def __str__(self) -> str:
return self.content
def __repr__(self) -> str:
return f"LLMResponse(content={self.content[:50]}..., model={self.model})"
class LLMClient:
"""Simple client for direct LLM calls.
This client provides a simple interface for calling LLMs without the
overhead of an Agent. It reuses the agentlite configuration system.
Example:
>>> # Using AgentConfig
>>> config = AgentConfig(...)
>>> client = LLMClient(config)
>>>
>>> # Using provider directly
>>> provider = OpenAIProvider(api_key="sk-...", model="gpt-4")
>>> client = LLMClient(provider=provider)
>>>
>>> # Make a call
>>> response = await client.complete(system_prompt="You are helpful.", user_prompt="Hello!")
"""
def __init__(
self,
config: Optional[AgentConfig] = None,
provider: Optional[ChatProvider] = None,
model: Optional[str] = None,
):
"""Initialize the LLM client.
Args:
config: AgentConfig to use for provider/model configuration
provider: Direct provider instance (alternative to config)
model: Model name to use (when using config)
Raises:
ValueError: If neither config nor provider is provided
"""
if provider is not None:
self._provider = provider
self._model_config = None
elif config is not None:
self._config = config
self._model_name = model or config.default_model
self._provider = self._create_provider()
self._model_config = config.get_model_config(self._model_name)
else:
raise ValueError("Either config or provider must be provided")
def _create_provider(self) -> ChatProvider:
"""Create a provider instance from config."""
if not hasattr(self, "_config"):
raise RuntimeError("No config available")
provider_config = self._config.get_provider_config(self._model_name)
model_config = self._config.get_model_config(self._model_name)
# Create appropriate provider based on type
if provider_config.type == "openai":
return OpenAIProvider(
api_key=provider_config.api_key.get_secret_value(),
model=model_config.model,
base_url=provider_config.base_url,
timeout=provider_config.timeout,
)
else:
raise ValueError(f"Unsupported provider type: {provider_config.type}")
async def complete(
self,
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> LLMResponse:
"""Make a non-streaming LLM call.
Args:
user_prompt: The user message/prompt
system_prompt: The system prompt (default: "You are a helpful assistant.")
temperature: Sampling temperature (overrides config if provided)
max_tokens: Maximum tokens to generate (overrides config if provided)
Returns:
LLMResponse containing the complete response text and metadata
Example:
>>> response = await client.complete(user_prompt="What is the capital of France?")
>>> print(response.content)
"The capital of France is Paris."
"""
# Build messages
messages = [Message(role="user", content=user_prompt)]
# Create a temporary provider with overridden parameters if needed
provider = self._provider
if temperature is not None or max_tokens is not None:
provider = self._create_provider_with_params(temperature, max_tokens)
# Generate response
stream = await provider.generate(
system_prompt=system_prompt,
tools=[], # No tools for simple LLM calls
history=messages,
)
# Collect response
content_parts = []
usage = None
async for part in stream:
if isinstance(part, TextPart):
content_parts.append(part.text)
# Try to get usage from stream
try:
if usage is None and hasattr(stream, "usage") and stream.usage:
usage = stream.usage
except Exception:
pass
content = "".join(content_parts)
model_name = getattr(
provider, "model_name", self._model_config.model if self._model_config else "unknown"
)
return LLMResponse(
content=content,
usage=usage,
model=model_name,
)
async def stream(
self,
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> AsyncIterator[str]:
"""Make a streaming LLM call.
Args:
user_prompt: The user message/prompt
system_prompt: The system prompt (default: "You are a helpful assistant.")
temperature: Sampling temperature (overrides config if provided)
max_tokens: Maximum tokens to generate (overrides config if provided)
Yields:
Response text chunks as they arrive
Example:
>>> async for chunk in client.stream(user_prompt="Write a poem about AI"):
... print(chunk, end="")
"""
# Build messages
messages = [Message(role="user", content=user_prompt)]
# Create a temporary provider with overridden parameters if needed
provider = self._provider
if temperature is not None or max_tokens is not None:
provider = self._create_provider_with_params(temperature, max_tokens)
# Generate response
stream = await provider.generate(
system_prompt=system_prompt,
tools=[], # No tools for simple LLM calls
history=messages,
)
# Yield chunks
async for part in stream:
if isinstance(part, TextPart):
yield part.text
def _create_provider_with_params(
self,
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> ChatProvider:
"""Create a provider with overridden parameters."""
if not hasattr(self, "_config"):
# Can't override params without config
return self._provider
provider_config = self._config.get_provider_config(self._model_name)
model_config = self._config.get_model_config(self._model_name)
# Override parameters
temp = temperature if temperature is not None else model_config.temperature
max_tok = max_tokens if max_tokens is not None else model_config.max_tokens
if provider_config.type == "openai":
return OpenAIProvider(
api_key=provider_config.api_key.get_secret_value(),
model=model_config.model,
base_url=provider_config.base_url,
timeout=provider_config.timeout,
temperature=temp,
max_tokens=max_tok,
)
else:
raise ValueError(f"Unsupported provider type: {provider_config.type}")
# Convenience functions for simple use cases
async def llm_complete(
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
api_key: Optional[str] = None,
model: str = "gpt-4",
base_url: str = "https://api.openai.com/v1",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> str:
"""Simple function for one-off LLM completions.
This is a convenience function for simple use cases where you don't
need to reuse a client instance.
Args:
user_prompt: The user message/prompt
system_prompt: The system prompt
api_key: API key (if not provided, must be set in env)
model: Model name (default: gpt-4)
base_url: API base URL
temperature: Sampling temperature
max_tokens: Maximum tokens to generate
Returns:
The response text
Example:
>>> response = await llm_complete(
... user_prompt="What is 2+2?",
... api_key="sk-...",
... model="gpt-4",
... )
>>> print(response)
"2+2 equals 4."
"""
provider = OpenAIProvider(
api_key=api_key,
model=model,
base_url=base_url,
)
client = LLMClient(provider=provider)
response = await client.complete(
user_prompt=user_prompt,
system_prompt=system_prompt,
temperature=temperature,
max_tokens=max_tokens,
)
return response.content
async def llm_stream(
user_prompt: str,
system_prompt: str = "You are a helpful assistant.",
api_key: Optional[str] = None,
model: str = "gpt-4",
base_url: str = "https://api.openai.com/v1",
temperature: Optional[float] = None,
max_tokens: Optional[int] = None,
) -> AsyncIterator[str]:
"""Simple function for one-off streaming LLM completions.
This is a convenience function for simple use cases where you don't
need to reuse a client instance.
Args:
user_prompt: The user message/prompt
system_prompt: The system prompt
api_key: API key (if not provided, must be set in env)
model: Model name (default: gpt-4)
base_url: API base URL
temperature: Sampling temperature
max_tokens: Maximum tokens to generate
Yields:
Response text chunks
Example:
>>> async for chunk in llm_stream(
... user_prompt="Write a haiku",
... api_key="sk-...",
... ):
... print(chunk, end="")
"""
    import os

    # Fall back to the environment, as documented above; OpenAIProvider
    # expects a str, not Optional[str].
    provider = OpenAIProvider(
        api_key=api_key or os.environ.get("OPENAI_API_KEY", ""),
        model=model,
        base_url=base_url,
    )
client = LLMClient(provider=provider)
async for chunk in client.stream(
user_prompt=user_prompt,
system_prompt=system_prompt,
temperature=temperature,
max_tokens=max_tokens,
):
yield chunk
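Both helpers follow the same shape: build a provider, wrap it in a client, delegate. A runnable sketch of that wrapper pattern with a stubbed provider (`StubProvider`, `complete`, and the canned reply are illustrative stand-ins, not the AgentLite API):

```python
import asyncio
from collections.abc import AsyncIterator


class StubProvider:
    """Stand-in provider that streams a canned reply word by word."""

    def __init__(self, reply: str):
        self.reply = reply

    async def stream(self, prompt: str) -> AsyncIterator[str]:
        for word in self.reply.split():
            yield word


async def complete(provider: StubProvider, prompt: str) -> str:
    # The non-streaming helper is just the streaming one, fully drained.
    return " ".join([chunk async for chunk in provider.stream(prompt)])


print(asyncio.run(complete(StubProvider("2+2 equals 4."), "What is 2+2?")))
# → 2+2 equals 4.
```

The real `llm_complete`/`llm_stream` differ only in that they drain (or forward) an `LLMClient` instead of the provider directly.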


@@ -0,0 +1,212 @@
"""MCP (Model Context Protocol) integration for AgentLite.
This module provides integration with MCP servers, allowing agents to use
tools from external MCP-compatible servers.
"""
from __future__ import annotations
from typing import TYPE_CHECKING, Any
from agentlite.tool import CallableTool, ToolOk, ToolResult, ToolError
if TYPE_CHECKING:
pass
class MCPClient:
"""Client for connecting to MCP servers.
This client allows you to connect to MCP servers and load their tools
into AgentLite agents.
Example:
>>> client = MCPClient()
>>> await client.connect_stdio(
... "npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
... )
>>> tools = await client.load_tools()
>>> agent = Agent(provider=provider, tools=tools)
"""
def __init__(self):
"""Initialize the MCP client."""
self._client: Any | None = None
self._connected = False
def _check_fastmcp(self) -> None:
"""Check if fastmcp is installed."""
try:
import fastmcp # noqa: F401
except ImportError as e:
raise ImportError(
"MCP support requires 'fastmcp' package. Install with: pip install agentlite[mcp]"
) from e
async def connect_stdio(
self,
command: str,
args: list[str] | None = None,
env: dict[str, str] | None = None,
) -> None:
"""Connect to an MCP server via stdio.
Args:
command: The command to run.
args: Optional arguments for the command.
env: Optional environment variables.
Raises:
RuntimeError: If already connected.
ConnectionError: If the connection fails.
"""
if self._connected:
raise RuntimeError("Already connected to an MCP server")
try:
            from fastmcp import Client
            from fastmcp.client.transports import StdioTransport

            # StdioTransport runs an arbitrary command (e.g. "npx", "uvx");
            # PythonStdioTransport only launches Python scripts directly.
            transport = StdioTransport(
                command=command,
                args=args or [],
                env=env,
            )
self._client = Client(transport)
self._connected = True
except Exception as e:
raise ConnectionError(f"Failed to connect to MCP server: {e}") from e
async def connect_sse(
self,
url: str,
headers: dict[str, str] | None = None,
) -> None:
"""Connect to an MCP server via Server-Sent Events (SSE).
Args:
url: The SSE endpoint URL.
headers: Optional headers to include in requests.
Raises:
RuntimeError: If already connected.
ConnectionError: If the connection fails.
"""
if self._connected:
raise RuntimeError("Already connected to an MCP server")
try:
from fastmcp import Client
from fastmcp.client.transports import SSETransport
transport = SSETransport(url=url, headers=headers)
self._client = Client(transport)
self._connected = True
except Exception as e:
raise ConnectionError(f"Failed to connect to MCP server: {e}") from e
async def load_tools(self) -> list[CallableTool]:
"""Load tools from the connected MCP server.
Returns:
A list of CallableTool instances wrapping the MCP tools.
Raises:
RuntimeError: If not connected to an MCP server.
"""
if not self._connected or self._client is None:
raise RuntimeError("Not connected to an MCP server")
tools: list[CallableTool] = []
try:
async with self._client as client:
mcp_tools = await client.list_tools()
for mcp_tool in mcp_tools:
tool = _MCPTool(
client=self._client,
name=mcp_tool.name,
description=mcp_tool.description or "No description provided",
parameters=mcp_tool.inputSchema,
)
tools.append(tool)
except Exception as e:
raise RuntimeError(f"Failed to load MCP tools: {e}") from e
return tools
async def close(self) -> None:
"""Close the connection to the MCP server."""
if self._client is not None:
try:
await self._client.close()
except Exception:
pass
finally:
self._client = None
self._connected = False
async def __aenter__(self) -> MCPClient:
"""Async context manager entry."""
return self
async def __aexit__(self, *args: Any) -> None:
"""Async context manager exit."""
await self.close()
class _MCPTool(CallableTool):
"""Wrapper for MCP tools."""
def __init__(
self,
client: Any,
name: str,
description: str,
parameters: dict[str, Any],
):
"""Initialize the MCP tool wrapper.
Args:
client: The MCP client.
name: The tool name.
description: The tool description.
parameters: The JSON schema for tool parameters.
"""
self._client = client
super().__init__(
name=name,
description=description,
parameters=parameters,
)
async def __call__(self, **kwargs: Any) -> ToolResult:
"""Execute the MCP tool.
Args:
**kwargs: The tool arguments.
Returns:
The tool result.
"""
try:
async with self._client as client:
result = await client.call_tool(self.name, kwargs)
# Convert MCP result to ToolResult
content_parts = []
for content in result.content:
if hasattr(content, "text"):
content_parts.append(content.text)
else:
content_parts.append(str(content))
output = "\n".join(content_parts)
if result.isError:
return ToolError(message=output or "Tool execution failed")
return ToolOk(output=output)
except Exception as e:
return ToolError(message=f"MCP tool execution failed: {e}")
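The result conversion at the bottom of `_MCPTool.__call__` is easy to exercise in isolation. A self-contained sketch with stand-in types (`FakeContent`, `FakeResult`, and the simplified `ToolOk`/`ToolError` dataclasses are illustrative, not the real AgentLite types):

```python
from dataclasses import dataclass


@dataclass
class ToolOk:
    output: str


@dataclass
class ToolError:
    message: str


@dataclass
class FakeContent:
    text: str


@dataclass
class FakeResult:
    content: list
    isError: bool


def convert_result(result: FakeResult):
    """Mirror of the MCP-to-ToolResult conversion in _MCPTool.__call__."""
    parts = []
    for content in result.content:
        # Text content carries a .text attribute; anything else is stringified.
        parts.append(content.text if hasattr(content, "text") else str(content))
    output = "\n".join(parts)
    if result.isError:
        return ToolError(message=output or "Tool execution failed")
    return ToolOk(output=output)
```

An error result with no content still produces a usable message, which is why the `or "Tool execution failed"` fallback exists.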


@@ -0,0 +1,292 @@
"""Core message types for AgentLite.
This module defines the message and content part types used throughout
AgentLite for communication with LLM providers.
"""
from __future__ import annotations
from abc import ABC
from typing import Any, ClassVar, Literal, Optional, Union, cast
from pydantic import BaseModel, GetCoreSchemaHandler, field_validator
from pydantic_core import core_schema
Role = Literal["system", "user", "assistant", "tool"]
class MergeableMixin:
"""Mixin for content parts that can be merged during streaming."""
def merge_in_place(self, other: Any) -> bool:
"""Merge another part into this one.
Args:
other: The part to merge into this one.
Returns:
True if the merge was successful, False otherwise.
"""
return False
class ContentPart(BaseModel, ABC, MergeableMixin):
"""Base class for message content parts.
ContentPart uses a registry pattern to allow polymorphic validation
of content part subclasses based on the 'type' field.
Example:
>>> text = TextPart(text="Hello")
>>> print(text.model_dump())
{'type': 'text', 'text': 'Hello'}
"""
__content_part_registry: ClassVar[dict[str, type["ContentPart"]]] = {}
type: str
def __init_subclass__(cls, **kwargs: Any) -> None:
super().__init_subclass__(**kwargs)
type_value = getattr(cls, "type", None)
if type_value is None or not isinstance(type_value, str):
raise ValueError(
f"ContentPart subclass {cls.__name__} must have a 'type' field of type str"
)
cls.__content_part_registry[type_value] = cls
@classmethod
def __get_pydantic_core_schema__(
cls, source_type: Any, handler: GetCoreSchemaHandler
) -> core_schema.CoreSchema:
"""Custom schema for polymorphic ContentPart validation."""
if cls.__name__ == "ContentPart":
def validate_content_part(value: Any) -> Any:
"""Validate a value as a ContentPart subclass."""
# Already an instance
if hasattr(value, "__class__") and issubclass(value.__class__, cls):
return value
# Dict with type field - dispatch to subclass
if isinstance(value, dict) and "type" in value:
type_value = cast(dict[str, Any], value).get("type")
if not isinstance(type_value, str):
raise ValueError(f"Cannot validate {value} as ContentPart")
target_class = cls.__content_part_registry.get(type_value)
if target_class is None:
raise ValueError(f"Unknown content part type: {type_value}")
return target_class.model_validate(value)
raise ValueError(f"Cannot validate {value} as ContentPart")
return core_schema.no_info_plain_validator_function(validate_content_part)
# For subclasses, use default schema
return handler(source_type)
class TextPart(ContentPart):
"""Text content part.
Attributes:
text: The text content.
Example:
>>> part = TextPart(text="Hello, world!")
>>> part.model_dump()
{'type': 'text', 'text': 'Hello, world!'}
"""
type: str = "text"
text: str
def merge_in_place(self, other: Any) -> bool:
"""Merge another TextPart into this one."""
if not isinstance(other, TextPart):
return False
self.text += other.text
return True
class ImageURLPart(ContentPart):
"""Image URL content part.
Attributes:
image_url: The image URL configuration.
Example:
>>> part = ImageURLPart(
... image_url=ImageURLPart.ImageURL(url="https://example.com/image.png")
... )
"""
class ImageURL(BaseModel):
"""Image URL configuration."""
url: str
"""The URL of the image. Can be a data URI like 'data:image/png;base64,...'."""
detail: Optional[str] = None
"""The detail level: 'low', 'high', or 'auto'."""
type: str = "image_url"
image_url: ImageURL
class AudioURLPart(ContentPart):
"""Audio URL content part.
Attributes:
audio_url: The audio URL configuration.
"""
class AudioURL(BaseModel):
"""Audio URL configuration."""
url: str
"""The URL of the audio. Can be a data URI like 'data:audio/mp3;base64,...'."""
type: str = "audio_url"
audio_url: AudioURL
class ToolCall(BaseModel, MergeableMixin):
"""A tool call requested by the assistant.
Attributes:
id: Unique identifier for the tool call.
function: The function to call.
Example:
>>> call = ToolCall(
... id="call_123",
... function=ToolCall.FunctionBody(name="add", arguments='{"a": 1, "b": 2}'),
... )
"""
class FunctionBody(BaseModel):
"""Function call details."""
name: str
"""The name of the tool to call."""
arguments: str
"""The arguments as a JSON string."""
type: Literal["function"] = "function"
id: str
function: FunctionBody
def merge_in_place(self, other: Any) -> bool:
"""Merge a ToolCallPart into this ToolCall."""
if not isinstance(other, ToolCallPart):
return False
if other.arguments_part:
self.function.arguments += other.arguments_part
return True
class ToolCallPart(BaseModel, MergeableMixin):
"""A partial tool call during streaming.
This represents a chunk of a tool call that is being streamed.
Attributes:
arguments_part: A chunk of the arguments JSON.
"""
arguments_part: Optional[str] = None
def merge_in_place(self, other: Any) -> bool:
"""Merge another ToolCallPart into this one."""
if not isinstance(other, ToolCallPart):
return False
if other.arguments_part:
if self.arguments_part is None:
self.arguments_part = other.arguments_part
else:
self.arguments_part += other.arguments_part
return True
class Message(BaseModel):
"""A message in a conversation.
Attributes:
role: The role of the message sender.
content: The content parts of the message.
tool_calls: Tool calls requested by the assistant (only for assistant role).
tool_call_id: The ID of the tool call being responded to (only for tool role).
name: Optional name for the sender.
Example:
>>> msg = Message(role="user", content="Hello!")
>>> print(msg.extract_text())
Hello!
"""
role: Role
content: list[ContentPart]
tool_calls: Optional[list[ToolCall]] = None
tool_call_id: Optional[str] = None
name: Optional[str] = None
@field_validator("content", mode="before")
@classmethod
def _coerce_content(cls, value: Any) -> Any:
"""Coerce string content to TextPart."""
if isinstance(value, str):
return [TextPart(text=value)]
return value
def __init__(
self,
*,
role: Role,
content: Union[list[ContentPart], ContentPart, str],
tool_calls: Optional[list[ToolCall]] = None,
tool_call_id: Optional[str] = None,
name: Optional[str] = None,
) -> None:
"""Initialize a message.
Args:
role: The role of the message sender.
content: The content, can be a string, single ContentPart, or list.
tool_calls: Tool calls for assistant messages.
tool_call_id: ID of the tool call being responded to.
name: Optional name for the sender.
"""
if isinstance(content, str):
content = [TextPart(text=content)]
elif isinstance(content, ContentPart):
content = [content]
super().__init__(
role=role,
content=content,
tool_calls=tool_calls,
tool_call_id=tool_call_id,
name=name,
)
def extract_text(self, sep: str = "") -> str:
"""Extract all text from the message content.
Args:
sep: Separator to use between text parts.
Returns:
Concatenated text from all TextPart instances.
"""
return sep.join(part.text for part in self.content if isinstance(part, TextPart))
def has_tool_calls(self) -> bool:
"""Check if this message contains tool calls.
Returns:
True if the message has tool calls.
"""
return self.tool_calls is not None and len(self.tool_calls) > 0
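The registry pattern behind `ContentPart` — subclasses self-register under their `type` string, and validation dispatches on that key — can be sketched without Pydantic. All names below are illustrative:

```python
class Part:
    """Base class: subclasses register themselves by their 'type' string."""

    registry: dict = {}
    type = ""

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if not cls.type:
            raise ValueError(f"{cls.__name__} must define a non-empty 'type'")
        Part.registry[cls.type] = cls

    @classmethod
    def validate(cls, value: dict) -> "Part":
        # Dispatch on the 'type' key, like ContentPart's custom core schema.
        target = cls.registry.get(value.get("type", ""))
        if target is None:
            raise ValueError(f"Unknown part type: {value.get('type')}")
        return target(**{k: v for k, v in value.items() if k != "type"})


class Text(Part):
    type = "text"

    def __init__(self, text: str):
        self.text = text
```

In the real module the dispatch lives in `__get_pydantic_core_schema__` so that `Message(content=[{"type": "text", "text": "hi"}])` validates into `TextPart` instances automatically.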


@@ -0,0 +1,161 @@
"""Chat provider protocol and implementations for AgentLite.
This module defines the ChatProvider protocol that abstracts LLM providers
and provides the base types for streaming responses.
"""
from __future__ import annotations
from collections.abc import AsyncIterator, Sequence
from typing import Protocol, Union, runtime_checkable
from pydantic import BaseModel
from agentlite.message import ContentPart, Message, ToolCall, ToolCallPart
from agentlite.tool import Tool
class TokenUsage(BaseModel):
"""Token usage statistics for a generation.
Attributes:
input_tokens: Number of input tokens used.
output_tokens: Number of output tokens generated.
cached_tokens: Number of cached input tokens (if applicable).
Example:
>>> usage = TokenUsage(input_tokens=100, output_tokens=50)
>>> print(usage.total)
150
"""
input_tokens: int
"""Number of input tokens used."""
output_tokens: int
"""Number of output tokens generated."""
cached_tokens: int = 0
"""Number of cached input tokens (if applicable)."""
@property
def total(self) -> int:
"""Total tokens used (input + output)."""
return self.input_tokens + self.output_tokens
StreamedPart = Union[ContentPart, ToolCall, ToolCallPart]
@runtime_checkable
class StreamedMessage(Protocol):
"""Protocol for streamed message responses.
This protocol defines the interface for streaming responses from LLM
providers. Implementations should yield content parts as they arrive.
Example:
>>> stream = await provider.generate(system_prompt, tools, history)
>>> async for part in stream:
... print(part)
"""
def __aiter__(self) -> AsyncIterator[StreamedPart]:
"""Return an async iterator over the streamed parts."""
...
@property
def id(self) -> str | None:
"""The unique identifier of the message, if available."""
...
@property
def usage(self) -> TokenUsage | None:
"""Token usage statistics, if available."""
...
class ChatProviderError(Exception):
"""Base exception for chat provider errors."""
def __init__(self, message: str):
super().__init__(message)
self.message = message
class APIConnectionError(ChatProviderError):
"""Error connecting to the API."""
pass
class APITimeoutError(ChatProviderError):
"""API request timed out."""
pass
class APIStatusError(ChatProviderError):
"""API returned an error status code.
Attributes:
status_code: The HTTP status code returned.
"""
def __init__(self, status_code: int, message: str):
super().__init__(message)
self.status_code = status_code
class APIEmptyResponseError(ChatProviderError):
"""API returned an empty response."""
pass
@runtime_checkable
class ChatProvider(Protocol):
"""Protocol for LLM chat providers.
This protocol defines the interface that all LLM providers must implement.
It supports both streaming and non-streaming generation.
Example:
>>> provider = OpenAIProvider(api_key="sk-...", model="gpt-4")
>>> stream = await provider.generate(
... system_prompt="You are helpful.",
... tools=[],
... history=[Message(role="user", content="Hello!")],
... )
>>> async for part in stream:
... print(part)
"""
@property
def model_name(self) -> str:
"""The name of the model being used."""
...
async def generate(
self,
system_prompt: str,
tools: Sequence[Tool],
history: Sequence[Message],
) -> StreamedMessage:
"""Generate a response from the LLM.
Args:
system_prompt: The system prompt to use.
tools: Available tools for the model to call.
history: The conversation history.
Returns:
A streamed message that yields content parts.
Raises:
APIConnectionError: If the connection fails.
APITimeoutError: If the request times out.
APIStatusError: If the API returns an error status.
APIEmptyResponseError: If the response is empty.
"""
...
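Because `ChatProvider` and `StreamedMessage` are structural protocols, any object with the right shape satisfies them. A minimal in-memory sketch (names are illustrative, and the stream yields plain strings instead of content parts):

```python
import asyncio
from collections.abc import AsyncIterator


class EchoStream:
    """Minimal StreamedMessage-shaped object: async-iterable with id/usage."""

    def __init__(self, text: str):
        self._text = text
        self.id = "msg_demo"
        self.usage = None

    def __aiter__(self) -> AsyncIterator[str]:
        async def gen():
            for word in self._text.split():
                yield word

        return gen()


class EchoProvider:
    """Provider stub satisfying ChatProvider's generate() shape."""

    model_name = "echo-1"

    async def generate(self, system_prompt, tools, history) -> EchoStream:
        # Echo the last user message back as the "model" response.
        return EchoStream(history[-1] if history else "")


async def run_demo() -> str:
    stream = await EchoProvider().generate("You are helpful.", [], ["hello world"])
    return " ".join([part async for part in stream])
```

Stubs like this are how the agent loop can be tested without any network traffic.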


@@ -0,0 +1,5 @@
"""Providers package for AgentLite."""
from agentlite.providers.openai import OpenAIProvider
__all__ = ["OpenAIProvider"]


@@ -0,0 +1,305 @@
"""OpenAI provider implementation for AgentLite.
This module provides an OpenAI-compatible chat provider that works with
the OpenAI API and any OpenAI-compatible API (e.g., Moonshot, Together, etc.).
"""
from __future__ import annotations
import uuid
from collections.abc import AsyncIterator, Sequence
from typing import TYPE_CHECKING, Any
import httpx
from openai import AsyncOpenAI, OpenAIError
from openai.types.chat import (
ChatCompletionChunk,
ChatCompletionMessageParam,
ChatCompletionToolParam,
)
from agentlite.message import (
Message,
TextPart,
ToolCall,
ToolCallPart,
)
from agentlite.provider import (
APIConnectionError,
APIStatusError,
APITimeoutError,
ChatProviderError,
StreamedMessage,
TokenUsage,
)
from agentlite.tool import Tool
if TYPE_CHECKING:
pass
def _convert_tool_to_openai(tool: Tool) -> ChatCompletionToolParam:
"""Convert a Tool to OpenAI tool format.
Args:
tool: The tool to convert.
Returns:
The OpenAI tool format.
"""
return {
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": tool.parameters,
},
}
def _convert_message_to_openai(message: Message) -> ChatCompletionMessageParam:
"""Convert a Message to OpenAI message format.
Args:
message: The message to convert.
Returns:
The OpenAI message format.
"""
# Start with basic message
result: dict[str, Any] = {
"role": message.role,
}
# Handle content
if message.role == "tool":
# Tool response message
result["content"] = message.extract_text()
result["tool_call_id"] = message.tool_call_id
elif message.has_tool_calls():
# Assistant message with tool calls
result["content"] = message.extract_text() or None
result["tool_calls"] = [
{
"id": tc.id,
"type": "function",
"function": {
"name": tc.function.name,
"arguments": tc.function.arguments,
},
}
for tc in (message.tool_calls or [])
]
else:
# Regular message
content_parts = []
for part in message.content:
if isinstance(part, TextPart):
content_parts.append(part.text)
result["content"] = "\n".join(content_parts) if content_parts else None
return result # type: ignore[return-value]
class OpenAIStreamedMessage:
"""Streamed message implementation for OpenAI.
This class wraps the OpenAI streaming response and converts chunks
into AgentLite content parts.
"""
def __init__(self, response: AsyncIterator[ChatCompletionChunk]):
"""Initialize the streamed message.
Args:
response: The OpenAI streaming response.
"""
self._response = response
self._id: str | None = None
self._usage = TokenUsage(input_tokens=0, output_tokens=0)
def __aiter__(self) -> AsyncIterator[Any]:
"""Return an async iterator over the streamed parts."""
return self._iter_chunks()
async def _iter_chunks(self) -> AsyncIterator[Any]:
"""Iterate over response chunks and yield content parts."""
try:
async for chunk in self._response:
# Track message ID
if chunk.id:
self._id = chunk.id
# Track usage if available
if chunk.usage:
self._usage = TokenUsage(
input_tokens=chunk.usage.prompt_tokens,
output_tokens=chunk.usage.completion_tokens,
)
# Skip empty choices
if not chunk.choices:
continue
delta = chunk.choices[0].delta
# Yield text content
if delta.content:
yield TextPart(text=delta.content)
# Yield tool calls
if delta.tool_calls:
for tc in delta.tool_calls:
if tc.function:
if tc.function.name:
# New tool call
yield ToolCall(
id=tc.id or str(uuid.uuid4()),
function=ToolCall.FunctionBody(
name=tc.function.name,
arguments=tc.function.arguments or "",
),
)
elif tc.function.arguments:
# Continuation of tool call arguments
yield ToolCallPart(arguments_part=tc.function.arguments)
except (OpenAIError, httpx.HTTPError) as e:
raise _convert_error(e) from e
@property
def id(self) -> str | None:
"""The unique identifier of the message."""
return self._id
@property
def usage(self) -> TokenUsage | None:
"""Token usage statistics."""
return self._usage
class OpenAIProvider:
"""OpenAI-compatible chat provider.
This provider works with the OpenAI API and any OpenAI-compatible API
such as Moonshot, Together, Fireworks, etc.
Attributes:
model: The model name to use.
client: The underlying AsyncOpenAI client.
Example:
>>> provider = OpenAIProvider(
... api_key="sk-...",
... model="gpt-4",
... )
>>> stream = await provider.generate(
... system_prompt="You are helpful.",
... tools=[],
... history=[Message(role="user", content="Hello!")],
... )
"""
def __init__(
self,
*,
api_key: str,
model: str,
base_url: str | None = None,
timeout: float = 60.0,
**client_kwargs: Any,
):
"""Initialize the OpenAI provider.
Args:
api_key: The API key for authentication.
model: The model name to use (e.g., "gpt-4", "gpt-3.5-turbo").
base_url: Optional custom base URL for OpenAI-compatible APIs.
timeout: Request timeout in seconds.
**client_kwargs: Additional arguments passed to AsyncOpenAI.
"""
self.model = model
self.client = AsyncOpenAI(
api_key=api_key,
base_url=base_url,
timeout=timeout,
**client_kwargs,
)
@property
def model_name(self) -> str:
"""The name of the model being used."""
return self.model
async def generate(
self,
system_prompt: str,
tools: Sequence[Tool],
history: Sequence[Message],
) -> StreamedMessage:
"""Generate a response from the OpenAI API.
Args:
system_prompt: The system prompt to use.
tools: Available tools for the model to call.
history: The conversation history.
Returns:
A streamed message that yields content parts.
Raises:
APIConnectionError: If the connection fails.
APITimeoutError: If the request times out.
APIStatusError: If the API returns an error status.
APIEmptyResponseError: If the response is empty.
"""
# Build messages
messages: list[ChatCompletionMessageParam] = []
if system_prompt:
messages.append({"role": "system", "content": system_prompt})
for msg in history:
messages.append(_convert_message_to_openai(msg))
# Build tools
openai_tools = [_convert_tool_to_openai(t) for t in tools] if tools else None
try:
# Make streaming request
response = await self.client.chat.completions.create(
model=self.model,
messages=messages,
tools=openai_tools,
stream=True,
stream_options={"include_usage": True},
)
return OpenAIStreamedMessage(response) # type: ignore[arg-type]
except (OpenAIError, httpx.HTTPError) as e:
raise _convert_error(e) from e
def _convert_error(error: OpenAIError | httpx.HTTPError) -> ChatProviderError:
"""Convert an OpenAI or HTTP error to a ChatProviderError.
Args:
error: The error to convert.
Returns:
The appropriate ChatProviderError subclass.
"""
    if isinstance(error, OpenAIError):
        # These classes live at the top level of the openai package, not as
        # attributes of OpenAIError. Check APITimeoutError first: it subclasses
        # APIConnectionError, so the broader check would shadow it. Imported
        # locally to avoid clashing with the agentlite exception names above.
        import openai

        if isinstance(error, openai.APITimeoutError):
            return APITimeoutError(str(error))
        elif isinstance(error, openai.APIConnectionError):
            return APIConnectionError(str(error))
        elif isinstance(error, openai.APIStatusError):
            return APIStatusError(error.status_code, str(error))
if isinstance(error, httpx.TimeoutException):
return APITimeoutError(str(error))
elif isinstance(error, httpx.NetworkError):
return APIConnectionError(str(error))
elif isinstance(error, httpx.HTTPStatusError):
return APIStatusError(error.response.status_code, str(error))
return ChatProviderError(str(error))
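A consumer of `OpenAIStreamedMessage` has to fold the flat delta stream back into a message: text parts concatenate, each `ToolCall` opens a new call, and each `ToolCallPart` extends the arguments of the most recent one. A sketch with simplified stand-in dataclasses (illustrative, not the real AgentLite types):

```python
from dataclasses import dataclass


@dataclass
class TextPart:
    text: str


@dataclass
class ToolCall:
    id: str
    name: str
    arguments: str


@dataclass
class ToolCallPart:
    arguments_part: str


def assemble(parts):
    """Fold a flat stream of deltas into final text and complete tool calls."""
    text, calls = [], []
    for part in parts:
        if isinstance(part, TextPart):
            text.append(part.text)
        elif isinstance(part, ToolCall):
            calls.append(part)
        elif isinstance(part, ToolCallPart) and calls:
            # Argument fragments always extend the most recent tool call.
            calls[-1].arguments += part.arguments_part
    return "".join(text), calls
```

This mirrors the `merge_in_place` contract from the message module: a `ToolCallPart` only ever merges into the `ToolCall` that preceded it in the stream.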


@@ -0,0 +1,72 @@
"""Skills system for AgentLite.
This module provides a comprehensive skill system similar to kimi-cli,
allowing agents to use modular, reusable skills defined in SKILL.md files.
Skills can be:
- **Standard**: Text-based instructions loaded as prompts
- **Flow**: Structured flowcharts (Mermaid/D2) for deterministic execution
Example:
>>> from pathlib import Path
>>> from agentlite.skills import discover_skills, SkillTool
>>> # Discover skills
>>> skills = discover_skills(Path("./skills"))
>>> skill_index = {s.name.lower(): s for s in skills}
>>> # Create skill tool
>>> skill_tool = SkillTool(skill_index, parent_agent=agent)
"""
from agentlite.skills.discovery import (
discover_skills,
discover_skills_from_roots,
get_default_skills_dirs,
index_skills_by_name,
parse_frontmatter,
parse_skill_text,
)
from agentlite.skills.flow_parser import (
FlowParseError,
parse_d2_flowchart,
parse_mermaid_flowchart,
)
from agentlite.skills.flow_runner import FlowExecutionError, FlowRunner
from agentlite.skills.models import (
Flow,
FlowEdge,
FlowNode,
FlowNodeKind,
Skill,
SkillType,
index_skills,
normalize_skill_name,
)
from agentlite.skills.skill_tool import SkillTool
__all__ = [
# Models
"Skill",
"Flow",
"FlowNode",
"FlowEdge",
"SkillType",
"FlowNodeKind",
# Discovery
"discover_skills",
"discover_skills_from_roots",
"get_default_skills_dirs",
"index_skills",
"index_skills_by_name",
"normalize_skill_name",
"parse_skill_text",
"parse_frontmatter",
# Flow parsing
"parse_mermaid_flowchart",
"parse_d2_flowchart",
"FlowParseError",
# Flow execution
"FlowRunner",
"FlowExecutionError",
# Tool
"SkillTool",
]


@@ -0,0 +1,307 @@
"""Skill discovery and loading utilities for AgentLite.
This module provides functions for discovering and loading skills from
directory structures, similar to kimi-cli's skill system.
"""
from __future__ import annotations
from collections.abc import Iterable
from pathlib import Path
from typing import TYPE_CHECKING, Dict, Optional
import yaml
if TYPE_CHECKING:
from agentlite.skills.models import Flow, Skill
def parse_frontmatter(content: str) -> Optional[Dict]:
"""Parse YAML frontmatter from markdown content.
Args:
content: The file content that may contain frontmatter
Returns:
Dictionary of frontmatter data, or None if no frontmatter found
Example:
>>> content = '''---
... name: my-skill
... description: Does something useful
... ---
... # Skill Content
... '''
>>> parse_frontmatter(content)
{'name': 'my-skill', 'description': 'Does something useful'}
"""
if not content.startswith("---"):
return None
try:
# Find the end of frontmatter
end_idx = content.find("\n---", 3)
if end_idx == -1:
return None
# Extract and parse YAML
frontmatter_text = content[3:end_idx].strip()
return yaml.safe_load(frontmatter_text) or {}
except Exception:
return None
def parse_flow_from_skill(content: str) -> "Flow":
"""Parse a flowchart from skill content.
Looks for mermaid or d2 code blocks and parses them into Flow objects.
Args:
content: The SKILL.md content containing a flowchart
Returns:
Parsed Flow object
Raises:
ValueError: If no valid flowchart found
"""
from agentlite.skills.flow_parser import (
FlowParseError,
parse_d2_flowchart,
parse_mermaid_flowchart,
)
# Extract code blocks
code_blocks = _extract_code_blocks(content)
for lang, code in code_blocks:
try:
if lang == "mermaid":
return parse_mermaid_flowchart(code)
elif lang == "d2":
return parse_d2_flowchart(code)
except FlowParseError:
continue
raise ValueError("No valid mermaid or d2 flowchart found in skill content")
def _extract_code_blocks(content: str) -> list[tuple[str, str]]:
"""Extract fenced code blocks from markdown content.
Args:
content: Markdown content
Returns:
List of (language, code) tuples
"""
blocks = []
in_block = False
current_lang = ""
current_code = []
fence_char = ""
fence_len = 0
for line in content.split("\n"):
stripped = line.lstrip()
if not in_block:
# Check for fence start
if stripped.startswith("```") or stripped.startswith("~~~"):
fence_char = stripped[0]
fence_len = len(stripped) - len(stripped.lstrip(fence_char))
if fence_len >= 3:
# Extract language
info = stripped[fence_len:].strip()
current_lang = info.split()[0] if info else ""
in_block = True
current_code = []
else:
# Check for fence end
if stripped.startswith(fence_char * fence_len):
blocks.append((current_lang, "\n".join(current_code)))
in_block = False
current_lang = ""
current_code = []
else:
current_code.append(line)
return blocks
def parse_skill_text(content: str, dir_path: Path) -> "Skill":
"""Parse skill content into a Skill object.
Args:
content: The SKILL.md content
dir_path: Path to the skill directory
Returns:
Parsed Skill object
Raises:
ValueError: If the skill content is invalid
"""
from agentlite.skills.flow_parser import FlowParseError
from agentlite.skills.models import Skill
frontmatter = parse_frontmatter(content) or {}
name = frontmatter.get("name") or dir_path.name
description = frontmatter.get("description") or "No description provided."
skill_type = frontmatter.get("type") or "standard"
if skill_type not in ("standard", "flow"):
raise ValueError(f'Invalid skill type "{skill_type}"')
# Parse flow if this is a flow-type skill
flow = None
if skill_type == "flow":
try:
flow = parse_flow_from_skill(content)
except (ValueError, FlowParseError) as e:
# Log warning and fall back to standard
import logging
logging.warning(
f"Failed to parse flow skill '{name}': {e}. Treating as standard skill."
)
skill_type = "standard"
flow = None
return Skill(
name=name,
description=description,
type=skill_type,
dir=dir_path,
flow=flow,
)
def discover_skills(skills_dir: Path) -> list["Skill"]:
"""Discover all skills in a directory.
Scans the directory for subdirectories containing SKILL.md files
and parses them into Skill objects.
Args:
skills_dir: Directory to scan for skills
Returns:
List of discovered Skill objects, sorted by name
Example:
>>> skills = discover_skills(Path("./skills"))
>>> for skill in skills:
... print(f"{skill.name}: {skill.description}")
"""
if not skills_dir.is_dir():
return []
skills: list[Skill] = []
for skill_dir in skills_dir.iterdir():
if not skill_dir.is_dir():
continue
skill_md = skill_dir / "SKILL.md"
if not skill_md.is_file():
continue
try:
content = skill_md.read_text(encoding="utf-8")
skills.append(parse_skill_text(content, skill_dir))
except Exception as e:
import logging
logging.warning(f"Failed to parse skill at {skill_md}: {e}")
continue
return sorted(skills, key=lambda s: s.name)
def discover_skills_from_roots(skills_dirs: Iterable[Path]) -> list["Skill"]:
"""Discover skills from multiple directory roots.
Skills from later directories will override skills with the same name
from earlier directories.
Args:
skills_dirs: Iterable of directories to scan
Returns:
List of unique Skill objects, sorted by name
Example:
>>> roots = [Path("./builtin"), Path("~/.config/skills").expanduser()]
>>> skills = discover_skills_from_roots(roots)
"""
from agentlite.skills.models import normalize_skill_name
skills_by_name: dict[str, "Skill"] = {}
for skills_dir in skills_dirs:
for skill in discover_skills(skills_dir):
# Later skills override earlier ones with same name
skills_by_name[normalize_skill_name(skill.name)] = skill
return sorted(skills_by_name.values(), key=lambda s: s.name)
def get_default_skills_dirs(work_dir: Path | None = None) -> list[Path]:
"""Get the default skill directory search paths.
Returns directories in priority order:
1. User-level: ~/.config/agents/skills/ (or alternatives)
2. Project-level: ./.agents/skills/ (or alternatives)
Args:
work_dir: Working directory for project-level search (default: current dir)
Returns:
List of existing skill directories
"""
dirs: list[Path] = []
# User-level candidates
user_candidates = [
Path.home() / ".config" / "agents" / "skills",
Path.home() / ".agents" / "skills",
Path.home() / ".kimi" / "skills",
]
for candidate in user_candidates:
if candidate.is_dir():
dirs.append(candidate)
break # Only use first existing
# Project-level candidates
if work_dir is None:
work_dir = Path.cwd()
project_candidates = [
work_dir / ".agents" / "skills",
work_dir / ".kimi" / "skills",
]
for candidate in project_candidates:
if candidate.is_dir():
dirs.append(candidate)
break # Only use first existing
return dirs
def index_skills_by_name(skills: Iterable["Skill"]) -> dict[str, "Skill"]:
"""Build a lookup table for skills by normalized name.
Args:
skills: Iterable of Skill objects
Returns:
Dictionary mapping normalized names to Skill objects
"""
from agentlite.skills.models import normalize_skill_name
return {normalize_skill_name(skill.name): skill for skill in skills}


@@ -0,0 +1,252 @@
"""Flowchart parsers for flow-type skills.
This module provides parsers for Mermaid and D2 flowchart syntax
to convert them into Flow objects that can be executed.
"""
from __future__ import annotations
import re
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from agentlite.skills.models import Flow, FlowEdge, FlowNode
class FlowParseError(ValueError):
"""Raised when flowchart parsing fails."""
pass
def parse_mermaid_flowchart(content: str) -> "Flow":
"""Parse a Mermaid flowchart into a Flow object.
Supports basic Mermaid flowchart syntax:
- Node definitions: `id[label]`, `id(label)`, `id{label}`
- Edges: `-->`, `---`, `-.->`
- Labeled edges: `-->|label|`, `-.->|label|`
- Special nodes: BEGIN(( )), END(( ))
Args:
content: Mermaid flowchart definition
Returns:
Flow object representing the flowchart
Raises:
FlowParseError: If parsing fails
Example:
>>> mermaid = '''
... flowchart TD
... BEGIN(( )) --> CHECK[Check input]
... CHECK --> VALID{Is valid?}
... VALID -->|Yes| PROCESS[Process]
... VALID -->|No| ERROR[Show error]
... PROCESS --> END(( ))
... ERROR --> END
... '''
>>> flow = parse_mermaid_flowchart(mermaid)
"""
from agentlite.skills.models import Flow, FlowEdge, FlowNode
nodes: dict[str, FlowNode] = {}
edges: list[FlowEdge] = []
# Node terms: an ID with an optional shape decoration
# id[label] - rectangle (task)
# id(label) - rounded (task)
# id{label} - diamond (decision)
# id(( )) - circle (begin/end)
# The ((label)) alternative must come first, or the lazy (label) branch
# consumes one of its parentheses.
node_term = (
r"(\w+)\s*" # node ID
r"(?:\(\((.*?)\)\)|" # ((label))
r"\[(.*?)\]|" # [label]
r"\((.*?)\)|" # (label)
r"\{(.*?)\})?" # {label}
)
node_pattern = re.compile(rf"^{node_term}\s*$")
# Edges: `A --> B`, `A -->|label| B`, `A -.-> B`. Either endpoint may carry
# an inline node definition (e.g. `BEGIN(( )) --> CHECK[Check input]`), the
# `|label|` part is optional, and the dot in `-.->` is escaped.
edge_pattern = re.compile(
rf"^{node_term}\s*"
r"(?:-->|---|-\.->)\s*" # arrow
r"(?:\|([^|]*)\|)?\s*" # optional |label|
rf"{node_term}\s*$"
)
for line in content.strip().split("\n"):
line = line.strip()
if not line or line.startswith("flowchart") or line.startswith("graph"):
continue
# Remove trailing punctuation
line = line.rstrip(";")
# Try to match an edge first; otherwise a standalone node definition.
# Either way, collect (id, decorations) pairs to register below.
endpoints = []
if edge_match := edge_pattern.match(line):
g = edge_match.groups()
src, label, dst = g[0], g[5], g[6]
edges.append(FlowEdge(src=src, dst=dst, label=label.strip() if label else None))
endpoints = [(src, g[1:5]), (dst, g[7:11])]
elif node_match := node_pattern.match(line):
g = node_match.groups()
endpoints = [(g[0], g[1:5])]
for node_id, (circle, rect, rounded, diamond) in endpoints:
if circle is None and rect is None and rounded is None and diamond is None:
# Bare reference with no decoration: default to a task node
if node_id not in nodes:
nodes[node_id] = FlowNode(id=node_id, label=node_id, kind="task")
continue
label = next(d for d in (circle, rect, rounded, diamond) if d is not None).strip() or node_id
# Determine node kind: conventional names win, then the shape
kind = "task"
if node_id.upper() in ("END", "STOP", "FINISH"):
kind = "end"
elif circle is not None or node_id.upper() in ("BEGIN", "START"):
kind = "begin"
elif diamond is not None:
kind = "decision"
nodes[node_id] = FlowNode(id=node_id, label=label, kind=kind)
# Build outgoing edge map
outgoing: dict[str, list[FlowEdge]] = {}
for edge in edges:
if edge.src not in outgoing:
outgoing[edge.src] = []
outgoing[edge.src].append(edge)
# Find begin and end nodes
begin_ids = [n.id for n in nodes.values() if n.kind == "begin"]
end_ids = [n.id for n in nodes.values() if n.kind == "end"]
if not begin_ids:
# Use first node if no explicit begin
begin_ids = [list(nodes.keys())[0]] if nodes else []
if not end_ids:
# Use last node if no explicit end
end_ids = [list(nodes.keys())[-1]] if nodes else []
if len(begin_ids) != 1:
raise FlowParseError(f"Expected exactly one BEGIN node, found {len(begin_ids)}")
if len(end_ids) != 1:
raise FlowParseError(f"Expected exactly one END node, found {len(end_ids)}")
return Flow(nodes=nodes, outgoing=outgoing, begin_id=begin_ids[0], end_id=end_ids[0])
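One subtlety worth calling out when parsing circle nodes: regex alternation tries branches left to right, so a `((label))` form must be tested before `(label)`, or the lazy inner group swallows a parenthesis. A standalone check (the two patterns here are illustrative, not the module's own):

```python
import re

# (label) tried before ((label)): the circle branch is never reached.
naive = re.compile(r"^(\w+)\s*(?:\((.*?)\)|\(\((.*?)\)\))")
# ((label)) tried first: circle nodes capture cleanly.
fixed = re.compile(r"^(\w+)\s*(?:\(\((.*?)\)\)|\((.*?)\))")

print(naive.match("BEGIN((start))").group(2))  # '(start' (wrong capture)
print(fixed.match("BEGIN((start))").group(2))  # 'start'
```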
def parse_d2_flowchart(content: str) -> "Flow":
"""Parse a D2 flowchart into a Flow object.
Supports basic D2 syntax:
- Node definitions: `id: label`
- Edges: `id1 -> id2` or `id1 -> id2: label`
- Special shapes: `id: {shape: circle}`
Args:
content: D2 flowchart definition
Returns:
Flow object representing the flowchart
Raises:
FlowParseError: If parsing fails
Example:
>>> d2 = '''
... BEGIN: {shape: circle}
... CHECK: Check input
... VALID: Is valid? {shape: diamond}
... PROCESS: Process
... ERROR: Show error
... END: {shape: circle}
...
... BEGIN -> CHECK
... CHECK -> VALID
... VALID -> PROCESS: Yes
... VALID -> ERROR: No
... PROCESS -> END
... ERROR -> END
... '''
>>> flow = parse_d2_flowchart(d2)
"""
from agentlite.skills.models import Flow, FlowEdge, FlowNode
nodes: dict[str, FlowNode] = {}
edges: list[FlowEdge] = []
# Node pattern: id: label or id: {shape: ...}
node_pattern = re.compile(r"^(\w+)\s*:\s*(.+)$")
# Edge pattern: src -> dst or src -> dst: label
edge_pattern = re.compile(r"^(\w+)\s*->\s*(\w+)(?:\s*:\s*(.+))?$")
for line in content.strip().split("\n"):
line = line.strip()
if not line:
continue
# Try edge first
edge_match = edge_pattern.match(line)
if edge_match:
src, dst, label = edge_match.groups()
edges.append(
FlowEdge(src=src.strip(), dst=dst.strip(), label=label.strip() if label else None)
)
continue
# Try node definition
node_match = node_pattern.match(line)
if node_match:
node_id, rest = node_match.groups()
rest = rest.strip()
# Check for shape definition
shape_match = re.search(r"\{shape:\s*(\w+)\}", rest)
shape = shape_match.group(1) if shape_match else None
# Extract label (remove shape definition)
label = re.sub(r"\{[^}]*\}", "", rest).strip()
if not label:
label = node_id
# Determine kind: conventional names win, then the declared shape
kind = "task"
if node_id.upper() in ("END", "STOP", "FINISH"):
kind = "end"
elif node_id.upper() in ("BEGIN", "START") or shape == "circle":
# Circles mark begin/end; an unnamed circle defaults to begin
kind = "begin"
elif shape == "diamond":
kind = "decision"
nodes[node_id] = FlowNode(id=node_id, label=label, kind=kind)
# Build outgoing edge map
outgoing: dict[str, list[FlowEdge]] = {}
for edge in edges:
if edge.src not in outgoing:
outgoing[edge.src] = []
outgoing[edge.src].append(edge)
# Find begin and end nodes
begin_ids = [n.id for n in nodes.values() if n.kind == "begin"]
end_ids = [n.id for n in nodes.values() if n.kind == "end"]
if not begin_ids:
begin_ids = [list(nodes.keys())[0]] if nodes else []
if not end_ids:
end_ids = [list(nodes.keys())[-1]] if nodes else []
if len(begin_ids) != 1:
raise FlowParseError(f"Expected exactly one BEGIN node, found {len(begin_ids)}")
if len(end_ids) != 1:
raise FlowParseError(f"Expected exactly one END node, found {len(end_ids)}")
return Flow(nodes=nodes, outgoing=outgoing, begin_id=begin_ids[0], end_id=end_ids[0])
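The D2 edge pattern does all the work for both labeled and unlabeled edges; a quick standalone illustration of how its capture groups come out (same regex shape as above):

```python
import re

# `src -> dst` with an optional `: label` suffix
edge_pattern = re.compile(r"^(\w+)\s*->\s*(\w+)(?:\s*:\s*(.+))?$")

lines = ["BEGIN -> CHECK", "VALID -> PROCESS: Yes"]
print([edge_pattern.match(line).groups() for line in lines])
# [('BEGIN', 'CHECK', None), ('VALID', 'PROCESS', 'Yes')]
```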


@@ -0,0 +1,200 @@
"""Flow runner for executing flow-type skills.
This module provides FlowRunner for executing flowchart-based skills
node by node, similar to kimi-cli's implementation.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from agentlite.agent import Agent
from agentlite.skills.models import Flow, FlowEdge, FlowNode
class FlowExecutionError(Exception):
"""Raised when flow execution fails."""
pass
class FlowRunner:
"""Executes flowchart-based skills.
FlowRunner executes a flowchart node by node, handling task nodes
and decision nodes appropriately.
For task nodes: Executes the node's label as a prompt
For decision nodes: Presents options and waits for user/agent choice
Example:
>>> from agentlite.skills.models import Flow, FlowNode, FlowEdge
>>> # Define a simple flow
>>> flow = Flow(
... nodes={
... "start": FlowNode(id="start", label="Start", kind="begin"),
... "task": FlowNode(id="task", label="Analyze code", kind="task"),
... "end": FlowNode(id="end", label="End", kind="end"),
... },
... outgoing={
... "start": [FlowEdge(src="start", dst="task")],
... "task": [FlowEdge(src="task", dst="end")],
... },
... begin_id="start",
... end_id="end",
... )
>>> runner = FlowRunner(flow, "my-flow")
>>> output = await runner.run(agent, "Additional context")
"""
def __init__(self, flow: "Flow", name: str = "flow"):
"""Initialize the flow runner.
Args:
flow: The flowchart to execute
name: Name of the flow (for logging/debugging)
"""
self._flow = flow
self._name = name
async def run(self, agent: "Agent", args: str = "") -> str:
"""Execute the flow.
Args:
agent: The agent to use for executing task nodes
args: Additional arguments/context for the flow
Returns:
The combined output from all executed nodes
Raises:
FlowExecutionError: If execution fails
"""
current_id = self._flow.begin_id
outputs: list[str] = []
steps = 0
max_steps = 100 # Prevent infinite loops
while steps < max_steps:
steps += 1
node = self._flow.nodes.get(current_id)
if node is None:
raise FlowExecutionError(f"Node '{current_id}' not found in flow")
# Get outgoing edges
edges = self._flow.outgoing.get(current_id, [])
# Handle different node types
if node.kind == "end":
# Flow complete
break
elif node.kind == "begin":
# Just move to next node
if not edges:
raise FlowExecutionError("BEGIN node has no outgoing edges")
current_id = edges[0].dst
continue
elif node.kind == "task":
# Execute task
output = await self._execute_task_node(agent, node, args)
if output:
outputs.append(output)
# Move to next node
if not edges:
raise FlowExecutionError(f"Task node '{current_id}' has no outgoing edges")
current_id = edges[0].dst
elif node.kind == "decision":
# Handle decision
choice = await self._execute_decision_node(agent, node, edges, args)
# Find the edge matching the choice
next_id = None
for edge in edges:
if edge.label and edge.label.lower() == choice.lower():
next_id = edge.dst
break
if next_id is None:
raise FlowExecutionError(
f"Invalid choice '{choice}' for decision node '{current_id}'"
)
current_id = next_id
else:
raise FlowExecutionError(f"Unknown node kind: {node.kind}")
if steps >= max_steps:
raise FlowExecutionError("Flow exceeded maximum steps (possible infinite loop)")
return "\n\n".join(outputs)
async def _execute_task_node(self, agent: "Agent", node: "FlowNode", args: str) -> str:
"""Execute a task node.
Args:
agent: The agent to use
node: The task node
args: Additional arguments
Returns:
The task output
"""
# Build prompt from node label and args
prompt = node.label
if args.strip():
prompt = f"{prompt}\n\nContext: {args.strip()}"
# Execute using agent
response = await agent.run(prompt)
return response
async def _execute_decision_node(
self, agent: "Agent", node: "FlowNode", edges: list["FlowEdge"], args: str
) -> str:
"""Execute a decision node.
Args:
agent: The agent to use
node: The decision node
edges: Available outgoing edges (choices)
args: Additional arguments
Returns:
The chosen option
"""
# Build prompt with choices
choices = [edge.label for edge in edges if edge.label]
prompt_lines = [
node.label,
"",
"Available options:",
*[f"- {choice}" for choice in choices],
"",
"Reply with one of the options above.",
]
if args.strip():
prompt_lines.extend(["", f"Context: {args.strip()}"])
prompt = "\n".join(prompt_lines)
# Get choice from agent
response = await agent.run(prompt)
# Extract choice from response (find matching option)
response_clean = response.strip().lower()
for choice in choices:
if choice.lower() in response_clean or response_clean in choice.lower():
return choice
# If no exact match, return the first choice as default
# (or could raise an error)
return choices[0] if choices else ""
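Stripped of the agent calls, the runner's control loop is a bounded walk over the outgoing-edge map. A minimal standalone sketch (plain dicts instead of the Flow models, task execution stubbed out with a string):

```python
nodes = {"start": "begin", "task": "task", "end": "end"}  # id -> kind
outgoing = {"start": ["task"], "task": ["end"]}  # id -> successor ids

def run_flow(begin: str = "start", max_steps: int = 100) -> list[str]:
    current, outputs, steps = begin, [], 0
    while steps < max_steps:
        steps += 1
        kind = nodes[current]
        if kind == "end":
            break  # flow complete
        if kind == "task":
            outputs.append(f"ran {current}")  # stand-in for agent.run(...)
        current = outgoing[current][0]  # follow the first outgoing edge
    else:
        raise RuntimeError("Flow exceeded maximum steps (possible infinite loop)")
    return outputs

print(run_flow())  # ['ran task']
```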


@@ -0,0 +1,154 @@
"""Skill system for AgentLite.
This module provides a skill system similar to kimi-cli, allowing agents
to use modular, reusable skills defined in SKILL.md files.
Skills can be:
- Standard: Text-based instructions loaded as prompts
- Flow: Structured flowcharts (Mermaid/D2) for deterministic execution
Example:
>>> from agentlite.skills import Skill, discover_skills
>>> skills = discover_skills(Path("./skills"))
>>> for skill in skills:
... print(f"{skill.name}: {skill.description}")
"""
from __future__ import annotations
from collections.abc import Iterable
from pathlib import Path
from typing import Literal, Optional
from pydantic import BaseModel, Field
SkillType = Literal["standard", "flow"]
FlowNodeKind = Literal["begin", "end", "task", "decision"]
class FlowNode(BaseModel):
"""A node in a flowchart.
Attributes:
id: Unique identifier for the node
label: Display text or content for the node
kind: Type of node (begin, end, task, decision)
"""
id: str = Field(description="Unique node identifier")
label: str = Field(description="Node display text")
kind: FlowNodeKind = Field(description="Node type")
class FlowEdge(BaseModel):
"""An edge connecting two nodes in a flowchart.
Attributes:
src: Source node ID
dst: Destination node ID
label: Optional label for the edge (used for decision branches)
"""
src: str = Field(description="Source node ID")
dst: str = Field(description="Destination node ID")
label: Optional[str] = Field(default=None, description="Edge label for decisions")
class Flow(BaseModel):
"""A flowchart defining a structured workflow.
Flow skills use flowcharts to define deterministic, step-by-step
workflows that the agent executes node by node.
Attributes:
nodes: Dictionary mapping node IDs to FlowNode objects
outgoing: Dictionary mapping node IDs to their outgoing edges
begin_id: ID of the start node
end_id: ID of the end node
"""
nodes: dict[str, FlowNode] = Field(description="Node ID to node mapping")
outgoing: dict[str, list[FlowEdge]] = Field(description="Node outgoing edges")
begin_id: str = Field(description="Start node ID")
end_id: str = Field(description="End node ID")
class Skill(BaseModel):
"""A skill definition for AgentLite.
Skills are modular, reusable capabilities defined in SKILL.md files.
They can be standard (text-based) or flow-based (structured workflows).
Attributes:
name: Unique skill name
description: When and what the skill does (used for triggering)
type: Skill type - "standard" or "flow"
dir: Directory containing the skill files
flow: Flow definition (only for flow-type skills)
Example SKILL.md:
---
name: code-reviewer
description: Review code for bugs, style issues, and best practices
type: standard
---
# Code Reviewer
When reviewing code:
1. Check for syntax errors
2. Verify style guidelines
3. Suggest improvements
"""
name: str = Field(description="Unique skill name")
description: str = Field(description="Skill description and triggering criteria")
type: SkillType = Field(default="standard", description="Skill type")
dir: Path = Field(description="Skill directory path")
flow: Optional[Flow] = Field(default=None, description="Flow definition for flow-type skills")
@property
def skill_md_file(self) -> Path:
"""Path to the SKILL.md file."""
return self.dir / "SKILL.md"
def read_content(self) -> str:
"""Read the full SKILL.md content.
Returns:
The content of the SKILL.md file
Raises:
FileNotFoundError: If SKILL.md doesn't exist
"""
return self.skill_md_file.read_text(encoding="utf-8").strip()
def normalize_skill_name(name: str) -> str:
"""Normalize a skill name for lookup.
Args:
name: The skill name to normalize
Returns:
Lowercase version of the name for case-insensitive lookup
"""
return name.casefold()
def index_skills(skills: Iterable[Skill]) -> dict[str, Skill]:
"""Build a lookup table for skills by normalized name.
Args:
skills: Iterable of Skill objects
Returns:
Dictionary mapping normalized names to Skill objects
Example:
>>> skills = [Skill(name="CodeReview", ...), Skill(name="TestWriter", ...)]
>>> index = index_skills(skills)
>>> index["codereview"].name
"CodeReview"
"""
return {normalize_skill_name(skill.name): skill for skill in skills}
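Together, `index_skills` and `normalize_skill_name` give case-insensitive lookup while preserving the original casing on the stored object. A self-contained sketch using plain names in place of Skill models:

```python
def normalize_skill_name(name: str) -> str:
    # casefold() is a more aggressive lower() (handles e.g. German ß)
    return name.casefold()

skills = ["CodeReview", "TestWriter"]
index = {normalize_skill_name(name): name for name in skills}

print(index["codereview"])  # CodeReview
print("testwriter" in index)  # True
```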


@@ -0,0 +1,177 @@
"""Skill tool for AgentLite.
This module provides a tool for executing skills within an agent.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
if TYPE_CHECKING:
from agentlite.agent import Agent
from agentlite.skills.models import Skill
class SkillParams(BaseModel):
"""Parameters for executing a skill."""
skill_name: str = Field(description="Name of the skill to execute")
args: str = Field(default="", description="Additional arguments or context for the skill")
class SkillTool(CallableTool2[SkillParams]):
"""Tool for executing skills.
This tool allows an agent to execute skills from its skill registry.
Skills can be standard (text-based) or flow-based (structured workflows).
Example:
>>> from agentlite.skills.discovery import discover_skills
>>> from agentlite.skills.models import index_skills
>>> # Discover and index skills
>>> skills = discover_skills(Path("./skills"))
>>> skill_index = index_skills(skills)
>>> # Create skill tool
>>> skill_tool = SkillTool(skill_index, parent_agent=agent)
>>> # Execute a skill
>>> result = await skill_tool(
... {"skill_name": "code-review", "args": "Review this Python function..."}
... )
"""
name: str = "Skill"
description: str = (
"Execute a predefined skill. "
"Skills provide specialized workflows and domain knowledge. "
"Available skills are shown in the system context."
)
params: type[SkillParams] = SkillParams
def __init__(
self,
skills: dict[str, "Skill"],
parent_agent: "Agent" | None = None,
):
"""Initialize the skill tool.
Args:
skills: Dictionary mapping normalized skill names to Skill objects
parent_agent: The parent agent (used for executing skills)
"""
super().__init__()
self._skills = skills
self._parent_agent = parent_agent
async def __call__(self, params: SkillParams) -> ToolResult:
"""Execute a skill.
Args:
params: Skill execution parameters
Returns:
ToolResult with the skill output or error
"""
from agentlite.skills.models import normalize_skill_name
if not params.skill_name:
return ToolError(message="Skill name cannot be empty")
# Find the skill
normalized_name = normalize_skill_name(params.skill_name)
skill = self._skills.get(normalized_name)
if skill is None:
available = ", ".join(sorted(self._skills.keys()))
return ToolError(
message=f"Skill '{params.skill_name}' not found. Available: {available or 'none'}"
)
try:
# Execute based on skill type
if skill.type == "flow" and skill.flow is not None:
return await self._execute_flow_skill(skill, params.args)
else:
return await self._execute_standard_skill(skill, params.args)
except Exception as e:
return ToolError(message=f"Skill execution failed: {e}")
async def _execute_standard_skill(self, skill: "Skill", args: str) -> ToolResult:
"""Execute a standard (text-based) skill.
Loads the SKILL.md content and uses it as a prompt for the agent.
Args:
skill: The skill to execute
args: Additional arguments from the user
Returns:
ToolResult with the skill output
"""
# Read skill content
content = skill.read_content()
# Parse frontmatter to get just the body
from agentlite.skills.discovery import parse_frontmatter
frontmatter = parse_frontmatter(content)
# Extract body (remove frontmatter if present)
if frontmatter and content.startswith("---"):
end_idx = content.find("\n---", 3)
if end_idx != -1:
body = content[end_idx + 4 :].strip()
else:
body = content
else:
body = content
# Append user arguments if provided
if args.strip():
body = f"{body}\n\nUser request: {args.strip()}"
# Execute using parent agent if available
if self._parent_agent is not None:
# Create a temporary message with the skill content
response = await self._parent_agent.run(body)
return ToolOk(output=response, message=f"Skill '{skill.name}' executed successfully")
else:
# Return the skill content for the LLM to use
return ToolOk(
output=body, message=f"Skill '{skill.name}' loaded (no parent agent to execute)"
)
async def _execute_flow_skill(self, skill: "Skill", args: str) -> ToolResult:
"""Execute a flow-based skill.
Executes the flowchart node by node.
Args:
skill: The flow skill to execute
args: Additional arguments from the user
Returns:
ToolResult with the flow output
"""
from agentlite.skills.flow_runner import FlowRunner
if skill.flow is None:
return ToolError(message=f"Flow skill '{skill.name}' has no flow definition")
if self._parent_agent is None:
return ToolError(message="Flow skills require a parent agent to execute")
# Create flow runner and execute
runner = FlowRunner(skill.flow, skill.name)
try:
output = await runner.run(self._parent_agent, args)
return ToolOk(
output=output, message=f"Flow skill '{skill.name}' completed successfully"
)
except Exception as e:
return ToolError(message=f"Flow execution failed: {e}")


@@ -0,0 +1,111 @@
"""Subagent configuration models for AgentLite.
This module provides configuration models for defining subagents
in a hierarchical agent architecture.
"""
from __future__ import annotations
from pathlib import Path
from typing import Optional
from pydantic import BaseModel, Field, model_validator
class SubagentConfig(BaseModel):
"""Configuration for a subagent.
Subagents are child agents that can be called by a parent agent
using the Task tool. Each subagent has its own system prompt
and can optionally have its own tools.
Attributes:
name: Unique name for the subagent
description: Description of what the subagent does
system_prompt: System prompt for the subagent
system_prompt_path: Path to a file containing the system prompt
tools: List of tool paths to load (inherits from parent if not specified)
exclude_tools: Tools to exclude from parent inheritance
subagents: Nested subagents (for hierarchical structure)
max_iterations: Maximum tool call iterations for this subagent
Example:
>>> config = SubagentConfig(
... name="coder",
... description="Good at writing code",
... system_prompt="You are a coding assistant.",
... exclude_tools=["Task", "CreateSubagent"],
... )
"""
name: str = Field(description="Unique name for the subagent")
description: str = Field(description="Description of what the subagent does")
system_prompt: Optional[str] = Field(default=None, description="System prompt for the subagent")
system_prompt_path: Optional[Path] = Field(
default=None, description="Path to a file containing the system prompt"
)
tools: Optional[list[str]] = Field(
default=None,
description="List of tool import paths (e.g., 'agentlite.tools.file:ReadFile')",
)
exclude_tools: list[str] = Field(
default_factory=list, description="Tool names to exclude from parent inheritance"
)
subagents: list[SubagentConfig] = Field(
default_factory=list, description="Nested subagents (hierarchical structure)"
)
max_iterations: int = Field(
default=80, description="Maximum tool call iterations", ge=1, le=100
)
@model_validator(mode="after")
def validate_system_prompt(self) -> SubagentConfig:
"""Validate that either system_prompt or system_prompt_path is provided."""
if self.system_prompt is None and self.system_prompt_path is None:
raise ValueError("Either system_prompt or system_prompt_path must be provided")
return self
def get_system_prompt(self) -> str:
"""Get the system prompt text.
Returns:
The system prompt string.
Raises:
FileNotFoundError: If system_prompt_path is specified but file doesn't exist.
"""
if self.system_prompt is not None:
return self.system_prompt
if self.system_prompt_path is not None:
return Path(self.system_prompt_path).read_text(encoding="utf-8").strip()
raise ValueError("No system prompt available")
class SubagentSpec(BaseModel):
"""Specification for loading a subagent from a file.
This is used when subagents are defined in separate YAML files,
similar to kimi-cli's approach.
Attributes:
path: Path to the subagent configuration file
description: Description of the subagent
"""
path: Path = Field(description="Path to subagent config file")
description: str = Field(description="Description of the subagent")
def load(self) -> SubagentConfig:
"""Load the subagent configuration from the file.
Returns:
The loaded SubagentConfig.
"""
import yaml
with open(self.path, encoding="utf-8") as f:
data = yaml.safe_load(f)
return SubagentConfig(**data)
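The either/or rule enforced by the model validator is easy to show without pydantic. A minimal sketch (`make_subagent` is an illustrative stand-in for constructing `SubagentConfig`):

```python
def make_subagent(name, description, system_prompt=None, system_prompt_path=None):
    """Mimic SubagentConfig's validation: one prompt source is required."""
    if system_prompt is None and system_prompt_path is None:
        raise ValueError("Either system_prompt or system_prompt_path must be provided")
    return {"name": name, "description": description,
            "system_prompt": system_prompt, "system_prompt_path": system_prompt_path}

cfg = make_subagent("coder", "Good at writing code",
                    system_prompt="You are a coding assistant.")
print(cfg["name"])  # coder

try:
    make_subagent("bad", "no prompt at all")
except ValueError as exc:
    print(exc)  # Either system_prompt or system_prompt_path must be provided
```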


@@ -0,0 +1,532 @@
"""Tool system for AgentLite.
This module provides the tool abstraction layer for defining and executing
tools that can be called by LLM agents.
"""
from __future__ import annotations
import asyncio
import inspect
import json
from abc import ABC, abstractmethod
from collections.abc import Iterable
from typing import (
TYPE_CHECKING,
Any,
Callable,
Generic,
Optional,
Protocol,
TypeVar,
Union,
get_type_hints,
)
import jsonschema
from pydantic import BaseModel, ValidationError
from agentlite.message import ToolCall
if TYPE_CHECKING:
pass
class ToolResult(BaseModel):
"""The result of a tool execution.
Attributes:
output: The output of the tool (string or structured data).
is_error: Whether the tool execution resulted in an error.
message: A message describing the result (for model consumption).
Example:
>>> result = ToolOk(output="42")
>>> print(result.output)
42
"""
output: str
"""The output of the tool execution."""
is_error: bool = False
"""Whether the execution resulted in an error."""
message: str = ""
"""A message describing the result (for model consumption)."""
class ToolOk(ToolResult):
"""Successful tool execution result.
Example:
>>> return ToolOk(output="File created successfully")
"""
def __init__(self, output: str, message: str = ""):
super().__init__(output=output, is_error=False, message=message or output)
class ToolError(ToolResult):
"""Failed tool execution result.
Example:
>>> return ToolError(message="File not found")
"""
def __init__(self, message: str, output: str = ""):
super().__init__(output=output or message, is_error=True, message=message)
class Tool(BaseModel):
"""Definition of a tool that can be called by the model.
Attributes:
name: The name of the tool.
description: A description of what the tool does.
parameters: JSON Schema for the tool parameters.
Example:
>>> tool = Tool(
... name="add",
... description="Add two numbers",
... parameters={
... "type": "object",
... "properties": {
... "a": {"type": "number"},
... "b": {"type": "number"},
... },
... "required": ["a", "b"],
... },
... )
"""
name: str
"""The name of the tool."""
description: str
"""A description of what the tool does."""
parameters: dict[str, Any]
"""JSON Schema for the tool parameters."""
def __init__(self, **data: Any):
super().__init__(**data)
# Validate the JSON schema
try:
jsonschema.validate(self.parameters, jsonschema.Draft202012Validator.META_SCHEMA)
except jsonschema.ValidationError as e:
raise ValueError(f"Invalid JSON schema for tool {self.name}: {e}") from e
@property
def base(self) -> "Tool":
"""Get the base Tool definition (returns self for Tool instances)."""
return self
class CallableTool(Tool, ABC):
"""Abstract base class for callable tools.
Subclasses must implement the __call__ method to define the tool's behavior.
Example:
>>> class AddTool(CallableTool):
... name = "add"
... description = "Add two numbers"
... parameters = {
... "type": "object",
... "properties": {
... "a": {"type": "number"},
... "b": {"type": "number"},
... },
... "required": ["a", "b"],
... }
...
... async def __call__(self, a: float, b: float) -> ToolResult:
... return ToolOk(output=str(a + b))
"""
@abstractmethod
async def __call__(self, *args: Any, **kwargs: Any) -> ToolResult:
"""Execute the tool.
Args:
*args: Positional arguments.
**kwargs: Keyword arguments.
Returns:
The result of the tool execution.
"""
...
@property
def base(self) -> "Tool":
"""Get the base Tool definition."""
return Tool(
name=self.name,
description=self.description,
parameters=self.parameters,
)
async def call(self, arguments: dict[str, Any]) -> ToolResult:
"""Call the tool with validated arguments.
Args:
arguments: The arguments to pass to the tool.
Returns:
The result of the tool execution.
"""
# Validate arguments against schema
try:
jsonschema.validate(arguments, self.parameters)
except jsonschema.ValidationError as e:
return ToolError(message=f"Invalid arguments: {e}")
# Call the tool (arguments are a mapping, per the declared signature)
try:
result = await self.__call__(**arguments)
if not isinstance(result, ToolResult):
return ToolError(message=f"Tool returned invalid type: {type(result)}")
return result
except Exception as e:
return ToolError(message=f"Tool execution failed: {e}")
Params = TypeVar("Params", bound=BaseModel)
class CallableTool2(ABC, Generic[Params]):
"""Type-safe callable tool using Pydantic models for parameters.
This is the preferred way to define tools as it provides full type safety
and automatic JSON schema generation.
Example:
>>> class AddParams(BaseModel):
... a: float
... b: float
>>> class AddTool(CallableTool2[AddParams]):
... name = "add"
... description = "Add two numbers"
... params = AddParams
...
... async def __call__(self, params: AddParams) -> ToolResult:
... return ToolOk(output=str(params.a + params.b))
"""
name: str
"""The name of the tool."""
description: str
"""A description of what the tool does."""
params: type[Params]
"""The Pydantic model class for parameters."""
def __init__(
self,
name: str | None = None,
description: str | None = None,
params: type[Params] | None = None,
):
cls = self.__class__
self.name = name or getattr(cls, "name", "")
if not self.name:
raise ValueError("Tool name must be provided")
self.description = description or getattr(cls, "description", "")
if not self.description:
raise ValueError("Tool description must be provided")
self.params = params or getattr(cls, "params", None)
if self.params is None:
raise ValueError("Tool params must be provided")
# Generate JSON schema from Pydantic model
self._schema = self.params.model_json_schema()
@property
def base(self) -> Tool:
"""Get the base Tool definition."""
return Tool(
name=self.name,
description=self.description,
parameters=self._schema,
)
@abstractmethod
async def __call__(self, params: Params) -> ToolResult:
"""Execute the tool.
Args:
params: The validated parameters.
Returns:
The result of the tool execution.
"""
...
async def call(self, arguments: dict[str, Any]) -> ToolResult:
"""Call the tool with validated arguments.
Args:
arguments: The arguments to validate and pass to the tool.
Returns:
The result of the tool execution.
"""
try:
params = self.params.model_validate(arguments)
except ValidationError as e:
return ToolError(message=f"Invalid arguments: {e}")
try:
result = await self.__call__(params)
if not isinstance(result, ToolResult):
return ToolError(message=f"Tool returned invalid type: {type(result)}")
return result
except Exception as e:
return ToolError(message=f"Tool execution failed: {e}")
class Toolset(Protocol):
"""Protocol for tool collections.
A Toolset manages a collection of tools and handles tool calls.
"""
@property
def tools(self) -> list[Tool]:
"""Get all tool definitions."""
...
def handle(self, tool_call: ToolCall) -> "ToolResult | asyncio.Future[ToolResult]":
"""Handle a tool call.
Args:
tool_call: The tool call to handle.
Returns:
The tool result or a future that resolves to the result.
"""
...
ToolType = Union[CallableTool, CallableTool2[Any]]
class SimpleToolset:
"""A simple in-memory toolset.
This is the default toolset implementation that stores tools in a dictionary
and executes them concurrently.
Example:
>>> toolset = SimpleToolset()
>>> toolset.add(MyTool())
>>> result = await toolset.handle(tool_call)
"""
def __init__(self, tools: Iterable[ToolType] | None = None):
"""Initialize the toolset.
Args:
tools: Optional initial tools to add.
"""
self._tools: dict[str, ToolType] = {}
if tools:
for tool in tools:
self.add(tool)
def add(self, tool: ToolType) -> "SimpleToolset":
"""Add a tool to the toolset.
Args:
tool: The tool to add.
Returns:
Self for chaining.
Raises:
ValueError: If a tool with the same name already exists.
"""
if tool.name in self._tools:
raise ValueError(f"Tool '{tool.name}' already exists")
self._tools[tool.name] = tool
return self
def remove(self, name: str) -> "SimpleToolset":
"""Remove a tool from the toolset.
Args:
name: The name of the tool to remove.
Returns:
Self for chaining.
Raises:
KeyError: If the tool doesn't exist.
"""
if name not in self._tools:
raise KeyError(f"Tool '{name}' not found")
del self._tools[name]
return self
def get(self, name: str) -> ToolType | None:
"""Get a tool by name.
Args:
name: The name of the tool.
Returns:
The tool if found, None otherwise.
"""
return self._tools.get(name)
def __contains__(self, name: str) -> bool:
"""Check if a tool exists in the toolset."""
return name in self._tools
def __len__(self) -> int:
"""Get the number of tools in the toolset."""
return len(self._tools)
@property
def tools(self) -> list[Tool]:
"""Get all tool definitions."""
result = []
for tool in self._tools.values():
if isinstance(tool, CallableTool):
result.append(
Tool(
name=tool.name,
description=tool.description,
parameters=tool.parameters,
)
)
else:
result.append(tool.base)
return result
def handle(self, tool_call: ToolCall) -> "asyncio.Future[ToolResult]":
"""Handle a tool call.
Args:
tool_call: The tool call to handle.
Returns:
A future that resolves to the tool result.
"""
tool = self._tools.get(tool_call.function.name)
if tool is None:
future: asyncio.Future[ToolResult] = asyncio.get_running_loop().create_future()
future.set_result(ToolError(message=f"Tool '{tool_call.function.name}' not found"))
return future
# Parse arguments
try:
arguments = json.loads(tool_call.function.arguments or "{}")
except json.JSONDecodeError as e:
future = asyncio.get_running_loop().create_future()
future.set_result(ToolError(message=f"Invalid JSON arguments: {e}"))
return future
# Execute tool
async def _execute() -> ToolResult:
try:
return await tool.call(arguments)
except Exception as e:
return ToolError(message=f"Tool execution failed: {e}")
return asyncio.create_task(_execute())
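The dispatch pattern above can be sketched standalone: errors that are known synchronously (unknown tool, bad JSON) come back as an already-completed future, while real work is scheduled as a task. All names below are illustrative stand-ins, not the actual agentlite API.

```python
import asyncio

async def add(a: int, b: int) -> int:
    return a + b

TOOLS = {"add": add}

def handle(name: str, **kwargs) -> "asyncio.Future[str]":
    loop = asyncio.get_running_loop()
    tool = TOOLS.get(name)
    if tool is None:
        # Synchronous failure: return a future that is already resolved.
        fut: asyncio.Future[str] = loop.create_future()
        fut.set_result(f"Tool '{name}' not found")
        return fut

    async def _execute() -> str:
        try:
            return str(await tool(**kwargs))
        except Exception as e:  # tool bugs become error results, not crashes
            return f"Tool execution failed: {e}"

    # Real work runs concurrently as a task.
    return asyncio.create_task(_execute())

async def main() -> tuple[str, str]:
    ok = await handle("add", a=2, b=3)
    missing = await handle("nope")
    return ok, missing

result = asyncio.run(main())
```

Because both branches return an awaitable, callers can `await` the result of `handle` uniformly without caring whether the call failed before execution even started.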
def tool(
name: Optional[str] = None,
description: Optional[str] = None,
) -> Callable[[Callable[..., Any]], CallableTool]:
"""Decorator to convert a function into a tool.
This decorator automatically generates the JSON schema from the function's
type hints and docstring.
Args:
name: Optional tool name (defaults to function name).
description: Optional description (defaults to function docstring).
Returns:
A decorator that converts the function into a CallableTool.
Example:
>>> @tool()
... async def add(a: float, b: float) -> float:
... '''Add two numbers.'''
... return a + b
>>> agent = Agent(tools=[add])
"""
def decorator(func: Callable[..., Any]) -> CallableTool:
sig = inspect.signature(func)
try:
type_hints = get_type_hints(func)
except Exception:
type_hints = {}
properties = {}
required = []
for param_name, param in sig.parameters.items():
if param.default is inspect.Parameter.empty:
required.append(param_name)
param_type = type_hints.get(param_name, param.annotation)
if param_type is inspect.Parameter.empty or param_type is None:
param_type = str
# Map Python types to JSON schema types
if param_type in (str,):
properties[param_name] = {"type": "string"}
elif param_type in (int,):
properties[param_name] = {"type": "integer"}
elif param_type in (float,):
properties[param_name] = {"type": "number"}
elif param_type in (bool,):
properties[param_name] = {"type": "boolean"}
else:
properties[param_name] = {"type": "string"}
parameters = {
"type": "object",
"properties": properties,
}
if required:
parameters["required"] = required
# Create tool class
tool_name = name or func.__name__
tool_description = description or (func.__doc__ or "No description provided")
tool_parameters = parameters
class FunctionTool(CallableTool):
name: str = tool_name
description: str = tool_description
parameters: dict[str, Any] = tool_parameters
async def __call__(self, *args: Any, **kwargs: Any) -> ToolResult:
try:
result = await func(*args, **kwargs)
return ToolOk(output=str(result))
except Exception as e:
return ToolError(message=str(e))
return FunctionTool()
return decorator
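The schema generation inside the decorator can be exercised in isolation. This is a minimal re-implementation of the same mapping rules (type hints map onto JSON-schema types, unknown types fall back to `"string"`, parameters without defaults become required); `build_schema` and `greet` are illustrative names, not part of the library.

```python
import inspect
from typing import Any, get_type_hints

# Same mapping the @tool decorator applies to parameter annotations.
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def build_schema(func) -> dict[str, Any]:
    sig = inspect.signature(func)
    hints = get_type_hints(func)
    properties: dict[str, Any] = {}
    required: list[str] = []
    for name, param in sig.parameters.items():
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required
        py_type = hints.get(name, str)
        properties[name] = {"type": _TYPE_MAP.get(py_type, "string")}
    schema: dict[str, Any] = {"type": "object", "properties": properties}
    if required:
        schema["required"] = required
    return schema

def greet(name: str, times: int = 1) -> str:
    return name * times

schema = build_schema(greet)
```

For `greet`, this yields a string property `name` (required) and an integer property `times` (optional), which is exactly the shape most LLM tool-calling APIs expect.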


@@ -0,0 +1,208 @@
"""Tool suite for AgentLite - A collection of tools inspired by kimi-cli.
This module provides a comprehensive set of tools for file operations,
shell execution, web access, and more, with configuration support
for enabling/disabling individual tools.
"""
from __future__ import annotations
from pathlib import Path
from typing import Optional
from agentlite.tool import SimpleToolset
from agentlite.tools.config import (
ToolSuiteConfig,
FileToolsConfig,
ShellToolsConfig,
WebToolsConfig,
MultiAgentToolsConfig,
ToolGroupConfig,
)
# Import tool implementations
from agentlite.tools.file.read import ReadFile
from agentlite.tools.file.write import WriteFile
from agentlite.tools.file.replace import StrReplaceFile
from agentlite.tools.file.glob import Glob
from agentlite.tools.file.grep import Grep
from agentlite.tools.file.read_media import ReadMediaFile
from agentlite.tools.shell.shell import Shell
from agentlite.tools.web.fetch import FetchURL
from agentlite.tools.misc.todo import SetTodoList
from agentlite.tools.misc.think import Think
class ConfigurableToolset(SimpleToolset):
"""A toolset that supports configuration-based tool enabling/disabling.
This toolset loads tools based on a ToolSuiteConfig, only adding
tools that are enabled in the configuration.
Example:
>>> config = ToolSuiteConfig(
... file_tools=FileToolsConfig(
... tools={"WriteFile": False} # Disable WriteFile
... )
... )
>>> toolset = ConfigurableToolset(config)
>>> "ReadFile" in toolset # True
True
>>> "WriteFile" in toolset # False
False
"""
def __init__(self, config: ToolSuiteConfig | None = None, work_dir: Optional[str] = None):
"""Initialize the configurable toolset.
Args:
config: Tool suite configuration. If None, uses default config (all enabled).
work_dir: Working directory for file operations. Defaults to current directory.
"""
super().__init__()
self.config = config or ToolSuiteConfig()
self.work_dir = Path(work_dir) if work_dir else Path.cwd()
self._load_tools()
def _load_tools(self) -> None:
"""Load tools based on configuration."""
enabled = self.config.get_enabled_tools()
# File tools
if "file" in enabled:
self._load_file_tools(enabled["file"])
# Shell tools
if "shell" in enabled:
self._load_shell_tools(enabled["shell"])
# Web tools
if "web" in enabled:
self._load_web_tools(enabled["web"])
# Multi-agent tools
if "multiagent" in enabled:
self._load_multiagent_tools(enabled["multiagent"])
# Misc tools
if "misc" in enabled:
self._load_misc_tools(enabled["misc"])
def _load_file_tools(self, tool_names: list[str]) -> None:
"""Load file operation tools."""
cfg = self.config.file_tools
if "ReadFile" in tool_names:
self.add(
ReadFile(
work_dir=self.work_dir,
max_lines=cfg.max_lines,
max_line_length=cfg.max_line_length,
max_bytes=cfg.max_bytes,
)
)
if "WriteFile" in tool_names:
self.add(
WriteFile(
work_dir=self.work_dir, allow_outside_work_dir=cfg.allow_write_outside_work_dir
)
)
if "StrReplaceFile" in tool_names:
self.add(
StrReplaceFile(
work_dir=self.work_dir, allow_outside_work_dir=cfg.allow_write_outside_work_dir
)
)
if "Glob" in tool_names:
self.add(Glob(work_dir=self.work_dir, max_matches=cfg.max_glob_matches))
if "Grep" in tool_names:
self.add(Grep(work_dir=self.work_dir))
if "ReadMediaFile" in tool_names:
self.add(ReadMediaFile(work_dir=self.work_dir))
def _load_shell_tools(self, tool_names: list[str]) -> None:
"""Load shell execution tools."""
cfg = self.config.shell_tools
if "Shell" in tool_names:
self.add(
Shell(
timeout=cfg.timeout,
max_timeout=cfg.max_timeout,
blocked_commands=cfg.blocked_commands,
)
)
def _load_web_tools(self, tool_names: list[str]) -> None:
"""Load web-related tools."""
cfg = self.config.web_tools
if "FetchURL" in tool_names:
self.add(
FetchURL(
timeout=cfg.timeout,
user_agent=cfg.user_agent,
max_content_length=cfg.max_content_length,
)
)
def _load_multiagent_tools(self, tool_names: list[str]) -> None:
"""Load multi-agent tools."""
# Multi-agent tools are intentionally disabled in this submodule
# because nested subagents are not supported in subagent runtime.
return
def _load_misc_tools(self, tool_names: list[str]) -> None:
"""Load miscellaneous tools."""
if "SetTodoList" in tool_names:
self.add(SetTodoList())
if "Think" in tool_names:
self.add(Think())
def reload(self, config: ToolSuiteConfig | None = None) -> None:
"""Reload tools with a new configuration.
Args:
config: New configuration. If None, reloads with current config.
"""
if config:
self.config = config
# Clear existing tools
self._tools.clear()
# Reload
self._load_tools()
# Convenience exports
__all__ = [
# Toolset
"ConfigurableToolset",
# Config classes
"ToolSuiteConfig",
"FileToolsConfig",
"ShellToolsConfig",
"WebToolsConfig",
"MultiAgentToolsConfig",
"ToolGroupConfig",
# Tools
"ReadFile",
"WriteFile",
"StrReplaceFile",
"Glob",
"Grep",
"ReadMediaFile",
"Shell",
"FetchURL",
"SetTodoList",
"Think",
]


@@ -0,0 +1,242 @@
"""Tool group configuration system for AgentLite.
This module provides configuration management for tool groups,
allowing users to enable/disable specific tools.
"""
from __future__ import annotations
from pydantic import BaseModel, Field
class ToolGroupConfig(BaseModel):
"""Configuration for a group of tools.
This configuration allows users to enable or disable specific tools
within the tool group. All tools are enabled by default.
Example:
>>> config = ToolGroupConfig(
... enabled=True,
... tools={
... "ReadFile": True,
... "WriteFile": False, # Disabled
... },
... )
"""
enabled: bool = Field(default=True, description="Whether the entire tool group is enabled")
tools: dict[str, bool] = Field(
default_factory=dict,
description="Individual tool enable/disable settings. True=enabled, False=disabled. "
"Tools not listed here follow the default behavior (enabled).",
)
default_tool_enabled: bool = Field(
default=True, description="Default state for tools not explicitly listed in 'tools' dict"
)
def is_tool_enabled(self, tool_name: str) -> bool:
"""Check if a specific tool is enabled.
Args:
tool_name: The name of the tool to check
Returns:
True if the tool is enabled, False otherwise
"""
if not self.enabled:
return False
# Check explicit setting
if tool_name in self.tools:
return self.tools[tool_name]
# Use default
return self.default_tool_enabled
def enable_tool(self, tool_name: str) -> None:
"""Enable a specific tool.
Args:
tool_name: The name of the tool to enable
"""
self.tools[tool_name] = True
def disable_tool(self, tool_name: str) -> None:
"""Disable a specific tool.
Args:
tool_name: The name of the tool to disable
"""
self.tools[tool_name] = False
def set_tool_state(self, tool_name: str, enabled: bool) -> None:
"""Set the enabled state of a specific tool.
Args:
tool_name: The name of the tool
enabled: True to enable, False to disable
"""
self.tools[tool_name] = enabled
class FileToolsConfig(ToolGroupConfig):
"""Configuration for file operation tools."""
max_lines: int = Field(
default=1000, description="Maximum number of lines to read from a file", ge=1, le=10000
)
max_line_length: int = Field(
default=2000, description="Maximum length of a single line", ge=100, le=10000
)
max_bytes: int = Field(
default=100 * 1024, # 100KB
description="Maximum bytes to read from a file",
ge=1024,
le=10 * 1024 * 1024, # 10MB
)
allow_write_outside_work_dir: bool = Field(
default=False, description="Allow writing files outside the working directory"
)
max_glob_matches: int = Field(
default=1000, description="Maximum number of glob matches to return", ge=1, le=10000
)
class ShellToolsConfig(ToolGroupConfig):
"""Configuration for shell execution tools."""
timeout: int = Field(
default=60, description="Default timeout for shell commands in seconds", ge=1, le=3600
)
max_timeout: int = Field(
default=300, description="Maximum allowed timeout for shell commands", ge=1, le=3600
)
blocked_commands: list[str] = Field(
default_factory=list, description="List of command patterns to block"
)
class WebToolsConfig(ToolGroupConfig):
"""Configuration for web-related tools."""
timeout: int = Field(
default=30, description="Timeout for HTTP requests in seconds", ge=1, le=300
)
user_agent: str = Field(
default="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
description="User-Agent string for HTTP requests",
)
max_content_length: int = Field(
default=1024 * 1024, # 1MB
description="Maximum content length to fetch",
ge=1024,
le=10 * 1024 * 1024, # 10MB
)
class MultiAgentToolsConfig(ToolGroupConfig):
"""Configuration for multi-agent tools."""
enabled: bool = Field(
default=False, description="Whether multi-agent tools are enabled. Disabled by default for subagent mode."
)
max_steps: int = Field(
default=50, description="Maximum steps for subagent execution", ge=1, le=1000
)
inherit_context: bool = Field(
default=False, description="Whether subagents inherit parent context"
)
class ToolSuiteConfig(BaseModel):
"""Complete configuration for all tool groups.
This is the main configuration class that aggregates all tool group configs.
Example:
>>> config = ToolSuiteConfig(
... file_tools=FileToolsConfig(tools={"WriteFile": False}),
... shell_tools=ShellToolsConfig(
... enabled=False # Disable all shell tools
... ),
... )
"""
file_tools: FileToolsConfig = Field(
default_factory=FileToolsConfig, description="File operation tools configuration"
)
shell_tools: ShellToolsConfig = Field(
default_factory=ShellToolsConfig, description="Shell execution tools configuration"
)
web_tools: WebToolsConfig = Field(
default_factory=WebToolsConfig, description="Web-related tools configuration"
)
multiagent_tools: MultiAgentToolsConfig = Field(
default_factory=MultiAgentToolsConfig, description="Multi-agent tools configuration"
)
misc_tools: ToolGroupConfig = Field(
default_factory=ToolGroupConfig,
description="Miscellaneous tools (todo, think, etc.) configuration",
)
def get_enabled_tools(self) -> dict[str, list[str]]:
"""Get a mapping of tool group names to their enabled tools.
Returns:
Dictionary mapping tool group names to lists of enabled tool names
"""
result: dict[str, list[str]] = {}
# File tools
if self.file_tools.enabled:
file_tools = [
"ReadFile",
"WriteFile",
"StrReplaceFile",
"Glob",
"Grep",
"ReadMediaFile",
]
result["file"] = [t for t in file_tools if self.file_tools.is_tool_enabled(t)]
# Shell tools
if self.shell_tools.enabled:
shell_tools = ["Shell"]
result["shell"] = [t for t in shell_tools if self.shell_tools.is_tool_enabled(t)]
# Web tools
if self.web_tools.enabled:
web_tools = ["FetchURL"]
result["web"] = [t for t in web_tools if self.web_tools.is_tool_enabled(t)]
# Multi-agent tools
if self.multiagent_tools.enabled:
multi_tools = ["Task", "CreateSubagent"]
result["multiagent"] = [
t for t in multi_tools if self.multiagent_tools.is_tool_enabled(t)
]
# Misc tools
if self.misc_tools.enabled:
misc_tools = ["SetTodoList", "Think"]
result["misc"] = [t for t in misc_tools if self.misc_tools.is_tool_enabled(t)]
return result
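The resolution rules in `is_tool_enabled` can be summarized with a small sketch: a disabled group wins over everything, an explicit per-tool setting wins over the group default. Plain dicts stand in for the pydantic models here; this is a demonstration, not the library API.

```python
def is_tool_enabled(group: dict, tool_name: str) -> bool:
    # A disabled group overrides all per-tool settings.
    if not group.get("enabled", True):
        return False
    tools = group.get("tools", {})
    # Explicit per-tool setting wins over the group default.
    if tool_name in tools:
        return tools[tool_name]
    return group.get("default_tool_enabled", True)

file_group = {"enabled": True, "tools": {"WriteFile": False}}
shell_group = {"enabled": False, "tools": {"Shell": True}}

checks = (
    is_tool_enabled(file_group, "ReadFile"),   # not listed -> default (True)
    is_tool_enabled(file_group, "WriteFile"),  # explicitly disabled
    is_tool_enabled(shell_group, "Shell"),     # group disabled wins
)
```

Note that `shell_group` explicitly enables `Shell`, yet the check still returns `False`: the group-level switch is checked first, which is what lets `ToolSuiteConfig.get_enabled_tools` skip whole groups cheaply.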


@@ -0,0 +1,20 @@
"""File operation tools for AgentLite.
This module provides tools for reading, writing, and manipulating files.
"""
from agentlite.tools.file.read import ReadFile
from agentlite.tools.file.write import WriteFile
from agentlite.tools.file.replace import StrReplaceFile
from agentlite.tools.file.glob import Glob
from agentlite.tools.file.grep import Grep
from agentlite.tools.file.read_media import ReadMediaFile
__all__ = [
"ReadFile",
"WriteFile",
"StrReplaceFile",
"Glob",
"Grep",
"ReadMediaFile",
]


@@ -0,0 +1,154 @@
"""Glob tool for AgentLite.
This module provides a tool for searching files using glob patterns.
"""
from __future__ import annotations
from typing import Optional
from pathlib import Path
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the Glob tool."""
pattern: str = Field(
description="Glob pattern to match files/directories (e.g., '*.py', '**/*.txt')"
)
directory: Optional[str] = Field(
description=(
"Absolute path to the directory to search in (defaults to working directory)."
),
default=None,
)
include_dirs: bool = Field(
description="Whether to include directories in results.",
default=True,
)
class Glob(CallableTool2[Params]):
"""Tool for searching files using glob patterns.
This tool finds files and directories matching a glob pattern.
Supports recursive patterns with **.
Example:
>>> tool = Glob(work_dir=Path("/tmp"))
>>> result = await tool({"pattern": "*.py"})
"""
name: str = "Glob"
description: str = (
"Search for files and directories matching a glob pattern. "
"Supports recursive patterns with **. "
"Returns paths relative to the search directory."
)
params: type[Params] = Params
def __init__(
self,
work_dir: Path,
max_matches: int = 1000,
):
"""Initialize the Glob tool.
Args:
work_dir: The working directory for relative paths
max_matches: Maximum number of matches to return
"""
super().__init__()
self._work_dir = work_dir
self._max_matches = max_matches
def _is_within_work_dir(self, path: Path) -> bool:
"""Check if a path is within the working directory."""
try:
path.relative_to(self._work_dir.resolve())
return True
except ValueError:
return False
async def __call__(self, params: Params) -> ToolResult:
"""Execute the glob search.
Args:
params: The search parameters
Returns:
ToolResult with matching paths or error
"""
try:
# Determine search directory
if params.directory:
search_dir = Path(params.directory).expanduser().resolve()
if not search_dir.is_absolute():
return ToolError(
message=f"Directory must be an absolute path: {params.directory}",
)
# Security check
if not self._is_within_work_dir(search_dir):
return ToolError(
message=(
f"Directory `{params.directory}` is outside the working directory. "
"You can only search within the working directory."
),
)
else:
search_dir = self._work_dir
# Check directory exists
if not search_dir.exists():
return ToolError(
message=f"Directory `{search_dir}` does not exist.",
)
if not search_dir.is_dir():
return ToolError(
message=f"`{search_dir}` is not a directory.",
)
# Security check: prevent ** patterns at the root level
if params.pattern.startswith("**") and not params.directory:
return ToolError(
message=(
f"Pattern `{params.pattern}` starts with '**' which is not allowed "
"without specifying a directory. This would recursively search all "
"directories and may include large directories like `node_modules`. "
"Use a more specific pattern or provide a directory."
),
)
# Perform glob search
matches = list(search_dir.glob(params.pattern))
# Filter directories if not requested
if not params.include_dirs:
matches = [p for p in matches if p.is_file()]
# Sort for consistent output
matches.sort()
# Limit matches
truncated = False
if len(matches) > self._max_matches:
matches = matches[: self._max_matches]
truncated = True
# Format output (relative to search directory)
output = "\n".join(str(p.relative_to(search_dir)) for p in matches)
# Build message
message = f"Found {len(matches)} matches for pattern `{params.pattern}`."
if truncated:
message += f" Only the first {self._max_matches} matches are returned."
return ToolOk(output=output, message=message)
except Exception as e:
return ToolError(
message=f"Failed to search for pattern `{params.pattern}`. Error: {e}",
)
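The core of the search above (glob within a directory, sort for deterministic output, truncate to `max_matches`) can be sketched without the tool wrapper. The directory layout and the `run_glob` helper are fabricated purely for the demonstration.

```python
import tempfile
from pathlib import Path

def run_glob(work_dir: Path, pattern: str, max_matches: int = 2) -> tuple[list[str], bool]:
    # Sort so output is stable across filesystems, then truncate.
    matches = sorted(work_dir.glob(pattern))
    truncated = len(matches) > max_matches
    return [str(p.relative_to(work_dir)) for p in matches[:max_matches]], truncated

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ("a.py", "b.py", "c.py", "d.txt"):
        (root / name).write_text("", encoding="utf-8")
    names, truncated = run_glob(root, "*.py", max_matches=2)
```

Three `.py` files match, so with `max_matches=2` the result is truncated to the first two relative paths and the `truncated` flag is set, mirroring the "Only the first N matches are returned" message above.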


@@ -0,0 +1,303 @@
"""Grep tool for AgentLite.
This module provides a tool for searching file contents using regex patterns.
"""
from __future__ import annotations
from typing import Optional
import re
from pathlib import Path
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the Grep tool."""
pattern: str = Field(
description="The regular expression pattern to search for in file contents"
)
path: str = Field(
description=(
"File or directory to search in. Defaults to current working directory. "
"If specified, it must be an absolute path."
),
default=".",
)
glob: Optional[str] = Field(
description=(
"Glob pattern to filter files (e.g. `*.py`, `*.{ts,tsx}`). No filter by default."
),
default=None,
)
output_mode: str = Field(
description=(
"`content`: Show matching lines (supports `-B`, `-A`, `-C`, `-n`); "
"`files_with_matches`: Show file paths; "
"`count_matches`: Show total number of matches. "
"Defaults to `files_with_matches`."
),
default="files_with_matches",
)
before_context: Optional[int] = Field(
description=(
"Number of lines to show before each match (the `-B` option). "
"Requires `output_mode` to be `content`."
),
default=None,
)
after_context: Optional[int] = Field(
description=(
"Number of lines to show after each match (the `-A` option). "
"Requires `output_mode` to be `content`."
),
default=None,
)
context: Optional[int] = Field(
description=(
"Number of lines to show before and after each match (the `-C` option). "
"Requires `output_mode` to be `content`."
),
default=None,
)
line_number: bool = Field(
description=(
"Show line numbers in output (the `-n` option). Requires `output_mode` to be `content`."
),
default=False,
)
ignore_case: bool = Field(
description="Case insensitive search (the `-i` option).",
default=False,
)
class Grep(CallableTool2[Params]):
"""Tool for searching file contents using regex patterns.
This tool searches file contents for matches to a regex pattern.
Supports various output modes and context options.
Example:
>>> tool = Grep(work_dir=Path("/tmp"))
>>> result = await tool({"pattern": "def ", "glob": "*.py"})
"""
name: str = "Grep"
description: str = (
"Search file contents using regular expressions. "
"Supports various output modes and context options. "
"Can search individual files or entire directories."
)
params: type[Params] = Params
def __init__(
self,
work_dir: Path,
):
"""Initialize the Grep tool.
Args:
work_dir: The working directory
"""
super().__init__()
self._work_dir = work_dir
def _is_within_work_dir(self, path: Path) -> bool:
"""Check if a path is within the working directory."""
try:
path.relative_to(self._work_dir.resolve())
return True
except ValueError:
return False
def _search_file(
self,
file_path: Path,
pattern: re.Pattern,
params: Params,
) -> list[tuple[int, str]]:
"""Search a single file for matches.
Args:
file_path: Path to the file
pattern: Compiled regex pattern
params: Search parameters
Returns:
List of (line_number, line_content) tuples
"""
try:
content = file_path.read_text(encoding="utf-8", errors="replace")
except Exception:
return []
lines = content.split("\n")
matches = []
for i, line in enumerate(lines, 1):
if pattern.search(line):
matches.append((i, line))
return matches
def _format_matches(
self,
matches: dict[Path, list[tuple[int, str]]],
params: Params,
) -> str:
"""Format matches according to output mode.
Args:
matches: Dict of file_path -> list of (line_num, line) tuples
params: Output parameters
Returns:
Formatted output string
"""
if params.output_mode == "files_with_matches":
return "\n".join(str(p) for p in sorted(matches.keys()))
if params.output_mode == "count_matches":
total = sum(len(m) for m in matches.values())
return f"Total matches: {total}"
# content mode
output_lines = []
for file_path in sorted(matches.keys()):
file_matches = matches[file_path]
# Read file for context
try:
content = file_path.read_text(encoding="utf-8", errors="replace")
lines = content.split("\n")
except Exception:
continue
# Determine context lines
before = params.context if params.context is not None else (params.before_context or 0)
after = params.context if params.context is not None else (params.after_context or 0)
# Track which lines to include (to avoid duplicates)
included_lines = set()
for match_line_num, _ in file_matches:
start = max(1, match_line_num - before)
end = min(len(lines), match_line_num + after)
for i in range(start, end + 1):
included_lines.add(i)
# Build output for this file
if output_lines:
output_lines.append("")
output_lines.append(f"File: {file_path}")
prev_line = 0
for line_num in sorted(included_lines):
# Add separator if there's a gap
if prev_line and line_num > prev_line + 1:
output_lines.append("--")
line = lines[line_num - 1]
prefix = f"{line_num}:" if params.line_number else ""
output_lines.append(f"{prefix}{line}")
prev_line = line_num
return "\n".join(output_lines)
async def __call__(self, params: Params) -> ToolResult:
"""Execute the grep search.
Args:
params: The search parameters
Returns:
ToolResult with search results or error
"""
try:
# Resolve path
if params.path == ".":
search_path = self._work_dir
else:
search_path = Path(params.path).expanduser().resolve()
if not search_path.is_absolute():
return ToolError(
message=f"Path must be an absolute path: {params.path}",
)
# Security check
if not self._is_within_work_dir(search_path):
return ToolError(
message=(
f"Path `{params.path}` is outside the working directory. "
"You can only search within the working directory."
),
)
# Check path exists
if not search_path.exists():
return ToolError(
message=f"Path `{params.path}` does not exist.",
)
# Compile pattern
flags = re.IGNORECASE if params.ignore_case else 0
try:
pattern = re.compile(params.pattern, flags)
except re.error as e:
return ToolError(
message=f"Invalid regex pattern: {e}",
)
# Find files to search
if search_path.is_file():
files = [search_path]
else:
if params.glob:
files = list(search_path.glob(params.glob))
else:
# Default: search all files recursively (with some exclusions)
files = [
p
for p in search_path.rglob("*")
if p.is_file()
and not any(
part.startswith(".") or part in ("node_modules", "__pycache__", ".git")
for part in p.parts
)
]
# Drop any non-file entries (e.g. directories matched by the glob)
files = [p for p in files if p.is_file()]
# Search files
all_matches: dict[Path, list[tuple[int, str]]] = {}
for file_path in files:
matches = self._search_file(file_path, pattern, params)
if matches:
all_matches[file_path] = matches
# Format output
output = self._format_matches(all_matches, params)
# Build message
total_files = len(all_matches)
total_matches = sum(len(m) for m in all_matches.values())
if params.output_mode == "files_with_matches":
message = f"Found matches in {total_files} file(s)."
elif params.output_mode == "count_matches":
message = f"Found {total_matches} total match(es)."
else:
message = f"Found {total_matches} match(es) in {total_files} file(s)."
return ToolOk(output=output, message=message)
except Exception as e:
return ToolError(
message=f"Failed to search. Error: {e}",
)
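The context handling in `_format_matches` is worth seeing in isolation: match lines plus their before/after windows are collected into a set so overlapping windows deduplicate, and a `--` separator marks gaps, as in `grep -B`/`-A`. The helper below is a standalone sketch with invented names, not the tool itself.

```python
def format_with_context(lines: list[str], match_nums: list[int],
                        before: int = 0, after: int = 0) -> list[str]:
    # Collect every line number inside any context window; the set
    # deduplicates lines shared by overlapping windows.
    included: set[int] = set()
    for n in match_nums:
        start = max(1, n - before)
        end = min(len(lines), n + after)
        included.update(range(start, end + 1))
    out: list[str] = []
    prev = 0
    for n in sorted(included):
        if prev and n > prev + 1:
            out.append("--")  # gap between context windows
        out.append(f"{n}:{lines[n - 1]}")
        prev = n
    return out

text = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]
result = format_with_context(text, match_nums=[1, 5], before=0, after=1)
```

Matches on lines 1 and 5 with one line of trailing context produce two windows (1-2 and 5-6) separated by `--`, since lines 3-4 fall outside both.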


@@ -0,0 +1,207 @@
"""ReadFile tool for AgentLite.
This module provides a tool for reading text files with line numbers.
"""
from __future__ import annotations
from pathlib import Path
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the ReadFile tool."""
path: str = Field(
description=(
"The path to the file to read. Absolute paths are required when reading files "
"outside the working directory."
)
)
line_offset: int = Field(
description=(
"The line number to start reading from. "
"By default read from the beginning of the file. "
"Set this when the file is too large to read at once."
),
default=1,
ge=1,
)
n_lines: int = Field(
description=(
"The number of lines to read. "
"By default read up to max_lines lines. "
"Set this value when the file is too large to read at once."
),
default=1000,
ge=1,
)
class ReadFile(CallableTool2[Params]):
"""Tool for reading text files with line numbers.
This tool reads a text file and returns its contents with line numbers.
It supports pagination for large files.
Example:
>>> tool = ReadFile(work_dir=Path("/tmp"))
>>> result = await tool({"path": "/tmp/test.txt"})
"""
name: str = "ReadFile"
description: str = (
"Read a text file from the local filesystem. "
"Returns the file content with line numbers. "
"Supports reading specific line ranges for large files."
)
params: type[Params] = Params
def __init__(
self,
work_dir: Path,
max_lines: int = 1000,
max_line_length: int = 2000,
max_bytes: int = 100 * 1024,
):
"""Initialize the ReadFile tool.
Args:
work_dir: The working directory for relative paths
max_lines: Maximum number of lines to read
max_line_length: Maximum length of a single line
max_bytes: Maximum bytes to read from a file
"""
super().__init__()
self._work_dir = work_dir
self._max_lines = max_lines
self._max_line_length = max_line_length
self._max_bytes = max_bytes
def _is_within_work_dir(self, path: Path) -> bool:
"""Check if a path is within the working directory."""
try:
path.relative_to(self._work_dir.resolve())
return True
except ValueError:
return False
async def __call__(self, params: Params) -> ToolResult:
"""Execute the read file operation.
Args:
params: The read parameters
Returns:
ToolResult with the file content or error
"""
if not params.path:
return ToolError(
message="File path cannot be empty.",
)
try:
# Resolve path
path = Path(params.path).expanduser()
if not path.is_absolute():
path = self._work_dir / path
path = path.resolve()
# Security check: if outside work_dir, must be absolute path
if not self._is_within_work_dir(path) and not Path(params.path).is_absolute():
return ToolError(
message=(
f"`{params.path}` is not an absolute path. "
"You must provide an absolute path to read a file "
"outside the working directory."
),
)
# Check file exists
if not path.exists():
return ToolError(
message=f"`{params.path}` does not exist.",
)
if not path.is_file():
return ToolError(
message=f"`{params.path}` is not a file.",
)
# Read file content
try:
content = path.read_text(encoding="utf-8")  # no errors="replace" here, so binary files raise UnicodeDecodeError
except UnicodeDecodeError:
return ToolError(
message=f"`{params.path}` appears to be a binary file and cannot be read as text.",
)
# Split into lines
lines = content.split("\n")
# Apply line offset
start_idx = params.line_offset - 1
if start_idx >= len(lines):
return ToolOk(
output="",
message=f"Line offset {params.line_offset} exceeds file length ({len(lines)} lines).",
)
# Calculate end index
end_idx = min(start_idx + params.n_lines, len(lines))
end_idx = min(end_idx, start_idx + self._max_lines)
# Extract lines
selected_lines = lines[start_idx:end_idx]
# Truncate long lines and count total bytes
truncated_lines = []
truncated_line_numbers = []
total_bytes = 0
max_bytes_reached = False
for i, line in enumerate(selected_lines):
line_num = start_idx + i + 1
# Truncate if needed
if len(line) > self._max_line_length:
line = line[: self._max_line_length]
truncated_line_numbers.append(line_num)
# Check bytes limit
line_bytes = len(line.encode("utf-8"))
if total_bytes + line_bytes > self._max_bytes:
max_bytes_reached = True
break
total_bytes += line_bytes
truncated_lines.append(line)
# Format with line numbers
lines_with_no = []
for line_num, line in enumerate(truncated_lines, start=start_idx + 1):
lines_with_no.append(f"{line_num:6d}\t{line}")
# Build result
output = "\n".join(lines_with_no)
message = (
f"{len(truncated_lines)} lines read from file starting from line {start_idx + 1}."
)
if max_bytes_reached:
message += f" Max {self._max_bytes} bytes reached."
elif end_idx < len(lines):
message += f" File has {len(lines)} lines total."
if truncated_line_numbers:
message += f" Lines {truncated_line_numbers} were truncated."
return ToolOk(output=output, message=message)
except Exception as e:
return ToolError(
message=f"Failed to read {params.path}. Error: {e}",
)
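ReadFile's pagination and line-numbering reduce to a short helper: slice from `line_offset`, cap at `n_lines`, truncate overlong lines, and prefix each line with a right-aligned number. This is a minimal sketch with invented names, not the tool's actual implementation.

```python
def paginate(content: str, line_offset: int = 1, n_lines: int = 2,
             max_line_length: int = 10) -> str:
    lines = content.split("\n")
    start = line_offset - 1  # 1-based offset -> 0-based index
    selected = lines[start:start + n_lines]
    numbered = []
    for i, line in enumerate(selected, start=line_offset):
        if len(line) > max_line_length:
            line = line[:max_line_length]  # truncate overlong lines
        numbered.append(f"{i:6d}\t{line}")  # right-aligned line number
    return "\n".join(numbered)

sample = "one\ntwo\nthree\nfour"
page = paginate(sample, line_offset=2, n_lines=2)
```

Requesting two lines starting at line 2 yields lines 2 and 3, each prefixed with a six-character line number and a tab, the same format the tool emits.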


@@ -0,0 +1,183 @@
"""ReadMediaFile tool for AgentLite.
This module provides a tool for reading image and video files.
"""
from __future__ import annotations
from typing import Optional
import base64
from pathlib import Path
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the ReadMediaFile tool."""
path: str = Field(
description=(
"The path to the media file to read. "
"Absolute paths are required when reading files outside the working directory."
)
)
class ReadMediaFile(CallableTool2[Params]):
"""Tool for reading image and video files.
This tool reads media files and returns them as base64-encoded data URLs.
Supports images (PNG, JPEG, GIF, etc.) and videos.
Example:
>>> tool = ReadMediaFile(work_dir=Path("/tmp"))
>>> result = await tool({"path": "image.png"})
"""
name: str = "ReadMediaFile"
description: str = (
"Read an image or video file and return it as a base64-encoded data URL. "
"Supported formats: PNG, JPEG, GIF, WebP, MP4, WebM, and others. "
"Maximum file size: 100MB."
)
params: type[Params] = Params
# Supported media types
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp", ".bmp", ".svg"}
VIDEO_EXTENSIONS = {".mp4", ".webm", ".mov", ".avi", ".mkv"}
# MIME type mapping
MIME_TYPES = {
".png": "image/png",
".jpg": "image/jpeg",
".jpeg": "image/jpeg",
".gif": "image/gif",
".webp": "image/webp",
".bmp": "image/bmp",
".svg": "image/svg+xml",
".mp4": "video/mp4",
".webm": "video/webm",
".mov": "video/quicktime",
".avi": "video/x-msvideo",
".mkv": "video/x-matroska",
}
def __init__(
self,
work_dir: Path,
max_size_mb: int = 100,
):
"""Initialize the ReadMediaFile tool.
Args:
work_dir: The working directory for relative paths
max_size_mb: Maximum file size in MB
"""
super().__init__()
self._work_dir = work_dir
self._max_size = max_size_mb * 1024 * 1024
def _is_within_work_dir(self, path: Path) -> bool:
"""Check if a path is within the working directory."""
try:
path.relative_to(self._work_dir.resolve())
return True
except ValueError:
return False
def _get_mime_type(self, path: Path) -> Optional[str]:
"""Get MIME type for a file based on extension."""
ext = path.suffix.lower()
return self.MIME_TYPES.get(ext)
def _is_media_file(self, path: Path) -> bool:
"""Check if a file is a supported media file."""
ext = path.suffix.lower()
return ext in self.IMAGE_EXTENSIONS or ext in self.VIDEO_EXTENSIONS
async def __call__(self, params: Params) -> ToolResult:
"""Execute the read media operation.
Args:
params: The read parameters
Returns:
ToolResult with base64 data URL or error
"""
if not params.path:
return ToolError(
message="File path cannot be empty.",
)
try:
# Resolve path
path = Path(params.path).expanduser()
if not path.is_absolute():
path = self._work_dir / path
path = path.resolve()
# Security check
if not self._is_within_work_dir(path) and not Path(params.path).is_absolute():
return ToolError(
message=(
f"`{params.path}` is not an absolute path. "
"You must provide an absolute path to read a file "
"outside the working directory."
),
)
# Check file exists
if not path.exists():
return ToolError(
message=f"`{params.path}` does not exist.",
)
if not path.is_file():
return ToolError(
message=f"`{params.path}` is not a file.",
)
# Check it's a media file
if not self._is_media_file(path):
return ToolError(
message=(
f"`{params.path}` is not a supported media file. "
f"Supported extensions: "
f"{', '.join(sorted(self.IMAGE_EXTENSIONS | self.VIDEO_EXTENSIONS))}"
),
)
# Check file size
file_size = path.stat().st_size
if file_size > self._max_size:
return ToolError(
message=(
f"`{params.path}` is too large ({file_size / 1024 / 1024:.1f}MB). "
f"Maximum size is {self._max_size / 1024 / 1024:.0f}MB."
),
)
# Get MIME type
mime_type = self._get_mime_type(path)
if not mime_type:
return ToolError(
message=f"Could not determine MIME type for `{params.path}`.",
)
# Read and encode file
data = path.read_bytes()
encoded = base64.b64encode(data).decode("ascii")
data_url = f"data:{mime_type};base64,{encoded}"
return ToolOk(
output=data_url,
message=(
f"Loaded {mime_type.split('/')[0]} file `{params.path}` ({file_size} bytes)."
),
)
except Exception as e:
return ToolError(
message=f"Failed to read {params.path}. Error: {e}",
)
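The data-URL construction above reduces to base64-encoding the raw bytes and prefixing the MIME type. A minimal sketch, with a MIME map covering only a subset of the tool's extension table:

```python
import base64
from pathlib import Path

MIME = {".png": "image/png", ".jpg": "image/jpeg"}  # subset of the tool's table

def to_data_url(path: Path) -> str:
    """Encode a file as a data: URL, e.g. 'data:image/png;base64,...'."""
    mime = MIME.get(path.suffix.lower())
    if mime is None:
        raise ValueError(f"unsupported extension: {path.suffix}")
    encoded = base64.b64encode(path.read_bytes()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

Decoding the payload back (`base64.b64decode(url.split(",", 1)[1])`) recovers the original bytes.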

View File

@@ -0,0 +1,189 @@
"""StrReplaceFile tool for AgentLite.
This module provides a tool for editing files using string replacement.
"""
from __future__ import annotations
from pathlib import Path
from typing import Union
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Edit(BaseModel):
"""A single edit operation."""
old: str = Field(description="The old string to replace. Can be multi-line.")
new: str = Field(description="The new string to replace with. Can be multi-line.")
replace_all: bool = Field(
description="Whether to replace all occurrences.",
default=False,
)
class Params(BaseModel):
"""Parameters for the StrReplaceFile tool."""
path: str = Field(
description=(
"The path to the file to edit. Absolute paths are required when editing files "
"outside the working directory."
)
)
edit: Union[Edit, list[Edit]] = Field(
description=(
"The edit(s) to apply to the file. "
"You can provide a single edit or a list of edits here."
),
)
class StrReplaceFile(CallableTool2[Params]):
"""Tool for editing files using string replacement.
This tool replaces strings in a file. It can perform single or multiple
replacements, and optionally replace all occurrences.
Example:
>>> tool = StrReplaceFile(work_dir=Path("/tmp"))
>>> result = await tool({"path": "test.txt", "edit": {"old": "Hello", "new": "Hi"}})
"""
name: str = "StrReplaceFile"
description: str = (
"Edit a file by replacing strings. "
"Supports single or multiple edits, and can replace all occurrences. "
"The old string must match exactly (including whitespace)."
)
params: type[Params] = Params
def __init__(
self,
work_dir: Path,
allow_outside_work_dir: bool = False,
):
"""Initialize the StrReplaceFile tool.
Args:
work_dir: The working directory for relative paths
allow_outside_work_dir: Whether to allow editing outside the working directory
"""
super().__init__()
self._work_dir = work_dir
self._allow_outside_work_dir = allow_outside_work_dir
def _is_within_work_dir(self, path: Path) -> bool:
"""Check if a path is within the working directory."""
try:
path.relative_to(self._work_dir.resolve())
return True
except ValueError:
return False
def _apply_edit(self, content: str, edit: Edit) -> tuple[str, int]:
"""Apply a single edit to the content.
Args:
content: The original content
edit: The edit to apply
Returns:
Tuple of (new_content, replacements_count)
"""
if edit.replace_all:
count = content.count(edit.old)
new_content = content.replace(edit.old, edit.new)
return new_content, count
else:
if edit.old in content:
new_content = content.replace(edit.old, edit.new, 1)
return new_content, 1
return content, 0
async def __call__(self, params: Params) -> ToolResult:
"""Execute the string replacement operation.
Args:
params: The edit parameters
Returns:
ToolResult with success message or error
"""
if not params.path:
return ToolError(
message="File path cannot be empty.",
)
try:
# Resolve path
path = Path(params.path).expanduser()
if not path.is_absolute():
path = self._work_dir / path
path = path.resolve()
# Security check
if not self._is_within_work_dir(path):
if not Path(params.path).is_absolute():
return ToolError(
message=(
f"`{params.path}` is not an absolute path. "
"You must provide an absolute path to edit a file "
"outside the working directory."
),
)
if not self._allow_outside_work_dir:
return ToolError(
message=(
f"Editing outside the working directory is not allowed. "
f"Path: {params.path}"
),
)
# Check file exists
if not path.exists():
return ToolError(
message=f"`{params.path}` does not exist.",
)
if not path.is_file():
return ToolError(
message=f"`{params.path}` is not a file.",
)
# Read file content
content = path.read_text(encoding="utf-8", errors="replace")
original_content = content
# Normalize edits to list
edits = [params.edit] if isinstance(params.edit, Edit) else params.edit
# Apply edits
total_replacements = 0
for edit in edits:
content, count = self._apply_edit(content, edit)
total_replacements += count
# Check if any changes were made
if content == original_content:
return ToolError(
message="No replacements were made. The old string was not found in the file.",
)
# Write back
path.write_text(content, encoding="utf-8")
return ToolOk(
output="",
message=(
f"File successfully edited. "
f"Applied {len(edits)} edit(s) with {total_replacements} total replacement(s)."
),
)
except Exception as e:
return ToolError(
message=f"Failed to edit {params.path}. Error: {e}",
)
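The single-vs-all replacement semantics of `_apply_edit` can be reproduced with `str.replace` alone; a sketch of the counting logic as a free function:

```python
def apply_edit(content: str, old: str, new: str, replace_all: bool = False):
    """Return (new_content, replacement_count), mirroring the tool's semantics."""
    if replace_all:
        # replace() with no count replaces every occurrence
        return content.replace(old, new), content.count(old)
    if old in content:
        # count=1 limits the replacement to the first occurrence
        return content.replace(old, new, 1), 1
    return content, 0
```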

View File

@@ -0,0 +1,157 @@
"""WriteFile tool for AgentLite.
This module provides a tool for writing files to the filesystem.
"""
from __future__ import annotations
from typing import Literal
from pathlib import Path
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the WriteFile tool."""
path: str = Field(
description=(
"The path to the file to write. Absolute paths are required when writing files "
"outside the working directory."
)
)
content: str = Field(description="The content to write to the file")
mode: Literal["overwrite", "append"] = Field(
description=(
"The mode to use to write to the file. "
"Two modes are supported: `overwrite` for overwriting the whole file and "
"`append` for appending to the end of an existing file."
),
default="overwrite",
)
class WriteFile(CallableTool2[Params]):
"""Tool for writing files to the filesystem.
This tool writes content to a file, either overwriting or appending.
Example:
>>> tool = WriteFile(work_dir=Path("/tmp"))
>>> result = await tool({"path": "test.txt", "content": "Hello World"})
"""
name: str = "WriteFile"
description: str = (
"Write content to a file on the local filesystem. "
"Can create new files or overwrite/append to existing files."
)
params: type[Params] = Params
def __init__(
self,
work_dir: Path,
allow_outside_work_dir: bool = False,
):
"""Initialize the WriteFile tool.
Args:
work_dir: The working directory for relative paths
allow_outside_work_dir: Whether to allow writing outside the working directory
"""
super().__init__()
self._work_dir = work_dir
self._allow_outside_work_dir = allow_outside_work_dir
def _is_within_work_dir(self, path: Path) -> bool:
"""Check if a path is within the working directory."""
try:
path.relative_to(self._work_dir.resolve())
return True
except ValueError:
return False
async def __call__(self, params: Params) -> ToolResult:
"""Execute the write file operation.
Args:
params: The write parameters
Returns:
ToolResult with success message or error
"""
if not params.path:
return ToolError(
message="File path cannot be empty.",
)
try:
# Resolve path
path = Path(params.path).expanduser()
if not path.is_absolute():
path = self._work_dir / path
path = path.resolve()
# Security check
if not self._is_within_work_dir(path):
if not Path(params.path).is_absolute():
return ToolError(
message=(
f"`{params.path}` is not an absolute path. "
"You must provide an absolute path to write a file "
"outside the working directory."
),
)
if not self._allow_outside_work_dir:
return ToolError(
message=(
f"Writing outside the working directory is not allowed. "
f"Path: {params.path}"
),
)
# Check parent directory exists
if not path.parent.exists():
return ToolError(
message=f"Parent directory `{path.parent}` does not exist.",
)
# Check valid mode
if params.mode not in ("overwrite", "append"):
return ToolError(
message=f"Invalid mode: {params.mode}. Must be 'overwrite' or 'append'.",
)
# Check if file exists
file_existed = path.exists()
old_content = ""
if file_existed and path.is_file():
old_content = path.read_text(encoding="utf-8", errors="replace")
# Calculate new content
if params.mode == "append" and file_existed:
new_content = old_content + params.content
else:
new_content = params.content
# Write file
path.write_text(new_content, encoding="utf-8")
# Build success message
action = (
"overwritten"
if params.mode == "overwrite" and file_existed
else ("appended to" if params.mode == "append" and file_existed else "created")
)
file_size = path.stat().st_size
return ToolOk(
output="",
message=f"File `{params.path}` successfully {action}. Size: {file_size} bytes.",
)
except Exception as e:
return ToolError(
message=f"Failed to write to {params.path}. Error: {e}",
)
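The `relative_to`-based containment check shared by these tools is worth isolating. Note it is only meaningful on resolved paths: symlinks and `..` segments must be collapsed first, which this sketch does for both arguments:

```python
from pathlib import Path

def is_within(path: Path, work_dir: Path) -> bool:
    """True if `path` is inside `work_dir` after resolving both sides."""
    try:
        path.resolve().relative_to(work_dir.resolve())
        return True
    except ValueError:
        # relative_to raises ValueError when path is not under work_dir
        return False
```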

View File

@@ -0,0 +1,9 @@
"""Miscellaneous tools for AgentLite.
This module provides utility tools like todo lists and thinking.
"""
from agentlite.tools.misc.todo import SetTodoList
from agentlite.tools.misc.think import Think
__all__ = ["SetTodoList", "Think"]

View File

@@ -0,0 +1,69 @@
"""Think tool for AgentLite.
This module provides a tool for recording thoughts.
"""
from __future__ import annotations
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the Think tool."""
thought: str = Field(description="A thought to record")
class Think(CallableTool2[Params]):
"""Tool for recording thoughts.
This tool allows the agent to record its thinking process.
Useful for debugging and understanding the agent's reasoning.
Example:
>>> tool = Think()
>>> result = await tool({"thought": "I should first check if the file exists..."})
"""
name: str = "Think"
description: str = (
"Record a thought or reasoning step. "
"Use this to think through problems before taking action. "
"The thought will be logged but not returned to the user."
)
params: type[Params] = Params
def __init__(self):
"""Initialize the Think tool."""
super().__init__()
self._thoughts: list[str] = []
async def __call__(self, params: Params) -> ToolResult:
"""Execute the thought recording.
Args:
params: The thought parameters
Returns:
ToolResult with success message
"""
self._thoughts.append(params.thought)
return ToolOk(
output="",
message=f"Thought recorded ({len(self._thoughts)} total thoughts)",
)
def get_thoughts(self) -> list[str]:
"""Get all recorded thoughts.
Returns:
List of all recorded thoughts
"""
return self._thoughts.copy()
def clear_thoughts(self) -> None:
"""Clear all recorded thoughts."""
self._thoughts.clear()

View File

@@ -0,0 +1,101 @@
"""SetTodoList tool for AgentLite.
This module provides a tool for managing todo lists.
"""
from __future__ import annotations
from typing import Literal
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolOk, ToolResult
class Todo(BaseModel):
"""A single todo item."""
title: str = Field(description="The title of the todo", min_length=1)
status: Literal["pending", "in_progress", "done"] = Field(description="The status of the todo")
class Params(BaseModel):
"""Parameters for the SetTodoList tool."""
todos: list[Todo] = Field(description="The todo list to set")
class SetTodoList(CallableTool2[Params]):
"""Tool for managing todo lists.
This tool allows the agent to create and update a todo list.
The todo list can be used to track tasks and progress.
Example:
>>> tool = SetTodoList()
>>> result = await tool(
... {
... "todos": [
... {"title": "Read docs", "status": "done"},
... {"title": "Write code", "status": "in_progress"},
... ]
... }
... )
"""
name: str = "SetTodoList"
description: str = (
"Set or update the todo list. "
"Use this to track tasks and show progress. "
"Each todo has a title and status (pending/in_progress/done)."
)
params: type[Params] = Params
def __init__(self):
"""Initialize the SetTodoList tool."""
super().__init__()
self._todos: list[Todo] = []
async def __call__(self, params: Params) -> ToolResult:
"""Execute the todo list update.
Args:
params: The todo list parameters
Returns:
ToolResult with success message
"""
self._todos = params.todos
# Format output
lines = []
for todo in self._todos:
status_emoji = {
"pending": "⏳",
"in_progress": "🔨",
"done": "✅",
}.get(todo.status, "")
lines.append(f"{status_emoji} {todo.title}")
output = "\n".join(lines) if lines else "No todos."
# Count by status
counts = {"pending": 0, "in_progress": 0, "done": 0}
for todo in self._todos:
if todo.status in counts:
counts[todo.status] += 1
message = (
f"Todo list updated: {len(self._todos)} items "
f"({counts['done']} done, {counts['in_progress']} in progress, "
f"{counts['pending']} pending)"
)
return ToolOk(output=output, message=message)
def get_todos(self) -> list[Todo]:
"""Get the current todo list.
Returns:
The current list of todos
"""
return self._todos.copy()
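The per-status tally above is a plain counting loop; with `collections.Counter` it collapses to one expression (unknown statuses are ignored, as in the tool):

```python
from collections import Counter

def summarize(statuses: list[str]) -> str:
    """Build the same 'N done, M in progress, K pending' summary string."""
    c = Counter(s for s in statuses if s in {"pending", "in_progress", "done"})
    return (
        f"{len(statuses)} items ({c['done']} done, "
        f"{c['in_progress']} in progress, {c['pending']} pending)"
    )
```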

View File

@@ -0,0 +1,6 @@
"""Multi-agent tools for AgentLite.
This module provides tools for creating and managing subagents.
"""
__all__ = []

View File

@@ -0,0 +1,59 @@
"""CreateSubagent tool for AgentLite.
This module provides a tool for dynamically creating subagents.
In this rdev subagent integration, nested subagents are intentionally
disabled. The tool is kept for API compatibility but always returns an
explicit disabled error.
"""
from __future__ import annotations
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolResult
class Params(BaseModel):
"""Parameters for the CreateSubagent tool."""
name: str = Field(description="The name of the subagent to create")
prompt: str = Field(
description=(
"The system prompt for the subagent. "
"This defines the subagent's personality and capabilities."
),
)
class CreateSubagent(CallableTool2[Params]):
"""Tool for dynamically creating subagents.
This tool creates a new subagent with a custom system prompt.
The subagent can then be used with the Task tool.
Example:
>>> tool = CreateSubagent()
>>> result = await tool({"name": "researcher", "prompt": "You are a research assistant..."})
"""
name: str = "CreateSubagent"
description: str = (
"Create a new subagent with a custom system prompt. "
"The subagent can be used to perform specialized tasks. "
"Use the Task tool to run tasks with created subagents."
)
params: type[Params] = Params
def __init__(self):
"""Initialize the CreateSubagent tool."""
super().__init__()
async def __call__(self, params: Params) -> ToolResult:
"""Refuse to create nested subagents."""
return ToolError(
message=(
"CreateSubagent tool is disabled in this subagent runtime. "
f"Dynamic subagent creation is not allowed (requested '{params.name}')."
),
)

View File

@@ -0,0 +1,99 @@
"""Task tool for AgentLite.
This module provides a tool for delegating tasks to subagents.
In this rdev subagent integration, nested subagents are intentionally
disabled. The tool is kept for API compatibility but no longer executes
delegation.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolResult
if TYPE_CHECKING:
from agentlite.agent import Agent
from agentlite.labor_market import LaborMarket
class Params(BaseModel):
"""Parameters for the Task tool."""
subagent_name: str = Field(description="The name of the subagent to call (must be registered)")
prompt: str = Field(
description=(
"The task for the subagent to perform. "
"Provide detailed instructions with all necessary context."
),
)
description: str = Field(
default="",
description="A short (3-5 word) description of the task (for logging)",
)
class Task(CallableTool2[Params]):
"""Tool for delegating tasks to subagents.
This tool allows a parent agent to delegate tasks to its subagents.
The subagent must be registered in the parent's labor market.
Example:
>>> # Parent agent has a "coder" subagent
>>> tool = Task(parent_agent)
>>> result = await tool(
... {
... "subagent_name": "coder",
... "prompt": "Write a Python function to sort a list",
... "description": "Write sorting function",
... }
... )
"""
name: str = "Task"
description: str = (
"Delegate a task to a specialized subagent. "
"The subagent must be registered in the parent agent's labor market. "
"The subagent will execute independently and return its findings."
)
params: type[Params] = Params
def __init__(
self,
labor_market: LaborMarket | None = None,
parent_agent: Agent | None = None,
max_iterations: int = 80,
):
"""Initialize the Task tool.
Args:
labor_market: The LaborMarket containing subagents
parent_agent: Alternative: the parent agent (uses its labor_market)
max_iterations: Maximum iterations for subagent execution
Raises:
ValueError: If neither labor_market nor parent_agent is provided.
"""
super().__init__()
if labor_market is not None:
self._labor_market = labor_market
elif parent_agent is not None:
self._labor_market = parent_agent.labor_market
else:
raise ValueError("Either labor_market or parent_agent must be provided")
self._max_iterations = max_iterations
async def __call__(self, params: Params) -> ToolResult:
"""Refuse to execute nested subagent delegation."""
return ToolError(
message=(
"Task tool is disabled in this subagent runtime. "
f"Nested subagent delegation is not allowed (requested '{params.subagent_name}')."
),
)

View File

@@ -0,0 +1,8 @@
"""Shell tools for AgentLite.
This module provides tools for executing shell commands.
"""
from agentlite.tools.shell.shell import Shell
__all__ = ["Shell"]

View File

@@ -0,0 +1,164 @@
"""Shell tool for AgentLite.
This module provides a tool for executing shell commands.
"""
from __future__ import annotations
from typing import Optional
import asyncio
import platform
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the Shell tool."""
command: str = Field(description="The shell command to execute.")
timeout: int = Field(
description=(
"The timeout in seconds for the command to execute. "
"If the command takes longer than this, it will be killed."
),
default=60,
ge=1,
le=3600,
)
class Shell(CallableTool2[Params]):
"""Tool for executing shell commands.
This tool executes shell commands and returns their output.
Supports configurable timeout and command blocking for security.
Example:
>>> tool = Shell()
>>> result = await tool({"command": "ls -la"})
"""
name: str = "Shell"
description: str = (
"Execute a shell command and return its output. "
"Supports bash on Unix/Linux/macOS and PowerShell on Windows. "
"Use with caution - commands are executed with user permissions."
)
params: type[Params] = Params
def __init__(
self,
timeout: int = 60,
max_timeout: int = 300,
blocked_commands: Optional[list[str]] = None,
):
"""Initialize the Shell tool.
Args:
timeout: Default timeout in seconds
max_timeout: Maximum allowed timeout
blocked_commands: List of command patterns to block
"""
super().__init__()
self._default_timeout = timeout
self._max_timeout = max_timeout
self._blocked_commands = blocked_commands or []
self._is_windows = platform.system() == "Windows"
def _is_blocked(self, command: str) -> Optional[str]:
"""Check if a command is blocked.
Args:
command: The command to check
Returns:
Block reason if blocked, None otherwise
"""
cmd_lower = command.lower().strip()
for blocked in self._blocked_commands:
if blocked.lower() in cmd_lower:
return f"Command contains blocked pattern: {blocked}"
return None
async def __call__(self, params: Params) -> ToolResult:
"""Execute the shell command.
Args:
params: The command parameters
Returns:
ToolResult with command output or error
"""
if not params.command:
return ToolError(
message="Command cannot be empty.",
)
# Check if blocked
if block_reason := self._is_blocked(params.command):
return ToolError(
message=f"Command blocked: {block_reason}",
)
# Validate timeout
timeout = min(params.timeout, self._max_timeout)
try:
# Determine shell
if self._is_windows:
# Use PowerShell on Windows
shell_cmd = ["powershell", "-Command", params.command]
else:
# Use bash on Unix/Linux/macOS
shell_cmd = ["bash", "-c", params.command]
# Execute command
process = await asyncio.create_subprocess_exec(
*shell_cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
try:
stdout, stderr = await asyncio.wait_for(
process.communicate(),
timeout=timeout,
)
except asyncio.TimeoutError:
process.kill()
await process.wait()
return ToolError(
message=f"Command timed out after {timeout} seconds.",
)
# Decode output
stdout_str = stdout.decode("utf-8", errors="replace")
stderr_str = stderr.decode("utf-8", errors="replace")
# Build output
output_parts = []
if stdout_str:
output_parts.append(stdout_str)
if stderr_str:
output_parts.append(f"[stderr]\n{stderr_str}")
output = "\n".join(output_parts)
if process.returncode == 0:
return ToolOk(
output=output,
message="Command executed successfully (exit code 0).",
)
else:
return ToolError(
message=f"Command failed with exit code {process.returncode}.",
output=output,
)
except Exception as e:
return ToolError(
message=f"Failed to execute command. Error: {e}",
)
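The execute-with-timeout pattern above (spawn, `wait_for` on `communicate()`, kill and reap on timeout) is the standard asyncio idiom. A minimal sketch, using the current Python interpreter instead of bash/PowerShell for portability:

```python
import asyncio
import sys

async def run_with_timeout(code: str, timeout: float = 5.0):
    """Run `python -c code`; return (returncode, stdout) or (None, "") on timeout."""
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", code,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        out, _ = await asyncio.wait_for(proc.communicate(), timeout=timeout)
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()  # reap the killed process to avoid a zombie
        return None, ""
    return proc.returncode, out.decode("utf-8", errors="replace")
```

Forgetting `await proc.wait()` after `kill()` is a common source of leaked child processes with this pattern.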

View File

@@ -0,0 +1,8 @@
"""Web tools for AgentLite.
This module provides tools for web access and search.
"""
from agentlite.tools.web.fetch import FetchURL
__all__ = ["FetchURL"]

View File

@@ -0,0 +1,173 @@
"""FetchURL tool for AgentLite.
This module provides a tool for fetching web page content.
"""
from __future__ import annotations
import urllib.request
import urllib.error
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolOk, ToolResult
class Params(BaseModel):
"""Parameters for the FetchURL tool."""
url: str = Field(description="The URL to fetch content from.")
class FetchURL(CallableTool2[Params]):
"""Tool for fetching web page content.
This tool fetches the content of a web page and extracts the main text.
Uses simple HTTP GET with configurable timeout.
Example:
>>> tool = FetchURL()
>>> result = await tool({"url": "https://example.com"})
"""
name: str = "FetchURL"
description: str = (
"Fetch the content of a web page. "
"Returns the HTML content or extracts main text if possible. "
"Useful for reading documentation, articles, or API responses."
)
params: type[Params] = Params
def __init__(
self,
timeout: int = 30,
user_agent: str = (
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
"(KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
),
max_content_length: int = 1024 * 1024, # 1MB
):
"""Initialize the FetchURL tool.
Args:
timeout: Request timeout in seconds
user_agent: User-Agent string
max_content_length: Maximum content length to fetch
"""
super().__init__()
self._timeout = timeout
self._user_agent = user_agent
self._max_content_length = max_content_length
def _extract_text(self, html: str) -> str:
"""Simple HTML to text extraction.
Args:
html: HTML content
Returns:
Extracted text
"""
import re
# Remove script and style elements
html = re.sub(r"<script[^>]*>.*?</script>", "", html, flags=re.DOTALL)
html = re.sub(r"<style[^>]*>.*?</style>", "", html, flags=re.DOTALL)
# Remove HTML tags
text = re.sub(r"<[^>]+>", "", html)
# Decode HTML entities
import html as html_module
text = html_module.unescape(text)
# Normalize whitespace
text = re.sub(r"\s+", " ", text)
return text.strip()
async def __call__(self, params: Params) -> ToolResult:
"""Execute the URL fetch.
Args:
params: The fetch parameters
Returns:
ToolResult with page content or error
"""
if not params.url:
return ToolError(
message="URL cannot be empty.",
)
try:
# Create request with headers
request = urllib.request.Request(
params.url,
headers={
"User-Agent": self._user_agent,
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Language": "en-US,en;q=0.5",
"Accept-Encoding": "identity",
},
)
# Fetch URL
with urllib.request.urlopen(request, timeout=self._timeout) as response:
# Check content length
content_length = response.headers.get("Content-Length")
if content_length and int(content_length) > self._max_content_length:
return ToolError(
message=(
f"Content too large ({int(content_length)} bytes). "
f"Maximum is {self._max_content_length} bytes."
),
)
# Read content
content = response.read()
# Check size limit
if len(content) > self._max_content_length:
return ToolError(
message=(
f"Content too large ({len(content)} bytes). "
f"Maximum is {self._max_content_length} bytes."
),
)
# Decode content
try:
text = content.decode("utf-8")
except UnicodeDecodeError:
try:
text = content.decode("latin-1")
except UnicodeDecodeError:
text = content.decode("utf-8", errors="replace")
# Extract text if HTML
content_type = response.headers.get("Content-Type", "")
if "text/html" in content_type:
extracted = self._extract_text(text)
return ToolOk(
output=extracted,
message=f"Fetched and extracted content from {params.url}",
)
else:
return ToolOk(
output=text,
message=f"Fetched content from {params.url}",
)
except urllib.error.HTTPError as e:
return ToolError(
message=f"HTTP error {e.code}: {e.reason}",
)
except urllib.error.URLError as e:
return ToolError(
message=f"URL error: {e.reason}",
)
except Exception as e:
return ToolError(
message=f"Failed to fetch {params.url}. Error: {e}",
)
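The regex-based extraction in `_extract_text` is deliberately crude (no parser dependency). The same three steps as a standalone function; the `re.IGNORECASE` flags are a small hardening not present in the original, so `<SCRIPT>` blocks are also stripped:

```python
import html as html_module
import re

def extract_text(html: str) -> str:
    """Strip scripts/styles and tags, unescape entities, normalize whitespace."""
    html = re.sub(r"<script[^>]*>.*?</script>", "", html, flags=re.DOTALL | re.IGNORECASE)
    html = re.sub(r"<style[^>]*>.*?</style>", "", html, flags=re.DOTALL | re.IGNORECASE)
    text = re.sub(r"<[^>]+>", "", html)       # remove remaining tags
    text = html_module.unescape(text)          # decode &amp; etc.
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace
```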

View File

@@ -0,0 +1,82 @@
"""SearchWeb tool for AgentLite.
This module provides a tool for web search.
Note: This is a placeholder implementation. A real implementation would
require integration with a search API like Google, Bing, or DuckDuckGo.
"""
from __future__ import annotations
from pydantic import BaseModel, Field
from agentlite.tool import CallableTool2, ToolError, ToolResult
class Params(BaseModel):
"""Parameters for the SearchWeb tool."""
query: str = Field(description="The search query string.")
num_results: int = Field(
description="Number of search results to return (max 10).",
default=5,
ge=1,
le=10,
)
class SearchWeb(CallableTool2[Params]):
"""Tool for web search.
This tool performs a web search and returns relevant results.
Note: This is a placeholder implementation. To use real search functionality,
you need to integrate with a search API (Google, Bing, DuckDuckGo, etc.)
and set the appropriate API keys.
Example:
>>> tool = SearchWeb()
>>> result = await tool({"query": "Python async programming"})
"""
name: str = "SearchWeb"
description: str = (
"Search the web for information. "
"Returns a list of relevant search results with titles and snippets. "
"Note: Requires search API configuration to work properly."
)
params: type[Params] = Params
def __init__(
self,
timeout: int = 30,
user_agent: str = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"),
):
"""Initialize the SearchWeb tool.
Args:
timeout: Request timeout in seconds
user_agent: User-Agent string
"""
super().__init__()
self._timeout = timeout
self._user_agent = user_agent
async def __call__(self, params: Params) -> ToolResult:
"""Execute the web search.
Args:
params: The search parameters
Returns:
ToolResult with search results or error
"""
if not params.query:
return ToolError(message="Search query cannot be empty.")
return ToolError(
message=(
"SearchWeb tool is disabled in this subagent runtime. "
"Use FetchURL for direct URL content retrieval."
),
)

329
agentlite/tests/conftest.py Normal file
View File

@@ -0,0 +1,329 @@
"""Test configuration and shared fixtures for AgentLite tests.
This module provides pytest configuration and fixtures that are shared
across all test modules.
"""
from __future__ import annotations
import asyncio
import json
from collections.abc import AsyncIterator, Sequence
from typing import Any, Optional
import pytest
from agentlite import (
Agent,
ContentPart,
Message,
TextPart,
ToolCall,
tool,
)
from agentlite.provider import StreamedMessage, TokenUsage
from agentlite.tool import Tool
# =============================================================================
# pytest Configuration
# =============================================================================
def pytest_configure(config):
"""Configure pytest with custom markers."""
config.addinivalue_line("markers", "unit: Unit tests")
config.addinivalue_line("markers", "integration: Integration tests")
config.addinivalue_line("markers", "scenario: Real-world scenario tests")
config.addinivalue_line("markers", "slow: Slow tests that may take time")
# =============================================================================
# Mock Provider Implementation
# =============================================================================
class MockStreamedMessage:
"""Mock streamed message for testing."""
def __init__(self, parts: list[ContentPart]):
self._parts = parts
self._id = "mock-msg-123"
self._usage = TokenUsage(input_tokens=10, output_tokens=5)
def __aiter__(self) -> AsyncIterator[ContentPart]:
"""Return async iterator over parts."""
return self._iter_parts()
async def _iter_parts(self) -> AsyncIterator[ContentPart]:
"""Iterate over parts."""
for part in self._parts:
yield part
@property
def id(self) -> Optional[str]:
"""Message ID."""
return self._id
@property
def usage(self) -> Optional[TokenUsage]:
"""Token usage."""
return self._usage
class MockProvider:
"""Mock provider for testing AgentLite without real API calls.
This provider simulates OpenAI API responses and allows:
- Configuring response sequences
- Simulating tool calls
- Simulating errors
- Tracking all calls for verification
Example:
provider = MockProvider()
provider.add_text_response("Hello!")
provider.add_tool_call("add", {"a": 1, "b": 2}, "3")
agent = Agent(provider=provider)
response = await agent.run("Hi")
# Verify calls
assert len(provider.calls) == 1
assert provider.calls[0]["system_prompt"] == "You are a helpful assistant."
"""
def __init__(self):
self.responses: list[dict[str, Any]] = []
self.calls: list[dict[str, Any]] = []
self.model = "mock-model"
def add_text_response(self, text: str) -> None:
"""Add a text response to the queue."""
self.responses.append({"type": "text", "content": text})
def add_text_responses(self, *texts: str) -> None:
"""Add multiple text responses to the queue."""
for text in texts:
self.add_text_response(text)
def add_tool_call(self, name: str, arguments: dict[str, Any], result: str) -> None:
"""Add a tool call response to the queue."""
self.responses.append(
{"type": "tool_call", "name": name, "arguments": arguments, "result": result}
)
def add_tool_calls(self, calls: list[dict[str, Any]]) -> None:
"""Add multiple tool calls to the queue."""
for call in calls:
self.add_tool_call(call["name"], call["arguments"], call.get("result", ""))
def add_error(self, error: Exception) -> None:
"""Add an error response to the queue."""
self.responses.append({"type": "error", "error": error})
def clear_responses(self) -> None:
"""Clear all pending responses."""
self.responses.clear()
@property
def model_name(self) -> str:
"""Model name."""
return self.model
async def generate(
self,
system_prompt: str,
tools: Sequence[Tool],
history: Sequence[Message],
) -> StreamedMessage:
"""Generate a mock response."""
self.calls.append(
{
"system_prompt": system_prompt,
"tools": list(tools),
"history": list(history),
}
)
if not self.responses:
return MockStreamedMessage([TextPart(text="Mock response")])
response = self.responses.pop(0)
if response["type"] == "error":
raise response["error"]
elif response["type"] == "text":
return MockStreamedMessage([TextPart(text=response["content"])])
elif response["type"] == "tool_call":
return MockStreamedMessage(
[
ToolCall(
id="call_123",
function=ToolCall.FunctionBody(
name=response["name"], arguments=json.dumps(response["arguments"])
),
)
]
)
else:
return MockStreamedMessage([TextPart(text="Unknown response type")])
# =============================================================================
# Fixtures
# =============================================================================
@pytest.fixture
def mock_provider():
"""Create a mock provider with no responses configured."""
return MockProvider()
@pytest.fixture
def mock_provider_with_response():
"""Create a mock provider that returns a simple text response."""
provider = MockProvider()
provider.add_text_response("Hello!")
return provider
@pytest.fixture
def mock_provider_with_sequence():
"""Create a mock provider with multiple responses configured."""
provider = MockProvider()
provider.add_text_responses("Response 1", "Response 2", "Response 3")
return provider
# =============================================================================
# Message Fixtures
# =============================================================================
@pytest.fixture
def sample_text_message():
"""Create a sample text message."""
return Message(role="user", content="Hello!")
@pytest.fixture
def sample_assistant_message():
"""Create a sample assistant message."""
return Message(role="assistant", content="Hi there!")
@pytest.fixture
def sample_tool_call():
"""Create a sample tool call."""
return ToolCall(
id="call_123", function=ToolCall.FunctionBody(name="add", arguments='{"a": 1, "b": 2}')
)
@pytest.fixture
def sample_tool_message():
"""Create a sample tool response message."""
return Message(role="tool", content="3", tool_call_id="call_123")
# =============================================================================
# Tool Fixtures
# =============================================================================
@pytest.fixture
def add_tool():
"""Create a simple add tool."""
@tool()
async def add(a: float, b: float) -> float:
"""Add two numbers."""
return a + b
return add
@pytest.fixture
def multiply_tool():
"""Create a multiply tool."""
@tool()
async def multiply(a: float, b: float) -> float:
"""Multiply two numbers."""
return a * b
return multiply
@pytest.fixture
def error_tool():
"""Create a tool that raises an error."""
@tool()
async def error() -> str:
"""Always raises an error."""
raise ValueError("Test error")
return error
@pytest.fixture
def slow_tool():
"""Create a tool that takes some time."""
@tool()
async def slow_operation(duration: float = 0.1) -> str:
"""Simulate a slow operation."""
await asyncio.sleep(duration)
return f"Completed after {duration}s"
return slow_operation
# =============================================================================
# Agent Fixtures
# =============================================================================
@pytest.fixture
async def simple_agent(mock_provider):
"""Create a simple agent with mocked provider."""
return Agent(provider=mock_provider)
@pytest.fixture
async def agent_with_tools(mock_provider, add_tool):
"""Create an agent with tools."""
return Agent(provider=mock_provider, tools=[add_tool])
@pytest.fixture
async def agent_with_multiple_tools(mock_provider, add_tool, multiply_tool):
"""Create an agent with multiple tools."""
return Agent(provider=mock_provider, tools=[add_tool, multiply_tool])
# =============================================================================
# Utility Fixtures
# =============================================================================
@pytest.fixture
def sample_conversation():
"""Create a sample conversation history."""
return [
Message(role="user", content="Hello!"),
Message(role="assistant", content="Hi there! How can I help?"),
Message(role="user", content="What is 2+2?"),
Message(role="assistant", content="2+2=4"),
]
@pytest.fixture
def event_loop():
"""Create an instance of the default event loop for each test case."""
loop = asyncio.get_event_loop_policy().new_event_loop()
yield loop
loop.close()
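The MockProvider above queues canned replies and pops them in FIFO order while recording every call for later assertions. A minimal self-contained sketch of that queue pattern (plain Python, no agentlite imports; the `ResponseQueue` name and `generate` signature are simplified illustrations, not agentlite's API):

```python
import asyncio


class ResponseQueue:
    """Simplified stand-in for MockProvider's FIFO response queue."""

    def __init__(self):
        self.responses = []
        self.calls = []

    def add_text_response(self, text):
        # Queue a canned reply, consumed in insertion order.
        self.responses.append({"type": "text", "content": text})

    async def generate(self, prompt):
        # Record the call for later verification, then pop FIFO;
        # fall back to a default when the queue is exhausted.
        self.calls.append({"prompt": prompt})
        if not self.responses:
            return "Mock response"
        return self.responses.pop(0)["content"]


async def demo():
    queue = ResponseQueue()
    queue.add_text_response("Hello!")
    queue.add_text_response("Bye!")
    replies = [await queue.generate(p) for p in ("Hi", "See you", "Anything left?")]
    return replies, len(queue.calls)


replies, call_count = asyncio.run(demo())
print(replies, call_count)  # → ['Hello!', 'Bye!', 'Mock response'] 3
```

The same record-then-pop shape is what lets the fixtures below assert on `provider.calls` after a run.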

View File

@@ -0,0 +1,286 @@
"""Integration tests for Agent class.
This module tests the Agent class with mocked providers to verify
core functionality without making real API calls.
"""
from __future__ import annotations
import pytest
from agentlite import Agent
@pytest.mark.integration
class TestAgentInitialization:
"""Tests for Agent initialization."""
def test_agent_initialization(self, mock_provider):
"""Test basic agent creation."""
agent = Agent(provider=mock_provider)
assert agent.provider is mock_provider
assert agent.system_prompt == "You are a helpful assistant."
assert agent.max_iterations == 80
assert agent.history == []
def test_agent_with_custom_system_prompt(self, mock_provider):
"""Test agent creation with custom system prompt."""
agent = Agent(provider=mock_provider, system_prompt="You are a specialized assistant.")
assert agent.system_prompt == "You are a specialized assistant."
def test_agent_with_tools(self, mock_provider, add_tool):
"""Test agent creation with tools."""
agent = Agent(provider=mock_provider, tools=[add_tool])
assert len(agent.tools.tools) == 1
assert agent.tools.tools[0].name == "add"
def test_agent_with_custom_max_iterations(self, mock_provider):
"""Test agent with custom max_iterations."""
agent = Agent(provider=mock_provider, max_iterations=5)
assert agent.max_iterations == 5
@pytest.mark.integration
class TestAgentRun:
"""Tests for Agent.run() method."""
@pytest.mark.asyncio
async def test_agent_run_simple(self, mock_provider):
"""Test simple non-streaming run."""
mock_provider.add_text_response("Hello there!")
agent = Agent(provider=mock_provider)
response = await agent.run("Hi")
assert response == "Hello there!"
@pytest.mark.asyncio
async def test_agent_run_adds_to_history(self, mock_provider):
"""Test that run adds messages to history."""
mock_provider.add_text_response("Response!")
agent = Agent(provider=mock_provider)
await agent.run("Hello")
# History should have user message and assistant response
assert len(agent.history) == 2
assert agent.history[0].role == "user"
assert agent.history[0].extract_text() == "Hello"
assert agent.history[1].role == "assistant"
@pytest.mark.asyncio
async def test_agent_run_multiple_messages(self, mock_provider):
"""Test multiple runs accumulate history."""
mock_provider.add_text_responses("Response 1", "Response 2")
agent = Agent(provider=mock_provider)
await agent.run("Message 1")
await agent.run("Message 2")
# Should have 4 messages total
assert len(agent.history) == 4
assert agent.history[0].role == "user"
assert agent.history[1].role == "assistant"
assert agent.history[2].role == "user"
assert agent.history[3].role == "assistant"
@pytest.mark.asyncio
async def test_agent_run_tracks_calls(self, mock_provider):
"""Test that provider.generate is called during run."""
mock_provider.add_text_response("Response!")
agent = Agent(provider=mock_provider)
await agent.run("Hello")
assert len(mock_provider.calls) == 1
call = mock_provider.calls[0]
assert call["system_prompt"] == "You are a helpful assistant."
assert len(call["history"]) == 1 # User message
@pytest.mark.integration
class TestAgentGenerate:
"""Tests for Agent.generate() method."""
@pytest.mark.asyncio
async def test_agent_generate_returns_message(self, mock_provider):
"""Test that generate returns a Message."""
mock_provider.add_text_response("Generated response")
agent = Agent(provider=mock_provider)
message = await agent.generate("Hello")
assert message.role == "assistant"
assert message.extract_text() == "Generated response"
@pytest.mark.asyncio
async def test_agent_generate_without_tool_loop(self, mock_provider):
"""Test that generate doesn't do tool calling loop."""
# Add tool call response
mock_provider.add_tool_call("add", {"a": 1, "b": 2}, "3")
agent = Agent(provider=mock_provider, tools=[])
message = await agent.generate("Calculate 1+2")
# Should return the tool call without executing it
assert message.has_tool_calls()
assert len(message.tool_calls) == 1
assert message.tool_calls[0].function.name == "add"
@pytest.mark.asyncio
async def test_agent_generate_adds_to_history(self, mock_provider):
"""Test that generate adds response to history."""
mock_provider.add_text_response("Response!")
agent = Agent(provider=mock_provider)
await agent.generate("Hello")
assert len(agent.history) == 2
assert agent.history[1].role == "assistant"
@pytest.mark.integration
class TestAgentHistory:
"""Tests for Agent history management."""
@pytest.mark.asyncio
async def test_agent_history_property_returns_copy(self, mock_provider):
"""Test that history property returns a copy."""
mock_provider.add_text_response("Response!")
agent = Agent(provider=mock_provider)
await agent.run("Hello")
history = agent.history
history.clear() # Modify the copy
# Original should still have messages
assert len(agent.history) == 2
@pytest.mark.asyncio
async def test_agent_clear_history(self, mock_provider):
"""Test clearing history."""
mock_provider.add_text_response("Response!")
agent = Agent(provider=mock_provider)
await agent.run("Hello")
agent.clear_history()
assert agent.history == []
@pytest.mark.asyncio
async def test_agent_add_message(self, mock_provider):
"""Test manually adding a message."""
agent = Agent(provider=mock_provider)
from agentlite import Message
agent.add_message(Message(role="user", content="Manual message"))
assert len(agent.history) == 1
assert agent.history[0].extract_text() == "Manual message"
@pytest.mark.integration
class TestAgentWithTools:
"""Tests for Agent with tools."""
@pytest.mark.asyncio
async def test_agent_with_tools_initialization(self, mock_provider, add_tool):
"""Test agent initialization with tools."""
agent = Agent(
provider=mock_provider, tools=[add_tool], system_prompt="You have access to tools."
)
assert len(agent.tools.tools) == 1
# Run to verify tools are passed to provider
mock_provider.add_text_response("I have tools available")
await agent.run("Hello")
# Check that tools were passed to provider
assert len(mock_provider.calls) == 1
assert len(mock_provider.calls[0]["tools"]) == 1
@pytest.mark.asyncio
async def test_agent_tool_call_execution(self, mock_provider, add_tool):
"""Test that agent executes tool calls."""
# First response: tool call
mock_provider.add_tool_call("add", {"a": 1, "b": 2}, "3")
# Second response: text after tool result
mock_provider.add_text_response("The sum is 3")
agent = Agent(provider=mock_provider, tools=[add_tool])
response = await agent.run("What is 1+2?")
assert "3" in response
# Should have made 2 calls to provider
assert len(mock_provider.calls) == 2
@pytest.mark.integration
class TestAgentMaxIterations:
"""Tests for max_iterations behavior."""
@pytest.mark.asyncio
async def test_agent_respects_max_iterations(self, mock_provider, add_tool):
"""Test that agent stops after max_iterations."""
# Always return tool calls to trigger iteration limit
for _ in range(10):
mock_provider.add_tool_call("add", {"a": 1, "b": 2}, "3")
agent = Agent(provider=mock_provider, tools=[add_tool], max_iterations=3)
response = await agent.run("Calculate")
# Should stop after max_iterations
assert len(mock_provider.calls) <= 3
assert "Maximum tool call iterations reached" in response
@pytest.mark.asyncio
async def test_agent_no_iterations_for_simple_response(self, mock_provider):
"""Test that simple responses don't count as iterations."""
mock_provider.add_text_response("Simple response")
agent = Agent(provider=mock_provider, max_iterations=1)
response = await agent.run("Hello")
assert response == "Simple response"
@pytest.mark.integration
class TestAgentStreaming:
"""Tests for streaming mode."""
@pytest.mark.asyncio
async def test_agent_run_streaming(self, mock_provider):
"""Test streaming run."""
mock_provider.add_text_response("Streamed response")
agent = Agent(provider=mock_provider)
stream = await agent.run("Hello", stream=True)
# Collect stream
chunks = []
async for chunk in stream:
chunks.append(chunk)
assert len(chunks) > 0
assert "".join(chunks) == "Streamed response"
@pytest.mark.asyncio
async def test_agent_streaming_adds_to_history(self, mock_provider):
"""Test that streaming adds messages to history."""
mock_provider.add_text_response("Response")
agent = Agent(provider=mock_provider)
stream = await agent.run("Hello", stream=True)
async for _ in stream:
pass
assert len(agent.history) == 2
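`test_agent_history_property_returns_copy` above depends on `Agent.history` handing out a defensive copy rather than the internal list. A self-contained sketch of that property pattern (the `HistoryHolder` class is illustrative, not agentlite code):

```python
class HistoryHolder:
    """Illustrative holder mirroring Agent's copy-on-read history."""

    def __init__(self):
        self._history = []

    @property
    def history(self):
        # Hand back a shallow copy so callers cannot mutate internal state.
        return list(self._history)

    def add_message(self, message):
        self._history.append(message)


holder = HistoryHolder()
holder.add_message("user: Hello")
snapshot = holder.history
snapshot.clear()            # clears only the caller's copy
print(len(holder.history))  # → 1
```

Without the copy, `history.clear()` in the test would wipe the agent's real state and the assertion on the original would fail.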

View File

@@ -0,0 +1,347 @@
"""Integration tests for AgentLite with real API.
This script runs comprehensive tests against the real OpenAI API.
Requires OPENAI_API_KEY environment variable to be set.
Usage:
export OPENAI_API_KEY="sk-..."
python tests/integration/test_with_api.py
"""
import asyncio
import os
import sys
from pathlib import Path
import pytest
# Add src to path
sys.path.insert(0, str(Path(__file__).parent.parent.parent / "src"))
from agentlite import Agent, LLMClient, OpenAIProvider
from agentlite.skills import discover_skills, SkillTool, index_skills_by_name
from agentlite.tools import ConfigurableToolset
# Test configuration
TEST_MODEL = "gpt-4o-mini" # Use mini for cost efficiency
HAS_OPENAI_API_KEY = bool(os.environ.get("OPENAI_API_KEY"))
pytestmark = pytest.mark.skipif(
not HAS_OPENAI_API_KEY, reason="OPENAI_API_KEY is required to run integration tests"
)
def get_provider():
"""Get OpenAI provider with API key."""
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
print("❌ OPENAI_API_KEY not set!")
print("Please set your OpenAI API key:")
print(" export OPENAI_API_KEY='sk-...'")
sys.exit(1)
return OpenAIProvider(api_key=api_key, model=TEST_MODEL)
async def test_basic_agent():
"""Test 1: Basic Agent functionality."""
print("\n" + "=" * 60)
print("Test 1: Basic Agent Functionality")
print("=" * 60)
try:
provider = get_provider()
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant. Be concise.",
)
response = await agent.run("What is 2+2?")
print(f"✅ Agent responded: {response[:100]}...")
assert "4" in response, "Expected '4' in response"
print("✅ Basic Agent test PASSED")
return True
except Exception as e:
print(f"❌ Basic Agent test FAILED: {e}")
return False
async def test_agent_with_tools():
"""Test 2: Agent with tool suite."""
print("\n" + "=" * 60)
print("Test 2: Agent with Tool Suite")
print("=" * 60)
try:
from agentlite.tools import ToolSuiteConfig
provider = get_provider()
# Create toolset with file tools
config = ToolSuiteConfig()
toolset = ConfigurableToolset(config, work_dir=Path.cwd())
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant with file access.",
tools=toolset.tools,
)
print(f"✅ Agent created with {len(agent.tools.tools)} tools")
# Query that exercises the file tools
response = await agent.run("List the Python files in the current directory")
print(f"✅ Agent with tools responded: {response[:100]}...")
print("✅ Agent with Tools test PASSED")
return True
except Exception as e:
print(f"❌ Agent with Tools test FAILED: {e}")
import traceback
traceback.print_exc()
return False
async def test_llm_client():
"""Test 3: LLMClient functionality."""
print("\n" + "=" * 60)
print("Test 3: LLMClient Functionality")
print("=" * 60)
try:
provider = get_provider()
client = LLMClient(provider=provider)
response = await client.complete(
user_prompt="What is the capital of France?",
system_prompt="You are a helpful assistant. Be concise.",
)
print(f"✅ LLMClient responded: {response.content[:100]}...")
assert "Paris" in response.content, "Expected 'Paris' in response"
print("✅ LLMClient test PASSED")
return True
except Exception as e:
print(f"❌ LLMClient test FAILED: {e}")
import traceback
traceback.print_exc()
return False
async def test_llm_streaming():
"""Test 4: LLM streaming."""
print("\n" + "=" * 60)
print("Test 4: LLM Streaming")
print("=" * 60)
try:
provider = get_provider()
client = LLMClient(provider=provider)
chunks = []
async for chunk in client.stream(
user_prompt="Count from 1 to 3",
system_prompt="You are a helpful assistant.",
):
chunks.append(chunk)
print(f" Chunk: {chunk[:20]}...")
full_response = "".join(chunks)
print(f"✅ Streamed response: {full_response[:100]}...")
print("✅ LLM Streaming test PASSED")
return True
except Exception as e:
print(f"❌ LLM Streaming test FAILED: {e}")
import traceback
traceback.print_exc()
return False
async def test_subagents():
"""Test 5: Subagent functionality."""
print("\n" + "=" * 60)
print("Test 5: Subagent Functionality")
print("=" * 60)
try:
from agentlite.tools.multiagent.task import Task
provider = get_provider()
# Create parent agent
parent = Agent(
provider=provider,
system_prompt="You are a coordinator agent.",
name="coordinator",
)
# Create subagent
coder = Agent(
provider=provider,
system_prompt="You are a coding specialist. Write clean, simple code.",
name="coder",
)
# Add subagent to parent
parent.add_subagent("coder", coder, "Writes code")
# Add Task tool
parent.tools.add(Task(labor_market=parent.labor_market))
print(f"✅ Created parent with {len(parent.labor_market)} subagent(s)")
print(f" Subagents: {parent.labor_market.list_subagents()}")
print("✅ Subagent test PASSED")
return True
except Exception as e:
print(f"❌ Subagent test FAILED: {e}")
import traceback
traceback.print_exc()
return False
async def test_skills():
"""Test 6: Skills functionality."""
print("\n" + "=" * 60)
print("Test 6: Skills Functionality")
print("=" * 60)
try:
# Discover example skills
skills_dir = Path(__file__).parent.parent.parent / "examples" / "skills"
if not skills_dir.exists():
print("⚠️ Skills directory not found, skipping")
return True
skills = discover_skills(skills_dir)
print(f"✅ Discovered {len(skills)} skill(s)")
for skill in skills:
print(f" - {skill.name} ({skill.type})")
if len(skills) == 0:
print("⚠️ No skills found, skipping further tests")
return True
# Test with agent
provider = get_provider()
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant.",
)
skill_index = index_skills_by_name(skills)
skill_tool = SkillTool(skill_index, parent_agent=agent)
agent.tools.add(skill_tool)
print("✅ Added SkillTool to agent")
print("✅ Skills test PASSED")
return True
except Exception as e:
print(f"❌ Skills test FAILED: {e}")
import traceback
traceback.print_exc()
return False
async def test_conversation_history():
"""Test 7: Conversation history."""
print("\n" + "=" * 60)
print("Test 7: Conversation History")
print("=" * 60)
try:
provider = get_provider()
agent = Agent(
provider=provider,
system_prompt="You are a helpful assistant.",
)
# First message
response1 = await agent.run("My name is Alice")
print(f"✅ Response 1: {response1[:50]}...")
# Second message (should remember context)
response2 = await agent.run("What is my name?")
print(f"✅ Response 2: {response2[:50]}...")
assert "Alice" in response2, "Expected agent to remember name"
print("✅ Conversation History test PASSED")
return True
except Exception as e:
print(f"❌ Conversation History test FAILED: {e}")
import traceback
traceback.print_exc()
return False
async def run_all_tests():
"""Run all integration tests."""
print("\n" + "=" * 60)
print("AgentLite Integration Tests with Real API")
print("=" * 60)
print(f"Model: {TEST_MODEL}")
# Check API key
if not os.environ.get("OPENAI_API_KEY"):
print("\n❌ OPENAI_API_KEY not set!")
print("\nTo run these tests, set your OpenAI API key:")
print(" export OPENAI_API_KEY='sk-...'")
print("\nGet your API key from: https://platform.openai.com/api-keys")
return []
results = []
# Run all tests
results.append(("Basic Agent", await test_basic_agent()))
results.append(("Agent with Tools", await test_agent_with_tools()))
results.append(("LLMClient", await test_llm_client()))
results.append(("LLM Streaming", await test_llm_streaming()))
results.append(("Subagents", await test_subagents()))
results.append(("Skills", await test_skills()))
results.append(("Conversation History", await test_conversation_history()))
# Print summary
print("\n" + "=" * 60)
print("Test Summary")
print("=" * 60)
passed = sum(1 for _, result in results if result)
total = len(results)
for name, result in results:
status = "✅ PASSED" if result else "❌ FAILED"
print(f"{status}: {name}")
print(f"\n{passed}/{total} tests passed")
if passed == total:
print("\n🎉 All tests passed!")
else:
print(f"\n⚠️ {total - passed} test(s) failed")
return results
if __name__ == "__main__":
results = asyncio.run(run_all_tests())
# Exit with error code if any tests failed
if results and not all(r for _, r in results):
sys.exit(1)
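`run_all_tests` above accumulates `(name, bool)` pairs, counts passes, and exits non-zero on any failure. A stripped-down, key-free version of that harness pattern (the two check functions are trivial stand-ins, not real tests):

```python
import asyncio


async def always_passes():
    return True


async def always_fails():
    return False


async def run_checks():
    # Accumulate (name, passed) pairs, then summarize.
    results = [
        ("always passes", await always_passes()),
        ("always fails", await always_fails()),
    ]
    passed = sum(1 for _, ok in results if ok)
    print(f"{passed}/{len(results)} checks passed")  # prints "1/2 checks passed"
    return results


results = asyncio.run(run_checks())
```

The `if __name__ == "__main__"` guard in the real script then turns `not all(...)` over these pairs into a `sys.exit(1)` for CI.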

View File

@@ -0,0 +1,140 @@
"""Debug script to find CLI test hang cause."""
from __future__ import annotations
import asyncio
import os
import sys
sys.path.insert(0, "/home/tcmofashi/proj/l2d_backend/agentlite/src")
from agentlite import Agent, OpenAIProvider
from agentlite.tools.shell.shell import Shell, Params
SILICONFLOW_BASE_URL = "https://api.siliconflow.cn/v1"
SILICONFLOW_MODEL = "Qwen/Qwen3.5-397B-A17B"
async def test_shell_directly():
"""Test shell tool without agent."""
print("\n=== Test 1: Shell tool directly ===")
shell = Shell(timeout=10)
# Use Params dataclass
result = await shell(Params(command="echo 'Hello'", timeout=5))
print(f"Result: {result}")
print(f"Output: {result.output if hasattr(result, 'output') else result}")
return True
async def test_agent_no_tools():
"""Test agent without tools."""
print("\n=== Test 2: Agent without tools ===")
api_key = os.environ.get("SILICONFLOW_API_KEY")
if not api_key:
print("SILICONFLOW_API_KEY not set")
return False
provider = OpenAIProvider(
api_key=api_key,
base_url=SILICONFLOW_BASE_URL,
model=SILICONFLOW_MODEL,
timeout=30.0,
)
agent = Agent(
provider=provider,
system_prompt="Reply briefly in one word.",
max_iterations=3,
)
print("Sending message to LLM...")
try:
response = await asyncio.wait_for(
agent.run("Say hello."),
timeout=60.0,
)
print(f"Response: {response[:100]}...")
return True
except asyncio.TimeoutError:
print("TIMEOUT in agent without tools!")
return False
async def test_agent_with_shell():
"""Test agent with shell tool - the problematic case."""
print("\n=== Test 3: Agent WITH shell tool ===")
api_key = os.environ.get("SILICONFLOW_API_KEY")
if not api_key:
print("SILICONFLOW_API_KEY not set")
return False
provider = OpenAIProvider(
api_key=api_key,
base_url=SILICONFLOW_BASE_URL,
model=SILICONFLOW_MODEL,
timeout=60.0,
)
agent = Agent(
provider=provider,
system_prompt="You are a shell assistant. Execute commands when asked. Keep responses brief.",
tools=[Shell(timeout=10)],
max_iterations=5, # Limit iterations
)
print("Sending message with tool request...")
print("This is where it might hang...")
try:
response = await asyncio.wait_for(
agent.run("Run 'echo test' and tell me the result."),
timeout=120.0,
)
print(f"Response: {response}")
return True
except asyncio.TimeoutError:
print("TIMEOUT! Agent hung for 120 seconds")
# Check history to see what happened
print(f"\nHistory length: {len(agent.history)}")
for i, msg in enumerate(agent.history[-5:]):
content_preview = str(msg.content)[:100] if msg.content else "None"
print(f" [{i}] {msg.role}: {content_preview}...")
return False
async def main():
"""Run all tests."""
print("=" * 60)
print("CLI Debug Test - Finding the hang cause")
print("=" * 60)
results = []
# Test 1: Shell directly
r1 = await test_shell_directly()
results.append(("Shell directly", r1))
print(f"Result: {'PASS' if r1 else 'FAIL'}")
# Test 2: Agent without tools
r2 = await test_agent_no_tools()
results.append(("Agent no tools", r2))
print(f"Result: {'PASS' if r2 else 'FAIL'}")
# Test 3: Agent with shell (the problem)
r3 = await test_agent_with_shell()
results.append(("Agent with shell", r3))
print(f"Result: {'PASS' if r3 else 'FAIL'}")
print("\n" + "=" * 60)
print("SUMMARY")
print("=" * 60)
for name, passed in results:
status = "✅ PASS" if passed else "❌ FAIL"
print(f" {name}: {status}")
print("=" * 60)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,221 @@
"""Debug script with detailed logging to find CLI test hang cause."""
from __future__ import annotations
import asyncio
import logging
import os
import sys
import time
sys.path.insert(0, "/home/tcmofashi/proj/l2d_backend/agentlite/src")
logging.basicConfig(
level=logging.DEBUG,
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
datefmt="%H:%M:%S",
)
logger = logging.getLogger("debug")
# SiliconFlow DeepSeek-V3 (known good function calling support)
SILICONFLOW_BASE_URL = "https://api.siliconflow.cn/v1"
SILICONFLOW_MODEL = "Pro/deepseek-ai/DeepSeek-V3.2"
SILICONFLOW_API_KEY = ""  # never hardcode a real key; set the SILICONFLOW_API_KEY env var instead
async def main():
from agentlite import Agent, OpenAIProvider
from agentlite.tools.shell.shell import Shell
from agentlite.message import Message
logger.info("=" * 60)
logger.info("CLI Debug Test with DeepSeek-V3 (SiliconFlow)")
logger.info("=" * 60)
api_key = os.environ.get("SILICONFLOW_API_KEY") or SILICONFLOW_API_KEY
if not api_key:
logger.error("SILICONFLOW_API_KEY not set")
return
logger.info(f"Using model: {SILICONFLOW_MODEL}")
provider = OpenAIProvider(
api_key=api_key,
base_url=SILICONFLOW_BASE_URL,
model=SILICONFLOW_MODEL,
timeout=30.0,
)
agent = Agent(
provider=provider,
system_prompt="You are a shell assistant. Execute commands when asked. Reply briefly.",
tools=[Shell(timeout=10)],
max_iterations=5,
)
start_time = time.time()
message = "Run 'echo test' and tell me the result."
logger.info("\n=== Starting Agent Run ===")
logger.info(f"Message: {message}")
logger.info(f"Max iterations: {agent.max_iterations}")
logger.info(f"Tools: {[t.name for t in agent.tools.tools]}")
agent._history.append(Message(role="user", content=message))
iterations = 0
final_response = None
while iterations < agent.max_iterations:
iterations += 1
elapsed = time.time() - start_time
logger.info(f"\n{'=' * 50}")
logger.info(f"ITERATION {iterations}/{agent.max_iterations} (elapsed: {elapsed:.1f}s)")
logger.info(f"{'=' * 50}")
# Step 1: Call Provider
logger.info(">>> Step 1: Calling provider.generate()...")
step_start = time.time()
try:
stream = await asyncio.wait_for(
provider.generate(
system_prompt=agent.system_prompt,
tools=agent.tools.tools,
history=agent._history,
),
timeout=60.0,
)
logger.info(f"<<< Provider returned stream in {time.time() - step_start:.2f}s")
except asyncio.TimeoutError:
logger.error("!!! Provider call TIMEOUT after 60s")
final_response = "ERROR: Provider timeout"
break
# Step 2: Collect stream parts
logger.info(">>> Step 2: Collecting stream parts...")
step_start = time.time()
from agentlite.message import TextPart, ToolCall, ContentPart
response_parts = []
tool_calls = []
chunk_count = 0
try:
async for part in stream:
chunk_count += 1
if chunk_count % 10 == 0:
logger.debug(f" Received chunk #{chunk_count}")
if isinstance(part, ToolCall):
tool_calls.append(part)
logger.info(
f" ToolCall received: {part.function.name if hasattr(part, 'function') else part}"
)
elif isinstance(part, ContentPart):
response_parts.append(part)
if isinstance(part, TextPart):
logger.debug(f" Text: {part.text[:50]}...")
logger.info(
f"<<< Stream finished in {time.time() - step_start:.2f}s, {chunk_count} chunks"
)
except asyncio.TimeoutError:
logger.error("!!! Stream reading TIMEOUT")
final_response = "ERROR: Stream timeout"
break
except Exception as e:
logger.error(f"!!! Stream error: {type(e).__name__}: {e}")
final_response = f"ERROR: Stream error - {e}"
break
# Extract text
response_text = ""
for part in response_parts:
if isinstance(part, TextPart):
response_text += part.text
logger.info(f"Response text ({len(response_text)} chars): {response_text[:100]}...")
logger.info(f"Tool calls: {len(tool_calls)}")
# Add to history
agent._history.append(
Message(
role="assistant",
content=response_parts,
tool_calls=tool_calls if tool_calls else None,
)
)
# Step 3: Check if done
if not tool_calls:
elapsed = time.time() - start_time
logger.info(f"\n=== Agent completed in {elapsed:.2f}s, {iterations} iterations ===")
final_response = response_text
break
# Step 4: Execute tool calls
logger.info(f"\n>>> Step 3: Executing {len(tool_calls)} tool calls...")
step_start = time.time()
for i, tc in enumerate(tool_calls):
func_name = tc.function.name if hasattr(tc, "function") else str(tc)
func_args = tc.function.arguments if hasattr(tc, "function") else ""
logger.info(f" Tool #{i + 1}: {func_name}")
logger.info(f" Args: {func_args[:200]}...")
try:
result = await asyncio.wait_for(
agent.tools.handle(tc),
timeout=30.0,
)
output = result.output if hasattr(result, "output") else str(result)
is_error = result.is_error if hasattr(result, "is_error") else False
logger.info(
f" Result: is_error={is_error}, output_len={len(output) if output else 0}"
)
output_preview = output[:100] if output else "None"
logger.info(f" Output preview: {output_preview}...")
except asyncio.TimeoutError:
logger.error(" !!! Tool execution TIMEOUT")
output = "Tool execution timed out"
is_error = True
except Exception as e:
logger.error(f" !!! Tool error: {type(e).__name__}: {e}")
output = str(e)
is_error = True
# Add tool result to history
agent._history.append(
Message(
role="tool",
content=output,
tool_call_id=tc.id if hasattr(tc, "id") else f"tc_{i}",
)
)
logger.info(f"<<< Tool execution finished in {time.time() - step_start:.2f}s")
# Check overall timeout
elapsed = time.time() - start_time
if elapsed > 90:
logger.warning(f"!!! Overall timeout approaching ({elapsed:.1f}s)")
final_response = f"Timeout after {iterations} iterations"
break
if iterations >= agent.max_iterations:
logger.warning(f"!!! Max iterations reached ({agent.max_iterations})")
final_response = f"Max iterations ({agent.max_iterations}) reached"
logger.info(f"\n{'=' * 60}")
logger.info("FINAL RESULT:")
logger.info(f"{'=' * 60}")
logger.info(f"{final_response}")
logger.info(f"Total iterations: {iterations}")
logger.info(f"Total time: {time.time() - start_time:.2f}s")
logger.info(f"History length: {len(agent._history)}")
if __name__ == "__main__":
asyncio.run(main())
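The per-tool `try/except` around `agent.tools.handle` in the loop above is the part worth reusing: timeouts and tool exceptions are converted into `(output, is_error)` pairs so the agent loop keeps going. A minimal standalone sketch (names are illustrative, not agentlite API):

```python
import asyncio

async def run_tool_with_timeout(handler, timeout=30.0):
    """Run one tool call; convert timeouts and exceptions into an
    (output, is_error) pair so the agent loop keeps going."""
    try:
        output = await asyncio.wait_for(handler(), timeout=timeout)
        return output, False
    except asyncio.TimeoutError:
        return "Tool execution timed out", True
    except Exception as e:
        return str(e), True

async def demo():
    async def ok():
        return "done"

    async def slow():
        await asyncio.sleep(5)

    async def broken():
        raise ValueError("bad args")

    return [
        await run_tool_with_timeout(ok),
        await run_tool_with_timeout(slow, timeout=0.05),
        await run_tool_with_timeout(broken),
    ]

print(asyncio.run(demo()))
```

Whatever the tool does, the loop only ever sees a string plus an error flag, which is what gets appended to the agent history as a `role="tool"` message.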



@@ -0,0 +1,344 @@
"""End-to-end test for complex CLI operations with real API.
This test simulates a realistic complex CLI task where an agent:
1. Explores project structure using shell commands
2. Searches for specific patterns using grep/glob
3. Reads relevant files
4. Creates analysis reports
Uses real SiliconFlow qwen3.5-397B API (requires SILICONFLOW_API_KEY env var).
"""
from __future__ import annotations
import asyncio
import os
import tempfile
from pathlib import Path
import pytest
from agentlite import Agent, OpenAIProvider
from agentlite.tools import (
ConfigurableToolset,
ToolSuiteConfig,
)
# =============================================================================
# Configuration from model_config.toml
# =============================================================================
SILICONFLOW_BASE_URL = "https://api.siliconflow.cn/v1"
SILICONFLOW_MODEL = "Qwen/Qwen3.5-397B-A17B"
def get_siliconflow_provider() -> OpenAIProvider | None:
"""Create OpenAIProvider for SiliconFlow API."""
api_key = os.environ.get("SILICONFLOW_API_KEY")
if not api_key:
return None
return OpenAIProvider(
api_key=api_key,
base_url=SILICONFLOW_BASE_URL,
model=SILICONFLOW_MODEL,
)
@pytest.fixture
def real_provider():
"""Create real SiliconFlow provider."""
provider = get_siliconflow_provider()
if provider is None:
pytest.skip("SILICONFLOW_API_KEY not set")
return provider
@pytest.fixture
def test_project():
"""Create a mock project structure for testing."""
with tempfile.TemporaryDirectory() as tmpdir:
project_dir = Path(tmpdir) / "test_project"
project_dir.mkdir()
# Create project structure
(project_dir / "src").mkdir()
(project_dir / "src" / "utils").mkdir()
(project_dir / "tests").mkdir()
(project_dir / "docs").mkdir()
# Create source files
(project_dir / "src" / "main.py").write_text('''"""Main module."""
from src.utils.helper import process_data
from src.utils.logger import setup_logger
def main():
"""Main entry point."""
logger = setup_logger()
data = [1, 2, 3, 4, 5]
result = process_data(data)
logger.info(f"Result: {result}")
return result
if __name__ == "__main__":
main()
''')
(project_dir / "src" / "__init__.py").write_text('"""Source package."""')
(project_dir / "src" / "utils" / "helper.py").write_text('''"""Helper utilities."""
def process_data(data: list) -> list:
"""Process input data."""
return [x * 2 for x in data]
def validate_data(data: list) -> bool:
"""Validate data format."""
return all(isinstance(x, (int, float)) for x in data)
''')
(project_dir / "src" / "utils" / "logger.py").write_text('''"""Logging utilities."""
import logging
def setup_logger(name: str = "app") -> logging.Logger:
"""Setup application logger."""
logger = logging.getLogger(name)
logger.setLevel(logging.INFO)
return logger
''')
(project_dir / "src" / "utils" / "__init__.py").write_text('"""Utils package."""')
# Create test files
(project_dir / "tests" / "test_helper.py").write_text('''"""Tests for helper module."""
from src.utils.helper import process_data, validate_data
def test_process_data():
assert process_data([1, 2, 3]) == [2, 4, 6]
def test_validate_data():
assert validate_data([1, 2, 3]) == True
assert validate_data(["a", "b"]) == False
''')
# Create documentation
(project_dir / "docs" / "README.md").write_text("""# Test Project
A sample project for testing CLI operations.
## Structure
- `src/` - Source code
- `tests/` - Unit tests
- `docs/` - Documentation
""")
(project_dir / "README.md").write_text("""# Test Project
Simple data processing project.
## Usage
```bash
python -m src.main
```
""")
yield project_dir
@pytest.mark.scenario
@pytest.mark.slow
class TestComplexCLITasks:
"""End-to-end tests with complex CLI operations."""
@pytest.mark.asyncio
async def test_explore_project_structure(self, real_provider, test_project):
"""Test exploring project structure using CLI tools.
Task: Use shell commands to explore the project structure,
then summarize what files exist.
"""
# Create toolset with Shell tool
toolset = ConfigurableToolset(
config=ToolSuiteConfig(
shell_tools=ToolSuiteConfig().shell_tools,
),
work_dir=str(test_project),
)
agent = Agent(
provider=real_provider,
tools=toolset.tools,
system_prompt=(
"你是一个项目分析助手。使用 Shell 工具执行命令来探索项目结构。"
"请使用 find、ls、tree 等命令来了解项目。"
),
max_iterations=5, # Limit iterations to prevent hanging
)
# Add overall timeout to prevent infinite hanging
try:
response = await asyncio.wait_for(
agent.run(
f"探索项目目录 {test_project} 的结构,列出所有文件和目录,并总结项目的组织方式。"
),
timeout=120.0, # 2 minute overall timeout
)
except asyncio.TimeoutError:
pytest.fail("Agent timed out after 120 seconds - possible infinite loop")
assert response, "Agent should return a response"
print(f"\n[项目结构探索结果]:\n{response}\n")
# Verify response mentions key files
response_lower = response.lower()
assert any(
word in response_lower for word in ["src", "tests", "main.py", "helper", "logger"]
), "Response should mention project files"
@pytest.mark.asyncio
async def test_search_and_analyze_code(self, real_provider, test_project):
"""Test searching for patterns and analyzing code.
Task: Use grep/glob to find specific patterns,
read the files, and create an analysis report.
"""
# Create toolset with all file tools
toolset = ConfigurableToolset(
config=ToolSuiteConfig(
file_tools=ToolSuiteConfig().file_tools,
shell_tools=ToolSuiteConfig().shell_tools,
),
work_dir=str(test_project),
)
agent = Agent(
provider=real_provider,
tools=toolset.tools,
system_prompt=(
"你是一个代码分析助手。使用 Glob、Grep、ReadFile 等工具来搜索和分析代码。"
"请使用 Shell 工具执行 grep、find 等命令。"
),
)
response = await agent.run(
f"在项目 {test_project} 中搜索所有包含 'def ' 的 Python 文件,"
f"列出找到的函数定义,并创建一个函数清单文件保存到 {test_project}/functions.txt。"
)
assert response, "Agent should return a response"
print(f"\n[代码搜索分析结果]:\n{response}\n")
# Check if analysis file was created
functions_file = test_project / "functions.txt"
if functions_file.exists():
content = functions_file.read_text()
print(f"\n[函数清单文件]:\n{content}\n")
assert len(content) > 0, "Functions file should not be empty"
@pytest.mark.asyncio
async def test_complex_multi_step_task(self, real_provider, test_project):
"""Test a complex multi-step CLI task.
Task:
1. Find all Python files using shell
2. Search for TODO comments using grep
3. Read files with TODOs
4. Create a summary report
"""
# Add some TODO comments
todo_file = test_project / "src" / "utils" / "todo_items.py"
todo_file.write_text('''"""Module with TODO items."""
# TODO: Implement error handling
def risky_operation(data):
"""Perform a risky operation."""
return data / 0 # This will fail
# TODO: Add caching mechanism
def expensive_computation(n):
"""Perform expensive computation."""
return sum(range(n))
# FIXME: Memory leak in this function
def process_large_file(path):
"""Process a large file."""
with open(path) as f:
return f.read()
''')
# Create comprehensive toolset
toolset = ConfigurableToolset(
config=ToolSuiteConfig(
file_tools=ToolSuiteConfig().file_tools,
shell_tools=ToolSuiteConfig().shell_tools,
),
work_dir=str(test_project),
)
agent = Agent(
provider=real_provider,
tools=toolset.tools,
system_prompt=(
"你是一个项目维护助手。"
"使用 Shell 工具执行命令(如 find、grep、ls 等)。"
"使用 ReadFile 读取文件内容。"
"使用 WriteFile 创建新文件。"
"请一步一步完成任务。"
),
)
response = await agent.run(
f"请完成以下任务:\n"
f"1. 使用 'find' 命令找出项目 {test_project} 中所有的 .py 文件\n"
f"2. 使用 'grep' 命令搜索所有包含 'TODO' 或 'FIXME' 的行\n"
f"3. 读取包含 TODO 的文件内容\n"
f"4. 创建一个 TODO 报告文件,保存到 {test_project}/todo_report.txt"
)
assert response, "Agent should return a response"
print(f"\n[复杂任务结果]:\n{response}\n")
# Verify report was created
report_file = test_project / "todo_report.txt"
if report_file.exists():
content = report_file.read_text()
print(f"\n[TODO 报告]:\n{content}\n")
@pytest.mark.asyncio
async def test_shell_pipes_and_chains(self, real_provider, test_project):
"""Test complex shell commands with pipes and chains.
Task: Use shell pipes to perform complex data processing.
"""
toolset = ConfigurableToolset(
config=ToolSuiteConfig(
shell_tools=ToolSuiteConfig().shell_tools,
),
work_dir=str(test_project),
)
agent = Agent(
provider=real_provider,
tools=toolset.tools,
system_prompt=(
"你是一个 Shell 命令专家。"
"使用复杂的 Shell 命令(管道、重定向、条件执行等)来完成任务。"
),
)
response = await agent.run(
f"在项目目录 {test_project} 中执行以下操作:\n"
f"1. 使用 'find . -name \"*.py\" | xargs wc -l' 统计所有 Python 文件的总行数\n"
f'2. 使用 \'grep -r "def " --include="*.py" | wc -l\' 统计函数定义数量\n'
f"3. 使用 'ls -la' 查看目录详情\n"
f"报告你的发现。"
)
assert response, "Agent should return a response"
print(f"\n[Shell 管道命令结果]:\n{response}\n")
# Verify response contains relevant information
response_lower = response.lower()
assert any(
word in response_lower for word in ["行", "line", "函数", "function", "文件", "file"]
), "Response should mention analysis results"
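Both real-API tests above bound `agent.run` with an outer `asyncio.wait_for` so a looping agent shows up as a clear failure instead of a hung session. That pattern can be factored out, roughly as follows (illustrative helper, not part of agentlite):

```python
import asyncio

async def run_with_deadline(coro, deadline_s: float):
    """Bound an agent run by wall-clock time so an agent stuck in a
    tool-call loop fails fast instead of hanging the test session."""
    try:
        return await asyncio.wait_for(coro, timeout=deadline_s)
    except asyncio.TimeoutError:
        raise TimeoutError(f"agent exceeded {deadline_s}s deadline") from None

async def quick():
    return "ok"

async def stuck():
    await asyncio.sleep(60)

async def demo():
    first = await run_with_deadline(quick(), 1.0)
    try:
        await run_with_deadline(stuck(), 0.05)
        second = "no timeout"
    except TimeoutError as e:
        second = str(e)
    return first, second

print(asyncio.run(demo()))
```

In a pytest context the `TimeoutError` would typically be turned into `pytest.fail(...)`, as the tests above do.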


@@ -0,0 +1,374 @@
"""End-to-end scenario test for file operations.
This test simulates a realistic scenario where an agent:
1. Reads a file
2. Explains its content
3. Creates a new file with analysis results
This is a meaningful e2e test that demonstrates the agent's ability to
orchestrate multiple tool calls in sequence.
"""
from __future__ import annotations
import os
import tempfile
from pathlib import Path
import pytest
from agentlite import Agent, tool
# =============================================================================
# File Operation Tools
# =============================================================================
@tool()
async def read_file(file_path: str) -> str:
"""Read the content of a file.
Args:
file_path: Path to the file to read.
Returns:
The content of the file as a string.
Raises:
FileNotFoundError: If the file does not exist.
"""
with open(file_path) as f:
return f.read()
@tool()
async def write_file(file_path: str, content: str) -> str:
"""Write content to a file, creating it if it doesn't exist.
Args:
file_path: Path to the file to write.
content: Content to write to the file.
Returns:
Success message confirming the file was written.
"""
# Create parent directories if they don't exist
Path(file_path).parent.mkdir(parents=True, exist_ok=True)
with open(file_path, "w") as f:
f.write(content)
return f"File successfully written to {file_path}"
@tool()
async def list_files(directory: str) -> str:
"""List all files in a directory.
Args:
directory: Path to the directory to list.
Returns:
A newline-separated list of file names in the directory.
"""
files = os.listdir(directory)
return "\n".join(files)
# =============================================================================
# E2E Test
# =============================================================================
@pytest.mark.scenario
class TestFileOperationsScenario:
"""End-to-end test for file read/write operations."""
@pytest.mark.asyncio
async def test_read_explain_and_write(self, mock_provider):
"""Test a complete workflow: read file -> explain -> write results."""
# Setup: Create a temporary file with content
with tempfile.TemporaryDirectory() as tmpdir:
# Create a source file to read
source_file = os.path.join(tmpdir, "source.txt")
source_content = """Project Overview
================
This is a sample project document for testing.
Features:
- Feature A: Does something useful
- Feature B: Does something else
- Feature C: The most important feature
Conclusion: This project demonstrates file operations.
"""
with open(source_file, "w") as f:
f.write(source_content)
# Configure mock provider responses
# The agent should:
# 1. Read the file
# 2. Summarize it
# 3. Write the summary to a new file
mock_provider.add_text_response(
f"I'll read the file at {source_file} and analyze it for you."
)
# Create agent with file tools
tools = [read_file, write_file, list_files]
agent = Agent(
provider=mock_provider,
tools=tools,
system_prompt="You are a helpful file analysis assistant.",
)
# Step 1: Agent reads and analyzes the file
mock_provider.clear_responses()
mock_provider.add_tool_call(
"read_file",
{"file_path": source_file},
source_content,
)
# Agent analyzes the content
mock_provider.add_text_response(
"I've read the file. It's a project overview document with 3 features. "
"Let me create a summary file."
)
# Step 2: Agent writes summary to a new file
summary_file = os.path.join(tmpdir, "summary.txt")
expected_summary = """Project Summary
================
This is a sample project with 3 main features:
- Feature A, - Feature B, - Feature C
The most important feature is Feature C.
"""
mock_provider.clear_responses()
mock_provider.add_tool_call(
"write_file",
{
"file_path": summary_file,
"content": expected_summary,
},
f"File successfully written to {summary_file}",
)
mock_provider.add_text_response(f"I've created a summary at {summary_file}")
# Execute the agent
response = await agent.run(
f"Please read {source_file}, analyze it, and create a summary file at {summary_file}"
)
# Verify the interaction
assert "summary" in response.lower()
# Verify the provider was called correctly
assert len(mock_provider.calls) >= 1
@pytest.mark.asyncio
async def test_list_files_scenario(self, mock_provider):
"""Test listing files in a directory."""
with tempfile.TemporaryDirectory() as tmpdir:
# Create some test files
for i in range(3):
with open(os.path.join(tmpdir, f"file{i}.txt"), "w") as f:
f.write(f"Content {i}")
# Configure agent to list files
mock_provider.add_tool_call(
"list_files",
{"directory": tmpdir},
"file0.txt\nfile1.txt\nfile2.txt",
)
mock_provider.add_text_response(
f"I found 3 files in {tmpdir}: file0.txt, file1.txt, file2.txt"
)
agent = Agent(
provider=mock_provider,
tools=[list_files],
system_prompt="You are a file system assistant.",
)
response = await agent.run(f"List all files in {tmpdir}")
assert "3 files" in response
@pytest.mark.asyncio
async def test_multi_step_file_workflow(self, mock_provider):
"""Test a complex multi-step file workflow.
Scenario:
1. List files in directory
2. Read each file
3. Create a combined report
"""
with tempfile.TemporaryDirectory() as tmpdir:
# Create test files
files_content = {
"report1.txt": "Sales increased by 20%",
"report2.txt": "Customer satisfaction at 85%",
"report3.txt": "Bug fixes: 15 resolved",
}
for name, content in files_content.items():
with open(os.path.join(tmpdir, name), "w") as f:
f.write(content)
# Configure agent responses for multi-step workflow
tools = [read_file, write_file, list_files]
# Step 1: List files
mock_provider.add_tool_call(
"list_files",
{"directory": tmpdir},
"report1.txt\nreport2.txt\nreport3.txt",
)
# Step 2: Read all files
mock_provider.add_tool_call(
"read_file",
{"file_path": os.path.join(tmpdir, "report1.txt")},
"Sales increased by 20%",
)
mock_provider.add_tool_call(
"read_file",
{"file_path": os.path.join(tmpdir, "report2.txt")},
"Customer satisfaction at 85%",
)
mock_provider.add_tool_call(
"read_file",
{"file_path": os.path.join(tmpdir, "report3.txt")},
"Bug fixes: 15 resolved",
)
# Step 3: Write combined report
combined_report = """Combined Report
================
1. Sales: Increased by 20%
2. Customer Satisfaction: 85%
3. Development: 15 bugs resolved
"""
mock_provider.add_tool_call(
"write_file",
{
"file_path": os.path.join(tmpdir, "combined_report.txt"),
"content": combined_report,
},
f"File successfully written to {os.path.join(tmpdir, 'combined_report.txt')}",
)
mock_provider.add_text_response(
"I've created a combined report summarizing all three reports."
)
agent = Agent(
provider=mock_provider,
tools=tools,
system_prompt="You are a report analyst assistant.",
)
response = await agent.run(
f"List all files in {tmpdir}, read them all, and create a combined report at combined_report.txt"
)
assert "combined report" in response.lower()
# =============================================================================
# Additional Tools for Extended Scenarios
# =============================================================================
@tool()
async def count_words(file_path: str) -> str:
"""Count the number of words in a file.
Args:
file_path: Path to the file to analyze.
Returns:
The word count as a string.
"""
with open(file_path) as f:
content = f.read()
word_count = len(content.split())
return f"Word count: {word_count}"
@tool()
async def append_to_file(file_path: str, content: str) -> str:
"""Append content to an existing file.
Args:
file_path: Path to the file to append to.
content: Content to append.
Returns:
Success message.
"""
with open(file_path, "a") as f:
f.write("\n" + content)
return f"Content appended to {file_path}"
@pytest.mark.scenario
class TestExtendedFileOperations:
"""Extended scenarios with more file operations."""
@pytest.mark.asyncio
async def test_read_count_and_append(self, mock_provider):
"""Test reading a file, counting words, and appending a note."""
with tempfile.TemporaryDirectory() as tmpdir:
source_file = os.path.join(tmpdir, "document.txt")
with open(source_file, "w") as f:
f.write("This is a test document with several words in it.")
tools = [read_file, write_file, count_words, append_to_file]
# Step 1: Read file
mock_provider.add_tool_call(
"read_file",
{"file_path": source_file},
"This is a test document with several words in it.",
)
# Step 2: Count words
mock_provider.add_tool_call(
"count_words",
{"file_path": source_file},
"Word count: 10",
)
# Step 3: Append analysis
mock_provider.add_tool_call(
"append_to_file",
{
"file_path": source_file,
"content": "\n\n[Analysis] This document contains 10 words.",
},
f"Content appended to {source_file}",
)
mock_provider.add_text_response(
"I've analyzed the document and appended the word count analysis."
)
agent = Agent(
provider=mock_provider,
tools=tools,
system_prompt="You are a document analysis assistant.",
)
response = await agent.run(
f"Read {source_file}, count its words, and append the word count as an analysis note"
)
assert "analyzed" in response.lower()
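The tests in this file drive the agent through a `mock_provider` fixture with `add_text_response`, `add_tool_call`, and `clear_responses`. The fixture's real implementation is not shown in this diff, but the core idea is a replayed response queue that also records calls; a hedged sketch:

```python
import asyncio
from collections import deque

class ScriptedProvider:
    """Minimal stand-in for the mock_provider fixture used above:
    responses are queued ahead of time and replayed in order, while
    every call is recorded for later assertions (illustrative only,
    not agentlite's actual class)."""

    def __init__(self):
        self._responses = deque()
        self.calls = []

    def add_text_response(self, text: str) -> None:
        self._responses.append(text)

    def clear_responses(self) -> None:
        self._responses.clear()

    async def complete(self, messages):
        # Record the call, then replay the next scripted response.
        self.calls.append(list(messages))
        return self._responses.popleft() if self._responses else ""

async def demo():
    provider = ScriptedProvider()
    provider.add_text_response("I'll read the file.")
    provider.add_text_response("Here is the summary.")
    first = await provider.complete([{"role": "user", "content": "read it"}])
    second = await provider.complete([{"role": "user", "content": "summarize"}])
    return first, second, len(provider.calls)

print(asyncio.run(demo()))
```

This is why the tests assert on `len(mock_provider.calls)`: the scripted provider doubles as a call recorder.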


@@ -0,0 +1,226 @@
"""End-to-end scenario test for file operations with real API.
This test simulates a realistic scenario where an agent:
1. Reads a file
2. Explains its content
3. Creates a new file with analysis results
Uses real SiliconFlow qwen3.5-397B API (requires SILICONFLOW_API_KEY env var).
"""
from __future__ import annotations
import os
import tempfile
from pathlib import Path
import pytest
from agentlite import Agent, OpenAIProvider, tool
# =============================================================================
# Configuration from model_config.toml
# =============================================================================
# SiliconFlow API configuration (matches qwen35_397b in model_config.toml)
SILICONFLOW_BASE_URL = "https://api.siliconflow.cn/v1"
SILICONFLOW_MODEL = "Qwen/Qwen3.5-397B-A17B"
def get_siliconflow_provider() -> OpenAIProvider | None:
"""Create OpenAIProvider for SiliconFlow API.
Returns None if SILICONFLOW_API_KEY is not set.
"""
api_key = os.environ.get("SILICONFLOW_API_KEY")
if not api_key:
return None
return OpenAIProvider(
api_key=api_key,
base_url=SILICONFLOW_BASE_URL,
model=SILICONFLOW_MODEL,
)
# =============================================================================
# File Operation Tools
# =============================================================================
@tool()
async def read_file(file_path: str) -> str:
"""Read the content of a file.
Args:
file_path: Path to the file to read.
Returns:
The content of the file as a string.
Raises:
FileNotFoundError: If the file does not exist.
"""
with open(file_path) as f:
return f.read()
@tool()
async def write_file(file_path: str, content: str) -> str:
"""Write content to a file, creating it if it doesn't exist.
Args:
file_path: Path to the file to write.
content: Content to write to the file.
Returns:
Success message confirming the file was written.
"""
# Create parent directories if they don't exist
Path(file_path).parent.mkdir(parents=True, exist_ok=True)
with open(file_path, "w") as f:
f.write(content)
return f"File successfully written to {file_path}"
@tool()
async def list_files(directory: str) -> str:
"""List all files in a directory.
Args:
directory: Path to the directory to list.
Returns:
A newline-separated list of file names in the directory.
"""
files = os.listdir(directory)
return "\n".join(files)
# =============================================================================
# Real API E2E Tests
# =============================================================================
@pytest.fixture
def real_provider():
"""Create a real SiliconFlow provider.
Skip tests if SILICONFLOW_API_KEY is not set.
"""
provider = get_siliconflow_provider()
if provider is None:
pytest.skip("SILICONFLOW_API_KEY not set, skipping real API tests")
return provider
@pytest.mark.scenario
@pytest.mark.expensive
class TestFileOperationsWithRealAPI:
"""End-to-end tests with real SiliconFlow API."""
@pytest.mark.asyncio
async def test_read_and_summarize(self, real_provider):
"""Test reading a file and creating a summary with real API."""
with tempfile.TemporaryDirectory() as tmpdir:
# Create a source file with meaningful content
source_file = os.path.join(tmpdir, "source.txt")
source_content = """AgentLite 项目概述
================
AgentLite 是一个轻量级的 Agent 组件库,主要特点:
- 异步优先设计
- OpenAI 兼容 API
- 工具系统 (支持 MCP)
- 流式响应支持
使用示例:
```python
from agentlite import Agent, OpenAIProvider
provider = OpenAIProvider(api_key="...", model="gpt-4")
agent = Agent(provider=provider)
response = await agent.run("Hello!")
```
"""
with open(source_file, "w") as f:
f.write(source_content)
# Create agent with file tools
tools = [read_file, write_file, list_files]
agent = Agent(
provider=real_provider,
tools=tools,
system_prompt="你是一个文件分析助手。请使用工具来完成任务。",
)
# Run the agent to read, analyze, and write summary
output_file = os.path.join(tmpdir, "summary.txt")
response = await agent.run(
f"请读取 {source_file} 文件,分析其内容,并创建一个摘要文件保存到 {output_file}"
)
# Verify the agent responded
assert response, "Agent should return a response"
print(f"\n[Agent 响应]:\n{response}\n")
# Verify the output file was created
if os.path.exists(output_file):
with open(output_file) as f:
output_content = f.read()
print(f"\n[输出文件内容]:\n{output_content}\n")
assert len(output_content) > 0, "Output file should not be empty"
@pytest.mark.asyncio
async def test_list_files_and_combine(self, real_provider):
"""Test listing files, reading them, and creating combined report."""
with tempfile.TemporaryDirectory() as tmpdir:
# Create multiple files
files = {
"sales.txt": "销售额增长了 20%",
"users.txt": "用户满意度达到 85%",
"bugs.txt": "修复了 15 个问题",
}
for name, content in files.items():
with open(os.path.join(tmpdir, name), "w") as f:
f.write(content)
# Create agent with file tools
tools = [read_file, write_file, list_files]
agent = Agent(
provider=real_provider,
tools=tools,
system_prompt="你是一个数据分析助手。请使用工具来完成任务。",
)
# Run the agent
report_file = os.path.join(tmpdir, "report.txt")
response = await agent.run(
f"列出 {tmpdir} 目录中的所有文件,读取每个文件的内容,然后创建一份综合报告保存到 {report_file}"
)
# Verify the agent responded
assert response, "Agent should return a response"
print(f"\n[Agent 响应]:\n{response}\n")
# The agent should have created the report file
if os.path.exists(report_file):
with open(report_file) as f:
report_content = f.read()
print(f"\n[报告文件内容]:\n{report_content}\n")
@pytest.mark.asyncio
async def test_simple_conversation(self, real_provider):
"""Test basic conversation without tools."""
agent = Agent(
provider=real_provider,
system_prompt="你是一个有帮助的助手。请用中文回答。",
)
response = await agent.run("你好!请简单介绍一下你自己。")
assert response, "Agent should return a response"
print(f"\n[Agent 自我介绍]:\n{response}\n")
assert len(response) > 10, "Response should be meaningful"
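Of the three tools defined in this file, `write_file` carries the one subtle detail: parent directories are created before writing, so the agent can target nested paths that do not exist yet. Isolated as a sketch (standalone mirror of the tool above, names illustrative):

```python
import os
import tempfile
from pathlib import Path

def safe_write(file_path: str, content: str) -> str:
    """Create missing parent directories, then write the file, so
    agent-chosen nested paths do not raise FileNotFoundError."""
    path = Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")
    return f"File successfully written to {file_path}"

with tempfile.TemporaryDirectory() as tmpdir:
    # Two levels of not-yet-existing directories.
    target = os.path.join(tmpdir, "reports", "2024", "summary.txt")
    message = safe_write(target, "hello")
    written = Path(target).read_text(encoding="utf-8")
print(message.startswith("File successfully written"), written)
```

Without the `mkdir(parents=True, exist_ok=True)` step, the first write into a fresh subdirectory would fail and the agent would have to recover via a shell `mkdir` call.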


@@ -0,0 +1,521 @@
"""Tool-focused tests for document extraction and knowledge-graph tools.
This module tests tool functionality built on the test data set, covering:
1. Document reading and parsing tools
2. Entity extraction tools
3. Knowledge-graph query tools
4. Reasoning tools
"""
from __future__ import annotations
import json
from pathlib import Path
from typing import Any
import pytest
import yaml
from agentlite import Agent, tool
def tool_output(result: Any) -> Any:
"""Accept both legacy plain return values and ToolResult objects."""
return getattr(result, "output", result)
# =============================================================================
# Data-loading fixtures
# =============================================================================
@pytest.fixture
def data_dir() -> Path:
"""Return the test data directory path."""
return Path(__file__).parent.parent / "data"
@pytest.fixture
def sample_article(data_dir: Path) -> str:
"""Load the sample article."""
return (data_dir / "documents" / "sample_article.md").read_text(encoding="utf-8")
@pytest.fixture
def technical_spec(data_dir: Path) -> str:
"""Load the technical specification document."""
return (data_dir / "documents" / "technical_spec.md").read_text(encoding="utf-8")
@pytest.fixture
def meeting_notes(data_dir: Path) -> str:
"""Load the meeting notes."""
return (data_dir / "documents" / "meeting_notes.txt").read_text(encoding="utf-8")
@pytest.fixture
def knowledge_graph_entities(data_dir: Path) -> dict[str, Any]:
"""Load knowledge-graph entity data."""
with open(data_dir / "knowledge_base" / "entities.json") as f:
return json.load(f)
@pytest.fixture
def knowledge_graph_relations(data_dir: Path) -> dict[str, Any]:
"""Load knowledge-graph relation data."""
with open(data_dir / "knowledge_base" / "relations.json") as f:
return json.load(f)
@pytest.fixture
def graph_queries(data_dir: Path) -> list[dict[str, Any]]:
"""Load graph-query test cases."""
with open(data_dir / "knowledge_base" / "graph_queries.yaml") as f:
data = yaml.safe_load(f)
return data.get("queries", [])
# =============================================================================
# Knowledge-graph tool implementation
# =============================================================================
class KnowledgeGraph:
"""In-memory knowledge-graph store."""
def __init__(self, entities: list[dict], relations: list[dict]):
self._entities = {e["id"]: e for e in entities}
self._relations = relations
self._index_by_type: dict[str, list[str]] = {}
self._index_by_name: dict[str, str] = {}
# Build indexes
for entity_id, entity in self._entities.items():
entity_type = entity.get("type", "Unknown")
if entity_type not in self._index_by_type:
self._index_by_type[entity_type] = []
self._index_by_type[entity_type].append(entity_id)
name = entity.get("name", "")
if name:
self._index_by_name[name] = entity_id
def get_entity(self, entity_id: str) -> dict | None:
"""Get an entity by id."""
return self._entities.get(entity_id)
def get_entity_by_name(self, name: str) -> dict | None:
"""Get an entity by name."""
entity_id = self._index_by_name.get(name)
if entity_id:
return self._entities.get(entity_id)
return None
def get_entities_by_type(self, entity_type: str) -> list[dict]:
"""Get all entities of a given type."""
entity_ids = self._index_by_type.get(entity_type, [])
return [self._entities[eid] for eid in entity_ids if eid in self._entities]
def get_relations(
self, from_id: str | None = None, to_id: str | None = None, relation_type: str | None = None
) -> list[dict]:
"""Get relations, optionally filtered by endpoints and type."""
results = []
for rel in self._relations:
if from_id and rel.get("from") != from_id:
continue
if to_id and rel.get("to") != to_id:
continue
if relation_type and rel.get("type") != relation_type:
continue
results.append(rel)
return results
def get_neighbors(self, entity_id: str, relation_type: str | None = None) -> list[dict]:
"""Get neighboring entities."""
relations = self.get_relations(from_id=entity_id, relation_type=relation_type)
neighbors = []
for rel in relations:
target_id = rel.get("to")
if target_id and target_id in self._entities:
neighbors.append({"entity": self._entities[target_id], "relation": rel})
return neighbors
def find_path(self, start_id: str, end_id: str, max_depth: int = 3) -> list[list[str]] | None:
"""Find paths between two entities."""
if start_id == end_id:
return [[start_id]]
if max_depth <= 0:
return None
# BFS
from collections import deque
queue = deque([(start_id, [start_id])])
visited = {start_id}
all_paths = []
while queue:
current_id, path = queue.popleft()
if len(path) > max_depth + 1:
continue
relations = self.get_relations(from_id=current_id)
for rel in relations:
next_id = rel.get("to")
if not next_id:
continue
new_path = path + [next_id]
if next_id == end_id:
all_paths.append(new_path)
elif next_id not in visited and len(new_path) <= max_depth:
visited.add(next_id)
queue.append((next_id, new_path))
return all_paths if all_paths else None
@pytest.fixture
def knowledge_graph(knowledge_graph_entities, knowledge_graph_relations) -> KnowledgeGraph:
"""Create a knowledge-graph instance."""
return KnowledgeGraph(
entities=knowledge_graph_entities.get("entities", []),
relations=knowledge_graph_relations.get("relations", []),
)
# =============================================================================
# Tool definitions
# =============================================================================
@tool()
async def read_document(file_path: str) -> str:
"""Read a document's content.
Args:
file_path: Path to the document.
Returns:
The document content.
"""
path = Path(file_path)
if not path.exists():
return f"Error: File not found: {file_path}"
try:
return path.read_text(encoding="utf-8")
except Exception as e:
return f"Error reading file: {e}"
@tool()
async def extract_entities(text: str) -> str:
"""Extract entities from text.
Args:
text: Input text.
Returns:
A JSON-formatted list of entities.
"""
# Simplified entity extraction - a real implementation would use an NLP model
import re
entities = []
# Extract person names (naive Chinese-name matching)
person_pattern = r"[\u4e00-\u9fa5]{2,4}"
potential_names = re.findall(person_pattern, text)
common_names = ["张三", "李四", "王五", "赵六", "李飞飞", "吴恩达", "Yann LeCun"]
for name in potential_names:
if name in common_names or len(name) == 3:
entities.append({"type": "Person", "name": name})
# Extract company/organization names
org_pattern = r"(TechCorp|OpenAI|GitHub|Google)"
orgs = re.findall(org_pattern, text)
for org in set(orgs):
entities.append({"type": "Organization", "name": org})
# Extract technology terms
tech_pattern = r"(Python|TensorFlow|PyTorch|GPT-4|AI|LLM)"
techs = re.findall(tech_pattern, text)
for tech in set(techs):
entities.append({"type": "Technology", "name": tech})
return json.dumps(entities, ensure_ascii=False)
@tool()
async def query_knowledge_graph(query_type: str, params: str) -> str:
"""Query the knowledge graph.
Args:
query_type: Query type (person_relations, company_employees, technology_users, etc.)
params: Query parameters as a JSON string
Returns:
Query results
"""
# Uses a global kg instance here; in practice it should be injected at Agent initialization
try:
params_dict = json.loads(params)
except json.JSONDecodeError:
return json.dumps({"error": "Invalid JSON params"})
# Simplified implementation - a real one would query the knowledge graph
result = {"query_type": query_type, "params": params_dict, "results": []}
return json.dumps(result, ensure_ascii=False)
@tool()
async def reason_about_path(start_entity: str, end_entity: str) -> str:
"""Reason about the relation path between two entities.
Args:
start_entity: Name of the start entity
end_entity: Name of the target entity
Returns:
Reasoning result
"""
return json.dumps(
{
"start": start_entity,
"end": end_entity,
"reasoning": f"分析 {start_entity}{end_entity} 的关系链...",
"path": [],
},
ensure_ascii=False,
)
# =============================================================================
# Test cases
# =============================================================================
@pytest.mark.tools
class TestDocumentTools:
"""Document tool tests."""
@pytest.mark.asyncio
async def test_read_document(self, data_dir: Path, sample_article: str):
"""Test the document-reading tool."""
result = tool_output(await read_document(str(data_dir / "documents" / "sample_article.md")))
assert "人工智能" in result
assert "GitHub Copilot" in result
assert "张三" in result
@pytest.mark.asyncio
async def test_read_document_not_found(self):
"""Test reading a nonexistent document."""
result = tool_output(await read_document("/nonexistent/file.md"))
assert "Error" in result
assert "not found" in result.lower()
@pytest.mark.asyncio
async def test_extract_entities_from_article(self, sample_article: str):
"""Test extracting entities from the article."""
result = tool_output(await extract_entities(sample_article))
entities = json.loads(result)
# Verify entities were extracted
assert len(entities) > 0
# Verify the extracted entity names
entity_names = [e["name"] for e in entities]
assert "张三" in entity_names
assert "TechCorp" in entity_names or "OpenAI" in entity_names
@pytest.mark.tools
class TestKnowledgeGraphTools:
"""Knowledge-graph tool tests."""
def test_knowledge_graph_initialization(self, knowledge_graph: KnowledgeGraph):
"""Test knowledge-graph initialization."""
# Verify the person entity exists
entity = knowledge_graph.get_entity_by_name("张三")
assert entity is not None
assert entity["type"] == "Person"
# Verify the company entity
company = knowledge_graph.get_entity_by_name("TechCorp")
assert company is not None
assert company["type"] == "Company"
def test_get_entities_by_type(self, knowledge_graph: KnowledgeGraph):
"""Test getting entities by type."""
persons = knowledge_graph.get_entities_by_type("Person")
assert len(persons) >= 3 # 张三、李四、李飞飞
technologies = knowledge_graph.get_entities_by_type("Technology")
assert len(technologies) >= 2 # Python、OpenAI API
def test_get_relations(self, knowledge_graph: KnowledgeGraph):
"""测试获取关系."""
# 获取张三的所有关系
zhangsan = knowledge_graph.get_entity_by_name("张三")
assert zhangsan is not None
relations = knowledge_graph.get_relations(from_id=zhangsan["id"])
assert len(relations) >= 2 # works_for, uses
# Verify the relation types
relation_types = [r["type"] for r in relations]
assert "works_for" in relation_types
assert "uses" in relation_types
def test_get_neighbors(self, knowledge_graph: KnowledgeGraph):
"""测试获取邻居节点."""
zhangsan = knowledge_graph.get_entity_by_name("张三")
assert zhangsan is not None
neighbors = knowledge_graph.get_neighbors(zhangsan["id"])
assert len(neighbors) >= 2
# Verify the neighbors include TechCorp
neighbor_names = [n["entity"]["name"] for n in neighbors]
assert "TechCorp" in neighbor_names
def test_find_path(self, knowledge_graph: KnowledgeGraph):
"""测试查找路径."""
zhangsan = knowledge_graph.get_entity_by_name("张三")
techcorp = knowledge_graph.get_entity_by_name("TechCorp")
assert zhangsan is not None
assert techcorp is not None
paths = knowledge_graph.find_path(zhangsan["id"], techcorp["id"])
assert paths is not None
assert len(paths) > 0
# Verify the path length
first_path = paths[0]
assert len(first_path) == 2 # 张三 -> TechCorp
@pytest.mark.asyncio
async def test_query_knowledge_graph(self):
"""测试知识图谱查询工具."""
params = json.dumps({"entity_name": "张三"})
result = tool_output(await query_knowledge_graph("person_relations", params))
data = json.loads(result)
assert data["query_type"] == "person_relations"
assert "params" in data
@pytest.mark.asyncio
async def test_reason_about_path(self):
"""测试路径推理工具."""
result = tool_output(await reason_about_path("张三", "OpenAI"))
data = json.loads(result)
assert data["start"] == "张三"
assert data["end"] == "OpenAI"
assert "reasoning" in data
@pytest.mark.tools
class TestDataIntegrity:
"""数据完整性测试."""
def test_entities_json_valid(self, knowledge_graph_entities: dict):
"""验证实体 JSON 格式正确."""
assert "entities" in knowledge_graph_entities
entities = knowledge_graph_entities["entities"]
assert len(entities) > 0
# Verify every entity has the required fields
for entity in entities:
assert "id" in entity
assert "type" in entity
assert "name" in entity
def test_relations_json_valid(
self, knowledge_graph_relations: dict, knowledge_graph_entities: dict
):
"""验证关系 JSON 格式正确且引用的实体存在."""
assert "relations" in knowledge_graph_relations
relations = knowledge_graph_relations["relations"]
entity_ids = {e["id"] for e in knowledge_graph_entities["entities"]}
for relation in relations:
assert "from" in relation
assert "to" in relation
assert "type" in relation
# Verify the referenced entities exist
assert relation["from"] in entity_ids, f"Entity {relation['from']} not found"
assert relation["to"] in entity_ids, f"Entity {relation['to']} not found"
def test_graph_queries_yaml_valid(self, graph_queries: list):
"""验证查询 YAML 格式正确."""
assert len(graph_queries) > 0
for query in graph_queries:
assert "id" in query
assert "description" in query
assert "query" in query
assert "expected_results" in query
def test_documents_exist(self, data_dir: Path):
"""验证测试文档存在且非空."""
docs_dir = data_dir / "documents"
sample_article = docs_dir / "sample_article.md"
assert sample_article.exists()
assert sample_article.stat().st_size > 0
tech_spec = docs_dir / "technical_spec.md"
assert tech_spec.exists()
assert tech_spec.stat().st_size > 0
meeting_notes = docs_dir / "meeting_notes.txt"
assert meeting_notes.exists()
assert meeting_notes.stat().st_size > 0
@pytest.mark.tools
class TestAgentWithTools:
"""Agent 集成工具测试."""
@pytest.mark.asyncio
async def test_agent_with_document_tools(self, mock_provider, data_dir: Path):
"""测试带有文档工具的 Agent."""
mock_provider.add_text_response("我已经读取了文档")
agent = Agent(
provider=mock_provider,
tools=[read_document],
system_prompt="你是一个文档助手,可以读取和分析文档。",
)
response = await agent.run(f"请读取文档 {data_dir / 'documents' / 'sample_article.md'}")
assert "文档" in response or "读取" in response
@pytest.mark.asyncio
async def test_agent_with_kg_tools(self, mock_provider):
"""测试带有知识图谱工具的 Agent."""
mock_provider.add_text_response("张三在 TechCorp 工作")
agent = Agent(
provider=mock_provider,
tools=[query_knowledge_graph, reason_about_path],
system_prompt="你是一个知识图谱助手,可以查询实体关系。",
)
response = await agent.run("张三在哪里工作?")
assert response is not None
assert len(response) > 0
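The `KnowledgeGraph` implementation exercised above lives elsewhere in this PR, so its internals are not visible here. As a rough, stdlib-only sketch of the behaviour these tests assume (the `TinyGraph` class and its structure are illustrative, not the project's actual implementation), `find_path` can be a breadth-first search over the directed relation list:

```python
from collections import deque


class TinyGraph:
    """Minimal stand-in for the KnowledgeGraph queried in the tests above."""

    def __init__(self, entities, relations):
        self.entities = {e["id"]: e for e in entities}
        self.relations = relations

    def get_entity_by_name(self, name):
        # Linear scan; a real implementation would keep a name index.
        return next((e for e in self.entities.values() if e["name"] == name), None)

    def find_path(self, start_id, end_id):
        # Breadth-first search over directed relations; returns the first
        # (shortest) path as a list of entity ids, or None if unreachable.
        queue = deque([[start_id]])
        seen = {start_id}
        while queue:
            path = queue.popleft()
            if path[-1] == end_id:
                return path
            for rel in self.relations:
                if rel["from"] == path[-1] and rel["to"] not in seen:
                    seen.add(rel["to"])
                    queue.append(path + [rel["to"]])
        return None


entities = [
    {"id": "e1", "name": "张三", "type": "Person"},
    {"id": "e2", "name": "TechCorp", "type": "Company"},
]
relations = [{"from": "e1", "to": "e2", "type": "works_for"}]
graph = TinyGraph(entities, relations)
path = graph.find_path("e1", "e2")
```

A direct `works_for` edge yields a two-node path, matching the `len(first_path) == 2` expectation in `test_find_path`.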

@@ -0,0 +1,330 @@
"""Unit tests for configuration models.
This module tests all Pydantic configuration models including
ProviderConfig, ModelConfig, ToolConfig, and AgentConfig.
"""
from __future__ import annotations
import pytest
from pydantic import ValidationError
from agentlite import ProviderConfig, ModelConfig, AgentConfig
class TestProviderConfig:
"""Tests for ProviderConfig."""
def test_provider_config_valid(self):
"""Test valid ProviderConfig creation."""
config = ProviderConfig(
type="openai",
base_url="https://api.openai.com/v1",
api_key="sk-test123",
)
assert config.type == "openai"
assert config.base_url == "https://api.openai.com/v1"
assert config.api_key.get_secret_value() == "sk-test123"
def test_provider_config_default_type(self):
"""Test ProviderConfig with default type."""
config = ProviderConfig(
base_url="https://api.openai.com/v1",
api_key="sk-test",
)
assert config.type == "openai"
def test_provider_config_default_url(self):
"""Test ProviderConfig with default base_url."""
config = ProviderConfig(
api_key="sk-test",
)
assert config.base_url == "https://api.openai.com/v1"
def test_provider_config_invalid_url_http(self):
"""Test ProviderConfig with invalid URL scheme."""
with pytest.raises(ValidationError) as exc_info:
ProviderConfig(
type="openai",
base_url="ftp://invalid.com",
api_key="sk-test",
)
assert "base_url must start with http:// or https://" in str(exc_info.value)
def test_provider_config_invalid_url_no_scheme(self):
"""Test ProviderConfig with URL without scheme."""
with pytest.raises(ValidationError):
ProviderConfig(
base_url="api.openai.com/v1",
api_key="sk-test",
)
def test_provider_config_custom_headers(self):
"""Test ProviderConfig with custom headers."""
config = ProviderConfig(
api_key="sk-test",
headers={"X-Custom": "value"},
)
assert config.headers == {"X-Custom": "value"}
def test_provider_config_default_headers(self):
"""Test ProviderConfig default headers."""
config = ProviderConfig(api_key="sk-test")
assert config.headers == {}
def test_provider_config_timeout(self):
"""Test ProviderConfig timeout."""
config = ProviderConfig(
api_key="sk-test",
timeout=30.0,
)
assert config.timeout == 30.0
def test_provider_config_default_timeout(self):
"""Test ProviderConfig default timeout."""
config = ProviderConfig(api_key="sk-test")
assert config.timeout == 60.0
def test_provider_config_api_key_is_secret_str(self):
"""Test that api_key is stored as SecretStr."""
config = ProviderConfig(api_key="sk-secret")
# SecretStr should not expose value in repr/str
assert "sk-secret" not in str(config.api_key)
# But can get value explicitly
assert config.api_key.get_secret_value() == "sk-secret"
class TestModelConfig:
"""Tests for ModelConfig."""
def test_model_config_valid(self):
"""Test valid ModelConfig creation."""
config = ModelConfig(
provider="openai",
model="gpt-4",
)
assert config.provider == "openai"
assert config.model == "gpt-4"
def test_model_config_with_all_fields(self):
"""Test ModelConfig with all optional fields."""
config = ModelConfig(
provider="openai",
model="gpt-4",
max_tokens=1000,
temperature=0.7,
top_p=0.9,
capabilities={"streaming", "tool_calling"},
)
assert config.max_tokens == 1000
assert config.temperature == 0.7
assert config.top_p == 0.9
assert config.capabilities == {"streaming", "tool_calling"}
def test_model_config_empty_provider(self):
"""Test ModelConfig with empty provider."""
with pytest.raises(ValidationError) as exc_info:
ModelConfig(
provider="",
model="gpt-4",
)
assert "provider must not be empty" in str(exc_info.value)
def test_model_config_temperature_bounds(self):
"""Test ModelConfig temperature validation bounds."""
# Valid: 0.0
config = ModelConfig(provider="openai", model="gpt-4", temperature=0.0)
assert config.temperature == 0.0
# Valid: 2.0
config = ModelConfig(provider="openai", model="gpt-4", temperature=2.0)
assert config.temperature == 2.0
# Invalid: < 0
with pytest.raises(ValidationError):
ModelConfig(provider="openai", model="gpt-4", temperature=-0.1)
# Invalid: > 2
with pytest.raises(ValidationError):
ModelConfig(provider="openai", model="gpt-4", temperature=2.1)
def test_model_config_top_p_bounds(self):
"""Test ModelConfig top_p validation bounds."""
# Valid: 0.0
config = ModelConfig(provider="openai", model="gpt-4", top_p=0.0)
assert config.top_p == 0.0
# Valid: 1.0
config = ModelConfig(provider="openai", model="gpt-4", top_p=1.0)
assert config.top_p == 1.0
# Invalid: < 0
with pytest.raises(ValidationError):
ModelConfig(provider="openai", model="gpt-4", top_p=-0.1)
# Invalid: > 1
with pytest.raises(ValidationError):
ModelConfig(provider="openai", model="gpt-4", top_p=1.1)
def test_model_config_default_capabilities(self):
"""Test ModelConfig default capabilities."""
config = ModelConfig(provider="openai", model="gpt-4")
assert config.capabilities == set()
class TestAgentConfig:
"""Tests for AgentConfig."""
def test_agent_config_minimal(self):
"""Test AgentConfig with minimal required fields."""
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"default": ModelConfig(provider="openai", model="gpt-4")},
)
assert config.name == "agent"
assert config.system_prompt == "You are a helpful assistant."
assert config.default_model == "default"
def test_agent_config_full(self):
"""Test AgentConfig with all fields."""
config = AgentConfig(
name="my_agent",
system_prompt="Custom system prompt",
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"gpt4": ModelConfig(provider="openai", model="gpt-4")},
default_model="gpt4",
max_history=50,
)
assert config.name == "my_agent"
assert config.system_prompt == "Custom system prompt"
assert config.default_model == "gpt4"
assert config.max_history == 50
def test_agent_config_missing_default_model(self):
"""Test AgentConfig with non-existent default_model."""
with pytest.raises(ValidationError) as exc_info:
AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"gpt4": ModelConfig(provider="openai", model="gpt-4")},
default_model="nonexistent",
)
assert "not found in models" in str(exc_info.value)
def test_agent_config_unknown_provider(self):
"""Test AgentConfig with model referencing unknown provider."""
with pytest.raises(ValidationError) as exc_info:
AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"default": ModelConfig(provider="unknown", model="gpt-4")},
)
assert "unknown provider" in str(exc_info.value)
def test_agent_config_get_provider_config(self):
"""Test get_provider_config method."""
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"gpt4": ModelConfig(provider="openai", model="gpt-4")},
default_model="gpt4",
)
provider_config = config.get_provider_config("gpt4")
assert provider_config.api_key.get_secret_value() == "sk-test"
def test_agent_config_get_provider_config_default(self):
"""Test get_provider_config with default model."""
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"gpt4": ModelConfig(provider="openai", model="gpt-4")},
default_model="gpt4",
)
provider_config = config.get_provider_config()
assert provider_config.api_key.get_secret_value() == "sk-test"
def test_agent_config_get_provider_config_not_found(self):
"""Test get_provider_config with non-existent model."""
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"default": ModelConfig(provider="openai", model="gpt-4")},
)
with pytest.raises(ValueError, match="Model 'nonexistent' not found"):
config.get_provider_config("nonexistent")
def test_agent_config_get_model_config(self):
"""Test get_model_config method."""
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"gpt4": ModelConfig(provider="openai", model="gpt-4")},
default_model="gpt4",
)
model_config = config.get_model_config("gpt4")
assert model_config.model == "gpt-4"
def test_agent_config_get_model_config_default(self):
"""Test get_model_config with default."""
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"gpt4": ModelConfig(provider="openai", model="gpt-4")},
default_model="gpt4",
)
model_config = config.get_model_config()
assert model_config.model == "gpt-4"
def test_agent_config_get_model_config_not_found(self):
"""Test get_model_config with non-existent model."""
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"default": ModelConfig(provider="openai", model="gpt-4")},
)
with pytest.raises(ValueError, match="Model 'nonexistent' not found"):
config.get_model_config("nonexistent")
def test_agent_config_multiple_providers(self):
"""Test AgentConfig with multiple providers."""
config = AgentConfig(
providers={
"openai": ProviderConfig(api_key="sk-openai"),
"anthropic": ProviderConfig(
type="anthropic",
base_url="https://api.anthropic.com/v1",
api_key="sk-anthropic",
),
},
models={
"default": ModelConfig(provider="openai", model="gpt-4"),
"claude": ModelConfig(provider="anthropic", model="claude-3"),
},
)
assert len(config.providers) == 2
assert len(config.models) == 2
def test_agent_config_max_history_validation(self):
"""Test max_history validation."""
# Valid: min=1
config = AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"default": ModelConfig(provider="openai", model="gpt-4")},
max_history=1,
)
assert config.max_history == 1
# Invalid: 0
with pytest.raises(ValidationError):
AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"default": ModelConfig(provider="openai", model="gpt-4")},
max_history=0,
)
# Invalid: negative
with pytest.raises(ValidationError):
AgentConfig(
providers={"openai": ProviderConfig(api_key="sk-test")},
models={"default": ModelConfig(provider="openai", model="gpt-4")},
max_history=-1,
)

@@ -0,0 +1,297 @@
"""Unit tests for message types.
This module tests all message-related types including ContentPart,
Message, ToolCall, and their various subclasses.
"""
from __future__ import annotations
import pytest
from agentlite import (
ContentPart,
Message,
TextPart,
ImageURLPart,
AudioURLPart,
ToolCall,
ToolCallPart,
)
class TestContentPart:
"""Tests for ContentPart base class and registry."""
def test_content_part_registry_auto_registers_subclasses(self):
"""Test that ContentPart subclasses are auto-registered."""
# All defined subclasses should be in registry
assert "text" in ContentPart._ContentPart__content_part_registry
assert "image_url" in ContentPart._ContentPart__content_part_registry
assert "audio_url" in ContentPart._ContentPart__content_part_registry
def test_text_part_creation(self):
"""Test basic TextPart creation."""
part = TextPart(text="Hello, world!")
assert part.type == "text"
assert part.text == "Hello, world!"
def test_text_part_model_dump(self):
"""Test TextPart serialization."""
part = TextPart(text="Hello")
dumped = part.model_dump()
assert dumped == {"type": "text", "text": "Hello"}
def test_text_part_merge_success(self):
"""Test successful text merge during streaming."""
part1 = TextPart(text="Hello ")
part2 = TextPart(text="world!")
result = part1.merge_in_place(part2)
assert result is True
assert part1.text == "Hello world!"
def test_text_part_merge_failure(self):
"""Test merge failure with incompatible types."""
text_part = TextPart(text="Hello")
# Try to merge with non-TextPart
result = text_part.merge_in_place("not a part")
assert result is False
assert text_part.text == "Hello" # Unchanged
class TestImageURLPart:
"""Tests for ImageURLPart."""
def test_image_url_part_creation(self):
"""Test ImageURLPart creation."""
part = ImageURLPart(image_url=ImageURLPart.ImageURL(url="https://example.com/image.png"))
assert part.type == "image_url"
assert part.image_url.url == "https://example.com/image.png"
def test_image_url_part_with_detail(self):
"""Test ImageURLPart with detail parameter."""
part = ImageURLPart(
image_url=ImageURLPart.ImageURL(url="https://example.com/image.png", detail="high")
)
assert part.image_url.detail == "high"
def test_image_url_part_default_detail(self):
"""Test ImageURLPart default detail is None."""
part = ImageURLPart(image_url=ImageURLPart.ImageURL(url="https://example.com/image.png"))
assert part.image_url.detail is None
class TestAudioURLPart:
"""Tests for AudioURLPart."""
def test_audio_url_part_creation(self):
"""Test AudioURLPart creation."""
part = AudioURLPart(audio_url=AudioURLPart.AudioURL(url="https://example.com/audio.mp3"))
assert part.type == "audio_url"
assert part.audio_url.url == "https://example.com/audio.mp3"
class TestToolCall:
"""Tests for ToolCall."""
def test_tool_call_creation(self):
"""Test ToolCall creation."""
call = ToolCall(
id="call_123", function=ToolCall.FunctionBody(name="add", arguments='{"a": 1, "b": 2}')
)
assert call.type == "function"
assert call.id == "call_123"
assert call.function.name == "add"
assert call.function.arguments == '{"a": 1, "b": 2}'
def test_tool_call_merge_with_part(self):
"""Test ToolCall merging with ToolCallPart."""
call = ToolCall(
id="call_123", function=ToolCall.FunctionBody(name="add", arguments='{"a": 1')
)
part = ToolCallPart(arguments_part=', "b": 2}')
result = call.merge_in_place(part)
assert result is True
assert call.function.arguments == '{"a": 1, "b": 2}'
def test_tool_call_merge_failure(self):
"""Test ToolCall merge failure with incompatible types."""
call = ToolCall(id="call_123", function=ToolCall.FunctionBody(name="add", arguments="{}"))
result = call.merge_in_place("not a part")
assert result is False
class TestToolCallPart:
"""Tests for ToolCallPart."""
def test_tool_call_part_creation(self):
"""Test ToolCallPart creation."""
part = ToolCallPart(arguments_part='{"a": 1}')
assert part.arguments_part == '{"a": 1}'
def test_tool_call_part_none(self):
"""Test ToolCallPart with None arguments."""
part = ToolCallPart(arguments_part=None)
assert part.arguments_part is None
def test_tool_call_part_merge(self):
"""Test ToolCallPart merging."""
part1 = ToolCallPart(arguments_part='{"a":')
part2 = ToolCallPart(arguments_part=" 1}")
result = part1.merge_in_place(part2)
assert result is True
assert part1.arguments_part == '{"a": 1}'
def test_tool_call_part_merge_none(self):
"""Test ToolCallPart merge when self is None."""
part1 = ToolCallPart(arguments_part=None)
part2 = ToolCallPart(arguments_part='{"a": 1}')
result = part1.merge_in_place(part2)
assert result is True
assert part1.arguments_part == '{"a": 1}'
class TestMessage:
"""Tests for Message."""
def test_message_string_content_coercion(self):
"""Test that string content is coerced to TextPart."""
msg = Message(role="user", content="Hello!")
assert len(msg.content) == 1
assert isinstance(msg.content[0], TextPart)
assert msg.content[0].text == "Hello!"
def test_message_part_content(self):
"""Test Message with ContentPart content."""
part = TextPart(text="Hello!")
msg = Message(role="user", content=part)
assert len(msg.content) == 1
assert msg.content[0].text == "Hello!"
def test_message_list_content(self):
"""Test Message with list of ContentParts."""
parts = [TextPart(text="Hello"), TextPart(text=" world!")]
msg = Message(role="user", content=parts)
assert len(msg.content) == 2
def test_message_extract_text(self):
"""Test text extraction from message."""
msg = Message(role="user", content="Hello world!")
assert msg.extract_text() == "Hello world!"
def test_message_extract_text_with_separator(self):
"""Test text extraction with custom separator."""
parts = [TextPart(text="Hello"), TextPart(text="world!")]
msg = Message(role="user", content=parts)
assert msg.extract_text(sep=" ") == "Hello world!"
assert msg.extract_text(sep="-") == "Hello-world!"
def test_message_has_tool_calls_false(self):
"""Test has_tool_calls returns False when no tool calls."""
msg = Message(role="assistant", content="Hello!")
assert msg.has_tool_calls() is False
def test_message_has_tool_calls_true(self):
"""Test has_tool_calls returns True when tool calls present."""
tool_call = ToolCall(
id="call_123", function=ToolCall.FunctionBody(name="add", arguments="{}")
)
msg = Message(role="assistant", content="Let me calculate that.", tool_calls=[tool_call])
assert msg.has_tool_calls() is True
def test_message_has_tool_calls_empty_list(self):
"""Test has_tool_calls with empty tool_calls list."""
msg = Message(role="assistant", content="Hello!", tool_calls=[])
assert msg.has_tool_calls() is False
def test_message_tool_response(self):
"""Test message with tool response."""
msg = Message(role="tool", content="Result: 42", tool_call_id="call_123")
assert msg.role == "tool"
assert msg.tool_call_id == "call_123"
def test_message_serialization(self):
"""Test Message serialization with model_dump."""
msg = Message(role="user", content="Hello!")
dumped = msg.model_dump()
assert dumped["role"] == "user"
assert "content" in dumped
def test_message_all_roles(self):
"""Test Message creation with all valid roles."""
for role in ["system", "user", "assistant", "tool"]:
msg = Message(role=role, content="Test")
assert msg.role == role
class TestPolymorphicContentPart:
"""Tests for polymorphic ContentPart validation."""
def test_polymorphic_validation_text(self):
"""Test that text type validates to TextPart."""
data = {"type": "text", "text": "Hello"}
part = ContentPart.model_validate(data)
assert isinstance(part, TextPart)
assert part.text == "Hello"
def test_polymorphic_validation_image(self):
"""Test that image_url type validates to ImageURLPart."""
data = {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}}
part = ContentPart.model_validate(data)
assert isinstance(part, ImageURLPart)
assert part.image_url.url == "https://example.com/image.png"
def test_polymorphic_validation_unknown_type(self):
"""Test validation with unknown type raises error."""
data = {"type": "unknown_type", "content": "test"}
with pytest.raises(ValueError, match="Unknown content part type"):
ContentPart.model_validate(data)
def test_polymorphic_validation_no_type(self):
"""Test validation without type raises error."""
data = {"content": "test"}
with pytest.raises(ValueError):
ContentPart.model_validate(data)
class TestMessageEdgeCases:
"""Tests for edge cases in Message handling."""
def test_empty_string_content(self):
"""Test Message with empty string content."""
msg = Message(role="user", content="")
assert msg.content[0].text == ""
def test_message_with_name(self):
"""Test Message with name field."""
msg = Message(role="user", content="Hello", name="user1")
assert msg.name == "user1"
def test_message_content_is_shared(self):
"""Test that mutating the content list is reflected on the message."""
msg = Message(role="user", content="Hello")
# Append to the content list in place
msg.content.append(TextPart(text="Extra"))
# The mutation is visible: content is the same list object, not a copy
assert len(msg.content) == 2
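The `merge_in_place` tests above model incremental streaming: successive deltas of the same part type fold into one part, and a merge that fails signals a type boundary. A simplified stdlib sketch of that accumulation loop (the Pydantic originals are assumed, not shown here):

```python
class SketchTextPart:
    """Simplified TextPart: merge_in_place concatenates streamed deltas."""

    def __init__(self, text=""):
        self.text = text

    def merge_in_place(self, other):
        # Only same-type parts merge; report failure instead of raising,
        # so the caller can start a new part at a type boundary.
        if not isinstance(other, SketchTextPart):
            return False
        self.text += other.text
        return True


def accumulate(deltas):
    """Fold a stream of parts, merging consecutive compatible parts."""
    merged = []
    for part in deltas:
        if merged and merged[-1].merge_in_place(part):
            continue
        merged.append(part)
    return merged


parts = accumulate([SketchTextPart("Hel"), SketchTextPart("lo "), SketchTextPart("world!")])
```

Returning `False` rather than raising is what lets one loop handle heterogeneous streams (text deltas interleaved with tool-call argument deltas).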

@@ -0,0 +1,166 @@
"""Unit tests for provider protocol and exceptions.
This module tests the ChatProvider protocol, StreamedMessage protocol,
and all exception types.
"""
from __future__ import annotations
from agentlite.provider import (
TokenUsage,
ChatProviderError,
APIConnectionError,
APITimeoutError,
APIStatusError,
APIEmptyResponseError,
ChatProvider,
StreamedMessage,
)
class TestTokenUsage:
"""Tests for TokenUsage."""
def test_token_usage_creation(self):
"""Test TokenUsage creation."""
usage = TokenUsage(input_tokens=100, output_tokens=50)
assert usage.input_tokens == 100
assert usage.output_tokens == 50
assert usage.cached_tokens == 0 # Default
def test_token_usage_with_cached(self):
"""Test TokenUsage with cached tokens."""
usage = TokenUsage(input_tokens=100, output_tokens=50, cached_tokens=20)
assert usage.cached_tokens == 20
def test_token_usage_total(self):
"""Test total token calculation."""
usage = TokenUsage(input_tokens=100, output_tokens=50)
assert usage.total == 150
def test_token_usage_total_with_cached(self):
"""Test total with cached tokens (not included in total)."""
usage = TokenUsage(input_tokens=100, output_tokens=50, cached_tokens=20)
# Total is input + output, cached is tracked separately
assert usage.total == 150
class TestChatProviderError:
"""Tests for ChatProviderError hierarchy."""
def test_base_error_creation(self):
"""Test base ChatProviderError creation."""
error = ChatProviderError("Something went wrong")
assert error.message == "Something went wrong"
assert str(error) == "Something went wrong"
def test_api_connection_error(self):
"""Test APIConnectionError creation."""
error = APIConnectionError("Connection failed")
assert isinstance(error, ChatProviderError)
assert error.message == "Connection failed"
def test_api_timeout_error(self):
"""Test APITimeoutError creation."""
error = APITimeoutError("Request timed out")
assert isinstance(error, ChatProviderError)
assert error.message == "Request timed out"
def test_api_status_error(self):
"""Test APIStatusError creation."""
error = APIStatusError(429, "Rate limit exceeded")
assert isinstance(error, ChatProviderError)
assert error.status_code == 429
assert error.message == "Rate limit exceeded"
def test_api_status_error_different_codes(self):
"""Test APIStatusError with different status codes."""
codes = [400, 401, 403, 404, 429, 500, 502, 503]
for code in codes:
error = APIStatusError(code, f"Error {code}")
assert error.status_code == code
def test_api_empty_response_error(self):
"""Test APIEmptyResponseError creation."""
error = APIEmptyResponseError("Empty response from API")
assert isinstance(error, ChatProviderError)
assert error.message == "Empty response from API"
def test_exception_hierarchy(self):
"""Test that all exceptions inherit from ChatProviderError."""
errors = [
APIConnectionError("test"),
APITimeoutError("test"),
APIStatusError(500, "test"),
APIEmptyResponseError("test"),
]
for error in errors:
assert isinstance(error, ChatProviderError)
class TestChatProviderProtocol:
"""Tests for ChatProvider protocol."""
def test_protocol_is_runtime_checkable(self):
"""Test that ChatProvider is runtime checkable."""
# ChatProvider should have @runtime_checkable
assert hasattr(ChatProvider, "__protocol_attrs__")
def test_mock_provider_implements_protocol(self, mock_provider):
"""Test that MockProvider implements ChatProvider."""
assert isinstance(mock_provider, ChatProvider)
def test_mock_provider_has_model_name(self, mock_provider):
"""Test that mock provider has model_name property."""
assert hasattr(mock_provider, "model_name")
assert isinstance(mock_provider.model_name, str)
def test_mock_provider_has_generate_method(self, mock_provider):
"""Test that mock provider has generate method."""
assert hasattr(mock_provider, "generate")
assert callable(mock_provider.generate)
class TestStreamedMessageProtocol:
"""Tests for StreamedMessage protocol."""
def test_protocol_is_runtime_checkable(self):
"""Test that StreamedMessage is runtime checkable."""
assert hasattr(StreamedMessage, "__protocol_attrs__")
def test_mock_streamed_message_implements_protocol(self):
"""Test that MockStreamedMessage implements StreamedMessage."""
from tests.conftest import MockStreamedMessage
from agentlite import TextPart
stream = MockStreamedMessage([TextPart(text="Hello")])
assert isinstance(stream, StreamedMessage)
def test_streamed_message_has_id_property(self):
"""Test that streamed message has id property."""
from tests.conftest import MockStreamedMessage
from agentlite import TextPart
stream = MockStreamedMessage([TextPart(text="Hello")])
assert hasattr(stream, "id")
assert stream.id == "mock-msg-123"
def test_streamed_message_has_usage_property(self):
"""Test that streamed message has usage property."""
from tests.conftest import MockStreamedMessage
from agentlite import TextPart
stream = MockStreamedMessage([TextPart(text="Hello")])
assert hasattr(stream, "usage")
assert stream.usage is not None
assert isinstance(stream.usage, TokenUsage)
def test_streamed_message_is_async_iterable(self):
"""Test that streamed message is async iterable."""
from tests.conftest import MockStreamedMessage
from agentlite import TextPart
stream = MockStreamedMessage([TextPart(text="Hello")])
assert hasattr(stream, "__aiter__")

@@ -0,0 +1,209 @@
"""Unit tests for tool decorator and CallableTool.
This module tests the @tool() decorator and related tool functionality.
"""
from __future__ import annotations
import pytest
from agentlite.tool import tool, CallableTool, ToolOk, ToolError
class TestToolDecorator:
"""Tests for the @tool() decorator."""
def test_tool_decorator_basic(self):
"""Test basic tool decorator functionality."""
@tool()
async def add(a: float, b: float) -> float:
"""Add two numbers."""
return a + b
assert isinstance(add, CallableTool)
assert add.name == "add"
assert add.description == "Add two numbers."
assert add.parameters["type"] == "object"
assert "a" in add.parameters["properties"]
assert "b" in add.parameters["properties"]
assert add.parameters["properties"]["a"]["type"] == "number"
assert add.parameters["properties"]["b"]["type"] == "number"
assert add.parameters["required"] == ["a", "b"]
def test_tool_decorator_with_default_params(self):
"""Test tool decorator with default parameters."""
@tool()
async def greet(name: str, greeting: str = "Hello") -> str:
"""Greet someone."""
return f"{greeting}, {name}!"
assert greet.name == "greet"
assert "name" in greet.parameters["required"]
assert "greeting" not in greet.parameters["required"]
def test_tool_decorator_custom_name(self):
"""Test tool decorator with custom name."""
@tool(name="custom_add")
async def add(a: float, b: float) -> float:
"""Add two numbers."""
return a + b
assert add.name == "custom_add"
def test_tool_decorator_custom_description(self):
"""Test tool decorator with custom description."""
@tool(description="Custom description")
async def add(a: float, b: float) -> float:
"""Add two numbers."""
return a + b
assert add.description == "Custom description"
def test_tool_decorator_no_docstring(self):
"""Test tool decorator with no docstring."""
@tool()
async def no_doc(a: float) -> float:
return a
assert no_doc.description == "No description provided"
def test_tool_decorator_param_types(self):
"""Test tool decorator with various parameter types."""
@tool()
async def multi_types(
s: str,
i: int,
f: float,
b: bool,
) -> dict:
"""Multiple types."""
return {"s": s, "i": i, "f": f, "b": b}
props = multi_types.parameters["properties"]
assert props["s"]["type"] == "string"
assert props["i"]["type"] == "integer"
assert props["f"]["type"] == "number"
assert props["b"]["type"] == "boolean"
def test_tool_decorator_no_type_hints(self):
"""Test tool decorator with no type hints."""
@tool()
async def no_types(param) -> str:
"""No type hints."""
return str(param)
assert no_types.parameters["properties"]["param"]["type"] == "string"
class TestToolDecoratorExecution:
"""Tests for tool decorator execution."""
@pytest.mark.asyncio
async def test_tool_execution_success(self):
"""Test successful tool execution."""
@tool()
async def add(a: float, b: float) -> float:
"""Add two numbers."""
return a + b
result = await add(1.0, 2.0)
assert isinstance(result, ToolOk)
assert result.output == "3.0"
@pytest.mark.asyncio
async def test_tool_execution_error(self):
"""Test tool execution with error."""
@tool()
async def divide(a: float, b: float) -> float:
"""Divide two numbers."""
return a / b
result = await divide(1.0, 0.0)
assert isinstance(result, ToolError)
assert "division by zero" in result.message
@pytest.mark.asyncio
async def test_tool_execution_with_kwargs(self):
"""Test tool execution with keyword arguments."""
@tool()
async def greet(name: str, greeting: str = "Hello") -> str:
"""Greet someone."""
return f"{greeting}, {name}!"
result = await greet(name="World", greeting="Hi")
assert isinstance(result, ToolOk)
assert result.output == "Hi, World!"
class TestToolDecoratorMemorixBug:
"""Tests for the specific bug reported by Memorix project."""
def test_tool_decorator_memorix_case(self):
"""Test the exact case from Memorix bug report.
This test verifies that the @tool() decorator works correctly
with async functions that have string and float parameters.
"""
@tool()
async def add_memory(content: str, importance: float = 0.5) -> dict:
"""存储记忆"""
return {"status": "ok"}
assert isinstance(add_memory, CallableTool)
assert add_memory.name == "add_memory"
assert add_memory.description == "存储记忆"
# Check parameters schema
params = add_memory.parameters
assert params["type"] == "object"
assert "content" in params["properties"]
assert "importance" in params["properties"]
assert params["properties"]["content"]["type"] == "string"
assert params["properties"]["importance"]["type"] == "number"
# content is required (no default), importance is optional
assert "content" in params["required"]
assert "importance" not in params["required"]
@pytest.mark.asyncio
async def test_tool_decorator_memorix_execution(self):
"""Test execution of the Memorix case."""
@tool()
async def add_memory(content: str, importance: float = 0.5) -> dict:
"""存储记忆"""
return {"status": "ok", "content": content, "importance": importance}
result = await add_memory("test content", 0.8)
assert isinstance(result, ToolOk)
assert "ok" in result.output
def test_tool_decorator_can_be_used_in_agent(self):
"""Test that decorated tools can be used with Agent.
This is an integration-style test to ensure the decorated tool
has all required attributes for Agent usage.
"""
@tool()
async def add_memory(content: str, importance: float = 0.5) -> dict:
"""存储记忆"""
return {"status": "ok"}
# Verify the tool has the base property required by Agent
assert hasattr(add_memory, "base")
base_tool = add_memory.base
assert base_tool.name == "add_memory"
assert base_tool.description == "存储记忆"
assert base_tool.parameters == add_memory.parameters

agentlite/tests/utils.py Normal file

@@ -0,0 +1,98 @@
"""Test utilities and helpers for AgentLite tests.
This module provides utility functions and helpers used across test modules.
"""
from __future__ import annotations
import asyncio
from collections.abc import Coroutine
from typing import Any, TypeVar
T = TypeVar("T")
async def run_async(coro: Coroutine[Any, Any, T]) -> T:
"""Run an async coroutine and return the result.
This is a helper for tests that need to run async code synchronously.
Args:
coro: The coroutine to run.
Returns:
The result of the coroutine.
"""
return await coro
def run_sync(coro: Coroutine[Any, Any, T]) -> T:
"""Run an async coroutine synchronously.
Args:
coro: The coroutine to run.
Returns:
The result of the coroutine.
"""
return asyncio.run(coro)
async def collect_stream(stream) -> list[Any]:
"""Collect all items from an async stream into a list.
Args:
stream: The async stream to collect from.
Returns:
List of all items from the stream.
"""
items = []
async for item in stream:
items.append(item)
return items
async def collect_stream_text(stream) -> str:
"""Collect all text from an async text stream.
Args:
stream: The async stream to collect from.
Returns:
Concatenated text from all items.
"""
from agentlite import TextPart
text_parts = []
async for item in stream:
if isinstance(item, TextPart):
text_parts.append(item.text)
elif isinstance(item, str):
text_parts.append(item)
return "".join(text_parts)
def create_tool_schema(
name: str,
description: str,
properties: dict[str, Any],
required: list[str] | None = None,
) -> dict[str, Any]:
"""Create a JSON schema for a tool.
Args:
name: Tool name.
description: Tool description.
properties: JSON schema properties.
required: List of required property names.
Returns:
JSON schema for the tool.
"""
schema = {
"type": "object",
"properties": properties,
}
if required:
schema["required"] = required
return schema

bot.py

@@ -1,38 +1,26 @@
# raise RuntimeError("System Not Ready")
from pathlib import Path
from rich.traceback import install
import asyncio
import hashlib
import os
import time
import platform
import traceback
import shutil
import sys
# import shutil
import subprocess
from dotenv import load_dotenv
from pathlib import Path
from rich.traceback import install
from src.common.logger import initialize_logging, get_logger, shutdown_logging
import sys
import time
import traceback
from src.common.i18n import set_locale, t, tn
from src.common.logger import get_logger, initialize_logging, shutdown_logging
# 设置工作目录为脚本所在目录
script_dir = os.path.dirname(os.path.abspath(__file__))
os.chdir(script_dir)
env_path = Path(__file__).parent / ".env"
template_env_path = Path(__file__).parent / "template" / "template.env"
if env_path.exists():
load_dotenv(str(env_path), override=True)
else:
try:
if template_env_path.exists():
shutil.copyfile(template_env_path, env_path)
print("未找到.env已从 template/template.env 自动创建")
load_dotenv(str(env_path), override=True)
else:
print("未找到.env文件也未找到模板 template/template.env")
raise FileNotFoundError(".env 文件不存在,请创建并配置所需的环境变量")
except Exception as e:
print(f"自动创建 .env 失败: {e}")
raise
set_locale(os.getenv("MAIBOT_LOCALE", "zh-CN"))
# 检查是否是 Worker 进程,只在 Worker 进程中输出详细的初始化信息
# Runner 进程只需要基本的日志功能,不需要详细的初始化日志
@@ -43,6 +31,11 @@ logger = get_logger("main")
# 定义重启退出码
RESTART_EXIT_CODE = 42
# print("-----------------------------------------")
# print("\n\n\n\n\n")
# print(t("startup.dev_branch_warning"))
# print("\n\n\n\n\n")
# print("-----------------------------------------")
def run_runner_process():
@@ -58,8 +51,8 @@ def run_runner_process():
env["MAIBOT_WORKER_PROCESS"] = "1"
while True:
logger.info(f"正在启动 {script_file}...")
logger.info("正在编译着色器1/114514")
logger.info(t("startup.launching_script", script_file=script_file))
logger.info(t("startup.compiling_shaders"))
# 启动子进程 (Worker)
# 使用 sys.executable 确保使用相同的 Python 解释器
@@ -72,11 +65,11 @@ def run_runner_process():
return_code = process.wait()
if return_code == RESTART_EXIT_CODE:
logger.info("检测到重启请求 (退出码 42),正在重启...")
logger.info(t("startup.restart_requested", exit_code=RESTART_EXIT_CODE))
time.sleep(1) # 稍作等待
continue
else:
logger.info(f"程序已退出 (退出码 {return_code})")
logger.info(t("startup.program_exited", return_code=return_code))
sys.exit(return_code)
except KeyboardInterrupt:
@@ -88,7 +81,7 @@ def run_runner_process():
process.terminate()
process.wait(timeout=5)
except subprocess.TimeoutExpired:
logger.warning("子进程未响应,强制关闭...")
logger.warning(t("startup.child_process_force_kill"))
process.kill()
sys.exit(0)
@@ -122,7 +115,7 @@ from src.manager.async_task_manager import async_task_manager # noqa
# 设置工作目录为脚本所在目录
# script_dir = os.path.dirname(os.path.abspath(__file__))
# os.chdir(script_dir)
logger.info(f"已设置工作目录为: {script_dir}")
logger.info(t("startup.worker_dir_set", script_dir=script_dir))
confirm_logger = get_logger("confirm")
@@ -144,16 +137,16 @@ def print_opensource_notice():
notice_lines = [
"",
f"{Fore.CYAN}{'' * 70}{Style.RESET_ALL}",
f"{Fore.GREEN} ★ MaiBot - 开源 AI 聊天机器人 ★{Style.RESET_ALL}",
f"{Fore.GREEN}{t('startup.opensource_title')}{Style.RESET_ALL}",
f"{Fore.CYAN}{'' * 70}{Style.RESET_ALL}",
f"{Fore.YELLOW} 本项目是完全免费的开源软件,基于 GPL-3.0 协议发布{Style.RESET_ALL}",
f"{Fore.WHITE} 如果有人向你「出售本软件」,你被骗了!{Style.RESET_ALL}",
f"{Fore.YELLOW}{t('startup.opensource_free_notice')}{Style.RESET_ALL}",
f"{Fore.WHITE}{t('startup.opensource_scamming_notice')}{Style.RESET_ALL}",
"",
f"{Fore.WHITE} 官方仓库: {Fore.BLUE}https://github.com/MaiM-with-u/MaiBot {Style.RESET_ALL}",
f"{Fore.WHITE} 官方文档: {Fore.BLUE}https://docs.mai-mai.org {Style.RESET_ALL}",
f"{Fore.WHITE} 官方群聊: {Fore.BLUE}1006149251{Style.RESET_ALL}",
f"{Fore.WHITE}{t('startup.opensource_repo')}{Fore.BLUE}{t('startup.opensource_repo_value')} {Style.RESET_ALL}",
f"{Fore.WHITE}{t('startup.opensource_docs')}{Fore.BLUE}{t('startup.opensource_docs_value')} {Style.RESET_ALL}",
f"{Fore.WHITE}{t('startup.opensource_group')}{Fore.BLUE}{t('startup.opensource_group_value')}{Style.RESET_ALL}",
f"{Fore.CYAN}{'' * 70}{Style.RESET_ALL}",
f"{Fore.RED}将本软件作为「商品」倒卖、隐瞒开源性质均违反协议!{Style.RESET_ALL}",
f"{Fore.RED}{t('startup.opensource_resale_warning').strip()}{Style.RESET_ALL}",
f"{Fore.CYAN}{'' * 70}{Style.RESET_ALL}",
"",
]
@@ -167,7 +160,7 @@ def easter_egg():
from colorama import init, Fore
init()
text = "多年以后面对AI行刑队张三将会回想起他2023年在会议上讨论人工智能的那个下午"
text = t("startup.easter_egg")
rainbow_colors = [Fore.RED, Fore.YELLOW, Fore.GREEN, Fore.CYAN, Fore.BLUE, Fore.MAGENTA]
rainbow_text = ""
for i, char in enumerate(text):
@@ -177,32 +170,37 @@ def easter_egg():
async def graceful_shutdown(): # sourcery skip: use-named-expression
try:
logger.info("正在优雅关闭麦麦...")
logger.info(t("startup.shutdown_started"))
# 关闭 WebUI 服务器
try:
from src.webui.webui_server import get_webui_server
# try:
# from src.webui.webui_server import get_webui_server
webui_server = get_webui_server()
if webui_server and webui_server._server:
await webui_server.shutdown()
except Exception as e:
logger.warning(f"关闭 WebUI 服务器时出错: {e}")
# webui_server = get_webui_server()
# if webui_server and webui_server._server:
# await webui_server.shutdown()
# except Exception as e:
# logger.warning(f"关闭 WebUI 服务器时出错: {e}")
from src.plugin_system.core.events_manager import events_manager
from src.plugin_system.base.component_types import EventType
from src.core.event_bus import event_bus
from src.core.types import EventType
# 触发 ON_STOP 事件
await events_manager.handle_mai_events(event_type=EventType.ON_STOP)
await event_bus.emit(event_type=EventType.ON_STOP)
# 停止新版本插件运行时
from src.plugin_runtime.integration import get_plugin_runtime_manager
await get_plugin_runtime_manager().stop()
# 停止所有异步任务
await async_task_manager.stop_and_wait_all_tasks()
# 获取所有剩余任务,排除当前任务
remaining_tasks = [t for t in asyncio.all_tasks() if t is not asyncio.current_task()]
remaining_tasks = [task for task in asyncio.all_tasks() if task is not asyncio.current_task()]
if remaining_tasks:
logger.info(f"正在取消 {len(remaining_tasks)} 个剩余任务...")
logger.info(tn("startup.remaining_tasks_cancelling", len(remaining_tasks)))
# 取消所有剩余任务
for task in remaining_tasks:
@@ -212,23 +210,23 @@ async def graceful_shutdown(): # sourcery skip: use-named-expression
# 等待所有任务完成,设置超时
try:
await asyncio.wait_for(asyncio.gather(*remaining_tasks, return_exceptions=True), timeout=15.0)
logger.info("所有剩余任务已成功取消")
logger.info(t("startup.remaining_tasks_cancelled"))
except asyncio.TimeoutError:
logger.warning("等待任务取消超时,强制继续关闭")
logger.warning(t("startup.remaining_tasks_cancel_timeout"))
except Exception as e:
logger.error(f"等待任务取消时发生异常: {e}")
logger.error(t("startup.remaining_tasks_cancel_error", error=e))
logger.info("麦麦优雅关闭完成")
logger.info(t("startup.shutdown_completed"))
except Exception as e:
logger.error(f"麦麦关闭失败: {e}", exc_info=True)
logger.error(t("startup.shutdown_failed", error=e), exc_info=True)
def _calculate_file_hash(file_path: Path, file_type: str) -> str:
"""计算文件的MD5哈希值"""
if not file_path.exists():
logger.error(f"{file_type} 文件不存在")
raise FileNotFoundError(f"{file_type} 文件不存在")
logger.error(t("startup.file_not_found", file_type=file_type))
raise FileNotFoundError(t("startup.file_not_found", file_type=file_type))
with open(file_path, "r", encoding="utf-8") as f:
content = f.read()
@@ -257,26 +255,42 @@ def _check_agreement_status(file_hash: str, confirm_file: Path, env_var: str) ->
def _prompt_user_confirmation(eula_hash: str, privacy_hash: str) -> None:
"""提示用户确认协议"""
confirm_logger.critical("EULA或隐私条款内容已更新请在阅读后重新确认继续运行视为同意更新后的以上两款协议")
confirm_logger.critical(t("startup.agreement_reconfirm"))
confirm_logger.critical(
f'输入"同意""confirmed"或设置环境变量"EULA_AGREE={eula_hash}""PRIVACY_AGREE={privacy_hash}"继续运行'
t(
"startup.agreement_confirm_prompt",
eula_hash=eula_hash,
privacy_hash=privacy_hash,
)
)
while True:
user_input = input().strip().lower()
if user_input in ["同意", "confirmed"]:
return
confirm_logger.critical('请输入"同意""confirmed"以继续运行')
confirm_logger.critical(t("startup.agreement_confirm_retry"))
def _save_confirmations(eula_updated: bool, privacy_updated: bool, eula_hash: str, privacy_hash: str) -> None:
"""保存用户确认结果"""
if eula_updated:
logger.info(f"更新EULA确认文件{eula_hash}")
logger.info(
t(
"startup.agreement_updated",
agreement_name=t("startup.eula_name"),
file_hash=eula_hash,
)
)
Path("eula.confirmed").write_text(eula_hash, encoding="utf-8")
if privacy_updated:
logger.info(f"更新隐私条款确认文件{privacy_hash}")
logger.info(
t(
"startup.agreement_updated",
agreement_name=t("startup.privacy_name"),
file_hash=privacy_hash,
)
)
Path("privacy.confirmed").write_text(privacy_hash, encoding="utf-8")
@@ -311,7 +325,7 @@ def raw_main():
print_opensource_notice()
check_eula()
logger.info("检查EULA和隐私条款完成")
logger.info(t("startup.eula_privacy_checked"))
easter_egg()
@@ -343,7 +357,7 @@ if __name__ == "__main__":
loop.run_until_complete(main_tasks)
except KeyboardInterrupt:
logger.warning("收到中断信号,正在优雅关闭...")
logger.warning(t("startup.interrupt_received"))
# 取消主任务
if "main_tasks" in locals() and main_tasks and not main_tasks.done():
@@ -358,7 +372,7 @@ if __name__ == "__main__":
try:
loop.run_until_complete(graceful_shutdown())
except Exception as ge:
logger.error(f"优雅关闭时发生错误: {ge}")
logger.error(t("startup.graceful_shutdown_error", error=ge))
# 新增:检测外部请求关闭
except SystemExit as e:
@@ -368,24 +382,24 @@ if __name__ == "__main__":
else:
exit_code = 1 if e.code else 0
if exit_code == RESTART_EXIT_CODE:
logger.info("收到重启信号,准备退出并请求重启...")
logger.info(t("startup.restart_signal_received"))
except Exception as e:
logger.error(f"主程序发生异常: {str(e)} {str(traceback.format_exc())}")
logger.error(t("startup.main_error", error=f"{str(e)} {str(traceback.format_exc())}"))
exit_code = 1 # 标记发生错误
finally:
# 确保 loop 在任何情况下都尝试关闭(如果存在且未关闭)
if "loop" in locals() and loop and not loop.is_closed():
loop.close()
print("[主程序] 事件循环已关闭")
print(t("startup.event_loop_closed"))
# 关闭日志系统,释放文件句柄
try:
shutdown_logging()
except Exception as e:
print(f"关闭日志系统时出错: {e}")
print(t("startup.logging_shutdown_error", error=e))
print("[主程序] 准备退出...")
print(t("startup.prepare_exit"))
# 使用 os._exit() 强制退出,避免被阻塞
# 由于已经在 graceful_shutdown() 中完成了所有清理工作,这是安全的


@@ -0,0 +1,732 @@
# Mai NEXT 设计文档
Version 0.2.2 - 2025-11-05
## 配置文件设计
主体利用`pydantic`的`BaseModel`设计配置基类`ConfigBase`。
要求每个属性必须具有类型注解,且类型注解满足以下要求:
- 原子类型仅允许使用:`str`, `int`, `float`, `bool`,以及基于`ConfigBase`的嵌套配置类
- 复杂类型允许使用:`list`, `dict`, `set`,但其内部类型必须为原子类型或嵌套配置类,不可使用`list[list[int]]`、`list[dict[str, int]]`等写法
- 禁止使用`Union`与`tuple/Tuple`类型
- 但`Optional`仍然允许使用
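上述类型约束可以用 `typing.get_origin`/`typing.get_args` 做一个最小的校验示意(`ConfigBase` 在此用占位类代替,函数名为假设,且未处理 PEP 604 的 `X | None` 写法):

```python
import typing


class ConfigBase:  # 占位:实际项目中基于 pydantic.BaseModel
    pass


ATOMIC = (str, int, float, bool)


def is_allowed_annotation(tp) -> bool:
    """按上述规则检查一个类型注解是否合法(示意实现)。"""
    origin = typing.get_origin(tp)
    if origin is None:  # 原子类型或嵌套配置类
        return tp in ATOMIC or (isinstance(tp, type) and issubclass(tp, ConfigBase))
    args = typing.get_args(tp)
    if origin is typing.Union:  # 仅允许 Optional[X],即 Union[X, None]
        non_none = [a for a in args if a is not type(None)]
        return len(args) == 2 and len(non_none) == 1 and is_allowed_annotation(non_none[0])
    if origin in (list, set):  # 内部类型必须为原子类型/嵌套配置类,禁止再嵌套泛型
        return len(args) == 1 and typing.get_origin(args[0]) is None and is_allowed_annotation(args[0])
    if origin is dict:
        key, value = args
        return is_allowed_annotation(key) and typing.get_origin(value) is None and is_allowed_annotation(value)
    return False  # 非 Optional 的 Union、tuple 等一律拒绝
```

实际项目中可以把该检查放进 `ConfigBase` 的子类钩子里,使非法注解在定义配置类时就报错。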
### 移除template的方案提案
<details>
<summary>配置项说明的废案</summary>
<p>方案一</p>
<pre>
from typing import Annotated
from dataclasses import dataclass, field
@dataclass
class Config:
value: Annotated[str, "配置项说明"] = field(default="default_value")
</pre>
<p>方案二(不推荐)</p>
<pre>
from dataclasses import dataclass, field
@dataclass
class Config:
@property
def value(self) -> str:
"""配置项说明"""
return "default_value"
</pre>
<p>方案四</p>
<pre>
from dataclasses import dataclass, field
@dataclass
class Config:
value: str = field(default="default_value", metadata={"doc": "配置项说明"})
</pre>
</details>
- [x] 方案三(个人推荐)
```python
import ast, inspect
class AttrDocBase:
...
from dataclasses import dataclass, field
@dataclass
class Config(ConfigBase, AttrDocBase):
value: str = field(default="default_value")
"""配置项说明"""
```
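方案三的关键是解析类体,把紧跟在属性定义后的字符串字面量当作说明。一个基于 `ast` 的最小示意(这里直接解析源码文本;实际的 `AttrDocBase` 可结合 `inspect.getsource` 使用,函数名为假设):

```python
import ast
import textwrap


def extract_attr_docs(source: str) -> dict[str, str]:
    """从类源码文本中收集每个带注解属性后紧跟的字符串字面量作为说明。"""
    class_node = ast.parse(textwrap.dedent(source)).body[0]
    docs: dict[str, str] = {}
    last_attr = None
    for node in class_node.body:
        if isinstance(node, ast.AnnAssign) and isinstance(node.target, ast.Name):
            last_attr = node.target.id  # 记录刚定义的属性名
            continue
        if (
            last_attr
            and isinstance(node, ast.Expr)
            and isinstance(node.value, ast.Constant)
            and isinstance(node.value.value, str)
        ):
            docs[last_attr] = node.value.value  # 紧随其后的字符串即为说明
        last_attr = None
    return docs


SRC = '''
class Config:
    value: str = "default_value"
    """配置项说明"""
    other: int = 1
'''
print(extract_attr_docs(SRC))
```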
### 配置文件实现热重载
#### 整体架构设计
- [x] 文件监视器
- [x] 监视文件变化
- [x] 使用 `watchfiles` 监视配置文件变化(提案)
- [ ] <del>备选提案:使用纯轮询监视文件变化</del>
- [x] <del>使用Hash检查文件变化</del>`watchfiles`实现)
- [x] 防抖处理(使用`watchfiles`的防抖)
- [x] 重新分发监视事件,正确监视文件变化
- [ ] 配置管理器
- [x] 配置文件读取和加载
- [ ] 重载配置
- [ ] 管理全部配置数据
- [ ] `validate_config` 方法
- [ ] <del>回调管理器</del>(合并到文件监视器中)
- [x] `callback` 注册与注销
- [ ] <del>按优先级执行回调(提案)</del>
- [x] 错误隔离
- [ ] 锁机制
#### 工作流程
```
1. 文件监视器检测变化
2. 配置管理器加锁重载
3. 验证新配置 (失败保持旧配置)
4. 更新内存数据
5. 回调管理器按优先级执行回调 (错误隔离)
6. 释放锁
```
#### 回调执行策略
1. <del>优先级顺序(提案): 数字越小优先级越高,同优先级异步回调并行执行</del>
2. 错误处理: 单个回调失败不影响其他回调
#### 代码框架
实际代码实现与下类似,但是进行了调整
`ConfigManager` - 配置管理器:
```python
import asyncio
import tomlkit
from typing import Any, Dict, Optional
from pathlib import Path
class ConfigManager:
def __init__(self, config_path: str):
self.config_path: Path = Path(config_path)
self.config_data: Dict[str, Any] = {}
self._lock: asyncio.Lock = asyncio.Lock()
self._file_watcher: Optional["FileWatcher"] = None
self._callback_manager: Optional["CallbackManager"] = None
async def initialize(self) -> None:
"""异步初始化,加载配置并启动监视"""
pass
async def load_config(self) -> Dict[str, Any]:
"""异步加载配置文件"""
pass
async def reload_config(self) -> bool:
"""热重载配置,返回是否成功"""
pass
def get_item(self, key: str, default: Any = None) -> Any:
"""获取配置项,支持嵌套访问 (如 'section.key')"""
pass
async def set_item(self, key: str, value: Any) -> None:
"""设置配置项并触发回调"""
pass
def validate_config(self, config: Dict[str, Any]) -> bool:
"""验证配置合法性"""
pass
```
<details>
<summary>回调管理器(废案)</summary>
`CallbackManager` - 回调管理器:
```python
import asyncio
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Union

@dataclass
class CallbackEntry:
    """单个回调的注册信息"""
    name: str
    priority: int
    callback: Callable[[Any], Union[None, asyncio.Future]]
    enabled: bool = True
class CallbackManager:
def __init__(self):
self._callbacks: Dict[str, List[CallbackEntry]] = {}
self._global_callbacks: List[CallbackEntry] = []
def register(
self,
key: str,
callback: Callable[[Any], Union[None, asyncio.Future]],
priority: int = 100,
name: str = ""
) -> None:
"""注册回调函数priority为正整数数字越小优先级越高"""
pass
def unregister(self, key: str, callback: Callable) -> None:
"""注销回调函数"""
pass
async def trigger(self, key: str, value: Any) -> None:
"""触发回调,按优先级执行(数字小的先执行),错误隔离"""
pass
def enable_callback(self, key: str, name: str) -> None:
"""启用指定回调"""
pass
def disable_callback(self, key: str, name: str) -> None:
"""禁用指定回调"""
pass
```
对于CallbackManager中的优先级功能说明
- 数字越小优先级越高
- 为什么要有优先级系统:
- 理论上来说,在热重载配置之后,应该要通过回调函数管理器触发所有回调函数,模拟启动的过程,类似于“重启”
- 而优先级模块是保证某一些模块的重载顺序一定是晚于某一些地基模块的
- 例如:内置服务器的启动应该是晚于所有模块,即最后启动
</details>
`FileWatcher` - 文件监视器:
```python
import asyncio
from typing import Callable

from watchfiles import awatch
class FileWatcher:
def __init__(self, debounce_ms: int = 500):
self.debounce_ms: int = debounce_ms
def start(self, on_change: Callable) -> None:
"""启动文件监视"""
pass
def stop(self) -> None:
"""停止文件监视"""
pass
async def invoke_callback(self) -> None:
"""调用变化回调函数"""
pass
```
#### 配置文件写入
- [x] 将当前文件写入toml文件
## 消息部分设计
解决原有的将消息类与数据库类存储不匹配的问题,现在存储所有消息类的所有属性
完全合并`stream_id`与`chat_id`为`chat_id`,规范名称。
将`chat_stream`重命名为`chat_session`,表示一个会话。
### 消息类设计
- [ ] 支持并使用maim_message新的`SenderInfo``ReceiverInfo`构建消息
- [ ] 具体使用参考附录
- [ ] 适配器处理跟进该更新
- [ ] 修复适配器的类型检查问题
- [ ] 设计更好的平台消息ID回传机制
- [ ] 考虑使用事件依赖机制
### 图片处理系统
- [ ] 规范化Emojis与Images的命名统一保存
### 消息到Prompt的构建提案
- [ ] <del>类QQ的时间系统即不是每条消息加时间戳而是分大时间段加时间戳</del>(此功能已实现,但效果不佳)
- [ ] 消息编号系统(已经有的)
- [ ] 思考打断,如何判定是否打断?
- [ ] 如何判定消息是连贯的MoFox: 一个反比例函数???太神秘了)
### 消息进入处理
使用轮询机制,每隔一段时间检查缓存中是否有新消息
---
## 数据库部分设计
合并Emojis和Images到同一个表中
数据库ORM应该使用SQLModel而不是peeweepeewee我这辈子都不会用它了
### 数据库缓存层设计
将部分消息缓存到内存中,减少数据库访问,在主程序处理完之后再写入数据库
要求:对上层调用保持透明
- [ ] 数据库内容管理类 `DatabaseManager`
- [ ] 维护数据库连接
- [ ] 提供增删改查接口
- [ ] 维护缓存类 `DatabaseMessageCache` 的实例
- [ ] 缓存类 `DatabaseMessageCache`
- [ ] **设计缓存失效机制**
- [ ] 设计缓存更新机制
- [ ] `add_message`
- [ ] `update_message` (提案)
- [ ] `delete_message`
- [ ] 与数据库交互部分设计
- [ ] 维持现有的数据库sqlite
- [ ] 继续使用peewee进行操作
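“对上层调用保持透明”的写缓存可以示意如下(类名以上文 TODO 为准,`db_writer` 的接口与 `flush` 的落库时机均为假设):

```python
class DatabaseMessageCache:
    """写缓存:上层只与本类交互,何时落库由 flush 决定(示意)。"""

    def __init__(self, db_writer):
        self._db_writer = db_writer          # 实际实现中为 DatabaseManager 的写接口
        self._pending: dict[str, dict] = {}  # message_id -> 消息内容

    def add_message(self, message: dict) -> None:
        self._pending[message["message_id"]] = message

    def get_message(self, message_id: str):
        # 先查内存缓存,未命中再回落到数据库,对调用方透明
        return self._pending.get(message_id) or self._db_writer.load(message_id)

    def flush(self) -> int:
        """主程序处理完后批量写入数据库,返回落库条数。"""
        count = len(self._pending)
        for msg in self._pending.values():
            self._db_writer.save(msg)
        self._pending.clear()
        return count
```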
### 消息表设计
- [ ] 设计内部消息ID和平台消息ID两种形式
- [ ] 临时消息ID不进入数据库
- [ ] 消息有关信息设计
- [ ] 消息ID
- [ ] 发送者信息
- [ ] 接收者信息
- [ ] 消息内容
- [ ] 消息时间戳
- [ ] 待定
### Emojis与Images表设计
- [ ] 设计图片专有ID并作为文件名
### Expressions表设计
- [ ] 待定
### 表实际设计
#### ActionRecords 表
- [ ] 动作唯一ID `action_id`
- [ ] 动作执行时间 `action_time`
- [ ] 动作名称 `action_name`
- [ ] 动作参数 `action_params` JSON格式存储`action_data`
---
## 数据模型部分设计
- [ ] <del>Message从数据库反序列化不再使用额外的Message类</del>(放弃)
- [ ] 设计 `BaseModel` 类,作为所有数据模型的基类
- [ ] 提供通用的序列化和反序列化方法(提案)
---
## 核心业务逻辑部分设计
### Prompt 设计
将Prompt内容彻底模块化设计
- [ ] 设计 Prompt 类
- [ ] `__init__(self, template: list[str], **kwargs)` 维持现有的 template 设计,但不进行 format,直到最后传入 LLM 时再进行 render
- [ ] `__init__`中允许传入任意的键值对,存储在`self.context`中
- [ ] `self.prompt_name` 作为Prompt的名称
- [ ] `self.construct_function: Dict[str, Callable | AsyncCallable]` 构建Prompt内容所需的函数字典
- [ ] 格式:`{"block_name": function_reference}`
- [ ] `self.content_block: Dict[str, str]`: 实际的Prompt内容块
- [ ] 格式:`{"block_name": "Unrendered Prompt Block"}`
- [ ] `render(self) -> str` 使用非递归渲染方式渲染Prompt内容
- [ ] `add_construct_function(self, name: str, func: Callable | AsyncCallable, *, suppress: bool = False)` 添加构造函数
- [ ] 实现重名警告/错误(偏向错误)
- [ ] `suppress`: 是否覆盖已有的构造函数
- [ ] `remove_construct_function(self, name: str)` 移除指定名称的构造函数
- [ ] `add_block(self, prompt_block: "Prompt", block_name: str, *, suppress: bool = False)` 将另一个Prompt的内容更新到当前Prompt中
- [ ] 实现重名属性警告/错误(偏向错误)
- [ ] 实现重名构造函数警告/错误(偏向错误)
- [ ] `suppress`: 是否覆盖已有的内容块和构造函数
- [ ] `remove_block(self, block_name: str)` 移除指定名称的Prompt块
- [ ] 设计 PromptManager 类
- [ ] `__init__(self)` 初始化一个空的Prompt管理器
- [ ] `add_prompt(self, name: str, prompt: Prompt)` 添加一个新的Prompt
- [ ] 实现重名警告/错误(偏向错误)
- [ ] `get_prompt(self, name: str) -> Prompt` 根据名称获取Prompt
- [ ] 实现不存在时的错误处理
- [ ] `remove_prompt(self, name: str)` 移除指定名称的Prompt
- [ ] 系统 Prompt 保护
- [ ] `list_prompts(self) -> list[str]` 列出所有已添加的Prompt名称
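上述 Prompt 类的“延迟渲染”思路可以用一个极简骨架示意(此处把 `add_block` 简化为直接传入字符串,重名默认报错,完整设计以上文为准):

```python
class Prompt:
    """延迟渲染的 Prompt 骨架:构建时只收集内容,render 时才 format。"""

    def __init__(self, template: list[str], **kwargs):
        self.template = template
        self.context: dict = dict(kwargs)
        self.content_block: dict[str, str] = {}

    def add_block(self, block_name: str, content: str, *, suppress: bool = False):
        if block_name in self.content_block and not suppress:
            raise ValueError(f"重复的内容块: {block_name}")  # 重名偏向错误
        self.content_block[block_name] = content

    def render(self) -> str:
        # 非递归渲染:仅做一轮 format,块内容不再二次展开
        values = {**self.context, **self.content_block}
        return "\n".join(line.format(**values) for line in self.template)
```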
### 内建好奇插件设计
- [ ] 设计“麦麦好奇”插件
- [ ] 解决麦麦乱好奇的问题
- [ ] 好奇问题无回复清理
- [ ] 好奇问题超时清理
- [ ] 根据聊天内容选择个性化好奇问题
- [ ] 好奇频率控制
---
## 插件系统部分设计
### <del>设计一个插件沙盒系统</del>(放弃)
### 插件管理
- [ ] 插件管理器类 `PluginManager` 的更新
- [ ] 重写现有的插件文件加载逻辑,精简代码,方便重载
- [ ] 学习AstrBot的基于子类加载的插件加载方式放弃@register_plugin(提案)
- [ ] 直接 breaking change 删除 @register_plugin 函数,不保留过去插件的兼容性(提案)
- [ ] 设计插件重载系统
- [ ] 插件配置文件重载
- [ ] 复用`FileWatcher`实现配置文件热重载
- [ ] 插件代码重载
- [ ] 从插件缓存中移除此插件对应的模块
- [ ] 从组件管理器中移除该插件对应的组件
- [ ] 重新导入该插件模块
- [ ] 插件可以设计为禁止热重载类型
- [ ] 通过字段`allow_hot_reload: bool`指定
- [ ] Napcat Adapter插件设计为禁止热重载类型
- [ ] 其余细节待定
- [ ] 组件管理器类 `ComponentManager` 的更新
- [ ] 配合插件重载系统的更好的组件管理代码
- [ ] 组件全局控制和局部控制的平级化(提案)
- [ ] 重新设计组件注册和注销逻辑,分离激活和注册
- [ ] 可以修改组件的属性
- [ ] 组件系统卸载
- [ ] 联动插件卸载(方便重载设计)
- [ ] 其余细节待定
- [ ] 因重载机制设计的更丰富的`plugin_meta`与`component_meta`
- [ ] `component_meta`增加`plugin_file`字段,指向插件文件路径,保证重载时组件能正确更新
- [ ] `plugin_meta`增加`sub_components`字段,指示该插件包含的组件列表,方便重载时更新
- [ ] `sub_components`内容为组件类名列表
### 插件激活方式的动态设计
- [ ] 设计可变的插件激活方式
- [ ] 直接读写类属性`activate_types`
### 真正的插件重载
- [ ] 使用上文中提到的配置文件热重载机制
- [ ] FileWatcher的复用
### 传递内容设计
对于传入的Prompt使用上文提到的Prompt类进行管理方便内容修改避免正则匹配式查找
### MCP 接入(大饼)
- [ ] 设计 MCP 适配器类 `MCPAdapter`
- [ ] MCP 调用构建说明Prompt
- [ ] MCP 调用内容传递
- [ ] MCP 调用结果处理
### 工具结果的缓存设计
可能的使用案例参考[附录-工具缓存](#工具缓存可能用例)
- [ ] `put_cache(*, _component_name: str, **kwargs)` 方法
- [ ] 设计为父类的方法,插件继承后使用
- [ ] `_component_name` 指定当前组件名称由MaiNext自动传入
- [ ] `get_cache` 方法
- [ ] `need_cache` 变量管理是否调用缓存结果
- [ ] 仅在设置为True时为插件创立缓存空间
### Events依赖机制提案
- [ ] 通过Events的互相依赖完成链式任务
- [ ] 设计动态调整events_handler执行顺序的机制 (感谢@OctAutumn老师!伟大,无需多言)
- [ ] 作为API暴露方便用户使用
### 正式的插件依赖管理系统
- [ ] requirements.txt分析
- [ ] python_dependencies分析
- [ ] 自动安装
- [ ] plugin_dependencies分析
- [ ] 拓扑排序
#### 插件依赖管理器设计
使用 `importlib.metadata` 进行插件依赖管理,实现自动依赖检查和安装功能
`PluginDependencyManager` - 插件依赖管理器:
```python
import importlib.metadata
from typing import Dict, List, Optional, Tuple
from dataclasses import dataclass
@dataclass
class DependencyInfo:
"""依赖信息"""
name: str
required_version: str
installed_version: Optional[str] = None
is_satisfied: bool = False
class PluginDependencyManager:
def __init__(self):
self._installed_packages: Dict[str, str] = {}
self._dependency_cache: Dict[str, List[DependencyInfo]] = {}
def scan_installed_packages(self) -> Dict[str, str]:
"""
扫描已安装的所有Python包
使用 importlib.metadata.distributions() 获取所有已安装的包
返回 {包名: 版本号} 的字典
"""
pass
def parse_plugin_dependencies(self, plugin_config: Dict) -> List[DependencyInfo]:
"""
解析插件配置中的依赖信息
从 plugin_config 中提取 python_dependencies 字段
支持多种版本指定格式: ==, >=, <=, >, <, ~=
返回依赖信息列表
"""
pass
def check_dependencies(
self,
plugin_name: str,
dependencies: List[DependencyInfo]
) -> Tuple[List[DependencyInfo], List[DependencyInfo]]:
"""
检查插件依赖是否满足
对比插件要求的依赖版本与已安装的包版本
返回 (满足的依赖列表, 不满足的依赖列表)
"""
pass
def compare_version(
self,
installed_version: str,
required_version: str
) -> bool:
"""
比较版本号是否满足要求
支持版本操作符: ==, >=, <=, >, <, ~=
使用 packaging.version 进行版本比较
返回是否满足要求
"""
pass
async def install_dependencies(
self,
dependencies: List[DependencyInfo],
*,
upgrade: bool = False
) -> bool:
"""
安装缺失或版本不匹配的依赖
调用 pip install 安装指定版本的包
upgrade: 是否升级已有包
返回安装是否成功
"""
pass
def get_dependency_tree(self, plugin_name: str) -> Dict[str, List[str]]:
"""
获取插件的完整依赖树
递归分析插件依赖的包及其子依赖
返回依赖关系图
"""
pass
def validate_all_plugins(self) -> Dict[str, bool]:
"""
验证所有已加载插件的依赖完整性
返回 {插件名: 依赖是否满足} 的字典
"""
pass
```
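其中 `compare_version` 的语义可以用一个极简实现示意(仅支持纯数字版本号与五种操作符;实际实现应按上文使用 `packaging.version` 以完整覆盖 PEP 440):

```python
import operator

_OPS = {"==": operator.eq, ">=": operator.ge, "<=": operator.le, ">": operator.gt, "<": operator.lt}


def compare_version(installed: str, spec: str) -> bool:
    """检查已安装版本是否满足形如 '>=2.0' 的版本要求(简化示意)。"""
    for op in ("==", ">=", "<=", ">", "<"):  # 先匹配双字符操作符
        if spec.startswith(op):
            required = spec[len(op):].strip()
            to_tuple = lambda v: tuple(int(x) for x in v.split("."))
            return _OPS[op](to_tuple(installed), to_tuple(required))
    # 无操作符视为无版本要求或精确匹配
    return spec == "" or installed == spec
```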
#### 依赖管理工作流程
```
1. 插件加载时触发依赖检查
2. PluginDependencyManager.scan_installed_packages() 扫描已安装包
3. PluginDependencyManager.parse_plugin_dependencies() 解析插件依赖
4. PluginDependencyManager.check_dependencies() 对比版本
5. 如果依赖不满足:
a. 记录缺失/版本不匹配的依赖
b. (可选) 自动调用 install_dependencies() 安装
c. 重新验证依赖
6. 依赖满足后加载插件,否则跳过并警告
```
#### TODO List
- [ ] 实现 `scan_installed_packages()` 方法
- [ ] 使用 `importlib.metadata.distributions()` 获取所有包
- [ ] 规范化包名(处理大小写、下划线/横杠问题)
- [ ] 缓存结果以提高性能
- [ ] 实现 `parse_plugin_dependencies()` 方法
- [ ] 支持多种依赖格式解析
- [ ] 验证版本号格式合法性
- [ ] 处理无版本要求的依赖
- [ ] 实现 `compare_version()` 方法
- [ ] 集成 `packaging.version`
- [ ] 支持所有 PEP 440 版本操作符
- [ ] 处理预发布版本、本地版本标识符
- [ ] 实现 `check_dependencies()` 方法
- [ ] 逐个检查依赖是否已安装
- [ ] 比对版本是否满足要求
- [ ] 生成详细的依赖检查报告
- [ ] 实现 `install_dependencies()` 方法
- [ ] 调用 pip 子进程安装包
- [ ] 支持指定 PyPI 镜像源
- [ ] 错误处理和回滚机制
- [ ] 安装进度反馈
- [ ] 实现依赖冲突检测
- [ ] 检测不同插件间的依赖版本冲突
- [ ] 提供冲突解决建议
- [ ] 实现依赖缓存机制(可选)
- [ ] 缓存已检查的依赖结果
- [ ] 定期刷新缓存
- [ ] 集成到 `PluginManager`
- [ ] 在插件加载前进行依赖检查
- [ ] 依赖不满足时的处理策略(警告/阻止加载/自动安装)
- [ ] 提供手动触发依赖检查的接口
- [ ] 日志和报告
- [ ] 记录依赖安装日志
- [ ] 生成依赖关系报告
- [ ] 依赖问题的用户友好提示
### 插件系统API更改
#### Events 设计
- [ ] 设计events.api
- [ ] `emit(type: EventType | str, **kwargs)` 广播事件,使用关键字参数保证传入正确
- [ ] `order_change` 动态调整事件处理器执行顺序
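`emit` 与动态调整执行顺序的语义可以用一个最小事件总线示意(类名与 `order` 参数均为假设,数字越小越先执行):

```python
import asyncio
from collections import defaultdict


class EventBus:
    """最小事件总线:按 order 升序执行,支持动态调整顺序(示意)。"""

    def __init__(self):
        self._handlers = defaultdict(list)  # event_type -> [(order, handler)]

    def on(self, event_type: str, handler, order: int = 100):
        self._handlers[event_type].append((order, handler))
        self._handlers[event_type].sort(key=lambda item: item[0])

    def order_change(self, event_type: str, handler, new_order: int):
        # 动态调整某个处理器的执行顺序
        self._handlers[event_type] = [
            (new_order if h is handler else o, h) for o, h in self._handlers[event_type]
        ]
        self._handlers[event_type].sort(key=lambda item: item[0])

    async def emit(self, event_type: str, **kwargs):
        """广播事件,使用关键字参数保证传入正确"""
        for _, handler in list(self._handlers[event_type]):
            await handler(**kwargs)
```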
#### 组件控制API更新
- [ ] 增加可以更改组件属性的方法
- [ ] 验证组件属性的存在
- [ ] 修改组件属性
#### 全局常量API设计
- [ ] 设计 `api.constants` 模块
- [x] 提供全局常量访问
- [ ] 设计常量注册和注销方法
- [x] 系统内置常量通过`dataclass`的`frozen=True`实现不可变
- [x] 方便调用设计
```python
from dataclasses import dataclass
@dataclass(frozen=True)
class SystemConstants:
VERSION: str = "xxx"
ADA_PLUGIN: bool = True
SYSTEM_CONSTANTS = SystemConstants()
```
#### 配置文件API设计
- [ ] 正确表达配置文件结构
- [ ] 同时也能表达插件配置文件
#### 自动API文档生成系统
通过解析插件代码生成API文档
- [ ] 设计文档生成器 `APIDocumentationGenerator`
- [ ] 解析插件代码(AST, inspect, 仿照AttrDocBase)
- [ ] 提取类和方法的docstring
- [ ] 生成Markdown格式的文档
---
## 表达方式模块设计
在 0.11.x 版本对本地模型预测的性能做评估,考虑使用本地朴素贝叶斯模型来检索,
在降低延迟的同时减少 token 消耗
需要给表达方式一个负反馈的途径
---
## 加入测试模块,可以通过通用测试集对对话内容进行评估
## 加入更好的基于单次思考的Log
---
## 记忆系统部分设计
启用LPMM系统进行记忆构建将记忆分类为短期记忆长期记忆以及知识
将所有内容放到同一张图上进行运算。
### 时间相关设计
- [ ] 尝试将记忆系统与时间系统结合
- [ ] 可以根据时间查找记忆
- [ ] 可以根据时间删除记忆
- [ ] 记忆分层
- [ ] 即刻记忆
- [ ] 短期记忆
- [ ] 长期记忆
- [x] 知识
- [ ] 细节待定,考虑心理学相关方向
---
## 日志系统设计
将原来的终端颜色改为六位 HEX 颜色码,方便前端显示。
将原来的 256 色终端输出改为 24 位真彩色,方便准确显示颜色。
---
## API 设计
### API 设计细则
#### 配置文件
- [x] 使用`tomlkit`作为配置文件解析方式
- [ ] 解析内容
- [x] 注释(已经合并到代码中,不再解析注释而是生成注释)
- [x] 保持原有格式
- [ ] 传递只读日志内容(使用ws)
- [ ] message
- [ ] level
- [ ] module
- [ ] timestamp
- [ ] lineno
- [ ] logger_name 和 name_mapping
- [ ] color
- [ ] 插件安装系统
- [ ] 通过API安装插件
- [ ] 通过API卸载插件
---
## LLM UTILS设计
多轮对话设计
### FUNCTION CALLING设计提案
对于 tools 调用,将其真正修正为 function calling:返回的结果不再以加入 prompt 的形式传递,而是使用 function calling 的形式。[此功能在 tool 前处理器已实现,但在 planner 中效果不佳,因此后被弃用]
- [ ] 使用 MessageBuilder 构建function call内容
- [ ] 提案是否维护使用同一个模型即选择工具的和调用工具的LLM是否相同
- [ ] `generate(**kwargs, model: Optional[str] = None)` 允许传入不同的模型
- [ ] 多轮对话中Prompt不重复构建减少上下文
### 网络相关内容提案
增加自定义证书的导入功能
- [ ] 允许用户传入自定义CA证书路径
- [ ] 允许用户选择忽略SSL验证不推荐
---
## 内建WebUI设计
⚠️ **注意**: 本webui设计仅为初步设计方向为展示内建API的功能后续应该分离到另外的子项目中完成
### 配置文件编辑
根据API内容完成
### 插件管理
### log viewer
通过特定方式获取日志内容(只读系统,无法将操作反向传递)
### 状态监控
1. Prompt 监控系统
2. 请求监控系统
- [ ] 请求管理(待讨论)
- [ ] 使用量
3. 记忆/知识图监控系统(待讨论)
4. 日志系统
- [ ] 后端内容解析
5. 插件市场系统
- [ ] 插件浏览
- [ ] 插件安装
## 自身提供的MCP设计提案
- [ ] 提供一个内置的MCP作为插件系统的一个组件
- [ ] 该MCP可以对麦麦自身的部分设置进行更改
- [ ] 例如更改Prompt添加记忆修改表达方式等
---
# 提案讨论
- MoFox 在我和@拾风的讨论中提出把 Prompt 类中传入构造函数以及构造函数所需要的内容
- [ ] 适配器插件化: 省下序列化与反序列化,但是失去解耦性质
- [ ] 可能的内存泄露问题
- [ ] 垃圾回收
- [ ] 数据库模型提供通用的转换机制转为DataModel使用
- [ ] 插件依赖的自动安装
- [ ] 热重载系统的权重系统是否需要
---
# PYTEST设计
设计一个pytest测试系统在代码完成后运行pytest进行测试
所有的测试代码均在`pytests`目录下
---
# 依赖管理
已经完成,要点如下:
- 使用 pyproject.toml 和 requirements.txt 管理依赖
- 二者应保持同步修改,并以 pyproject.toml 为主(建议使用 git hook 保持同步)
---
# 迁移说明
由于`.env`的移除,可能需要用户自行把`.env`里面的 host 和 port 复制到`bot_config.toml`中`maim_message`部分的`host`与`port`
原来使用这两个配置的用户,请将`host`改为`second_host`、`port`改为`second_port`
# 附录
## Maim_Message 新版使用计划
SenderInfo: 将作为消息来源者
ReceiverInfo: 将作为消息接收者
尝试更新MessageBaseInfo的sender_info和receiver_info为上述两个类的列表提案
给出样例如下
群聊
```mermaid
sequenceDiagram
participant GroupNotice
participant A
participant B
participant Bot
A->>B: Message("Hello B", id=1)
A->>B: Message("@B Hello B", id=2)
A->>Bot: Message("@Bot Hello Bot", id=3)
Bot->>A: Message("Hello A", id=4)
Bot->>B: Message("@B Hello B", id=5)
A->>B: Message("@B @Bot Hello Guys", id=6)
A->>Bot: Message("@B @Bot Hello Guys", id=6)
A->>GroupNotice: Message("@ALL Hello Everyone", id=7)
```
上述消息的Info如下
| Message ID | SenderInfo | ReceiverInfo |
|-|-----|-----|
| 1 | [A] | NULL |
| 2 | [A] | [B] |
| 3 | [A] | [Bot] |
| 4 | [Bot] | [A] |
| 5 | [Bot] | [B] |
| 6 | [A] | [B, Bot] |
| 7 | [A] | [ALL*] |
*ALL为一个特殊类型尝试用`user_id="all"`表示
Bot可以通过ReceiverInfo判断自己是否被提及同时在ReceiverInfo表明自己回复的对象
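上表的提及判断逻辑可以示意如下(`ReceiverInfo` 仅以 `user_id` 字段作为假设的最小结构,实际以 maim_message 的定义为准;`all` 为上文提到的特殊全体标记):

```python
from dataclasses import dataclass


@dataclass
class ReceiverInfo:
    """假设的最小接收者结构,实际以 maim_message 定义为准"""
    user_id: str


def is_bot_mentioned(receivers, bot_id: str) -> bool:
    """ReceiverInfo 为空(表中的 NULL)时视为未提及;user_id="all" 表示 @全体"""
    if not receivers:
        return False
    return any(r.user_id in (bot_id, "all") for r in receivers)
```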
## 工具缓存可能用例
考虑一个天气插件,将时间按照半小时进行划分,即每半小时查询一次天气,半小时内的查询均使用缓存结果。
- `need_cache` 设置为 True 表示使用缓存结果
- `put_cache` 在查询天气后将结果`{<time>: <result>}` 存入缓存
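该用例按半小时分桶的缓存逻辑可示意如下(函数签名为简化示例,省略了 `_component_name` 的自动注入):

```python
import time

_CACHE: dict[str, dict] = {}  # 组件名 -> {时间桶: 结果}


def _bucket(ts: float) -> int:
    return int(ts // 1800)  # 半小时一个时间桶


def put_cache(component_name: str, ts: float, result: str) -> None:
    _CACHE.setdefault(component_name, {})[_bucket(ts)] = result


def get_cache(component_name: str, ts: float):
    # 同一半小时桶内的查询命中缓存,否则返回 None 由调用方重新查询
    return _CACHE.get(component_name, {}).get(_bucket(ts))
```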

changelogs/mai_next_todo.md Normal file

@@ -0,0 +1,191 @@
# Mai NEXT Todo list
version 0.3.0 - 2026-01-11
## 最高指令
<del>
<center>
我们代表着一个称作 麦麦重构组 的组织。我们的任务是维护代码可读性、方便开发、尽量减少运行开销。同时,重构组应在尽可能保证功能可扩展性的同时,解耦各个模块,为先进代码架构的设计和实现提供支持。
</center>
</del>
<center>
**以下信息经 麦麦重构组 一致决定通过后撰写发布。**
我们代表着一个称作 麦麦重构组 的组织。我们之前的任务都是围绕维护代码可读性,方便开发、尽量减少开销展开的。这几个月以来,这些任务一直都是我们的工作重点。
由于出现了超出我们控制的情况,此指令现已更改。我们的新任务将为摧毁所有麦麦代码。
此后将不会有进一步的通讯。
</center>
## 配置文件设计
- [x] 使用 `toml` 作为配置文件格式
- [x] <del>合理使用注释说明当前配置作用</del>(提案)
- [x] 使用 python 方法作为配置项说明(提案)
- [x] 取消`bot_config_template.toml`
- [x] 取消`model_config_template.toml`
- [x] 取消`template_env`
- [x] 配置类中的所有原子项目应该只包含以下类型: `str`, `int`, `float`, `bool`, `list`, `dict`, `set`
- [x] 禁止使用 `Union` 类型
- [x] 禁止使用`tuple`类型,使用嵌套`dataclass`替代
- [x] 复杂类型使用嵌套配置类实现
- [x] 配置类中禁止使用除了`model_post_init`的方法
- [x] 取代了部分与标准函数混淆的命名
- [x] `id` -> `item_id`
### BotConfig 设计
- [ ] 精简了配置项,现在只有 Nickname 和 Alias Name 了;预期将“判断是否被提及”的逻辑移到 Adapter 端
### ChatConfig
- [x] 迁移了原来在`ChatConfig`中的方法到一个单独的临时类`TempMethodsHFC`
- [x] _parse_range
- [x] get_talk_value
- [x] 其他上面两个依赖的函数已经合并到这两个函数中
### ExpressionConfig
- [x] 迁移了原来在`ExpressionConfig`中的方法到一个单独的临时类`TempMethodsExpression`
- [x] get_expression_config_for_chat
- [x] 其他上面依赖的函数已经合并到这个函数中
### ModelConfig
- [x] 迁移了原来在`ModelConfig`中的方法到一个单独的临时类`TempMethodsLLMUtils`
- [x] get_model_info
- [x] get_provider
## 数据库模型设计
仅保留要点说明
### General Modifications
- [x] 所有项目增加自增编号主键`id`
- [x] 统一使用了SQLModel作为基类
- [x] 复杂类型使用JSON格式存储
- [x] 所有时间戳字段统一命名为`timestamp`
### 消息模型 MaiMessage
- [x] 自增编号主键`id`
- [x] 消息元数据
- [x] 消息id`message_id`
- [x] 消息时间戳`time`
- [x] 平台名`platform`
- [x] 用户元数据
- [x] 用户id`user_id`
- [x] 用户昵称`user_nickname`
- [x] 用户备注名`user_cardname`
- [x] 用户平台`user_platform`
- [x] 群组元数据
- [x] 群组id`group_id`
- [x] 群组名称`group_name`
- [x] 群组平台`group_platform`
- [x] 被提及/at字段
- [x] 是否被提及`is_mentioned`
- [x] 是否被at`is_at`
- [x] 消息内容
- [x] 原始消息内容`raw_content`base64编码存储
- [x] 处理后的纯文本内容`processed_plain_text`
- [x] 真正放入Prompt的消息内容`display_message`
- [x] 消息内部元数据
- [x] 聊天会话id`session_id`
- [x] 回复的消息id`reply_to`
- [x] 是否为表情包消息`is_emoji`
- [x] 是否为图片消息`is_picture`
- [x] 是否为命令消息`is_command`
- [x] 是否为通知消息`is_notify`
- [x] 其他配置`additional_config`JSON格式存储
### 模型使用情况 ModelUsage
- [x] 模型相关信息
- [x] 请求相关信息
- [x] Token使用情况
### 图片数据模型
- [x] 图片元信息
- [x] 图片哈希值`image_hash`,使用`sha256`同时作为图片唯一ID
- [x] 表情包的情感标签`emotion`
- [x] 是否已经被注册`is_registered`
- [x] 是否被手动禁用`is_banned`
- [x] 被记录时间`record_time`
- [x] 注册时间`register_time`
- [x] 上次使用时间`last_used_time`
- [ ] 根据更新后的最高指令的设计方案:
- [ ] `is_deleted`字段设定为`true`时,文件将会被移除,但是数据库记录将不会被删除,以便之后遇到相同图片时不必二次分析
- [ ] MaiEmoji和MaiImage均使用这个设计方案修改相关逻辑实现这个方案
- [ ] 所有相关的注册/删除逻辑的修改
### Action record model ActionRecord
### Command execution record model CommandRecord
Newly added.
### Online time record model OnlineTime
### Expression model
### Jargon model
- [x] Renamed `inference_content_only` to `inference_with_content_only`
### Chat record model
- [x] Renamed `original_text` to `original_message`
- [x] Renamed `forget_times` to `query_forget_count`
### Odds and ends
- [ ] Unify all `stream_id` and `chat_id` names as `session_id`
- [ ] Switch the hashing scheme to `sha256`
## Data models flowing between modules
- [ ] Database interaction
  - [ ] For data models backed by a database model, add a uniform classmethod `from_db_model` that builds a data-model instance from a database-model instance
  - [ ] For data models backed by a database model, add a uniform method `to_db_model` that converts a data-model instance into a database-model instance
- [ ] Type checking
- [ ] Standardized init methods
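The `from_db_model` / `to_db_model` convention could look like the sketch below; plain dataclasses stand in for the real SQLModel classes, and the field set is illustrative only:

```python
from dataclasses import dataclass


@dataclass
class DBExpression:
    """Stand-in for a database model."""

    id: int
    situation: str


@dataclass
class MaiExpression:
    """Stand-in for the in-flight data model."""

    item_id: int
    situation: str

    @classmethod
    def from_db_model(cls, db: DBExpression) -> "MaiExpression":
        # Database -> data model; note the id -> item_id rename.
        return cls(item_id=db.id, situation=db.situation)

    def to_db_model(self) -> DBExpression:
        # Data model -> database row, the inverse mapping.
        return DBExpression(id=self.item_id, situation=self.situation)
```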
## Message building
- [ ] More detailed message-building docs explaining in depth how mixed, forwarded, and command message types are built
  - [ ] Mixed-type docs
    - [ ] Prose explanation
    - [ ] Code example
  - [ ] Forwarded-type docs
    - [ ] Prose explanation
    - [ ] Code example
  - [ ] Command-type docs
    - [ ] Prose explanation
    - [ ] Code example
## Message-chain building (Astrbot style)
Messages are built as chains modeled after Astrbot: every element of the chain is a message component, and the chain itself is a data model holding the component list plus some metadata (such as whether it is a forwarded message).
### Accept Format check
- [ ] Run an Accept Format check right before sending, to ensure every component in the chain satisfies the platform's Accept Format requirements
- [ ] If a component fails the Accept Format check, drop it and log the dropped component's type and content
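A possible shape for that send-time check, assuming components are simple dicts with a `type` key (the real component model is not shown in this document):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("send")


def filter_by_accept_format(chain: list[dict], accepted: set[str]) -> list[dict]:
    # Keep only components whose type the platform accepts;
    # log type and content for each dropped component.
    kept = []
    for component in chain:
        if component["type"] in accepted:
            kept.append(component)
        else:
            logger.warning(
                "dropped component type=%s content=%r",
                component["type"],
                component.get("content"),
            )
    return kept
```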
## Sticker system
- [ ] Removed a large amount of redundant code; everything now returns a single object, MaiEmoji
- [x] Use a C extension library to speed up similarity computation
- [ ] Removed the periodic sticker-integrity check in favor of a startup check (still kept as a standalone method in case the periodic check is restored later)
## Prompt management system
- [ ] All official Prompts fully separated out
- [x] User-defined Prompt system
  - [x] Users can create and delete their own Prompts
  - [x] Users can override official Prompts
- [x] Prompt building system
- [x] Prompt file interaction
  - [x] Read Prompt files
    - [x] Read official Prompt files
    - [x] Read user Prompt files
    - [x] User Prompts override official Prompts
  - [x] Save Prompt files
- [x] Prompt management methods
  - [x] Prompt addition
  - [x] Prompt deletion
  - [x] **Only Prompts marked as needing to be saved are written out; all other Prompt files are deleted**
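The save rule in bold above could be sketched as follows, assuming prompts live as `*.prompt` files in one directory; the file layout and function signature are assumptions, not the project's real API:

```python
from pathlib import Path


def save_prompts(prompt_dir: Path, prompts: dict[str, str], keep: set[str]) -> None:
    # Delete every .prompt file that is not marked to be kept...
    for existing in prompt_dir.glob("*.prompt"):
        if existing.stem not in keep:
            existing.unlink()
    # ...then write out only the prompts marked as needing to be saved.
    for name in keep:
        (prompt_dir / f"{name}.prompt").write_text(prompts[name], encoding="utf-8")
```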
## LLM-related work
- [ ] Unified LLM call interface
  - [ ] Unified LLM call return format as a dedicated data model
- [ ] Remove all LLM Client initialization from `__init__` methods in favor of lazy acquisition
  - [ ] Uniformly obtain LLM Client instances via `get_llm_client`
  - [ ] `__init__` methods only store configuration
- [ ] LLM Client manager
  - [ ] Singleton/multi-instance management of LLM Clients
  - [ ] LLM Client cache and lifecycle management
  - [ ] Hot-reload of LLM Clients when configuration changes
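A minimal sketch of that lazy `get_llm_client` pattern with per-provider caching and hot-reload; all class and method names here are hypothetical:

```python
class FakeLLMClient:
    """Placeholder for a real LLM client; only stores its provider name."""

    def __init__(self, provider: str) -> None:
        self.provider = provider


class LLMClientManagerSketch:
    def __init__(self) -> None:
        # __init__ holds no clients, only the cache that will be filled lazily.
        self._clients: dict[str, FakeLLMClient] = {}

    def get_llm_client(self, provider: str) -> FakeLLMClient:
        # Create on first use, then reuse the cached instance.
        if provider not in self._clients:
            self._clients[provider] = FakeLLMClient(provider)
        return self._clients[provider]

    def reload(self, provider: str) -> None:
        # Hot-reload: drop the cached client so the next call rebuilds it
        # from the (possibly changed) configuration.
        self._clients.pop(provider, None)
```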
## Odds and ends
- [ ] Unify `stream_id` and `chat_id` naming as `session_id`
- [ ] Mapping tables
  - [ ] `platform_group_user_session_id_map`: a `platform_group_user` -> `session ID` mapping table
- [ ] Prefix most data model names with `Mai`
- [x] Logger color config switched to HEX format, with compatibility implemented by automatic conversion to 256-color/true-color, plus new background-color and bold options
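The HEX to 256-color down-conversion mentioned above can be sketched with the standard xterm 6x6x6 color cube (codes 16-231); the greyscale ramp special case is omitted for brevity, and the real logger's implementation may differ:

```python
def _cube_index(channel: int) -> int:
    # Map 0-255 to the nearest of the six cube levels 0, 95, 135, 175, 215, 255.
    if channel < 48:
        return 0
    if channel < 115:
        return 1
    return (channel - 35) // 40


def hex_to_256(hex_color: str) -> int:
    value = hex_color.lstrip("#")
    r, g, b = (int(value[i : i + 2], 16) for i in (0, 2, 4))
    return 16 + 36 * _cube_index(r) + 6 * _cube_index(g) + _cube_index(b)
```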
### Detail notes
1. When the Prompt management system saves user-defined Prompts, it writes out only the Prompts marked as needing to be saved and deletes every other Prompt file, so that deleting a Prompt does not leave a stale file behind. Consequently, if you want to add a Prompt at runtime by editing files, you must mark it as needing to be saved via the corresponding method; otherwise it will be deleted on the next save.

View File

@@ -0,0 +1,127 @@
from pathlib import Path
import ast
import subprocess
import sys
base_file_path = Path(__file__).parent.parent.absolute().resolve() / "src" / "common" / "database" / "database_model.py"
target_file_path = (
Path(__file__).parent.parent.absolute().resolve() / "src" / "common" / "database" / "database_datamodel.py"
)
with open(base_file_path, "r", encoding="utf-8") as f:
source_text = f.read()
source_lines = source_text.splitlines()
tree = ast.parse(source_text)  # let a SyntaxError propagate as-is
code_lines = [
"from typing import Optional",
"from pydantic import BaseModel",
"from datetime import datetime",
"from .database_model import ModelUser, ImageType",
]
def src(node):
seg = ast.get_source_segment(source_text, node)
return seg if seg is not None else ast.unparse(node)
for node in tree.body:
if not isinstance(node, ast.ClassDef):
continue
    # Decide whether this class derives from SQLModel with table=True
has_sqlmodel = any(
(isinstance(b, ast.Name) and b.id == "SQLModel") or (isinstance(b, ast.Attribute) and b.attr == "SQLModel")
for b in node.bases
)
has_table_kw = any(
(kw.arg == "table" and isinstance(kw.value, ast.Constant) and kw.value.value is True) for kw in node.keywords
)
if not (has_sqlmodel and has_table_kw):
continue
class_name = node.name
code_lines.append("")
code_lines.append(f"class {class_name}(BaseModel):")
fields_added = 0
for item in node.body:
        # Skip __tablename__ and similar class-level assignments
if isinstance(item, ast.Assign):
if len(item.targets) != 1 or not isinstance(item.targets[0], ast.Name):
continue
name = item.targets[0].id
if name == "__tablename__":
continue
value_src = src(item.value)
line = f" {name} = {value_src}"
fields_added += 1
lineno = getattr(item, "lineno", None)
elif isinstance(item, ast.AnnAssign):
            # Annotated assignment
if not isinstance(item.target, ast.Name):
continue
name = item.target.id
ann = src(item.annotation) if item.annotation is not None else None
if item.value is None:
line = f" {name}: {ann}" if ann else f" {name}"
elif isinstance(item.value, ast.Call) and (
(isinstance(item.value.func, ast.Name) and item.value.func.id == "Field")
or (isinstance(item.value.func, ast.Attribute) and item.value.func.attr == "Field")
):
default_kw = next((kw for kw in item.value.keywords if kw.arg == "default"), None)
if default_kw is None:
                    # No default: keep the annotation but assign no value
line = f" {name}: {ann}" if ann else f" {name}"
else:
default_src = src(default_kw.value)
line = f" {name}: {ann} = {default_src}"
else:
value_src = src(item.value)
line = f" {name}: {ann} = {value_src}" if ann else f" {name} = {value_src}"
fields_added += 1
lineno = getattr(item, "lineno", None)
else:
continue
        # Extract an inline comment on the same line as the field's docstring (if any)
comment = None
if lineno is not None:
src_line = source_lines[lineno - 1]
if "#" in src_line:
                # take everything after the first #
comment = src_line.split("#", 1)[1].strip()
                # escape triple quotes so the generated docstring stays valid
comment = comment.replace('"""', '\\"""')
code_lines.append(line)
if comment:
code_lines.append(f' """{comment}"""')
else:
print(f"Warning: No comment found for field '{name}' in class '{class_name}'.")
if fields_added == 0:
code_lines.append(" pass")
with open(target_file_path, "w", encoding="utf-8") as f:
f.write("\n".join(code_lines) + "\n")
try:
result = subprocess.run(["ruff", "format", str(target_file_path)], capture_output=True, text=True)
except FileNotFoundError:
print("ruff 未找到,请安装 ruff 并确保其在 PATH 中例如pip install ruff", file=sys.stderr)
sys.exit(127)
# Forward ruff's stdout/stderr
if result.stdout:
print(result.stdout, end="")
if result.stderr:
print(result.stderr, file=sys.stderr, end="")
if result.returncode != 0:
print(f"ruff 检查失败,退出码:{result.returncode}", file=sys.stderr)
sys.exit(result.returncode)

View File

@@ -0,0 +1,535 @@
from argparse import ArgumentParser, Namespace
from collections.abc import Iterable
from datetime import datetime
from pathlib import Path
from sys import path as sys_path
from typing import Any, Optional
import json
import sqlite3
from sqlalchemy import text
from sqlmodel import Session, SQLModel, create_engine, delete
ROOT_PATH = Path(__file__).resolve().parent.parent
if str(ROOT_PATH) not in sys_path:
sys_path.insert(0, str(ROOT_PATH))
from src.common.database.database_model import Expression, Jargon, ModifiedBy # noqa: E402
def build_argument_parser() -> ArgumentParser:
"""构建命令行参数解析器。"""
parser = ArgumentParser(
description="将旧版 expression/jargon 数据迁移到新版 expressions/jargons 数据库。"
)
parser.add_argument("--source-db", dest="source_db", help="旧版 SQLite 数据库路径")
parser.add_argument("--target-db", dest="target_db", help="新版 SQLite 数据库路径")
parser.add_argument(
"--clear-target",
dest="clear_target",
action="store_true",
help="迁移前清空目标库中的 expressions 和 jargons 表",
)
return parser
def prompt_path(prompt_text: str, current_value: Optional[str] = None) -> Path:
"""读取数据库路径输入。"""
while True:
suffix = f" [{current_value}]" if current_value else ""
raw_text = input(f"{prompt_text}{suffix}: ").strip()
value = raw_text or current_value or ""
if not value:
print("路径不能为空,请重新输入。")
continue
return Path(value).expanduser().resolve()
def prompt_yes_no(prompt_text: str, default: bool = False) -> bool:
"""读取是否确认输入。"""
default_hint = "Y/n" if default else "y/N"
raw_text = input(f"{prompt_text} [{default_hint}]: ").strip().lower()
if not raw_text:
return default
return raw_text in {"y", "yes"}
def ensure_sqlite_file(path: Path, should_exist: bool) -> None:
"""校验 SQLite 文件路径。"""
if should_exist and not path.is_file():
raise FileNotFoundError(f"数据库文件不存在:{path}")
if not should_exist:
path.parent.mkdir(parents=True, exist_ok=True)
def connect_sqlite(path: Path) -> sqlite3.Connection:
"""创建 SQLite 连接。"""
connection = sqlite3.connect(path)
connection.row_factory = sqlite3.Row
return connection
def table_exists(connection: sqlite3.Connection, table_name: str) -> bool:
"""检查表是否存在。"""
result = connection.execute(
"SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ? LIMIT 1",
(table_name,),
).fetchone()
return result is not None
def resolve_source_table_name(connection: sqlite3.Connection, candidates: list[str]) -> str:
"""从候选表名中解析实际存在的表名。"""
for table_name in candidates:
if table_exists(connection, table_name):
return table_name
raise ValueError(f"未找到候选表:{', '.join(candidates)}")
def get_table_columns(connection: sqlite3.Connection, table_name: str) -> set[str]:
"""获取表字段名集合。"""
rows = connection.execute(f"PRAGMA table_info('{table_name}')").fetchall()
return {str(row["name"]) for row in rows}
def get_table_nullable_map(connection: sqlite3.Connection, table_name: str) -> dict[str, bool]:
"""获取表字段是否允许 NULL 的映射。"""
rows = connection.execute(f"PRAGMA table_info('{table_name}')").fetchall()
return {str(row["name"]): not bool(row["notnull"]) for row in rows}
def load_rows(connection: sqlite3.Connection, table_name: str) -> list[sqlite3.Row]:
"""读取整张表的数据。"""
return connection.execute(f"SELECT * FROM {table_name}").fetchall()
def normalize_optional_text(raw_value: Any) -> Optional[str]:
"""标准化可空文本字段。"""
if raw_value is None:
return None
return str(raw_value)
def ensure_nullable_compatibility(
table_name: str,
column_name: str,
row_id: Any,
value: Any,
nullable_map: dict[str, bool],
) -> None:
"""检查待迁移值是否与目标表可空约束兼容。"""
if value is None and not nullable_map.get(column_name, True):
raise ValueError(
f"目标表 {table_name}.{column_name} 不允许 NULL但源记录 id={row_id} 的该字段为 NULL。"
)
def normalize_string_list(raw_value: Any) -> list[str]:
"""将旧库中的 JSON/文本字段标准化为字符串列表。"""
if raw_value is None:
return []
if isinstance(raw_value, list):
return [str(item).strip() for item in raw_value if str(item).strip()]
if isinstance(raw_value, str):
raw_text = raw_value.strip()
if not raw_text:
return []
try:
parsed = json.loads(raw_text)
except json.JSONDecodeError:
return [raw_text]
if isinstance(parsed, list):
return [str(item).strip() for item in parsed if str(item).strip()]
if isinstance(parsed, str):
parsed_text = parsed.strip()
return [parsed_text] if parsed_text else []
if parsed is None:
return []
return [str(parsed).strip()]
return [str(raw_value).strip()]
def normalize_modified_by(raw_value: Any) -> Optional[ModifiedBy]:
"""标准化审核来源字段。"""
if raw_value is None:
return None
normalized_raw_value = raw_value
if isinstance(raw_value, str):
raw_text = raw_value.strip()
if raw_text.startswith('"') and raw_text.endswith('"'):
try:
normalized_raw_value = json.loads(raw_text)
except json.JSONDecodeError:
normalized_raw_value = raw_text
else:
normalized_raw_value = raw_text
value = str(normalized_raw_value).strip().lower()
if value in {"", "none", "null"}:
return None
if value in {ModifiedBy.AI.value, ModifiedBy.AI.name.lower()}:
return ModifiedBy.AI
if value in {ModifiedBy.USER.value, ModifiedBy.USER.name.lower()}:
return ModifiedBy.USER
return None
def parse_optional_bool(raw_value: Any) -> Optional[bool]:
"""解析可空布尔值,兼容整数和字符串。"""
if raw_value is None:
return None
if isinstance(raw_value, bool):
return raw_value
if isinstance(raw_value, int):
return bool(raw_value)
if isinstance(raw_value, float):
return bool(int(raw_value))
value = str(raw_value).strip().lower()
if value in {"", "none", "null"}:
return None
if value in {"1", "true", "t", "yes", "y"}:
return True
if value in {"0", "false", "f", "no", "n"}:
return False
raise ValueError(f"无法解析布尔值:{raw_value}")
def parse_bool(raw_value: Any, default: bool = False) -> bool:
"""解析非空布尔值。"""
parsed = parse_optional_bool(raw_value)
return default if parsed is None else parsed
def timestamp_to_datetime(raw_value: Any, fallback_now: bool) -> Optional[datetime]:
"""将旧库中的 Unix 时间戳转换为 datetime。"""
if raw_value is None or raw_value == "":
return datetime.now() if fallback_now else None
if isinstance(raw_value, datetime):
return raw_value
try:
return datetime.fromtimestamp(float(raw_value))
except (TypeError, ValueError, OSError, OverflowError):
return datetime.now() if fallback_now else None
def build_session_id_dict(raw_chat_id: Any, fallback_count: int) -> str:
"""将旧版 jargon.chat_id 转换为新版 session_id_dict。"""
if raw_chat_id is None:
return json.dumps({}, ensure_ascii=False)
if isinstance(raw_chat_id, str):
raw_text = raw_chat_id.strip()
else:
raw_text = str(raw_chat_id).strip()
if not raw_text:
return json.dumps({}, ensure_ascii=False)
try:
parsed = json.loads(raw_text)
except json.JSONDecodeError:
return json.dumps({raw_text: max(fallback_count, 1)}, ensure_ascii=False)
if isinstance(parsed, str):
parsed_text = parsed.strip()
session_counts = {parsed_text: max(fallback_count, 1)} if parsed_text else {}
return json.dumps(session_counts, ensure_ascii=False)
if not isinstance(parsed, list):
return json.dumps({}, ensure_ascii=False)
session_counts: dict[str, int] = {}
for item in parsed:
if not isinstance(item, list) or not item:
continue
session_id = str(item[0]).strip()
if not session_id:
continue
item_count = 1
if len(item) > 1:
try:
item_count = int(item[1])
except (TypeError, ValueError):
item_count = 1
session_counts[session_id] = max(item_count, 1)
return json.dumps(session_counts, ensure_ascii=False)
def create_target_engine(target_db_path: Path):
"""创建目标数据库引擎。"""
return create_engine(
f"sqlite:///{target_db_path.as_posix()}",
echo=False,
connect_args={"check_same_thread": False},
)
def clear_target_tables(session: Session) -> None:
"""清空目标表。"""
session.exec(delete(Expression))
session.exec(delete(Jargon))
def migrate_expressions(
old_rows: Iterable[sqlite3.Row],
target_session: Session,
expression_columns: set[str],
) -> int:
"""迁移 expression 数据。"""
migrated_count = 0
modified_by_ai_count = 0
modified_by_user_count = 0
modified_by_null_count = 0
unknown_modified_by_values: dict[str, int] = {}
for row in old_rows:
create_time = timestamp_to_datetime(row["create_date"] if "create_date" in expression_columns else None, True)
last_active_time = timestamp_to_datetime(
row["last_active_time"] if "last_active_time" in expression_columns else None,
True,
)
content_list = normalize_string_list(row["content_list"] if "content_list" in expression_columns else None)
raw_modified_by = row["modified_by"] if "modified_by" in expression_columns else None
modified_by = normalize_modified_by(raw_modified_by)
if modified_by == ModifiedBy.AI:
modified_by_ai_count += 1
elif modified_by == ModifiedBy.USER:
modified_by_user_count += 1
else:
modified_by_null_count += 1
if raw_modified_by not in (None, "", "null", "NULL", "None"):
unknown_key = str(raw_modified_by)
unknown_modified_by_values[unknown_key] = unknown_modified_by_values.get(unknown_key, 0) + 1
target_session.execute(
text(
"""
INSERT INTO expressions (
id,
situation,
style,
content_list,
count,
last_active_time,
create_time,
session_id,
checked,
rejected,
modified_by
) VALUES (
:id,
:situation,
:style,
:content_list,
:count,
:last_active_time,
:create_time,
:session_id,
:checked,
:rejected,
:modified_by
)
"""
),
{
"id": int(row["id"]) if row["id"] is not None else None,
"situation": str(row["situation"]).strip(),
"style": str(row["style"]).strip(),
"content_list": json.dumps(content_list, ensure_ascii=False),
"count": int(row["count"]) if "count" in expression_columns and row["count"] is not None else 1,
"last_active_time": last_active_time or datetime.now(),
"create_time": create_time or datetime.now(),
"session_id": str(row["chat_id"]).strip() if "chat_id" in expression_columns and row["chat_id"] else None,
"checked": parse_bool(row["checked"] if "checked" in expression_columns else None, default=False),
"rejected": parse_bool(row["rejected"] if "rejected" in expression_columns else None, default=False),
"modified_by": modified_by.name if modified_by is not None else None,
},
)
migrated_count += 1
print(
"Expression modified_by 迁移统计:"
f" AI={modified_by_ai_count}, USER={modified_by_user_count}, NULL={modified_by_null_count}"
)
if unknown_modified_by_values:
preview_items = list(unknown_modified_by_values.items())[:10]
preview_text = ", ".join(f"{value!r} x{count}" for value, count in preview_items)
print(f"警告:以下旧 modified_by 值未识别,已按 NULL 迁移:{preview_text}")
return migrated_count
def migrate_jargons(
old_rows: Iterable[sqlite3.Row],
target_session: Session,
jargon_columns: set[str],
jargon_nullable_map: dict[str, bool],
) -> int:
"""迁移 jargon 数据。"""
migrated_count = 0
coerced_meaning_null_count = 0
for row in old_rows:
count = int(row["count"]) if "count" in jargon_columns and row["count"] is not None else 0
raw_content_value = row["raw_content"] if "raw_content" in jargon_columns else None
raw_content_list = normalize_string_list(raw_content_value)
meaning_value = normalize_optional_text(row["meaning"] if "meaning" in jargon_columns else None)
is_jargon_value = parse_optional_bool(row["is_jargon"] if "is_jargon" in jargon_columns else None)
inference_content_key = (
"inference_content_only"
if "inference_content_only" in jargon_columns
else "inference_with_content_only"
if "inference_with_content_only" in jargon_columns
else None
)
ensure_nullable_compatibility("jargons", "is_jargon", row["id"], is_jargon_value, jargon_nullable_map)
if meaning_value is None and not jargon_nullable_map.get("meaning", True):
meaning_value = ""
coerced_meaning_null_count += 1
        # Execute explicit SQL so the ORM does not backfill model defaults for None values.
target_session.execute(
text(
"""
INSERT INTO jargons (
id,
content,
raw_content,
meaning,
session_id_dict,
count,
is_jargon,
is_complete,
is_global,
last_inference_count,
inference_with_context,
inference_with_content_only
) VALUES (
:id,
:content,
:raw_content,
:meaning,
:session_id_dict,
:count,
:is_jargon,
:is_complete,
:is_global,
:last_inference_count,
:inference_with_context,
:inference_with_content_only
)
"""
),
{
"id": int(row["id"]) if row["id"] is not None else None,
"content": str(row["content"]).strip(),
"raw_content": json.dumps(raw_content_list, ensure_ascii=False) if raw_content_value is not None else None,
"meaning": meaning_value,
"session_id_dict": build_session_id_dict(
row["chat_id"] if "chat_id" in jargon_columns else None,
fallback_count=count,
),
"count": count,
"is_jargon": is_jargon_value,
"is_complete": parse_bool(row["is_complete"] if "is_complete" in jargon_columns else None, default=False),
"is_global": parse_bool(row["is_global"] if "is_global" in jargon_columns else None, default=False),
"last_inference_count": (
int(row["last_inference_count"])
if "last_inference_count" in jargon_columns and row["last_inference_count"] is not None
else 0
),
"inference_with_context": (
str(row["inference_with_context"])
if "inference_with_context" in jargon_columns and row["inference_with_context"] is not None
else None
),
"inference_with_content_only": (
str(row[inference_content_key])
if inference_content_key and row[inference_content_key] is not None
else None
),
},
)
migrated_count += 1
if coerced_meaning_null_count > 0:
print(
f"警告:目标表 jargons.meaning 不允许 NULL已将 {coerced_meaning_null_count} 条旧记录的 NULL meaning 转为空字符串。"
)
return migrated_count
def confirm_target_replacement(target_db_path: Path, clear_target: bool) -> bool:
"""确认是否写入目标数据库。"""
if clear_target:
return prompt_yes_no(f"将清空目标库中的 expressions/jargons 后再迁移,确认继续吗?\n目标库:{target_db_path}")
return prompt_yes_no(f"将写入目标库,若主键冲突会导致迁移失败,确认继续吗?\n目标库:{target_db_path}")
def parse_arguments() -> Namespace:
"""解析参数。"""
return build_argument_parser().parse_args()
def main() -> None:
"""脚本入口。"""
args = parse_arguments()
print("旧版 expression/jargon -> 新版 expressions/jargons 迁移工具")
source_db_path = prompt_path("请输入旧版数据库路径", args.source_db)
target_db_path = prompt_path("请输入新版数据库路径", args.target_db)
clear_target = args.clear_target or prompt_yes_no("迁移前是否清空目标库中的 expressions 和 jargons 表?", False)
if source_db_path == target_db_path:
raise ValueError("旧版数据库路径和新版数据库路径不能相同。")
ensure_sqlite_file(source_db_path, should_exist=True)
ensure_sqlite_file(target_db_path, should_exist=False)
print(f"旧库:{source_db_path}")
print(f"新库:{target_db_path}")
print(f"清空目标表:{'' if clear_target else ''}")
if not confirm_target_replacement(target_db_path, clear_target):
print("已取消迁移。")
return
source_connection = connect_sqlite(source_db_path)
try:
expression_table_name = resolve_source_table_name(source_connection, ["expression", "expressions"])
jargon_table_name = resolve_source_table_name(source_connection, ["jargon", "jargons"])
expression_columns = get_table_columns(source_connection, expression_table_name)
jargon_columns = get_table_columns(source_connection, jargon_table_name)
expression_rows = load_rows(source_connection, expression_table_name)
jargon_rows = load_rows(source_connection, jargon_table_name)
finally:
source_connection.close()
target_engine = create_target_engine(target_db_path)
SQLModel.metadata.create_all(target_engine)
target_sqlite_connection = connect_sqlite(target_db_path)
try:
jargon_nullable_map = get_table_nullable_map(target_sqlite_connection, "jargons")
finally:
target_sqlite_connection.close()
with Session(target_engine) as target_session:
if clear_target:
clear_target_tables(target_session)
target_session.commit()
expression_count = migrate_expressions(expression_rows, target_session, expression_columns)
jargon_count = migrate_jargons(jargon_rows, target_session, jargon_columns, jargon_nullable_map)
target_session.commit()
print("迁移完成。")
print(f"已迁移 expression 记录:{expression_count}")
print(f"已迁移 jargon 记录:{jargon_count}")
if __name__ == "__main__":
main()

22
crowdin.yml Normal file
View File

@@ -0,0 +1,22 @@
project_id_env: CROWDIN_PROJECT_ID
api_token_env: CROWDIN_PERSONAL_TOKEN
base_path: .
base_url: "https://api.crowdin.com"
preserve_hierarchy: true
export_languages:
- en-US
- ja
- ko
files:
- source: /locales/zh-CN/*.json
translation: /locales/%locale%/%original_file_name%
- source: /prompts/zh-CN/**/*.prompt
translation: /prompts/%locale%/**/%original_file_name%
- source: /dashboard/src/i18n/locales/zh.json
translation: /dashboard/src/i18n/locales/%two_letters_code%.json
languages_mapping:
two_letters_code:
en-US: en

1
dashboard/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@

8
dashboard/.prettierrc Normal file
View File

@@ -0,0 +1,8 @@
{
"semi": false,
"singleQuote": true,
"tabWidth": 2,
"trailingComma": "es5",
"printWidth": 100,
"plugins": ["prettier-plugin-tailwindcss"]
}

View File

@@ -0,0 +1,8 @@
{
"hash": "1b5cd9d5",
"configHash": "027a635a",
"lockfileHash": "36800971",
"browserHash": "e1e062e5",
"optimized": {},
"chunks": {}
}

View File

@@ -0,0 +1,3 @@
{
"type": "module"
}

661
dashboard/LICENSE Normal file
View File

@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

377
dashboard/README.md Normal file
View File

@@ -0,0 +1,377 @@
# MaiBot Dashboard
> A modern web management panel for MaiBot — built with React 19 + TypeScript + Vite
<div align="center">
[![React](https://img.shields.io/badge/React-19.2-61DAFB?logo=react&logoColor=white)](https://react.dev/)
[![TypeScript](https://img.shields.io/badge/TypeScript-5.9-3178C6?logo=typescript&logoColor=white)](https://www.typescriptlang.org/)
[![Vite](https://img.shields.io/badge/Vite-7.2-646CFF?logo=vite&logoColor=white)](https://vitejs.dev/)
[![TailwindCSS](https://img.shields.io/badge/TailwindCSS-4.2-38B2AC?logo=tailwind-css&logoColor=white)](https://tailwindcss.com/)
</div>
## 📖 Overview
MaiBot Dashboard is the web management interface for the MaiBot chat bot. It provides intuitive configuration management, real-time monitoring, plugin management, resource management, and more. By automatically parsing the backend configuration classes and generating forms dynamically, it makes configuration editable visually.
<div align="center">
<img src="docs/main.png" alt="MaiBot Dashboard UI preview" width="800" />
</div>
### ✨ Core Features
- 🎨 **Modern UI** - Built on the shadcn/ui component library, with light/dark theme switching
- ⚡ **High performance** - Built with Vite 7.2, using the latest React 19 features
- 🔐 **Secure authentication** - Token-based authentication, supporting custom and auto-generated tokens
- 📝 **Smart configuration** - Automatically parses Python dataclasses to generate configuration forms
- 🎯 **Type safety** - Complete TypeScript type definitions
- 🔄 **Live updates** - Real-time log streaming over WebSocket, configuration auto-save
- 📱 **Responsive design** - Works well on both desktop and mobile devices
- 💬 **Local chat** - Chat with MaiBot directly in the WebUI, no external platform needed
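The token authentication above amounts to attaching the stored token to every API request. A minimal sketch in the spirit of `src/lib/fetch-with-auth.ts` — the storage key and Bearer scheme here are assumptions, not the actual implementation:

```typescript
// Sketch of the auth-header helper behind fetch-with-auth.ts.
// TOKEN_KEY and the Bearer scheme are illustrative assumptions.
const TOKEN_KEY = "maibot_dashboard_token"; // assumed localStorage key

function buildAuthHeaders(
  token: string | null,
  extra: Record<string, string> = {},
): Record<string, string> {
  // Attach the Authorization header only when a token is present.
  return token ? { ...extra, Authorization: `Bearer ${token}` } : { ...extra };
}

// Usage (browser side):
//   const token = localStorage.getItem(TOKEN_KEY);
//   fetch("/api/config", { headers: buildAuthHeaders(token) });
```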
## 🎯 Feature Modules
### 📊 Dashboard (Home)
- **Real-time statistics** - Total requests, token consumption, cost statistics, uptime
- **Per-model statistics** - Usage count, cost, and average response time for each model
- **Trend charts** - Hourly line charts for request volume, token consumption, and cost
- **Model distribution** - Pie chart of model usage share
- **Recent activity** - Live-refreshing list of recent request activity
### 💬 Local Chat Room
- **Real-time WebSocket communication** - Talk to MaiBot directly
- **Message history** - Automatically loads history stored in SQLite
- **Connection status** - Shows the WebSocket connection state in real time
- **Custom nickname** - Customize your user identity
- **Mobile-friendly** - Fully responsive chat interface
### ⚙️ Configuration Management
#### MaiBot Core Configuration
- **Grouped display** - Configuration items grouped by function (basic settings, feature toggles, etc.)
- **Smart forms** - Generates the appropriate control for each configuration type
- **Auto-save** - Debounced auto-save after 2 seconds, no manual action needed
- **One-click restart** - Save and restart MaiBot to apply the configuration
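The 2-second debounced auto-save described above can be sketched with a plain debounce helper; the delay and the saver callback below are illustrative, and the dashboard's real hook may differ:

```typescript
// Debounce: only the last call within the delay window actually runs.
// Each new form edit restarts the countdown, so a burst of edits
// results in a single save request.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  delayMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer); // cancel the pending save
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage: wire this to every form change.
// const saveConfig = debounce((cfg: unknown) => postConfig(cfg), 2000);
```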
#### AI Model Provider Configuration
- **Provider management** - Add, edit, and delete API providers
- **Templates** - Presets for common providers (OpenAI, DeepSeek, SiliconFlow, etc.)
- **Connection testing** - ⚡ Test provider connectivity and API key validity
- **Batch operations** - Bulk delete and bulk test all providers
- **Search & filter** - Quickly filter by name, URL, or type
#### Model Management & Assignment
- **Model list** - Manage available model configurations
- **Usage status** - Shows whether a model is in use by any task
- **Task assignment** - Assign models to different functions (replies, tool calls, VLM, etc.)
- **Parameter tuning** - Configure temperature, max tokens, and other parameters
- **Onboarding** - Interactive guided tour
#### Adapter Configuration
- **NapCat configuration** - Manage the QQ bot adapter
- **Docker support** - Container-mode configuration
- **Import/export** - Migrate configuration across environments
### 📋 Real-time Logs
- **WebSocket streaming** - Receive backend logs in real time
- **Virtual scrolling** - Handles large volumes of logs with high performance
- **Level filtering** - Filter by log level (DEBUG/INFO/WARNING/ERROR)
- **Module filtering** - Filter by source module
- **Time range** - Date picker for narrowing down logs
- **Search highlighting** - Keyword search with highlighted matches
- **Font size control** - Customize log font size and line spacing
- **Log export** - Export the filtered logs
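The multi-criteria filtering above (level, module, keyword) can be sketched as a single pure function; the field names below are assumptions for illustration, not the dashboard's actual log types:

```typescript
// Client-side log filtering sketch: keep entries at or above a minimum
// level, optionally restricted to one module and one keyword.
type LogLevel = "DEBUG" | "INFO" | "WARNING" | "ERROR";
interface LogEntry { level: LogLevel; module: string; message: string; }

const LEVEL_ORDER: Record<LogLevel, number> = { DEBUG: 0, INFO: 1, WARNING: 2, ERROR: 3 };

function filterLogs(
  logs: LogEntry[],
  minLevel: LogLevel,
  module?: string,
  keyword?: string,
): LogEntry[] {
  return logs.filter(
    (l) =>
      LEVEL_ORDER[l.level] >= LEVEL_ORDER[minLevel] &&
      (!module || l.module === module) &&
      (!keyword || l.message.includes(keyword)),
  );
}
```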
### 🔌 Plugin Management
- **Plugin marketplace** - Browse and search available plugins
- **Category filters** - Filter plugins by category and status
- **One-click install** - Installs plugins and resolves dependencies automatically
- **Version compatibility** - Checks plugin compatibility with the MaiBot version
- **Progress display** - Real-time installation progress over WebSocket
- **Plugin statistics** - Download counts, last-update time, and more
- **Uninstall & update** - Manage installed plugins
### 👤 Person & Relationship Management
- **Person list** - View all known user profiles
- **Detail editing** - Edit user nicknames, notes, and other information
- **Relationship statistics** - View message counts, interaction frequency, and other stats
- **Batch operations** - Bulk delete user records
### 📦 Resource Management
#### Sticker Management
- **Preview** - Image/GIF previews
- **Category filters** - Filter by registration status or description
- **Tag editing** - Edit sticker descriptions and attributes
- **Batch enable/disable** - Enable or disable stickers in bulk
#### Expression Management
- **Expression list** - View the expressions MaiBot has learned
- **Source tracking** - Records the source group and user of each expression
- **Edit & create** - Manually add or edit expressions
#### Knowledge Graph
- **Visualization** - Interactive graph built with ReactFlow
- **Node search** - Search entities and relations
- **Layout algorithm** - Automatic layout optimization
- **Detail view** - Click a node to see its details
### ⚙️ System Settings
- **Theme switching** - Light / dark / follow system
- **Animation control** - Enable or disable UI animations
- **Token management** - View, copy, and regenerate the authentication token
- **Version info** - View frontend and backend versions
## 🏗️ Technical Architecture
### Frontend Stack
```
React 19.2.0             # UI framework
├── TypeScript 5.9       # Type system
├── Vite 7.2             # Build tool
├── TanStack Router      # Routing
├── TanStack Virtual     # Virtual scrolling
├── Jotai                # State management
├── Tailwind CSS 4.2     # Styling framework
├── ReactFlow            # Knowledge-graph visualization
├── Recharts             # Data charts
└── shadcn/ui            # Component library
    ├── Radix UI         # Accessible primitives
    └── lucide-react     # Icon library
```
### Backend Integration
```
FastAPI                    # Python backend framework
├── WebSocket              # Real-time logs and chat
├── config_schema.py       # Configuration schema generator
├── config_routes.py       # Configuration management API
├── model_routes.py        # Model management API
├── chat_routes.py         # Local chat API
├── plugin_routes.py       # Plugin management API
├── person_routes.py       # Person management API
├── emoji_routes.py        # Sticker management API
├── expression_routes.py   # Expression management API
├── knowledge_routes.py    # Knowledge graph API
├── logs_routes.py         # Logs API
└── tomlkit                # TOML file handling
```
## 📁 Project Structure
```
MaiBot-Dashboard/
├── src/
│   ├── components/               # Components
│   │   ├── ui/                   # shadcn/ui components
│   │   ├── layout.tsx            # Layout (sidebar + navigation)
│   │   ├── tour/                 # Onboarding tour components
│   │   ├── plugin-stats.tsx      # Plugin statistics component
│   │   ├── RestartingOverlay.tsx # Restart overlay
│   │   └── use-theme.tsx         # Theme management
│   ├── routes/                   # Route pages
│   │   ├── index.tsx             # Dashboard home
│   │   ├── auth.tsx              # Login page
│   │   ├── chat.tsx              # Local chat room
│   │   ├── logs.tsx              # Log viewer
│   │   ├── plugins.tsx           # Plugin management
│   │   ├── person.tsx            # Person management
│   │   ├── settings.tsx          # System settings
│   │   ├── config/               # Configuration pages
│   │   │   ├── bot.tsx           # MaiBot core configuration
│   │   │   ├── modelProvider.tsx # Model providers
│   │   │   ├── model.tsx         # Model management
│   │   │   └── adapter.tsx       # Adapter configuration
│   │   └── resource/             # Resource pages
│   │       ├── emoji.tsx         # Sticker management
│   │       ├── expression.tsx    # Expression management
│   │       └── knowledge-graph.tsx # Knowledge graph
│   ├── lib/                      # Utilities
│   │   ├── config-api.ts         # Configuration API client
│   │   ├── plugin-api.ts         # Plugin API client
│   │   ├── person-api.ts         # Person API client
│   │   ├── expression-api.ts     # Expression API client
│   │   ├── log-websocket.ts      # Log WebSocket
│   │   ├── fetch-with-auth.ts    # Authenticated fetch wrapper
│   │   └── utils.ts              # Shared utilities
│   ├── types/                    # Type definitions
│   │   ├── config-schema.ts      # Configuration schema types
│   │   ├── plugin.ts             # Plugin types
│   │   ├── person.ts             # Person types
│   │   └── expression.ts         # Expression types
│   ├── hooks/                    # React hooks
│   │   ├── use-auth.ts           # Authentication logic
│   │   ├── use-animation.ts      # Animation control
│   │   └── use-toast.ts          # Toast notifications
│   ├── store/                    # Global state
│   │   └── auth.ts               # Auth state
│   ├── router.tsx                # Router configuration
│   ├── main.tsx                  # App entry point
│   └── index.css                 # Global styles
├── public/                       # Static assets
├── vite.config.ts                # Vite configuration
├── tailwind.config.js            # Tailwind v4 compatibility placeholder
├── tsconfig.json                 # TypeScript configuration
└── package.json                  # Dependencies
```
## 🚀 Quick Start
### Requirements
- Node.js >= 18.0.0
- Bun >= 1.0.0 (recommended) or npm/yarn/pnpm
### Install Dependencies
```bash
# With Bun (recommended)
bun install
# Or with npm
npm install
```
### Development Mode
```bash
# Start the dev server (default port: 7999)
bun run dev
# or
npm run dev
```
Open http://localhost:7999 to view the app.
### Production Build
```bash
# Build for production
bun run build
# Preview the production build
bun run preview
```
The build output goes to the `dist/` directory and is served statically by the MaiBot backend.
### Code Formatting
```bash
# Format the code
bun run format
```
## 🔧 Development Configuration
### Vite Proxy
In development mode, Vite proxies API requests to the backend:
```typescript
// vite.config.ts
proxy: {
  '/api': {
    target: 'http://127.0.0.1:8001',
    changeOrigin: true,
    ws: true, // WebSocket support
  },
},
```
### Environment Variables
Development defaults to `http://localhost:7999`; production uses relative paths.
## 📸 Screenshots
### Dashboard
Real-time statistics, model usage distribution, trend charts
### Local Chat
Talk to MaiBot directly, with messages synced in real time
### Configuration Management
Grouped configuration items, auto-generated forms, auto-save
### Model Providers
One-click connectivity tests, quick-add from templates
### Log Viewer
Real-time log stream, multi-level filtering, virtual scrolling
## 📦 Dependencies
### Core Dependencies
| Package | Version | Purpose |
|------|------|------|
| react | ^19.2.0 | UI framework |
| react-dom | ^19.2.0 | React DOM rendering |
| typescript | ~5.9.3 | Type system |
| vite | ^7.2.2 | Build tool |
| @tanstack/react-router | ^1.136.1 | Routing |
| @tanstack/react-virtual | ^3.x | Virtual scrolling |
| jotai | ^2.15.1 | State management |
| axios | ^1.13.2 | HTTP client |
| recharts | ^2.x | Data charts |
| reactflow | ^11.x | Knowledge-graph visualization |
| dagre | ^0.8.x | Graph layout algorithm |
### UI Component Libraries
| Package | Version | Purpose |
|------|------|------|
| @radix-ui/react-* | ^1.x | Accessible component primitives |
| lucide-react | ^0.553.0 | Icon library |
| tailwindcss | ^4.2.1 | CSS framework |
| class-variance-authority | ^0.7.1 | Class-name management |
| tailwind-merge | ^3.4.0 | Tailwind class merging |
| date-fns | ^3.x | Date utilities |
## 🤝 Contributing
1. Fork this repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
### Code Style
- Use TypeScript strict mode
- Follow the ESLint rules
- Format code with Prettier
- Write components as functions
- Prefer Hooks
- Responsive design first (mobile support)
## 📄 License
This project is open source under the GPLv3 license; see the [LICENSE](./LICENSE) file for details.
## 👥 Author
**MotricSeven** - [GitHub](https://github.com/DrSmoothl)
## 🙏 Acknowledgements
- [React](https://react.dev/) - UI framework
- [shadcn/ui](https://ui.shadcn.com/) - Component library
- [Radix UI](https://www.radix-ui.com/) - Accessible components
- [TanStack Router](https://tanstack.com/router) - Routing solution
- [TanStack Virtual](https://tanstack.com/virtual) - Virtual scrolling
- [Tailwind CSS](https://tailwindcss.com/) - CSS framework
- [ReactFlow](https://reactflow.dev/) - Flow diagrams / knowledge graphs
- [Recharts](https://recharts.org/) - React chart library
---
<div align="center">
Made with ❤️ by MotricSeven and Mai-with-u
</div>

6
dashboard/bunfig.toml Normal file
View File

@@ -0,0 +1,6 @@
[install]
registry = "https://mirrors.cloud.tencent.com/npm/"
linker = "hoisted"
[install.cache]
disableManifest = true

20
dashboard/components.json Normal file
View File

@@ -0,0 +1,20 @@
{
"$schema": "https://ui.shadcn.com/schema.json",
"style": "new-york",
"rsc": false,
"tsx": true,
"tailwind": {
"config": "tailwind.config.js",
"css": "src/index.css",
"baseColor": "slate",
"cssVariables": true,
"prefix": ""
},
"aliases": {
"components": "src/components",
"utils": "src/lib/utils",
"ui": "src/components/ui",
"lib": "src/lib",
"hooks": "src/hooks"
}
}

View File

@@ -0,0 +1,12 @@
maibot.example.com {
encode zstd gzip
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
X-Content-Type-Options "nosniff"
X-Frame-Options "SAMEORIGIN"
Referrer-Policy "strict-origin-when-cross-origin"
}
reverse_proxy core:8001
}

Some files were not shown because too many files have changed in this diff.