Commit Graph

206 Commits

Author SHA1 Message Date
Maple_枫溪
9e944f7f85 fix: accept blank tool call arguments
Treat blank OpenAI-compatible tool call arguments as an empty dict so parameterless tools such as finish can execute with providers that return an empty string. Also trim model identifiers during config normalization to avoid leading whitespace leaking into requests and snapshots.
2026-05-05 05:06:26 +08:00
hsd221
e962326f93 fix: support HTTP and intranet addresses; fix chat URLs missing a protocol prefix
- validate_public_url now allows both http and https schemes by default
- Removed the is_private and is_site_local checks from _is_forbidden_ip_address to allow intranet IPs
- validate_public_url auto-prepends http:// when the URL has no scheme
- normalize_openai_base_url auto-prepends http:// when the URL has no protocol prefix
- Added the httpx[socks] dependency to pyproject.toml for SOCKS proxy support
2026-05-04 18:19:01 +08:00
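The scheme auto-completion described in the entry above can be illustrated with a minimal helper. The names `validate_public_url` and `normalize_openai_base_url` come from the commit message; this standalone `ensure_scheme` function is an illustrative sketch, not the project's code.

```python
def ensure_scheme(url: str, default_scheme: str = "http") -> str:
    """Prepend a default scheme when the URL has none.

    Mirrors the behavior the commit describes for validate_public_url
    and normalize_openai_base_url: bare host[:port] strings become
    valid http:// URLs, while already-qualified URLs pass through.
    """
    if "://" not in url:
        return f"{default_scheme}://{url}"
    return url

print(ensure_scheme("192.168.1.10:8080"))    # -> http://192.168.1.10:8080
print(ensure_scheme("https://example.com"))  # -> https://example.com
```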
SengokuCola
c5cd47adc2 feat: support advanced per-model configuration 2026-05-04 18:18:59 +08:00
SengokuCola
4fd9875aae perf: remove some redundant logs 2026-04-29 18:10:52 +08:00
DrSmoothl
742e21a727 feat: Add LLM Provider support in plugin runtime
- Introduced LLM Provider declarations in plugin manifests, allowing plugins to specify their LLM capabilities.
- Implemented validation for LLM Provider declarations to prevent duplicates and conflicts.
- Enhanced the PluginRunner to handle LLM Provider invocation requests, enabling plugins to interact with LLM Providers seamlessly.
- Added a ClientRegistry to manage LLM Provider registrations and ensure no conflicts arise between different plugins.
- Created a PluginLLMClient to facilitate communication with LLM Providers through the plugin runtime.
- Developed tests to ensure proper registration and conflict handling of LLM Providers.
2026-04-27 16:49:44 +08:00
SengokuCola
d32be4741a feat: add sequential (in-order) selection; fix: unexpected tool issue in the timing gate 2026-04-27 10:53:13 +08:00
Soulter
45cd00e343 fix(llm): support reasoning field for OpenRouter and Groq
fixes: #1600
2026-04-26 22:45:37 +08:00
SengokuCola
be2248b283 feat: add log caps and configuration to prevent bloat 2026-04-25 14:45:35 +08:00
SengokuCola
9759018a0c feat: support model caching and related configuration 2026-04-25 13:53:30 +08:00
SengokuCola
2471a2c4a4 fix: tool call storage issue 2026-04-13 19:54:38 +08:00
SengokuCola
ff75930466 fix: resolve empty responses from qwen3.5 2026-04-12 15:06:08 +08:00
SengokuCola
6db380b10d fix: fix replyer formatting issue; log the full request on empty responses 2026-04-11 21:23:31 +08:00
SengokuCola
d9b3440169 feat: improve configuration for multimodal/non-multimodal replyers 2026-04-11 19:30:23 +08:00
SengokuCola
c0230fc313 feat: unify replyer behavior with and without multimodality for consistency and generality; add a model-level visual parameter 2026-04-11 16:41:00 +08:00
SengokuCola
243b8deb43 feat: show more detailed tool information; revise the wait definition 2026-04-09 19:58:20 +08:00
SengokuCola
09bce14664 feat: archive failed requests and provide retry analysis 2026-04-07 15:16:06 +08:00
SengokuCola
3b5baf901a Remove the leftover KnowU system; fix the thought-signature issue in gemini requests 2026-04-07 15:15:37 +08:00
SengokuCola
50a51757a8 feat: always reply on mention; auto-retry some oversized payloads; remove unused config options; parse @ messages correctly 2026-04-07 01:31:58 +08:00
SengokuCola
d82b37a08f feat: fix gemini tool issue; simplify sticker recognition; fix image recognition in non-multimodal plan 2026-04-05 14:50:52 +08:00
SengokuCola
6c720e0403 refactor: rework maisaka built-in tool logic and split files 2026-04-03 14:51:05 +08:00
SengokuCola
fe6ccaaf86 fix: some models do not support gif 2026-04-01 19:56:08 +08:00
SengokuCola
efb84df768 fix: parameterless tools error on some APIs 2026-04-01 18:28:00 +08:00
SengokuCola
d713aa9576 feat: display real-time context usage; remove the old memory system 2026-04-01 13:18:17 +08:00
SengokuCola
503a257d66 remove: unused configuration 2026-04-01 13:06:01 +08:00
SengokuCola
01ef29aadb feat: rework maisaka message types; add interruption support 2026-03-30 00:45:41 +08:00
DrSmoothl
7a460a474d feat: update multiple files to use SessionMessage instead of MaiMessage, adjusting related logic 2026-03-28 13:39:48 +08:00
DrSmoothl
0a08973c41 feat: Enhance emoji and image management with asynchronous background processing
- Added support for scheduling background tasks to build emoji and image descriptions when not found in cache.
- Improved error handling and logging for emoji and image processing.
- Updated `SessionMessage` processing to allow for optional heavy media analysis and voice transcription.
- Refactored logging messages for better clarity and consistency across various modules.
- Introduced a new function to build outbound log previews for messages, enhancing logging capabilities.
2026-03-26 23:03:47 +08:00
DrSmoothl
777d4cb0d2 feat: Enhance OpenAI compatibility and introduce unified LLM service data models
- Refactored model fetching logic to support various authentication methods for OpenAI-compatible APIs.
- Introduced new data models for LLM service requests and responses to standardize interactions across layers.
- Added an adapter base class for unified request execution across different providers.
- Implemented utility functions for building OpenAI-compatible client configurations and request overrides.
2026-03-26 16:15:42 +08:00
SengokuCola
a5fc4d172d feat: provide native VLM support 2026-03-24 20:57:57 +08:00
DrSmoothl
2a33fd1121 refactor(llm): enable hot-reload for model config and client runtime
make LLM task config resolution dynamic in LLMRequest
load model clients on demand from latest config
clear client instance cache on config reload
remove stale module-level model_config usage in llm_api
add hot-reload tests for LLM/config watcher flow
2026-03-04 21:56:50 +08:00
DrSmoothl
5cccdf6715 feat(llm): add response format conversion supporting JSON_SCHEMA output 2026-03-04 21:11:10 +08:00
DrSmoothl
eaef7f0e98 Ruff Format 2026-02-21 16:24:24 +08:00
DrSmoothl
dc36542403 Add the file-watcher foundation module; refactor the model request module onto the new config hot-reload module; add the watchfiles dependency 2026-02-14 21:17:24 +08:00
DrSmoothl
16b16d2ca6 Refactor most modules to fit the new database and data models; fix missing dependencies; update pyproject 2026-02-13 20:39:11 +08:00
UnCLAS-Prommer
3a66bfeac1 Restore usability 2026-01-16 23:03:45 +08:00
墨梓柒
7bdd394bf0 Bring PFC back; fix a pile of mysterious PFC errors 2026-01-16 03:36:25 +08:00
UnCLAS-Prommer
77725ba9d8 Gradually adapt to the new config 2026-01-15 23:51:19 +08:00
SengokuCola
0debe0efcf log: improve model error logging 2026-01-12 19:05:30 +08:00
SengokuCola
72ef7bade2 Merge pull request #1420 from xcr1234/dev
fix: parameter parsing failures that can occur during tool calls
2025-12-29 19:07:32 +08:00
SengokuCola
f92136bffc feat: model selection can now use a fully random strategy
Update model_config_template.toml
2025-12-27 17:34:26 +08:00
墨梓柒
e680a4d1f5 Ruff format 2025-12-13 17:14:09 +08:00
xiaoxi68
613f8e8783 fix: 规范 OpenAI/Gemini 客户端图片 MIME(jpg/jpeg→image/jpeg) 2025-12-13 01:20:33 +08:00
xiaoxi68
3b964307ce fix: fix 422 errors caused by the float type in tool-call JSON Schemas 2025-12-11 20:00:47 +08:00
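JSON Schema defines no `float` type; numeric parameters must be declared as `number` (or `integer`), and providers that validate schemas strictly reject `float` with HTTP 422. A minimal sketch of such a normalization pass, assuming the invalid type names shown in the mapping table (this is an illustration, not the project's code, and it does not descend into list constructs like `anyOf`):

```python
# Map Python-style type names to valid JSON Schema types.
JSON_SCHEMA_TYPE_MAP = {
    "float": "number",
    "int": "integer",
    "str": "string",
    "bool": "boolean",
}

def normalize_schema_types(schema: dict) -> dict:
    """Recursively rewrite invalid `type` values in a tool's JSON Schema."""
    out = {}
    for key, value in schema.items():
        if key == "type" and isinstance(value, str):
            out[key] = JSON_SCHEMA_TYPE_MAP.get(value, value)
        elif isinstance(value, dict):
            out[key] = normalize_schema_types(value)
        else:
            out[key] = value
    return out

fixed = normalize_schema_types(
    {"type": "object", "properties": {"ratio": {"type": "float"}}}
)
print(fixed["properties"]["ratio"]["type"])  # -> number
```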
Process Xie
53d0e91d02 fix: parameter parsing failures that can occur during tool calls 2025-12-09 20:18:37 +08:00
墨梓柒
12bc661790 feat: add model-level max-token configuration and update related logic for priority handling 2025-12-03 11:45:15 +08:00
墨梓柒
d97c6aa948 feat: add model-level temperature configuration and refine temperature priority handling 2025-12-03 10:45:20 +08:00
Ronifue
6470d27270 feat: warn uniformly about overly slow models per task, with per-task slow-request thresholds set in model_config.toml 2025-11-30 01:23:29 +08:00
Ronifue
e6d1a6e87b feat: show the raw error for all LLM request exceptions 2025-11-29 20:22:25 +08:00
Ronifue
79e8962f6f feat: allow model_info.extra_params to specify a model's temperature individually 2025-11-29 18:15:46 +08:00
墨梓柒
3935ce817e Ruff Fix & format 2025-11-29 14:38:42 +08:00