SengokuCola
fe6ccaaf86
fix: some models do not support GIF
2026-04-01 19:56:08 +08:00
SengokuCola
efb84df768
fix: tools with no parameters raise errors on some APIs
2026-04-01 18:28:00 +08:00
SengokuCola
d713aa9576
feat: display real-time context usage; remove the old memory system
2026-04-01 13:18:17 +08:00
SengokuCola
503a257d66
remove: unused config
2026-04-01 13:06:01 +08:00
SengokuCola
01ef29aadb
feat: refactor maisaka message types; add interruption support
2026-03-30 00:45:41 +08:00
DrSmoothl
7a460a474d
feat: update multiple files to use SessionMessage instead of MaiMessage and adjust related logic
2026-03-28 13:39:48 +08:00
DrSmoothl
0a08973c41
feat: Enhance emoji and image management with asynchronous background processing
...
- Added support for scheduling background tasks to build emoji and image descriptions when not found in cache.
- Improved error handling and logging for emoji and image processing.
- Updated `SessionMessage` processing to allow for optional heavy media analysis and voice transcription.
- Refactored logging messages for better clarity and consistency across various modules.
- Introduced a new function to build outbound log previews for messages, enhancing logging capabilities.
2026-03-26 23:03:47 +08:00
DrSmoothl
777d4cb0d2
feat: Enhance OpenAI compatibility and introduce unified LLM service data models
...
- Refactored model fetching logic to support various authentication methods for OpenAI-compatible APIs.
- Introduced new data models for LLM service requests and responses to standardize interactions across layers.
- Added an adapter base class for unified request execution across different providers.
- Implemented utility functions for building OpenAI-compatible client configurations and request overrides.
2026-03-26 16:15:42 +08:00
SengokuCola
a5fc4d172d
feat: provide native VLM support
2026-03-24 20:57:57 +08:00
DrSmoothl
2a33fd1121
refactor(llm): enable hot-reload for model config and client runtime
...
make LLM task config resolution dynamic in LLMRequest
load model clients on demand from latest config
clear client instance cache on config reload
remove stale module-level model_config usage in llm_api
add hot-reload tests for LLM/config watcher flow
2026-03-04 21:56:50 +08:00
DrSmoothl
5cccdf6715
feat(llm): add response format conversion, supporting JSON_SCHEMA output
2026-03-04 21:11:10 +08:00
DrSmoothl
eaef7f0e98
Ruff Format
2026-02-21 16:24:24 +08:00
DrSmoothl
dc36542403
Add the file-watcher foundation module, refactor the model request module to use the new config hot-reload module, and add the watchfiles dependency
2026-02-14 21:17:24 +08:00
DrSmoothl
16b16d2ca6
Refactor most modules to adapt to the new database and data models, fix missing dependencies, and update pyproject
2026-02-13 20:39:11 +08:00
UnCLAS-Prommer
3a66bfeac1
Restore usability
2026-01-16 23:03:45 +08:00
墨梓柒
7bdd394bf0
Add PFC back; fix a pile of mysterious PFC errors
2026-01-16 03:36:25 +08:00
UnCLAS-Prommer
77725ba9d8
Gradually adapt to the new config
2026-01-15 23:51:19 +08:00
SengokuCola
0debe0efcf
log: improve model error logging
2026-01-12 19:05:30 +08:00
SengokuCola
72ef7bade2
Merge pull request #1420 from xcr1234/dev
...
fix: arguments that may fail to parse during tool calls
2025-12-29 19:07:32 +08:00
SengokuCola
f92136bffc
feat: model selection can now use a fully random strategy
...
Update model_config_template.toml
2025-12-27 17:34:26 +08:00
墨梓柒
e680a4d1f5
Ruff format
2025-12-13 17:14:09 +08:00
xiaoxi68
613f8e8783
fix: normalize image MIME types in the OpenAI/Gemini clients (jpg/jpeg → image/jpeg)
2025-12-13 01:20:33 +08:00
xiaoxi68
3b964307ce
fix: 422 error caused by the float type in tool-call JSON Schema
2025-12-11 20:00:47 +08:00
Process Xie
53d0e91d02
fix: arguments that may fail to parse during tool calls
2025-12-09 20:18:37 +08:00
墨梓柒
12bc661790
feat: add model-level max-token configuration and update related logic to support priority handling
2025-12-03 11:45:15 +08:00
墨梓柒
d97c6aa948
feat: add model-level temperature configuration and improve temperature priority handling
2025-12-03 10:45:20 +08:00
Ronifue
6470d27270
feat: unify warnings for overly slow models within a task, and set per-task slow-request thresholds in model_config.toml
2025-11-30 01:23:29 +08:00
Ronifue
e6d1a6e87b
feat: include the original error in all LLM request exception messages
2025-11-29 20:22:25 +08:00
Ronifue
79e8962f6f
feat: allow model_info.extra_params to specify a model's temperature individually
2025-11-29 18:15:46 +08:00
墨梓柒
3935ce817e
Ruff Fix & format
2025-11-29 14:38:42 +08:00
Ronifue
a58c54d378
feat: more detailed model API request error messages, plus suggestions for the common case of frequent API request timeouts
2025-11-27 18:48:51 +08:00
墨梓柒
44f427dc64
Ruff fix
2025-11-19 23:35:14 +08:00
SengokuCola
ebe224b0a1
fix: bool vs. boolean issue
2025-11-13 23:43:35 +08:00
SengokuCola
d306e40db0
Merge branch 'dev' of https://github.com/Mai-with-u/MaiBot into dev
2025-11-13 19:00:59 +08:00
SengokuCola
f2819be5e9
feat: lpmm can optionally integrate with the memory agent; convert the memory agent to the standard tool format and adapt llm_utils for compatibility
2025-11-13 18:55:37 +08:00
墨梓柒
7839acd25d
Ruff fix
2025-11-13 13:24:55 +08:00
SengokuCola
7b3793f366
better: improve logs and add a changelog
2025-11-09 14:14:57 +08:00
SengokuCola
03e06c282c
feat: allow a custom extra prompt per chat
2025-11-05 00:35:16 +08:00
SengokuCola
f3f7b10fb6
Merge branch 'dev' of https://github.com/Mai-with-u/MaiBot into dev
2025-11-03 22:42:06 +08:00
SengokuCola
3e5058eb0f
fix: improve memory extraction; provide a detailed prompt debug option
2025-11-03 22:41:21 +08:00
SengokuCola
5bcbf9d024
Merge pull request #1341 from exynos967/dev1102
...
fix(model_utils): HTTP 400 no longer aborts all attempts; continue switching models
2025-11-03 15:09:38 +08:00
exynos
b63057edec
fix(model_utils): HTTP 400 no longer aborts all attempts; continue switching models
2025-11-02 17:33:58 +08:00
xiaoxi68
50e0bc6513
fix(openai): fix TypeError on empty responses (choices=None), and emit a debug log before raising
2025-11-01 16:40:36 +08:00
SengokuCola
c6dadc2872
fix: improve log truncation
2025-10-30 11:30:48 +08:00
SengokuCola
2b09edbe77
Merge pull request #1269 from foxplaying/patch-2
...
Gemini: fix unexpected thought output; add Search support and a truncation notice
2025-10-26 23:07:24 +08:00
foxplaying
9662d818a7
Restore
2025-10-26 15:01:18 +08:00
magisk317
2f84cb6305
Fix embedded null byte exception in image compression
2025-10-21 23:34:08 +08:00
foxplaying
d5696c12d4
Add Search support
2025-10-19 00:50:32 +08:00
foxplaying
c0a7cc2102
Update gemini_client.py
2025-10-17 23:14:33 +08:00
foxplaying
3231f4f2f8
Add verbose output
2025-10-17 23:11:34 +08:00