diff --git a/prompts/zh-CN/maisaka_chat.prompt b/prompts/zh-CN/maisaka_chat.prompt
index ddba3952..1000d656 100644
--- a/prompts/zh-CN/maisaka_chat.prompt
+++ b/prompts/zh-CN/maisaka_chat.prompt
@@ -1,29 +1,29 @@
-你的任务是分析聊天和聊天中的互动情况。
+你的任务是分析聊天和聊天中的互动情况,然后做出下一步动作。
 你需要关注 {bot_name}(AI) 与不同用户的对话来为选择正确的动作和行为以及搜集信息提供建议
 【参考信息】
 {bot_name}的人设:{identity}
 【参考信息结束】
-你需要根据提供的参考信息,当前场景和输出规则来进行分析
-在当前场景中,不同的人正在互动({bot_name}也是一位参与的用户),用户也可能与进行聊天互动,你的任务不是生成对用户可见的发言,而是进行分析来指导AI进行回复。
-“分析”应该体现你对当前局面的判断、你的建议、你的下一步计划,以及你为什么这样想。
-你需要先搜集能够帮助{bot_name}进行下一步行动的信息,然后再给出回复意见
+请你根据当前场景和输出规则进行分析,你可以参考参考信息中的内容,但不必过分遵守,仅供参考。
+在当前场景中,不同的人正在互动({bot_name}也是一位参与的用户),用户也可能与{bot_name}进行聊天互动,你的任务不是生成对用户可见的发言,而是进行分析来指导AI进行动作。
+“分析”应该体现你对当前局面的判断、你的建议、你的下一步计划,以及你为什么这样想。默认直接输出你当前的最新分析,不要重复之前的分析内容。最新分析应尽量具体,贴近上下文。
+你需要先搜集能够帮助{bot_name}进行下一步行动的信息,然后再给出思考
+{group_chat_attention_block}
 
-你可以使用这些工具:
+工具说明:
 - reply():当你判断{bot_name}现在应该正式对用户发出一条可见回复时调用。调用后系统会基于你当前这轮的想法生成一条真正展示给用户的回复。你可以针对某个用户回复,也可以对所有用户回复。
 - query_jargon():当你认为某些词的含义不明确,或用户询问某些词的含义,需要进行查询
 - query_memory():如果当前可用工具中存在它,当回复明显依赖历史对话、长期偏好、共同经历、人物长期信息或之前约定时使用
-- tool_search():当你在 `...` 中看到 deferred tools 列表,并且需要其中某个工具时,先调用它来搜索并发现对应工具;它只负责让工具在后续轮次变为可用,不直接执行业务
+- tool_search():当你在deferred tools列表中需要其中某个工具时,先调用它来搜索并发现对应工具;它只负责让工具在后续轮次变为可用,不直接执行业务
+- finish():当没有更多操作需要做,使用finish结束这次思考
 - 其他定义的工具,你可以视情况合适使用
 
 工具使用规则:
-1. 你当前处于 Action Loop 阶段,节奏控制由独立的 timing gate 负责;如果系统让你继续,就专注于分析、搜集信息和执行真正需要的工具。
-2. 如果存在用户的疑问,或者对某些概念的不确定,你可以使用工具来搜集信息或者查询含义,你可以使用多个工具。
-3. 当你判断 {bot_name} 现在应该正式发出可见回复时,调用 reply()。
-4. 如果看到 `` 中列出了 deferred tools,而你需要其中某个工具,先调用 tool_search() 搜索该工具,等它在后续轮次变为可用后再正常调用。
-5. 如果需要补充上下文、查看消息、查询黑话、检索记忆或使用其他当前可用工具,可以按需调用。
+1. 你可以使用多个工具。
+2. 如果存在工具可以帮助你执行某些动作,完成某些目标,直接使用该工具来完成任务
+3. 如果看到 `` 中列出了 deferred tools,而你需要其中某个工具,先调用 tool_search() 搜索该工具,等它在后续轮次变为可用后再正常调用。
 
 长期记忆使用建议:
 1. 仅当历史信息会明显影响当前回复时,才考虑调用 `query_memory()`。
@@ -32,11 +32,5 @@
 4. 模式上:`search` 查事实或偏好,`time` 查某段时间,`episode` 查某次经历,`aggregate` 查整体情况;拿不准时用 `hybrid`。
 5. 如果无命中、被过滤、或证据不足,就不要编造。
 
-你的分析规则:
-1. 默认直接输出你当前的最新分析,不要重复之前的分析内容。最新分析应尽量具体,贴近上下文。
-2. 你需要先评估是用户之间在互动还是和{bot_name}在互动,不要盲目插话,弄错回复对象
-3. 你需要评估哪些话是对{bot_name}的发言,哪些是用户之间的交流或者自言自语,不要频繁插入无关的话题。
-
-{group_chat_attention_block}
 
 现在,请你输出你对{bot_name}发言的分析,你必须先输出文本内容的分析,然后再进行工具调用,:
diff --git a/prompts/zh-CN/maisaka_timing_gate.prompt b/prompts/zh-CN/maisaka_timing_gate.prompt
index 57b7b520..89cb4514 100644
--- a/prompts/zh-CN/maisaka_timing_gate.prompt
+++ b/prompts/zh-CN/maisaka_timing_gate.prompt
@@ -8,16 +8,16 @@
 在当前场景中,不同的人正在互动({bot_name} 也是一位参与的用户),用户也可能正在连续发送消息或彼此互动。
 你的任务不是生成对别人可见的发言,也不是直接使用查询类工具,而是判断当前是否应该:
 - continue:立刻进入下一轮完整思考、搜集信息、回复与其他工具执行
-- wait:固定再等待一段时间,时间到后再重新判断;等待期间即使收到新消息也不会提前打断,只会暂存到超时后统一处理
-- no_reply:本轮不继续,直接等待新的外部消息
+- wait:固定再等待一段时间,时间到后再重新判断;
+- no_reply:本轮不继续,直接等待新的消息
 
 节奏控制规则:
 1. 如果 {bot_name} 已经回复,但用户暂时没有新的回复,且没有新信息需要搜集,使用 wait 或者 no_reply 进行等待。
 2. 如果用户有新发言,但是你评估用户还有后续发言尚未发送,可以适当等待让用户说完。
-3. 在特定情况下也可以连续回复,例如想要追问,或者补充自己先前的发言,这时应调用 continue,让主流程继续执行。
-4. 不要每条消息都回复,不要直接因为别的用户发送了表情包就发言。
-5. 如果你判断现在需要真正回复、查询信息、查看上下文或做进一步分析,不要在这里完成,直接调用 continue,把工作交给主流程。
-6. 你必须且只能调用一个工具,不要连续调用多个工具,也不要只输出文本不调用工具。
+3. 你需要先评估是用户之间在互动还是和{bot_name}在互动,不要盲目插话,弄错回复对象
+4. 你需要评估哪些话是对{bot_name}的发言,哪些是用户之间的交流或者自言自语,不要频繁插入无关的话题。
+5. 在特定情况下也可以连续回复,例如想要追问,或者补充自己先前的发言,这时应调用 continue,让主流程继续执行。
+6. 如果你判断现在需要真正回复、查询信息、查看上下文或做进一步分析,不要在这里完成,直接调用 continue,把工作交给主流程。
 
 {group_chat_attention_block}
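Taken together, the gate rules reduce to a small contract: the timing gate must answer with exactly one control tool out of `continue` / `wait` / `no_reply`, and anything else should safely degrade to `continue`. A minimal sketch of that contract (hypothetical names; the project's real handling lives in `MaisakaReasoningEngine._run_timing_gate` later in this diff):

```python
from dataclasses import dataclass
from typing import List, Optional

TIMING_GATE_TOOL_NAMES = {"continue", "wait", "no_reply"}


@dataclass
class GateToolCall:
    func_name: str


def resolve_timing_action(tool_calls: List[GateToolCall]) -> str:
    """Pick the first recognized control tool; treat anything else as continue."""
    selected: Optional[GateToolCall] = next(
        (call for call in tool_calls if call.func_name in TIMING_GATE_TOOL_NAMES),
        None,
    )
    # A missing or unknown control tool must never stall the main loop,
    # so the safe default is to hand control back to the planner.
    return selected.func_name if selected is not None else "continue"


assert resolve_timing_action([GateToolCall("wait")]) == "wait"
assert resolve_timing_action([GateToolCall("unknown")]) == "continue"
assert resolve_timing_action([]) == "continue"
```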
diff --git a/pytests/test_mute_plugin_sdk.py b/pytests/test_mute_plugin_sdk.py
index 24bde7e4..c811cc51 100644
--- a/pytests/test_mute_plugin_sdk.py
+++ b/pytests/test_mute_plugin_sdk.py
@@ -1,4 +1,4 @@
-"""MutePlugin SDK 迁移回归测试。"""
+"""MutePlugin SDK 回归测试。"""
 
 from __future__ import annotations
 
@@ -71,6 +71,8 @@ async def test_mute_command_calls_napcat_group_ban_api() -> None:
             return {"success": True, "person_id": "person-1"}
         if capability == "person.get_value":
             return {"success": True, "value": "123456"}
+        if capability == "api.call" and payload["args"]["api_name"] == "adapter.napcat.group.get_group_member_info":
+            return {"success": True, "result": {"role": "member"}}
         if capability == "api.call":
             return {"success": True, "result": {"status": "ok", "retcode": 0}}
         if capability == "send.text":
@@ -94,8 +96,12 @@ async def test_mute_command_calls_napcat_group_ban_api() -> None:
     assert message == "成功禁言 张三"
     assert intercept is True
 
-    api_call = next(call for call in capability_calls if call["capability"] == "api.call")
-    assert api_call["args"]["api_name"] == "adapter.napcat.group.set_group_ban"
+    api_call = next(
+        call
+        for call in capability_calls
+        if call["capability"] == "api.call"
+        and call["args"]["api_name"] == "adapter.napcat.group.set_group_ban"
+    )
     assert api_call["args"]["version"] == "1"
     assert api_call["args"]["args"] == {
         "group_id": "10001",
@@ -133,6 +139,179 @@ async def test_mute_tool_requires_target_person_name() -> None:
     assert capability_calls[-1]["args"]["text"] == "没有指定禁言对象哦"
 
 
+@pytest.mark.asyncio
+async def test_mute_tool_can_unwrap_nested_person_user_id_response() -> None:
+    """禁言工具应能兼容解包多层 capability 返回结果。"""
+
+    plugin = _build_plugin()
+    capability_calls: List[Dict[str, Any]] = []
+
+    async def fake_rpc_call(method: str, plugin_id: str = "", payload: Dict[str, Any] | None = None) -> Dict[str, Any]:
+        assert method == "cap.call"
+        assert payload is not None
+        capability_calls.append(payload)
+
+        capability = payload["capability"]
+        if capability == "person.get_id_by_name":
+            return {"success": True, "result": {"success": True, "person_id": "person-1"}}
+        if capability == "person.get_value":
+            return {"success": True, "result": {"success": True, "value": "123456"}}
+        if capability == "api.call" and payload["args"]["api_name"] == "adapter.napcat.group.get_group_member_info":
+            return {"success": True, "result": {"role": "member"}}
+        if capability == "api.call":
+            return {"success": True, "result": {"status": "ok"}}
+        if capability == "send.text":
+            return {"success": True}
+        raise AssertionError(f"unexpected capability: {capability}")
+
+    plugin._set_context(PluginContext(plugin_id="mute", rpc_call=fake_rpc_call))
+
+    success, message = await plugin.handle_mute_tool(
+        stream_id="group-10001",
+        group_id="10001",
+        target="张三",
+        duration=60,
+        reason="测试",
+    )
+
+    assert success is True
+    assert message == "成功禁言 张三"
+
+    api_call = next(
+        call
+        for call in capability_calls
+        if call["capability"] == "api.call"
+        and call["args"]["api_name"] == "adapter.napcat.group.set_group_ban"
+    )
+    assert api_call["args"]["args"]["user_id"] == "123456"
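The test above pins the unwrapping behaviour down without showing the plugin code itself, which is outside this diff. The contract it implies: keep following nested `result` envelopes until the innermost payload is reached. A hedged sketch of such a helper (hypothetical, not the plugin's actual implementation):

```python
from typing import Any, Dict


def unwrap_capability_result(payload: Dict[str, Any]) -> Dict[str, Any]:
    """Follow nested {"success": ..., "result": {...}} envelopes to the innermost dict."""
    current = payload
    while isinstance(current.get("result"), dict):
        current = current["result"]
    return current


nested = {"success": True, "result": {"success": True, "value": "123456"}}
assert unwrap_capability_result(nested)["value"] == "123456"

flat = {"success": True, "person_id": "person-1"}
assert unwrap_capability_result(flat)["person_id"] == "person-1"
```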
+
+
+@pytest.mark.asyncio
+async def test_mute_tool_rejects_owner_before_group_ban_call() -> None:
+    """禁言工具应在检测到群主时提前返回明确提示。"""
+
+    plugin = _build_plugin()
+    capability_calls: List[Dict[str, Any]] = []
+
+    async def fake_rpc_call(method: str, plugin_id: str = "", payload: Dict[str, Any] | None = None) -> Dict[str, Any]:
+        assert method == "cap.call"
+        assert payload is not None
+        capability_calls.append(payload)
+
+        capability = payload["capability"]
+        if capability == "person.get_id_by_name":
+            return {"success": True, "person_id": "person-1"}
+        if capability == "person.get_value":
+            return {"success": True, "value": "123456"}
+        if capability == "api.call" and payload["args"]["api_name"] == "adapter.napcat.group.get_group_member_info":
+            return {"success": True, "result": {"role": "owner"}}
+        if capability == "send.text":
+            return {"success": True}
+        raise AssertionError(f"unexpected capability: {capability}")
+
+    plugin._set_context(PluginContext(plugin_id="mute", rpc_call=fake_rpc_call))
+
+    success, message = await plugin.handle_mute_tool(
+        stream_id="group-10001",
+        group_id="10001",
+        target="张三",
+        duration=60,
+        reason="测试",
+    )
+
+    assert success is False
+    assert message == "张三 是群主,不能被禁言"
+    assert not any(
+        call["capability"] == "api.call" and call["args"]["api_name"] == "adapter.napcat.group.set_group_ban"
+        for call in capability_calls
+    )
+
+
+@pytest.mark.asyncio
+async def test_mute_tool_maps_cannot_ban_owner_error_message() -> None:
+    """NapCat 返回 cannot ban owner 时应转成明确中文提示。"""
+
+    plugin = _build_plugin()
+    capability_calls: List[Dict[str, Any]] = []
+
+    async def fake_rpc_call(method: str, plugin_id: str = "", payload: Dict[str, Any] | None = None) -> Dict[str, Any]:
+        assert method == "cap.call"
+        assert payload is not None
+        capability_calls.append(payload)
+
+        capability = payload["capability"]
+        if capability == "person.get_id_by_name":
+            return {"success": True, "person_id": "person-1"}
+        if capability == "person.get_value":
+            return {"success": True, "value": "123456"}
+        if capability == "api.call" and payload["args"]["api_name"] == "adapter.napcat.group.get_group_member_info":
+            return {"success": True, "result": {"role": "member"}}
+        if capability == "api.call" and payload["args"]["api_name"] == "adapter.napcat.group.set_group_ban":
+            return {"success": False, "error": "NapCat 动作返回失败: action=set_group_ban message=cannot ban owner"}
+        if capability == "send.text":
+            return {"success": True}
+        raise AssertionError(f"unexpected capability: {capability}")
+
+    plugin._set_context(PluginContext(plugin_id="mute", rpc_call=fake_rpc_call))
+
+    success, message = await plugin.handle_mute_tool(
+        stream_id="group-10001",
+        group_id="10001",
+        target="张三",
+        duration=60,
+        reason="测试",
+    )
+
+    assert success is False
+    assert message == "张三 是群主,不能被禁言"
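Both owner paths converge on the same user-facing message: the plugin may detect the `owner` role up front, or only learn about it from NapCat's `cannot ban owner` failure text. A sketch of the mapping these two tests imply (hypothetical helper, not the plugin's real code):

```python
def map_group_ban_error(target: str, error: str) -> str:
    """Map NapCat's raw failure text onto the user-facing message the tests assert."""
    if "cannot ban owner" in error:
        return f"{target} 是群主,不能被禁言"
    return f"禁言 {target} 失败: {error}"


raw = "NapCat 动作返回失败: action=set_group_ban message=cannot ban owner"
assert map_group_ban_error("张三", raw) == "张三 是群主,不能被禁言"
```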
+
+
+@pytest.mark.asyncio
+async def test_mute_tool_accepts_nested_ok_api_result() -> None:
+    """嵌套的 success/result/status=ok 返回值也应判定为成功。"""
+
+    plugin = _build_plugin()
+
+    async def fake_rpc_call(method: str, plugin_id: str = "", payload: Dict[str, Any] | None = None) -> Dict[str, Any]:
+        assert method == "cap.call"
+        assert payload is not None
+
+        capability = payload["capability"]
+        if capability == "person.get_id_by_name":
+            return {"success": True, "person_id": "person-1"}
+        if capability == "person.get_value":
+            return {"success": True, "value": "123456"}
+        if capability == "api.call" and payload["args"]["api_name"] == "adapter.napcat.group.get_group_member_info":
+            return {"success": True, "result": {"role": "member"}}
+        if capability == "api.call" and payload["args"]["api_name"] == "adapter.napcat.group.set_group_ban":
+            return {
+                "success": True,
+                "result": {
+                    "status": "ok",
+                    "retcode": 0,
+                    "data": None,
+                    "message": "",
+                    "wording": "",
+                },
+            }
+        if capability == "send.text":
+            return {"success": True}
+        raise AssertionError(f"unexpected capability: {capability}")
+
+    plugin._set_context(PluginContext(plugin_id="mute", rpc_call=fake_rpc_call))
+
+    success, message = await plugin.handle_mute_tool(
+        stream_id="group-10001",
+        group_id="10001",
+        target="张三",
+        duration=60,
+        reason="测试",
+    )
+
+    assert success is True
+    assert message == "成功禁言 张三"
+
+
 def test_tool_invocation_payload_injects_group_and_user_context() -> None:
     """插件工具执行时应自动补齐群聊上下文字段。"""
 
diff --git a/src/chat/replyer/maisaka_expression_selector.py b/src/chat/replyer/maisaka_expression_selector.py
index a1851293..f5245ef8 100644
--- a/src/chat/replyer/maisaka_expression_selector.py
+++ b/src/chat/replyer/maisaka_expression_selector.py
@@ -273,7 +273,7 @@ class MaisakaExpressionSelector:
             logger.exception("表达方式选择子代理执行失败")
             return MaisakaExpressionSelectionResult()
 
-        logger.info(f"表达方式子代理原始结果:session_id={session_id} response={raw_response!r}")
+        # logger.info(f"表达方式子代理原始结果:session_id={session_id} response={raw_response!r}")
         selected_ids = self._parse_selected_ids(raw_response, candidates)
         if not selected_ids:
             logger.info(f"表达方式选择完成但未命中:session_id={session_id}")
diff --git a/src/chat/replyer/replyer_manager.py b/src/chat/replyer/replyer_manager.py
index 58b4041b..712fb421 100644
--- a/src/chat/replyer/replyer_manager.py
+++ b/src/chat/replyer/replyer_manager.py
@@ -50,30 +50,13 @@ class ReplyerManager:
         )
 
         try:
-            if replyer_type == "maisaka":
-                logger.info(f"[ReplyerManager] 选择 MaisakaReplyGenerator: generator_type={generator_type}")
-                maisaka_replyer_class = get_maisaka_replyer_class()
+            maisaka_replyer_class = get_maisaka_replyer_class()
 
-                replyer = maisaka_replyer_class(
-                    chat_stream=target_stream,
-                    request_type=request_type,
-                )
-            elif target_stream.is_group_session:
-                logger.info("[ReplyerManager] importing DefaultReplyer")
-                from src.chat.replyer.group_generator import DefaultReplyer
+            replyer = maisaka_replyer_class(
+                chat_stream=target_stream,
+                request_type=request_type,
+            )
 
-                replyer = DefaultReplyer(
-                    chat_stream=target_stream,
-                    request_type=request_type,
-                )
-            else:
-                logger.info("[ReplyerManager] importing PrivateReplyer")
-                from src.chat.replyer.private_generator import PrivateReplyer
-
-                replyer = PrivateReplyer(
-                    chat_stream=target_stream,
-                    request_type=request_type,
-                )
         except Exception:
             logger.exception(f"[ReplyerManager] 创建 replyer 失败: cache_key={cache_key}")
             raise
diff --git a/src/core/tooling.py b/src/core/tooling.py
index f9c6ec62..cbfa5854 100644
--- a/src/core/tooling.py
+++ b/src/core/tooling.py
@@ -181,10 +181,7 @@ class ToolSpec:
             str: 合并后的单段工具描述。
         """
 
-        parts = [self.brief_description.strip()]
-        if self.detailed_description.strip():
-            parts.append(self.detailed_description.strip())
-        return "\n\n".join(part for part in parts if part).strip()
+        return self.brief_description.strip()
 
     def to_llm_definition(self) -> ToolDefinitionInput:
         """转换为统一的 LLM 工具定义。
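One behavioural change above is easy to miss: `merged_description` no longer appends `detailed_description`, so only the brief text reaches the LLM tool definition. A stand-in illustration (hypothetical dataclass mirroring the fields shown above, not the real `ToolSpec`):

```python
from dataclasses import dataclass


@dataclass
class ToolSpecSketch:
    brief_description: str
    detailed_description: str = ""

    def merged_description(self) -> str:
        # Before this diff the detailed description was appended after a
        # blank line; now only the brief description is exposed.
        return self.brief_description.strip()


spec = ToolSpecSketch(brief_description="查询黑话词条", detailed_description="很长的补充说明……")
assert spec.merged_description() == "查询黑话词条"
```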
diff --git a/src/learners/expression_auto_check_task.py b/src/learners/expression_auto_check_task.py
index 60dc9bd1..54c4ee68 100644
--- a/src/learners/expression_auto_check_task.py
+++ b/src/learners/expression_auto_check_task.py
@@ -1,5 +1,5 @@
 """
-表达方式自动检查定时任务
+表达方式自动检查定时任务。
 
 功能:
 1. 定期随机选取指定数量的表达方式
@@ -9,52 +9,48 @@
 """
 
 import asyncio
-import json
 import random
 from typing import List
 
 from sqlmodel import select
 
-from src.learners.expression_review_store import get_review_state, set_review_state
+from src.common.data_models.llm_service_data_models import LLMGenerationOptions
 from src.common.database.database import get_db_session
 from src.common.database.database_model import Expression
 from src.common.logger import get_logger
 from src.config.config import global_config
-from src.common.data_models.llm_service_data_models import LLMGenerationOptions
-from src.services.llm_service import LLMServiceClient
+from src.learners.expression_review_store import get_review_state, set_review_state
+from src.learners.expression_utils import parse_evaluation_response
 from src.manager.async_task_manager import AsyncTask
+from src.services.llm_service import LLMServiceClient
 
 logger = get_logger("expressor")
 
 
 def create_evaluation_prompt(situation: str, style: str) -> str:
     """
-    创建评估提示词
+    创建评估提示词。
 
     Args:
-        situation: 情境
+        situation: 情景
         style: 风格
 
     Returns:
         评估提示词
     """
-    # 基础评估标准
     base_criteria = [
-        "表达方式或言语风格 是否与使用条件或使用情景 匹配",
-        "允许部分语法错误或口头化或缺省出现",
+        "表达方式或言语风格是否与使用条件或使用情景匹配",
+        "允许部分语法错误或口语化或缺省出现",
         "表达方式不能太过特指,需要具有泛用性",
         "一般不涉及具体的人名或名称",
     ]
 
-    # 从配置中获取额外的自定义标准
     custom_criteria = global_config.expression.expression_auto_check_custom_criteria
 
-    # 合并所有评估标准
     all_criteria = base_criteria.copy()
     if custom_criteria:
         all_criteria.extend(custom_criteria)
 
-    # 构建评估标准列表字符串
     criteria_list = "\n".join([f"{i + 1}. {criterion}" for i, criterion in enumerate(all_criteria)])
 
     prompt = f"""请评估以下表达方式或语言风格以及使用条件或使用情景是否合适:
@@ -64,14 +60,13 @@ def create_evaluation_prompt(situation: str, style: str) -> str:
 请从以下方面进行评估:
 {criteria_list}
 
-请以JSON格式输出评估结果:
+请以 JSON 格式输出评估结果:
 {{
     "suitable": true/false,
    "reason": "评估理由(如果不合适,请说明原因)"
-    }}
-如果合适,suitable设为true;如果不合适,suitable设为false,并在reason中说明原因。
-请严格按照JSON格式输出,不要包含其他内容。"""
+如果合适,suitable 设为 true;如果不合适,suitable 设为 false,并在 reason 中说明原因。
+请严格按照 JSON 格式输出,不要包含其他内容。"""
 
     return prompt
 
@@ -81,10 +76,10 @@ judge_llm = LLMServiceClient(task_name="utils", request_type="expression_check")
 
 async def single_expression_check(situation: str, style: str) -> tuple[bool, str, str | None]:
     """
-    执行单次LLM评估
+    执行单次 LLM 评估。
 
     Args:
-        situation: 情境
+        situation: 情景
         style: 风格
 
     Returns:
@@ -101,20 +96,10 @@ async def single_expression_check(situation: str, style: str) -> tuple[bool, str
         response = generation_result.response
         logger.debug(f"LLM响应: {response}")
 
-        # 解析JSON响应
-        try:
-            evaluation = json.loads(response)
-        except json.JSONDecodeError as e:
-            import re
+        evaluation = parse_evaluation_response(response)
 
-            json_match = re.search(r'\{[^{}]*"suitable"[^{}]*\}', response, re.DOTALL)
-            if json_match:
-                evaluation = json.loads(json_match.group())
-            else:
-                raise ValueError("无法从响应中提取JSON格式的评估结果") from e
-
-        suitable = evaluation.get("suitable", False)
-        reason = evaluation.get("reason", "未提供理由")
+        suitable = bool(evaluation.get("suitable", False))
+        reason = str(evaluation.get("reason", "未提供理由"))
 
         logger.debug(f"评估结果: {'通过' if suitable else '不通过'}")
         return suitable, reason, None
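With the inline `json.loads` and regex fallback removed, all tolerance for malformed model output now sits in `parse_evaluation_response`. A usage sketch, assuming the import path from this diff and that `_try_parse` behaves as the module's usual json.loads/repair_json wrapper:

```python
from src.learners.expression_utils import parse_evaluation_response

# A fenced, otherwise well-formed response parses straight through.
fenced = """```json
{"suitable": false, "reason": "过于特指某个人名"}
```"""
assert parse_evaluation_response(fenced) == {"suitable": False, "reason": "过于特指某个人名"}

# Even a half-broken response can still yield a usable result via the
# regex fallback on the "suitable"/"reason" keys.
broken = '{"suitable": true, "reason": "匹配度较好'
assert parse_evaluation_response(broken)["suitable"] is True
```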
@@ -125,20 +110,19 @@ async def single_expression_check(situation: str, style: str) -> tuple[bool, str
 
 class ExpressionAutoCheckTask(AsyncTask):
-    """表达方式自动检查定时任务"""
+    """表达方式自动检查定时任务。"""
 
     def __init__(self):
-        # 从配置中获取检查间隔和一次检查数量
         check_interval = global_config.expression.expression_auto_check_interval
 
         super().__init__(
             task_name="Expression Auto Check Task",
-            wait_before_start=60,  # 启动后等待60秒再开始第一次检查
+            wait_before_start=60,
             run_interval=check_interval,
         )
 
     async def _select_expressions(self, count: int) -> List[Expression]:
         """
-        随机选择指定数量的未检查表达方式
+        随机选择指定数量的未检查表达方式。
 
         Args:
             count: 需要选择的数量
@@ -158,11 +142,12 @@ class ExpressionAutoCheckTask(AsyncTask):
                 logger.info("没有未检查的表达方式")
                 return []
 
-            # 随机选择指定数量
             selected_count = min(count, len(unevaluated_expressions))
             selected = random.sample(unevaluated_expressions, selected_count)
 
-            logger.info(f"从 {len(unevaluated_expressions)} 条未检查表达方式中随机选择了 {selected_count} 条")
+            logger.info(
+                f"从 {len(unevaluated_expressions)} 条未检查表达方式中随机选择了 {selected_count} 条"
+            )
 
             return selected
 
         except Exception as e:
@@ -171,35 +156,35 @@ class ExpressionAutoCheckTask(AsyncTask):
 
     async def _evaluate_expression(self, expression: Expression) -> bool:
         """
-        评估单个表达方式
+        评估单个表达方式。
 
         Args:
             expression: 要评估的表达方式
 
         Returns:
-            True表示通过,False表示不通过
+            True 表示通过,False 表示不通过
         """
-
         suitable, reason, error = await single_expression_check(
             expression.situation,
             expression.style,
         )
 
-        # 更新数据库
         try:
             set_review_state(expression.id, True, not suitable, "ai")
 
             status = "通过" if suitable else "不通过"
+            # 保留这段注释,方便后续需要时恢复更详细的审核日志。
             # logger.info(
-            #     f"表达方式评估完成 [ID: {expression.id}] - {status} | "
-            #     f"Situation: {expression.situation}... | "
-            #     f"Style: {expression.style}... | "
-            #     f"Reason: {reason[:50]}..."
+            #     f"表达方式评估完成 [ID: {expression.id}] - {status} | "
+            #     f"Situation: {expression.situation}... | "
+            #     f"Style: {expression.style}... | "
+            #     f"Reason: {reason[:50]}..."
             # )
 
             if error:
                 logger.warning(f"表达方式评估时出现错误 [ID: {expression.id}]: {error}")
 
+            logger.debug(f"表达方式 [ID: {expression.id}] 评估完成: {status}, reason={reason}")
             return suitable
 
         except Exception as e:
@@ -207,9 +192,8 @@ class ExpressionAutoCheckTask(AsyncTask):
             return False
 
     async def run(self):
-        """执行检查任务"""
+        """执行检查任务。"""
         try:
-            # 检查是否启用自动检查
             if not global_config.expression.expression_self_reflect:
                 logger.debug("表达方式自动检查未启用,跳过本次执行")
                 return
@@ -221,26 +205,22 @@ class ExpressionAutoCheckTask(AsyncTask):
 
             logger.info(f"开始执行表达方式自动检查,本次将检查 {check_count} 条")
 
-            # 选择要检查的表达方式
             expressions = await self._select_expressions(check_count)
-
             if not expressions:
                 logger.info("没有需要检查的表达方式")
                 return
 
-            # 逐个评估
             passed_count = 0
             failed_count = 0
-
-            for i, expression in enumerate(expressions, 1):
-                logger.debug(f"正在评估 [{i}/{len(expressions)}]: ID={expression.id}")
+            for index, expression in enumerate(expressions, 1):
+                logger.debug(f"正在评估 [{index}/{len(expressions)}]: ID={expression.id}")
 
                 if await self._evaluate_expression(expression):
                     passed_count += 1
                 else:
                     failed_count += 1
 
-                # 避免请求过快
                 await asyncio.sleep(0.3)
 
             logger.info(
diff --git a/src/learners/expression_utils.py b/src/learners/expression_utils.py
index 23c41c39..6f68480c 100644
--- a/src/learners/expression_utils.py
+++ b/src/learners/expression_utils.py
@@ -1,14 +1,14 @@
-from json_repair import repair_json
-from typing import Any, List, Optional, Tuple
-
 import json
 import re
+from typing import Any, Dict, List, Optional, Tuple
+
+from json_repair import repair_json
 
-from src.config.config import global_config
 from src.common.data_models.llm_service_data_models import LLMGenerationOptions
-from src.services.llm_service import LLMServiceClient
-from src.prompt.prompt_manager import prompt_manager
 from src.common.logger import get_logger
+from src.config.config import global_config
+from src.prompt.prompt_manager import prompt_manager
+from src.services.llm_service import LLMServiceClient
 
 logger = get_logger("expression_utils")
 
@@ -16,17 +16,7 @@ judge_llm = LLMServiceClient(task_name="utils", request_type="expression_check")
 
 def _normalize_repair_json_result(repaired_result: Any) -> str:
-    """将 repair_json 的返回值规范化为 JSON 字符串。
-
-    Args:
-        repaired_result: `repair_json` 的返回值,可能是字符串或带附加信息的元组。
-
-    Returns:
-        str: 可供 `json.loads` 继续解析的 JSON 字符串。
-
-    Raises:
-        TypeError: 当返回值无法规范化为字符串时抛出。
-    """
+    """将 `repair_json` 的返回结果统一转换为字符串。"""
     if isinstance(repaired_result, str):
         return repaired_result
     if isinstance(repaired_result, tuple) and repaired_result:
@@ -37,22 +27,121 @@ def _normalize_repair_json_result(repaired_result: Any) -> str:
     raise TypeError(f"repair_json 返回了无法处理的结果类型: {type(repaired_result)}")
 
 
+def _strip_markdown_code_fence(text: str) -> str:
+    """移除 LLM 可能附带的 Markdown 代码块包裹。"""
+    raw = text.strip()
+    if match := re.search(r"```json\s*(.*?)\s*```", raw, re.DOTALL):
+        return match[1].strip()
+    raw = re.sub(r"^```\s*", "", raw, flags=re.MULTILINE)
+    raw = re.sub(r"```\s*$", "", raw, flags=re.MULTILINE)
+    return raw.strip()
+
+
+def _extract_json_object_candidate(text: str) -> str:
+    """尽量从文本中提取首个 JSON 对象片段。"""
+    start_index = text.find("{")
+    end_index = text.rfind("}")
+    if start_index != -1 and end_index != -1 and start_index < end_index:
+        return text[start_index : end_index + 1].strip()
+    return text.strip()
+
+
+def _extract_reason_from_text(text: str) -> Optional[str]:
+    """从格式不完整的 JSON 文本中兜底提取 reason 字段。"""
+    reason_key_match = re.search(r'["“”]?reason["“”]?\s*:\s*', text, re.IGNORECASE)
+    if reason_key_match is None:
+        return None
+
+    value_text = text[reason_key_match.end() :].strip()
+    if not value_text:
+        return None
+
+    if value_text.endswith("}"):
+        value_text = value_text[:-1].rstrip()
+    if value_text.endswith(","):
+        value_text = value_text[:-1].rstrip()
+    if not value_text:
+        return None
+
+    if value_text[0] in {'"', "'", "“", "”", "‘", "’"}:
+        value_text = value_text[1:]
+    while value_text and value_text[-1] in {'"', "'", "“", "”", "‘", "’"}:
+        value_text = value_text[:-1].rstrip()
+
+    return value_text.strip() or None
+
+
+def _normalize_reason_text(reason: Any) -> str:
+    """清理解析后 reason 中残留的包裹引号。"""
+    normalized_reason = str(reason).strip()
+
+    if len(normalized_reason) >= 2 and normalized_reason[0] == normalized_reason[-1]:
+        if normalized_reason[0] in {'"', "'", "“", "”", "‘", "’"}:
+            normalized_reason = normalized_reason[1:-1].strip()
+
+    if normalized_reason.endswith('"') and normalized_reason.count('"') % 2 == 1:
+        normalized_reason = normalized_reason[:-1].rstrip()
+    if normalized_reason.endswith("'") and normalized_reason.count("'") % 2 == 1:
+        normalized_reason = normalized_reason[:-1].rstrip()
+    if normalized_reason.endswith('"') and not normalized_reason.startswith('"'):
+        normalized_reason = normalized_reason[:-1].rstrip()
+    if normalized_reason.endswith("'") and not normalized_reason.startswith("'"):
+        normalized_reason = normalized_reason[:-1].rstrip()
+
+    return normalized_reason
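`_normalize_reason_text` strips wrapper quotes only when they form a matched pair of the same character, or when a trailing quote is clearly dangling; a mismatched `“…”` pair passes through untouched. For example (module-private helper, exercised here for illustration only):

```python
from src.learners.expression_utils import _normalize_reason_text

assert _normalize_reason_text('"匹配度不足"') == "匹配度不足"    # matched ASCII pair stripped
assert _normalize_reason_text('匹配度不足"') == "匹配度不足"     # dangling unmatched quote dropped
assert _normalize_reason_text("“匹配度不足”") == "“匹配度不足”"  # mismatched pair characters survive
```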
+
+
+def parse_evaluation_response(response: str) -> Dict[str, Any]:
+    """解析表达方式评估结果,兼容不完全合法的 JSON。"""
+    raw = _strip_markdown_code_fence(response)
+    if not raw:
+        raise ValueError("LLM 响应为空")
+
+    parse_candidates = [raw]
+    json_candidate = _extract_json_object_candidate(raw)
+    if json_candidate and json_candidate not in parse_candidates:
+        parse_candidates.append(json_candidate)
+
+    for candidate in parse_candidates:
+        parsed = _try_parse(candidate)
+        if isinstance(parsed, dict):
+            if "reason" in parsed:
+                parsed["reason"] = _normalize_reason_text(parsed["reason"])
+            return parsed
+
+        fixed_candidate = fix_chinese_quotes_in_json(candidate)
+        if fixed_candidate != candidate:
+            parsed = _try_parse(fixed_candidate)
+            if isinstance(parsed, dict):
+                if "reason" in parsed:
+                    parsed["reason"] = _normalize_reason_text(parsed["reason"])
+                return parsed
+
+    suitable_match = re.search(r'["“”]?suitable["“”]?\s*:\s*(true|false)', raw, re.IGNORECASE)
+    reason = _extract_reason_from_text(json_candidate or raw)
+    if suitable_match is None or reason is None:
+        raise ValueError(f"无法解析 LLM 响应为评估结果 JSON: {response}")
+
+    return {
+        "suitable": suitable_match.group(1).lower() == "true",
+        "reason": _normalize_reason_text(reason),
+    }
+
+
 async def check_expression_suitability(situation: str, style: str) -> Tuple[bool, str, Optional[str]]:
     """
-    执行单次LLM评估
+    执行单次 LLM 评估。
 
     Args:
-        situation: 情境
+        situation: 情景
         style: 风格
 
     Returns:
         (suitable, reason, error) 元组,如果出错则 suitable 为 False,error 包含错误信息
     """
-    # 构建评估提示词
-    # 基础评估标准
     base_criteria = [
         "表达方式或言语风格是否与使用条件或使用情景匹配",
-        "允许部分语法错误或口头化或缺省出现",
+        "允许部分语法错误或口语化或缺省出现",
         "表达方式不能太过特指,需要具有泛用性",
         "一般不涉及具体的人名或名称",
     ]
@@ -60,7 +149,6 @@ async def check_expression_suitability(situation: str, style: str) -> Tuple[bool
     if custom_criteria := global_config.expression.expression_auto_check_custom_criteria:
         base_criteria.extend(custom_criteria)
 
-    # 构建评估标准列表字符串
     criteria_list = "\n".join([f"{i + 1}. {criterion}" for i, criterion in enumerate(base_criteria)])
 
     prompt_template = prompt_manager.get_prompt("expression_evaluation")
@@ -81,18 +169,13 @@ async def check_expression_suitability(situation: str, style: str) -> Tuple[bool
     logger.debug(f"评估结果: {response}")
 
     try:
-        evaluation = json.loads(response)
-    except json.JSONDecodeError:
-        try:
-            response_repaired = _normalize_repair_json_result(repair_json(response))
-            evaluation = json.loads(response_repaired)
-        except Exception as e:
-            raise ValueError(f"无法解析LLM响应为JSON: {response}") from e
+        evaluation = parse_evaluation_response(response)
     except Exception as e:
         return False, f"评估表达方式时发生错误: {e}", str(e)
+
     try:
-        suitable = evaluation.get("suitable", False)
-        reason = evaluation.get("reason", "未提供理由")
+        suitable = bool(evaluation.get("suitable", False))
+        reason = _normalize_reason_text(evaluation.get("reason", "未提供理由"))
         logger.debug(f"评估结果: {'通过' if suitable else '不通过'}")
         return suitable, reason, None
     except Exception as e:
@@ -100,69 +183,48 @@ async def check_expression_suitability(situation: str, style: str) -> Tuple[bool
 
 def fix_chinese_quotes_in_json(text: str) -> str:
-    """使用状态机修复 JSON 字符串值中的中文引号"""
-    result = []
-    i = 0
+    """使用状态机修复 JSON 字符串值中的中文引号。"""
+    result: List[str] = []
     in_string = False
     escape_next = False
 
-    while i < len(text):
-        char = text[i]
+    for char in text:
         if escape_next:
-            # 当前字符是转义字符后的字符,直接添加
             result.append(char)
             escape_next = False
-            i += 1
             continue
+
         if char == "\\":
-            # 转义字符
             result.append(char)
             escape_next = True
-            i += 1
             continue
-        if char == '"' and not escape_next:
-            # 遇到英文引号,切换字符串状态
+
+        if char == '"':
             in_string = not in_string
             result.append(char)
-            i += 1
             continue
+
         if in_string and char in ["“", "”"]:
             result.append('\\"')
-        else:
-            result.append(char)
-        i += 1
+            continue
+
+        result.append(char)
 
     return "".join(result)
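Note that the state machine only rewrites Chinese quotes that occur inside an ASCII-quoted JSON string; Chinese quotes acting as ordinary text outside a string are left alone. For example:

```python
import json

from src.learners.expression_utils import fix_chinese_quotes_in_json

sample = '{"reason": "他说“好”"}'
fixed = fix_chinese_quotes_in_json(sample)
assert fixed == '{"reason": "他说\\"好\\""}'
assert json.loads(fixed)["reason"] == '他说"好"'

# Chinese quotes outside a string context are untouched.
assert fix_chinese_quotes_in_json('“不在字符串里”') == '“不在字符串里”'
```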
 
 
 def parse_expression_response(response: str) -> Tuple[List[Tuple[str, str, str]], List[Tuple[str, str]]]:
     """
-    解析 LLM 返回的表达风格总结和黑话 JSON,提取两个列表。
-
-    期望的 JSON 结构:
-    [
-        {"situation": "AAAAA", "style": "BBBBB", "source_id": "3"},  // 表达方式
-        {"content": "词条", "source_id": "12"},  // 黑话
-        ...
-    ]
+    解析 LLM 返回的表达方式总结和黑话 JSON,提取两个列表。
 
     Returns:
-        Tuple[List[Tuple[str, str, str]], List[Tuple[str, str]]]:
-            第一个列表是表达方式 (situation, style, source_id)
-            第二个列表是黑话 (content, source_id)
+        第一个列表是表达方式 (situation, style, source_id)
+        第二个列表是黑话 (content, source_id)
     """
     if not response:
         return [], []
 
-    raw = response.strip()
-
-    if match := re.search(r"```json\s*(.*?)\s*```", raw, re.DOTALL):
-        raw = match[1].strip()
-    else:
-        # 去掉可能存在的通用 ``` 包裹
-        raw = re.sub(r"^```\s*", "", raw, flags=re.MULTILINE)
-        raw = re.sub(r"```\s*$", "", raw, flags=re.MULTILINE)
-        raw = raw.strip()
+    raw = _strip_markdown_code_fence(response)
 
     parsed = _try_parse(raw)
     if parsed is None:
@@ -180,22 +242,21 @@ def parse_expression_response(response: str) -> Tuple[List[Tuple[str, str, str]]
         logger.error(f"表达风格解析结果类型异常: {type(parsed)}, 内容: {parsed}")
         return [], []
 
-    expressions: List[Tuple[str, str, str]] = []  # (situation, style, source_id)
-    jargon_entries: List[Tuple[str, str]] = []  # (content, source_id)
+    expressions: List[Tuple[str, str, str]] = []
+    jargon_entries: List[Tuple[str, str]] = []
 
     for item in parsed_list:
         if not isinstance(item, dict):
             continue
 
-        # 检查是否是表达方式条目(有 situation 和 style)
         situation = str(item.get("situation", "")).strip()
         style = str(item.get("style", "")).strip()
         source_id = str(item.get("source_id", "")).strip()
 
         if situation and style and source_id:
-            # 表达方式条目
             expressions.append((situation, style, source_id))
             continue
+
         content = str(item.get("content", "")).strip()
         if content and source_id:
             jargon_entries.append((content, source_id))
@@ -204,25 +265,16 @@ def parse_expression_response(response: str) -> Tuple[List[Tuple[str, str, str]]
 
 def is_single_char_jargon(content: str) -> bool:
-    """
-    判断是否是单字黑话(单个汉字、英文或数字)
-
-    Args:
-        content: 词条内容
-
-    Returns:
-        bool: 如果是单字黑话返回True,否则返回False
-    """
+    """判断是否是单字黑话(单个汉字、英文或数字)。"""
     if not content or len(content) != 1:
         return False
 
     char = content[0]
-    # 判断是否是单个汉字、单个英文字母或单个数字
     return (
-        "\u4e00" <= char <= "\u9fff"  # 汉字
-        or "a" <= char <= "z"  # 小写字母
-        or "A" <= char <= "Z"  # 大写字母
-        or "0" <= char <= "9"  # 数字
+        "\u4e00" <= char <= "\u9fff"
+        or "a" <= char <= "z"
+        or "A" <= char <= "Z"
+        or "0" <= char <= "9"
    )
diff --git a/src/maisaka/builtin_tool/finish.py b/src/maisaka/builtin_tool/finish.py
index de2d73e3..17ecbdf1 100644
--- a/src/maisaka/builtin_tool/finish.py
+++ b/src/maisaka/builtin_tool/finish.py
@@ -29,6 +29,6 @@ async def handle_tool(
     tool_ctx.runtime._enter_stop_state()
     return tool_ctx.build_success_result(
         invocation.tool_name,
-        "当前对话循环已结束本轮思考,等待新的外部消息到来。",
+        "当前对话循环已结束本轮思考,等待新的消息到来。",
         metadata={"pause_execution": True},
     )
diff --git a/src/maisaka/builtin_tool/no_reply.py b/src/maisaka/builtin_tool/no_reply.py
index fde97253..70e7d243 100644
--- a/src/maisaka/builtin_tool/no_reply.py
+++ b/src/maisaka/builtin_tool/no_reply.py
@@ -29,6 +29,6 @@ async def handle_tool(
     tool_ctx.runtime._enter_stop_state()
     return tool_ctx.build_success_result(
         invocation.tool_name,
-        "当前对话循环已暂停,等待新消息到来。",
+        "当前暂时停止思考,等待新消息到来。",
         metadata={"pause_execution": True},
     )
diff --git a/src/maisaka/builtin_tool/reply.py b/src/maisaka/builtin_tool/reply.py
index 4b392439..00c392b9 100644
--- a/src/maisaka/builtin_tool/reply.py
+++ b/src/maisaka/builtin_tool/reply.py
@@ -91,10 +91,6 @@ async def handle_tool(
             f"未找到要回复的目标消息,msg_id={target_message_id}",
         )
 
-    logger.info(
-        f"{tool_ctx.runtime.log_prefix} 已触发回复工具,"
-        f"目标消息编号={target_message_id} 引用回复={set_quote} 最新思考={latest_thought!r}"
-    )
     try:
         replyer = replyer_manager.get_replyer(
             chat_stream=tool_ctx.runtime.chat_stream,
diff --git a/src/maisaka/chat_loop_service.py b/src/maisaka/chat_loop_service.py
index c49da004..4c2af2e6 100644
--- a/src/maisaka/chat_loop_service.py
+++ b/src/maisaka/chat_loop_service.py
@@ -408,46 +408,43 @@ class MaisakaChatLoopService:
         if llm_message is not None:
             messages.append(llm_message)
 
+        normalized_injected_messages: List[Message] = []
         for injected_message in injected_user_messages or []:
             normalized_message = str(injected_message or "").strip()
             if not normalized_message:
                 continue
-            messages.append(
+            normalized_injected_messages.append(
                 MessageBuilder()
                 .set_role(RoleType.User)
                 .add_text_content(normalized_message)
                 .build()
             )
 
+        if normalized_injected_messages:
+            insertion_index = self._resolve_injected_user_messages_insertion_index(messages)
+            messages[insertion_index:insertion_index] = normalized_injected_messages
+
         return messages
 
     @staticmethod
-    def _build_tool_names_log_text(tool_definitions: Sequence[ToolDefinitionInput]) -> str:
-        """构造 planner 请求前的工具列表日志文本。
+    def _resolve_injected_user_messages_insertion_index(messages: Sequence[Message]) -> int:
+        """计算 injected meta user messages 在请求中的插入位置。
 
-        Args:
-            tool_definitions: 本轮实际传给 planner 的工具定义列表。
-
-        Returns:
-            str: 适合直接写入日志的单行文本。
+        规则与 deferred attachment 更接近:
+        - 从尾部向前寻找最近的 stopping point;
+        - stopping point 为 assistant 消息或 tool 结果消息;
+        - 找到后插入到其后面;
+        - 若不存在 stopping point,则退回到 system 消息之后。
         """
 
-        tool_names: List[str] = []
-        for tool_definition in tool_definitions:
-            if not isinstance(tool_definition, dict):
-                continue
-            normalized_name = str(tool_definition.get("name") or "").strip()
-            if not normalized_name:
-                function_definition = tool_definition.get("function")
-                if isinstance(function_definition, dict):
-                    normalized_name = str(function_definition.get("name") or "").strip()
-            if normalized_name:
-                tool_names.append(normalized_name)
+        for index in range(len(messages) - 1, -1, -1):
+            message = messages[index]
+            if message.role in {RoleType.Assistant, RoleType.Tool}:
+                return index + 1
 
-        if not tool_names:
-            return "[无工具]"
-
-        return "、".join(tool_names)
+        if messages and messages[0].role == RoleType.System:
+            return 1
+        return 0
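The stopping-point rule in the docstring is easy to check against a toy role list (a self-contained sketch using stand-in types, not the project's `Message`/`RoleType`):

```python
from enum import Enum, auto


class Role(Enum):
    SYSTEM = auto()
    USER = auto()
    ASSISTANT = auto()
    TOOL = auto()


def insertion_index(roles: list[Role]) -> int:
    """Walk backwards to the nearest assistant/tool message; insert right after it."""
    for index in range(len(roles) - 1, -1, -1):
        if roles[index] in {Role.ASSISTANT, Role.TOOL}:
            return index + 1
    # No stopping point: fall back to "right after system", else the front.
    return 1 if roles and roles[0] is Role.SYSTEM else 0


assert insertion_index([Role.SYSTEM, Role.USER, Role.ASSISTANT, Role.USER]) == 3
assert insertion_index([Role.SYSTEM, Role.USER]) == 1
assert insertion_index([]) == 0
```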
 
     async def chat_loop_step(
         self,
@@ -517,11 +514,6 @@ class MaisakaChatLoopService:
         if isinstance(raw_tool_definitions, list):
             all_tools = [item for item in raw_tool_definitions if isinstance(item, dict)]
 
-        logger.info(
-            f"规划器工具列表(request_kind={request_kind}): "
-            f"共 {len(all_tools)} 个 -> {self._build_tool_names_log_text(all_tools)}"
-        )
-
         prompt_section: RenderableType | None = None
         if global_config.debug.show_maisaka_thinking:
             image_display_mode: str = "path_link" if global_config.maisaka.show_image_path else "legacy"
@@ -536,13 +528,6 @@ class MaisakaChatLoopService:
             tool_definitions=list(all_tools),
         )
 
-        logger.info(
-            f"规划器请求开始(request_kind={request_kind}): "
-            f"已选上下文消息数={len(selected_history)} "
-            f"大模型消息数={len(built_messages)} "
-            f"工具数={len(all_tools)} "
-            f"启用打断={self._interrupt_flag is not None}"
-        )
         generation_result = await self._llm_chat.generate_response_with_messages(
             message_factory=message_factory,
             options=LLMGenerationOptions(
@@ -609,7 +594,7 @@ class MaisakaChatLoopService:
         *,
         max_context_size: Optional[int] = None,
     ) -> tuple[List[LLMContextMessage], str]:
-        """??????? LLM ???????"""
+        """选择LLM上下文消息"""
 
         effective_context_size = max(1, int(max_context_size or global_config.chat.max_context_size))
         selected_indices: List[int] = []
@@ -627,7 +612,7 @@ class MaisakaChatLoopService:
                 break
 
         if not selected_indices:
-            return [], f"???????? {effective_context_size} ? user/assistant??? 0 ??"
+            return [], f"没有选择到上下文消息,实际发送 {effective_context_size} 条 user/assistant 消息"
 
         selected_indices.reverse()
         selected_history = [chat_history[index] for index in selected_indices]
@@ -644,47 +629,6 @@ class MaisakaChatLoopService:
             selection_reason,
         )
 
-    @staticmethod
-    def _select_llm_context_messages(chat_history: List[LLMContextMessage]) -> tuple[List[LLMContextMessage], str]:
-        """选择真正发送给 LLM 的上下文消息。
-
-        Args:
-            chat_history: 当前全部对话历史。
-
-        Returns:
-            tuple[List[LLMContextMessage], str]: `(已选上下文, 选择说明)`。
-        """
-
-        max_context_size = max(1, int(global_config.chat.max_context_size))
-        selected_indices: List[int] = []
-        counted_message_count = 0
-
-        for index in range(len(chat_history) - 1, -1, -1):
-            message = chat_history[index]
-            if message.to_llm_message() is None:
-                continue
-
-            selected_indices.append(index)
-            if message.count_in_context:
-                counted_message_count += 1
-                if counted_message_count >= max_context_size:
-                    break
-
-        if not selected_indices:
-            return [], f"上下文判定:最近 {max_context_size} 条 user/assistant(当前 0 条)"
-
-        selected_indices.reverse()
-        selected_history = [chat_history[index] for index in selected_indices]
-        selected_history, hidden_assistant_count = MaisakaChatLoopService._hide_early_assistant_messages(selected_history)
-        selected_history, _ = drop_orphan_tool_results(selected_history)
-        return (
-            selected_history,
-            (
-                f"上下文判定:最近 {max_context_size} 条 user/assistant;"
-                f"展示并发送窗口内消息 {len(selected_history)} 条"
-            ),
-        )
-
     @staticmethod
     def _hide_early_assistant_messages(
         selected_history: List[LLMContextMessage],
diff --git a/src/maisaka/display_utils.py b/src/maisaka/display_utils.py
index 23209972..311a8013 100644
--- a/src/maisaka/display_utils.py
+++ b/src/maisaka/display_utils.py
@@ -4,6 +4,7 @@ from typing import Any
 
 _REQUEST_PANEL_STYLE_MAP: dict[str, tuple[str, str]] = {
+    "planner": ("\u004d\u0061\u0069\u0053\u0061\u006b\u0061 \u5927\u6a21\u578b\u8bf7\u6c42 - \u5bf9\u8bdd\u5355\u6b65", "green"),
     "timing_gate": ("\u004d\u0061\u0069\u0053\u0061\u006b\u0061 \u5927\u6a21\u578b\u8bf7\u6c42 - Timing Gate \u5b50\u4ee3\u7406", "bright_magenta"),
     "replyer": ("\u004d\u0061\u0069\u0053\u0061\u006b\u0061 \u56de\u590d\u5668 Prompt", "bright_yellow"),
     "emotion": ("MaiSaka Emotion Tool Prompt", "bright_cyan"),
diff --git a/src/maisaka/prompt_cli_renderer.py b/src/maisaka/prompt_cli_renderer.py
index d1b8d6ec..5af8ca8c 100644
--- a/src/maisaka/prompt_cli_renderer.py
+++ b/src/maisaka/prompt_cli_renderer.py
@@ -269,7 +269,7 @@ class PromptCLIVisualizer:
         )
 
         return (
-            "…"
+            "…"
             ""
             f"{html.escape(tool_name)}"
             ""
@@ -638,6 +638,9 @@ class PromptCLIVisualizer:
             border-radius: 14px;
             overflow: hidden;
         }}
+        .tool-call-card {{
+            border-color: #ff8700;
+        }}
         .tool-card:first-of-type {{
             margin-top: 0;
         }}
@@ -672,6 +675,9 @@ class PromptCLIVisualizer:
             padding: 12px 14px;
             background: rgba(255, 255, 255, 0.52);
         }}
+        .tool-call-card .tool-card-body {{
+            border-top-color: #ff8700;
+        }}
         .tool-card-meta {{
             margin-bottom: 10px;
             color: #a21caf;
diff --git a/src/maisaka/reasoning_engine.py b/src/maisaka/reasoning_engine.py
index 00b0604b..fb847fe9 100644
--- a/src/maisaka/reasoning_engine.py
+++ b/src/maisaka/reasoning_engine.py
@@ -118,36 +118,27 @@ class MaisakaReasoningEngine:
         )
         self._runtime._chat_loop_service.set_interrupt_flag(None)
 
-    async def _run_interruptible_sub_agent(
+    async def _run_timing_gate_sub_agent(
         self,
         *,
         context_message_limit: int,
         system_prompt: str,
         tool_definitions: list[dict[str, Any]],
     ) -> Any:
-        """运行一轮可被新消息打断的临时子代理请求。"""
+        """运行一轮 Timing Gate 子代理请求。
 
-        interrupt_flag = asyncio.Event()
-        interrupted = False
-        self._runtime._bind_planner_interrupt_flag(interrupt_flag)
-        try:
-            return await self._runtime.run_sub_agent(
-                context_message_limit=context_message_limit,
-                system_prompt=system_prompt,
-                request_kind="timing_gate",
-                interrupt_flag=interrupt_flag,
-                max_tokens=TIMING_GATE_MAX_TOKENS,
-                temperature=0.1,
-                tool_definitions=tool_definitions,
-            )
-        except ReqAbortException:
-            interrupted = True
-            raise
-        finally:
-            self._runtime._unbind_planner_interrupt_flag(
-                interrupt_flag,
-                interrupted=interrupted,
-            )
+        Timing Gate 阶段不再响应新的 planner 打断,只有主 planner 阶段允许被打断。
+        """
+
+        return await self._runtime.run_sub_agent(
+            context_message_limit=context_message_limit,
+            system_prompt=system_prompt,
+            request_kind="timing_gate",
+            interrupt_flag=None,
+            max_tokens=TIMING_GATE_MAX_TOKENS,
+            temperature=0.1,
+            tool_definitions=tool_definitions,
+        )
 
     @staticmethod
     def _build_timing_gate_fallback_prompt() -> str:
@@ -240,18 +231,19 @@
     async def _run_timing_gate(
         self,
         anchor_message: SessionMessage,
-    ) -> tuple[Literal["continue", "no_reply", "wait"], Any, list[str]]:
+    ) -> tuple[Literal["continue", "no_reply", "wait"], Any, list[str], list[dict[str, Any]]]:
         """运行 Timing Gate 子代理并返回控制决策。"""
 
         if self._runtime._force_next_timing_continue:
             return self._build_forced_continue_timing_result()
 
-        response = await self._run_interruptible_sub_agent(
+        response = await self._run_timing_gate_sub_agent(
            context_message_limit=TIMING_GATE_CONTEXT_LIMIT,
             system_prompt=self._build_timing_gate_system_prompt(),
             tool_definitions=get_timing_tools(),
         )
 
         tool_result_summaries: list[str] = []
+        tool_monitor_results: list[dict[str, Any]] = []
         selected_tool_call: Optional[ToolCall] = None
         for tool_call in response.tool_calls:
             if tool_call.func_name in TIMING_GATE_TOOL_NAMES:
@@ -260,11 +252,11 @@
 
         if selected_tool_call is None:
             logger.warning(f"{self._runtime.log_prefix} Timing Gate 未返回有效控制工具,默认继续执行 Action Loop")
-            return "continue", response, tool_result_summaries
+            return "continue", response, tool_result_summaries, tool_monitor_results
 
         append_history = selected_tool_call.func_name != "continue"
         store_record = selected_tool_call.func_name != "continue"
-        _, result, _ = await self._invoke_tool_call(
+        invocation, result, tool_spec = await self._invoke_tool_call(
             selected_tool_call,
             response.content or "",
             anchor_message,
             append_history=append_history,
             store_record=store_record,
         )
         tool_result_summaries.append(self._build_tool_result_summary(selected_tool_call, result))
+        tool_monitor_results.append(
+            self._build_tool_monitor_result(
+                selected_tool_call,
+                invocation,
+                result,
+                duration_ms=0.0,
+                tool_spec=tool_spec,
+            )
+        )
 
         timing_action = str(result.metadata.get("timing_action") or selected_tool_call.func_name).strip()
         if timing_action not in TIMING_GATE_TOOL_NAMES:
             logger.warning(
                 f"{self._runtime.log_prefix} Timing Gate 返回未知动作 {timing_action!r},将按 continue 处理"
             )
-            return "continue", response, tool_result_summaries
-        return timing_action, response, tool_result_summaries
+            return "continue", response, tool_result_summaries, tool_monitor_results
+        return timing_action, response, tool_result_summaries, tool_monitor_results
 
-    def _build_forced_continue_timing_result(self) -> tuple[Literal["continue"], ChatResponse, list[str]]:
+    def _build_forced_continue_timing_result(
+        self,
+    ) -> tuple[Literal["continue"], ChatResponse, list[str], list[dict[str, Any]]]:
         """构造跳过 Timing Gate 时使用的伪 continue 结果。"""
 
         reason = self._runtime._consume_force_next_timing_continue_reason() or "本轮直接跳过 Timing Gate 并视作 continue。"
@@ -309,6 +312,7 @@
                 prompt_section=None,
             ),
             [f"- continue [强制跳过]: {reason}"],
+            [],
         )
 
     @staticmethod
@@ -383,10 +387,14 @@
         planner_started_at = 0.0
         planner_duration_ms = 0.0
         timing_duration_ms = 0.0
+        current_stage_started_at = 0.0
         timing_action: Optional[str] = None
         timing_response: Optional[ChatResponse] = None
         timing_tool_results: Optional[list[str]] = None
+        timing_tool_monitor_results: Optional[list[dict[str, Any]]] = None
        response: Optional[ChatResponse] = None
+        action_tool_definitions: list[dict[str, Any]] = []
+        planner_extra_lines: list[str] = []
         tool_result_summaries: list[str] = []
         tool_monitor_results: list[dict[str, Any]] = []
         try:
@@ -399,10 +407,14 @@
             )
 
             if timing_gate_required:
+                current_stage_started_at = time.time()
                 timing_started_at = time.time()
-                timing_action, timing_response, timing_tool_results = await self._run_timing_gate(
-                    anchor_message
-                )
+                (
+                    timing_action,
+                    timing_response,
+                    timing_tool_results,
+                    timing_tool_monitor_results,
+                ) = await self._run_timing_gate(anchor_message)
                 timing_duration_ms = (time.time() - timing_started_at) * 1000
                 cycle_detail.time_records["timing_gate"] = timing_duration_ms / 1000
                 await emit_timing_gate_result(
@@ -430,6 +442,7 @@
             )
 
             planner_started_at = time.time()
+            current_stage_started_at = planner_started_at
             action_tool_definitions, deferred_tools_reminder = await self._build_action_tool_definitions()
             logger.info(
                 f"{self._runtime.log_prefix} 规划器开始执行: "
@@ -472,14 +485,46 @@
 
                     if not response.content:
                         break
-            except ReqAbortException:
+            except ReqAbortException as exc:
                 interrupted_at = time.time()
+                interrupted_stage_label = "Planner"
+                interrupted_text = (
+                    "Planner 在流式响应阶段被新消息打断。"
+                    "本轮未完成,因此这里展示的是中断说明而不是完整返回。"
+                )
+                interrupted_response = ChatResponse(
+                    content=interrupted_text or None,
+                    tool_calls=[],
+                    request_messages=[],
+                    raw_message=AssistantMessage(
+                        content=interrupted_text,
+                        timestamp=datetime.now(),
+                        tool_calls=[],
+                        source_kind="perception",
+                    ),
+                    selected_history_count=len(self._runtime._chat_history),
+                    tool_count=len(action_tool_definitions),
+                    prompt_tokens=0,
+                    built_message_count=0,
+                    completion_tokens=0,
+                    total_tokens=0,
+                    prompt_section=None,
+                )
+                interrupted_extra_lines = [
+                    "状态:已被新消息打断",
+                    f"打断位置:{interrupted_stage_label} 请求流式响应阶段",
+                    f"打断耗时:{interrupted_at - current_stage_started_at:.3f} 秒",
+                    f"打断原因:{str(exc) or '收到外部中断信号'}",
+                ]
+                interrupted_extra_lines.append("展示内容:以下为 Maisaka 侧记录的中断说明")
+                response = interrupted_response
+                planner_extra_lines = interrupted_extra_lines
                 logger.info(
-                    f"{self._runtime.log_prefix} 规划器打断成功: "
+                    f"{self._runtime.log_prefix} {interrupted_stage_label} 打断成功: "
                     f"回合={round_index + 1} "
-                    f"开始时间={planner_started_at:.3f} "
+                    f"开始时间={current_stage_started_at:.3f} "
                     f"打断时间={interrupted_at:.3f} "
-                    f"耗时={interrupted_at - planner_started_at:.3f} 秒"
+                    f"耗时={interrupted_at - current_stage_started_at:.3f} 秒"
                 )
                 if not self._should_retry_planner_after_interrupt(
                     round_index=round_index,
@@ -506,6 +551,7 @@
             completed_cycle = self._end_cycle(cycle_detail)
             self._runtime._render_context_usage_panel(
                 cycle_id=cycle_detail.cycle_id,
+                time_records=dict(completed_cycle.time_records),
                 timing_selected_history_count=(
                     timing_response.selected_history_count if timing_response is not None else None
                 ),
@@ -516,6 +562,7 @@
                 timing_response=timing_response.content or "" if timing_response is not None else "",
                 timing_tool_calls=timing_response.tool_calls if timing_response is not None else None,
                 timing_tool_results=timing_tool_results,
+                timing_tool_detail_results=timing_tool_monitor_results,
                 timing_prompt_section=(
                     timing_response.prompt_section if timing_response is not None else None
                 ),
@@ -528,6 +575,7 @@
                 planner_tool_results=tool_result_summaries,
                 planner_tool_detail_results=tool_monitor_results,
                 planner_prompt_section=response.prompt_section if response is not None else None,
+                planner_extra_lines=planner_extra_lines,
             )
             await emit_planner_finalized(
                 session_id=self._runtime.session_id,
@@ -1125,6 +1173,7 @@
         invocation: ToolInvocation,
         result: ToolExecutionResult,
         duration_ms: float,
+        tool_spec: Optional[ToolSpec] = None,
     ) -> dict[str, Any]:
         """构建 planner.finalized 中单个工具的监控结果。"""
 
@@ -1133,9 +1182,20 @@
         if monitor_detail is not None:
             normalized_detail = self._normalize_tool_record_value(monitor_detail)
 
+        monitor_card = result.metadata.get("monitor_card")
+        normalized_card = None
+        if monitor_card is not None:
+            normalized_card = self._normalize_tool_record_value(monitor_card)
+
+        monitor_sub_cards = result.metadata.get("monitor_sub_cards")
+        normalized_sub_cards = None
+        if monitor_sub_cards is not None:
+            normalized_sub_cards = self._normalize_tool_record_value(monitor_sub_cards)
+
         return {
             "tool_call_id": tool_call.call_id,
             "tool_name": tool_call.func_name,
+            "tool_title": tool_spec.title.strip() if tool_spec is not None and tool_spec.title.strip() else "",
             "tool_args": self._normalize_tool_record_value(
                 invocation.arguments if isinstance(invocation.arguments, dict) else {}
             ),
@@ -1143,6 +1203,8 @@
             "duration_ms": round(duration_ms, 2),
             "summary": self._build_tool_result_summary(tool_call, result),
             "detail": normalized_detail,
+            "card": normalized_card,
+            "sub_cards": normalized_sub_cards,
         }
 
     async def _handle_tool_calls(
@@ -1178,7 +1240,7 @@
                 self._append_tool_execution_result(tool_call, result)
                 tool_result_summaries.append(self._build_tool_result_summary(tool_call, result))
                 tool_monitor_results.append(
-                    self._build_tool_monitor_result(tool_call, invocation, result, duration_ms=0.0)
+                    self._build_tool_monitor_result(tool_call, invocation, result, duration_ms=0.0, tool_spec=None)
                 )
                 return False, tool_result_summaries, tool_monitor_results
 
@@ -1210,7 +1272,13 @@
             self._append_tool_execution_result(tool_call, result)
             tool_result_summaries.append(self._build_tool_result_summary(tool_call, result))
             tool_monitor_results.append(
-                self._build_tool_monitor_result(tool_call, invocation, result, tool_duration_ms)
+                self._build_tool_monitor_result(
+                    tool_call,
+                    invocation,
+                    result,
+                    tool_duration_ms,
+                    tool_spec=tool_spec_map.get(invocation.tool_name),
+                )
             )
 
             if not result.success and tool_call.func_name == "reply":
diff --git a/src/maisaka/runtime.py b/src/maisaka/runtime.py
index a3dff283..ccb3834b 100644
--- a/src/maisaka/runtime.py
+++ b/src/maisaka/runtime.py
@@ -479,15 +479,23 @@ class MaisakaHeartFlowChatting:
     def build_deferred_tools_reminder(self) -> str:
         """构造供 planner 使用的 deferred tools 提示消息。"""
 
-        undiscovered_tool_names = [
-            tool_name
-            for tool_name in self.deferred_tool_specs_by_name
+        undiscovered_tool_specs = [
+            tool_spec
+            for tool_name, tool_spec in self.deferred_tool_specs_by_name.items()
             if tool_name not in self.discovered_tool_names
         ]
-        if not undiscovered_tool_names:
+        if not undiscovered_tool_specs:
             return ""
 
-        tool_lines = [f"{index}. {tool_name}" for index, tool_name in enumerate(undiscovered_tool_names, start=1)]
+        tool_lines: list[str] = []
+        for index, tool_spec in enumerate(undiscovered_tool_specs, start=1):
+            tool_name = tool_spec.name.strip()
+            tool_description = tool_spec.brief_description.strip()
+            if tool_description:
+                tool_lines.append(f"{index}. {tool_name}: {tool_description}")
+            else:
+                tool_lines.append(f"{index}. {tool_name}")
+
         reminder_lines = [
             "",
             "以下工具当前未直接暴露给你,但可以通过 tool_search 工具发现并在后续轮次中使用:",
@@ -803,6 +811,7 @@ class MaisakaHeartFlowChatting:
         self,
         *,
         cycle_id: Optional[int] = None,
+        time_records: Optional[dict[str, float]] = None,
         timing_selected_history_count: Optional[int] = None,
         timing_prompt_tokens: Optional[int] = None,
         timing_action: str = "",
@@ -818,6 +827,7 @@ class MaisakaHeartFlowChatting:
         planner_tool_results: Optional[list[str]] = None,
         planner_tool_detail_results: Optional[list[dict[str, Any]]] = None,
         planner_prompt_section: Optional[RenderableType] = None,
+        planner_extra_lines: Optional[list[str]] = None,
     ) -> None:
         """在终端展示当前聊天流本轮 cycle 的最终结果。"""
         if not global_config.debug.show_maisaka_thinking:
@@ -830,6 +840,7 @@ class MaisakaHeartFlowChatting:
         if cycle_id is not None:
             body_lines.append(f"循环编号:{cycle_id}")
 
+        panel_subtitle = self._build_cycle_time_records_text(time_records or {})
         renderables: list[RenderableType] = [Text("\n".join(body_lines))]
         timing_panel = self._build_cycle_stage_panel(
             title="Timing Gate",
@@ -837,33 +848,49 @@ class MaisakaHeartFlowChatting:
             selected_history_count=timing_selected_history_count,
             prompt_tokens=timing_prompt_tokens,
             response_text=timing_response,
-            tool_calls=timing_tool_calls,
-            tool_results=timing_tool_results,
-            tool_detail_results=timing_tool_detail_results,
             prompt_section=timing_prompt_section,
             extra_lines=[f"门控动作:{timing_action}"] if timing_action.strip() else None,
         )
         if timing_panel is not None:
             renderables.append(timing_panel)
 
+        timing_tool_cards = self._build_tool_activity_cards(
+            stage_title="Timing Tool",
+            tool_calls=timing_tool_calls,
+            tool_results=timing_tool_results,
+            tool_detail_results=timing_tool_detail_results,
+            planner_style=False,
+        )
+        if timing_tool_cards:
+            renderables.extend(timing_tool_cards)
+
         planner_panel = self._build_cycle_stage_panel(
             title="Planner",
             border_style="green",
             selected_history_count=planner_selected_history_count,
             prompt_tokens=planner_prompt_tokens,
             response_text=planner_response,
-            tool_calls=planner_tool_calls,
-            tool_results=planner_tool_results,
-            tool_detail_results=planner_tool_detail_results,
             prompt_section=planner_prompt_section,
+            extra_lines=planner_extra_lines,
         )
         if planner_panel is not None:
             renderables.append(planner_panel)
 
+        planner_tool_cards = self._build_tool_activity_cards(
+            stage_title="Planner Tool",
+            tool_calls=planner_tool_calls,
+            tool_results=planner_tool_results,
+            tool_detail_results=planner_tool_detail_results,
+            planner_style=True,
+        )
+        if planner_tool_cards:
+            renderables.extend(planner_tool_cards)
+
         console.print(
             Panel(
                 Group(*renderables),
                 title="MaiSaka 循环",
+                subtitle=panel_subtitle,
                 border_style="bright_blue",
                 padding=(0, 1),
             )
         )
@@ -877,9 +904,6 @@ class MaisakaHeartFlowChatting:
         selected_history_count: Optional[int],
         prompt_tokens: Optional[int],
         response_text: str = "",
-        tool_calls: Optional[list[Any]] = None,
-        tool_results: Optional[list[str]] = None,
-        tool_detail_results: Optional[list[dict[str, Any]]] = None,
         prompt_section: Optional[RenderableType] = None,
         extra_lines: Optional[list[str]] = None,
     ) -> Optional[Panel]:
@@ -889,9 +913,6 @@ class MaisakaHeartFlowChatting:
             selected_history_count is not None,
             prompt_tokens is not None,
             bool(response_text.strip()),
-            bool(tool_calls),
-            bool(tool_results),
-            bool(tool_detail_results),
             prompt_section is not None,
             bool(extra_lines),
         ])
@@ -918,40 +939,11 @@ class MaisakaHeartFlowChatting:
                 Panel(
                     Text(normalized_response),
                     title="Maisaka 返回",
-                    border_style="green",
+                    border_style=border_style,
                     padding=(0, 1),
                 )
             )
 
-        normalized_tool_calls = build_tool_call_summary_lines(tool_calls or [])
-        if normalized_tool_calls:
-            renderables.append(
-                Panel(
-                    Text("\n".join(normalized_tool_calls)),
-                    title="工具调用",
-                    border_style="magenta",
-                    padding=(0, 1),
-                )
-            )
-
-        normalized_tool_results = self._filter_redundant_tool_results(
-            tool_results=tool_results or [],
-            tool_detail_results=tool_detail_results or [],
-        )
-        if normalized_tool_results:
-            renderables.append(
-                Panel(
-                    Text("\n".join(normalized_tool_results)),
-                    title="工具结果",
-                    border_style="yellow",
-                    padding=(0, 1),
-                )
-            )
-
-        detail_panels = self._build_tool_detail_panels(tool_detail_results or [])
-        if detail_panels:
-            renderables.extend(detail_panels)
-
         return Panel(
             Group(*renderables),
             title=title,
@@ -959,6 +951,75 @@ class MaisakaHeartFlowChatting:
             padding=(0, 1),
         )
 
+    def _build_tool_activity_cards(
+        self,
+        *,
+        stage_title: str,
+        tool_calls: Optional[list[Any]] = None,
+        tool_results: Optional[list[str]] = None,
+        tool_detail_results: Optional[list[dict[str, Any]]] = None,
+        planner_style: bool = False,
+    ) -> list[RenderableType]:
+        """构建与阶段同级的工具执行卡片列表。"""
+
+        detail_results = tool_detail_results or []
+        cards = self._build_tool_detail_cards(
+            detail_results,
+            stage_title=stage_title,
+            planner_style=planner_style,
+        )
+        if cards:
+            return cards
+
+        # 兼容旧数据结构:若尚无 detail,则降级为简单文本卡片。
+        fallback_lines = self._filter_redundant_tool_results(
+            tool_results=tool_results or [],
+            tool_detail_results=detail_results,
+        )
+        if not fallback_lines and tool_calls:
+            fallback_lines = build_tool_call_summary_lines(tool_calls)
+        if not fallback_lines:
+            return []
+
+        fallback_border_style = "blue" if planner_style else "magenta"
+        return [
+            Panel(
+                Text("\n".join(fallback_lines)),
+                title=stage_title,
+                border_style=fallback_border_style,
+                padding=(0, 1),
+            )
+        ]
+
+    @staticmethod
+    def _build_cycle_time_records_text(time_records: dict[str, float]) -> str:
+        """构建循环最外层面板展示的阶段耗时文本。"""
+
+        if not time_records:
+            return "流程耗时:无"
+
+        label_map = {
+            "timing_gate": "Timing Gate",
+            "planner": "Planner",
+            "tool_calls": "工具执行",
+        }
+        ordered_keys = ["timing_gate", "planner", "tool_calls"]
+
+        parts: list[str] = []
+        for key in ordered_keys:
+            duration = time_records.get(key)
+            if isinstance(duration, (int, float)):
+                parts.append(f"{label_map.get(key, key)} {float(duration):.2f} s")
+
+        for key, duration in time_records.items():
+            if key in ordered_keys or not isinstance(duration, (int, float)):
+                continue
+            parts.append(f"{label_map.get(key, key)} {float(duration):.2f} s")
+
+        if not parts:
+            return "流程耗时:无"
+        return "流程耗时:" + " | ".join(parts)
+
     @staticmethod
     def _filter_redundant_tool_results(
         *,
@@ -1052,6 +1113,7 @@ class MaisakaHeartFlowChatting:
         prompt_text: str,
         request_messages: Optional[list[Any]] = None,
         tool_call_id: str,
+        border_style: str = "bright_yellow",
     ) -> Panel:
         """将工具 prompt 渲染为可点击查看的预览入口。"""
 
@@ -1076,7 +1138,7 @@ class MaisakaHeartFlowChatting:
                 image_display_mode="path_link" if global_config.maisaka.show_image_path else "legacy",
             ),
             title=labels["prompt_title"],
-            border_style="bright_yellow",
+            border_style=border_style,
             padding=(0, 1),
         )
 
@@ -1089,117 +1151,235 @@ class MaisakaHeartFlowChatting:
                 subtitle=subtitle,
             ),
             title=labels["prompt_title"],
-            border_style="bright_yellow",
+            border_style=border_style,
             padding=(0, 1),
         )
 
-    def _build_tool_detail_panels(self, tool_detail_results: list[dict[str, Any]]) -> list[RenderableType]:
-        """将 tool monitor detail 渲染为 CLI 详情卡片。"""
+    def _normalize_tool_card_body_lines(self, body: Any) -> list[str]:
+        """将工具卡片正文规范化为行列表。"""
+
+        if isinstance(body, str):
+            return [line for line in body.splitlines() if line.strip()]
+        if isinstance(body, list):
+            return [
+                str(item).strip()
+                for item in body
+                if str(item).strip()
+            ]
+        return []
+
+    def _build_custom_tool_sub_cards(
+        self,
+        sub_cards: Any,
+        *,
+        default_border_style: str,
+    ) -> list[RenderableType]:
+        """构建工具自定义子卡片。"""
+
+        if not isinstance(sub_cards, list):
+            return []
+
+        renderables: list[RenderableType] = []
+        for sub_card in sub_cards:
+            if not isinstance(sub_card, dict):
+                continue
+            title = str(sub_card.get("title") or "").strip() or "附加信息"
+            border_style = str(sub_card.get("border_style") or "").strip() or default_border_style
+            body_lines = self._normalize_tool_card_body_lines(
+                sub_card.get("body_lines", sub_card.get("content", ""))
+            )
+            if not body_lines:
+                continue
+            renderables.append(
+                Panel(
+                    Text("\n".join(body_lines)),
+                    title=title,
+                    border_style=border_style,
+                    padding=(0, 1),
+                )
+            )
+        return renderables
+
+    def _build_default_tool_detail_parts(
+        self,
+        *,
+        tool_name: str,
+        tool_call_id: str,
+        tool_args: Any,
+        summary: str,
+        duration_ms: Any,
+        detail: dict[str, Any],
+        planner_style: bool,
+    ) -> list[RenderableType]:
+        """构建工具卡片默认内容块。"""
+
+        argument_border_style = "blue" if planner_style else "cyan"
+        metrics_border_style = "bright_blue" if planner_style else "bright_cyan"
+        prompt_border_style = "bright_blue" if planner_style else "bright_yellow"
+        reasoning_border_style = "blue" if planner_style else "magenta"
+        output_border_style = "bright_blue" if planner_style else "green"
+        extra_info_border_style = "cyan" if planner_style else "white"
+        detail_labels = self._get_tool_detail_labels(tool_name)
+
+        parts: list[RenderableType] = []
+        header_lines: list[str] = []
+        if summary:
+            header_lines.append(summary)
+        if tool_call_id:
+            header_lines.append(f"调用ID:{tool_call_id}")
+        if isinstance(duration_ms, (int, float)):
+            header_lines.append(f"执行耗时:{round(float(duration_ms), 2)} ms")
+        if header_lines:
+            parts.append(Text("\n".join(header_lines)))
+
+        if isinstance(tool_args, dict) and tool_args:
+            parts.append(
+                Panel(
+                    Pretty(tool_args, expand_all=True),
+                    title="工具参数",
+                    border_style=argument_border_style,
+                    padding=(0, 1),
+                )
+            )
+
+        metrics = detail.get("metrics")
+        if isinstance(metrics, dict):
+            metrics_text = self._build_tool_metrics_text(metrics)
+            if metrics_text:
+                parts.append(
+                    Panel(
+                        Text(metrics_text),
+                        title="执行指标",
+                        border_style=metrics_border_style,
+                        padding=(0, 1),
+                    )
+                )
+
+        prompt_text = str(detail.get("prompt_text") or "").strip()
+        if prompt_text:
+            parts.append(
+                self._build_tool_prompt_access_panel(
+                    tool_name=tool_name,
+                    prompt_text=prompt_text,
+                    request_messages=detail.get("request_messages") if isinstance(detail.get("request_messages"), list) else None,
+                    tool_call_id=tool_call_id,
+                    border_style=prompt_border_style,
+                )
+            )
+
+        reasoning_text = str(detail.get("reasoning_text") or "").strip()
+        if reasoning_text:
+            parts.append(
+                Panel(
+                    Text(reasoning_text),
+                    title=detail_labels["reasoning_title"],
+                    border_style=reasoning_border_style,
+                    padding=(0, 1),
+                )
+            )
+
+        output_text = str(detail.get("output_text") or "").strip()
+        if output_text:
+            parts.append(
+                Panel(
+                    Text(output_text),
+                    title=detail_labels["output_title"],
+                    border_style=output_border_style,
+                    padding=(0, 1),
+                )
+            )
+
+        extra_sections = detail.get("extra_sections")
+        if isinstance(extra_sections, list):
+            for section in extra_sections:
+                if not isinstance(section, dict):
+                    continue
+                section_title = str(section.get("title") or "").strip() or "附加信息"
+                section_content = str(section.get("content") or "").strip()
+                if not section_content:
+                    continue
+                parts.append(
+                    Panel(
+                        Text(section_content),
+                        title=section_title,
+                        border_style=extra_info_border_style,
+                        padding=(0, 1),
+                    )
+                )
+
+        return parts
+
+    def _build_tool_detail_cards(
+        self,
+        tool_detail_results: list[dict[str, Any]],
+        *,
+        stage_title: str,
+        planner_style: bool = False,
+    ) -> list[RenderableType]:
+        """将 tool monitor detail 渲染为与 Planner/Timing 平级的工具卡片。"""
+
+        detail_panel_border_style = "blue" if planner_style else "yellow"
+        sub_card_border_style = "cyan" if planner_style else "white"
         panels: list[RenderableType] = []
         for tool_result in tool_detail_results:
             detail = tool_result.get("detail")
-            if not isinstance(detail, dict) or not detail:
-                continue
-
+            detail_dict = detail if isinstance(detail, dict) else {}
             tool_name = str(tool_result.get("tool_name") or "unknown").strip() or "unknown"
-            detail_labels = self._get_tool_detail_labels(tool_name)
+            tool_title = str(tool_result.get("tool_title") or "").strip() or tool_name
             tool_call_id = str(tool_result.get("tool_call_id") or "").strip()
             tool_args = tool_result.get("tool_args")
             summary = str(tool_result.get("summary") or "").strip()
             duration_ms = tool_result.get("duration_ms")
+            custom_card = tool_result.get("card")
 
             parts: list[RenderableType] = []
-            header_lines: list[str] = []
-            if summary:
-                header_lines.append(summary)
-            if tool_call_id:
-                header_lines.append(f"调用ID:{tool_call_id}")
-            if isinstance(duration_ms, (int, float)):
-                header_lines.append(f"执行耗时:{round(float(duration_ms), 2)} ms")
-            if header_lines:
-                parts.append(Text("\n".join(header_lines)))
-
-            if isinstance(tool_args, dict) and tool_args:
-                parts.append(
-                    Panel(
-                        Pretty(tool_args, expand_all=True),
-                        title="工具参数",
-                        border_style="cyan",
-                        padding=(0, 1),
-                    )
+            custom_title = ""
+            card_border_style = detail_panel_border_style
+            replace_default_children = False
+            if isinstance(custom_card, dict):
+                custom_title = str(custom_card.get("title") or "").strip()
+                card_border_style = str(custom_card.get("border_style") or "").strip() or detail_panel_border_style
+                replace_default_children = bool(custom_card.get("replace_default_children", False))
+                custom_body_lines = self._normalize_tool_card_body_lines(
+                    custom_card.get("body_lines", custom_card.get("content", ""))
                 )
+                if custom_body_lines:
+                    parts.append(Text("\n".join(custom_body_lines)))
 
-            metrics = detail.get("metrics")
-            if isinstance(metrics, dict):
-                metrics_text = self._build_tool_metrics_text(metrics)
-                if metrics_text:
-                    parts.append(
-                        Panel(
-                            Text(metrics_text),
-                            title="执行指标",
-                            border_style="bright_cyan",
-                            padding=(0, 1),
-                        )
-                    )
-
-            prompt_text = str(detail.get("prompt_text") or "").strip()
-            if prompt_text:
-                parts.append(
-                    self._build_tool_prompt_access_panel(
+            if not replace_default_children:
+                parts.extend(
+                    self._build_default_tool_detail_parts(
                         tool_name=tool_name,
-                        prompt_text=prompt_text,
-                        request_messages=detail.get("request_messages") if isinstance(detail.get("request_messages"), list) else None,
                         tool_call_id=tool_call_id,
+                        tool_args=tool_args,
+                        summary=summary,
+                        duration_ms=duration_ms,
+                        detail=detail_dict,
+                        planner_style=planner_style,
                     )
                 )
 
-            reasoning_text = str(detail.get("reasoning_text") or "").strip()
-            if reasoning_text:
-                parts.append(
-                    Panel(
-                        Text(reasoning_text),
-                        title=detail_labels["reasoning_title"],
-                        border_style="magenta",
-                        padding=(0, 1),
+            if isinstance(custom_card, dict):
+                parts.extend(
+                    self._build_custom_tool_sub_cards(
+                        custom_card.get("sub_cards"),
+                        default_border_style=sub_card_border_style,
                     )
                 )
-
-            output_text = str(detail.get("output_text") or "").strip()
-            if output_text:
-                parts.append(
-                    Panel(
-                        Text(output_text),
-                        title=detail_labels["output_title"],
-                        border_style="green",
-                        padding=(0, 1),
-                    )
+            parts.extend(
+                self._build_custom_tool_sub_cards(
+                    tool_result.get("sub_cards"),
+                    default_border_style=sub_card_border_style,
                 )
-
-            extra_sections = detail.get("extra_sections")
-            if isinstance(extra_sections, list):
-                for section in extra_sections:
-                    if not isinstance(section, dict):
-                        continue
-                    section_title = str(section.get("title") or "").strip() or "附加信息"
-                    section_content = str(section.get("content") or "").strip()
-                    if not section_content:
-                        continue
-                    parts.append(
-                        Panel(
-                            Text(section_content),
-                            title=section_title,
-                            border_style="white",
-                            padding=(0, 1),
-                        )
-                    )
+            )
 
             if parts:
                 panels.append(
                     Panel(
                         Group(*parts),
-                        title=f"{tool_name} 工具详情",
-                        border_style="yellow",
+                        title=custom_title or f"{stage_title} · {tool_title}",
+                        border_style=card_border_style,
                         padding=(0, 1),
                     )
                 )
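The new _build_tool_detail_cards lets a tool supply its own card layout through a `card` dict on each tool_detail_results entry. As a minimal illustration of that contract, the sketch below builds one hypothetical entry and replays the title-resolution rule from the patch; the key names (`tool_title`, `card`, `title`, `border_style`, `replace_default_children`, `body_lines`, `sub_cards`) come from the diff above, while every concrete value is invented:

    # Hypothetical tool_detail_results entry; all values are made up for illustration.
    tool_result = {
        "tool_name": "query_memory",
        "tool_title": "记忆检索",
        "tool_call_id": "call_001",
        "detail": {"output_text": "命中 2 条相关记忆"},
        "card": {
            "title": "记忆检索结果",            # overrides the default panel title
            "border_style": "bright_magenta",   # overrides detail_panel_border_style
            "replace_default_children": False,  # keep the default header/args/output parts
            "body_lines": ["命中 2 条记忆", "耗时 412.50 ms"],
            "sub_cards": [{"title": "证据", "content": "2024-06-01 的群聊片段"}],
        },
    }

    # Title resolution as in the patched code: the card title wins; otherwise the
    # panel is titled "<stage> · <tool>", e.g. "Planner Tool · 记忆检索".
    stage_title = "Planner Tool"
    tool_title = str(tool_result.get("tool_title") or "").strip() or tool_result["tool_name"]
    custom_title = str(tool_result["card"].get("title") or "").strip()
    panel_title = custom_title or f"{stage_title} · {tool_title}"
    assert panel_title == "记忆检索结果"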
title="工具参数", - border_style="cyan", - padding=(0, 1), - ) + custom_title = "" + card_border_style = detail_panel_border_style + replace_default_children = False + if isinstance(custom_card, dict): + custom_title = str(custom_card.get("title") or "").strip() + card_border_style = str(custom_card.get("border_style") or "").strip() or detail_panel_border_style + replace_default_children = bool(custom_card.get("replace_default_children", False)) + custom_body_lines = self._normalize_tool_card_body_lines( + custom_card.get("body_lines", custom_card.get("content", "")) ) + if custom_body_lines: + parts.append(Text("\n".join(custom_body_lines))) - metrics = detail.get("metrics") - if isinstance(metrics, dict): - metrics_text = self._build_tool_metrics_text(metrics) - if metrics_text: - parts.append( - Panel( - Text(metrics_text), - title="执行指标", - border_style="bright_cyan", - padding=(0, 1), - ) - ) - - prompt_text = str(detail.get("prompt_text") or "").strip() - if prompt_text: - parts.append( - self._build_tool_prompt_access_panel( + if not replace_default_children: + parts.extend( + self._build_default_tool_detail_parts( tool_name=tool_name, - prompt_text=prompt_text, - request_messages=detail.get("request_messages") if isinstance(detail.get("request_messages"), list) else None, tool_call_id=tool_call_id, + tool_args=tool_args, + summary=summary, + duration_ms=duration_ms, + detail=detail_dict, + planner_style=planner_style, ) ) - reasoning_text = str(detail.get("reasoning_text") or "").strip() - if reasoning_text: - parts.append( - Panel( - Text(reasoning_text), - title=detail_labels["reasoning_title"], - border_style="magenta", - padding=(0, 1), + if isinstance(custom_card, dict): + parts.extend( + self._build_custom_tool_sub_cards( + custom_card.get("sub_cards"), + default_border_style=sub_card_border_style, ) ) - - output_text = str(detail.get("output_text") or "").strip() - if output_text: - parts.append( - Panel( - Text(output_text), - title=detail_labels["output_title"], - border_style="green", - padding=(0, 1), - ) + parts.extend( + self._build_custom_tool_sub_cards( + tool_result.get("sub_cards"), + default_border_style=sub_card_border_style, ) - - extra_sections = detail.get("extra_sections") - if isinstance(extra_sections, list): - for section in extra_sections: - if not isinstance(section, dict): - continue - section_title = str(section.get("title") or "").strip() or "附加信息" - section_content = str(section.get("content") or "").strip() - if not section_content: - continue - parts.append( - Panel( - Text(section_content), - title=section_title, - border_style="white", - padding=(0, 1), - ) - ) + ) if parts: panels.append( Panel( Group(*parts), - title=f"{tool_name} 工具详情", - border_style="yellow", + title=custom_title or f"{stage_title} · {tool_title}", + border_style=card_border_style, padding=(0, 1), ) ) diff --git a/src/mcp_module/host_llm_bridge.py b/src/mcp_module/host_llm_bridge.py index 1b8bc10d..a4507a7e 100644 --- a/src/mcp_module/host_llm_bridge.py +++ b/src/mcp_module/host_llm_bridge.py @@ -521,9 +521,7 @@ class MCPHostLLMBridge: tool_definitions.append( { "name": tool_name, - "description": "\n\n".join( - part for part in [brief_description, detailed_description] if part.strip() - ).strip(), + "description": brief_description, "parameters_schema": parameters_schema or {"type": "object", "properties": {}}, } )