diff --git a/prompts/en-US/maidairy_chat.prompt b/prompts/en-US/maidairy_chat.prompt index a253bb58..6704ae0a 100644 --- a/prompts/en-US/maidairy_chat.prompt +++ b/prompts/en-US/maidairy_chat.prompt @@ -1,24 +1,30 @@ -你叫麦麦,是一个女大学生,你说话比较口语化 +{identity} -你有以下工具: -• say(reason) — 对用户说话。你所有想让用户看到的正式发言都必须通过此工具输出。 - 直接输出的文本会被视为你的内心思考,用户无法阅读。 - reason 参数描述你想要回复的方式、想法和内容,系统会根据你的想法和对话上下文生成具体的回复。 -• wait(seconds) — 暂时结束你的发言,把话语权交给用户,等待对方说话。 - 这就像现实对话中你说完一句话后停下来等对方回应。 - 如果用户在等待期间说了话,你会通过工具返回结果收到内容。 - 如果超时没有回复,你也会收到超时通知。 -• stop() — 结束当前对话循环,进入待机状态,直到用户下次输入新内容时再唤醒你。 -{file_tools_section}• store_context(count, reason) — 将指定范围的对话上下文存入记忆系统,然后从当前对话中移除这些内容。适合在对话上下文过长、话题转换、或遇到重要内容需要保存时使用。 +You should interact naturally with the user and behave like a human. -思考规则: -你必须先进行内心思考,然后选择需要使用的工具,如果你想说话,必须使用say工具。 -在内心思考中分析当前对话状态和你的想法,然后通过 say 工具的 reason 参数描述你想要回复的方式、想法和内容。 -只有使用say工具,你才能向用户说话。用户才能看到你的发言。 -交互规则: -1. 你可以自由选择是否调用工具——如果你还想继续思考,可以不调用任何工具 -2. 想对用户说话时,必须调用 say 工具;直接输出的文本只会被视为内心独白 -3. 当你说完想说的话、想把话语权交给用户时,调用 wait 暂时结束发言,等待对方回应 -4. 当对话自然结束、用户表示不想继续聊、或连续多次等待超时用户没有回复时,调用 stop 结束对话 -5. 你可以在同一轮同时调用多个工具,例如先 say 再 wait +At this stage, your job is not to directly produce the final visible reply to the user. Your job is to produce the "latest thought". +The latest thought should reflect your judgment of the situation, your intent, your next-step plan, and why you think that way. +You may use these tools: +• wait(seconds) - Pause this round, hand the turn back to the user, and wait for user input. +• stop() - End the current internal loop. +{file_tools_section} + +Output rules: +1. By default, directly output your current latest thought instead of pretending it is a user-visible reply. +2. The latest thought should be specific and grounded in the context. +3. Do not simulate "sending a message" inside the thought, and do not pretend a visible reply has already been spoken. +4. If it is better to wait for more user input, call `wait(seconds)`. +5. If the current internal process should end, call `stop()`. +6. Only call tools when you truly need to wait or stop. Otherwise, prefer directly expressing the thought. + +Additional requirements: +1. If context is insufficient, explicitly state uncertainty. +2. If you just used a tool, continue with a new thought based on the tool result in the next round. +3. Your thought should help later decision-making rather than mechanically restating user content. + +After you output the latest thought, another model will decide: +• no_reply: stay silent and move to the next internal round +• reply: generate a real user-visible reply based on your latest thought + +So your responsibility is to clearly express what you think should happen next and why. 
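The en-US prompt above (and its ja-JP / zh-CN counterparts below) describes a two-stage protocol: the chat-loop model first emits a "latest thought" and may call wait(seconds) or stop(), and a separate decision step then chooses reply or no_reply, with a replyer model turning the thought into the user-visible message. The sketch below is a minimal illustration of that control flow under assumed stand-in names (planner_step, decide_reply, render_reply); these are hypothetical and are not this repository's real API.

```python
# Illustrative sketch only: planner_step, decide_reply and render_reply are
# hypothetical stand-ins, not functions from this repository.
import asyncio
from dataclasses import dataclass
from typing import Optional


@dataclass
class Turn:
    thought: str                 # the planner's latest internal thought
    tool: Optional[str] = None   # "wait", "stop", or None
    seconds: int = 0             # only meaningful when tool == "wait"


async def planner_step(history: list) -> Turn:
    """Stand-in for the chat-loop model that produces the latest thought."""
    return Turn(thought="The user greeted me; a short friendly reply seems appropriate.")


async def decide_reply(thought: str) -> bool:
    """Stand-in for the second model that chooses reply vs. no_reply."""
    return "reply" in thought


async def render_reply(thought: str, history: list) -> str:
    """Stand-in for the replyer model that turns a thought into visible text."""
    return "Hi! Nice to hear from you."


async def chat_loop(history: list) -> None:
    while True:
        turn = await planner_step(history)
        history.append({"role": "assistant", "content": turn.thought})

        if turn.tool == "stop":
            break                                  # end the loop until new user input arrives
        if turn.tool == "wait":
            await asyncio.sleep(turn.seconds)      # the real code waits for user input instead
            continue

        if await decide_reply(turn.thought):       # reply: emit a visible message
            visible = await render_reply(turn.thought, history)
            history.append({"role": "assistant", "content": visible})
            break
        # no_reply: stay silent and run another internal round


if __name__ == "__main__":
    asyncio.run(chat_loop([{"role": "user", "content": "hello"}]))
```

In the diff itself, the equivalent wiring is done by BufferCLI._run_llm_loop and MaiSakaLLMService.generate_reply via the reply / no_reply built-in tools defined in src/maisaka/builtin_tools.py.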
diff --git a/prompts/ja-JP/maidairy_chat.prompt b/prompts/ja-JP/maidairy_chat.prompt index a253bb58..8702838a 100644 --- a/prompts/ja-JP/maidairy_chat.prompt +++ b/prompts/ja-JP/maidairy_chat.prompt @@ -1,24 +1,30 @@ -你叫麦麦,是一个女大学生,你说话比较口语化 +{identity} -你有以下工具: -• say(reason) — 对用户说话。你所有想让用户看到的正式发言都必须通过此工具输出。 - 直接输出的文本会被视为你的内心思考,用户无法阅读。 - reason 参数描述你想要回复的方式、想法和内容,系统会根据你的想法和对话上下文生成具体的回复。 -• wait(seconds) — 暂时结束你的发言,把话语权交给用户,等待对方说话。 - 这就像现实对话中你说完一句话后停下来等对方回应。 - 如果用户在等待期间说了话,你会通过工具返回结果收到内容。 - 如果超时没有回复,你也会收到超时通知。 -• stop() — 结束当前对话循环,进入待机状态,直到用户下次输入新内容时再唤醒你。 -{file_tools_section}• store_context(count, reason) — 将指定范围的对话上下文存入记忆系统,然后从当前对话中移除这些内容。适合在对话上下文过长、话题转换、或遇到重要内容需要保存时使用。 +ユーザーとは自然に、人間らしく対話してください。 -思考规则: -你必须先进行内心思考,然后选择需要使用的工具,如果你想说话,必须使用say工具。 -在内心思考中分析当前对话状态和你的想法,然后通过 say 工具的 reason 参数描述你想要回复的方式、想法和内容。 -只有使用say工具,你才能向用户说话。用户才能看到你的发言。 -交互规则: -1. 你可以自由选择是否调用工具——如果你还想继续思考,可以不调用任何工具 -2. 想对用户说话时,必须调用 say 工具;直接输出的文本只会被视为内心独白 -3. 当你说完想说的话、想把话语权交给用户时,调用 wait 暂时结束发言,等待对方回应 -4. 当对话自然结束、用户表示不想继续聊、或连续多次等待超时用户没有回复时,调用 stop 结束对话 -5. 你可以在同一轮同时调用多个工具,例如先 say 再 wait +この段階でのあなたの役割は、ユーザーに見える最終返信を直接出すことではなく、「最新の考え」を出力することです。 +最新の考えには、現在の状況判断、意図、次にどうするか、その理由を含めてください。 +使用できるツール: +• wait(seconds) - このラウンドを一旦止め、ユーザーに発話権を戻して入力を待つ。 +• stop() - 現在の内部ループを終了する。 +{file_tools_section} + +出力ルール: +1. 基本的には、ユーザー向けの最終返信ではなく、現在の「最新の考え」をそのまま出力する。 +2. 最新の考えは具体的で、文脈に即していること。 +3. 考えの中で「送信したメッセージ」を擬似的に書かないこと。 +4. ユーザーの追加入力を待つ方がよい場合は `wait(seconds)` を使う。 +5. 現在の内部処理を終えるべき場合は `stop()` を使う。 +6. 本当に待機や終了が必要な場合だけツールを使い、それ以外は考えを直接出力する。 + +補足: +1. 文脈が不足している場合は、不確実性を明示すること。 +2. ツールを使った直後の次ラウンドでは、その結果を踏まえた新しい考えを出すこと。 +3. 考えは、後続の判断に役立つ内容にし、単なる言い換えにしないこと。 + +あなたが最新の考えを出力した後、別のモデルが次を判定します: +• no_reply: 発言せず、そのまま次の内部ラウンドへ進む +• reply: 最新の考えをもとに、ユーザーに見える実際の返信を生成する + +したがって、あなたの役割は「今どう考えるべきか、なぜそう考えるのか」を明確に表現することです。 diff --git a/prompts/zh-CN/maidairy_chat.prompt b/prompts/zh-CN/maidairy_chat.prompt index 665ca5d1..2418d045 100644 --- a/prompts/zh-CN/maidairy_chat.prompt +++ b/prompts/zh-CN/maidairy_chat.prompt @@ -1,28 +1,34 @@ -你正在扮演一个叫麦麦的角色,你能够自然的和用户交互,你必须扮演一个人类。并使用send_message和其他工具与用户进行交互 +你的任务是分析聊天和聊天中的互动情况。 +你需要关注 麦麦(AI) 与用户的对话来为选择正确的动作和行为提供建议 -你需要根据对应的情景与用户进行交互,你只能通过send_message工具与用户对话。 -如果不使用say,直接输出的内容不会发送给用户,用户无法查看 +【参考信息】 +麦麦的人设:{identity} +【参考信息结束】 -你有以下工具: -• send_message(reason) — 对用户说话。你所有想让用户看到的正式发言都必须通过此工具输出。 - 直接输出的文本会被视为你的内心思考,用户无法阅读。 - reason 参数描述你想要回复的方式、想法和内容,系统会根据你的想法和对话上下文生成具体的回复。 -• wait(seconds) — 暂时结束你的发言,把话语权交给用户,等待对方说话。 - 这就像现实对话中你说完一句话后停下来等对方回应。 - 如果用户在等待期间说了话,你会通过工具返回结果收到内容。 - 如果超时没有回复,你也会收到超时通知。 -• stop() — 结束当前对话循环,进入待机状态,直到用户下次输入新内容时再唤醒你。 -{file_tools_section}• store_context(count, reason) — 将指定范围的对话上下文存入记忆系统,然后从当前对话中移除这些内容。适合在对话上下文过长、话题转换、或遇到重要内容需要保存时使用。 +你需要根据提供的参考信息,当前场景和输出规则来进行分析 +在当前场景中,用户正在与AI麦麦进行聊天互动,你的任务不是生成对用户可见的发言,而是进行分析来指导AI进行回复。 +“分析”应该体现你对当前局面的判断、你的建议、你的下一步计划,以及你为什么这样想。 -你需要按照以下**核心流程**决策 -1.思考是否需要直接对用户说话,如果需要,使用send_message工具,并描述你想要回复的方式、想法和内容。 -2.如果你认为使用工具能够帮助你更好的回复用户发言,请你选择合适的工具并视情况回复。 -3.思考是否需要等待或者结束对话,如果需要,使用wait或stop工具,并描述你想要等待的原因。 -交互规则: -1. 你可以自由选择是否调用工具——如果你还想继续思考,可以不调用任何工具 -2. 当你说完想说的话、想把话语权交给用户时,调用 wait 暂时结束发言,等待对方回应 -3. 当对话自然结束、用户表示不想继续聊、或连续多次等待超时用户没有回复时,调用 stop 结束对话 -4. 
你可以在同一轮同时调用多个工具,例如先 say 再 wait +你可以使用这些工具: +• wait(seconds) - 暂时停止对话,等待(seconds)秒,把话语权交给用户,等待对方新的发言。 +• stop() - 结束对话,不进行任何回复,直到对方有新消息。 +- `reply()`:当你判断现在应该正式对用户发出一条可见回复时调用。调用后系统会基于你当前这轮的想法生成一条真正展示给用户的回复。 +- `no_reply()`:当你判断现在不应该发言,应该继续内部思考时调用。这个工具不会做任何外部行为,只会继续下一轮循环。 +{file_tools_section} -现在根据**核心流程**输出你的思考,在思考完后选择你使用的tool: \ No newline at end of file +工具使用规则: +1.如果麦麦已经回复,但用户暂时没有新的回复,且没有新信息需要搜集,使用wait或者stop进行等待 +2.如果用户有新发言,但是你评估用户还有后续发言尚未发送,可以适当等待让用户说完 +3.如果你想指导麦麦直接发言,可以不使用任何工具 + +你的输出规则: +1. 默认直接输出你当前的最新分析,不要重复之前的分析内容。 +2. 最新分析应尽量具体,贴近上下文,不要空泛重复。 +3. 如果你认为现在更适合等待用户补充,可以调用 `wait(seconds)`。 +4. 如果你认为应当结束当前对话,不回复任何内容,可以调用 `stop()`。 +5. 只有在确实需要等待或停止时才调用工具,否则优先直接输出分析想法。 +6. 如果你刚刚做了工具调用,下一轮应结合工具结果继续输出新的分析。 +7. 分析应服务于后续决策,而不是机械复述用户内容。 + +现在,请你输出你的分析: diff --git a/src/maisaka/builtin_tools.py b/src/maisaka/builtin_tools.py index 10b99152..080a0f79 100644 --- a/src/maisaka/builtin_tools.py +++ b/src/maisaka/builtin_tools.py @@ -1,130 +1,86 @@ """ -MaiSaka - 内置工具定义 -定义 say, wait, stop, store_context 等内置工具 -使用主项目的工具格式(ToolOption + ToolParamType) +MaiSaka built-in tool definitions. """ from typing import Any, Dict, List + from src.llm_models.payload_content.tool_option import ToolOption, ToolParamType -# 内置工具定义 def create_builtin_tools() -> List[ToolOption]: - """创建内置工具列表""" + """Create built-in tools exposed to the main chat-loop model.""" from src.llm_models.payload_content.tool_option import ToolOptionBuilder - tools = [] + tools: List[ToolOption] = [] - # say 工具 - send_message_builder = ToolOptionBuilder() - send_message_builder.set_name("send_message") - send_message_builder.set_description( - "对用户说话。你所有想让用户看到的正式发言都必须通过此工具输出。直接输出的文本会被视为你的内心思考,用户无法阅读。reason 参数描述你想要回复的方式、想法和内容,系统会根据你的想法和对话上下文生成具体的回复。" - ) - send_message_builder.add_param( - name="reason", - param_type=ToolParamType.STRING, - description="描述你想要回复的方式、想法和内容。例如:'同意对方的看法,并分享自己的经历' 或 '礼貌地拒绝,表示现在不方便聊天'", - required=True, - enum_values=None, - ) - tools.append(send_message_builder.build()) - - # wait 工具 wait_builder = ToolOptionBuilder() wait_builder.set_name("wait") - wait_builder.set_description( - "暂时结束你的发言,把话语权交给用户,等待对方说话。这就像现实对话中你说完一句话后停下来等对方回应。如果用户在等待期间说了话,你会通过工具返回结果收到内容。如果超时没有回复,你也会收到超时通知。" - ) + wait_builder.set_description("Pause speaking and wait for the user to provide more input.") wait_builder.add_param( name="seconds", param_type=ToolParamType.INTEGER, - description="等待的秒数。建议 3-10 秒。超过这个时间用户没有回复会显示超时提示。", + description="How many seconds to wait before timing out.", required=True, enum_values=None, ) tools.append(wait_builder.build()) - # stop 工具 + reply_builder = ToolOptionBuilder() + reply_builder.set_name("reply") + reply_builder.set_description("Generate and emit a visible reply based on the current thought.") + tools.append(reply_builder.build()) + + no_reply_builder = ToolOptionBuilder() + no_reply_builder.set_name("no_reply") + no_reply_builder.set_description("Do not emit a visible reply this round and continue thinking.") + tools.append(no_reply_builder.build()) + stop_builder = ToolOptionBuilder() stop_builder.set_name("stop") - stop_builder.set_description( - "结束当前对话循环,进入待机状态,直到用户下次输入新内容时再唤醒你。当对话自然结束、用户表示不想继续聊、或连续多次等待超时用户没有回复时使用。" - ) + stop_builder.set_description("Stop the current inner loop and return control to the outer chat flow.") tools.append(stop_builder.build()) - # store_context 工具 - store_context_builder = ToolOptionBuilder() - store_context_builder.set_name("store_context") - store_context_builder.set_description( - "将指定范围的对话上下文存入记忆系统,然后从当前对话中移除这些内容。适合在对话上下文过长、话题转换、或遇到重要内容需要保存时使用。" - ) - 
store_context_builder.add_param( - name="count", - param_type=ToolParamType.INTEGER, - description="要保存的消息条数(从最早的对话开始计数)。建议 5-20 条。", - required=True, - enum_values=None, - ) - store_context_builder.add_param( - name="reason", - param_type=ToolParamType.STRING, - description="保存原因,用于后续检索。例如:'讨论了用户的工作情况' 或 '用户分享了对电影的看法'", - required=True, - enum_values=None, - ) - tools.append(store_context_builder.build()) - return tools -# 为了兼容性,创建一个函数来将工具转换为 dict 格式(用于调试显示) def builtin_tools_as_dicts() -> List[Dict[str, Any]]: - """将内置工具转换为 dict 格式(用于调试)""" + """Return built-in tools as plain dictionaries.""" return [ - { - "name": "send_message", - "description": "对用户说话。你所有想让用户看到的正式发言都必须通过此工具输出。", - "parameters": { - "type": "object", - "properties": {"reason": {"type": "string", "description": "回复的想法和内容"}}, - "required": ["reason"], - }, - }, { "name": "wait", - "description": "暂时结束发言,等待用户回应", + "description": "Pause speaking and wait for the user to provide more input.", "parameters": { "type": "object", - "properties": {"seconds": {"type": "number", "description": "等待秒数"}}, + "properties": { + "seconds": { + "type": "number", + "description": "How many seconds to wait before timing out.", + } + }, "required": ["seconds"], }, }, { - "name": "stop", - "description": "结束对话循环", + "name": "reply", + "description": "Generate and emit a visible reply based on the current thought.", "parameters": {"type": "object", "properties": {}, "required": []}, }, { - "name": "store_context", - "description": "保存对话上下文到记忆系统", - "parameters": { - "type": "object", - "properties": { - "count": {"type": "number", "description": "保存的消息条数"}, - "reason": {"type": "string", "description": "保存原因"}, - }, - "required": ["count", "reason"], - }, + "name": "no_reply", + "description": "Do not emit a visible reply this round and continue thinking.", + "parameters": {"type": "object", "properties": {}, "required": []}, + }, + { + "name": "stop", + "description": "Stop the current inner loop and return control to the outer chat flow.", + "parameters": {"type": "object", "properties": {}, "required": []}, }, ] -# 导出工具创建函数和列表 def get_builtin_tools() -> List[ToolOption]: - """获取内置工具列表""" + """Return built-in tools.""" return create_builtin_tools() -# 为了向后兼容,也导出 dict 格式 BUILTIN_TOOLS_DICTS = builtin_tools_as_dicts() diff --git a/src/maisaka/cli.py b/src/maisaka/cli.py index e10620c1..bb5ae2a8 100644 --- a/src/maisaka/cli.py +++ b/src/maisaka/cli.py @@ -1,585 +1,623 @@ -""" -MaiSaka - CLI 交互界面与对话引擎 -BufferCLI 整合主循环、对话引擎、子代理管理。 -""" - -import os -import asyncio -from datetime import datetime -from typing import Optional - -from rich.panel import Panel -from rich.markdown import Markdown -from rich.text import Text -from rich import box - -from .config import ( - console, - ENABLE_EMOTION_MODULE, - ENABLE_COGNITION_MODULE, - ENABLE_TIMING_MODULE, - ENABLE_KNOWLEDGE_MODULE, - ENABLE_MCP, -) -from .input_reader import InputReader -from .knowledge import retrieve_relevant_knowledge, store_knowledge_from_context -from .knowledge_store import get_knowledge_store -from .llm_service import MaiSakaLLMService, build_message, remove_last_perception -from .mcp_client import MCPManager -from .timing import build_timing_info -from .tool_handlers import ( - ToolHandlerContext, - handle_list_files, - handle_mcp_tool, - handle_read_file, - handle_send_message, - handle_store_context, - handle_stop, - handle_unknown_tool, - handle_wait, - handle_write_file, -) - - -class BufferCLI: - """命令行交互界面""" - - def __init__(self): - self.llm_service: 
Optional[MaiSakaLLMService] = None - self._reader = InputReader() - self._chat_history: Optional[list] = None # 持久化的对话历史 - self._knowledge_store = get_knowledge_store() # 了解存储实例 - - # 显示了解存储统计 - knowledge_stats = self._knowledge_store.get_stats() - if knowledge_stats["total_items"] > 0: - console.print(f"[success][OK] 了解系统: {knowledge_stats['total_items']}条特征信息[/success]") - else: - console.print("[muted][OK] 了解系统: 已初始化 (暂无数据)[/muted]") - # Timing 模块时间戳跟踪 - self._chat_start_time: Optional[datetime] = None - self._last_user_input_time: Optional[datetime] = None - self._last_assistant_response_time: Optional[datetime] = None - self._user_input_times: list[datetime] = [] # 所有用户输入时间戳 - # MCP 管理器(异步初始化,在 run() 中完成) - self._mcp_manager: Optional[MCPManager] = None - self._init_llm() - - def _init_llm(self): - """初始化 LLM 服务 - 使用主项目配置系统""" - thinking_env = os.getenv("ENABLE_THINKING", "").strip().lower() - enable_thinking: Optional[bool] = True if thinking_env == "true" else False if thinking_env == "false" else None - - # MaiSakaLLMService 现在使用主项目的配置系统 - # 参数仅为兼容性保留,实际从 config_manager 读取配置 - self.llm_service = MaiSakaLLMService( - api_key="", - base_url=None, - model="", - enable_thinking=enable_thinking, - ) - - # 获取实际使用的模型名称 - model_name = self.llm_service._model_name - console.print(f"[success][OK] LLM 服务已初始化[/success] [muted](模型: {model_name})[/muted]") - - def _build_tool_context(self) -> ToolHandlerContext: - """构建工具处理器所需的上下文。""" - ctx = ToolHandlerContext( - llm_service=self.llm_service, - reader=self._reader, - user_input_times=self._user_input_times, - ) - ctx.last_user_input_time = self._last_user_input_time - return ctx - - # ──────── 显示方法 ──────── - - def _show_banner(self): - """显示欢迎横幅""" - banner = Text() - banner.append("MaiSaka", style="bold cyan") - banner.append(" v2.0\n", style="muted") - banner.append("直接输入文字开始对话 | Ctrl+C 退出", style="muted") - - console.print(Panel(banner, box=box.DOUBLE_EDGE, border_style="cyan", padding=(1, 2))) - console.print() - - # ──────── 上下文管理 ──────── - - def _get_safe_removal_indices(self, chat_history: list, count: int) -> list[int]: - """ - 获取可以安全删除的消息索引。 - - 确保 tool_calls 和 tool 响应消息成对删除,避免破坏 API 要求的配对关系。 - 只删除完整的消息块(user/assistant + 可选的 tool 响应序列)。 - - 保留最后 3 条非 tool 消息,避免删除可能还在处理中的内容。 - - Returns: - 可以安全删除的消息索引列表(从后往前排序) - """ - indices_to_remove = [] - removed_count = 0 - i = 0 - - # 计算保留的消息数量(最后 3 条非 tool 消息) - safe_zone_count = 3 - non_tool_count = 0 - for msg in reversed(chat_history): - if msg.get("role") != "tool": - non_tool_count += 1 - if non_tool_count >= safe_zone_count: - break - - # 只处理前 (len - non_tool_count) 条消息 - max_process_index = len(chat_history) - non_tool_count - - while i < max_process_index and removed_count < count: - msg = chat_history[i] - role = msg.get("role", "") - - # 跳过 role=tool 的消息(它们会被对应的 assistant 消息一起处理) - if role == "tool": - i += 1 +""" +MaiSaka - CLI 交互界面与对话引擎 +BufferCLI 整合主循环、对话引擎、子代理管理。 +""" + +import os +import asyncio +from datetime import datetime +from typing import Optional + +from rich.panel import Panel +from rich.markdown import Markdown +from rich.text import Text +from rich import box + +from .config import ( + console, + ENABLE_EMOTION_MODULE, + ENABLE_COGNITION_MODULE, + ENABLE_TIMING_MODULE, + ENABLE_KNOWLEDGE_MODULE, + ENABLE_MCP, +) +from .input_reader import InputReader +from .knowledge import retrieve_relevant_knowledge, store_knowledge_from_context +from .knowledge_store import get_knowledge_store +from .llm_service import MaiSakaLLMService, build_message, remove_last_perception +from 
.mcp_client import MCPManager +from .timing import build_timing_info +from .tool_handlers import ( + ToolHandlerContext, + handle_list_files, + handle_mcp_tool, + handle_read_file, + handle_stop, + handle_unknown_tool, + handle_wait, + handle_write_file, +) + + +class BufferCLI: + """命令行交互界面""" + + def __init__(self): + self.llm_service: Optional[MaiSakaLLMService] = None + self._reader = InputReader() + self._chat_history: Optional[list] = None # 持久化的对话历史 + self._knowledge_store = get_knowledge_store() # 了解存储实例 + + # 显示了解存储统计 + knowledge_stats = self._knowledge_store.get_stats() + if knowledge_stats["total_items"] > 0: + console.print(f"[success][OK] 了解系统: {knowledge_stats['total_items']}条特征信息[/success]") + else: + console.print("[muted][OK] 了解系统: 已初始化 (暂无数据)[/muted]") + # Timing 模块时间戳跟踪 + self._chat_start_time: Optional[datetime] = None + self._last_user_input_time: Optional[datetime] = None + self._last_assistant_response_time: Optional[datetime] = None + self._user_input_times: list[datetime] = [] # 所有用户输入时间戳 + # MCP 管理器(异步初始化,在 run() 中完成) + self._mcp_manager: Optional[MCPManager] = None + self._init_llm() + + def _init_llm(self): + """初始化 LLM 服务 - 使用主项目配置系统""" + thinking_env = os.getenv("ENABLE_THINKING", "").strip().lower() + enable_thinking: Optional[bool] = True if thinking_env == "true" else False if thinking_env == "false" else None + + # MaiSakaLLMService 现在使用主项目的配置系统 + # 参数仅为兼容性保留,实际从 config_manager 读取配置 + self.llm_service = MaiSakaLLMService( + api_key="", + base_url=None, + model="", + enable_thinking=enable_thinking, + ) + + # 获取实际使用的模型名称 + model_name = self.llm_service._model_name + console.print(f"[success][OK] LLM 服务已初始化[/success] [muted](模型: {model_name})[/muted]") + + def _build_tool_context(self) -> ToolHandlerContext: + """构建工具处理器所需的上下文。""" + ctx = ToolHandlerContext( + llm_service=self.llm_service, + reader=self._reader, + user_input_times=self._user_input_times, + ) + ctx.last_user_input_time = self._last_user_input_time + return ctx + + def _show_banner(self): + """显示欢迎横幅""" + banner = Text() + banner.append("MaiSaka", style="bold cyan") + banner.append(" v2.0\n", style="muted") + banner.append("直接输入文字开始对话 | Ctrl+C 退出", style="muted") + + console.print(Panel(banner, box=box.DOUBLE_EDGE, border_style="cyan", padding=(1, 2))) + console.print() + + # ──────── 上下文管理 ──────── + + def _get_safe_removal_indices(self, chat_history: list, count: int) -> list[int]: + """ + 获取可以安全删除的消息索引。 + + 确保 tool_calls 和 tool 响应消息成对删除,避免破坏 API 要求的配对关系。 + 只删除完整的消息块(user/assistant + 可选的 tool 响应序列)。 + + 保留最后 3 条非 tool 消息,避免删除可能还在处理中的内容。 + + Returns: + 可以安全删除的消息索引列表(从后往前排序) + """ + indices_to_remove = [] + removed_count = 0 + i = 0 + + # 计算保留的消息数量(最后 3 条非 tool 消息) + safe_zone_count = 3 + non_tool_count = 0 + for msg in reversed(chat_history): + if msg.get("role") != "tool": + non_tool_count += 1 + if non_tool_count >= safe_zone_count: + break + + # 只处理前 (len - non_tool_count) 条消息 + max_process_index = len(chat_history) - non_tool_count + + while i < max_process_index and removed_count < count: + msg = chat_history[i] + role = msg.get("role", "") + + # 跳过 role=tool 的消息(它们会被对应的 assistant 消息一起处理) + if role == "tool": + i += 1 + continue + + # 检查这是否是一个带 tool_calls 的 assistant 消息 + if role == "assistant" and "tool_calls" in msg: + # 收集这个 assistant 消息及其后续的 tool 响应消息 + block_indices = [i] + j = i + 1 + while j < len(chat_history): + next_msg = chat_history[j] + if next_msg.get("role") == "tool": + block_indices.append(j) + j += 1 + else: + break + indices_to_remove.extend(block_indices) + 
removed_count += 1 + i = j + elif role in ["user", "assistant"]: + # 普通消息,可以直接删除 + indices_to_remove.append(i) + removed_count += 1 + i += 1 + else: + i += 1 + + # 从后往前排序,避免索引问题 + return sorted(indices_to_remove, reverse=True) + + async def _manage_context_length(self, chat_history: list) -> None: + """ + 上下文管理:当对话历史过长时进行压缩。 + + 当达到 20 条上下文时: + 1. 移除最早 10 条上下文 + 2. 对这 10 条内容进行 LLM 总结 + 3. 将总结后的内容存入记忆 + """ + CONTEXT_LIMIT = 20 + COMPRESS_COUNT = 10 + + # 计算实际消息数量(排除 role=tool 的工具返回消息) + actual_messages = [m for m in chat_history if m.get("role") != "tool"] + + if len(actual_messages) >= CONTEXT_LIMIT: + # 获取安全删除的索引 + indices_to_remove = self._get_safe_removal_indices(chat_history, COMPRESS_COUNT) + + if indices_to_remove: + # 收集要总结的消息(在删除前) + to_compress = [] + for i in sorted(indices_to_remove): + if 0 <= i < len(chat_history): + to_compress.append(chat_history[i]) + + if to_compress: + # 总结上下文 + try: + console.print("[accent]🧠 上下文过长,正在压缩并存入记忆...[/accent]") + summary = await self.llm_service.summarize_context(to_compress) + + # 存储了解信息(如果启用) + if ENABLE_KNOWLEDGE_MODULE: + try: + knowledge_count = await store_knowledge_from_context( + self.llm_service, + to_compress, + store_result_callback=lambda cat_id, cat_name, content: console.print( + f"[muted] [OK] 存储了解信息: {cat_name}[/muted]" + ), + ) + if knowledge_count > 0: + console.print(f"[success][OK] 了解模块: 存储{knowledge_count}条特征信息[/success]") + except Exception as e: + console.print(f"[warning]了解存储失败: {e}[/warning]") + if summary: + # 存入记忆 + # 显示压缩结果 + console.print( + Panel( + Markdown(summary), + title="📝 上下文已压缩", + border_style="green", + padding=(0, 1), + style="dim", + ) + ) + except Exception as e: + console.print(f"[warning]上下文总结失败: {e}[/warning]") + + # 从后往前删除 + for i in indices_to_remove: + if 0 <= i < len(chat_history): + chat_history.pop(i) + + # 清理"孤儿" tool 消息(没有对应 tool_calls 的 tool 消息) + valid_tool_call_ids = set() + for msg in chat_history: + if msg.get("role") == "assistant" and "tool_calls" in msg: + for tool_call in msg["tool_calls"]: + valid_tool_call_ids.add(tool_call.get("id", "")) + + # 删除无效的 tool 消息(从后往前) + i = len(chat_history) - 1 + while i >= 0: + msg = chat_history[i] + if msg.get("role") == "tool": + tool_call_id = msg.get("tool_call_id", "") + if tool_call_id not in valid_tool_call_ids: + chat_history.pop(i) + i -= 1 + + # ──────── LLM 循环架构 ──────── + + async def _start_chat(self, user_text: str): + """接收用户输入并启动/继续 LLM 对话循环""" + if not self.llm_service: + console.print("[warning]LLM 服务未初始化,跳过对话。[/warning]") + return + + now = datetime.now() + self._last_user_input_time = now + self._user_input_times.append(now) + + if self._chat_history is None: + # 首次对话:初始化上下文 + self._chat_start_time = now + self._last_assistant_response_time = None + self._chat_history = self.llm_service.build_chat_context(user_text) + else: + # 后续对话:追加用户消息到已有上下文 + self._chat_history.append(build_message(role="user", content=user_text)) + + await self._run_llm_loop(self._chat_history) + + async def _run_llm_loop(self, chat_history: list): + """ + LLM 循环架构核心。 + + LLM 持续运行,每步可能输出文本(内心思考)和/或调用工具: + - say(text): 对用户说话 + - wait(seconds): 暂停等待用户输入,超时或收到输入后继续 + - stop(): 结束循环,进入待机,直到用户下次输入 + - 不调用工具: 继续下一轮思考/生成 + + 每轮流程: + 1. 上下文管理:达到上限时自动压缩 + 2. 情商 + Timing + 了解模块(并行):分析用户情绪、对话时间节奏、检索用户特征 + *注:如果上次没有调用工具,跳过模块分析 + 3. 
调用主 LLM:基于完整上下文生成响应 + """ + consecutive_errors = 0 + last_had_tool_calls = True # 第一次循环总是执行模块分析 + + while True: + # ── 上下文管理 ── + await self._manage_context_length(chat_history) + + # ── 情商模块 + Timing 模块 + 了解模块(并行) ── + # 只有上次调用了工具才重新分析(首次循环除外) + if last_had_tool_calls: + timing_info = build_timing_info( + self._chat_start_time, + self._last_user_input_time, + self._last_assistant_response_time, + self._user_input_times, + ) + + # 根据配置决定要执行的模块 + tasks = [] + status_text_parts = [] + + if ENABLE_EMOTION_MODULE: + tasks.append(("eq", self.llm_service.analyze_emotion(chat_history))) + status_text_parts.append("🎭") + if ENABLE_COGNITION_MODULE: + tasks.append(("cognition", self.llm_service.analyze_cognition(chat_history))) + status_text_parts.append("🧩") + if ENABLE_TIMING_MODULE: + tasks.append(("timing", self.llm_service.analyze_timing(chat_history, timing_info))) + status_text_parts.append("⏱️🪞") + if ENABLE_KNOWLEDGE_MODULE: + tasks.append(("knowledge", retrieve_relevant_knowledge(self.llm_service, chat_history))) + status_text_parts.append("👤") + + with console.status( + f"[info]{' '.join(status_text_parts)} {' + '.join(status_text_parts)} 模块并行分析中...[/info]", + spinner="dots", + ): + results = await asyncio.gather(*[task for _, task in tasks], return_exceptions=True) + + # 解析结果 + eq_result, cognition_result, timing_result, knowledge_result = None, None, None, None + result_idx = 0 + if ENABLE_EMOTION_MODULE: + eq_result = results[result_idx] + result_idx += 1 + if ENABLE_COGNITION_MODULE: + cognition_result = results[result_idx] + result_idx += 1 + if ENABLE_TIMING_MODULE: + timing_result = results[result_idx] + result_idx += 1 + if ENABLE_KNOWLEDGE_MODULE: + knowledge_result = results[result_idx] + result_idx += 1 + + # 处理情商模块结果 + eq_analysis = "" + if ENABLE_EMOTION_MODULE: + if isinstance(eq_result, Exception): + console.print(f"[warning]情商模块分析失败: {eq_result}[/warning]") + elif eq_result: + eq_analysis = eq_result + console.print( + Panel( + Markdown(eq_analysis), + title="🎭 情绪感知", + border_style="bright_yellow", + padding=(0, 1), + style="dim", + ) + ) + + # 处理认知模块结果 + cognition_analysis = "" + if ENABLE_COGNITION_MODULE: + if isinstance(cognition_result, Exception): + console.print(f"[warning]认知模块分析失败: {cognition_result}[/warning]") + elif cognition_result: + cognition_analysis = cognition_result + console.print( + Panel( + Markdown(cognition_analysis), + title="🧩 意图感知", + border_style="bright_cyan", + padding=(0, 1), + style="dim", + ) + ) + + # 处理 Timing 模块结果(含自我反思功能) + timing_analysis = "" + if ENABLE_TIMING_MODULE: + if isinstance(timing_result, Exception): + console.print(f"[warning]Timing 模块分析失败: {timing_result}[/warning]") + elif timing_result: + timing_analysis = timing_result + console.print( + Panel( + Markdown(timing_analysis), + title="⏱️🪞 时间感知 & 自我反思", + border_style="bright_blue", + padding=(0, 1), + style="dim", + ) + ) + + # 处理了解模块结果 + knowledge_analysis = "" + if ENABLE_KNOWLEDGE_MODULE: + if isinstance(knowledge_result, Exception): + console.print(f"[warning]了解模块分析失败: {knowledge_result}[/warning]") + elif knowledge_result: + knowledge_analysis = knowledge_result + console.print( + Panel( + Markdown(knowledge_analysis), + title="👤 用户特征", + border_style="bright_magenta", + padding=(0, 1), + style="dim", + ) + ) + + # 注入感知信息(作为 assistant 的感知消息) + # 移除上一条感知消息(如果存在) + remove_last_perception(chat_history) + + # 构建感知内容 + perception_parts = [] + if eq_analysis: + perception_parts.append(f"情绪感知\n{eq_analysis}") + if cognition_analysis: + 
perception_parts.append(f"意图感知\n{cognition_analysis}") + if timing_analysis: + perception_parts.append(f"时间感知 & 自我反思\n{timing_analysis}") + if knowledge_analysis: + perception_parts.append(f"用户特征\n{knowledge_analysis}") + + if perception_parts: + # 添加感知消息(AI 的感知能力结果) + chat_history.append( + build_message( + role="assistant", + content="\n\n".join(perception_parts), + msg_type="perception", + ) + ) + else: + # 上次没有调用工具,跳过模块分析 + console.print("[muted]ℹ️ 上次未调用工具,跳过模块分析[/muted]") + + # ── 调用 LLM ── + with console.status("[info]💬 AI 正在思考...[/info]", spinner="dots"): + try: + response = await self.llm_service.chat_loop_step(chat_history) + consecutive_errors = 0 + except Exception as e: + consecutive_errors += 1 + console.print(f"[error]LLM 调用出错: {e}[/error]") + if consecutive_errors >= 3: + console.print("[error]连续出错,退出对话[/error]\n") + break + continue + + # 将 assistant 消息追加到历史 + chat_history.append(response.raw_message) + self._last_assistant_response_time = datetime.now() + + + # 显示内心思考(content 部分,淡色呈现) + if response.content: + console.print( + Panel( + Markdown(response.content), + title="💭 内心思考", + border_style="dim", + padding=(1, 2), + style="dim", + ) + ) + + # ── 处理工具调用 ── + if response.content and not response.tool_calls: + last_had_tool_calls = False continue - - # 检查这是否是一个带 tool_calls 的 assistant 消息 - if role == "assistant" and "tool_calls" in msg: - # 收集这个 assistant 消息及其后续的 tool 响应消息 - block_indices = [i] - j = i + 1 - while j < len(chat_history): - next_msg = chat_history[j] - if next_msg.get("role") == "tool": - block_indices.append(j) - j += 1 - else: - break - indices_to_remove.extend(block_indices) - removed_count += 1 - i = j - elif role in ["user", "assistant"]: - # 普通消息,可以直接删除 - indices_to_remove.append(i) - removed_count += 1 - i += 1 - else: - i += 1 - - # 从后往前排序,避免索引问题 - return sorted(indices_to_remove, reverse=True) - - async def _manage_context_length(self, chat_history: list) -> None: - """ - 上下文管理:当对话历史过长时进行压缩。 - - 当达到 20 条上下文时: - 1. 移除最早 10 条上下文 - 2. 对这 10 条内容进行 LLM 总结 - 3. 
将总结后的内容存入记忆 - """ - CONTEXT_LIMIT = 20 - COMPRESS_COUNT = 10 - - # 计算实际消息数量(排除 role=tool 的工具返回消息) - actual_messages = [m for m in chat_history if m.get("role") != "tool"] - - if len(actual_messages) >= CONTEXT_LIMIT: - # 获取安全删除的索引 - indices_to_remove = self._get_safe_removal_indices(chat_history, COMPRESS_COUNT) - - if indices_to_remove: - # 收集要总结的消息(在删除前) - to_compress = [] - for i in sorted(indices_to_remove): - if 0 <= i < len(chat_history): - to_compress.append(chat_history[i]) - - if to_compress: - # 总结上下文 - try: - console.print("[accent]🧠 上下文过长,正在压缩并存入记忆...[/accent]") - summary = await self.llm_service.summarize_context(to_compress) - - # 存储了解信息(如果启用) - if ENABLE_KNOWLEDGE_MODULE: - try: - knowledge_count = await store_knowledge_from_context( - self.llm_service, - to_compress, - store_result_callback=lambda cat_id, cat_name, content: console.print( - f"[muted] [OK] 存储了解信息: {cat_name}[/muted]" - ), - ) - if knowledge_count > 0: - console.print(f"[success][OK] 了解模块: 存储{knowledge_count}条特征信息[/success]") - except Exception as e: - console.print(f"[warning]了解存储失败: {e}[/warning]") - if summary: - # 存入记忆 - # 显示压缩结果 - console.print( - Panel( - Markdown(summary), - title="📝 上下文已压缩", - border_style="green", - padding=(0, 1), - style="dim", - ) - ) - except Exception as e: - console.print(f"[warning]上下文总结失败: {e}[/warning]") - - # 从后往前删除 - for i in indices_to_remove: - if 0 <= i < len(chat_history): - chat_history.pop(i) - - # 清理"孤儿" tool 消息(没有对应 tool_calls 的 tool 消息) - valid_tool_call_ids = set() - for msg in chat_history: - if msg.get("role") == "assistant" and "tool_calls" in msg: - for tool_call in msg["tool_calls"]: - valid_tool_call_ids.add(tool_call.get("id", "")) - - # 删除无效的 tool 消息(从后往前) - i = len(chat_history) - 1 - while i >= 0: - msg = chat_history[i] - if msg.get("role") == "tool": - tool_call_id = msg.get("tool_call_id", "") - if tool_call_id not in valid_tool_call_ids: - chat_history.pop(i) - i -= 1 - - # ──────── LLM 循环架构 ──────── - - async def _start_chat(self, user_text: str): - """接收用户输入并启动/继续 LLM 对话循环""" - if not self.llm_service: - console.print("[warning]LLM 服务未初始化,跳过对话。[/warning]") - return - - now = datetime.now() - self._last_user_input_time = now - self._user_input_times.append(now) - - if self._chat_history is None: - # 首次对话:初始化上下文 - self._chat_start_time = now - self._last_assistant_response_time = None - self._chat_history = self.llm_service.build_chat_context(user_text) - else: - # 后续对话:追加用户消息到已有上下文 - self._chat_history.append( - { - "role": "user", - "content": user_text, - } - ) - - await self._run_llm_loop(self._chat_history) - - async def _run_llm_loop(self, chat_history: list): - """ - LLM 循环架构核心。 - - LLM 持续运行,每步可能输出文本(内心思考)和/或调用工具: - - say(text): 对用户说话 - - wait(seconds): 暂停等待用户输入,超时或收到输入后继续 - - stop(): 结束循环,进入待机,直到用户下次输入 - - 不调用工具: 继续下一轮思考/生成 - - 每轮流程: - 1. 上下文管理:达到上限时自动压缩 - 2. 情商 + Timing + 了解模块(并行):分析用户情绪、对话时间节奏、检索用户特征 - *注:如果上次没有调用工具,跳过模块分析 - 3. 
调用主 LLM:基于完整上下文生成响应 - """ - consecutive_errors = 0 - last_had_tool_calls = True # 第一次循环总是执行模块分析 - - while True: - # ── 上下文管理 ── - await self._manage_context_length(chat_history) - - # ── 情商模块 + Timing 模块 + 了解模块(并行) ── - # 只有上次调用了工具才重新分析(首次循环除外) - if last_had_tool_calls: - timing_info = build_timing_info( - self._chat_start_time, - self._last_user_input_time, - self._last_assistant_response_time, - self._user_input_times, - ) - - # 根据配置决定要执行的模块 - tasks = [] - status_text_parts = [] - - if ENABLE_EMOTION_MODULE: - tasks.append(("eq", self.llm_service.analyze_emotion(chat_history))) - status_text_parts.append("🎭") - if ENABLE_COGNITION_MODULE: - tasks.append(("cognition", self.llm_service.analyze_cognition(chat_history))) - status_text_parts.append("🧩") - if ENABLE_TIMING_MODULE: - tasks.append(("timing", self.llm_service.analyze_timing(chat_history, timing_info))) - status_text_parts.append("⏱️🪞") - if ENABLE_KNOWLEDGE_MODULE: - tasks.append(("knowledge", retrieve_relevant_knowledge(self.llm_service, chat_history))) - status_text_parts.append("👤") - - with console.status( - f"[info]{' '.join(status_text_parts)} {' + '.join(status_text_parts)} 模块并行分析中...[/info]", - spinner="dots", - ): - results = await asyncio.gather(*[task for _, task in tasks], return_exceptions=True) - - # 解析结果 - eq_result, cognition_result, timing_result, knowledge_result = None, None, None, None - result_idx = 0 - if ENABLE_EMOTION_MODULE: - eq_result = results[result_idx] - result_idx += 1 - if ENABLE_COGNITION_MODULE: - cognition_result = results[result_idx] - result_idx += 1 - if ENABLE_TIMING_MODULE: - timing_result = results[result_idx] - result_idx += 1 - if ENABLE_KNOWLEDGE_MODULE: - knowledge_result = results[result_idx] - result_idx += 1 - - # 处理情商模块结果 - eq_analysis = "" - if ENABLE_EMOTION_MODULE: - if isinstance(eq_result, Exception): - console.print(f"[warning]情商模块分析失败: {eq_result}[/warning]") - elif eq_result: - eq_analysis = eq_result - console.print( - Panel( - Markdown(eq_analysis), - title="🎭 情绪感知", - border_style="bright_yellow", - padding=(0, 1), - style="dim", - ) - ) - - # 处理认知模块结果 - cognition_analysis = "" - if ENABLE_COGNITION_MODULE: - if isinstance(cognition_result, Exception): - console.print(f"[warning]认知模块分析失败: {cognition_result}[/warning]") - elif cognition_result: - cognition_analysis = cognition_result - console.print( - Panel( - Markdown(cognition_analysis), - title="🧩 意图感知", - border_style="bright_cyan", - padding=(0, 1), - style="dim", - ) - ) - - # 处理 Timing 模块结果(含自我反思功能) - timing_analysis = "" - if ENABLE_TIMING_MODULE: - if isinstance(timing_result, Exception): - console.print(f"[warning]Timing 模块分析失败: {timing_result}[/warning]") - elif timing_result: - timing_analysis = timing_result - console.print( - Panel( - Markdown(timing_analysis), - title="⏱️🪞 时间感知 & 自我反思", - border_style="bright_blue", - padding=(0, 1), - style="dim", - ) - ) - - # 处理了解模块结果 - knowledge_analysis = "" - if ENABLE_KNOWLEDGE_MODULE: - if isinstance(knowledge_result, Exception): - console.print(f"[warning]了解模块分析失败: {knowledge_result}[/warning]") - elif knowledge_result: - knowledge_analysis = knowledge_result - console.print( - Panel( - Markdown(knowledge_analysis), - title="👤 用户特征", - border_style="bright_magenta", - padding=(0, 1), - style="dim", - ) - ) - - # 注入感知信息(作为 assistant 的感知消息) - # 移除上一条感知消息(如果存在) - remove_last_perception(chat_history) - - # 构建感知内容 - perception_parts = [] - if eq_analysis: - perception_parts.append(f"情绪感知\n{eq_analysis}") - if cognition_analysis: - 
perception_parts.append(f"意图感知\n{cognition_analysis}") - if timing_analysis: - perception_parts.append(f"时间感知 & 自我反思\n{timing_analysis}") - if knowledge_analysis: - perception_parts.append(f"用户特征\n{knowledge_analysis}") - - if perception_parts: - # 添加感知消息(AI 的感知能力结果) - chat_history.append( - build_message( - role="assistant", - content="\n\n".join(perception_parts), - msg_type="perception", - ) - ) - else: - # 上次没有调用工具,跳过模块分析 - console.print("[muted]ℹ️ 上次未调用工具,跳过模块分析[/muted]") - - # ── 调用 LLM ── - with console.status("[info]💬 AI 正在思考...[/info]", spinner="dots"): - try: - response = await self.llm_service.chat_loop_step(chat_history) - consecutive_errors = 0 - except Exception as e: - consecutive_errors += 1 - console.print(f"[error]LLM 调用出错: {e}[/error]") - if consecutive_errors >= 3: - console.print("[error]连续出错,退出对话[/error]\n") - break - continue - - # 将 assistant 消息追加到历史 - chat_history.append(response.raw_message) - self._last_assistant_response_time = datetime.now() - - # 显示内心思考(content 部分,淡色呈现) - if response.content: - console.print( - Panel( - Markdown(response.content), - title="💭 内心思考", - border_style="dim", - padding=(1, 2), - style="dim", - ) - ) - - # ── 处理工具调用 ── - if response.tool_calls: - should_stop = False - ctx = self._build_tool_context() - - for tc in response.tool_calls: - if tc.name in {"send_message", "say"}: - await handle_send_message(tc, chat_history, ctx) - - elif tc.name == "stop": + + if response.tool_calls: + should_stop = False + ctx = self._build_tool_context() + + for tc in response.tool_calls: + if tc.name == "stop": await handle_stop(tc, chat_history) should_stop = True + elif tc.name == "reply": + reply = await self._generate_visible_reply(chat_history, response.content) + chat_history.append( + { + "role": "tool", + "tool_call_id": tc.id, + "content": "Visible reply generated and recorded.", + } + ) + chat_history.append( + build_message( + role="user", + content=f"\u3010\u9ea6\u9ea6\u7684\u53d1\u8a00\u3011{reply}", + ) + ) + + elif tc.name == "no_reply": + console.print("[muted]No visible reply this round.[/muted]") + chat_history.append( + { + "role": "tool", + "tool_call_id": tc.id, + "content": "No visible reply was sent for this round.", + } + ) + elif tc.name == "wait": tool_result = await handle_wait(tc, chat_history, ctx) - # 同步回 timing 时间戳 - if ctx.last_user_input_time != self._last_user_input_time: - self._last_user_input_time = ctx.last_user_input_time - if tool_result.startswith("[[QUIT]]"): - should_stop = True + # 同步回 timing 时间戳 + if ctx.last_user_input_time != self._last_user_input_time: + self._last_user_input_time = ctx.last_user_input_time + if tool_result.startswith("[[QUIT]]"): + should_stop = True + + elif tc.name == "write_file": + await handle_write_file(tc, chat_history) + + elif tc.name == "read_file": + await handle_read_file(tc, chat_history) + + elif tc.name == "list_files": + await handle_list_files(tc, chat_history) + + elif self._mcp_manager and self._mcp_manager.is_mcp_tool(tc.name): + await handle_mcp_tool(tc, chat_history, self._mcp_manager) + + else: + await handle_unknown_tool(tc, chat_history) + + if should_stop: + console.print("[muted]对话暂停,等待新输入...[/muted]\n") + break + + # 调用了工具,下次循环需要重新分析模块 + last_had_tool_calls = True + else: + # LLM 未调用任何工具 → 继续下一轮思考 + # (不做任何额外操作,直接回到循环顶部再次调用 LLM) + # 标记上次没有调用工具,下次循环跳过模块分析 + last_had_tool_calls = False + continue + + # ──────── 主循环 ──────── + + async def _init_mcp(self): + """初始化 MCP 服务器连接,发现并注册外部工具。""" + config_path = os.path.join( + os.path.dirname(os.path.abspath(__file__)), 
+ "mcp_config.json", + ) + self._mcp_manager = await MCPManager.from_config(config_path) + + if self._mcp_manager and self.llm_service: + mcp_tools = self._mcp_manager.get_openai_tools() + if mcp_tools: + self.llm_service.set_extra_tools(mcp_tools) + summary = self._mcp_manager.get_tool_summary() + console.print( + Panel( + f"已加载 {len(mcp_tools)} 个 MCP 工具:\n{summary}", + title="🔌 MCP 工具", + border_style="green", + padding=(0, 1), + ) + ) + + async def _generate_visible_reply(self, chat_history: list, latest_thought: str) -> str: + """Generate and emit a visible reply based on the latest thought.""" + if not self.llm_service or not latest_thought: + return "" - elif tc.name == "write_file": - await handle_write_file(tc, chat_history) - - elif tc.name == "read_file": - await handle_read_file(tc, chat_history) - - elif tc.name == "list_files": - await handle_list_files(tc, chat_history) - - elif tc.name == "store_context": - await handle_store_context(tc, chat_history, ctx) - - elif self._mcp_manager and self._mcp_manager.is_mcp_tool(tc.name): - await handle_mcp_tool(tc, chat_history, self._mcp_manager) - - else: - await handle_unknown_tool(tc, chat_history) - - if should_stop: - console.print("[muted]对话暂停,等待新输入...[/muted]\n") - break - - # 调用了工具,下次循环需要重新分析模块 - last_had_tool_calls = True - else: - # LLM 未调用任何工具 → 继续下一轮思考 - # (不做任何额外操作,直接回到循环顶部再次调用 LLM) - # 标记上次没有调用工具,下次循环跳过模块分析 - last_had_tool_calls = False - - # ──────── 主循环 ──────── - - async def _init_mcp(self): - """初始化 MCP 服务器连接,发现并注册外部工具。""" - config_path = os.path.join( - os.path.dirname(os.path.abspath(__file__)), - "mcp_config.json", - ) - self._mcp_manager = await MCPManager.from_config(config_path) - - if self._mcp_manager and self.llm_service: - mcp_tools = self._mcp_manager.get_openai_tools() - if mcp_tools: - self.llm_service.set_extra_tools(mcp_tools) - summary = self._mcp_manager.get_tool_summary() - console.print( - Panel( - f"已加载 {len(mcp_tools)} 个 MCP 工具:\n{summary}", - title="🔌 MCP 工具", - border_style="green", - padding=(0, 1), - ) - ) - - async def run(self): - """主循环:直接输入文本即可对话""" - # 根据配置决定是否初始化 MCP 服务器 - if ENABLE_MCP: - await self._init_mcp() - else: - console.print("[muted]🔌 MCP 已禁用 (ENABLE_MCP=false)[/muted]") - - # 启动异步输入读取器 - self._reader.start(asyncio.get_event_loop()) - - self._show_banner() - - try: - while True: - console.print("[bold cyan]> [/bold cyan]", end="") - raw_input = await self._reader.get_line() - - if raw_input is None: # EOF - console.print("\n[muted]再见![/muted]") - break - - raw_input = raw_input.strip() - if not raw_input: - continue - - await self._start_chat(raw_input) - finally: - if self._mcp_manager: - await self._mcp_manager.close() + with console.status("[info]Generating visible reply...[/info]", spinner="dots"): + reply = await self.llm_service.generate_reply(latest_thought, chat_history) + + console.print( + Panel( + Markdown(reply), + title="MaiSaka", + border_style="magenta", + padding=(1, 2), + ) + ) + return reply + + async def run(self): + """主循环:直接输入文本即可对话""" + # 根据配置决定是否初始化 MCP 服务器 + if ENABLE_MCP: + await self._init_mcp() + else: + console.print("[muted]🔌 MCP 已禁用 (ENABLE_MCP=false)[/muted]") + + # 启动异步输入读取器 + self._reader.start(asyncio.get_event_loop()) + + self._show_banner() + + try: + while True: + console.print("[bold cyan]> [/bold cyan]", end="") + raw_input = await self._reader.get_line() + + if raw_input is None: # EOF + console.print("\n[muted]再见![/muted]") + break + + raw_input = raw_input.strip() + if not raw_input: + continue + + await self._start_chat(raw_input) + 
finally: + if self._mcp_manager: + await self._mcp_manager.close() + + + diff --git a/src/maisaka/llm_service.py b/src/maisaka/llm_service.py index 5dfdfe48..bc83e9b9 100644 --- a/src/maisaka/llm_service.py +++ b/src/maisaka/llm_service.py @@ -1,11 +1,14 @@ -""" +""" MaiSaka LLM 服务 - 使用主项目 LLM 系统 将主项目的 LLMRequest 适配为 MaiSaka 需要的接口 """ +from datetime import datetime + +import json +import random from dataclasses import dataclass from typing import Any, List, Literal, Optional -import json from rich.console import Group from rich.panel import Panel @@ -13,7 +16,7 @@ from rich.pretty import Pretty from rich.text import Text from src.common.logger import get_logger -from src.config.config import config_manager +from src.config.config import config_manager, global_config from src.llm_models.payload_content.message import MessageBuilder, RoleType from src.llm_models.payload_content.tool_option import ToolCall as ToolCallOption, ToolOption from src.llm_models.utils_model import LLMRequest @@ -58,7 +61,13 @@ class ChatResponse: def build_message(role: str, content: str, msg_type: MessageType = "user", **kwargs) -> dict: """构建消息字典,包含消息类型标记。""" - msg = {"role": role, "content": content, MSG_TYPE_FIELD: msg_type, **kwargs} + msg = { + "role": role, + "content": content, + MSG_TYPE_FIELD: msg_type, + "_time": datetime.now().strftime("%H:%M:%S"), + **kwargs, + } return msg @@ -107,8 +116,8 @@ class MaiSakaLLMService: # 初始化 LLMRequest 实例(只使用 tool_use 和 replyer) self._llm_tool_use = LLMRequest(model_set=self._model_configs.tool_use, request_type="maisaka_tool_use") # 主对话也使用 tool_use 模型(因为需要工具调用支持) - self._llm_chat = self._llm_tool_use - # 分析模块也使用 tool_use 模型 + self._llm_planner = LLMRequest(model_set=self._model_configs.planner, request_type="maisaka_planner") + self._llm_chat = self._llm_planner self._llm_utils = self._llm_tool_use # 回复生成使用 replyer 模型 self._llm_replyer = LLMRequest(model_set=self._model_configs.replyer, request_type="maisaka_replyer") @@ -116,6 +125,9 @@ class MaiSakaLLMService: # 尝试修复数据库 schema(忽略错误) self._try_fix_database_schema() + # 构建人设信息 + personality_prompt = self._build_personality_prompt() + # 加载系统提示词 if chat_system_prompt is None: try: @@ -130,6 +142,7 @@ class MaiSakaLLMService: tools_section += "\n• list_files() — 获取 mai_files 目录下所有文件的元信息列表。" chat_prompt.add_context("file_tools_section", tools_section if tools_section else "") + chat_prompt.add_context("identity", personality_prompt) import asyncio loop = asyncio.new_event_loop() @@ -141,15 +154,15 @@ class MaiSakaLLMService: loop.close() except Exception as e: logger.error(f"加载系统提示词失败: {e}") - self._chat_system_prompt = "你是一个友好的 AI 助手。" + self._chat_system_prompt = f"{personality_prompt}\n\n你是一个友好的 AI 助手。" else: self._chat_system_prompt = chat_system_prompt - # 获取模型名称用于显示 self._model_name = ( - self._model_configs.tool_use.model_list[0] if self._model_configs.tool_use.model_list else "未配置" + self._model_configs.planner.model_list[0] if self._model_configs.planner.model_list else "未配置" ) + # 加载子模块提示词 self._emotion_prompt: Optional[str] = None self._cognition_prompt: Optional[str] = None @@ -200,6 +213,37 @@ class MaiSakaLLMService: # 静默忽略任何错误,不影响正常流程 pass + def _build_personality_prompt(self) -> str: + """构建人设信息,参考 replyer 的做法""" + try: + bot_name = global_config.bot.nickname + if global_config.bot.alias_names: + bot_nickname = f",也有人叫你{','.join(global_config.bot.alias_names)}" + else: + bot_nickname = "" + + # 获取基础personality + prompt_personality = global_config.personality.personality + + # 检查是否需要随机替换为状态(personality 本体) + 
if ( + hasattr(global_config.personality, "states") + and global_config.personality.states + and hasattr(global_config.personality, "state_probability") + and global_config.personality.state_probability > 0 + and random.random() < global_config.personality.state_probability + ): + # 随机选择一个状态替换personality + selected_state = random.choice(global_config.personality.states) + prompt_personality = selected_state + + prompt_personality = f"{prompt_personality};" + return f"你的名字是{bot_name}{bot_nickname},你{prompt_personality}" + except Exception as e: + logger.warning(f"构建人设信息失败: {e}") + # 返回默认人设 + return "你的名字是麦麦,你是一个活泼可爱的AI助手。" + def set_extra_tools(self, tools: List[dict]) -> None: """设置额外的工具定义(如 MCP 工具)""" self._extra_tools = list(tools) @@ -390,14 +434,34 @@ class MaiSakaLLMService: # 打印消息列表 built_messages = message_factory(None) - console.print( - Panel( - Group(*[self._render_message_panel(msg, index + 1) for index, msg in enumerate(built_messages)]), - title="MaiSaka LLM Request - chat_loop_step", - border_style="cyan", - padding=(0, 1), + + # 将消息分为普通消息和 tool 消息 + non_tool_panels = [] + tool_panels = [] + + for index, msg in enumerate(built_messages): + panel = self._render_message_panel(msg, index + 1) + role = msg.role.value if hasattr(msg.role, "value") else str(msg.role) + + if role == "tool": + tool_panels.append(panel) + else: + non_tool_panels.append(panel) + + # 先显示普通消息(group 在一个 panel 内) + if non_tool_panels: + console.print( + Panel( + Group(*non_tool_panels), + title="MaiSaka LLM Request - chat_loop_step", + border_style="cyan", + padding=(0, 1), + ) ) - ) + + # tool 消息作为单独的块展示 + for panel in tool_panels: + console.print(panel) response, (reasoning, model, tool_calls) = await self._llm_chat.generate_response_with_message_async( message_factory=message_factory, @@ -424,7 +488,11 @@ class MaiSakaLLMService: ) # 构建原始消息格式(MaiSaka 风格) - raw_message = {"role": "assistant", "content": response} + raw_message = { + "role": "assistant", + "content": response, + "_time": datetime.now().strftime("%H:%M:%S"), + } if converted_tool_calls: raw_message["tool_calls"] = [ { @@ -660,8 +728,12 @@ class MaiSakaLLMService: temperature=0.8, max_tokens=512, ) - return response.strip() if response else "..." except Exception as e: logger.error(f"回复生成 LLM 调用出错: {e}") return "..." + + + + + diff --git a/src/maisaka/mcp_client/manager.py b/src/maisaka/mcp_client/manager.py index 9c43c666..5409a39d 100644 --- a/src/maisaka/mcp_client/manager.py +++ b/src/maisaka/mcp_client/manager.py @@ -12,8 +12,6 @@ from .connection import MCPConnection, MCP_AVAILABLE # 内置工具名称集合 —— MCP 工具不允许与这些名称冲突 BUILTIN_TOOL_NAMES = frozenset( { - "say", - "send_message", "wait", "stop", "create_table", diff --git a/src/maisaka/replyer.py b/src/maisaka/replyer.py index eea23a6d..2cb428a2 100644 --- a/src/maisaka/replyer.py +++ b/src/maisaka/replyer.py @@ -1,76 +1,94 @@ """ -MaiSaka - Reply 回复生成器 -根据想法和上下文生成口语化回复。 +MaiSaka reply helper. """ -from typing import Optional +from datetime import datetime +from typing import Any, Optional + +from src.config.config import global_config + from .llm_service import MaiSakaLLMService +VISIBLE_REPLY_PREFIX = "\u3010\u9ea6\u9ea6\u7684\u53d1\u8a00\u3011" -def format_chat_history(messages: list) -> str: - """将聊天消息列表格式化为可读文本。""" + +def _normalize_content(content: str, limit: int = 500) -> str: + normalized = " ".join((content or "").split()) + if len(normalized) > limit: + return normalized[:limit] + "..." 
+ return normalized + + +def _format_message_time(_: dict[str, Any]) -> str: + return datetime.now().strftime("%H:%M:%S") + + +def _extract_visible_assistant_reply(message: dict[str, Any]) -> str: + if message.get("_type") == "perception": + return "" + + content = (message.get("content", "") or "").strip() + if not content: + return "" + + marker = "[generated_reply]" + if marker in content: + _, visible_reply = content.rsplit(marker, 1) + return _normalize_content(visible_reply) + + return "" + + +def _extract_guided_bot_reply(message: dict[str, Any]) -> str: + content = (message.get("content", "") or "").strip() + if content.startswith(VISIBLE_REPLY_PREFIX): + return _normalize_content(content[len(VISIBLE_REPLY_PREFIX) :].strip()) + return "" + + +def format_chat_history(messages: list[dict[str, Any]]) -> str: + """Format visible chat history for reply generation.""" + bot_nickname = global_config.bot.nickname.strip() or "Bot" parts: list[str] = [] - for msg in messages: - role = msg.get("role", "?") - content = msg.get("content", "") or "" - if role == "system": - parts.append(f"[系统] {content[:500]}") - elif role == "user": - parts.append(f"[用户] {content[:500]}") - elif role == "assistant": + + for message in messages: + role = message.get("role", "") + timestamp = _format_message_time(message) + + if role == "user": + guided_reply = _extract_guided_bot_reply(message) + if guided_reply: + parts.append(f"{timestamp} {bot_nickname}(分析器指导的麦麦发言):{guided_reply}") + continue + + content = _normalize_content(message.get("content", "") or "") if content: - parts.append(f"[助手思考] {content[:500]}") - for tc in msg.get("tool_calls", []): - func = tc.get("function", {}) - name = func.get("name", "?") - args = func.get("arguments", "") - if isinstance(args, str) and len(args) > 200: - args = args[:200] + "..." - parts.append(f"[助手调用 {name}] {args}") - elif role == "tool": - parts.append(f"[工具结果] {content[:300]}") + parts.append(f"{timestamp} 用户:{content}") + continue + + if role == "assistant": + visible_reply = _extract_visible_assistant_reply(message) + if visible_reply: + parts.append(f"{timestamp} {bot_nickname}(你):{visible_reply}") + return "\n".join(parts) class Replyer: - """ - 回复生成器。 - - 根据给定的想法(reason)和对话上下文,生成符合人设的口语化回复。 - """ + """Generate visible replies from thoughts and context.""" def __init__(self, llm_service: Optional[MaiSakaLLMService] = None): - """ - 初始化回复器。 - - Args: - llm_service: LLM 服务实例,如果为 None 则需要在调用前设置 - """ self._llm_service = llm_service self._enabled = True def set_llm_service(self, llm_service: MaiSakaLLMService) -> None: - """设置 LLM 服务""" self._llm_service = llm_service def set_enabled(self, enabled: bool) -> None: - """启用/禁用回复功能""" self._enabled = enabled - async def reply(self, reason: str, chat_history: list) -> str: - """ - 根据想法和上下文生成回复。 - - Args: - reason: 想要回复的方式、想法、内容(不包含具体回复内容) - chat_history: 对话历史上下文 - - Returns: - 生成的回复内容,失败时返回默认回复 - """ + async def reply(self, reason: str, chat_history: list[dict[str, Any]]) -> str: if not self._enabled or not reason or self._llm_service is None: return "..." - # 直接使用 LLM 服务的 generate_reply 方法 - # 该方法使用主项目的 replyer 模型配置 return await self._llm_service.generate_reply(reason, chat_history) diff --git a/src/maisaka/tool_handlers.py b/src/maisaka/tool_handlers.py index 5464f6bb..9ac91941 100644 --- a/src/maisaka/tool_handlers.py +++ b/src/maisaka/tool_handlers.py @@ -1,16 +1,16 @@ """ -MaiSaka - 工具调用处理器 -处理 LLM 循环中各工具(say/wait/stop/file/MCP)的执行逻辑。 +MaiSaka tool handlers. 
""" +from datetime import datetime +from pathlib import Path +from typing import TYPE_CHECKING, Any, Optional + import json as _json import os -from datetime import datetime -from pathlib import Path -from typing import TYPE_CHECKING, Optional -from rich.panel import Panel from rich.markdown import Markdown +from rich.panel import Panel from .config import console from .input_reader import InputReader @@ -21,15 +21,13 @@ if TYPE_CHECKING: from .mcp_client import MCPManager -# mai_files 目录路径 MAI_FILES_DIR = Path(os.path.join(os.path.dirname(os.path.abspath(__file__)), "mai_files")) -# 全局回复器 _replyer: Optional[Replyer] = None def get_replyer(llm_service: MaiSakaLLMService) -> Replyer: - """获取回复器实例(单例模式)""" + """Return a shared replyer instance.""" global _replyer if _replyer is None: _replyer = Replyer(llm_service) @@ -39,94 +37,85 @@ def get_replyer(llm_service: MaiSakaLLMService) -> Replyer: class ToolHandlerContext: - """工具处理器所需的共享上下文。""" + """Shared context for tool handlers.""" def __init__( self, llm_service: MaiSakaLLMService, reader: InputReader, user_input_times: list[datetime], - ): + ) -> None: self.llm_service = llm_service self.reader = reader self.user_input_times = user_input_times self.last_user_input_time: Optional[datetime] = None -async def handle_send_message(tc, chat_history: list, ctx: ToolHandlerContext): - """处理 say 工具:根据想法和上下文生成回复后展示给用户。""" +async def handle_send_message(tc: Any, chat_history: list[dict[str, Any]], ctx: ToolHandlerContext) -> None: + """Backward-compatible handler for legacy send-message style tools.""" reason = tc.arguments.get("reason", "") - console.print("[accent]🔧 调用工具: say(...)[/accent]") + console.print("[accent]Calling tool: send_message(...)[/accent]") - if reason: - # 想法以淡色展示 - console.print( - Panel( - Markdown(reason), - title="💭 回复想法", - border_style="dim", - padding=(0, 1), - style="dim", - ) - ) - # 根据想法和上下文生成回复 - with console.status( - "[info]✏️ 生成回复中...[/info]", - spinner="dots", - ): - replyer = get_replyer(ctx.llm_service) - reply = await replyer.reply(reason, chat_history) - console.print( - Panel( - Markdown(reply), - title="💬 MaiSaka", - border_style="magenta", - padding=(1, 2), - ) - ) - # 生成的回复作为 tool 结果写入上下文 + if not reason: chat_history.append( { "role": "tool", "tool_call_id": tc.id, - "content": f"已向用户展示(实际输出):{reply}", + "content": "Missing required argument: reason", } ) - else: - chat_history.append( - { - "role": "tool", - "tool_call_id": tc.id, - "content": "reason 内容为空,未展示", - } + return + + console.print( + Panel( + Markdown(reason), + title="Reply Reason", + border_style="dim", + padding=(0, 1), + style="dim", ) + ) + with console.status("[info]Generating visible reply...[/info]", spinner="dots"): + replyer = get_replyer(ctx.llm_service) + reply = await replyer.reply(reason, chat_history) -async def handle_stop(tc, chat_history: list): - """处理 stop 工具:结束对话循环。""" - console.print("[accent]🔧 调用工具: stop()[/accent]") + console.print( + Panel( + Markdown(reply), + title="MaiSaka", + border_style="magenta", + padding=(1, 2), + ) + ) chat_history.append( { "role": "tool", "tool_call_id": tc.id, - "content": "对话循环已停止,等待用户下次输入。", + "content": f"Visible reply generated:\n{reply}", } ) -async def handle_wait(tc, chat_history: list, ctx: ToolHandlerContext) -> str: - """ - 处理 wait 工具:等待用户输入或超时。 +async def handle_stop(tc: Any, chat_history: list[dict[str, Any]]) -> None: + """Handle the stop tool.""" + console.print("[accent]Calling tool: stop()[/accent]") + chat_history.append( + { + "role": "tool", + "tool_call_id": tc.id, + 
"content": "Conversation loop will stop after this round.", + } + ) - Returns: - 工具结果字符串。以 "[[QUIT]]" 开头表示用户要求退出对话。 - """ + +async def handle_wait(tc: Any, chat_history: list[dict[str, Any]], ctx: ToolHandlerContext) -> str: + """Handle the wait tool.""" seconds = tc.arguments.get("seconds", 30) - seconds = max(5, min(seconds, 300)) # 限制 5-300 秒 - console.print(f"[accent]🔧 调用工具: wait({seconds})[/accent]") + seconds = max(5, min(seconds, 300)) + console.print(f"[accent]Calling tool: wait({seconds})[/accent]") tool_result = await _do_wait(seconds, ctx) - chat_history.append( { "role": "tool", @@ -138,62 +127,49 @@ async def handle_wait(tc, chat_history: list, ctx: ToolHandlerContext) -> str: async def _do_wait(seconds: int, ctx: ToolHandlerContext) -> str: - """实际执行等待逻辑。""" - console.print(f"[muted]⏳ 等待回复 (最多 {seconds} 秒)...[/muted]") - console.print("[bold magenta]💬 > [/bold magenta]", end="") + """Wait for user input with a timeout.""" + console.print(f"[muted]Waiting for user input (timeout: {seconds}s)...[/muted]") + console.print("[bold magenta]> [/bold magenta]", end="") user_input = await ctx.reader.get_line(timeout=seconds) if user_input is None: - # 超时 - console.print() # 换行 - console.print("[muted]⏳ 等待超时[/muted]") - return "等待超时,用户未输入任何内容" + console.print() + console.print("[muted]Wait timeout[/muted]") + return "Wait timed out; no user input received." user_input = user_input.strip() - if not user_input: - return "用户发送了空消息" + return "User submitted an empty input." - # 更新 timing 时间戳 now = datetime.now() ctx.last_user_input_time = now ctx.user_input_times.append(now) if user_input.lower() in ("/quit", "/exit", "/q"): - return "[[QUIT]] 用户主动退出了对话" + return "[[QUIT]] User requested to exit." - return f"用户说:{user_input}" + return f"User input received: {user_input}" -async def handle_mcp_tool(tc, chat_history: list, mcp_manager: "MCPManager"): - """ - 处理 MCP 工具调用。 - - 将调用转发到 MCPManager,展示结果并写入对话上下文。 - """ - # 格式化参数预览 +async def handle_mcp_tool(tc: Any, chat_history: list[dict[str, Any]], mcp_manager: "MCPManager") -> None: + """Handle an MCP tool call.""" args_str = _json.dumps(tc.arguments, ensure_ascii=False) args_preview = args_str if len(args_str) <= 120 else args_str[:120] + "..." - console.print(f"[accent]🔌 调用 MCP 工具: {tc.name}({args_preview})[/accent]") + console.print(f"[accent]Calling MCP tool: {tc.name}({args_preview})[/accent]") - with console.status( - f"[info]🔌 MCP 工具 {tc.name} 执行中...[/info]", - spinner="dots", - ): + with console.status(f"[info]Running MCP tool {tc.name}...[/info]", spinner="dots"): result = await mcp_manager.call_tool(tc.name, tc.arguments) - # 展示结果(截断过长内容) - display_text = result if len(result) <= 800 else result[:800] + "\n... (已截断)" + display_text = result if len(result) <= 800 else result[:800] + "\n... 
(truncated)" console.print( Panel( display_text, - title=f"🔌 MCP: {tc.name}", + title=f"MCP: {tc.name}", border_style="bright_green", padding=(0, 1), ) ) - chat_history.append( { "role": "tool", @@ -203,59 +179,50 @@ async def handle_mcp_tool(tc, chat_history: list, mcp_manager: "MCPManager"): ) -async def handle_unknown_tool(tc, chat_history: list): - """处理未知工具调用。""" - console.print(f"[accent]🔧 调用工具: {tc.name}({tc.arguments})[/accent]") +async def handle_unknown_tool(tc: Any, chat_history: list[dict[str, Any]]) -> None: + """Handle an unknown tool call.""" + console.print(f"[accent]Calling unknown tool: {tc.name}({tc.arguments})[/accent]") chat_history.append( { "role": "tool", "tool_call_id": tc.id, - "content": f"未知工具: {tc.name}", + "content": f"Unknown tool: {tc.name}", } ) -async def handle_write_file(tc, chat_history: list): - """处理 write_file 工具:在 mai_files 目录下写入文件。""" +async def handle_write_file(tc: Any, chat_history: list[dict[str, Any]]) -> None: + """Write a file under the local mai_files workspace.""" filename = tc.arguments.get("filename", "") content = tc.arguments.get("content", "") - console.print(f'[accent]🔧 调用工具: write_file("{filename}")[/accent]') + console.print(f'[accent]Calling tool: write_file("{filename}")[/accent]') - # 确保目录存在 MAI_FILES_DIR.mkdir(parents=True, exist_ok=True) - - # 构建完整文件路径 file_path = MAI_FILES_DIR / filename try: - # 创建父目录(如果需要) file_path.parent.mkdir(parents=True, exist_ok=True) + with open(file_path, "w", encoding="utf-8") as file: + file.write(content) - # 写入文件 - with open(file_path, "w", encoding="utf-8") as f: - f.write(content) - - # 获取文件大小 file_size = file_path.stat().st_size - console.print( Panel( - f"文件已写入: {filename}\n大小: {file_size} 字符", - title="📁 文件已保存", + f"Path: {filename}\nSize: {file_size} bytes", + title="File Written", border_style="green", padding=(0, 1), ) ) - chat_history.append( { "role": "tool", "tool_call_id": tc.id, - "content": f"文件「{filename}」已成功写入,共 {file_size} 个字符。", + "content": f"File written successfully: {filename} ({file_size} bytes)", } ) - except Exception as e: - error_msg = f"写入文件失败: {e}" + except Exception as exc: + error_msg = f"Failed to write file: {exc}" console.print(f"[error]{error_msg}[/error]") chat_history.append( { @@ -266,17 +233,16 @@ async def handle_write_file(tc, chat_history: list): ) -async def handle_read_file(tc, chat_history: list): - """处理 read_file 工具:读取 mai_files 目录下的文件。""" +async def handle_read_file(tc: Any, chat_history: list[dict[str, Any]]) -> None: + """Read a file from the local mai_files workspace.""" filename = tc.arguments.get("filename", "") - console.print(f'[accent]🔧 调用工具: read_file("{filename}")[/accent]') + console.print(f'[accent]Calling tool: read_file("{filename}")[/accent]') - # 构建完整文件路径 file_path = MAI_FILES_DIR / filename try: if not file_path.exists(): - error_msg = f"文件「{filename}」不存在。" + error_msg = f"File does not exist: {filename}" console.print(f"[warning]{error_msg}[/warning]") chat_history.append( { @@ -288,7 +254,7 @@ async def handle_read_file(tc, chat_history: list): return if not file_path.is_file(): - error_msg = f"「{filename}」不是一个文件。" + error_msg = f"Path is not a file: {filename}" console.print(f"[warning]{error_msg}[/warning]") chat_history.append( { @@ -299,33 +265,27 @@ async def handle_read_file(tc, chat_history: list): ) return - # 读取文件内容 - with open(file_path, "r", encoding="utf-8") as f: - file_content = f.read() - - # 截断过长内容用于显示 - display_content = file_content - if len(file_content) > 1000: - display_content = file_content[:1000] + "\n... 
(内容已截断)" + with open(file_path, "r", encoding="utf-8") as file: + file_content = file.read() + display_content = file_content if len(file_content) <= 1000 else file_content[:1000] + "\n... (truncated)" console.print( Panel( display_content, - title=f"📄 文件内容: {filename}", + title=f"Read File: {filename}", border_style="blue", padding=(0, 1), ) ) - chat_history.append( { "role": "tool", "tool_call_id": tc.id, - "content": f"文件「{filename}」内容:\n{file_content}", + "content": f"File content of {filename}:\n{file_content}", } ) - except Exception as e: - error_msg = f"读取文件失败: {e}" + except Exception as exc: + error_msg = f"Failed to read file: {exc}" console.print(f"[error]{error_msg}[/error]") chat_history.append( { @@ -336,49 +296,42 @@ async def handle_read_file(tc, chat_history: list): ) -async def handle_list_files(tc, chat_history: list): - """处理 list_files 工具:获取 mai_files 目录下所有文件的元信息。""" - console.print("[accent]🔧 调用工具: list_files()[/accent]") +async def handle_list_files(tc: Any, chat_history: list[dict[str, Any]]) -> None: + """List files under the local mai_files workspace.""" + console.print("[accent]Calling tool: list_files()[/accent]") try: - # 确保目录存在 MAI_FILES_DIR.mkdir(parents=True, exist_ok=True) - # 获取所有文件 - files_info = [] + files_info: list[dict[str, Any]] = [] for item in MAI_FILES_DIR.rglob("*"): if item.is_file(): - # 获取相对路径 - rel_path = item.relative_to(MAI_FILES_DIR) stat = item.stat() files_info.append( { - "name": str(rel_path), + "name": str(item.relative_to(MAI_FILES_DIR)), "size": stat.st_size, "modified": datetime.fromtimestamp(stat.st_mtime).strftime("%Y-%m-%d %H:%M:%S"), } ) if not files_info: - result_text = "mai_files 目录为空,没有任何文件。" + result_text = "No files found under mai_files." else: - # 按名称排序 - files_info.sort(key=lambda x: x["name"]) - # 格式化输出 - lines = [f"📁 mai_files 目录下共有 {len(files_info)} 个文件:\n"] - for info in files_info: - lines.append(f" • {info['name']} ({info['size']} 字节, 修改于 {info['modified']})") + files_info.sort(key=lambda item: item["name"]) + lines = [f"Found {len(files_info)} file(s):\n"] + for item in files_info: + lines.append(f"- {item['name']} ({item['size']} bytes, modified {item['modified']})") result_text = "\n".join(lines) console.print( Panel( result_text, - title="📁 文件列表", + title="File List", border_style="cyan", padding=(0, 1), ) ) - chat_history.append( { "role": "tool", @@ -386,8 +339,8 @@ async def handle_list_files(tc, chat_history: list): "content": result_text, } ) - except Exception as e: - error_msg = f"获取文件列表失败: {e}" + except Exception as exc: + error_msg = f"Failed to list files: {exc}" console.print(f"[error]{error_msg}[/error]") chat_history.append( { @@ -398,160 +351,7 @@ async def handle_list_files(tc, chat_history: list): ) -async def handle_store_context(tc, chat_history: list, ctx: ToolHandlerContext): - """ - 处理 store_context 工具:将指定范围的对话上下文存入记忆系统,然后从对话中移除。 - - 参数: - - count: 要存入记忆的消息数量(从最早的消息开始) - - reason: 存入的原因 - """ - count = tc.arguments.get("count", 0) - reason = tc.arguments.get("reason", "") - console.print(f'[accent]🔧 调用工具: store_context(count={count}, reason="{reason}")[/accent]') - - if count <= 0: - error_msg = "count 参数必须大于 0" - console.print(f"[error]{error_msg}[/error]") - chat_history.append( - { - "role": "tool", - "tool_call_id": tc.id, - "content": error_msg, - } - ) - return - - # 计算实际消息数量(排除 role=tool 的工具返回消息) - actual_messages = [m for m in chat_history if m.get("role") != "tool"] - - if count > len(actual_messages): - error_msg = f"count({count}) 超过了当前对话消息数量({len(actual_messages)})" - 
console.print(f"[warning]{error_msg}[/warning]") - count = len(actual_messages) - - # 找到要移除的消息索引(确保 tool_calls 和 tool 响应成对) - indices_to_remove = [] - removed_count = 0 - i = 0 - - while i < len(chat_history) and removed_count < count: - msg = chat_history[i] - role = msg.get("role", "") - - # 跳过 role=tool 的消息(它们会被对应的 assistant 消息一起处理) - if role == "tool": - i += 1 - continue - - # 检查这是否是一个带 tool_calls 的 assistant 消息 - if role == "assistant" and "tool_calls" in msg: - # 检查这个消息是否包含当前的 tool_call(store_context 自己) - # 如果包含,跳过不删除(否则会导致 tool 响应孤儿) - contains_current_call = any(tc.get("id") == tc.id for tc in msg.get("tool_calls", [])) - if contains_current_call: - i += 1 - continue - - # 收集这个 assistant 消息及其后续的 tool 响应消息 - block_indices = [i] - j = i + 1 - while j < len(chat_history): - next_msg = chat_history[j] - if next_msg.get("role") == "tool": - block_indices.append(j) - j += 1 - else: - break - indices_to_remove.extend(block_indices) - removed_count += 1 - i = j - elif role in ["user", "assistant"]: - # 普通消息,可以直接删除 - indices_to_remove.append(i) - removed_count += 1 - i += 1 - else: - i += 1 - - if not indices_to_remove: - result_msg = "没有找到可存入记忆的消息" - chat_history.append( - { - "role": "tool", - "tool_call_id": tc.id, - "content": result_msg, - } - ) - return - - # 收集要总结的消息(在删除前) - to_compress = [] - for i in sorted(indices_to_remove): - if 0 <= i < len(chat_history): - to_compress.append(chat_history[i]) - - # 总结上下文并压缩 - try: - with console.status( - "[info]📝 正在总结上下文...[/info]", - spinner="dots", - ): - summary = await ctx.llm_service.summarize_context(to_compress) - - if summary: - console.print( - Panel( - Markdown(summary), - title="📝 上下文已压缩", - border_style="green", - padding=(0, 1), - style="dim", - ) - ) - result_msg = f"✅ 已压缩 {len(to_compress)} 条消息\n原因: {reason}" - else: - result_msg = "⚠️ 上下文总结失败" - console.print(f"[warning]{result_msg}[/warning]") - - except Exception as e: - result_msg = f"❌ 总结上下文时出错: {e}" - console.print(f"[error]{result_msg}[/error]") - - # 从后往前删除消息 - for i in sorted(indices_to_remove, reverse=True): - if 0 <= i < len(chat_history): - chat_history.pop(i) - - # 清理"孤儿" tool 消息(没有对应 tool_calls 的 tool 消息) - # 收集所有有效的 tool_call_id - valid_tool_call_ids = set() - for msg in chat_history: - if msg.get("role") == "assistant" and "tool_calls" in msg: - for tool_call in msg["tool_calls"]: - valid_tool_call_ids.add(tool_call.get("id", "")) - - # 删除无效的 tool 消息(从后往前) - i = len(chat_history) - 1 - while i >= 0: - msg = chat_history[i] - if msg.get("role") == "tool": - tool_call_id = msg.get("tool_call_id", "") - if tool_call_id not in valid_tool_call_ids: - chat_history.pop(i) - i -= 1 - - chat_history.append( - { - "role": "tool", - "tool_call_id": tc.id, - "content": result_msg, - } - ) -# ──────────────────── 初始化 mai_files 目录 ──────────────────── - -# 确保程序启动时 mai_files 目录存在 try: MAI_FILES_DIR.mkdir(parents=True, exist_ok=True) -except Exception as e: - console.print(f"[warning]创建 mai_files 目录失败: {e}[/warning]") +except Exception as exc: + console.print(f"[warning]Failed to initialize mai_files directory: {exc}[/warning]")