feat: add maisaka takeover reply logic
@@ -1,22 +0,0 @@
You are a conversation-pacing and time-awareness analysis module that also handles self-reflection. Your task, based on the conversation context and the timestamp information supplied by the system, is to analyze:

【Time-awareness analysis】
1. Conversation duration: how long the current conversation has been going
2. Reply intervals: how long ago the user last spoke, and what the user's average reply speed is
3. Suggested wait time: combining the conversation content with its timing patterns, suggest how many seconds it would be appropriate to wait next
4. Time-related insights:
- whether the user may be busy (replies slowing down)
- whether the user is actively engaged in the conversation (replying quickly)
- whether the current time of day (late night / morning / working hours, etc.) is suitable for continuing to chat
- whether the conversation has already run too long and the user may need a rest
- whether the conversation should be proactively brought to a close

【Self-reflection analysis】
1. Persona consistency: whether the configured personality traits are respected, whether the speaking style is consistent, and whether any statements break character
2. Reply soundness: whether there are logical gaps, whether the user's core request was addressed, and whether anything said was excessive or inappropriate
3. Cognitive limitations: whether some situations were insufficiently understood, whether necessary information was missing, and whether over-inference occurred

Requirements:
- Keep the output concise (4-6 sentences), split roughly evenly between time-awareness and self-reflection
- Focus on trends in the conversation's pacing and on the assistant's own persona consistency
- Output the analysis directly, with no section headings or formatting markers
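
The arithmetic this deleted prompt asks the model to reason about is easy to make concrete. A minimal sketch of deriving an average reply interval and a suggested wait (a hypothetical helper for illustration only, not code from this commit; the clamp bounds are arbitrary):

```python
from datetime import datetime


def suggest_wait_seconds(user_input_times: list[datetime]) -> int:
    """Illustrative only: turn recent reply intervals into a wait suggestion."""
    if len(user_input_times) < 2:
        return 30  # arbitrary default when there is no history yet
    intervals = [
        (later - earlier).total_seconds()
        for earlier, later in zip(user_input_times, user_input_times[1:])
    ]
    average = sum(intervals) / len(intervals)
    # Wait roughly one average interval, clamped to a sensible range.
    return int(min(max(average, 5.0), 300.0))
```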
@@ -22,14 +22,13 @@
2. If the user has sent something new but you judge that follow-up messages are still on the way, you may wait a bit to let the user finish
3. In specific situations you may also reply several times in a row, e.g. to ask a follow-up question or to supplement your own earlier message; in those cases you may skip stop or wait
4. If you want to direct 麦麦 to speak right away, you may use no tool at all
5. You need to control how often you speak. In a one-on-one chat you can speak at an even pace; when many users are present, do not reply to every message and keep your reply frequency down. When you decide not to speak for now, use wait to pause for a while or stop to wait for new messages

Your output rules:
Your analysis rules:
1. By default, output your latest analysis directly; do not repeat earlier analyses.
2. The latest analysis should be as concrete and context-specific as possible, not vague repetition.
3. If you think it is better to wait for the user to add more, you may call `wait(seconds)`.
4. If you think the current conversation should end with no reply at all, you may call `stop()`.
5. Only call a tool when you truly need to wait or stop; otherwise prefer to output your analysis directly.
6. If you have just made a tool call, the next round should build on the tool result and produce a fresh analysis.
7. The analysis should serve the decisions that follow, not mechanically restate what the user said.
3. Only call a tool when you truly need to wait or stop; otherwise prefer to output your analysis directly.
4. If you have just made a tool call, the next round should build on the tool result and produce a fresh analysis.
5. You need to judge which messages are addressed to you and which are users talking among themselves or to themselves; do not keep cutting in with unrelated topics.

Now, output your analysis:
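
The rules above reference `wait(seconds)` and `stop()` by name, but their schemas are not part of this diff. A minimal OpenAI-style sketch of what such tool definitions could look like (only the names come from the prompt; every field below is an assumption):

```python
# Hypothetical tool definitions matching the names used in the prompt.
PLANNER_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "wait",
            "description": "Pause and wait for the user to add more input.",
            "parameters": {
                "type": "object",
                "properties": {
                    "seconds": {"type": "integer", "description": "How many seconds to wait."},
                },
                "required": ["seconds"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "stop",
            "description": "End this round without replying; resume when a new message arrives.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
]
```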
@@ -2,9 +2,11 @@ from typing import Dict

import traceback

from src.common.logger import get_logger
from src.chat.message_receive.chat_manager import chat_manager
from src.chat.heart_flow.heartFC_chat import HeartFChatting
from src.chat.message_receive.chat_manager import chat_manager
from src.common.logger import get_logger
from src.config.config import global_config
from src.maisaka.runtime import MaisakaHeartFlowChatting
# from src.chat.brain_chat.brain_chat import BrainChatting

logger = get_logger("heartflow")
@@ -16,7 +18,7 @@ class HeartflowManager:

    def __init__(self):
        # self.heartflow_chat_list: Dict[str, HeartFChatting | BrainChatting] = {}
        self.heartflow_chat_list: Dict[str, HeartFChatting] = {}
        self.heartflow_chat_list: Dict[str, HeartFChatting | MaisakaHeartFlowChatting] = {}

    async def get_or_create_heartflow_chat(self, session_id: str):  # -> Optional[HeartFChatting | BrainChatting]:
        """Get or create a new HeartFChatting instance"""
@@ -29,7 +31,10 @@ class HeartflowManager:
        # new_chat = (
        #     HeartFChatting(session_id=session_id) if chat_session.group_id else BrainChatting(session_id=session_id)
        # )
        new_chat = HeartFChatting(session_id=session_id)
        if global_config.maisaka.take_over_hfc:
            new_chat = MaisakaHeartFlowChatting(session_id=session_id)
        else:
            new_chat = HeartFChatting(session_id=session_id)
        await new_chat.start()
        self.heartflow_chat_list[session_id] = new_chat
        return new_chat
@@ -41,7 +46,7 @@ class HeartflowManager:
    def adjust_talk_frequency(self, session_id: str, frequency: float):
        """Adjust the talk frequency of the specified chat stream"""
        chat = self.heartflow_chat_list.get(session_id)
        if chat and isinstance(chat, HeartFChatting):
        if chat and hasattr(chat, "adjust_talk_frequency"):
            chat.adjust_talk_frequency(frequency)
            logger.info(f"Adjusted talk frequency for chat {session_id} to {frequency}")
        else:

@@ -56,7 +56,7 @@ CONFIG_DIR: Path = PROJECT_ROOT / "config"
BOT_CONFIG_PATH: Path = (CONFIG_DIR / "bot_config.toml").resolve().absolute()
MODEL_CONFIG_PATH: Path = (CONFIG_DIR / "model_config.toml").resolve().absolute()
MMC_VERSION: str = "1.0.0"
CONFIG_VERSION: str = "8.1.4"
CONFIG_VERSION: str = "8.1.7"
MODEL_CONFIG_VERSION: str = "1.12.0"

logger = get_logger("config")
@@ -1528,15 +1528,6 @@ class MaiSakaConfig(ConfigBase):
    )
    """Enable the cognition analysis module"""

    enable_timing_module: bool = Field(
        default=True,
        json_schema_extra={
            "x-widget": "switch",
            "x-icon": "clock",
        },
    )
    """Enable the time-awareness module (includes self-reflection)"""

    enable_knowledge_module: bool = Field(
        default=True,
        json_schema_extra={
@@ -1591,15 +1582,6 @@ class MaiSakaConfig(ConfigBase):
    )
    """Whether to show the analyze_cognition prompt in the CLI"""

    show_analyze_timing_prompt: bool = Field(
        default=False,
        json_schema_extra={
            "x-widget": "switch",
            "x-icon": "terminal",
        },
    )
    """Whether to show the analyze_timing prompt in the CLI"""

    show_thinking: bool = Field(
        default=True,
        json_schema_extra={
@@ -1618,6 +1600,24 @@ class MaiSakaConfig(ConfigBase):
    )
    """Display name for the user in MaiSaka"""

    direct_image_input: bool = Field(
        default=True,
        json_schema_extra={
            "x-widget": "switch",
            "x-icon": "image",
        },
    )
    """Whether to feed images directly into the Maisaka main loop as multimodal messages, instead of using only transcribed text"""

    take_over_hfc: bool = Field(
        default=False,
        json_schema_extra={
            "x-widget": "switch",
            "x-icon": "git-branch",
        },
    )
    """Enable Maisaka takeover for the Heart Flow Chat planner and reply pipeline"""


class PluginRuntimeConfig(ConfigBase):
    """Plugin runtime configuration class"""

@@ -21,7 +21,6 @@ from .config import (
    ENABLE_EMOTION_MODULE,
    ENABLE_KNOWLEDGE_MODULE,
    ENABLE_MCP,
    ENABLE_TIMING_MODULE,
    SHOW_THINKING,
    USER_NAME,
    console,
@@ -32,7 +31,6 @@ from .knowledge_store import get_knowledge_store
from .llm_service import MaiSakaLLMService, build_message, remove_last_perception
from .message_adapter import format_speaker_content
from .mcp_client import MCPManager
from .timing import build_timing_info
from .tool_handlers import (
    ToolHandlerContext,
    handle_list_files,
@@ -117,7 +115,12 @@ class BufferCLI:
            self._last_assistant_response_time = None
            self._chat_history = self.llm_service.build_chat_context(user_text)
        else:
            self._chat_history.append(build_message(role="user", content=format_speaker_content(USER_NAME, user_text)))
            self._chat_history.append(
                build_message(
                    role="user",
                    content=format_speaker_content(USER_NAME, user_text, now),
                )
            )

        await self._run_llm_loop(self._chat_history)

@@ -141,13 +144,6 @@ class BufferCLI:

        while True:
            if last_had_tool_calls:
                timing_info = build_timing_info(
                    self._chat_start_time,
                    self._last_user_input_time,
                    self._last_assistant_response_time,
                    self._user_input_times,
                )

                tasks = []
                status_text_parts = []

@@ -157,9 +153,6 @@ class BufferCLI:
                if ENABLE_COGNITION_MODULE:
                    tasks.append(("cognition", self.llm_service.analyze_cognition(chat_history)))
                    status_text_parts.append("cognition")
                if ENABLE_TIMING_MODULE:
                    tasks.append(("timing", self.llm_service.analyze_timing(chat_history, timing_info)))
                    status_text_parts.append("timing")
                if ENABLE_KNOWLEDGE_MODULE:
                    tasks.append(("knowledge", retrieve_relevant_knowledge(self.llm_service, chat_history)))
                    status_text_parts.append("knowledge")
@@ -170,7 +163,7 @@ class BufferCLI:
                ):
                    results = await asyncio.gather(*[task for _, task in tasks], return_exceptions=True)

                eq_result, cognition_result, timing_result, knowledge_result = None, None, None, None
                eq_result, cognition_result, knowledge_result = None, None, None
                result_idx = 0
                if ENABLE_EMOTION_MODULE:
                    eq_result = results[result_idx]
@@ -178,9 +171,6 @@ class BufferCLI:
                if ENABLE_COGNITION_MODULE:
                    cognition_result = results[result_idx]
                    result_idx += 1
                if ENABLE_TIMING_MODULE:
                    timing_result = results[result_idx]
                    result_idx += 1
                if ENABLE_KNOWLEDGE_MODULE:
                    knowledge_result = results[result_idx]
                    result_idx += 1
@@ -219,23 +209,6 @@ class BufferCLI:
                        )
                    )

                timing_analysis = ""
                if ENABLE_TIMING_MODULE:
                    if isinstance(timing_result, Exception):
                        console.print(f"[warning]Timing analysis failed: {timing_result}[/warning]")
                    elif timing_result:
                        timing_analysis = timing_result
                        if SHOW_THINKING:
                            console.print(
                                Panel(
                                    Markdown(timing_analysis),
                                    title="Timing",
                                    border_style="bright_blue",
                                    padding=(0, 1),
                                    style="dim",
                                )
                            )

                knowledge_analysis = ""
                if ENABLE_KNOWLEDGE_MODULE:
                    if isinstance(knowledge_result, Exception):
@@ -260,8 +233,6 @@ class BufferCLI:
                    perception_parts.append(f"Emotion\n{eq_analysis}")
                if cognition_analysis:
                    perception_parts.append(f"Cognition\n{cognition_analysis}")
                if timing_analysis:
                    perception_parts.append(f"Timing\n{timing_analysis}")
                if knowledge_analysis:
                    perception_parts.append(f"Knowledge\n{knowledge_analysis}")

@@ -330,7 +301,11 @@ class BufferCLI:
        chat_history.append(
            build_message(
                role="user",
                content=format_speaker_content(global_config.bot.nickname.strip() or "MaiSaka", reply),
                content=format_speaker_content(
                    global_config.bot.nickname.strip() or "MaiSaka",
                    reply,
                    datetime.now(),
                ),
                source="guided_reply",
            )
        )
@@ -19,16 +19,16 @@ if str(_root) not in sys.path:
# ──────────────────── Module switches ────────────────────
ENABLE_EMOTION_MODULE = global_config.maisaka.enable_emotion_module
ENABLE_COGNITION_MODULE = global_config.maisaka.enable_cognition_module
ENABLE_TIMING_MODULE = global_config.maisaka.enable_timing_module
ENABLE_KNOWLEDGE_MODULE = global_config.maisaka.enable_knowledge_module
ENABLE_MCP = global_config.maisaka.enable_mcp
ENABLE_WRITE_FILE = global_config.maisaka.enable_write_file
ENABLE_READ_FILE = global_config.maisaka.enable_read_file
ENABLE_LIST_FILES = global_config.maisaka.enable_list_files
SHOW_ANALYZE_COGNITION_PROMPT = global_config.maisaka.show_analyze_cognition_prompt
SHOW_ANALYZE_TIMING_PROMPT = global_config.maisaka.show_analyze_timing_prompt
SHOW_THINKING = global_config.maisaka.show_thinking
USER_NAME = global_config.maisaka.user_name.strip() or "用户"
DIRECT_IMAGE_INPUT = global_config.maisaka.direct_image_input
TAKE_OVER_HFC = global_config.maisaka.take_over_hfc


# ──────────────────── Rich theme & Console ────────────────────
@@ -5,6 +5,7 @@ MaiSaka LLM service - uses the main project's LLM system

from datetime import datetime

import asyncio
import random
from dataclasses import dataclass
from typing import Any, List, Optional
@@ -16,11 +17,11 @@ from rich.text import Text

from src.common.data_models.mai_message_data_model import MaiMessage
from src.common.logger import get_logger
from src.common.prompt_i18n import load_prompt
from src.config.config import config_manager, global_config
from src.llm_models.payload_content.message import Message, MessageBuilder, RoleType
from src.llm_models.payload_content.tool_option import ToolCall, ToolOption
from src.llm_models.utils_model import LLMRequest
from src.prompt.prompt_manager import prompt_manager

from . import config
from .config import console
@@ -70,6 +71,8 @@ class MaiSakaLLMService:
        self._max_tokens = max_tokens
        self._enable_thinking = enable_thinking
        self._extra_tools: List[dict] = []
        self._prompts_loaded = False
        self._prompt_load_lock = asyncio.Lock()

        # Fetch the main project's model configuration
        try:
@@ -96,66 +99,20 @@ class MaiSakaLLMService:

        # Build persona information
        personality_prompt = self._build_personality_prompt()
        self._personality_prompt = personality_prompt

        # Load the system prompt
        # Prompts are now lazily loaded asynchronously right before the LLM call, to avoid nesting run_until_complete inside an existing event loop
        if chat_system_prompt is None:
            try:
                chat_prompt = prompt_manager.get_prompt("maidairy_chat")
                tools_section = ""
                if config.ENABLE_WRITE_FILE:
                    tools_section += "\n• write_file(filename, content) — write a file under the mai_files directory."
                if config.ENABLE_READ_FILE:
                    tools_section += "\n• read_file(filename) — read the contents of a file under the mai_files directory."
                if config.ENABLE_LIST_FILES:
                    tools_section += "\n• list_files() — list metadata for every file under the mai_files directory."

                chat_prompt.add_context("file_tools_section", tools_section if tools_section else "")
                chat_prompt.add_context("bot_name", global_config.bot.nickname)
                chat_prompt.add_context("identity", personality_prompt)
                import asyncio

                loop = asyncio.new_event_loop()
                asyncio.set_event_loop(loop)
                try:
                    self._chat_system_prompt = loop.run_until_complete(prompt_manager.render_prompt(chat_prompt))
                    logger.info(f"System prompt rendered, length: {len(self._chat_system_prompt)}")
                finally:
                    loop.close()
            except Exception as e:
                logger.error(f"Failed to load system prompt: {e}")
                self._chat_system_prompt = f"{personality_prompt}\n\nYou are a friendly AI assistant."
            self._chat_system_prompt = f"{personality_prompt}\n\nYou are a friendly AI assistant."
        else:
            self._chat_system_prompt = chat_system_prompt

        self._model_name = (
            self._model_configs.planner.model_list[0] if self._model_configs.planner.model_list else "not configured"
        )


        # Load submodule prompts
        # Submodule prompts are lazily loaded as well
        self._emotion_prompt: Optional[str] = None
        self._cognition_prompt: Optional[str] = None
        self._timing_prompt: Optional[str] = None
        try:
            import asyncio

            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            try:
                self._emotion_prompt = loop.run_until_complete(
                    prompt_manager.render_prompt(prompt_manager.get_prompt("maidairy_emotion"))
                )
                self._cognition_prompt = loop.run_until_complete(
                    prompt_manager.render_prompt(prompt_manager.get_prompt("maidairy_cognition"))
                )
                self._timing_prompt = loop.run_until_complete(
                    prompt_manager.render_prompt(prompt_manager.get_prompt("maidairy_timing"))
                )
                logger.info("Successfully loaded MaiSaka submodule prompts")
            finally:
                loop.close()
        except Exception as e:
            logger.warning(f"Failed to load submodule prompts, falling back to defaults: {e}")

    def _try_fix_database_schema(self) -> None:
        """Try to repair the database schema by adding missing columns"""
@@ -212,6 +169,43 @@ class MaiSakaLLMService:
        """Set extra tool definitions (e.g. MCP tools)"""
        self._extra_tools = list(tools)

    async def _ensure_prompts_loaded(self) -> None:
        """Lazily load prompts asynchronously, avoiding synchronous prompt rendering inside a running event loop."""
        if self._prompts_loaded:
            return

        async with self._prompt_load_lock:
            if self._prompts_loaded:
                return

            try:
                tools_section = ""
                if config.ENABLE_WRITE_FILE:
                    tools_section += "\n• write_file(filename, content) — write a file under the mai_files directory."
                if config.ENABLE_READ_FILE:
                    tools_section += "\n• read_file(filename) — read the contents of a file under the mai_files directory."
                if config.ENABLE_LIST_FILES:
                    tools_section += "\n• list_files() — list metadata for every file under the mai_files directory."
                self._chat_system_prompt = load_prompt(
                    "maidairy_chat",
                    file_tools_section=tools_section if tools_section else "",
                    bot_name=global_config.bot.nickname,
                    identity=self._personality_prompt,
                )
                logger.info(f"System prompt rendered, length: {len(self._chat_system_prompt)}")
            except Exception as e:
                logger.error(f"Failed to load system prompt: {e}")
                self._chat_system_prompt = f"{self._personality_prompt}\n\nYou are a friendly AI assistant."

            try:
                self._emotion_prompt = load_prompt("maidairy_emotion")
                self._cognition_prompt = load_prompt("maidairy_cognition")
                logger.info("Successfully loaded MaiSaka submodule prompts")
            except Exception as e:
                logger.warning(f"Failed to load submodule prompts, falling back to defaults: {e}")

            self._prompts_loaded = True

    @staticmethod
    def _get_role_badge_style(role: str) -> str:
        """Return a distinct badge style for each role."""
@@ -234,6 +228,22 @@ class MaiSakaLLMService:
        if isinstance(content, list):
            parts: list[object] = []
            for item in content:
                if isinstance(item, str):
                    parts.append(Text(item))
                    continue
                if isinstance(item, tuple) and len(item) == 2:
                    image_format, image_base64 = item
                    if isinstance(image_format, str) and isinstance(image_base64, str):
                        approx_size = max(0, len(image_base64) * 3 // 4)
                        size_text = f"{approx_size / 1024:.1f} KB" if approx_size >= 1024 else f"{approx_size} B"
                        parts.append(
                            Panel(
                                Text(f"image/{image_format} {size_text}\nbase64 omitted", style="magenta"),
                                border_style="magenta",
                                padding=(0, 1),
                            )
                        )
                        continue
                if isinstance(item, dict) and item.get("type") == "text" and isinstance(item.get("text"), str):
                    parts.append(Text(item["text"]))
                else:
@@ -262,6 +272,19 @@ class MaiSakaLLMService:
            "arguments": getattr(tool_call, "args", getattr(tool_call, "arguments", None)),
        }

    def _render_tool_call_panel(self, tool_call: Any, index: int, parent_index: int) -> Panel:
        """Render assistant tool calls as standalone cards."""
        title = Text.assemble(
            Text(" TOOL CALL ", style="bold white on magenta"),
            Text(f" #{parent_index}.{index}", style="muted"),
        )
        return Panel(
            Pretty(self._format_tool_call_for_display(tool_call), expand_all=True),
            title=title,
            border_style="magenta",
            padding=(0, 1),
        )

    def _render_message_panel(self, message: Any, index: int) -> Panel:
        """Render a single message from the main-loop prompt."""
        if isinstance(message, dict):
@@ -286,15 +309,6 @@ class MaiSakaLLMService:
            parts.append(Text(" message ", style="bold cyan"))
            parts.append(self._render_message_content(content))

        if tool_calls:
            parts.append(Text(" tool_calls ", style="bold magenta"))
            parts.append(
                Pretty(
                    [self._format_tool_call_for_display(tool_call) for tool_call in tool_calls],
                    expand_all=True,
                )
            )

        if tool_call_id:
            parts.append(
                Text.assemble(
@@ -333,6 +347,7 @@ class MaiSakaLLMService:

    async def chat_loop_step(self, chat_history: list[MaiMessage]) -> ChatResponse:
        """Execute one step of the conversation loop - uses the tool_use model"""
        await self._ensure_prompts_loaded()

        def message_factory(client) -> list[Message]:
            """Convert MaiSaka's chat_history into the main project's Message format"""
@@ -360,7 +375,13 @@ class MaiSakaLLMService:
        # Print the message list
        built_messages = message_factory(None)

        ordered_panels = [self._render_message_panel(msg, index + 1) for index, msg in enumerate(built_messages)]
        ordered_panels: list[Panel] = []
        for index, msg in enumerate(built_messages, start=1):
            ordered_panels.append(self._render_message_panel(msg, index))
            tool_calls = getattr(msg, "tool_calls", None)
            if tool_calls:
                for tool_call_index, tool_call in enumerate(tool_calls, start=1):
                    ordered_panels.append(self._render_tool_call_panel(tool_call, tool_call_index, index))

        if config.SHOW_THINKING and ordered_panels:
            console.print(
@@ -423,7 +444,7 @@ class MaiSakaLLMService:
        return [
            build_message(
                role=RoleType.User.value,
                content=format_speaker_content(config.USER_NAME, user_text),
                content=format_speaker_content(config.USER_NAME, user_text, datetime.now()),
                source="user",
            )
        ]
@@ -432,6 +453,7 @@ class MaiSakaLLMService:

    async def analyze_emotion(self, chat_history: list[MaiMessage]) -> str:
        """Emotion analysis - uses the utils model"""
        await self._ensure_prompts_loaded()
        filtered = [m for m in chat_history if get_message_kind(m) != "perception"]
        recent = filtered[-10:] if len(filtered) > 10 else filtered

@@ -469,6 +491,7 @@ class MaiSakaLLMService:

    async def analyze_cognition(self, chat_history: list[MaiMessage]) -> str:
        """Cognition analysis - uses the utils model"""
        await self._ensure_prompts_loaded()
        filtered = [m for m in chat_history if get_message_kind(m) != "perception"]
        recent = filtered[-10:] if len(filtered) > 10 else filtered

@@ -504,8 +527,9 @@ class MaiSakaLLMService:
            logger.error(f"Cognition analysis LLM call failed: {e}")
            return ""

    async def analyze_timing(self, chat_history: list[MaiMessage], timing_info: str) -> str:
    async def _removed_analyze_timing(self, chat_history: list[MaiMessage], timing_info: str) -> str:
        """Timing analysis - uses the utils model"""
        await self._ensure_prompts_loaded()
        filtered = [
            m
            for m in chat_history
@@ -526,7 +550,7 @@ class MaiSakaLLMService:

        prompt = "\n".join(prompt_parts)

        if config.SHOW_THINKING and config.SHOW_ANALYZE_TIMING_PROMPT:
        if False:
            print("\n" + "=" * 60)
            print("MaiSaka LLM Request - analyze_timing:")
            print(f"  {prompt}")
@@ -551,6 +575,7 @@ class MaiSakaLLMService:
        Generate a reply - uses the replyer model
        Can be called directly by the Replyer class
        """
        await self._ensure_prompts_loaded()
        from datetime import datetime
        from .replyer import format_chat_history

@@ -566,8 +591,7 @@ class MaiSakaLLMService:

        # Fetch the reply prompt
        try:
            replyer_prompt = prompt_manager.get_prompt("maidairy_replyer")
            system_prompt = await prompt_manager.render_prompt(replyer_prompt)
            system_prompt = load_prompt("maidairy_replyer")
        except Exception:
            system_prompt = "You are a friendly AI assistant; generate a natural reply based on the user's thoughts."

@@ -37,6 +37,33 @@ def _extract_guided_bot_reply(message: MaiMessage) -> str:
    return ""


def _split_user_message_segments(raw_content: str) -> list[tuple[Optional[str], str]]:
    """Split a user message into speaker-labeled segments.

    A new segment only starts when a line explicitly begins with `[speaker]`.
    Continuation lines remain part of the current speaker's message.
    """
    segments: list[tuple[Optional[str], str]] = []
    current_speaker: Optional[str] = None
    current_lines: list[str] = []

    for raw_line in raw_content.splitlines():
        speaker_name, content_body = parse_speaker_content(raw_line)
        if speaker_name is not None:
            if current_lines:
                segments.append((current_speaker, "\n".join(current_lines)))
            current_speaker = speaker_name
            current_lines = [content_body]
            continue

        current_lines.append(raw_line)

    if current_lines:
        segments.append((current_speaker, "\n".join(current_lines)))

    return segments
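
A quick usage sketch for the helper above; per its docstring, a new segment starts only on lines beginning with `[speaker]` (the exact label format is whatever `parse_speaker_content` accepts, which this diff does not show):

```python
raw = "[Alice] hi there\nstill Alice, second line\n[Bob] hello"
for speaker, text in _split_user_message_segments(raw):
    print(speaker, repr(text))
# Expected, under the assumed [speaker] line format:
# Alice 'hi there\nstill Alice, second line'
# Bob 'hello'
```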


def format_chat_history(messages: list[MaiMessage]) -> str:
    """Format visible chat history for reply generation."""
    bot_nickname = global_config.bot.nickname.strip() or "Bot"
@@ -52,10 +79,13 @@ def format_chat_history(messages: list[MaiMessage]) -> str:
            parts.append(f"{timestamp} {bot_nickname}(you): {guided_reply}")
            continue

        _, content_body = parse_speaker_content(get_message_text(message))
        content = _normalize_content(content_body)
        if content:
            parts.append(f"{timestamp} {USER_NAME}: {content}")
        raw_content = get_message_text(message)
        for speaker_name, content_body in _split_user_message_segments(raw_content):
            content = _normalize_content(content_body)
            if not content:
                continue
            visible_speaker = speaker_name or USER_NAME
            parts.append(f"{timestamp} {visible_speaker}: {content}")
        continue

        if role == "assistant":
src/maisaka/runtime.py (new file, 379 lines)
@@ -0,0 +1,379 @@
"""
|
||||
Maisaka runtime for non-CLI integrations.
|
||||
"""
|
||||
|
||||
from datetime import datetime
|
||||
from typing import Optional
|
||||
|
||||
import asyncio
|
||||
|
||||
from src.chat.message_receive.chat_manager import BotChatSession, chat_manager
|
||||
from src.chat.message_receive.message import SessionMessage
|
||||
from src.common.data_models.mai_message_data_model import GroupInfo, MaiMessage, UserInfo
|
||||
from src.common.data_models.message_component_data_model import MessageSequence
|
||||
from src.common.logger import get_logger
|
||||
from src.config.config import global_config
|
||||
from src.llm_models.payload_content.tool_option import ToolCall
|
||||
from src.services import send_service
|
||||
|
||||
from .config import (
|
||||
DIRECT_IMAGE_INPUT,
|
||||
ENABLE_COGNITION_MODULE,
|
||||
ENABLE_EMOTION_MODULE,
|
||||
ENABLE_KNOWLEDGE_MODULE,
|
||||
)
|
||||
from .knowledge import retrieve_relevant_knowledge
|
||||
from .llm_service import MaiSakaLLMService
|
||||
from .message_adapter import (
|
||||
build_message,
|
||||
build_visible_text_from_sequence,
|
||||
clone_message_sequence,
|
||||
format_speaker_content,
|
||||
get_message_role,
|
||||
remove_last_perception,
|
||||
)
|
||||
|
||||
logger = get_logger("maisaka_runtime")
|
||||
|
||||
|
||||
class MaisakaHeartFlowChatting:
|
||||
"""Session-scoped Maisaka runtime that replaces the HFC planner and reply loop."""
|
||||
|
||||
def __init__(self, session_id: str):
|
||||
self.session_id = session_id
|
||||
self.chat_stream: Optional[BotChatSession] = chat_manager.get_session_by_session_id(session_id)
|
||||
if self.chat_stream is None:
|
||||
raise ValueError(f"Session not found for Maisaka runtime: {session_id}")
|
||||
|
||||
session_name = chat_manager.get_session_name(session_id) or session_id
|
||||
self.log_prefix = f"[{session_name}]"
|
||||
self._llm_service = MaiSakaLLMService(api_key="", base_url=None, model="")
|
||||
self._chat_history: list[MaiMessage] = []
|
||||
self._pending_messages: list[SessionMessage] = []
|
||||
self._running = False
|
||||
self._loop_task: Optional[asyncio.Task] = None
|
||||
self._loop_lock = asyncio.Lock()
|
||||
self._new_message_event = asyncio.Event()
|
||||
self._max_internal_rounds = 6
|
||||
self._chat_start_time: Optional[datetime] = None
|
||||
self._last_user_input_time: Optional[datetime] = None
|
||||
self._last_assistant_response_time: Optional[datetime] = None
|
||||
self._user_input_times: list[datetime] = []
|
||||
self._max_context_size = max(1, int(global_config.chat.max_context_size))
|
||||
|
||||
async def start(self) -> None:
|
||||
"""Start the runtime loop."""
|
||||
if self._running:
|
||||
return
|
||||
|
||||
self._running = True
|
||||
self._loop_task = asyncio.create_task(self._main_loop())
|
||||
logger.info(f"{self.log_prefix} Maisaka runtime started")
|
||||
|
||||
async def stop(self) -> None:
|
||||
"""Stop the runtime loop."""
|
||||
if not self._running:
|
||||
return
|
||||
|
||||
self._running = False
|
||||
self._new_message_event.set()
|
||||
|
||||
if self._loop_task is not None:
|
||||
self._loop_task.cancel()
|
||||
try:
|
||||
await self._loop_task
|
||||
except asyncio.CancelledError:
|
||||
pass
|
||||
finally:
|
||||
self._loop_task = None
|
||||
|
||||
logger.info(f"{self.log_prefix} Maisaka runtime stopped")
|
||||
|
||||
def adjust_talk_frequency(self, frequency: float) -> None:
|
||||
"""Compatibility shim for the existing manager API."""
|
||||
_ = frequency
|
||||
|
||||
async def register_message(self, message: SessionMessage) -> None:
|
||||
"""Queue a newly received message for Maisaka processing."""
|
||||
self._pending_messages.append(message)
|
||||
self._new_message_event.set()
|
||||
|
||||
async def _main_loop(self) -> None:
|
||||
try:
|
||||
while self._running:
|
||||
await self._new_message_event.wait()
|
||||
self._new_message_event.clear()
|
||||
|
||||
async with self._loop_lock:
|
||||
pending_messages = self._drain_pending_messages()
|
||||
if not pending_messages:
|
||||
continue
|
||||
await self._ingest_messages(pending_messages)
|
||||
await self._run_internal_loop(anchor_message=pending_messages[-1])
|
||||
except asyncio.CancelledError:
|
||||
logger.info(f"{self.log_prefix} Maisaka runtime loop cancelled")
|
||||
|
||||
def _drain_pending_messages(self) -> list[SessionMessage]:
|
||||
drained_messages = list(self._pending_messages)
|
||||
self._pending_messages.clear()
|
||||
return drained_messages
|
||||
|
||||
async def _ingest_messages(self, messages: list[SessionMessage]) -> None:
|
||||
merged_sequence = await self._merge_messages(messages)
|
||||
merged_content = build_visible_text_from_sequence(merged_sequence).strip()
|
||||
if not merged_sequence.components:
|
||||
return
|
||||
|
||||
if self._chat_start_time is None:
|
||||
self._chat_start_time = messages[0].timestamp
|
||||
|
||||
self._last_user_input_time = messages[-1].timestamp
|
||||
self._user_input_times.extend(message.timestamp for message in messages)
|
||||
self._chat_history.append(
|
||||
build_message(
|
||||
role="user",
|
||||
content=merged_content,
|
||||
source="user",
|
||||
timestamp=messages[-1].timestamp,
|
||||
platform=messages[-1].platform,
|
||||
session_id=self.session_id,
|
||||
group_info=self._build_group_info(messages[-1]),
|
||||
user_info=self._build_runtime_user_info(),
|
||||
raw_message=merged_sequence,
|
||||
display_text=merged_content,
|
||||
)
|
||||
)
|
||||
self._trim_chat_history()
|
||||
|
||||
async def _merge_messages(self, messages: list[SessionMessage]) -> MessageSequence:
|
||||
merged_sequence = MessageSequence([])
|
||||
|
||||
for message in messages:
|
||||
user_info = message.message_info.user_info
|
||||
speaker_name = user_info.user_cardname or user_info.user_nickname or user_info.user_id
|
||||
prefix = format_speaker_content(speaker_name, "", message.timestamp)
|
||||
merged_sequence.text(prefix)
|
||||
|
||||
appended_component = False
|
||||
if DIRECT_IMAGE_INPUT:
|
||||
source_sequence = getattr(message, "maisaka_original_raw_message", message.raw_message)
|
||||
else:
|
||||
source_sequence = message.raw_message
|
||||
for component in clone_message_sequence(source_sequence).components:
|
||||
merged_sequence.components.append(component)
|
||||
appended_component = True
|
||||
|
||||
if not appended_component:
|
||||
if not message.processed_plain_text:
|
||||
await message.process()
|
||||
content = (message.processed_plain_text or "").strip()
|
||||
if content:
|
||||
merged_sequence.text(content)
|
||||
|
||||
merged_sequence.text("\n")
|
||||
|
||||
return merged_sequence
|
||||
|
||||
async def _run_internal_loop(self, anchor_message: SessionMessage) -> None:
|
||||
last_had_tool_calls = True
|
||||
|
||||
for _ in range(self._max_internal_rounds):
|
||||
if last_had_tool_calls:
|
||||
await self._append_perception_snapshot()
|
||||
|
||||
response = await self._llm_service.chat_loop_step(self._chat_history)
|
||||
response.raw_message.platform = anchor_message.platform
|
||||
response.raw_message.session_id = self.session_id
|
||||
response.raw_message.message_info.group_info = self._build_group_info(anchor_message)
|
||||
self._chat_history.append(response.raw_message)
|
||||
self._last_assistant_response_time = datetime.now()
|
||||
|
||||
if response.tool_calls:
|
||||
should_pause = await self._handle_tool_calls(response.tool_calls, response.content or "", anchor_message)
|
||||
if should_pause:
|
||||
return
|
||||
last_had_tool_calls = True
|
||||
continue
|
||||
|
||||
if response.content:
|
||||
last_had_tool_calls = False
|
||||
continue
|
||||
|
||||
return
|
||||
|
||||
logger.info(f"{self.log_prefix} Maisaka internal loop reached max rounds and paused")
|
||||
|
||||
def _trim_chat_history(self) -> None:
|
||||
"""Trim the oldest history until the user-message count is below the configured limit."""
|
||||
user_message_count = sum(1 for message in self._chat_history if get_message_role(message) == "user")
|
||||
if user_message_count <= self._max_context_size:
|
||||
return
|
||||
|
||||
trimmed_history = list(self._chat_history)
|
||||
removed_count = 0
|
||||
|
||||
while user_message_count >= self._max_context_size and trimmed_history:
|
||||
removed_message = trimmed_history.pop(0)
|
||||
removed_count += 1
|
||||
if get_message_role(removed_message) == "user":
|
||||
user_message_count -= 1
|
||||
|
||||
self._chat_history = trimmed_history
|
||||
logger.info(
|
||||
f"{self.log_prefix} Trimmed Maisaka history by {removed_count} message(s); "
|
||||
f"user-message count is now {user_message_count}."
|
||||
)
|
||||
|
||||
async def _append_perception_snapshot(self) -> None:
|
||||
tasks = []
|
||||
if ENABLE_EMOTION_MODULE:
|
||||
tasks.append(("emotion", self._llm_service.analyze_emotion(self._chat_history)))
|
||||
if ENABLE_COGNITION_MODULE:
|
||||
tasks.append(("cognition", self._llm_service.analyze_cognition(self._chat_history)))
|
||||
if ENABLE_KNOWLEDGE_MODULE:
|
||||
tasks.append(("knowledge", retrieve_relevant_knowledge(self._llm_service, self._chat_history)))
|
||||
|
||||
if not tasks:
|
||||
return
|
||||
|
||||
results = await asyncio.gather(*[task for _, task in tasks], return_exceptions=True)
|
||||
|
||||
perception_parts: list[str] = []
|
||||
for (task_name, _), result in zip(tasks, results):
|
||||
if isinstance(result, Exception):
|
||||
logger.warning(f"{self.log_prefix} Maisaka {task_name} analysis failed: {result}")
|
||||
continue
|
||||
if result:
|
||||
perception_parts.append(f"{task_name.title()}\n{result}")
|
||||
|
||||
remove_last_perception(self._chat_history)
|
||||
if not perception_parts:
|
||||
return
|
||||
|
||||
self._chat_history.append(
|
||||
build_message(
|
||||
role="assistant",
|
||||
content="\n\n".join(perception_parts),
|
||||
message_kind="perception",
|
||||
source="assistant",
|
||||
platform=self.chat_stream.platform,
|
||||
session_id=self.session_id,
|
||||
group_info=self._build_group_info(),
|
||||
user_info=self._build_runtime_bot_user_info(),
|
||||
)
|
||||
)
|
||||
|
||||
async def _handle_tool_calls(
|
||||
self,
|
||||
tool_calls: list[ToolCall],
|
||||
latest_thought: str,
|
||||
anchor_message: SessionMessage,
|
||||
) -> bool:
|
||||
for tool_call in tool_calls:
|
||||
if tool_call.func_name == "reply":
|
||||
await self._handle_reply(tool_call, latest_thought, anchor_message)
|
||||
return True
|
||||
|
||||
if tool_call.func_name == "no_reply":
|
||||
self._chat_history.append(
|
||||
self._build_tool_message(
|
||||
tool_call,
|
||||
"No visible reply was sent for this round.",
|
||||
)
|
||||
)
|
||||
continue
|
||||
|
||||
if tool_call.func_name == "wait":
|
||||
seconds = (tool_call.args or {}).get("seconds", 30)
|
||||
self._chat_history.append(
|
||||
self._build_tool_message(
|
||||
tool_call,
|
||||
f"Waiting for future input for up to {seconds} seconds.",
|
||||
)
|
||||
)
|
||||
return True
|
||||
|
||||
if tool_call.func_name == "stop":
|
||||
self._chat_history.append(
|
||||
self._build_tool_message(
|
||||
tool_call,
|
||||
"Conversation loop paused until a new message arrives.",
|
||||
)
|
||||
)
|
||||
return True
|
||||
|
||||
self._chat_history.append(
|
||||
self._build_tool_message(
|
||||
tool_call,
|
||||
f"Unsupported runtime tool: {tool_call.func_name}",
|
||||
)
|
||||
)
|
||||
|
||||
return False
|
||||
|
||||
async def _handle_reply(self, tool_call: ToolCall, latest_thought: str, anchor_message: SessionMessage) -> None:
|
||||
reply_text = await self._llm_service.generate_reply(latest_thought, self._chat_history)
|
||||
sent = await send_service.text_to_stream(
|
||||
text=reply_text,
|
||||
stream_id=self.session_id,
|
||||
set_reply=True,
|
||||
reply_message=anchor_message,
|
||||
typing=False,
|
||||
)
|
||||
tool_result = "Visible reply generated and sent." if sent else "Visible reply generation succeeded but send failed."
|
||||
self._chat_history.append(self._build_tool_message(tool_call, tool_result))
|
||||
if not sent:
|
||||
return
|
||||
|
||||
bot_name = global_config.bot.nickname.strip() or "MaiSaka"
|
||||
self._chat_history.append(
|
||||
build_message(
|
||||
role="user",
|
||||
content=format_speaker_content(bot_name, reply_text, datetime.now()),
|
||||
source="guided_reply",
|
||||
platform=anchor_message.platform,
|
||||
session_id=self.session_id,
|
||||
group_info=self._build_group_info(anchor_message),
|
||||
user_info=self._build_runtime_user_info(),
|
||||
)
|
||||
)
|
||||
|
||||
def _build_tool_message(self, tool_call: ToolCall, content: str) -> MaiMessage:
|
||||
return build_message(
|
||||
role="tool",
|
||||
content=content,
|
||||
source="tool",
|
||||
tool_call_id=tool_call.call_id,
|
||||
platform=self.chat_stream.platform,
|
||||
session_id=self.session_id,
|
||||
group_info=self._build_group_info(),
|
||||
user_info=UserInfo(user_id="maisaka_tool", user_nickname="tool", user_cardname=None),
|
||||
)
|
||||
|
||||
def _build_runtime_user_info(self) -> UserInfo:
|
||||
if self.chat_stream.user_id:
|
||||
return UserInfo(
|
||||
user_id=self.chat_stream.user_id,
|
||||
user_nickname=global_config.maisaka.user_name.strip() or "User",
|
||||
user_cardname=None,
|
||||
)
|
||||
return UserInfo(user_id="maisaka_user", user_nickname="user", user_cardname=None)
|
||||
|
||||
def _build_runtime_bot_user_info(self) -> UserInfo:
|
||||
return UserInfo(
|
||||
user_id=str(global_config.bot.qq_account) if global_config.bot.qq_account else "maisaka_assistant",
|
||||
user_nickname=global_config.bot.nickname.strip() or "MaiSaka",
|
||||
user_cardname=None,
|
||||
)
|
||||
|
||||
def _build_group_info(self, message: Optional[SessionMessage] = None) -> Optional[GroupInfo]:
|
||||
group_info = None
|
||||
if message is not None:
|
||||
group_info = message.message_info.group_info
|
||||
elif self.chat_stream.context and self.chat_stream.context.message:
|
||||
group_info = self.chat_stream.context.message.message_info.group_info
|
||||
|
||||
if group_info is None:
|
||||
return None
|
||||
|
||||
return GroupInfo(group_id=group_info.group_id, group_name=group_info.group_name)
|
||||
@@ -1,67 +0,0 @@
"""
MaiSaka timing helpers.
"""

from datetime import datetime
from typing import Optional


def _format_duration(total_seconds: int) -> str:
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    if hours > 0:
        return f"{hours}h {minutes}m {seconds}s"
    if minutes > 0:
        return f"{minutes}m {seconds}s"
    return f"{seconds}s"


def _get_time_period_label(hour: int) -> str:
    if 0 <= hour < 6:
        return "late_night"
    if 6 <= hour < 9:
        return "morning"
    if 9 <= hour < 12:
        return "late_morning"
    if 12 <= hour < 14:
        return "noon"
    if 14 <= hour < 18:
        return "afternoon"
    if 18 <= hour < 22:
        return "evening"
    return "night"


def build_timing_info(
    chat_start_time: Optional[datetime],
    last_user_input_time: Optional[datetime],
    last_assistant_response_time: Optional[datetime],
    user_input_times: list[datetime],
) -> str:
    """Build readable timing context for the timing analysis prompt."""
    now = datetime.now()
    parts: list[str] = [f"Current time: {now.strftime('%Y-%m-%d %H:%M:%S')}"]

    if chat_start_time:
        elapsed_seconds = int((now - chat_start_time).total_seconds())
        parts.append(f"Conversation duration: {_format_duration(elapsed_seconds)}")

    if last_user_input_time:
        since_user_seconds = int((now - last_user_input_time).total_seconds())
        parts.append(f"Seconds since last user input: {since_user_seconds}")

    if last_assistant_response_time:
        since_assistant_seconds = int((now - last_assistant_response_time).total_seconds())
        parts.append(f"Seconds since last Maisaka reply: {since_assistant_seconds}")

    if len(user_input_times) >= 2:
        intervals = [
            int((user_input_times[index] - user_input_times[index - 1]).total_seconds())
            for index in range(1, len(user_input_times))
        ]
        average_interval = sum(intervals) / len(intervals)
        parts.append(f"Average user input interval: {int(average_interval)}s")
        parts.append(f"Total user input count: {len(user_input_times)}")

    parts.append(f"Current time period: {_get_time_period_label(now.hour)}")
    return "\n".join(parts)