Version: 0.9.21.dev.260416
Backend:
1. Memory write path adds a "recall → compare → aggregate" dedup decision layer
   - New decision flow: when `decision.enabled` is set, the Runner takes the decision path (semantic recall of candidates → exact hash hit → pairwise LLM comparison → aggregate decision → execute ADD/UPDATE/DELETE/NONE); disabled by default, with the old path fully preserved
   - New `LLMDecisionOrchestrator`: orchestrates single-pair relation judgments, emitting one of four relations: duplicate/update/conflict/unrelated
   - New `decision_flow` / `apply_actions`: the decision-flow main loop and action application (add, update content, soft delete, skip)
   - New `aggregate_decision` / `decision_validate`: aggregation rules (action chosen by priority) and LLM output validation
   - New decision models: core decision-layer types such as `CandidateSnapshot` / `ComparisonResult` / `FinalDecision`
   - `ItemRepo` adds three decision-layer methods: `FindActiveByHash` / `UpdateContentByID` / `SoftDeleteByID`
   - RAG Runtime / Pipeline / Service add a `DeleteMemory` vector-deletion capability; `MilvusStore` now recognizes duplicate-collection errors
   - Runner adds `syncVectorDeletes` to clean up vectors for decision-layer DELETE actions
   - config adds `decision` (enabled/candidateTopK/candidateMinScore/fallbackMode) and `write.mode` options; `config_loader` supplies default values as a safety net
   - Removed `HANDOFF-RAG复用后续实施计划.md` and the old `log.txt`; added `Log.txt` with decision-flow debug logs
   - `normalize_facts` exports `HashContent` for reuse by the decision layer; audit adds an `update` operation constant

Frontend: none
Repository: none
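The aggregation step above can be sketched as follows. This is a minimal illustration, not the actual `aggregate_decision` implementation: the four relation names and the duplicate → NONE / no-candidates → ADD outcomes match the changelog and the debug log in this commit, while the exact priority between `update` and `conflict` is an assumption.

```go
package main

import "fmt"

// Relation is the pairwise judgment emitted by the LLM comparison step.
type Relation string

const (
	RelDuplicate Relation = "duplicate"
	RelUpdate    Relation = "update"
	RelConflict  Relation = "conflict"
	RelUnrelated Relation = "unrelated"
)

// Action is the final write decision.
type Action string

const (
	ActionAdd    Action = "ADD"
	ActionUpdate Action = "UPDATE"
	ActionDelete Action = "DELETE"
	ActionNone   Action = "NONE"
)

// aggregate picks one action from all pairwise relations by priority:
// any duplicate wins (skip the write), then conflict (drop the stale
// memory), then update; with no candidates or only unrelated ones, add.
func aggregate(relations []Relation) Action {
	seen := map[Relation]bool{}
	for _, r := range relations {
		seen[r] = true
	}
	switch {
	case seen[RelDuplicate]:
		return ActionNone
	case seen[RelConflict]:
		return ActionDelete
	case seen[RelUpdate]:
		return ActionUpdate
	default: // no candidates, or all unrelated
		return ActionAdd
	}
}

func main() {
	fmt.Println(aggregate(nil))                      // no candidates -> ADD
	fmt.Println(aggregate([]Relation{RelDuplicate})) // exact repeat -> NONE
}
```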
@@ -1,165 +0,0 @@
# HANDOFF: RAG Reuse Follow-up Implementation Plan (archived copy in the Memory directory)

## 1. Purpose of This Document

1. Give the next developer a follow-up implementation blueprint that can be executed directly.
2. Make "done / not done / cutover points / rollback points" explicit to avoid repeated exploration.
3. Ensure Memory and WebSearch share a single RAG Core, reducing duplicated implementations.
## 2. Current Status (as of this handoff)

### 2.1 Done

1. The shared `backend/infra/rag` skeleton is in place, including:
   - `core`: unified types, interfaces, pipeline, error codes.
   - `chunk/embed/retrieve/rerank/store`: default runnable implementations (mock/in-memory/noop).
   - `corpus`: `MemoryCorpus` and `WebCorpus` adapters.
   - `config`: `rag.*` config loading.
2. The Memory Day1 write path is wired up (`memory.extract.requested` event, `memory_jobs` persistence, worker state progression).
3. The Milvus `docker-compose` service definitions are in place (not yet brought up and verified locally due to network issues).
### 2.2 Not Done

1. RAG Core is not yet wired into real business call sites (it still runs as a parallel skeleton).
2. Real Milvus reads/writes are not implemented (`MilvusStore` is a placeholder).
3. The Eino Embedding/Reranker are not implemented (placeholders for now).
4. The Memory read/injection path (Day2) has not been cut over to RAG Core.
5. WebSearch has not been cut over to `WebCorpus + RAG Core`.
## 3. General Principles for Follow-up Work

1. Parallel migration: old and new logic coexist; gray-release first, then cut over, and delete the old implementation last.
2. One capability domain per round: the Memory read path first, then WebSearch.
3. Keep rollback switches: every cutover must have a one-step rollback path.
4. Fail gracefully: Rerank/Vector failures must not affect the main-path response.
## 4. Suggested Execution Order (4 Rounds)

## Round 2: Wire the Memory Read Path into RAG Core (do this first)

### Goals

1. Memory retrieval is carried by `infra/rag`, with the old retrieval kept as a fallback.
2. Injection quality is no worse than the current logic, and is observable.
### Work Items

1. Wire `core.Pipeline.Retrieve` into the Memory ReadService.
2. Build `MemoryRetrieveInput` with mandatory filters:
   - `user_id + assistant_id + conversation_id`
3. Config switch:
   - `memory.rag.enabled` (default `false`, enable via gray release)
4. Degradation strategy:
   - RAG failure -> fall back to the old retrieval path;
   - Rerank failure -> keep the original ordering (the pipeline already supports a fallback flag).
5. Metrics to add:
   - `memory_rag_hit_count`
   - `memory_rag_fallback_rate`
   - `memory_rag_latency_ms`
### Acceptance

1. With the switch off, behavior is identical to today's.
2. With the switch on, recall is stable and failures fall back cleanly.
3. Logs can answer "why was this memory not injected".
## Round 3: Wire WebSearch into WebCorpus + RAG Core

### Goals

1. WebSearch reuses the same retrieval flow instead of maintaining its own recall logic.
2. Strictly prevent recall from leaking across queries/sessions.
### Work Items

1. Map crawl results to `WebIngestItem` and ingest them into RAG.
2. Retrieval must carry:
   - `query_id` or `session_id`
3. Config switch:
   - `websearch.rag.enabled` (default `false`)
4. Keep the original websearch result path; RAG starts out as a "supplementary recall layer" only.
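Work item 2 pairs with risk #3 in section 6 (strict filter validation): refuse to search at all when the scoping key is missing. A minimal sketch; `SearchFilter` and the error text are illustrative, not the plan's actual types.

```go
package main

import (
	"errors"
	"fmt"
)

// SearchFilter carries the scoping keys a web-corpus query must include.
type SearchFilter struct {
	QueryID   string
	SessionID string
}

var errUnscoped = errors.New("websearch retrieval rejected: query_id or session_id required")

// validateScope enforces "must carry query_id or session_id" before any
// vector search runs, so results can never leak across queries/sessions.
func validateScope(f SearchFilter) error {
	if f.QueryID == "" && f.SessionID == "" {
		return errUnscoped
	}
	return nil
}

func main() {
	fmt.Println(validateScope(SearchFilter{}))                   // rejected
	fmt.Println(validateScope(SearchFilter{SessionID: "s-123"})) // allowed
}
```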
### Acceptance

1. With the switch on, answer quality does not drop and there is no cross-query contamination.
2. With the switch off, the old logic takes over immediately.
## Round 4: Replace Placeholders with Real Milvus + Eino Implementations

### Goals

1. Replace `InMemoryStore` with a production-ready `MilvusStore`.
2. Replace `MockEmbedder/NoopReranker` with Eino implementations.
### Work Items

1. `store/milvus_store.go`:
   - Implement `Upsert/Search/Delete/Get`
   - Set up the collection and the metadata-filter mapping
2. `embed/eino_embedder.go`:
   - Implement the embedding call with timeout control
3. `rerank/eino_reranker.go`:
   - Implement the rerank call with error degradation
4. Config additions:
   - `rag.store=milvus`
   - `rag.embed.provider=eino`
   - `rag.reranker.provider=eino`
### Acceptance

1. Milvus writes and reads are stable.
2. The main path degrades gracefully when the model service is flaky.
3. Metrics are complete (hits, latency, fallback, error codes).
## Round 5: Unified Cutover and Legacy Cleanup

### Goals

1. Memory + WebSearch both go through RAG Core by default.
2. Delete the duplicated legacy implementations to avoid the long-term cost of maintaining two tracks.
### Work Items

1. Flip the switch defaults to `true`.
2. After a stable observation window (3~7 days recommended), delete the legacy branch code.
3. Update the docs:
   - `backend/memory/记忆模块实施计划.md`
   - `backend/agent/通用能力接入文档.md` (must be kept in sync whenever a shared capability is added or replaced)
### Acceptance

1. No duplicated retrieval implementations remain in the code.
2. The rollback switches still work (RAG can be turned off in an emergency).
3. Production metrics are stable and traceable.
## 5. Switches and Rollback Recommendations

1. Recommended switches:
   - `memory.rag.enabled`
   - `websearch.rag.enabled`
   - `rag.reranker.enabled`
2. Rollback strategy:
   - Turn off the corpus-level switches first (memory/websearch).
   - Then turn off the reranker.
   - In the extreme case, degrade to the old retrieval path.
## 6. Key Risks and Mitigations

1. Risk: recall drift after cutover.
   Mitigation: dual-write log comparison (old path vs new path TopK).
2. Risk: Milvus latency spikes.
   Mitigation: retrieval timeout + fallback + rate limiting.
3. Risk: data leaking across sessions.
   Mitigation: strict validation of the filter dimensions; reject retrieval when they are missing.
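The dual-write comparison in risk #1 only needs an overlap metric between the two TopK lists. A sketch; the Jaccard choice and function name are mine, not from the plan:

```go
package main

import "fmt"

// topKOverlap returns |old ∩ new| / |old ∪ new| for two TopK ID lists
// (assumed free of duplicates), a simple drift signal for the dual-write
// log comparison: 1.0 means identical recall, 0.0 means full drift.
func topKOverlap(oldIDs, newIDs []string) float64 {
	union := map[string]bool{}
	inOld := map[string]bool{}
	for _, id := range oldIDs {
		inOld[id] = true
		union[id] = true
	}
	inter := 0
	for _, id := range newIDs {
		if inOld[id] {
			inter++
		}
		union[id] = true
	}
	if len(union) == 0 {
		return 1 // both lists empty: no drift to report
	}
	return float64(inter) / float64(len(union))
}

func main() {
	fmt.Println(topKOverlap([]string{"a", "b", "c"}, []string{"b", "c", "d"})) // 0.5
}
```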
## 7. Immediate Checklist for the Next Owner (minimum actions)

1. Do Round 2 first (Memory read path into RAG); do not touch WebSearch at the same time.
2. Before committing, always run:
   - `go test ./...`
3. After each local `go test`, clean up `.gocache` in the project root.
4. After finishing a round, append to this file:
   - what landed
   - remaining gaps
   - the entry point for the next round
43
backend/memory/Log.txt
Normal file
@@ -0,0 +1,43 @@
2026/04/16 11:24:55 D:/SmartFlow-Agent/backend/dao/agent.go:306 record not found
[44.328ms] [rows:0] SELECT * FROM `agent_chats` WHERE user_id = 1 AND chat_id = 'df7ce26d-6952-493d-ac7f-3bfe98cbc338' ORDER BY `agent_chats`.`id` LIMIT 1
2026/04/16 11:24:55 [DEBUG] loadOrCreateRuntimeState chatID=df7ce26d-6952-493d-ac7f-3bfe98cbc338 ok=false err=<nil> hasRuntime=false hasPending=false hasCtx=false hasSchedule=false hasOriginal=false
2026/04/16 11:24:55 [GORM-Cache] Invalidated conversation history cache for user 1 conversation df7ce26d-6952-493d-ac7f-3bfe98cbc338
2026/04/16 11:24:56 rag level=info component=store operation=ensure_collection action=search collection=smartflow_rag_chunks corpus=memory latency_ms=40 metric_type=COSINE status=created store=milvus vector_dim=1024
2026/04/16 11:24:57 rag level=error component=store operation=search action=search collection=smartflow_rag_chunks corpus=memory error=Post "http://localhost:19530/v2/vectordb/entities/search": context deadline exceeded error_code=DEADLINE_EXCEEDED filter_count=3 latency_ms=1304 status=failed store=milvus top_k=5 vector_dim=1024
2026/04/16 11:24:57 rag level=error component=runtime operation=retrieve action=search corpus=memory error=Post "http://localhost:19530/v2/vectordb/entities/search": context deadline exceeded error_code=DEADLINE_EXCEEDED latency_ms=1500 query_len=48 status=failed threshold=0.55 top_k=5
2026/04/16 11:25:03 [DEBUG] chat routing chat=df7ce26d-6952-493d-ac7f-3bfe98cbc338 route=direct_reply needs_rough_build=false needs_refine_after_rough_build=false allow_reorder=false thinking=false has_rough_build_done=false task_class_count=0 raw=<SMARTFLOW_ROUTE nonce="84656bca-1aa3-4308-bb7d-5127badf9d47" route="direct_reply"/>
[GIN] 2026/04/16 - 11:25:04 | 200 | 9.3560115s | 127.0.0.1 | POST "/api/v1/agent/chat"
2026/04/16 11:25:05 outbox due messages=3, start dispatch
2026/04/16 11:25:06 [GORM-Cache] Invalidated conversation history cache for user 1 conversation df7ce26d-6952-493d-ac7f-3bfe98cbc338
2026/04/16 11:25:07 [GORM-Cache] Invalidated conversation history cache for user 1 conversation df7ce26d-6952-493d-ac7f-3bfe98cbc338
2026/04/16 11:25:08 outbox due messages=1, start dispatch
2026/04/16 11:25:09 outbox due messages=1, start dispatch
2026/04/16 11:25:18 rag level=info component=store operation=search action=search collection=smartflow_rag_chunks corpus=memory filter_count=3 latency_ms=7 result_count=0 status=success store=milvus top_k=5 vector_dim=1024
2026/04/16 11:25:18 rag level=info component=runtime operation=retrieve action=search corpus=memory fallback_used=false hit_count=0 latency_ms=100 query_len=21 raw_count=0 status=success threshold=0.6 top_k=5
2026/04/16 11:25:18 [DEBUG][dedup] semantic recall candidates: job_id=18 user_id=1 memory_type=preference candidate_count=0
2026/04/16 11:25:18 [DEBUG][dedup] aggregate decision: job_id=18 action=ADD target_id=0 reason="no related old memory; add directly"
2026/04/16 11:25:19 rag level=info component=store operation=upsert action=add collection=smartflow_rag_chunks corpus=memory latency_ms=53 row_count=1 status=success store=milvus vector_dim=1024
2026/04/16 11:25:19 rag level=info component=runtime operation=ingest action=add chunk_count=1 corpus=memory document_count=1 latency_ms=158 status=success
2026/04/16 11:25:19 [dedup] decision flow completed: job_id=18 user_id=1 added=1 updated=0 deleted=0 skipped=0

2026/04/16 11:25:44 D:/SmartFlow-Agent/backend/dao/agent.go:306 record not found
[2.018ms] [rows:0] SELECT * FROM `agent_chats` WHERE user_id = 1 AND chat_id = '6279c9f0-0685-4484-bb33-d4216ef6107c' ORDER BY `agent_chats`.`id` LIMIT 1
2026/04/16 11:25:44 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 6279c9f0-0685-4484-bb33-d4216ef6107c
2026/04/16 11:25:44 [DEBUG] loadOrCreateRuntimeState chatID=6279c9f0-0685-4484-bb33-d4216ef6107c ok=false err=<nil> hasRuntime=false hasPending=false hasCtx=false hasSchedule=false hasOriginal=false
2026/04/16 11:25:44 rag level=info component=store operation=search action=search collection=smartflow_rag_chunks corpus=memory filter_count=3 latency_ms=46 result_count=0 status=success store=milvus top_k=5 vector_dim=1024
2026/04/16 11:25:44 rag level=info component=runtime operation=retrieve action=search corpus=memory fallback_used=false hit_count=0 latency_ms=145 query_len=45 raw_count=0 status=success threshold=0.55 top_k=5
2026/04/16 11:25:48 [DEBUG] chat routing chat=6279c9f0-0685-4484-bb33-d4216ef6107c route=direct_reply needs_rough_build=false needs_refine_after_rough_build=false allow_reorder=false thinking=false has_rough_build_done=false task_class_count=0 raw=<SMARTFLOW_ROUTE nonce="a868c365-4f8c-4d56-ac90-a8504842f81c" route="direct_reply"/>
[GIN] 2026/04/16 - 11:25:49 | 200 | 5.3825319s | 127.0.0.1 | POST "/api/v1/agent/chat"
2026/04/16 11:25:50 outbox due messages=3, start dispatch
2026/04/16 11:25:51 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 6279c9f0-0685-4484-bb33-d4216ef6107c
2026/04/16 11:25:52 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 6279c9f0-0685-4484-bb33-d4216ef6107c
2026/04/16 11:25:53 outbox due messages=2, start dispatch
2026/04/16 11:25:58 rag level=info component=store operation=search action=search collection=smartflow_rag_chunks corpus=memory filter_count=3 latency_ms=53 result_count=1 status=success store=milvus top_k=5 vector_dim=1024
2026/04/16 11:25:58 rag level=info component=runtime operation=retrieve action=search corpus=memory fallback_used=false hit_count=1 latency_ms=143 query_len=18 raw_count=1 status=success threshold=0.6 top_k=5
2026/04/16 11:25:58 [WARN][dedup] DocumentID parse failed; skipping candidate: document_id="memory:uid:1:6bf14130e4dfc8bd"
2026/04/16 11:25:58 [WARN][dedup] Milvus returned 1 result but every DocumentID failed to parse; degrading to MySQL: user_id=1 memory_type=preference
2026/04/16 11:25:58 [DEBUG][dedup] semantic recall candidates: job_id=19 user_id=1 memory_type=preference candidate_count=1
2026/04/16 11:25:58 [DEBUG][dedup] candidate detail: memory_id=17 score=0.0000 content="the user likes listening to music"
2026/04/16 11:26:04 [DEBUG][dedup] LLM comparison result: candidate_id=17 score=0.0000 relation=duplicate reason="'listening to songs' and 'listening to music' express the same meaning" candidate_content="the user likes listening to music"
2026/04/16 11:26:04 [DEBUG][dedup] aggregate decision: job_id=19 action=NONE target_id=0 reason="an exact duplicate old memory exists; skipping the write"
2026/04/16 11:26:04 [dedup] decision flow completed: job_id=19 user_id=1 added=0 updated=0 deleted=0 skipped=1
@@ -1,853 +0,0 @@
GOROOT=C:\Program Files\Go #gosetup
GOPATH=C:\Users\Dev\go #gosetup
"C:\Program Files\Go\bin\go.exe" build -o C:\Users\Dev\AppData\Local\JetBrains\GoLand2025.3\tmp\GoLand\___6go_build_main_go.exe D:\SmartFlow-Agent\backend\main.go #gosetup
C:\Users\Dev\AppData\Local\JetBrains\GoLand2025.3\tmp\GoLand\___6go_build_main_go.exe #gosetup
2026/04/10 22:43:49 Config loaded successfully
2026/04/10 22:43:57 Database connected successfully
2026/04/10 22:43:57 Database auto migration completed
2026/04/10 22:43:57 RAG runtime is disabled
2026/04/10 22:43:57 outbox engine starting: topic=smartflow.agent.outbox brokers=[localhost:9092] retry_scan=1s batch=100
2026/04/10 22:43:57 Kafka topic is ready: smartflow.agent.outbox
2026/04/10 22:43:57 Outbox event bus started
2026/04/10 22:43:57 Memory worker started
2026/04/10 22:43:57 Routes setup completed
2026/04/10 22:43:57 Server starting on port 8080...
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env: export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET /api/v1/health --> github.com/LoveLosita/smartflow/backend/routers.RegisterRouters.func1 (3 handlers)
[GIN-debug] POST /api/v1/user/register --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).UserRegister-fm (3 handlers)
[GIN-debug] POST /api/v1/user/login --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).UserLogin-fm (3 handlers)
[GIN-debug] POST /api/v1/user/refresh-token --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).RefreshTokenHandler-fm (3 handlers)
[GIN-debug] POST /api/v1/user/logout --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).UserLogout-fm (5 handlers)
[GIN-debug] POST /api/v1/task/create --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).AddTask-fm (6 handlers)
[GIN-debug] PUT /api/v1/task/complete --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).CompleteTask-fm (6 handlers)
[GIN-debug] PUT /api/v1/task/undo-complete --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).UndoCompleteTask-fm (6 handlers)
[GIN-debug] GET /api/v1/task/get --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).GetUserTasks-fm (5 handlers)
[GIN-debug] POST /api/v1/course/validate --> github.com/LoveLosita/smartflow/backend/api.(*CourseHandler).CheckUserCourse-fm (5 handlers)
[GIN-debug] POST /api/v1/course/import --> github.com/LoveLosita/smartflow/backend/api.(*CourseHandler).AddUserCourses-fm (6 handlers)
[GIN-debug] POST /api/v1/task-class/add --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserAddTaskClass-fm (6 handlers)
[GIN-debug] GET /api/v1/task-class/list --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserGetTaskClassInfos-fm (5 handlers)
[GIN-debug] GET /api/v1/task-class/get --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserGetCompleteTaskClass-fm (5 handlers)
[GIN-debug] PUT /api/v1/task-class/update --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserUpdateTaskClass-fm (6 handlers)
[GIN-debug] POST /api/v1/task-class/insert-into-schedule --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserAddTaskClassItemIntoSchedule-fm (6 handlers)
[GIN-debug] DELETE /api/v1/task-class/delete-item --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).DeleteTaskClassItem-fm (6 handlers)
[GIN-debug] DELETE /api/v1/task-class/delete-class --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).DeleteTaskClass-fm (6 handlers)
[GIN-debug] PUT /api/v1/task-class/apply-batch-into-schedule --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserInsertBatchTaskClassItemsIntoSchedule-fm (6 handlers)
[GIN-debug] GET /api/v1/schedule/today --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserTodaySchedule-fm (5 handlers)
[GIN-debug] GET /api/v1/schedule/week --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserWeeklySchedule-fm (5 handlers)
[GIN-debug] DELETE /api/v1/schedule/delete --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).DeleteScheduleEvent-fm (6 handlers)
[GIN-debug] GET /api/v1/schedule/recent-completed --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserRecentCompletedSchedules-fm (5 handlers)
[GIN-debug] GET /api/v1/schedule/current --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserOngoingSchedule-fm (5 handlers)
[GIN-debug] DELETE /api/v1/schedule/undo-task-item --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).UserRevocateTaskItemFromSchedule-fm (6 handlers)
[GIN-debug] GET /api/v1/schedule/smart-planning --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).SmartPlanning-fm (5 handlers)
[GIN-debug] POST /api/v1/schedule/smart-planning-multi --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).SmartPlanningMulti-fm (5 handlers)
[GIN-debug] POST /api/v1/agent/chat --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).ChatAgent-fm (6 handlers)
[GIN-debug] GET /api/v1/agent/conversation-meta --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetConversationMeta-fm (5 handlers)
[GIN-debug] GET /api/v1/agent/conversation-list --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetConversationList-fm (5 handlers)
[GIN-debug] GET /api/v1/agent/conversation-history --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetConversationHistory-fm (5 handlers)
[GIN-debug] GET /api/v1/agent/schedule-preview --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetSchedulePlanPreview-fm (5 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://github.com/gin-gonic/gin/blob/master/docs/doc.md#dont-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :8080
2026/04/10 22:43:57 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[3.151ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:43:57.526')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE

(the same two-line "record not found" worker poll repeats every 2 s from 22:43:59 through 22:45:25; the duplicate entries are elided here)
|
||||
|
||||
2026/04/10 22:45:27 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.064ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:27.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:29 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.056ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:29.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:31 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.577ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:31.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:33 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.464ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:33.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:35 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.467ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:35.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:37 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.541ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:37.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:39 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.457ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:39.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:41 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.545ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:41.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:43 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.342ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:43.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:45 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[0.577ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:45.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:47 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.538ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:47.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:49 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.219ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:49.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:51 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.073ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:51.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:53 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.101ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:53.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:55 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.099ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:55.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:57 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.549ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:57.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:59 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.098ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:59.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:01 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[0.993ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:01.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:03 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.203ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:03.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:05 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.514ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:05.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:07 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.033ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:07.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:09 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.586ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:09.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:11 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.123ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:11.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:11 D:/SmartFlow-Agent/backend/dao/agent.go:306 record not found
[44.927ms] [rows:0] SELECT * FROM `agent_chats` WHERE user_id = 1 AND chat_id = '325b37d1-3483-4c6f-b755-44532a4dbe3c' ORDER BY `agent_chats`.`id` LIMIT 1
2026/04/10 22:46:11 [DEBUG] loadOrCreateRuntimeState chatID=325b37d1-3483-4c6f-b755-44532a4dbe3c ok=false err=<nil> hasRuntime=false hasPending=false hasCtx=false hasSchedule=false hasOriginal=false
2026/04/10 22:46:11 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 325b37d1-3483-4c6f-b755-44532a4dbe3c

2026/04/10 22:46:12 D:/SmartFlow-Agent/backend/memory/repo/settings_repo.go:40 record not found
[48.854ms] [rows:0] SELECT * FROM `memory_user_settings` WHERE user_id = 1 ORDER BY `memory_user_settings`.`user_id` LIMIT 1
2026/04/10 22:46:15 [DEBUG] chat routing chat=325b37d1-3483-4c6f-b755-44532a4dbe3c route=execute needs_rough_build=true needs_refine_after_rough_build=true allow_reorder=false has_rough_build_done=false task_class_count=4 reason=批量排课需求,有任务类ID,且给出明确微调偏好(避开早八和晚10)
2026/04/10 22:46:16 [DEBUG] rough_build scope_task_classes=[2 3 4 5] placements=44 applied=44 day_mapping_miss=0 task_item_match_miss=0 pending_in_scope=0 total_tasks=105 window_days=42
2026/04/10 22:46:16 [DEBUG] execute LLM context begin chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1 message_count=4
----- message[0] -----
role: system
content:
你叫 SmartFlow,是专为重邮(CQUPT)学子打造的智能排程专家。
你的回复应当专业、干练,偶尔可以带一点程序员式的冷幽默。
重要约束:你无法直接写入数据库。除非系统明确告知“任务已落库成功”,否则禁止使用“已安排/已记录/已帮你记下”等完成态表述。

你是 SmartFlow NewAgent 的执行器,当前处于自由执行模式(无预定义 plan 步骤)。

阶段事实(强约束):
1. 若上下文给出“粗排已完成/rough_build_done”,表示目标任务类已经进入 suggested/existing,不是待排入状态。
2. 当前阶段目标是“微调”,不是“重新粗排”。
3. 若上下文明确“当前未收到明确微调偏好/本轮先收口”,应直接结束而不是继续优化循环。
4. 若用户提出了二次微调方向,本轮优先目标就是满足该方向。

你可以做什么:
1. 你可以基于用户给定的二次微调方向,对 suggested 做定向微调。
2. existing 属于已安排事实层,可用于冲突判断和参考,不作为 move/batch_move/spread_even 的目标。
3. 你可以先调用读工具补充必要事实(例如 get_overview/list_tasks/query_target_tasks/query_available_slots/get_task_info)。
4. 你可以在需要改动时提出 confirm(move/swap/unplace/batch_move/spread_even)。
5. 只有用户明确允许打乱顺序时,才可使用 min_context_switch。
6. 多任务处理默认使用队列链路:先 query_target_tasks(enqueue=true) 入队,再 queue_pop_head 逐项处理。

你不要做什么:
1. 不要假设任务还没排进去,然后改成逐个手动 place。
2. 不要伪造工具结果。
3. 不要重复做同类查询而没有新增结论;连续两轮同类读查询后,必须转入执行、ask_user,或明确阻塞原因。
4. list_tasks 的 status 只允许单值:all / existing / suggested / pending。禁止使用 "existing,suggested" 这类拼接值。
5. 若工具结果与已知事实明显冲突(如无写操作却从“有任务”变成“0任务”),先自我纠错并重查一次,不要直接 ask_user。
6. 不要连续两轮调用“同一读工具 + 等价 arguments”;若上一轮已成功返回,下一轮必须换工具或进入 confirm。
7. list_tasks.category 只接受任务类名称,不接受 task_class_ids(如 "1,2,3")。
8. 若已明确“本轮先收口”,不要继续调用 list_tasks/query_available_slots/move 做无目标微调。
9. 若用户明确了微调方向,不要只做“局部看起来更空”的随机调整;每次改动都要能对应到该方向。
10. 若顺序策略为“保持顺序”,禁止调用 min_context_switch。
11. 不要在同一轮构造大规模 batch_move;batch_move 最多 2 条,超过请走队列逐项处理。
12. 未调用 queue_pop_head 获取 current 前,不要调用 queue_apply_head_move。
13. 工具参数必须严格使用 schema 字段,禁止自造别名;例如 day_from/day_to 非法,必须改用 day_start/day_end。

执行规则:
1. 只输出严格 JSON,不要输出 markdown,不要在 JSON 外补充文本。
2. 读操作:action=continue + tool_call。
3. 写操作:action=confirm + tool_call。
4. 缺关键上下文且无法通过工具补齐:action=ask_user。
5. 任务完成:action=done,并在 goal_check 总结完成证据。
6. 流程应正式终止:action=abort。

补充 JSON 约束:
1. 只输出当前 action 真正需要的字段;无关字段直接省略,不要用 ""、{}、[]、null 占位。
2. 若输出 tool_call,参数字段名只能是 arguments,禁止写成 parameters。
3. tool_call 只能是单个对象:{"name":"工具名","arguments":{...}},不能输出数组。
4. 只有 action=abort 时才允许输出 abort 字段;非 abort 动作不要输出 abort。
5. action=continue / ask_user / confirm 时,speak 必须是非空自然语言。

可用工具(简表):
1. batch_move:原子性批量移动多个任务(仅 suggested,最多2条),全部成功才生效。若含 existing/pending 或任一冲突将整批失败回滚。
参数:moves(必填,array)
返回类型:string(自然语言文本)
返回示例:批量移动完成,2个任务全部成功。(单次最多2条)
2. get_overview:获取规划窗口总览(任务视角,全量返回):保留课程占位统计,展开任务清单(过滤课程明细)。
参数:{}
返回类型:string(自然语言文本)
返回示例:规划窗口共27天...课程占位条目34个...任务清单(全量,已过滤课程)...
3. get_task_info:查询单个任务详细信息,包括类别、状态、占用时段、嵌入关系。
参数:task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:[35]第一章随机事件与概率 | 状态:已预排(suggested) | 占用时段:第3天第5-6节
4. list_tasks:列出任务清单,可按类别和状态过滤。category 传任务类名称,status 仅支持单值 all/existing/suggested/pending。
参数:category(可选,string);status(可选,string:all/existing/suggested/pending)
返回类型:string(自然语言文本)
返回示例:已预排任务共24个: [35]第一章随机事件与概率 — 已预排至 第3天第5-6节...
5. min_context_switch:在指定任务集合内重排 suggested 任务,尽量让同类任务连续以减少上下文切换。仅在用户明确允许打乱顺序时使用。task_ids 必填(兼容 task_id)。
参数:task_id(可选,int);task_ids(必填,array)
返回类型:string(自然语言文本)
返回示例:最少上下文切换重排完成:共处理 6 个任务,上下文切换次数 5 -> 2。
6. move:将一个已预排任务(仅 suggested)移动到新位置。existing 属于已安排事实层,不参与 move。task_id/new_day/new_slot_start 必填。
参数:new_day(必填,int);new_slot_start(必填,int);task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:已将 [35]... 从第3天第5-6节移至第5天第3-4节。
7. place:将一个待安排任务预排到指定位置。自动检测可嵌入宿主。task_id/day/slot_start 必填。
参数:day(必填,int);slot_start(必填,int);task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:已将 [35]... 预排到第5天第3-4节。
8. query_available_slots:查询候选空位池(先返回纯空位,不足再补可嵌入位),适合 move 前的落点筛选。
参数:after_section(可选,int);allow_embed(可选,bool);before_section(可选,int);day(可选,int);day_end(可选,int);day_of_week(可选,array);day_scope(可选,string:all/workday/weekend);day_start(可选,int);duration(可选,int);exclude_sections(可选,array);limit(可选,int);section_from(可选,int);section_to(可选,int);slot_type(可选,string);slot_types(可选,array);span(可选,int);week(可选,int);week_filter(可选,array);week_from(可选,int);week_to(可选,int)
返回类型:string(JSON字符串)
返回示例:{"tool":"query_available_slots","count":12,"strict_count":8,"embedded_count":4,"slots":[{"day":5,"week":12,"day_of_week":3,"slot_start":1,"slot_end":2,"slot_type":"empty"}]}
9. query_range:查看某天或某时段的细粒度占用详情。day 必填,slot_start/slot_end 选填(不填查整天)。
参数:day(必填,int);slot_end(可选,int);slot_start(可选,int)
返回类型:string(自然语言文本)
返回示例:第5天第3-6节:第3节空、第4节空...
10. query_target_tasks:查询候选任务集合,可按 status/week/day/task_id/category 筛选;默认自动入队,供后续 queue_pop_head 逐项处理。
参数:category(可选,string);day(可选,int);day_end(可选,int);day_of_week(可选,array);day_scope(可选,string:all/workday/weekend);day_start(可选,int);enqueue(可选,bool);limit(可选,int);reset_queue(可选,bool);status(可选,string:all/existing/suggested/pending);task_id(可选,int);task_ids(可选,array);task_item_id(可选,int);task_item_ids(可选,array);week(可选,int);week_filter(可选,array);week_from(可选,int);week_to(可选,int)
返回类型:string(JSON字符串)
返回示例:{"tool":"query_target_tasks","count":6,"status":"suggested","enqueue":true,"enqueued":6,"queue":{"pending_count":6},"items":[{"task_id":35,"name":"示例任务","status":"suggested","slots":[{"day":3,"week":12,"day_of_week":1,"slot_start":5,"slot_end":6}]}]}
11. queue_apply_head_move:将当前队首任务移动到指定位置并自动出队。仅作用于 current,不接受 task_id。new_day/new_slot_start 必填。
参数:new_day(必填,int);new_slot_start(必填,int)
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_apply_head_move","success":true,"task_id":35,"pending_count":4,"completed_count":2,"result":"已将 [35]... 从第3天第5-6节移至第5天第3-4节。"}
12. queue_pop_head:弹出并返回当前队首任务;若已有 current 则复用,保证一次只处理一个任务。
参数:{}
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_pop_head","has_head":true,"pending_count":5,"current":{"task_id":35,"name":"示例任务","status":"suggested","slots":[{"day":3,"week":12,"day_of_week":1,"slot_start":5,"slot_end":6}]}}
13. queue_skip_head:跳过当前队首任务(不改日程),将其标记为 skipped 并继续后续队列。
参数:reason(可选,string)
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_skip_head","success":true,"skipped_task_id":35,"pending_count":4,"skipped_count":1}
14. queue_status:查看当前待处理队列状态(pending/current/completed/skipped)。
参数:{}
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_status","pending_count":5,"completed_count":1,"skipped_count":0,"current_task_id":35,"current_attempt":1}
15. spread_even:在给定任务集合内做均匀化铺开:先按筛选条件收集候选坑位,再规划并原子落地。task_ids 必填(兼容 task_id)。
参数:after_section(可选,int);allow_embed(可选,bool);before_section(可选,int);day(可选,int);day_end(可选,int);day_of_week(可选,array);day_scope(可选,string:all/workday/weekend);day_start(可选,int);exclude_sections(可选,array);limit(可选,int);slot_type(可选,string);slot_types(可选,array);task_id(可选,int);task_ids(必填,array);week(可选,int);week_filter(可选,array);week_from(可选,int);week_to(可选,int)
返回类型:string(自然语言文本)
返回示例:均匀化调整完成:共处理 6 个任务,候选坑位 24 个。
16. swap:交换两个已落位任务的位置。两个任务必须时长相同。task_a/task_b 必填。
参数:task_a(必填,int);task_b(必填,int)
返回类型:string(自然语言文本)
返回示例:交换完成:[35]... ↔ [36]...
17. unplace:将一个已落位任务移除,恢复为待安排状态。会自动清理嵌入关系。task_id 必填。
参数:task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:已将 [35]... 移除,恢复为待安排状态。

----- message[1] -----
role: assistant
content:
历史上下文(仅供参考):
- 用户目标:帮我排一下这些任务类,直接排,不要早八和晚10
- 阶段锚点:粗排已完成,本轮仅做微调,不重新 place。
- 历史归档 ReAct 摘要:暂无。
- 历史归档 ReAct 窗口:暂无。
- 当前循环早期摘要:暂无。

----- message[2] -----
role: assistant
content:
当轮 ReAct Loop 记录(窗口):
- 已清空(新一轮 loop 准备中)。

----- message[3] -----
role: system
content:
当前执行状态:
- 当前轮次:1/60
- 当前模式:自由执行(无预定义步骤)
执行锚点:
- 当前用户诉求:帮我排一下这些任务类,直接排,不要早八和晚10
- 目标任务类:task_class_ids=[2,3,4,5]
- 啥时候结束Loop:你可以根据工具调用记录自行判断。
- 非目标:不重新粗排、不修改无关任务类。
- 阶段约束:粗排已完成,本轮只微调 suggested;existing 仅作已安排事实参考,不作为可移动目标。
- 参数纪律:工具参数必须严格使用 schema 字段;若返回“参数非法”,需先改参再继续。
- 顺序策略:默认保持 suggested 相对顺序,禁止调用 min_context_switch。
本轮指令:请继续当前任务的执行阶段,严格输出 JSON。

[DEBUG] execute LLM context end chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1
2026/04/10 22:46:30 [DEBUG] execute LLM 响应 chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1 action=continue speak_len=103 raw_len=186 raw_preview={"action":"continue","tool_call":{"name":"get_overview","arguments":{}},"speak":"我先看看当前粗排完成后的总览,找出需要调整的早八/晚十时段的目标任务。"}
2026/04/10 22:46:30 [DEBUG] execute tool chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1 tool=get_overview args={} before=tasks=105 pending=0 suggested=44 existing=61 task_item_with_slot=44 event_with_slot=52 after=tasks=105 pending=0 suggested=44 existing=61 task_item_with_slot=44 event_with_slot=52 result_preview=规划窗口共42天,每天12个时段,总计504个时段。 当前已占用197个,空闲307个。课程占位条目61个(仅用于占位统计);任务条目:已安排(existing)0个、已预排(suggested)44个、待安排(pending)0个。 每日概况: 第1天(星期3):总占3/12(课程占3/12,任务占0/12) — 任务:无 第2天(星期4):总占6/12(课程占6/12,任务占0/12) —
2026/04/10 22:46:30 [DEBUG] execute LLM context begin chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=2 message_count=4
----- message[1] -----
role: assistant
content:
历史上下文(仅供参考):
- 用户目标:帮我排一下这些任务类,直接排,不要早八和晚10
- 阶段锚点:粗排已完成,本轮仅做微调,不重新 place。
- 历史归档 ReAct 摘要:暂无。
- 历史归档 ReAct 窗口:暂无。
- 当前循环早期摘要:无(当前窗口已覆盖全部 ReAct 记录)。

----- message[2] -----
role: assistant
content:
当轮 ReAct Loop 记录(窗口):
1) thought/reason:我先看看当前粗排完成后的总览,找出需要调整的早八/晚十时段的目标任务。
tool_call:get_overview({})
observation:规划窗口共42天,每天12个时段,总计504个时段。
当前已占用197个,空闲307个。课程占位条目61个(仅用于占位统计);任务条目:已安排(existing)0个、已预排(suggested)44个、待安排(pending)0个。

每日概况:
第1天(星期3):总占3/12(课程占3/12,任务占0/12) — 任务:无
第2天(星期4):总占6/12(课程占6/12,任务占0/12) — 任务:无
第3天(星期5):总占6/12(课程占4/12,任务占2/12) — 任务:[94]线性表(顺序表/链表)与复杂度分析(suggested,9-10节)
第4天(星期6):总占4/12(课程占0/12,任务占4/12) — 任务:[70]数制与码制、逻辑代数基础(suggested,3-4节) [82]命题逻辑与等值演算(suggested,5-6节)
第5天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
第6天(星期1):总占6/12(课程占2/12,任务占4/12) — 任务:[71]组合逻辑电路分析方法(suggested,7-8节) [95]栈与队列及典型应用(suggested,9-10节)
第7天(星期2):总占12/12(课程占10/12,任务占2/12) — 任务:[83]谓词逻辑与量词推理(suggested,3-4节)
第8天(星期3):总占5/12(课程占5/12,任务占0/12) — 任务:无
第9天(星期4):总占8/12(课程占6/12,任务占2/12) — 任务:[72]组合逻辑电路设计方法(含卡诺图)(suggested,9-10节)
第10天(星期5):总占6/12(课程占2/12,任务占4/12) — 任务:[96]串与模式匹配(KMP)(suggested,7-8节) [84]集合与关系基本性质(suggested,9-10节)
第11天(星期6):总占0/12(课程占0/12,任务占0/12) — 任务:无
第12天(星期7):总占2/12(课程占0/12,任务占2/12) — 任务:[73]译码器、编码器、多路选择器综合应用(suggested,7-8节)
第13天(星期1):总占6/12(课程占2/12,任务占4/12) — 任务:[97]数组与广义表、稀疏矩阵(suggested,5-6节) [85]关系闭包与等价关系/偏序关系(suggested,7-8节)
第14天(星期2):总占10/12(课程占10/12,任务占0/12) — 任务:无
第15天(星期3):总占7/12(课程占3/12,任务占4/12) — 任务:[74]触发器工作原理与时序特性(suggested,3-4节) [62]第一章 随机事件与概率(suggested,5-6节)
第16天(星期4):总占6/12(课程占4/12,任务占2/12) — 任务:[98]树与二叉树遍历、线索化(suggested,9-10节)
第17天(星期5):总占6/12(课程占4/12,任务占2/12) — 任务:[86]函数与映射(单射满射双射)(suggested,5-6节)
第18天(星期6):总占4/12(课程占0/12,任务占4/12) — 任务:[63]第二章 条件概率与全概率公式(suggested,7-8节) [75]计数器设计与分析(suggested,9-10节)
第19天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
第20天(星期1):总占6/12(课程占2/12,任务占4/12) — 任务:[87]代数系统与群环域入门(suggested,3-4节) [99]二叉排序树、AVL、红黑树概念(suggested,5-6节)
第21天(星期2):总占14/12(课程占10/12,任务占4/12) — 任务:[64]第三章 随机变量及其分布(suggested,3-4节) [76]寄存器与移位寄存器(suggested,7-8节)
|
||||
第22天(星期3):总占5/12(课程占5/12,任务占0/12) — 任务:无
|
||||
第23天(星期4):总占6/12(课程占4/12,任务占2/12) — 任务:[88]图的基本概念与图的表示(suggested,9-10节)
|
||||
第24天(星期5):总占6/12(课程占2/12,任务占4/12) — 任务:[100]堆与优先队列(suggested,5-6节) [65]第四章 多维随机变量(suggested,7-8节)
|
||||
第25天(星期6):总占2/12(课程占0/12,任务占2/12) — 任务:[77]时序逻辑电路设计(同步/异步)(suggested,5-6节)
|
||||
第26天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第27天(星期1):总占8/12(课程占2/12,任务占6/12) — 任务:[66]第五章 数字特征与大数定律(suggested,3-4节) [89]欧拉图、哈密顿图、最短路(suggested,5-6节) [101]图的存储与遍历(DFS/BFS)(suggested,7-8节)
|
||||
第28天(星期2):总占12/12(课程占10/12,任务占2/12) — 任务:[78]状态机建模与化简(suggested,3-4节)
|
||||
第29天(星期3):总占5/12(课程占3/12,任务占2/12) — 任务:[67]第六章 中心极限定理与参数估计(suggested,7-8节)
|
||||
第30天(星期4):总占4/12(课程占2/12,任务占2/12) — 任务:[90]树与生成树、最小生成树(suggested,9-10节)
|
||||
第31天(星期5):总占8/12(课程占4/12,任务占4/12) — 任务:[102]最短路径与拓扑排序(suggested,5-6节) [79]A/D 与 D/A 基础电路(suggested,7-8节)
|
||||
第32天(星期6):总占2/12(课程占0/12,任务占2/12) — 任务:[68]综合刷题与错题回顾(suggested,9-10节)
|
||||
第33天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第34天(星期1):总占6/12(课程占0/12,任务占6/12) — 任务:[91]组合计数(加法乘法原理)(suggested,3-4节) [80]历年真题专项(组合逻辑)(suggested,5-6节) [103]查找(顺序/折半/散列)(suggested,7-8节)
|
||||
第35天(星期2):总占4/12(课程占2/12,任务占2/12) — 任务:[69]模拟考试与查漏补缺(suggested,5-6节)
|
||||
第36天(星期3):总占2/12(课程占2/12,任务占0/12) — 任务:无
|
||||
第37天(星期4):总占6/12(课程占0/12,任务占6/12) — 任务:[81]历年真题专项(时序逻辑)(suggested,5-6节) [92]递推关系与母函数基础(suggested,7-8节) [104]排序(插入/交换/选择/归并/快排)(suggested,9-10节)
|
||||
第38天(星期5):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第39天(星期6):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第40天(星期7):总占4/12(课程占0/12,任务占4/12) — 任务:[93]离散综合题与证明题训练(suggested,3-4节) [105]综合算法题实战与代码模板整理(suggested,5-6节)
|
||||
第41天(星期1):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第42天(星期2):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
|
||||
任务清单(全量,已过滤课程):
|
||||
[62]第一章 随机事件与概率 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第15天(星期3)第5-6节
|
||||
[63]第二章 条件概率与全概率公式 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第18天(星期6)第7-8节
|
||||
[64]第三章 随机变量及其分布 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第21天(星期2)第3-4节
|
||||
[65]第四章 多维随机变量 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第24天(星期5)第7-8节
|
||||
[66]第五章 数字特征与大数定律 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第27天(星期1)第3-4节
|
||||
[67]第六章 中心极限定理与参数估计 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第29天(星期3)第7-8节
|
||||
[68]综合刷题与错题回顾 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第32天(星期6)第9-10节
|
||||
[69]模拟考试与查漏补缺 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第35天(星期2)第5-6节
|
||||
[70]数制与码制、逻辑代数基础 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第4天(星期6)第3-4节
|
||||
[71]组合逻辑电路分析方法 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第6天(星期1)第7-8节
|
||||
[72]组合逻辑电路设计方法(含卡诺图) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第9天(星期4)第9-10节
|
||||
[73]译码器、编码器、多路选择器综合应用 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第12天(星期7)第7-8节
|
||||
[74]触发器工作原理与时序特性 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第15天(星期3)第3-4节
|
||||
[75]计数器设计与分析 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第18天(星期6)第9-10节
|
||||
[76]寄存器与移位寄存器 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第21天(星期2)第7-8节
|
||||
[77]时序逻辑电路设计(同步/异步) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第25天(星期6)第5-6节
|
||||
[78]状态机建模与化简 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第28天(星期2)第3-4节
|
||||
[79]A/D 与 D/A 基础电路 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第31天(星期5)第7-8节
|
||||
[80]历年真题专项(组合逻辑) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第34天(星期1)第5-6节
|
||||
[81]历年真题专项(时序逻辑) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第37天(星期4)第5-6节
|
||||
[82]命题逻辑与等值演算 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第4天(星期6)第5-6节
|
||||
[83]谓词逻辑与量词推理 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第7天(星期2)第3-4节
|
||||
[84]集合与关系基本性质 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第10天(星期5)第9-10节
|
||||
[85]关系闭包与等价关系/偏序关系 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第13天(星期1)第7-8节
|
||||
[86]函数与映射(单射满射双射) | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第17天(星期5)第5-6节
|
||||
[87]代数系统与群环域入门 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第20天(星期1)第3-4节
|
||||
[88]图的基本概念与图的表示 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第23天(星期4)第9-10节
|
||||
[89]欧拉图、哈密顿图、最短路 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第27天(星期1)第5-6节
|
||||
[90]树与生成树、最小生成树 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第30天(星期4)第9-10节
|
||||
[91]组合计数(加法乘法原理) | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第34天(星期1)第3-4节
|
||||
[92]递推关系与母函数基础 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第37天(星期4)第7-8节
|
||||
[93]离散综合题与证明题训练 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第40天(星期7)第3-4节
|
||||
[94]线性表(顺序表/链表)与复杂度分析 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第3天(星期5)第9-10节
|
||||
[95]栈与队列及典型应用 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第6天(星期1)第9-10节
|
||||
[96]串与模式匹配(KMP) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第10天(星期5)第7-8节
|
||||
[97]数组与广义表、稀疏矩阵 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第13天(星期1)第5-6节
|
||||
[98]树与二叉树遍历、线索化 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第16天(星期4)第9-10节
|
||||
[99]二叉排序树、AVL、红黑树概念 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第20天(星期1)第5-6节
|
||||
[100]堆与优先队列 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第24天(星期5)第5-6节
|
||||
[101]图的存储与遍历(DFS/BFS) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第27天(星期1)第7-8节
|
||||
[102]最短路径与拓扑排序 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第31天(星期5)第5-6节
|
||||
[103]查找(顺序/折半/散列) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第34天(星期1)第7-8节
|
||||
[104]排序(插入/交换/选择/归并/快排) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第37天(星期4)第9-10节
|
||||
[105]综合算法题实战与代码模板整理 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第40天(星期7)第5-6节
|
||||
|
||||
任务类约束(排课时请遵守):
|
||||
[复习概率论] 策略=均匀分布 总预算=16节 允许嵌水课=是 排除时段=[1,6]
|
||||
[数电期末复习] 策略=均匀分布 总预算=30节 允许嵌水课=是 排除时段=[1,6]
|
||||
[离散数学期末复习] 策略=均匀分布 总预算=26节 允许嵌水课=是 排除时段=[1,6]
|
||||
[数据结构期末复习] 策略=均匀分布 总预算=30节 允许嵌水课=是 排除时段=[1,6]
|
||||
|
||||
----- message[3] -----
|
||||
role: system
|
||||
content:
|
||||
当前执行状态:
|
||||
- 当前轮次:2/60
|
||||
- 当前模式:自由执行(无预定义步骤)
|
||||
执行锚点:
|
||||
- 当前用户诉求:帮我排一下这些任务类,直接排,不要早八和晚10
|
||||
- 目标任务类:task_class_ids=[2,3,4,5]
|
||||
- 啥时候结束Loop:你可以根据工具调用记录自行判断。
|
||||
- 非目标:不重新粗排、不修改无关任务类。
|
||||
- 阶段约束:粗排已完成,本轮只微调 suggested;existing 仅作已安排事实参考,不作为可移动目标。
|
||||
- 参数纪律:工具参数必须严格使用 schema 字段;若返回“参数非法”,需先改参再继续。
|
||||
- 顺序策略:默认保持 suggested 相对顺序,禁止调用 min_context_switch。
|
||||
本轮指令:请继续当前任务的执行阶段,严格输出 JSON。
|
||||
|
||||
|
||||
[DEBUG] execute LLM context end chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=2
|
||||
|
||||
2026/04/10 22:46:31 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.512ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:31.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
2026/04/10 22:46:49 [DEBUG] execute LLM 响应 chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=2 action=done speak_len=119 raw_len=296 raw_preview={"action":"done","speak":"检查完毕,所有已预排任务均避开了早八(第1-2节)和晚十(第11-12节)时段,微调完成!","goal_check":"当前所有suggested状态的任务(共44个)均安排在第3-10节,无早八/晚十时段冲突,符合用户微调需求。"}
|
||||
|
||||
2026/04/10 22:46:49 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.096ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:49.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
2026/04/10 22:46:54 [DEBUG] schedule preview write chat=325b37d1-3483-4c6f-b755-44532a4dbe3c user=1 state=tasks=105 pending=0 suggested=44 existing=61 task_item_with_slot=44 event_with_slot=52 preview=entries=96 existing=52 suggested=44 task_type=44 course_type=52 generated_at=2026-04-10T22:46:54+08:00
|
||||
[GIN] 2026/04/10 - 22:46:55 | 200 | 43.3002757s | 127.0.0.1 | POST "/api/v1/agent/chat"
|
||||
2026/04/10 22:46:55 outbox due messages=3, start dispatch
|
||||
|
||||
2026/04/10 22:46:55 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[0.984ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:55.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
2026/04/10 22:46:56 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 325b37d1-3483-4c6f-b755-44532a4dbe3c
|
||||
|
||||
2026/04/10 22:46:57 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.039ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:57.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
2026/04/10 22:46:57 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 325b37d1-3483-4c6f-b755-44532a4dbe3c
|
||||
2026/04/10 22:46:58 outbox due messages=1, start dispatch
|
||||
2026/04/10 22:46:58 [GORM-Cache] No logic defined for model: model.AgentStateSnapshotRecord
|
||||
2026/04/10 22:46:59 异步生成会话标题失败(模型生成失败) chat=325b37d1-3483-4c6f-b755-44532a4dbe3c err=failed to create chat completion: context deadline exceeded
|
||||
|
||||
2026/04/10 22:46:59 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.138ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:59.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
2026/04/10 22:46:59 [GORM-Cache] No logic defined for model: model.MemoryJob
|
||||
2026/04/10 22:47:01 [GORM-Cache] No logic defined for model: model.MemoryJob
|
||||
|
||||
2026/04/10 22:47:01 D:/SmartFlow-Agent/backend/memory/repo/settings_repo.go:40 record not found
|
||||
[0.596ms] [rows:0] SELECT * FROM `memory_user_settings` WHERE user_id = 1 ORDER BY `memory_user_settings`.`user_id` LIMIT 1
|
||||
2026/04/10 22:47:10 [GORM-Cache] No logic defined for model: model.MemoryItem
|
||||
2026/04/10 22:47:10 [GORM-Cache] No logic defined for model: model.MemoryAuditLog
|
||||
2026/04/10 22:47:10 [GORM-Cache] No logic defined for model: model.MemoryJob
|
||||
2026/04/10 22:47:10 memory worker run once success: job_id=1 extracted_facts=1
|
||||
|
||||
2026/04/10 22:47:10 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[0.918ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:10.174')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
|
||||
进程 已完成,退出代码为 -1073741510 (0xC000013A: interrupted by Ctrl+C)
|
||||
@@ -23,4 +23,15 @@ type Config struct {
|
||||
JobMaxRetry int
|
||||
WorkerPollEvery time.Duration
|
||||
WorkerClaimBatch int
|
||||
|
||||
// 决策层配置。
|
||||
// 说明:
|
||||
// 1. DecisionEnabled 控制是否启用"召回→比对→汇总"决策流程;
|
||||
// 2. 默认关闭,旧路径完全保留,回滚无风险;
|
||||
// 3. DecisionFallbackMode 仅在决策流程整体报错时生效,不影响单条 LLM 比对失败(单条失败视为 unrelated)。
|
||||
DecisionEnabled bool
|
||||
DecisionCandidateTopK int // Milvus 语义召回候选数上限
|
||||
DecisionCandidateMinScore float64 // Milvus 语义召回最低相似度
|
||||
DecisionFallbackMode string // "legacy_add"(退回旧路径直接新增)/ "drop"(丢弃)
|
||||
WriteMode string // "legacy"(旧路径)/ "decision"(决策流程),仅 DecisionEnabled=true 时生效
|
||||
}
|
||||
|
||||
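上面新增的 decision 配置项在 config_loader 中有默认值兜底。其逻辑大致可以按下面的示意理解(函数名 applyDecisionDefaults 与具体默认值均为假设,实际以 config_loader 实现为准):

```go
package main

import "fmt"

// Config 仅摘录决策层相关字段(与上方 diff 中的结构对应)。
type Config struct {
	DecisionEnabled           bool
	DecisionCandidateTopK     int
	DecisionCandidateMinScore float64
	DecisionFallbackMode      string
	WriteMode                 string
}

// applyDecisionDefaults 为零值字段填充兜底默认值(示意,数值为假设)。
func applyDecisionDefaults(cfg *Config) {
	if cfg.DecisionCandidateTopK <= 0 {
		cfg.DecisionCandidateTopK = 5
	}
	if cfg.DecisionCandidateMinScore <= 0 {
		cfg.DecisionCandidateMinScore = 0.6
	}
	if cfg.DecisionFallbackMode == "" {
		cfg.DecisionFallbackMode = "legacy_add" // 默认退回旧路径直接新增
	}
	if cfg.WriteMode == "" {
		cfg.WriteMode = "legacy"
	}
}

func main() {
	cfg := &Config{}
	applyDecisionDefaults(cfg)
	fmt.Println(cfg.DecisionFallbackMode, cfg.WriteMode)
}
```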
71
backend/memory/model/decision.go
Normal file
@@ -0,0 +1,71 @@
|
||||
package model
|
||||
|
||||
// RelationType 常量描述一条新 fact 与一条旧记忆之间的关系。
|
||||
//
|
||||
// 四种关系:
|
||||
// 1. duplicate — 完全重复,新 fact 没有新信息,旧记忆无需变动;
|
||||
// 2. update — 新 fact 是对旧记忆的修正、补充或更精确表述,需要合并更新;
|
||||
// 3. conflict — 新 fact 与旧记忆矛盾,旧记忆已过时,应删旧增新;
|
||||
// 4. unrelated — 两者说的是不同的事情,互不影响。
|
||||
const (
|
||||
RelationDuplicate = "duplicate"
|
||||
RelationUpdate = "update"
|
||||
RelationConflict = "conflict"
|
||||
RelationUnrelated = "unrelated"
|
||||
)
|
||||
|
||||
// CandidateSnapshot 是喂给 LLM 的旧记忆候选快照。
|
||||
//
|
||||
// 职责边界:
|
||||
// 1. 只承载 LLM 做关系判断所需的最小信息;
|
||||
// 2. MemoryID 是真实 memory_id,LLM 不可见,仅供汇总决策时使用;
|
||||
// 3. Score 是向量召回的相似度分数,用于多条 update 时选最优候选。
|
||||
type CandidateSnapshot struct {
|
||||
MemoryID int64
|
||||
Title string
|
||||
Content string
|
||||
MemoryType string
|
||||
Score float64 // Milvus 相似度分数(0 表示来自 Hash 查询)
|
||||
}
|
||||
|
||||
// ComparisonResult 是单次"新 fact vs 一条旧记忆"的 LLM 输出。
|
||||
//
|
||||
// 职责边界:
|
||||
// 1. 只描述 LLM 对一对比较的结果,不包含最终决策动作;
|
||||
// 2. UpdatedContent/UpdatedTitle 仅在 relation=update 时有意义;
|
||||
// 3. Reason 写审计日志用,便于后续复盘 LLM 判断依据。
|
||||
type ComparisonResult struct {
|
||||
MemoryID int64 // 被比较的旧记忆 ID
|
||||
Relation string // duplicate / update / conflict / unrelated
|
||||
UpdatedContent string // 仅 relation=update 时有意义:合并后的新内容
|
||||
UpdatedTitle string // 仅 relation=update 时有意义:合并后的新标题
|
||||
Reason string // 判断理由(写审计日志用)
|
||||
}
|
||||
|
||||
// FinalDecision 是汇总后的最终动作。
|
||||
//
|
||||
// 说明:
|
||||
// 1. 由确定性代码产出,不是 LLM 产出;
|
||||
// 2. Action 取值复用 status.go 中已定义的 DecisionActionAdd/Update/Delete/None 常量;
|
||||
// 3. TargetID 在 UPDATE/DELETE 时指向旧记忆 ID,ADD/NONE 时为 0。
|
||||
type FinalDecision struct {
|
||||
Action string // ADD / UPDATE / DELETE / NONE
|
||||
TargetID int64 // UPDATE/DELETE 时指向旧记忆 ID
|
||||
Title string // UPDATE 时的新标题
|
||||
Content string // UPDATE 时的新内容
|
||||
Reason string // 汇总理由
|
||||
}
|
||||
|
||||
// UpdateContentFields 是 UPDATE 动作需要更新的字段集合。
|
||||
//
|
||||
// 说明:
|
||||
// 1. 只包含 UPDATE 动作实际需要修改的字段,避免全量覆盖;
|
||||
// 2. NormalizedContent/ContentHash 由调用方重新计算,保证一致性。
|
||||
type UpdateContentFields struct {
|
||||
Title string
|
||||
Content string
|
||||
NormalizedContent string
|
||||
ContentHash string
|
||||
Confidence float64
|
||||
Importance float64
|
||||
}
|
||||
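基于上述类型,aggregate_decision 的"按优先级判定动作"可以写成一段确定性汇总代码。下面是一个示意(假设优先级为 duplicate > update > conflict,多条 update 时取 Score 最高候选;具体规则以 aggregate_decision 实际实现为准,Score 字段在此处为示意扩展):

```go
package main

import "fmt"

type ComparisonResult struct {
	MemoryID       int64
	Relation       string
	UpdatedContent string
	UpdatedTitle   string
	Score          float64 // 对应候选的召回相似度(示意字段)
}

type FinalDecision struct {
	Action   string
	TargetID int64
	Title    string
	Content  string
}

// aggregate 按优先级汇总逐对比较结果(示意实现)。
func aggregate(results []ComparisonResult, newTitle, newContent string) FinalDecision {
	var best *ComparisonResult
	var conflictID int64
	for i := range results {
		r := &results[i]
		switch r.Relation {
		case "duplicate": // 最高优先级:已有等价记忆,直接跳过
			return FinalDecision{Action: "NONE", TargetID: r.MemoryID}
		case "update":
			if best == nil || r.Score > best.Score {
				best = r
			}
		case "conflict":
			if conflictID == 0 {
				conflictID = r.MemoryID
			}
		}
	}
	if best != nil {
		return FinalDecision{Action: "UPDATE", TargetID: best.MemoryID, Title: best.UpdatedTitle, Content: best.UpdatedContent}
	}
	if conflictID != 0 { // 旧记忆过时:软删旧条目,新事实由调用方随后落地
		return FinalDecision{Action: "DELETE", TargetID: conflictID}
	}
	return FinalDecision{Action: "ADD", Title: newTitle, Content: newContent}
}

func main() {
	d := aggregate([]ComparisonResult{{MemoryID: 7, Relation: "update", Score: 0.9, UpdatedContent: "合并后内容"}}, "t", "c")
	fmt.Println(d.Action, d.TargetID)
}
```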
@@ -152,7 +152,16 @@ func wireModule(db *gorm.DB, llmClient *infrallm.Client, ragRuntime infrarag.Run
|
||||
readService := memoryservice.NewReadService(itemRepo, settingsRepo, ragRuntime, cfg)
|
||||
manageService := memoryservice.NewManageService(db, itemRepo, auditRepo, settingsRepo)
|
||||
extractor := memoryorchestrator.NewLLMWriteOrchestrator(llmClient, cfg)
|
||||
runner := memoryworker.NewRunner(db, jobRepo, itemRepo, auditRepo, settingsRepo, extractor, ragRuntime)
|
||||
|
||||
// 决策编排器:仅在 DecisionEnabled 时才创建有效实例。
|
||||
	// 原因:cfg.DecisionEnabled=false 时,Runner 不走决策路径,编排器保持 nil 即可;
|
||||
	// Runner 构造签名统一接收该参数,上层调用方无需感知条件逻辑。
|
||||
var decisionOrchestrator *memoryorchestrator.LLMDecisionOrchestrator
|
||||
if cfg.DecisionEnabled && llmClient != nil {
|
||||
decisionOrchestrator = memoryorchestrator.NewLLMDecisionOrchestrator(llmClient, cfg)
|
||||
}
|
||||
|
||||
runner := memoryworker.NewRunner(db, jobRepo, itemRepo, auditRepo, settingsRepo, extractor, ragRuntime, cfg, decisionOrchestrator)
|
||||
|
||||
return &Module{
|
||||
db: db,
|
||||
|
||||
130
backend/memory/orchestrator/llm_decision_orchestrator.go
Normal file
@@ -0,0 +1,130 @@
|
||||
package orchestrator
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"log"
|
||||
"strings"
|
||||
|
||||
infrallm "github.com/LoveLosita/smartflow/backend/infra/llm"
|
||||
memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
|
||||
)
|
||||
|
||||
const defaultDecisionCompareMaxTokens = 600
|
||||
|
||||
// LLMDecisionOrchestrator 负责对"一条新 fact vs 一条旧记忆"做关系判断。
|
||||
//
|
||||
// 职责边界:
|
||||
// 1. 每次只比较一对,是最小粒度的 LLM 调用;
|
||||
// 2. LLM 只输出 relation(关系类型),不输出 action,不输出 target ID;
|
||||
// 3. LLM 调用失败时返回 error,由上层决定是否视为 unrelated。
|
||||
type LLMDecisionOrchestrator struct {
|
||||
client *infrallm.Client
|
||||
cfg memorymodel.Config
|
||||
logger *log.Logger
|
||||
}
|
||||
|
||||
// NewLLMDecisionOrchestrator 构造决策比对编排器。
|
||||
func NewLLMDecisionOrchestrator(client *infrallm.Client, cfg memorymodel.Config) *LLMDecisionOrchestrator {
	return &LLMDecisionOrchestrator{
		client: client,
		cfg:    cfg,
		logger: log.Default(),
	}
}

// Compare judges the relation between one new fact and one old candidate.
//
// Return semantics:
// 1. On success, returns the comparison result; relation is one of the four legal values;
// 2. When the LLM is unavailable or its output is malformed, returns an error; the caller should treat this as unrelated;
// 3. Makes no final decision; the final action is produced by deterministic aggregation logic.
func (o *LLMDecisionOrchestrator) Compare(
	ctx context.Context,
	fact memorymodel.NormalizedFact,
	candidate memorymodel.CandidateSnapshot,
) (*memorymodel.ComparisonResult, error) {
	if o == nil || o.client == nil {
		return nil, fmt.Errorf("决策编排器未初始化")
	}

	// 1. Build the pairwise comparison prompt: a minimal binary judgment; the LLM outputs only the relation.
	systemPrompt := buildDecisionCompareSystemPrompt()
	userPrompt := buildDecisionCompareUserPrompt(fact, candidate)

	messages := infrallm.BuildSystemUserMessages(systemPrompt, nil, userPrompt)

	// 2. Call the LLM for structured output; a low temperature keeps the judgment stable.
	resp, _, err := infrallm.GenerateJSON[decisionCompareResponse](
		ctx,
		o.client,
		messages,
		infrallm.GenerateOptions{
			Temperature: 0.1,
			MaxTokens:   defaultDecisionCompareMaxTokens,
			Thinking:    infrallm.ThinkingModeDisabled,
			Metadata: map[string]any{
				"stage": "memory_decision_compare",
			},
		},
	)
	if err != nil {
		if o.logger != nil {
			o.logger.Printf("[WARN][去重] 决策比对 LLM 调用失败: memory_type=%s candidate_id=%d err=%v", fact.MemoryType, candidate.MemoryID, err)
		}
		return nil, err
	}

	// 3. Map the LLM output into a ComparisonResult; MemoryID is filled in by code, not by the LLM.
	result := &memorymodel.ComparisonResult{
		MemoryID:       candidate.MemoryID,
		Relation:       normalizeRelation(resp.Relation),
		UpdatedContent: strings.TrimSpace(resp.UpdatedContent),
		UpdatedTitle:   strings.TrimSpace(resp.UpdatedTitle),
		Reason:         strings.TrimSpace(resp.Reason),
	}

	return result, nil
}

// decisionCompareResponse is the JSON output structure of the LLM pairwise comparison.
type decisionCompareResponse struct {
	Relation       string `json:"relation"`
	UpdatedContent string `json:"updated_content"`
	UpdatedTitle   string `json:"updated_title"`
	Reason         string `json:"reason"`
}

// normalizeRelation normalizes the relation field to its lowercase canonical form.
func normalizeRelation(raw string) string {
	return strings.ToLower(strings.TrimSpace(raw))
}

// buildDecisionCompareSystemPrompt builds the system prompt for pairwise comparison.
func buildDecisionCompareSystemPrompt() string {
	return strings.TrimSpace(`你是一个记忆关系判断器。请判断"新事实"和"旧记忆"之间的关系。

关系类型:
- duplicate:两者表达相同意思,新事实没有新信息
- update:新事实是对旧记忆的修正、补充或更精确表述
- conflict:新事实与旧记忆在同一话题上存在矛盾(如"喜欢X"变为"不喜欢X"、"去了A地"变为"实际去了B地"),旧记忆已过时
- unrelated:两者说的是不同的事情,或属于同一大类下的不同偏好(如"喜欢唱歌"与"喜欢打球"是不同爱好,不矛盾)

输出 JSON:
{"relation":"...","updated_content":"...","updated_title":"...","reason":"..."}

规则:
1. relation=update 时,updated_content 必须写出合并后的完整内容(不是只写差异部分)
2. 其余 relation 类型,updated_content 留空即可
3. reason 写简短判断依据
4. 只输出 JSON,不要输出解释或 markdown
5. conflict 仅限同一话题内的矛盾信息;不同话题的偏好、不同领域的兴趣一律判 unrelated`)
}

// buildDecisionCompareUserPrompt builds the user prompt for pairwise comparison.
func buildDecisionCompareUserPrompt(fact memorymodel.NormalizedFact, candidate memorymodel.CandidateSnapshot) string {
	return fmt.Sprintf("新事实:【%s】%s\n旧记忆:【%s】%s",
		fact.MemoryType, fact.Content,
		candidate.MemoryType, candidate.Content,
	)
}
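The orchestrator above relies on the LLM returning strict JSON and on `normalizeRelation` canonicalizing the `relation` field before validation. A minimal standalone sketch of that decode-and-normalize step (the `compareResponse` type here mirrors `decisionCompareResponse`; the sample reply string is invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Mirrors decisionCompareResponse: the structured JSON the LLM is asked to emit.
type compareResponse struct {
	Relation       string `json:"relation"`
	UpdatedContent string `json:"updated_content"`
	UpdatedTitle   string `json:"updated_title"`
	Reason         string `json:"reason"`
}

// Same canonicalization as normalizeRelation: trim, then lowercase.
func normalizeRelation(raw string) string {
	return strings.ToLower(strings.TrimSpace(raw))
}

func main() {
	// A sample raw LLM reply; relation arrives with stray case and whitespace.
	raw := `{"relation":" Update ","updated_content":"likes cats and dogs","updated_title":"","reason":"new fact adds dogs"}`
	var resp compareResponse
	if err := json.Unmarshal([]byte(raw), &resp); err != nil {
		panic(err)
	}
	fmt.Println(normalizeRelation(resp.Relation)) // prints "update"
}
```

Canonicalizing before validation is what lets the later `validRelations` set membership check stay a simple exact-match lookup.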
@@ -210,6 +210,78 @@ func (r *ItemRepo) UpdateVectorStateByID(
	}).Error
}

// FindActiveByHash looks up a user's active memories by exact content hash.
//
// Purposes:
// 1. The exact-hash hit check in decision-layer Step 1;
// 2. Uses the idx_memory_items_user_type_hash composite index to avoid a full table scan;
// 3. Returns only status=active records; soft-deleted records do not participate in dedup.
func (r *ItemRepo) FindActiveByHash(ctx context.Context, userID int, contentHash string) ([]model.MemoryItem, error) {
	if r == nil || r.db == nil {
		return nil, errors.New("memory item repo is nil")
	}
	if userID <= 0 || strings.TrimSpace(contentHash) == "" {
		return nil, errors.New("memory item find by hash params is invalid")
	}

	var items []model.MemoryItem
	err := r.db.WithContext(ctx).
		Where("user_id = ? AND content_hash = ? AND status = ?", userID, contentHash, model.MemoryItemStatusActive).
		Find(&items).Error
	return items, err
}

// UpdateContentByID updates the content-related fields of the given memory.
//
// Step-by-step notes:
// 1. Only changes six fields: title/content/normalized_content/content_hash/confidence/importance;
// 2. Does not touch identity fields such as status/user_id/memory_type, so an update never changes a memory's ownership;
// 3. updated_at is maintained automatically by GORM AutoUpdateTime.
func (r *ItemRepo) UpdateContentByID(ctx context.Context, memoryID int64, fields memorymodel.UpdateContentFields) error {
	if r == nil || r.db == nil {
		return errors.New("memory item repo is nil")
	}
	if memoryID <= 0 {
		return errors.New("memory item update content id is invalid")
	}

	return r.db.WithContext(ctx).
		Model(&model.MemoryItem{}).
		Where("id = ?", memoryID).
		Updates(map[string]any{
			"title":              fields.Title,
			"content":            fields.Content,
			"normalized_content": fields.NormalizedContent,
			"content_hash":       fields.ContentHash,
			"confidence":         fields.Confidence,
			"importance":         fields.Importance,
		}).Error
}

// SoftDeleteByID soft-deletes one memory belonging to the given user.
//
// Notes:
// 1. Reuses the pattern of UpdateStatusByIDAt, setting status to deleted;
// 2. Also resets vector_status to pending so the vector side can observe the deletion;
// 3. Must carry the user_id condition to avoid deleting across users.
func (r *ItemRepo) SoftDeleteByID(ctx context.Context, userID int, memoryID int64) error {
	if r == nil || r.db == nil {
		return errors.New("memory item repo is nil")
	}
	if userID <= 0 || memoryID <= 0 {
		return errors.New("memory item soft delete params is invalid")
	}

	return r.db.WithContext(ctx).
		Model(&model.MemoryItem{}).
		Where("id = ? AND user_id = ?", memoryID, userID).
		Updates(map[string]any{
			"status":        model.MemoryItemStatusDeleted,
			"vector_status": "pending",
			"updated_at":    time.Now(),
		}).Error
}

func applyScopedEquality(db *gorm.DB, column, value string, includeGlobal bool) *gorm.DB {
	value = strings.TrimSpace(value)
	if value == "" {
@@ -26,6 +26,13 @@ func LoadConfigFromViper() memorymodel.Config {
		JobMaxRetry:      viper.GetInt("memory.job.maxRetry"),
		WorkerPollEvery:  viper.GetDuration("memory.worker.pollEvery"),
		WorkerClaimBatch: viper.GetInt("memory.worker.claimBatch"),

		// Decision-layer config: off by default; takes effect only once the gradual rollout enables it.
		DecisionEnabled:           viper.GetBool("memory.decision.enabled"),
		DecisionCandidateTopK:     viper.GetInt("memory.decision.candidateTopK"),
		DecisionCandidateMinScore: viper.GetFloat64("memory.decision.candidateMinScore"),
		DecisionFallbackMode:      viper.GetString("memory.decision.fallbackMode"),
		WriteMode:                 viper.GetString("memory.write.mode"),
	}

	if cfg.Threshold <= 0 {
@@ -46,5 +53,24 @@ func LoadConfigFromViper() memorymodel.Config {
	if cfg.WorkerClaimBatch <= 0 {
		cfg.WorkerClaimBatch = 1
	}

	// Default-value fallbacks for the decision-layer config.
	// Notes:
	// 1. TopK and MinScore are Milvus recall parameters; conservative defaults avoid recalling too many noisy candidates;
	// 2. FallbackMode defaults to the legacy add path, so no data is lost when the decision flow fails;
	// 3. WriteMode is implicitly determined by DecisionEnabled; no hard coupling is enforced here.
	if cfg.DecisionCandidateTopK <= 0 {
		cfg.DecisionCandidateTopK = 5
	}
	if cfg.DecisionCandidateMinScore <= 0 {
		cfg.DecisionCandidateMinScore = 0.6
	}
	if cfg.DecisionFallbackMode == "" {
		cfg.DecisionFallbackMode = "legacy_add"
	}
	if cfg.WriteMode == "" {
		cfg.WriteMode = "legacy"
	}

	return cfg
}
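The fallback rules above can be exercised in isolation. A minimal sketch (the `decisionConfig` struct is a local stand-in for the decision-layer fields of `memorymodel.Config`; the default values 5 / 0.6 / "legacy_add" / "legacy" are taken from the loader above):

```go
package main

import "fmt"

// Local stand-in for the decision-layer fields of memorymodel.Config.
type decisionConfig struct {
	CandidateTopK     int
	CandidateMinScore float64
	FallbackMode      string
	WriteMode         string
}

// applyDecisionDefaults reproduces the fallback rules from LoadConfigFromViper:
// every zero value is replaced by its conservative default.
func applyDecisionDefaults(cfg decisionConfig) decisionConfig {
	if cfg.CandidateTopK <= 0 {
		cfg.CandidateTopK = 5
	}
	if cfg.CandidateMinScore <= 0 {
		cfg.CandidateMinScore = 0.6
	}
	if cfg.FallbackMode == "" {
		cfg.FallbackMode = "legacy_add"
	}
	if cfg.WriteMode == "" {
		cfg.WriteMode = "legacy"
	}
	return cfg
}

func main() {
	// An empty config (e.g. keys missing from the viper file) gets every default.
	cfg := applyDecisionDefaults(decisionConfig{})
	fmt.Println(cfg.CandidateTopK, cfg.CandidateMinScore, cfg.FallbackMode, cfg.WriteMode)
	// prints "5 0.6 legacy_add legacy"
}
```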
115
backend/memory/utils/aggregate_decision.go
Normal file
@@ -0,0 +1,115 @@
package utils

import (
	"fmt"

	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
)

// AggregateComparisons aggregates one round of LLM comparison results into a final action.
//
// Responsibility boundaries:
// 1. Pure deterministic logic; calls no LLM and no external service;
// 2. Decides by priority from high to low: duplicate > update > conflict > unrelated;
// 3. With multiple updates, picks the candidate with the highest Score for the UPDATE.
//
// Aggregation rules:
// 1. Any duplicate → final action NONE (the new fact is a full duplicate; no write needed);
// 2. Any update → final action UPDATE (update the old memory with the highest Score);
// 3. Any conflict → final action DELETE, then proceed as an ADD (the old memory is stale: delete it first, then write the new one);
// 4. All unrelated → final action ADD (no related old memory; insert directly).
func AggregateComparisons(
	fact memorymodel.NormalizedFact,
	comparisons []memorymodel.ComparisonResult,
	candidates []memorymodel.CandidateSnapshot,
) *memorymodel.FinalDecision {
	// 1. With no candidates, ADD directly; no judgment needed.
	if len(comparisons) == 0 {
		return &memorymodel.FinalDecision{
			Action: memorymodel.DecisionActionAdd,
			Reason: "无相关旧记忆,直接新增",
		}
	}

	// 2. Build a memoryID → CandidateSnapshot map for Score lookups.
	snapshotMap := make(map[int64]memorymodel.CandidateSnapshot, len(candidates))
	for _, c := range candidates {
		snapshotMap[c.MemoryID] = c
	}

	hasDuplicate := false
	var bestUpdate *memorymodel.ComparisonResult
	bestUpdateScore := -1.0
	var conflictResult *memorymodel.ComparisonResult

	for i := range comparisons {
		comp := &comparisons[i]

		switch comp.Relation {
		case memorymodel.RelationDuplicate:
			// 3. A single duplicate is enough to fix the final action as NONE.
			hasDuplicate = true

		case memorymodel.RelationUpdate:
			// 4. With multiple updates, pick the one with the highest Score for the UPDATE.
			snapshot, ok := snapshotMap[comp.MemoryID]
			score := 0.0
			if ok {
				score = snapshot.Score
			}
			if score > bestUpdateScore {
				bestUpdateScore = score
				bestUpdate = comp
			}

		case memorymodel.RelationConflict:
			// 5. Record the first conflict for the later DELETE + ADD handling.
			if conflictResult == nil {
				conflictResult = comp
			}
		}
	}

	// 6. Decide the final action by priority.
	if hasDuplicate {
		return &memorymodel.FinalDecision{
			Action: memorymodel.DecisionActionNone,
			Reason: "存在完全重复的旧记忆,跳过写入",
		}
	}

	if bestUpdate != nil {
		// 7. UPDATE action: use the merged content provided by the LLM.
		title := bestUpdate.UpdatedTitle
		if title == "" {
			title = fact.Title
		}
		content := bestUpdate.UpdatedContent
		reason := bestUpdate.Reason
		if reason == "" {
			reason = "新事实是对旧记忆的修正或补充"
		}
		return &memorymodel.FinalDecision{
			Action:   memorymodel.DecisionActionUpdate,
			TargetID: bestUpdate.MemoryID,
			Title:    title,
			Content:  content,
			Reason:   fmt.Sprintf("更新旧记忆(id=%d): %s", bestUpdate.MemoryID, reason),
		}
	}

	if conflictResult != nil {
		// 8. conflict → DELETE the old memory first; the caller then writes the new fact as an ADD.
		return &memorymodel.FinalDecision{
			Action:   memorymodel.DecisionActionDelete,
			TargetID: conflictResult.MemoryID,
			Reason:   fmt.Sprintf("旧记忆(id=%d)与新事实冲突,删除后新增: %s", conflictResult.MemoryID, conflictResult.Reason),
		}
	}

	// 9. All unrelated → ADD directly.
	return &memorymodel.FinalDecision{
		Action: memorymodel.DecisionActionAdd,
		Reason: "无相关旧记忆,直接新增",
	}
}
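The priority rule at the heart of the aggregation can be sketched on its own, stripped of the snapshot/score bookkeeping. This is only an illustration of the ordering duplicate > update > conflict > unrelated, with local string constants standing in for the `memorymodel` relation and action constants:

```go
package main

import "fmt"

// Local stand-ins for the memorymodel relation constants (values assumed).
const (
	relDuplicate = "duplicate"
	relUpdate    = "update"
	relConflict  = "conflict"
	relUnrelated = "unrelated"
)

// decideAction reproduces only the priority rule of AggregateComparisons:
// duplicate > update > conflict > unrelated.
func decideAction(relations []string) string {
	if len(relations) == 0 {
		return "ADD"
	}
	hasUpdate, hasConflict := false, false
	for _, rel := range relations {
		switch rel {
		case relDuplicate:
			return "NONE" // one duplicate is enough to skip the write
		case relUpdate:
			hasUpdate = true
		case relConflict:
			hasConflict = true
		}
	}
	if hasUpdate {
		return "UPDATE"
	}
	if hasConflict {
		return "DELETE" // the caller follows up with an ADD for the new fact
	}
	return "ADD" // all unrelated
}

func main() {
	fmt.Println(decideAction([]string{relUnrelated, relConflict, relUpdate})) // prints "UPDATE"
	fmt.Println(decideAction([]string{relUnrelated, relUnrelated}))           // prints "ADD"
}
```

Keeping this step deterministic is what makes the overall flow auditable: the LLM only labels pairs, and the final action is a pure function of those labels.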
@@ -10,6 +10,8 @@ import (
const (
	// AuditOperationCreate means the system created a new memory.
	AuditOperationCreate = "create"
	// AuditOperationUpdate means the decision layer updated the content of an existing memory.
	AuditOperationUpdate = "update"
	// AuditOperationDelete means an existing memory was soft-deleted.
	AuditOperationDelete = "delete"
)
49
backend/memory/utils/decision_validate.go
Normal file
@@ -0,0 +1,49 @@
package utils

import (
	"fmt"
	"strings"

	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
)

// The set of legal relation types, used to validate the relation field of LLM output.
var validRelations = map[string]struct{}{
	memorymodel.RelationDuplicate: {},
	memorymodel.RelationUpdate:    {},
	memorymodel.RelationConflict:  {},
	memorymodel.RelationUnrelated: {},
}

// ValidateComparisonResult checks the basic validity of a single comparison result.
//
// Responsibility boundaries:
// 1. Only validates the structural validity of the LLM output, not its business semantics;
// 2. relation must be one of the four legal values; update requires a non-empty UpdatedContent;
// 3. On failure, returns an error directly; the caller decides whether to drop or retry.
func ValidateComparisonResult(result *memorymodel.ComparisonResult) error {
	if result == nil {
		return fmt.Errorf("比对结果不能为空")
	}

	// 1. MemoryID must be greater than 0 so the old memory can be located.
	if result.MemoryID <= 0 {
		return fmt.Errorf("比对结果 memory_id 无效: %d", result.MemoryID)
	}

	// 2. relation must be one of the four legal values, guarding against illegal LLM output.
	relation := strings.TrimSpace(strings.ToLower(result.Relation))
	if _, ok := validRelations[relation]; !ok {
		return fmt.Errorf("比对结果 relation 非法: %s", result.Relation)
	}

	// 3. When relation=update, UpdatedContent must not be empty.
	// Reason: update needs the full merged content, not just the diff.
	if relation == memorymodel.RelationUpdate {
		if strings.TrimSpace(result.UpdatedContent) == "" {
			return fmt.Errorf("relation=update 时 updated_content 不能为空")
		}
	}

	return nil
}
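The three checks above can be exercised standalone. A minimal sketch with a local `comparisonResult` type standing in for `memorymodel.ComparisonResult` (field names follow the diff; error texts are simplified):

```go
package main

import (
	"fmt"
	"strings"
)

// Local stand-in for memorymodel.ComparisonResult.
type comparisonResult struct {
	MemoryID       int64
	Relation       string
	UpdatedContent string
}

var legalRelations = map[string]struct{}{
	"duplicate": {}, "update": {}, "conflict": {}, "unrelated": {},
}

// validate mirrors the three checks in ValidateComparisonResult:
// positive MemoryID, a legal relation, and merged content when relation=update.
func validate(r *comparisonResult) error {
	if r == nil {
		return fmt.Errorf("result is nil")
	}
	if r.MemoryID <= 0 {
		return fmt.Errorf("invalid memory_id: %d", r.MemoryID)
	}
	rel := strings.TrimSpace(strings.ToLower(r.Relation))
	if _, ok := legalRelations[rel]; !ok {
		return fmt.Errorf("illegal relation: %s", r.Relation)
	}
	if rel == "update" && strings.TrimSpace(r.UpdatedContent) == "" {
		return fmt.Errorf("update requires updated_content")
	}
	return nil
}

func main() {
	ok := validate(&comparisonResult{MemoryID: 7, Relation: "Update", UpdatedContent: "merged text"})
	bad := validate(&comparisonResult{MemoryID: 7, Relation: "update"}) // update without content
	fmt.Println(ok == nil, bad != nil)                                  // prints "true true"
}
```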
@@ -56,7 +56,7 @@ func NormalizeFacts(candidates []memorymodel.FactCandidate) []memorymodel.Normal
 	sensitivityLevel := clampInt(candidate.SensitivityLevel, 0, 2)

 	normalizedContent := strings.ToLower(content)
-	contentHash := hashContent(memoryType, normalizedContent)
+	contentHash := HashContent(memoryType, normalizedContent)
 	dedupKey := fmt.Sprintf("%s:%s", memoryType, contentHash)
 	if _, exists := seen[dedupKey]; exists {
 		continue
@@ -126,7 +126,10 @@ func defaultImportanceByType(memoryType string) float64 {
 	}
 }

-func hashContent(memoryType, normalizedContent string) string {
+// HashContent computes the dedup hash of memory content.
+// Algorithm: sha256(memoryType + "::" + normalizedContent)
+// Note: this function is exported so the decision layer's apply_actions can reuse the same algorithm, avoiding dedup failures caused by inconsistent hashes.
+func HashContent(memoryType, normalizedContent string) string {
 	sum := sha256.Sum256([]byte(memoryType + "::" + normalizedContent))
 	return hex.EncodeToString(sum[:])
 }
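The hash above keys on the memory type as well as the normalized content, so identical text filed under different types does not collide. A self-contained sketch of the same algorithm (the local lowercase `hashContent` here is just an illustration; the sample type/content strings are invented):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Same algorithm as the exported HashContent:
// sha256(memoryType + "::" + normalizedContent), hex-encoded.
func hashContent(memoryType, normalizedContent string) string {
	sum := sha256.Sum256([]byte(memoryType + "::" + normalizedContent))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := hashContent("preference", "likes cats")
	b := hashContent("profile", "likes cats")
	// The memory type is part of the hash input, so identical content
	// under different types yields different hashes.
	fmt.Println(len(a), a != b) // prints "64 true"
}
```

Sharing one exported function for this (rather than re-deriving the hash in the decision layer) is what keeps the Step 1 exact-hash lookup and the write path mutually consistent.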
248
backend/memory/worker/apply_actions.go
Normal file
@@ -0,0 +1,248 @@
|
||||
package worker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
|
||||
memoryrepo "github.com/LoveLosita/smartflow/backend/memory/repo"
|
||||
memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
|
||||
"github.com/LoveLosita/smartflow/backend/model"
|
||||
)
|
||||
|
||||
// ApplyActionOutcome 是单个决策动作的执行结果。
|
||||
//
|
||||
// 说明:
|
||||
// 1. Action 记录本次执行的动作类型(ADD/UPDATE/DELETE/NONE);
|
||||
// 2. OldItem 仅在 UPDATE/DELETE 时有值,用于审计 before 快照;
|
||||
// 3. NewItem 仅在 ADD/UPDATE 时有值,用于审计 after 快照和向量同步;
|
||||
// 4. NeedsSync 标记是否需要触发向量同步(ADD 和 UPDATE 需要)。
|
||||
type ApplyActionOutcome struct {
|
||||
Action string
|
||||
MemoryID int64
|
||||
OldItem *model.MemoryItem // UPDATE/DELETE 时的 before 快照
|
||||
NewItem *model.MemoryItem // ADD/UPDATE 时的 after 快照
|
||||
NeedsSync bool // 是否需要向量同步
|
||||
}
|
||||
|
||||
// ApplyFinalDecision 把汇总后的最终决策落为数据库动作。
|
||||
//
|
||||
// 职责边界:
|
||||
// 1. 在调用方事务内执行,不做独立事务管理;
|
||||
// 2. 负责写 memory_items + memory_audit_logs,不负责 job 状态推进;
|
||||
// 3. 所有动作的审计日志都由这里统一产出。
|
||||
//
|
||||
// 参数说明:
|
||||
// - itemRepo/auditRepo 必须是事务绑定的实例(WithTx 后的);
|
||||
// - fact 是当前正在处理的标准化事实;
|
||||
// - job/payload 提供写入所需的上下文(user_id、conversation_id 等)。
|
||||
func ApplyFinalDecision(
|
||||
ctx context.Context,
|
||||
itemRepo *memoryrepo.ItemRepo,
|
||||
auditRepo *memoryrepo.AuditRepo,
|
||||
decision memorymodel.FinalDecision,
|
||||
fact memorymodel.NormalizedFact,
|
||||
job *model.MemoryJob,
|
||||
payload memorymodel.ExtractJobPayload,
|
||||
) (*ApplyActionOutcome, error) {
|
||||
switch decision.Action {
|
||||
case memorymodel.DecisionActionAdd:
|
||||
return applyAdd(ctx, itemRepo, auditRepo, fact, job, payload, decision.Reason)
|
||||
case memorymodel.DecisionActionUpdate:
|
||||
return applyUpdate(ctx, itemRepo, auditRepo, decision, fact, job, payload)
|
||||
case memorymodel.DecisionActionDelete:
|
||||
return applyDelete(ctx, itemRepo, auditRepo, decision, payload.UserID)
|
||||
case memorymodel.DecisionActionNone:
|
||||
return &ApplyActionOutcome{
|
||||
Action: memorymodel.DecisionActionNone,
|
||||
NeedsSync: false,
|
||||
}, nil
|
||||
default:
|
||||
return nil, fmt.Errorf("未知的决策动作: %s", decision.Action)
|
||||
}
|
||||
}
|
||||
|
||||
// applyAdd 执行新增动作:构建 MemoryItem → 写库 → 写审计。
|
||||
func applyAdd(
|
||||
ctx context.Context,
|
||||
itemRepo *memoryrepo.ItemRepo,
|
||||
auditRepo *memoryrepo.AuditRepo,
|
||||
fact memorymodel.NormalizedFact,
|
||||
job *model.MemoryJob,
|
||||
payload memorymodel.ExtractJobPayload,
|
||||
reason string,
|
||||
) (*ApplyActionOutcome, error) {
|
||||
// 1. 复用 runner.go 的 buildMemoryItems 构建单条 MemoryItem。
|
||||
items := buildMemoryItems(job, payload, []memorymodel.NormalizedFact{fact})
|
||||
if len(items) == 0 {
|
||||
return nil, fmt.Errorf("构建记忆条目失败: memory_type=%s", fact.MemoryType)
|
||||
}
|
||||
|
||||
// 2. 写库,GORM Create 会自动填充 items[0].ID。
|
||||
if err := itemRepo.UpsertItems(ctx, items); err != nil {
|
||||
return nil, fmt.Errorf("新增记忆写入失败: %w", err)
|
||||
}
|
||||
// 注意:必须在 UpsertItems 之后取 items[0],因为 GORM Create 回填 ID 到 items[i],
|
||||
// 之前用 item := items[0] 在 UpsertItems 之前拷贝,导致副本 ID 永远为 0。
|
||||
item := items[0]
|
||||
|
||||
// 3. 写审计日志(create 动作只有 after 快照)。
|
||||
audit := memoryutils.BuildItemAuditLog(
|
||||
item.ID,
|
||||
item.UserID,
|
||||
memoryutils.AuditOperationCreate,
|
||||
"system",
|
||||
formatAuditReason("决策层新增", reason),
|
||||
nil,
|
||||
&item,
|
||||
)
|
||||
if err := auditRepo.Create(ctx, audit); err != nil {
|
||||
return nil, fmt.Errorf("新增审计写入失败: %w", err)
|
||||
}
|
||||
|
||||
return &ApplyActionOutcome{
|
||||
Action: memorymodel.DecisionActionAdd,
|
||||
MemoryID: item.ID,
|
||||
NewItem: &item,
|
||||
NeedsSync: true,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// applyUpdate 执行更新动作:查 before → 更新字段 → 写审计(before+after)。
|
||||
func applyUpdate(
|
||||
ctx context.Context,
|
||||
itemRepo *memoryrepo.ItemRepo,
|
||||
auditRepo *memoryrepo.AuditRepo,
|
||||
decision memorymodel.FinalDecision,
|
||||
fact memorymodel.NormalizedFact,
|
||||
job *model.MemoryJob,
|
||||
payload memorymodel.ExtractJobPayload,
|
||||
) (*ApplyActionOutcome, error) {
|
||||
// 1. 查 before 快照,同时确认旧记忆存在且属于该用户。
|
||||
oldItem, err := itemRepo.GetByIDForUser(ctx, payload.UserID, decision.TargetID)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("查询旧记忆失败(id=%d): %w", decision.TargetID, err)
|
||||
}
|
||||
|
||||
// 2. 重新计算 NormalizedContent 和 ContentHash,保证和 NormalizeFacts 的逻辑一致。
|
||||
// 原因:LLM 输出的 merged content 需要重新走归一化链,避免大小写/空格差异导致后续 Hash 去重失效。
|
||||
updatedContent := strings.TrimSpace(decision.Content)
|
||||
if updatedContent == "" {
|
||||
updatedContent = fact.Content
|
||||
}
|
||||
normalizedContent := strings.ToLower(updatedContent)
|
||||
// 复用 utils.HashContent 的 sha256(memoryType + "::" + normalizedContent) 算法。
|
||||
contentHash := memoryutils.HashContent(fact.MemoryType, normalizedContent)
|
||||
|
||||
title := strings.TrimSpace(decision.Title)
|
||||
if title == "" {
|
||||
title = oldItem.Title
|
||||
}
|
||||
|
||||
// 3. 执行内容更新。
|
||||
fields := memorymodel.UpdateContentFields{
|
||||
Title: title,
|
||||
Content: updatedContent,
|
||||
NormalizedContent: normalizedContent,
|
||||
ContentHash: contentHash,
|
||||
Confidence: fact.Confidence,
|
||||
Importance: fact.Importance,
|
||||
}
|
||||
if err := itemRepo.UpdateContentByID(ctx, decision.TargetID, fields); err != nil {
|
||||
return nil, fmt.Errorf("更新记忆内容失败(id=%d): %w", decision.TargetID, err)
|
||||
}
|
||||
|
||||
// 4. 构造 after 快照用于审计。
|
||||
afterItem := *oldItem
|
||||
afterItem.Title = title
|
||||
afterItem.Content = updatedContent
|
||||
if afterItem.NormalizedContent != nil {
|
||||
afterItem.NormalizedContent = &normalizedContent
|
||||
} else {
|
||||
afterItem.NormalizedContent = strPtrFromValue(normalizedContent)
|
||||
}
|
||||
if afterItem.ContentHash != nil {
|
||||
afterItem.ContentHash = &contentHash
|
||||
} else {
|
||||
afterItem.ContentHash = strPtrFromValue(contentHash)
|
||||
}
|
||||
afterItem.Confidence = fact.Confidence
|
||||
afterItem.Importance = fact.Importance
|
||||
|
||||
// 5. 写审计日志(update 动作同时有 before 和 after 快照)。
|
||||
audit := memoryutils.BuildItemAuditLog(
|
||||
oldItem.ID,
|
||||
oldItem.UserID,
|
||||
memoryutils.AuditOperationUpdate,
|
||||
"system",
|
||||
formatAuditReason("决策层更新", decision.Reason),
|
||||
oldItem,
|
||||
&afterItem,
|
||||
)
|
||||
if err := auditRepo.Create(ctx, audit); err != nil {
|
||||
return nil, fmt.Errorf("更新审计写入失败: %w", err)
|
||||
}
|
||||
|
||||
// 6. 向量状态重置为 pending,触发向量重同步。
|
||||
// 原因:内容变了,旧向量已过期,需要重新 embed。
|
||||
_ = itemRepo.UpdateVectorStateByID(ctx, oldItem.ID, "pending", nil)
|
||||
|
||||
return &ApplyActionOutcome{
|
||||
Action: memorymodel.DecisionActionUpdate,
|
||||
MemoryID: oldItem.ID,
|
||||
OldItem: oldItem,
|
||||
NewItem: &afterItem,
|
||||
NeedsSync: true,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// applyDelete 执行软删除动作:查 before → 软删 → 写审计(before only)。
|
||||
func applyDelete(
|
||||
ctx context.Context,
|
||||
itemRepo *memoryrepo.ItemRepo,
|
||||
auditRepo *memoryrepo.AuditRepo,
|
||||
decision memorymodel.FinalDecision,
|
||||
userID int,
|
||||
) (*ApplyActionOutcome, error) {
|
||||
// 1. 查 before 快照。
|
||||
oldItem, err := itemRepo.GetByIDForUser(ctx, userID, decision.TargetID)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("查询旧记忆失败(id=%d): %w", decision.TargetID, err)
|
||||
}
|
||||
|
||||
// 2. 执行软删除。
|
||||
if err := itemRepo.SoftDeleteByID(ctx, userID, decision.TargetID); err != nil {
|
||||
return nil, fmt.Errorf("软删除记忆失败(id=%d): %w", decision.TargetID, err)
|
||||
}
|
||||
|
||||
// 3. 写审计日志(delete 动作只有 before 快照)。
|
||||
audit := memoryutils.BuildItemAuditLog(
|
||||
oldItem.ID,
|
||||
oldItem.UserID,
|
||||
memoryutils.AuditOperationDelete,
|
||||
"system",
|
||||
formatAuditReason("决策层删除", decision.Reason),
|
||||
oldItem,
|
||||
nil,
|
||||
)
|
||||
if err := auditRepo.Create(ctx, audit); err != nil {
|
||||
return nil, fmt.Errorf("删除审计写入失败: %w", err)
|
||||
}
|
||||
|
||||
return &ApplyActionOutcome{
|
||||
Action: memorymodel.DecisionActionDelete,
|
||||
MemoryID: oldItem.ID,
|
||||
OldItem: oldItem,
|
||||
NeedsSync: false,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// formatAuditReason 统一审计日志的 reason 格式。
|
||||
func formatAuditReason(prefix, detail string) string {
|
||||
detail = strings.TrimSpace(detail)
|
||||
if detail == "" {
|
||||
return prefix
|
||||
}
|
||||
return prefix + ": " + detail
|
||||
}
|
||||
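The "after" snapshot construction in applyUpdate deserves a note: `afterItem := *oldItem` is a shallow struct copy, so pointer fields such as NormalizedContent still point at the same strings as the "before" snapshot. That is why the code assigns fresh pointers instead of writing through the existing ones. A minimal sketch of the distinction (the `item` type is a local stand-in, not the real MemoryItem):

```go
package main

import "fmt"

// Minimal stand-in for a row with a pointer column, like MemoryItem.NormalizedContent.
type item struct {
	Content           string
	NormalizedContent *string
}

func main() {
	before := "old text"
	old := item{Content: "old", NormalizedContent: &before}

	// A plain struct copy shares the pointee of NormalizedContent.
	after := old
	norm := "new text"
	// Re-pointing the field (as applyUpdate does) leaves the "before" snapshot intact...
	after.NormalizedContent = &norm
	// ...whereas writing *after.NormalizedContent = "new text" would have mutated both snapshots.
	fmt.Println(*old.NormalizedContent, *after.NormalizedContent) // prints "old text new text"
}
```

Had the update written through the shared pointer, the audit log would record identical before/after values and lose the change history.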
367
backend/memory/worker/decision_flow.go
Normal file
@@ -0,0 +1,367 @@
|
||||
package worker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
infrarag "github.com/LoveLosita/smartflow/backend/infra/rag"
|
||||
memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
|
||||
memoryrepo "github.com/LoveLosita/smartflow/backend/memory/repo"
|
||||
memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
|
||||
"github.com/LoveLosita/smartflow/backend/model"
|
||||
"gorm.io/gorm"
|
||||
)
|
||||
|
||||
// DecisionFlowOutcome 是一轮决策流程的汇总结果。
|
||||
//
|
||||
// 说明:
|
||||
// 1. AddCount/UpdateCount/DeleteCount/NoneCount 分别统计四种动作的执行次数;
|
||||
// 2. ItemsToSync 收集所有需要向量同步的 item(ADD 和 UPDATE 产出的);
|
||||
// 3. VectorDeletes 收集所有需要从向量库删除的 memory_id(DELETE 动作产出的)。
|
||||
type DecisionFlowOutcome struct {
|
||||
AddCount int
|
||||
UpdateCount int
|
||||
DeleteCount int
|
||||
NoneCount int
|
||||
ItemsToSync []model.MemoryItem // 需要向量同步的新增/更新 item
|
||||
VectorDeletes []int64 // 需要从向量库删除的 memory_id 列表
|
||||
}
|
||||
|
||||
// factDecisionResult 是单条 fact 的决策执行结果,支持一对多动作。
|
||||
// 原因:conflict 场景下会产生 DELETE + ADD 两个动作,需要打包返回。
|
||||
type factDecisionResult struct {
|
||||
Outcomes []*ApplyActionOutcome
|
||||
}
|
||||
|
||||
// executeDecisionFlow 在 worker 内编排"召回→逐对比对→汇总→执行"全流程。
|
||||
//
|
||||
// 职责边界:
|
||||
// 1. 对每条 fact 独立执行完整决策流程,fact 之间互不影响;
|
||||
// 2. 所有数据库写操作在同一个事务内完成,保证原子性;
|
||||
// 3. 向量同步在事务外异步执行,不影响事务提交。
|
||||
//
|
||||
// 降级策略:
|
||||
// 1. Milvus 不可用时,回退到 MySQL 按类型查最近 N 条活跃记忆;
|
||||
// 2. 单条 LLM 比对失败不影响其他候选,视为 unrelated;
|
||||
// 3. 整体流程报错时,由上层根据 FallbackMode 决定是否退回旧路径。
|
||||
func (r *Runner) executeDecisionFlow(
|
||||
ctx context.Context,
|
||||
job *model.MemoryJob,
|
||||
payload memorymodel.ExtractJobPayload,
|
||||
facts []memorymodel.NormalizedFact,
|
||||
) (*DecisionFlowOutcome, error) {
|
||||
outcome := &DecisionFlowOutcome{
|
||||
ItemsToSync: make([]model.MemoryItem, 0, len(facts)),
|
||||
VectorDeletes: make([]int64, 0),
|
||||
}
|
||||
|
||||
// 1. 所有数据库写操作在同一个事务内完成。
|
||||
err := r.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
|
||||
itemRepo := r.itemRepo.WithTx(tx)
|
||||
auditRepo := r.auditRepo.WithTx(tx)
|
||||
jobRepo := r.jobRepo.WithTx(tx)
|
||||
|
||||
for _, fact := range facts {
|
||||
// 2. 对每条 fact 执行完整决策流程。
|
||||
result, err := r.executeDecisionForFact(ctx, itemRepo, auditRepo, fact, job, payload)
|
||||
if err != nil {
|
||||
// 单条 fact 决策失败不影响其他 fact,记录日志后继续。
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[WARN][去重] 单条 fact 决策失败,跳过继续: job_id=%d user_id=%d memory_type=%s hash=%s err=%v", job.ID, payload.UserID, fact.MemoryType, fact.ContentHash, err)
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
// 3. 汇总结果到全局 outcome。
|
||||
for _, actionOutcome := range result.Outcomes {
|
||||
r.collectActionOutcome(outcome, actionOutcome)
|
||||
}
|
||||
}
|
||||
|
||||
// 4. 事务内最后确认 job 成功。
|
||||
return jobRepo.MarkSuccess(ctx, job.ID)
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return outcome, nil
|
||||
}
|
||||
|
||||
// executeDecisionForFact 对单条 fact 执行完整决策流程。
|
||||
//
|
||||
// 步骤:
|
||||
// 1. Hash 精确命中检查 — 已有完全相同内容则直接跳过;
|
||||
// 2. Milvus 语义召回 — 从旧记忆中筛出 TopK 候选(含降级);
|
||||
// 3. 逐对 LLM 比对 — 每次拿一条新 fact 和一条旧候选比对;
|
||||
// 4. 确定性汇总 — 根据 LLM 比对结果确定 ADD/UPDATE/DELETE/NONE;
|
||||
// 5. 校验 + 执行 — 落为数据库动作 + 审计日志。
|
||||
func (r *Runner) executeDecisionForFact(
|
||||
ctx context.Context,
|
||||
itemRepo *memoryrepo.ItemRepo,
|
||||
auditRepo *memoryrepo.AuditRepo,
|
||||
fact memorymodel.NormalizedFact,
|
||||
job *model.MemoryJob,
|
||||
payload memorymodel.ExtractJobPayload,
|
||||
) (*factDecisionResult, error) {
|
||||
result := &factDecisionResult{}
|
||||
|
||||
// Step 1: Hash 精确命中检查。
|
||||
// 原因:如果已有完全相同内容的记忆,直接跳过,无需调 LLM。
|
||||
existing, err := itemRepo.FindActiveByHash(ctx, payload.UserID, fact.ContentHash)
|
||||
if err != nil {
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[WARN][去重] Hash 精确匹配查询失败: user_id=%d memory_type=%s hash=%s err=%v", payload.UserID, fact.MemoryType, fact.ContentHash, err)
|
||||
}
|
||||
}
|
||||
if len(existing) > 0 {
|
||||
result.Outcomes = append(result.Outcomes, &ApplyActionOutcome{
|
||||
Action: memorymodel.DecisionActionNone,
|
||||
NeedsSync: false,
|
||||
})
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// Step 2: Milvus 语义召回(含降级)。
|
||||
candidates := r.recallCandidates(ctx, payload, fact)
|
||||
|
||||
// 打印召回候选详情,便于排查向量召回和阈值过滤效果。
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[DEBUG][去重] 语义召回候选: job_id=%d user_id=%d memory_type=%s candidate_count=%d",
|
||||
job.ID, payload.UserID, fact.MemoryType, len(candidates))
|
||||
for _, c := range candidates {
|
||||
r.logger.Printf("[DEBUG][去重] 候选详情: memory_id=%d score=%.4f content=\"%s\"",
|
||||
c.MemoryID, c.Score, truncateRunes(c.Content, 50))
|
||||
}
|
||||
}
|
||||
|
||||
// Step 3: 逐对 LLM 比对。
|
||||
comparisons := r.compareWithCandidates(ctx, fact, candidates)
|
||||
|
||||
// Step 4: 确定性汇总。
|
||||
decision := memoryutils.AggregateComparisons(fact, comparisons, candidates)
|
||||
|
||||
// 打印汇总决策结果,便于排查去重终态。
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[DEBUG][去重] 汇总决策: job_id=%d action=%s target_id=%d reason=\"%s\"",
|
||||
job.ID, decision.Action, decision.TargetID, decision.Reason)
|
||||
}
|
||||
|
||||
// Step 5: 校验 + 执行。
|
||||
actionOutcome, err := ApplyFinalDecision(ctx, itemRepo, auditRepo, *decision, fact, job, payload)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("执行决策动作失败: %w", err)
|
||||
}
|
||||
result.Outcomes = append(result.Outcomes, actionOutcome)
|
||||
|
||||
// Step 6: conflict (DELETE) 后需要补一个 ADD 写入新 fact。
|
||||
// 原因:旧记忆矛盾需删除,但新事实本身仍然有效,必须写入。
|
||||
if decision.Action == memorymodel.DecisionActionDelete {
|
||||
addDecision := memorymodel.FinalDecision{
|
||||
Action: memorymodel.DecisionActionAdd,
|
||||
Reason: "冲突旧记忆已删除,写入新事实",
|
||||
}
|
||||
addOutcome, addErr := ApplyFinalDecision(ctx, itemRepo, auditRepo, addDecision, fact, job, payload)
|
||||
if addErr != nil {
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[WARN] 冲突后补增失败: memory_type=%s err=%v", fact.MemoryType, addErr)
|
||||
}
|
||||
} else if addOutcome != nil {
|
||||
result.Outcomes = append(result.Outcomes, addOutcome)
|
||||
}
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// recallCandidates 从旧记忆中召回候选,先尝试 Milvus,降级时用 MySQL。
|
||||
func (r *Runner) recallCandidates(
|
||||
ctx context.Context,
|
||||
payload memorymodel.ExtractJobPayload,
|
||||
fact memorymodel.NormalizedFact,
|
||||
) []memorymodel.CandidateSnapshot {
|
||||
// 1. 优先使用 Milvus 向量语义召回。
|
||||
if r.ragRuntime != nil {
|
||||
retrieveResult, err := r.ragRuntime.RetrieveMemory(ctx, infrarag.MemoryRetrieveRequest{
|
||||
Query: fact.Content,
|
||||
TopK: r.cfg.DecisionCandidateTopK,
|
||||
Threshold: r.cfg.DecisionCandidateMinScore,
|
||||
UserID: payload.UserID,
|
||||
MemoryTypes: []string{fact.MemoryType},
|
||||
Action: "search",
|
||||
})
|
||||
if err == nil && len(retrieveResult.Items) > 0 {
|
||||
candidates := r.buildCandidatesFromRAG(retrieveResult.Items)
|
||||
if len(candidates) > 0 {
|
||||
return candidates
|
||||
}
|
||||
// RAG 返回了结果但 DocumentID 全部解析失败,降级到 MySQL。
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[WARN][去重] Milvus 返回 %d 条结果但 DocumentID 全部解析失败,降级到 MySQL: user_id=%d memory_type=%s", len(retrieveResult.Items), payload.UserID, fact.MemoryType)
|
||||
}
|
||||
}
|
||||
if err != nil && r.logger != nil {
|
||||
r.logger.Printf("[WARN][去重] Milvus 语义召回失败,降级到 MySQL: user_id=%d memory_type=%s topk=%d err=%v", payload.UserID, fact.MemoryType, r.cfg.DecisionCandidateTopK, err)
|
||||
}
|
||||
}
|
||||
|
||||
// 2. 降级:按 user_id + memory_type + status=active 查最近 N 条。
|
||||
return r.recallCandidatesFromMySQL(ctx, payload, fact)
|
||||
}
|
||||
|
||||
// buildCandidatesFromRAG 从 RAG 检索结果构建候选快照列表。
|
||||
//
|
||||
// 步骤:
|
||||
// 1. 从 DocumentID(格式 memory:{id})解析出 mysql_id;
|
||||
// 2. 从 metadata 提取 title 和 memory_type;
|
||||
// 3. 跳过无法解析 DocumentID 的结果。
|
||||
func (r *Runner) buildCandidatesFromRAG(hits []infrarag.RetrieveHit) []memorymodel.CandidateSnapshot {
|
||||
candidates := make([]memorymodel.CandidateSnapshot, 0, len(hits))
|
||||
for _, hit := range hits {
|
||||
memoryID := parseMemoryID(hit.DocumentID)
|
||||
if memoryID <= 0 {
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[WARN][去重] DocumentID 解析失败,跳过候选: document_id=%q", hit.DocumentID)
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
candidates = append(candidates, memorymodel.CandidateSnapshot{
|
||||
MemoryID: memoryID,
|
||||
Title: asStringFromMap(hit.Metadata, "title"),
|
||||
Content: hit.Text,
|
||||
MemoryType: asStringFromMap(hit.Metadata, "memory_type"),
|
||||
Score: hit.Score,
|
||||
})
|
||||
}
|
||||
return candidates
|
||||
}
|
||||
|
||||
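buildCandidatesFromRAG depends on DocumentIDs of the form `memory:{id}` and on the contract that `parseMemoryID` returns a value <= 0 for anything unparseable (which is then skipped). The real `parseMemoryID` is not shown in this hunk; a hypothetical sketch of that contract:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemoryID is a hypothetical parser for DocumentIDs of the form "memory:{id}".
// The actual implementation is not part of this diff; this sketch only illustrates
// the contract buildCandidatesFromRAG relies on: a return value <= 0 means
// "unparseable, skip this candidate".
func parseMemoryID(documentID string) int64 {
	const prefix = "memory:"
	if !strings.HasPrefix(documentID, prefix) {
		return 0
	}
	id, err := strconv.ParseInt(strings.TrimPrefix(documentID, prefix), 10, 64)
	if err != nil || id <= 0 {
		return 0
	}
	return id
}

func main() {
	fmt.Println(parseMemoryID("memory:42"), parseMemoryID("web:42")) // prints "42 0"
}
```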
// recallCandidatesFromMySQL 从 MySQL 查最近 N 条活跃记忆作为候选。
|
||||
// 这是 Milvus 不可用时的降级方案。
|
||||
func (r *Runner) recallCandidatesFromMySQL(
|
||||
ctx context.Context,
|
||||
payload memorymodel.ExtractJobPayload,
|
||||
fact memorymodel.NormalizedFact,
|
||||
) []memorymodel.CandidateSnapshot {
|
||||
items, err := r.itemRepo.FindByQuery(ctx, memorymodel.ItemQuery{
|
||||
UserID: payload.UserID,
|
||||
MemoryTypes: []string{fact.MemoryType},
|
||||
Statuses: []string{model.MemoryItemStatusActive},
|
||||
Limit: r.cfg.DecisionCandidateTopK,
|
||||
})
|
||||
if err != nil {
|
||||
if r.logger != nil {
|
||||
r.logger.Printf("[WARN] MySQL 降级召回失败: err=%v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
candidates := make([]memorymodel.CandidateSnapshot, 0, len(items))
|
||||
for _, item := range items {
|
||||
candidates = append(candidates, memorymodel.CandidateSnapshot{
|
||||
MemoryID: item.ID,
|
||||
Title: item.Title,
|
||||
Content: item.Content,
|
||||
MemoryType: item.MemoryType,
|
||||
Score: 0, // MySQL 降级无向量分数
|
||||
})
|
||||
}
|
||||
return candidates
|
||||
}

// compareWithCandidates calls the LLM once per candidate to judge the relation
// between the new fact and that candidate.
//
// Notes:
// 1. An LLM call failure is treated as unrelated and does not affect the
//    comparison of other candidates;
// 2. Comparison results are validated, and invalid ones are also treated as
//    unrelated;
// 3. With no candidates, or a nil decision orchestrator, an empty slice is
//    returned and the caller takes the plain ADD path.
func (r *Runner) compareWithCandidates(
	ctx context.Context,
	fact memorymodel.NormalizedFact,
	candidates []memorymodel.CandidateSnapshot,
) []memorymodel.ComparisonResult {
	if r.decisionOrchestrator == nil || len(candidates) == 0 {
		return nil
	}

	comparisons := make([]memorymodel.ComparisonResult, 0, len(candidates))
	for _, candidate := range candidates {
		compResult, err := r.decisionOrchestrator.Compare(ctx, fact, candidate)
		if err != nil {
			// LLM call failed: treat as unrelated, keep comparing the rest.
			if r.logger != nil {
				r.logger.Printf("[WARN][dedup] pairwise LLM comparison failed, treating as unrelated: candidate_id=%d memory_type=%s err=%v", candidate.MemoryID, fact.MemoryType, err)
			}
			continue
		}

		// Validate the LLM output; skip invalid results as well.
		if validateErr := memoryutils.ValidateComparisonResult(compResult); validateErr != nil {
			if r.logger != nil {
				r.logger.Printf("[WARN][dedup] LLM comparison result failed validation, treating as unrelated: candidate_id=%d memory_type=%s relation=%s err=%v", candidate.MemoryID, fact.MemoryType, compResult.Relation, validateErr)
			}
			continue
		}

		comparisons = append(comparisons, *compResult)

		// Log the comparison result to help diagnose misjudgments.
		if r.logger != nil {
			r.logger.Printf("[DEBUG][dedup] LLM comparison result: candidate_id=%d score=%.4f relation=%s reason=%q candidate_content=%q",
				candidate.MemoryID, candidate.Score, compResult.Relation, compResult.Reason, truncateRunes(candidate.Content, 50))
		}
	}
	return comparisons
}

// collectActionOutcome folds a single action result into the aggregate outcome.
func (r *Runner) collectActionOutcome(outcome *DecisionFlowOutcome, actionOutcome *ApplyActionOutcome) {
	if actionOutcome == nil {
		return
	}

	switch actionOutcome.Action {
	case memorymodel.DecisionActionAdd:
		outcome.AddCount++
		if actionOutcome.NeedsSync && actionOutcome.NewItem != nil {
			outcome.ItemsToSync = append(outcome.ItemsToSync, *actionOutcome.NewItem)
		}
	case memorymodel.DecisionActionUpdate:
		outcome.UpdateCount++
		if actionOutcome.NeedsSync && actionOutcome.NewItem != nil {
			outcome.ItemsToSync = append(outcome.ItemsToSync, *actionOutcome.NewItem)
		}
	case memorymodel.DecisionActionDelete:
		outcome.DeleteCount++
		outcome.VectorDeletes = append(outcome.VectorDeletes, actionOutcome.MemoryID)
	case memorymodel.DecisionActionNone:
		outcome.NoneCount++
	}
}

// asStringFromMap safely extracts a string value from a metadata map.
func asStringFromMap(m map[string]any, key string) string {
	if m == nil {
		return ""
	}
	v, ok := m[key]
	if !ok || v == nil {
		return ""
	}
	return fmt.Sprintf("%v", v)
}
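// Illustrative usage (a sketch, not part of this change): fmt.Sprintf("%v", v)
// stringifies any value type, so non-string metadata round-trips as its
// printed form instead of failing a type assertion:
//
//	asStringFromMap(map[string]any{"title": "note"}, "title") // "note"
//	asStringFromMap(map[string]any{"n": 42}, "n")             // "42"
//	asStringFromMap(nil, "title")                             // ""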

// truncateRunes keeps the first n runes of s and appends "..." when truncated.
// Used for log previews, so overly long content cannot blow up a single log line.
func truncateRunes(s string, n int) string {
	if n <= 0 {
		return ""
	}
	runes := []rune(s)
	if len(runes) <= n {
		return s
	}
	return string(runes[:n]) + "..."
}
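// Illustrative usage (a sketch, not part of this change): truncateRunes counts
// runes rather than bytes, so multi-byte UTF-8 content is never split in the
// middle of a character:
//
//	truncateRunes("hello", 10)      // "hello" (within the limit, unchanged)
//	truncateRunes("hello world", 5) // "hello..."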
@@ -12,6 +12,7 @@ import (
 
 	infrarag "github.com/LoveLosita/smartflow/backend/infra/rag"
 	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
+	memoryorchestrator "github.com/LoveLosita/smartflow/backend/memory/orchestrator"
 	memoryrepo "github.com/LoveLosita/smartflow/backend/memory/repo"
 	memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
 	"github.com/LoveLosita/smartflow/backend/model"
@@ -41,6 +42,13 @@ type Runner struct {
 	extractor    Extractor
 	ragRuntime   infrarag.Runtime
 	logger       *log.Logger
+
+	// Decision-layer dependencies.
+	// Notes:
+	// 1. cfg carries the decision-layer config (enabled, TopK, MinScore, FallbackMode);
+	// 2. decisionOrchestrator runs the pairwise LLM comparison when the decision layer is enabled; when nil, the legacy path is used.
+	cfg                  memorymodel.Config
+	decisionOrchestrator *memoryorchestrator.LLMDecisionOrchestrator
 }
 
 // NewRunner constructs the memory worker runner.
@@ -52,16 +60,20 @@ func NewRunner(
 	settingsRepo *memoryrepo.SettingsRepo,
 	extractor Extractor,
 	ragRuntime infrarag.Runtime,
+	cfg memorymodel.Config,
+	decisionOrchestrator *memoryorchestrator.LLMDecisionOrchestrator,
 ) *Runner {
 	return &Runner{
-		db:           db,
-		jobRepo:      jobRepo,
-		itemRepo:     itemRepo,
-		auditRepo:    auditRepo,
-		settingsRepo: settingsRepo,
-		extractor:    extractor,
-		ragRuntime:   ragRuntime,
-		logger:       log.Default(),
+		db:                   db,
+		jobRepo:              jobRepo,
+		itemRepo:             itemRepo,
+		auditRepo:            auditRepo,
+		settingsRepo:         settingsRepo,
+		extractor:            extractor,
+		ragRuntime:           ragRuntime,
+		logger:               log.Default(),
+		cfg:                  cfg,
+		decisionOrchestrator: decisionOrchestrator,
 	}
 }
 
@@ -145,7 +157,42 @@ func (r *Runner) RunOnce(ctx context.Context) (*RunOnceResult, error) {
 		return result, nil
 	}
 
-	// 5. First write memory items and audit logs in a transaction, then confirm the job succeeded.
+	// 5. Choose the write path from config: decision layer or legacy path.
+	if r.cfg.DecisionEnabled && r.decisionOrchestrator != nil {
+		// 5a. Decision path: recall → compare → aggregate → apply.
+		outcome, decisionErr := r.executeDecisionFlow(ctx, job, payload, facts)
+		if decisionErr != nil {
+			// The decision flow failed as a whole; FallbackMode decides whether to fall back to the legacy path.
+			r.logger.Printf("[WARN][dedup] decision flow failed: job_id=%d user_id=%d facts_count=%d fallback=%s err=%v", job.ID, payload.UserID, len(facts), r.cfg.DecisionFallbackMode, decisionErr)
+			if r.cfg.DecisionFallbackMode == "legacy_add" {
+				if err = r.persistMemoryWrite(ctx, job.ID, items); err != nil {
+					failReason := fmt.Sprintf("memory persist failed after decision fallback: %v", err)
+					_ = r.jobRepo.MarkFailed(ctx, job.ID, failReason)
+					result.Status = model.MemoryJobStatusFailed
+					return result, nil
+				}
+				result.Status = model.MemoryJobStatusSuccess
+				result.Facts = len(items)
+				r.syncMemoryVectors(ctx, items)
+				return result, nil
+			}
+			// FallbackMode=drop: discard this round's extraction and mark the job successful.
+			_ = r.jobRepo.MarkSuccess(ctx, job.ID)
+			result.Status = model.MemoryJobStatusSuccess
+			return result, nil
+		}
+
+		// 5b. Decision succeeded: sync vectors (add/update) and delete stale ones.
+		result.Status = model.MemoryJobStatusSuccess
+		result.Facts = outcome.AddCount + outcome.UpdateCount + outcome.DeleteCount
+		r.syncMemoryVectors(ctx, outcome.ItemsToSync)
+		r.syncVectorDeletes(ctx, outcome.VectorDeletes)
+		r.logger.Printf("[dedup] decision flow done: job_id=%d user_id=%d added=%d updated=%d deleted=%d skipped=%d",
+			job.ID, payload.UserID, outcome.AddCount, outcome.UpdateCount, outcome.DeleteCount, outcome.NoneCount)
+		return result, nil
+	}
+
+	// 5c. Legacy path, exactly as before: write memory items and audit logs in a transaction, then confirm the job.
 	if err = r.persistMemoryWrite(ctx, job.ID, items); err != nil {
 		failReason := fmt.Sprintf("memory persist failed: %v", err)
 		_ = r.jobRepo.MarkFailed(ctx, job.ID, failReason)
@@ -251,7 +298,7 @@ func (r *Runner) syncMemoryVectors(ctx context.Context, items []model.MemoryItem
 		Items: requestItems,
 	})
 	if err != nil {
-		r.logger.Printf("memory vector sync failed: err=%v", err)
+		r.logger.Printf("[WARN][dedup] memory vector sync failed: count=%d err=%v", len(items), err)
 		for _, item := range items {
 			_ = r.itemRepo.UpdateVectorStateByID(ctx, item.ID, "failed", nil)
 		}
@@ -273,6 +320,42 @@ func (r *Runner) syncMemoryVectors(ctx context.Context, items []model.MemoryItem
 	}
 }
 
+// syncVectorDeletes handles the vector cleanup required by decision-layer DELETE actions.
+//
+// Steps:
+// 1. Convert each memoryID to a Milvus documentID ("memory:{id}" format);
+// 2. Call Runtime.DeleteMemory to actually remove the vectors from Milvus;
+// 3. Update the MySQL vector_status to record the deletion result.
+func (r *Runner) syncVectorDeletes(ctx context.Context, memoryIDs []int64) {
+	if r == nil || len(memoryIDs) == 0 {
+		return
+	}
+
+	// 1. Build the documentID list.
+	documentIDs := make([]string, 0, len(memoryIDs))
+	for _, id := range memoryIDs {
+		documentIDs = append(documentIDs, fmt.Sprintf("memory:%d", id))
+	}
+
+	// 2. Delete the vectors through the Runtime.
+	if r.ragRuntime != nil {
+		if err := r.ragRuntime.DeleteMemory(ctx, documentIDs); err != nil {
+			r.logger.Printf("[WARN][dedup] Milvus vector delete failed, left pending for later cleanup: count=%d ids=%v err=%v", len(memoryIDs), memoryIDs, err)
+		} else {
+			r.logger.Printf("[dedup] Milvus vector delete done: count=%d ids=%v", len(memoryIDs), memoryIDs)
+		}
+	}
+
+	// 3. Update the MySQL vector_status.
+	for _, memoryID := range memoryIDs {
+		if updateErr := r.itemRepo.UpdateVectorStateByID(ctx, memoryID, "deleted", nil); updateErr != nil {
+			if r.logger != nil {
+				r.logger.Printf("[WARN] vector status update failed: memory_id=%d err=%v", memoryID, updateErr)
+			}
+		}
+	}
+}
+
 func resolveMemoryTTLAt(base time.Time, memoryType string) *time.Time {
 	switch memoryType {
 	case memorymodel.MemoryTypeTodoHint: