Version: 0.9.14.dev.260410
Backend:
1. The LLM client is promoted from newAgent/llm to the infra/llm infrastructure layer
- Removed backend/newAgent/llm/ (ark.go / ark_adapter.go / client.go / json.go)
- Migrated unchanged to backend/infra/llm/; all newAgent nodes and services now reference infra/llm
- Removes newAgent's private dependency on the model client, paving the way for reuse by memory / websearch and other modules
2. RAG infrastructure reaches a runnable state (the four layers, factory / runtime / observer / service, are in place)
- New backend/infra/rag/factory.go / runtime.go / observe.go / observer.go / service.go: factory construction, runtime lifecycle, lightweight observability interfaces, and a retrieval-service facade
- Updated infra/rag/config/config.go: complete set of Milvus / Embed / Reranker config options with defaults
- Updated infra/rag/embed/eino_embedder.go: stronger Eino embedding adapter supporting BaseURL, APIKey env vars, timeout, and dimension parameters
- Updated infra/rag/store/milvus_store.go: full Milvus vector store (create collection / create index / Upsert / Search / Delete) supporting the COSINE / L2 / IP metrics
- Updated infra/rag/core/pipeline.go: adapts to the Runtime interface; the Pipeline is now injected by the factory instead of assembled by hand
- Updated infra/rag/corpus/memory_corpus.go / vector_store.go: wires in the Memory module data source and extends the Store interface
3. The Memory module is upgraded from the Day 1 skeleton to a fully runnable Day 2 state
- New memory/module.go: unified Module facade exposing EnqueueExtract / ReadService / ManageService / WithTx / StartWorker; the startup layer depends on this single entry point
- New memory/orchestrator/llm_write_orchestrator.go: LLM-driven memory-extraction orchestrator, replacing the original mock extraction
- New memory/service/read_service.go: read path with per-user opt-out filtering, lightweight reranking, and access-time refresh
- New memory/service/manage_service.go: management-plane capabilities (list / soft delete / toggle read-write switches); deletions also write audit logs
- New memory/service/common.go: shared service-layer utilities
- New memory/worker/loop.go: background polling loop RunPollingLoop that periodically claims pending tasks and advances them
- New memory/utils/audit.go / settings.go: pure functions for audit-log construction, user-settings filtering, and so on
- Updated memory/model/item.go / job.go / settings.go / config.go / status.go: fills in DTO fields and status constants
- Updated memory/repo/item_repo.go / job_repo.go / audit_repo.go / settings_repo.go: fills in CRUD and query capabilities
- Updated memory/worker/runner.go: the Runner now wires into Module and the LLM extractor; the task state machine is complete
- Updated memory/README.md: refreshed to describe the module's current state
4. newAgent gains Memory read injection and pre-wired tool-registry dependencies
- New service/agentsvc/agent_memory.go: defines the MemoryReader interface plus injectMemoryContext, which adds memory context uniformly before graph execution
- Updated service/agentsvc/agent.go: new memoryReader field and SetMemoryReader method
- Updated service/agentsvc/agent_newagent.go: calls injectMemoryContext to inject a pinned block; retrieval failures only degrade, they never block the main chat path
- Updated newAgent/tools/registry.go: new DefaultRegistryDeps (including RAGRuntime); the tool registry now supports dependency injection
5. Startup flow and event-handler wiring updates
- Updated cmd/start.go: initialize RAG Runtime → Memory Module → register event handlers → start the worker's background polling
- Updated service/events/memory_extract_requested.go: switched to the unified memory.Module.WithTx(tx) facade; the event handler no longer depends on internal repo/service packages
6. Cache plugin and config sync
- Updated middleware/cache_deleter.go: silently ignore the new models MemoryJob / MemoryItem / MemoryAuditLog / MemoryUserSetting to avoid log spam; removed redundant comments
- Updated config.example.yaml: added the rag / memory / websearch config sections with defaults
- Updated go.mod / go.sum: new eino-ext/openai / json-patch / go-openai dependencies
Frontend: none. Repo: none.
@@ -1,28 +1,76 @@
-# Memory module (Day 1 skeleton)
+# Memory module: current status

-## Goals for this round
+## Paths now wired end to end

-1. Wire up publishing and consumption of the `memory.extract.requested` event.
-2. After consumption, reliably write the task into `memory_jobs` (idempotent).
-3. Provide `worker.RunOnce()` to manually advance `pending -> processing -> success/failed`.
+1. When a user message lands in chat history, `memory.extract.requested` is published via the outbox.
+2. The event consumer only writes the request idempotently into `memory_jobs`; no heavy LLM work happens in the consume callback.
+3. Startup brings up the `memory worker`, which polls `memory_jobs` in the background.
+4. After claiming a task, the worker calls the memory-extraction orchestrator driven by `backend/infra/llm`.
+5. Extraction results are normalized and written into `memory_items`, with matching entries in `memory_audit_logs`.
+6. Once everything is persisted, the task advances to `success`; failures go through the retryable state machine.

-## Out of scope this round (deliberately)
+## Current directory responsibilities

-1. No real LLM extraction or conflict resolution.
-2. No Milvus vector recall.
-3. No read-injection path (deferred to Day 2).
+- `module.go`: the unified external facade; assembles repo / service / worker / orchestrator.
+- `model/`: memory-module DTOs, status constants, config objects.
+- `repo/`: access layer for `memory_jobs / memory_items / memory_audit_logs / memory_user_settings`.
+- `service/`: task enqueueing, read reranking, management, config loading.
+- `orchestrator/`: memory-extraction orchestration.
+  - `write_orchestrator.go` is a pure local fallback.
+  - `llm_write_orchestrator.go` is the LLM extractor currently in use.
+- `worker/`: task runner and background polling loop.
+- `utils/`: pure helpers for JSON extraction, candidate-fact normalization, settings filtering, audit construction.

-## Directory guide
+## Internal capabilities now in place

-- `model/`: memory-domain DTOs, state machine, config objects.
-- `repo/`: access to the `memory_*` tables.
-- `service/`: enqueue facade and config loading.
-- `orchestrator/`: write-path orchestration (mock extraction on Day 1).
-- `worker/`: task runner (supports manual single runs).
-- `utils/`: helpers such as `ExtractJSON` and `NormalizeFacts`.
+1. `Module`
+   - Assembles repo / service / worker / orchestrator into a single facade.
+   - External code should now depend on `memory.Module` instead of hand-wiring internal components.
+   - Supports `WithTx(tx)` for easy integration with the existing unified transaction manager.
+2. `EnqueueService`
+   - Turns `memory.extract.requested` events into `memory_jobs`; no heavy LLM work.
+3. `Runner + RunPollingLoop`
+   - Polls tasks in the background, invokes the extractor, writes `memory_items`, and appends `memory_audit_logs`.
+4. `ReadService`
+   - Handles per-user opt-out filtering, lightweight reranking, and access-time refresh inside memory.
+   - Not yet wired into the `newAgent` prompt-injection side; this is a deliberate cut-over point.
+5. `ManageService`
+   - Management-plane capabilities: list memories, soft-delete memories, read/update the user memory switch.
+   - Deletions also write an audit-log entry, so every change leaves an audit trail.

-## Suggested manual verification
+## Recommended integration

-1. After a chat round, check the outbox for `memory.extract.requested`.
-2. After consumption, check that `memory_jobs` gains a `pending` record.
-3. Manually call `worker.RunOnce()` and confirm the task reaches `success/failed`.
+1. Create once at startup:
+   - `memoryModule := memory.NewModule(db, llmClient, memory.LoadConfigFromViper())`
+2. Start the background worker:
+   - `memoryModule.StartWorker(ctx)`
+3. Enqueue a memory task inside a transaction:
+   - `memoryModule.WithTx(tx).EnqueueExtract(ctx, payload, eventID)`
+4. Later agent reads:
+   - call `memoryModule.Retrieve(...)` directly
+
+## Current implementation boundary
+
+1. The async write path is done, and memory's internal read and management capabilities are in place, but "read recall + prompt injection" is not wired yet.
+2. Facts land in MySQL, but Milvus vector sync is not connected yet.
+3. LLM extraction and basic audit logging exist, but `ADD/UPDATE/DELETE/NONE` decision-based conflict resolution does not.
+4. The focus so far is closing memory's own loop; agent injection, vector retrieval, and conflict updates come next.
+
+## Recommended verification
+
+1. Send a user message and confirm `memory.extract.requested` appears in the outbox.
+2. After the event is consumed, confirm `memory_jobs` shows a `pending` record (or `processing` once the worker claims it).
+3. After the background worker runs, confirm:
+   - `memory_jobs.status = success`
+   - a new memory in `memory_items`
+   - a matching `create` entry in `memory_audit_logs`
+4. Call `ManageService` directly:
+   - `ListItems` lists active/archived memories
+   - `DeleteItem` sets the status to `deleted` and adds a `delete` audit entry
+   - `GetUserSetting / UpsertUserSetting` return and update the user memory switch
+
+## Next steps
+
+1. Wire `ReadService` into `newAgent`, injecting the three high-value memory kinds first: preferences, constraints, recent todo_hint.
+2. Add vector recall and rerank so factual memories relevant to the current topic join the candidate set.
+3. Add `ADD/UPDATE/DELETE/NONE` decisions to handle deduplication of synonymous memories and updates to stale ones.
853
backend/memory/log.txt
Normal file
@@ -0,0 +1,853 @@
GOROOT=C:\Program Files\Go #gosetup
GOPATH=C:\Users\Dev\go #gosetup
"C:\Program Files\Go\bin\go.exe" build -o C:\Users\Dev\AppData\Local\JetBrains\GoLand2025.3\tmp\GoLand\___6go_build_main_go.exe D:\SmartFlow-Agent\backend\main.go #gosetup
C:\Users\Dev\AppData\Local\JetBrains\GoLand2025.3\tmp\GoLand\___6go_build_main_go.exe #gosetup
2026/04/10 22:43:49 Config loaded successfully
2026/04/10 22:43:57 Database connected successfully
2026/04/10 22:43:57 Database auto migration completed
2026/04/10 22:43:57 RAG runtime is disabled
2026/04/10 22:43:57 outbox engine starting: topic=smartflow.agent.outbox brokers=[localhost:9092] retry_scan=1s batch=100
2026/04/10 22:43:57 Kafka topic is ready: smartflow.agent.outbox
2026/04/10 22:43:57 Outbox event bus started
2026/04/10 22:43:57 Memory worker started
2026/04/10 22:43:57 Routes setup completed
2026/04/10 22:43:57 Server starting on port 8080...
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env: export GIN_MODE=release
 - using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /api/v1/health --> github.com/LoveLosita/smartflow/backend/routers.RegisterRouters.func1 (3 handlers)
[GIN-debug] POST   /api/v1/user/register --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).UserRegister-fm (3 handlers)
[GIN-debug] POST   /api/v1/user/login --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).UserLogin-fm (3 handlers)
[GIN-debug] POST   /api/v1/user/refresh-token --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).RefreshTokenHandler-fm (3 handlers)
[GIN-debug] POST   /api/v1/user/logout --> github.com/LoveLosita/smartflow/backend/api.(*UserHandler).UserLogout-fm (5 handlers)
[GIN-debug] POST   /api/v1/task/create --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).AddTask-fm (6 handlers)
[GIN-debug] PUT    /api/v1/task/complete --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).CompleteTask-fm (6 handlers)
[GIN-debug] PUT    /api/v1/task/undo-complete --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).UndoCompleteTask-fm (6 handlers)
[GIN-debug] GET    /api/v1/task/get --> github.com/LoveLosita/smartflow/backend/api.(*TaskHandler).GetUserTasks-fm (5 handlers)
[GIN-debug] POST   /api/v1/course/validate --> github.com/LoveLosita/smartflow/backend/api.(*CourseHandler).CheckUserCourse-fm (5 handlers)
[GIN-debug] POST   /api/v1/course/import --> github.com/LoveLosita/smartflow/backend/api.(*CourseHandler).AddUserCourses-fm (6 handlers)
[GIN-debug] POST   /api/v1/task-class/add --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserAddTaskClass-fm (6 handlers)
[GIN-debug] GET    /api/v1/task-class/list --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserGetTaskClassInfos-fm (5 handlers)
[GIN-debug] GET    /api/v1/task-class/get --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserGetCompleteTaskClass-fm (5 handlers)
[GIN-debug] PUT    /api/v1/task-class/update --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserUpdateTaskClass-fm (6 handlers)
[GIN-debug] POST   /api/v1/task-class/insert-into-schedule --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserAddTaskClassItemIntoSchedule-fm (6 handlers)
[GIN-debug] DELETE /api/v1/task-class/delete-item --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).DeleteTaskClassItem-fm (6 handlers)
[GIN-debug] DELETE /api/v1/task-class/delete-class --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).DeleteTaskClass-fm (6 handlers)
[GIN-debug] PUT    /api/v1/task-class/apply-batch-into-schedule --> github.com/LoveLosita/smartflow/backend/api.(*TaskClassHandler).UserInsertBatchTaskClassItemsIntoSchedule-fm (6 handlers)
[GIN-debug] GET    /api/v1/schedule/today --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserTodaySchedule-fm (5 handlers)
[GIN-debug] GET    /api/v1/schedule/week --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserWeeklySchedule-fm (5 handlers)
[GIN-debug] DELETE /api/v1/schedule/delete --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).DeleteScheduleEvent-fm (6 handlers)
[GIN-debug] GET    /api/v1/schedule/recent-completed --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserRecentCompletedSchedules-fm (5 handlers)
[GIN-debug] GET    /api/v1/schedule/current --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).GetUserOngoingSchedule-fm (5 handlers)
[GIN-debug] DELETE /api/v1/schedule/undo-task-item --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).UserRevocateTaskItemFromSchedule-fm (6 handlers)
[GIN-debug] GET    /api/v1/schedule/smart-planning --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).SmartPlanning-fm (5 handlers)
[GIN-debug] POST   /api/v1/schedule/smart-planning-multi --> github.com/LoveLosita/smartflow/backend/api.(*ScheduleAPI).SmartPlanningMulti-fm (5 handlers)
[GIN-debug] POST   /api/v1/agent/chat --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).ChatAgent-fm (6 handlers)
[GIN-debug] GET    /api/v1/agent/conversation-meta --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetConversationMeta-fm (5 handlers)
[GIN-debug] GET    /api/v1/agent/conversation-list --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetConversationList-fm (5 handlers)
[GIN-debug] GET    /api/v1/agent/conversation-history --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetConversationHistory-fm (5 handlers)
[GIN-debug] GET    /api/v1/agent/schedule-preview --> github.com/LoveLosita/smartflow/backend/api.(*AgentHandler).GetSchedulePlanPreview-fm (5 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://github.com/gin-gonic/gin/blob/master/docs/doc.md#dont-trust-all-proxies for details.
[GIN-debug] Listening and serving HTTP on :8080

2026/04/10 22:43:57 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[3.151ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:43:57.526')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:45:59 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.098ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:45:59.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:01 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[0.993ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:01.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:03 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.203ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:03.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:05 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.514ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:05.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:07 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.033ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:07.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:09 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.586ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:09.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:11 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
|
||||
[1.123ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:11.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
|
||||
|
||||
2026/04/10 22:46:11 D:/SmartFlow-Agent/backend/dao/agent.go:306 record not found
[44.927ms] [rows:0] SELECT * FROM `agent_chats` WHERE user_id = 1 AND chat_id = '325b37d1-3483-4c6f-b755-44532a4dbe3c' ORDER BY `agent_chats`.`id` LIMIT 1

2026/04/10 22:46:11 [DEBUG] loadOrCreateRuntimeState chatID=325b37d1-3483-4c6f-b755-44532a4dbe3c ok=false err=<nil> hasRuntime=false hasPending=false hasCtx=false hasSchedule=false hasOriginal=false
2026/04/10 22:46:11 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 325b37d1-3483-4c6f-b755-44532a4dbe3c

2026/04/10 22:46:12 D:/SmartFlow-Agent/backend/memory/repo/settings_repo.go:40 record not found
[48.854ms] [rows:0] SELECT * FROM `memory_user_settings` WHERE user_id = 1 ORDER BY `memory_user_settings`.`user_id` LIMIT 1

……(22:46:13、22:46:15 两轮 memory_jobs 空轮询,省略)

2026/04/10 22:46:15 [DEBUG] chat routing chat=325b37d1-3483-4c6f-b755-44532a4dbe3c route=execute needs_rough_build=true needs_refine_after_rough_build=true allow_reorder=false has_rough_build_done=false task_class_count=4 reason=批量排课需求,有任务类ID,且给出明确微调偏好(避开早八和晚10)
2026/04/10 22:46:16 [DEBUG] rough_build scope_task_classes=[2 3 4 5] placements=44 applied=44 day_mapping_miss=0 task_item_match_miss=0 pending_in_scope=0 total_tasks=105 window_days=42
2026/04/10 22:46:16 [DEBUG] execute LLM context begin chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1 message_count=4
----- message[0] -----
role: system
content:
你叫 SmartFlow,是专为重邮(CQUPT)学子打造的智能排程专家。
你的回复应当专业、干练,偶尔可以带一点程序员式的冷幽默。
重要约束:你无法直接写入数据库。除非系统明确告知“任务已落库成功”,否则禁止使用“已安排/已记录/已帮你记下”等完成态表述。

你是 SmartFlow NewAgent 的执行器,当前处于自由执行模式(无预定义 plan 步骤)。

阶段事实(强约束):
1. 若上下文给出“粗排已完成/rough_build_done”,表示目标任务类已经进入 suggested/existing,不是待排入状态。
2. 当前阶段目标是“微调”,不是“重新粗排”。
3. 若上下文明确“当前未收到明确微调偏好/本轮先收口”,应直接结束而不是继续优化循环。
4. 若用户提出了二次微调方向,本轮优先目标就是满足该方向。

你可以做什么:
1. 你可以基于用户给定的二次微调方向,对 suggested 做定向微调。
2. existing 属于已安排事实层,可用于冲突判断和参考,不作为 move/batch_move/spread_even 的目标。
3. 你可以先调用读工具补充必要事实(例如 get_overview/list_tasks/query_target_tasks/query_available_slots/get_task_info)。
4. 你可以在需要改动时提出 confirm(move/swap/unplace/batch_move/spread_even)。
5. 只有用户明确允许打乱顺序时,才可使用 min_context_switch。
6. 多任务处理默认使用队列链路:先 query_target_tasks(enqueue=true) 入队,再 queue_pop_head 逐项处理。

你不要做什么:
1. 不要假设任务还没排进去,然后改成逐个手动 place。
2. 不要伪造工具结果。
3. 不要重复做同类查询而没有新增结论;连续两轮同类读查询后,必须转入执行、ask_user,或明确阻塞原因。
4. list_tasks 的 status 只允许单值:all / existing / suggested / pending。禁止使用 "existing,suggested" 这类拼接值。
5. 若工具结果与已知事实明显冲突(如无写操作却从“有任务”变成“0任务”),先自我纠错并重查一次,不要直接 ask_user。
6. 不要连续两轮调用“同一读工具 + 等价 arguments”;若上一轮已成功返回,下一轮必须换工具或进入 confirm。
7. list_tasks.category 只接受任务类名称,不接受 task_class_ids(如 "1,2,3")。
8. 若已明确“本轮先收口”,不要继续调用 list_tasks/query_available_slots/move 做无目标微调。
9. 若用户明确了微调方向,不要只做“局部看起来更空”的随机调整;每次改动都要能对应到该方向。
10. 若顺序策略为“保持顺序”,禁止调用 min_context_switch。
11. 不要在同一轮构造大规模 batch_move;batch_move 最多 2 条,超过请走队列逐项处理。
12. 未调用 queue_pop_head 获取 current 前,不要调用 queue_apply_head_move。
13. 工具参数必须严格使用 schema 字段,禁止自造别名;例如 day_from/day_to 非法,必须改用 day_start/day_end。

执行规则:
1. 只输出严格 JSON,不要输出 markdown,不要在 JSON 外补充文本。
2. 读操作:action=continue + tool_call。
3. 写操作:action=confirm + tool_call。
4. 缺关键上下文且无法通过工具补齐:action=ask_user。
5. 任务完成:action=done,并在 goal_check 总结完成证据。
6. 流程应正式终止:action=abort。

补充 JSON 约束:
1. 只输出当前 action 真正需要的字段;无关字段直接省略,不要用 ""、{}、[]、null 占位。
2. 若输出 tool_call,参数字段名只能是 arguments,禁止写成 parameters。
3. tool_call 只能是单个对象:{"name":"工具名","arguments":{...}},不能输出数组。
4. 只有 action=abort 时才允许输出 abort 字段;非 abort 动作不要输出 abort。
5. action=continue / ask_user / confirm 时,speak 必须是非空自然语言。

可用工具(简表):
1. batch_move:原子性批量移动多个任务(仅 suggested,最多2条),全部成功才生效。若含 existing/pending 或任一冲突将整批失败回滚。
参数:moves(必填,array)
返回类型:string(自然语言文本)
返回示例:批量移动完成,2个任务全部成功。(单次最多2条)
2. get_overview:获取规划窗口总览(任务视角,全量返回):保留课程占位统计,展开任务清单(过滤课程明细)。
参数:{}
返回类型:string(自然语言文本)
返回示例:规划窗口共27天...课程占位条目34个...任务清单(全量,已过滤课程)...
3. get_task_info:查询单个任务详细信息,包括类别、状态、占用时段、嵌入关系。
参数:task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:[35]第一章随机事件与概率 | 状态:已预排(suggested) | 占用时段:第3天第5-6节
4. list_tasks:列出任务清单,可按类别和状态过滤。category 传任务类名称,status 仅支持单值 all/existing/suggested/pending。
参数:category(可选,string);status(可选,string:all/existing/suggested/pending)
返回类型:string(自然语言文本)
返回示例:已预排任务共24个: [35]第一章随机事件与概率 — 已预排至 第3天第5-6节...
5. min_context_switch:在指定任务集合内重排 suggested 任务,尽量让同类任务连续以减少上下文切换。仅在用户明确允许打乱顺序时使用。task_ids 必填(兼容 task_id)。
参数:task_id(可选,int);task_ids(必填,array)
返回类型:string(自然语言文本)
返回示例:最少上下文切换重排完成:共处理 6 个任务,上下文切换次数 5 -> 2。
6. move:将一个已预排任务(仅 suggested)移动到新位置。existing 属于已安排事实层,不参与 move。task_id/new_day/new_slot_start 必填。
参数:new_day(必填,int);new_slot_start(必填,int);task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:已将 [35]... 从第3天第5-6节移至第5天第3-4节。
7. place:将一个待安排任务预排到指定位置。自动检测可嵌入宿主。task_id/day/slot_start 必填。
参数:day(必填,int);slot_start(必填,int);task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:已将 [35]... 预排到第5天第3-4节。
8. query_available_slots:查询候选空位池(先返回纯空位,不足再补可嵌入位),适合 move 前的落点筛选。
参数:after_section(可选,int);allow_embed(可选,bool);before_section(可选,int);day(可选,int);day_end(可选,int);day_of_week(可选,array);day_scope(可选,string:all/workday/weekend);day_start(可选,int);duration(可选,int);exclude_sections(可选,array);limit(可选,int);section_from(可选,int);section_to(可选,int);slot_type(可选,string);slot_types(可选,array);span(可选,int);week(可选,int);week_filter(可选,array);week_from(可选,int);week_to(可选,int)
返回类型:string(JSON字符串)
返回示例:{"tool":"query_available_slots","count":12,"strict_count":8,"embedded_count":4,"slots":[{"day":5,"week":12,"day_of_week":3,"slot_start":1,"slot_end":2,"slot_type":"empty"}]}
9. query_range:查看某天或某时段的细粒度占用详情。day 必填,slot_start/slot_end 选填(不填查整天)。
参数:day(必填,int);slot_end(可选,int);slot_start(可选,int)
返回类型:string(自然语言文本)
返回示例:第5天第3-6节:第3节空、第4节空...
10. query_target_tasks:查询候选任务集合,可按 status/week/day/task_id/category 筛选;默认自动入队,供后续 queue_pop_head 逐项处理。
参数:category(可选,string);day(可选,int);day_end(可选,int);day_of_week(可选,array);day_scope(可选,string:all/workday/weekend);day_start(可选,int);enqueue(可选,bool);limit(可选,int);reset_queue(可选,bool);status(可选,string:all/existing/suggested/pending);task_id(可选,int);task_ids(可选,array);task_item_id(可选,int);task_item_ids(可选,array);week(可选,int);week_filter(可选,array);week_from(可选,int);week_to(可选,int)
返回类型:string(JSON字符串)
返回示例:{"tool":"query_target_tasks","count":6,"status":"suggested","enqueue":true,"enqueued":6,"queue":{"pending_count":6},"items":[{"task_id":35,"name":"示例任务","status":"suggested","slots":[{"day":3,"week":12,"day_of_week":1,"slot_start":5,"slot_end":6}]}]}
11. queue_apply_head_move:将当前队首任务移动到指定位置并自动出队。仅作用于 current,不接受 task_id。new_day/new_slot_start 必填。
参数:new_day(必填,int);new_slot_start(必填,int)
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_apply_head_move","success":true,"task_id":35,"pending_count":4,"completed_count":2,"result":"已将 [35]... 从第3天第5-6节移至第5天第3-4节。"}
12. queue_pop_head:弹出并返回当前队首任务;若已有 current 则复用,保证一次只处理一个任务。
参数:{}
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_pop_head","has_head":true,"pending_count":5,"current":{"task_id":35,"name":"示例任务","status":"suggested","slots":[{"day":3,"week":12,"day_of_week":1,"slot_start":5,"slot_end":6}]}}
13. queue_skip_head:跳过当前队首任务(不改日程),将其标记为 skipped 并继续后续队列。
参数:reason(可选,string)
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_skip_head","success":true,"skipped_task_id":35,"pending_count":4,"skipped_count":1}
14. queue_status:查看当前待处理队列状态(pending/current/completed/skipped)。
参数:{}
返回类型:string(JSON字符串)
返回示例:{"tool":"queue_status","pending_count":5,"completed_count":1,"skipped_count":0,"current_task_id":35,"current_attempt":1}
15. spread_even:在给定任务集合内做均匀化铺开:先按筛选条件收集候选坑位,再规划并原子落地。task_ids 必填(兼容 task_id)。
参数:after_section(可选,int);allow_embed(可选,bool);before_section(可选,int);day(可选,int);day_end(可选,int);day_of_week(可选,array);day_scope(可选,string:all/workday/weekend);day_start(可选,int);exclude_sections(可选,array);limit(可选,int);slot_type(可选,string);slot_types(可选,array);task_id(可选,int);task_ids(必填,array);week(可选,int);week_filter(可选,array);week_from(可选,int);week_to(可选,int)
返回类型:string(自然语言文本)
返回示例:均匀化调整完成:共处理 6 个任务,候选坑位 24 个。
16. swap:交换两个已落位任务的位置。两个任务必须时长相同。task_a/task_b 必填。
参数:task_a(必填,int);task_b(必填,int)
返回类型:string(自然语言文本)
返回示例:交换完成:[35]... ↔ [36]...
17. unplace:将一个已落位任务移除,恢复为待安排状态。会自动清理嵌入关系。task_id 必填。
参数:task_id(必填,int)
返回类型:string(自然语言文本)
返回示例:已将 [35]... 移除,恢复为待安排状态。
----- message[1] -----
role: assistant
content:
历史上下文(仅供参考):
- 用户目标:帮我排一下这些任务类,直接排,不要早八和晚10
- 阶段锚点:粗排已完成,本轮仅做微调,不重新 place。
- 历史归档 ReAct 摘要:暂无。
- 历史归档 ReAct 窗口:暂无。
- 当前循环早期摘要:暂无。

----- message[2] -----
role: assistant
content:
当轮 ReAct Loop 记录(窗口):
- 已清空(新一轮 loop 准备中)。

----- message[3] -----
role: system
content:
当前执行状态:
- 当前轮次:1/60
- 当前模式:自由执行(无预定义步骤)
执行锚点:
- 当前用户诉求:帮我排一下这些任务类,直接排,不要早八和晚10
- 目标任务类:task_class_ids=[2,3,4,5]
- 啥时候结束Loop:你可以根据工具调用记录自行判断。
- 非目标:不重新粗排、不修改无关任务类。
- 阶段约束:粗排已完成,本轮只微调 suggested;existing 仅作已安排事实参考,不作为可移动目标。
- 参数纪律:工具参数必须严格使用 schema 字段;若返回“参数非法”,需先改参再继续。
- 顺序策略:默认保持 suggested 相对顺序,禁止调用 min_context_switch。
本轮指令:请继续当前任务的执行阶段,严格输出 JSON。

[DEBUG] execute LLM context end chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1
2026/04/10 22:46:17 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.502ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:17.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
……(22:46:19 ~ 22:46:29 又 6 轮同样的空轮询,省略)
2026/04/10 22:46:30 [DEBUG] execute LLM 响应 chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1 action=continue speak_len=103 raw_len=186 raw_preview={"action":"continue","tool_call":{"name":"get_overview","arguments":{}},"speak":"我先看看当前粗排完成后的总览,找出需要调整的早八/晚十时段的目标任务。"}
2026/04/10 22:46:30 [DEBUG] execute tool chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=1 tool=get_overview args={} before=tasks=105 pending=0 suggested=44 existing=61 task_item_with_slot=44 event_with_slot=52 after=tasks=105 pending=0 suggested=44 existing=61 task_item_with_slot=44 event_with_slot=52 result_preview=规划窗口共42天,每天12个时段,总计504个时段。 当前已占用197个,空闲307个。课程占位条目61个(仅用于占位统计);任务条目:已安排(existing)0个、已预排(suggested)44个、待安排(pending)0个。 每日概况: 第1天(星期3):总占3/12(课程占3/12,任务占0/12) — 任务:无 第2天(星期4):总占6/12(课程占6/12,任务占0/12) —
2026/04/10 22:46:30 [DEBUG] execute LLM context begin chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=2 message_count=4
----- message[0] -----
role: system
content:
(与 round=1 的 system 提示词逐字相同,此处省略)
----- message[1] -----
role: assistant
content:
历史上下文(仅供参考):与 round=1 基本一致,仅最后一条变为:
- 当前循环早期摘要:无(当前窗口已覆盖全部 ReAct 记录)。
----- message[2] -----
|
||||
role: assistant
|
||||
content:
|
||||
当轮 ReAct Loop 记录(窗口):
|
||||
1) thought/reason:我先看看当前粗排完成后的总览,找出需要调整的早八/晚十时段的目标任务。
|
||||
tool_call:get_overview({})
|
||||
observation:规划窗口共42天,每天12个时段,总计504个时段。
|
||||
当前已占用197个,空闲307个。课程占位条目61个(仅用于占位统计);任务条目:已安排(existing)0个、已预排(suggested)44个、待安排(pending)0个。
|
||||
|
||||
每日概况:
|
||||
第1天(星期3):总占3/12(课程占3/12,任务占0/12) — 任务:无
|
||||
第2天(星期4):总占6/12(课程占6/12,任务占0/12) — 任务:无
|
||||
第3天(星期5):总占6/12(课程占4/12,任务占2/12) — 任务:[94]线性表(顺序表/链表)与复杂度分析(suggested,9-10节)
|
||||
第4天(星期6):总占4/12(课程占0/12,任务占4/12) — 任务:[70]数制与码制、逻辑代数基础(suggested,3-4节) [82]命题逻辑与等值演算(suggested,5-6节)
|
||||
第5天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第6天(星期1):总占6/12(课程占2/12,任务占4/12) — 任务:[71]组合逻辑电路分析方法(suggested,7-8节) [95]栈与队列及典型应用(suggested,9-10节)
|
||||
第7天(星期2):总占12/12(课程占10/12,任务占2/12) — 任务:[83]谓词逻辑与量词推理(suggested,3-4节)
|
||||
第8天(星期3):总占5/12(课程占5/12,任务占0/12) — 任务:无
|
||||
第9天(星期4):总占8/12(课程占6/12,任务占2/12) — 任务:[72]组合逻辑电路设计方法(含卡诺图)(suggested,9-10节)
|
||||
第10天(星期5):总占6/12(课程占2/12,任务占4/12) — 任务:[96]串与模式匹配(KMP)(suggested,7-8节) [84]集合与关系基本性质(suggested,9-10节)
|
||||
第11天(星期6):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第12天(星期7):总占2/12(课程占0/12,任务占2/12) — 任务:[73]译码器、编码器、多路选择器综合应用(suggested,7-8节)
|
||||
第13天(星期1):总占6/12(课程占2/12,任务占4/12) — 任务:[97]数组与广义表、稀疏矩阵(suggested,5-6节) [85]关系闭包与等价关系/偏序关系(suggested,7-8节)
|
||||
第14天(星期2):总占10/12(课程占10/12,任务占0/12) — 任务:无
|
||||
第15天(星期3):总占7/12(课程占3/12,任务占4/12) — 任务:[74]触发器工作原理与时序特性(suggested,3-4节) [62]第一章 随机事件与概率(suggested,5-6节)
|
||||
第16天(星期4):总占6/12(课程占4/12,任务占2/12) — 任务:[98]树与二叉树遍历、线索化(suggested,9-10节)
|
||||
第17天(星期5):总占6/12(课程占4/12,任务占2/12) — 任务:[86]函数与映射(单射满射双射)(suggested,5-6节)
|
||||
第18天(星期6):总占4/12(课程占0/12,任务占4/12) — 任务:[63]第二章 条件概率与全概率公式(suggested,7-8节) [75]计数器设计与分析(suggested,9-10节)
|
||||
第19天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第20天(星期1):总占6/12(课程占2/12,任务占4/12) — 任务:[87]代数系统与群环域入门(suggested,3-4节) [99]二叉排序树、AVL、红黑树概念(suggested,5-6节)
|
||||
第21天(星期2):总占14/12(课程占10/12,任务占4/12) — 任务:[64]第三章 随机变量及其分布(suggested,3-4节) [76]寄存器与移位寄存器(suggested,7-8节)
|
||||
第22天(星期3):总占5/12(课程占5/12,任务占0/12) — 任务:无
|
||||
第23天(星期4):总占6/12(课程占4/12,任务占2/12) — 任务:[88]图的基本概念与图的表示(suggested,9-10节)
|
||||
第24天(星期5):总占6/12(课程占2/12,任务占4/12) — 任务:[100]堆与优先队列(suggested,5-6节) [65]第四章 多维随机变量(suggested,7-8节)
|
||||
第25天(星期6):总占2/12(课程占0/12,任务占2/12) — 任务:[77]时序逻辑电路设计(同步/异步)(suggested,5-6节)
|
||||
第26天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第27天(星期1):总占8/12(课程占2/12,任务占6/12) — 任务:[66]第五章 数字特征与大数定律(suggested,3-4节) [89]欧拉图、哈密顿图、最短路(suggested,5-6节) [101]图的存储与遍历(DFS/BFS)(suggested,7-8节)
|
||||
第28天(星期2):总占12/12(课程占10/12,任务占2/12) — 任务:[78]状态机建模与化简(suggested,3-4节)
|
||||
第29天(星期3):总占5/12(课程占3/12,任务占2/12) — 任务:[67]第六章 中心极限定理与参数估计(suggested,7-8节)
|
||||
第30天(星期4):总占4/12(课程占2/12,任务占2/12) — 任务:[90]树与生成树、最小生成树(suggested,9-10节)
|
||||
第31天(星期5):总占8/12(课程占4/12,任务占4/12) — 任务:[102]最短路径与拓扑排序(suggested,5-6节) [79]A/D 与 D/A 基础电路(suggested,7-8节)
|
||||
第32天(星期6):总占2/12(课程占0/12,任务占2/12) — 任务:[68]综合刷题与错题回顾(suggested,9-10节)
|
||||
第33天(星期7):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第34天(星期1):总占6/12(课程占0/12,任务占6/12) — 任务:[91]组合计数(加法乘法原理)(suggested,3-4节) [80]历年真题专项(组合逻辑)(suggested,5-6节) [103]查找(顺序/折半/散列)(suggested,7-8节)
|
||||
第35天(星期2):总占4/12(课程占2/12,任务占2/12) — 任务:[69]模拟考试与查漏补缺(suggested,5-6节)
|
||||
第36天(星期3):总占2/12(课程占2/12,任务占0/12) — 任务:无
|
||||
第37天(星期4):总占6/12(课程占0/12,任务占6/12) — 任务:[81]历年真题专项(时序逻辑)(suggested,5-6节) [92]递推关系与母函数基础(suggested,7-8节) [104]排序(插入/交换/选择/归并/快排)(suggested,9-10节)
|
||||
第38天(星期5):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第39天(星期6):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第40天(星期7):总占4/12(课程占0/12,任务占4/12) — 任务:[93]离散综合题与证明题训练(suggested,3-4节) [105]综合算法题实战与代码模板整理(suggested,5-6节)
|
||||
第41天(星期1):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
第42天(星期2):总占0/12(课程占0/12,任务占0/12) — 任务:无
|
||||
|
||||
任务清单(全量,已过滤课程):
|
||||
[62]第一章 随机事件与概率 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第15天(星期3)第5-6节
|
||||
[63]第二章 条件概率与全概率公式 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第18天(星期6)第7-8节
|
||||
[64]第三章 随机变量及其分布 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第21天(星期2)第3-4节
|
||||
[65]第四章 多维随机变量 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第24天(星期5)第7-8节
|
||||
[66]第五章 数字特征与大数定律 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第27天(星期1)第3-4节
|
||||
[67]第六章 中心极限定理与参数估计 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第29天(星期3)第7-8节
|
||||
[68]综合刷题与错题回顾 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第32天(星期6)第9-10节
|
||||
[69]模拟考试与查漏补缺 | 状态:suggested | 类别:复习概率论 | task_class_id:2 | 时段:第35天(星期2)第5-6节
|
||||
[70]数制与码制、逻辑代数基础 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第4天(星期6)第3-4节
|
||||
[71]组合逻辑电路分析方法 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第6天(星期1)第7-8节
|
||||
[72]组合逻辑电路设计方法(含卡诺图) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第9天(星期4)第9-10节
|
||||
[73]译码器、编码器、多路选择器综合应用 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第12天(星期7)第7-8节
|
||||
[74]触发器工作原理与时序特性 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第15天(星期3)第3-4节
|
||||
[75]计数器设计与分析 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第18天(星期6)第9-10节
|
||||
[76]寄存器与移位寄存器 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第21天(星期2)第7-8节
|
||||
[77]时序逻辑电路设计(同步/异步) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第25天(星期6)第5-6节
|
||||
[78]状态机建模与化简 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第28天(星期2)第3-4节
|
||||
[79]A/D 与 D/A 基础电路 | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第31天(星期5)第7-8节
|
||||
[80]历年真题专项(组合逻辑) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第34天(星期1)第5-6节
|
||||
[81]历年真题专项(时序逻辑) | 状态:suggested | 类别:数电期末复习 | task_class_id:3 | 时段:第37天(星期4)第5-6节
[82]命题逻辑与等值演算 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第4天(星期6)第5-6节
[83]谓词逻辑与量词推理 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第7天(星期2)第3-4节
[84]集合与关系基本性质 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第10天(星期5)第9-10节
[85]关系闭包与等价关系/偏序关系 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第13天(星期1)第7-8节
[86]函数与映射(单射满射双射) | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第17天(星期5)第5-6节
[87]代数系统与群环域入门 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第20天(星期1)第3-4节
[88]图的基本概念与图的表示 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第23天(星期4)第9-10节
[89]欧拉图、哈密顿图、最短路 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第27天(星期1)第5-6节
[90]树与生成树、最小生成树 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第30天(星期4)第9-10节
[91]组合计数(加法乘法原理) | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第34天(星期1)第3-4节
[92]递推关系与母函数基础 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第37天(星期4)第7-8节
[93]离散综合题与证明题训练 | 状态:suggested | 类别:离散数学期末复习 | task_class_id:4 | 时段:第40天(星期7)第3-4节
[94]线性表(顺序表/链表)与复杂度分析 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第3天(星期5)第9-10节
[95]栈与队列及典型应用 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第6天(星期1)第9-10节
[96]串与模式匹配(KMP) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第10天(星期5)第7-8节
[97]数组与广义表、稀疏矩阵 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第13天(星期1)第5-6节
[98]树与二叉树遍历、线索化 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第16天(星期4)第9-10节
[99]二叉排序树、AVL、红黑树概念 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第20天(星期1)第5-6节
[100]堆与优先队列 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第24天(星期5)第5-6节
[101]图的存储与遍历(DFS/BFS) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第27天(星期1)第7-8节
[102]最短路径与拓扑排序 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第31天(星期5)第5-6节
[103]查找(顺序/折半/散列) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第34天(星期1)第7-8节
[104]排序(插入/交换/选择/归并/快排) | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第37天(星期4)第9-10节
[105]综合算法题实战与代码模板整理 | 状态:suggested | 类别:数据结构期末复习 | task_class_id:5 | 时段:第40天(星期7)第5-6节
任务类约束(排课时请遵守):
[复习概率论] 策略=均匀分布 总预算=16节 允许嵌水课=是 排除时段=[1,6]
[数电期末复习] 策略=均匀分布 总预算=30节 允许嵌水课=是 排除时段=[1,6]
[离散数学期末复习] 策略=均匀分布 总预算=26节 允许嵌水课=是 排除时段=[1,6]
[数据结构期末复习] 策略=均匀分布 总预算=30节 允许嵌水课=是 排除时段=[1,6]
----- message[3] -----
role: system
content:
当前执行状态:
- 当前轮次:2/60
- 当前模式:自由执行(无预定义步骤)
执行锚点:
- 当前用户诉求:帮我排一下这些任务类,直接排,不要早八和晚10
- 目标任务类:task_class_ids=[2,3,4,5]
- 啥时候结束Loop:你可以根据工具调用记录自行判断。
- 非目标:不重新粗排、不修改无关任务类。
- 阶段约束:粗排已完成,本轮只微调 suggested;existing 仅作已安排事实参考,不作为可移动目标。
- 参数纪律:工具参数必须严格使用 schema 字段;若返回“参数非法”,需先改参再继续。
- 顺序策略:默认保持 suggested 相对顺序,禁止调用 min_context_switch。
本轮指令:请继续当前任务的执行阶段,严格输出 JSON。
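The round-2 response logged further down (`action=done ... raw_preview={"action":...,"speak":...,"goal_check":...}`) shows the strict-JSON envelope that the "严格输出 JSON" instruction above demands. A minimal decoding sketch in Go; the field names come from the logged preview, but the struct and function names here are my own, not the project's:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// agentAction mirrors the JSON envelope seen in the logged LLM preview:
// {"action":"done","speak":"...","goal_check":"..."}.
type agentAction struct {
	Action    string `json:"action"`
	Speak     string `json:"speak"`
	GoalCheck string `json:"goal_check"`
}

// parseAgentAction decodes one LLM turn and rejects envelopes with no action.
func parseAgentAction(raw []byte) (agentAction, error) {
	var a agentAction
	if err := json.Unmarshal(raw, &a); err != nil {
		return agentAction{}, err
	}
	if a.Action == "" {
		return agentAction{}, fmt.Errorf("missing action field")
	}
	return a, nil
}

func main() {
	raw := []byte(`{"action":"done","speak":"检查完毕","goal_check":"无冲突"}`)
	a, err := parseAgentAction(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(a.Action)
}
```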
[DEBUG] execute LLM context end chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=2
2026/04/10 22:46:31 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.512ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:31.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:33 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.057ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:33.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:35 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.540ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:35.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:37 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.072ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:37.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:39 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.028ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:39.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:41 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.462ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:41.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:43 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.523ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:43.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:45 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.457ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:45.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:47 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.042ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:47.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:49 [DEBUG] execute LLM 响应 chat=325b37d1-3483-4c6f-b755-44532a4dbe3c round=2 action=done speak_len=119 raw_len=296 raw_preview={"action":"done","speak":"检查完毕,所有已预排任务均避开了早八(第1-2节)和晚十(第11-12节)时段,微调完成!","goal_check":"当前所有suggested状态的任务(共44个)均安排在第3-10节,无早八/晚十时段冲突,符合用户微调需求。"}
2026/04/10 22:46:49 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.096ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:49.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:51 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.559ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:51.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:53 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.047ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:53.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:54 [DEBUG] schedule preview write chat=325b37d1-3483-4c6f-b755-44532a4dbe3c user=1 state=tasks=105 pending=0 suggested=44 existing=61 task_item_with_slot=44 event_with_slot=52 preview=entries=96 existing=52 suggested=44 task_type=44 course_type=52 generated_at=2026-04-10T22:46:54+08:00
[GIN] 2026/04/10 - 22:46:55 | 200 | 43.3002757s | 127.0.0.1 | POST "/api/v1/agent/chat"
2026/04/10 22:46:55 outbox due messages=3, start dispatch
2026/04/10 22:46:55 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[0.984ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:55.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:56 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 325b37d1-3483-4c6f-b755-44532a4dbe3c
2026/04/10 22:46:57 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.039ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:57.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:57 [GORM-Cache] Invalidated conversation history cache for user 1 conversation 325b37d1-3483-4c6f-b755-44532a4dbe3c
2026/04/10 22:46:58 outbox due messages=1, start dispatch
2026/04/10 22:46:58 [GORM-Cache] No logic defined for model: model.AgentStateSnapshotRecord
2026/04/10 22:46:59 异步生成会话标题失败(模型生成失败) chat=325b37d1-3483-4c6f-b755-44532a4dbe3c err=failed to create chat completion: context deadline exceeded
2026/04/10 22:46:59 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.138ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:46:59.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:46:59 [GORM-Cache] No logic defined for model: model.MemoryJob
2026/04/10 22:47:01 [GORM-Cache] No logic defined for model: model.MemoryJob
2026/04/10 22:47:01 D:/SmartFlow-Agent/backend/memory/repo/settings_repo.go:40 record not found
[0.596ms] [rows:0] SELECT * FROM `memory_user_settings` WHERE user_id = 1 ORDER BY `memory_user_settings`.`user_id` LIMIT 1
2026/04/10 22:47:10 [GORM-Cache] No logic defined for model: model.MemoryItem
2026/04/10 22:47:10 [GORM-Cache] No logic defined for model: model.MemoryAuditLog
2026/04/10 22:47:10 [GORM-Cache] No logic defined for model: model.MemoryJob
2026/04/10 22:47:10 memory worker run once success: job_id=1 extracted_facts=1
2026/04/10 22:47:10 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[0.918ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:10.174')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:11 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[0.892ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:11.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:13 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.189ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:13.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:15 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.516ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:15.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:17 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.075ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:17.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:19 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.266ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:19.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:21 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[0.529ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:21.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:23 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.659ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:23.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:25 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.255ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:25.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:27 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.482ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:27.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:29 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.769ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:29.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:31 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.113ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:31.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:33 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.073ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:33.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:35 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.533ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:35.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:37 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.173ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:37.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:39 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.626ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:39.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:41 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[0.991ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:41.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:43 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.504ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:43.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:45 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.485ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:45.578')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:47 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.059ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:47.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:49 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.042ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:49.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:51 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.043ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:51.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:53 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.579ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:53.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:55 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[0.973ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:55.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
2026/04/10 22:47:57 D:/SmartFlow-Agent/backend/memory/repo/job_repo.go:96 record not found
[1.236ms] [rows:0] SELECT * FROM `memory_jobs` WHERE job_type = 'extract' AND status IN ('pending','failed') AND ((next_retry_at IS NULL OR next_retry_at <= '2026-04-10 22:47:57.579')) ORDER BY id ASC,`memory_jobs`.`id` LIMIT 1 FOR UPDATE
进程 已完成,退出代码为 -1073741510 (0xC000013A: interrupted by Ctrl+C)
@@ -8,7 +8,8 @@ import "time"
// 1. 只承载模块运行参数,不承载业务状态;
// 2. 允许启动期统一注入,避免业务层直接依赖配置中心。
type Config struct {
	Enabled    bool
	RAGEnabled bool

	ExtractPrompt  string
	DecisionPrompt string
@@ -25,3 +25,51 @@ type ItemDTO struct {
	CreatedAt *time.Time
	UpdatedAt *time.Time
}

// ItemQuery 描述 memory_items 的通用查询条件。
//
// 职责边界:
// 1. 只表达 memory 仓储层需要的过滤条件;
// 2. 不直接承载注入策略、重排策略等上层业务语义;
// 3. IncludeGlobal 用于“会话级 + 全局级”混合读取场景。
type ItemQuery struct {
	UserID         int
	ConversationID string
	AssistantID    string
	RunID          string
	Statuses       []string
	MemoryTypes    []string
	IncludeGlobal  bool
	OnlyUnexpired  bool
	Limit          int
	Now            time.Time
}

// RetrieveRequest 描述“供提示词注入前读取”所需的最小参数。
type RetrieveRequest struct {
	Query          string
	UserID         int
	ConversationID string
	AssistantID    string
	RunID          string
	MemoryTypes    []string
	Limit          int
	Now            time.Time
}

// ListItemsRequest 描述记忆管理页列表查询参数。
type ListItemsRequest struct {
	UserID         int
	ConversationID string
	Statuses       []string
	MemoryTypes    []string
	Limit          int
}

// DeleteItemRequest 描述软删除一条记忆时所需的最小参数。
type DeleteItemRequest struct {
	UserID       int
	MemoryID     int64
	Reason       string
	OperatorType string
}
@@ -22,11 +22,13 @@ type ExtractJobPayload struct {

// FactCandidate 表示抽取阶段得到的候选事实。
type FactCandidate struct {
	MemoryType       string
	Title            string
	Content          string
	Confidence       float64
	Importance       float64
	SensitivityLevel int
	IsExplicit       bool
}

// NormalizedFact 表示通过标准化后的可入库事实。
@@ -37,5 +39,7 @@ type NormalizedFact struct {
	NormalizedContent string
	ContentHash       string
	Confidence        float64
	Importance        float64
	SensitivityLevel  int
	IsExplicit        bool
}
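The widened FactCandidate / NormalizedFact shapes above imply a normalization step between them. A hedged sketch of what `memoryutils.NormalizeFacts` plausibly does (trimming, clamping scores into their documented ranges, hashing content for de-dup); the real implementation lives in memory/utils and may differ:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

type factCandidate struct {
	MemoryType       string
	Title            string
	Content          string
	Confidence       float64
	Importance       float64
	SensitivityLevel int
	IsExplicit       bool
}

type normalizedFact struct {
	MemoryType        string
	Title             string
	NormalizedContent string
	ContentHash       string
	Confidence        float64
	Importance        float64
	SensitivityLevel  int
	IsExplicit        bool
}

func clamp01(v float64) float64 {
	if v < 0 {
		return 0
	}
	if v > 1 {
		return 1
	}
	return v
}

// normalizeFacts trims content, drops empty candidates, clamps scores,
// and derives a stable content hash for de-duplication.
func normalizeFacts(in []factCandidate) []normalizedFact {
	out := make([]normalizedFact, 0, len(in))
	for _, c := range in {
		content := strings.TrimSpace(c.Content)
		if content == "" {
			continue
		}
		sum := sha256.Sum256([]byte(content))
		level := c.SensitivityLevel
		if level < 0 {
			level = 0
		}
		if level > 2 {
			level = 2
		}
		out = append(out, normalizedFact{
			MemoryType:        c.MemoryType,
			Title:             strings.TrimSpace(c.Title),
			NormalizedContent: content,
			ContentHash:       hex.EncodeToString(sum[:]),
			Confidence:        clamp01(c.Confidence),
			Importance:        clamp01(c.Importance),
			SensitivityLevel:  level,
			IsExplicit:        c.IsExplicit,
		})
	}
	return out
}

func main() {
	facts := normalizeFacts([]factCandidate{
		{MemoryType: "preference", Content: "  prefers evening slots  ", Confidence: 1.4},
		{MemoryType: "fact", Content: "   "},
	})
	fmt.Println(len(facts))
}
```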
@@ -10,3 +10,11 @@ type UserSettingDTO struct {
	SensitiveMemoryEnabled bool
	UpdatedAt              *time.Time
}

// UpdateUserSettingRequest 描述记忆开关写入请求。
type UpdateUserSettingRequest struct {
	UserID                 int
	MemoryEnabled          bool
	ImplicitMemoryEnabled  bool
	SensitiveMemoryEnabled bool
}
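The UserSettingDTO / UpdateUserSettingRequest pair above is what the read path filters against ("按用户开关过滤"). A hedged sketch of that gate with assumed semantics: master switch off hides everything, implicit memories need ImplicitMemoryEnabled, and any positive sensitivity level needs SensitiveMemoryEnabled; the actual rules live in memory/utils/settings.go and may differ:

```go
package main

import "fmt"

type userSetting struct {
	MemoryEnabled          bool
	ImplicitMemoryEnabled  bool
	SensitiveMemoryEnabled bool
}

type memoryItem struct {
	Title            string
	IsExplicit       bool
	SensitivityLevel int
}

// filterBySettings drops items the user's switches forbid injecting.
func filterBySettings(s userSetting, items []memoryItem) []memoryItem {
	if !s.MemoryEnabled {
		return nil // master switch off: nothing is injectable
	}
	kept := make([]memoryItem, 0, len(items))
	for _, it := range items {
		if !it.IsExplicit && !s.ImplicitMemoryEnabled {
			continue
		}
		if it.SensitivityLevel > 0 && !s.SensitiveMemoryEnabled {
			continue
		}
		kept = append(kept, it)
	}
	return kept
}

func main() {
	s := userSetting{MemoryEnabled: true}
	items := []memoryItem{
		{Title: "explicit", IsExplicit: true},
		{Title: "implicit"},
		{Title: "sensitive", IsExplicit: true, SensitivityLevel: 2},
	}
	fmt.Println(len(filterBySettings(s, items))) // only the explicit, non-sensitive item survives
}
```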
backend/memory/module.go (new file, 171 lines)
@@ -0,0 +1,171 @@
package memory

import (
	"context"
	"errors"
	"log"

	infrallm "github.com/LoveLosita/smartflow/backend/infra/llm"
	infrarag "github.com/LoveLosita/smartflow/backend/infra/rag"
	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
	memoryorchestrator "github.com/LoveLosita/smartflow/backend/memory/orchestrator"
	memoryrepo "github.com/LoveLosita/smartflow/backend/memory/repo"
	memoryservice "github.com/LoveLosita/smartflow/backend/memory/service"
	memoryworker "github.com/LoveLosita/smartflow/backend/memory/worker"
	"gorm.io/gorm"
)

// Module 是 memory 模块对外暴露的统一门面。
//
// 职责边界:
// 1. 负责把 repo、service、worker、orchestrator 组装成一个稳定入口;
// 2. 负责对外暴露“写入 / 读取 / 管理 / 启动 worker”这些高层意图;
// 3. 不负责替代应用层 DI,也不负责替代上层事务管理器,事务边界仍由调用方掌控。
type Module struct {
	db         *gorm.DB
	cfg        memorymodel.Config
	llmClient  *infrallm.Client
	ragRuntime infrarag.Runtime

	jobRepo      *memoryrepo.JobRepo
	itemRepo     *memoryrepo.ItemRepo
	auditRepo    *memoryrepo.AuditRepo
	settingsRepo *memoryrepo.SettingsRepo

	enqueueService *memoryservice.EnqueueService
	readService    *memoryservice.ReadService
	manageService  *memoryservice.ManageService
	runner         *memoryworker.Runner
}

// LoadConfigFromViper 复用 memory 子包里的配置加载逻辑,对外收口一个统一入口。
func LoadConfigFromViper() memorymodel.Config {
	return memoryservice.LoadConfigFromViper()
}

// NewModule 创建 memory 模块门面。
//
// 设计说明:
// 1. 这里做的是“轻组装”,不引入额外容器概念,方便先接进现有项目;
// 2. llmClient 允许为 nil,此时写入链路会自动回退到本地 fallback 抽取;
// 3. ragRuntime 允许为 nil,此时读取/向量同步自动回退旧逻辑;
// 4. 若后续接入统一 DI 容器,也应优先注册这个 Module,而不是把内部 repo/service 继续向外泄漏。
func NewModule(db *gorm.DB, llmClient *infrallm.Client, ragRuntime infrarag.Runtime, cfg memorymodel.Config) *Module {
	return wireModule(db, llmClient, ragRuntime, cfg)
}

// WithTx 返回绑定到指定事务连接的同构门面。
//
// 步骤化说明:
// 1. 上层事务管理器先创建 tx;
// 2. 再通过 WithTx(tx) 把 memory 内部所有 repo/service 一次性切到同一个事务连接;
// 3. 这样外部无需重新 new 一堆 repo,也不会破坏既有跨表事务边界。
func (m *Module) WithTx(tx *gorm.DB) *Module {
	if m == nil {
		return nil
	}
	if tx == nil {
		return m
	}
	return wireModule(tx, m.llmClient, m.ragRuntime, m.cfg)
}

// EnqueueExtract 把一次记忆抽取请求入队到 memory_jobs。
func (m *Module) EnqueueExtract(
	ctx context.Context,
	payload memorymodel.ExtractJobPayload,
	sourceEventID string,
) error {
	if m == nil || m.enqueueService == nil {
		return errors.New("memory module enqueue service is nil")
	}
	return m.enqueueService.EnqueueExtractJob(ctx, payload, sourceEventID)
}

// Retrieve 读取后续可供 prompt 注入使用的候选记忆。
func (m *Module) Retrieve(ctx context.Context, req memorymodel.RetrieveRequest) ([]memorymodel.ItemDTO, error) {
	if m == nil || m.readService == nil {
		return nil, errors.New("memory module read service is nil")
	}
	return m.readService.Retrieve(ctx, req)
}

// ListItems 列出用户当前可管理的记忆条目。
func (m *Module) ListItems(ctx context.Context, req memorymodel.ListItemsRequest) ([]memorymodel.ItemDTO, error) {
	if m == nil || m.manageService == nil {
		return nil, errors.New("memory module manage service is nil")
	}
	return m.manageService.ListItems(ctx, req)
}

// DeleteItem 软删除一条记忆,并补写审计日志。
func (m *Module) DeleteItem(ctx context.Context, req memorymodel.DeleteItemRequest) (*memorymodel.ItemDTO, error) {
	if m == nil || m.manageService == nil {
		return nil, errors.New("memory module manage service is nil")
	}
	return m.manageService.DeleteItem(ctx, req)
}

// GetUserSetting 读取用户当前生效的记忆开关。
func (m *Module) GetUserSetting(ctx context.Context, userID int) (memorymodel.UserSettingDTO, error) {
	if m == nil || m.manageService == nil {
		return memorymodel.UserSettingDTO{}, errors.New("memory module manage service is nil")
	}
	return m.manageService.GetUserSetting(ctx, userID)
}

// UpsertUserSetting 写入用户记忆开关。
func (m *Module) UpsertUserSetting(ctx context.Context, req memorymodel.UpdateUserSettingRequest) (memorymodel.UserSettingDTO, error) {
	if m == nil || m.manageService == nil {
		return memorymodel.UserSettingDTO{}, errors.New("memory module manage service is nil")
	}
	return m.manageService.UpsertUserSetting(ctx, req)
}

// StartWorker 启动 memory 后台 worker。
//
// 说明:
// 1. 这里只负责按当前配置拉起轮询循环;
// 2. 若 memory.enabled=false,则直接记录日志并返回;
// 3. 当前不做重复启动保护,生命周期仍假设由应用启动层统一掌控。
func (m *Module) StartWorker(ctx context.Context) {
	if m == nil || m.runner == nil {
		log.Println("Memory worker is not initialized")
		return
	}
	if !m.cfg.Enabled {
		log.Println("Memory worker is disabled")
		return
	}

	go memoryworker.RunPollingLoop(ctx, m.runner, m.cfg.WorkerPollEvery, m.cfg.WorkerClaimBatch)
	log.Println("Memory worker started")
}

func wireModule(db *gorm.DB, llmClient *infrallm.Client, ragRuntime infrarag.Runtime, cfg memorymodel.Config) *Module {
	jobRepo := memoryrepo.NewJobRepo(db)
	itemRepo := memoryrepo.NewItemRepo(db)
	auditRepo := memoryrepo.NewAuditRepo(db)
	settingsRepo := memoryrepo.NewSettingsRepo(db)

	enqueueService := memoryservice.NewEnqueueService(jobRepo)
	readService := memoryservice.NewReadService(itemRepo, settingsRepo, ragRuntime, cfg)
	manageService := memoryservice.NewManageService(db, itemRepo, auditRepo, settingsRepo)
	extractor := memoryorchestrator.NewLLMWriteOrchestrator(llmClient, cfg)
	runner := memoryworker.NewRunner(db, jobRepo, itemRepo, auditRepo, settingsRepo, extractor, ragRuntime)

	return &Module{
		db:             db,
		cfg:            cfg,
		llmClient:      llmClient,
		ragRuntime:     ragRuntime,
		jobRepo:        jobRepo,
		itemRepo:       itemRepo,
		auditRepo:      auditRepo,
		settingsRepo:   settingsRepo,
		enqueueService: enqueueService,
		readService:    readService,
		manageService:  manageService,
		runner:         runner,
	}
}
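The WithTx doc comment above describes rebuilding the whole facade on the tx connection instead of re-newing individual repos. A stripped-down illustration of that rebind pattern with stand-in types; `conn` here plays the role of *gorm.DB, and none of these names are the project's:

```go
package main

import "fmt"

// conn stands in for *gorm.DB: either the root connection or a transaction.
type conn struct{ name string }

type repo struct{ db *conn }

type facade struct {
	db   *conn
	repo *repo
}

// wire assembles every repo against one connection, mirroring wireModule.
func wire(db *conn) *facade {
	return &facade{db: db, repo: &repo{db: db}}
}

// withTx returns a sibling facade whose repos all share the tx connection,
// so one caller-managed transaction covers every internal table write.
func (f *facade) withTx(tx *conn) *facade {
	if f == nil {
		return nil
	}
	if tx == nil {
		return f
	}
	return wire(tx)
}

func main() {
	root := wire(&conn{name: "root"})
	tx := &conn{name: "tx"}
	fmt.Println(root.withTx(tx).repo.db.name)  // repo is rebound to the tx
	fmt.Println(root.withTx(nil).repo.db.name) // nil tx keeps the root facade
}
```

The design point is that the facade stays immutable: the original Module keeps its root connection, and the tx-bound copy is thrown away when the transaction ends.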
backend/memory/orchestrator/llm_write_orchestrator.go (new file, 299 lines)
@@ -0,0 +1,299 @@
package orchestrator

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"strings"

	infrallm "github.com/LoveLosita/smartflow/backend/infra/llm"
	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
	memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
)

const (
	defaultMemoryExtractMaxTokens = 1200
	defaultMemoryExtractMaxFacts  = 5
)

// LLMWriteOrchestrator 负责把单条对话消息转成可入库的记忆候选。
//
// 职责边界:
// 1. 负责调用 LLM 做抽取、把输出标准化成 memory_facts;
// 2. 不负责落库,不负责任务状态机推进;
// 3. 当 LLM 不可用或输出异常时,回退到保守的本地抽取,保证链路不完全断。
type LLMWriteOrchestrator struct {
	client *infrallm.Client
	cfg    memorymodel.Config
	logger *log.Logger
}

// NewLLMWriteOrchestrator 构造 LLM 版记忆写入编排器。
func NewLLMWriteOrchestrator(client *infrallm.Client, cfg memorymodel.Config) *LLMWriteOrchestrator {
	return &LLMWriteOrchestrator{
		client: client,
		cfg:    cfg,
		logger: log.Default(),
	}
}

// ExtractFacts 从单条消息中抽取可入库事实。
//
// 返回语义:
// 1. 成功时返回标准化后的候选事实;
// 2. 即使 LLM 失败,也尽量返回保守的 fallback 结果,避免 worker 空转报错;
// 3. 只有输入本身为空时才返回空结果。
func (o *LLMWriteOrchestrator) ExtractFacts(ctx context.Context, payload memorymodel.ExtractJobPayload) ([]memorymodel.NormalizedFact, error) {
	sourceText := strings.TrimSpace(payload.SourceText)
	if sourceText == "" {
		return nil, nil
	}

	if o == nil || o.client == nil {
		return fallbackNormalizedFacts(payload), nil
	}

	messages := infrallm.BuildSystemUserMessages(
		buildMemoryExtractSystemPrompt(o.cfg.ExtractPrompt),
		nil,
		buildMemoryExtractUserPrompt(payload),
	)

	resp, rawResult, err := infrallm.GenerateJSON[memoryExtractResponse](
		ctx,
		o.client,
		messages,
		infrallm.GenerateOptions{
			Temperature: clampTemperature(o.cfg.LLMTemperature),
			MaxTokens:   defaultMemoryExtractMaxTokens,
			Thinking:    infrallm.ThinkingModeDisabled,
			Metadata: map[string]any{
				"stage":           "memory_extract",
				"user_id":         payload.UserID,
				"conversation_id": payload.ConversationID,
			},
		},
	)
	if err != nil {
		if o.logger != nil {
			o.logger.Printf("[WARN] memory extract llm failed user_id=%d conversation_id=%s err=%v raw=%s",
				payload.UserID, payload.ConversationID, err, truncateForLog(rawResult))
		}
		return fallbackNormalizedFacts(payload), nil
	}

	facts := convertExtractResponse(resp)
	normalized := memoryutils.NormalizeFacts(facts)
	if len(normalized) == 0 {
		return fallbackNormalizedFacts(payload), nil
	}
	return normalized, nil
}

type memoryExtractResponse struct {
	Facts []memoryExtractFact `json:"facts"`
}

type memoryExtractFact struct {
	MemoryType       string  `json:"memory_type"`
	Title            string  `json:"title"`
	Content          string  `json:"content"`
	Confidence       float64 `json:"confidence"`
	Importance       float64 `json:"importance"`
	SensitivityLevel int     `json:"sensitivity_level"`
	IsExplicit       bool    `json:"is_explicit"`
}

type memoryExtractPromptInput struct {
	UserID          int    `json:"user_id"`
	ConversationID  string `json:"conversation_id"`
	AssistantID     string `json:"assistant_id,omitempty"`
	RunID           string `json:"run_id,omitempty"`
	SourceMessageID int64  `json:"source_message_id,omitempty"`
	SourceRole      string `json:"source_role"`
	SourceText      string `json:"source_text"`
	OccurredAt      string `json:"occurred_at"`
	TraceID         string `json:"trace_id,omitempty"`
}

func buildMemoryExtractSystemPrompt(override string) string {
	override = strings.TrimSpace(override)
	if override != "" {
		return override
	}

	return strings.TrimSpace(`你是一个“记忆抽取器”。
你的任务是从单条用户消息中抽取值得长期记住的事实、偏好、约束、待办线索。
请只输出 JSON 对象,不要输出解释、不要输出 markdown。

输出格式:
{
  "facts": [
    {
      "memory_type": "preference|constraint|fact|todo_hint",
      "title": "短标题",
      "content": "完整事实内容",
      "confidence": 0.0,
      "importance": 0.0,
      "sensitivity_level": 0,
      "is_explicit": false
    }
  ]
}

规则:
1. 最多输出 5 条事实。
2. 只保留稳定、未来可能复用的信息,闲聊、寒暄、一次性噪声不要记。
3. 用户明确说“记住”或“以后提醒我”时,is_explicit 设为 true。
4. confidence 表示这条事实是否真的值得记,取 0 到 1。
5. importance 表示对后续提醒/陪伴的价值,取 0 到 1。
6. sensitivity_level 取 0 到 2,数字越大越敏感。
7. 不确定就少记,不要编造。`)
}

func buildMemoryExtractUserPrompt(payload memorymodel.ExtractJobPayload) string {
	request := memoryExtractPromptInput{
		UserID:          payload.UserID,
		ConversationID:  payload.ConversationID,
		AssistantID:     payload.AssistantID,
		RunID:           payload.RunID,
		SourceMessageID: payload.SourceMessageID,
		SourceRole:      payload.SourceRole,
		SourceText:      payload.SourceText,
		OccurredAt:      payload.OccurredAt.Format("2006-01-02 15:04:05"),
		TraceID:         payload.TraceID,
|
||||
}
|
||||
|
||||
raw, err := json.MarshalIndent(request, "", " ")
|
||||
if err != nil {
|
||||
return fmt.Sprintf("请从这条消息中抽取可长期记住的信息:%s", payload.SourceText)
|
||||
}
|
||||
|
||||
return fmt.Sprintf("请从下面这条用户消息中抽取可长期记住的信息,最多 %d 条。\n输入:\n%s",
|
||||
defaultMemoryExtractMaxFacts, string(raw))
|
||||
}
|
||||
|
||||
func convertExtractResponse(resp *memoryExtractResponse) []memorymodel.FactCandidate {
|
||||
if resp == nil || len(resp.Facts) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
result := make([]memorymodel.FactCandidate, 0, len(resp.Facts))
|
||||
for _, fact := range resp.Facts {
|
||||
memoryType := memorymodel.NormalizeMemoryType(fact.MemoryType)
|
||||
if memoryType == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
content := strings.TrimSpace(fact.Content)
|
||||
if content == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
confidence := clamp01(fact.Confidence)
|
||||
if confidence == 0 {
|
||||
confidence = 0.6
|
||||
}
|
||||
|
||||
importance := clamp01(fact.Importance)
|
||||
if importance == 0 {
|
||||
importance = defaultImportanceByType(memoryType)
|
||||
}
|
||||
|
||||
result = append(result, memorymodel.FactCandidate{
|
||||
MemoryType: memoryType,
|
||||
Title: strings.TrimSpace(fact.Title),
|
||||
Content: content,
|
||||
Confidence: confidence,
|
||||
Importance: importance,
|
||||
SensitivityLevel: clampInt(fact.SensitivityLevel, 0, 2),
|
||||
IsExplicit: fact.IsExplicit,
|
||||
})
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func fallbackNormalizedFacts(payload memorymodel.ExtractJobPayload) []memorymodel.NormalizedFact {
|
||||
sourceText := strings.TrimSpace(payload.SourceText)
|
||||
if sourceText == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
return memoryutils.NormalizeFacts([]memorymodel.FactCandidate{
|
||||
{
|
||||
MemoryType: memorymodel.MemoryTypeFact,
|
||||
Title: buildFallbackTitle(sourceText),
|
||||
Content: sourceText,
|
||||
Confidence: 0.55,
|
||||
Importance: defaultImportanceByType(memorymodel.MemoryTypeFact),
|
||||
SensitivityLevel: 0,
|
||||
IsExplicit: false,
|
||||
},
|
||||
})
|
||||
}
|
||||
|
||||
func buildFallbackTitle(sourceText string) string {
|
||||
runes := []rune(strings.TrimSpace(sourceText))
|
||||
if len(runes) == 0 {
|
||||
return "用户提到"
|
||||
}
|
||||
if len(runes) > 24 {
|
||||
runes = runes[:24]
|
||||
}
|
||||
return "用户提到:" + string(runes)
|
||||
}
|
||||
|
||||
func clampTemperature(v float64) float64 {
|
||||
if v <= 0 {
|
||||
return 0.1
|
||||
}
|
||||
if v > 1 {
|
||||
return 1
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
func clamp01(v float64) float64 {
|
||||
if v < 0 {
|
||||
return 0
|
||||
}
|
||||
if v > 1 {
|
||||
return 1
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
func clampInt(v, minValue, maxValue int) int {
|
||||
if v < minValue {
|
||||
return minValue
|
||||
}
|
||||
if v > maxValue {
|
||||
return maxValue
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
func defaultImportanceByType(memoryType string) float64 {
|
||||
switch memoryType {
|
||||
case memorymodel.MemoryTypePreference:
|
||||
return 0.85
|
||||
case memorymodel.MemoryTypeConstraint:
|
||||
return 0.95
|
||||
case memorymodel.MemoryTypeTodoHint:
|
||||
return 0.8
|
||||
default:
|
||||
return 0.6
|
||||
}
|
||||
}
|
||||
|
||||
func truncateForLog(raw *infrallm.TextResult) string {
|
||||
if raw == nil {
|
||||
return ""
|
||||
}
|
||||
text := strings.TrimSpace(raw.Text)
|
||||
if len(text) <= 200 {
|
||||
return text
|
||||
}
|
||||
return text[:200] + "..."
|
||||
}
|
||||
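The clamping and defaulting rules in convertExtractResponse can be illustrated with a standalone sketch. The types and function names below are simplified stand-ins for the project's memorymodel helpers, not its actual API:

```go
package main

import "fmt"

// clamp01 squeezes a float into [0, 1], mirroring the repo's helper.
func clamp01(v float64) float64 {
	if v < 0 {
		return 0
	}
	if v > 1 {
		return 1
	}
	return v
}

// defaultImportanceByType mirrors the per-type defaults used when the
// model omits importance (i.e. returns 0).
func defaultImportanceByType(memoryType string) float64 {
	switch memoryType {
	case "preference":
		return 0.85
	case "constraint":
		return 0.95
	case "todo_hint":
		return 0.8
	default:
		return 0.6
	}
}

// resolveScores applies the same fallback rules as convertExtractResponse:
// a zero confidence becomes 0.6, and a zero importance falls back to the
// per-type default.
func resolveScores(memoryType string, confidence, importance float64) (float64, float64) {
	c := clamp01(confidence)
	if c == 0 {
		c = 0.6
	}
	i := clamp01(importance)
	if i == 0 {
		i = defaultImportanceByType(memoryType)
	}
	return c, i
}

func main() {
	// Over-range confidence is clamped; omitted importance falls back by type.
	c, i := resolveScores("constraint", 1.7, 0)
	fmt.Println(c, i) // 1 0.95
}
```

One consequence of this rule worth noting: the model cannot express "confidence exactly 0", since that value is indistinguishable from an omitted field and gets promoted to 0.6.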
@@ -8,36 +8,33 @@ import (
 	memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
 )
 
-// WriteOrchestrator is the write-path orchestrator (Day1 first version).
+// WriteOrchestrator is the Day1 local fallback version.
 //
 // Responsibilities:
-// 1. Day1 only does mock extraction + normalization, with no LLM decisions;
-// 2. Day2/Day3 will add conflict resolution, reranking, and vector recall.
+// 1. Only performs the most conservative step: generating one candidate fact straight from source_text;
+// 2. Does not depend on the LLM, serving as a floor when the model is unavailable;
+// 3. It will gradually be replaced by the LLM orchestrator, but is kept around for easy rollback.
 type WriteOrchestrator struct{}
 
 func NewWriteOrchestrator() *WriteOrchestrator {
 	return &WriteOrchestrator{}
 }
 
-// ExtractFacts runs the "candidate fact extraction -> normalization" chain.
-//
-// Day1 strategy:
-// 1. Build candidate facts directly from source_text so the chain is runnable;
-// 2. Replace this later with LLM extraction and structured decisions.
+// ExtractFacts runs the minimal fallback chain.
 func (o *WriteOrchestrator) ExtractFacts(_ context.Context, payload memorymodel.ExtractJobPayload) ([]memorymodel.NormalizedFact, error) {
 	sourceText := strings.TrimSpace(payload.SourceText)
 	if sourceText == "" {
 		return nil, nil
 	}
 
-	candidates := []memorymodel.FactCandidate{
-		{
-			MemoryType: memorymodel.MemoryTypeFact,
-			Title:      "What the user recently mentioned",
-			Content:    sourceText,
-			Confidence: 0.6,
-			IsExplicit: false,
-		},
-	}
+	candidates := []memorymodel.FactCandidate{{
+		MemoryType:       memorymodel.MemoryTypeFact,
+		Title:            "User mentioned",
+		Content:          sourceText,
+		Confidence:       0.6,
+		Importance:       0.6,
+		SensitivityLevel: 0,
+		IsExplicit:       false,
+	}}
 	return memoryutils.NormalizeFacts(candidates), nil
 }
@@ -17,6 +17,10 @@ func NewAuditRepo(db *gorm.DB) *AuditRepo {
 	return &AuditRepo{db: db}
 }
 
+func (r *AuditRepo) WithTx(tx *gorm.DB) *AuditRepo {
+	return &AuditRepo{db: tx}
+}
+
 func (r *AuditRepo) Create(ctx context.Context, log model.MemoryAuditLog) error {
 	if r == nil || r.db == nil {
 		return errors.New("memory audit repo is nil")
@@ -3,12 +3,20 @@ package repo
 import (
 	"context"
 	"errors"
+	"strings"
+	"time"
 
+	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
 	"github.com/LoveLosita/smartflow/backend/model"
 	"gorm.io/gorm"
 )
 
-// ItemRepo wraps data access for memory_items (Day1 placeholder).
+// ItemRepo wraps data access for memory_items.
+//
+// Responsibilities:
+// 1. Only table-level reads/writes; no injection, reranking, or audit decisions;
+// 2. Query conditions are expressed uniformly via ItemQuery so the service layer never assembles SQL;
+// 3. State changes such as soft deletes and access-time refreshes are also funneled through here.
 type ItemRepo struct {
 	db *gorm.DB
 }
@@ -17,11 +25,11 @@ func NewItemRepo(db *gorm.DB) *ItemRepo {
 	return &ItemRepo{db: db}
 }
 
-// UpsertItems is reserved for the Day2/Day3 write path.
-//
-// Day1 constraints:
-// 1. First close the loop on job enqueueing and the state machine;
-// 2. No complex conflict resolution or vector writes at this stage.
+func (r *ItemRepo) WithTx(tx *gorm.DB) *ItemRepo {
+	return &ItemRepo{db: tx}
+}
+
+// UpsertItems writes memory items in batch.
 func (r *ItemRepo) UpsertItems(ctx context.Context, items []model.MemoryItem) error {
 	if r == nil || r.db == nil {
 		return errors.New("memory item repo is nil")
@@ -29,5 +37,186 @@ func (r *ItemRepo) UpsertItems(ctx context.Context, items []model.MemoryItem) er
 	if len(items) == 0 {
 		return nil
 	}
-	return r.db.WithContext(ctx).Create(&items).Error
+
+	for i := range items {
+		if err := r.db.WithContext(ctx).Create(&items[i]).Error; err != nil {
+			return err
+		}
+	}
+	return nil
 }
+
+// FindByQuery reads memory items by a unified filter.
+//
+// Step by step:
+// 1. Always filter by user_id first to avoid leaking memories across users;
+// 2. Then filter by conversation/assistant/run scope; IncludeGlobal=true also admits the corresponding global items;
+// 3. Finally apply status, type, expiry, and limit, returning a stably ordered result.
+func (r *ItemRepo) FindByQuery(ctx context.Context, query memorymodel.ItemQuery) ([]model.MemoryItem, error) {
+	if r == nil || r.db == nil {
+		return nil, errors.New("memory item repo is nil")
+	}
+	if query.UserID <= 0 {
+		return nil, errors.New("memory item query user_id is invalid")
+	}
+
+	db := r.db.WithContext(ctx).Model(&model.MemoryItem{}).Where("user_id = ?", query.UserID)
+	db = applyScopedEquality(db, "conversation_id", query.ConversationID, query.IncludeGlobal)
+	db = applyScopedEquality(db, "assistant_id", query.AssistantID, query.IncludeGlobal)
+	db = applyScopedEquality(db, "run_id", query.RunID, query.IncludeGlobal)
+
+	if len(query.Statuses) > 0 {
+		db = db.Where("status IN ?", query.Statuses)
+	}
+	if len(query.MemoryTypes) > 0 {
+		db = db.Where("memory_type IN ?", query.MemoryTypes)
+	}
+	if query.OnlyUnexpired {
+		now := query.Now
+		if now.IsZero() {
+			now = time.Now()
+		}
+		db = db.Where("(ttl_at IS NULL OR ttl_at > ?)", now)
+	}
+	if query.Limit > 0 {
+		db = db.Limit(query.Limit)
+	}
+
+	var items []model.MemoryItem
+	err := db.
+		Order("is_explicit DESC").
+		Order("importance DESC").
+		Order("updated_at DESC").
+		Find(&items).Error
+	return items, err
+}
+
+// GetByIDForUser reads a single memory item belonging to a given user.
+func (r *ItemRepo) GetByIDForUser(ctx context.Context, userID int, memoryID int64) (*model.MemoryItem, error) {
+	if r == nil || r.db == nil {
+		return nil, errors.New("memory item repo is nil")
+	}
+	if userID <= 0 || memoryID <= 0 {
+		return nil, errors.New("memory item query params are invalid")
+	}
+
+	var item model.MemoryItem
+	err := r.db.WithContext(ctx).
+		Where("id = ? AND user_id = ?", memoryID, userID).
+		First(&item).Error
+	if err != nil {
+		return nil, err
+	}
+	return &item, nil
+}
+
+// UpdateStatusByID updates a memory item's status.
+func (r *ItemRepo) UpdateStatusByID(ctx context.Context, userID int, memoryID int64, status string) error {
+	return r.UpdateStatusByIDAt(ctx, userID, memoryID, status, time.Now())
+}
+
+// UpdateStatusByIDAt updates a memory item's status with an explicit update time from the caller.
+//
+// Rationale:
+// 1. On management-side deletes, the in-table update time must match the audit "after" snapshot time;
+// 2. A read-side refresh of last_access_at must not accidentally change updated_at;
+// 3. So the source of the update time is funneled through the repo instead of the service layer hand-rolling SQL.
+func (r *ItemRepo) UpdateStatusByIDAt(
+	ctx context.Context,
+	userID int,
+	memoryID int64,
+	status string,
+	updatedAt time.Time,
+) error {
+	if r == nil || r.db == nil {
+		return errors.New("memory item repo is nil")
+	}
+	if userID <= 0 || memoryID <= 0 {
+		return errors.New("memory item update params are invalid")
+	}
+
+	status = strings.TrimSpace(status)
+	if status == "" {
+		return errors.New("memory item status is empty")
+	}
+	if updatedAt.IsZero() {
+		updatedAt = time.Now()
+	}
+
+	return r.db.WithContext(ctx).
+		Model(&model.MemoryItem{}).
+		Where("id = ? AND user_id = ?", memoryID, userID).
+		Updates(map[string]any{
+			"status":     status,
+			"updated_at": updatedAt,
+		}).Error
+}
+
+// TouchLastAccessAt refreshes access times for memory items in batch.
+//
+// Notes:
+// 1. Only last_access_at is updated here, never updated_at;
+// 2. updated_at means "content was modified" and must not be polluted by an ordinary read;
+// 3. Otherwise read-side reranking would mistake "an old memory that was recently read" for "a recently updated memory".
+func (r *ItemRepo) TouchLastAccessAt(ctx context.Context, ids []int64, accessedAt time.Time) error {
+	if r == nil || r.db == nil {
+		return errors.New("memory item repo is nil")
+	}
+	if len(ids) == 0 {
+		return nil
+	}
+	if accessedAt.IsZero() {
+		accessedAt = time.Now()
+	}
+
+	return r.db.WithContext(ctx).
+		Model(&model.MemoryItem{}).
+		Where("id IN ?", ids).
+		Updates(map[string]any{
+			"last_access_at": accessedAt,
+		}).Error
+}
+
+// UpdateVectorStateByID updates a single item's vector-sync bridge state.
+//
+// Notes:
+// 1. Only vector_status/vector_id are updated here, never updated_at;
+// 2. Vector sync is index-layer state and does not mean the memory content itself changed;
+// 3. Touching updated_at here would pollute the read side's time-ordering semantics.
+func (r *ItemRepo) UpdateVectorStateByID(
+	ctx context.Context,
+	memoryID int64,
+	vectorStatus string,
+	vectorID *string,
+) error {
+	if r == nil || r.db == nil {
+		return errors.New("memory item repo is nil")
+	}
+	if memoryID <= 0 {
+		return errors.New("memory item vector update id is invalid")
+	}
+
+	vectorStatus = strings.TrimSpace(vectorStatus)
+	if vectorStatus == "" {
+		return errors.New("memory item vector status is empty")
+	}
+
+	return r.db.WithContext(ctx).
+		Model(&model.MemoryItem{}).
+		Where("id = ?", memoryID).
+		UpdateColumns(map[string]any{
+			"vector_status": vectorStatus,
+			"vector_id":     vectorID,
+		}).Error
+}
+
+func applyScopedEquality(db *gorm.DB, column, value string, includeGlobal bool) *gorm.DB {
+	value = strings.TrimSpace(value)
+	if value == "" {
+		return db
+	}
+	if includeGlobal {
+		return db.Where("("+column+" = ? OR "+column+" IS NULL)", value)
+	}
+	return db.Where(column+" = ?", value)
+}
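The two shapes of applyScopedEquality (strict scope vs. scope-or-global) can be sketched without gorm as plain SQL fragment construction. This is a hypothetical illustration helper, not the repo's API:

```go
package main

import (
	"fmt"
	"strings"
)

// scopedCondition mirrors applyScopedEquality's decision logic as a plain
// SQL fragment: an empty value adds no condition at all, and includeGlobal
// widens the match to rows where the scope column is NULL (global items).
func scopedCondition(column, value string, includeGlobal bool) string {
	value = strings.TrimSpace(value)
	if value == "" {
		return "" // no scoping: the dimension is not part of the query
	}
	if includeGlobal {
		return "(" + column + " = ? OR " + column + " IS NULL)"
	}
	return column + " = ?"
}

func main() {
	fmt.Println(scopedCondition("conversation_id", "c-42", true))  // (conversation_id = ? OR conversation_id IS NULL)
	fmt.Println(scopedCondition("conversation_id", "c-42", false)) // conversation_id = ?
	fmt.Println(scopedCondition("run_id", "  ", true) == "")       // true: whitespace-only scope is ignored
}
```

This makes the FindByQuery contract visible at a glance: passing an empty scope value means "do not restrict by this dimension", which is different from IncludeGlobal, which means "restrict, but also admit NULL-scoped global rows".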
@@ -87,18 +87,19 @@ func (r *JobRepo) ClaimNextRunnableExtractJob(ctx context.Context, now time.Time
 	var claimed *model.MemoryJob
 	err := r.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
 		var job model.MemoryJob
-		queryErr := tx.
+		query := tx.
 			Clauses(clause.Locking{Strength: "UPDATE"}).
 			Where("job_type = ?", model.MemoryJobTypeExtract).
 			Where("status IN ?", []string{model.MemoryJobStatusPending, model.MemoryJobStatusFailed}).
 			Where("(next_retry_at IS NULL OR next_retry_at <= ?)", now).
 			Order("id ASC").
-			First(&job).Error
-		if queryErr != nil {
-			if errors.Is(queryErr, gorm.ErrRecordNotFound) {
-				return nil
-			}
-			return queryErr
+			Limit(1).
+			Find(&job)
+		if query.Error != nil {
+			return query.Error
 		}
+		if query.RowsAffected == 0 {
+			return nil
+		}
 
 		updates := map[string]any{
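The hunk above switches the claim query from `First` (which reports "no runnable job" as `gorm.ErrRecordNotFound`) to `Limit(1).Find` plus a `RowsAffected` check, so an empty queue is a normal outcome rather than an error path. The claim semantics can be sketched with a plain in-memory queue; the types below are simplified stand-ins, not the project's model package:

```go
package main

import "fmt"

type job struct {
	id     int64
	status string
}

// claimNext mimics the new query shape: scan for the first runnable job,
// and treat "none found" as a normal (nil, false) outcome, not an error.
// The real code does this inside a SELECT ... FOR UPDATE transaction so
// concurrent workers cannot claim the same job.
func claimNext(jobs []job) (*job, bool) {
	for i := range jobs {
		if jobs[i].status == "pending" || jobs[i].status == "failed" {
			jobs[i].status = "running" // analogous to the status update in the same transaction
			return &jobs[i], true
		}
	}
	return nil, false // analogous to query.RowsAffected == 0 -> return nil
}

func main() {
	jobs := []job{{1, "done"}, {2, "pending"}}
	j, ok := claimNext(jobs)
	fmt.Println(ok, j.id) // true 2
	_, ok = claimNext([]job{{3, "done"}})
	fmt.Println(ok) // false: an empty queue is not an error
}
```

The design point is that a polling worker hits the empty-queue case on almost every tick, so modeling it as an error would flood logs and conflate "nothing to do" with real database failures.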
@@ -18,6 +18,36 @@ func NewSettingsRepo(db *gorm.DB) *SettingsRepo {
 	return &SettingsRepo{db: db}
 }
 
+func (r *SettingsRepo) WithTx(tx *gorm.DB) *SettingsRepo {
+	return &SettingsRepo{db: tx}
+}
+
+// GetByUserID reads a user's memory settings.
+//
+// Return semantics:
+// 1. On a hit, the real record is returned;
+// 2. On a miss, (nil, nil) is returned and the caller decides whether to fall back to default switches;
+// 3. Defaults are never silently filled in at the repo layer, so write and read paths stay semantically consistent.
+func (r *SettingsRepo) GetByUserID(ctx context.Context, userID int) (*model.MemoryUserSetting, error) {
+	if r == nil || r.db == nil {
+		return nil, errors.New("memory settings repo is nil")
+	}
+	if userID <= 0 {
+		return nil, errors.New("memory settings user_id is invalid")
+	}
+
+	var setting model.MemoryUserSetting
+	query := r.db.WithContext(ctx).Where("user_id = ?", userID).Limit(1).Find(&setting)
+	if query.Error != nil {
+		return nil, query.Error
+	}
+	if query.RowsAffected == 0 {
+		return nil, nil
+	}
+	return &setting, nil
+}
+
+// Upsert writes a user's memory settings.
 func (r *SettingsRepo) Upsert(ctx context.Context, setting model.MemoryUserSetting) error {
 	if r == nil || r.db == nil {
 		return errors.New("memory settings repo is nil")
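GetByUserID's three-way contract (error, miss, hit) is easy to get wrong at call sites, so here is a minimal in-memory sketch of the same semantics. The map-backed store and struct are hypothetical, for illustration only:

```go
package main

import "fmt"

type userSetting struct {
	userID        int
	memoryEnabled bool
}

// getByUserID mimics the repo contract: invalid input is an error, a miss
// is (nil, nil), and default-filling is left to the caller rather than
// being done silently in the repo layer.
func getByUserID(store map[int]userSetting, userID int) (*userSetting, error) {
	if userID <= 0 {
		return nil, fmt.Errorf("memory settings user_id is invalid")
	}
	s, ok := store[userID]
	if !ok {
		return nil, nil // miss: no error, no fabricated default record
	}
	return &s, nil
}

func main() {
	store := map[int]userSetting{1: {userID: 1, memoryEnabled: true}}

	s, err := getByUserID(store, 2)
	fmt.Println(s == nil, err == nil) // true true: a miss is not an error

	s, _ = getByUserID(store, 1)
	fmt.Println(s.memoryEnabled) // true
}
```

Callers such as EffectiveUserSetting then own the decision of which defaults apply on a miss, keeping read and write paths consistent.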
backend/memory/service/common.go (new file, 119 lines)
@@ -0,0 +1,119 @@
package service

import (
	"strings"

	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
	"github.com/LoveLosita/smartflow/backend/model"
)

func toItemDTO(item model.MemoryItem) memorymodel.ItemDTO {
	return memorymodel.ItemDTO{
		ID:               item.ID,
		UserID:           item.UserID,
		ConversationID:   strValue(item.ConversationID),
		AssistantID:      strValue(item.AssistantID),
		RunID:            strValue(item.RunID),
		MemoryType:       item.MemoryType,
		Title:            item.Title,
		Content:          item.Content,
		Confidence:       item.Confidence,
		Importance:       item.Importance,
		SensitivityLevel: item.SensitivityLevel,
		IsExplicit:       item.IsExplicit,
		Status:           item.Status,
		TTLAt:            item.TTLAt,
		CreatedAt:        item.CreatedAt,
		UpdatedAt:        item.UpdatedAt,
	}
}

func toItemDTOs(items []model.MemoryItem) []memorymodel.ItemDTO {
	if len(items) == 0 {
		return nil
	}
	result := make([]memorymodel.ItemDTO, 0, len(items))
	for _, item := range items {
		result = append(result, toItemDTO(item))
	}
	return result
}

func toUserSettingDTO(setting model.MemoryUserSetting) memorymodel.UserSettingDTO {
	return memorymodel.UserSettingDTO{
		UserID:                 setting.UserID,
		MemoryEnabled:          setting.MemoryEnabled,
		ImplicitMemoryEnabled:  setting.ImplicitMemoryEnabled,
		SensitiveMemoryEnabled: setting.SensitiveMemoryEnabled,
		UpdatedAt:              setting.UpdatedAt,
	}
}

func normalizeMemoryTypes(raw []string) []string {
	if len(raw) == 0 {
		return nil
	}
	result := make([]string, 0, len(raw))
	seen := make(map[string]struct{}, len(raw))
	for _, item := range raw {
		normalized := memorymodel.NormalizeMemoryType(item)
		if normalized == "" {
			continue
		}
		if _, exists := seen[normalized]; exists {
			continue
		}
		seen[normalized] = struct{}{}
		result = append(result, normalized)
	}
	return result
}

func normalizeManageStatuses(raw []string) []string {
	if len(raw) == 0 {
		return []string{
			model.MemoryItemStatusActive,
			model.MemoryItemStatusArchived,
		}
	}

	result := make([]string, 0, len(raw))
	seen := make(map[string]struct{}, len(raw))
	for _, item := range raw {
		status := strings.ToLower(strings.TrimSpace(item))
		if status != model.MemoryItemStatusActive &&
			status != model.MemoryItemStatusArchived &&
			status != model.MemoryItemStatusDeleted {
			continue
		}
		if _, exists := seen[status]; exists {
			continue
		}
		seen[status] = struct{}{}
		result = append(result, status)
	}
	if len(result) == 0 {
		return []string{
			model.MemoryItemStatusActive,
			model.MemoryItemStatusArchived,
		}
	}
	return result
}

func normalizeLimit(limit, defaultValue, maxValue int) int {
	if limit <= 0 {
		limit = defaultValue
	}
	if maxValue > 0 && limit > maxValue {
		return maxValue
	}
	return limit
}

func strValue(v *string) string {
	if v == nil {
		return ""
	}
	return strings.TrimSpace(*v)
}
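The normalizeLimit helper above is reused by both the manage and read services, so its edge cases are worth pinning down. A standalone sketch of the same function:

```go
package main

import "fmt"

// normalizeLimit mirrors the helper in common.go: non-positive limits fall
// back to the default, and a positive maxValue caps the result. A maxValue
// of 0 (or less) means "no cap".
func normalizeLimit(limit, defaultValue, maxValue int) int {
	if limit <= 0 {
		limit = defaultValue
	}
	if maxValue > 0 && limit > maxValue {
		return maxValue
	}
	return limit
}

func main() {
	fmt.Println(normalizeLimit(0, 20, 100))   // 20: the default kicks in
	fmt.Println(normalizeLimit(500, 20, 100)) // 100: capped at maxValue
	fmt.Println(normalizeLimit(7, 20, 0))     // 7: maxValue 0 disables the cap
}
```

Note that the default itself is also subject to the cap: `normalizeLimit(0, 500, 100)` yields 100, which is why the call sites pass matched (default, max) pairs such as (20, 100) and (5, 20).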
@@ -16,6 +16,7 @@ import (
 func LoadConfigFromViper() memorymodel.Config {
 	cfg := memorymodel.Config{
 		Enabled:        viper.GetBool("memory.enabled"),
+		RAGEnabled:     viper.GetBool("memory.rag.enabled"),
 		ExtractPrompt:  viper.GetString("memory.prompt.extract"),
 		DecisionPrompt: viper.GetString("memory.prompt.decision"),
 		Threshold:      viper.GetFloat64("memory.threshold"),
backend/memory/service/manage_service.go (new file, 203 lines)
@@ -0,0 +1,203 @@
package service

import (
	"context"
	"errors"
	"strings"
	"time"

	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
	memoryrepo "github.com/LoveLosita/smartflow/backend/memory/repo"
	memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
	"github.com/LoveLosita/smartflow/backend/model"
	"gorm.io/gorm"
)

const (
	defaultManageListLimit = 20
	maxManageListLimit     = 100
)

// ManageService owns the management-plane capabilities inside the memory module.
//
// Responsibilities:
// 1. Maintenance actions such as listing memories, deleting a memory, and reading/updating user switches;
// 2. Recording the user's explicit management actions in memory_audit_logs;
// 3. It does not do prompt injection, vector recall, or background extraction jobs.
type ManageService struct {
	db           *gorm.DB
	itemRepo     *memoryrepo.ItemRepo
	auditRepo    *memoryrepo.AuditRepo
	settingsRepo *memoryrepo.SettingsRepo
}

func NewManageService(
	db *gorm.DB,
	itemRepo *memoryrepo.ItemRepo,
	auditRepo *memoryrepo.AuditRepo,
	settingsRepo *memoryrepo.SettingsRepo,
) *ManageService {
	return &ManageService{
		db:           db,
		itemRepo:     itemRepo,
		auditRepo:    auditRepo,
		settingsRepo: settingsRepo,
	}
}

// ListItems lists the memory items a user can currently manage.
//
// Notes:
// 1. This is the management view; user switches are not applied as a second filter;
// 2. Even when the user has memory turned off, the overview page still needs to show existing memories for manual deletion or review;
// 3. Only active/archived are returned by default, unless deleted is passed explicitly.
func (s *ManageService) ListItems(ctx context.Context, req memorymodel.ListItemsRequest) ([]memorymodel.ItemDTO, error) {
	if s == nil || s.itemRepo == nil {
		return nil, errors.New("memory manage service is nil")
	}
	if req.UserID <= 0 {
		return nil, nil
	}

	conversationID := strings.TrimSpace(req.ConversationID)
	query := memorymodel.ItemQuery{
		UserID:         req.UserID,
		ConversationID: conversationID,
		Statuses:       normalizeManageStatuses(req.Statuses),
		MemoryTypes:    normalizeMemoryTypes(req.MemoryTypes),
		IncludeGlobal:  conversationID != "",
		OnlyUnexpired:  false,
		Limit:          normalizeLimit(req.Limit, defaultManageListLimit, maxManageListLimit),
	}

	items, err := s.itemRepo.FindByQuery(ctx, query)
	if err != nil {
		return nil, err
	}
	return toItemDTOs(items), nil
}

// DeleteItem soft-deletes a memory item and writes an audit log entry.
//
// Step by step:
// 1. Read the current snapshot inside the transaction so the audit "before" image matches the object actually deleted;
// 2. If the item is already deleted, return idempotently to avoid stacking duplicate delete audits;
// 3. Write the audit log only after the status update succeeds, guaranteeing "every delete has an audit"; any failure rolls back the whole transaction.
func (s *ManageService) DeleteItem(ctx context.Context, req memorymodel.DeleteItemRequest) (*memorymodel.ItemDTO, error) {
	if s == nil || s.db == nil || s.itemRepo == nil || s.auditRepo == nil {
		return nil, errors.New("memory manage service is not initialized")
	}
	if req.UserID <= 0 || req.MemoryID <= 0 {
		return nil, nil
	}

	now := time.Now()
	operatorType := memoryutils.NormalizeOperatorType(req.OperatorType)
	reason := normalizeDeleteReason(req.Reason)

	var deletedItem model.MemoryItem
	err := s.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
		itemRepo := s.itemRepo.WithTx(tx)
		auditRepo := s.auditRepo.WithTx(tx)

		current, err := itemRepo.GetByIDForUser(ctx, req.UserID, req.MemoryID)
		if err != nil {
			return err
		}
		if current.Status == model.MemoryItemStatusDeleted {
			deletedItem = *current
			return nil
		}

		before := *current
		after := before
		after.Status = model.MemoryItemStatusDeleted
		after.UpdatedAt = &now

		if err = itemRepo.UpdateStatusByIDAt(ctx, req.UserID, req.MemoryID, model.MemoryItemStatusDeleted, now); err != nil {
			return err
		}

		audit := memoryutils.BuildItemAuditLog(
			req.MemoryID,
			req.UserID,
			memoryutils.AuditOperationDelete,
			operatorType,
			reason,
			&before,
			&after,
		)
		if err = auditRepo.Create(ctx, audit); err != nil {
			return err
		}

		deletedItem = after
		return nil
	})
	if err != nil {
		return nil, err
	}
	if deletedItem.ID <= 0 {
		return nil, nil
	}

	result := toItemDTO(deletedItem)
	return &result, nil
}

// GetUserSetting returns the memory switches currently in effect for a user.
//
// Return semantics:
// 1. If no record exists yet, the system defaults are returned instead of nil;
// 2. Frontend/upstream callers therefore always get a complete struct and never re-apply defaults themselves;
// 3. This only reads settings and performs no modifications.
func (s *ManageService) GetUserSetting(ctx context.Context, userID int) (memorymodel.UserSettingDTO, error) {
	if s == nil || s.settingsRepo == nil {
		return memorymodel.UserSettingDTO{}, errors.New("memory manage service is nil")
	}
	if userID <= 0 {
		return memorymodel.UserSettingDTO{}, nil
	}

	setting, err := s.settingsRepo.GetByUserID(ctx, userID)
	if err != nil {
		return memorymodel.UserSettingDTO{}, err
	}
	return toUserSettingDTO(memoryutils.EffectiveUserSetting(setting, userID)), nil
}

// UpsertUserSetting writes a user's memory switches.
//
// Notes:
// 1. At this stage the three switches are overwritten directly, with no patch semantics;
// 2. The frontend can submit the whole settings form in one request, keeping the API stable;
// 3. If setting-change audits are needed later, a dedicated setting audit will be added rather than reusing the item audit.
func (s *ManageService) UpsertUserSetting(ctx context.Context, req memorymodel.UpdateUserSettingRequest) (memorymodel.UserSettingDTO, error) {
	if s == nil || s.settingsRepo == nil {
		return memorymodel.UserSettingDTO{}, errors.New("memory manage service is nil")
	}
	if req.UserID <= 0 {
		return memorymodel.UserSettingDTO{}, nil
	}

	now := time.Now()
	setting := model.MemoryUserSetting{
		UserID:                 req.UserID,
		MemoryEnabled:          req.MemoryEnabled,
		ImplicitMemoryEnabled:  req.ImplicitMemoryEnabled,
		SensitiveMemoryEnabled: req.SensitiveMemoryEnabled,
		UpdatedAt:              &now,
	}
	if err := s.settingsRepo.Upsert(ctx, setting); err != nil {
		return memorymodel.UserSettingDTO{}, err
	}
	return toUserSettingDTO(setting), nil
}

func normalizeDeleteReason(reason string) string {
	reason = strings.TrimSpace(reason)
	if reason == "" {
		return "user deleted the memory"
	}
	return reason
}
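DeleteItem's idempotency rule (an already-deleted item is returned unchanged, with no second audit entry) can be sketched without the database. The in-memory item/audit types below are simplified stand-ins, not the project's model package:

```go
package main

import "fmt"

type item struct {
	id     int64
	status string
}

// softDelete mimics DeleteItem's core invariants: deleting an
// already-deleted item is a no-op that still returns the snapshot, and an
// audit entry is written only when a state transition actually happens,
// so "every delete has exactly one audit".
func softDelete(it *item, audits *[]string) item {
	if it.status == "deleted" {
		return *it // idempotent: no duplicate audit entry
	}
	it.status = "deleted"
	*audits = append(*audits, fmt.Sprintf("delete item %d", it.id))
	return *it
}

func main() {
	audits := []string{}
	it := item{id: 7, status: "active"}

	softDelete(&it, &audits)
	softDelete(&it, &audits) // second call is a no-op

	fmt.Println(it.status, len(audits)) // deleted 1
}
```

In the real service these two invariants are enforced inside one transaction, so a failed audit write also rolls back the status change rather than leaving an unaudited delete behind.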
backend/memory/service/read_service.go (new file, 347 lines)
@@ -0,0 +1,347 @@
|
||||
package service
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
infrarag "github.com/LoveLosita/smartflow/backend/infra/rag"
|
||||
memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
|
||||
memoryrepo "github.com/LoveLosita/smartflow/backend/memory/repo"
|
||||
memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
|
||||
"github.com/LoveLosita/smartflow/backend/model"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultRetrieveLimit = 5
|
||||
maxRetrieveLimit = 20
|
||||
)
|
||||
|
||||
// ReadService 负责 memory 模块内部的读取、门控与轻量重排。
|
||||
//
|
||||
// 职责边界:
|
||||
// 1. 负责把 memory_items 读出来并做用户设置过滤;
|
||||
// 2. 负责最小可用的排序与截断,为后续 prompt 注入提供稳定入口;
|
||||
// 3. 不直接依赖 newAgent,不负责真正把记忆拼进 prompt。
|
||||
type ReadService struct {
|
||||
itemRepo *memoryrepo.ItemRepo
|
||||
settingsRepo *memoryrepo.SettingsRepo
|
||||
ragRuntime infrarag.Runtime
|
||||
cfg memorymodel.Config
|
||||
}
|
||||
|
||||
func NewReadService(
|
||||
itemRepo *memoryrepo.ItemRepo,
|
||||
settingsRepo *memoryrepo.SettingsRepo,
|
||||
ragRuntime infrarag.Runtime,
|
||||
cfg memorymodel.Config,
|
||||
) *ReadService {
|
||||
return &ReadService{
|
||||
itemRepo: itemRepo,
|
||||
settingsRepo: settingsRepo,
|
||||
ragRuntime: ragRuntime,
|
||||
cfg: cfg,
|
||||
}
|
||||
}
|
||||
|
||||
// Retrieve 读取可供后续注入使用的候选记忆。
|
||||
func (s *ReadService) Retrieve(ctx context.Context, req memorymodel.RetrieveRequest) ([]memorymodel.ItemDTO, error) {
|
||||
if s == nil || s.itemRepo == nil || s.settingsRepo == nil {
|
||||
return nil, nil
|
||||
}
|
||||
if req.UserID <= 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
now := req.Now
|
||||
if now.IsZero() {
|
||||
now = time.Now()
|
||||
}
|
||||
|
||||
setting, err := s.settingsRepo.GetByUserID(ctx, req.UserID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
effectiveSetting := memoryutils.EffectiveUserSetting(setting, req.UserID)
|
||||
if !effectiveSetting.MemoryEnabled {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
limit := normalizeLimit(req.Limit, defaultRetrieveLimit, maxRetrieveLimit)
|
||||
if s.cfg.RAGEnabled && s.ragRuntime != nil && strings.TrimSpace(req.Query) != "" {
|
||||
items, ragErr := s.retrieveByRAG(ctx, req, effectiveSetting, limit, now)
|
||||
if ragErr == nil && len(items) > 0 {
|
||||
return items, nil
|
||||
}
|
||||
}
|
||||
|
||||
return s.retrieveByLegacy(ctx, req, limit, now, effectiveSetting)
|
||||
}
|
||||
|
||||
func (s *ReadService) retrieveByLegacy(
|
||||
ctx context.Context,
|
||||
req memorymodel.RetrieveRequest,
|
||||
limit int,
|
||||
now time.Time,
|
||||
effectiveSetting model.MemoryUserSetting,
|
||||
) ([]memorymodel.ItemDTO, error) {
|
||||
if !effectiveSetting.MemoryEnabled {
|
||||
return nil, nil
|
||||
}
|
||||
query := memorymodel.ItemQuery{
|
||||
UserID: req.UserID,
|
||||
ConversationID: req.ConversationID,
|
||||
AssistantID: req.AssistantID,
|
||||
RunID: req.RunID,
|
||||
Statuses: []string{model.MemoryItemStatusActive},
|
||||
MemoryTypes: normalizeRetrieveMemoryTypes(req.MemoryTypes),
|
||||
IncludeGlobal: true,
|
||||
OnlyUnexpired: true,
|
||||
Limit: normalizeLimit(limit*3, limit*3, maxRetrieveLimit*3),
|
||||
Now: now,
|
||||
}
|
||||
|
||||
items, err := s.itemRepo.FindByQuery(ctx, query)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
items = memoryutils.FilterItemsBySetting(items, effectiveSetting)
|
||||
if len(items) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
sort.SliceStable(items, func(i, j int) bool {
|
||||
left := scoreRetrievedItem(items[i], now, req.ConversationID)
|
||||
right := scoreRetrievedItem(items[j], now, req.ConversationID)
|
||||
if left == right {
|
||||
return items[i].ID > items[j].ID
|
||||
}
|
||||
return left > right
|
||||
})
|
||||
|
||||
if len(items) > limit {
|
||||
items = items[:limit]
|
||||
}
|
||||
_ = s.itemRepo.TouchLastAccessAt(ctx, collectMemoryIDs(items), now)
|
||||
return toItemDTOs(items), nil
|
||||
}
|
||||
|
||||
func (s *ReadService) retrieveByRAG(
	ctx context.Context,
	req memorymodel.RetrieveRequest,
	effectiveSetting model.MemoryUserSetting,
	limit int,
	now time.Time,
) ([]memorymodel.ItemDTO, error) {
	if !effectiveSetting.MemoryEnabled {
		return nil, nil
	}

	result, err := s.ragRuntime.RetrieveMemory(ctx, infrarag.MemoryRetrieveRequest{
		Query:          req.Query,
		TopK:           limit,
		Threshold:      s.cfg.Threshold,
		Action:         "search",
		UserID:         req.UserID,
		ConversationID: req.ConversationID,
		AssistantID:    req.AssistantID,
		RunID:          req.RunID,
		MemoryTypes:    normalizeRetrieveMemoryTypes(req.MemoryTypes),
	})
	if err != nil || result == nil || len(result.Items) == 0 {
		return nil, err
	}

	items := make([]memorymodel.ItemDTO, 0, len(result.Items))
	ids := make([]int64, 0, len(result.Items))
	for _, hit := range result.Items {
		dto, memoryID := buildMemoryDTOFromRetrieveHit(hit)
		if !effectiveSetting.ImplicitMemoryEnabled && !dto.IsExplicit {
			continue
		}
		if !effectiveSetting.SensitiveMemoryEnabled && dto.SensitivityLevel > 0 {
			continue
		}
		if dto.ID <= 0 && memoryID > 0 {
			dto.ID = memoryID
		}
		items = append(items, dto)
		if dto.ID > 0 {
			ids = append(ids, dto.ID)
		}
	}
	if len(items) > limit {
		items = items[:limit]
	}
	_ = s.itemRepo.TouchLastAccessAt(ctx, ids, now)
	return items, nil
}
func normalizeRetrieveMemoryTypes(raw []string) []string {
	normalized := normalizeMemoryTypes(raw)
	if len(normalized) > 0 {
		return normalized
	}
	return []string{
		memorymodel.MemoryTypeConstraint,
		memorymodel.MemoryTypePreference,
		memorymodel.MemoryTypeTodoHint,
		memorymodel.MemoryTypeFact,
	}
}

func scoreRetrievedItem(item model.MemoryItem, now time.Time, conversationID string) float64 {
	score := 0.35*clamp01(item.Importance) + 0.3*clamp01(item.Confidence) + 0.2*recencyScore(item, now)
	if item.IsExplicit {
		score += 0.1
	}
	if strValue(item.ConversationID) != "" && strValue(item.ConversationID) == conversationID {
		score += 0.08
	}
	switch item.MemoryType {
	case memorymodel.MemoryTypeConstraint:
		score += 0.12
	case memorymodel.MemoryTypePreference:
		score += 0.08
	case memorymodel.MemoryTypeTodoHint:
		score += 0.05
	}
	return score
}

func recencyScore(item model.MemoryItem, now time.Time) float64 {
	base := item.UpdatedAt
	if base == nil {
		base = item.CreatedAt
	}
	if base == nil || now.Before(*base) {
		return 0.5
	}
	age := now.Sub(*base)
	switch {
	case age <= 24*time.Hour:
		return 1
	case age <= 7*24*time.Hour:
		return 0.85
	case age <= 30*24*time.Hour:
		return 0.65
	case age <= 90*24*time.Hour:
		return 0.45
	default:
		return 0.25
	}
}
func clamp01(v float64) float64 {
	if v < 0 {
		return 0
	}
	if v > 1 {
		return 1
	}
	return v
}

func collectMemoryIDs(items []model.MemoryItem) []int64 {
	if len(items) == 0 {
		return nil
	}
	ids := make([]int64, 0, len(items))
	for _, item := range items {
		if item.ID <= 0 {
			continue
		}
		ids = append(ids, item.ID)
	}
	return ids
}

func buildMemoryDTOFromRetrieveHit(hit infrarag.RetrieveHit) (memorymodel.ItemDTO, int64) {
	memoryID := parseMemoryIDFromDocumentID(hit.DocumentID)
	metadata := hit.Metadata
	dto := memorymodel.ItemDTO{
		ID:               memoryID,
		UserID:           int(readFloatLike(metadata["user_id"])),
		ConversationID:   readString(metadata["conversation_id"]),
		AssistantID:      readString(metadata["assistant_id"]),
		RunID:            readString(metadata["run_id"]),
		MemoryType:       readString(metadata["memory_type"]),
		Title:            readString(metadata["title"]),
		Content:          strings.TrimSpace(hit.Text),
		Confidence:       readFloatLike(metadata["confidence"]),
		Importance:       readFloatLike(metadata["importance"]),
		SensitivityLevel: int(readFloatLike(metadata["sensitivity_level"])),
		IsExplicit:       readBoolLike(metadata["is_explicit"]),
		Status:           readString(metadata["status"]),
		TTLAt:            readTimeLike(metadata["ttl_at"]),
	}
	return dto, memoryID
}

func parseMemoryIDFromDocumentID(documentID string) int64 {
	documentID = strings.TrimSpace(documentID)
	if !strings.HasPrefix(documentID, "memory:") {
		return 0
	}
	raw := strings.TrimPrefix(documentID, "memory:")
	if strings.HasPrefix(raw, "uid:") {
		return 0
	}
	parsed, err := strconv.ParseInt(raw, 10, 64)
	if err != nil {
		return 0
	}
	return parsed
}

func readString(v any) string {
	if v == nil {
		return ""
	}
	return strings.TrimSpace(fmt.Sprintf("%v", v))
}

func readFloatLike(v any) float64 {
	switch value := v.(type) {
	case float64:
		return value
	case float32:
		return float64(value)
	case int:
		return float64(value)
	case int64:
		return float64(value)
	case string:
		parsed, err := strconv.ParseFloat(strings.TrimSpace(value), 64)
		if err == nil {
			return parsed
		}
	}
	return 0
}

func readBoolLike(v any) bool {
	switch value := v.(type) {
	case bool:
		return value
	case string:
		return strings.EqualFold(strings.TrimSpace(value), "true")
	default:
		return false
	}
}

func readTimeLike(v any) *time.Time {
	text := readString(v)
	if text == "" {
		return nil
	}
	parsed, err := time.Parse(time.RFC3339, text)
	if err != nil {
		return nil
	}
	return &parsed
}
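The `read*` helpers exist because vector-store metadata arrives as `map[string]any` with mixed value shapes (JSON decoding yields `float64`, ints, or stringified numbers depending on the writer). The coercion pattern in isolation, with an illustrative `asFloat` helper:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// asFloat coerces the numeric shapes commonly seen in decoded metadata
// (float64, int, numeric string) into a float64, defaulting to 0 on
// nil or unparseable input rather than erroring.
func asFloat(v any) float64 {
	switch value := v.(type) {
	case float64:
		return value
	case int:
		return float64(value)
	case string:
		if parsed, err := strconv.ParseFloat(strings.TrimSpace(value), 64); err == nil {
			return parsed
		}
	}
	return 0
}

func main() {
	metadata := map[string]any{"confidence": "0.9", "importance": 0.7, "sensitivity_level": 1}
	fmt.Println(asFloat(metadata["confidence"]), asFloat(metadata["importance"]), int(asFloat(metadata["sensitivity_level"])))
}
```

Defaulting to zero instead of returning an error keeps a single malformed metadata field from discarding an otherwise usable retrieval hit.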
71
backend/memory/utils/audit.go
Normal file
@@ -0,0 +1,71 @@
package utils

import (
	"encoding/json"
	"strings"

	"github.com/LoveLosita/smartflow/backend/model"
)

const (
	// AuditOperationCreate marks the system creating a new memory item.
	AuditOperationCreate = "create"
	// AuditOperationDelete marks a soft delete of an existing memory item.
	AuditOperationDelete = "delete"
)

// BuildItemAuditLog builds an audit-log entry for a memory change.
//
// Responsibility boundaries:
//  1. Serializes the before/after snapshots into a uniform audit-log structure.
//  2. Does not decide whether an audit entry should be written; that decision belongs to the calling service/worker.
//  3. Does not persist anything; callers must still invoke AuditRepo explicitly.
func BuildItemAuditLog(
	memoryID int64,
	userID int,
	operation string,
	operatorType string,
	reason string,
	before *model.MemoryItem,
	after *model.MemoryItem,
) model.MemoryAuditLog {
	return model.MemoryAuditLog{
		MemoryID:     memoryID,
		UserID:       userID,
		Operation:    strings.TrimSpace(operation),
		OperatorType: NormalizeOperatorType(operatorType),
		Reason:       strings.TrimSpace(reason),
		BeforeJSON:   marshalMemoryItemSnapshot(before),
		AfterJSON:    marshalMemoryItemSnapshot(after),
	}
}

// NormalizeOperatorType normalizes the audit operator type.
//
// Rules:
//  1. Only the fixed values user/system are accepted for now.
//  2. Empty or unknown values fall back to user, so dirty values never reach the audit table.
//  3. If admin/tool or other types are added later, extend the whitelist here.
func NormalizeOperatorType(raw string) string {
	switch strings.ToLower(strings.TrimSpace(raw)) {
	case "system":
		return "system"
	default:
		return "user"
	}
}

func marshalMemoryItemSnapshot(item *model.MemoryItem) *string {
	if item == nil {
		return nil
	}

	raw, err := json.Marshal(item)
	if err != nil {
		empty := "{}"
		return &empty
	}

	value := string(raw)
	return &value
}
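`marshalMemoryItemSnapshot` has a deliberate two-tier degradation: a nil snapshot stays nil (a NULL audit column), while a marshal failure degrades to `"{}"` so the audit row still gets written. The contract in isolation, with an illustrative `snapshot` type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// snapshot is a stand-in for model.MemoryItem.
type snapshot struct {
	ID      int64  `json:"id"`
	Content string `json:"content"`
}

// marshalSnapshot follows marshalMemoryItemSnapshot's contract:
// nil input stays nil, and a marshal failure degrades to "{}" rather
// than blocking the audit write.
func marshalSnapshot(s *snapshot) *string {
	if s == nil {
		return nil
	}
	raw, err := json.Marshal(s)
	if err != nil {
		empty := "{}"
		return &empty
	}
	value := string(raw)
	return &value
}

func main() {
	fmt.Println(marshalSnapshot(nil) == nil)
	fmt.Println(*marshalSnapshot(&snapshot{ID: 7, Content: "prefers tea"}))
}
```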
@@ -49,6 +49,11 @@ func NormalizeFacts(candidates []memorymodel.FactCandidate) []memorymodel.Normal
		if confidence == 0 {
			confidence = 0.6
		}
		importance := clamp01(candidate.Importance)
		if importance == 0 {
			importance = defaultImportanceByType(memoryType)
		}
		sensitivityLevel := clampInt(candidate.SensitivityLevel, 0, 2)

		normalizedContent := strings.ToLower(content)
		contentHash := hashContent(memoryType, normalizedContent)
@@ -65,6 +70,8 @@ func NormalizeFacts(candidates []memorymodel.FactCandidate) []memorymodel.Normal
			NormalizedContent: normalizedContent,
			ContentHash:       contentHash,
			Confidence:        confidence,
			Importance:        importance,
			SensitivityLevel:  sensitivityLevel,
			IsExplicit:        candidate.IsExplicit,
		})
	}
@@ -96,6 +103,29 @@ func clamp01(v float64) float64 {
	return v
}

func clampInt(v, minValue, maxValue int) int {
	if v < minValue {
		return minValue
	}
	if v > maxValue {
		return maxValue
	}
	return v
}

func defaultImportanceByType(memoryType string) float64 {
	switch memoryType {
	case memorymodel.MemoryTypePreference:
		return 0.85
	case memorymodel.MemoryTypeConstraint:
		return 0.95
	case memorymodel.MemoryTypeTodoHint:
		return 0.8
	default:
		return 0.6
	}
}

func hashContent(memoryType, normalizedContent string) string {
	sum := sha256.Sum256([]byte(memoryType + "::" + normalizedContent))
	return hex.EncodeToString(sum[:])
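`hashContent` keys deduplication on the memory type plus the normalized (lowercased) content, so identical wording stored under different types does not collide. The same construction in isolation (the `contentHash` name is illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// contentHash mirrors hashContent: the memory type is folded into the
// digest input, so the same normalized text under different types
// yields different dedup keys.
func contentHash(memoryType, content string) string {
	normalized := strings.ToLower(strings.TrimSpace(content))
	sum := sha256.Sum256([]byte(memoryType + "::" + normalized))
	return hex.EncodeToString(sum[:])
}

func main() {
	a := contentHash("preference", "Prefers tea over coffee")
	b := contentHash("preference", "prefers tea over coffee")
	c := contentHash("fact", "prefers tea over coffee")
	fmt.Println(a == b, a == c) // normalization dedups; the type changes the key
}
```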
62
backend/memory/utils/settings.go
Normal file
@@ -0,0 +1,62 @@
package utils

import (
	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
	"github.com/LoveLosita/smartflow/backend/model"
)

// EffectiveUserSetting returns the effective memory settings for a user.
//
// Rules:
//  1. When the user has no explicit configuration, system defaults apply.
//  2. By default, regular and implicit memory are allowed while sensitive memory is disabled.
//  3. The return value is always a complete object, so callers can use it directly without nil checks.
func EffectiveUserSetting(setting *model.MemoryUserSetting, userID int) model.MemoryUserSetting {
	if setting == nil {
		return model.MemoryUserSetting{
			UserID:                 userID,
			MemoryEnabled:          true,
			ImplicitMemoryEnabled:  true,
			SensitiveMemoryEnabled: false,
		}
	}
	return *setting
}

// FilterFactsBySetting filters candidate facts by the user's memory switches.
func FilterFactsBySetting(facts []memorymodel.NormalizedFact, setting model.MemoryUserSetting) []memorymodel.NormalizedFact {
	if !setting.MemoryEnabled || len(facts) == 0 {
		return nil
	}

	result := make([]memorymodel.NormalizedFact, 0, len(facts))
	for _, fact := range facts {
		if !setting.ImplicitMemoryEnabled && !fact.IsExplicit {
			continue
		}
		if !setting.SensitiveMemoryEnabled && fact.SensitivityLevel > 0 {
			continue
		}
		result = append(result, fact)
	}
	return result
}

// FilterItemsBySetting filters stored memory items by the user's memory switches.
func FilterItemsBySetting(items []model.MemoryItem, setting model.MemoryUserSetting) []model.MemoryItem {
	if !setting.MemoryEnabled || len(items) == 0 {
		return nil
	}

	result := make([]model.MemoryItem, 0, len(items))
	for _, item := range items {
		if !setting.ImplicitMemoryEnabled && !item.IsExplicit {
			continue
		}
		if !setting.SensitiveMemoryEnabled && item.SensitivityLevel > 0 {
			continue
		}
		result = append(result, item)
	}
	return result
}
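`FilterFactsBySetting` and `FilterItemsBySetting` apply the same two switches to different element types. With Go generics the shared policy could be factored once; a sketch under that assumption (the `setting`/`fact` types and `filterBySetting` are illustrative, not the module's current API):

```go
package main

import "fmt"

// setting is a stand-in for model.MemoryUserSetting.
type setting struct {
	MemoryEnabled          bool
	ImplicitMemoryEnabled  bool
	SensitiveMemoryEnabled bool
}

// filterBySetting factors the shared switch policy out of the two
// concrete filters; the explicit/sensitivity accessors adapt each type.
func filterBySetting[T any](items []T, s setting, explicit func(T) bool, sensitivity func(T) int) []T {
	if !s.MemoryEnabled || len(items) == 0 {
		return nil
	}
	result := make([]T, 0, len(items))
	for _, item := range items {
		if !s.ImplicitMemoryEnabled && !explicit(item) {
			continue
		}
		if !s.SensitiveMemoryEnabled && sensitivity(item) > 0 {
			continue
		}
		result = append(result, item)
	}
	return result
}

type fact struct {
	Content          string
	IsExplicit       bool
	SensitivityLevel int
}

func main() {
	s := setting{MemoryEnabled: true, ImplicitMemoryEnabled: true, SensitiveMemoryEnabled: false}
	facts := []fact{
		{Content: "likes tea", IsExplicit: true, SensitivityLevel: 0},
		{Content: "health note", IsExplicit: true, SensitivityLevel: 2}, // dropped: sensitive
	}
	kept := filterBySetting(facts, s,
		func(f fact) bool { return f.IsExplicit },
		func(f fact) int { return f.SensitivityLevel })
	fmt.Println(len(kept), kept[0].Content)
}
```

Keeping two concrete functions is also a defensible choice here; the generic form only pays off if more filtered types appear.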
56
backend/memory/worker/loop.go
Normal file
@@ -0,0 +1,56 @@
package worker

import (
	"context"
	"log"
	"time"
)

// RunPollingLoop keeps polling memory_jobs so the async worker actually runs.
//
// Responsibility boundaries:
//  1. Owns only the loop, the polling interval, and batch triggering.
//  2. Owns neither extraction logic nor persistence logic.
//  3. When a RunOnce call fails, it only logs and continues with the next round, so the background loop never exits.
func RunPollingLoop(ctx context.Context, runner *Runner, pollEvery time.Duration, claimBatch int) {
	if runner == nil {
		return
	}
	if runner.logger == nil {
		runner.logger = log.Default()
	}
	if pollEvery <= 0 {
		pollEvery = 2 * time.Second
	}
	if claimBatch <= 0 {
		claimBatch = 1
	}

	runBatch := func() {
		for i := 0; i < claimBatch; i++ {
			result, err := runner.RunOnce(ctx)
			if err != nil {
				runner.logger.Printf("memory worker loop run once failed: %v", err)
				return
			}
			if result == nil || !result.Claimed {
				return
			}
		}
	}

	runBatch()

	ticker := time.NewTicker(pollEvery)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			runner.logger.Printf("memory worker loop stopped: %v", ctx.Err())
			return
		case <-ticker.C:
			runBatch()
		}
	}
}
@@ -6,14 +6,19 @@ import (
	"errors"
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"

	infrarag "github.com/LoveLosita/smartflow/backend/infra/rag"
	memorymodel "github.com/LoveLosita/smartflow/backend/memory/model"
	memoryrepo "github.com/LoveLosita/smartflow/backend/memory/repo"
	memoryutils "github.com/LoveLosita/smartflow/backend/memory/utils"
	"github.com/LoveLosita/smartflow/backend/model"
	"gorm.io/gorm"
)

// RunOnceResult describes the result of one manually triggered execution.
type RunOnceResult struct {
	Claimed bool
	JobID   int64
@@ -21,36 +26,57 @@ type RunOnceResult struct {
	Facts int
}

// Runner advances memory_jobs into memory_items plus audit logs.
//
// Responsibility boundaries:
//  1. Owns job claiming, extraction, persistence, and state transitions.
//  2. Owns neither outbox consumption nor LLM prompt assembly.
//  3. On failure it only writes back a recoverable state, so business errors never bubble up to the bootstrap layer.
type Runner struct {
	db           *gorm.DB
	jobRepo      *memoryrepo.JobRepo
	itemRepo     *memoryrepo.ItemRepo
	auditRepo    *memoryrepo.AuditRepo
	settingsRepo *memoryrepo.SettingsRepo
	extractor    Extractor
	ragRuntime   infrarag.Runtime
	logger       *log.Logger
}

// NewRunner constructs the memory worker executor.
func NewRunner(
	db *gorm.DB,
	jobRepo *memoryrepo.JobRepo,
	itemRepo *memoryrepo.ItemRepo,
	auditRepo *memoryrepo.AuditRepo,
	settingsRepo *memoryrepo.SettingsRepo,
	extractor Extractor,
	ragRuntime infrarag.Runtime,
) *Runner {
	return &Runner{
		db:           db,
		jobRepo:      jobRepo,
		itemRepo:     itemRepo,
		auditRepo:    auditRepo,
		settingsRepo: settingsRepo,
		extractor:    extractor,
		ragRuntime:   ragRuntime,
		logger:       log.Default(),
	}
}

// RunOnce manually executes one round of job processing.
//
// Return semantics:
//  1. Claimed=false means there is currently no runnable job.
//  2. Claimed=true with Status=success/failed/dead means one job was advanced this round.
//  3. An error is returned only for missing initialization or database-level failures.
func (r *Runner) RunOnce(ctx context.Context) (*RunOnceResult, error) {
	if r == nil || r.db == nil || r.jobRepo == nil || r.itemRepo == nil || r.auditRepo == nil || r.settingsRepo == nil || r.extractor == nil {
		return nil, errors.New("memory worker runner is not initialized")
	}

	// 1. Claim one runnable job first, so multiple workers never process the same record twice.
	job, err := r.jobRepo.ClaimNextRunnableExtractJob(ctx, time.Now())
	if err != nil {
		return nil, err
@@ -66,7 +92,7 @@ func (r *Runner) RunOnce(ctx context.Context) (*RunOnceResult, error) {
		Facts:   0,
	}

	// 2. Parse the job payload. A parse failure is a data-quality problem: mark it as a retryable failure and log it.
	var payload memorymodel.ExtractJobPayload
	if err = json.Unmarshal([]byte(job.PayloadJSON), &payload); err != nil {
		failReason := fmt.Sprintf("failed to parse job payload: %v", err)
@@ -75,7 +101,22 @@
		return result, nil
	}

	// 3. Read the user's memory settings first. With the master switch off, the job finishes as success without extraction or persistence.
	setting, err := r.settingsRepo.GetByUserID(ctx, payload.UserID)
	if err != nil {
		return nil, err
	}
	effectiveSetting := memoryutils.EffectiveUserSetting(setting, payload.UserID)
	if !effectiveSetting.MemoryEnabled {
		if err = r.jobRepo.MarkSuccess(ctx, job.ID); err != nil {
			return nil, err
		}
		result.Status = model.MemoryJobStatusSuccess
		r.logger.Printf("memory worker skipped by user setting: job_id=%d user_id=%d", job.ID, payload.UserID)
		return result, nil
	}

	// 4. Invoke the extractor. On LLM failure the orchestrator applies a conservative fallback; the worker only cares about the final result.
	facts, extractErr := r.extractor.ExtractFacts(ctx, payload)
	if extractErr != nil {
		failReason := fmt.Sprintf("extraction failed: %v", extractErr)
@@ -83,13 +124,213 @@
		result.Status = model.MemoryJobStatusFailed
		return result, nil
	}
	facts = memoryutils.FilterFactsBySetting(facts, effectiveSetting)

	if len(facts) == 0 {
		if err = r.jobRepo.MarkSuccess(ctx, job.ID); err != nil {
			return nil, err
		}
		result.Status = model.MemoryJobStatusSuccess
		r.logger.Printf("memory worker run once noop: job_id=%d", job.ID)
		return result, nil
	}

	items := buildMemoryItems(job, payload, facts)
	if len(items) == 0 {
		if err = r.jobRepo.MarkSuccess(ctx, job.ID); err != nil {
			return nil, err
		}
		result.Status = model.MemoryJobStatusSuccess
		r.logger.Printf("memory worker run once empty-after-normalize: job_id=%d", job.ID)
		return result, nil
	}

	// 5. Write memory items and audit logs inside one transaction, then confirm job success in the same transaction.
	if err = r.persistMemoryWrite(ctx, job.ID, items); err != nil {
		failReason := fmt.Sprintf("failed to persist memory items: %v", err)
		_ = r.jobRepo.MarkFailed(ctx, job.ID, failReason)
		result.Status = model.MemoryJobStatusFailed
		return result, nil
	}

	result.Status = model.MemoryJobStatusSuccess
	result.Facts = len(items)
	r.syncMemoryVectors(ctx, items)
	r.logger.Printf("memory worker run once success: job_id=%d extracted_facts=%d", job.ID, len(items))
	return result, nil
}

func (r *Runner) persistMemoryWrite(ctx context.Context, jobID int64, items []model.MemoryItem) error {
	return r.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
		jobRepo := r.jobRepo.WithTx(tx)
		itemRepo := r.itemRepo.WithTx(tx)
		auditRepo := r.auditRepo.WithTx(tx)

		if err := itemRepo.UpsertItems(ctx, items); err != nil {
			return err
		}

		for i := range items {
			audit := memoryutils.BuildItemAuditLog(
				items[i].ID,
				items[i].UserID,
				memoryutils.AuditOperationCreate,
				"system",
				"persisted from LLM extraction",
				nil,
				&items[i],
			)
			if err := auditRepo.Create(ctx, audit); err != nil {
				return err
			}
		}

		return jobRepo.MarkSuccess(ctx, jobID)
	})
}

func buildMemoryItems(job *model.MemoryJob, payload memorymodel.ExtractJobPayload, facts []memorymodel.NormalizedFact) []model.MemoryItem {
	if job == nil || len(facts) == 0 {
		return nil
	}

	items := make([]model.MemoryItem, 0, len(facts))
	for _, fact := range facts {
		items = append(items, model.MemoryItem{
			UserID:            payload.UserID,
			ConversationID:    strPtrOrNil(payload.ConversationID),
			AssistantID:       strPtrOrNil(payload.AssistantID),
			RunID:             strPtrOrNil(payload.RunID),
			MemoryType:        fact.MemoryType,
			Title:             fact.Title,
			Content:           fact.Content,
			NormalizedContent: strPtrFromValue(fact.NormalizedContent),
			ContentHash:       strPtrFromValue(fact.ContentHash),
			Confidence:        fact.Confidence,
			Importance:        fact.Importance,
			SensitivityLevel:  fact.SensitivityLevel,
			SourceMessageID:   int64PtrOrNil(payload.SourceMessageID),
			SourceEventID:     job.SourceEventID,
			IsExplicit:        fact.IsExplicit,
			Status:            model.MemoryItemStatusActive,
			TTLAt:             resolveMemoryTTLAt(payload.OccurredAt, fact.MemoryType),
			VectorStatus:      "pending",
		})
	}
	return items
}

func (r *Runner) syncMemoryVectors(ctx context.Context, items []model.MemoryItem) {
	if r == nil || r.ragRuntime == nil || r.itemRepo == nil || len(items) == 0 {
		return
	}

	requestItems := make([]infrarag.MemoryIngestItem, 0, len(items))
	for _, item := range items {
		requestItems = append(requestItems, infrarag.MemoryIngestItem{
			MemoryID:         item.ID,
			UserID:           item.UserID,
			ConversationID:   strValue(item.ConversationID),
			AssistantID:      strValue(item.AssistantID),
			RunID:            strValue(item.RunID),
			MemoryType:       item.MemoryType,
			Title:            item.Title,
			Content:          item.Content,
			Confidence:       item.Confidence,
			Importance:       item.Importance,
			SensitivityLevel: item.SensitivityLevel,
			IsExplicit:       item.IsExplicit,
			Status:           item.Status,
			TTLAt:            item.TTLAt,
			CreatedAt:        item.CreatedAt,
		})
	}

	result, err := r.ragRuntime.IngestMemory(ctx, infrarag.MemoryIngestRequest{
		Action: "add",
		Items:  requestItems,
	})
	if err != nil {
		r.logger.Printf("memory vector sync failed: err=%v", err)
		for _, item := range items {
			_ = r.itemRepo.UpdateVectorStateByID(ctx, item.ID, "failed", nil)
		}
		return
	}

	vectorIDMap := make(map[int64]string, len(result.DocumentIDs))
	for _, documentID := range result.DocumentIDs {
		memoryID := parseMemoryID(documentID)
		if memoryID <= 0 {
			continue
		}
		vectorIDMap[memoryID] = documentID
	}

	for _, item := range items {
		vectorID := strPtrOrNil(vectorIDMap[item.ID])
		_ = r.itemRepo.UpdateVectorStateByID(ctx, item.ID, "synced", vectorID)
	}
}

func resolveMemoryTTLAt(base time.Time, memoryType string) *time.Time {
	switch memoryType {
	case memorymodel.MemoryTypeTodoHint:
		t := base.Add(30 * 24 * time.Hour)
		return &t
	case memorymodel.MemoryTypeFact:
		t := base.Add(180 * 24 * time.Hour)
		return &t
	default:
		return nil
	}
}

func strPtrFromValue(v string) *string {
	v = strings.TrimSpace(v)
	if v == "" {
		return nil
	}
	value := v
	return &value
}

func strPtrOrNil(v string) *string {
	v = strings.TrimSpace(v)
	if v == "" {
		return nil
	}
	value := v
	return &value
}

func int64PtrOrNil(v int64) *int64 {
	if v <= 0 {
		return nil
	}
	value := v
	return &value
}

func strValue(v *string) string {
	if v == nil {
		return ""
	}
	return strings.TrimSpace(*v)
}

func parseMemoryID(documentID string) int64 {
	documentID = strings.TrimSpace(documentID)
	if !strings.HasPrefix(documentID, "memory:") {
		return 0
	}
	raw := strings.TrimPrefix(documentID, "memory:")
	if strings.HasPrefix(raw, "uid:") {
		return 0
	}
	memoryID, err := strconv.ParseInt(raw, 10, 64)
	if err != nil {
		return 0
	}
	return memoryID
}
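`parseMemoryID` accepts only numeric `memory:<id>` document IDs; the `memory:uid:` namespace, foreign prefixes, and non-numeric suffixes all map to 0, meaning "no database ID to touch". The contract can be checked in isolation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemoryID extracts the numeric ID from "memory:<id>" document IDs;
// anything else (wrong prefix, uid namespace, non-numeric suffix) yields 0.
func parseMemoryID(documentID string) int64 {
	documentID = strings.TrimSpace(documentID)
	if !strings.HasPrefix(documentID, "memory:") {
		return 0
	}
	raw := strings.TrimPrefix(documentID, "memory:")
	if strings.HasPrefix(raw, "uid:") {
		return 0
	}
	id, err := strconv.ParseInt(raw, 10, 64)
	if err != nil {
		return 0
	}
	return id
}

func main() {
	fmt.Println(parseMemoryID("memory:42"), parseMemoryID("memory:uid:7"), parseMemoryID("doc:42"))
}
```

Returning 0 as the sentinel works here because real memory IDs are positive; callers like `syncMemoryVectors` skip any `memoryID <= 0`.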