feat: improve the long-term memory console import pipeline and integration tests
summary:
- Extend the long-term memory console import, tuning, and deletion UI/APIs; add localized Chinese display and fine-grained per-task state management
- Strengthen the memory API and backend routes, filling in fields for import tasks, graph retrieval, configuration, and runtime state
- Add and enhance frontend/backend tests covering the key paths of multi-file-type import, retrieval, tuning, deletion, and graph queries
description:
- dashboard: refactor the knowledge-base page and memory-api; unify the task queue, chunk pagination, source delete/restore, and the tuning feedback loop
- backend: extend the webui memory routes and the A_Memorix kernel retrieval logic; flesh out service-side capabilities and the config schema
- tests: add webui integration tests and kernel unit tests to strengthen regression coverage across the full import/retrieval/tuning/deletion flow
@@ -560,11 +560,18 @@ Chat paragraph:
         )

     async def _llm_call(self, prompt: str, model_config: Any) -> Dict:
         """Generic LLM Caller"""
-        success, response, _, _ = await llm_api.generate_with_model(
-            prompt=prompt,
-            model_config=model_config,
-            request_type="Script.ProcessKnowledge"
-        )
+        task_name = llm_api.resolve_task_name_from_model_config(model_config)
+        result = await llm_api.generate(
+            llm_api.LLMServiceRequest(
+                task_name=task_name,
+                request_type="Script.ProcessKnowledge",
+                prompt=prompt,
+                temperature=getattr(model_config, "temperature", None),
+                max_tokens=getattr(model_config, "max_tokens", None),
+            )
+        )
+        success = bool(result.success)
+        response = str(result.completion.response or "")
         if success:
             txt = response.strip()
             if "```" in txt:
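The hunk above swaps the old tuple-returning `generate_with_model` call for a structured request/result pair. A minimal self-contained sketch of that call shape, with stub types standing in for the real service (the `generate` / `LLMServiceRequest` names come from the diff; the stub internals and field defaults here are hypothetical):

```python
import asyncio
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LLMServiceRequest:
    # Mirrors the fields passed in the diff; types are assumed.
    task_name: str
    request_type: str
    prompt: str
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None

@dataclass
class _Completion:
    response: Optional[str] = None

@dataclass
class _Result:
    success: bool
    completion: _Completion = field(default_factory=_Completion)

async def generate(request: LLMServiceRequest) -> _Result:
    # Stub: a real implementation would dispatch to the configured model.
    return _Result(success=True, completion=_Completion(response="ok"))

async def _llm_call(prompt: str, task_name: str) -> tuple[bool, str]:
    result = await generate(LLMServiceRequest(
        task_name=task_name,
        request_type="Script.ProcessKnowledge",
        prompt=prompt,
    ))
    # Normalize the structured result back to the (success, response)
    # pair the old API shape returned, as the diff does.
    return bool(result.success), str(result.completion.response or "")

success, response = asyncio.run(_llm_call("hello", "knowledge"))
```

The design point of the change: a single request object carries per-call generation parameters (`temperature`, `max_tokens`) instead of threading a whole `model_config` through, and the caller normalizes the structured result back to the simple `(success, response)` pair the rest of the function expects.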