smartmate/backend/infra/rag/retrieve/vector_retriever.go
Losita 863cba4e4e Version: 0.9.16.dev.260413
Backend:
1. RAG embedding integration fixed, with compatibility for the Ark multimodal embedding path
   - Updated backend/infra/rag/embed/eino_embedder.go: text embedding still goes through the Eino OpenAI-compatible path; `doubao-embedding-vision-*` models are switched to Ark's native `/embeddings/multimodal` endpoint
   - Added embedding baseURL normalization: tolerates configs that mistakenly include `.../embeddings` or `.../embeddings/multimodal`, uniformly falling back to `/api/v3`
   - Added panic recover around third-party embedding calls, so vector retrieval/ingest failures no longer crash the main process
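The baseURL normalization above can be sketched as follows. This is a minimal illustration, not the actual `eino_embedder.go` code; the helper name `normalizeArkBaseURL` and the example host are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeArkBaseURL sketches the normalization described above: if a config
// mistakenly includes an endpoint suffix, strip it back to the `/api/v3` root.
// (Hypothetical helper; the real implementation lives in eino_embedder.go.)
func normalizeArkBaseURL(raw string) string {
	base := strings.TrimRight(strings.TrimSpace(raw), "/")
	// Trim the longer suffix first so "/embeddings/multimodal" is removed whole.
	for _, suffix := range []string{"/embeddings/multimodal", "/embeddings"} {
		base = strings.TrimSuffix(base, suffix)
	}
	return base
}

func main() {
	fmt.Println(normalizeArkBaseURL("https://ark.cn-beijing.volces.com/api/v3/embeddings/multimodal"))
	// → https://ark.cn-beijing.volces.com/api/v3
}
```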

2. RAG runtime / pipeline / store stability hardening, uniformly downgrading panics to error semantics
   - Updated backend/infra/rag/runtime.go: external runtime entry points gain panic recover and observability metrics
   - Updated backend/infra/rag/core/pipeline.go: panic recover added at the ingest / retrieve orchestration boundaries
   - Updated backend/infra/rag/retrieve/vector_retriever.go: panic recover added at the vector retrieval boundary
   - Updated backend/infra/rag/store/milvus_store.go and backend/infra/rag/store/inmemory_store.go: added not-initialized guards so nil dependencies no longer cause abnormal exits
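The recover-to-error boundary pattern used across these files can be sketched as below. `safeRetrieve` and its error text are illustrative, not the actual runtime API; the real boundaries wrap the concrete ingest/retrieve entry points:

```go
package main

import "fmt"

// safeRetrieve sketches the boundary pattern described above: a deferred
// recover converts a panic inside the RAG path into an ordinary error, so the
// caller can log and degrade instead of crashing the main process.
func safeRetrieve(do func() ([]string, error)) (result []string, err error) {
	defer func() {
		if recovered := recover(); recovered != nil {
			// Overwrite the named return value so the panic surfaces as an error.
			err = fmt.Errorf("rag boundary panic recovered: %v", recovered)
		}
	}()
	return do()
}

func main() {
	_, err := safeRetrieve(func() ([]string, error) {
		panic("nil milvus client") // simulate an uninitialized dependency
	})
	fmt.Println(err)
	// → rag boundary panic recovered: nil milvus client
}
```

The named return value is essential here: a deferred closure can only propagate the recovered error to the caller by assigning to a named `err`.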

3. RAG embedding configuration aligned with the regular LLM path
   - Updated backend/infra/rag/factory.go: the RAG embedding API key no longer goes through the indirect `apiKeyEnv` mapping; it now reads `ARK_API_KEY` directly
   - Updated backend/infra/rag/config/config.go: removed the `rag.embed.apiKeyEnv` config field to eliminate the configuration fork
   - Updated backend/config.example.yaml: example config switched to the current integration settings, keeping `rag.enabled=true` and `memory.rag.enabled=true`, with the Milvus / embed sections aligned

4. Memory + RAG integration path now verified runnable
   - Verified: memory extraction writes to the database, RAG ingest writes to Milvus, and the downstream semantic recall path is ready for further integration testing
   - Retrieval failures have been downgraded from "panic outright" to "log and degrade", so they no longer block the main chat path

Frontend: none
Repository: none

TODO:
1. MySQL memory deduplication for create/read/update/delete operations is not implemented
2. The filter for extracting user utterances into memories is too permissive, bordering on indiscriminate
3. RAG recall still has issues
2026-04-13 23:18:59 +08:00

package retrieve

import (
	"context"
	"fmt"
	"strings"

	"github.com/LoveLosita/smartflow/backend/infra/rag/core"
)

// VectorRetriever is a general-purpose retriever: embed the query, then run a
// vector search against the store.
type VectorRetriever struct {
	embedder core.Embedder
	store    core.VectorStore
}

func NewVectorRetriever(embedder core.Embedder, store core.VectorStore) *VectorRetriever {
	return &VectorRetriever{
		embedder: embedder,
		store:    store,
	}
}

// Retrieve embeds the query and searches the vector store. A panic anywhere
// inside this boundary is converted into an error via the named return value,
// so callers can log and degrade instead of crashing.
func (r *VectorRetriever) Retrieve(ctx context.Context, req core.RetrieveRequest) (result []core.ScoredChunk, err error) {
	defer func() {
		recovered := recover()
		if recovered == nil {
			return
		}
		err = fmt.Errorf("vector retriever panic recovered: %v", recovered)
	}()

	if r == nil || r.embedder == nil || r.store == nil {
		return nil, core.ErrNilDependency
	}

	query := strings.TrimSpace(req.Query)
	if query == "" {
		return nil, core.ErrInvalidQuery
	}

	topK := req.TopK
	if topK <= 0 {
		topK = 8
	}

	action := strings.TrimSpace(req.Action)
	if action == "" {
		action = "search"
	}

	vectors, err := r.embedder.Embed(ctx, []string{query}, action)
	if err != nil {
		return nil, err
	}
	if len(vectors) != 1 {
		return nil, fmt.Errorf("embedding query length mismatch: %d", len(vectors))
	}

	rows, err := r.store.Search(ctx, core.VectorSearchRequest{
		QueryVector: vectors[0],
		TopK:        topK,
		Filter:      req.Filter,
	})
	if err != nil {
		return nil, err
	}

	// Drop rows below the score threshold and project store rows into chunks.
	result = make([]core.ScoredChunk, 0, len(rows))
	for _, row := range rows {
		if row.Score < req.Threshold {
			continue
		}
		result = append(result, core.ScoredChunk{
			ChunkID:    row.Row.ID,
			DocumentID: asString(row.Row.Metadata["document_id"]),
			Text:       row.Row.Text,
			Score:      row.Score,
			Metadata:   cloneMap(row.Row.Metadata),
		})
	}
	return result, nil
}

// cloneMap shallow-copies metadata so callers cannot mutate the stored row.
func cloneMap(src map[string]any) map[string]any {
	if len(src) == 0 {
		return map[string]any{}
	}
	dst := make(map[string]any, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

// asString renders an arbitrary metadata value as a string, with "" for nil.
func asString(v any) string {
	if v == nil {
		return ""
	}
	return fmt.Sprintf("%v", v)
}