Is anyone else having trouble with Knowledgebases and Agents?
I’m mostly using the DeepSeek R1 Distill Llama 70B model, but changing models definitely didn’t resolve the issues.
Connecting too many knowledge bases to an Agent fills the “reasoning” field with repetitive garbage from the context, or chunks of it. The <think>{reasoning}</think> block appears alongside the Agent’s normal response, and the Agent often hits an “out of tokens” error. Detaching all the knowledge bases and reattaching a single one fixes the issue, but reattaching multiple knowledge bases brings it back.
.csv imports will not index. Indexing ends with a “Data sources updated successfully” message, but zero tokens are indexed. I actually want to use Multi QA MPNet Base Dot v1, but I tried all three embedding models and they all fail the same way.
Has anyone else had either of these issues and if so have you found a fix?
This seems to have been resolved by increasing memory for the database.
Heya, @setec
I’ve looked into possible solutions, and it seems the workarounds are attaching only one knowledge base per Agent, or carefully controlling chunk sizes and retrieval limits (e.g., 2–5 results max). You can tune your Agent’s retrieval configuration toward smaller chunk sizes and fewer matches.
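To illustrate the idea of capping retrieval, here is a minimal sketch that limits context to at most k chunks and an overall size budget before anything reaches the model. The function name, the chunk list, and the character budget are all hypothetical stand-ins for whatever limits your platform’s retrieval settings actually expose; this is not the platform’s API.

```python
def build_context(chunks: list[str], k: int = 3, budget_chars: int = 4000) -> str:
    """Keep at most k retrieved chunks, trimmed to an overall character budget.

    Hypothetical helper: illustrates "fewer matches, smaller chunks" rather
    than any real agent configuration API.
    """
    picked: list[str] = []
    used = 0
    for chunk in chunks[:k]:          # hard cap on the number of matches
        if used + len(chunk) > budget_chars:
            chunk = chunk[: budget_chars - used]  # trim the last chunk to fit
        picked.append(chunk)
        used += len(chunk)
        if used >= budget_chars:      # stop once the budget is spent
            break
    return "\n\n".join(picked)
```

The point is just that both knobs matter: a low k limits how many chunks get in, and the budget keeps oversized chunks from eating the token window.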
As for the CSV imports, all I can think of is to keep the columns simple and clean, like
title, text
— with no unusual delimiters or hidden characters. Hope this helps!
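If you want to sanitize a file before re-uploading, here is a minimal sketch that rewrites a CSV down to plain title,text columns, stripping a BOM, zero-width characters, and stray control characters. The column names and file paths are assumptions; adjust them to your data.

```python
import csv
import re

# Zero-width and control characters that can silently break indexing.
HIDDEN = re.compile(r"[\u200b\u200c\u200d\ufeff\x00-\x08\x0b\x0c\x0e-\x1f]")

def clean(value: str) -> str:
    """Strip hidden characters and surrounding whitespace from one field."""
    return HIDDEN.sub("", value).strip()

def clean_csv(src_path: str, dst_path: str) -> None:
    """Rewrite src_path as a plain two-column title,text CSV at dst_path.

    Assumes the source has "title" and "text" columns; anything else is dropped.
    """
    with open(src_path, newline="", encoding="utf-8-sig") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["title", "text"])
        for row in reader:
            writer.writerow([clean(row.get("title", "")),
                             clean(row.get("text", ""))])
```

Running clean_csv("raw.csv", "clean.csv") and uploading the cleaned file is a cheap way to rule out hidden-character problems before blaming the embedding model.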