What are the most effective prompting techniques to reduce hallucinations in RAG pipelines?
Asked about 2 months ago · Viewed 22 times
I am building a Retrieval-Augmented Generation (RAG) chatbot for internal company documents. Sometimes the LLM makes up information when the retrieved documents don't contain the exact answer. What exact phrasing in the system prompt works best to force the model to say "I don't know" rather than hallucinate?
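One common pattern is to delimit the retrieved context explicitly and give the model a single permitted fallback phrase. Below is a minimal illustrative sketch of that pattern; the exact wording, the fallback phrase, and the `retrieved_chunks` input are assumptions, not a known-best prompt.

```python
# Illustrative sketch of a grounding prompt for a RAG pipeline.
# The model is told to answer ONLY from the supplied context and is
# given one exact fallback sentence to use when the context is silent.

SYSTEM_PROMPT = """You are an assistant that answers questions using ONLY the \
provided context. If the context does not contain the information needed, \
reply exactly: "I don't know based on the provided documents." \
Do not use outside knowledge. Quote the document snippet you relied on."""


def build_messages(question: str, retrieved_chunks: list[str]) -> list[dict]:
    """Assemble a chat payload with clearly delimited, numbered context."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    user_content = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_content},
    ]


# Example usage (hypothetical internal-docs question):
messages = build_messages(
    "What is our PTO accrual rate?",
    ["Employees accrue 1.5 days of PTO per month of service."],
)
```

Numbering the chunks lets you also ask the model to cite `[Document N]`, which makes unsupported claims easier to spot in evaluation.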
0 Answers