Getting to “ground truth” with RAG processing

We’ve spent the last couple of months trying to separate truth from fiction about artificial intelligence, a controversial technology, much in the news, that seems to offer both promise and peril.

As content curators and transformers we’ve gotten interested in (and hopeful about) the area of AI called RAG (retrieval-augmented generation), a natural-language processing technique that helps avoid the “hallucination” phenomenon common to large language models. The RAG process supplies an AI assistant with a trustworthy, local, “ground-truth” knowledge base, retrieving relevant passages and feeding them to the model as context so that it answers from trusted text rather than from memory alone.
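To make the idea concrete, here is a minimal sketch of the RAG loop in Python. It is illustrative only: the knowledge base, function names, and keyword-overlap "retriever" are our own stand-ins. A production system would use an embedding model and a vector store for the retrieval step, but the shape of the process is the same: retrieve relevant passages, then build an augmented prompt for the language model.

```python
import re

# A toy "ground-truth" knowledge base (hypothetical contents).
KNOWLEDGE_BASE = [
    "RAG stands for retrieval-augmented generation.",
    "Hallucination is when a language model invents unsupported facts.",
    "A vector store indexes document embeddings for similarity search.",
]

def _tokens(text: str) -> set[str]:
    """Lowercase word set; a crude stand-in for an embedding."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared words with the query (toy retriever)."""
    q = _tokens(query)
    scored = sorted(docs, key=lambda d: len(q & _tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved ground-truth passages to the user's question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The augmented prompt is what actually gets sent to the language model.
prompt = build_prompt("What does RAG stand for?", KNOWLEDGE_BASE)
```

The key design point is that the model never sees the raw question alone; it sees the question plus passages pulled from a curated local source, which is what anchors its answer to ground truth.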

We have our own sandbox environment in which we’re prototyping RAG and other content-rich AI-oriented solutions.

We’re also taking classes, and we’ve just finished one that we highly recommend to anyone who wants both a high-level overview and, at the same time, a deep dive (thanks to the many Python-based tools available) into the very latest concepts, tools, and techniques, including business applications and RAG processing.

The class is Tech 16, available through Stanford Continuing Studies.

Thanks to Charlie Flanagan (and team) for providing a stimulating, supportive, and collaborative environment. Thanks, also, to our fellow students, who put together a number of creative final projects for the rest of us to enjoy, appreciate, and learn from.

We believe the class is being given again in the winter session. We might take it again, ourselves!