In the last few months, we have read about the many different chatbots that are available, from ChatGPT to Bing Chat to Bard. However, the large language models at the core of these chatbots have raised concerns about both the accuracy of their answers and their tendency to occasionally produce off-the-wall responses.
Today, NVIDIA announced a new open source toolkit designed to put some restrictions on chatbot answers. It's called, appropriately enough, NeMo Guardrails. It allows software developers to add safeguards on the kinds of answers chatbots generate.
There will be three different types of guardrails in this program:
- Topical guardrails prevent apps from veering off into undesired areas. For example, they keep customer service assistants from answering questions about the weather.
- Safety guardrails ensure apps respond with accurate, appropriate information. They can filter out unwanted language and enforce that references are made only to credible sources.
- Security guardrails restrict apps to making connections only to external third-party applications known to be safe.
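To give a sense of how this works in practice, here is a minimal sketch of a topical guardrail written in Colang, the configuration language NeMo Guardrails uses to define conversational flows. The specific message names and example phrases below are illustrative assumptions, not taken from NVIDIA's documentation:

```
# Hypothetical Colang sketch: keep a customer service bot from
# answering weather questions (the names below are illustrative).

define user ask about weather
  "What's the weather like today?"
  "Will it rain tomorrow?"

define bot refuse off topic
  "I'm sorry, I can only help with questions about our products and services."

define flow weather guardrail
  user ask about weather
  bot refuse off topic
```

In this sketch, incoming user messages that match the "ask about weather" intent trigger the flow, and the bot responds with the canned refusal instead of passing the question to the underlying language model.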
Software developers can learn more about NeMo Guardrails on NVIDIA's technical blog.