When the AI chatbot trend began to pick up steam, most people used chatbots that connected to cloud-based servers. However, there's been a growing trend toward running chatbots locally on PCs and smartphones. Today, Nvidia launched a new way for Windows PC owners to create their own large language model AI chatbots that connect to locally stored content.
Nvidia calls this local LLM app Chat with RTX, and a demo version is available now from the company's website. As the name suggests, your PC must have an Nvidia GeForce RTX GPU to run this local chatbot. Here are the exact hardware requirements:
- GPU - Nvidia GeForce RTX 30 or 40 Series GPU, or Nvidia RTX Ampere or Ada Generation GPU, with at least 8GB of VRAM
- RAM - 16GB or greater
- OS - Windows 11
- GeForce Driver - 535.11 or later
Once you download and install it, you can connect the Chat with RTX app to your locally stored content, such as documents, PDF files, videos, and more. You can then ask the chatbot questions about that local data set, and it will pull answers from your files without you having to sift through all that material yourself.
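This kind of local question-answering is generally built on retrieval-augmented generation (RAG): the app indexes your files, finds the passages most relevant to your question, and hands them to the language model as context. Nvidia hasn't published the demo's internals in detail, so the Python sketch below is only a toy illustration of that retrieval step, not Chat with RTX's actual code. It assumes a hypothetical `my_notes` folder of plain-text files, and the final prompt would go to a local model rather than being printed.

```python
from pathlib import Path

def load_documents(folder):
    """Read every .txt file in the folder into (name, text) pairs."""
    return [(p.name, p.read_text(encoding="utf-8")) for p in Path(folder).glob("*.txt")]

def score(question, text):
    """Crude relevance score: count question words that appear in the text."""
    words = set(question.lower().split())
    return sum(1 for w in words if w in text.lower())

def retrieve(question, docs, top_k=2):
    """Return the top_k documents most relevant to the question."""
    ranked = sorted(docs, key=lambda d: score(question, d[1]), reverse=True)
    return ranked[:top_k]

def build_prompt(question, docs):
    """Prepend the retrieved passages as context for a local model."""
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    question = "When is the project deadline?"
    docs = load_documents("my_notes")  # hypothetical folder of .txt files
    prompt = build_prompt(question, retrieve(question, docs))
    print(prompt)  # in a real app, this prompt is sent to the local LLM
```

A real implementation would use embeddings rather than word overlap to rank passages, but the flow is the same: retrieve first, then generate.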
In addition, Chat with RTX can pull information from some YouTube videos. All you have to do is paste the video's URL into the app and then ask questions about its content; Chat with RTX should provide the answers.
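The YouTube feature presumably works the same way, with the video's transcript standing in for a local document. As a hedged illustration only (the article doesn't say how Chat with RTX fetches transcripts), the sketch below uses the third-party youtube-transcript-api package to grab a transcript that could then be indexed and queried like the local files above.

```python
# pip install youtube-transcript-api  (third-party package; not part of Chat with RTX)
from youtube_transcript_api import YouTubeTranscriptApi

def transcript_text(video_id):
    """Fetch a video's caption entries and join them into one string."""
    entries = YouTubeTranscriptApi.get_transcript(video_id)
    return " ".join(entry["text"] for entry in entries)

if __name__ == "__main__":
    # "dQw4w9WgXcQ" is just an example ID; use any video that has captions
    text = transcript_text("dQw4w9WgXcQ")
    print(text[:300])  # this text can feed the same retrieval step as local files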
Of course, the biggest feature of local LLM apps like Chat with RTX is that you can have an AI chatbot that won't transmit data to a company's cloud server for training and other purposes. The trade-off is that you will need a fairly powerful PC to run it.
Nvidia labels the Chat with RTX app a free technical demo, which might suggest the company plans to turn it into a paid or subscription-based app with more features at some point in the future.