Nvidia is updating its ChatRTX beta with more AI models for RTX GPU owners. The chatbot, which runs natively on a Windows PC, can already use Mistral or Llama 2 to query personal documents you feed it, but the list of supported AI models is now growing to include Google's Gemma, ChatGLM3, and even OpenAI's CLIP model, which makes it easier to search your photos.
Nvidia first introduced ChatRTX as "Chat with RTX" in February as a beta app, and you'll need an RTX 30- or 40-series GPU with at least 8GB of VRAM to run it. The app essentially creates a local chatbot server that you can access from your browser and feed your local documents and even YouTube videos, giving you a powerful search tool complete with summaries and answers to questions about your private data.
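Under the hood, this kind of document querying typically follows a retrieval pattern: documents are split into chunks, the chunks most relevant to your question are retrieved, and those are handed to the language model as context. A minimal sketch of that retrieval step, using a simple bag-of-words similarity (real systems like ChatRTX use learned embeddings; all names and sample text here are illustrative):

```python
from collections import Counter
import math

def bag_of_words(text):
    # Lowercase word counts as a crude stand-in for a learned embedding
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two sparse word-count vectors
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question, chunks, top_k=1):
    # Rank document chunks by similarity to the question and keep the best
    q = bag_of_words(question)
    ranked = sorted(chunks,
                    key=lambda c: cosine_similarity(q, bag_of_words(c)),
                    reverse=True)
    return ranked[:top_k]

chunks = [
    "The quarterly report shows revenue grew 12 percent.",
    "Meeting notes: discuss GPU budget next week.",
    "Recipe: mix flour, sugar, and butter.",
]
print(retrieve("How much did revenue grow in the quarterly report?", chunks))
```

The retrieved chunk would then be placed in the model's prompt alongside the question, which is how a local chatbot can answer questions about files it was never trained on.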
Google's Gemma model is designed to run directly on powerful laptops or desktops, so its inclusion in ChatRTX is a natural fit. Nvidia's implementation takes some of the complexity out of running these models locally, and the resulting chatbot interface lets you switch between models so you can find the one that best suits the particular data you want to analyze or research.
ChatRTX, available as a 36GB download from Nvidia's website, now also supports ChatGLM3, an open bilingual (English and Chinese) large language model built on the General Language Model framework. OpenAI's Contrastive Language-Image Pre-training (CLIP) model has also been added, which lets you search and interact with local image data and essentially use the model for image recognition.
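CLIP works by mapping images and text into the same embedding space, so searching your photos reduces to finding the image vectors closest to the vector of your text query. A toy sketch of that nearest-neighbor step, with hand-made three-dimensional vectors standing in for CLIP's learned embeddings (real CLIP embeddings are 512-dimensional and produced by the model; all filenames and values here are illustrative):

```python
import math

# Toy embeddings standing in for CLIP image vectors
image_embeddings = {
    "beach.jpg":   [0.9, 0.1, 0.0],
    "dog.jpg":     [0.1, 0.9, 0.1],
    "skyline.jpg": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_embedding, index):
    # Rank image filenames by similarity to the text query's embedding
    return sorted(index,
                  key=lambda name: cosine(query_embedding, index[name]),
                  reverse=True)

# Pretend this vector came from encoding the text "a dog playing"
query = [0.2, 0.95, 0.05]
print(search(query, image_embeddings))
```

Because images and text share one space, no per-user training is needed: indexing your photo library once is enough to make it searchable by free-form text.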
Finally, Nvidia is also updating ChatRTX to support voice queries by integrating Whisper, an AI speech-recognition system that lets you search your data using your voice.