Introduction

In this article we begin the journey of running generative AI and large language models at home. But why would we want to do this? There are several compelling reasons.

First, if you’re concerned about privacy and security, running models locally is the only way to ensure your data isn’t being collected. Online AI platforms gather vast amounts of text and usage data, which can be problematic for individuals or organizations that value their privacy. By running tools locally, you avoid trusting those platforms with your sensitive information.

Second, developers may want to learn about the inner workings of AI by tweaking parameters and training models on their own data. This level of customization is typically not available through online tools.

Finally, some online tools are restrictive or configured in a way that won’t meet your needs. For instance, AI platforms might have guardrails in place to prevent certain topics from being discussed. While these safeguards are important, they can also limit the potential of AI for research and scientific purposes. Running models locally allows you to bypass these restrictions.

Running AI Locally: Requirements and Recommendations

In short, you’ll need a moderately recent and powerful gaming PC with a high-performance graphics processing unit (GPU) to run AI models locally. Most AI workloads rely on the GPU for acceleration, so it is essential for reasonable performance. Gaming PCs are a good starting point because they typically include a fast CPU, plenty of system RAM, and a capable GPU. If you’ll be using Windows, most tools support Nvidia GPUs by default, although AMD and Intel have been improving their AI support on Windows as well.

The amount of video RAM (VRAM) is also crucial, as it sets an upper bound on the size of models you can run without encountering errors. This is why I’m not recommending gaming laptops, which often come with limited VRAM. With that said, if you already have a gaming laptop or an older GPU, there’s no harm in trying it out to see what’s possible.
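
If you’re not sure how much VRAM your card has, a quick way to check is with a few lines of Python. This is just a minimal sketch assuming you have Python and PyTorch installed; Windows Task Manager or Nvidia’s nvidia-smi tool will tell you the same thing.

```python
# Minimal VRAM check, assuming Python and PyTorch (pip install torch) are installed.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)   # first (primary) GPU
    vram_gb = props.total_memory / (1024 ** 3)    # bytes -> gibibytes
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
else:
    print("No CUDA-capable GPU detected; tools will fall back to the CPU.")
```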

To get a sense of relative performance, check out this benchmark of Stable Diffusion, an AI image generator, running across many GPUs: https://www.tomshardware.com/pc-components/gpus/stable-diffusion-benchmarks

Unlocking the Power of AI on Your Local Machine

With your powerful PC and modern GPU, you can run a wide range of AI tools that cater to diverse needs. Here are some examples:

  • Chat AI for coding, writing, research, and more: Run large language models, similar to those behind ChatGPT, to assist with tasks, generate ideas, or simply chat.
  • Image generation: Turn text or visual prompts into new images using AI-powered generative tools (see the sketch after this list).
  • Audio tools: Generate music, convert text to speech, or create captions using various audio-related AI applications.
  • More tools are coming online regularly: Keep an eye out for new developments and updates in the world of AI.
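
To give a feel for what image generation looks like when scripted rather than driven through a GUI, here is a minimal sketch using Hugging Face’s diffusers library. It assumes a CUDA-capable GPU with enough VRAM, and the checkpoint name is only an example; swap in whatever model you have downloaded.

```python
# Minimal local image-generation sketch using the diffusers library
# (pip install diffusers transformers accelerate torch).
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; the first run downloads several gigabytes of weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,   # half precision roughly halves VRAM use
)
pipe = pipe.to("cuda")           # move the model onto the GPU

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```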

While these local tools may not be as polished as their cloud-based counterparts, they can still provide impressive experiences while keeping your data private and under your control. Explore what’s available and check back often to stay up-to-date on the latest advancements!

Running AI Tools at Home

Running AI models at home can be complex, but it’s getting easier. Just a year ago, many AI tools were designed for Linux servers, making them difficult to use on Windows. That is changing, however, and a growing number of tools now run on Windows with ease.

Here are some examples of AI tools you can run locally:

  • GPT4All : A text-generation tool that supports various models and offers a simple Windows installer. This is where I got my start with running AI at home (see the sketch after this list).
  • LM Studio : Another text-generation tool. LM Studio was an early adopter of consumer GPU support and also offers a simple Windows installer for easy setup.
  • Stable Diffusion Next : An image-generation tool that requires a manual installation. It offers tons of capability and flexibility, producing output that rivals the major cloud players.
  • Audacity AI Plugins : A collection of easy-to-use AI-powered audio effects that can be added to the popular audio editing software, Audacity. Built by Intel, these only directly support Intel GPUs for acceleration, but may be run in software mode.
  • TTS-Generation : An open-source text-to-speech generation tool that offers a web-based interface for easy use.

This is not an exhaustive list, but it provides a starting point for anyone looking to explore AI models at home. By following the setup guides and instructions provided with each tool, you’ll be able to access AI capabilities locally and maintain control over your data. Please note that some tools may require more technical expertise than others, so be sure to review the setup requirements before diving in.

Conclusion

In this article, we’ve explored the world of running AI models at home and uncovered the many benefits and possibilities it has to offer. From generating images and music to providing assistance with coding and research, local AI models can be incredibly powerful tools for anyone looking to take control of their data and creativity. While setting up these tools may require some technical expertise, the rewards are well worth the effort. With this guide, I hope you’ve gained a better understanding of what’s possible with local AI and how to get started. Whether you’re an enthusiast, a developer, or simply someone curious about the possibilities of AI, I invite you to join me on this journey of exploring the many uses of artificial intelligence at home.