Run large language models locally for free in minutes.

Wingman is a chatbot that lets you run LLMs locally on PC and Mac (Intel or Apple Silicon).

First beta release, Rooster, now available!


Introducing Wingman

The best way to run Large Language Models locally.

Llama 2 • Phi • OpenAI • Mistral • Yi • Zephyr

An easy-to-use UI with no code or terminals

No more code or terminals. Wingman's intuitive graphical interface makes running LLMs approachable for anyone. Just point, click, and converse.

Wingman Interface

Open-source and OpenAI models, all in one place

Access a wide range of cutting-edge language models like Llama 2, GPT-4, Phi, Mistral, and more, all within Wingman's familiar chatbot interface.

Wingman's available LLMs
Stay on top of what's popular and trending, and sort by Emotion IQ. Easily browse, search and access the latest LLMs directly from Hugging Face's model hub without leaving the Wingman app.

See which LLMs are compatible with your machine at a glance

Know which models you can (and can't) realistically run based on your system's specs. Wingman evaluates compatibility upfront to help you avoid crashes and slow performance, but it won't stop you from trying the best models anyway!

ready for takeoff

Talk with characters using system prompts

Get more out of LLMs by customizing system prompts and creating templates for different use cases. Prompt models like characters or with specific viewpoints.

system prompts example
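For example (an illustrative prompt, not a built-in Wingman template), a character-style system prompt might read: "You are a seasoned flight instructor. Explain concepts patiently, lean on aviation analogies, and never break character." Save a prompt like this as a template and reuse it for that persona whenever you like.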

Give us a star on GitHub!

Frequently Asked Questions

How much does it cost?
Wingman is open source, so it’s free. You can download it here and give it a try yourself.
What are the system requirements?
Wingman works on Windows PCs and macOS. On PC, we support NVIDIA GPUs and CPU-based inference. On macOS, we support both Intel and Apple Silicon devices.
How do I use it?
Using Wingman is easy:
  1. Download the app
  2. Run the installer
  3. Pick a model and start chatting
Is it secure?
Wingman runs entirely on your own machine, so you aren’t sharing your secrets with OpenAI, Google, or anyone else. Our app doesn't phone home or rely on the network, except to download models in the first place.
How do I help out?
The project is open-source, and I’d love it if you helped. Please visit the GitHub repo to report issues, submit pull requests, etc.
How often can we expect updates?
I’ve got a lot more I want to do with Wingman, so you can expect frequent updates. Bugfixes will come first out of the gate, but there’s plenty more planned after that. Stay tuned for a roadmap!
Does Wingman require the internet, or can it be run fully offline?
Wingman doesn’t require the internet to run local models, so you can use it in a fully offline environment. Just make sure to download the models you want beforehand so they’re available when you’re offline.
Does Wingman have an API?
An API is in development but not ready for public release yet. Stay tuned!
Does Wingman support multi-modal prompting?
Not currently, but multi-modal input is being tested internally.
