Jan

AI Chat & Assistants

An open-source local AI chat app. Connect to GPT-4o and Claude via cloud APIs, or easily run local LLMs like Llama and Mistral through a beautiful GUI. Privacy-first design.

4.1
Windows / macOS / Linux

What is Jan?

Jan is a local-first, open-source AI chat application. With an intuitive ChatGPT-style UI, you can download and run local LLMs (Llama 3, Mistral, Phi, etc.) with one click, and also connect to cloud APIs from OpenAI, Anthropic, and others for a unified chat experience.

Jan's standout strength is how easy it makes running local LLMs: download any model from the built-in model hub with a single click and start chatting immediately. The llama.cpp-based inference engine runs on CPU alone, with optional GPU acceleration for faster performance. All conversation data is stored locally for complete privacy. A built-in OpenAI-compatible local API server lets other applications call Jan's models, and the extension system allows adding custom tools and integrations.
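Because Jan's local server speaks the OpenAI-compatible chat completions format, any HTTP client can talk to it. Below is a minimal sketch using only the Python standard library; the port (`1337`) and the model id (`llama3-8b-instruct`) are assumptions here, so substitute whatever your Jan instance actually reports.

```python
import json
import urllib.request

# Assumed default address of Jan's local API server; check your Jan settings.
JAN_API_URL = "http://localhost:1337/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def ask_jan(model: str, prompt: str) -> str:
    """POST the payload to Jan's local server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        JAN_API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the reply under choices[0].message.content
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires Jan to be running with its local API server enabled.
    print(ask_jan("llama3-8b-instruct", "Say hello in one sentence."))
```

Since the request body follows the standard OpenAI schema, the same sketch works against any provider Jan proxies, just by changing the model id.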

Jan screenshot

Pricing Plans

  • Completely free (open source, AGPLv3)
  • API keys for cloud providers are billed separately by each provider

Key Features

One-click local LLM download and execution
Cloud API connection (OpenAI / Anthropic / Google, etc.)
OpenAI-compatible local API server
llama.cpp-based high-efficiency inference engine
GPU (CUDA / Metal) and CPU support
Local conversation history storage and export
Extension (plugin) system
Easy model management from built-in model hub

Pros & Cons

Pros

  • One-click download and run for local LLMs
  • Beautiful, intuitive ChatGPT-style UI
  • Complete data privacy with all data stored locally
  • Runs on CPU only (no GPU required)
  • Also supports cloud APIs (OpenAI, Anthropic, etc.)
  • OpenAI-compatible local API server built in

Cons

  • Local LLM output quality falls short of GPT-4o and Claude
  • High-performance models require a powerful PC
  • UI is English-only with no Japanese localization
  • Extension ecosystem is still developing

Frequently Asked Questions

Q. What is the difference between Jan and Ollama?

A. Ollama is a command-line focused local LLM runtime, while Jan is a chat app with a graphical UI. Jan's strength is its accessibility for non-technical users. You can also use Ollama as a backend.

Q. Can I use Jan without a GPU?

A. Yes, it runs on CPU only. Lightweight models like Phi-3 Mini run comfortably on a PC with just 8 GB of RAM. However, a GPU is recommended for large models (70B+ parameters).

Q. Is my data uploaded to the cloud?

A. No, Jan follows a local-first design where all conversation and model data is stored on your PC. Data is only sent to a provider's servers when using cloud APIs.
