Jan-v2-VL-high reaches 49 steps on the Long-Horizon Execution benchmark.
Qwen3-VL-8B-Thinking reaches 5, Qwen2.5-VL-7B-Instruct and Gemma-3-12B reach 2, and Llama-3.1-Nemotron-8B and GLM-4.1-V-9B-Thinking reach 1.
Models: huggingface.co/collections...
14.11.2025 07:10
3 variants are available:
- Jan-v2-VL-low (efficiency-oriented)
- Jan-v2-VL-med (balanced)
- Jan-v2-VL-high (deeper reasoning and longer execution)
To use it, update your Jan App and download Jan-v2-VL from the Model Hub. Activate Browser MCP servers for agentic use cases.
13.11.2025 10:29
Introducing Jan-v2-VL, a multimodal agent built for long-horizon tasks.
Jan-v2-VL executes 49 steps without failure, while the base model stops at 5 and other similar-scale VLMs stop after 1–2 steps.
Models: huggingface.co/collections...
Credit to the Qwen team for Qwen3-VL-8B-Thinking!
13.11.2025 10:29
What is your open-source AI stack for coding?
10.11.2025 07:00
You can run Kimi-K2-Thinking, one of the strongest agentic models, in Jan through Hugging Face. Add it to your Hugging Face Inference models to use it.
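A rough sketch of the same setup outside Jan, via the huggingface_hub client (the repo ID and provider routing here are assumptions; check the model card for the exact repo):

```python
# Minimal sketch: calling Kimi-K2-Thinking through Hugging Face Inference.
# Assumption: "moonshotai/Kimi-K2-Thinking" is the served repo ID.
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_...")  # your Hugging Face token

response = client.chat_completion(
    model="moonshotai/Kimi-K2-Thinking",
    messages=[{"role": "user", "content": "Plan the steps to refactor a small Python module."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```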
07.11.2025 06:50
We're looking for someone who can take full ownership of Jan's growth.
menlo.bamboohr.com/careers/109
06.11.2025 04:58
What's your go-to open-source model now?
04.11.2025 09:00
What's the highest number of tools your models could use in Jan?
04.11.2025 03:45
You can now use Qwen3-VL in Jan.
Find the GGUF model on Hugging Face, click "Use this model" and select Jan, or copy the model link and paste it into Jan Hub.
Thanks, Qwen!
03.11.2025 10:28
llama.cpp in Jan has been updated. Tap "Update Now", or update it from Settings. Thanks to GGML and the open-source community!
03.11.2025 08:13
ANNOUNCEMENT | We've seen multiple tokens appearing under the name "Jan," some using our branding and visuals without authorization. These projects have no link to us. Please be cautious and trust only our official accounts for verified information.
31.10.2025 03:39
Run gpt-oss-safeguard-20b in Jan via Groq at 1k+ tokens/sec.
29.10.2025 12:42
Your Ollama models can use MCP servers in Jan.
29.10.2025 08:01
You can now run Ollama models in Jan.
Go to Settings, Model Providers, add Ollama, and set the Base URL to http://localhost:11434/v1.
Your Ollama models will then be ready to use in Jan.
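That Base URL works because Ollama serves an OpenAI-compatible API. A minimal sketch of talking to it directly, assuming you've already pulled a model (the model name below is a placeholder):

```python
# Rough sketch of the OpenAI-compatible endpoint Jan talks to.
# Assumption: a model has been pulled locally, e.g. `ollama pull llama3.2`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # the same Base URL set in Jan
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

reply = client.chat.completions.create(
    model="llama3.2",  # any model shown by `ollama list`
    messages=[{"role": "user", "content": "Say hello from my local model."}],
)
print(reply.choices[0].message.content)
```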
29.10.2025 06:25
Jan can now search the web using Exa. Turn it on from the MCP section.
28.10.2025 10:18
Jan now shows how much of the context window your chat uses.
28.10.2025 01:33
Kimi K2 runs in Jan.
Find it in Jan Hub, download, and you're good to go.
20.10.2025 05:48
Jan v0.7.2 is out.
We patched a security issue in happy-dom where untrusted JS could access system-level functions.
Update your Jan or download the latest.
17.10.2025 06:25
We spotted a couple of hiccups in v0.7.0 and shipped two quick fixes:
- Jan no longer reverts to an older version on load
- OpenRouter can now add models again
Update to v0.7.1 and you should be good to go!
03.10.2025 16:22
Organize chats with Projects in Jan v0.7.0.
This update brings:
- Projects to group related chats
- Model context stats
- Auto-loaded cloud models
- Support for Exa as an MCP Server
Update your Jan or download the latest version.
02.10.2025 09:45
You can run GGUF models from Hugging Face in Jan.
Jan Hub searches @huggingface and runs them with @ggml's llama.cpp.
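A rough sketch of that flow outside Jan, assuming the llama-cpp-python bindings and a placeholder GGUF repo:

```python
# Illustrative only: fetch a GGUF from Hugging Face and run it with llama.cpp
# bindings (llama-cpp-python). The repo ID and filename below are placeholders.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="TheOrg/Some-Model-GGUF",  # hypothetical GGUF repo on the Hub
    filename="*Q4_K_M.gguf",           # pick a quantization your machine can fit
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}]
)
print(out["choices"][0]["message"]["content"])
```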
02.10.2025 06:21
You can adjust model settings yourself or let Jan optimize them for your device.
30.09.2025 04:23
You can now auto-adjust llama.cpp settings. It's an experimental feature to help your models run more efficiently.
19.09.2025 05:11
Jan v0.6.10 is out: You can now import vision models too.
- Import your vision models
- Experimental setting auto-adjusts llama.cpp for your system
- Fixed: image attachments, copy glitches, API key visibility, and more
Update your Jan or download the latest.
18.09.2025 09:18
mistralai/Magistral-Small-2509-GGUF · Hugging Face
Magistral 1.2 now runs in Jan.
Go to the GGUF on Hugging Face, click Use this model, and select Jan.
huggingface.co/mistralai/M...
18.09.2025 05:28
What are your go-to MCP servers?
18.09.2025 03:01
Jan is lighter than ever.
17.09.2025 09:06
Some conversations are too private for cloud AI.
Create your own assistant in Jan and make it a lawyer, a financial advisor, or a customer support agent.
Choose an open-source model and keep every word private.
17.09.2025 08:08