Available Models
Chat AI provides a large assortment of state-of-the-art open-weight Large Language Models (LLMs), hosted on our platform with the highest standards of data protection. The data sent to these models, including prompts and message contents, is never stored anywhere on our systems. Additionally, Chat AI offers externally hosted models such as OpenAI’s GPT-5, GPT-4o, and o3.
Available models are regularly upgraded as newer, more capable ones are released. We select models for our services based on user demand, cost, and performance across various benchmarks such as HumanEval, MATH, HellaSwag, and MMLU. Certain models are more capable at specific tasks or with specific settings; these are described below to the best of our knowledge.
List of open-weight models, hosted by GWDG
| Organization | Model | Open | Knowledge cutoff | Context window in tokens | Advantages | Limitations | Recommended settings |
|---|---|---|---|---|---|---|---|
| 🇨🇭 Swiss AI | Apertus 70B Instruct 2509 | yes | Apr 2024 | 65k | Fully open-source, multilingual | - | temp=0.8, top_p=0.9 |
| 🇨🇳 DeepSeek | DeepSeek R1 0528 | yes | Dec 2023 | 32k | Great overall performance, reasoning and problem-solving | Censorship, political bias | default |
| 🇨🇳 DeepSeek | DeepSeek R1 Distill Llama 70B | yes | Dec 2023 | 32k | Good overall performance, faster than R1 | Censorship, political bias | temp=0.7, top_p=0.8 |
| 🇺🇸 Google | Gemma 3 27B Instruct | yes | Mar 2024 | 128k | Vision, great overall performance | - | default |
| 🇸🇬 Z.ai | GLM-4.7 | yes | Apr 2025 | 200k | Great overall performance, agentic coding | - | temp=1.0, top_p=0.95 |
| 🇨🇳 OpenGVLab | InternVL 3.5 30B A3B | yes | Aug 2025 | 40k | Vision, lightweight and fast | - | default |
| 🇩🇪 VAGOsolutions x Meta | Llama 3.1 SauerkrautLM 70B Instruct | yes | Dec 2023 | 128k | German language skills | - | default |
| 🇺🇸 Google | MedGemma 27B Instruct | yes | Mar 2024 | 128k | Vision, medical knowledge | - | default |
| 🇺🇸 Meta | Llama 3.1 8B Instruct | yes | Dec 2023 | 128k | Fast overall performance | - | default |
| 🇺🇸 Meta | Llama 3.3 70B Instruct | yes | Dec 2023 | 128k | Good overall performance, reasoning and creative writing | - | temp=0.7, top_p=0.8 |
| 🇫🇷 Mistral | Mistral Large Instruct | yes | Jul 2024 | 128k | Good overall performance, coding and multilingual reasoning | - | default |
| 🇺🇸 OpenAI | GPT OSS 120B | yes | Jun 2024 | 128k | Great overall performance, fast | - | default |
| 🇨🇳 Alibaba Cloud | Qwen 3 235B A22B Thinking 2507 | yes | Apr 2025 | 222k | Great overall performance, reasoning | - | temp=0.6, top_p=0.95 |
| 🇨🇳 Alibaba Cloud | Qwen 3 30B A3B Instruct 2507 | yes | Mar 2025 | 256k | Good performance, fast | - | temp=0.6, top_p=0.95 |
| 🇨🇳 Alibaba Cloud | Qwen 3 30B A3B Thinking 2507 | yes | Mar 2025 | 256k | Good performance, reasoning | - | temp=0.6, top_p=0.95 |
| 🇨🇳 Alibaba Cloud | Qwen 3 32B | yes | Sep 2024 | 32k | Good overall performance, multilingual, logic | - | default |
| 🇨🇳 Alibaba Cloud | Qwen 3 Coder 30B A3B Instruct | yes | Mar 2025 | 256k | Coding | - | temp=0.7, top_p=0.8 |
| 🇨🇳 Alibaba Cloud | Qwen 3 Omni 30B A3B Instruct | yes | Mar 2025 | 256k | Multimodal | - | default |
| 🇨🇳 Alibaba Cloud | Qwen 3 VL 30B A3B Instruct | yes | Mar 2025 | 262k | Vision | - | default |
| 🇩🇪 OpenGPT-X | Teuken 7B Instruct Research | yes | Sep 2024 | 128k | European languages | - | default |
| 🇺🇸 intfloat x Mistral | E5 Mistral 7B Instruct | yes | - | 4096 | Embeddings | API only | - |
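The values in the Recommended settings column map onto the standard sampling parameters of an OpenAI-compatible chat completions call, and models marked "default" can simply be called without explicit sampling parameters. The following is a minimal sketch assuming the service exposes an OpenAI-compatible API and that the openai Python SDK is used; the base URL and model identifiers are illustrative placeholders, not confirmed values, so consult the API documentation for the real ones.

```python
# A minimal sketch, assuming an OpenAI-compatible API. The base URL and the
# model identifiers below are illustrative placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://chat-ai.example.org/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

# The "Recommended settings" column corresponds to these sampling parameters,
# e.g. temp=0.7, top_p=0.8 for Llama 3.3 70B Instruct.
response = client.chat.completions.create(
    model="llama-3.3-70b-instruct",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain what a context window is."}],
    temperature=0.7,
    top_p=0.8,
)
print(response.choices[0].message.content)

# E5 Mistral 7B Instruct is marked "API only" because it produces embeddings
# rather than chat responses, so it is called via the embeddings endpoint.
embedding = client.embeddings.create(
    model="e5-mistral-7b-instruct",  # placeholder model identifier
    input="A sentence to embed.",
)
print(len(embedding.data[0].embedding))  # dimensionality of the returned vector
```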
List of external models, hosted by external providers
| Organization | Model | Open | Knowledge cutoff | Context window in tokens | Advantages | Limitations | Recommended settings |
|---|---|---|---|---|---|---|---|
| 🇺🇸 OpenAI | GPT-5.2 Chat | no | Aug 2025 | 400k | Great overall performance | - | default |
| 🇺🇸 OpenAI | GPT-5.2 | no | Aug 2025 | 400k | Great overall performance | - | default |
| 🇺🇸 OpenAI | GPT-5.1 Chat | no | Sep 2024 | 400k | Great overall performance | - | default |
| 🇺🇸 OpenAI | GPT-5.1 | no | Sep 2024 | 400k | Great overall performance | - | default |
| 🇺🇸 OpenAI | GPT-5 Chat | no | Jun 2024 | 400k | Good overall performance | - | default |
| 🇺🇸 OpenAI | GPT-5 | no | Jun 2024 | 400k | Good overall performance, reasoning | - | default |
| 🇺🇸 OpenAI | GPT-5 Mini | no | Jun 2024 | 400k | Fast overall performance | - | default |
| 🇺🇸 OpenAI | GPT-5 Nano | no | Jun 2024 | 400k | Fastest overall performance | - | default |
| 🇺🇸 OpenAI | o3 | no | Oct 2023 | 200k | - | outdated | default |
| 🇺🇸 OpenAI | o3-mini | no | Oct 2023 | 200k | - | outdated | default |
| 🇺🇸 OpenAI | GPT-4o | no | Oct 2023 | 128k | - | outdated | default |
| 🇺🇸 OpenAI | GPT-4o Mini | no | Oct 2023 | 128k | - | outdated | default |
| 🇺🇸 OpenAI | GPT-4.1 | no | Jun 2024 | 1M | - | outdated | default |
| 🇺🇸 OpenAI | GPT-4.1 Mini | no | Jun 2024 | 1M | - | outdated | default |
Open-weight models, hosted by GWDG
The models listed in this section are hosted on our platform with the highest standards of data protection. The data sent to these models, including prompts and message contents, is never stored anywhere on our systems.
Apertus 70B Instruct
Apertus is a fully open language model designed to push the boundaries of transparent and compliant AI. It supports over 1,800 languages and a context window size of up to 65,536 tokens, using only fully compliant and open training data. The model achieves comparable performance to closed-source models while respecting opt-out consent of data owners. It was pretrained on 15T tokens with a staged curriculum of web, code, and math data.
DeepSeek R1 0528
Developed by the Chinese company DeepSeek (深度求索), DeepSeek R1 was the first highly capable open-weight reasoning model to be released. In the latest update, DeepSeek R1 0528, its depth of reasoning and inference capabilities have increased. Although very large and quite slow, it achieves one of the best overall performances among open models.
Warning
DeepSeek models, including R1, have been reported to produce politically biased responses, and censor certain topics that are sensitive for the Chinese government.
DeepSeek R1 Distill Llama 70B
Developed by the Chinese company DeepSeek (深度求索), DeepSeek R1 Distill Llama 70B is a dense model distilled from DeepSeek R1 but based on Llama 3.3 70B, in order to fit the capabilities and performance of R1 into a 70B-parameter model.
Google Gemma 3 27B Instruct
Gemma is Google’s family of light, open-weight models developed with the same research used in the development of its commercial Gemini model series. Gemma 3 27B Instruct is quite fast, and thanks to its support for vision (image input), it is a great choice for all sorts of conversations.
GLM-4.7
GLM-4.7 is a coding-focused model that delivers significant improvements over its predecessor in multilingual agentic coding and terminal-based tasks. It achieves strong performance on SWE-bench, SWE-bench Multilingual, and Terminal Bench 2.0. GLM-4.7 also excels at tool use, web browsing, and mathematical reasoning, with notable gains on benchmarks like HLE and τ²-Bench.
InternVL 3.5 30B-A3B
InternVL 3.5 30B-A3B is a lightweight, fast and powerful multimodal model developed by OpenGVLab. It significantly advances versatility, reasoning capability, and efficiency by featuring a Visual Resolution Router (ViR) for dynamic visual token adjustment and Decoupled Vision-Language Deployment (DvD) for efficient GPU load balancing, achieving up to a 4× inference speedup compared to its predecessor. The model excels at multimodal reasoning, OCR, document understanding, multi-image comprehension, video understanding, GUI tasks, and embodied agent tasks.
Llama 3.1 SauerkrautLM 70B Instruct
SauerkrautLM was trained by VAGOsolutions on top of Llama 3.1 70B specifically for prompts in German.
Google MedGemma 27B Instruct
MedGemma 27B Instruct is a variant of Gemma 3 suitable for medical text and image comprehension. It has been trained on a variety of medical image data, including chest X-rays, dermatology images, ophthalmology images, and histopathology slides, as well as medical text, such as medical question-answer pairs, and FHIR-based electronic health record data. MedGemma variants have been evaluated on a range of clinically relevant benchmarks to illustrate their baseline performance.
Meta Llama 3.1 8B Instruct
The standard model we recommend. It is our most lightweight model, with the fastest responses and good results across all benchmarks, and is sufficient for general conversations and assistance.
Meta Llama 3.3 70B Instruct
Achieves good overall performance, on par with GPT-4, but with a much larger context window and a more recent knowledge cutoff. Best in English comprehension and broader linguistic tasks such as translation, understanding dialects, slang, and colloquialisms, and creative writing.
Mistral Large Instruct
Developed by Mistral AI, Mistral Large Instruct 2407 is a dense language model with 123B parameters. It achieves great benchmark scores in general performance, code and reasoning, and instruction following. It is also multilingual and supports many European and Asian languages.
OpenAI GPT OSS 120B
In August 2025, OpenAI released the gpt-oss model series, consisting of two open-weight LLMs that are optimized for faster inference with state-of-the-art performance across many domains, including reasoning and tool use. According to OpenAI, the gpt-oss-120b model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks.
Qwen 3 235B A22B Thinking 2507
Expanding on Qwen 3 235B A22B, one of the best-performing models of the Qwen 3 series, Qwen 3 235B A22B Thinking 2507 has significantly improved performance on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks. It is an MoE model with 235B total parameters and 22B activated parameters, and achieves state-of-the-art results among open-weight thinking models.
Qwen 3 30B A3B Instruct 2507
This MoE model features 30.5B total parameters with 3.3B activated parameters for efficient inference. It delivers significant improvements in instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage, with better alignment for subjective and open-ended tasks. The model supports a 256K native context length and operates in non-thinking mode, achieving strong performance across knowledge, reasoning, coding, and multilingual benchmarks.
Qwen 3 30B A3B Thinking 2507
The Thinking variant of Qwen 3 30B A3B is optimized for complex reasoning tasks. It excels at mathematical problem-solving (AIME25, HMMT25), logical reasoning (ZebraLogic), and coding challenges, while maintaining strong performance on knowledge benchmarks like MMLU-Pro and GPQA. This model also demonstrates strong alignment capabilities, scoring high on IFEval, Arena-Hard v2, and creative writing benchmarks.
Qwen 3 32B
Qwen 3 32B is a large dense model developed by Alibaba Cloud and released in April 2025. It supports reasoning and is on par with, or outperforms, other strong reasoning models such as OpenAI o1 and DeepSeek R1.
Qwen 3 Coder 30B A3B Instruct
Qwen 3 Coder 30B A3B Instruct is a specialized coding model that achieves strong performance on agentic coding, browser-use, and other foundational coding tasks among open models.
Qwen 3 Omni 30B A3B Instruct
Qwen3 Omni is a natively multilingual omni-modal foundation model that processes text, images, audio, and video. It achieves state-of-the-art performance on many audio/video benchmarks, with ASR, audio understanding, and voice conversation performance comparable to Gemini 2.5 Pro. The model features a novel MoE-based Thinker-Talker architecture with AuT pretraining, supports 119 text languages and 19 speech input languages, and enables low-latency interaction with flexible control via system prompts.
Qwen 3 VL 30B A3B Instruct
Qwen3 VL is the most powerful vision-language model in the Qwen series to date, featuring comprehensive upgrades across visual perception, reasoning, and agent capabilities. This MoE model (30B total, 3B active) excels as a visual agent that can operate PC/mobile GUIs, generate code from images/videos (Draw.io/HTML/CSS/JS), and perform advanced spatial reasoning with 2D and 3D grounding. It supports native 256K context for long-form video understanding, recognizes a wide range of entities including celebrities, anime, products, and landmarks, and offers robust OCR across 32 languages. Text understanding is on par with pure LLMs, enabling seamless text-vision fusion.
OpenGPT-X Teuken 7B Instruct Research
OpenGPT-X is a research project funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) and led by Fraunhofer, Forschungszentrum Jülich, TU Dresden, and DFKI. Teuken 7B Instruct Research v0.4 is an instruction-tuned 7B-parameter multilingual LLM pretrained on 4T tokens, focusing on covering all 24 official EU languages and reflecting European values.
External models, hosted by external providers
Warning
These OpenAI models are hosted on Microsoft Azure, and Chat AI only relays the contents of your messages to their servers. Microsoft adheres to GDPR and is contractually bound not to use this data for training or marketing purposes, but it may store messages for up to 30 days. We therefore recommend the open-weight models hosted by us to ensure the highest security and data privacy.
OpenAI GPT-5, 5.1, and 5.2 Series
OpenAI’s GPT-5, 5.1, and 5.2 series models achieve state-of-the-art performance across various benchmarks. The series consists of the following models along with their intended use cases (a sketch for choosing between them follows the list):
- OpenAI GPT-5/5.1/5.2 Chat: Non-reasoning model. Designed for advanced, natural, multimodal, and context-aware conversations.
- OpenAI GPT-5/5.1/5.2: Reasoning model. Designed for logic-heavy and multi-step tasks.
- OpenAI GPT-5 Mini: A lightweight variant of GPT-5 for cost-sensitive applications.
- OpenAI GPT-5 Nano: A highly optimized variant of GPT-5. Ideal for applications requiring low latency.
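As a rough illustration of this division of labor, the hypothetical helper below routes a task to one of the variants. The model identifiers are illustrative assumptions, not confirmed API names; consult the model list exposed by the service for the actual identifiers.

```python
# Hypothetical routing heuristic between the GPT-5 variants described above.
# "gpt-5", "gpt-5-chat", "gpt-5-mini", and "gpt-5-nano" are illustrative
# placeholders, not confirmed model identifiers.
def pick_gpt5_variant(task: str, latency_sensitive: bool = False,
                      cost_sensitive: bool = False) -> str:
    if latency_sensitive:
        return "gpt-5-nano"   # lowest latency
    if cost_sensitive:
        return "gpt-5-mini"   # lightweight and cheaper
    reasoning_markers = ("prove", "derive", "step by step", "debug", "plan")
    if any(marker in task.lower() for marker in reasoning_markers):
        return "gpt-5"        # reasoning model for logic-heavy, multi-step work
    return "gpt-5-chat"       # conversational default


print(pick_gpt5_variant("Derive the quadratic formula step by step."))
```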
OpenAI GPT-4o
GPT-4o (“o” for “omni”) is a general-purpose model developed by OpenAI. It improves on the older GPT-4 and also supports vision (image input).
OpenAI GPT-4o Mini
This was developed as a more cost-effective and faster alternative to GPT-4o.
OpenAI GPT-4.1
OpenAI’s GPT-4.1-class models improve on the older GPT-4 series. These models also outperform GPT-4o and GPT-4o Mini, especially in coding and instruction following. They have a large context window size of 1M tokens, with improved long-context comprehension, and an updated knowledge cutoff of June 2024.
OpenAI GPT-4.1 Mini
This was developed as a more cost-effective and faster alternative to GPT-4.1.
OpenAI o1 and o1 Mini
OpenAI’s o1-class models were developed to perform complex reasoning tasks. These models have now been superseded by the o3 series and are therefore no longer recommended.
OpenAI o3
Released in April 2025, OpenAI’s o3-class models were developed to perform complex reasoning tasks across the domains of coding, math, science, visual perception, and more. These models have an iterative thought process, and therefore take time to process internally before responding to the user. The thought process of o3 models is not shown to the user.
OpenAI o3 Mini
This was developed as a more cost-effective and faster alternative to o3.