Open-Sourced Apr 23, 2026

Tencent Hy3 Preview

Tencent Hunyuan's most powerful open-source model, also known as Hunyuan 3 (Hunyuan 3.0): a 295B-parameter MoE architecture with 21B active parameters and a 256K context window. Rebuilt from scratch in 90 days by the Hy team and built for real products like Yuanbao. Now available via Tencent Cloud and OpenRouter.

295B
Total Parameters
21B
Active Parameters
256K
Context Length
192
Expert Models
🔥

Why Hy3 Preview Matters

Three core capabilities are fully upgraded in Hy3 Preview: from complex reasoning to coding agents, every capability has been refined through real product scenarios. A leap forward for the Hunyuan family.

🧠

Hy3 Preview: STEM & Reasoning

Excels on challenging STEM benchmarks like FrontierScience-Olympiad and IMOAnswerBench. Achieved the highest domestic score on the Tsinghua Qiuzhen College Math PhD qualifying exam, demonstrating strong generalizable reasoning that rivals OpenAI and DeepSeek models.

📚

Hy3 Preview: Context Learning

Real-world tasks demand parsing messy, lengthy contexts and following complex rules. The Hunyuan team built CL-bench from real business scenarios to measure context learning. Solid gains in both context learning and instruction following — a key differentiator from Ling 2.6 Flash and other competing models.

🤖

Hy3 Preview: Code & Agent

Coding and agents saw the biggest gains. Competitive scores on mainstream coding agent benchmarks (SWE-bench Verified, Terminal-Bench 2.0) and search agent benchmarks (BrowseComp, WideSearch). Powers Tencent's OpenCode workflows and integrates seamlessly with tools like CodeBuddy.

⚙️

Hy3 Preview Architecture

The Hunyuan 3.0 Mixture-of-Experts architecture fuses fast and slow thinking, achieving optimal balance between parameter scale and performance.

Hy3 Preview Mixture-of-Experts

Hy3 Preview (Tencent Hy3) uses a MoE architecture with 295B total parameters, activating only 21B per forward pass. Top-8 out of 192 experts are activated, routing routine queries to fast pattern-matching experts and complex problems to deeper reasoning chains. Chief AI Scientist Shunyu Yao describes the team as "exploring non-homogeneous capabilities" — features shaped by specific products.

This is not a compromise — it is a deliberate ceiling. Beyond roughly one trillion parameters, multi-node deployment erodes latency and throughput faster than marginal capability gains justify. While competitors like Nemotron 3 Super and DeepSeek-V3 push toward larger scales, the Hunyuan team's 300B range is intentional for cost-performance.
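The top-8-of-192 routing described above can be sketched as a softmax gate that keeps only the highest-scoring experts and renormalizes their weights. This is an illustrative toy, not Hunyuan's actual router; the logits below are made up for determinism.

```python
import math

def top_k_route(logits, k=8):
    """Toy MoE gate: softmax over all expert logits, keep the k
    highest-scoring experts, renormalize their weights to sum to 1."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Indices of the k most probable experts
    top = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return {i: probs[i] / norm for i in top}  # expert index -> mixing weight

# 192 experts, deterministic toy logits for one token
weights = top_k_route([math.sin(i) for i in range(192)], k=8)
print(len(weights), round(sum(weights.values()), 6))  # → 8 1.0
```

Each token's output is then the weighted sum of just those 8 experts' outputs, which is how 295B total parameters collapse to 21B active per forward pass.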

TENCENT (00700.HK)

Tencent Hunyuan officially released and open-sourced the Hy3 Preview language model

Architecture: MoE
Total Parameters: 295B
Activated Parameters: 21B
MTP Layer Params: 3.8B
Layers: 80
Attention Heads: 64 (GQA, 8 KV)
Hidden Size: 4096
Intermediate Size: 13312
Context Length: 256K
Vocab Size: 120832
Experts: 192, top-8
Precision: BF16
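A back-of-envelope check on the spec table above: if S billion parameters are shared (attention, embeddings) and E billion sit in the experts, then S + E = 295 and S + (8/192)·E = 21. Solving this toy system gives a rough split; it ignores the MTP layer and is our assumption, not an official breakdown.

```python
# Solve S + E = total and S + (k/n) * E = active for the shared (S)
# and expert (E) parameter counts, in billions of parameters.
total, active, k, n = 295.0, 21.0, 8, 192
expert = (total - active) / (1 - k / n)  # E = 274 / (184/192)
shared = total - expert
print(round(expert, 1), round(shared, 1))  # → 285.9 9.1
```

So under this simplification, nearly all parameters live in the expert layers, and activating 8 of 192 experts is what brings the per-token compute down to roughly 21B.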
📈

Hy3 Preview Benchmark Results

Leading performance across multiple benchmarks, outpacing DeepSeek-V3 and GLM-4.5 in math, coding, and multilingual tasks.

Math — MATH (4-shot)

76.28
Pre-trained Model, Best in class
Hy3 Preview vs K2 71.20

Math — GSM8K (4-shot)

95.37
Pre-trained Model, Best in class
Hy3 Preview vs K2 93.46

Code — LiveCodeBench-v6

34.86
Pre-trained Model, Best in class
Hy3 Preview vs K2 30.86

Coding Agent — SWE-bench Verified

74.4
Instruct Model
Hy3 Preview, Resolved %

Terminal Agent — Terminal-Bench 2.0

54.4
Instruct Model
Hy3 Preview, Score

Multilingual — MMMLU (5-shot)

80.15
Pre-trained Model, Best in class
Hy3 Preview vs DS-V3 79.54

Hy3 Preview Pre-trained Model Comparison

Benchmark | Kimi-K2 (32B / 1043B) | DeepSeek-V3 (37B / 671B) | GLM-4.5 (32B / 355B) | Hy3 Preview (21B / 295B)
Columns show active / total parameters.
MMLU (5-shot) 88.24 87.68 87.73 87.42
MMLU-Pro (5-shot) 65.98 63.98 63.67 65.76
SuperGPQA (5-shot) 51.10 46.17 49.64 51.60
MATH (4-shot) 71.20 59.37 61.00 76.28
GSM8K (4-shot) 93.46 88.15 90.06 95.37
LiveCodeBench-v6 30.86 29.31 27.43 34.86
MMMLU (5-shot) 77.63 79.54 79.26 80.15
INCLUDE (5-shot) 75.66 77.86 76.27 78.64

Hy3 Preview: 90 Days, From Scratch

In February 2026, Tencent tore down the Hunyuan infrastructure and rebuilt from scratch. Three core principles drove the birth of Hy3 Preview (Hunyuan 3).

February 2026

Infrastructure Rebuild

Tore down pre-training and RL infrastructure and rebuilt it from scratch around three principles: capability systematization, evaluation authenticity, and cost-performance.

6 Weeks Later

Training Begins

New infrastructure ready. Training began on the rebuilt framework. The Hy model team merged with Yuanbao, CodeBuddy, WorkBuddy, and other product teams into a single development loop.

10 Weeks Later

Model Goes Live

Hy3 Preview went live, integrated into Yuanbao, CodeBuddy, WorkBuddy and other products. Real user feedback began driving the optimization loop.

Apr 23, 2026

Open-Source Release

Model weights open-sourced on Hugging Face, ModelScope, and GitCode. API hosted on Tencent Cloud and available via OpenRouter, priced at roughly one-tenth of OpenAI GPT-4-class rates.

🌐

Hy3 Preview Product Ecosystem

Not just built for products — built with them. Tencent Hunyuan's live product metrics directly shape Hy3 Preview training priorities.

💬

Yuanbao

AI chatbot powered by Hy3 Preview, with conversations that feel like talking to a real friend

💻

CodeBuddy

AI coding assistant for code generation and debugging

💼

WorkBuddy

AI productivity assistant for documents and scheduling

🎨

ima

AI creative tool for multimodal content generation

🌐

QQ Browser

Smart browser with AI-powered search and summarization

-54%
Latency Reduction
-47%
E2E Duration
99.99%
Success Rate
495
Max Agent Steps
🚀

Deploy Hy3 Preview

Deploy Hy3 Preview with vLLM or SGLang, OpenAI-compatible API, ready out of the box. Also available on Tencent Cloud and OpenRouter.

quickstart.py
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="hy3-preview",
    messages=[
        {"role": "user", "content": "Hello! Can you briefly introduce yourself?"},
    ],
    temperature=0.9,
    top_p=1.0,
    # reasoning_effort: "no_think" (default) | "low" | "high" (deep CoT)
    extra_body={"chat_template_kwargs": {"reasoning_effort": "no_think"}},
)
print(response.choices[0].message.content)
vLLM Server
# Build from source
uv venv --python 3.12 --seed --managed-python
source .venv/bin/activate
git clone https://github.com/vllm-project/vllm.git
cd vllm && uv pip install -e . --torch-backend=auto

# Launch with MTP
vllm serve tencent/Hy3-preview \
  --tensor-parallel-size 8 \
  --speculative-config '{"method": "mtp", "num_speculative_tokens": 1}' \
  --tool-call-parser hy_v3 \
  --reasoning-parser hy_v3 \
  --served-model-name hy3-preview
SGLang Server
# Build from source
git clone https://github.com/sgl-project/sglang
cd sglang
pip3 install pip --upgrade
pip3 install "transformers>=5.6.0"
pip3 install -e "python"

# Launch with MTP
python3 -m sglang.launch_server \
  --model tencent/Hy3-preview \
  --tp 8 \
  --tool-call-parser hunyuan \
  --reasoning-parser hunyuan \
  --speculative-algorithm EAGLE \
  --served-model-name hy3-preview
💰

Hy3 Preview API Pricing

Roughly one-tenth of OpenAI GPT-4-class rates. Exceptional cost efficiency vs DeepSeek, Nemotron 3 Super, and Ling 2.6 Flash.

📥

Hy3 Preview Input (0-16K)

¥1.2

per million tokens

📤

Output

¥4.0

per million tokens

🚀

Hy3 Preview vs GPT-4

~1/10

of the cost
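Using the list prices above (¥1.2 per million input tokens in the 0-16K tier, ¥4.0 per million output tokens), a quick cost estimate is straightforward. This sketch assumes the 0-16K input tier throughout and does not model any pricing above 16K context, which the table does not specify.

```python
def estimate_cost_cny(input_tokens, output_tokens,
                      input_price=1.2, output_price=4.0):
    """Estimate API cost in CNY; prices are per million tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: a request with 100K input tokens and 20K output tokens
print(round(estimate_cost_cny(100_000, 20_000), 2))  # → 0.2
```

At these rates, a million-token workload with a typical 5:1 input/output split costs well under ¥2.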

Hy3 Preview FAQ

What is Hy3 Preview?
Hy3 Preview (also called Hunyuan 3 or Hunyuan 3.0) is the latest open-source large language model developed by the Tencent Hunyuan (Hy) team. It uses a Mixture-of-Experts (MoE) architecture with 295B total parameters, activating 21B per inference pass, and supports up to 256K context window. It is the first model trained on Hunyuan's rebuilt infrastructure and the strongest Hunyuan model to date.
Why not build a bigger model?
The 300B parameter range is not a compromise — it is a deliberate design ceiling. Tencent found that beyond roughly one trillion parameters, multi-node deployment erodes latency and throughput faster than marginal capability gains justify. Hy3 Preview's MoE architecture achieves performance comparable to models like Nemotron 3 Super and DeepSeek-V3 while activating only 21B parameters.
How does Hy3 Preview compare to competitors?
On pre-trained benchmarks, Hy3 Preview achieves best-in-class scores on math (MATH 76.28, GSM8K 95.37), code (LiveCodeBench-v6 34.86), and multilingual tasks (MMMLU 80.15), with less than a third of Kimi-K2's total parameters. It outperforms DeepSeek-V3 on most of these metrics. On instruct-model evaluations, it scores 74.4 on SWE-bench Verified and 54.4 on Terminal-Bench 2.0, competitive with OpenAI's best models.
How do I deploy Hy3 Preview?
Hy3 Preview can be deployed via vLLM or SGLang, recommended on 8x H20-3e or GPUs with larger memory. It's also available as a managed service on Tencent Cloud and through OpenRouter for instant access. Once deployed, it provides an OpenAI-compatible API with three reasoning modes: no_think (direct response), low, and high (deep chain-of-thought). Both full fine-tuning and LoRA fine-tuning are supported.
What are the Reasoning Effort modes?
Hy3 Preview offers three reasoning modes: no_think is the default direct response mode for simple conversations; low is a light reasoning mode; high enables deep chain-of-thought reasoning for math, coding, and complex tasks. Switch freely via the reasoning_effort API parameter — a similar approach to OpenAI's reasoning controls.
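Following the quickstart earlier on this page, switching modes amounts to changing one field inside extra_body. The helper below is a hypothetical convenience wrapper we wrote for illustration, not part of any SDK; it only assembles the kwargs for client.chat.completions.create() and makes no network call.

```python
VALID_EFFORTS = {"no_think", "low", "high"}

def build_request_kwargs(prompt, reasoning_effort="no_think"):
    """Assemble kwargs for client.chat.completions.create(),
    validating the reasoning_effort mode first."""
    if reasoning_effort not in VALID_EFFORTS:
        raise ValueError(f"unknown reasoning_effort: {reasoning_effort}")
    return {
        "model": "hy3-preview",
        "messages": [{"role": "user", "content": prompt}],
        "extra_body": {
            "chat_template_kwargs": {"reasoning_effort": reasoning_effort}
        },
    }

kwargs = build_request_kwargs("Prove that sqrt(2) is irrational.", "high")
print(kwargs["extra_body"]["chat_template_kwargs"]["reasoning_effort"])  # → high
```

Use no_think for chat-style traffic and reserve high for math, coding, and multi-step agent tasks, where deep chain-of-thought pays for its extra latency.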
What is the open-source license?
Hy3 Preview is released under the Tencent Hy Community License Agreement. Model weights are available on Hugging Face, ModelScope, and GitCode. API access via Tencent Cloud and OpenRouter is priced at roughly one-tenth of OpenAI GPT-4-class rates.
What are the known limitations?
Tencent acknowledges known limitations in the current Hy3 Preview version: weak error recovery during tool calls and sensitivity to inference hyperparameters. Chief AI Scientist Shunyu Yao says the team chose to open-source early to gather real-world feedback before the official Hunyuan 3 release. They are simultaneously scaling up pre-training and RL, enhancing functionality, and working more closely with product teams including Yuanbao and OpenCode.

Start Building with Hy3 Preview

Open-source, cost-efficient, built for products. Whether you're building chatbots like Yuanbao, coding assistants via OpenCode, or complex agent workflows — Hy3 Preview by Tencent Hunyuan is your starting point.