Trinity Large: An Open 400B MoE Model

Julien Simon · Intermediate · ✍️ Prompt Engineering · 3mo ago
I built a code review app powered by @ArceeAI's new Trinity Large model and deployed it to a @HuggingFace Space.

⭐️⭐️⭐️ More content on Substack at https://julsimon.substack.com ⭐️⭐️⭐️

In this video, I walk through a live demo — paste a GitHub PR URL, hit review, and watch it catch real issues: security flaws, logic bugs, missing edge cases. It's fast, free to try, and the 512K context window lets it process entire files without chunking.

Code Review App (HF Space): https://huggingface.co/spaces/juliensimon/trinity-code-reviewer

Trinity Large is a 400B-parameter Mixture-of-Experts model trained in 33 days for $20M — a fraction of what frontier labs spend. It has 256 experts but activates only 4 per token, so only 13B active parameters do the work. That translates to 2-3x faster inference than anything in its weight class, which you can feel in the demo when reviews come back in seconds.

In this video:
- 🔴 Live demo: code review app catching real issues on real PRs
- 🏗️ How I built the app — stack, prompting strategy, handling large diffs
- 🧠 Architecture deep dive: 256 experts, 4 active, extreme sparsity
- 📦 Why three checkpoints exist: Preview, Base, and TrueBase
- ⚡ Why MoE makes this practical to actually deploy and run at scale

Try the app yourself and let me know what it catches in your code.

Hugging Face model page: https://huggingface.co/arcee-ai/Trinity-Large-Preview
OpenRouter model page: https://openrouter.ai/arcee-ai/trinity-large-preview
Arcee AI blog post: https://www.arcee.ai/blog/trinity-large
Arcee AI technical report: https://github.com/arcee-ai/trinity-large-tech-report
VentureBeat article: https://venturebeat.com/technology/arcees-u-s-made-open-source-trinity-large-and-10t-checkpoint-offer-rare-look
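To make the sparsity claim concrete, here is a minimal, illustrative sketch of top-k Mixture-of-Experts routing — a router scores all 256 experts per token, but only the top 4 are actually run. This is a generic MoE routing pattern, not Trinity Large's actual implementation; the dimensions and router weights are made up for the example.

```python
# Illustrative top-k MoE routing sketch (NOT Trinity Large's real code):
# a router scores NUM_EXPERTS experts per token; only TOP_K are activated.
import numpy as np

NUM_EXPERTS = 256   # total experts, per the video description
TOP_K = 4           # experts activated per token
HIDDEN = 1024       # hidden size chosen arbitrarily for the demo

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1, HIDDEN))              # one token's hidden state
router_w = rng.normal(size=(HIDDEN, NUM_EXPERTS))  # router projection (illustrative)

logits = hidden @ router_w                 # (1, 256) expert scores
top_idx = np.argsort(logits[0])[-TOP_K:]   # indices of the 4 highest-scoring experts
gates = np.exp(logits[0, top_idx])
gates /= gates.sum()                       # softmax over the selected experts only

# Only these 4 experts execute; the other 252 are skipped entirely.
# That is why a 400B-parameter model does only ~13B parameters of work per token.
print(sorted(top_idx.tolist()), gates.round(3))
```

The key point the demo makes: compute per token scales with the 4 active experts, not the 256 total, which is what keeps inference cost closer to a 13B dense model than a 400B one.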
