Arioron Introduces Vex-Amber-Fable-2.0: A New Standard in Performance and Efficiency
Vex-Amber-Fable-2.0, a 2-billion-parameter causal language model developed by Arioron, has set a new record for parameter efficiency in the sub-3B category. With 65.37% accuracy on SWE-bench (Verified), the model delivers approximately 85% of the performance of frontier-class systems such as GPT-5.1 and Claude Sonnet 4.5, despite being nearly 50 times smaller. Engineered with float32 precision and an 8k context window, Vex-Amber-Fable-2.0 matches the coding proficiency of 8B to 30B parameter models (60.98% on HumanEval) and demonstrates strong generalization (44.19% on LiveCodeBench). This result raises the performance ceiling for Small Language Models (SLMs), showing that architectural optimization and high-fidelity training can rival massive foundation models on specialized software engineering and reasoning tasks.