Mixtral 8x22B is the latest open model by Mistral AI. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
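The sparse Mixture-of-Experts idea — only a couple of experts run per token, so active parameters are far fewer than total parameters — can be sketched as follows. This is a minimal illustration with tiny linear "experts" and a top-2 router, not Mixtral's actual architecture; all dimensions and names here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4            # toy hidden size
n_experts = 8    # Mixtral 8x22B also routes among 8 experts

# Each toy "expert" is a single linear map; in Mixtral each is a full FFN block.
expert_weights = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in expert_weights]
gate_w = rng.standard_normal((d, n_experts))

def top2_moe(x):
    scores = x @ gate_w                       # router score per expert
    top2 = np.argsort(scores)[-2:]            # pick the 2 highest-scoring experts
    w = np.exp(scores[top2] - scores[top2].max())
    w /= w.sum()                              # softmax over the selected pair
    # Only 2 of the 8 experts execute for this token, which is why
    # active parameters (39B) are much smaller than total parameters (141B).
    return sum(wi * experts[i](x) for wi, i in zip(w, top2))

x = rng.standard_normal(d)
y = top2_moe(x)
```

Per token, only 2/8 of the expert weights participate in the forward pass, which is the source of the cost efficiency described above.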
Mixtral 8x22B comes with the following strengths:
- It is fluent in English, French, Italian, German, and Spanish.
- It has strong mathematics and coding capabilities.
- It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech-stack modernisation at scale.
- Its 64K-token context window allows precise information recall from large documents.
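Native function calling means the model can emit a structured call to a tool you declare, rather than free-form text. The sketch below shows the general shape of a JSON-schema tool definition and of parsing a model's structured call; the tool name and fields are illustrative, not a specific Segmind or Mistral API contract.

```python
import json

# Hypothetical tool definition in the JSON-schema style commonly used
# by function-calling APIs; "get_weather" and its fields are made up.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A model with native function calling would respond with a structured
# call like this instead of prose; arguments arrive as a JSON string.
model_call = {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}

# The application parses the arguments and dispatches to real code.
args = json.loads(model_call["arguments"])
```

Constrained output mode serves a similar purpose: it guarantees the model's reply conforms to a schema, so the parsing step above cannot fail on malformed JSON.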
We believe in the power of openness and broad distribution to promote innovation and collaboration in AI.
We are, therefore, releasing Mixtral 8x22B under Apache 2.0, the most permissive open-source licence, allowing anyone to use the model anywhere without restrictions.