Mixtral 8x22B

Mistral's MoE 8x22B Instruct v0.1 model with a sparse Mixture-of-Experts architecture, fine-tuned for instruction following.

Pricing

Serverless Pricing

Buy credits that can be used anywhere on Segmind

Input: $1.50 per million tokens, Output: $1.50 per million tokens
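
As a rough illustration of how per-million-token pricing adds up, here is a minimal cost estimate; the token counts in the example are made up, and only the rates come from the pricing above.

```python
# Estimate the cost of one request under per-million-token pricing.
# Prices match the serverless rates above; the token counts are hypothetical.
INPUT_PRICE_PER_M = 1.5   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.5  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.5f}")  # -> $0.00375
```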

Mixtral 8x22B

Mixtral 8x22B is the latest open model by Mistral AI. It sets a new standard for performance and efficiency within the AI community. It is a sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B, offering unparalleled cost efficiency for its size.
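
To give an intuition for why only a fraction of the parameters are active, a sparse MoE layer routes each token through a small subset of expert feed-forward networks chosen by a learned gate, so only the selected experts' weights participate in that token's forward pass. The toy NumPy sketch below assumes 8 experts with top-2 routing, in line with public descriptions of the Mixtral architecture; the dimensions and weights are placeholders, not the model's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_FF, N_EXPERTS, TOP_K = 16, 32, 8, 2

# One tiny feed-forward "expert" per slot; weights are random placeholders.
experts = [
    (rng.standard_normal((D_MODEL, D_FF)) * 0.02,
     rng.standard_normal((D_FF, D_MODEL)) * 0.02)
    for _ in range(N_EXPERTS)
]
gate_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02  # router weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token (row of x) to its top-k experts and mix their outputs."""
    logits = x @ gate_w                                      # (tokens, n_experts)
    out = np.zeros_like(x)
    for i, row in enumerate(logits):
        top = np.argsort(row)[-TOP_K:]                       # indices of the k chosen experts
        weights = np.exp(row[top]) / np.exp(row[top]).sum()  # softmax over the chosen experts
        for w, e in zip(weights, top):
            w1, w2 = experts[e]
            out[i] += w * (np.maximum(x[i] @ w1, 0.0) @ w2)  # simple ReLU FFN expert
    return out

tokens = rng.standard_normal((4, D_MODEL))
print(moe_layer(tokens).shape)  # (4, 16): same shape as the input, but only 2 of 8 experts ran per token
```

Because only TOP_K of N_EXPERTS experts run for each token, the per-token compute and "active" parameter count is roughly TOP_K / N_EXPERTS of the expert weights, which is the intuition behind 39B active parameters out of 141B total.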

Strengths

Mixtral 8x22B comes with the following strengths:

- It is fluent in English, French, Italian, German, and Spanish
- It has strong mathematics and coding capabilities
- It is natively capable of function calling; along with the constrained output mode implemented on la Plateforme, this enables application development and tech stack modernisation at scale (see the sketch after this list)
- Its 64K-token context window allows precise information recall from large documents
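
As a sketch of what calling the model with a function/tool definition could look like over a chat-completions-style HTTP API: the endpoint URL, field names, and the get_weather tool below are hypothetical placeholders for illustration, not a documented Segmind or Mistral interface, so consult the actual API reference before use.

```python
import json
import os
import requests  # third-party: pip install requests

# Hypothetical serverless endpoint and request schema (placeholders only).
URL = "https://api.example.com/v1/mixtral-8x22b-instruct/chat"
headers = {"Authorization": f"Bearer {os.environ.get('API_KEY', '')}"}

payload = {
    "messages": [{"role": "user", "content": "What's the weather in Paris right now?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool the calling application would implement
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

response = requests.post(URL, headers=headers, json=payload, timeout=60)
print(json.dumps(response.json(), indent=2))  # the model may answer with a get_weather tool call
```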

Truly open

We believe in the power of openness and broad distribution to promote innovation and collaboration in AI.

We are, therefore, releasing Mixtral 8x22B under Apache 2.0, the most permissive open-source licence, allowing anyone to use the model anywhere without restrictions.