mistral_tools.embeddings

A wrapper around the Mistral API for getting embeddings

Functions

get_n_tokens(input, model[, tokenizer])

Compute the number of tokens in the input

Classes

EmbeddingModel(*, api_key, model[, ...])

A wrapper around the Mistral API for getting embeddings

mistral_tools.embeddings.get_n_tokens(input, model, tokenizer=None)[source]

Compute the number of tokens in the input
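A minimal usage sketch of this module-level helper. The model name ("mistral-embed") and the integer return value are assumptions for illustration, not guarantees from this documentation:

   from mistral_tools.embeddings import get_n_tokens

   # "mistral-embed" is an assumed model name; the integer return
   # value is an assumption based on the summary above.
   n_tokens = get_n_tokens("An example sentence to embed.", model="mistral-embed")
   print(n_tokens)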

class mistral_tools.embeddings.EmbeddingModel(*, api_key, model, rate_limit: float | RateLimiter = 1.1, max_n_tokens: int = 16384)[source]

A wrapper around the Mistral API for getting embeddings
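A construction sketch using the keyword-only signature above; the API key and model name are placeholders:

   from mistral_tools.embeddings import EmbeddingModel

   embedder = EmbeddingModel(
       api_key="YOUR_MISTRAL_API_KEY",  # placeholder; read from the environment in real code
       model="mistral-embed",           # assumed model name
       rate_limit=1.1,                  # a float here; a RateLimiter instance is also accepted
       max_n_tokens=16384,              # default shown explicitly
   )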

get_n_tokens(input)[source]

Compute the number of tokens in the input
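For example, the method can be used to check whether an input fits under max_n_tokens before embedding it (assuming the embedder instance constructed above and an integer return value):

   text = "An example sentence to embed."
   if embedder.get_n_tokens(text) <= 16384:
       ...  # small enough to embed in a single request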

get_embeddings_batched(inputs)[source]

Get the embeddings for a batch of inputs
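A usage sketch, continuing with the embedder instance from above. The return shape is an assumption: one embedding vector per input, in the same order:

   texts = ["first document", "second document", "third document"]
   # Assumed return shape: one embedding vector per input, in order.
   embeddings = embedder.get_embeddings_batched(texts)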

get_embeddings_batched_filtered(inputs_filtered)[source]

Get the embeddings for a batch of inputs without the token-count checks.

Assumes all inputs are smaller than the maximum number of tokens (max_n_tokens). A pre-filtering sketch is shown below.
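A sketch of the kind of pre-filtering this unchecked variant expects, reusing the instance get_n_tokens method; the 16384 threshold mirrors the default max_n_tokens and is an assumption for illustration:

   # Drop inputs that exceed the token budget, then call the unchecked variant.
   inputs_filtered = [t for t in texts if embedder.get_n_tokens(t) <= 16384]
   embeddings = embedder.get_embeddings_batched_filtered(inputs_filtered)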

get_batch_embeddings(batch)[source]

Get the embeddings for a batch of inputs smaller than the maximum number of tokens (max_n_tokens).

Retries on rate-limit errors.
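A sketch for a single batch, assuming the batch as a whole fits within max_n_tokens and reusing the embedder instance from above:

   batch = ["short text a", "short text b"]
   # Rate-limit errors are retried inside the call (per the note above).
   embeddings = embedder.get_batch_embeddings(batch)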