Vespa Cloud provides a set of machine-learned models that you can use
in your applications. These models are frozen and will always be available on Vespa Cloud.
You can also bring your own embedding model by deploying it in the Vespa application package.
To use a model provided by Vespa Cloud, set the model-id
attribute where you specify a model config. For example, when configuring the
Huggingface embedder
provided by Vespa, you can write:
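A minimal sketch of the component configuration in services.xml, using the e5-small-v2 model id (the component id e5 is an arbitrary choice):

```xml
<component id="e5" type="hugging-face-embedder">
    <transformer-model model-id="e5-small-v2"/>
</component>
```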
With this, your application supports
text embedding
inference for both queries and documents. Nodes provisioned with GPU acceleration will automatically
use the GPU for embedding inference.
Vespa Cloud Embedding Models
Models on the Vespa model hub are selected open-source embedding models with
strong performance; see the Massive Text Embedding Benchmark (MTEB) Leaderboard for details.
These embedding models are useful for retrieval (semantic search), re-ranking, clustering, classification, and more.
Huggingface Embedder
These models are available for the Huggingface Embedder (type="hugging-face-embedder").
All of these models support mapping from string or array<string> inputs to tensor representations.
The output tensor cell-precision can be float or bfloat16.
nomic-ai-modernbert
Trained from ModernBERT-base on the Nomic Embed datasets, bringing the new advances of ModernBERT to embeddings.
Model id: nomic-ai-modernbert
Tensor definition: tensor<float>(x[768]) (supports Matryoshka, so x[256] is also possible)
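To illustrate both the cell-precision options above and Matryoshka truncation, a schema field could be declared as follows (the field names and the embedder id nomic are hypothetical):

```
field embedding type tensor<bfloat16>(x[256]) {
    indexing: input text | embed nomic | attribute
}
```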
The E5 model family uses prefix keywords in the input text to differentiate query-side and document-side embedding.
The query text should be prefixed with "query: ".
In the example below, the original user query is how to format e5 queries:
{"yql":"select doc_id from doc where ({targetHits:10}nearestNeighbor(embeddings,e))","input.query(e)":"embed(e5, \"query: how to format e5 queries\")"}
The same technique must also be applied for document-side embedding inference:
the input text should be prefixed with "passage: ", as in the schema sketch below.
The sketch reads a chunks field of type array<string>,
and prefixes each item with "passage: ", followed by the concatenation
of the title and the item chunk (the execution value _).
See execution value example.
Bert Embedder
These models are available for the Bert Embedder (type="bert-embedder").
These are embedder implementations that tokenize text, mapping strings to vocabulary token identifiers.
They are most useful for creating the tensor inputs to re-ranking models that take both the query and document token identifiers as input.
Find examples in the
sample applications.
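A configuration sketch (the component id and the model path, assumed to point at an ONNX model bundled in the application package, are hypothetical; bert-base-uncased, described below, provides the vocabulary):

```xml
<component id="bert" type="bert-embedder">
    <transformer-model path="models/my-model.onnx"/>
    <tokenizer-vocab model-id="bert-base-uncased"/>
</component>
```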
bert-base-uncased
A vocabulary text file (vocab.txt) in the format expected by
WordPiece:
one text token per line.
A tokenizer.json configuration file in the format expected by the
HF tokenizer.
This tokenizer configuration can be used with e5-base-v2, e5-small-v2 and e5-large-v2.
You can also specify both a model-id, used when the application is deployed on Vespa Cloud, and a url or path, used when it is deployed self-hosted.
This can be useful, for example, to create an application package which uses models from Vespa Cloud
for production and a scaled-down or dummy model for self-hosted development.
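For example (the path placeholder would point at a scaled-down model bundled in the application package):

```xml
<transformer-model model-id="e5-small-v2" path="models/dummy-model.onnx"/>
```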
Using Vespa Cloud models with any config
Specifying a model-id can be done for any
config field of type model,
whether the config is from Vespa or defined by you.
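As a sketch, assuming you have declared a field of type model named myModel in your own config definition, you could point it at a Vespa Cloud model from services.xml (the config name and field name are hypothetical):

```xml
<config name="example.my-component">
    <myModel model-id="e5-small-v2"/>
</config>
```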