Vespa Use Cases
Find a sample application in E-commerce: shopping and product catalog.
Vespa is widely used as a recommendation engine: recommending personalized articles, ads matching a user's profile and history, videos, and people. Implementations vary from vector dot products to neural nets, both using tensors to represent models and data.
Read more in the blog recommendation tutorial.
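At the simple end of that spectrum, recommendation can be sketched as a dot product between a user profile vector and item vectors in the same embedding space. The data and function names below are hypothetical, for illustration only:

```python
# Minimal sketch of dot-product recommendation (hypothetical data):
# user and items share one embedding space; the score is the dot
# product, and the best-scoring item ids are returned first.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def recommend(user_vector, items, top_k=2):
    # items: mapping of item id -> embedding vector
    scored = [(item_id, dot(user_vector, vec)) for item_id, vec in items.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [item_id for item_id, _ in scored[:top_k]]

user = [1.0, 0.0, 0.5]
catalog = {
    "article-a": [0.9, 0.1, 0.0],  # close to the user profile
    "article-b": [0.0, 1.0, 0.0],  # orthogonal interests
    "article-c": [0.5, 0.0, 1.0],
}
print(recommend(user, catalog))  # -> ['article-c', 'article-a']
```

In a real deployment the vectors would be tensor fields and the scoring a rank profile expression, but the computation is the same.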
Vespa supports text search and grouping (aggregation, faceting) - see the blog search tutorial. Implement multi-phase ranking to spend most resources on the most relevant hits. Text search is often enhanced with auto-complete using n-grams.
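The idea behind multi-phase ranking can be sketched as follows: a cheap first phase scores every matching document, and an expensive second phase reranks only the best first-phase hits. The scoring functions and documents here are hypothetical stand-ins:

```python
# Sketch of multi-phase ranking (hypothetical scoring functions):
# phase 1 is cheap and runs on all hits, phase 2 is expensive
# and runs only on the top hits from phase 1.

def first_phase(doc):
    # cheap proxy score, e.g. a precomputed quality signal
    return doc["quality"]

def second_phase(doc):
    # expensive model, stubbed here as a weighted combination
    return doc["quality"] * 0.7 + doc["freshness"] * 0.3

def rank(docs, rerank_count=2):
    # Phase 1: keep only the rerank_count best hits by the cheap score
    candidates = sorted(docs, key=first_phase, reverse=True)[:rerank_count]
    # Phase 2: rescore the survivors with the expensive function
    return sorted(candidates, key=second_phase, reverse=True)

docs = [
    {"id": 1, "quality": 0.9, "freshness": 0.1},
    {"id": 2, "quality": 0.8, "freshness": 0.9},
    {"id": 3, "quality": 0.2, "freshness": 1.0},
]
print([d["id"] for d in rank(docs)])  # -> [2, 1]
```

In Vespa terms, these two functions correspond to the first-phase and second-phase expressions of a rank profile; the point is that the expensive computation never touches the full candidate set.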
Rank profiles are just mathematical expressions, enabling almost any kind of computation over a large data set.
See the text search tutorial for text search using BM25.
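As a concrete illustration of such a ranking expression, the standard BM25 formula can be sketched directly in a few lines. The corpus below is hypothetical; the constants k1 and b are the usual BM25 defaults:

```python
import math

# Sketch of the standard BM25 formula over a hypothetical corpus:
#   idf(t) = ln(1 + (N - df + 0.5) / (df + 0.5))
#   score(d, q) = sum over query terms t of
#     idf(t) * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avg_len))

def bm25(query, docs, k1=1.2, b=0.75):
    n = len(docs)
    avg_len = sum(len(d) for d in docs) / n
    scores = []
    for doc in docs:
        score = 0.0
        for term in query:
            tf = doc.count(term)
            if tf == 0:
                continue
            df = sum(1 for d in docs if term in d)
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avg_len))
        scores.append(score)
    return scores

docs = [
    ["vespa", "text", "search"],
    ["vespa", "ranking"],
    ["cooking", "recipes"],
]
scores = bm25(["vespa", "search"], docs)
# the doc matching both query terms scores highest; a doc with no
# matching terms scores 0.0
```

In Vespa, bm25 is available as a built-in rank feature, so this computation is expressed declaratively in the rank profile rather than written by hand.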
A search engine normally implements indexing structures like inverted indexes to reduce query latency. Indexing is done up-front, so later matching and ranking is quick. The engine also normally keeps a copy of the original document for later retrieval and use in search summaries. Simplified, the engine keeps the original data plus auxiliary data structures to reduce query latency. This induces both extra work - indexing - compared to only storing the raw data, and extra static resource usage - disk and memory - to keep these structures.
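The trade-off described above can be sketched with a toy inverted index over hypothetical documents: building the index is extra up-front work and memory, but query evaluation becomes a cheap posting-list lookup:

```python
# Sketch of an inverted index (hypothetical documents): term -> set of
# doc ids. The originals are kept alongside the index for retrieval
# and search summaries.

def build_index(docs):
    # docs: mapping of doc id -> text
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def query(index, *terms):
    # intersect posting lists: docs containing all query terms
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {
    1: "Vespa supports text search",
    2: "text search with grouping",
    3: "streaming search",
}
index = build_index(docs)
print(sorted(query(index, "text", "search")))  # -> [1, 2]
```

A production engine adds compression, positional data, and incremental updates on top of this structure, but the latency win comes from the same lookup-instead-of-scan principle.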
Streaming search is an alternative to indexed search. It is useful when the document corpus is statically split into many subsets and each search goes to just one (or a few) of the small subsets. The canonical example is personal indexes, where a user only searches their own data.
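In that setting no index structures are needed at all, as a sketch with hypothetical per-user data shows: the query simply scans the selected user's small subset at query time, trading a little query-time work for zero indexing cost:

```python
# Sketch of streaming search over personal indexes (hypothetical data):
# documents are stored per user with no auxiliary index structures,
# and a query linearly scans only that user's subset.

def streaming_search(corpus, user, term):
    # corpus: mapping of user -> list of that user's documents.
    # Only the selected user's documents are visited.
    return [doc for doc in corpus.get(user, []) if term in doc.lower()]

mail = {
    "alice": ["Meeting notes", "Vespa streaming mode", "Receipt"],
    "bob": ["Streaming video links"],
}
print(streaming_search(mail, "alice", "streaming"))  # scans only alice's docs
```

Because each subset is small, the linear scan stays fast, while the cluster saves the indexing work and the disk and memory the index structures would otherwise occupy.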