Vespa sample applications

For operational sample applications, see examples/operations. See also PyVespa examples.

Getting started - Basic Sample Applications

Basic album-recommendation

The album-recommendation app is the introductory application to Vespa. It shows how to configure the schema for simple recommendation and search use cases.

Simple hybrid semantic search

The simple semantic search application demonstrates indexed vector search using HNSW, creating embedding vectors from a transformer language model inside Vespa, and hybrid text and semantic ranking. This app also demonstrates using native Vespa embedders.
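
As a rough illustration of the kind of schema such an application configures, here is a minimal pyvespa sketch, not the app's actual package: the field names, the 384-dimensional embedding, and the hybrid expression are assumptions, a recent pyvespa is assumed, and the in-Vespa embedder component is omitted.

```python
from vespa.package import ApplicationPackage, Field, FieldSet, HNSW, RankProfile

# Minimal application package with a text field and an embedding field
# indexed with HNSW for approximate nearest neighbor search.
app_package = ApplicationPackage(name="hybridsearch")

app_package.schema.add_fields(
    Field(name="id", type="string", indexing=["summary", "attribute"]),
    Field(name="text", type="string", indexing=["index", "summary"], index="enable-bm25"),
    Field(
        name="embedding",
        type="tensor<float>(x[384])",          # dimension is an assumption
        indexing=["attribute", "index"],
        ann=HNSW(distance_metric="angular"),   # HNSW index for ANN search
    ),
)
app_package.schema.add_field_set(FieldSet(name="default", fields=["text"]))

# Hybrid ranking: combine lexical BM25 with vector closeness.
app_package.schema.add_rank_profile(
    RankProfile(
        name="hybrid",
        inputs=[("query(q)", "tensor<float>(x[384])")],
        first_phase="bm25(text) + closeness(field, embedding)",
    )
)
```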

Retrieval Augmented Generation (RAG)

The retrieval-augmented-generation sample application demonstrates how to build an end-to-end RAG pipeline with API-based and local LLMs.

Indexing multiple vectors per field

The Vespa Multi-Vector Indexing with HNSW app demonstrates how to index multiple vectors per document field for semantic search over longer documents.
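
The core idea is a tensor field with one mapped (sparse) dimension per paragraph and one indexed (dense) dimension for the vector components, all stored in a single document field. A hedged pyvespa sketch, where names and dimensions are assumptions:

```python
from vespa.package import ApplicationPackage, Field, HNSW, RankProfile

app_package = ApplicationPackage(name="multivector")

# One document = many paragraphs; each paragraph gets its own vector.
# The mapped dimension "p" identifies the paragraph, "x" is the vector itself.
app_package.schema.add_fields(
    Field(
        name="paragraph_embeddings",
        type="tensor<float>(p{},x[384])",   # dimensions are assumptions
        indexing=["attribute", "index"],
        ann=HNSW(distance_metric="angular"),
    )
)

# closeness(field, ...) scores a document by its best-matching paragraph vector.
app_package.schema.add_rank_profile(
    RankProfile(
        name="semantic",
        inputs=[("query(q)", "tensor<float>(x[384])")],
        first_phase="closeness(field, paragraph_embeddings)",
    )
)
```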

Vespa streaming mode for naturally partitioned data

The vector-streaming-search app demonstrates how to use vector streaming search. See also the accompanying blog post.
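
In streaming mode, documents are stored per group (for example per user) and searched without global indexes, which fits naturally partitioned data. A hedged pyvespa sketch, assuming a recent pyvespa with a `mode` parameter on `Schema`; schema, field names, and the group name are assumptions:

```python
from vespa.package import ApplicationPackage, Document, Field, Schema

# Streaming mode: no global index is built; queries are evaluated over
# the documents in one group only.
schema = Schema(
    name="doc",
    mode="streaming",                       # assumption: pyvespa parameter name
    document=Document(fields=[
        Field(name="text", type="string", indexing=["summary", "index"]),
        Field(name="embedding", type="tensor<float>(x[384])", indexing=["summary", "attribute"]),
    ]),
)
app_package = ApplicationPackage(name="vectorstreaming", schema=[schema])

# At query time the search is restricted to a single group, e.g.:
# app.query(body={
#     "yql": "select * from doc where userQuery()",
#     "query": "holiday photos",
#     "streaming.groupname": "user-123",
# })
```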

ColBERT token-level embeddings

The colbert application demonstrates how to use the Vespa colbert-embedder for explainable semantic search with better accuracy than regular text embedding models.
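
ColBERT scores documents with a MaxSim operation over per-token query and document vectors. A rough pyvespa sketch of such a rank profile; tensor names, dimensions, and the float document representation are assumptions (the actual app uses the colbert-embedder and a compressed representation):

```python
from vespa.package import ApplicationPackage, Field, RankProfile

app_package = ApplicationPackage(name="colbertsketch")

# Per-token document embeddings: mapped dimension "dt" = document token,
# indexed dimension "x" = vector components per token.
app_package.schema.add_fields(
    Field(name="colbert_rep", type="tensor<float>(dt{},x[128])", indexing=["attribute"])
)

# MaxSim: for each query token take the max dot product over all document
# tokens, then sum the maxima over the query tokens.
app_package.schema.add_rank_profile(
    RankProfile(
        name="colbert",
        inputs=[("query(qt)", "tensor<float>(qt{},x[128])")],
        first_phase="sum(reduce(sum(query(qt) * attribute(colbert_rep), x), max, dt), qt)",
    )
)
```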

ColBERT token-level embeddings for long documents

The colbert-long application demonstrates how to use the Vespa colbert-embedder for explainable semantic search for longer documents.

SPLADE sparse learned weights for ranking

The splade application demonstrates how to use the Vespa splade-embedder for semantic search using sparse vector representations.
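
SPLADE represents queries and documents as sparse term-weight vectors, so ranking is essentially a dot product over a shared vocabulary dimension. A hedged pyvespa sketch; field and tensor names are assumptions:

```python
from vespa.package import ApplicationPackage, Field, RankProfile

app_package = ApplicationPackage(name="spladesketch")

# Sparse learned term weights: one mapped dimension over vocabulary tokens.
app_package.schema.add_fields(
    Field(name="splade_rep", type="tensor<float>(token{})", indexing=["attribute", "summary"])
)

# Score = dot product of the sparse query and document representations.
app_package.schema.add_rank_profile(
    RankProfile(
        name="splade",
        inputs=[("query(qt)", "tensor<float>(token{})")],
        first_phase="sum(query(qt) * attribute(splade_rep))",
    )
)
```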

Multilingual semantic search

The multilingual sample application demonstrates multilingual semantic search with multilingual text embedding models.

Customizing embeddings

The custom-embeddings application demonstrates customizing frozen document embeddings for downstream tasks.

More advanced sample applications

News search and recommendation tutorial

The news sample application is used in the Vespa news tutorial. It demonstrates basic search functionality.

It also demonstrates how to build a recommendation system where approximate nearest neighbor search in a shared user/item embedding space is used to retrieve recommended content for a user. The app also demonstrates using parent-child relationships in Vespa.
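
Retrieval in such a shared embedding space boils down to a nearestNeighbor query against the item embeddings, seeded with the user's embedding. A hedged pyvespa query sketch; the endpoint, schema, field, and rank profile names, and the toy embedding are assumptions:

```python
from vespa.application import Vespa

# Connect to a running application (endpoint is an assumption).
app = Vespa(url="http://localhost", port=8080)

user_embedding = [0.1, -0.3, 0.7]  # would come from the trained user model

# Retrieve the items whose embeddings are closest to the user embedding.
response = app.query(body={
    "yql": "select * from sources news where "
           "{targetHits: 10}nearestNeighbor(embedding, user_embedding)",
    "input.query(user_embedding)": user_embedding,
    "ranking": "recommendation",
    "hits": 10,
})
for hit in response.hits:
    print(hit["fields"].get("title"), hit["relevance"])
```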

Billion-scale Image Search

The billion-scale-image-search app demonstrates billion-scale image search using CLIP retrieval. It features separation of compute from storage, query-time vector similarity de-duping, PCA dimensionality reduction, and more.

State-of-the-art Text Ranking

The msmarco-ranking application demonstrates how to represent state-of-the-art text ranking using Transformer (BERT) models. It uses the MS MARCO passage ranking dataset and features bi-encoders, cross-encoders, and late-interaction models (ColBERT).

See also the simpler text-search app that demonstrates traditional text search using BM25/Vespa nativeRank.

Next generation E-Commerce Search

The use-case-shopping app implements an end-to-end e-commerce shopping engine, including a bundled frontend application. It uses the Amazon product data set and demonstrates building next-generation e-commerce search with Vespa. See also the commerce-product-ranking sample application for using learning-to-rank techniques (including XGBoost and LightGBM) to improve product search ranking.
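
Exported XGBoost or LightGBM models are referenced directly from a ranking expression, typically in a second ranking phase. A hedged pyvespa sketch, assuming pyvespa's `SecondPhaseRanking` helper; the model file name and feature fields are hypothetical:

```python
from vespa.package import RankProfile, SecondPhaseRanking

# Phased ranking: a cheap lexical first phase, then a GBDT model re-ranks
# the top hits. "product_ranker.json" stands for an exported XGBoost model
# placed in the application package (hypothetical file name).
product_ranking = RankProfile(
    name="product_ranking",
    first_phase="bm25(title) + bm25(description)",
    second_phase=SecondPhaseRanking(
        expression='xgboost("product_ranker.json")',
        rerank_count=100,
    ),
)
```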

Search as you type and query suggestions

The incremental-search application demonstrates search-as-you-type functionality, retrieving matching documents for each keystroke. It also demonstrates search suggestions (query auto-completion).
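
Search-as-you-type typically relies on prefix matching of the partially typed term. A hedged YQL sketch via pyvespa; the endpoint, schema, and field names are assumptions, and prefix matching has field-configuration requirements not shown here:

```python
from vespa.application import Vespa

app = Vespa(url="http://localhost", port=8080)  # assumption: local deployment

# Match documents whose "title" field starts with the typed prefix.
response = app.query(body={
    "yql": 'select * from sources * where title contains ({prefix: true}"incr")',
    "hits": 5,
})
print([hit["fields"].get("title") for hit in response.hits])
```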

Vespa as ML inference server (model-inference)

The model-inference application demonstrates using Vespa as a stateless ML model inference server, where Vespa takes care of distributing ML models to multiple serving containers, offering horizontal scaling and safe deployment. It also demonstrates model versioning and a feature processing pipeline.
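
Once deployed, stateless model evaluation is exposed over Vespa's model-evaluation REST API. A hedged sketch using plain requests; the endpoint URL is an assumption, and the model names and input encodings depend on the deployed ONNX models:

```python
import requests

VESPA_ENDPOINT = "http://localhost:8080"   # assumption: local single-node deployment

# List the ONNX models the stateless container has loaded.
models = requests.get(f"{VESPA_ENDPOINT}/model-evaluation/v1/").json()
print(models)

# Individual models are evaluated via
#   /model-evaluation/v1/<model-name>/eval
# with the model's input tensors supplied as request parameters.
```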


Note: Applications with pom.xml are Java/Maven projects and must be built before deployment. Refer to the Developer Guide for more information.

Contribute to the Vespa sample applications.


