
Muves: Multimodal & Multilingual Vector Search with Hardware Acceleration

Formal Metadata

Title
Muves: Multimodal & Multilingual Vector Search with Hardware Acceleration
Title of Series
Number of Parts
56
Author
Contributors
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Bringing multimodal experiences into the search journey has attracted strong interest lately: searching images with text, looking inside an audio file, or combining audio with the RGB frames of a video stream. Today, vector search algorithms (such as FAISS, HNSW, and BuddyPQ) and vector databases (Vespa, Weaviate, Milvus, and others) make these experiences a reality. But what if you, as a user, would like to stay with the familiar Elasticsearch or OpenSearch and still leverage vector search at scale? In this talk we will take a hardware-acceleration route to building a vector search experience over products and show how you can blend the worlds of neural search and symbolic filters. We will discuss use cases where adding multimodal and multilingual vector search improves recall, and we will compare results from Elasticsearch/OpenSearch with and without the vector search component using tools like Quepid. We will also investigate different fine-tuning approaches and compare their impact on quality metrics. Finally, we will demonstrate our findings using Muves, our end-to-end search solution, which combines traditional symbolic search with multimodal and multilingual vector search and includes an integrated fine-tuner for easy domain adaptation of pre-trained vector models.
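
The neural-plus-symbolic blend described in the abstract maps naturally onto OpenSearch's k-NN query, which can carry a symbolic filter next to the query vector. The sketch below is illustrative rather than Muves itself: the index name (`products`), the field names (`product_embedding`, `category`), the host settings, and the choice of a multilingual CLIP text encoder are all assumptions.

```python
# Sketch: multilingual text query against product image embeddings in OpenSearch,
# blending a k-NN vector query with a symbolic term filter.
from opensearchpy import OpenSearch
from sentence_transformers import SentenceTransformer

# Multilingual text encoder aligned with CLIP's image embedding space, so a
# query in one of ~50 languages can match image embeddings indexed offline.
model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def search(query_text: str, category: str, k: int = 10):
    query_vector = model.encode(query_text).tolist()
    body = {
        "size": k,
        "query": {
            "knn": {
                "product_embedding": {  # knn_vector field (assumed name)
                    "vector": query_vector,
                    "k": k,
                    # Symbolic filter applied during the ANN search; supported
                    # by the lucene and faiss engines in recent OpenSearch.
                    "filter": {"term": {"category": category}},
                }
            }
        },
    }
    return client.search(index="products", body=body)

# German text query over an English catalog, restricted to one category.
for hit in search("rote Laufschuhe", category="shoes")["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```

A similar request shape exists in Elasticsearch via its top-level `knn` search option with a `filter` clause, though the exact syntax differs between the two engines.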
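One way to read "domain adaptation of pre-trained vector models" is contrastive fine-tuning of a bi-encoder on (query, matching product) pairs with in-batch negatives. The sketch below shows that generic approach with sentence-transformers; the training pairs, hyperparameters, and output path are illustrative, and the fine-tuner integrated into Muves may work differently.

```python
# Sketch: domain adaptation of a pre-trained vector model on (query, product)
# pairs using a contrastive objective with in-batch negatives.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")

# Positive pairs, e.g. mined from search logs or catalog data (made-up examples).
train_examples = [
    InputExample(texts=["running shoes", "Road Runner sneaker, red, size 42"]),
    InputExample(texts=["waterproof jacket", "Storm Shell rain jacket, unisex"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# MultipleNegativesRankingLoss treats every other product in the batch as a
# negative for a given query, so no explicit negative labels are needed.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
model.save("muves-finetuned-model")  # output path is illustrative
```

Comparing quality metrics such as recall before and after this kind of fine-tuning, for example with Quepid judgments, is one way to quantify the impact the abstract refers to.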