Inverted File Indexes (IVF)
Revision as of 12:51, 17 August 2023



Inverted File Indexes (IVF) are used in AI for similarity search and nearest neighbor search. IVF is one of the simplest and most widely used indexing strategies: it partitions the database and assigns every vector to the partition with the closest centroid. At query time, the query vector is mapped to a small subset of partitions, and only that subset is searched for approximate nearest neighbors. By reducing the search scope in this way, IVF can significantly improve both query speed and throughput.
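The partition-then-probe idea can be sketched in plain NumPy (not Faiss); the centroid count and probe count below are illustrative toy values, and real IVF implementations train the centroids with k-means rather than picking them at random:

```python
import numpy as np

rng = np.random.default_rng(0)
d, nlist, nprobe = 16, 8, 2
xb = rng.standard_normal((2000, d)).astype(np.float32)

# "Train": pick nlist centroids (a real index would run k-means here).
centroids = xb[rng.choice(len(xb), nlist, replace=False)]

# Assign every database vector to the inverted list of its nearest centroid.
assign = np.linalg.norm(xb[:, None] - centroids[None], axis=2).argmin(1)
lists = {c: np.flatnonzero(assign == c) for c in range(nlist)}

def ivf_search(q, k=5):
    """Probe only the nprobe partitions whose centroids are closest to q."""
    order = np.linalg.norm(centroids - q, axis=1).argsort()[:nprobe]
    cand = np.concatenate([lists[c] for c in order])
    dist = np.linalg.norm(xb[cand] - q, axis=1)
    return cand[dist.argsort()[:k]]

ids = ivf_search(xb[0])   # query with a database vector; id 0 ranks first
```

Only the vectors in the probed partitions are compared against the query, which is where the speedup over exhaustive search comes from.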

== Inverted File Product Quantization (IVFPQ) ==

IVF is implemented alongside Product Quantization (PQ) for a fast and efficient approximate nearest neighbor search; the combination is commonly referred to as IVFPQ, or IVFADC in research papers. An inverted file is an index structure that maps contents to their locations. IVF reduces the overall search scope by arranging the dataset into partitions: every partition is associated with a centroid, and every vector is assigned to the partition of its nearest centroid. This partitioning corresponds to a Voronoi diagram (usually drawn in two dimensions for illustration), in which each cell contains the vectors closest to its centroid. The popularity of IVFPQ can be attributed to its ability to significantly reduce retrieval cost, since a query only needs to access the small fraction of the dataset selected by IVF.
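The PQ half of IVFPQ compresses each vector into a few bytes by quantizing sub-vectors independently. A minimal NumPy sketch (not the Faiss implementation; the dimensions, sub-quantizer count, and codebook size are toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 8, 4, 16            # dimension, sub-quantizers, centroids per sub-space
dsub = d // m
data = rng.standard_normal((1000, d)).astype(np.float32)

# Train one codebook per sub-space with a few k-means refinement steps.
codebooks = []
for j in range(m):
    sub = data[:, j * dsub:(j + 1) * dsub]
    centroids = sub[rng.choice(len(sub), k, replace=False)]
    for _ in range(10):
        near = np.linalg.norm(sub[:, None] - centroids[None], axis=2).argmin(1)
        for c in range(k):
            if np.any(near == c):
                centroids[c] = sub[near == c].mean(0)
    codebooks.append(centroids)

def pq_encode(x):
    """Encode each sub-vector as the index of its nearest codebook centroid."""
    return np.array([
        np.linalg.norm(codebooks[j] - x[j * dsub:(j + 1) * dsub], axis=1).argmin()
        for j in range(m)
    ], dtype=np.uint8)

def pq_decode(code):
    """Reconstruct an approximate vector from its m-byte PQ code."""
    return np.concatenate([codebooks[j][code[j]] for j in range(m)])

x = data[0]
code = pq_encode(x)           # d floats compressed to m bytes
approx = pq_decode(code)      # lossy reconstruction
```

In IVFPQ, distances are computed against these compact codes inside the probed partitions, which keeps both memory use and per-candidate cost low.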

IVFPQ is often used in combination with other techniques to further improve its performance. For example, using Hierarchical Navigable Small World (HNSW) as the coarse quantizer for IVFPQ can deliver an extremely memory-efficient and fast search of good accuracy for billion-scale similarity search. In Faiss, IVFPQ is implemented by `IndexIVFPQ`, and an `IndexHNSWFlat` can serve as its coarse quantizer.

== Bi-Phase Enhanced IVFPQ ==

Bi-Phase Enhanced IVFPQ is a recently proposed framework that further improves the performance of IVFPQ for time-efficient ad-hoc retrieval in similarity search. It combines two types of features, latent topics and explicit features, to enhance the retrieval process.

== Faiss library ==

The `IndexIVF` class in the Faiss library is an implementation of the IVF index. In the inverted file, the quantizer (an `Index` instance) provides a quantization index for each vector to be added. The quantization index maps to a list (also known as an inverted list or posting list) in which the id of the vector is stored. The inverted list object is required only after training; if none is set externally, an `ArrayInvertedLists` instance is used automatically.

Faiss is a C++ library developed by Meta AI (formerly Facebook AI Research) for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that may not fit in RAM, as well as supporting code for evaluation and parameter tuning. Faiss is written in C++ with complete wrappers for Python, and some of the most useful algorithms are implemented on the GPU.

Faiss provides several similarity search methods that span a wide spectrum of usage trade-offs. It is optimized for memory usage and speed and offers a state-of-the-art GPU implementation for the most relevant indexing methods. Faiss implements a dozen index types that are often compositions of other indices. The optional GPU version has been optimized for billion-scale datasets. Faiss is commonly used in various AI applications, including [[Video/Image|image and video search]], [[Natural Language Processing (NLP)]], and [[Recommendation|recommendation systems]]. It is also used in research and academic settings for similarity search and [[clustering]] of large datasets.