Dimensional Reduction




Reasons to identify the most important features and reduce the dimensionality of the data:

  • reduce the amount of computing resources required
  • 2D & 3D intuition often fails in higher dimensions
  • distances tend to become relatively the 'same' as the number of dimensions increases (see the sketch after this list)
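
The last bullet, sometimes called distance concentration, is easy to observe. The following is a minimal sketch, assuming only NumPy (not mentioned above): it draws random points and measures how the relative gap between the nearest and farthest neighbor shrinks as the number of dimensions grows.

    # Minimal sketch of distance concentration (assumes NumPy; illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    for d in (2, 10, 100, 1000):
        points = rng.random((500, d))   # 500 random points in the d-dimensional unit cube
        query = rng.random(d)
        dists = np.linalg.norm(points - query, axis=1)
        # Relative contrast: how much farther the farthest point is than the nearest.
        contrast = (dists.max() - dists.min()) / dists.min()
        print(f"d={d:5d}  relative contrast={contrast:.3f}")

As d grows, the printed contrast drops toward zero, which is why nearest-neighbor distinctions blur in high dimensions.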



Dimensional Reduction techniques reduce the number of input variables in training data while capturing the "essence" of the data.



Some datasets contain so many variables that they become very hard to handle. This is especially common nowadays, since systems collect data at a very detailed level thanks to abundant resources, so a dataset may contain thousands of variables, many of them unnecessary. In such cases it is almost impossible to identify by hand which variables have the most impact on our prediction. Dimensional Reduction Algorithms are used in these situations; they rely on other algorithms, such as Random Forest and Decision Tree, to identify the most important variables (a sketch follows below). 10 Machine Learning Algorithms You need to Know | Sidath Asir @ Medium
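
As an illustration of the idea above, here is a minimal sketch that uses a Random Forest to rank variables by importance and keep only the top few. It assumes scikit-learn is installed; the bundled breast-cancer dataset and the parameter values are illustrative choices, not from the source.

    # Minimal sketch: rank features with a Random Forest (assumes scikit-learn).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)   # illustrative dataset
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Sort features from most to least important and keep the top five.
    ranking = forest.feature_importances_.argsort()[::-1]
    top_features = ranking[:5]
    print("Most important feature indices:", top_features)
    X_reduced = X[:, top_features]               # reduced training data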



Projection


Product Quantization (PQ)

Product quantization (PQ) is a vector compression technique that is very effective at compressing high-dimensional vectors for nearest neighbor search. The idea behind PQ is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. The technique is a vital part of many vector quantization methods and is widely used in approximate nearest neighbor (ANN) search. A minimal code sketch follows the key points below.

Here are some key points about product quantization:

  • PQ splits vectors into segments and quantizes each segment separately.
  • Each vector in the database is converted to a short code, known as a PQ code, which is an extremely memory-efficient representation for approximate nearest neighbor search.
  • PQ methods decompose the embedding manifold into a Cartesian product of M disjoint partitions and quantize each partition into K clusters.
  • PQ is highly scalable and can be used for large-scale searches.
  • PQ is used in many vector search libraries, including Faiss, which contains different index types, including one with product quantization (IVF-PQ).
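
To make the decomposition concrete, here is a minimal PQ sketch using NumPy and scikit-learn's KMeans; the sizes (M segments, K clusters per segment) and the synthetic data are illustrative assumptions, and a production system would normally use a library such as Faiss instead.

    # Minimal product quantization sketch (assumes NumPy and scikit-learn).
    import numpy as np
    from sklearn.cluster import KMeans

    M, K = 4, 16                                  # M disjoint partitions, K clusters each
    rng = np.random.default_rng(0)
    data = rng.random((2000, 128), dtype=np.float32)  # 2,000 synthetic 128-d vectors
    segments = np.split(data, M, axis=1)          # Cartesian product of 32-d subspaces

    # Train one codebook (k-means) per subspace.
    codebooks = [KMeans(n_clusters=K, n_init=1, random_state=0).fit(seg)
                 for seg in segments]

    # Encode: each 128-float vector becomes M small cluster indices (the PQ code).
    codes = np.stack([cb.predict(seg) for cb, seg in zip(codebooks, segments)],
                     axis=1).astype(np.uint8)

    # Approximate reconstruction from PQ codes, as used for distance estimation.
    recon = np.hstack([cb.cluster_centers_[codes[:, m]]
                       for m, cb in enumerate(codebooks)])
    print("compression ratio:", data.nbytes / codes.nbytes)  # 128 floats -> 4 bytes

Here each vector shrinks from 512 bytes (128 float32 values) to a 4-byte PQ code, which is where the extreme memory efficiency of PQ codes comes from.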