{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=artificial, intelligence, machine, learning, models, algorithms, data, singularity, moonshot, Tensorflow, Google, Nvidia, M...
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
[https://www.youtube.com/results?search_query=Local+Features+machine+learning+artificial+intelligence YouTube search...]
[https://www.google.com/search?q=Local+Features+machine+learning+artificial+intelligence ...Google search]
  
 
* [[Deep Features]]
 
* [[Attention]] Mechanism  ...[[Transformer]] ...[[Generative Pre-trained Transformer (GPT)]] ... [[Generative Adversarial Network (GAN)|GAN]] ... [[Bidirectional Encoder Representations from Transformers (BERT)|BERT]]
* [[Vision]]
* [https://openreview.net/forum?id=SkfMWhAqYQ Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet | Wieland Brendel, Matthias Bethge]
* [https://www.semanticscholar.org/paper/Image-Retrieval-with-Deep-Local-Features-and-Noh-Araujo/0e04af52fc230986064994d47207074fe1bccaf2 Image Retrieval with Deep Local Features and Attention-based Keypoints | H. Noh, A. Araujo, and B. Han] DELF (DEep Local Feature)
  
<youtube>ek9jwRA2Jio</youtube>
Local features are patterns or distinct structures found in an image, such as points, edges, or small image patches. They are usually associated with an image patch that differs from its immediate surroundings in texture, color, or intensity. [https://www.mathworks.com/help/vision/ug/local-feature-detection-and-extraction.html Local Feature Detection and Extraction | MathWorks]
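As a rough illustration of the idea, the sketch below detects local features on a synthetic image using OpenCV's ORB detector. ORB is an assumption here, chosen for convenience; the MathWorks article above surveys several alternative detectors and descriptors.

<syntaxhighlight lang="python">
# Minimal sketch of local feature detection, assuming OpenCV (cv2) and
# NumPy are installed. The synthetic image is only for illustration.
import cv2
import numpy as np

# Build a synthetic image with strong local structure (corners and edges).
img = np.zeros((200, 200), dtype=np.uint8)
img[50:150, 50:150] = 255                 # a bright square patch
cv2.circle(img, (100, 100), 30, 120, -1)  # a mid-gray disc inside it

# ORB responds to patches that differ from their surroundings in
# intensity; Harris corners or FAST would also work here.
orb = cv2.ORB_create(nfeatures=50)
keypoints, descriptors = orb.detectAndCompute(img, None)

for kp in keypoints:
    # Each keypoint records where the local structure is and at what scale.
    print(f"location=({kp.pt[0]:.1f}, {kp.pt[1]:.1f})  size={kp.size:.1f}")
</syntaxhighlight>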
https://opencv-python-tutroals.readthedocs.io/en/latest/_images/sift_scale_invariant.jpg
<youtube>dlqn-wPvjxg</youtube>
<youtube>c3IRD4P2EnA</youtube>
== SIFT ==
In 2004, D. Lowe of the University of British Columbia introduced a new algorithm, the Scale Invariant Feature Transform (SIFT), in his paper Distinctive Image Features from Scale-Invariant Keypoints, which extracts keypoints and computes their descriptors. (The paper is easy to understand and considered the best material available on SIFT, so this explanation is just a short summary of it.) [https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_sift_intro/py_sift_intro.html Introduction to SIFT (Scale-Invariant Feature Transform) | OpenCV]
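A minimal sketch of extracting SIFT keypoints and descriptors with OpenCV, assuming opencv-python 4.4 or later (where SIFT is available in the main module after the patent expired); the synthetic diamond image is only a stand-in for a real photograph.

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Synthetic test image: a filled diamond gives SIFT corners to latch onto.
img = np.zeros((256, 256), dtype=np.uint8)
pts = np.array([[60, 100], [128, 40], [196, 100], [128, 160]], dtype=np.int32)
cv2.fillConvexPoly(img, pts, 255)

sift = cv2.SIFT_create()
# detectAndCompute returns the keypoints plus an (N x 128) array of
# descriptors, one 128-dimensional vector per keypoint.
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"found {len(keypoints)} keypoints, descriptor shape "
      f"{None if descriptors is None else descriptors.shape}")

# Each keypoint carries the scale and orientation that make SIFT
# invariant to resizing and rotation; the rich-keypoint flag draws them.
out = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("sift_keypoints.jpg", out)
</syntaxhighlight>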
<youtube>NPcMS49V5hg</youtube>
== LF-Net ==
LF-Net has two main components. The first is a dense, multi-scale, fully convolutional network that returns keypoint locations, scales, and orientations; it is designed for fast inference and to be agnostic to image size. The second is a network that outputs local descriptors given patches cropped around the keypoints produced by the first network. These are called the detector and the descriptor. The authors present a novel deep architecture and a training strategy to learn a local feature pipeline from scratch, using collections of images without the need for human supervision. They propose a sparse-matching method with a novel deep architecture, named LF-Net (Local Feature Network), that is trainable end-to-end and does not require a hand-crafted detector to generate training data. Local features have played a crucial role in computer vision, becoming the de facto standard for wide-baseline image matching. [https://papers.nips.cc/paper/7861-lf-net-learning-local-features-from-images.pdf LF-Net: Learning Local Features from Images | Y. Ono, E. Trulls, P. Fua, and K. Yi]
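A toy sketch of LF-Net's two-component layout in PyTorch; the layer sizes, the 32x32 patch size, and the 128-dimensional descriptors are illustrative assumptions, not the architecture from the paper.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Fully convolutional net: image -> dense keypoint score map.
    Being fully convolutional keeps it agnostic to the input size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # per-pixel keypoint score
        )
    def forward(self, image):
        return self.net(image)

class Descriptor(nn.Module):
    """Maps a patch cropped around each keypoint to a fixed-length vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
    def forward(self, patches):
        # L2-normalize descriptors, as is typical for matching.
        return nn.functional.normalize(self.net(patches), dim=1)

image = torch.rand(1, 1, 240, 320)   # grayscale image of arbitrary size
scores = Detector()(image)           # (1, 1, 240, 320) keypoint score map
patches = torch.rand(8, 1, 32, 32)   # 8 patches cropped around keypoints
desc = Descriptor()(patches)         # (8, 128) descriptor vectors
print(scores.shape, desc.shape)
</syntaxhighlight>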
