Vision



Computer vision is the development of algorithms and techniques that enable computers to perceive, analyze, and interpret visual data from the world around them, processing digital images or videos in a way that simulates human vision. It encompasses tasks such as object detection, image recognition, segmentation, tracking, and scene reconstruction, and is used in a wide range of applications, including robotics, autonomous vehicles, security and surveillance, medical imaging, and augmented reality.



Vision Transformers (ViT)


ViT is a transformer targeted at vision tasks such as image recognition. It was first proposed in 2019 by Cordonnier et al. and later evaluated empirically at larger scale in the well-known paper "An Image is Worth 16x16 Words". ViT treats an image the way a standard transformer treats a sentence: the input image is split into a sequence of fixed-size, non-overlapping patches, each of which is linearly embedded into a vector that plays the role of a word token. A [CLS] token, a special token whose name stands for "classification", is prepended to the sequence to serve as a representation of the entire image. The authors also add absolute position embeddings and feed the resulting sequence of vectors to a standard Transformer encoder. Because self-attention lets the [CLS] token be influenced by all the other tokens, it is used as the only input to the final MLP head, the Multi-Layer Perceptron layer that outputs the final classification result.
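To make the pipeline concrete, here is a minimal ViT-style classifier sketch in PyTorch. It follows the steps described above (patch embedding, [CLS] token, absolute position embeddings, standard Transformer encoder, MLP head on the [CLS] output); the hyperparameters are illustrative assumptions, not those of any published ViT variant.

```python
# Minimal ViT-style classifier sketch (PyTorch). Assumes 224x224 RGB inputs
# and 16x16 patches; dim/depth/heads are illustrative, not a published config.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=192,
                 depth=4, heads=3, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Split the image into non-overlapping patches and linearly embed
        # them; a strided convolution performs both steps in one op.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        # Learnable [CLS] token and absolute position embeddings.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # The MLP head: classifies from the [CLS] token alone.
        self.head = nn.Sequential(nn.LayerNorm(dim),
                                  nn.Linear(dim, num_classes))

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.patch_embed(x)              # (B, dim, 14, 14)
        x = x.flatten(2).transpose(1, 2)     # (B, 196, dim) patch tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                  # standard Transformer encoder
        return self.head(x[:, 0])            # logits from the [CLS] token

logits = TinyViT()(torch.randn(2, 3, 224, 224))  # -> shape (2, 1000)
```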



Image Retrieval / Object Detection

Segment Anything Model (SAM)


The Segment Anything Model (SAM) and the accompanying Segment Anything 1-Billion mask dataset (SA-1B), the most extensive segmentation dataset to date, aim to democratize image segmentation by introducing a new task, dataset, and model. Using an efficient model within a data-collection loop, Meta AI researchers constructed the largest segmentation dataset so far, containing over 1 billion masks on 11 million licensed and privacy-respecting images. The model is purposefully designed and trained to be promptable, enabling zero-shot transfer to new image distributions and tasks. Meta AI Introduces the Segment Anything Model, a Game-Changing Model for Object Segmentation | Daniel Dominguez - InfoQ
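"Promptable" here means the model takes an image plus a prompt (such as clicked points or boxes) and returns segmentation masks. Below is a minimal sketch using Meta AI's open-source segment-anything package; the checkpoint filename, the random input image, and the click coordinates are illustrative assumptions.

```python
# Minimal promptable-segmentation sketch with Meta AI's segment-anything
# package (pip install segment-anything). Checkpoint path and click
# coordinates are illustrative assumptions.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM backbone from a downloaded checkpoint (hypothetical path).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# Any HxWx3 uint8 RGB image; random pixels here just keep the sketch self-contained.
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
predictor.set_image(image)  # computes the image embedding once per image

# Prompt: a single foreground click at (x=320, y=240); label 1 = foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks ranked by score
)
print(masks.shape, scores)  # e.g. (3, 480, 640) boolean masks and their scores
```

Because the image embedding is computed once in set_image, many different prompts can be run against the same image cheaply, which is what makes interactive, zero-shot segmentation practical.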

Faster Region-based Convolutional Neural Network (Faster R-CNN), You Only Look Once (YOLO), Single Shot Detector (SSD)

LiDAR
