Bird Identification


Overall, AI is being used in various ways to improve our ability to identify birds, providing valuable insights for scientists, conservationists, and bird enthusiasts alike. Researchers at the Cornell Lab of Ornithology are passionate about studying birds and biodiversity and advancing conservation. Their mission encompasses fieldwork, laboratory research, data-intensive science, student training, and globally renowned citizen-science and lifelong learning programs.

Merlin - Cornell Lab of Ornithology

Merlin Bird ID is a mobile app developed by the Cornell Lab of Ornithology that uses Machine Learning (ML) and computer vision to identify bird species from photos and sounds.

Merlin's approach to sound identification is powered by tens of thousands of citizen scientists who contributed their bird observations and sound recordings to the Lab's Macaulay Library via eBird, the Cornell Lab's global database. The breakthrough came when researchers began treating sounds as images and applying the same powerful image-classification algorithms that already power Merlin's Photo ID feature. Each sound recording a user makes is converted from a waveform to a spectrogram, a way to visualize the amplitude (volume), frequency (pitch), and duration of the sound. So just as Merlin can identify a picture of a bird, it can now use this picture of a bird's sound to make an ID.

The app is given a training set of bird vocalizations and photos whose identities are known, and the program teaches itself to identify each of those vocalizations or images. Once trained, the program is given unknown vocalizations or images and proposes an identification. The programmer can check whether those identifications are correct; if not, the program is informed of the wrong decision and revises its learning, so the application should get better and better over time. Trained on the vast photo library at the Cornell Lab of Ornithology, the app can now identify over 8,000 species, 80% of the world's bird species. Some users report a 90% or higher rate of correct identifications, and Merlin gets over 90% of its sound identifications right. The app is available for free on Android and iOS platforms.
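The waveform-to-spectrogram step described above can be sketched in a few lines. This is an illustrative example using SciPy's short-time Fourier transform, not Merlin's actual pipeline; the sample rate and the synthetic "chirp" standing in for a bird call are assumptions.

```python
import numpy as np
from scipy import signal

sample_rate = 22050  # Hz; assumed value for illustration
t = np.linspace(0, 2.0, int(sample_rate * 2.0), endpoint=False)
# Synthetic chirp standing in for a bird call: frequency sweeps 2 kHz -> 6 kHz
waveform = signal.chirp(t, f0=2000, t1=2.0, f1=6000)

# Short-time Fourier transform yields frequency bins (pitch), time slices
# (duration), and a 2-D magnitude array (volume) -- the "picture" of the sound
freqs, times, Sxx = signal.spectrogram(waveform, fs=sample_rate, nperseg=512)

# Sxx is a 2-D image: one row per frequency bin, one column per time slice,
# which an image classifier can then treat like any photo
print(Sxx.shape)
```

Once in this image form, the recording can be fed to the same kind of classifier used for photos.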



Sound ID is trained on audio recordings that are first converted to visual representations (spectrograms), then analyzed using computer vision tools similar to those that power Photo ID.



Photo ID: Merlin Photo ID uses computer vision technology, developed as part of Dr. Grant Van Horn's doctoral work at Caltech, to identify birds in photos. Photo ID was developed in collaboration with Dr. Pietro Perona's computational vision lab at Caltech and Dr. Serge Belongie's computer vision group at Cornell Tech, collaborators on the Visipedia project. It was first publicly released on November 30, 2017.

Sound ID: Sound ID uses recordings archived in the Macaulay Library to learn how to recognize the vocalizations of different bird species. Dataset preparation began in 2020, with model development starting in early 2021. Sound ID was developed in-house at the Cornell Lab of Ornithology, led by Dr. Grant Van Horn with assistance from Dr. Benjamin Hoffman.
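The train / identify / correct cycle described for Merlin can be illustrated with a toy nearest-centroid classifier. This is an assumed sketch for intuition only, not Merlin's actual architecture; the species names, feature dimensions, and update rule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_recording(center):
    """Pretend each recording is a 16-dim feature vector (e.g. a pooled spectrogram)."""
    return center + rng.normal(scale=0.3, size=16)

# Hypothetical feature-space centers for two species
centers = {"Northern Cardinal": np.full(16, 1.0),
           "American Robin": np.full(16, -1.0)}

# 1) Training: average the labeled examples per species into a centroid
training = {sp: [fake_recording(c) for _ in range(20)] for sp, c in centers.items()}
centroids = {sp: np.mean(examples, axis=0) for sp, examples in training.items()}

def identify(x):
    """Label an unknown recording by its nearest species centroid."""
    return min(centroids, key=lambda sp: np.linalg.norm(x - centroids[sp]))

# 2) Identification: classify an unknown recording
unknown = fake_recording(centers["American Robin"])
pred = identify(unknown)

# 3) Correction: if a checker flags a mistake, fold the example into the
# correct species' centroid so future predictions improve
true_label = "American Robin"
if pred != true_label:
    centroids[true_label] = 0.9 * centroids[true_label] + 0.1 * unknown

print(pred)
```

Real systems replace the centroids with deep networks, but the feedback loop — train on labeled data, predict on unknowns, revise on errors — is the same shape.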

eBird - Cornell Lab of Ornithology

eBird is an online database run by the Cornell Lab of Ornithology that collects bird sightings from scientists and birders across the world. eBird began with a simple idea: that every birdwatcher has unique knowledge and experience. The goal is to gather this information in the form of checklists of birds, archive it, and freely share it to power new data-driven approaches to science, conservation, and education. Overall, eBird uses AI technology to improve its ability to track and predict bird movements, providing valuable insights for scientists and conservationists. At the same time, the Cornell Lab of Ornithology develops tools that make birding more rewarding. From managing lists, photos, and audio recordings, to seeing real-time maps of species distribution, to alerts that let you know when species have been seen, the Cornell Lab strives to provide the most current and useful information to the birding community. eBird is among the world's largest biodiversity-related science projects, with more than 100 million bird sightings contributed annually by eBirders around the world and an average participation growth rate of approximately 20% year over year. A collaborative enterprise with hundreds of partner organizations, thousands of regional experts, and hundreds of thousands of users, eBird is managed by the Cornell Lab of Ornithology.
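A checklist, the basic unit of eBird data, can be pictured as a dated, located list of species with counts. The field names below are illustrative, not the actual eBird API schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Sighting:
    species: str
    count: int

@dataclass
class Checklist:
    observer: str
    location: str          # e.g. a hotspot name or lat/lon pair
    observed_on: date
    sightings: List[Sighting] = field(default_factory=list)

    def species_total(self) -> int:
        """Number of distinct species on this checklist."""
        return len({s.species for s in self.sightings})

# Hypothetical checklist from a single outing
cl = Checklist("demo_birder", "Sapsucker Woods", date(2023, 5, 14))
cl.sightings.append(Sighting("Northern Cardinal", 2))
cl.sightings.append(Sighting("American Robin", 5))
print(cl.species_total())  # -> 2
```

Millions of such records, aggregated over space and time, are what make the distribution maps and the models described below possible.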

BirdFlow

The data, which has been accumulating since 2002, is used by scientists to track changes in bird distribution and to know where a specific species is at any given time. Recently, researchers at Cornell and the University of Massachusetts Amherst announced that they had developed a new "probabilistic modeling framework" called BirdFlow that uses AI to predict the likely position of a bird species several weeks into the future, given a starting time and location. BirdFlow makes its predictions using two main factors: information about the biological cost of migration and distribution maps generated from eBird. The maps are gridded into tiny cells, like a game board, and BirdFlow uses the data it has to predict the likelihood that a bird will land on each cell. Like other AI systems, BirdFlow "learns" — that is, it is able to adjust its predictions based on comparisons with the actual routes birds take.
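The gridded-cell idea can be sketched as a simple forward simulation. This is an assumed toy model, not BirdFlow itself: a one-dimensional strip of cells, made-up weekly occupancy maps standing in for eBird data, and a transition rule that trades attraction to next week's map against a per-cell migration cost.

```python
import numpy as np

n = 5  # a tiny 5-cell south-to-north strip; real grids are 2-D and far larger

# Hypothetical weekly eBird-style occupancy maps (rows sum to 1)
target = np.array([
    [0.70, 0.20, 0.08, 0.02, 0.00],  # week 0: mostly south
    [0.30, 0.40, 0.20, 0.08, 0.02],  # week 1
    [0.05, 0.25, 0.40, 0.25, 0.05],  # week 2
    [0.00, 0.05, 0.25, 0.40, 0.30],  # week 3: mostly north
])

def transition(week, cost_per_cell=0.5):
    """P[i, j]: probability of moving from cell i to cell j in one week.
    Pulls toward next week's occupancy map, penalized by migration distance."""
    P = np.zeros((n, n))
    for i in range(n):
        # score = attractiveness of cell j next week minus a distance cost
        scores = np.log(target[week + 1] + 1e-9) - cost_per_cell * np.abs(np.arange(n) - i)
        w = np.exp(scores - scores.max())
        P[i] = w / w.sum()
    return P

# Starting condition: bird observed in the southernmost cell at week 0
dist = np.zeros(n)
dist[0] = 1.0
for week in range(3):
    dist = dist @ transition(week)  # roll the distribution forward one week

print(np.round(dist, 3))  # predicted per-cell probabilities three weeks out
```

The "learning" step the paragraph mentions would correspond to tuning parameters like `cost_per_cell` so that simulated trajectories match tracked birds' actual routes.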

Haikubox

Automatic and continuous birdsong identification: identify your birds by their songs, chirps, and peeps. Haikubox is a smart device that brings you 24/7 real-time alerts, sound recordings, and loads of information about your birds. It uses a proprietary neural net, BirdNet for Haikubox, trained on thousands of bird recordings to identify birds by their songs, chirps, and peeps. Haikubox listens around the clock for every bird song and chirp, and shares what it finds on the Haikubox app (iPhone and Android) and website. Each Haikubox owner becomes a community scientist, sharing information with researchers at the K. Lisa Yang Center for Conservation Bioacoustics at the Cornell Lab of Ornithology, which is dedicated to the collection and study of natural sounds.

Birdsongs