Music

Revision as of 18:29, 29 February 2024 by BPeat





suno.ai

"It’s one of the most popular AI music tools available. And more often than not, it’s my first choice when it comes to music creation. Suno allows you to input your own lyrics (or have ChatGPT write you some) and it lets you select the music style, which you can then customise. That’s more than enough to create a decent AI song! If you’re looking to create unique compositions and experiment with different musical translations, this one’s ideal for you!", AI Andy


Stable Audio

Stable Audio is a music generation tool from Stability AI that uses latent diffusion to create high-quality, 44.1 kHz music for commercial use. Latent diffusion is a type of generative AI: during training, noise is gradually added to a compressed (latent) representation of real audio and the model learns to reverse the process; at generation time, the model starts from pure noise and removes it step by step to produce new audio. Stable Audio's latent diffusion architecture is conditioned on text metadata as well as audio file duration and start time. This allows the model to generate audio of a specified length and style, and to ensure that the generated audio is musically coherent. Stable Audio is still under development, but it has already been used to generate music for a variety of projects, including video games, films, and commercials. Here are some of the key features of Stable Audio:

  • High-quality music: Stable Audio can generate music that is comparable to the quality of human-composed music.
  • Control over the content and length: Users can specify the desired style, mood, and length of the generated music.
  • Ease of use: Stable Audio has a simple and intuitive web interface.
  • Commercial use: Stable Audio is designed for commercial use, and users can generate and download tracks for commercial projects.
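The denoising idea behind latent diffusion can be sketched in a few lines. This is a toy illustration only: the noise schedule, latent size, and the "oracle" denoiser are assumptions for the demo, not Stability AI's actual model, which replaces the oracle with a trained neural network conditioned on text, duration, and start time.

```python
# Toy latent diffusion sketch: noise a "clean" latent, then recover it
# by iterative denoising. All names and values here are illustrative.
import math
import random

random.seed(0)
T = 50                                          # number of diffusion steps
betas = [1e-4 + (0.05 - 1e-4) * t / (T - 1) for t in range(T)]
abar = []                                       # cumulative product of (1 - beta)
p = 1.0
for b in betas:
    p *= 1.0 - b
    abar.append(p)

z0 = [random.gauss(0, 1) for _ in range(16)]    # stand-in "clean" audio latent

def noise_to(z_clean, t, eps):
    """Forward process: produce a noised latent z_t from the clean latent."""
    a, s = math.sqrt(abar[t]), math.sqrt(1.0 - abar[t])
    return [a * x + s * e for x, e in zip(z_clean, eps)]

def oracle_denoiser(zt, t):
    """Stand-in for the trained network: predicts the noise inside z_t.
    A real model would be conditioned on text metadata here."""
    a, s = math.sqrt(abar[t]), math.sqrt(1.0 - abar[t])
    return [(x - a * x0) / s for x, x0 in zip(zt, z0)]

# Reverse process: start from pure noise and strip it away step by step.
z = [random.gauss(0, 1) for _ in range(16)]
for t in reversed(range(T)):
    eps_hat = oracle_denoiser(z, t)
    a, s = math.sqrt(abar[t]), math.sqrt(1.0 - abar[t])
    z = [(x - s * e) / a for x, e in zip(z, eps_hat)]   # estimate of z_0
    if t > 0:                                           # re-noise to step t-1
        z = noise_to(z, t - 1, [random.gauss(0, 1) for _ in range(16)])

print(max(abs(x - x0) for x, x0 in zip(z, z0)) < 1e-6)  # True: clean latent recovered
```

Because the oracle knows the clean latent, the loop recovers it exactly; a trained model instead learns to predict the noise from data, so the same loop generates *new* latents, which a decoder then turns into audio.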

MusicLM


Google also shows off MusicLM's "long generation" (creating five-minute music clips from a simple prompt), "story mode" (which takes a sequence of text prompts and turns it into a morphing series of musical tunes), "text and melody conditioning" (which takes a human humming or whistling audio input and changes it to match the style laid out in a prompt), and generating music that matches the mood of image captions. ... MusicLM: Google AI generates music in various genres at 24 kHz | Benj Edwards - Ars Technica


Example prompt: "Slow tempo, bass-and-drums-led reggae song. Sustained electric guitar. High-pitched bongos with ringing tones. Vocals are relaxed with a laid-back feel, very expressive."
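The "story mode" described above takes a sequence of text prompts and morphs between them over the clip. The bookkeeping can be sketched as below; `story_schedule` and the even-split timing are hypothetical illustrations, not Google's actual API.

```python
# Sketch of "story mode" scheduling: map a sequence of text prompts to
# time segments of one output clip. Hypothetical helper, not MusicLM's API.
def story_schedule(prompts, total_seconds):
    """Split a clip evenly among prompts, returning (start, end, prompt) tuples."""
    seg = total_seconds / len(prompts)
    return [(round(i * seg, 2), round((i + 1) * seg, 2), p)
            for i, p in enumerate(prompts)]

schedule = story_schedule(
    ["time to meditate", "time to wake up", "time to run", "time to give 100%"],
    total_seconds=60,
)
for start, end, prompt in schedule:
    print(f"{start:>5.1f}-{end:<5.1f}s  {prompt}")
```

A real story-mode model would condition each segment's generation on its prompt while keeping tempo and timbre continuous across the boundaries.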


Neurorack

The first deep-AI-based synthesizer. We developed the first musical audio synthesizer combining the power of deep generative models with the compactness of the Eurorack format. The current prototype relies on the NVIDIA Jetson Nano. The goal of this project is to design the next generation of musical instruments, providing a new tool for musicians while enhancing their creativity. It proposes a novel way to think about and compose music, and we firmly believe that AI can be used to achieve this quest. The Eurorack hardware and software were developed by our team, with equal contributions from Ninon Devis, Philippe Esling and Martin Vert.

OpenAI Jukebox

Making Music

  • Boomy AI ... select the genre, choose the mood, and create original songs in seconds

Drums

Text-to-Song

Siraj Raval