Google has unveiled MusicLM, an artificial intelligence system that can generate music in any genre from a text description. However, the company has not yet opened public access to the model.
Alternatives such as Riffusion are unable to create complex compositions due to technical limitations and relatively small training datasets.
MusicLM, by contrast, was trained on a dataset of 280,000 hours of music, enabling it to generate songs of "considerable complexity" (e.g. "charming jazz" or "90s Berlin techno").
The neural network captures nuanced descriptions covering instrumental riffs, melodies and mood. For example, MusicLM can generate a tune meant to evoke "the feeling of being in space" or "the basic soundtrack of an arcade game".
Google researchers explained that the system can build on an existing melody, whether it is hummed, sung, whistled or played on an instrument. Moreover, MusicLM can take a sequence of descriptions (e.g. "time to meditate", "time to wake up", "time to run", "time to give it 100%") and create a kind of melodic "story" lasting up to several minutes, similar to a movie soundtrack.
MusicLM can also be prompted with a combination of image and caption, or generate audio "played" on a particular type of instrument in a particular genre. You can even set the experience level of the "musician".
However, some generated tunes contain distortion, an apparently unavoidable side effect of the training process. Technically MusicLM can generate vocals, including choral harmonies, but so far they leave much to be desired: most song "lyrics" range from barely intelligible English to strings of nonsense syllables sung by synthesised voices.
Google researchers also point to ethical problems associated with MusicLM, including the risk of copyright infringement. During their experiments, they found that about 1% of the music generated by the system consisted of direct excerpts from songs in its training data.
Meanwhile, the Riffusion model is released under the Creative ML OpenRAIL-M license, which allows commercial use. It works analogously to Stable Diffusion's image generation: the model produces audio in the form of spectrogram images. For example, it can generate spectrograms in a reference style, combine different styles, transition seamlessly from one style to another, or modify existing audio to boost individual instruments, change the rhythm, and so on.
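The spectrogram approach hinges on one step the article glosses over: a generated spectrogram stores only magnitudes, so the phase needed to reconstruct a waveform has to be estimated. The classic technique for this is the Griffin-Lim algorithm. Below is a minimal NumPy sketch of that idea; the function names, FFT size and hop length are illustrative assumptions, not Riffusion's actual code, which uses larger models and tuned parameters.

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    # Frame the signal with a Hann window and take the FFT of each frame.
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(S, n_fft=512, hop=128):
    # Inverse transform via overlap-add, compensating for the window energy.
    window = np.hanning(n_fft)
    n = (S.shape[0] - 1) * hop + n_fft
    x = np.zeros(n)
    norm = np.zeros(n)
    for i, frame in enumerate(np.fft.irfft(S, n=n_fft, axis=1)):
        x[i * hop:i * hop + n_fft] += frame * window
        norm[i * hop:i * hop + n_fft] += window ** 2
    return x / np.maximum(norm, 1e-8)

def griffin_lim(magnitude, n_iter=32, n_fft=512, hop=128):
    # Start from random phase, then alternate between time and frequency
    # domains, keeping the target magnitudes and the estimated phases.
    phase = np.exp(2j * np.pi * np.random.rand(*magnitude.shape))
    for _ in range(n_iter):
        audio = istft(magnitude * phase, n_fft, hop)
        phase = np.exp(1j * np.angle(stft(audio, n_fft, hop)))
    return istft(magnitude * phase, n_fft, hop)
```

Because phase is estimated rather than stored, the recovered audio is only approximate, which is one reason spectrogram-based generators can sound slightly smeared compared with models that generate waveforms directly.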