Artificial intelligence (AI) is being used to create new ‘deepfake’ pop songs that sound like they’re being performed by dead musicians, including Elvis Presley, Frank Sinatra, David Bowie and Michael Jackson.

Jukebox, created by California-based company OpenAI, is a neural network that generates eerie approximations of pop songs in the style of multiple artists.

The neural network generates music, including rudimentary singing complete with lyrics in English and a variety of instruments like guitar and piano. 

Experts think the technology has the potential to create copyright-free imitations or even new tunes that could be released as singles under a deceased artist’s name. 


Elvis back from the dead? The online library has the potential to create copyright-free imitations or even new tunes that could be released as singles under a deceased artist's name

OpenAI has created an expansive library of new tracks imitating a diverse selection of artists, including the Beatles, Nirvana, Katy Perry, Simon and Garfunkel, Stevie Wonder, Elton John and Ed Sheeran, as well as deceased stars who almost appear to have been brought back to life. 

Most of the samples have a bizarre, faraway quality to them, as if they’re poorly produced demos from the 1950s that haven’t seen the light of day until now. 

WHAT IS A DEEPFAKE? 

Deepfakes are so named because they are made using deep learning, a form of artificial intelligence, to create fake videos of a target individual.

They are made by feeding a computer an algorithm, or set of instructions, as well as lots of images and audio of the target person.

The computer program then learns how to mimic the person’s facial expressions, mannerisms, voice and inflections.

With enough video and audio of someone, you can combine a fake video of a person with fake audio and get them to say anything you want.


Not all of them could reasonably be confused with the artist being imitated – one entry listed as ‘rock in the style of the Beatles’ sounds like a bad imitation of Thom Yorke from Radiohead.  

The song starts with the lyrics ‘Without going out of my door I can know all things on Earth’ and warbles along to an unsatisfying conclusion. 

But the entry under ‘glam metal in the style of the Darkness’ could be mistaken for a new single from the flamboyant British rockers. 

OpenAI says there have been rapid advances in AI-generated text and images and in controversial deepfake videos, and artificial music is now making strides forward too. 

‘We introduce Jukebox, a model that generates music with singing in the raw audio domain,’ said the group of OpenAI researchers in their paper, entitled Jukebox: A Generative Model for Music.

‘The combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. 

‘We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable.

‘We are releasing thousands of non cherry-picked samples, along with model weights and code.’

OpenAI scraped 1.2 million songs from the internet, 600,000 of them sung in English, and paired them with their lyrics and metadata, which were fed into the AI to generate approximations of the different artists.
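For readers curious what that pairing looks like in practice, below is a minimal, self-contained Python sketch of the kind of record such a pipeline might produce: one song's raw audio alongside the artist, genre and lyric text the model is conditioned on. The class and field names are illustrative assumptions, not OpenAI's actual code.

```python
# Illustrative sketch only: a hypothetical record pairing one scraped song's
# audio with the labels a Jukebox-style model is conditioned on.
from dataclasses import dataclass

import numpy as np


@dataclass
class SongRecord:
    audio: np.ndarray   # raw waveform, e.g. one channel at 44.1 kHz
    artist: str         # conditioning label used to steer musical and vocal style
    genre: str          # conditioning label, e.g. "rock" or "jazz"
    lyrics: str         # unaligned lyric text used to make the singing controllable


# Example: one second of silent placeholder audio standing in for a real track.
example = SongRecord(
    audio=np.zeros(44_100, dtype=np.float32),
    artist="Elvis Presley",
    genre="rock",
    lyrics="(unaligned lyric text for the track goes here)",
)
```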

The model by the California firm generates high-fidelity and diverse songs lasting up to several minutes

The firm said its models can produce songs from diverse genres of music like rock, hip-hop, and jazz while capturing ‘melody, rhythm and timbres’.    

‘As a piece of engineering, it’s really impressive,’ Dr Matthew Yee-King, an electronic musician and computing expert at Goldsmiths, told the Guardian. 

‘They break down an audio signal into a set of lexemes of music – a dictionary if you like – at three different layers of time, giving you a set of core fragments that is sufficient to reconstruct the music that was fed in. 

‘The algorithm can then rearrange these fragments, based on the stimulus you input. 

‘So, give it some Ella Fitzgerald for example, and it will find and piece together the relevant bits of the dictionary to create something in her musical space.’ 
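Dr Yee-King's 'dictionary' description refers to the hierarchical compression at the heart of Jukebox, which squeezes raw audio into sequences of discrete codes at three time resolutions before a second model learns to generate new code sequences. The toy Python sketch below is an assumption-laden illustration of that encode, rearrange and decode idea: it uses a random fragment dictionary and simple nearest-neighbour matching, whereas the real system learns its codebooks with a neural network (a VQ-VAE).

```python
# Toy illustration (not OpenAI's code): compress audio into code indices drawn
# from a small fragment dictionary at three time resolutions, then rebuild it.
import numpy as np

rng = np.random.default_rng(0)

# A "dictionary" of 16 short audio fragments, 64 samples each. In Jukebox these
# fragments are learned from data; random placeholders are used here.
codebook = rng.normal(size=(16, 64)).astype(np.float32)


def fragments_at(hop: int) -> np.ndarray:
    """Crudely resample each dictionary fragment to hop samples."""
    return np.array(
        [np.interp(np.linspace(0, 63, hop), np.arange(64), frag) for frag in codebook]
    )


def encode(audio: np.ndarray, hop: int) -> np.ndarray:
    """Replace each hop-sized chunk of audio with the index of its closest fragment."""
    chunks = audio[: len(audio) // hop * hop].reshape(-1, hop)
    dists = ((chunks[:, None, :] - fragments_at(hop)[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)  # one code index per chunk


def decode(codes: np.ndarray, hop: int) -> np.ndarray:
    """Reassemble an approximation of the audio by concatenating chosen fragments."""
    return fragments_at(hop)[codes].reshape(-1)


audio = rng.normal(size=4096).astype(np.float32)  # stand-in for a real recording
# Three "layers of time": coarse, medium and fine code sequences over the same audio.
codes = {hop: encode(audio, hop) for hop in (256, 64, 16)}
rebuilt = decode(codes[16], hop=16)  # the finest layer gives the closest reconstruction
```

In the full system, a separate model then generates fresh code sequences conditioned on artist, genre and lyrics, and the decoder turns those codes back into audio, which is what produces the 'new' Elvis or Sinatra-style songs.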

Experts believe the technology could change the music industry by creating new hits, but it is likely to throw up problems with music copyright. 

Deepfake music blurs the line between using a song protected by copyright and using a cheaper or copyright-free approximation. 

‘If someone hasn’t used the actual recording you’d have no legal action against them in terms of copyright with regards to the sound recording,’ said Rupert Skellett, head of legal for British record company Beggars Group.  

One entry in the online library that's listed as 'rock in the style of the Beatles' (pictured) needs a bit of work 

The creation of deepfake music is not new – in January, Microsoft announced a new collaboration with Icelandic singer Björk to create a series of musical compositions with a custom-built AI tool.

The AI created new variations of Björk’s original arrangements based on the changing weather patterns and position of the sun. 

Described as a ‘generative soundscape’, the composition combines sounds and motifs from Björk’s personal choir archives, which she has compiled over the last 17 years as a solo artist.  

Back in 2017, scientists created a ‘Bot Dylan’ computer capable of writing its own folk music. 

Björk's AI project adapted in accordance with changing weather patterns captured by a live video feed taken from the roof of a New York hotel

The system uses artificial intelligence to compose new tunes after being trained on 23,000 pieces of Irish folk music.

Study author Dr Oded Ben-Tal, a senior lecturer in music technology at Kingston University in London, said: ‘We didn’t expect any of the machine-generated melodies to be very good.

‘But we, and several other musicians we worked with, were really surprised at the quality of the music the system created.

‘People are reluctant to believe machines can be creative – it’s seen as a very human trait.’

In 2017, Spotify hired French AI expert and composer François Pachet from Sony, fuelling speculation that the streaming giant is dabbling in artificial music creation. 

This post first appeared on Dailymail.co.uk
