

Can We Teach Computers to Make Like Mendelssohn?

Not yet. The robot composer is good at emulating short snippets, but it still hasn’t mastered melody

Duke researchers are teaching computers to compose new classical music in the style of Romantic-era composers like Chopin and Beethoven. Photo from Pixabay.com

DURHAM, N.C. -- If the field of machine learning had a holiday theme song, it might be a version of the Christmas carol “Hark! The Herald Angels Sing” written by a computer.

Duke researchers are teaching computers to write classical piano music in the mode of great composers like Mendelssohn and Beethoven. The results are a pastiche of 19th-century style.

Duke graduate student Anna Yanchenko started working on the project with Duke statistics professor Sayan Mukherjee as part of her master’s degree in statistical science. A musician herself, Yanchenko wanted to see if she could use artificial intelligence to turn a computer into a composer. What they found doesn’t signal the takeover of the machines, but it helps researchers understand the machines’ creative potential.

The team wanted to know which machine learning methods come closest to producing classical piano music that could pass as human. To find out, they did an experiment.

They focused on a particular class of methods used to analyze how data changes over time, called state space models. Computer speech recognition uses the same sorts of models to translate what a person says to text on the screen.
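
The article doesn’t spell out which specific state space models the team compared, but the basic idea can be sketched with the simplest member of the family, a hidden Markov model: an unobserved state evolves over time according to a transition rule, and each state emits an observable note. The probabilities and note names below are invented for illustration, not taken from the study.

```python
import numpy as np

# Toy hidden Markov model, the simplest discrete state space model: a hidden
# "musical context" state steps forward via a transition matrix, and each
# state emits a note. All numbers and note names here are illustrative.
rng = np.random.default_rng(0)

transition = np.array([[0.8, 0.2],     # P(next hidden state | current hidden state)
                       [0.3, 0.7]])
emission = np.array([[0.6, 0.3, 0.1],  # P(note | hidden state)
                     [0.1, 0.3, 0.6]])
notes = ["C4", "E4", "G4"]

state = 0
melody = []
for _ in range(16):
    melody.append(notes[rng.choice(3, p=emission[state])])  # emit a note from the current state
    state = rng.choice(2, p=transition[state])               # step the hidden state forward

print(" ".join(melody))
```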

To train the models, the researchers chose 10 piano pieces from the Romantic era, the period of Western classical music that began in the early 1800s.

Each piece was converted to a string of numbers representing the pitch and length of each note. The researchers then fed the data set into the system, and had 14 models analyze the music and create new works in the style of the originals.
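
The exact encoding the team used isn’t given here, but a common scheme, sketched below, pairs each note’s MIDI pitch number with its duration in beats; the opening figure and all values are hypothetical.

```python
# Hypothetical encoding of a short opening figure as (pitch, duration) pairs.
# Pitches are MIDI note numbers (60 = middle C); durations are in quarter-note beats.
PITCH = {"G4": 67, "Eb4": 63}

opening = [("G4", 0.5), ("G4", 0.5), ("G4", 0.5), ("Eb4", 2.0)]  # a da-da-da-DUM-like figure

encoded = [(PITCH[name], dur) for name, dur in opening]
print(encoded)  # [(67, 0.5), (67, 0.5), (67, 0.5), (63, 2.0)]
```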

Before it’s trained, the system doesn’t know anything about Romantic music. It learns, on its own, from the data.

The computer sorts through each piece looking for characteristic patterns of notes, or motifs, then builds a new sequence of notes based on the patterns it finds.
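
As a minimal stand-in for that learn-and-generate loop, the sketch below fits a first-order Markov chain by counting which pitch tends to follow which in a training piece, then samples a new sequence from those counts. The state space models evaluated in the paper are richer than this, and the training sequence here is made up.

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Count which note follows which in a training piece."""
    successors = defaultdict(list)
    for prev, nxt in zip(notes, notes[1:]):
        successors[prev].append(nxt)
    return successors

def generate(successors, start, length):
    """Build a new sequence by repeatedly sampling a successor of the current note."""
    piece = [start]
    for _ in range(length - 1):
        options = successors.get(piece[-1]) or list(successors)  # fall back if this note never led anywhere
        piece.append(random.choice(options))
    return piece

training = [67, 67, 67, 63, 65, 65, 65, 62, 67, 67, 67, 63]  # made-up MIDI pitches
print(generate(train_markov(training), start=67, length=16))
```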

A remix based on Beethoven's Fifth Symphony, for example, might still contain the famously foreboding da-da-da-DUM that marks the beginning of the original. But the notes might reappear at a different pitch, or tempo, or be rearranged in some other way.

The researchers used their models to generate 140 new pieces, each 1000 notes long.

To determine the winners, they scored the results in terms of harmony, melody and originality, or how much each generated piece differed from the original that inspired it.
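
The scoring formulas aren’t reported here; as one rough illustration of what an originality measure could look like, the sketch below computes the fraction of four-note patterns in a generated piece that never occur in the training piece. This particular metric is an assumption for illustration, not the one the team used.

```python
def ngrams(seq, n=4):
    """All consecutive runs of n notes in a sequence."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def originality(generated, original, n=4):
    """Fraction of n-note patterns in the generated piece that never appear in the original."""
    gen = ngrams(generated, n)
    return len(gen - ngrams(original, n)) / len(gen) if gen else 0.0

# Hypothetical pitch sequences: a generated excerpt compared against a training excerpt.
print(originality([67, 67, 67, 63, 65, 62, 60, 62], [67, 67, 67, 63, 65, 65, 65, 62]))
```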

The team also played the top-scoring pieces for 16 people, half of whom were musicians, to see what they thought.

The results are unlikely to replace the maestros soon. When asked how human-like the generated pieces were on a scale of 1 to 5, most listeners gave them a three.

The biggest difference is the lack of melody, Yanchenko says. “There’s really no long-term structure,” she said. “That’s the main giveaway that this hasn’t been composed by a human.”

Instead, many of the pieces sounded more like they were composed by an amnesiac with a looping machine: “It seems to forget what it just did. It repeats itself a lot,” Yanchenko said.

“Jokingly we can say we can maybe generate Muzak for elevators,” Mukherjee said.

While many listeners couldn’t identify the original songs that inspired the computer-generated pieces, most named the piece as their favorite.

The tune came in second, though the listeners said it sounded more like new-age piano than classical.

The one was the least popular. “They thought it was a little herky-jerky,” Mukherjee said.

The notes in the generated pieces also didn’t blend as well as they did in the originals, though models trained on songs with simple structure, such as Pachelbel's Canon, produced more pleasing tunes.

Computer-generated music isn’t new. Researchers such as David Cope, former professor of music at the University of California, Santa Cruz, have tried to use computers to analyze and imitate musical styles since the early 1980s.

Ultimately, the Duke team is trying to figure out which methods work best for modeling particular types of music. Models developed based on patterns in Romantic-era music may not work as well for jazz, for example.

The researchers concede that the models they tried were trained on a simplified representation of real music. For one, the training data imply that the notes are always played sequentially one after the other, and never simultaneously, as in a chord. But it’s a start.

“In the future we’d also like to look at orchestral pieces with multiple instruments,” said Yanchenko, currently on the staff at MIT Lincoln Laboratory. “But we’re not there yet.”

Yanchenko presented the work on Dec. 7 at the 12th , held in conjunction with the 2017 Conference on Neural Information Processing Systems in Long Beach, California.

This research was supported by the National Science Foundation (NSF DMS 16-13261, NSF IIS 15-46331, NSF DMS 14-18261, NSF DMS-17-13012, NSF ABI 16-61386) and the National Institutes of Health (NIH R21 AG055777-01A).

CITATION: "Classical Music Composition Using State Space Models," Anna Yanchenko and Sayan Mukherjee.