The composition is part of Project Magenta, which seeks to boost the capabilities of machine intelligence to create art and music.
Among other things, the Magenta team is developing algorithms that enable artificial intelligence systems to learn how to create compelling art and music on their own.
Magenta also seeks to build a community of artists, coders and machine learning researchers.
About the Magenta Tune
Google software engineer Elliot Waite created Magenta’s first tune with an LSTM (long short-term memory) neural network trained to use some new techniques in attention, said company spokesperson Jason Friedenfelds.
LSTM networks are well suited to classifying, processing and predicting time series, even when the important events are separated by long gaps of unknown duration.
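Magenta's production network is not published in this article, but the memory mechanism Friedenfelds describes can be illustrated with a single LSTM step. The sketch below is a minimal NumPy illustration of how the gates let the cell state carry information across long gaps; all parameter values are random placeholders, not anything from Magenta:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: gates decide what to forget, write, and expose.

    x: input vector; h_prev, c_prev: previous hidden and cell state;
    W, U, b: stacked parameters for the four gates (input, forget,
    candidate, output), each of hidden size n.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # pre-activations for all four gates
    i = sigmoid(z[0 * n:1 * n])     # input gate: how much new info to write
    f = sigmoid(z[1 * n:2 * n])     # forget gate: how much old memory to keep
    g = np.tanh(z[2 * n:3 * n])     # candidate cell update
    o = sigmoid(z[3 * n:4 * n])     # output gate: how much memory to expose
    c = f * c_prev + i * g          # long-term cell memory
    h = o * np.tanh(c)              # short-term output
    return h, c

# Tiny demo: run a few steps over a random input sequence.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in)) * 0.1
U = rng.standard_normal((4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
```

Because the forget gate multiplies rather than overwrites the cell state, the network can preserve a motif heard many steps earlier, which is what lets a melody model repeat or vary earlier material.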
“The important parts there are memory and attention,” Friedenfelds told TechNewsWorld. “The neural net has to be able to look over a longer range, and to get a sense of what’s important to focus on, to either repeat it or change it. That’s why it seems to have some structure and some repeating elements.”
The Magenta tune, which consists of a piano melody with the accompaniment of a simple drum beat, “was completely self-learned using just a large collection of MIDI pop tunes,” Friedenfelds noted.
It was primed with four notes — C, C, G, G — and "we added some drums just to hold it together, but the melody is machine-generated," he said. "We didn't give it any rules about music, or any little rules of thumb to help it generate anything nice-sounding, as most previous machine-generated music has done."
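Magenta's model is a trained neural network, but the idea of priming — feeding a few starting notes and letting the model continue — can be shown with a toy stand-in. The sketch below uses an invented bigram model over MIDI pitch numbers (C4 = 60, G4 = 67); the corpus and probabilities are made up for illustration and have nothing to do with Magenta's training data:

```python
import random
from collections import Counter, defaultdict

# A toy corpus of melodies as MIDI pitch numbers (C4=60, G4=67, etc.).
corpus = [
    [60, 60, 67, 67, 69, 69, 67],
    [60, 62, 64, 65, 67, 67, 65],
    [67, 67, 69, 69, 67, 65, 64],
]

# Count bigram transitions: which pitch tends to follow which.
transitions = defaultdict(Counter)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a][b] += 1

rng = random.Random(0)

def continue_melody(primer, length):
    """Extend the primer by sampling each next pitch from the bigram counts."""
    melody = list(primer)
    for _ in range(length):
        nexts = transitions[melody[-1]]
        if not nexts:
            break
        pitches, weights = zip(*nexts.items())
        melody.append(rng.choices(pitches, weights=weights)[0])
    return melody

# Prime with the same four notes Magenta used: C, C, G, G.
print(continue_melody([60, 60, 67, 67], 8))
```

The primer fixes the opening, and everything after it is generated from learned statistics — the same division of labor Friedenfelds describes, just with a vastly simpler model.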
What’s Next for Magenta
A small team of researchers from the Google Brain team is building open source infrastructure around TensorFlow, and will release tools and models on Magenta's GitHub page. They also will post demos, tutorial blogs and technical papers, and soon will begin accepting code contributions.
The researchers will begin with audio and video support tools for working with formats such as MIDI, and platforms that help artists connect to machine learning models.
The alpha version of the code is available on Magenta’s GitHub page now. The team will accept external contributions when it has a stable set of tools and models.
“If you have the processing power to analyze color and note patterns, you’ll come up with stuff that’s unique and will be of interest to a wide range of people,” said Jim McGregor, founder and principal analyst at Tirias Research.
“Then you can take the art or music produced and have the system learn — from hits on the Web or comments by people — or see what appeals to the most people,” he told TechNewsWorld. “It’s beats to music or color patterns that catch the user’s eye.”
Is it Art?
Visual art ranges from the works of masters like Michelangelo, Picasso and Rubens to those of pop artists like Andy Warhol and abstract artists like Jackson Pollock, to name a few. Some would include paintings by animals.
In the world of music, a Beethoven sonata may be miles apart from a piece of modern-day techno or a Lady Gaga song, and the genres are countless — classic rock, blues, jazz and heavy metal, to name just a few — but they're all inarguably music.
Are new definitions of the terms “art” and “music” needed to reasonably discuss whether machine intelligence can create works that deserve those labels?
When it comes to defining art, “there are two points of view,” noted Michael Jude, a research program manager at Frost & Sullivan. “First, that art’s in the eye of the beholder — and second, that it’s an emotional expression of the artist or musician.”
The first perspective allows the inclusion of art and music created by machine intelligence, while the second does not, Jude told TechNewsWorld.
“I would say that an AI with sufficient training can create art,” he said. “Whether it’s great or not depends on the reaction of the audience.”
Art typically is “valued as much by its flaws as its intrinsic appeal,” Jude pointed out. “I think machines can create art that’s perceived by some to be of high quality.”