The Importance Of Performance
Hi there! Xiao’an, your friendly neighborhood composer here. After working as a session guitarist, composing for a variety of ensembles, and running an orchestra in Boston for 5 years (employing hundreds of musicians in the process), I know firsthand that the people performing a piece of music play just as important a role in shaping the final recording as the music itself.
This is true whether you’re talking about a solo musician or a full-size orchestra. After all, writing the music is one thing, but it’s the performers who take an idea (and their interpretation of it) and turn it into notes we can hear, record, and stream.
A Closer Look
For this first exploration, we’ll be looking at 4 performances of a very famous (and short) section of classical music: the Presto from Bach’s Violin Partita No. 1 in B Minor, and the results from our analysis of the audio files. For those of you learning about our technology for the first time, here’s a quick breakdown.
What our AI Does
Our Tagging product takes an audio file, turns it into mathematical representations and spectrograms, and analyzes it. The AI then delivers useful information about the track such as its genre, tempo, mood, key, and so on. It has been trained rigorously in each of these categories, listening to nearly two centuries’ worth of audio (by total duration) in order to reach an accuracy of over 90%.
A user can then use this information to organize massive catalogues efficiently (we can analyze 1 million tracks a day) and derive useful insights – the sky’s the limit.
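Musiio hasn’t published the exact shape of its tag output, but to make the catalogue-organization idea concrete, here’s a toy sketch in Python. The field names, scores, and the `tracks_with_mood` helper are all hypothetical, purely for illustration:

```python
# Hypothetical tag output for one track; every field name and
# score here is illustrative, not Musiio's actual schema.
track_tags = {
    "genre": {"Classical": 1.0},
    "mood": {"Majestic": 0.61, "Happy": 0.30},
    "energy": "low-medium",
    "tempo_bpm": 73,
    "key": "B minor",
}

def tracks_with_mood(catalogue, mood, min_score=0.5):
    """Filter a catalogue (track id -> tags) down to tracks whose
    given mood tag meets a confidence threshold."""
    return [
        track_id
        for track_id, tags in catalogue.items()
        if tags.get("mood", {}).get(mood, 0) >= min_score
    ]

catalogue = {"partita_presto_hahn": track_tags}
print(tracks_with_mood(catalogue, "Majestic"))  # -> ['partita_presto_hahn']
print(tracks_with_mood(catalogue, "Happy"))     # -> [] (0.30 is below the threshold)
```

Once every track in a million-item catalogue carries tags like these, queries of this kind become simple filters rather than manual listening sessions.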
Now, on to the business at hand:
What can we learn?
A quick glance across all 4 reveals a couple of obvious similarities – 100% Classical, low-medium energy (in the sense of overall sonic energy, compared to an EDM track for example).
A closer look reveals that while all tracks returned the “Majestic” mood tag, Hilary Hahn’s interpretation had the lowest score for that in particular, but the highest “Happy” score (and the most positive Emotion score).
This may also be due to the fact that it was the only track that returned a specific (and largely correct) tempo of 73 BPM. Her playing was very steady, which allowed our AI to determine a tempo. The other players preferred a more rubato approach and were all analyzed as having a large variance in tempo – perhaps their choice to let the phrases and notes have more time to “speak” earned them a higher “Majestic” score as well.
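Musiio hasn’t disclosed how its tempo detection works, but the steady-vs-rubato distinction above can be sketched with a toy example: given a list of beat onset times, estimate BPM from the average inter-onset interval, and decline to report a tempo when those intervals vary too much. The function, its `max_cv` threshold, and the sample data are all assumptions for illustration:

```python
from statistics import mean, stdev

def estimate_tempo(onset_times, max_cv=0.1):
    """Estimate BPM from beat onset times (in seconds).

    Returns None when the inter-onset intervals vary too much
    relative to their mean (a rubato performance) to justify
    reporting a single tempo figure.
    """
    iois = [b - a for a, b in zip(onset_times, onset_times[1:])]
    cv = stdev(iois) / mean(iois)  # coefficient of variation
    if cv > max_cv:
        return None  # tempo too unstable to report
    return round(60.0 / mean(iois))

# A steady performance at ~73 BPM (one beat every ~0.822 s):
steady = [i * 60.0 / 73 for i in range(20)]
print(estimate_tempo(steady))  # -> 73

# A rubato performance, with beats stretching and compressing:
rubato = [0.0, 0.8, 1.5, 2.5, 3.1, 4.2, 4.8, 6.0]
print(estimate_tempo(rubato))  # -> None
```

On this toy model, a steady performance like Hahn’s yields a clean BPM figure, while the rubato performances produce intervals too variable for a single number – which is one plausible reading of why only her track returned a specific tempo.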
Midori’s interpretation of the piece was the only one to receive the “dramatic” mood (though a modest 22% score), possibly driven by a more dynamic (higher highs, lower lows) performance and the specific timbre of her violin.
Now, it is worth remembering that since this application is driven by artificial intelligence and neural networks, it behaves much like a human listener: it makes the best decision it can based on its experience and instructions, rather than returning something that is “objectively correct” – and we are constantly checking its conclusions to make sure they accurately reflect human interpretation.
Want to chat about AI? Get in touch with us here!
Musiio's resident Music Strategist and Music nerd. I ran an orchestra for 5 years, a virtual industry community of 7000 for 3 years, and currently run a small globally distributed creative audio team and compose commercially. I also like rocks and cats.