Festival FOMO is real, even if you’re there on the ground. You simply cannot get to all the best shows, even if you’re the most plugged-in person in the music industry.
But what if there was a way of curating your music festival experience based on the kind of music you like? What if you could see the genre, energy, or gender mix of a line-up at a glance? Well, Musiio’s AI-powered tagging system might be able to help.
For the uninitiated, Musiio Tag listens to music at scale – up to five million tracks per day – and automatically tags it with musical attributes, derived from the audio alone. These tags include genre, mood, tempo, energy, key, emotion, vocal gender, etc. And this data can be used in all sorts of ways. Over 70 music companies currently use it to help navigate vast music libraries.
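To make this concrete, here is a rough sketch of the kind of per-track tag record a system like this might produce. The field names and values below are invented for illustration; they are not Musiio's actual output schema.

```python
# Hypothetical tag record for a single track, as an audio-tagging
# system might return it. All fields and values are illustrative.
track_tags = {
    "genre": ["Indie", "Folk"],
    "mood": "Melancholic",
    "tempo_bpm": 112,
    "energy": "Medium",
    "key": "A minor",
    "vocal": "Female",
}

def summarise(tags: dict) -> str:
    """Render a one-line human-readable summary of a tag record."""
    genres = "/".join(tags["genre"])
    return f"{genres} | {tags['mood']} | {tags['tempo_bpm']} BPM | {tags['energy']} energy"

print(summarise(track_tags))
```

Structured records like this are what make the downstream uses possible: once every track is tagged the same way, libraries become searchable and comparable at scale.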
But what if we could use the tech to help festival-goers find the music they’re after? In this blog, we’ll see what information punters might get access to if a recommendation system like this were rolled out at an event like SXSW.
Musiio is proudly sponsoring two stages at SXSW 2022 – the Fierce Panda and End of the Trail official showcases – so we looked at the bands performing. We analysed their last three releases to build a picture of their current sound, and fed this audio into our tagging AI.
While it would be possible to generate a visualisation for each band, one practical application at a multi-venue festival such as SXSW might be to show back-to-back comparisons of stages. Trying to decide which stage to check out next? Audiences could see head-to-head data visualisations.
We can see here that the stages at Seven Grand and Las Perlas have a very similar genre make-up, with a little more Folk and Indie on the Seven Grand stage. Meanwhile, there are more Soul and Indietronica influences on the Las Perlas stage.
Using the Musiio tagging AI, we can determine the musical moods, adding another dimension to help festival-goers decide where they might want to spend their time.
We can also determine the energy of tracks. This data helps punters choose where to head next, depending on how they're feeling. Looking for something more high-energy? The Las Perlas stage might be a better fit. Looking for something steadier? Seven Grand has what you seek.
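An energy-based "where next?" suggestion could be as simple as matching a listener's preferred energy level against each stage's average. The per-stage scores below are invented for this sketch:

```python
# Hypothetical average energy per stage on a 0-1 scale (values invented).
stage_energy = {"Seven Grand": 0.45, "Las Perlas": 0.78}

def recommend(preferred_energy: float) -> str:
    """Pick the stage whose average energy is closest to the listener's preference."""
    return min(stage_energy, key=lambda s: abs(stage_energy[s] - preferred_energy))

print(recommend(0.9))  # a high-energy listener -> "Las Perlas"
print(recommend(0.3))  # a mellower listener -> "Seven Grand"
```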
Our AI can also predict whether a vocal sounds male, female or mixed. These tags have an accuracy of between 90 and 99 per cent, so while we can make predictions, these sorts of nuanced data points may be best coming from the artists themselves.
The sorts of visualisations possible with the AI-generated data could help festival-goers find more of what they like and help music programmers ensure they have the right balance of music on their stages.
Programming and personalisation
If we were to continue this methodology to cover the entire SXSW programme, we could build a massive musical picture that could offer insights into gender splits, genre make-up, moods and more. And for festival organisers, there’s an even greater benefit: programme personalisation.
If festival organisers were to collect the latest releases and DJ mixes from labels and artists, they could build an up-to-date sonic fingerprint for each artist.
Festival organisers could then build an onboarding process where attendees pick artists they're excited to see. This would inform a personalised programme comprising acts with similar (and not so similar) sonic identities. It would be like that friend who always has the best music tips, but totally tailored to you.
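One simple way to sketch that personalisation step: represent each artist's sonic fingerprint as a vector of tag-derived scores, then rank the rest of the line-up by cosine similarity to an act the attendee picked. The artist names and scores below are made up; a real system would derive the vectors from the tagged audio.

```python
import math

# Invented fingerprint vectors, e.g. (energy, acousticness, danceability).
fingerprints = {
    "Artist A": [0.8, 0.2, 0.7],
    "Artist B": [0.7, 0.3, 0.6],
    "Artist C": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def similar_to(picked, k=2):
    """Rank the other artists by sonic similarity to the picked favourite."""
    ranked = sorted(
        (name for name in fingerprints if name != picked),
        key=lambda name: cosine(fingerprints[picked], fingerprints[name]),
        reverse=True,
    )
    return ranked[:k]

print(similar_to("Artist A"))  # Artist B is the closest sonic match
```

The "not so similar" suggestions mentioned above could come from the bottom of the same ranking, nudging attendees towards acts outside their usual taste.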
Ultimately, an AI-driven approach could create a stronger relationship between festivals, artists and festival-goers, with attendees beginning their festival experience long before making their way to the site.