When Jessie J covered the famous Whitney Houston tune on the popular Chinese show “The Singer”, viewers were blown away by her sheer craft as a musician and by the respect her rendition paid to the original recording.

The arrangement was also fairly faithful; the key difference is that the original is a studio recording and the cover was performed live - leading to an audibly different mix, with certain instruments more present and others less so.

Let’s take a look at what Musiio’s Tag demo has to say about this!

[Tag demo results: Whitney Houston (original) vs Jessie J (live cover)]

Genre

Both tracks came back with closely related results - Early Soul, Soul, and Gospel are highly similar styles in that they all originate from Negro Spirituals. Early in the African American musical tradition, styles diverged into sacred and secular branches, but their musical elements - instrumentation, harmony, rhythm, and so on - are very similar if we set aside the religious context.

This difference in the genre analysis may have to do with the live versus in-studio mix, since many recorded gospel tracks (example) are derived from live performances - note that the linked example gospel track also features cheering and clapping in the same way that Jessie J’s performance does. This points to the importance of precise mixing, with a knowledge of genre conventions, in order to sit squarely within a genre.

Energy

Both tracks received the exact same score in Energy - unsurprising given how faithful the interpretation was in terms of arrangement and tempo. (Note that 154 is exactly double 77 - the AI returned two different BPM values because it heard the quarter note as the beat in Whitney’s version and the half note as the beat in Jessie J’s version.)
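
To make that octave relationship concrete, here is a minimal sketch of how a beat tracker can lock onto different metrical levels of the same song, and how the two BPM values can be treated as equivalent. It uses librosa rather than Musiio’s own pipeline, and the file names are hypothetical placeholders.

```python
# Minimal sketch (librosa, not Musiio's pipeline): two estimates of the same song
# can differ by a factor of two when the tracker hears different metrical levels.
import numpy as np
import librosa

def estimate_tempo(path: str) -> float:
    """Return librosa's global tempo estimate for an audio file, in BPM."""
    y, sr = librosa.load(path, mono=True)
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)
    return float(np.atleast_1d(tempo)[0])  # tempo may be a scalar or a 1-element array

def equivalent_tempi(bpm_a: float, bpm_b: float, tol: float = 0.05) -> bool:
    """Treat two tempi as the 'same' if one is roughly 0.5x, 1x, or 2x the other,
    i.e. one estimate locked onto the quarter note and the other onto the half note."""
    return any(abs(bpm_a - bpm_b * f) / (bpm_b * f) <= tol for f in (0.5, 1.0, 2.0))

# Hypothetical file names for the two recordings discussed above.
bpm_whitney = estimate_tempo("whitney_studio.wav")  # e.g. ~154 BPM (quarter-note pulse)
bpm_jessie = estimate_tempo("jessie_live.wav")      # e.g. ~77 BPM (half-note pulse)
print(equivalent_tempi(bpm_whitney, bpm_jessie))    # True, since 154 = 2 x 77
```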

Mood

Jessie J’s version returned very similar Moods, scoring a close 23% Relaxed compared to Whitney’s 24%. The key difference was in Romantic: although both scored highly for that Mood, Whitney scored 100% while Jessie J scored 61%. This may have been due to the slightly increased “showiness” of Jessie J’s performance (a good decision for a live show).

Emotion

Both tracks scored exactly the same for Emotion - a very subtle “Positive”.

A Caveat about Female vs Male Vocal

This is a tag that I’m personally always conflicted about as a musician. Having worked with Countertenors and male Sopranos in opera settings, and having watched multiple performances by female singers who are clearly Tenors, I am acutely aware that a person’s gender is not necessarily a reliable indicator of voice type.

However, consider what the AI “means” here - it has listened to well over a century of recorded music and bases its assessments about gender on the vocal characteristics most common among singers known to be male versus singers known to be female across the history of recorded music.

For the purposes of this discussion then, let us take this tag with a grain of salt, understanding that it is more about vocal character and what the voice “sounds like” relative to most of recorded music history than whether a vocalist is actually male or female.

Female vs Male Vocal

Here, we can see that Whitney’s version “sounds like” it has male and female vocals, while Jessie J’s “sounds like” it has just female vocals. They are both singing the same song in the same key and in the same range, so this may point to the notable timbral differences that allow human listeners to recognize that one singer is Whitney and the other is Jessie J. The following is an assumption: perhaps Whitney’s voice contains relatively more rich, warm, low-mid harmonic content, whereas Jessie J’s may (in the strict audio sense) be “clearer” and more “direct” - a rough way to probe this is sketched below.
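
One rough way to probe that assumption - a sketch under stated assumptions, not how Musiio computes its tags - is to compare the average spectral centroid (a common proxy for “brightness”) and the share of energy below roughly 1 kHz (a crude proxy for “warm” low-mid content) across the two vocal performances. The file names are hypothetical, and ideally this would be run on isolated vocal stems rather than the full mixes.

```python
# Hedged illustration: compare the "brightness" (spectral centroid) and low-mid
# energy share of two vocal recordings. Uses librosa; file names are placeholders.
import numpy as np
import librosa

def vocal_brightness_profile(path: str) -> dict:
    y, sr = librosa.load(path, mono=True)
    # Mean spectral centroid in Hz: higher values suggest a "clearer", more "direct" timbre.
    centroid_hz = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())
    # Fraction of magnitude-spectrogram energy below 1 kHz: a crude proxy for
    # rich/warm low-mid harmonic content.
    S = np.abs(librosa.stft(y))
    freqs = librosa.fft_frequencies(sr=sr)  # matches the default STFT n_fft=2048
    low_mid_share = float(S[freqs < 1000].sum() / S.sum())
    return {"centroid_hz": centroid_hz, "low_mid_share": low_mid_share}

for name, path in [("Whitney", "whitney_vocal_stem.wav"), ("Jessie J", "jessie_vocal_stem.wav")]:
    print(name, vocal_brightness_profile(path))
```

If the assumption holds, Whitney’s stem would show a lower average centroid and a higher low-mid share than Jessie J’s.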

Conclusion

Our priority in developing this technology is to help our fellow human beings come to a better understanding of the music they love, listen to, make, and sell. In order to do that, we have to make sure that the results it returns and the conclusions it reaches are consistent with what we as human beings understand about music - its technical, contextual, historical, and emotional aspects.

Want to chat about AI? Get in touch with us here!
