Artificial Intelligence: A New Genre of Music
AI is changing the way music is heard and made
It is raining outside. You are in your bed, cuddled up with your favorite book and listening to your favorite music.
There is a good chance that the music you are listening to was recommended by your music streaming application, and that it perfectly suits the weather outside and your current activity of reading.
While music tech companies such as Tencent-backed Joox, QQ Music, and KKBox seem to have different value propositions, in terms of the regional music offered to listeners, their monetization models, and so on, they all sing the same song today.
And that is the song of AI.
Artificial Intelligence has gained wide popularity in the music tech industry in recent years. The reasons behind this rise in the uptake of AI in core music streaming technology range from the obvious to the not-so-obvious:
1. AI Augments Listeners’ Experiences Through Personalized Playlists
Each artist had their own personality, which they presented through their music: some loved the jazzy nature of Louis Armstrong, others melted every time they heard Elvis Presley sing one of his love songs. Some headbanged to The Beatles, while others swayed to The Doors.
Music streaming app companies like Joox, QQ Music and KuGou have been using AI to analyze the preferences of their listeners and recommend specially curated playlists for personalized customer experience.
By using AI-based “recommendation engines”, music streaming applications analyze each listener’s listening history and recommend new songs.
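To make the idea concrete, here is a minimal, hypothetical sketch of item-based collaborative filtering, one common family of recommendation-engine techniques. The songs, listeners, and play counts below are invented for illustration and do not come from any real service:

```python
from math import sqrt

# Toy play-count table: rows = listeners, columns = songs (all data hypothetical).
SONGS = ["What a Wonderful World", "Love Me Tender", "Hey Jude", "Light My Fire"]
PLAYS = [
    [12, 0, 3, 0],   # listener A: Armstrong and Beatles fan
    [10, 1, 4, 0],   # listener B: similar taste to A
    [0, 8, 0, 9],    # listener C: Presley and Doors fan
    [1, 7, 0, 11],   # listener D: similar taste to C
]

def column(j):
    """Play counts for song j across all listeners."""
    return [row[j] for row in PLAYS]

def cosine(a, b):
    """Cosine similarity between two play-count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)) + 1e-9)

def recommend(listener):
    """Return the unheard song whose play pattern best matches what the listener already plays."""
    history = PLAYS[listener]
    heard = [j for j, n in enumerate(history) if n > 0]
    unheard = [j for j, n in enumerate(history) if n == 0]
    best = max(unheard, key=lambda j: max(cosine(column(j), column(k)) for k in heard))
    return SONGS[best]

print(recommend(0))  # listener A is nudged toward a song their "taste neighbors" play
```

Real engines work on millions of listeners and combine many more signals (skips, likes, time of day), but the core intuition is the same: songs played by similar people get surfaced to you.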
While AI is used to provide recommendations today, in my opinion there is a strong possibility that the music streaming industry will move toward features that read body vitals such as heart rate, stress levels, breathing rate, perhaps even neurological signals, from wearable devices. Such a feature would offer biometric- and physiology-based music.
Imagine you are traveling in a crowded metro. The rush to reach the office and the press of people make you anxious. The tiny wearable over your ear may detect your anxiety and offer to play music from your favorite artist, but in a softer, calmer arrangement.
A feedback mechanism may autonomously track how this softer melody affects your vitals and refine the music further to deliver more curative results.
There is a possibility that AI will be able to vary the song’s melody, genre, tonal quality, harmonic rhythm, etc. to suit your body vitals to try and essentially “heal” you.
This may essentially be the next wave of personalizing music through AI.
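Purely as a thought experiment, such a biometric feature might boil down to a mapping from vitals to playback parameters. The sketch below invents a toy `calming_playback` function with entirely illustrative thresholds; it is not medical guidance, and no streaming service is known to work this way:

```python
def calming_playback(heart_rate_bpm, resting_bpm=65):
    """Speculative sketch: map a wearable's heart-rate reading to gentler
    playback settings. All numbers are illustrative, not medical advice.
    Returns a (tempo_multiplier, volume_level) pair for the player."""
    # How far above the resting rate is the listener right now?
    stress = max(0.0, (heart_rate_bpm - resting_bpm) / resting_bpm)
    # The more elevated the heart rate, the slower and softer the mix,
    # clamped so the song remains recognizable.
    tempo = max(0.7, 1.0 - 0.3 * stress)    # never below 70% of original tempo
    volume = max(0.4, 1.0 - 0.5 * stress)   # never below 40% volume
    return round(tempo, 2), round(volume, 2)

print(calming_playback(65))    # calm listener: playback unchanged
print(calming_playback(110))   # anxious metro ride: slower, softer mix
```

The closed loop the article imagines would then feed the next heart-rate reading back into this function, nudging the parameters until the vitals settle.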
2. AI Breaks the Paradigm of Infinite Choices
To listen to music at home, listeners once had to purchase 12-inch vinyl records, which could hold roughly 22 minutes of music per side. Records were expensive, and listeners only knew and consumed music created by their local artists.
With the mass commercialization of radio, artists began creating shorter songs of three to seven minutes that were easier to broadcast on air.
This was followed by an era of albums being recorded and sold on CDs and DVDs.
The democratization of the internet brought a turning point for the music industry: in the early 2000s, Apple commercialized its pay-per-song model, making it possible for listeners to download virtually any song from any artist.
Music streaming apps today go one step further: instead of paying per song, listeners pay per month or per year for unlimited songs from unlimited artists.
Roughly 20,000 new songs are uploaded to these platforms every day.
The real dent AI has made is through “filtering engines”, which scan thousands of newly uploaded songs to build playlists and recommendations targeted at each individual, eliminating the need for listeners to browse through thousands of songs to pick out favorites.
Moreover, AI filtering engines do not restrict personalization to single genres. Rather, they give a whole new definition to the word “genre” by generating playlists of supposedly unrelated songs that an individual nonetheless considers “good music”.
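A toy sketch of that filtering idea: represent every new upload as a small vector of audio features, average a listener’s favorites into a taste profile, and keep the uploads closest to that profile regardless of genre label. All song names and feature values below are made up for illustration:

```python
# Speculative sketch of a "filtering engine". Each song is a tuple of
# hypothetical audio features: (energy, acousticness, tempo / 200).
new_uploads = {
    "jazz_track_01":  (0.30, 0.90, 0.45),
    "metal_track_07": (0.95, 0.05, 0.85),
    "folk_track_12":  (0.25, 0.85, 0.40),
    "edm_track_33":   (0.90, 0.10, 0.70),
}

def taste_profile(liked_songs):
    """Average the feature vectors of songs the listener marked as favorites."""
    n = len(liked_songs)
    return tuple(sum(song[i] for song in liked_songs) / n for i in range(3))

def daily_mix(profile, uploads, size=2):
    """Pick the `size` uploads closest to the profile, whatever their genre."""
    def dist(features):
        # Squared Euclidean distance in feature space.
        return sum((a - b) ** 2 for a, b in zip(profile, features))
    return sorted(uploads, key=lambda name: dist(uploads[name]))[:size]

# A listener who favorites mellow, acoustic tracks ends up with a
# cross-genre mix of jazz and folk, with metal and EDM filtered out.
profile = taste_profile([(0.28, 0.88, 0.42), (0.32, 0.80, 0.50)])
print(daily_mix(profile, new_uploads))
```

Note that the genre tags play no role at all in the ranking; only the audio features do, which is exactly why such playlists can mix “unrelated” genres that still feel coherent to one listener.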
At the very heart of the future personalization of music lies the way individual listeners will experience it. Through technologies like virtual reality, listeners will not just “listen” to music, but also virtually see their favorite musicians performing exclusively for them. Through wearable neuro-electrical stimulators, listeners will “feel” the music in their bones, muscles, and brain. They may even be able to feel the sentiments and emotions of the artist through these stimulations.
3. AI Affects the Creative Process of Artists and Musicians
Movies about musicians, bands, and artists have been loved by audiences around the world. Fans love to see the creative process of their role models: the way a revered artist draws inspiration for songs, the way lyrics strike them at odd, surreal hours, and the way they orchestrate music to finally produce that one song that fans everywhere, regardless of language or nationality, resonate with.
Alcohol, meditation, off-the-beaten-track retreats, drugs, love, people, protests, and poverty, among other things, often served as muses to these artists. I personally love and admire the creative process of the French musician Édith Piaf, whose songs brought joy, positivity, and hope to the French during the Second World War.
Today, the creative process is very data-driven.
Labels and musicians research people’s preferences through data sets shared by sources such as music streaming companies, to determine what kind of music sells and what does not, broken down by region, age group, religion, gender, occupation, and so on.
Another application of AI in music tech is audio mastering. An example is Landr, a Canadian tech company that helps musicians polish the sound quality of their music to a level on par with professional studios, at a fraction of the cost and time. Such applications analyze a song, scan a vast repository of similar tracks, and provide instant recommendations to improve the audio quality and add uniqueness and individual character to the track.
Continuing my earlier deliberation on biometric-driven music recommendations, AI may heavily impact the creative process of any musician. The future may demand that musicians have in-depth knowledge of neuroscience and psychology, along with the basics of AI, and use all of it to determine the kind of music they must create.
4. AI May Eventually Create Its Own Music
Many music tech veterans, like Prashan Agarwal, CEO of the Gaana app in India, predict that in the not-so-distant future, AI will create its own music.
Using “Generative Adversarial Networks”, or GANs (or other similar techniques), AI analyzes the beats, tempo, EQ, genre, and instrumentation a user prefers and generates new music created specifically for the “ear of the beholder”.
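For readers curious what the two halves of a GAN actually look like, here is a deliberately tiny, untrained sketch: a generator that maps random noise to a sequence of MIDI-like pitches, and a discriminator that outputs a probability-like score for how “human-made” a melody seems. The sizes, weights, and pitch range are arbitrary illustrations; a real system would train both networks adversarially on large music corpora:

```python
import numpy as np

rng = np.random.default_rng(0)

NOISE_DIM, MELODY_LEN = 8, 16   # toy sizes: 16-note melodies from 8-dim noise

# Untrained toy parameters; a real GAN would learn these adversarially,
# with the generator trying to fool the discriminator and vice versa.
G_weights = rng.normal(0, 0.5, (NOISE_DIM, MELODY_LEN))
D_weights = rng.normal(0, 0.5, (MELODY_LEN, 1))

def generator(noise):
    """Map a noise vector to MIDI-like pitches in [48, 84] (roughly C3..C6)."""
    raw = np.tanh(noise @ G_weights)           # squash to (-1, 1)
    return np.round(66 + 18 * raw).astype(int) # rescale to a pitch range

def discriminator(melody):
    """Score in (0, 1): the (untrained) guess that a melody is human-made."""
    x = (melody - 66) / 18.0                   # undo the pitch scaling
    return float(1 / (1 + np.exp(-(x @ D_weights)[0])))  # sigmoid

melody = generator(rng.normal(size=NOISE_DIM))
score = discriminator(melody)
print(melody, score)
```

Training would alternate between updating the discriminator to tell real melodies from generated ones, and updating the generator to raise its score, until the generated melodies become hard to distinguish from human music.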
Imagine an era where every individual has their very own music, created just for them.
Whatever the future holds, one thing is for certain, and that is that the way music is heard, as well as the way it is made, is rapidly changing. The industry is grappling with the implications of AI, and there’s much left to chew over. Specifically:
If the future of music streaming is indeed biometric-dependent personalization, should the music-streaming app companies have the right to provide us “healing therapies” and try to cure us?
What will be the emotional impact of such music recommendation and consumption on our health?
Will forever-healing music help us feel happier, or make us even more dependent on technology to cure us?
The data generated by music streaming apps may be sold to third-party insurance companies and ad agencies, and may be misused to influence our spending habits. So what would the real price of music streaming be, per month?
If AI will indeed someday create music all by itself, will it own the copyrights to that music? What would the legal framework for this look like?
When will our governments discuss these privacy and copyright questions and define the legal frameworks to identify, restrict, and control AIs that create music?
Most importantly, when will it be the last time that all of us sit around a bonfire and sing the very same song we all know and love?
Anmol Saxena is the CEO and co-founder of Ashva Wearable Technologies Pvt Ltd, a young wearable MedTech startup based out of Bangalore. She has a year of corporate experience at Ford Motor Company as a Product Developer.
Anmol is passionate about nanoelectronics, data sciences, genetic engineering, and molecular biology and envisions bringing all these streams together to develop wearable technologies that would push human evolution forward.