As a scientist by day, and pianist … well, also by day, I have come to appreciate the creative energies engendered by each realm, the consequential similarities that emerge, and how their crosstalk influences my own scientific thinking. Each forces one to think creatively, generating new knowledge and approaches within science, and new forms of expression and genres in music. Some of the best scientists, in fact, were also well-versed in music or other arts, crediting their scientific aptitude to their musical foundation. Albert Einstein and Max Planck would even play music together when not doing science, and Nobel Chemistry Laureate Frances Arnold remarked in her Nobel lecture, “the code of life is like Beethoven’s Symphony – it’s intricate, it’s beautiful. But we don’t know how to write like that.” These days, the barriers between music and science are breaking down at a rapid rate, particularly between music and computer science, thanks to the growing popularity of machine learning.
Machine learning, in a nutshell, is a sub-discipline of computer science that enables computers to make decisions on new information or predict future events based on data they have already seen. For instance, when you upload a picture of you and your friends to Facebook, it can automatically pick out the faces in that photo, and even identify some of your friends, giving you the option to tag them, even though it may have never seen that photo before. This is machine learning in action. Over the past decade, with the rapid development of new sequencing technologies that generate large data sets, such as single-cell RNA sequencing, the demand for novel machine learning methods tailored to biology has fueled an unprecedented boom in my field of computational biology. Machine learning allows us to compress high-dimensional single-cell datasets (i.e., multiple measurements per cell) into a 2D projection for easy visualization, classify and group cells, identify scores of genetic mutations implicated in a disease, and so forth. However, a lesser-known application of machine learning lies in music.
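To make the dimensionality-reduction idea concrete, here is a minimal sketch using principal component analysis (PCA) on a toy matrix of cells-by-genes measurements. The data are randomly generated for illustration, and real single-cell pipelines typically use more sophisticated methods such as t-SNE or UMAP; this just shows what "compressing to 2D" means mathematically.

```python
import numpy as np

def pca_2d(X):
    """Project each row of X (one cell's gene measurements) onto its top 2 principal components."""
    Xc = X - X.mean(axis=0)              # center each gene's measurements
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                 # coordinates of each cell in the 2D plane

rng = np.random.default_rng(0)
cells = rng.normal(size=(100, 500))      # toy data: 100 cells, 500 genes each
coords = pca_2d(cells)
print(coords.shape)                      # each cell is now a point in 2D, ready to plot
```

Each of the 100 cells, originally described by 500 numbers, is reduced to a single (x, y) point that can be drawn on a scatter plot, where similar cells tend to land near one another.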
One example where machine learning is already influencing today’s music is how you find new songs to listen to. If you use Spotify, you may notice how it frequently gives you a curated list of new songs and albums it thinks you may enjoy based on your listening history. Machine learning drives these suggestions using multiple methods, including natural language processing, convolutional neural networks, and collaborative filtering. Natural language processing (NLP) enables an algorithm to understand speech and text in real time (think Google Translate). Spotify uses NLP to surf the web for music-related articles, ascribing keywords and language to songs and artists. From this information, it can suggest similar songs the listener in question may also enjoy. Convolutional neural networks (CNNs) are commonly used in image processing and facial recognition, and are also employed to learn a listener’s preferences and recommend similar songs. In brief, CNNs are a type of neural network inspired by how the networks of neurons in our brains process information; they take an image as input and extract features from it. For example, a CNN can identify how many cats are in a picture and group them together. Since we don’t “see” music, instead of taking in images as input, Spotify’s CNNs take in audio files of songs, translating the sounds we hear into numerical features, quantifying aspects such as beats per minute, major/minor key, pitch, and so forth. It uses this information to find songs with similar qualities to recommend to listeners. The final technique Spotify uses in its weekly music prescriptions is collaborative filtering. Instead of looking solely at the music the listener is consuming, it combs through the playlists of users with similar interests, compares them with the listener’s own, and makes recommendations accordingly.
Together, these algorithms enable Spotify to make powerful recommendations for what you should consider adding to your musical repertoire.
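Of the three techniques above, collaborative filtering is the simplest to sketch in code. Below is a deliberately tiny, hypothetical version: a play-count matrix for four listeners and four songs, a cosine similarity measure between listeners, and a function that recommends the unheard song best liked by the most similar other listener. Real systems work at vastly larger scale and use matrix factorization rather than direct neighbor lookup, so treat this as an illustration of the idea only.

```python
import numpy as np

# Toy play-count matrix: rows = listeners, columns = songs (hypothetical data).
plays = np.array([
    [5, 3, 0, 0],   # listener 0
    [4, 0, 0, 1],   # listener 1
    [0, 0, 4, 5],   # listener 2
    [5, 4, 2, 0],   # listener 3 -- tastes closely resemble listener 0's
], dtype=float)

def cosine(u, v):
    """Similarity of two listening profiles, ignoring overall volume of listening."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def recommend(listener, plays):
    """Suggest the unheard song most played by the most similar other listener."""
    sims = [cosine(plays[listener], plays[other]) if other != listener else -1.0
            for other in range(len(plays))]
    neighbor = int(np.argmax(sims))                  # most similar other listener
    unheard = plays[listener] == 0                   # songs this listener hasn't played
    scores = np.where(unheard, plays[neighbor], -1)  # only consider unheard songs
    return int(np.argmax(scores))

print(recommend(0, plays))  # listener 3 is most similar, so song 2 is suggested
```

Listener 0 has never played songs 2 or 3; their closest match, listener 3, has played song 2, so that is what gets recommended.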
In addition to recommending music, machine learning can take it a step further and enable computers to compose music on their own. Google launched an open-source (meaning the code is publicly available for anyone to use and modify) research project called Magenta, which allows musicians to compose in a user-friendly fashion using an instrument called NSynth (Neural Synthesizer) Super. This synthesizer leverages deep neural networks to replicate the sound of an instrument, rather than individual notes, and generate new sounds, and can easily interface with a keyboard. NSynth is just one of many machine learning models that generate music. Another example is Music Transformer, which generates long pieces of music using a neural network. From a purely musical perspective, composing long, euphonious pieces can pose challenges due to the inherent structure of music, from motifs (common themes persisting throughout the piece, not like the DNA motifs in biology) and key changes (e.g., C major to F major), to repeats of sections and musical timing. Music Transformer addresses this using an autoregressive algorithm, which predicts future behavior based solely on past behavior (as in weather forecasting). In this case, it predicts novel sounds in a piece based on previous sounds, essentially improvising throughout the process. As an example, its developers demonstrate the model on the famous piece “Clair de Lune” by French composer Claude Debussy, generating a novel continuation of its iconic opening from just a few measures of the original beginning.
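The autoregressive idea behind Music Transformer can be illustrated at a much smaller scale. The sketch below uses a simple bigram model: it counts which note tends to follow each note in a short, made-up melody (not actual Debussy), then extends the melody one note at a time, with each new note depending on the one before it. Music Transformer does this with neural attention over long contexts rather than simple counts, but the generate-the-next-note-from-what-came-before loop is the same.

```python
import random
from collections import defaultdict, Counter

# Toy melody as note names (hypothetical input, not an actual score).
melody = "C E G E C E G E F A C A F A C A".split()

# "Train": count which note tends to follow each note (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(melody, melody[1:]):
    following[prev][nxt] += 1

def continue_melody(seed, length, rng):
    """Autoregressively extend a melody: each new note is sampled given only the last one."""
    notes = [seed]
    for _ in range(length):
        choices = following[notes[-1]]
        notes.append(rng.choices(list(choices), weights=list(choices.values()))[0])
    return notes

rng = random.Random(0)
print(" ".join(continue_melody("C", 8, rng)))  # a 9-note improvisation starting from C
```

Because each note is sampled from what historically followed the previous note, the output stays in the style of the training melody while never being an exact copy, which is the essence of machine improvisation.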
When the camera first came out and launched the art of photography, it threatened to render painting obsolete, yet the timeless art persists. I foresee a similar future for this marriage of machine learning and music, generating a new age of creativity and highlighting how the arts and sciences do more to complement than contrast one another. There have already been major steps in this direction, with Google collaborating with the dance-pop trio YACHT to produce an album on which each song and its lyrics were composed using machine learning. As a scientist and musician, I’m excited for what the rest of this century holds in store for the future of both music and machine learning.
Peer edited by Gabrielle Dardis