Language-Guided Music Recommendation for Video via Prompt Analogies
CVPR 2023 - Highlight Paper

  • ♪ University of Illinois at Urbana-Champaign
  • ♫ Adobe Research
  • ♬ CIIRC, CTU in Prague

[Interactive demo: an input video is shown alongside music retrieved for different text queries, e.g., "Hard rock featuring electric guitar riff" and "Hip-hop beat with heavy synth bass", as well as a video-only query with no text.]

Abstract

We propose a method to recommend music for an input video while allowing a user to guide music selection with free-form natural language. A key challenge of this problem setting is that existing music video datasets provide the needed (video, music) training pairs, but lack text descriptions of the music. This work addresses this challenge with the following three contributions. First, we propose a text-synthesis approach that relies on an analogy-based prompting procedure to generate natural language music descriptions from a large-scale language model (BLOOM-176B) given pre-trained music tagger outputs and a small number of human text descriptions. Second, we use these synthesized music descriptions to train a new trimodal model, which fuses text and video input representations to query music samples. For training, we introduce a text dropout regularization mechanism which we show is critical to model performance. Our model design allows for the retrieved music audio to agree with the two input modalities by matching visual style depicted in the video and musical genre, mood, or instrumentation described in the natural language query. Third, to evaluate our approach, we collect a testing dataset for our problem by annotating a subset of 4k clips from the YT8M-MusicVideo dataset with natural language music descriptions. We show that our approach can match or exceed the performance of prior methods on video-to-music retrieval while significantly improving retrieval accuracy when using text guidance.

Video


Overview

Training Data Synthesis

We generate natural language music descriptions automatically with a large-scale language model (BLOOM-176B) using an analogy-based prompting procedure: the prompt pairs pre-trained music tagger outputs with a small number of human text descriptions as exemplars, and the model is then asked to describe a new clip from its tags alone.
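The analogy prompt can be pictured with a short sketch. This is a hypothetical illustration, not the authors' exact prompt template: tag lists from a music tagger are paired with human-written descriptions as exemplars, and the language model completes the description for a new tag list.

    # Hypothetical sketch of the analogy-style prompt construction; function
    # names and exemplar texts are illustrative, not the authors' exact prompt.

    def build_analogy_prompt(exemplars, query_tags):
        """Build a few-shot prompt mapping music-tagger tags to descriptions.

        exemplars:  list of (tags, human_description) pairs
        query_tags: tag list for the clip we want a description for
        """
        lines = []
        for tags, description in exemplars:
            lines.append("Tags: " + ", ".join(tags))
            lines.append("Description: " + description)
        lines.append("Tags: " + ", ".join(query_tags))
        lines.append("Description:")  # the language model completes this line
        return "\n".join(lines)

    exemplars = [
        (["rock", "electric guitar", "energetic"],
         "An energetic rock track driven by distorted electric guitar."),
        (["piano", "calm", "classical"],
         "A calm, classical solo piano piece."),
    ]
    prompt = build_analogy_prompt(exemplars, ["hip hop", "synth bass", "dark"])
    # `prompt` is then sent to the language model (e.g. BLOOM-176B), and its
    # completion is used as the synthesized music description for the clip.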


Tri-modal ViML Model

We train a tri-modal model, which we call ViML for Video to Music with Language. The model fuses text and video input representations to query music samples. We rely on per-modality Transformer encoders to encode sequences of features from base encoders (CLIP and DeepSim) and a fusion model to combine the video and text encodings. For training, we introduce a text dropout regularization mechanism which we show is critical to model performance.
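Below is a minimal sketch of one way the text dropout regularization could be implemented; the exact mechanism (e.g., whether the dropped text is replaced by a learned placeholder or simply zeroed out) and the drop probability are assumptions, not the released implementation.

    import torch

    # Minimal sketch of text dropout (assumed form): during training, the text
    # encoding is replaced by a learned "no-text" placeholder with probability
    # p_drop, so the fused model also learns to retrieve music from video alone.

    class TextDropout(torch.nn.Module):
        def __init__(self, embed_dim, p_drop=0.5):
            super().__init__()
            self.p_drop = p_drop
            # Learned placeholder used whenever the text input is dropped.
            self.null_text = torch.nn.Parameter(torch.zeros(embed_dim))

        def forward(self, text_emb):
            # text_emb: (batch, embed_dim) output of the text encoder
            if not self.training:
                return text_emb
            drop = torch.rand(text_emb.shape[0], 1, device=text_emb.device) < self.p_drop
            return torch.where(drop, self.null_text.expand_as(text_emb), text_emb)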

[Model diagram]
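At retrieval time, the fused video-and-text query embedding is compared against embeddings of the music database. The sketch below assumes cosine similarity with top-k selection and uses illustrative names; it is not the released code.

    import torch
    import torch.nn.functional as F

    def retrieve_music(query_emb, music_embs, k=5):
        # query_emb:  (d,) fused video+text query embedding
        # music_embs: (num_tracks, d) precomputed music embeddings
        sims = F.cosine_similarity(query_emb.unsqueeze(0), music_embs, dim=-1)
        return torch.topk(sims, k=k).indices  # indices of the k best-matching tracks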

YouTube-MTC Dataset

For evaluation, we collect a dataset by annotating 4k clips from music videos in the YouTube8M dataset with natural language music descriptions. These text descriptions are collected by providing annotators with audio only (no video), and contain diverse descriptions of different elements of the music like genre, mood, instrumentation, and lyrics.

Example source videos and their music description annotations:

  • "A faint, simple acoustic piece of singing by a female vocalist with an acoustic guitar with a fast-paced strumming pattern in a closed room recorded live. great for singing along."

  • "Instrumental track featuring an ambient pad and bell-like sounds. Seems to be a film score."

  • "Hip-hop track with a dark synth pad with male aggressive rapping along with a chipmunk voice."

Citation

Music/Video Credits

Ronstik, Snowboarder, Adobe Stock Extended license

Project 5am, “Lila”, CC BY-NC-ND 4.0, used with permission

Sick To The Back Teeth, “Irradiated Dub”, CC BY-NC-SA 3.0, used with permission

Room For A Ghost, “No.02”, CC BY 3.0

Website template credit: Michaël Gharbi and Jon Barron