EarSketch - Sound Recommendations and Library Redesign

Faculty: 
Brian Magerko, Jason Freeman
Students: 
Dillon Weeks, Jason Smith, Mikhail Jacob

Recommendation systems are widespread in music distribution and discovery systems but far less common in music production workflows such as EarSketch, an online learning environment that engages learners in writing code by making music with computer programming. The EarSketch interface contains a sample library that learners can access through the sound browser window. The current implementation of the sound browser contains no automated system for sample discovery, such as a recommendation system. As a result, users have historically selected a small subset of the samples at very high rates, leading to lower compositional diversity. In this paper, we propose a recommendation system for the EarSketch sound library that combines collaborative filtering over user history with relevant audio features to rank and display samples. The system is designed to complement the user's currently selected sounds, providing useful and varied suggestions that draw on a larger share of the sounds available in EarSketch.
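As a rough illustration of this blended approach, the sketch below is not the EarSketch implementation; the function name, the co-usage matrix, the feature matrix, and the weighting parameter alpha are all assumptions made for illustration. It scores candidate samples by combining a collaborative-filtering signal from how often sounds co-occur in past user scripts with audio-feature similarity to the learner's currently selected sounds.

import numpy as np

def recommend(selected_ids, ids, co_usage, features, top_k=5, alpha=0.5):
    """Blend a collaborative-filtering signal with audio-feature similarity.

    selected_ids : sample names currently used in the learner's script
    ids          : list of all sample names, length n
    co_usage     : (n, n) array counting how often two samples co-occur
                   in past user scripts (collaborative signal)
    features     : (n, d) array of audio features per sample
    """
    sel = [ids.index(s) for s in selected_ids]

    # Collaborative score: average co-usage with the selected samples.
    collab = co_usage[:, sel].mean(axis=1)

    # Audio score: cosine similarity to the mean feature vector of the selection.
    target = features[sel].mean(axis=0)
    norms = np.linalg.norm(features, axis=1) * np.linalg.norm(target) + 1e-9
    audio = features @ target / norms

    # Weighted blend; exclude sounds already in the script.
    score = alpha * collab + (1 - alpha) * audio
    score[sel] = -np.inf
    best = np.argsort(score)[::-1][:top_k]
    return [ids[i] for i in best]

In practice, the balance between the collaborative and audio-feature signals could be tuned toward dissimilarity to encourage varied suggestions rather than near-duplicates of the current selection.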

Lab: 
Expressive Machinery Lab
Director: 
Brian Magerko
Faculty: 
Jason Freeman, Duri Long
Students: 
Takeria Blunt, Erin Truesdell, Manoj Deshpande, Sarah Mathew, Atefeh Mahdavi

The Expressive Machinery Lab (formerly ADAM Lab) explores the intersection between cognition, creativity, and computation through the study of creative human endeavors and by building digital media artifacts that represent our findings. Applications of our findings range from AI-based digital performance to interactive narrative experiences to educational media design and development.