Emotion-driven Motivic Development using Diffusion Models
There is a more recent version of this item available.
Mo, Ronald K. (2023) Emotion-driven Motivic Development using Diffusion Models. In: CHIME Annual One-day Music and HCI Workshop, 04 Dec 2023, The Open University, Milton Keynes, UK. (Submitted)
Item Type: Conference or Workshop Item (Other)
Abstract
Denoising diffusion probabilistic models, or diffusion models, have been used successfully to generate images, audio, and music. This work investigates the potential of employing diffusion models to develop a motif composed by a human composer into music that arouses specific emotions. To achieve this, a dataset of melodies and their corresponding emotion labels is constructed for training the diffusion model. The model is conditioned on the user-composed motif and a label indicating the desired emotion category, which opens up an opportunity for human composers to collaborate with computer technology in music composition.
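The record does not describe the implementation. The following is a minimal, hypothetical PyTorch sketch of how a denoising diffusion model could be conditioned on a user-composed motif and an emotion label, as the abstract describes. The piano-roll representation, number of emotion categories, GRU motif encoder, network sizes, and DDPM noise schedule are all illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: motif- and emotion-conditioned denoising network (not the paper's code).
import torch
import torch.nn as nn

NUM_EMOTIONS = 4   # assumed number of emotion categories
PITCH_DIM = 128    # assumed piano-roll pitch dimension
T_STEPS = 1000     # diffusion timesteps

class ConditionalDenoiser(nn.Module):
    """Predicts the noise added to a piano-roll melody, conditioned on a
    motif encoding, an emotion label, and the diffusion timestep."""
    def __init__(self, hidden=256):
        super().__init__()
        self.motif_enc = nn.GRU(PITCH_DIM, hidden, batch_first=True)
        self.emotion_emb = nn.Embedding(NUM_EMOTIONS, hidden)
        self.time_emb = nn.Embedding(T_STEPS, hidden)
        self.net = nn.Sequential(
            nn.Linear(PITCH_DIM + 3 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, PITCH_DIM),
        )

    def forward(self, x_t, motif, emotion, t):
        # x_t: (B, L, PITCH_DIM) noisy melody at step t
        # motif: (B, M, PITCH_DIM) user-composed motif; emotion, t: (B,) integer labels
        _, h = self.motif_enc(motif)                     # (1, B, hidden)
        cond = torch.cat([h.squeeze(0),
                          self.emotion_emb(emotion),
                          self.time_emb(t)], dim=-1)     # (B, 3*hidden)
        cond = cond.unsqueeze(1).expand(-1, x_t.size(1), -1)
        return self.net(torch.cat([x_t, cond], dim=-1))  # predicted noise

# Standard DDPM training objective: add noise at a random timestep, regress the noise.
betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, melody, motif, emotion):
    t = torch.randint(0, T_STEPS, (melody.size(0),))
    noise = torch.randn_like(melody)
    a = alphas_bar[t].view(-1, 1, 1)
    x_t = a.sqrt() * melody + (1 - a).sqrt() * noise
    return nn.functional.mse_loss(model(x_t, motif, emotion, t), noise)
```

At sampling time, the same conditioning inputs (motif and desired emotion label) would be held fixed while the melody is iteratively denoised from random noise; this is the generic conditional-DDPM recipe, shown here only to make the abstract's conditioning idea concrete.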
More Information
Depositing User: Ronald Mo
Identifiers
Item ID: 17021
URI: http://sure.sunderland.ac.uk/id/eprint/17021
Official URL: https://www.chime.ac.uk/chime-annual-workshop
Catalogue record
Date Deposited: 21 Nov 2023 10:39
Last Modified: 21 Nov 2023 10:39
Author: Ronald K. Mo
University Divisions
Faculty of Technology > School of Computer Science
Subjects
Computing > Artificial Intelligence
Computing > Human-Computer Interaction
Performing Arts > Music
Available Versions of this Item
- Emotion-driven Motivic Development using Diffusion Models. (deposited 21 Nov 2023 10:39) [Currently Displayed]