Emotion-driven Motivic Development using Diffusion Models
Mo, Ronald K. (2023) Emotion-driven Motivic Development using Diffusion Models. In: CHIME Annual One-day Music and HCI Workshop, 04 Dec 2023, The Open University, Milton Keynes, UK.
Item Type: Conference or Workshop Item (Speech)
Abstract
Denoising diffusion probabilistic models, or diffusion models, have been used successfully to generate images, audio, and music. This work investigates the potential of employing diffusion models to develop a motif composed by a human composer so that the result arouses a specific emotion. To achieve this, a dataset of melodies and their corresponding emotion labels is constructed for training the diffusion model. The model is conditioned on the user-provided motif and a label indicating the desired emotion category, which opens up an opportunity for human composers to collaborate with computer technology in music composition.
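The abstract gives no implementation details, but the conditioning scheme it describes (a diffusion model driven by a user-provided motif plus an emotion label) could be set up along the lines of the minimal sketch below. This is an illustrative PyTorch-style example, not the author's code: the melody representation, module names, shapes, and hyperparameters are all assumptions made for the sketch.

```python
# Minimal sketch (not the paper's implementation): one DDPM training step for
# melody generation conditioned on a motif and an emotion label.
# All shapes, module names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                   # number of diffusion steps
SEQ_LEN, PITCH_DIM = 64, 128               # hypothetical piano-roll melody shape
N_EMOTIONS = 4                             # assumed number of emotion categories

betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, 0)  # cumulative signal-retention factor

class ConditionalDenoiser(nn.Module):
    """Predicts the added noise from the noisy melody, timestep, motif, and emotion."""
    def __init__(self, hidden=256):
        super().__init__()
        self.emotion_emb = nn.Embedding(N_EMOTIONS, hidden)
        self.motif_enc = nn.Linear(SEQ_LEN * PITCH_DIM, hidden)
        self.time_emb = nn.Embedding(T, hidden)
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN * PITCH_DIM + 3 * hidden, 1024),
            nn.SiLU(),
            nn.Linear(1024, SEQ_LEN * PITCH_DIM),
        )

    def forward(self, x_t, t, motif, emotion):
        # Concatenate the three conditioning signals: timestep, motif, emotion label.
        cond = torch.cat([
            self.time_emb(t),
            self.motif_enc(motif.flatten(1)),
            self.emotion_emb(emotion),
        ], dim=-1)
        out = self.net(torch.cat([x_t.flatten(1), cond], dim=-1))
        return out.view_as(x_t)

def training_step(model, melody, motif, emotion):
    """Standard DDPM objective: noise the melody at a random step, predict that noise."""
    t = torch.randint(0, T, (melody.size(0),))
    noise = torch.randn_like(melody)
    a = alpha_bar[t].view(-1, 1, 1)
    x_t = a.sqrt() * melody + (1 - a).sqrt() * noise   # forward (noising) process
    return F.mse_loss(model(x_t, t, motif, emotion), noise)
```

At inference time, the usual reverse process would start from Gaussian noise and denoise step by step while keeping the motif and the desired emotion label fixed, so the generated continuation stays anchored to the composer's material.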
PDF: CHIME+Abstract+-+Mo.pdf (Accepted Version, 142kB)
More Information
Depositing User: Ronald Mo
Identifiers
Item ID: 17102
URI: http://sure.sunderland.ac.uk/id/eprint/17102
Official URL: https://www.chime.ac.uk/chime-annual-workshop
Catalogue record
Date Deposited: 21 Dec 2023 07:36
Last Modified: 21 Dec 2023 07:36
Author: Ronald K. Mo
University Divisions
Faculty of Technology > School of Computer Science
Subjects
Computing > Artificial Intelligence
Computing > Human-Computer Interaction
Performing Arts > Music
Available Versions of this Item
- Emotion-driven Motivic Development using Diffusion Models. (deposited 21 Nov 2023 10:39)
  - Emotion-driven Motivic Development using Diffusion Models. (deposited 21 Dec 2023 07:36) [Currently Displayed]