This paper presents an investigation of convolutional neural networks as a means of generating human-plausible, goal-oriented music, specifically pop melodies. Deep neural networks were chosen as a focus because their training seems to mimic the way a person passively learns music throughout life. The raw dataset of MIDI files was acquired from 17,216 song clips by 4,825 artists in Hooktheory's TheoryTab Database. A custom dataset was created by encoding the MIDI files as sparse matrices and sliding a fixed window over each song to generate sequences. A novel approach within the domain of music generation was employed using a custom ‘skip-3’ softmax activation function, together with a ‘skip-3’ cross-entropy loss function. Current results from generating music from a seed with a fully connected network, a convolutional network, and a dilated convolutional network show some evidence of rhythmic and harmonic patterns, but lack melodic elements.
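The sliding-window sequence construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual preprocessing code: the function name, the window length, the hop size, and the piano-roll shape (time steps by pitch classes) are all assumptions made for the example.

```python
import numpy as np

def make_windows(piano_roll, window, hop=1):
    """Slide a fixed-length window along the time axis of a (hypothetical)
    piano-roll matrix of shape (time_steps, pitches), producing a stack of
    overlapping training sequences of shape (n_windows, window, pitches)."""
    time_steps = piano_roll.shape[0]
    starts = range(0, time_steps - window + 1, hop)
    return np.stack([piano_roll[i:i + window] for i in starts])

# Toy example: a 16-step song over 128 MIDI pitches, window of 8 steps.
song = np.zeros((16, 128))
sequences = make_windows(song, window=8)
# With hop=1 this yields 16 - 8 + 1 = 9 overlapping sequences.
```

In practice each window (minus its final time step) serves as the model input and the final step as the prediction target, which is the usual setup for seed-conditioned generation.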
Myers, Jessica, "Music Generation with Deep Neural Networks Using Flattened Multi-Channel Skip-3 Softmax and Cross-Entropy" (2021). Senior Projects - Computer Science & Software Engineering. 2.