This page will document progress on “MusicMaker” by Drew Jex, which aims to create new music written in the style of other music. My program is written in PHP so that it can eventually be hooked up with a front-end interface and be used as an easily-accessible web application.

Web Client

A Web Client to the current version of the Music Maker is available HERE!

A Web Client to the old version of the Music Maker (Feb 2017) is available HERE!

The Plan

  • Create “Music Analyzers” that gather data about a set of midi files
  • With the gathered data, create a fitness function that gives a score based on how well a given song imitates the original midi files.
  • Using this fitness function, implement a genetic algorithm that returns a song when it reaches a particular threshold.
  • Gather data from other people based on how well they believe the generated song imitates the original music.


So far, the Music Maker can:

  • analyze a song using a simplified fitness function that gives a score based on the number of patterns it can find
  • generate a new song using this simplified fitness function
  • parse a midi file and convert it to my own data model and vice versa
  • analyze the song structure (by measure) of a song
  • determine the chord progression at any point in a song
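The capabilities above rest on a custom data model for songs. The classes below are purely illustrative (the real MusicMaker classes are not shown on this page), but they sketch one way the MIDI data could be modeled in PHP 8:

```php
<?php
// Hypothetical data model - names and fields are my own assumptions,
// not the actual MusicMaker classes.
class Note {
    public function __construct(
        public int $pitch,     // MIDI note number (60 = middle C)
        public int $start,     // onset, measured in 16th-note slots
        public int $duration   // length, measured in 16th-note slots
    ) {}
}

class Track {
    /** @var Note[] */
    public array $notes = [];
}

class Song {
    /** @var Track[] */
    public array $tracks = [];
}

// Toy usage: one song, one track, one note.
$song = new Song();
$track = new Track();
$track->notes[] = new Note(60, 0, 4); // a quarter note on middle C
$song->tracks[] = $track;
```

A model like this makes "convert a MIDI file to my own data model and vice versa" a matter of mapping note-on/note-off events onto Note objects.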

You can listen to examples of what the MusicMaker has created so far below (for some reason, the server is slow sometimes - be patient or refresh):


Things I want to analyze:

  • Chord Progression at every point in the song
  • Which notes follow which notes
  • Which notes are played with which chords
  • Song Structure


  • Is it important to realize which track is melody/harmony etc.? What if the song doesn’t really have that? Just a bunch of chords and cool sounding stuff?
  • I want the computer to be able to analyze basic stuff that a human could analyze. Like - what is the melodic instrument right now? What chords are being played? Analyze interesting things in the song and be able to imitate them (example - a switch-off of melodies between tracks, etc.)
  • How is all this going to work if you have 10 input songs? How does it pick and choose what to take? Remember - we’re not copying music, but implementing interesting stylistic things that a human perhaps wouldn’t initially recognize.


January 19 - 26

So - do I pick notes based on score or based purely on my stats? I like the idea of the genetic algorithm because it introduces some randomness and potentially crossover (because it can somehow combine two good songs). Basically, it uses stats to get a score, and then makes mutations based on stats OR randomness. But will random guesses ever produce a higher score? It depends on how I write it.

Songs are built of pieces of sub-songs that simply sound good together. They also repeat sections, but introduce variety in order to increase interest. Repeating the same things too often becomes boring. That is important to realize.

So, build sub-pieces, then combine them.

Our genetic algorithm could potentially focus on creating these sub-pieces, and then we simply combine them.

So, we give the algorithm several songs and say we want to create a new song that implements styles we find in those songs. There is also the option of defining the song structure, or the program will determine it itself (i.e. A B A B C Avar Bvar)…?

These parts are independent to an extent, but they still need to sound good with the other parts…. So they are dependent, at least to an extent.

So, look at each track and find the structure of each track, starting at the measure level, and then getting bigger.

Combine the stats from one or more songs in the same database:

  • Know the structure of each track
  • Know the order of notes
  • Know the order of chord progressions

I need to start small, and then grow from there. That’s how the song expands and grows, and the problem becomes more manageable.

So, start out with a bunch of random songs. What will these consist of? How will these combine elements of all the songs? How are mutations chosen?

Start with just a couple notes. Then add other tracks (and consequently, chords).

A song is a bunch of measures - they are built of pieces, not just one giant interconnected DNA strand. It must be built piece by piece, but the pieces have to work well together. What is an elegant way to do that assuming I have 10 songs I feed the program?

So, it’s like I need to build it piece by piece, but perhaps in order. That way, the different pieces can fit well together. In order to do that - that can be another statistic I get - how are transitions handled in other songs? Imitate them!

Big question - is chord progression and structure decided at beginning or is it built as I build the song? Hmmm…

Right now, my Song object is set up so it’s really easy to repeat something if I want to. The idea is that each track is the same length for each section of the song. So how is the structure determined? How do I know I want to repeat a section? To some extent, it needs to be decided in advance. I can build the song, piece by piece, but also make sure the transitions work. Then I can just throw them together.

A major element of the transition is simply the chord progression. What major feeling will the next section have?

January 26 - February 2

In order to keep track of the “shape” of a phrase, we could just look at the difference between each consecutive note, and reuse that shape elsewhere. (ex. 1 1 -1 -1 2 -2 -1 1 for a measure, or something like that)
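As a minimal sketch of this idea, the shape of a phrase could be computed as the list of differences between consecutive pitches. The function name and representation here are my own, not the MusicMaker code:

```php
<?php
// Hypothetical sketch: the "shape" of a phrase is the interval (in
// semitones) between each consecutive pair of notes.
// $notes is assumed to be an array of MIDI pitch numbers.
function phraseShape(array $notes): array {
    $shape = [];
    for ($i = 1; $i < count($notes); $i++) {
        $shape[] = $notes[$i] - $notes[$i - 1];
    }
    return $shape;
}

// Two phrases with the same shape are transpositions of each other:
print_r(phraseShape([60, 61, 62, 61, 60])); // shape: 1 1 -1 -1
```

Comparing shapes instead of raw pitches lets the analyzer recognize a repeated melodic contour even when it has been transposed.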

Things I accomplished this week:

  • refactored code
  • updated the wiki
  • made sure my analyzers are solid thus far - working on seeing which notes go together
  • made plans for the remaining analyzers
  • kept track of the top note for each chord (the chord is the remaining notes outside the top note)
  • kept track of the difference between notes in the melody

Next week:

  • Start work on the next part of the project (implementing analyzers into the fitness function)
  • Basically, I am planning for my genetic algorithm to focus on notes for the melody. I’ll put together a structure/chord progression a different way by just combining them from the set of input songs. Then I can just look at the chord being played and see which notes sound good with it.
  • Finish the analyzer that keeps track of which notes are played with which chords
  • Finish the analyzer that looks at the difference between notes
  • I want to analyze how the different tracks line up (right hand with left hand - how the rhythms match up, etc.)

February 2 - 9

Would doing the chord progression first severely limit me? Perhaps… I wish I could implement the GA to do this part. The chords need to flow well. Imitating repeating patterns is a similar thing - I can copy different parts completely. Again, I don’t want to focus too much on specific constraints - for example, if a part is 16 measures but I only want 8, etc. (unless I want to copy the chord progression and structure of the song exactly).

Using stats from analyzed songs to know which chords/notes come next - that is important.

So maybe I just need to break the piece down into tiny structures, and do it piece by piece that way? (instead of bigger structural pieces?)

A big question is: how do I combine elements of several songs? And how do I introduce randomness?

Obviously, the big structural idea can be represented across songs, but what about at a smaller level? We can’t be too specific or controlling, can we?

Different tracks:

  • A A A A A - always the same
  • A B B B B - variety was introduced at the 2nd iteration
  • A A B B B - variety was introduced at the 3rd iteration

Perhaps use the structure of previous songs, or combine them in interesting ways. Use the chord progressions of other songs, and combine them in interesting ways. Now for the actual notes being played - here’s where the GA comes in, perhaps.

February 9 - 16

After reviewing some of my ideas and what I was trying to accomplish, I've moved farther away from doing the genetic algorithm. Instead, I am focusing more on the idea of repeating patterns in music. My original fitness function was fundamentally built on this idea - a song is a song because it isn't random - it's musical, meaning it consists of recognizable patterns that our minds can remember.

So this week, instead of having a structural analyzer that just looks at structures by measure, I implemented what my fitness function accomplished by keeping track of patterns across the entire song that are even smaller than one measure. There is still an element of randomness because the rhythms and notes are assigned randomly from notes being played at the same time in the original song - I want to improve this element this week by looking at how different patterns align with other patterns as well as note-pattern shapes, etc. The really cool part of this week, however, is the fact that I can repeat and imitate patterns that are found throughout the song, whether they are two-note patterns or eight-note patterns or whatever. The new songs that were produced, therefore, contain more patterns at a smaller level and less randomness, even though the notes and rhythms are still being assigned randomly for each pattern. I think this shows the idea I'm trying to explore about music - repeating patterns (and a little bit of variety) are the essence of music. These patterns, combined with notes that sound good together, are fundamentally what I'm trying to recreate with MusicMaker.
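To make the fitness idea concrete, here is a minimal, hypothetical sketch of a pattern-based score: slice a track into fixed-size patterns and reward every slice that repeats one already seen. Names and representation are illustrative, not the actual MusicMaker code:

```php
<?php
// Hedged sketch of the pattern-based fitness idea: a track is a flat
// array of 16th-note symbols; the score counts repeats of known patterns.
function repetitionScore(array $sixteenths, int $patternSize): int {
    $seen = [];
    $score = 0;
    foreach (array_chunk($sixteenths, $patternSize) as $pattern) {
        $key = implode('|', $pattern);
        if (isset($seen[$key])) {
            $score++; // reward every repeat of a pattern we have seen before
        }
        $seen[$key] = true;
    }
    return $score;
}

// A repeated 4-slot pattern scores higher than four unique patterns:
echo repetitionScore(['C','E','G','C','C','E','G','C'], 4), "\n"; // 1
echo repetitionScore(['C','E','G','C','D','F','A','D'], 4), "\n"; // 0
```

A fully random track would score near zero under a function like this, which matches the intuition that "a song is a song because it isn't random."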

Examples of songs that implement this idea of repeating patterns can be heard below (for some reason, the server is slow sometimes. Be patient or refresh):

There are still some bugs that need to be worked out, but you can hear more repeating patterns, especially at smaller levels (some of which you probably don't even notice). As they become smaller, you are more likely to get the original song. Theoretically, at the smallest level, you should end up with the original song (some bugs are preventing me from testing this currently).

Keep in mind also that two patterns are considered the same if the number of differences between them is less than 4. This can be adjusted, especially at smaller levels.
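A minimal sketch of that similarity test might look like the following; the function name and pattern representation are assumptions, not the actual implementation:

```php
<?php
// Hypothetical sketch: two patterns count as "the same" when they differ
// in fewer than $threshold slots. Patterns are assumed to be equal-length
// arrays of note/rest symbols.
function isSamePattern(array $a, array $b, int $threshold = 4): bool {
    $differences = 0;
    foreach ($a as $i => $symbol) {
        if ($symbol !== ($b[$i] ?? null)) {
            $differences++;
        }
    }
    return $differences < $threshold;
}

var_dump(isSamePattern(['C','E','G','C'], ['C','E','A','C'])); // one difference
```

Raising the threshold merges more patterns into one label (more repetition in the output); lowering it keeps patterns distinct (closer to the original song).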

Here is the structural output of the original song at a quarter-note level so you can see it visually:

  • N represents a rest. Each number is a unique pattern found at that point. Each element in the array is one measure.

Array (

   [0] => N|N|N|0
   [1] => 1|2|2|0
   [2] => 1|2|3|4
   [3] => 5|6|7|8
   [4] => 2|2|9|N
   [5] => 10|2|2|N
   [6] => 10|2|3|4
   [7] => 5|6|7|8
   [8] => 2|2|9|0
   [9] => N|N|N|0
   [10] => 1|2|2|0
   [11] => 1|2|3|4
   [12] => 5|11|7|12
   [13] => 2|2|9|N
   [14] => 13|2|2|N
   [15] => 13|2|3|4
   [16] => 5|11|7|12
   [17] => 2|2|9|0
   [18] => 2|14|N|0
   [19] => 1|15|N|0
   [20] => 1|2|3|16
   [21] => 17|18|19|20
   [22] => 21|22|7|2
   [23] => 21|2|3|8
   [24] => 23|N|24|25
   [25] => 26|N|3|27
   [26] => N|N|N|2
   [27] => N|N|3|28
   [28] => 29|2|30|31
   [29] => 23|2|32|0
   [30] => 29|2|30|31
   [31] => 23|2|32|0
   [32] => 29|2|30|31
   [33] => 29|2|33|2
   [34] => 29|2|30|31
   [35] => 1|2|3|28
   [36] => 29|2|30|31
   [37] => 34|2|3|4
   [38] => 35|2|3|4
   [39] => 2|36|37|0
   [40] => 1|2|2|0
   [41] => 1|2|3|4
   [42] => 5|6|7|8
   [43] => 2|2|9|N
   [44] => 10|2|2|N
   [45] => 10|2|3|4
   [46] => 5|6|7|8
   [47] => 2|2|9|28
   [48] => 2|14|N|28
   [49] => 1|15|N|28
   [50] => 1|2|3|38
   [51] => 17|18|19|20
   [52] => 21|22|7|2
   [53] => 21|2|3|8
   [54] => 23|N|24|25
   [55] => 26|N|3|27
   [56] => N|N|N|2
   [57] => N|N|3|28
   [58] => 29|2|30|31
   [59] => 23|2|32|0
   [60] => 29|2|30|31
   [61] => 23|2|32|0
   [62] => 29|2|30|31
   [63] => 29|2|33|2
   [64] => 29|2|30|31
   [65] => 1|2|3|28
   [66] => 29|2|30|31
   [67] => 34|2|3|4
   [68] => 35|2|3|4
)

From here, I think it would be cool to combine songs by combining patterns from different songs at various levels. For example, I could take the large structural pattern of ABAB from one song, then within section A implement a smaller pattern from another song, then within each measure use the pattern structure from yet another song, etc.

I would also like to improve the quality of each pattern and specifically how patterns can sound good together. My analyzers can focus on transitions between patterns at different levels (perhaps starting at the smallest and slowly getting bigger), implementing note-shapes and other data gathered from the original song.

February 16 - 23

Fixed the bugs that were in the code last week! Now we can generate songs given three parameters: a midi file, the level to which we want to analyze the structure of the midi file, and the maximum number of differences two structural pieces can have to be considered the same.

Check out the basic web-client I am making so people can hear/compare songs that the MusicMaker creates:

Just some background on what the different parameters mean:

  • Structural Level: When the MusicMaker performs its analysis, it first breaks the song into pieces (called “patterns”) and finds pieces that are similar. This parameter indicates how big those pieces will be. Each number represents a 16th note for a 4/4 song, which at this point is the shortest note that can be defined in my data model. Therefore, choosing a high value (such as 16) means the MusicMaker will break the song into pieces of 16 16th notes each, or one measure. This allows for greater variety in the final result. Choosing a smaller value will make your final song much more similar to the original music.
  • Similarity Value: This number represents how many differences can exist between two patterns before they should be considered the same. Typically, a higher structural-level sounds best with a higher similarity value. Picking a small structural level and high similarity would result in the song repeating the same pattern over and over again.
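To illustrate how the Structural Level parameter could work, here is a small sketch; the function name is hypothetical, and the real slicing code is not shown on this page:

```php
<?php
// Hedged sketch: a track is assumed to be a flat array of 16th-note
// slots; $structuralLevel slices it into fixed-size patterns.
function splitIntoPatterns(array $sixteenths, int $structuralLevel): array {
    return array_chunk($sixteenths, $structuralLevel);
}

// A 4/4 measure is 16 sixteenth-note slots, so level 16 yields one
// pattern per measure, while level 4 yields one pattern per beat.
$track = range(1, 32); // two measures of slots
echo count(splitIntoPatterns($track, 16)), "\n"; // 2 patterns
echo count(splitIntoPatterns($track, 4)), "\n";  // 8 patterns
```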

Here are the differences between the creation styles:

  • makeRandom(): This style simply assigns random notes (from the set of notes played at that same time in the original song) at random times to each pattern and throws it all together.
  • makeSmart(): This style works similarly to makeRandom(), but for each pattern it creates 100 different possible results, running a scoring function on each result that indicates how many patterns it finds. After 100 iterations, the program chooses the result that received the highest score. As a result, the final song will sound less random and more repetitive.
  • makeSmarter(): This style treats the first track exactly as makeSmart() does. The remaining tracks, however, are forced to align their notes rhythmically with the first track so that the resulting song contains notes that are played together rather than being all over the place. This works under the assumption that the first track is the melody, and we are simply aligning background notes with melodic notes.
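The best-of-100 selection in makeSmart() can be sketched generically as follows; the generator and scoring callables stand in for the real MusicMaker routines, which are not shown here:

```php
<?php
// Hypothetical sketch of the makeSmart() idea: generate many random
// candidates for a pattern and keep the one with the highest score.
// $generate and $score are stand-ins for the real MusicMaker routines.
function makeSmartPattern(callable $generate, callable $score, int $tries = 100) {
    $best = null;
    $bestScore = -INF;
    for ($i = 0; $i < $tries; $i++) {
        $candidate = $generate();
        $s = $score($candidate);
        if ($s > $bestScore) {
            $bestScore = $s;
            $best = $candidate;
        }
    }
    return $best;
}

// Toy usage: the "score" favours candidates with repeated values.
$best = makeSmartPattern(
    fn() => [rand(0, 2), rand(0, 2), rand(0, 2)],
    fn($p) => count($p) - count(array_unique($p))
);
```

Because only the highest-scoring candidate survives, the output drifts toward whatever the scoring function rewards - here, repetition.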

I have found that a 4 for structural-level and 1 for similarity-level will create songs that sound like a remix of the original. 8 and 2 will create a good mix of variety and newness. 1 and 1 will sound almost exactly like the original.

There is still much work to be done with making the patterns sound better together, and across tracks. But the original song gives us a great structural template of chords, patterns, and notes. It wouldn't be too difficult to take structural elements from several songs and combine them. I like the idea of trying to create patterns and repeat them throughout the song - I think it is the fundamental characteristic of music.

I would love to align the notes in such a way that they emphasize “the beat.” In other words, the song is played in such a way that you don't need a metronome to know where the beat is. This idea of “keeping the beat” is essential. It is accomplished by making each of these patterns have similar characteristics so they sound good together and are repetitive and consistent.

February 23 - March 2

I improved my MusicMaker in simple ways this week that dramatically affected the outcome of the song. I haven't yet updated the web client version, but on my machine, I've implemented the following new creation style:

  • createRhythmicallyFirst(): This turned out to create music that was much more repetitive and enjoyable. Instead of checking individual patterns for differences at both the note and rhythmic level, I separate the two. First, I find similar patterns by rhythm only and create new random rhythms. Then, I assign notes to those rhythms from the set of notes that occur in the original song at the same time. This essentially ensures that two different patterns in the original song with the same rhythm but different notes will also occur as two such patterns in the generated song, only with different notes and a different rhythm. Additionally, I assign notes so that once a note has been assigned for a particular section, the following note must be different from it and must be the next-closest note in the set. Once a note has been assigned, it is removed from the set so we don't create an infinite loop between notes. This creates sections of music with runs and a more natural sound.
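The next-closest-note rule described above might be sketched like this; it is a toy version under my own assumptions, not the actual createRhythmicallyFirst() code:

```php
<?php
// Hedged sketch of the note-assignment rule: after a note is chosen, the
// next note must differ and be the closest remaining pitch in the set;
// chosen notes are removed so the walk cannot loop between two pitches.
function assignRun(array $noteSet, int $start): array {
    $run = [$start];
    // remove the starting note from the pool
    $pool = array_values(array_diff($noteSet, [$start]));
    $current = $start;
    while (!empty($pool)) {
        // sort the remaining pool by distance to the current note
        usort($pool, fn($a, $b) => abs($a - $current) <=> abs($b - $current));
        $current = array_shift($pool); // take the nearest remaining pitch
        $run[] = $current;
    }
    return $run;
}

print_r(assignRun([60, 62, 64, 65, 67], 64)); // walks to nearest remaining notes
```

Because each step moves to the nearest unused pitch, the result tends to be a stepwise run rather than a jumpy random sequence.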

March 2 - March 9

Due to outside circumstances, this week was rough in terms of making actual progress in the code. I did make some progress on getting some ORCA business done, however, such as attending the IRB workshop meeting. I also took some time to establish the major ideas I want to implement before the semester is over. These include:

  • The ability to give the MusicMaker multiple songs as parameters.
    • Combine song structure and chord progressions where possible in the new song.
    • Track notes that are played at the same time and ensure that notes played together in the generated song sound good together.
  • To be continued…

Right now, my MusicMaker can successfully “remix” a song if you provide strict parameters that limit how creative the MusicMaker can be. This provides an unintended use-case in which you could easily create new renditions of popular music or get ideas for a new song-cover. Often, you end up with a “jazzed-up” version of the original song, which sounds quite pleasant because you can identify the original music.

March 9 - March 23

I have made some good progress in combining multiple songs into one generated piece. Here is the basic idea of how I am accomplishing this:

  • Take all provided music and put it in the same key as the first song given.
    • Accomplished by finding matching notes across songs by shifting every note of one song by one half-step at a time over an entire scale. The shift that returns the most matches is assumed to be the interval at which both songs are in the same key.
  • Get the rhythmic-patterns data object and note-patterns data object for each song using whatever parameters the user chooses.
  • At this point, we want to combine patterns from multiple songs in places where chord progressions match up (we don't have to worry about key because all songs are already in the same key) and combine structural elements from all the songs. Keep in mind that every song at this point has an array defining the structure of the song (at a 2-beat, 4-beat, 8-beat, etc. level) and an array of patterns that can be plugged into that structure. The pseudocode for the function that seeks to find the best combination of patterns/structure across songs will be similar to the following:
function combineSongs(Array songs) {

   rhythm_patterns = [];
   note_patterns = [];
   new_song_structure = [];
   num_tracks = structure[0].length; //num_tracks will be the number of tracks in the first song
   n = 3; //minimum chain length
   //we can mix/match across tracks - we don't care, so long as we are repeating patterns in a way that returns a higher score.
   //create the structure (almost) from scratch from patterns, finding places where they match up - we can build "chains"
   //get the structure for each track of the first song
   foreach (structure[0] as track) {
      track_structure = [];
      //initialize our chain with the first n parts of the first song for this track
      for (i = 0; i < n; i++) {
         track_structure.push(track[i]);
      }
      //now, for each remaining pattern of the track, look for a matching pattern from another song
      i = n;
      while (i < track.length) {
         foreach (patterns as key => song_pattern) {
            if (key == 0) continue; //skip the first song - we are chaining onto it
            foreach (song_pattern as pattern) {
               if (patterns[0][track[i]]->notes.isSimilar(pattern->notes)) {
                  //we found a pattern from another song with the same chord for this spot;
                  //of the places where that pattern occurs, add the chain that returns the
                  //highest score, based on repeating patterns
                  max_score = 0;
                  best_chain = null;
                  foreach (place in structure[key] that has pattern) {
                     new_chain = [];
                     for (j = 0; j < n; j++) {
                        new_chain.push(structure[key][place + j]);
                     }
                     if (Analyzer::getScore(track_structure.concat(new_chain)) > max_score) {
                        best_chain = new_chain;
                        max_score = Analyzer::getScore(track_structure.concat(new_chain));
                     }
                  }
                  track_structure = track_structure.concat(best_chain); //add in the best chain
                  i += n;
                  //we don't break - we want to continue traversing through all the tracks of all the songs
               }
            }
         }
      }
      new_song_structure.push(track_structure);
   }
   return new Song(new_song_structure, rhythm_patterns, note_patterns);
}

The main idea here is that we are creating a new structure from the provided songs by “chaining” structural elements together based on similar chord progressions/notes across songs. We want to choose those elements that contain more patterns because part of my idea is that “songs that contain repeating patterns, plus some variety, generally sound better.” This is a rough idea of how this can be accomplished. I think there are better ways that I can experiment with, such as mixing/matching elements at different places using a genetic algorithm that implements Analyzer::getScore() as the fitness function. Essentially, I want to find the best “mix” of songs that produces the highest number of repeating patterns at some level for each track.

Another idea at a very high-level using a genetic algorithm, which I think would be sweet, could be:

function findBestCombination(Array songs) {

   max_iterations = 10000; //whatever we want
   population = generatePopulationOfSize(songs, 10); //perhaps randomly adds/modifies patterns in a song with patterns of another song
   for (i = 0; i < max_iterations; i++) {
      selected = getIndividualsWithBestFitness(population); //selects the 2 best in the population
      performCrossOver(selected); //crossover is based on the "chaining" idea presented in the previous function - chains based on similar notes/chords from other members of the population
      performMutation(selected); //mutation randomly changes some structural elements to introduce variation
      population = selected; //the next generation is the result of crossover and mutation
   }
   return getIndividualWithBestFitness(population);
}

I would want to connect the chain at points that are consistent with the original music, not just randomly. Also, after determining a structure, I could change specific numbers to try to increase the repeating-pattern score. I would like to do something like that; otherwise, at the very end I could end up with a bunch of different structural elements from different songs, each associated with different patterns, so it wouldn't be very repetitive.

Idea: Use this chaining method for each major sub-part (i.e. A B A B C). Then there would be some repetition. At a smaller level, however, I would want to make sure more patterns show up: so keep the notes the same, but change the rhythmic structure! Also, I start off looking for exact matches on connections to the chain, but if there are none, I just look for ones that are close until I find one. There can also be repeating parts if they fit well, but I don't want to do that too much. Also take the number of notes into account, and perhaps other variables as well.

March 23 - March 30

I worked hard to build a genetic algorithm that would build songs with repeating patterns from multiple songs. I ran into a few problems that halted my progress, however, and made me reconsider what I can accomplish within this semester. The main problem, which I didn't initially realize was such a big one, was the fact that I couldn't consistently and accurately put all songs in the same key. This step is essential, because combining songs in different keys will obviously result in something that doesn't sound very pleasant. I have a few ideas that could potentially solve this problem, such as:

  • Simply looking at the interval between patterns of notes in the song we want to change, and then repeating that same interval using notes that exist in the key we want.
  • There appear to be several techniques for putting songs in the same key, and I haven't yet researched or implemented many of these methods.
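The half-step matching idea from the previous section could be sketched as follows; the pitch-class counting is my own simplification, and the real matching code is not shown on this page:

```php
<?php
// Hypothetical sketch of the key-matching idea: shift one song's notes
// by every semitone 0..11 and keep the shift producing the most matches.
function bestTransposition(array $notesA, array $notesB): int {
    // count how often each pitch class (0..11) occurs in song A
    $pitchClassesA = array_count_values(array_map(fn($n) => $n % 12, $notesA));
    $bestShift = 0;
    $bestMatches = -1;
    for ($shift = 0; $shift < 12; $shift++) {
        $matches = 0;
        foreach ($notesB as $n) {
            $pc = ($n + $shift) % 12;
            $matches += $pitchClassesA[$pc] ?? 0; // reward shared pitch classes
        }
        if ($matches > $bestMatches) {
            $bestMatches = $matches;
            $bestShift = $shift;
        }
    }
    return $bestShift; // semitones to add to song B
}

// C major fragment vs the same fragment a whole step down:
echo bestTransposition([60, 62, 64, 65, 67], [58, 60, 62, 63, 65]); // 2
```

This brute-force approach is simple but can misfire on songs with heavy chromaticism, which may be why the results were inconsistent.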

I was able to combine songs by simply taking the notes from one song and combining them with the structure from another song. This had a pleasant-sounding result, because obviously we weren't attempting to combine notes from songs in different keys, but it didn't allow for combining more than 2 songs, and I'm not sure how well it would work if we tried.

I want to start putting together an online survey that would allow people to choose music they like and use the MusicMaker to generate a song written using the same style/internal structure. I am wondering if I should do this by having the user only choose one song at a time rather than attempting to combine multiple songs. It would still potentially be cool, but ultimately I would want to be able to combine styles from multiple songs. Hmmm…

March 30 - April 6

I created a new client website that allows people to upload their own MIDI files for analysis rather than choosing from a provided set of songs. This new client is located here: I intend for this client to gather survey responses about the MusicMaker. The idea is that people can upload songs they enjoy and then fill out a brief survey on how well the MusicMaker did at making something that sounds similar to, yet distinct from, the original music they provided.

I am also working on a slight alteration to my pattern finder that will find patterns across entire songs rather than just within each track. This will help tracks sound better when played together, especially for songs such as Coldplay's “Viva La Vida” where the tracks play notes on the very same rhythms.

I also started working on my ICCC-style report. I've written an introduction and I am partway through filling in my approach. Not sure it's going to be quite as high-quality as Paul's or Ben's report, but it's fun to write either way. Right now it's being written in Google Docs, but I am transferring it to LaTeX once that's all set up.

April 6 - April 13

I created my survey and I am currently receiving responses! Look HERE!

Also, I created my PowerPoint presentation to give next week, and I am working on the paper.

mind/musicmaker.txt · Last modified: 2017/04/13 13:18 by djex28