The Jinglinator 4000 [DEMO #9]
Formal Metadata
Number of Parts: 19
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/18085 (DOI)
2014 Fall NuPIC Hackathon, 18 / 19
Transcript: English (auto-generated)
00:15
How's it going, guys? My name is Sergei, and I'm here to present the Jinglinator 4000.
00:22
How do you spell that, the Jinglinator? J-I-N-G... It's up on the screen. It's on GitHub. Basically, I trained NuPIC on a data set of 500 different jingles.
00:40
And now it can compose, or predict, however you want to put it, a jingle based on an input vector of a few notes. So you put in a few different notes, and it will make a prediction based on them. My primary interest in this: I wanted to see whether it reverts to the same average pattern whatever input you give it, or whether it actually produces something
01:02
based on the initial sequence. Sergei, are you going to play music? Well, I don't know if you can call this music, but I will play it. I'll play audio. Sure. Audio. Yeah, that's much better. Yeah. That wasn't it, by the way. Hip hop.
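The idea described here, seeding the model with a few notes and letting it predict a continuation, can be sketched without NuPIC itself. The following is an illustrative stand-in using a simple order-2 frequency model in place of NuPIC's temporal memory; the function names and the toy corpus are hypothetical, not the demo's actual code.

```python
# Illustrative stand-in only: the demo uses NuPIC for sequence prediction,
# but the shape of the task -- "given a seed of a few notes, predict a
# continuation" -- can be shown with a simple order-2 frequency model.
from collections import Counter, defaultdict

def train(jingles, order=2):
    """Count which note follows each (order)-note context across the corpus."""
    table = defaultdict(Counter)
    for notes in jingles:
        for i in range(len(notes) - order):
            context = tuple(notes[i:i + order])
            table[context][notes[i + order]] += 1
    return table

def continue_jingle(table, seed, length=8, order=2):
    """Greedily extend the seed with the most frequent next note."""
    out = list(seed)
    for _ in range(length):
        context = tuple(out[-order:])
        if context not in table:
            break  # unseen context: nothing to predict
        out.append(table[context].most_common(1)[0][0])
    return out

corpus = [["C", "E", "G", "E", "C"], ["C", "E", "G", "E", "C", "E"]]
model = train(corpus)
print(continue_jingle(model, ["C", "E"], length=4))  # → ['C', 'E', 'G', 'E', 'C', 'E']
```

The question the speaker raises, whether the output reverts to one average pattern or genuinely depends on the seed, maps here to whether different seed contexts lead to different branches of the frequency table.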
01:21
So here, the algorithm is working, but I have some sample outputs from the jingle set. So we can play this one. This is a sample output; it's based on four notes.
01:45
Wait, you trained it on four notes? No, this is the output it gave, given an input vector of the first four notes. Ah-ha. What did you train it on? MIT published a jingle set.
02:01
OK, would the jingles be recognized? I haven't heard of any of them. I guess the problem is that the quality of the output is going to depend on what it's been trained on, and I have no idea what it's been trained on. Yes, so Ryan's Mammoth Collection, it's a famous collection of jingles. There are 1,500 in total, but my setup isn't good enough, so I only used 500 of them.
02:22
But basically, what I wanted to show you guys, which is the somewhat interesting part to me, is that here in the first four notes, you can see. Oh, that's the one we launched, right? Even in the first input vector, if you give two similar notes, there's going to be a pattern of two repeating notes
02:42
over the whole jingle, like here, here, here. So it actually picks up themes in the music and, given the input vector, will echo them later. So since I have exactly zero musical training, what I'm going to do, if somebody's interested, is ask a friend of mine
03:01
to compose five initial notes for a jingle, run it against the whole data set, and see what it comes up with. The code is on GitHub, if you want to try it. Let me ask you a couple of questions. Is it time for questions? Sure, go ahead. So in music, notes have duration and rhythm to them,
03:20
but you're not showing that here. So how did you encode the original jingles when you took them in? Did you incorporate that at all, or did you just take the intervals, the actual notes? So I actually took just the notes; from the data set I used only the note sequence, not the timing. The actual durations I did not encode. It's a categorical encoding based on the notes.
03:41
I'm thinking about doing an encoder for sheet music. It would require, I think... you can't use any of the existing encoders; you would have to encode the duration, the octave, so the same notes in different octaves, sharps and flats, and stuff like that. So I actually plan to do that. Yeah, you'd want to do duration, and you'd basically want to do intervals, not actual notes.
04:03
Yeah, so you'd use the shortest possible interval, and that, as I said, is what I'd do beyond the original approach. Cool. Any other questions for Sergei?