Python for realtime audio processing in a live music context
Formal Metadata

Title: Python for realtime audio processing in a live music context
Title of Series: EuroPython 2019
Number of Parts: 118
Author: Amiguet, Mathieu
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/44828 (DOI)
Language: English
EuroPython 2019 (43 / 118)
Transcript: English (auto-generated)
00:04
So, hello, thank you for being here. The first thing I want to say, even if Marc already said it, is that we are giving a concert at the social event tomorrow. And in this concert we are going to use Python heavily,
00:20
even if it's not very visible in the concert. So I propose to have this talk to explain why and how we are using Python for making music. So this is the theoretical part, and if you don't believe that it works, just come tomorrow to the social event and listen.
00:40
So a very usual question when you meet someone in a convention like this one is, and what are you using Python for? And my answer is a little bit less usual. It's mainly for real-time audio processing in live music context. And this is quite unexpected,
01:01
and it sometimes triggers reactions like, what, are you crazy, why Python for this kind of thing? And as it happens, the answers are very easy. Are you crazy? Yes, we are, definitely. And why Python? Because it's fun. So I could stop here.
01:21
Thank you for your attention, have a nice meal. Thank you. But as I was lucky enough to be allotted a 45-minute slot for this talk, I think I can get into some more detail than that. So first, some elements of context. My name is Mathieu Amiguet. I'm a musician and a developer.
01:44
I'm artistic director at Les Chemins de Traverse, jointly with Barbara Minder. And Les Chemins de Traverse is a collective of musicians that plays in a variety of styles, from Renaissance repertoire,
02:17
to algorithmic composition.
02:57
By the way, this music was generated by Python, but that's not at all what I'm going to talk about today.
03:03
One thing we've been researching quite a lot for the last decade or so is augmented instruments. What do I mean by that? It's taking an acoustic instrument like, you know, a flute, a violin, piano, or something like that,
03:21
and trying to extend the sonic possibilities of this instrument using new technologies and especially computers. So why the strange name augmented instruments? Actually, it comes from augmented reality. In augmented reality, we mix real-time views of the world with synthetic information
03:41
that we add to the image. And augmented instruments do the same. They mix real-time acoustic sound of the instrument with processed audio. So in a sense, augmented instruments are augmented reality applied to music.
04:02
As a side note, it's not that important in that talk, but it's very important in our research. We decided to use only free software for our research, so we are making music with Linux and free software. That's not a very common choice in the music world.
04:22
But I guess we would have ended up using Python even if we hadn't decided on this restriction. Anyway, the definition of augmented instruments is a little bit theoretical, so why not look at an example?
04:41
Actually, I will show you a set of examples. The first one is very, very simple. You have a musician. I pictured a flute because it's the instrument I play, but it could be any instrument, and he plays through a speaker, so you have a set of microphones, wires,
05:02
an amplifier, and everything. And this goes to a speaker. And in a very simple setup, you could simply add a delay module that will, as its name suggests, delay the sound in time, so a time shifting of the sound.
05:21
And the musician can have a foot controller. He needs a foot controller because his hands are already busy playing the instrument. A foot controller to control the time of the delay, the length of the delay. And even with a very simple setup like this, you can already do some interesting things.
06:14
Okay, so that's not bad for a very simple setup and one flute playing, so this is really one flute playing with itself. There's no pre-recorded sound or anything.
06:24
I'm not sure Telemann had envisioned this way of playing his music, but actually it works pretty well. And for this kind of setup, you really don't need a computer. You can do it with a hardware pedal, and it's cheaper, it's easier.
06:42
But if you get a little bit crazy with delays and begin to have several delays wired in strange manners, and the delay times are linked one to another, it's not that clear that it's better to do it with hardware pedals.
07:00
In this example, with a set of four delays that are set up in the right manner, and if you play the right notes in the right time, you can get some interesting effects.
07:49
By the way, this is an excerpt of a piece we are going to play tomorrow at the social event, so if you like it, just come to the concert. So we are quickly reaching the point
08:01
where it might be more reasonable to use a computer instead of multiple hardware pedals, but it's still relatively easy to do it with stock software. You know, you're just taking existing software and wiring it the right way, and you can play it. The next example is a little bit more complicated.
08:24
I'm going to show you a complex piece of music with a strong architecture, with a beginning, a middle, an end, and really an evolution, and many things happening on the technical side.
08:40
Many volumes changing and loops being recorded and loops being triggered, and it becomes impractical for the musician to control all the details himself. So either we have a technician that does all the knob turning and button pressing
09:03
while the musician plays, but that's not exactly what we want to do with augmented instruments because we want them to be musical instruments that can be played by one person. And so the other possibility is to have choices that are made in advance and encoded in the computer
09:26
one way or another. So in this example, we have a state machine, and when the musician presses on the buttons of the foot controller, he triggers state changes, going from this state to that state,
09:42
and this triggers a set of actions like changing volumes or recording a loop or something like that. And so many things happen, but the musician has only a few simple actions to make, and hopefully it frees up his head to do better music.
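(A minimal, invented illustration of this idea, not the actual code used on stage: one dictionary maps states to bundles of actions, another maps a button press to the next state. All names here are hypothetical.)

    # Hypothetical sketch: a tiny state machine driven by foot-switch presses.
    # Entering a state triggers a bundle of audio actions at once.
    actions = {
        "intro": lambda: print("set volumes, start recording loop 1"),
        "theme": lambda: print("stop recording, start playing loop 1"),
        "coda": lambda: print("fade all loops out"),
    }

    transitions = {"intro": "theme", "theme": "coda"}  # state -> next state

    state = "intro"
    actions[state]()

    def on_button_press():
        """Called each time the musician presses the foot switch."""
        global state
        if state in transitions:
            state = transitions[state]
            actions[state]()  # one press, many things happen

    on_button_press()  # intro -> theme
    on_button_press()  # theme -> coda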
10:13
["The Star-Spangled Banner"]
10:59
Thank you, thank you.
13:45
So as I said before, everything is played live, there are no pre-recorded sounds. I once played this piece in a wedding party and after I played it, someone, a professional musician came to me and said, oh, that was nice, your piece, your karaoke-like piece
14:06
and I said, well, no, it's not really karaoke, you really have to understand that the idea is that everything is played live in the concert. Also we are slowly exiting the realm of existing software, of stock software, because
14:25
here the state machine box doesn't really exist in stock software with the right connections and everything, so we had to develop this part ourselves to play this piece. Perhaps a last example: it's very similar, but there's an interesting thing. Until now
14:47
I showed you only things with loopers and delays, so only time shifting if you want, and it's of course also possible to add effects of all kinds or synthetic sounds, though synthetic sounds are not something we do much. But in this one, something funny is happening.
15:05
If you look at the bottom blue path that goes through a looper and then something we call an envelope follower, what comes out is a red path, so an audio path is transformed
15:22
into a control path for another sound, and that's something funny to do and also that we had to develop ourselves.
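(As an aside for readers: a rough sketch of such an audio-to-control path, written with the pyo library that is introduced later in the talk. This is not the patch from the piece; the looper stage is omitted and all parameter values are made up.)

    # Rough sketch only: an envelope follower turns an audio stream into a
    # control stream that shapes another sound. Not the actual patch.
    from pyo import *

    s = Server().boot()
    mic = Input(chnl=0)                    # live instrument (blue path)
    mic.out()
    env = Follower(mic, freq=20)           # audio -> control signal (red path)
    drone = Sine(freq=110, mul=env).out()  # the control signal drives another sound
    s.start()
    s.gui(locals())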
15:59
If you think of a solo flute piece, probably you don't picture this kind of sound, and
16:34
that's exactly what we are trying to do, to extend as much as possible the sonic possibilities of the instrument, and actually for a few years we have been doing this kind
16:44
of thing, and everything was going very smoothly using partly existing software, so free software as I told you, so SooperLooper, Guitarix, Rakarrack, this kind of thing. Also custom fragments written in audio programming languages, so specialized programming languages
17:02
for audio. We mostly used ChucK, but we also had a few experiences with Pure Data and SuperCollider; we never tried Csound, but we could have done this kind of thing with it. And we would connect everything with JACK. You're probably not familiar with JACK; for once,
17:25
it's one of the best recursive acronyms in the history of free software: JACK is the JACK Audio Connection Kit, and it's an audio daemon that allows you to connect different audio
17:43
applications on the same computer, in the same way you would connect different rackable audio units with jack cables, but you do it in software, it's very nice. And we would manage everything with a script, so simply launch the software we needed and
18:06
connect everything, and everything was good, we thought. But then we hit a wall, we had a big problem and we realized that we couldn't go on the
18:20
same way, we had to change something very fundamental in our way of doing it. What was the problem? The problem was that we were able to play single songs, single tunes very easily, but we couldn't go smoothly from one song to another. What we had to do was launch the right script, then play the song, and then go to the computer,
18:47
quit everything, stop every sound, launch a new script, and then we could continue. And that's not that nice in a concert, you know, sometimes you want to crossfade from one song to another, or simply, it's also not so nice on stage to have someone
19:07
going to a computer and bending over and typing things in; that's not very nice to look at. Of course, a possibility could have been to have some kind of mega patch with every
19:26
song encoded, every song ready to go, and just going from one to another. But we have two problems with this, the first one is performance. If you have every possible song running in parallel, you are likely to have some performance
19:42
problems on your computer. And the other problem is that we really wanted to have a modular approach, because we composed songs, and then when we make gigs, we say, well, I'm going to play that song and that song and that song, but maybe for another gig, I will take another song,
20:00
and the first one of the first gig, and you know. So we really had to have a modular way of implementing our songs, and then reusing them in gigs or in sets, in set lists.
20:20
So what we needed was some kind of gig framework, you know, like a web framework, but for gigs: the Flask of the gigging musician, if you want. And what we realized is that that's something really, really difficult to do in audio programming
20:40
languages. Audio programming languages are very good at programming audio, they better be, but they lack, you know, the higher abstractions, the meta-programming features that make it easy to make something that looks even remotely like a framework. So we did quite a lot of research, and finally, we found this.
21:06
Pyo. Pyo is a dedicated Python module for digital signal processing. It's a very nice module developed mainly by Olivier Bélanger at the University of Montreal in Canada.
21:22
And actually, I was already quite familiar with Python before, and when I saw this, I thought, well, sounds nice, but if you know anything about realtime audio processing, you should be quite skeptical.
21:43
Are you? You should be quite skeptical, because it's very likely that Python is too slow for realtime audio, and even if it's not too slow, things like memory management, you know, and this
22:03
kind of thing are very likely to introduce too much latency, and so you get clicks in your audio, and that's not nice. However, pyo does work, because it works more or less like a marble run, this one.
22:26
The idea is that you have blocks, and you can build paths with these blocks, and in this example, if you drop a marble on the finished path, it will just follow
22:41
the path at a normal speed, at its own speed, even if you were slow to build the path. And you can build a second path while marbles are running down the first one, and then just switch to the other; you just have to be a little bit careful about the moment
23:00
of the switch, because if there's a marble there at that time, it will fall out, but you can do things relatively slowly, and then have the path run at a higher speed. And that's exactly what pyo is doing.
23:22
Pyo has an audio engine that's implemented in C; it's very efficient, very lightweight, very nice. And there are bindings to Python that give you building blocks, and hooks to change things
23:49
at all kinds of places, and so all the heavy work of dealing with audio samples, and memory, and everything low level, is completely invisible, and you just have the nice colored
24:07
blocks, and you construct your path. So this is not a toy convention, this is a Python convention, so maybe I can get a little bit more precise on how it works. So remember the first example I showed you, the Telemann canon played by one musician,
24:24
how could we implement this in pyo? Actually it's very easy. First you need some boilerplate code, but really not that much: just an import, create what's called a server, that's the audio engine, and then later on you will start
24:41
the server, and find a way to keep the main thread alive, because the server is started on a different thread, and so if you just say server start, and stop there, the script will quit. So one way is launching a GUI, and there are other ways, we don't use a GUI on stage,
25:05
so we don't launch a GUI, but that's not that important. Then we try to do the upper path on the drawing, so just having the sound of the musician going to the speaker, and that's really easy, you just have to create an input
25:23
object, and the input object will represent the audio stream coming from the input of the program, so from the sound card. And then on any stream, any audio stream of pyo, if you call the out method, it will send this stream to the output of the program.
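(The slide code itself is not reproduced in the transcript; the following is a reconstruction of the pass-through using pyo's standard Server and Input API.)

    # Reconstructed pass-through example: sound card in -> sound card out.
    from pyo import *

    s = Server().boot()   # the audio engine
    mic = Input(chnl=0)   # audio stream coming from the program's input
    mic.out()             # send the stream to the program's output
    s.start()
    s.gui(locals())       # one way to keep the main thread alive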
25:43
So this is a fully working program that will just get the sound through. So that's not bad with what, one, two, three, four, five lines of code. And for the second path, the one that goes through the delay, that's not much more difficult,
26:04
we have several delay objects in pyo; here I use the simple delay, and the first argument to our pyo objects, as they are called, so to an audio stream, is the input. So here, the delay will take its input from the input object we created, and then
26:27
as we want the delay to go also to the speaker, we call the out method on it. And we have a third path to implement, that's the red one, so I want to use a foot controller
26:44
to tap-tempo the length of the delay. For this, the code is using a small foot controller library I implemented to use the SoftStep foot controller with pyo.
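(Again a reconstruction, since the slide code is not in the transcript: SDelay is pyo's simple delay, and Timer reports the time between two triggers. The speaker's foot-controller library is not shown here, so the button stream b1 is replaced by a plain Trig object; treat the exact names and values as assumptions.)

    # Reconstructed sketch: direct sound plus a delayed copy, with the delay
    # time tap-tempoed from a trigger stream standing in for button presses.
    from pyo import *

    s = Server().boot()
    mic = Input(chnl=0)
    mic.out()                                        # direct path

    dly = SDelay(mic, delay=0.5, maxdelay=5).out()   # delayed path

    b1 = Trig()              # stand-in for "all the presses of button 1"
    tap = Timer(b1, b1)      # seconds elapsed between two successive triggers
    dly.delay = tap          # the delay length follows the tapped tempo

    s.start()
    s.gui(locals())          # call b1.play() twice to "tap" a new delay time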
27:03
And, so some boilerplate code, but what's interesting is B1 equals press button one, so mainly I'm making an object that represents all the times when I press a button on my foot controller, and then I make a timer object that will compute the time between
27:28
two successive presses, so if I press and then I wait three seconds and I press, it will contain the number three, and it's also a stream of data which continuously contains
27:41
this information. And then I just say to my delay object that the length of the delay will be the value of the timer. And this is the full implementation of what you see above, and it's really usable in a concert setting; I mean, you have to do some work to set up your computer so that it can deal
28:04
with low latency audio, that can be some work, but the code can work like this. So we are very happy, but we still have the wall, because if I want to go to another
28:24
song, I have to quit this script and launch another one, and I have gained nothing, or almost nothing, because now I have Python. So we really needed to do some kind of framework, and we thought we have to model our gigs
28:47
or our sets in a simple way, so we said our gigs will be modules, and we have some naming conventions; for instance, if I say scenes equals and a list after that, that will be
29:04
the scenes or the tunes that I want to play in my gig. Scenes are also modules, so that means that I will take advantage of the dynamic importing capabilities of Python. And then some setup code, so I'm saying, well, for this gig I will have two microphones,
29:28
and I want to be able to crossfade from one to another, and I have some kind of blackboard object that anyone can read or write, anyone being the gig and the scenes, the tunes,
29:45
so I can, for instance, in my gig, set up my microphone and then say context.mic = mic, so I can access it from other parts of my code. That's taking advantage of course of the dynamic typing possibilities of Python.
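(An invented, single-file approximation of these conventions; in the real framework the gig is its own module and the blackboard is shared with the scene modules, and none of the names below are the real ones.)

    # Invented sketch of a "gig": a scenes list naming convention, some audio
    # setup, and a blackboard object that gig and scenes can read and write.
    from types import SimpleNamespace
    from pyo import *

    s = Server().boot()

    context = SimpleNamespace()                  # the shared blackboard
    scenes = ["telemann_canon", "loop_piece"]    # convention: tunes of this gig

    mic = Input(chnl=0)
    context.mic = mic                            # scenes will read context.mic

    s.start()
    s.gui(locals())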
30:04
And the scenes become very, very easy, so in a scene, so as I said, it's a module, and I can say, well, I need to use the expression pedal, and I want to have loops, of course
30:21
I can use all the features of Python; for instance, in this example I had several buttons of my foot controller that had to behave in a similar fashion, so why not use a list comprehension to make all four of them in one go. You see that I use context.mic in the definition of my loops, and I also have
30:50
some decorators that provide hooks at certain points in the life cycle of the scene, so when the scene is created, activated, deactivated, and so on.
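(A rough, invented sketch of these two pieces: the scene-side conventions just described, and the master script described next. In reality the gig and each scene are separate modules and the real API differs; everything here is hypothetical.)

    # Invented sketch: lifecycle hooks registered with decorators, four similar
    # handlers built with a list comprehension, and a master script that
    # imports the gig named on the command line and its scenes.
    import importlib
    import sys

    scene_hooks = {"created": [], "activated": [], "deactivated": []}

    def when(event):
        """Decorator factory: register a hook for one moment of a scene's life cycle."""
        def register(func):
            scene_hooks[event].append(func)
            return func
        return register

    # --- what a scene module might contain --------------------------------
    loops = [f"loop_{n}" for n in range(1, 5)]   # four similar loopers in one go

    @when("activated")
    def start_loops():
        print("start recording", loops)

    # --- the master script -------------------------------------------------
    def run_gig(gig_name):
        gig = importlib.import_module(gig_name)            # dynamic import of the gig
        scene_modules = [importlib.import_module(name) for name in gig.scenes]
        # In the real framework, switching scenes is bound to foot-controller
        # events; here we simply walk through them once.
        for scene in scene_modules:
            for hook in scene_hooks["activated"]:
                hook()
            print("now playing scene", scene.__name__)

    if __name__ == "__main__":
        run_gig(sys.argv[1])                               # e.g. gig_europython2019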
31:02
And then it's very easy to have a master script, that's the core of our framework, that will find the gig. In this example you have to call it on the command line with the name of the gig, so I launch gig europython2019; it will find the right module,
31:26
it will find the scenes that are in it, import every scene, and then I can register some events, for instance, when I press on certain buttons of my foot controller, to switch from one scene to another, and with this, I can really easily make this kind of gig
31:49
framework I talked about, and it works pretty well, of course, this is only a principle, the real code is much longer, there's some error checking and things like that, but
32:05
still, I think the whole framework must be way under a thousand lines, which is really reasonable for the kind of thing we are doing. And so, this was possible, thanks to very
32:24
nice features of Python, like dynamic typing, dynamic imports, decorators, code introspection, this kind of thing. To be completely honest, in the first version, we also used some disreputable features, like monkey patching, live inspection of execution
32:45
frames, and all kinds of hacks. But still, we thought we needed them, and we had them, so we could have a prototype very quickly, and after some months, we thought, well,
33:01
this is really, really ugly, we must do something about it, and we are getting rid of the ugly features one by one. But still, all the features are there, and if you need to do something really unusual or strange, everything is there,
33:21
and that's something really nice about the Python language, I think. So, now we found a way around the wall; we see that there is still a long journey in front of us, but now we can go forward and explore new territories, and we are now
33:42
able to go seamlessly from one scene to another, without sound interruption, or also with, for those who know a little bit this kind of thing, also with effects tails, you know, if you have a long, long reverb, and you switch to another scene or tune,
34:01
you don't want the reverb to be cut, but you want it to die slowly, this kind of thing, and everything works very well. So my conclusion would be that the combination of Python and pyo really supports our creative process, in that it makes experimentation
34:24
easy, when we have an idea, a musical idea, it's very easy to implement it and test it, and this is very important, because we have many ideas, and to be honest, I would say nine out of ten never reach the stage, we try them and say, well, no, that wasn't
34:43
a good idea, so if we need, I don't know, three, four, five days to implement an idea before we test it, we simply don't have the time to do it, and with Python, everything is going very fast, and we can, we have a very direct path from the initial idea to
35:04
its prototype, and, well, most of the time, the prototype is also the production code. Another really, really great thing is that pyo is very actively developed; the main developer is very, very dedicated to making
35:22
pyo better and better, and it happened many times that I was working on some code, and suddenly I was blocked, and I would write to the list saying, well, I'm trying to do this and this with pyo, and I can't find how to do it, and usually
35:43
I would do it, you know, in the evening, and I would go to sleep, and I'm living in Switzerland, he's living in Canada, that means that he had still a long day in front of him at that time, and when I would wake up the following morning, I would have an
36:01
answer on the list: well, this was not possible, but it's now implemented, just check out the latest code. And it really happened many times, and, well, that's simply great. So I know he couldn't be here today, but thank you, Olivier, for this great work. And this combination of Python and pyo allows us to have the
36:23
C efficiency, we really need efficiency and very low latency when we do real-time audio, but with all the flexibility of Python. It's also quite an unexpected use case for Python, and I think it really shows the versatility of the language and
36:41
the ecosystem, and that's great. Now maybe an interesting question would be: we are very happy now with this Python plus pyo solution, but what could possibly make us consider another solution? I can see two places where I'm not completely
37:02
satisfied and I would consider changing. The first one is catching errors. If you see this code here, I have a callback that would be called when I press a button on my foot controller, and as it happens, I made a typo in my callback
37:20
code; I wanted to say loop set something, and I wrote something else. As I'm a very, very serious developer, I also documented my typo, but I don't always do it, and of course this typo will be absolutely no problem when I
37:42
launch my script, and it's only when I press on the foot controller that I will get an error. So pyo is relatively resilient in this kind of case, it won't crash the whole thing, so even if it happens in a gig, it's not the end of the world, but one thing is sure: it won't do what I intended it to do, and it can be quite
38:05
annoying. So I would appreciate having some tools that would catch most errors before the code is even executed. Another thing is that, like
38:26
many frameworks in imperative languages, pyo heavily relies on callbacks, and callbacks are very nice, they work well, we are used to them, but
38:43
they are not always the best way of expressing ideas, and maybe it would be interesting to explore other manners of organising things in time than callbacks. So maybe I read too much about Haskell; now I want errors caught at compile
39:04
time, to get rid of callbacks, I don't know. Anyway, reimplementing our whole setups and gigs in a new language would be quite an expensive thing to do,
39:22
so I think we would really need to have very, very obvious advantages to go away from this solution, but that was just to say what could be even better. If you want to hear more music than the little excerpt you heard, of course the
39:45
best thing to do is come to the social event tomorrow, we are playing live. If you are the kind of old-fashioned person that still buys CDs, like me, you can buy a CD, I have a few with me, you can just come to me. This is our latest album
40:01
with many, many augmented instrument things, all backed by pyo. You can also have this album in a dematerialised (that's a hard word) MP3 format on Bandcamp, and if you really want to support the platforms instead of supporting
40:24
the musicians, you can also stream from Spotify, Deezer, Google, and virtually any streaming platform. So, that's it. If you have questions, I think we can take
40:40
one or two questions, and of course I'm available after my talk to answer questions one by one. Thank you for your attention.
41:01
Hello, thank you for this insightful presentation. I'm just curious to know how you chose to annotate your music score in order to know which foot button to press at what time. Sorry, I didn't get it. I said I'm curious to know how you chose to annotate your music score, your partition, in order to know which foot button to press
41:24
at what time. That's a big problem, how to write... We do quite a few compositions for augmented instruments, and the writing part is a real problem. Sometimes we just have, you know, standard music scores
41:44
and we just annotate like numbers or things like that. Sometimes we really have a completely different notation because we don't have any use for the traditional five lines notation. But actually, we don't really know.
42:06
And sometimes it's even the code that's slowly becoming the score. It happens that we also do a lot of improvisation on a canvas, and sometimes
42:22
we don't even write anything, and if we have a question, we go and look at the code and say, oh yes, we decided to have that and that and that. So that's a good question, but I don't really have an answer. Another question? Okay, so thank you very much.