LikeToHear: Self-Adjustment of Open Source Mobile Hearing Aid Prototype
Formal metadata
Number of parts: 275
License: CC Attribution 4.0 International: You may use and modify the work or its content for any legal purpose, and reproduce, distribute and make it publicly available in unchanged or modified form, provided that you credit the author/rights holder in the manner they have specified.
Identifiers: 10.5446/51889 (DOI)
Transcript: English (automatically generated)
00:00
And welcome back to the last talk of RC3 this year: Peggy Sylopp with Hearables, a citizen science open hardware project on hearing aids, and the DIY hardware and software behind it.
00:22
So, go ahead. Thank you very much. Hello. Yipeng already said it: my name is Peggy Sylopp. I'm a computer scientist with a Master of Public Policy, and a media artist. I'm currently working at Fraunhofer IDMT in Oldenburg, in the research group Personalized Hearing Systems.
00:42
I want to introduce to you LikeToHear, a self-adjustable open source mobile hearing aid prototype, which we developed in the citizen science project "Hear how you like to hear" from 2017 to 2020. I'll give you an overview of the motivation,
01:03
what we developed, and the open issues. So let's start with the motivation, and this is all about hearing loss. When you realize that actually 17% of the population in Germany has
01:22
hearing impairments, that's actually quite a share. More or less the whole world population has a similar percentage. There are some sources giving slightly different numbers, but you can more or less take it as valid all over the world. Now, in Germany we're actually privileged.
01:42
We have access to hearing aids and hearing support. But even in Germany, about 75% would not use hearing support to get a somewhat better hearing ability. And the following is a very optimistic calculation:
02:01
when you do the numbers for Europe, for example, with 740 million people living in Europe, that means that actually 52 million people in Europe have an untreated hearing impairment.
02:20
That's a huge number. And even when you're living in a country like Germany, where people have access to hearing aids, nearly half of them return them after the first fit. They get them from the audiologist, try them out,
02:40
and say: no, I don't want it, and give it back. I don't know whether you would do the same thing when looking for headphones, that half of the people just give them back. But with hearing aids, that's how it is. So something is wrong with the concept of hearing aids and the way they are adjusted. So our assumption was that we want to get behind what's actually
03:01
wrong with that concept of hearing aids. Why doesn't it meet the customers' needs? The first thing we said is our assumption that adjustment in the audiology laboratory is, for most people, unsatisfactory. I myself have a mild to moderate hearing loss,
03:20
and I tried it myself and had the same feeling. So this citizen science project, LikeToHear, was initiated in 2017 to find out: is it just me, or does everybody think like this? I took this format from sound art, sound walks, actively listening to surrounding sounds.
03:43
And we had this mobile hearing aid prototype with an intuitive GUI for interaction, so people could easily adjust the surrounding sound. And we did these sound walks, where people were adjusting the sound in the way
04:00
they liked to. And we got about one hour of logged data of people's behavior. We also asked people: what do you expect from hearing aids, what do you want from a hearing aid? And we got 550 submissions. And we held two hackathons; actually, the first ones were done at Fraunhofer IDMT.
04:23
About 200 people took part. So if you're interested in learning more about the project behind this, you can look at this web page. And actually, it was the first citizen science project, the first open source project, and the first hackathons
04:40
run at Fraunhofer IDMT, Oldenburg. It was quite a new format for science, very experimental, and it was very exciting to go out of the laboratory and into real-life situations. What we did was take more or less three sound
05:01
situations: one on the street, another in cafés with a lot of babble, and also walking around in parks. We had this mobile hearing aid prototype with some headphones, and we just let people adjust the sound like they wanted to,
05:22
without making an audiogram, without measurements. People with hearing problems, people without hearing problems, all ages, all genders, everything: we just let people adjust and saw what came out. Is there any advantage in putting the adjustment of the hearing aid prototype into the hands
05:43
of the people, rather than leaving it to an algorithm or to audiologists? And one citizen scientist who is still active and actually uses the hearing aid prototype in everyday life is Dr. Otto Spiegel, who has a severe hearing loss
06:03
and had really big problems communicating in normal life at all. He gave me this video message that I want to play now. Hopefully it works. Yeah, here we are.
07:07
[Video message from Dr. Otto Spiegel plays.]
07:53
This shows very well the potential that self-adjustment can have,
08:01
just by optimizing the sound amplification in a way that perfectly fits his needs. It greatly enhances his ability to hear and to understand.
08:21
But now I want to explain to you how we actually made this hearing aid prototype. We have this LikeToHear framework, which is what we developed for the citizen science project: essentially an easy and intuitive control for the hearing aid prototype, for everybody, even people who are not very tech-interested.
08:43
And then we had this basic hardware setup, which we took over from the open source mobile hearing aid prototype by Prof. Dr. Marc René Schädler, who built it on a Raspberry Pi and some other modules that I'm going to explain soon.
09:01
He also made some basic adjustments, actually for his audiology students to use it, but we could also use it for our purpose. And the hearing aid algorithm itself
09:24
is actually running on the processing platform of the open Master Hearing Aid, openMHA, which is made for research and development
09:43
of new hearing aid algorithms and is actually more of an academic platform. But I'm going to explain this a bit better later. Now, about the hardware: this is basically the hardware you need if you want to buy the parts and build it yourself. You need a strong battery
10:00
of about three ampere, otherwise it won't run a Raspberry Pi and a sound card. And we use these headphones, which are quite expensive, but I think you can also use somewhat cheaper ones; just don't use too cheap microphones, otherwise you won't be happy with it. And we have this preamplifier for the microphone signal,
10:21
which is a kind of tinkering thing; that's why we made a special one for now. But all in all, it's about 200 euros, and I think you can save a bit if you change the headphones and mics. And we made this preamplifier specially for the RC3.
10:43
So we have this SMT version; we made some of them, and they are already assembled. You can order one, just get in contact with me. I have a few; I already gave a talk yesterday, and I think three are left. So if you're interested, just contact me and I can give one to you.
11:04
That makes it a bit easier to put together; you don't have to solder it yourself. As for the usage of the hardware and the software from the user's side, it looks like this: you have this box, and you have your smartphone, which you can use as an interface.
11:22
You just log in to the Raspberry Pi, which acts as an access point, and call up the web page. The web page gives you the possibility to control the sound (I'll explain this soon), and the sound control is processed by the Raspberry Pi.
11:41
The changed sound is then sent out to the headphones. And we also do some logging of the surrounding audio and of the presets that people choose, so we could have a look afterwards at whether there are any patterns in how, in certain situations,
12:02
people react to sound and what kind of presets they choose. We got some very interesting and inspiring ideas out of this, but I don't want to go too deep into that in this talk. If you're interested in the data analysis, just get in contact with me. Now for the control: to explain how it works, it's really very simple.
12:23
You have this circle; if you look to the right, that is the control page you would get. If you go up with the circle, you get more volume and it gets louder; if you go down, it gets softer.
12:42
And on the x-axis you have a sound balance: the further you go to the right, the brighter the sound, and the further you go to the left, the darker the sound. That's very easy. And for every single point of this GUI, the graphical user interface,
13:02
you get a certain sound setting; I can explain this later. So basically, for the web page, we had this landing page, where you fill in a user ID and start with the submit button.
13:21
Then, besides the control, you have a reset button for resetting and reloading the app, and also an on/off button for the amplification. And we have control.js, which is a simple JavaScript class for control and WebSocket communication.
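As a rough illustration of what handling such a control message on the backend could look like, here is a minimal Python sketch. It is not the project's actual code: the JSON field names, the value ranges, and the snapping to the 10 by 10 preset grid described later in the talk are assumptions made for this example.

```python
import json

GRID_SIZE = 10  # 10 x 10 preset grid, as described later in the talk (assumed layout)

def parse_control_message(raw: str) -> dict:
    """Parse a control message sent by the browser GUI (field names are assumed)."""
    msg = json.loads(raw)
    # Clamp the circle position to the unit square.
    x = min(max(float(msg.get("x", 0.5)), 0.0), 1.0)  # sound balance (left/right)
    y = min(max(float(msg.get("y", 0.5)), 0.0), 1.0)  # overall gain (down/up)
    return {
        "x": x,
        "y": y,
        "on": bool(msg.get("on", True)),         # amplification on/off button
        "reset": bool(msg.get("reset", False)),  # reset button
    }

def snap_to_preset(x: float, y: float) -> tuple:
    """Quantize a continuous circle position to a cell of the preset grid."""
    ix = min(int(x * GRID_SIZE), GRID_SIZE - 1)
    iy = min(int(y * GRID_SIZE), GRID_SIZE - 1)
    return ix, iy

if __name__ == "__main__":
    msg = parse_control_message('{"x": 0.72, "y": 0.35, "on": true}')
    print(snap_to_preset(msg["x"], msg["y"]))  # prints (7, 3)
```

In the real prototype, control.js sends the position over a WebSocket and a Python server socket receives it; this sketch only shows the message handling itself.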
13:42
The objectives for the self-fitting control were that it should be nothing special: common Python modules, plain HTML and JavaScript, easy to understand if you want to reprogram it or understand how it works.
14:00
It should be independent of third-party components and open source, for the participation of citizens. So, to look a bit into the software: there is this LikeToHear control and interaction framework, which controls the hearing aid prototype.
14:21
Remember, this is where the hearing aid algorithms are implemented on the Raspberry Pi. That means some hardware calibration was done, plus the software setup so that it runs on the Raspberry Pi, and a basic openMHA configuration with dynamic sound compression and feedback reduction.
14:40
That's all we use for now. openMHA itself has many more possibilities, but here it is a basic hearing aid; it has the basic hearing aid features. As I mentioned before, it is a research platform for novel algorithms, and it provides a TCP/IP interface, which made it possible to build the web app.
15:02
Once you get there, it's quite easy to configure and to run on a Raspberry Pi, but you could also run it on any other Linux system. You have the connection to ALSA, and the audio is transferred between the applications by the JACK Audio Connection Kit. So this is basically the architecture.
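Because openMHA exposes its configuration language over TCP, a backend script can talk to it with a plain socket. The following Python sketch only illustrates that idea under assumptions: the port (openMHA commonly listens on 33337 by default), the reply handling, and the example command may differ from the actual LikeToHear setup.

```python
import socket

def send_mha_command(command: str, host: str = "127.0.0.1", port: int = 33337) -> str:
    """Send one line of openMHA configuration language and return the raw reply.

    The port, the success marker and the reply handling are assumptions here;
    adjust them to the openMHA instance you are actually running.
    """
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        sock.settimeout(1.0)
        reply = b""
        try:
            # Read until a status line such as "(MHA:success)" appears or the
            # server stops sending data.
            while b"(MHA:" not in reply:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                reply += chunk
        except socket.timeout:
            pass
        return reply.decode("ascii", errors="replace")

if __name__ == "__main__":
    # "cmd=start" starts processing; which other variables can be set depends
    # entirely on the plugins loaded by the openMHA configuration file.
    print(send_mha_command("cmd=start"))
```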
15:26
Then, to go a bit into the amplification we use: it's just a very simple amplification over all frequency bands. When you move the circle more to the left side, it's just a linear amplification of the same amount over all frequencies.
15:44
And if you go more to the right, you get more amplification on the higher frequencies. That is the sound balance. And if you look at the overall gain, when you move the circle up and down along
16:05
the y-axis, you see that the further you go up, the more amplification you have. But at higher volumes, beyond a certain point, you don't add as much amplification, because it would get too loud. That is the knee point you see.
16:25
So it's a grid of 10 by 10 presets. You then automatically adjust the overall frequency amplifications. Like I mentioned, the x-axis is sound balance, the y-axis is overall gain, and you have a bit of compression of
16:45
high input levels. For the frequencies, the processing used about nine bands, and the audio logging about six bands. The presets were based on two channels, but we actually put the same amplification
17:02
on both of them, so it is sort of mono, with nine bands. And we had gain presets for 65 different input levels, with a bit of compression and a step size of two from a minimum input gain of zero.
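As a rough sketch of how such a preset lookup table could be generated: the gain range, tilt, and knee used below are invented numbers that only mirror the description above, not the actual LikeToHear values.

```python
NUM_BANDS = 9        # frequency bands used for processing, per the talk
GRID_SIZE = 10       # 10 x 10 preset grid
MAX_GAIN_DB = 30.0   # illustrative ceiling for the overall gain
MAX_TILT_DB = 12.0   # illustrative maximum extra boost of the highest band

def preset_gains(ix: int, iy: int) -> list:
    """Return per-band gains in dB for the preset at grid cell (ix, iy).

    ix (x-axis) is the sound balance: 0 means flat amplification, 9 means a
    strong high-frequency emphasis. iy (y-axis) is the overall gain; above
    roughly two thirds of the range the gain grows more slowly, imitating
    the knee mentioned in the talk. All constants here are assumptions.
    """
    y = iy / (GRID_SIZE - 1)
    overall = MAX_GAIN_DB * (y if y <= 0.66 else 0.66 + 0.5 * (y - 0.66))
    tilt = MAX_TILT_DB * ix / (GRID_SIZE - 1)
    return [round(overall + tilt * band / (NUM_BANDS - 1), 1)
            for band in range(NUM_BANDS)]

# Build the whole lookup table. The real prototype additionally varies the
# gains over 65 input levels to realize the compression of loud input.
LOOKUP = {(ix, iy): preset_gains(ix, iy)
          for ix in range(GRID_SIZE) for iy in range(GRID_SIZE)}

if __name__ == "__main__":
    print(LOOKUP[(7, 3)])  # gains for one example preset
```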
17:23
To get a bit into the software: we use Python modules. There is a main module that is connected with the user interface, and it passes the information from the user interface to the control, a control Python script, which takes the presets out of lookup tables stored as JSON. On the other side, it
17:48
logs the presets and the audio level analysis with a JSON logging Python module, and it also controls the openMHA processing chain of the sound amplification. So if you look at the diagram,
18:06
you see basically the same thing here: the main module is connected with the server socket, and it also communicates with the control
18:28
script and then does the JSON logging and handling. So that's basically how it works. And for the sockets, we have control.js for the socket and
18:42
WebSocket interaction. And we have the soundformer for controlling the 2D interface elements, and the menu handling for the switch and reset you saw before on the user interface. This is how it looks. With the JSON logging, we log the state:
19:03
whether there was a reset, the on/off status, and the preset position where the circle was set. So that's basically the current state of the project.
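A logged entry of that kind could look roughly like the following sketch; the field names and the JSON-lines file format are assumptions for illustration, not the project's actual logging schema.

```python
import json
import time

def append_log_entry(path: str, ix: int, iy: int, on: bool, reset: bool) -> None:
    """Append one state entry (preset position, on/off, reset) as a JSON line."""
    entry = {
        "timestamp": time.time(),          # when the user changed the setting
        "preset": {"x": ix, "y": iy},      # circle position on the preset grid
        "amplification_on": on,
        "reset": reset,
    }
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    append_log_entry("liketohear_state.log", ix=7, iy=3, on=True, reset=False)
```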
19:26
So, what's our next step? Let's listen to the next citizen scientist who took part in the project; he's going to give us some inspiration. Let's let it run. Hello, my name is Giorgio Curicci. I come from Brazil and I have a medium hearing loss,
19:42
and I also live in Berlin without public health insurance, only a private one, so I cannot afford a hearing aid kit. I took part in the LikeToHear project, testing the prototype for about 10 days. And I just experimented with some situations, for example
20:06
giving a class to one person in a closed, pretty quiet place. It was really, really helpful. I could understand the person even when the distance was not that good, where normally I wouldn't hear them. But in a place like a bar, for example, with a lot of people talking,
20:27
a lot of noise and different distances, it was quite difficult to understand when a person was talking to me directly. And I think a smaller prototype, like a smaller version, will be really,
20:46
I mean, physically speaking, it will be really helpful for me, because carrying the big box, and also the wires of the headphones, is not so practical. So, that's okay. Thank you. Bye. So yeah, there are some open issues, and I actually want to call
21:08
for support. It would be cool, like he mentioned, if it were just a bit smaller, and we could use even smaller Raspberry Pis, or you could use a microcontroller. We also have some ideas
21:21
for making the microphone preamplifier a bit smaller. We have the 3D-printed case, but it needs some adaptation; you can find the STL file on GitHub if you like. And it would also be more economical, and better for hearing, to use true wireless headphones, so anybody who wants to try that out, just let me know what your experience is. And for the web interface
21:46
update, it would be good to have an update of the operating system to Raspbian Buster, and better, more robust browser communication.
22:04
These are some basic things, and it's not that much work to do. And there are some other things that would be nice to have: a more expert mode for enhanced adjustment, and a bit better noise suppression. And what would be really cool would be more feasible sound processing
22:23
in real time, for example with the Faust programming language. If you're interested, just have a look at the GitHub repository, where you'll find more information. I can also give you some hints for good papers: the first one is more or less a benchmark against some
22:44
actual hearables that have these features integrated. The next one shows something about over-the-counter hearing aids, which are made for self-adjustment without an audiologist. And the last one actually shows where our idea of this 2D touch
23:06
interface came from; that was actually my group, my colleagues at Fraunhofer IDMT. So, thank you very much for listening. If you want to support us, get in contact with me. We also have some social media accounts on Twitter and Instagram that you can follow if you want
23:26
to be kept updated. And thanks also to the BMBF for the support, and to Fraunhofer IDMT. Thank you. Yeah, thank you very much, Peggy. There are no questions in the chat right now.
23:46
Anything else you want to add to the talk? No. Okay. But I'll say thank you, because this is just what we should be doing more of: open source hardware, open source software. And
24:06
yeah, if citizens can do it themselves, why not? And maybe the cooperation between the pros and the people who need it can get better that way. Thank you very much.
24:22
Thank you too, for giving me the chance to make this project public. Yeah, all the best for your further updates, and I'm sure we'll see you again sometime. And until then, check out the social media accounts or the pages where you can find the information.