Formal metadata

Title
openvocs
Series title
Number of parts
84
Author
Markus Töpfer
License
CC Attribution 3.0 Unported:
You may use, modify, and reproduce the work or its content in unmodified or modified form for any legal purpose, distribute it, and make it publicly available, provided you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Release year
Language

Content metadata

Subject area
Genre
Abstract
We will introduce an open platform for Voice Communication Systems for Mission Control. Our own background is Space Mission Control Room Conferencing at the German Space Operations Center. (Markus Töpfer)
Transcript: English (auto-generated)
Hi, I'm Markus Töpfer, and I'm talking about openvocs. That's my project at DLR, and most probably it's a bit different from what you would expect. It's not specifically about open source software.
It's about an open source system design. And it's about an architecture a bit more than just the software. And I will talk about this and explain the background and explain what we did, how we specified it, and why we specified it.
To start with the topic, we need to talk about space mission control: what is space mission control, and how is it done at the German Space Operations Center? That's a picture of the mission control room. Within the mission control room, you have a lot of different positions
which are working on a specific task set to maneuver or control or do some housekeeping of a satellite or a space station. This is a picture of our main control room, K1 at GSOC, taken during a launch and early orbit phase, when a satellite is brought into its position. A lot of different people work together in the same room, and they communicate with each other on that specific task. So we have one who is responsible for the power system, another one who is responsible, for example, for the thrusters, and another one who is responsible for the overall mission. They communicate with each other within their specific task set, within their role. And even if they are all in the same room, they use the voice communication system to communicate, because otherwise it would just be too loud.
That's why they all have a headset, and they communicate with the people in the control room and with people outside of the control room. For example, for a satellite launch they need to talk to the people at the launch base. That's a high-activity phase. In other activity phases, fewer consoles are operated, and then you have three or four people who communicate with different ground stations to handle the tracking of the satellite
and things like that. To give a bit more of an introduction into voice communication systems, I will use Columbus as an example, because that's probably the best-known space mission in Europe. Before we start, we need a naming convention. In the space industry, a voice conference or a channel or a group call is called a voice loop. That's the terminology: voice loop. I just introduce it here because I will use it during the presentation
from time to time. A voice loop is nothing more than a voice conference with some people. For the International Space Station, we have a setup at GSOC where we are responsible for one part, and that's the Columbus module, which is operated in the Columbus Control Center. The Space Station is a multinational cooperation with a lot of partners outside of Europe, distributed over the whole world. From a voice system perspective, we are connected to Huntsville, Houston, and Moscow.
These are the main uplinks to the station, and in Munich we have a hub which connects all other European sites, for example universities which do some experiments on board of the Space Station, or industry which is doing some tasks on board of the Space Station. That's all connected via Munich, so it's a hub-based setup. Another uplink is in Japan, but that's not directly connected voice-system to voice-system. So how are these systems used?
You have an operator who is using a touch screen to select the communication channel, or the voice loop. He or she is using a headset, and down there you will see a push-to-talk device.
We will come to that a bit later, but it's a very important device for voice conferencing systems in space mission control. The system interface looks similar to this: you have your voice loop setup where you select the voice loop.
You have different participation states within a voice loop, shown here with these different levels. For example, there is a participation state which allows you to talk, shown here in green; another one which is for monitoring, or listen-only, in blue; and another one which is not selected, in gray.
You have something like this as an interface, and then you talk to other people. Here's an example with Houston and Munich. We have voice loop number C, and we have someone in Houston in role A who has the voice loop enabled for talk. That's why it's shown in green, and this person is communicating with someone in Munich who is listening to the loop, in blue. A lot of different people are connected to the loop and listen to the same conversation. For example, role A is the flight director: the flight director says whatever he wants to say, and the one in role D receives the information via the loop he monitors in blue. That's a classical conference, something you can do with every conferencing system that is available.
That's nothing special. What is special is that we use voice loops in parallel. We have a lot of different voice loops running in parallel, you have to be able to listen to all of these voice loops at the same time, and the communication is transmitted in parallel. That's something very, very special about these kinds of systems.
And the other one is these participation states I just talked about. You have the listen participation state when you are just allowed to monitor the loop. We have a talk enabled participation state where you are allowed to talk in a voice loop and we have the talking state. That's the state when you actually talk in a voice loop.
That's where the push-to-talk device I just mentioned comes in. You press a button to enable the microphone, and then you talk on the loop which you have previously selected for talking. That's because the communication in a voice loop is very formal.
You don't want the background communication that's going on in a control room to be transmitted within a voice loop. That's why we have this push-to-talk trigger, which is an active action to make sure that the one who is going to talk on that loop actually wants to talk on that loop. He is performing an additional action, by intention and not by chance, when pressing the button. We have a role-based access control model, where we have a user who is, for example,
the flight director and the flight director has the permission to talk on that specific voice loop. Within a 24-7 operation scenario, you need to have several users which are flight directors which have the same permission set because they have the same tasks to perform and the same things to do.
That's something you know from other administration environments; I think it's nothing very special at this conference, but it is something special to explain at a lot of other conferences. Within that role, we also define a layout for each role, because all the loops are arranged in an interface. In the previous example, role A from Houston was talking to role D in Munich. The green one has, for example, the permission to talk on the loop highlighted in red, and it's on layout position one-one. In role D it's the same loop, but with a different permission, which just allows monitoring on that loop, and the layout looks different because the loop is on a different position within the layout.
So that's something we have to handle within that system, and it's all role related. Each role or each group defines their layout, however it's convenient for them to use. When we talk about a voice communication system in the space mission context, the definition is that it defines and transmits voice data within functional communication groups. The transmission to the space station, for example, is done like this: we have the operator in Munich, who selects the loop Space-to-Ground to talk. That loop is transmitted to Huntsville or to Houston, and from there to White Sands, where the uplink to the space station is. It's transmitted to a geostationary satellite, and from the geostationary satellite
back to the space station. This means we have a lot of delay within the communication, because it's a very long path. That's how voice is transmitted in these kinds of setups. A complete overview of the system is shown here.
So, just to summarize everything we talked about, to get to the next steps: we have an administrative and permission management component, and we have a user-selection-based audio transmission component, because the user is actually choosing which loops he wants to hear and which loops he is interested in. All of that needs to be handled with a coordinated and dedicated worldwide network. At the moment, everything for voice is dedicated. It's a completely dedicated system, which means the wide area networks, the local networks, the IT infrastructure, the user terminals, the databases, the administration backend: everything is voice related and dedicated. That makes handling such a system very painful,
because you have to know everything from every area, and that makes it very difficult to change parts of the system. That's why we started to redefine how to build this kind of system, and to rethink what is really needed within such a system: what is the basic core functionality of a voice communication system for space mission control? The first thing is really specifying what we need, what is special within such an environment. As I said, we transmit a lot of things between the different stations, so we need a protocol which distributes all of the events, the type of each event and the parameters of each event.
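Such an event could be carried in a small typed envelope, for example as JSON. The field names below are illustrative assumptions, not the published openvocs wire format:

```python
import json

def make_event(event_type: str, **parameters) -> str:
    """Serialize an event as {type, parameters} for transport."""
    return json.dumps({"type": event_type, "parameters": parameters})

# e.g. a loop state change pushed between client and server
msg = make_event("switch_loop_state", loop="space_to_ground", state="talk")
```

The envelope keeps the protocol small: every message is just a type plus its parameters, which matches the "type of the event and parameters of the event" idea above.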
We use a WebSocket connection to do the distribution, which allows active communication pushes from the server to the client. We need to transmit everything related to authorization, and everything related to audio transmission. For authorization, we have implemented an interface and created a definition where we put in the user credentials, in this case username and password; it could also be other tokens. And we have a project selection, where you select the project you're actually working on. It could be that a control room is used
for the Columbus mission, or for the TerraSAR mission, or for some other kind of mission. We have a multi-mission environment set up at GSOC, the German Space Operations Center, and there we need to support different missions from the same control room. That's why we have implemented this mission selection, and with our approach we can switch the whole authentication backend to a different mission with the same system. This makes the authentication component completely independent; it's no longer dedicated to the voice system.
It can be integrated with some other environment, for example a central LDAP infrastructure or things like that. With this authentication, the user is able to select his role, log in to the system, and get his user interface.
The authentication messages we need are: a connection to the server which requests all projects available from the server, so every project that is reachable from our central system. With this message, the user can switch the whole authentication backend. Then we need a user login and a role login. With the user login, we have the authentication of the user,
and with the role login, the role is set for the user. Once the user has performed his login with username and password, he gets a list of his roles. Then he selects his role, and with the role he selects his permission set. A very easy example: we have real-time operations and we have simulations.
For simulations, we use different voice loops, because when they simulate something critical, for example a power-down of the space station, they don't want to disturb the real operations with the simulation scenario. That's why we have, for example, two different worlds: one is real time, one is simulation. The user selects whatever he wants to do at that moment; that's up to the user. These are the three authentication messages we need. And what can we implement with these three messages? As I said, we can switch to different server backends.
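The three authentication messages could be sketched like this. Message and field names are assumptions for illustration, not the actual openvocs message set:

```python
import json

def get_projects() -> str:
    """Ask the server for every project reachable from the central system."""
    return json.dumps({"type": "get_projects"})

def user_login(project: str, user: str, password: str) -> str:
    """Authenticate the user against the backend of the chosen project."""
    return json.dumps({"type": "user_login",
                       "parameters": {"project": project,
                                      "user": user,
                                      "password": password}})

def role_login(role: str) -> str:
    """Select one of the roles returned after a successful user login."""
    return json.dumps({"type": "role_login", "parameters": {"role": role}})
```

Selecting the project first is what makes the backend switchable: the same client can talk to completely different authentication backends per mission.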
We can use different server backends from the same central voice distribution system. And we can perform instant role switches within the mission. For example, someone who is responsible for the power may need to change his role to the systems role: he is then not just responsible for power, he's also responsible for the IT systems on board, for example because the other guy is just not present in the control room, or whatever. Then he needs to handle that too, and he can do an instant role switch. In addition to the authentication,
we need to do the state handling, to select the voice loops and to select the audio transmission paths. We need to be able to switch a loop off, to switch a loop to monitor, and to switch a loop to talk. These are the three states of a voice loop we need to switch.
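The server-side handling of such a switch can be sketched as a small rule, mirroring the described behavior: the request carries the loop and the new state, and the server answers ok or not ok. This is a simplified model under assumed names, not the reference implementation:

```python
# Allowed loop states per the protocol description above.
LOOP_STATES = {"off", "monitor", "talk"}

def apply_switch(current: dict, loop: str, new_state: str,
                 permitted: set) -> bool:
    """Apply a requested state switch; return True (ok) or False (not ok)."""
    if new_state not in LOOP_STATES:
        return False                      # unknown state: not ok
    if new_state == "talk" and loop not in permitted:
        return False                      # role may not talk here: not ok
    current[loop] = new_state
    return True                           # ok
```

The permission check ties the state handling back to the role-based access control: only loops the selected role is allowed to talk on can be switched to talk.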
The state switch itself is very easy: we send the state switch to the server, with a parameter list containing the voice loop and the new state, and then this new state is set. It's pretty easy. The same goes for audio switch events. Audio switch events are related to the volume of a voice loop, or to an overall volume, or to the talking state. These are the audio events we need. With these state events, we can switch a voice loop to the desired state, we can switch the volume of a voice loop, and we can signal the talking state. So we need a protocol with exactly eight different types of messages to build a voice communication system for mission control: authentication events and state events. Typical voice communication systems look like this. That's the design base for voice communication systems
in the control room up to now. It's still the same design: you have a user interface with some buttons you click, a design from the 1950s, and it's still the same design pattern. What we did is simply reverse the design process.
We are not going from a voice transmission technology up to the specific implementation in the mission control room; we start with the specific requirements of the mission control room and design a voice communication system below that. That's the thing: we just reversed the process, and with the protocol we defined, we are able to do this. We have started some implementations based on our protocol. For example, we have the state distribution, which, with a user broadcast,
is able to synchronize states across different devices. This way we are able to build redundancy into the system, just because we distribute the events over different devices.
How do we do this? We have a state switch that is triggered by the user. The user presses a button, and then, on his laptop, the button press triggers the switch-state at the server, and the server responds with an okay or not okay.
This user can be logged in on multiple devices. For example, he has a console, and on the left and on the right he has a voice terminal; he logs in on both devices but operates just one of them. He operates the left console, clicks a button, and the state is switched; the left console is, for example, switched to talk for the flight director loop. The right one is not, because he has not operated it yet. But if we distribute the state switch over all user logins, so it's a broadcast of the state switch, and each client evaluates "do I have the same state?" and, if not, switches to that state, then the system synchronizes automatically over different devices. After the state switch, which actually means "I want to participate in the loop",
so I click the loop, the state switch is signaled to the server. The server switches the media communication (I will introduce the media connections later), changing the media path and mixing the channel into the stream that is related to that client. Then the client on the left side has the new media stream and is synchronized to that state. On the other console,
the client gets a switch-state message for that user, because it is connected to the user broadcast. It checks its user interface and sees: I have not clicked to change the state, but I got the message from the server. So it sends the same message to the server again and also switches the state. The server switches the media communication, or the media signaling, for that client too, transmits the new media, and the client then shows in the interface that it has also changed the state. This way we are able to synchronize a lot of different devices:
laptops, tablets, smartphones, smartwatches; over multiple servers, because we can just forward the user broadcast over all user logins; and over multiple networks. We can have a wired network connection on one client, a laptop for example, while on the other side we use a tablet connected over a wireless network: completely independent networks and completely independent devices, which are synchronized and which distribute the same media. In case of a failure of one component within the system, the user just uses the other device and is able to operate further. We don't need to stop operations, because he has a completely independent, redundant device set. This way, we implement redundancy very easily, based on our protocol.
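The synchronization just described can be sketched as a small client-side rule: when a device receives a state switch via the user broadcast and does not yet have that state, it re-sends the same switch to the server and adopts the state. This is a simplified model of the described behavior, not the reference implementation:

```python
class Device:
    """One logged-in device of a user, synchronized via user broadcast."""
    def __init__(self, name: str):
        self.name = name
        self.states: dict = {}
        self.sent: list = []  # messages this device sent to the server

    def click(self, loop: str, state: str) -> None:
        """The user operates this device directly."""
        self.sent.append((loop, state))
        self.states[loop] = state

    def on_broadcast(self, loop: str, state: str) -> None:
        """Another device of the same user switched a state."""
        if self.states.get(loop) != state:
            self.sent.append((loop, state))  # re-send to the server
            self.states[loop] = state        # then adopt the state

# The server forwards every switch to all logins of the user.
def broadcast(devices: list, loop: str, state: str) -> None:
    for d in devices:
        d.on_broadcast(loop, state)
```

After a click on the left console and one broadcast round, both devices converge on the same loop state, which is exactly the redundancy mechanism described above.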
Otherwise, you have redundant systems which are hardwired, a hardwired redundancy, which is not required with this protocol. In addition to the user broadcast, which synchronizes devices, we use a voice loop broadcast, which carries presence information. Whenever a new member joins a voice loop, so he has selected the loop to monitor or to talk, this information is distributed to all other users, and they can then be informed within the user interface that someone new joined the voice loop, or left the voice loop, or pressed push-to-talk and is actually talking on that voice loop, and things like that. That's the voice loop broadcast. We haven't talked about the media connections yet, so now I come to the media connections. What we want to have is separate media and signaling paths. For example,
as with classical VoIP, where you have SIP and RTP: we have our signaling protocol, and we also use RTP for the transmission of media streams. We want to implement point-to-point and point-to-multipoint media connections, from one client to one server or from one client to multiple servers. We want to implement redundant media connections, from one client to two different media servers, all synchronized over one signaling server. And we want that because we want to be able to implement setups
where you have, for example, a webpage running on a tablet and a usual telephone which is calling a media server. With the telephone call, you have one media stream, and with the user interface you select whatever you want to hear on that media stream. With this kind of setup, you can instantly be part of a communication environment for space mission control, even if you don't have any dedicated equipment for it. You can call in, get a link to a webpage, switch the loops you want to hear,
and you are able to listen to the communication. That's interesting, for example, for public relations events, and it's also interesting for cases where, for example, a university just has an interest in listening to the communication of an experiment which is running at another university. They are somehow connected, but they don't have the money to spend on a dedicated voice terminal or a dedicated infrastructure to communicate with the space operations center. With such a setup, they can reuse a telephone and a webpage and are able to communicate with the control room. So it's a supportive element which we want to have. And we are able to use redundant media connections
within a client with this kind of protocol. That's something we are working on: we try to figure out how to route voice over WiFi as the prime connection and have a fallback connection, for example over LTE. With voice over WiFi, you can have a dedicated infrastructure, a dedicated network which you control within the control room, and you can have, for example, a fallback communication channel over LTE to inform your partner in Houston that your whole system is down, your whole network is down, and you can't communicate otherwise from within your own infrastructure. With a backup channel over LTE, you use the public infrastructure and communicate over it as a backup. That's something we try to figure out how to use and how to build, and this is enabled by our protocol. We have done a prototype development and implemented a reference system for this protocol.
The first thing we have done is an HTML5 user interface which uses WebRTC for the media connections, and a webpage with some buttons to switch the loops. That's the easiest implementation you can imagine: it's just a web browser, and you can select and switch the loops you want. Everything you need is provided over just a webpage.
Within that prototype development, we figured out that some multi-touch interactions may be very useful for the user interface. We get some new ways to interact with the system: for example, turn the volume up and down with two-finger pinches, or change the output when we click
with one finger on the screen and use the other finger to select which output we want to use on the system, release a loop by just swiping it away, enable talk with a long press instead of just a button click, or open the playback by scrolling up or down with a finger.
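These gestures can be sketched as a small mapping from touch events to loop actions. This is a minimal illustration; the gesture and action names here are assumptions, not the actual UI code:

```python
# Map multi-touch gestures to voice-loop actions.
# Gesture and action names are illustrative, not the real UI code.

GESTURE_ACTIONS = {
    "pinch": "volume_down",        # two-finger pinch lowers a loop's volume
    "spread": "volume_up",         # two-finger spread raises it
    "swipe_away": "release_loop",  # swiping a loop off screen releases it
    "long_press": "talk",          # holding down enables push-to-talk
    "scroll_vertical": "open_playback",
}

def handle_gesture(gesture: str, loop_id: str) -> str:
    """Translate a recognized gesture on a loop into an action string."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is None:
        return "ignored:" + loop_id
    return action + ":" + loop_id

print(handle_gesture("long_press", "flight-director"))  # prints "talk:flight-director"
```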
These are new interaction possibilities for a voice system which weren't available before. And we have a very hard conditioning issue at the moment: all new operators are used to consumer electronics. Within the systems which are implemented today,
industrial touch screens are used, which means you just have a single click or press event, while on the consumer devices people own they are used to multi-touch interaction. That needs to be implemented within the next generation of voice communication systems, or any other system
within the mission control room, because that is what the users expect. We have done some first designs, implementations and tests on multiple devices on Android.
We have developed a dedicated Android version, we have a web-based version, and we are going to develop for smartwatches. So we want to use different types of devices as interaction devices with the voice system, which is completely new, which is something
you can only do when you have an open protocol defined, so that different endpoints can connect over that protocol.
That's why we've done this open openvocs protocol specification. We try to change the push to talk from the analog button press to pressing on the smartwatch, for example, and use the smartwatch as a push-to-talk device, playing around with such kinds of devices.
We are going for a system evolution which is open, which enables state-of-the-art user interactions, and which is independent: independent not just from a vendor, but also from the operating system
and from the operated hardware. That's our design approach, and that's why I said it's not just a software development process, it's a system design process we do.
And for this system design, our current status is that we are testing at the moment at the European Astronaut Training Center. We were there yesterday and tested communication with the equipment which is on site.
The use case is that there are astronauts training extravehicular activities underwater. They try to operate on the Columbus module in a simulated zero-gravity environment
and they are just diving. And they have some diving instructors which are working around and communicate with each other and they use a communication system which is shown here. You see this guy who's standing
has something like I have: a microphone and a back end. And on the back end you can select some loops, so it's not just switching on and off. He has three buttons to switch to a specific voice loop: on one voice loop he's talking to the astronaut,
on another voice loop he's talking just to the instructor, and on the next voice loop he's talking to a crane operator, for example. And the astronauts are underwater, they have the mask, and within that mask they have a microphone and a headset.
And that's connected over a wired connection to a back end, and this back end is then connected over a wireless connection to the guy who is walking around the pool. That's the use case. We have a smartphone as a wireless device,
worn with a jogging strap and connected over WiFi to a server. Then we have some cabling, which is not the easiest cabling; that's why it's shown like this. And in the end there's the astronaut's underwater helmet, which has a microphone and a headset.
And that's actually implemented like this. We have Raspberry Pis. These Raspberry Pis are our servers and so one is our server, the other one is our interface. And the interface is using an audio connection
that is connected over some pre-amps to the mask, which is shown here. And the operator is walking around the pool and is using his smartphone over a wireless network. And that's the setup we tested last week
and yesterday we tested a different type of connection to the mask because of the signal levels. We need to adjust the levels somehow: we need the right amplification of the signal and the right power to be transmitted over the setup.
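Getting the levels right is mostly a question of applying the correct gain. As a sketch of the arithmetic involved only (not the actual signal chain in the setup), a gain given in decibels is applied to linear samples like this:

```python
def apply_gain(samples, gain_db):
    """Scale linear audio samples by a gain given in decibels."""
    factor = 10 ** (gain_db / 20.0)  # dB -> linear amplitude factor
    return [s * factor for s in samples]

# +6 dB roughly doubles the amplitude.
out = apply_gain([0.1, -0.2], 6.0)
print(round(out[0], 3))  # roughly 0.2
```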
That's what we tested yesterday. That's the current state of our system. So we started with the idea to identify and implement the core messages for our system.
And our current state is that we use these core messages and have implemented a system which is actually useful for astronaut training. It's using Raspberry Pis and smartphones, so it's very cheap consumer hardware
which is used here, because we have that open protocol and we can implement everything needed in this context with it. And that's an important point: we need to have that protocol in place so that everyone can implement,
for example, a back end or a front end for a voice communication system for mission control. That's what we are going to do. That's what we want to do. We want to enable everyone to build a voice communication system for mission control. That's why we are here.
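A back end, for example, has at minimum to mix the audio of the loops a client has selected, each at its own volume. Below is a minimal sketch of that mixing step, assuming plain float samples in the range -1.0 to 1.0; the actual prototype delegates mixing to an existing open source back end:

```python
def mix_loops(loops, selection):
    """Mix selected loops, each scaled by its per-loop volume.

    loops:     dict loop_id -> list of float samples in [-1.0, 1.0]
    selection: dict loop_id -> volume factor in [0.0, 1.0]
    """
    length = max(len(loops[lid]) for lid in selection)
    mixed = [0.0] * length
    for lid, volume in selection.items():
        for i, sample in enumerate(loops[lid]):
            mixed[i] += volume * sample
    # Clip to the valid range so the sum cannot overflow on output.
    return [max(-1.0, min(1.0, s)) for s in mixed]

# A loud primary loop plus a quiet background loop.
loops = {"space-to-ground": [0.5, 0.5], "power": [0.4, -0.4]}
out = mix_loops(loops, {"space-to-ground": 1.0, "power": 0.25})
print([round(s, 3) for s in out])  # approximately [0.6, 0.4]
```

Keeping the per-loop volume in the selection is what makes the "background chatter at low volume" behavior of a control room possible.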
Thanks a lot. That was my presentation. And now I am available for questions.
Let me test this, okay. So what other use cases can you imagine for openvocs outside of space mission control? Any mission control room scenario.
Because in a mission control room you usually have voice communication within groups, so you communicate over different groups. And something which comes to mind very early is, for example, emergency use cases,
where you have an emergency somewhere, you have people helping within that emergency, you have a lot of helpers, and you organize these helpers in groups. So let's say there's an earthquake. Within that earthquake you have one group
which is responsible for searching for people who are missing, and this group is organized in one communication group. Another group is helping the people who have already been found, who are wounded and things like that, in a hospital environment.
And then you have the next communication group which is organizing, for example, equipment, and another communication group which is organizing all kinds of food and things like that. In such use cases you can build it immediately with such a system. And that's one of the scenarios where it's very helpful
to have a system that you can implement with a smartphone, because everyone has a smartphone. He just has to connect to the back end and use that protocol, and within the back end the administrators are able to set up the groups
and assign someone to a new group, and then he can help with the task that he's assigned to. Okay, thank you. That's one use case. Okay, are there further questions then?
Hi, how are you doing the audio mixing? Are you using your own software for that, or do you use already existing open source software? At the moment we are using FreeSWITCH. It's open source software
and we are mixing with that back end. Okay, I have a question myself. I work in a data center environment and this seems like a fantastic tool to use during larger outages where lots of teams are communicating,
or need to be communicating. Let's say it's a power outage and then you've got storage back ends which need to be brought back online, and that's interesting to some of the people. So I find the multi-party aspect really interesting. How does that work in practice when you have, I don't know, five or six background chatter loops? Can you change the volume on some of them
so they sort of become background noise? How does that work in practice? So each loop has an independent volume setting, and you can switch each loop to a different volume. That's usually how it is done. For example, the space-to-ground loop
is set up with a high volume because it's very interesting: when something is communicated on that loop, everyone wants to hear it. And then each user has his own loop, a home loop. For example, the flight director has a flight director loop, the power guy has a power loop, or the storage guy has a storage loop. This loop is at a high volume, and other loops which are just interesting
if nothing special is going on on the high-priority loop are set to a lower volume. Okay, thank you. So any more questions? Doesn't look like it. So thanks a lot, Markus. And give him a round of applause again.
Thanks.