
Can Linux network configuration suck less?


Formal metadata

Title
Can Linux network configuration suck less?
Series title
Number of parts
90
Author
License
CC Attribution 2.0 Belgium:
You may use, modify, copy, distribute, and make the work or its contents publicly available in unaltered or modified form for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication year
Language

Content metadata

Subject area
Genre
Abstract
From the kernel to administrators and users: network configuration scripts have proved insufficient in modern environments. NetworkManager has been too focused on desktop and laptop usage. Alternative projects including netifd, netcfg, connman, wicd and wicked are not production-ready for servers either. Is it possible to have a single network management daemon serving desktops, laptops, servers and virtualization hosts alike? What features should it support? What are the expectations of the community? Can network configuration in Linux distributions suck less? Bring your own questions, too. This talk will cover current networking problems from the kernel to administrators and users, as well as current development in this field, with a focus on NetworkManager's ongoing transition from a wireless connection configurator to a full-fledged operating system daemon. One of the main topics will be the reasons to have a network configuration daemon at all and the decisions that affect its usability for various use cases. Its purpose is not only to inform the open source community but also to gather the community for a larger-scale networking round table. We already tried to do this on a smaller scale and it worked well. I expect follow-up discussions to evolve and to contribute to the effort to make network-related projects more community-oriented. Linux network management is already changing. Your choice is whether to participate in the change and influence it, or just wait for the results.
Transcript: English (automatically generated)
Thank you all for coming here. I will be talking about network manager and network management in Linux systems in general. First, I would like to start with sort of a big picture or looking at things from the
far distance: what you can use, what's possible, what's not possible. Linux will do quite a lot of auto-configuration for you. This is sort of special for Linux; for example, the BSDs do less auto-configuration. First, you have the basics.
So if you just turn your loopback interface up, you are immediately getting two IP addresses. One is the IPv4 one, 127.0.0.1. The second is the IPv6 one, ::1.
So this is the simplest case of automatic configuration of addresses. And on top of that, you have link-local addresses in the kernel today. But these are only implemented for IPv6. Which is not a big deal, because it's just link-local, so you don't need any connectivity for that. And I think it's
not worth even considering IPv4 link-local now. It's just too late to work with that. But then you have the auto-configuration of global addresses, which is sort of more difficult, more collaboration with other things
on the network. But still, you can get public addresses by just receiving router advertisements, and the kernel configures the whole stuff. When it configures your public addresses, it can also configure
a default route. It can configure the device routes, which are sort of usually viewed as part of the addresses. If anybody doesn't know, the device routes are what you get from your IP address and netmask
or prefix length; for example, the address 192.0.2.10/24 gives you the device route 192.0.2.0/24 on that interface. And what is somewhat usually forgotten is that the kernel can do the removal of the addresses. So when their lifetime is over, which usually happens for the global ones, the kernel actually removes
the addresses. This is a very important feature that I will be talking about again with regard to other ways to configure addresses. But now let's continue with the kernel. The kernel allows you to use the Netlink API,
which you can, without problem, use from C programs using the libnl library. It's actually quite easy. And that way, you can tell the kernel to configure just about anything regarding networking, like addresses, link options,
and routes. Some things are not done via Netlink. These are usually simple values, simple options that you can set with sysctl or directly through the /proc files. And the kernel also provides a recursive DNS
server list and a DNS search list. This is stuff usually pushed to /etc/resolv.conf. But this is currently pretty lame. It doesn't work as we expect. And I'll be talking about it later.
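(A minimal sketch of the Netlink point above, assuming libnl-3 and libnl-route-3: adding an IPv4 address to an interface over Netlink, roughly what "ip addr add 192.0.2.10/24 dev eth0" does. The interface name and address are just placeholders.)

```c
#include <stdio.h>
#include <sys/socket.h>
#include <net/if.h>
#include <netlink/netlink.h>
#include <netlink/addr.h>
#include <netlink/route/addr.h>

int main(void)
{
    struct nl_sock *sk = nl_socket_alloc();
    struct rtnl_addr *addr = rtnl_addr_alloc();
    struct nl_addr *local = NULL;
    int ifindex = if_nametoindex("eth0");   /* placeholder interface */
    int err;

    if (!sk || !addr || !ifindex || nl_connect(sk, NETLINK_ROUTE) < 0) {
        fprintf(stderr, "setup failed\n");
        return 1;
    }

    /* Parse "address/prefix" into a libnl abstract address. */
    err = nl_addr_parse("192.0.2.10/24", AF_INET, &local);
    if (err < 0) {
        fprintf(stderr, "nl_addr_parse: %s\n", nl_geterror(err));
        return 1;
    }

    rtnl_addr_set_ifindex(addr, ifindex);
    rtnl_addr_set_local(addr, local);

    /* Sends an RTM_NEWADDR request to the kernel. */
    err = rtnl_addr_add(sk, addr, 0);
    if (err < 0)
        fprintf(stderr, "rtnl_addr_add: %s\n", nl_geterror(err));

    nl_addr_put(local);
    rtnl_addr_put(addr);
    nl_socket_free(sk);
    return err < 0 ? 1 : 0;
}
```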
There are various, let's say, older APIs, usually some ioctls or stuff like that. Various tools use these old APIs. There are sometimes problems that, using the old APIs, you get different behavior. So if you do
some first tests, it's best to use the current tools. And especially, you can use the iproute2 package and the ip command, which directly uses Netlink. It's the simplest way to test.
There are things that you cannot do with just the kernel. This is all sorts of stateful configuration, which is usually DHCP. It doesn't matter whether it's for IPv4 or IPv6. And you cannot properly do this configuration via the kernel, because the kernel does not usually edit your files.
And currently in Linux, you don't store the DNS configuration in memory, but in the /etc/resolv.conf file. So it is pretty important that you have some daemon that listens to the kernel and sets up
this information. What you next need for many things are some sorts of auxiliary scripts; for example, when you receive a list of NTP servers over DHCP, you want to run a script to tell the NTP daemon,
the NTP client daemon, to use the exact list of servers you received. But very often, plain DHCP clients, or rather the software you usually call
DHCP clients, do all of this. They are used as configuration daemons. They are not only doing the queries over the network, but they also configure stuff. They also call the auxiliary scripts, as I was talking about. So currently, you can be perfectly okay with IPv4 if you have
just one interface or with some special configuration. You can also do some prioritizing, if you know the resolvconf tool, or netconfig on SUSE, stuff like that.
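(A minimal sketch of the listening side mentioned above, assuming libnl-3: subscribing to the kernel's link notifications over Netlink, the kind of events a network configuration daemon reacts to.)

```c
#include <stdio.h>
#include <netlink/netlink.h>
#include <netlink/handlers.h>
#include <netlink/msg.h>
#include <linux/rtnetlink.h>

static int on_event(struct nl_msg *msg, void *arg)
{
    struct nlmsghdr *hdr = nlmsg_hdr(msg);

    /* Report interface add/remove/change notifications from the kernel. */
    if (hdr->nlmsg_type == RTM_NEWLINK || hdr->nlmsg_type == RTM_DELLINK)
        printf("link event: nlmsg_type=%u\n", hdr->nlmsg_type);
    return NL_OK;
}

int main(void)
{
    struct nl_sock *sk = nl_socket_alloc();

    nl_socket_disable_seq_check(sk);     /* notifications are unsolicited */
    nl_socket_modify_cb(sk, NL_CB_VALID, NL_CB_CUSTOM, on_event, NULL);

    if (nl_connect(sk, NETLINK_ROUTE) < 0)
        return 1;
    /* Join the multicast group that carries link events. */
    nl_socket_add_memberships(sk, RTNLGRP_LINK, 0);

    for (;;)
        nl_recvmsgs_default(sk);         /* blocks, calls on_event per message */
}
```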
So these tools usually use Netlink for the kernel configuration, so they can do just the same stuff you are doing with the ip command. And the simplest way is to do DNS via /etc/resolv.conf.
So things that either don't work or can't easily be done automatically are if you have multiple interfaces, because you first have conflicts regarding the default route. You can have conflicts regarding
the device routes, because you can have two interfaces connected to the same network, so you are receiving exactly the same routes, and sometimes the kernel then stops sending any packets at all, or it randomly sends some packets through one interface, some through the second interface.
It's pretty wrong then. And you would usually like to have some policy decisions by default made according to some unwritten rules that yes, I want to use the wired interface because wireless is usually
not the better one, but you also want to do the policy decisions yourself, or at least give some hints to the system to make these decisions for you. And if you try to do this with DHCP, you can have problems
with integrating with VPNs, with integrating with other tools. For example, there is a big problem with the setup where you want to resolve various names through different name servers. For example, when you connect to a company VPN, you want all
company addresses to be resolved through the company VPN, but you might want to use your own network's name servers for all other stuff. So this is not possible to do via resolv.conf currently, but it's possible to do
with, for example, the dnsmasq nameserver, or yesterday I heard Unbound also supports it. So let's go to the current real world, how things are done
or how things can be done. It's quite easy to set up some static network configuration just with ip commands. You can put it into a script, so you can run it during boot. No problem with that. You can tweak all sorts of stuff that you know in advance.
It is sort of possible to do some dynamic stuff too; for example, you'd have to trigger your scripts somehow. It's possible to do some magic with it, but it's not so easy and not so nice.
So what most distributions currently use for servers is scripts, usually called network scripts, or they can have specific names in various distributions.
And these use per-interface configuration files. So they usually can do stuff like setting one interface up with all its configuration. It doesn't work well at all times. So far, I think, yeah, you see the Fedora theme, so I'm mostly in the Fedora project, but so far I can say that
Debian network scripts seem to be the most flexible and versatile of all solutions on this level. So I don't want to repeat myself, but the biggest problems are
when you want to do DHCP on several interfaces or even when you want to do IPv6, just auto-configuration with DNS and DHCP for IPv4 at the same time. You're getting all sorts of these problems.
One of the biggest things is that, regardless of whether you're using IPv4 or IPv6, the DNS configuration is shared.
So currently, like I said, the ifupdown thing was pretty neat to use for a long time. But let's go forward. What I want to show you is that we should not be afraid of using network configuration daemons.
You see I have no screenshots, no UI stuff in this presentation, so I'm really all the time speaking about the backend. And I believe that what we need in Linux distributions
is to have something that is universal. Yes, there are exceptions. There are embedded devices like routers. There is stuff that maybe we can't cover with the same software. But I still think that the desktop use cases, laptop use cases,
server use cases, and virtualization use cases should be solved with the same stuff. Actually, I work remotely, and when I went home and wanted to start working, what did I need for that? I needed to use virtualization on my laptop, connected sometimes through wired, sometimes through Wi-Fi,
sometimes through other networks, so all the dynamic stuff. And also, at the same time, I wanted to use a VPN. So this is all stuff I did not do inside the company, because there was just one wired connection,
and I just turned off network manager and worked easily. But when I came home, I spent one week hacking, and then I could do all of this. So it was really not so hard. And finally, I could use the one single daemon with some modifications
to read configuration files that I wrote by hand or configure it via some GUI if I care to do so.
It did some policy decisions, not perfect, but it did something, and it did all the communication with the kernel or the settings, so no other daemon was required to do that. And the VPN was integrated to network manager, so there was no problem.
There was OpenVPN, there are a bunch of others. So it sort of worked for me after some time. And what we are going to do is to make network manager good at exactly this stuff, to make it usable,
and provide you with the services you need. So what are the pros? Multiple managed interfaces without problems. What we want to have, and you can see that I am very careful not to say that we have everything working,
is to manage multiple interfaces in a very dynamic way so that at any time I can choose to switch to another interface, but have them both configured and only the default routing and DNS stuff will be switched very, very easily then.
Static and dynamic, you can still do all static configuration with tools in this category. And the most important thing in the future, we really want to allow you to step into policy decisions,
and everything is event-based, so everything is dynamic and works according to the current situation. There is not just one implementation of a full-fledged network daemon.
There are like six at this time, six that are sort of at least a little bit active. There is network manager, which I will be talking about more later. And ConnMan from Intel, part of the MeeGo project.
I've seen some comments, so maybe it's not really that these days. But I tried to work with ConnMan. I tried to work with wicd. I did not try this tool.
And I was having a long talk with the people who make Wicked from openSUSE. And still, I can say that network manager, even though it's not perfect, is currently on the best track. So I would like to invite anyone from the other projects to talk with us.
Maybe we can share something, maybe we can consolidate and work more together, whatever is possible. So now to the current status of network manager and the current development.
We are currently at the stable branch, or stable version 0.9.6, which is the first version where I at least a little bit believe in the support for basic IPv6.
And we have a development branch, or you can just call it master, because the version number is just there to have some number. And it's actually getting better and better, because the current development branch has lots of new stuff included.
And I'll still be talking about it. So what do we have now? We have a long history of laptop and desktop usage, so we are pretty good at wireless stuff and such things. I say we are; I'm really not, but Dan Williams did good work on that.
And we are trying to expand to server and virtualization work, which is sort of new for the project. It's not so new to me personally,
so I actually joined the network manager team exactly because it was going to move into the server world. Currently we have four regulars, but we have quite a good number of occasional contributors
and we have some people that contribute really often, especially they are working on stuff that we never get to, that we never have time to do. So they help the project from our priorities and stuff.
We often contribute, by we I mean the four regulars. We contribute to other projects related to networking, so currently I think Dan Williams mostly contributes to the kernel.
I am currently trying to contribute to glibc, which we don't directly use, but we often get some bugs that end up being just glibc and we are trying to work more with other people in the ecosystem.
And that's what makes it so that even those of you who don't want to use network manager, or never will actually use network manager, still get some pieces of our work,
because if you are using a dynamic tool to configure stuff, it's much different from using just the static tools. You find bugs that you wouldn't find otherwise. Just as a small example, if you're using wireless networking and you perform scanning
and the driver drops the link when scanning, that's a problem that you will see in network manager very quickly, because it performs the scanning quite often to present you the list of the networks.
So let's move to the actual usage of network manager in distributions because this is quite an interesting part as for some reason or other,
the project decided to support various types of configuration files. So for example, in Fedora, which is like a model distribution for network manager, we are not even using the internal structure of network manager
for configuration or the default configuration format. We are using the classic Red Hat format, usually called ifcfg, and we are always trying to cope with it so that we work as close as possible to the original network scripts,
which doesn't mean it works perfectly, but it's quite a good level of integration. We have some issues with systemd and D-Bus, especially that we often got auto-activated via D-Bus,
so it might happen to you that you started, or that you have network manager started by default. You wanted to stop it to do some testing, and actually when you started testing without network manager
in several seconds or so, you got it back up and running, which is very bad. I was trying to get rid of this behavior, and it somehow disappeared in our Fedora 17 build and somehow reappeared in Fedora 18, so it's just crazy.
Currently we have also some problems with cooperation with the network scripts, because the current network manager only identifies the devices by their MAC addresses, while the network scripts can also do name-based matching.
And we are really trying to have things working for the users, so with the help of other people, I don't know if Martin Hoelz is somewhere here. We have networking test weeks in Fedora,
and for Fedora 18 this was the first one, it was three days dedicated to testing networking stuff, not only network manager. We plan to have another for Fedora 19, and that's probably all related to Fedora.
For Debian, we are supporting ifupdown-style configuration, but that's sort of difficult. Actually, the four of us, the regulars,
don't really care much about ifupdown, so if there's some problem and it's trivial, we are happy to fix it. Yeah, yeah, I can imagine. So currently, with ifupdown, I think the network manager team itself
will not fix the worse issues or the more complicated ones. If anyone comes and helps us, it's possible, but actually the Debian system is so flexible that I'm not sure if we can properly support it at all.
So this is, you are talking about unstable, or? Yeah, yeah, I don't remember the names very well. Okay, I'll just, yeah, just repeat for all the people.
So as I was saying, in Debian, in Wheezy, and in Squeeze already, ifupdown-style configuration is supported but disabled by default, because it doesn't work so well, and for Wheezy, new installs do switch
to network manager's native configuration. It's done by the installer, which will generate that instead of ifupdown if you are installing a desktop system with network manager. Thank you. So for the long term, it may also,
it may even happen that we actually drop the support if it's not used at all. I recommended something that's already done. It's so funny. What about SUSE?
We receive occasional contributions from SUSE; otherwise, some SUSE folks were working on Wicked, which is an alternative. I was talking with them in Prague at the openSUSE Conference that was held together with LinuxDays,
but I haven't heard from them since then, so I just can't tell you more information because I don't have them. We still support DNS setting through SUSE netconfig, but all other scripting stuff, all other integration is to be done via dispatcher scripts,
which are network manager's implementation of auxiliary scripting on certain events. There are some minor issues. Usually, the SUSE folks patch network manager to work perfectly for them,
and from time to time, we go and pick up some of the patches that we like. What's very interesting, we still support configuration files for Gentoo.
I don't know whether the Gentoo folks use the native ones or the ifnet configuration, but what's quite interesting is that the integration with OpenRC is really, really good. They have the network manager service starting
as, I don't remember, maybe someone can help me, 'disabled' or 'unavailable' or something like that, and they wait for network manager to mark its own service as running. That means we have actual connectivity.
It's actually not so perfect from the network manager side because we still don't properly distinguish between the various connected states like we have just link local addresses, we have global addresses and such, but from the OpenRC side, it's something I really like.
Because it's very quick to test some quick fixes and stuff on Gentoo, I'm quite often using it for testing, and now we have a live ebuild, that means an ebuild built from the Git repository
that makes it easy to install the most current network manager, or even if you just change the name of the branch, you can get any branch that you are working on. So this is really, really easy for me,
and finally, I have to test in Fedora, but usually I do the first tests on Gentoo. What's quite new, and what's not yet delivered as a stable release, is that we can build
without special configure options, which is normal in the open source world, but previously, you would have to pass --with-distro, and you would have to choose from 13 distributions that were already supported,
and we cleaned this up. We realized that only four distributions really used something special, and these were usually just the configuration formats,
and only one distribution had some real conditional code, which was SUSE, because of netconfig, and that's just a small piece of code. So now you can turn these features on and off just by configure options,
and it's still the same by default, so it should behave the same as in previous versions if you don't change the configure options.
So now to the features. Not all features I'll be talking about are already released. Not all features are already coded, but I'll tell you for the new ones.
This is just a recapitulation of what types of interfaces we support. Currently, we support Ethernet, Wi-Fi, ADSL, mobile broadband, Bluetooth, some OLPC stuff I know nothing about,
WiMAX, and InfiniBand. So these are the physical types of interfaces. That means we, for example, currently don't support the classic dial-up networking, which we did in previous versions,
but nobody was interested enough to add the support back. USB usually gives you some ethernet device or something, so yes, yes, it's possible.
And we currently have some virtual types of interfaces. These are VLANs, which worked for me in some quick testing. These are bridges and bonds. Actually, we already supported bonding, but it never worked for me, so I'm rather talking about it
as a future feature than the existing one. And we are going to deliver bridges and bonds in the next release. That means that will be the first release of bridges, and that means it's maybe better to wait for another
before actually starting using it. Yes, yes, it should work without problem. We actually had quite a lot of coding. The other guys were a little bit deep,
deep in the coding, so they did not read the mailing list, so at one time we had two implementations for bridges. So if you ask me what works and what doesn't, I don't even remember what I tested, but before we release, which should be probably during February,
I will go through some tests too, and then I promise to do all the testing, but I will do some tests on my own to make sure that it works, even for me, because very often I see things that work
for the person who wrote it, but it doesn't work for me, and I tend to make such tests that it doesn't work. I don't know how I do that, but I'm always thinking I'm doing it the easiest way and doesn't work, so I'm quite crazy
and looking at stuff that's written down. It must work, and yeah, so yes, yes. We had quite a lot of changes because of bridges and bonds in the core, network manager core, as these are very, very different from the types of interfaces we had already working well,
and in future we plan team driver integration, but that's really, really just planning. And the only public interface for plugins we have is VPN.
There are several of them. It tends to work quite well with some exceptions like automatic connection, but we'll get back to it. We also have connection sharing,
which is the most hated feature on laptops and desktops by the local network administrators, so it's maybe sort of Windows-like connection sharing. You just have IPv4.
We use masquerading. We currently set up a specific prefix. You cannot change the addresses, if I did not miss something in the code, but recently we added support, it's not listed here, for hotspot mode for Wi-Fi.
We already supported ad hoc, so for some use cases when you have laptops and phones and stuff, it's actually pretty easy. We are still talking about how to support IPv6 because IPv6 masquerade is something
that does not really appeal to us. And we don't know yet, actually. If anyone wants to join the discussion, there's a Bugzilla issue for that, or you can write to the mailing list. We don't yet know what we'll actually do.
There are some current problems that we are trying to address. Some of the network types or some of the interface types are not able to connect, for example, at the booting time
or network manager starting time. Some of them would be expected to connect when it's possible. For example, VPN. You don't want to have the VPN connected when you have no physical connectivity, so we would like to, yeah, we had a long talk or a long argument with Dan about this,
but currently what we support is linking a VPN to a physical connection, so we can choose a physical connection more precisely. You take your physical connection and choose a VPN that it should use,
and network manager will not even announce it as connected unless the VPN succeeds. I made some changes to the configuration format that's unfortunately not documented,
so when I wanted to try static IPv6 configuration, I saw some examples on IPv4, so I tried the same. It did not work, so now at least it works sort of naturally, but we need to have some documentation later.
We dropped support for dhclient 3 because it's pretty old. It doesn't do IPv6, and we did not want to go with that. We were trying to make sure that bridging and bonding work well with dynamic configuration, which is sort of something you would expect,
but it needed some code changes deep in network manager, and we can do some IPv6. It works in any simple network, including DHCP,
but still, IPv6 is so much more complicated that we don't support it perfectly. The DNS part is mostly about split DNS scenarios
where you want some queries to go through one network to one set of nameservers and others to another. I'll try to shorten some of the slides now so that we finish in time.
I was personally working on NMPlatform, which is a new component in network manager that would handle all network configuration through the kernel. We are doing this throughout all the network manager code currently,
and it's not even testable by manual tests or something, but what I want to achieve is to be able to run automatic tests. Currently, I can do tests for the platform code itself, and with that, I have found two kernel bugs already
and three libnl bugs, so I'm filing those, and it's a pretty good tool for me, but the intent is to be able to test the internal network manager behavior without actually configuring any hardware interfaces
or anything. To put it simply: write some tests, run 'make check', and run them as any user, without access to the actual configuration.
In Fedora, we are probably going to put network manager into the initramfs when we need networking features there. We actually realized that it does not have so many dependencies. It could be better, but it's not a priority now.
The only thing we want to do is support private D-Bus, which means just using the library for communication and not using the daemon, because running a D-Bus daemon is really not necessary in the initramfs, so we want to avoid it.
We want to support runtime configuration so that you can just configure some bridges or something. It would work as an API for virtualization tools and so on, and we want to pick up the runtime configuration
from what's configured before network manager is started. It's, for example, good when you have network manager in your initramfs so that it's then started again in the real system, and it should take over the existing connections the best it can.
So there's a bunch of things we are working on. It's the kernel. You can read the list later.
Actually, better than looking at the slides is maybe to open Networking/Bugs on the Fedora wiki, and there's a bunch of things we need to improve in the kernel.
I'm working on problems with getaddrinfo() in glibc, which does not directly affect network manager, but it directly affects our users, and I realized that except for the most common use cases, like yes, I have an IPv4 computer connected all the time,
getaddrinfo() can often give really stupid results and not even resolve localhost and stuff, so it's quite, quite horrible. Then I ported the WiMAX tools to libnl3,
as network manager is currently also linked to libnl3 to avoid some crashes and problems; distributions are advised to use the new WiMAX tools from then on.
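(A minimal sketch relating to the getaddrinfo() issues mentioned above: a standard dual-family getaddrinfo() lookup, which is exactly the path that misbehaves in the corner cases described. The host name is just a placeholder.)

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct addrinfo hints, *res, *rp;
    char buf[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* ask for both IPv4 and IPv6 results */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("example.org", NULL, &hints, &res);
    if (err) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }
    for (rp = res; rp; rp = rp->ai_next) {
        /* Print each returned address in text form. */
        void *addr = rp->ai_family == AF_INET
            ? (void *) &((struct sockaddr_in *) rp->ai_addr)->sin_addr
            : (void *) &((struct sockaddr_in6 *) rp->ai_addr)->sin6_addr;
        printf("%s\n", inet_ntop(rp->ai_family, addr, buf, sizeof(buf)));
    }
    freeaddrinfo(res);
    return 0;
}
```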
You can look later at this list of standards we would like to update, because some of the problems are not only in the implementations, but in the specifications themselves. And at the very end, I would like to ask anyone
interested in this area to talk with us, to help us with network manager, to help us with any other projects we depend on, for example, dhclient or various tools, libnl. There's lots of stuff we can use,
and there are some contacts you could use. At this address, you will find the slides. It's not the current version; I'll push it today. So thank you. Do you have any questions? From what I get, network manager is a helpful tool on clients,
but many distributions are shipping network manager also for server setups. I'm still looking to see the benefit for a server configuration because you are adding much overhead for, well, a static configuration.
So what's the current point of network manager versus servers? I would like to ask what sort of overhead you actually mean. Well, actually, network manager is something like another abstraction layer.
On a server, well, the IP addresses or bonding or whatever are set up when I boot the machine, or maybe if I reconfigure it, but I don't have flexible stuff like connecting a VPN, or maybe I have that, but not wireless networks or something like that. Actually, I know many, many server administrators
are just happy with any static configuration, and you can just use what I said at the very beginning. You can just use the ip command to set up all the stuff and put it in a script. But I'm also doing other stuff with servers.
I'm sometimes doing stuff like taking a server that is configured with DHCP and bringing it to another location. It gets another address. I can access it immediately. So that's something you might not need, actually, but there's lots of stuff we can provide to server administrators that is just convenient,
like, for example, DHCP, so that you can move the server, so that you can change the IP address from the network equipment side instead of the server side. Okay, yeah, thank you.
One thing maybe for the future: have you ever considered integrating the network manager features into systemd directly? I'm not working on systemd, so I have no need for that. Yeah, but it would be a good point, because systemd is running anyway and covering nearly everything these days. It replaces cron and whatever,
so it would be a good point to scratch into that, just from another perspective. Yeah, I think you would have to ask Bernard. No, that's a serious point because network manager is scratching into that. It could be used for that, especially if it comes to dynamic stuff. If an interface comes up, starting services,
that's actually what systemd is already doing these days. For example, if you connect a Bluetooth device, systemd starts services related to that. So if you plug in a network, systemd also could handle that together with a network.
What you mean is that systemd needs to be able to start network related services and network manager services, but it doesn't mean it has to be the same software.
And since systemd is already very modular, there is no point in adding functionality to it if it's not related to the init system. So another comment to the same topic, probably?
Well, no, I'll go back to the server, because in your presentation, you said the most hated feature of a Linux system, for network admins, was connection sharing. Well, in the server space,
it's not that; the most hated feature of Linux systems is that they tend to pick an outbound interface which is not the one the network administrator expects. You got some connection going to the server,
so you're trying to debug what's happening, and then the Linux system decides to send the answer through another physical interface. When you do bonding, for reliability, you tell your server,
well, here is my main interface, and because I don't want downtime, here is another one you can use if the main one is down. Except Linux, when the main interface goes down, it decides, well, I switch to the backup interface, but I never go back myself.
So after one downtime, suddenly, your server is only using the backup interface. So if in network manager you could put some smarts, so that your server really tries to send answers
on the interface where it received the request, and even when there are all kinds of dynamic backups and switching, goes back to the original configuration as soon as possible.
That would make server people love network manager. I'm still not sure whether I understood the question, but maybe someone can help. Really, server people like static configuration, but not in the sense that things never move
when there are problems. In the sense that they really want their server to go back to the initial configuration as soon as possible. To what initial configuration? So if you change something, because the main path is down,
as soon as the main path is up again, you should go back to the main path, even if the backup path is working. Because from a management point of view, it's horrible to see Linux servers switching all the time,
and you never know in what state exactly they are. Actually, this looks like maybe two separate questions. One is about bonding. It's just a general principle. Many different layers of the networking.
The end result is that on another kind of server, Windows or Unix or whatever, you get predictable responses. On Linux systems, packets tend to go anywhere.
As soon as the Linux systems see some routing, it will send packets. It won't try to send them where the human administrators expect them to go. Just anything that works, it will use it. Yeah, it happens.
This is more about the kernel part. Sometimes you may experience this with network manager itself, whether it chooses one connection or another. Usually, if you're using servers, you'll probably be using a bonding interface for failover.
If you have some questions about bonding or some things that should be fixed in bonding, first, probably, I think the kernel people will be more happy to do this with teaming.
We don't support it, so it's sort of hard for me to answer things related to this. Because first, all the people that I'm talking with about bonding and teaming tell me that, yes, we really need to switch to the new one, and I don't yet understand the new one,
and not even very well the old one. I'm looking at it from the configuration point of view, and I can't really answer this. The last question. The thing I'm really missing about network manager is proper command line interface.
Like, nmcli is very, very poor, and when I'm going to use it at server, I would expect something more. Yes, yes. The same for me. I'm not working much on nmcli. It's Eirka, and he's quite quick to fix stuff that is reported.
I think the main thing we need is to report, not that nmcli sucks, that does not help much, but the actual problems, the actual requests, what you need from it.
Yes, this is a big problem. I'm going to file a bunch of bug reports myself. I did it in the past already. But we need more people to either file new bugs or comment on the existing ones.
Right, that's it. Other questions outside, please? Thank you very much. Thank you.