NixOS Security – Vulnerability Roundup n+1


Formal Metadata

Title
NixOS Security – Vulnerability Roundup n+1
Author
Christensen, Graham
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date
2017
Language
English
Production Year
2017

Content Metadata

Abstract
NixOS is receiving more and more attention from users expecting up-to-date packages and timely security patches. Weekly "catch-up" issues were effective, but faltered after our data source dried up. In this talk we will explore the challenges and potential solutions around regular NixOS security patching, and how to join the famous "-distros" list.
So I'm just going to introduce to you Graham Christensen, who is basically the reason I can tell customers: yes, we have security updates. In his day-to-day, when he's not working on the security of NixOS, he's working as a site reliability engineer.

I'd like to start with some history on the efforts of keeping NixOS secure, at least since I started, and I'd just like to note that it's actually Franz who makes many, many security pull requests, way more than I do these days, so I'd like to thank him first. In early 2016 I had just joined the project, maybe 45 days earlier, and I noticed that Franz had put in dozens of patches over the course of an hour. I was blown away and didn't know what he was doing or where he was getting them from, but I wanted to help and really wanted to get involved, so I asked what he was doing. He said there's this list of vulnerabilities on lwn.net, and I'm just going through basically everything from maybe the last year, going through a lot of pages. What LWN did was track every vulnerability announced by a bunch of different Linux distributions and aggregate them together. So if there was an issue in, let's pick on bind, they would link to the issue for Fedora, for Red Hat, for Ubuntu, for Debian, for Gentoo, and then you could go to any one of those and find a relevant patch. It made it way easier to fix the issue in as little time as possible, and made it easy to actually find the list of problems. Shortly after that, this idea of going to LWN, looking at their list, and patching six months of security vulnerabilities became officially part of the release process. It's tough to do that, which I think was really nicely
demonstrated by the very first vulnerability roundup that I opened, I believe in September of 2016. It had
850 issues for us to look at. It was broken down by which project each issue related to, with a link and a small description, as checkboxes on a GitHub issue, which was massive. We found a lot of problems using GitHub issues as a database: it doesn't work very well if two people tick two boxes at the same time; one wins, or neither wins, and that has happened sometimes. And this is just a small list of the
people who were actually offering pull requests, just the easy ones I could find on the issue. Thank you to all of these people, and to the people reviewing the pull requests. I mean, 850 issues generated a whole lot of pull requests, which certainly contributed to a lot of those ticked boxes, and then the mergers, of course. A huge amount of effort went into this. It took six days to actually
go through that whole list. I was impressed it took only six days, but at the same time it sort of felt like it took a month; it was maybe the longest, most exhausting project I've ever worked on,
but we managed to get all 850 of these done in time for the 16.09 branch release, and I think it made 16.09, in a
lot of ways, one of the best releases; it started the process that made 16.09 one of the best, most secure releases we've had. After 16.09 it turned into a regular weekly community effort of reviewing the issues on LWN from the previous week, so we were quite up to date, regular, prompt and fast at getting all of these issues patched, better than many distributions. A fun thing about this process is that pretty frequently some distribution realized they hadn't patched something from 2014; it would be something like patched by Fedora in 2003, patched by Debian in 2004, and then Gentoo did this a lot, patched by Gentoo in 2016, but we had been vulnerable that whole time. So I was really grateful for all that help and the data that they provided, and I really want to stress that this was a community effort; it wasn't just a single person doing all this work, and it was incredibly helpful. Shortly after, we
officially created the nix-security-announce mailing list, something to help bring us toward becoming a more mature distribution, which is something that's close to my heart and important to me. You could subscribe to it and receive advisories; unfortunately it has somewhat fallen by the wayside, but I'm hoping we'll bring it back shortly. Pretty soon after that we formally created a NixOS security team. This team has handled four or five issues at this point, specifically: some embargoed security issues we received so that we could prepare a patch ahead of time (most of those ended up leaking early, but not all), some critical issues we discovered in how NixOS modules work, a few authentication issues, and then actually some issues we found in Nix itself, which was a lot of fun to find. And so we had set up this wonderful
weekly process, we had developed this community effort, and then, in what felt like no time at all, March 1st
came, and we had done 24 of these and triaged 1,500 reports, which blew me away when I saw it. And LWN shut down
their vulnerability service, which broke all of the tooling that we had built and pretty much ruined our process. It
was a bit devastating, actually. I remember running the script to build the report and it showed three, and I thought, oh boy, that's not right; it shouldn't be three, it's never three. There were some attempts after the LWN shutdown to create alternative tooling, some ideas around looking at every CVE that is issued, or creating issues out of every mail to the oss-security mailing list. Unfortunately those haven't really panned out quite yet, but I'm optimistic for the future, which
is what I'm looking to discuss. To start out, I think Peter and I agree: I really want NixOS to be commercially viable, and I want it to be an option that CTOs can look at and say, yes, that's a good idea. I think it offers real value that we
need to be selling much better. I don't think the value proposition of Nix is that it's particularly functional; I think the value proposition of Nix is that you can deploy something, undo it, and be back to where you started. If you accidentally installed some package, as long as you don't update your system to use it, it's just sitting in your store; it doesn't impact anything. I once interviewed with a company that makes automatic cranes for ports, and they run ports all over the world. It costs a million dollars in lost revenue every single time they deploy, per port: they have to shut down everything, all the ships have to wait, all the trucks have to wait, nothing's happening, and if something goes wrong it can quickly add up, because that's a million dollars an hour. We discussed briefly using Nix, but again it's not quite ready. Still, I think it deserves a mention in this space because of how safe it is.
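As a minimal sketch of that deploy-and-undo property, using standard NixOS commands (the package name is just an example; the channel name may differ on your system):

    # apply a new system configuration
    nixos-rebuild switch

    # the previous generation is still in the Nix store; switch back to it
    # if the new deployment misbehaves
    nixos-rebuild switch --rollback

    # the same idea for per-user packages
    nix-env -iA nixpkgs.hello      # install something
    nix-env --rollback             # undo it
    nix-env --list-generations     # see what you can roll back to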
Among the issues that Peter recommended or pointed out is demonstrating that we do care about security, that we do care about keeping our users safe. He literally said this in his talk: you look at a small distribution and you think, is somebody going to leave, and is their effort just going to go away? If Franz left today, would our security patches evaporate? Many of them probably would, and that's an issue. As I noted,
one option that we looked at was just looking at every CVE that comes out. I don't know if you've looked at a recent security advisory for Chrome lately, but it's a hundred CVEs per release. It doesn't make sense to track at that level, because the patches don't come per CVE; the patches come per overarching issue, and the overarching issue for Chrome might involve five CVEs. Another issue with tracking CVEs themselves is that they're issued by numbering authorities: they start with CVE, then the year, then a number, and that number goes up into the millions, I believe, every year. But in reality the space is broken up into increments of a few thousand, given to authorities who can issue CVEs themselves. Some of those might be embargoed, some might be reserved and then never used, so you can't actually look at the space and say, yes, I have a contiguous series of CVEs from one to a million, I've got all of them, because there are going to be gaps. It's very difficult to stay on top of this. A better option is something that we tried, but not very successfully, which was watching oss-security. What we looked into doing was automatically creating GitHub issues per thread on oss-security, which took the hell of an email client and moved it to GitHub, and that's really not any fun. What most distributions do is, yes, they follow oss-security, and yes, they follow full-disclosure, but the process of going from an email on the list to an issue in their tracker is manual; it's a process of review and triage, of first seeing whether they are even impacted, or whether they even package the software it touches. This is where a community effort comes in: we would need help with this.
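As a rough sketch of what that kind of automation could look like (a hypothetical script; the target repository, checklist wording, and the use of the public GitHub issues API are assumptions, and anything embargoed could of course never go to GitHub at all, as discussed later):

    #!/usr/bin/env bash
    # Hypothetical sketch: turn an oss-security post into a tracking issue.
    set -euo pipefail

    SUBJECT="$1"          # mail subject line
    ARCHIVE_URL="$2"      # link to the post in the oss-security archive
    REPO="NixOS/nixpkgs"  # assumed target repository

    # Create the issue via the GitHub REST API; requires $GITHUB_TOKEN.
    curl -sS -X POST \
      -H "Authorization: token $GITHUB_TOKEN" \
      -H "Accept: application/vnd.github+json" \
      "https://api.github.com/repos/$REPO/issues" \
      -d "$(jq -n --arg t "$SUBJECT" --arg u "$ARCHIVE_URL" \
            '{title: ("Security triage: " + $t),
              body: ("Reported on oss-security: " + $u +
                     "\n\n- [ ] are we affected?\n- [ ] patch available?")}')"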
All of this is leading up to a goal. Again, Peter set this goal a few years ago: joining the famous linux-distros list. This list handles embargoed security patches; it has about 10 to 15 Linux distributions on it, and we would like to be one of them. You can imagine that's going to be difficult, but I think we can make it. The reason we want to be on this list is so that we can receive the patches ahead of time, get the fix prepared, and then release it as soon as it goes live, and keep our users safe. One reason I find this such an important goal is that I think it sends an extremely clear message that we care and that we're dedicated to the issue. Getting on this list is difficult: they have a lot of requirements. They want to know you're serious, they want to know that you're capable of handling encrypted mail, they want to know that you're capable of keeping secrets. They want to know that if they let you in, you're not going to spoil it. I'm going to go
through some of the requirements they have. One is that we be Unix-like; while we don't follow the filesystem hierarchy standard, we are Unix-like, and we are open source, so we've got this one. We are
not an internal product: if we were a Linux distribution used exclusively inside a larger organization, we would not be allowed in. We are not a
downstream rebuild: for example, Red Hat is a member of this list, but downstream rebuilds like Scientific Linux are not, because as soon as Red Hat issues the patches to their distribution it's very simple for Scientific Linux to take those patches and release them to their users. Since we are our own thing, we are allowed in. This one's tricky: we need somebody who's already on the list to trust us enough to recommend we join, which means if we screw it up, we reflect poorly on the person who recommended us. We need to do it for a
year, and I think that's reasonable: we can't just decide one day to be serious about security and then ask for membership and get in. And we need to be able to patch and release the fixes within 10 days; back when we were doing the weekly roundups we were well within that mark.
I think the issue is consistency in the community effort. Franz does great work, everybody else who does security patching does great work, but we need to do it a lot, we need to do it continuously, and we need to be really thorough and build up process and tooling around this to ensure that we are doing a good job. And I'm going to
read it: we can't choose to be consistent here today, we must choose to be consistent every day when we wake up. This is a commitment that we have to make, and I can't make it for the community, nor can Franz. Back when I
started the roundups, I created a reminder that would go off at 6:45 every Wednesday morning, and that's how I achieved consistently making this list every single week. This works for me, but I think it's a personal thing that people need to commit to: doing some effort on a weekly basis to keep NixOS secure. Then, as an organization, we can organize around that and build the tooling and process to make it easy. As I noted earlier, most
distros triage issues by hand, and I think this makes sense. Attempts at parsing CVEs mechanically, or doing rough natural-language parsing on emails mechanically, are interesting, and I would be curious to see what comes out of that, but ultimately I think that in order to be truly consistent we have to have a human element in there, reading and inspecting whether or not we are impacted.

We need tools and processes. This is why I want an issue tracker, something specific to security issues, something that lets us record whether or not an issue impacts various channels or releases. We need to be able to document that 16.09 is continuously becoming more and more out of date, that 17.03 is becoming more and more out of date, because that matters for getting people off of those releases. We can tell them we don't maintain it, but what we can't tell them is that 16.03 is vulnerable to a thousand issues, because we don't track that right now. And we must use these tools to share the load; I mentioned that earlier when answering a question about the release process. We have to build these tools in a way that makes it very easy for anybody to just come in, join, and participate. One thing I liked about the checklists we used before was that even if somebody didn't have access to tick the boxes, they could send a pull request, and it was easy for somebody who did have access to go and tick the box for them. As it stands now, people can help by simply monitoring mailing lists they care about; or, if they see an issue on popular news websites, they can go and check whether NixOS is affected; or they can watch the 70 commits a day to master, spot the security-relevant ones, and backport them to stable. And if you don't know how to patch something, or are afraid to patch it (there are some dependencies that spook me: I don't know what they do, but I know they rebuild a lot of things), open an issue.

The most important thing is to try. That's how the first one came about; that's how vulnerability roundup one happened. It wasn't anything fancy: I wrote a bunch of bash scripts with curl and sed and awk and whatever, and somehow that produced a coherent piece of markdown that I could turn into a GitHub issue. It was ugly, it was nasty, it got rewritten for roundup number two, but I tried, and it worked really well for six months. I would encourage anybody to try, and if anybody's interested in trying, I would like to help you try.
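Nothing of those original scripts is shown in the talk, so this is only a hypothetical reconstruction of their general shape: fetch a batch of advisories and emit a markdown checklist that can be pasted into a GitHub issue. The feed URL and its JSON fields are invented, since the original LWN data source no longer exists.

    #!/usr/bin/env bash
    # Hypothetical reconstruction of the curl/sed/awk roundup generator
    # described above.
    set -euo pipefail

    FEED_URL="https://example.org/advisories.json"   # placeholder data source

    {
      echo "# Vulnerability roundup $(date +%Y-%m-%d)"
      echo
      # one markdown checkbox per advisory: package, summary, link
      curl -sS "$FEED_URL" \
        | jq -r '.advisories[] | "- [ ] **\(.package)**: \(.summary) (\(.url))"'
    } > roundup.md
    # roundup.md can then be pasted into (or posted as) a GitHub issue.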
Some additional issues we would need to address to join the distro list: we have to have quite a bit of private infrastructure, and I mean really private. We have to have a bug tracker that isn't GitHub; we can't put embargoed security issues on GitHub, even in a private repository. I believe we have support for private builds on Hydra, but they still push to the binary cache, and we can't push embargoed fixes to the binary cache; we have to keep these extremely limited in terms of how somebody could find the issues. We also need private code branches, again not on GitHub. All three of these have to be self-hosted systems that we completely control. This isn't specifically spelled out in the rules of the private distro list, but I'm pretty sure that's only because nobody has had the gall to actually ask.

One thing that I think is popular to talk about, but isn't really the issue, is how much time it takes to release patches. The goal of the distro list is to get patches out to maintainers seven to ten days before they go public, and if you can release your patches as a distribution within seven to ten days of them becoming public, you're well within what they'd like to see. If it takes longer than ten days, there's no sense in being on the embargoed list, because you simply can't benefit from the process. The small channel, which has a much reduced package set, can release a mass rebuild in about three hours; the full package set can be released in 24 hours, maybe faster. Hydra has existing support to auto-scale based on how many jobs are in the queue, and it's also possible to scale up automatically if we have a big issue that we want to push out
quickly. That said, I do see our ability to merge and release patches quickly as relevant to our security posture: I think it's important that we be able to merge and release as quickly as possible for when there is a true emergency. One of the problems that blocks this is when a mass rebuild is submitted and breaks a lot of packages, so we need a way to preflight mass rebuilds and see what the impact is. I think there's a lot we can learn from the Open Build Service, and I'd like somebody to study that and figure out how we can bring it to nixpkgs.

Zero Hydra Failures is a project that happens before every release, and it's essentially what vulnerability roundup number one was: looking at everything that has started to fail since the previous release and saying, all right, here's a massive list of things, let's go fix as many as possible and mark the rest as broken. That's fine, but it's a lot of work, all at once, right when we're trying to do a lot of other things and organize the release. I think it would be better if we just didn't put ourselves in the situation where we needed a massive catch-up issue; something like a regular weekly Zero Hydra Failures issue that we all rallied around and worked on, as a sort of orthogonal project, would be better.

Then there's PR testing: being able to review pull requests more automatically, so that when somebody with merge rights looks at a pull request they don't have to spend their time debugging trivial issues like whether it even evaluates. A maintainer shouldn't be the one to point out that a pull request doesn't evaluate.
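One cheap sanity check of that kind, as a sketch (not necessarily what any existing tooling runs), is simply to force an evaluation of every top-level package attribute in a nixpkgs checkout:

    # From the root of a nixpkgs checkout: evaluate every top-level package
    # attribute and throw the listing away; a failure here means the pull
    # request broke evaluation before anything was even built.
    nix-env -f . -qaP > /dev/null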
Additionally, it would be nice to have some automatic pull request testing. As a benefit, if we
stay on top of our builds and keep everything green, it's much easier to merge code, because you know exactly that this pull request broke it, versus the situation now: if there's a mass rebuild and a bunch of stuff shows up broken at the end, you don't know if it was already broken, you don't know what the impact is, and it's very hard to quantify. That is actually part of what made my first attempt at this fail, I think. I set up a Hydra using an astounding amount of hardware donated by Packet.net (I highly recommend you go look at them; I don't work for them, but they're really great people), and that Hydra would automatically create jobsets for every single pull request that came in. The number of derivations that get built for every single pull request is a lot, around 40,000, and it turns out that 70-some pull requests a day is a lot. We had a lot of cores working on this, and fundamentally I think there are some scaling issues with how Hydra is architected that make this difficult, but really what made it difficult is that it would build everything, and it was still so hard to tell whether it was a good change or a bad change, whether the five hundred failures are new, or whether five new failures are even an improvement. This was challenging. As a more recent attempt, I got frustrated at something a few days ago and set up GrahamCOfBorg to
assimilate all of the PRs. It does two things right now. One is that as soon as a pull request comes in (usually; sometimes it's broken) it estimates how many jobs will have to be rebuilt on Hydra and automatically labels the PR, and, thanks to Daniel Peebles, it labels them based on how many rebuilds will impact Linux and how many will impact Darwin. This has been pretty nice, because I discovered things that are big rebuilds that I didn't know were big rebuilds. The second thing it
does is that, for a very small list of people at the moment, it can accept commands to build specific attributes of nixpkgs. This avoids the problem of: I submitted an update to this one package, so am I now going to go build 40,000 things that don't even depend on it? We know, because the label says it impacts 110 things, so why try building 40,000 when we can just build those? So it accepts a list of attributes that you'd like to test, then starts a job and builds them, right now just on Linux. Soon it will also be building PRs on Darwin, which I think will be really cool. Darwin is actually a difficult issue, because it can be hard for people using NixOS to know how their pull request impacts Darwin: Darwin doesn't have curl in the standard environment, Darwin can't run certain software, and you might add a dependency to a package and ruin the standard environment for Darwin without even knowing it, until one of the Darwin contributors finds out and puts in a patch.
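For illustration, triggering a build looks something like leaving a comment on the pull request from somebody on that allowed list; the attribute name here is just an example, and the exact command syntax may have changed since:

    @GrahamCOfBorg build hello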
Surprisingly, and I didn't expect this, but I think it's really good news and pretty neat: we've been invited to become a CNA, which is an organization that issues CVE IDs. We were asked to do this by a member of the Red Hat security team who is also part of the organization behind the embargoed distro list. Becoming a CNA would, I believe, help us develop a relationship with the security community as a whole, help us become closer and more involved, and ultimately bring us closer to becoming part of the distro list. This is my contact information if you'd like to try this with me. I'm pretty sure I've talked with most of you at some point in time; I'm not very good with faces or names, so I apologize, but feel free to reach out. Thank you.

We have ten minutes for questions.

Question: I think we have a pretty liberal policy in accepting new packages, and I guess it's not necessarily always guaranteed that they will be maintained for security updates. Do you have any opinion on that, or on whether we should be more or less liberal in accepting packages?

I think that with the advent of overlays the issue changes. In a lot of ways it's difficult for us to be gatekeepers on what becomes part of nixpkgs. I think it should be a requirement that if somebody does contribute a package, they are listed as the maintainer of that package. That said, a lot of the maintainer workflows that exist are pre-GitHub, and as the graph showed earlier, we've received tons of drive-by contributions. What does it look like to be a maintainer when a good chunk of contributions are drive-by, from people not interested in becoming maintainers? I think we should accept updates from them, but I would be hesitant to accept new packages if the contributor isn't willing to maintain them. This is actually orthogonal to the issue of people asking for new packages: if you're asking for a new package, then there's nobody who's interested in maintaining it. [laughter]

Question: Maybe use the openSUSE connection that we now know we have; during that talk I was trying to figure out how to make this run on openSUSE. Have you spoken to the LWN (Linux Weekly News) people who quit maintaining these lists? Maybe they have some tooling around how they constructed such an amazing list, or maybe there were some internal politics and somebody still wants to do it. Have you spoken to the people behind it?

I have, yeah. When I did the first 15, or maybe even up to 20, I hadn't actually asked permission to use the data, and it wasn't specifically licensed, so I got up my courage, sent them an email and said, hey, could I use this data, I've been using it for a long time, sorry about not asking, but can I use it? And they said yes, absolutely. And then four weeks later they shut it down. I asked them further about why: just like the distros, the process is manual; they have about as much tooling as you and I have in our email clients, plus I guess a search tool. Unfortunately, that's why they shut it down: it didn't get a lot of interest, except Wednesday morning at 6 a.m.
Question: Can you tell us a bit more about GrahamCOfBorg: its current features, its roadmap, its scope?

Yeah. Its current feature set is exactly these two things: it preflights the build to see how many rebuilds there will be, and then it will build the specific things you ask for. In terms of roadmap, I wrote this in PHP, so I'd like it to not be written in PHP; that's probably number one. Number two is adding support to fan out to Darwin; that's going to be probably a twenty-minute project this afternoon. An important thing I'd really like to do is be able to sample a mass rebuild. Oh, and the most important thing is that this runs on people's laptops: volunteers can, or will be able to, install the daemon, pick up approved jobsets, and build them locally. That's why it doesn't build everything that's changed, because it needs to be cognizant of being a visitor on somebody's laptop, and that's why it only has a specific list of people who can trigger it. I'd like to open that up quite wide, and I think the Nix sandboxing makes that pretty safe to do, but we're just getting started.

Any more questions? All right, if there are no more questions, we can get to the pizza earlier. Thanks for the talk.