Nix for software deployment in high energy physics

Video in TIB AV-Portal: Nix for software deployment in high energy physics

Formal Metadata

Title: Nix for software deployment in high energy physics
License: CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract:
High energy physics, and large scale research in general, has both common and unusual requirements for computing. Software must be distributed across a wide range of heterogeneous resources, with single experiments able to continuously utilise many 10,000s of globally distributed machines. Exploitation of data continues for decades after it is first taken, making reproducibility and stability essential. The use of Nix has been tested within LHCb, one of the four large experiments at the Large Hadron Collider (LHC). In this talk we will discuss the conclusions of this testing and how Nix is suited to the needs of the "big science" community, as well as presenting some of the challenges which have been found when testing Nix. --- Bio: PhD student in high energy physics at the University of Manchester, UK and a member of the LHCb Collaboration studying the decays of charm quarks. Interested in utilising tools from outside the high energy physics community to make the use of computing more efficient.
All righty, welcome back. The next talk is going to be given by Chris Burr, and he's going to be talking about Nix for software deployment in high energy physics. I'm sure that there is a joke about Nix not being rocket science and physics in there somewhere, but I'm just going to skip it and give the mic to Chris. Please give him a little round of applause.

Okay, thank you very much. Who am I? I'm an experimental particle physics PhD student at the University of Manchester, so none of the stuff that I'm actually supposed to be doing is to do with computing; this is just a side project. I'm supposed to be working on physics, but I get frustrated with some of the computing stuff we have, so I have lots of side projects trying to make it better. Okay, so to give you a
little background: the facility that I work at is on the Swiss-French border near Geneva, and it's called CERN, a laboratory that's been around since about the mid-1950s. The main attraction there is the Large Hadron Collider, or LHC. This is the largest particle accelerator in the world, with a 27-kilometre circumference; here you can see a nice aerial picture, and the yellow line is where the tunnel runs, about 100 metres underground. It's a really huge, expensive machine, and it's used for fundamental physics research. The machine itself isn't the bit that's used for physics, though; it's just there so that you can have experiments based around the ring that can then measure things and study how the universe works. There are four main experiments, ALICE, ATLAS, CMS and LHCb, and then there are three smaller ones around the ring. The large experiments have at least a thousand people working on them; I think the biggest is about three and a half thousand people, and the smaller ones can be anywhere down to just a handful of people. And the LHC isn't the only thing at CERN: there are a lot of other experiments, more than I can fit on a slide, but at least 15 or 20 when I tried to count the ones I could find. The thing that pretty much all of these have in common is that they involve a lot of computing. The bigger experiments have huge computing requirements, but even the smaller ones, with fewer people to maintain their computing infrastructure, are sometimes still using huge clusters and doing lots of computation. I'm a member of LHCb, so
that's, depending on how you define it, the smallest of the four big LHC experiments, though "smallest" means there are about 800 physicists like me working on it, along with about 400 technicians; still a lot of people. The experiment itself is located about 70 metres underground. Here's a picture of it with some of the people who were in the collaboration at the time superimposed, because you're not allowed to have that many people underground at once; I was very disappointed when I found out the people were superimposed. The experiment was designed to study the differences between matter and antimatter using the decays of beauty hadrons, but this has since been expanded to cover a wide range of fundamental physics research. I'm not here to talk about that today, though; this is just to say that this talk is somewhat biased towards my experience with LHCb, but the wider community has similar needs and we're somewhat working together to improve this stuff. So, for what computing looks
like for us: the particles spin around, and typically protons smash together within the detector, which we use kind of like a camera to take images of what happened. However, because we have to take a lot of collisions to find anything interesting, we have about 30 million of these images being taken per second. If you then look at the amount of data this means coming out of the detector, it's some tens of terabytes per second, maybe 20 or 30 terabytes per second, far more than you could possibly hope to store and do anything with long term. So we have to come up with a system for reducing this down. For this we have a network of a few thousand machines located near the detector, at the surface, whose job is to reduce this few terabytes per second down to a few gigabytes per second. The reason this can work is that most collisions don't contain anything that's really that interesting that we don't already understand, so all these computers are working to select out "this particular snapshot looks interesting, these hundred don't", and it does a really good job of filtering down. But the process of separating this out isn't trivial, and a lot of work goes into optimising the software so that we can actually process the data this quickly with the amount of servers we can afford. A few gigabytes per second coming out of the detector might be just about small enough that you can actually store it long term, but it's still a huge amount of data to process: probably a few tens of petabytes per year sitting on disk, and maybe a hundred petabytes per year sitting on tape. To actually process this, I can't hope to do it on my laptop, so instead we have what's called the
Worldwide LHC Computing Grid. This is a network of about 170 computing centres which currently has about a million CPU cores and about an exabyte of storage, though it's rapidly growing as the experiments get bigger and we take more and more data. We submit batch jobs to this system to process the data for us, and it's shared by all the experiments, with pledges given to each. It's a shared resource, so no one experiment can demand what is installed on all these nodes. And then kind of the
last step after this: even after you've processed everything on what we call the grid, we still have a few gigabytes or terabytes worth of data, which then gets processed on a wide range of whatever resources we can find. This can be small VMs we spin up, really heavy-duty workstations, laptops, desktops, university batch systems; it's really just a mess of whatever computing resources we can find and make use of. This final stage can easily last a year or a few years, with people really studying and trying to understand what's in that last little selection of data they're looking at. So, how do we manage packaging software at the moment? Because we
can't control what software is installed everywhere, and because we can potentially have, I think typically for LHCb, about a hundred thousand jobs running at any point in the world with various bits of software on them, we can't just be installing the software every time; we'd spend a huge amount of time and need a huge amount of infrastructure for requesting the software to be installed. So we have a read-only, content-addressable file system developed at CERN, called CVMFS. The way this works is that you have some central node called the stratum 0, which is the only node on which writes can actually be made to this file system. Once you've made some writes, you make a commit at that point in time, and it gets distributed out to the public mirrors, specifically about one per country wherever there's a significant amount of work going on. The way these servers communicate with each other is just HTTP: the stratum 1s have Squid proxies on them and proxy the files out, so whenever you request a file you can have a proxy hierarchy in there. If I have a thousand machines in my computing centre, I'll have my own local proxy, so I don't have to go out and request it from Switzerland every single time any file gets requested. This works really well for distributing software around. As for the actual operating systems
that this software is running on, pretty much everything uses some variant of Red Hat Enterprise Linux. The most common variant is Scientific Linux 6, but this is slowly moving to CentOS 7, which in the next few years will probably finally become the major one. The way we define the builds is that we have an architecture, which is almost always 64-bit x86, though we're also interested in ARM if that's a way to get more computing power for our money, and people have also looked at PowerPC and more esoteric things; but basically everything we run at this point is x86. We then compile for each operating system we support, normally Scientific Linux 6, which is a RHEL 6 derivative, and CentOS 7; some of our older software was built for Scientific Linux 5, as that was the dominant one at the time. We then specify a compiler, and a last part where we specify what we call the build type, for whether we put debugging symbols in or optimise the builds. Because we keep running the software for quite a long time, the way we get backwards compatibility is just hoping that the ABI stays stable, in that Red Hat Enterprise Linux 6 can generally run binaries compiled against Red Hat Enterprise Linux 5's ABI, to varying degrees of success.
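To make the four-part build tag scheme concrete, here is a small illustrative helper. The tag format mirrors the convention described above (e.g. `x86_64-slc6-gcc62-opt`); the parsing function itself is an invention for this example, not part of any real HEP tool.

```python
# Sketch of the four-part platform tag used for HEP builds:
# architecture-os-compiler-buildtype. Purely illustrative.

def parse_platform(tag):
    """Split e.g. 'x86_64-slc6-gcc62-opt' into its four components."""
    arch, os_tag, compiler, build_type = tag.split("-")
    return {
        "arch": arch,              # e.g. x86_64, aarch64
        "os": os_tag,              # e.g. slc6 (Scientific Linux 6), centos7
        "compiler": compiler,      # e.g. gcc62, gcc7
        "build_type": build_type,  # opt (optimised) or dbg (debug symbols)
    }

print(parse_platform("x86_64-slc6-gcc62-opt")["os"])  # → slc6
```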
The way we then build our software on top of this is that there's a package manager, kind of, built by some people at CERN, called lcgcmake. It's built around CMake's ExternalProject module, and from some set of strings that we have, it builds all the dependencies needed for some package, on top of a base set of system dependencies that we expect to be there. The way these expressions
end up looking, as the equivalent of a Nix derivation, is that there is some function called LCGPackage_Add, where you define similar things: the package name, where to download the source from, how to configure it, and all the usual stuff you'd expect. Then you can specify multiple versions by using LCG_external_package, where you give the package name, the package version, and where you want it to be installed, so you can have multiple versions side by side.
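As a rough sketch, such a recipe looks something like the following. This paraphrases the macro names mentioned in the talk (`LCGPackage_Add`, `LCG_external_package`); it is not copied from the real lcgcmake repository, and the URL and commands are hypothetical.

```cmake
# Hedged sketch of an lcgcmake-style recipe (illustrative, not verbatim).
LCGPackage_Add(
  Boost
  URL http://downloads.example.org/boost_<NATIVE_VERSION>.tar.gz  # hypothetical
  CONFIGURE_COMMAND ./bootstrap.sh --prefix=<INSTALL_DIR>
  BUILD_COMMAND ./b2
  INSTALL_COMMAND ./b2 install
)

# Multiple versions can then be declared side by side:
LCG_external_package(Boost 1.62.0)
LCG_external_package(Boost 1.66.0)
```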
The way we then store these is that we have some install prefix, like the Nix store, and inside there you have a folder for each package; inside that there's a folder named for the version of the package plus the first five digits of a SHA-1 hash over the names, versions and hashes of the dependencies that were used. This is how we can have multiple builds, like the way Nix does it inside the store directory. On top of that, we then have the different platforms it was built for, and this can get quite out of hand, with a lot of different variants. Here is just one I picked, and it isn't the worst one, just the first one I decided to pick, but for some packages we could have tens of configurations.
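Concretely, an installed tree in this scheme looks something like the following. The package name, version, truncated hash and platform tags are made up for illustration, following the layout just described.

```
<prefix>/Boost/
└── 1.62.0-0588e/                  # version + first five digits of the SHA-1
    ├── x86_64-slc6-gcc62-opt/     #   over the dependency names/versions/hashes
    ├── x86_64-slc6-gcc62-dbg/
    └── x86_64-centos7-gcc7-opt/
```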
So, what are some of the issues that we have with the
current solution? Probably the biggest one: looking back to the past, before the Large Hadron Collider there was the Large Electron-Positron Collider, in the same tunnel, roughly the same size, which started running in 1989 and continued until 2000. But when it finished running in 2000, the use of its data didn't just suddenly stop; it continued to be analysed for more than a decade after. So in 2010 there were still people doing research using data collected in 1990 and 1995, having to use software that's 15 or 20 years old and trying to still get it to work. And this is just getting worse: the Large Hadron Collider is going to be running until 2035, so we'll have at least 20 years of data taking, and who knows how much longer after that we'll still be processing data. And of course,
this is longer than any operating system lasts. Right now, at this moment, there are probably hundreds of jobs running that use software built in 2011 for Scientific Linux 5, and this is just getting worse as we go further into the future. So having reproducible builds is a really nice thing to have, especially since, if there are any bugs in the maths libraries or anything else we use, we don't want to pick up a newer version without realising: we want to know which bugs were in the maths libraries we originally used to process the data, because otherwise we'll never realise we have those bugs and be able to take them into account. As I've said before, most of our current resources are Scientific Linux 6, with CentOS 7 slowly taking over. But
then we kind of have contradictory requirements: the experiments are expensive, we want to get as much as we can out of them, and we have a lot of data to process. So we want to be able to use new compilers and new vectorisation instructions, and to make use of multi-threading, which gets easier with new C++ standards. We kind of want these bleeding-edge features, with people working against GCC 8 at the moment, within a few weeks of it coming out, while also using GCC 4.8 or the older stacks that are currently running. As for
bringing the community together, instead of everyone having disjoint solutions for this, there's an organisation called the HEP Software Foundation that's trying to bring experiments together on a wide range of issues, one of which is a packaging working group. At the moment they've tried to look at different package managers and figure out which one will be their recommendation for the high-energy physics community to use. Of the ones spoken about most recently, two have been developed within high-energy physics, aliBuild and lcgcmake; there's Spack, which is used in high-performance computing centres and on supercomputers; and then there are Nix and Portage, which have been looked at by various people, Nix being me, and hence why I'm here.
So, what is my setup with Nix at the moment? One of the things is that using /nix as the store directory just isn't an option for us, because the way we distribute software around is CVMFS: we don't have the ability to get root access on most of the machines and resources we have, and we already have this system in place for distributing software through a read-only file system, so it basically has to be used. Typically there is the sft.cern.ch repository, which contains a lot of software, but the larger experiments tend to manage their own software installations, so they know exactly what's there, and if they need to patch things dynamically they can do it quite quickly if necessary. So for testing with Nix, I moved the store directory into a kind of mock CVMFS directory, as if I was trying to install the software on the stratum 0 into this folder, just to see how well that worked. As part of doing this I ended up forking nixpkgs, because I found myself needing to pull in some patches, as well as change various recipes to build with options that we'd need to use. Also as part of this, I ended up throwing in the experiment-specific software that's never going to go into the upstream repository, because nobody outside my experiment is going to be interested in using it. While preparing this talk I became aware of nixpkgs pinning and the ability to apply patches over the top of nixpkgs, which looks much easier to maintain than my fork, but I haven't found time to use it; I only found it two days ago.
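The pinning-plus-patches approach mentioned here can be sketched roughly like this. This is one common pattern, not the speaker's actual setup; the revision, sha256 and patch file name are placeholders.

```nix
# Hedged sketch: pin a nixpkgs revision and layer local patches on top,
# instead of maintaining a long-lived fork. <rev>, <sha256> and
# store-dir.patch are placeholders for illustration.
let
  pinned = fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256>";
  };
  # Use one plain evaluation of the pinned tree just to patch the tree itself.
  patchedTree = (import pinned {}).runCommand "nixpkgs-patched" {} ''
    cp -r ${pinned} $out
    chmod -R u+w $out
    patch -d $out -p1 < ${./store-dir.patch}
  '';
in
  import patchedTree { }
```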
The issue with all of this, though, is that because I've changed the store directory, everything takes a very long time to build. You go to bed, you wake up, and you either find that it's still compiling, or that it failed: it couldn't download some file it needed because the package moved the location where it keeps its source, or hash checks failed, or something.
So I ended up setting up an instance of Hydra, which, after putting it off for quite a while thinking it would be a lot of work, turned out to be really easy: within an hour I had Hydra running, and it was working beautifully. So everyone who's done something to make Hydra easy to use, you've done a great job. Right now this is running inside a Docker container that talks to an external database, and it has been stably running in this configuration for several months now, so I'm very happy with how it's going. Ultimately there are a few changes I'd like to make if we were to turn it into production, but it's worked really well for me. And so, the structure of the fork
that I've set up: I based it on nixpkgs stable, because I found it was easier to find all the sources and there were fewer cases where the source locations had changed, but in future I could imagine tracking the actual releases, applying patches over them, and just picking up a new release every six months to keep our stacks updated. So I applied a set of patches over the top to change the store directory and pull in some things I needed to actually get nixpkgs to build. Then I applied an overlay on top of that with any stuff that I would expect not to be upstreamed into nixpkgs. And then I applied a second overlay over the top of this again, which gives us what we're used to: the entire software stack built with different compilers or with different options. The idea is that we'd have some base release within the experiment, and then we can say we've built everything with GCC 6, or everything with GCC 7, or whatever. Where this might be quite useful is to have a build that uses the newer AVX-512 instructions, but also one that doesn't, because some of our software needs this as a separate build and can't decide at runtime, and we'd still want to use the resources we have that don't have these instructions available. So this is quite a useful thing to have. For what these overlays look like: here was my GCC 7 one, simplified down a bit. I just override GCC to be either GCC 7 or GCC 6 everywhere, and then make some other changes I found myself wanting to make, which was nice and straightforward. When changing everything back to GCC 6, I found there was just one thing that actually needed GCC 7, which was the AWS SDK, so I just manually overrode that.
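Such a compiler overlay can look roughly like this. This is a hedged reconstruction using the standard nixpkgs `overrideCC` idiom, not the speaker's exact code; the choice of `aws-sdk-cpp` as the exception mirrors the AWS SDK case mentioned above.

```nix
# Overlay sketch: build everything with GCC 6, except one package that
# genuinely needs GCC 7. Illustrative, not the talk's actual overlay.
self: super: {
  # Default the whole stack to GCC 6 by swapping the compiler in stdenv.
  stdenv = super.overrideCC super.stdenv super.gcc6;

  # Pin the one exception back to GCC 7.
  aws-sdk-cpp = super.aws-sdk-cpp.override {
    stdenv = super.overrideCC super.stdenv super.gcc7;
  };
}
```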
So it looks like this might actually be feasible for what we want to do, and maybe we'd expand these overlays with some more changes as this develops.
For actually running our software, we're used to using a command that's just a Python script and becomes available wherever our software environment is; it's called lb-run. This is kind of like nix-shell, in that you run it and specify, in our case, a platform, a package and a version. We have maybe 15-ish packaged environments that we can run with this, each of which has anywhere from maybe 20 to 100-and-something versions, and it just sets up environment variables to give you a shell you can then work with; you close that shell and open up a different environment to switch between them. So I did this using buildEnv: I
modified my nixpkgs setup so that I could apply the overlay for the GCC version, then specified some list of packages, and then made this into a function to which I could pass extra packages to install on top. And then I put this into Hydra
so that it built a new channel that just contained the base built in several configurations, so it wasn't necessary to repeat the same list of packages and have duplication, taking advantage of Nix's ability to override itself. I'm sure this can be done better, and if you've got any suggestions I'd love to know how, because I'm still kind of battling with understanding Nix and the language itself.
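The per-configuration environments described here can be sketched with `buildEnv`, sharing a single package list across compiler overlays. A hedged sketch under stated assumptions: the package names, overlay files and attribute names are illustrative, not the speaker's actual expression.

```nix
# Sketch: one buildEnv per compiler configuration, one shared package list.
{ nixpkgs ? <nixpkgs> }:
let
  # The common list of packages every environment should contain (illustrative).
  packageList = pkgs: [ pkgs.root pkgs.boost pkgs.python2 ];

  # Build an environment from an overlay plus optional extra packages.
  mkEnv = name: overlay: extra:
    let pkgs = import nixpkgs { overlays = [ overlay ]; };
    in pkgs.buildEnv {
      inherit name;
      paths = packageList pkgs ++ extra pkgs;
    };
in {
  base-gcc6 = mkEnv "base-gcc6" (import ./overlays/gcc6.nix) (pkgs: [ ]);
  base-gcc7 = mkEnv "base-gcc7" (import ./overlays/gcc7.nix) (pkgs: [ ]);
}
```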
So, what are some of the things that I've not figured out how to do?
Probably the biggest one is that all of the physicists will find they need to develop the software at some level. Sometimes this will just be one development change they need to make quickly, which never gets committed or made into a release; they just want to patch two lines in a repository somewhere to give them some configuration they can't get at runtime, because nobody ever thought of it. One option is to have a completely separate tool: at the moment we have some scripts that work around the CMake build system we have, which pull in the multiple dependencies and let you say "I want to modify this package", then let you run an environment on top and merge these things together quite intelligently. Because all this information is inside Nix, it feels like it should be possible to ask Nix: give me a build directory here, so that instead of the build happening somewhere like /tmp, you get the build environment for a package in a directory you choose; and it could give you build environments for other dependencies along the way, and when you build, it would link these directories against each other. I don't know how possible this is, or whether something already exists, but it would be very useful, and quite attractive to have just one system for doing this. The idea of having Nix as a replacement for Make also sounds quite interesting, but that's a little way away. Another
thing that I had an issue with, and it's a bit of a pet peeve of mine with other systems as well because they also use PYTHONPATH: I want to encourage people to use Python 3, and as part of this, giving them Python 2 and Python 3 in the same environment is a useful thing to do. Also, sometimes there's software that only supports Python 2 or only supports Python 3, and we want to mix them together to some extent. Using PYTHONPATH, like nix-shell does, means you then can't import any Python libraries if any of them aren't built compatible with both. So the solution I came up with here is to use sitecustomize inside the Python build, so that each Python installation, when it loads, uses a different, Python-version-specific environment variable to add to its path. Maybe there's another solution that people have used for this; if so, please let me know afterwards.
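The sitecustomize idea can be sketched like this. A minimal illustration only: the variable name `NIX_PYTHONPATH_XY` is an invention for this example, not necessarily what was actually used.

```python
# sitecustomize.py sketch: extend sys.path from a Python-version-specific
# environment variable, so a Python 2 and a Python 3 in the same shell each
# pick up only their own libraries. Variable name is hypothetical.
import os
import sys

def version_specific_paths(environ=None, version=None):
    """Return extra sys.path entries for this interpreter's major.minor."""
    environ = os.environ if environ is None else environ
    major, minor = version if version is not None else sys.version_info[:2]
    var = "NIX_PYTHONPATH_%d%d" % (major, minor)
    value = environ.get(var, "")
    return [p for p in value.split(os.pathsep) if p]

# In a real sitecustomize.py this runs at interpreter start-up:
for path in version_specific_paths():
    if path not in sys.path:
        sys.path.append(path)
```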
Another thing that we're used to having in all the systems we currently have is relocatability, in that potentially lots of experiments will want to install this to different CVMFS areas. I know this kind of goes against the purity that Nix has, but trying it out, I thought that, much like the way replace-dependency effectively just applies sed to the binary, you can assume the store path stays the same length and then change the store path in place, because it's a huge random string that you don't expect to appear by chance. I did this on quite a complicated installation and found that everything still worked, so it seems like something that could be done at installation time, though I guess there have been discussions about this before, as I found lots of mailing list threads and things. Then there are a few things I'd like to find a nicer way of doing. A lot of our libraries need the C++ standard to be set, so that everything is built with C++17 or C++14, and we mix these as different versions of software get upgraded to support newer standards; the same goes for some other compiler options. The only way I've thought of to do this is to add a property attribute to the compiler itself, so each package would query it and add its own CMake flags for the C++ standard or whatever, but maybe this is again something where I just haven't found the existing solution. Similarly for debugging symbols: it feels like this is supported, but I never managed to get any of the various explanations of getting debugging symbols to actually work, so maybe I'll figure that out on Saturday.
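The same-length store-path rewrite can be illustrated in a few lines. A toy sketch of the idea only, not the actual replace-dependency machinery; real tooling also has to worry about things like hash self-references.

```python
# Toy sketch: relocate a Nix-built binary by rewriting the store prefix in
# place. This only works because the replacement is exactly the same length,
# so no offsets inside the binary change.

def relocate(blob: bytes, old_prefix: bytes, new_prefix: bytes) -> bytes:
    if len(old_prefix) != len(new_prefix):
        raise ValueError("prefixes must be the same length to patch in place")
    return blob.replace(old_prefix, new_prefix)

# '/nix/store' and a hypothetical CVMFS-hosted prefix of identical length:
patched = relocate(b"rpath=/nix/store/abc-openssl/lib\x00",
                   b"/nix/store", b"/cvmfs/nix")
```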
work that out on Saturday. One last note before I finish: as I mentioned a few times, I expect a lot of these problems will be fixed with containers, though being able to install the software in a more reproducible way is still useful even with containers, and that's probably going to be the case inside HEP too. I think Singularity looks the most likely option inside high energy physics, because of the Docker daemon's security issues, it effectively being a privilege-escalation mechanism, whereas Singularity is designed for unprivileged users and is slowly starting to find its way into the grid sites that we have. This could also remove the need to relocate the store, which would be quite nice eventually, but that is all a long way away, so for now we need to find a solution that works without using containers. Even when we do move to containers, there was a post recently about multi-layered Docker images that put individual Nix packages into separate layers, and this looks like a really nice way of avoiding the problem we'd previously been worried about with fat containers, where suddenly you've got huge binary blobs in layers that aren't cached; you can get around this by taking advantage of Nix. If you've not heard of it, the linked post is quite nice.
In conclusion, I think Nix is awesome, and with its purity and reproducibility it works really nicely for research that can last a long time. There are still a few things I need to figure out, and hopefully I'll work them out over the next few days. Are
there any questions?

[Applause]

Q: Hi, really interesting. I just wondered, did you try anything to make /nix work anyway, like symlinks, user namespaces, or bind mounts?

A: I've tried a few things. With symlinks I ran into Nix complaining that the Nix store isn't allowed to be a symlink; I tried to work around it, but I don't know if you actually can, and I didn't get it working. For user namespaces, a lot of the machines we run on don't have a kernel new enough to provide them, but ultimately that might become a nice solution. The other thing I looked at is a piece of software called Parrot. The way it works is, I think, something like an LD_PRELOAD hack to intercept the syscalls, and I did sort of get that working, but some of the SELinux syscalls that are used are unsupported by it. I started patching them in, but then found myself in a bit of a mess, and I didn't have time to learn what I needed to know to patch it properly. So that might be another solution, but I didn't have enough time to investigate it properly.

Other questions? Yes.
Q: Hi, thanks for the talk. Is this a solution that's in production?

A: Sorry, can you move the microphone closer?

Q: Is this a solution that's heading to production, or is it already running?

A: At the moment it's still being investigated, subject to my time. I can start thinking about pushing it into production, but this is my side project, so it gets as much time as I can find before I submit my thesis. It is being taken seriously as part of the HEP Software Foundation's set of recommendations, though, so it may come out as the recommendation from that.

Host: More questions? We have time, you finished early.

Q: It seemed like a lot of the talk was about reproducibility of builds. How interested are you in reproducibility of experiments? Is that something that foundation is interested in? I don't know if I phrased that well.

A: By that, do you mean other experiments being able to reproduce the results?

Q: Yes, exactly.

A: That's kind of a separate thing that I'm also involved in: trying to preserve analyses and record exactly how we actually produced results. This can also help with that, by giving you a better idea of what software you used and letting you build those environments up again, which can be quite tricky.

Host: More questions? No? Then thank you very much again for your wonderful talk.

[Applause]