require.nix: dependency management for your favorite


Formal Metadata

Title: require.nix: dependency management for your favorite
Author: Shea Levy
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date: 2018
Language: English

Content Metadata

Abstract
Nix is a fantastic tool for managing the dependencies of your development projects, but ironically the Nix language itself has very limited facilities for modular code reuse and composition. In this talk, I will present on my recent experiments with a system, inspired by node.js's "require" mechanism, to define packages of Nix language code that can depend on other packages. Together with conventions encouraging detailed documentation, good error messages, and clean interfaces, I hope this project can form the seed for a robust ecosystem of libraries and tools that can help move us past the monolith of nixpkgs and enable us to more easily benefit from each other's work. --- Bio: Shea is a lead engineer at Target's data science and optimization group, whose production stack is built with Nix. He has been part of the NixOS community since 2010, has been working professionally with Nix since 2012, and is part of the Nix core team.
All righty, so next up is Shea. Shea is working at Target, one of our sponsors, and Shea is also on the Nix core team that was created not too long ago. Shea is going to take us on some sort of inception kind of ride, talking about the dependency manager for the dependency manager. I'm looking forward to it; give him a round of applause.

Hi, thank you. As I said, I'm Shea, I work at Target, and I've been working with Nix for quite a while. require.nix is a project that's come out of problems I've seen using Nix, especially in professional settings, for managing development and deployment life cycles. So before I get to what require.nix is, I'm going to go through some of the problems I've seen that inspired it.

First, the status quo of what exists in the Nix world today: we're a community of monolithic entry points. Almost every Nix expression in the world outside of nixpkgs has this line at the top, importing nixpkgs, and all of the useful, interesting functionality is pulled from there. Or maybe, if you're a little more sophisticated, you'll have builtins.fetchGit, or your company will have its own nixpkgs fork, or sometimes you'll have a company Nix repo, or if your company has a monorepo then it'll live in the big monorepo. Either way, for the most part your Nix projects will all revolve around some big central repository of all Nix functionality.

Why is this a problem? This is about to get political: I'm going to take a stand on monorepo versus polyrepo, for our context in this world, not in general. Our current workflow, especially in nixpkgs but even outside of it, leads to very tightly coupled development. Everything within nixpkgs is updated at once, atomically, which sort of makes sense because you want everything to work together, but it means the interfaces between our components are very blurred. If you make a stdenv change, you can just change all of the packages it breaks in one commit, and you never have to think about whether that was a good interface to begin with, whether the change is going to be usable for external users, et cetera.

It's hard to compose, in the sense that we have many different functionalities within nixpkgs and I can't easily pick them apart. I certainly can't say I want the Haskell builder from 18.03 within the package set of 18.09; that's simply not possible, but even the Haskell builder of a few weeks ago probably won't work, and it's also technically very difficult to actually plug it in.

We also accumulate a bunch of responsibilities within nixpkgs. We experiment, and we also have our stable package set; we have a lot of different functionalities and different components, as I was saying, and they all exist within one file system tree, one set of processes for managing the system, for stable updates, for pull requests, for governance. Everything is bound by the same system even though we have this mix of things. For example, I don't know if you have been following John Ericson's work on improving cross-compilation and stdenv, but a big difficulty he has run into, as I've seen it, is that he has to shove the work he's doing into the flow of nixpkgs without breaking too many things and without causing too much disruption at once, because there's no good way for him to iterate on stdenv independently of nixpkgs.
And, going back to what Rok was talking about earlier, in his talk, maybe it was yesterday, I can't even remember: within the Nix world we can get the benefits of a monorepo without actually having a monorepo, by tying everything together at the top. In principle we could do that same kind of thing with nixpkgs, where nixpkgs remains the place where everything is integrated together, the actual trusted set of the world, but the individual components are built independently on their own time frames.

So this is the main thing that pushed me, but there are a few other things that I think can be solved with require.nix, which I still have not described. One is annotations, by which I mean documentation, types, what the flags are: given some data, what is the metadata associated with it, whether that's a function definition or a package definition? We have some conventions for this, especially for package metadata, but we don't have general conventions, and we don't have discoverability around those annotations, or discoverability for what functionality is even available. It's pretty hard to answer "does this package exist?" unless you know enough about how nixpkgs is laid out to already know most of the answer. If you go through the attributes in the top-level package set, how do you know which of those is a package, versus a sub-package set, versus library functionality, versus some combination of all of the above? We don't have that available to us within nixpkgs very easily.

And finally, open sourcing is actually very difficult, at least in my experience, because again we have these monolithic company Nix repos, and they combine functionality that we would love to share, that is completely generic, with functionality like our private package sets. If we open sourced the generic part, we would have to break it out, and then we wouldn't use the thing we open sourced, because we'd have no way to combine it back into our big monorepo.

So this is the problem I was looking at. I sat down, did some design sessions, and I think the answer, what we need in Nix and don't have, is packages and modules. Not packages and modules in the sense that Nix means them, but in the sense of normal programming languages: Python has modules, a package exports those modules, you import the package and get your definitions from it. require.nix is a system to provide packages and modules for the Nix language, and it has some core technical capabilities.
First, a package specification, saying: this is the require.nix package itself, it's got a description, it's got a version, these are the packages I depend on with version ranges, and so on; I can define what modules I export, and some formatting details. Just a heads up for all of these things: they're the first thing I could think of that would cover exactly what I needed and nothing more. I'm not saying this format is great; I'm saying we have a format, it gives metadata, it gives you everything you need to know to consume the package.
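(The slide with the actual specification isn't reproduced in this transcript. Purely as a hypothetical sketch of the kind of package specification being described, with invented field names and deliberately JSON-representable data, it might look something like:)

    # Hypothetical package specification; the field names are
    # illustrative, not the actual require.nix format from the slides.
    {
      name = "require.nix";
      version = "0.1.0";
      description = "Packages and modules for the Nix language";
      # Packages depended on, with version ranges.
      dependencies = {
        base = ">=0.1 <0.2";
      };
      # Modules this package exports.
      modules = [ "trivial" "lists" ];
    }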
Similarly, we could have lock files to say: OK, you depend on the base library; for this repo, use this version of base from this revision on GitHub. A note about how I've built the lock files so far: they not only contain your direct dependencies, they need to contain your whole dependency set. But, going back to what Adam was talking about earlier, it still fails if you try to import a module that's not in your dependency list, so you don't have the problem that Node has; you still have a top-level lock tying everything together, saying these are the specific versions I'm using in my package.
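(Again as a hypothetical sketch only: a lock file in the spirit described, pinning the whole transitive dependency set rather than just direct dependencies, might look like:)

    # Hypothetical lock file: every package in the transitive
    # dependency set, pinned to a specific tag of a specific repo.
    {
      base = {
        version = "0.1.3";
        source = {
          type = "github";
          owner = "some-owner";  # hypothetical
          repo = "base";
          tag = "v0.1.3";
        };
      };
    }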
It also happens to allow for circular dependencies, but I don't think anybody is going to use that, and if they did, that's on their head, in my opinion. The implementation I have written also allows for local lock files; the expectation is that you add them to your .gitignore, so you can override things locally. And we use a separate lock file for bootstrapping require.nix itself, which hopefully will go away once we get some custom tooling.

And then here's how we actually use it, and this is where the name comes from, inspired by node.js's require. A module within the require.nix world is a function that takes this require function, and you call require to import a module; if you omit the module argument, it imports the package's top-level module. So here I'm importing the trivial module from base, and think of this next part as saying which identifiers you're importing; it's just an inherit, but I think this kind of pattern will probably be common. So I'm saying: get me the identity function from base, and then my module is going to define the identity function specialized to lists. That's really the core technical innovation of require.nix.
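(The exact calling convention is on the slides rather than in this transcript; a minimal sketch of the pattern as described, with a guessed argument form for require, might be:)

    # A require.nix module: a function taking the `require` function.
    require:
    let
      # Import the `trivial` module from the `base` package; omitting
      # the module name would import base's top-level module.
      trivial = require { package = "base"; module = "trivial"; };
      # "Which identifiers you're importing" -- just an inherit.
      inherit (trivial) identity;
    in {
      # The identity function, specialized to lists.
      identityList = xs: identity xs;
    }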
But as I was saying, this is also an opportunity to improve the conventions around how we build our projects. If we're going to be building new libraries in this new system, we might as well take advantage of the opportunity to do things in a more standardized, more conventional way.

The first convention I like, and I think this will probably be the most controversial of my conventions, is going declarative: plain-old-data, domain-specific types for everything we're representing. What's shown here is an instance of a type I call a source specification. Instead of doing everything with paths, builtins.fetchGit this, then take the subdirectory here, then filter that, you represent what your path is in a declarative way, and only at the very edges, when you need to hand it to a Nix builtin or something else that actually needs a path, do you convert it to one. You represent all of your data in the domain that you want to think about it in.

This one is for sources, but you could also have it for packages. A Cabal package is different from a C++ package, and in today's world the way we express that difference is by calling a different function. But if instead we just passed around "these are the fields that define my Cabal package, these are the fields that define my C++ package," and only when I actually need to consume it in a derivation-generic way do I call the function that converts it into a derivation... This is sort of related to some of what Eelco is doing, where the drv field of the modules is only consumed at the end, but otherwise you're operating over this domain-specific representation.

One of the reasons I like this approach is that it lets you efficiently compose, transform, and query your data. In nixpkgs, as some of you may know, there's a way to do composable filterSource, and it's ad hoc and only for that one thing. If you actually just try to apply filterSource multiple times, you'll add to the store multiple times, and it actually won't even work because of the contexts and all of that; but even setting that aside, you're still adding the path to the store over and over again. In practice, what you often have in these systems is a representation of your project's source, and then you're going to call cabal2nix, so you want a filtered source that has just the .cabal file in it, but you also want the top-level source, and maybe you actually have a source with multiple projects in it, so you want to recurse into each one. If each time I changed the path I were adding it to the store, that would be very inefficient. If instead I do all of my transformations on this representation, and only at the end, when I actually need to consume it, close the loop back to a Nix primitive, then things are much more efficient, and it's also much easier to look at a path and see what it's doing and what it means.

So that's for paths, but I think we should be doing it for everything: for the attribute sets we pass to mkDerivation and then poke at with .override, or Cabal's mkDerivation and its .override. Having a standard of "this is what the types look like," instead of just conventions in people's heads, would, I think, really help people use nixpkgs.
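(A hypothetical sketch of such a source-specification value, plain data that only becomes a store path at the edges; all field names are invented:)

    # Hypothetical declarative source specification: no store paths and
    # no builtins.fetchGit calls, just data describing the source.
    {
      fetch = {
        type = "github";
        owner = "some-owner";  # hypothetical
        repo = "some-repo";
        tag = "v1.0";
      };
      subdir = "service-a";
      # Declarative filtering instead of chained builtins.filterSource.
      filter = { include = [ "*.cabal" "src" ]; };
    }
    # Only at the very edge would a function like the `resolve`
    # described below convert this value into an actual path.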
One of the things I really want to support is developers owning their own Nix expressions, even if they're not into packaging, and giving them a language that is as close as possible to how they want to think about things is a huge way to get there. And again, going back to what Eelco was talking about yesterday, this could eventually lead to tooling that lets you discover what the flags for a package are, because you'd have a simple flags field saying what they are and what they mean, with the metadata you want. Then you can say: OK, I want to install less with the secure flag. Great, my install tool knows how to parse the flags out of a package and pass them in. So that's one thing.
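(The less/secure example is from the talk; the encoding below is a hypothetical sketch of the kind of flags field such tooling could read:)

    # Hypothetical package metadata exposing build flags to tooling.
    {
      name = "less";
      flags = {
        secure = {
          description = "Build less in its security-restricted mode";
          default = false;
        };
      };
    }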
Another thing, which I think everybody will be on board with, is documentation, and documentation structured in a particular way. The resolve function here is from the source-specifications library, and the module exporting resolve will also have a metadata field. You can have module-level description metadata, but also annotations per exported item. The description here is intentionally long and detailed; the idea is that you should be able to look at it and understand what the function is doing without needing to jump around a lot. Sometimes you're going to have to cross-reference, but this is the level of documentation, in terms of length, that I think we want to aim for: every exported top-level identifier should have its own description. This could be DocBook or some other kind of markup; I don't have strong opinions on that, I just think we should choose something and standardize it. And of course, if we're going to break things out into separate projects and separate libraries, the libraries themselves should have really good top-level documentation, just like you would have for any project: this is what this library is for, this is how it's organized, this is how you should think about it, these are the top-level concepts, et cetera.
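(A hypothetical sketch of a module metadata field with a per-export description, in the spirit of the slide being described:)

    # Hypothetical module metadata: a module-level description plus
    # annotations for each exported identifier.
    {
      meta = {
        description = "Declarative source specifications and resolution.";
        exports = {
          resolve = {
            description = ''
              Resolve a source specification into a concrete result.
              Intentionally long and detailed: it should explain the
              inputs, outputs, and failure modes well enough that
              readers rarely need to jump elsewhere.
            '';
          };
        };
      };
    }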
The next thing, which I know a lot of people have been interested in, is type annotations. In addition to the human-readable descriptions, this is a machine-readable description of the type of a function. I've come up with a way to encode types, quite generally, as Nix values. This one is saying: resolve is a function; the input type is a source specification, which is defined in some other types module; and the output is a result type. Result is like Rust's Result type, or Either, where you've got an error condition or a success condition: if it succeeded, you get back a path; if it failed, you get back some structured error type describing what the failure was. The reason I picked this example is that I also wanted to say that structured errors, as opposed to just throwing, are often the better way to go, because you can pass the information higher up the chain until the error message actually has the context the user is going to want. The type language I came up with is rich enough to have sums and record types, and, as you can see, function applications and type parameters; it's way too general for what we actually want to do, but again, I just started with the basic thing that would work.
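(The actual type-language encoding is on the slides; hypothetically, the type resolve : SourceSpecification -> Result StructuredError Path might be encoded as a Nix value along these lines:)

    # Hypothetical encoding of a type as a Nix value: resolve takes a
    # source specification and returns either a structured error or a
    # path, expressed as a Result type applied to two type arguments.
    {
      function = {
        input = { named = "types.sourceSpecification"; };
        output = {
          apply = {
            constructor = "types.result";
            arguments = [
              { named = "types.structuredError"; }
              { primitive = "path"; }
            ];
          };
        };
      };
    }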
And then the last thing we can do is build tooling on top of all of this. All that documentation and the types could be rendered in some nice way; we could have hosted documentation for all of our libraries. We could expose configurations to tools: as I mentioned earlier, if your package has some flags, your install tool could use those flags to do things. And we could build tooling that's native to require.nix. Right now, to resolve the package description and the lock file I was talking about earlier, I have some janky bootstrap code that reads it all in, and a small subset of the stuff require.nix itself uses is implemented twice, once within the require.nix repo, so that it can read the file just enough to fetch its real dependencies and then bootstrap itself. The only reason I do that is so that you can just write a default.nix and use nix-instantiate or nix-build like you normally do. But if we had require.nix-native tooling, it could do the job of parsing these package definitions and lock files, it could be smarter about how it fetches the dependencies and shares them, and it could also give you a nice REPL experience where, for any given identifier, you can easily say: give me the doc string for this, or show me the type of this.

So now I want to go into what's coming next. This is the idea, this is where I want to go, and the big question is what comes next. I actually have stuff locally on my machine that works, and it has proven the concept for me; the reason I haven't gone much further is that this is not a viable project unless there's buy-in from the community.
And I personally am not a language-packaging expert: I don't know what the right conventions should be, I don't know what the right fields should be. So I have a number of things that I would love for people to be looking at in the near future, possibly even at the hackathon if you're here, and things that I also want to be looking at, but I'm mostly interested in following where there's actual community interest.

One set of things we can do is make some libraries. We could have language-specific libraries: we've got the Haskell infrastructure, the Perl infrastructure, the Rust infrastructure; they all have their own builders, many of them have their own ways to generate expressions, and they have their own package sets. Maybe we could break those out. Just as a note, I have some opinions on how that should be done: I somewhat disagree with the love for the yarn2nix approach, and I think cabal2nix is actually closer to the one true path, but ask me later if you want to hear more about that. As a little teaser, this is something we're using within Target (these are probably proprietary names I shouldn't be sharing, but oh well): we have a single repo with a bunch of packages defined in a cabal.project, and we just pass this list and a given tag for that repo, and it pulls them all in, and we have them available in our package set. This is something we would love to open source; the functionality is not Target-specific at all, but again, we don't have a nice Haskell library to plug it into, and it would be kind of weird within nixpkgs proper.

Also, the nixpkgs lib: all of the basic stuff in there could be broken out; in particular, the trivial stuff could easily become its own base module of the kind of things you want when doing basic work with Nix. stdenv, too: the core concept of what is part of the standard environment, and what the interface to mkDerivation is, could be separated out, iterated on, and experimented upon; nixpkgs could still do the actual bootstrapping and define what the stdenv for nixpkgs is, but you could come along and plug your own in, or plug your own in just for some packages, and explore with that. And the NixOS module system is another significant candidate: it's not really nixpkgs-specific, and it's not really NixOS-specific, as we learned earlier today; it's a complete standalone piece of Nix-language functionality that could live in its own place and be experimented on and developed on its own path.
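(The Target code itself is not public; hypothetically, the library call being teased might look something like:)

    # Hypothetical: pull several packages defined in one repo's
    # cabal.project, at a given tag, into a Haskell package set.
    haskellLib.packagesFromCabalProject {
      repo = "https://example.com/team/haskell-monorepo.git";  # hypothetical
      tag = "v2018.10";
      packages = [ "service-a" "service-b" "shared-lib" ];
    }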
The other thing, and this is where I think I need more help, is all those standardizations I talked about earlier. I don't know what they should be; I don't know the right answers on these things. What I really want to do is import best practices from other communities and get the people who are experts, who care about these things, to help us figure out what to do. What should the package format be? What should the lock format be? What should our conventions be? In particular, one thing I have completely not gotten to is tests and test suites: how do we want to represent those, do they go in the package format or are they some kind of separate thing, what kind of tooling would we need to run these tests, and all of that. And then, of course, the one last project I want to do is publish these slides and announce that this is happening, but again, that depends on whether people are interested and want to make it happen. So that's all I had; thank you. We've got about ten minutes, it looks like, for questions. [Applause]

[Moderator] So, are there any questions, or is everything already entirely clear?

[Audience] Hi, thanks for the talk. I have a couple of questions, so here's just one, and maybe I'll raise my hand again. This is all nice and it's a great idea, with the pros and cons you mentioned around monorepos and libraries, but have you thought about a migration path? We all have nixpkgs as it is, and we use it as it is, and a lot of work has been put into it, so you don't want to break backwards compatibility and so on.

[Shea] Yeah. For a migration path, one thing I've thought of is to start with the kind of functionality that doesn't really make sense in nixpkgs to begin with: it's not core to how you use NixOS, but maybe it's how you use Nix in development. Like the nice stuff we have for building Haskell package sets; it doesn't really need to be in nixpkgs. nixpkgs has its Haskell package set, and that's fine. So we could start with those libraries, validate the concepts, validate the tooling, and go with what works. The one thing I'm hesitant to say there, though, is that I'd rather just go where people are interested and willing to do the implementation, so I'm not going to say no to people if there's interest in jumping straight to stdenv. In terms of migration, once we're ready to do that, I think it's going to need community-wide consideration, because it's going to fundamentally change what happens when you import nixpkgs and how it's all tied together. The one nice thing, though, is that on a surface level at least, this will all be opaque to the user: the default.nix will still work, it'll just have some library dependencies in its package specification, and the default.nix will handle the bootstrapping for you and pull it all together. It's only once people want to override things, or developers want to work with it directly, and I don't have good thoughts yet on what exactly that path is; again, that depends on buy-in.

[Moderator] OK, let's take a question from the internet: why do the lock files not contain the hash of the Nix file that is being imported?

[Shea] In principle they could, except, first of all, it's not a Nix file that's being imported; it's an entire package. So let's go back to, where is it, right.
This basically gets back to a trade-off of usability versus, I guess not really power, but... This is a format that I trust at a high level: I trust that if I'm referencing a tag, it's referentially transparent in a meaningful way; I trust the owner of this repo not to update the tag, so the tag gives a unique name. Now, it's true that this is sort of cheating; it's not good enough for Nix, but it's good enough for me. The question is: if we force hashes, if we force people to calculate all these things, in practice... I have genuinely seen people give up on Nix over little issues like this, like having to update the hash every time. I'll say I'm not completely wed to this, but my current gut feeling is that if we can define things at a domain-specific level, in a way where the concept of purity makes sense for that domain, we'll get a huge win on usability and the loss of purity will be slim to none.

[Moderator] One more internet question: how much of the time of any Nix command is spent fetching requires from the network?

[Shea] The way it currently works is that it just fetches everything in the lock up front. I'm currently using the fetchGit from Target's nix-fetchers repo, which has some pretty good caching, so once you've fetched something, it's actually pretty fast to validate that it's already there and you're fine. The built-in fetchGit also does that if you provide a revision, but since we're just providing tags, it sometimes has to refetch. So it's not too long, although I haven't actually timed the whole thing. I do think this is a place where standalone require.nix-native tooling would help, because it could do smart things about fetching, knowing when it needs to fetch, and handling all of that outside of a Nix evaluation.

[Audience] Just a small point on what was just discussed, about putting hashes in there for tags: I have seen people push tags to a different version, so I would be a big proponent of saying, OK, do put the hashes in there.
But I think what would solve the problem much more nicely, if you have something like this, where I understand what a, what did you call it, source specification is, right, is to put a little bit of tooling into the Nix tool that says "your hash is different, press yes to update," and it rewrites the file for you. It's much easier, and I think it gets us both things.

[Shea] Well, the other thing I will say here is that this format also has a mode for specifying a branch and a Git revision, so as long as you trust Git revisions, you can just use those. It still doesn't require a hash of the file system contents, though. But yeah, we could also do that; the discussions are already starting.

[Audience] It's good to see someone else thinking about these sorts of workflows, because I have a couple of similar cases where I want to use certain library functions out of nixpkgs but not the actual packages themselves. In my case, I've found what works really well is to have a let binding at the top that says something like "let licenses = (builtins.fetchTarball nixpkgs-by-sha256).lib.licenses", in which case I've fetched a small sub-component of nixpkgs in a deterministic way from the monorepo, creating in effect my own library without busting up the monorepo and without introducing version dependencies, with the upstream testing and discoverability issues those bring. Have you tried this approach, and is it working?

[Shea] Yeah. The one thing that doesn't support is when I want to push changes that, in my opinion, don't really make sense in nixpkgs except for the fact that nixpkgs is the central place; it doesn't really help with that. And it doesn't help with the coupled deployment and development aspects where the monorepo actually causes problems. One thing I wanted to say, which I didn't mention at the time: of course, all of these things I'm talking about are about practice and policy, and in principle you can implement policies that don't have these problems within a monorepo. But I believe, and my experience has been, that the monorepo we have, and the practices we follow in it, encourage this lack of clean interfaces between components and that sort of thing.

[Moderator] Any more questions from someone who hasn't already asked a question? Right, two minutes.

[Audience] It's more of a statement: I think this is a brilliant idea, and something that hasn't been mentioned is that, if it gets adopted, packages could provide their own require.nix file, and then as nixpkgs contributors we could just import the derivation directly.

[Shea] Yeah, exactly. You could put a require.nix file within your package, or a package set that gives a list of overlays, so it could either import its own nixpkgs or give an overlay list that you can apply to your own nixpkgs, and that sort of thing.
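(Hypothetically, such a package's require.nix could expose something like:)

    # Hypothetical: a package exposing an overlay list that consumers
    # apply to their own nixpkgs instead of importing a pinned one.
    {
      overlays = [
        (self: super: {
          my-package = self.callPackage ./package.nix { };  # hypothetical
        })
      ];
    }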
[Moderator] Any more sentences with question marks today?

[Audience] Another similar point that I've seen a handful of people in the community struggle with: if you have a package that you don't intend to upstream to nixpkgs, there's some duplication between the default.nix in the package's repository, the shell.nix, the release.nix, and potentially your company's monorepo or the nixpkgs monorepo. Have you thought a little bit about solving this particular problem?

[Shea] I have thought about that problem. I don't know that this is a full solution to it, but I do think that having more ability to reference standalone libraries means we'd have one place where the actual expression lives.

[Audience] Have you also worked on, let's say, a practical example with which you can demonstrate what value this modularization approach provides? I agree with you that we need much better modularization, but I think it would help a lot if there's something really practical you can show for which it makes sense. Have you thought about that?

[Shea] Yeah, I've thought about that, and to me that's the next step. What wasn't clear to me before talking about this over the weekend was whether anybody else was on board; now it seems there is interest. So yes, my next step is to polish up the Haskell library, the sort of library of nice Haskell functionality that we're using, some of it within Target, some of which I've used at previous companies, and make it a nice, clean, standalone piece of functionality. That's my next project, unless there's some big push for some other thing that I should help out with.

[Moderator] We're at negative 20 seconds anyway. [Laughter] [Applause] Are you up for answering one more question? Then let's go for it; we can talk all night.

[Audience] This sounds like a really great initiative. One question I have: I agree it's currently too difficult to get at the metadata of things, and this messes up the user experience and also the tooling. For example, nix-env's query was basically a clusterfuck, and nix search sort of improves on it, but it needs a caching mechanism. It seems to me a lot of those difficulties could be ameliorated if we just used JSON or whatever to specify the actual metadata, things like the description, as plain strings, because you don't actually want to compute them; you'd have static information that normal tooling can read and update automatically. So one question I have is: how wedded are you to the idea of doing this stuff as Nix expressions? Would you consider doing something like that?

[Shea] I meant to mention this: you'll notice both of these formats are completely representable as JSON.
That's not an accident. So yes, I'm with you, possibly; I'm open to the idea of saying the metadata for the packages themselves should be JSON, to make tooling easier, or to just use hnix or rnix or the Nix evaluator in C++ and import it that way.

[Moderator] Anything else? Cool. Alrighty: another wonderful round of applause, thank you. [Applause]