A method for distributing applications independent from the distro


Formal Metadata

Title
A method for distributing applications independent from the distro
Number of Parts
90
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
For many years the Linux distro concept has been about "inclusion of applications," sometimes to the detriment of cohabiting applications and the stability of the core OS. There has been much discussion over the years about JeOS, embedded Linux, custom distros, appliance building, etc., but not a lot of discussion about how applications could be delivered such that they were more readily able to cohabit. On a related note, open source applications (because distros are so "inclusive") are put through significant scrutiny around their design and deployment related to their integration with the core OS that may or may not make sense. The scrutiny is certainly more intense than proprietary software is required to undergo. We are proposing a panel discussion around a solution called "Software Collections." Software Collections have been adopted by Red Hat and are under consideration by other distros as a solution to the application delivery problem. Questions include: is this a good solution? Can multiple distros adopt one solution (or are there inherent differences)? Can multiple distros, potentially, even leverage the same package for an application?
Transcript: English (auto-generated)
Hey, good morning. Hopefully my slides make some sense since I wrote them this morning. The topic was sort of doing packaging for distros and not inside distros because, you know, we have a lot of distros represented here.
There were a couple of talks yesterday about Fedora packaging, and so my talk is not so much about packaging inside the distro but really trying to address some of the problems that we see from users who try to use applications that may or may not match what's inside the distro, right? So this is the typical sysadmin problem.
And so if we look at, you know, where things are coming from, the number of Linux distributions in terms of versions that are out there and the number of options that we have hasn't really grown a lot, right? So there's variations
but fundamentally we still have a handful of Debian, RPM, and sort of other odd variants of Linux distributions. But the number of open source projects on the other hand has grown at a much faster rate than the distributions themselves can include
and make available. So how do we give end users who are interested in these projects that are not included in the distribution access to those relatively easily? And there's also the fact that developers, unlike packagers, tend to be very version sensitive when they're looking at their particular application. They care about particular versions of Java, particular versions of Ruby, and
if it's tied to the OS it becomes relatively hard to manage. So if we look at sort of packaging, packaging has come in a couple different formats. So individual projects always have an upstream. So we always have this upstream tarball
notion, and then specific distros have their packaging rules in terms of how things are packaged and where they're maintained. So in Fedora, for example, there are usually a couple of OpenSSL packages because
OpenSSL interfaces change over time and you want to provide a couple of different options. If you use build tools like Autoconf, they also have a certain set of defaults, whether it's /usr/local or other places. So if you're using ./configure for your project you get a certain set of defaults that have come through
some level of Unix conventions over the last 20 years, whether it's /usr/local, /opt, and so on. So we have built our packaging rules and guidelines over time based on behaviors that we've seen. What we typically don't see in the open source space is the actual usage of /opt
and the fact that in a number of companies, especially if you've been sysadminning, whether in a university or in a company, there are company-specific rules as to where applications go, because companies like to manage versioned applications for their own use. And so
those rules play a role in how much pain the sysadmins have to go through in order to manage and deploy applications, right? So typically we think of systems as being all-inclusive and you have every version of the OS that's needed. But if you're managing 10,000 systems and you need to provide, you know, 20 different versions of Perl
just because there are so many different versions of apps available, the complexity of managing that becomes harder. So we've seen examples of, you know, using NFS shares and others for managing applications
but those are always different from what the distribution provides. So whatever the distribution provides is no longer relevant; the system administrator will override that with their own tools. Now, in terms of what is there today from a packaging perspective:
if we look at the distributions, right, so in, you know, Debian's case, they're using the in-distro versions of Apache, MySQL and so on. And then there are third-party repos, so RPMforge and
others provide additional packages that the distributions themselves don't provide, whether it's different versions or other things, right? So those are really the two things that are going on in terms of packaged application availability. Now, if you have been in the sysadmin space,
for the end users, this doesn't necessarily meet the needs and some of the ideal goals for packaging, especially for third parties, is that we'd like some portability across distros, right? So
assuming that, you know, we can manage RPMs and DEBs and other things, but even across versions, right? So we want to be able to run the version of the OS that we want to have and we want to be able to run the version of the application or the library on top that we want.
In order to do that you have to look at what you're dependent on in the OS, and from a third-party perspective you can actually limit your dependency on the core stack by quite a bit. But really the biggest benefit comes not from the first two pieces but from the third bullet on the slide,
which is that you actually ignore the packaging rules of the distro that you're running on, because the packaging rules of the distro that you're running on require you to install and use the libraries and versions that are embedded in the distro. Now if you ignore those rules
you could actually have much more flexibility and manage a larger scale of environment at a different rate. And so part of what this talk talks about is some work that we started at Red Hat about a year and a half ago. We published some documentation in the Fedora space last year and
are sort of walking through some of the issues that are still to be solved. So just a bit of history. This problem of how you manage multiple versions of application packages goes back to, you know, when Unix was in its early days. Two of the early solutions
were Environment Modules, which was built with Tcl sometime in the mid-'90s and is still in use today in some locations, and another effort called SEPP, simple packaging, which was done by Tobias Oetiker,
also in the '98 timeframe. So there have been efforts going on in terms of how you manage different versions of applications and how you provide utilities
to look at this. What we started doing in the Fedora and the Red Hat space was to look at what are the things that we want to provide. First is: what are we packaging? Are we packaging a simple library, or are we packaging an application or a tool set? And typically when we look at this question
we're really talking about an application that is a larger set of components. So it's not a single RPM and it's not a single simple package; it tends to be a conglomeration of things. So if we think of Postgres, for example, it's an application with five or ten different RPMs that end up being installed. So if you're talking about a
full Drupal install, you could talk about a full stack of Drupal with the back-end database and everything else concerned. So when we started looking at this we said where we want to install it is actually important because now migration becomes
easier. So we started looking at it as being installed outside the Linux file system hierarchy, right? And the reason we started looking at this as outside the file system hierarchy is precisely because the space inside the normal FHS
is covered by the rules of the distribution, right? So Fedora has Fedora packaging guidelines, Ubuntu has its own packaging guidelines, and so on. So by going outside the FHS defaults we can now do a couple of different things. So let me talk about this from a RHEL and Fedora perspective,
and give you a little history. So in RHEL and Fedora we share a common set of packaging guidelines. So however the packages get built in Fedora, you'll see them built exactly the same way inside RHEL. And what you get is a completely integrated and
interdependent package set that all sits in the root file system. So everything sits under root and that's pretty much it, right? Root, usr, and var are the basic file system directories that we cover. The assumption is that
inside that directory structure, if you install something with a different version or providing different content sets, and the libraries aren't versioned properly, things will start to break. So there's a very tight dependency inside there. Now,
if we look at life cycles: Fedora moves at a six-month cadence, RHEL moves at a ten-year cadence, totally different. In some cases on Fedora people actually want packages to slow down. So there are application developers using Ruby that actually care about Ruby 1.8.7 rather than Ruby 1.9.3, and
in RHEL we have the exact opposite problem: people care about Ruby 1.9 versus Ruby 1.8.7. So how do we provide /usr/bin/ruby to be both, in both cases? It's not really possible. So you need a solution that provides
different capabilities. If we look at where sysadmins actually install stuff by default: if you're installing from a tarball, you're installing from source, and you're not really doing the packaging step yourself, you tend to install in /usr/local, because that's where the defaults for most of the build tools will put you, right?
Especially if you're not doing version-specific items, you end up in /usr/local. If you are doing version-specific installs, you tend to install in a version-specific directory, and then typically you'll do symlinks, right, in some cases just because you don't want to worry about how big your path name is going to be,
or having to munge your path often enough; you'll tend to use symlinks to hide the version information. So when you do that you start to actually go outside the default file system hierarchy, right?
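The version-directory-plus-symlink pattern just described can be sketched as follows; paths and version numbers here are illustrative, using a temporary directory in place of a real install prefix.

```shell
# Each version gets its own directory; a version-free symlink hides
# which one is current, so $PATH entries never need to change.
base=$(mktemp -d)
mkdir -p "$base/perl-5.14.2/bin" "$base/perl-5.16.0/bin"

# Point the stable name at the version in use:
ln -sfn "$base/perl-5.14.2" "$base/perl"

# Switching versions is just repointing the link:
ln -sfn "$base/perl-5.16.0" "$base/perl"
readlink "$base/perl"
```

Anything that references `$base/perl/bin` keeps working across the switch, which is exactly why sysadmins reach for this pattern.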
You're now in /usr/local or you're in /opt, which are not part of what is considered the in-distro file system space for installing applications. But if you go there, if you're in /usr/local or in /opt, there are no other rules. It just says in /opt you can do whatever you feel like; in /usr/local you can do whatever you feel like.
There are no sub-standards that cover those particular directories. So one of the things that we started to look at was how do we address this problem of multiple sources
providing applications, as well as the fact that system admins will want to create their own source of applications locally, and you want to do versioning, and you want to do multiple application co-hosting, and so on. So we looked at the default file system hierarchy
capabilities, and one of the key things that stood out was that /opt is defined by the Linux Foundation as being for things that are not in the distro. So not in Fedora, not in Ubuntu, not in RHEL, right? So typically companies
tend to use /opt for their own use. So one of the things that we looked at was, well, how do you identify the same application from multiple sources, right? So if I take MySQL and I say MySQL packaged by Fedora, MySQL packaged by RPMforge,
MySQL packaged by Oracle, you know, three different packaging sources, but still giving me the same application, maybe with different tweaks for compilers and other options, how do I get them to coexist and how do I get them to be installed separately? So we looked at creating two-level
identification strings. One is the vendor, and the second one is an application and a version. So I could do an install of an application that is Red Hat compiled, using an "rh" tag for the vendor and then an application version. So Postgres 9.2, for example, would give me opt, rh, postgres 9.2,
and then beyond that I have my standard directory structure, so I would have a standard etc, usr, and so on. Now if Postgres 9.2 was compiled by somebody else with a whole different set of options, all it needs is a different vendor tag, whoever compiled it. So opt, fedora, for example: opt, fedora,
postgres 9.2. It still sits on the same file system. Now what this gives you, from a developer's perspective, is that I can now choose and tweak how I want to have my dependencies or how I want my applications to sit.
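The two-level layout just described can be sketched on disk like this; a temporary directory stands in for /opt, and the vendor tags and application name follow the talk's examples.

```shell
# /opt/<vendor>/<app-version>/root/... : the same application, built by
# two different vendors, coexisting side by side.
opt=$(mktemp -d)
for vendor in rh fedora; do
    mkdir -p "$opt/$vendor/postgres-9.2/root/usr/bin"
done

# List the second-level directories, one per (vendor, app-version) pair:
find "$opt" -mindepth 2 -maxdepth 2 -type d | sort
```

Because the vendor tag is the first path component, neither build can collide with the other, and neither touches the distro's own file system space.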
So what does that buy from the distro perspective? Well, the distro actually doesn't care at this point, because it's sitting outside the distro and the distro is not maintaining it. Whoever the vendor is that's providing the application package is now responsible for that tree and location.
So one of the things that we did last year was we created a set of guidelines based on this, essentially the file system hierarchy part and some work on RPM. So in the Fedora hosted project, we created a software collection packaging guideline
that allowed us to install as many versions of as many applications as we felt like. Now if you think of a distro that has a set of dependent applications inside the distro, that's all completely built from scratch when the distro itself is built.
But when you think of additional applications that sit on top of the distro, those can be either independent applications, standalone, has no dependency other than the base layer of the distro, or you can have dependencies between applications themselves, right? So in the Windows world, for example, if I installed Microsoft Word
and Microsoft Excel, do they share components or are they totally independent? And you can take that example to pretty much any application you want. The packaging guideline here doesn't really talk about that. It assumes that when you're packaging your application, you are the leaf node, and you're only dependent on the OS. If you're going to have cross
dependencies between the applications, that's another layer of complexity, which you have to manage, but at least at the first step you have a set of guidelines that we started to look at that allow you to create these leaf node applications.
Now while we created this documentation in the Fedora space, so the Fedora guys don't beat me up, I will note that it is not an official standard for the Fedora project itself, right? So part of what I was talking about is that there is
the notion of what is a packaging guideline for something that is in distro, and that's the Fedora packaging standards for Fedora, versus what is or what should be a packaging guidelines for running on top of the distro. And when you're running on top of the distro, you actually want variances from the standard packaging guidelines.
And that's what we did here with the software collections. So as part of the software collections, we have to deal with a number of things, right? We have to deal with the fact that we're now in non-standard paths, right? My default search paths for a number of items have to be munged.
I need to worry about my binaries, my libraries, my man path, all the normal things that you expect to work off the bat. You have to worry about adjusting those on the fly. So in order to do that, we created a set of utilities called scl-utils, essentially
software collection utilities. And what this does is, at least for the RPM case, it uses additional RPM metadata, as well as a certain set of scripts inside the packages, to modify the environment. So the issues that we talked about, non-standard paths,
those are automatically managed as part of scl-utils. So let's take a look at what we actually do. So in the RPM space, we actually added a couple of new RPM macros.
So the first one is pretty clear. It's the scl name. So all it does is give a name to the software collection. And what this allows us to do is, if I go in and I say, give me a list of all the software collections that are installed on the system, it actually goes and looks in the RPM database, finds this element, and gives me a list.
So just, so think of going into a system and saying, give me a list of installed applications. Today, if you look in the RPM database, you get a list of RPMs. You don't get a list of high-level applications. So this is more of the meta packaging and the meta name of what's being installed.
Package name: this is the original package name. So if you were to install it in the file system hierarchy, what would its package name be? So that's where we're coming from. And then the scl prefix: what is the root of the software collection?
In this case, I was building it at Red Hat, so I used opt, rh. If I was building it in Fedora, I would do opt, fedora. Whatever vendor tag I wanted to use: opt, vendor tag. And then I have two directories that I define. One is the location of the scripts,
and these are the scripts used to munge the environment variables. So this is where we change our path names for our binaries, libraries, man path, and so on.
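Put together, the macros just described sit at the top of a collection's spec file. A sketch, where the collection name and package name are made-up examples and the macro names follow scl-utils conventions (exact spellings may differ from what's shown in the slides):

```spec
# Illustrative SCL spec fragment; "perl514" and "perl-Foo" are
# hypothetical names, and /opt/rh is the vendor prefix from the example.
%global scl         perl514       # the software collection's name
%global pkg_name    perl-Foo      # what the package would be called outside a collection
%global _scl_prefix /opt/rh       # vendor root: /opt/<vendor-tag>

Name: %{?scl_prefix}%{pkg_name}   # e.g. perl514-perl-Foo
```

Querying the RPM database for the scl name is what makes "list the installed collections" possible, as described above.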
And then the actual root directory where actual files are installed as part of the software collection. And if you guys want to play with some collections, we actually have a list of software collections that we've been building and testing
for a while to see what are the issues that we come through. So most of these are running on RHEL 6 or RHEL 5. And they sort of contrast with what's actually built in RHEL, right? So, for example, in the Perl collection for RHEL 6,
RHEL defaults to Perl 5.8. At least RHEL 6 defaults to Perl 5.8. And what we did was built a Perl 5.14 version that would run on RHEL 6 without conflicting with any of the system packages. So the fact that, you know, in Perl 5.12, there was a lot of Unicode enhancements, you can now use all those Unicode enhancements in a version of Perl totally independent of the system Perl.
Same thing with Python, right? Python 2.7 default in RHEL. There's no way we'll change it because a lot of system utilities depend on it. But Python 3.2, Python 3 in general has a lot of neat features. So how do we get it made available to end users and developers who want to build applications on Python 3?
So Python 3.2 was sort of doing that effort. How many of you follow Fedora or how the Fedora packaging is done? So one thing you'll note that when we did Python 3.2, this was actually different from what Fedora did with Python 3.
In Fedora, there's actually a /usr/bin/python and a /usr/bin/python3, because that's being used to build the transition from Python 2 to Python 3 for all the system utilities. Here, when we build Python 3.2, we're explicitly not looking at the system utilities.
We're looking at it for end-user apps, right? Something different from what the system utilities are dependent on. So that's why it's not /usr/bin/python3. We can now actually give you a much more fine-grained, version-controlled Python 3.2 in a different directory structure.
And similarly with all the other ones, we started to do PHP, MySQL, Postgres. As we were doing this, one of the other things that we found to be somewhat interesting is if you look at the history of the databases, MySQL and Postgres, as they have gone through different versions over time, if you have a lot of data in them,
it gets hard to upgrade the database server software because the on-disk format of the data changes. So Postgres, for example, between Postgres 8 and 9, the on-disk data format changes. So if you just reinstall the binaries and try to restart the server,
it wouldn't work because it couldn't reach the on-disk data. So you would typically have to do a dump and restore and so upgrade path management becomes harder. Now, with the software collections, what we're trying to do is install them in parallel. So on a single system, I now have Postgres 8 installed, I have Postgres 9 installed.
My data is in a different location. I can do a dump and restore and bring it up on Postgres 9. If that fails, my Postgres 8 install is still there, my data is still there. I can go back into a recovery mode a lot faster. So from an operational perspective, it makes data migration and fallback capabilities a lot simpler.
I'm not sure you guys can see this properly, but this is just some of the sample commands that
the software collections piece brings in. So, scl enable is really the only main action that we have available today. We'll have other actions in the future that sort of go into the disable side, but enable essentially says: change all my paths appropriately such that Perl 5.14 is now enabled.
And what it does is, it will go into your path and append the path where Perl 5.14 lives, along with all the libraries, man paths, Perl docs, and so on.
And if you do nothing else with it, that's it. So, scl enable with Perl 5.14 just changes your current shell environment, and you can now have that Perl available as part of your default. However, if you add an additional command onto the string, all it does is execute that command in that context
and then returns without changing anything in your current shell. So you can look at it in a couple different ways of how do you want to enable this for a startup application or do you want to just change your default for your login shell. A couple different ways to do that.
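Since scl(1) itself is likely not available here, a stand-in shell function can mimic the two modes just described: with a trailing command the environment change is scoped to that one command, and with none it sticks in the current shell. The perl514 name and /opt/rh prefix mirror the talk's examples; this is a sketch of the semantics, not the real tool.

```shell
# Stand-in for "scl enable": not the real scl-utils, just its two modes.
scl_enable() {
    col_bin="/opt/rh/$1/root/usr/bin"
    shift
    if [ "$#" -gt 0 ]; then
        # Mode 1: run one command with the collection on PATH, then
        # return; the caller's environment is left untouched.
        PATH="$col_bin:$PATH" "$@"
    else
        # Mode 2: enable the collection for the current shell session.
        export PATH="$col_bin:$PATH"
    fi
}

before=$PATH
scl_enable perl514 sh -c 'echo "inside: ${PATH%%:*}"'
[ "$PATH" = "$before" ] && echo "caller PATH unchanged"
```

The first call prints the collection's bin directory as the head of PATH inside the child command, while the caller's PATH is untouched afterward.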
If you're running SELinux in the Fedora and RHEL space, you can do the normal SELinux stuff with the software collections. It's no different. And if you look at, you know, sort of the paths and the defaults of where these software collections look for their stuff,
the bottom sort of shows where the paths for Perl 5.14, in this particular example, end up, right? So if you're installing additional CPAN modules, they'll end up in those particular directories rather than somewhere else.
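What the enable scriptlet behind this does is roughly the following; the collection root is illustrative, modeled on the /opt/rh layout from earlier, and the variable list is a plausible subset, not the exact scripts shipped with scl-utils.

```shell
# Sketch of an "enable" scriptlet: prepend the collection's own
# directories to each search path. Sourced (". enable") so the exports
# affect the calling shell.
SCL_ROOT=/opt/rh/perl514/root

export PATH="$SCL_ROOT/usr/bin${PATH:+:$PATH}"
export LD_LIBRARY_PATH="$SCL_ROOT/usr/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export MANPATH="$SCL_ROOT/usr/share/man:${MANPATH:-}"
# Language-specific paths, e.g. where extra CPAN modules would land:
export PERL5LIB="$SCL_ROOT/usr/lib64/perl5/vendor_perl${PERL5LIB:+:$PERL5LIB}"
```

Because everything is prepended, the collection's binaries and libraries win over the system ones only while the collection is enabled.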
So as part of this effort, what we saw was, A, we need a mechanism to make the package management and package maintainer process a little bit easier. Creating RPM spec files is painful enough for a lot of developers.
To create an RPM spec file that talks about software collections is a second step that we have to do. So what we started looking at was at least start to create a framework for helper applications. So we have some conversion utilities that look at how do you take a spec file and transition it into a software collection,
adding additional macros and pulling those in. So there is a spec2scl that takes a standard spec file and will modify it into a software collection spec file. If you're running Python or Ruby programs, since they have their own packaging mechanisms, of course, you need to sort of take them from those packaging mechanisms into RPM in the first place.
So there are some helper applications that do that as well. So once you have it in RPM, you can sort of chain-load this and just run it from gem2rpm and then pass it on to spec2scl, and sort of have a Ruby gem inside a software collection.
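The chain just described looks roughly like this as a dry run; gem2rpm and spec2scl are real tools, but the file names and invocation details here are assumptions, so each command is printed rather than executed.

```shell
# Dry-run sketch of the gem -> RPM -> software collection chain.
run() { printf '+ %s\n' "$*"; }

run gem2rpm mygem-1.0.gem     # Ruby gem -> conventional RPM spec
run spec2scl mygem.spec       # rewrite the spec with SCL macros
run rpmbuild -ba mygem.spec   # build it as a collection package
```

The point of the chain is that once something is expressed as a standard spec file, converting it into a collection is a mechanical rewrite rather than hand work.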
Questions while I flip through some slides? So the biggest question that I wanted to sort of bring up with this talk was when we look at applications, right,
where do the applications sit and how they are managed makes a huge difference. So typically inside the distro we have apps. We've had, you know, the LAMP stack inside a distro for the longest time. We see additional applications show up as the distros tend to expand their footprint.
But the distros will never cover the whole open source app space because it's just too large and to try and sink the whole open source app space into one big distro that is all linked together and working in lockstep is sort of a never-ending effort. So you'll always have apps that run on top of the distro.
So how do we get those apps that run on top of the distro, that aren't included inside the distro, to be much more effective and easily available, such that they fall into the normal package management schemes? Because then it becomes easier to manage, right? You're not doing a tarball install and things are not falling through the cracks.
So that's really been sort of the basic question that we've been trying to answer with software collections. And the question here is, you know, how do we take this to the next level and figure out what the issues are that we're missing.
And the biggest question that I see today, you know, from the Red Hat and RHEL space, and we're still struggling with this in Fedora, is looking at what is the role of the distro
in defining what runs on top of the distro. The distros have been very good at defining what runs in the distro because that's where the control is and that's where the mechanism is. But the relationship of the distro to what runs on top of the distro, what are the guidelines that we provide to outsiders is just as important.
And how do we provide those and how do we make those known and is it consistent? You know, it doesn't have to be the same for every package management scheme but the fundamentals of the recommendations should be consistent across the distros.
Just makes it easier. scl-utils today are dependent on RPM because they look at those RPM macros. But conceptually, they don't have to be, right?
We defined the RPM macros because that's what we were using at Red Hat. But you can define other macros. If you look at environment modules as a historical reference, environment modules were built at a time when there was no package management; people just had tarballs. So all an environment module does is depend on a little bash script
that sets up the environment module, right? So we sort of do the same thing in scriptlets as part of software collections and you can do that. You don't have to worry about RPM at all. Yep, there's a mic.
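For reference, the kind of environment scriptlet being described here is tiny. A minimal sketch, assuming an illustrative collection installed under /opt (the vendor and collection names are made up, not from the talk):

```shell
# enable: prepend a collection's own paths, leaving the base OS untouched.
SCLROOT=/opt/myorg/mycollection/root
export PATH="$SCLROOT/usr/bin${PATH:+:$PATH}"
export LD_LIBRARY_PATH="$SCLROOT/usr/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export MANPATH="$SCLROOT/usr/share/man:$MANPATH"
# Users would source this, or have `scl enable mycollection <cmd>` do it.
```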
Other distributions like Ubuntu or Debian?
Interest from other distributions. So I actually haven't had a lot of conversations with them. In the RPM space, we've had conversations with the SUSE folks a little bit. But I don't think we've had a lot of conversations with Debian.
I'm here. That's part of this presentation. Let's try to open up the conversation. What if the distro is some non-Linux system? Like AIX, for example, or HP-UX, Solaris, Windows? Yep.
Sure, you can do that. So the SEP example that I had earlier that Tobias was working on, that was actually a cross distro model where he was working on a tool that would work on Solaris and Debian at the same time. And it was a functional system that they were using. So software collections, the way we built it today in Fedora, doesn't do cross OS.
But if there's interest to build this into a cross OS tool, absolutely. And one more. What if you don't have root access on the target machine? That's the next one. So today you do need root access.
That depends on the system administrator. We need package management tools to change a little bit to allow end users to install and manage their own packages, right? Today, package managers depend on root access, and there is a single RPM database on RPM systems, right?
So there's no way to do multi-user. If you think of sort of NFS shares as one example, people install applications on NFS shares so that the same application can be shared across multiple systems. It's hard to do packaging and package verification on NFS shares because you can
only do it on the one system that you use as the target for installation and verification. If you look at the share from other systems, you can't do package verification because their RPM databases are inconsistent with the rest of the file system. If we take the packaging tools that we have today and make them available both for systems with root privileges and for unprivileged users, such that there is a separate per-user RPM database in a different location, then it becomes much simpler to get to where users can install and manage their own packages.
But the packaging systems aren't there today. That's not fully true, I think. I know of one packaging system that is already able to manage, let's say, prefixed installations.
Okay, which one? It is Gentoo Portage. Okay, without root access? Without root access, yeah. The subproject in Gentoo is called Gentoo Prefix. Okay. And in my company, I did have the very same issues you described in the beginning,
but we have to provide our application on any UNIX system, as well as on Windows. I try to stay away from Windows.
But it's a fair question. I mean, yeah, the problem exists, right? So the question is, how can we solve it? And yeah, I'll take a look at Gentoo Prefix. I wasn't aware of that, so that's a good pointer. Thanks. By the way, RPM can install with its own database as a user.
You can just tell it that you're using a different prefix, tell it to put the RPM database somewhere, and install software under this root. You can do that with RPM currently. Yeah, I mean, you just have to remember to use the same flags every time you do a yum, right? Exactly.
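A minimal sketch of that per-user pattern: the flags are real rpm options, the paths are examples, and the rpm calls are guarded so the sketch degrades gracefully on a machine without rpm.

```shell
# Keep a private RPM database under $HOME instead of /var/lib/rpm.
RPMDB="$HOME/.local/rpmdb"
mkdir -p "$RPMDB"
if command -v rpm >/dev/null 2>&1; then
    rpm --dbpath "$RPMDB" --initdb        # create the private database
    rpm --dbpath "$RPMDB" -qa             # query only the private database
fi
# The pain point raised above: every rpm/yum invocation must repeat --dbpath.
```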
Yeah, so it'd be nice if that were in a config file so you don't have to repeat it. Yeah, it seems to me that for an independent vendor who's wanting to target multiple distributions, you're still talking about doing a lot of work for them in order to package it in RPM, package it in DEB, or package it in whatever packaging format there is.
Have you looked at options that might actually sit almost above those layers: distribute software collections, maybe as a sort of mountable loopback image, and then change the distributions, the systems, and the DEs to be able to recognize when that is automatically loopback-mounted and execute what's inside? Has that been something that's been explored? We have looked at it. In terms of somebody that does it today, Bitnami is probably the one that comes to mind. Bitnami does their own installer; it sort of ignores RPMs and DEBs and does its own thing.
It's always feasible in that you could always install a secondary package management tool that the system doesn't know anything about. And if you install into the init system, the init system will still look at the directory and find its init file, so that shouldn't change anything there.
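The loopback idea from the question can be sketched as follows. The file names and mount point are hypothetical; only the image creation runs unprivileged, so the format/mount steps are shown as comments.

```shell
# Vendor ships the whole app as a single filesystem image.
dd if=/dev/zero of=myapp.img bs=1M count=8 status=none   # empty 8 MiB image
# mkfs.ext4 -q myapp.img                      # format it (vendor build step)
# sudo mount -o loop,ro myapp.img /mnt/myapp  # what the DE would automate
# /mnt/myapp/bin/myapp                        # run the app from the image
# sudo umount /mnt/myapp
```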
In terms of introducing secondary package managers, it's always interesting, and it especially helps your case because you are the application developer and you want the least amount of work. I haven't seen a lot of what I would call system administrator pick-up for multiple package management systems.
They tend to have enough problems with one package management system; doing two on the same system becomes harder. But if you could get, you know, your customers to buy into that, hey.
What do you think about the Nix package manager? Nix? I am clueless. Clueless? Yes. Okay. It addresses the same problem space, but I think Nix is more flexible than SCL.
Okay. Give me a link and I'll take a look at it. Okay. Any more questions?
Is there a question there, or? Oh, okay. So, yeah, it seems like some of you have seen this problem a couple of times and are trying out different solutions to it.
The biggest question that I have is, you know, how do distros provide guidance to software developers and application developers as to how they should package for the distro? Because nothing that I see out there actually talks about that problem. Everything looks at how you get something into the distro. Okay. Thanks.