
How to be a good upstream


Formal Metadata

Title
How to be a good upstream
Number of Parts
97
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
How upstreams run their projects determines how easy the projects are for packagers to package. The aim of this talk is to introduce what kinds of things should be taken into account in order to make the life of distributions easier.
Transcript: English(auto-generated)
The title of this talk is "How to be a good upstream"; it could as well have been labeled "How to be a good open-source project in terms of distributions". Okay, about myself: this is my IRC nick, I'm a Gentoo developer, a member of the
Gentoo Council and, among other things, lead of the Java and recruiting projects since something like '07. I've done a good share of packaging for Java projects and traditional autotools-based projects myself.
The goal here today is to give you a couple of models. Let's start with how you don't want to be: here's the ivory tower, and we want to move from the tower to a more community-based model where you get input from others and are better at finding out what's going on.
How many of you here are involved in releasing open-source projects or involved in having something to release? Okay, that's good, it applies to you then.
How, generally, for people who don't know, does stuff in Linux distributions end up on end-user machines? First we have a code monkey coding at home, and then they commit it, hopefully to a version control system, and upstream makes a release.
At this point they hopefully do some quality assurance, or at least some kind of check that it compiles. And then the distribution packager is able to package it, and they have their own rules for what's stable and what's not. What the upstream hackers themselves consider stable is
maybe not something that Debian main considers stable on the day of the release. And in the end it ends up with a user running a Linux distribution.
What does a packager do? Most of the time, you have a release to start with. Making distribution packages out of trunk or head or whatever is not really something they strive for.
So hopefully you have a release to start with. And you try building it, if it works, you commit it and you usually have a packaging script from the last version to base your work on. Of course, if you're a good upstream, it's an instant success and everyone's happy and end-users get the latest version.
If it's not, it doesn't build, it has problems, then you start to iterate and you need to modify the build script. If you have, for example, switched your build system from autotools to CMake, it might mean a considerable amount of work for a distribution packager instead of just bumping the version.
So what should your goal be? If you want your stuff in distributions, the easier it is for the packagers, the more likely it is to end up on end-user systems.
And of course, if it doesn't build, it takes a while. And if it needs patches, it takes even more time to get into end-user systems. So it's a worthy goal if you want someone to actually use your code. Of course, if you're fine with it only sitting in your git repository on GitHub, then that's your choice.
How to get your project packaged? You don't need to be writing the build scripts yourselves. Most distributions have some kind of a bug tracking system which, at least for Gentoo, doubles as a feature-request tracker and whatever.
It's a general problem database, not just for bugs. You can just go there, type in the URL of your project and say "I would like this packaged". Of course, that guarantees you nothing; someone has to actually do the work. But if people find your code useful, then someone probably wants to use it.
And a developer who wants to use it usually packages it too. And if it's a weird science project for calculating something in a nuclear reactor, then perhaps there's no one interested. Okay, so about distro practices.
Well-behaving autotools. You can use autotools in many ways, but let's say you use it in a standard way: the configure script works and there's no voodoo involved which you would have to account for in the packaging script.
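As a minimal sketch of what "standard" means from the packager's side, here is the staged-install convention most packaging scripts rely on. Everything below is illustrative: a one-rule Makefile stands in for what a real autotools setup would generate, and the important part is that `make install` honors `DESTDIR`.

```shell
set -e
workdir=$(mktemp -d); cd "$workdir"
printf '#!/bin/sh\necho hello\n' > hello.sh
# A stand-in for a generated Makefile: the install rule must respect DESTDIR
# so the packager can stage files into an image directory, not the live system.
printf 'install:\n\tinstall -D -m 755 hello.sh $(DESTDIR)/usr/bin/hello\n' > Makefile
make DESTDIR="$workdir/image" install
test -x "$workdir/image/usr/bin/hello" && echo 'staged install OK'
```

A real autotools project gets this behavior for free from automake; the point is simply not to break it with hand-written install rules.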
Most distributions have made packages for these kinds of projects so many times that they know what they're doing and it's quite straightforward. And it's not just autotools: the same applies to everything common. If the distribution is doing Java at all, they know Ant quite well.
Or CMake, if they're packaging KDE at all. Okay, so we get to the actual tarball section this time. When you release something, the distribution stores checksums, like SHA or MD5, though probably not MD5 that much anymore these days.
But if you see there's a problem with your release, you never ever change your released tarball, because the hash doesn't match anymore. For example, in the case of Gentoo, the user gets the same tarball on their system as what you released to your FTP servers.
So they download it directly from FTP, or of course through our mirroring system, but if you change the file, then they get a failure and you get angry users because they can't build it anymore.
For primarily binary distributions, like Debian or Fedora and stuff like that: if the file name doesn't change, then the maintainer will probably not notice that there's a new release out or that something's changed, and the change doesn't propagate to the distribution at all. At least in the case of Gentoo it breaks the checksum, but yeah, that's a good point.
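A quick illustration of why respinning a tarball under the same name breaks users; the file here is just a stand-in for a real release:

```shell
set -e
workdir=$(mktemp -d); cd "$workdir"
echo 'release contents' > foo-1.0.tar.gz            # stand-in for a real tarball
sha256sum foo-1.0.tar.gz > foo-1.0.tar.gz.sha256    # what the distribution records
sha256sum -c foo-1.0.tar.gz.sha256                  # passes while the file is unchanged
echo 'silently respun release' > foo-1.0.tar.gz     # upstream replaces the file in place
sha256sum -c foo-1.0.tar.gz.sha256 || echo 'mismatch: every user build now fails'
```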
For binary distributions, the user never directly downloads the tarball upstream releases. The second bullet point is that if you are releasing a tarball, many projects have different build settings for maintainer builds, developer builds or release builds.
You should have your settings in such a way that when you go and type ./configure; make, it doesn't build debug binaries or anything like that by default.
The end users, at least on Gentoo, usually want optimized binaries, not debug ones. So we get to the next point: C flags. Don't use -Werror, because a new GCC release usually adds a lot more warnings, and without any change to the build script your software probably doesn't build anymore.
Getting a new GCC release into Gentoo takes quite a while before everything is migrated, and without a lot of work it would never happen. And some people think that users don't know what they are doing with C flags, but they do have a lot of legitimate uses.
For example, if we are not talking about x86, you might want to control the bitness of your binaries, like 32 bits versus 64 bits and things like that. So these things are actually needed: you need a way to pass in flags to compilers to be able to get it to work at all.
And of course, you want to actually document what you are doing in changelogs. Reading, for example, git logs of 1000 commits to find out that there is a security fix in commit 563 is probably not something that gets noticed.
But if you have a changelog saying there is a security fix here, someone might actually notice and do the appropriate things needed to get security releases, faster stabilization and things like that.
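As a hypothetical example of the kind of changelog entry that lets packagers react, every name, date and detail below is invented for illustration:

```
foo 1.2.4 (2009-02-08)
  * SECURITY: fix a crash on malformed input in the config parser;
    all users should upgrade.
  * Build fix for newer zlib.
```

One line flagged SECURITY is often enough for a distribution to fast-track a stabilization.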
When to release. Release early, release often. Hopefully, a lot of you have heard this phrase before. If you release often, it means that end users get your code faster and you get more testers.
When you get more testers, you get more bugs, you improve your code, and the cycle continues. Release early: well, the same thing applies. And in the distributions, even if you release often, not all the versions end up everywhere; in Debian it goes down from, let's say, unstable to testing and so on.
Not all of the releases will end up in main, or in Gentoo's stable tree; there are stages in between. But they do still have a considerable user base to find out which of those versions work well. So even if you release often, you don't necessarily have to make sure that they are all of the best possible quality.
Because there are people testing in between before it ends up in Ubuntu 9.10, hopefully.
Yeah, true. But it's a trade-off: the faster you put it in stable, the more likely there are to be bugs. And of course, the maintainer has to care in the first place. If you want something that's guaranteed and certifiable, enterprise distributions are probably much slower-moving targets.
So it has been tested for a couple of years before release. But of course, whether the software is there at all is another question. So, dependencies, that's the next issue. What distributions actually do is they handle dependencies for you.
So it's an important part from a packager's perspective: if you have optional dependencies, we want to be able to configure and disable them. If you have a database abstraction layer,
not all users want to install MySQL, Postgres and Firebird and things like that as part of the dependency graph, if all they want to do is use a music player which has a library to store their albums in. Don't bundle anything, like having a copy of zlib's source code in your repository.
Well, zlib has had quite a few security problems lately. So if you have a static copy, you have to remember to update it and follow the upstream releases.
And it's not even your job in the first place; the distributions are supposed to be handling that, so you are just doubling the work. Link dependencies dynamically. If you do static linking for everything, it's still the same security problem,
and people probably don't like the huge size of the binaries on their systems. Using libtool can help you with shared libraries in autotools, if you want to go that way.
Versioning. Don't reinvent the wheel: there are quite a few guidelines out there. And once you have picked something, don't change it just because you think something else is better. Beta is newer than alpha; 0.9 is newer than 0.8.9. Basic rules.
And usually there should be some kind of maturity relation between the versions. If you release 1.0.1 and the next version is 10.0, well, it's kind of hard to figure out where your project is going.
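The ordering rules above are essentially what GNU version sort implements, which makes it a handy sanity check for a release numbering scheme:

```shell
# components are compared numerically, not lexicographically,
# so 0.9 is newer than 0.8.9 and 0.10 is newer than 0.9
printf '0.8.9\n0.10\n0.9\n' | sort -V
# prints:
# 0.8.9
# 0.9
# 0.10
```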
And some of you might recognize the reference to 4.0, namely one desktop environment release a while ago. Even if you try to communicate, people have some perceptions of how release numbers work.
And you can't just work around them with PR, which half the people don't read anyway. Build systems. I totally agree on the SCons bit, thank you.
So, in the beginning when you start coding, writing your makefile by hand seems easy. When you have one file, it might actually work. But when you're releasing something a little more complex, you might want to use something more sophisticated.
Like autotools, or whatever is standard for the language your project is written in. If you're a distribution packager, it's not all C and C++; there are a lot of other languages, which is just fine. But if you're writing a Ruby gem, I wouldn't be using Ant.
Or, how many of you know what Ant is? Yeah, okay, most of you? Okay, good. Think about the developer dependencies: on a source-based distribution, if you write your build in Ant, it means users need to install Java to build it.
So to build your Ruby gem, they would need to install Java, which users don't like. They pretty much like to keep a minimal system instead of installing half of the possible compilers they can think of to install something they want to use.
They don't really care how it's built, as long as it installs, installs fast, and with as few dependencies as possible. Build systems aside, then there are tarballs. When you make a release: I've seen plenty of releases in the Java world where you have a zip file, usually, because of Windows, so you make a zip.
And inside it you can have bundled dependencies and version control directories. So you have like 20 megabytes for a release; when you go and strip out all the stuff, there's maybe one megabyte left.
So the file gets released, mirrored everywhere, mirrored on distributions. It multiplies very fast and useless space gets used. I'm looking at you, Mozilla. Mozilla Firefox does this.
And all the other Mozilla projects. It includes CVS directories, it doesn't version the directory inside the tarball, it doesn't look like a release. It just took the CVS checkout and packaged it.
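Assuming the project lives in git, one way to produce a clean tarball, with a versioned top-level directory and no VCS metadata inside, is `git archive`; the project name and tag here are made up:

```shell
set -e
workdir=$(mktemp -d); cd "$workdir"
git init -q project && cd project
git config user.email dev@example.com
git config user.name Dev
echo 'int main(void){return 0;}' > main.c
git add main.c && git commit -qm 'release 1.2.3'
git tag v1.2.3
# everything unpacks under foo-1.2.3/, and no .git or CVS directories are included
git archive --format=tar.gz --prefix=foo-1.2.3/ -o foo-1.2.3.tar.gz v1.2.3
tar tzf foo-1.2.3.tar.gz
```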
That's very, very annoying. Yeah, if you're building it manually, it's very annoying. For Gentoo I hadn't thought about it, because you have a temporary directory in the first place where you're doing all the work.
So it doesn't come up. But yeah, that's annoying too. Yeah, you do it in a home directory or something like that, I guess. So what does unpacking consistently mean?
Well, if the first version unpacks to foo-bar hyphen version, and the next version unpacks to foo-bar underscore version, it doesn't build the first time. Actually, with SQLite and their documentation tarballs, I think they had two scripts for making them; so every other release it was underscore, and every other release it was hyphen.
So it kept breaking for me, which I didn't like that much. But these days it seems to be fixed. If you... Yeah. I want to repeat it for the...
I said, a lesson for downstreams: when that happens to you, engage with your upstream and tell them what's wrong. Yeah, it's coming later in the show. Stable URLs. If you release something, people go and download it.
When the next version comes out, you want to keep the old one around at first. And if you have an archive site, you move releases there, not right after the next release, but after that. Because, for example, if you're packaging something and you notice you're a couple of versions behind, you take the latest release.
And, well, it doesn't build, there are bugs or whatever. You might want to actually try to build the release before it. And if you can't find it, it gets a little bit tricky. And it's extra work; we're all slackers and we want to avoid work.
Horrors from the Java world. In general, distributions don't like build systems that download dependencies automatically, install to home directories, and so on. Oh, yeah. But that's from my world. You're solving that problem, aren't you?
And when you release, you don't want to release only binaries without actual source code releases. In the Java world, it's quite common to release only binaries, as a precompiled jar file.
So you want to make sure your source is actually available. Releasing only git tags on GitHub and things like that doesn't really work; release actual files. So, project management. As a downstream project, we want you to be available so we can actually relay things back to you.
If you make release announcements, a good thing is a release-only mailing list. If it's done on a CVS commit mailing list where there are like ten emails a day,
it's kind of hard to pick up; but if you have an announcement-only mailing list, the packagers can subscribe to that and actually follow it more easily. They might not be interested in your day-to-day work, but when you make a release, they're very interested. And make it easy to file bugs; Mantis sucks, and things like that.
So, bug flow. We do act as a filter between users and upstream bug trackers. So you might not see everything, but we do try to file things back upstream when it makes sense.
Hopefully. For end users, it's kind of hard to know where every bug tracker is, so we just try to encourage them to file everything in our tracker. I don't know how other distributions do it; does anyone here know? Debian and Ubuntu?
On Launchpad, which is the Ubuntu bug tracker, we just link bugs to upstream. Whenever a bug comes in, we try to see if there's an upstream bug available; if not, we file one.
It's probably the same for Debian as well, so I don't think there's much difference. If I may chip in on Debian: in Debian it usually really depends on the packager. There are packagers that prefer that you file it yourself upstream,
and there are many that forward bugs that are relevant. It really depends a bit. But we have some upstream bug tracker tooling that will compare our bugs against the upstream bugs. But mainly it's a manual process. Yeah, same in Launchpad.
Speaking from an upstream perspective: we get the bugs. Users may report them to your tracker, but other users will report them to us, and we get them. Sometimes we get them when they are bugs in the distribution rather than in our product, too.
Yeah, that's true. It's kind of hard for users sometimes to figure out where to file things. Even if you have automated tools, people will still ignore them. So it's very hard to make a bulletproof process for that. If anyone has ideas, feel free to propose.
So, what did I forget? Hopefully we have here a lot of people who are doing packaging or upstream work. So it's a good time to give suggestions and get a discussion going on more ideas.
One thing I really absolutely hate is for example SourceForge, because when you try to download a file, you usually get a 301 permanently moved, then you get a 302 temporarily moved, and then you get a 200 with the file. But even if the file is not found, you get a reply which is a stupid index.html,
which means that I try to unpack it and think, oh wait, that's not even an archive. And then I'm quite upset because I have to find the file. The only one that's worse is BerliOS.
And I think no project should host anything there. How does Launchpad help? It's better than SourceForge or BerliOS.
I'd like to gather opinions: how do other people manage it when you have a really popular project that produces a lot of bugs reported to the distribution bug tracker,
but the packager is not able to handle the bug flow, and time passes and you know that most of the bugs in your distro bug tracker are probably fixed in a new release, but you just don't know? What do people think is acceptable?
Is it acceptable to just close the bug and say: dear user, please try a new version? What guidelines do people have for that? Well, at least we have a resolution status of test-request. If the user is actually active, they will probably test it and find out whether it works.
And if the bug manifests itself again, someone will probably file a new bug. Yeah, that works too. It's a question for the individual maintainer, what he wants to do. If it's a very old one, test-request is probably fine, but if it was filed in the last month, maybe leaving it open is more prudent;
but at least in the Gentoo bug tracker, there are open bugs from years and years past and no one has looked at them. Yeah, I would like to make a comment, a little remark about SourceForge.
There's one URL scheme that works pretty well, and if the file is missing you get a plain 404 without any content: it's downloads.sourceforge.net, slash, then the Unix account name of the project, and then the tarball. So maybe use this.
Good tips. Yes. How to be a good upstream.
I'd also like to make some comments about how to be a good downstream. Yeah, feel free. So, I'm FFmpeg upstream and he's the Debian and Ubuntu packager, so we know about this first hand. A few comments I'd like to make: packagers should not patch packages, but instead send patches upstream directly
and tell the upstream about their problems, because sometimes I go hunting through distribution packages and see how they patch FFmpeg or MPlayer or whatever, and of course, they all do. Can you speak up?
I don't know if it's on. They are on, but the problem is that, oh no, that one isn't on. I gave you mine, sorry. Yeah.
Okay, I'm trying to speak up. Is that better? Yes, okay. Please don't patch the packages. Instead, talk with your upstream and forward the patches there, or have it fixed there in a clean way.
If I may comment on that: it all works very nicely in theory, but in practice, sometimes as a distribution maintainer you will find that you have a patch that makes a lot of sense for your distribution but doesn't necessarily make sense for the upstream, or that the upstream doesn't agree with,
because for instance you're trying to get everything to look the same in your distribution and that means you need to patch this particular bit so that it looks the same as all the other packages that do the same thing, but your upstream doesn't like that, so you need to carry a patch to do that. In theory, I agree that yes, most patches should be sent upstream, but there are patches that just don't belong there, and there are cases where it doesn't make sense to do that.
I know that there are some patches that will stay local, but my experience is that most of the patches should not stay local. For FFmpeg, when Reinhard took over and we finally had a good interaction, because before that the packager was just not very active,
there were a dozen patches; I reviewed them, one was applied and stayed, and the other ten we threw away. I'm not saying it's going well in all cases, I'm just saying... If I can add something: I'm a Gentoo developer, and I've been upstream for a while,
and sometimes you have patches that are not easy to merge upstream. When I first took over xine, I had a 14 kilobyte patch that just went to fix problems with dependencies and with libraries internally, and upstream rejected it.
That didn't mean that I had to keep it local to Gentoo. I was actually able to just take it, make it optional, leave the default behavior just what upstream wants, and then add a simple switch to move to what we need. From what I can see in Gentoo, almost all the patches
just need to have a conditional part that can be enabled only when it is needed by the distribution. And it would be an upstream problem if such a patch weren't merged even though it's just conditional, or even just something that the distribution enables. As long as the default is what upstream wants.
I think I should clarify that I'm not saying it's always a good thing that there are patches. I agree: in many cases, patches that are kept in distributions shouldn't be there and should be sent upstream. There are definitely cases where it's going wrong,
and I totally agree there. I'm just saying that in some cases merging patches upstream just doesn't work, because there are legitimate patches that do need to be kept in the distribution. That's all I'm saying; I'm not saying that you're not right. Certainly there are cases, and indeed the case of mpack and MPlayer
is a good example where there have been many patches in distributions that really didn't belong there. But yeah, it's sometimes a bit of an edge case: do we send this upstream, do we keep this locally? I think in many cases it's not being done right currently.
And yeah, definitely there's room for improvement. I have to say, in my five years working on Gentoo, the only two patches I couldn't get upstream were, first, one for Avidemux, which I actually wrote twice. It was a 40-kilobyte patch. Upstream saw it
and took just a couple of lines of it, changed it completely; I rewrote the patch, they changed it again, and upstream still doesn't care: it still bundles FFmpeg, still patches FFmpeg, over and over. And the other is related to the Mono project.
Gentoo, and I think Fedora as well, install Mono in lib64 on 64-bit systems, and Novell actively refused to even make it conditional to install it there. But that's a very, very extreme edge case.
Two small remarks on that. First, the issue is different when you are dealing with a temporary distribution patch: you fix something so that, for example, a package builds correctly, you submit the patch upstream,
and then it sits there for months without any activity. So in that case, I think distribution patches are needed. And one other thing: I'm downstream for a BSD project, MirBSD, which is a relatively exotic platform,
and sometimes when I submit patches to fix the build on our systems, I get told by upstream: your platform is irrelevant, go away. What should I do then? Just one more remark on that: I was just trying to say that the default should be don't patch.
Of course, if the upstream is absent or inactive, then you have to patch. But you should by all means try not to patch, also because I saw all these people making the same patches, doing the same work, when instead they could have told me what they needed. For example, Reinhard made some patch; I looked at it and said,
oh, wait, and half an hour later I had fixed it in a better way. It was really just a matter of giving him what he needed. The same for signal plus [?], when we were bundling it: half an hour later it was done. Well, I agree that it would ideally be that
the downstreams actually contact upstream. But I think it's not realistic to require us to wait until you apply it and make a new release: you made a release, and then we, as in Gentoo, have to release it, and we have to have it fixed. No, I'm not saying that. It looked to me like you said that we should not apply patches
and should wait until you did it, and I think that's not workable. Of course we should cooperate, but I think we will have to have our local patches before you release them. Well, it depends on the kind of patch, but if upstream rejects it or doesn't want it,
then maybe the distribution should also not try to do it. Okay, but we do need certain patches. Also, the upstreams know the packages much better and can usually come up with better solutions, and quicker. I think the great lesson has to be: communicate.
While we're having this discussion, I have to mention the famous case of Debian and OpenSSL and all those weak keys. It needed more communication.
I'm best known for my role in Apache. We have an issue with Debian there, because Debian repackages Apache in such a way that it's virtually unrecognizable. All the Debian users come to us for support, because our documentation doesn't correspond to what they get. And of course the same goes for Debian derivatives, Ubuntu, etc.
What I keep saying we need from them is to engage with us so we can come up with something that works for both us and them. Yeah, I didn't understand what you said.
Maybe I'm just saying the same thing. One thing for upstreams: just try to listen, try to cooperate, and don't think your distribution or your system is the only one. I really often had it with upstreams that said: oh, Gentoo is broken anyway, things like that.
Or: who cares about your strange linker flags, things like that. If someone sends you a patch, try to really understand what their problem is. Like with the guy from MirBSD: it's only good for your project if it runs on MirBSD
or any other unusual system. So cooperate with all the downstreams, not just the ones from your favorite distribution. Just one little remark there: I've run into some upstreams that say, no, I don't want you to package it.
They literally said that: no, don't package that. But doesn't that violate the GPL? No, he said, do not package it whatsoever. There's a question over there.
Same dumb question as before, I'm afraid: what's the best way to announce new packages? Announce new packages... Well, I do know Freshmeat is followed by some people, but the best way is to just file bugs
with each distribution tracker in which you want it included. And if it gets popular enough, the minor distributions will most likely follow. I don't have any better way in mind, but does Diego? I've had a bit of a headache because of release notifications before. Honestly, I think the best way to announce a new package
is to have a project management system that automatically notifies the interested parties that a new file has been released. SourceForge used to have a very nice mail notification system, probably one of the few nice things it made available to projects.
And they actually broke it lately, because now they just have an RSS feed of all files. And you still get sites like Launchpad, which still has no way to subscribe to new releases of a package and relies on the upstream
to actually send out an announcement. The best way is to make the announcement part of the release procedure: maybe an upload to the project site or, for a RubyGem, uploading to Gemcutter nowadays.
Gemcutter has a configurable feed that lets you see when a new gem is released, and it's part of the release system. Having an external announcement relies on developers remembering to do something extra. So anything that can be automated,
maybe mail, or Freshmeat (Freshmeat used to have an API to update its data automatically), and integrated systems like Bardo [?] should send the mail when the release is done. It's the fastest and cheapest way to have release notifications
pushed to the packagers. Not just announcing new releases: announcing new packages too, new packages from scratch. The simplest way is to contact some downstream developer, or make a little bit of a fuss around it. For instance, I can take
one I added lately, Gource (not sure if that's the right way to pronounce it). It was mentioned quite a few times on the planets last month: a source-code-management visualization tool. I read about it, I packaged it.
And that's the simplest way: you can make a fuss about it. Yeah. The point about licenses, and the complaint that "oh, you're not allowed to package this",
reminds me, and that's a pretty extreme case, that upstream developers need to choose their license carefully and make sure they understand what it means, especially in terms of what they can depend on:
what libraries, what other licenses are compatible with it. And make sure they label their code with the license properly. I actually just had one of my packages kicked out of Debian because it depended on a Perl module
which unfortunately had been left with a boilerplate license statement and didn't actually state the correct copyright and license. Upstream presumably meant to release it under GPL plus Artistic, but we don't have that clearly stated.
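To illustrate the kind of explicit statement that was missing in that Perl module, the license can be spelled out in the module's POD documentation; the author name, address, and year below are placeholders, not details from the actual package:

```perl
# POD section at the end of a hypothetical module Foo/Bar.pm,
# stating copyright and the usual Perl dual license explicitly.
=head1 COPYRIGHT AND LICENSE

Copyright (C) 2010 by A. U. Thor <author@example.org>

This library is free software; you can redistribute it and/or modify
it under the same terms as Perl itself, that is, under the terms of
either the GNU General Public License or the Artistic License.

=cut
```

A statement like this, rather than an unfilled boilerplate, is exactly what distribution license reviewers need to see.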
Yeah, you do have a point in the earlier comment that announcing new packages to distributions isn't optimal at the moment; you need to do manual work. But if it's something many people want to use, then probably the users themselves will file bugs in their distributions' trackers
saying: I want to use this and that. It's already been said, but upstream developers, please use Freshmeat, because it allows the downstream people to subscribe to releases, and you really do get mail when new releases are out. So it's very useful; please fill it in.
As for the patches discussion a few moments ago: in Debian there is a new development, metadata for patch files. This creates a common vocabulary
for how to write down the status of a patch: has it been discussed somewhere else, which relevant bug trackers are involved? I think it's a really good idea. It's not that widely deployed in Debian yet, but I do hope it gets extended, and I'd really love to see if we could perhaps agree
on some common vocabulary across distributions, so that we can more easily talk about patches. So, does anyone have more comments, or is it time for dinner?
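The Debian patch metadata mentioned above is the DEP-3 patch-tagging proposal: a block of RFC-822-style fields at the top of each patch file. The patch description, author, and URLs below are invented for illustration:

```
Description: Link against the system copy of the bundled library
 The bundled copy is outdated; this builds against the system-provided
 one instead. Kept as a distribution patch until upstream accepts it.
Author: Jane Packager <jane@example.org>
Forwarded: http://upstream.example.org/tracker/ (illustrative URL)
Last-Update: 2010-02-07
```

The `Forwarded` field in particular records exactly the status discussed here: whether the patch has been sent upstream, and where.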
Seems like it is. Thank you.