
Case study of creating and maintaining an analysis and instrumentation tool based on LLVM: PARCOACH


Formal Metadata

Title: Case study of creating and maintaining an analysis and instrumentation tool based on LLVM: PARCOACH
Number of Parts: 542
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
PARCOACH is a static and dynamic analysis tool for High Performance Computing applications (using MPI and OpenMP, for instance), based on LLVM. Its main purpose is to check that API usage is correct (for instance, that all processes or threads call a barrier, to avoid a deadlock). It is not always possible to detect all errors statically, so the static analysis can be complemented by dynamic instrumentation of the code to perform correctness checks during execution. This (out-of-tree) tool was initially written against LLVM 3.7 and is now based on LLVM 15. As a research project, it has seen a lot of contributions, from PhD students, interns, and researchers, with actually a low number of LLVM-specific engineers working on it until recently. The objective of the talk is to focus on how the project has been using LLVM over the years (and how it has been maintained):
- how the lack of maintenance led to relatively high technical debt;
- how LLVM tools and structure have been used: from manual compilation of the code to "properly" using LLVM's CMake integration, and from analysis code tangled in the transformation code to properly using the "new" analysis and pass managers;
- how the CI/CD evolved to improve the user experience (e.g. Docker-based jobs, automated image and package generation, a docker-compose entry point);
- the weaknesses remaining in the project (as far as LLVM is concerned), where they come from, and the plan to fix them;
- and, obviously, a look at some mistakes that were made (trying to maintain compatibility with several LLVM versions at once, to name just one).
While the content of the talk is not "new" from a scientific point of view, we believe it provides some interesting takeaways for people looking into developing or maintaining out-of-tree LLVM-based software.
Transcript: English(auto-generated)
A tool called PARCOACH, and how they've been using LLVM for a long time and how they've kept it well maintained, I think. Yeah, we'll see. Alright, up to you, Philippe. Thank you very much. So yes, my name is Philippe, I work at Inria in France.
So the talk today is not so much about PARCOACH itself. I'm sorry for the wrong title, but you know, naming is hard, so I had to put something. Today I want to talk to you about my experience with dealing with out-of-tree plugins and tools for LLVM. PARCOACH is just one example, and I will give some others.
So first of all, I will try to explain to you why and for whom am I doing this talk, so that you know the audience. I'm going to talk about three different things. The first one is keeping up with LLVM, so we will see some code, CMake, C++ and stuff. The second point is usability, both from a developer, a tool developer point of view and a user point of view.
And the final point will be dealing with packaging when you're actually targeting some system. So why am I doing this talk? First of all, it's to provide some feedback and maybe provide you with some stuff I wish I knew beforehand, before coming into this.
I'm doing this also because I've done a couple of out-of-tree projects and I've faced the same issues, so maybe you've faced them too and it will be helpful. So it's not so much about the tool PARCOACH itself, it's rather about the approach.
And for whom, it's basically anyone who is involved in an out-of-tree project for LLVM. This is my own point of view on this topic. If you have ideas, comments, improvements that you think may be helpful to me, don't hesitate. I will welcome them.
So PARCOACH is a tool for HPC applications. It's basically an analysis and instrumentation tool for OpenMP and MPI applications. It basically checks that the user is using the APIs appropriately, that there are no deadlocks or data races.
The developers, this is where it gets interesting, because they are not LLVM engineers. They are interns, students, PhD students, researchers. They have a whole job of their own, which is not LLVM. The users of the tool are scientific application developers, so you cannot ask them to compile LLVM from source.
It's not going to work. They are not going to use your tool. And the last part, which is interesting with this project, it started a long time ago, when it was LLVM 3.7. And now it's based on LLVM 15. So there has been a lot of history in the tool. And I'm working on it right now. It's my main job, so they have an LLVM engineer now.
And I can do stuff. I provided the link for reference if you want to take a look. There are two other motivating projects that I can talk about. One, I'm actually not going to talk about much, because it's not free. It's a commercial compiler, which is based on LLVM.
And basically, the developers are LLVM engineers, so we have more flexibility when doing developments. And the users are clients who are paying for the compiler, so we needed to provide something good. And the other point is student LLVM exercises. I do LLVM courses for security students, and I want them to be able to do some code transformation with LLVM.
So the developer is just a friend of mine and me, and the users are students. We are expecting them to code into the project, so we need to make it easy for them to get into. And we have 16 hours to do this project, and they cannot spend two hours installing LLVM.
It's not going to work either. So in all these projects, I encountered pretty much the same issues, so I'm going to talk about them now. And the first one is keeping up with LLVM. So I'm not sure if it was intentional in the schedule, but having a talk previously about CMake and stuff,
it's quite helpful, because I don't have to go too deep into details. You already know them. So let's go back maybe eight years ago. You wanted to do some LLVM tools. There was no CMake integration. The first approach that you had as a developer was either do stuff manually,
maybe using llvm-config to get the flags and the libraries and so on, but basically you had no easy way to integrate with LLVM. It was quite manual. Then came CMake, and you could use the standard add_library and target_link_libraries, but you had to know what to feed these macros with.
And some stuff I've encountered in this project: the people who were developing it were not comfortable with CMake, and they would perform some changes where they would actually do the CMake integration with hard-coded paths in the CMake files, so it would be awkward. I think, at least from the examples we have now, it's way better. Using the LLVM CMake integration simplifies a lot of stuff. You just have to know which components of LLVM you want to use,
how you want to build your library, like is it static or shared, basically. And you have dedicated macros to just construct whatever you want to construct for LLVM. So let's take some example code. You don't have to understand everything, it's just to give an example of how it works. You basically say,
OK, I want to find LLVM, provide a version sometimes, include the LLVM CMake helpers, and include some definitions. And then this is the interesting part, because you can say, OK, I want these components in my tool. Call the CMake helper with your plugin sources, and that's it. I mean, the CMake helper will take care of saying, OK, depending on how LLVM is installed, like, is it just the big dylib or are there individual libraries, it will set up the target_link_libraries appropriately,
and you don't have to think about it. It's just automatic. If you want to do some tools or pass plugins, there are macros to do these, too. So basically, you just have to figure out which kind of build you want, and CMake LLVM will configure everything for you.
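Since the slides are not part of this transcript, here is a rough sketch of the flow being described. The project name, plugin name, source file, version number, and component list are all hypothetical, and the exact calls can vary with the LLVM release:

```cmake
# Hypothetical out-of-tree pass plugin build, assuming an LLVM install
# that exports its CMake package.
cmake_minimum_required(VERSION 3.20)
project(MyParcoachLikePlugin CXX)

find_package(LLVM 15 REQUIRED CONFIG)   # locate an installed LLVM
list(APPEND CMAKE_MODULE_PATH ${LLVM_CMAKE_DIR})
include(AddLLVM)                        # brings in the add_llvm_* macros
add_definitions(${LLVM_DEFINITIONS})
include_directories(${LLVM_INCLUDE_DIRS})

# Declare which LLVM components you use; the macro below then links
# either the monolithic libLLVM or the individual component libraries,
# depending on how LLVM itself was built.
set(LLVM_LINK_COMPONENTS Core Support)
add_llvm_pass_plugin(MyPass MyPass.cpp)
```

With something like this in place, a plain CMake configure-and-build should produce a plugin you can hand to opt, without spelling out the link libraries yourself.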
There are some useful examples. For pass plugins, there is the buy example, which is basically a new pass plugin, very simple. It's kind of a hello world. And LLVM Tutor has some out of three passes to get you started with, and it's actually quite helpful if you are looking into this.
Now, let's talk about some code. So let's say you're new to LLVM, pretty new to C++, you're a student, for instance, and you want to perform some LLVM transformation. So you go on your search engine,
and you look for how to iterate over the instructions of an LLVM function. And pretty much all of the resources, like Stack Overflow or even some presentations, will give you the code on the left. So it's fine. It works. You are iterating over all the instructions of the function. But if you know C++ a bit better,
you know that you can use ranges instead of raw iterators. And if you know the instruction iterators from LLVM, you know that you can use instructions(F) to just get all the instructions of F. All the code works, but arguably the code on the right is easier to read, and in the end, easier to maintain,
especially if you consider that there are a lot of examples like this in the code. It adds up, and so simplifying stuff is nice sometimes. So it's not a problem with Stack Overflow or anything. It's just that the answers on Stack Overflow and on the slides are old, like from 2015.
Like, if you would update the answer, it would just be the option on the right. Another thing I want to talk about, and that I've seen a lot in PARCOACH, is iterating over something with a predicate. Like, I want to iterate through this stuff, but only over the elements for which some predicate is true.
So you can do stuff like that with early continues or nested ifs. But if you know STLExtras from LLVM, you know that you can create a filtered range for any range, actually. So you pass a range, you pass a predicate,
and inside the loop, you just get the objects you're looking for. Again, it's a simple predicate here, so it doesn't matter much as is, but if you add some more stuff, it starts growing, maintenance becomes a bit harder, and readability is impacted too. So this is something to consider.
Now, something more critical is advanced data types. There are a lot of data types in LLVM. And if you are not familiar with LLVM, and I've seen a lot of code like this, you will just use whatever data type is available in the STL, and you will get a std::map, for instance, and use some helper. And the actual issue starts when, in that map, you want to map an instruction to something. If you go through the input and change an instruction, like if you delete it, or if you replace all its uses with some other value, what happens to the instruction in the map?
So with a raw map from the STL, there is no mechanism, so nothing happens, and you end up iterating over, or trying to find, something which is not valid anymore. Whereas if you are aware of the data types from LLVM, you are able to use some kind of ValueMap,
which has specific handles to remove the value, or update the entry, if the value is changed during its life. Some other helpers are quite nice too. I mean, it's not a big deal, but for instance, instead of using std::find_if, you can use llvm::find_if and just pass a range
instead of just the individual iterators. In this case, it's not a big deal, but it's actually quite nice. But basically, for stuff like that, I've encountered a lot of code where you would be able to replace most of the occurrences with the vectors from the LLVM ADT, the advanced data types, or things like ArrayRef or StringRef. There is a lot of stuff in LLVM that you may not be aware of, and that makes your code quite nicer if you use it.
So yeah, dealing with it. So you may think, okay, this guy is just being picky with people who are writing the code. It may be true. I would argue that it depends on actually who makes the contribution, because you cannot expect the same level of contribution from a student or from an LLVM engineer. And especially when you're a PhD student, you have a deadline.
You just want a tool that does something. You're not going to spend time and time on how you do stuff, as long as it works. At least that's my experience dealing with that. But in my opinion, the accumulation of small details matters, and it was very explicit in the case of PARCOACH,
because I came after maybe five or six years, where the accumulation of researchers and PhD students had led to a lot of technical debt. And if some advice had been given to the PhD students or the researchers, the code would have been way nicer to read and maintain. And so, it's quite obvious, but doing code reviews helps a lot. Sometimes you cannot do them, if there is no one able to actually provide some useful feedback.
Like in the case when people don't know LLVM, you cannot expect them to review code and provide a lot of feedback. But what I do now is redirect them every time to the LLVM Programmer's Manual. It's not the first thing you do; usually you just go to a search engine and search for what you want. But I would argue that actually reading
the Programmer's Manual is more helpful, in this specific case. And something that I know people don't want to do when they are starting with LLVM is just read the code of the passes in LLVM. There is a lot of good stuff in there. Obviously, if you're not familiar with C++ and LLVM, it's not the easiest, but I think it's still worth it.
So, the next topic is updating the LLVM version. So far, when I've developed out-of-tree tools, I've always pinned the version to one specific number, right? Let's say LLVM 9. And then when LLVM 10 comes out,
you rebase your plugin and check if any API broke. If there were some changes in the IR, most recently I'm thinking about opaque pointers. It was quite a big change when updating the LLVM version. And something to consider when doing this
is that it may be time-consuming. A lot of time can be spent on it. It may be just a day if there were no changes in the API. But it could also be very time-consuming, for instance if you had to change all your passes because it's been three years since the new pass manager came out and you still didn't do the migration, and now suddenly the old one is deprecated and it's going to be removed.
So you need to migrate your passes. So you have to do it. And in my experience, it's quite obvious too, but skipping versions makes it worse. And something that I've seen, and I know sometimes it cannot be avoided, but in that case it was avoidable,
but basically trying to support multiple LLVM versions at once. Like, say, supporting LLVM 9 through 12, which is actually what was done. And, yeah, don't do it. Like, if you can, just don't do it. Pick a version and stay with it, because otherwise it's just multiple #ifdefs everywhere in the code, and it's unmaintainable, I think.
So now, let's talk about passes. If you look for a Hello World pass on the Internet, you will get a Hello World pass, which is a transformation pass. So in LLVM, you have two kinds of passes.
The first kind is analyses, and basically they don't touch the IR. You just look at the IR and maybe provide some result, which is the result of the analysis, and that can be used by transformation passes or other analyses. And then there are the transformation passes, which may or may not change the IR.
And obviously, when you get your Hello World pass, you want to do everything in it. I mean, I'm not talking about LLVM developers, I'm talking about students and researchers. They have the pass and they put everything in it. And it's fine when it's just a one-shot thing or something like that,
but over time, at some point, the analyses and the transformations are semantically different, and LLVM has a mechanism to make it easy for you to have the analysis run only when it's needed. There is a caching mechanism. You can say, okay, I want this analysis for this object, and if the result exists, it will be given back to you.
And also, it avoids passing structure around, because when you are in a transformation pass, you can request any analysis from basically anywhere as long as you have the analysis manager. And so this is something that has cost me quite some time,
like just untangling the analysis code from the transformation code. And overall, it improved the performance, because some analyses were requested more than once for the same object. And so, yes, that leads me to investigating performance issues,
because it was something, too. So what happens when you don't know LLVM and you want to debug your code? You put llvm::errs() calls everywhere, and you comment them out when your code is ready. Okay, so it's a nightmare. I mean, it works, but you're not supposed to do it like this.
So specifically for printf-like debugging, you have an LLVM helper, and it's actually quite handy. You just define a DEBUG_TYPE somewhere in your .cpp file, and you wrap everything in the LLVM_DEBUG macro, because it does all the work for you: in a build without debug support, the output doesn't even appear in the binary. And when you're running your pass with opt, you can say, okay, I want to show the debug output for this kind of pass, and it basically provides the same feature, and you don't have to comment out llvm::errs() calls. The other thing is timing your code,
like being able to tell, okay, this part of the transformation is costing me time. And what I've seen was some manual attempt at timers: basically you declare all the timers, you start them manually, and it starts being a mess really quick. Thankfully, now we have TimeTraceScope.
I think it's what's used when you pass -ftime-trace to Clang. And so basically it's just one line. You put one variable, and when it's constructed, it starts a scope and it starts a timer, and when it's destructed, it stops the timer. And LLVM has a whole system for this,
and it emits a JSON file, and if you load this JSON into Speedscope, you get something like that. And you can see basically everything in your code without having to do anything. You get the entry points, you get the analyses, and here it was quite obvious for us
what the changes were, because this analysis, for instance, was called multiple times, but it was for the same object. So for instance, it would appear here too, but because of the caching mechanism and the untangling, it was just called once. So this is something nice that you get basically for free.
So now, some conclusions on the tool development part. It's a fairly basic conclusion: try to invest in maintenance. I know it's not always possible, especially in a scientific project, but you know, it's worth it.
Don't reinvent the wheel. If you want to do something in LLVM, LLVM likely already has something for it. And keep the diff minimal. One of the main weaknesses of PARCOACH right now is that we use copies of some passes which already exist in LLVM. I'm thinking about MemorySSA, for instance. We use a copy of it,
and from a maintenance point of view, it's not quite nice, so we need to migrate away from this. And if your passes can be useful to others, just try to upstream them; then you don't have to maintain them on your own. Then let's talk a bit about usability, because it's quite a big deal for a tool,
because you want it to be usable. So first, from a developer point of view: if your developers are going to be non-LLVM folks, you don't want them to have to dig into the LLVM install and stuff. So I've had good experience with using Docker, basically providing a Docker image with LLVM compiled and installed somewhere, or just installed from the apt repositories. And have a clear CI, like how to build your tool: just looking at the CI should be enough to know how to build the tool, from a developer point of view. And the other great thing is,
when you use LLVM, you get the LLVM tools with it. So you get lit and FileCheck, and so instead of going through some manual testing and stuff, you can just use them, and it's actually quite nice. And yes, of course, I could talk about coding standards, but basically, since you're making a plugin or a tool for LLVM, it makes sense to follow the same standards, and you already have clang-format and clang-tidy configurations for this. Now, as a user, you obviously don't want a scientific application developer to compile your code from source. You want them to just have the plugin and use it, or have the tool and use it.
If you look at Hello World passes, you see a lot of the time that you have to first get the IR, so in our case it's either from Clang or from Flang. And then you have to call opt, load the plugin manually, and run the pass manually. So I would argue this is not nice enough
for researchers and students, and since PARCOACH is a verification tool, we cannot expect users to call it on every single file. So we actually had to do some more tooling to create a wrapper,
which takes the original compiler invocation, runs the original compiler invocation, generates a temporary IR file, and then runs the tool over it. It makes it much easier for the users to just integrate with Autotools or CMake. So that makes the tool more user-friendly than is usual. And the other part is: how do you get the tool? So again, I've had good experience with Docker, especially for students, because it's easy for them. And obviously we also provide some packages
for major distributions, but you actually have to worry about how LLVM is packaged on the target system, because depending on what is available, is it shared libraries, a single dylib, and stuff, it's not the same thing. And yeah, Docker is not something you can really use on shared HPC clusters.
You're more looking at stuff like Guix, for instance, when targeting such platforms. So for this, you need some packaging. And packaging is my last point. So obviously, we used to use a do-it-yourself approach,
basically just create a shared library and hope for the best. It doesn't work, because you depend on how opt is installed and compiled, because you're loading a library dynamically into opt, so if you have not used the same C++ libraries, you're going to run into issues. And you don't know for sure which pass manager
is enabled by default in opt, so there's also that. So we've moved to doing some proper packages, .deb packages, and for Guix, and for Red Hat too, because we have some users using a custom version of Red Hat.
And for this, we actually had quite an interesting issue, because we cannot be sure that the LLVM version we need is available in their image, so we made the choice of shipping just one single static tool. And for this, it was actually quite easy,
because, as I said when I talked about CMake, you just say, OK, I want this to be linked statically rather than as a shared library, and the LLVM CMake integration handles it for you. And it was quite a nice experience for us to package for so many distributions without having to worry too much about CMake options and stuff.
So some takeaways for the whole talk. In my opinion, the LLVM integration has evolved a lot, and in a good direction. It's way easier to integrate with LLVM now than it was 10 years ago. And it's nice to say it, because when nice stuff happens, you have to say it too.
Be prepared for maintenance. If you want to create an out-of-tree tool, you have to invest in maintenance, both for LLVM rebases and basically for reviews. And if you are able to provide some LLVM guidance to your contributors, do it; it's worth it.
Investing in CI is worth it, obviously. And for the LLVM documentation, I would definitely recommend, any day, going to the LLVM documentation rather than Google for understanding what is available in LLVM.
And yeah, I encourage my students to read LLVM source code, but it's sometimes a bit hard. So if you have questions or comments, feel free, and I will be happy to answer them. Yeah? Yes?
Yeah, yeah. So the question is, for the wrapper we created, what do we use to create this wrapper, right? So basically it's a very, very small LLVM tool.
Maybe you are familiar with "not" in LLVM. There is a very small utility in LLVM, called not, which just inverts the exit status of a program. And it's a very small tool based on LLVM. And we use a similar approach.
Basically, I created an almost empty main where I just use the LLVM Support library to get the benefit of the argument parsing and the data types and so on. And I just parse the command line and first run the original compiler invocation.
And then I just generate the intermediate representation for it, by adding the appropriate flag and filtering out the object-generation flags. And then I just run the tool over it. Yes, yes, because you can just, for instance, with CMake,
you can use the CMake compiler launcher. Basically, just like ccache works, you just change the launcher and you can use the tool to launch the compiler. And for all the tools, actually, in our project we use mpicc, but we are able to change the compiler used by mpicc and say, okay, use a packaged Clang instead of GCC, for instance.
So the question is, when you ship your tool, do you link statically or dynamically, basically? So both, actually, both. When shipping for Red Hat, because we don't have control over what packages are in their custom image, we ship statically, because we are not sure which LLVM we are going to have.
So the binary is 100 megabytes, but we don't have much choice. And when shipping for a system like Ubuntu or Debian, we just depend on the shared libraries. Yes?
So the question is, when rebasing the tool from one LLVM version to the next, do I use the changelog developers put their love into? And if yes, is it helpful? Unfortunately, the answer is no. But that's because I read LLVM Weekly, so I kind of know what happens. This is just my way of doing stuff. So no, but if I did look into the changelog, I'm sure I would find helpful information.
So the question is, am I trying to rebase as LLVM progresses, or am I just rebasing for every version when it's released? And it's only when a release comes out that I do the rebase.
It's easier, because otherwise, you know, depending on what kind of target you ship for, it's hard, and it's just simpler to say, okay, we know we need to rebase on the new version, and it's fine.