
Running Android on the Mainline Graphics Stack


Formal Metadata

Title: Running Android on the Mainline Graphics Stack
Number of Parts: 50
Author: Robert Foss
License: CC Attribution 3.0 Unported
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
It is now possible to run Android on top of an entirely open source Linux graphics stack, and this talk will dig into how you can do it too! The recent addition of DRM atomic modesetting and explicit synchronization to the kernel paved the way, although some changes to the Android userspace were necessary. The Android graphics stack is built on an abstraction layer, so drm_hwcomposer - a component that connects this abstraction layer to the mainline DRM API - was created. Moreover, changes to Mesa and the abstraction layer itself were also needed for a full conversion to mainline. This talk will cover recent developments in the area which enabled Qualcomm, i.MX and Intel based platforms to run Android using the mainline graphics stack.
Transcript: English (auto-generated)
Hi, I'm Robert Foss. I work for Collabora and I do open source stuff for a living, typically Linux graphics, user space and kernel stuff.
And this talk is about Android and how you can run the Android open source project on top of essentially whatever hardware you want, at least as long as it has a decent graphics driver. Yeah, let's get into it.
So, Android history, Android on mainline, what the current status is and where we're going from there is what we're going to talk about. And the Android history is interesting, and I'm going to visualize it for you. This is the number of lines of diff between the mainline kernel and Qualcomm's kernel.
So it's between, let's see, 1.5 and like 3.5 million lines of diff. And this is for their common kernel. For specific chipsets it's even larger, and then for every specific chipset
they also ship a kernel branch for every single cellphone that is shipping, at least the major ones. So as you can imagine, this is actually even larger for most devices.
So, this is how we got here. Android forked the kernel and that's fine, that's what open source is for. They had good reasons to do so because the graphics stack and infrastructure wasn't particularly good. It didn't suit their needs particularly well.
Essentially support for low power devices was very much lacking and the overall graphics subsystem really needed an overhaul. So specifically a feature that was much requested was atomic support.
And atomic in this context means that you can do a bunch of changes at the same time. Like maybe you want to change the resolution of your output, and maybe the refresh rate, and maybe the color format, and you want it all to happen at once. But also, if one of the operations fails, let's say the resolution change goes through
but the color format change bounces for whatever reason, you don't want to end up in a state that's unknown, because that's how you get bugs and it's no fun. So in order to avoid this, from within Google the ADF, the Android Atomic Display Framework, was created,
and it is just that. It is atomic and scratches their particular itch. But it's not extensible or generic, and it doesn't support atomic operations for anything but planes.
So in a graphics stack there are a bunch of graphics components, like the display controller, the CRTC. There are planes and a few other parts. A plane is essentially a buffer for a part of what's going to be rendered on the screen.
But I'm going to visualize that later, so let's pause that for a second. Additionally, the ADF wasn't compatible with the then-current API that was used in the kernel. So it was a hard sell to basically throw out everything that was old and replace it with something new
that didn't really suit anyone's needs apart from Google's. So it wasn't really upstreamable, and this is where we were for a bit, until the atomic KMS API was introduced by Daniel Vetter. And this solves our problems. It supports all of the ADF use cases.
It uses a thing called properties. Properties are essentially just strings with values attached to them. And you can attach them to any object in the graphics pipeline essentially. So you can be very generic and support wonderfully weird hardware without rewriting the kernel every single time.
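To make that a bit more concrete, here is a minimal sketch of what an atomic commit looks like through libdrm. Only the libdrm calls and flags are real; the object and property IDs are placeholders that real code would look up at runtime.

    /* Minimal sketch of an atomic commit via libdrm: several property
     * changes are staged in one request and then applied all-or-nothing.
     * The object and property IDs passed in are placeholders; real code
     * would discover them with drmModeObjectGetProperties() and friends. */
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int flip_to_framebuffer(int drm_fd, uint32_t plane_id,
                            uint32_t fb_prop_id, uint32_t crtc_prop_id,
                            uint32_t fb_id, uint32_t crtc_id)
    {
        drmModeAtomicReq *req;
        int ret;

        /* Atomic support is opted into once per DRM file descriptor. */
        drmSetClientCap(drm_fd, DRM_CLIENT_CAP_ATOMIC, 1);

        req = drmModeAtomicAlloc();
        if (!req)
            return -1;

        /* Stage the changes; nothing has touched the hardware yet. */
        drmModeAtomicAddProperty(req, plane_id, fb_prop_id, fb_id);
        drmModeAtomicAddProperty(req, plane_id, crtc_prop_id, crtc_id);

        /* First ask the driver whether the whole new state is valid,
         * without applying it; that is how you avoid ending up in the
         * half-applied, unknown state described above. */
        ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
        if (ret == 0)
            ret = drmModeAtomicCommit(drm_fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);

        drmModeAtomicFree(req);
        return ret;
    }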
And it is now replacing ADF for most vendors. Yeah, I think all vendors are on board and they're at the very least slowly rolling it out now. So let's have a look at what this looks like in practice.
If you want you totally can run the Android open source project on just any kernel essentially. But it does require some extra bits and we're going to get into that. So this is what the Android graphics stack essentially looks like if I grossly simplify it.
And I will. So there are a few basic layers and they're stacked essentially like this. So there are the apps. This is what it's all about. This is what we want. Like actually the apps are the point so they're an important part.
Then there's SurfaceFlinger, which sort of mediates between the different apps. And in Android everything is sort of an app; even the stuff you don't think of as an app is an app. Like the notification bar, that's an app.
And all of these have to be integrated in some way so that you can have a pleasant user interaction. And that's essentially what SurfaceFlinger does. It also talks to the hardware. So it does this app organization and then it communicates that to the hardware, and it does that through the HWC2 protocol.
So this is essentially what it organizes. This is just some desktop, and it contains an app for the status bar. It contains an app for the navigation bar. And if you have a look over here, you see they overlap and it's not a problem.
That's SurfaceFlinger doing good stuff. You can also see that these parts are mostly transparent. And these parts are also what I previously referred to as planes. So there's a plane for the status bar and there's a plane for the navigation bar.
And they're all overlaid on top of the background. So the status bar, for example, is backed by a relatively small buffer. I guess it's pretty wide but not terribly tall. And the opposite is true for the navigation bar.
And it's all just stacked on top of the background. And this process of stacking these things is called compositing, or composing. And this is something we have dedicated hardware for, because doing it in software is kind of slow and power intensive, and you can make it really fast in hardware if you want to and if you have support for it.
And this whole stack of stuff is communicated through the HWC2 layer to the driver, and then it deals with it somehow. So the HWC2 API is implemented by the vendor driver,
which will be whatever Qualcomm gives you or whatever, I guess, NVIDIA gives you. It will implement HWC2. And the hardware composer does do some interesting stuff.
And it's surprisingly complicated. So it just gets layers through the HWC API, which sounds simple enough. These layers have stuff attached to them, metadata like X and Y coordinates, widths, heights, maybe color information, and the order in the stack, for example.
And if we take these properties into account, maybe we'll be able to do less work. So then these layers are optimized for display. And optimizing for display sounds, I don't know, kind of hand wavy.
You do stuff to it and it becomes better. But it's pretty practical, because the hardware for these things is very much limited. Maybe your display hardware only supports four layers. And as you saw in my picture before, we had three layers without really doing anything.
Actually, there are a few I didn't even talk about, like the desktop icons and stuff like that. So there's lots of stuff going on, and we very easily reach four layers. And that's what very fancy hardware supports; a lot of hardware supports only one layer or maybe two layers. So if we have more than that, we can't just send more layers to the hardware than it supports.
It won't work. So we have to sort of smash some layers together before sending them out to the hardware. And intelligently choosing which layers to combine is where you get a lot of the power savings, if you do it cleverly.
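As a very rough sketch of that idea, assuming a made-up layer struct and a fixed plane limit (none of these names come from the real HWC2 or drm_hwcomposer code), the decision could look something like this:

    /* Illustrative sketch only: the types and names here are invented.
     * The idea is the one described above: give the largest layers their
     * own hardware planes and mark the rest for GPU ("client")
     * composition, so SurfaceFlinger squashes them into a single buffer
     * before everything is handed to the display hardware. */
    #include <stdlib.h>

    #define MAX_HW_PLANES 4   /* assumed limit of the display hardware */

    struct layer {
        int width, height;        /* size of the layer on screen */
        int client_composition;   /* 1 = squash this one on the GPU */
    };

    static int by_area_desc(const void *a, const void *b)
    {
        const struct layer *la = a, *lb = b;
        return (lb->width * lb->height) - (la->width * la->height);
    }

    /* Keep the biggest layers on hardware planes and push everything
     * beyond the plane limit to GPU composition. */
    void assign_planes(struct layer *layers, int count)
    {
        qsort(layers, count, sizeof(*layers), by_area_desc);
        for (int i = 0; i < count; i++)
            layers[i].client_composition = (i >= MAX_HW_PLANES);
    }

In practice the choice also has to respect what each plane can actually scan out, formats, scaling, rotation and so on, which is exactly the kind of hardware-specific logic the hardware composer is there to hide.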
If you only combine the smallest layers on the GPU side, the display hardware still does most of the work, which is what you want. But if you end up combining the largest layers on the software side, the hardware doesn't have to do all that much work, and the power savings are very minimal, for example. Additionally, it's kind of tantalizing to think: why not just build hardware that supports an infinite number of layers?
But unfortunately, it's kind of expensive to implement in hardware. Storing all this data and making sure that you have hardware or IP blocks that can efficiently combine them quickly
is something that requires a lot of space in terms of silicon. And yeah, we all want cheap chips, so they don't support more than four layers typically. And then these combined or squashed layers are just sent to the hardware, which does whatever it does. It takes the buffers, combines them into a single buffer, and then sends it out over whatever connector you have to your display.
And then your display displays it, hopefully. So that's basically the stack for hardware compositing. So this is the user space part of the driver.
Having everything living in the kernel would be terrible for various reasons, security being one. So most of the driver lives in user space. Of course, a part does live in the kernel too, but typically it's a lot smaller and deals with talking to the hardware,
like talking to registers and making sure that the hardware is ready to accept jobs, that kind of stuff. So this is also the part that implements the APIs you're familiar with, like OpenCL, Vulkan, OpenGL, whatever it may be.
And of course, the hardware composer, for Android anyway. And yeah, at the bottom we have the kernel. You kind of need a kernel, so there it is.
So the thing about where we are now is that the mainline Linux kernel has a very good graphics API, and it's so good that some Google devices have shipped using it. So the Pixel C, I think, was the first device that shipped using the atomic KMS API.
But if you want to ship it, you kind of need something to implement the HWC. And since you're not using NVIDIA's fancy driver or whatever it may be, you need some software component to implement HWC. And Mesa, which is where graphics stuff typically is implemented in the graphics stack,
does not implement it, nor does the kernel, certainly not the kernel anyway. So something else needs to do it, and that project is called drm_hwcomposer, the DRM Hardware Composer. Yeah, it has a very long, very dull name, but it does what it says on the tin.
And this is where it lives in the stack. So this blob is typically proprietary, and if we break it out into the open source components, this is what we end up seeing. So there's the kernel, and there's the driver, which is actually a bunch of components.
What you think of as the driver is a bunch of stuff. And on top of it, we have DRM Hardware Composer. This bunch of stuff is Mesa, libdrm, and some other things. It's actually quite a few components.
There's also gralloc, which is the graphics memory allocator. There's no single software project called gralloc; there's a bunch of implementations, and memory allocation in this space is essentially unsolved. So it's still a giant mess, especially when you want to have memory
that different hardware components can actually all use together. It gets pretty problematic when, for example, your GPU supports only some color formats, and then your display hardware only supports some other color formats.
You can't really output whatever your GPU is presenting in that case. So that's why the gralloc situation is tricky. But there are a few gralloc implementations that sort this out for at least the display hardware. So it's fine. It's manageable.
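For a feel of what that looks like in code, here is a rough sketch using the older gralloc HAL interface. The interesting part is the usage flags, which tell the allocator every hardware block that needs to touch the buffer, so the implementation can pick a format and placement they all accept. Treat the details as approximate; gralloc has gone through several interface revisions since.

    /* Rough sketch of allocating a buffer through the legacy gralloc HAL.
     * The usage flags declare that both the GPU (render target) and the
     * display hardware (framebuffer / hardware composer) will use this
     * buffer, so the allocator must pick something both can handle. */
    #include <hardware/hardware.h>
    #include <hardware/gralloc.h>

    int alloc_shared_buffer(alloc_device_t **out_dev,
                            buffer_handle_t *handle, int *stride)
    {
        const hw_module_t *module;
        int ret;

        ret = hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module);
        if (ret)
            return ret;

        ret = gralloc_open(module, out_dev);
        if (ret)
            return ret;

        /* A 1080p RGBA buffer, usable as a GPU render target and as a
         * framebuffer that the display hardware can scan out directly.
         * The caller keeps the device open and later releases the buffer
         * with (*out_dev)->free() and the device with gralloc_close(). */
        return (*out_dev)->alloc(*out_dev, 1920, 1080,
                                 HAL_PIXEL_FORMAT_RGBA_8888,
                                 GRALLOC_USAGE_HW_RENDER |
                                 GRALLOC_USAGE_HW_FB |
                                 GRALLOC_USAGE_HW_COMPOSER,
                                 handle, stride);
    }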
So about DRM Hardware Composer. It's a Google project. It was written by a guy called Sean Paul, and it came from the Chromium OS project, where Chromium OS and Android are sort of related. They do use some of the same technology stack,
and now Android runs inside of Chrome OS, which is slightly perverse, in a VM, so there's Android inside of Chrome OS. Anyway, the project was founded within the Chromium OS project, and since then it's been liberated,
and now lives where essentially all open source graphics projects live, at freedesktop.org. This is where you'll find Mesa, the OpenGL driver, for example, and most of the other projects.
So I'd like to thank Google for liberating DRM Hardware Composer. It's been very helpful for the community at large, and for us who want to run Android on not-phones. So Sean Paul, Pune Kumar, and Marissa Wall have all been super helpful. So thanks, guys.
And if you want to contribute, you can do so. It's hosted on gitlab.freedesktop.org. You can submit a pull request, and people will complain about it, surely. But yeah, it's all there. So this leads us to the bigger picture.
Why should users and product manufacturers even care about this stuff? Why does it matter how this stuff is implemented? And it sort of does, because we want to be able to develop new features, or support new features that are in hardware,
and we want them supported in our software stack, and doing so is kind of tricky, because if you want to introduce a new feature into the kernel, you sort of need to use it somewhere in order to say this stuff works. Like, I've tested it at least once.
So you want a project in user space where you can quickly implement support for it. That's where DRM Hardware Composer can come in, if you want. Another part of it is how features seem to migrate. So if we look at the ADF, the display framework that was introduced by Android,
it had a lot of good ideas, like atomic, and these things seem to move from the Android kernel into the mainline kernel. So very slowly, at like a glacial pace, all of us are taking advantage of this stuff, even though it's sort of in an indirect way.
We're not using the exact same code, but the idea is the same. But this doesn't really apply to everything. If we look at the number of lines that are different between the mainline kernel and the Qualcomm kernel, for example, we see that some subsystems have very large diffs.
13% is drivers and the GPU stuff, which is a decent size.
But much of this stuff, like drivers for other things, is never going to be upstreamed, because no one cares strongly enough about it, essentially. Some things are just not really reasonable to upstream anyway. Like, for example, a fingerprint reader for your phone.
It's mostly like a shell around a signed encrypted blob of firmware. So you can write that driver that uploads this blob to the hardware, but it's not very useful to anyone. No one will enjoy that. So upstreaming it wouldn't be acceptable to the wider community,
and you as an end user wouldn't see any changes, or you wouldn't be able to interact with it any differently anyway. Another conclusion we can draw is that the difference in number of lines between the mainline kernel and the Qualcomm kernel seems pretty constant.
It's not going down drastically. The ideal scenario would be for it to shrink towards zero, but it's somewhat constant. I had a quick look today at some of the latest kernels, and it's 1.8 or 1.9 million lines of diff, so it hasn't really shrunk.
So that's kind of interesting. But what we also want to do is push the industry towards open source. And we want it not just because it gives us warm and fuzzy feelings,
or it gives me warm and fuzzy feelings anyway, but it's actually very helpful in terms of deploying products. If the entire stack is accessible to everyone and is in a working state immediately, you can bang out a product pretty quickly. You don't have to spend months trying to configure software and make it work when it should work to begin with.
Increasing that speed also means lowering development costs. So as a product manufacturer, this stuff really matters to you. And something that we've been seeing is that the open source drivers have a quality that is much, much higher than the proprietary ones.
Much of the code is shared between different drivers, and that means a lot of edge cases are found. Also, there's a lot of testing for the different types of hardware. So they all sort of benefit from it. We also want to push open source adoption into the industry.
Especially with vendors, it's hard. They don't really care about software. They're not into that. They are into selling chips, and that's what they care about. But if there is a viable open source story, they're definitely more interested.
And that's about it. Does anyone have any questions? So you mentioned that the diff doesn't seem to be shrinking, but is it the case that maybe stuff is migrating,
but they're still pouring more stuff into the forked kernels? Yes, stuff is definitely migrating, but there's more stuff coming in, like new stuff, weirder hardware, more hardware. So stuff is definitely moving, and I would hope in the long, long term that the diff actually shrinks, but it's hard to say.
But it's really mostly about drivers, right? Not core kernel logic, hopefully. It's mostly drivers, yes. So to try DRM Hardware Composer, what version of SurfaceFlinger and Android do we need?
What version of Mesa, DRM and kernel do we need? So there are some sample projects that you can essentially bring up on... Sorry, there are development platforms like the HiKey960, for example, which you can bring up with three command lines.
It's very simple: you pull down the AOSP tree and the required kernels and the blobs for that platform, and they'll all be combined and built for you. And it's surprisingly not tricky, so you can get it up and running and then start modifying stuff however you like, pretty quickly.
It'll take you four hours to build, but that's fine. So on what kind of hardware could you try this out today? Like an ARM tablet or an x86 laptop? Yeah, yes, either, all of the above.
Essentially everything that has an open source graphics driver or display driver is a viable target, and I think all of the drivers are in good enough shape to try it. I think you could even try it on a Raspberry Pi. That would, however, be a little bit tricky. I don't have a three-line solution for that, but you can if you want to.
Okay, thank you, and a round of applause.