
From zero to first test in your own LAVA laboratory


Formal Metadata

Title
From zero to first test in your own LAVA laboratory
Subtitle
(in less than 45 minutes)
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Linaro Automated Validation Architecture (LAVA) is without a doubt one of the best currently available tools for managing QA board farms. It has proven to be quite a handy tool for both developers and test automation engineers. Although it comes with extensive documentation, creating one's first laboratory can be a challenging task. Does it have to be for every newcomer? During this talk Paweł guides the audience through the process of setting up their own LAVA instance. He also presents how to manage its configuration and how to easily make deployments automated and reproducible.
Transcript: English(auto-generated)
Hello everybody. My name is Paweł Wieczorek. I work for Samsung R&D Institute Poland and I'm currently a member of the Tizen release engineering team. Tizen is a GNU/Linux distribution, and the Common variant, which I currently take care of, mainly aims to provide support for as many development boards as possible. That includes some popular ones like Raspberry Pis and some less frequently used ones like Odroids, ARTIKs, and so on.
Today I would like to share with you a short introduction to what a LAVA laboratory is. As you might imagine, tasks such as validating, verifying, and testing operating systems on a wide variety of development boards involve a lot of repetitive work, and in order to stay efficient, we have to offload as much of it as possible to automated systems.
I will start with a short introduction to what LAVA actually is, how you might benefit from it, and what it can give you. Then I will go through the steps of actually setting up your own LAVA laboratory.
I would also like to share a few useful tools that might make your future work with LAVA much easier, and I will show you how you can get results quickly. Then I would like to suggest a few steps you might take once you have your first instance, and what to actually do with it. I will conclude with a short summary, a few final thoughts, and, of course, a Q&A session.
Let's start with what LAVA actually is. LAVA is an acronym which stands for Linaro Automated Validation Architecture. It's an automated system for deploying operating systems on embedded devices, and not only on physical devices but also on virtual ones like emulators or VMs. That's why we became interested in it in the first place: to make the whole process of deploying kernels, device tree blobs, and root filesystems onto embedded devices happen with no manual interaction. As I mentioned, LAVA lets us test both on actual embedded devices and in a virtualized environment.
Once the operating system is put onto the device, a wide range of tests can be run on each of the devices.
That includes boot tests, even at the bootloader level, as well as some higher-level tests. It's worth noting, though, that some of the tests might require additional hardware, like tests for power consumption or temperatures depending on the load on the devices.
Let's ask ourselves, when is it actually needed? In the most basic setup for embedded development, with a single ARMv7 board like the BeagleBone Black, it's not really hard to memorize all the steps for flashing the device, communicating with it, and gathering results from the test runs, but it has some downsides. For example, only a single developer can use the board at a time. It's hard to share it without physical presence, passing it from one developer to another.
It doesn't allow running tests in parallel, and that's not the only problem with this setup. What if your application or your embedded Linux distribution has to support other target devices, even other ARMv7-based ones like the BBB? For example, the ARTIK 10 or the Odroid XU3. Even with all the procedures in place and known by the developers, maybe your application or your distribution has to support a completely new development board, like the MinnowBoard Turbot, which is Intel-based. All of the operations that I mentioned earlier are done in a completely different way than on the ARM-based devices. And even if all of these details are shared among your team and are no longer an issue, sooner or later the test results will have to be provided quicker, testing will have to be done in parallel, and managing your whole board farm will require too much effort from the developers.
So maybe you could benefit from an abstraction layer put over your whole board farm. That's exactly what LAVA is. LAVA simplifies even complex ways of managing devices. You don't have to worry about the specifics of each device: flashing, running, communicating, gathering results. That's no longer your concern. You don't have to worry about device-specific features as long as they were previously defined in your LAVA configuration, and you don't have to worry about managing multiple instances of your device types either. The scheduling and dispatching of tasks is done for you in the LAVA environment. As long as all the devices were configured beforehand, it provides a unified way of communicating with them from the developers' point of view. Developers no longer have to put effort into managing the devices; from their perspective, all devices look the same.
With LAVA, you also get to run all of your tests in parallel, as long as they do not depend on each other and can be divided into smaller chunks. It also stores and archives all of the results that you get over time, so that you can review them or investigate more closely if you need to in the long run. And you do not lose any of the benefits of direct interaction with your board: you can still access the device directly, either with the built-in solution, hacking sessions, or with an external tool like the board overseer (lavabo) by Free Electrons.
And since this tool gives you so much, let's go through who actually uses LAVA currently. First of all, the team behind the tool itself, Linaro, currently uses it for testing both Linux and Android on development boards. Also, people from the kernelci project, who test the support of many development boards in the kernel, perform boot tests within LAVA laboratories. As for whole distributions, currently both Debian and Automotive Grade Linux do their QA in LAVA laboratories.
Now that we have a rough picture of how we can benefit from LAVA, let's go through the setup steps. We will focus today on a standalone instance, even though LAVA also lets you distribute your environment. What I mean is that it doesn't matter whether the boards are in your European office, your Asian office, or any other country: it will still look the same from the LAVA point of view. We will also focus on virtual devices only today, and we will not get into the topic of writing tests for LAVA.
And why is that? First of all, to make these first steps as straightforward as possible. Since this might be completely new territory for you, it is really important to get familiar with all the basic concepts, so that LAVA will not get in your way and will help you instead of making things even more difficult. Even though the tests that you might already use can be reused in LAVA, it might be wise to postpone that migration until the whole setup is done. So what are LAVA's requirements? Fortunately, LAVA is not too demanding.
In the setup that we go through today, we will need only a supported Debian release, which currently includes even the old stable, Jessie. Ubuntu support is currently frozen due to the old version of Django in the default repositories; if you are interested in the details, you will find the proper mailing list thread linked on the slides on the page of this talk, on programm.froscon.de. Apart from the Debian-based platform, we will also need a few additional files. First of all, a system image. It might be built all by yourself or taken from the pre-built images provided by Linaro on their main LAVA instance at images.validation.linaro.org.
For starters, that might be the quickest way to get up to speed. Next? Yeah, sure. Does LAVA support building your own images within the framework, or do you have to build them outside and just supply them? Currently, LAVA needs these images to be already built, so some additional setup would be required: for example, as you mentioned, building them elsewhere and then just supplying them to LAVA.
Thanks. Next, apart from the operating system image, you will have to supply a health check job, just to make sure that the board operates properly. These can also be obtained directly from Linaro, on git.linaro.org. Under the QA domain, you'll find many exemplary health check jobs for various embedded devices. Also, a device type template will be necessary, but those are supplied by Linaro in all LAVA installations as well. There are multiple templates already in place, and the one that we will need for QEMU is provided by default.
The only file that you'll have to prepare beforehand all by yourself is a three-line device dictionary. It consists of a definition of which template will be extended, plus two properties specified in order to avoid conflicts between devices: the MAC address for the QEMU device and, to maximize the utilization of your resources, the maximum memory of the device.
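Such a device dictionary is a small Jinja2 file; a minimal sketch, assuming the stock `qemu.jinja2` template shipped with LAVA (the MAC address and memory values here are arbitrary examples):

```jinja2
{# Extend the stock QEMU device-type template #}
{% extends 'qemu.jinja2' %}
{# Unique MAC address, so several QEMU devices can coexist on one network #}
{% set mac_addr = '52:54:00:12:34:56' %}
{# Cap the guest's memory (in MiB) to keep host utilization predictable #}
{% set memory = 1024 %}
```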
Once you have all of this prepared, you can go directly to setting things up. Thanks to the efforts of the LAVA packaging team, Debian repositories already provide LAVA metapackages, which we will use today. Beforehand, though, the database for the results, configuration, and so on has to be prepared. It's also worth noting that just for today we will use the LAVA metapackage; as your laboratory gets more sophisticated and your requirements grow, you might benefit from the fine-grained packages to keep the LAVA setup as minimal as possible and not install any unnecessary parts. Once this is all in place, all you have to do to set up the web UI is enable two additional Apache modules, replace the default configuration with the one supplied with lava-server, and of course restart the service in order to apply all the changes on your server.
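Condensed into commands, the setup described above might look as follows; this is a sketch for a Debian system, assuming the package and Apache site names used by the LAVA packaging of that era (they may differ between releases):

```shell
# Prepare the database backend, then install the LAVA metapackage
sudo apt-get install postgresql
sudo apt-get install lava

# Enable the proxy modules the web UI relies on and switch to the
# Apache site configuration shipped with lava-server
sudo a2enmod proxy proxy_http
sudo a2dissite 000-default
sudo a2ensite lava-server.conf
sudo service apache2 restart
```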
Also, do remember to set up a superuser for the whole configuration. As for actually adding devices to your laboratory, three steps are everything that you need. The first is adding the device type, that is, letting the laboratory know what you have in it; this has to be done only once per device type. The next ones are adding an instance of your device and specifying the details of that instance, or, in LAVA terms, passing the device dictionary to your LAVA laboratory. Once this is done, you can use either the CLI or the web UI. Since the CLI allows you to automate everything, I believe it will be the tool that you use the most, but for quick-and-dirty tests the web UI should be sufficient.
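The three device-registration steps, plus authenticating the CLI and submitting a first job, might look roughly like this. This is a sketch only: the `lava-server manage` subcommand names have varied between LAVA releases, the hostname `qemu01` and file names are made up, and `lava-tool` was the CLI client of that era (newer releases ship `lavacli`):

```shell
# 1. Tell the lab about the device type (once per type)
sudo lava-server manage device-types add qemu

# 2. Add an instance of that device type
sudo lava-server manage devices add --device-type qemu qemu01

# 3. Pass the device dictionary (the three-line Jinja2 file)
sudo lava-server manage device-dictionary --hostname qemu01 --import qemu01.jinja2

# Authenticate the CLI with a token created in the web UI, then submit a job
lava-tool auth-add http://admin@localhost/RPC2/
lava-tool submit-job http://admin@localhost/RPC2/ qemu-health-check.yaml
```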
There are also a few other tools that you might benefit from when you set up your own LAVA laboratory. First of all, it is beneficial to have your environment reproducible, and from the LAVA point of view it doesn't really matter which configuration management provider you use; they can all perform equally well. What matters is that your environment is reproducible, so that you can always be sure that your staging, production, or any other evaluation environment has the same setup. Choose your personal favorite; in Tizen, we use Ansible playbooks the most. Apart from that, I believe that your evaluation environment will probably live in virtual machines.
So, depending on how much time you have to spare, you might use the quicker solution, Vagrant, a management tool for virtual machines which lets you bring up VMs almost instantly and provides a wide range of pre-built boxes via its Atlas service. But do be careful, since not all of the basic boxes are available for every virtualization provider you might use, so take a closer look at what you will be pulling from the Atlas service. If you have some more time to spare and like to tinker a little, libvirt will probably be the solution for you, since it allows the setup to be adjusted more closely to your needs and still comes with a few user-friendly CLI and GUI tools.
Don't think that you will have to run all of the steps I mentioned by yourself. Of course, it might be a good exercise to run them once more, but on the page of this talk you will find an example virtual machine configuration with LAVA installed from the metapackage, with a QEMU device, based on the vagrant-libvirt virtualization provider and provisioned with an Ansible playbook. So, let me go over it real quick. What you'll find in the tarball are the basic requirements for this evaluation environment, which are currently only pip and Ansible installed on your host machine. Everything else will be brought up for you with the playbooks from the tarball as well. Yeah, sure, of course.
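Bringing up such an evaluation environment from a tarball like the one described would boil down to a few commands; a sketch with hypothetical file and directory names:

```shell
# Host prerequisites: pip and Ansible
pip install --user ansible

# Unpack the evaluation environment and bring up the VM;
# vagrant-libvirt drives libvirt, and the Ansible playbook provisions LAVA
tar xf lava-evaluation.tar.gz
cd lava-evaluation
vagrant up --provider=libvirt
```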
How about now? Is it better? All right. So, let's stay with this font size. As for the configuration of your virtual machine, if you decide to modify it, do remember to allow the virtual machine to nest other virtual machines within it, so that you will be able to add the QEMU device as a KVM machine.
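With vagrant-libvirt, enabling nested virtualization is typically a matter of passing the host CPU through to the guest; a sketch of the relevant Vagrantfile fragment (the box name is hypothetical):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"   # hypothetical box name
  config.vm.provider :libvirt do |lv|
    lv.memory = 4096
    # Pass the host CPU model through so the guest sees VMX/SVM
    # and can itself run KVM (needed for the QEMU/KVM LAVA device)
    lv.cpu_mode = "host-passthrough"
  end
end
```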
All the steps that I already mentioned are written as Ansible roles, so the database initialization, the LAVA setup steps, and the others are already done for you, so that you will get up to speed in no time. Just to be sure, I will bring up the virtual machine. I have to admit that this is a little bit of cheating.
I brought it up just before this talk, so that you won't have to wait for the download of all the packages. Once this configuration is applied to our virtual machine with the LAVA evaluation environment, we can go to the first instance of our own LAVA laboratory and, for example, check out the devices that were added: in the initial state, just the single QEMU device, which, as you can see, has already performed a few health checks. So, what is inside such a test job? Even the most basic template requires three main actions: deployment instructions, boot instructions, and the actual test run. The deployment instructions describe what will be put onto your device and how; the boot instructions define what to expect from your operating system; and the actual tests are described in separate Git repositories, available directly from Linaro.
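Put together, a minimal QEMU job definition illustrating those three actions might look like the sketch below (the URLs, prompt string, and test repository are placeholders, not real artifacts):

```yaml
device_type: qemu
job_name: qemu-first-job

actions:
  # 1. Deployment: what goes onto the device and how
  - deploy:
      to: tmpfs
      images:
        rootfs:
          url: https://example.com/images/stretch.img.gz   # placeholder
          compression: gz

  # 2. Boot: what to expect from the operating system
  - boot:
      method: qemu
      media: tmpfs
      prompts:
        - "root@debian:"

  # 3. Test: definitions pulled from a Git repository
  - test:
      definitions:
        - repository: https://example.com/qa/test-definitions.git  # placeholder
          from: git
          path: smoke-tests/smoke.yaml
          name: smoke-tests
```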
Now let's get to a few final notes. What might you do with your own LAVA laboratory from now on? Well, if you'd be interested in adding some physical devices, documentation on that can be found on every LAVA instance, but the most recent version is on the main LAVA instance at validation.linaro.org. The topic of actually writing tests is described in the "developing tests" chapter of the documentation as well.
You might also be interested in adding your laboratory to the kernelci project, if you'd like to contribute some new boards, or boards that are already available but for which you'd like results on a wider range of kernel trees. I believe that everyone will benefit from new boards in kernelci laboratories. As for articles that you might find interesting, the Automotive Grade Linux distribution publishes the whole setup of the laboratory and test framework used in its QA activities, and the testing initiative of the Civil Infrastructure Platform also provides many interesting materials on using laboratories, running tests on them, and managing the results.
If you'd rather watch a video or listen to a talk than read an article, you might be interested in the talk by Bill Fletcher from last year's Linaro Connect, which goes into detail on the internals and design of LAVA laboratories. If you'd be interested in direct access to development boards connected to LAVA laboratories, the talk by Antoine Ténart and Quentin Schulz from last year's ELCE, the Embedded Linux Conference Europe, would be the one for you. Or maybe you'd like to become familiar with the whole setup used in the Automotive Grade Linux distribution; then Jan-Simon Möller's talk would be the one for you. Of course, much more documentation is already available, and this talk touches just the tip of the iceberg of what LAVA actually is.
Even though it might seem a little overwhelming at first, all the common issues and difficulties that you might encounter will probably be solved if you look closely in the documentation. For more specific issues, the lava-users mailing list or the #linaro-lava IRC channel on Freenode will probably get your problems resolved in no time.
To sum it all up: thanks to the efforts of the LAVA packaging team, once you go through the documentation on setting up your first LAVA laboratory and pick the necessary instructions, you will get your first instance in no time, provided you meet all of the requirements. LAVA lets you unify the whole setup of your laboratory, and you get parallel execution of all of your test cases with no extra burden on the developers: all devices look the same from the developers' point of view. Even though the documentation for this project might be a little intimidating at first, the fact that you will find all of the answers in it shows that it's the right way to go. Also, there is absolutely no need to reinvent board farm management software if the one that Linaro provides meets all of your requirements and makes your workflow more efficient. Even though automating all of these actions might seem a little costly at first, it will pay off in the long run, with each new test run that you perform in your own LAVA laboratory.
And since you already have the starting points, you might get up to speed in no time. That will be all I've got prepared for you today. If you have any questions, I will be more than happy to answer them.
What are the minimum requirements for the set of interfaces a board needs to be used with LAVA? Does it require Ethernet? The minimum requirement depends on the way you deploy your images. If you only have to run tests which benchmark the capabilities of the board and you have a specific image in place, then just a serial console, to communicate with the device, will be sufficient. But if you'd like to deploy different images, some other transport layer would be more efficient, so a network connection might be helpful. The absolute minimum, though, is just the serial console.
Thank you. Another question, sorry. So I can just have a board which only has serial and do everything with it, except it will be very slow? That's right. Okay, thank you. A question slightly more on the hardware side. If you want to be able to test bootstrap procedures within your farm, which might of course go wrong, a bootloader might not work, so the board doesn't come up. Then one needs some mechanism to replace the bootloader, which normally means taking out the storage device and reflashing SD cards or whatever. Do you have experience with a hardware solution that allows remotely deploying boot images?
Unfortunately, no. The requirement for LAVA is to have a way of communicating with the operating system once the device has booted up. So the option other than booting from the SD card would be to boot from NFS. Does LAVA have plug-ins or the capability to place an image on an NFS share and then boot it up and switch it around? Yes, there are multiple different ways of deploying images onto the devices, and booting from NFS is one of them. One of the recent changes, a pull request from AGL, allows booting the device directly from NBDs, network block devices, if you'd be interested in such a setup.
What's the interface LAVA uses to talk to tests? The interface for communicating with devices, or...? With the tests: how it transfers the tests, how it communicates the results, the output of the test. Does the test have to be written in a specific language, or does it use some protocol to communicate with LAVA to report the results? The tests are written in LAVA-compatible YAML configuration files, which have to comply with its pipeline schema. So the minimum requirement would be the three parts that I mentioned earlier: the deployment stage, the boot stage, and the actual test run.
But I can see that that might not be the answer to your question. Could you rephrase, please? How do I write a test? Is it a separate executable, or is it a library for Ruby or something? What is a test for LAVA? LAVA deploys a shell script onto your device. If you can write your test as a set of steps that you would run manually in a terminal on your device, then you can also write it as a LAVA YAML test template. Okay, thank you.
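A LAVA test definition is indeed essentially a shell script wrapped in YAML metadata; a minimal sketch (the name and steps are made up for illustration):

```yaml
metadata:
  format: Lava-Test Test Definition 1.0
  name: smoke-tests          # hypothetical name
  description: "Basic sanity checks run in the booted OS"

run:
  steps:
    # Each step is an ordinary shell command executed on the device;
    # lava-test-case records a named pass/fail result for it
    - lava-test-case uname --shell uname -a
    - lava-test-case disk-space --shell df -h
```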
If there are any other questions that come up, feel free to drop me a line at my email address. Thanks for your attention, and have a nice rest of your day. Thank you.