
Crashtesting LibreOffice in the backyard


Formal Metadata

Title
Crashtesting LibreOffice in the backyard
Number of Parts
542
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Having your office suite crash is rather annoying, but crashing while opening or saving files is unsolvable for end users. The upstream LibreOffice project regularly runs this kind of testing on master - but is it possible to do it on stable branches maintained for customers? In this talk I'll introduce the set of test files and upstream scripts, and show how we use them to deliver added stability of LibreOffice for our customers.
Transcript: English (auto-generated)
Okay, hello, I'm Gábor Kelemen from allotropia, and I would like to talk about how we do crash testing in the backyard. So let's start. What even is this crash testing? Crash testing is a QA sub-project of TDF, and it runs on the master branch around every second week. What does it test? It continuously opens files, saves them in different formats, and reopens the saved files, making sure that no crashes happen during this workflow, because crashes there are rather bad from the user's perspective. Usually this process sends a resulting mail to the developers' list, and interested parties, mostly Caolán, fix those crashes. That works fine on the master branch, but what if the long-term support branches which we maintain for customers introduce such errors? That would be rather bad for customers, so we wanted to avoid that in the longer run.
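Just to illustrate the idea of the workflow (this is only a conceptual sketch, not the actual upstream harness), the open / save / reload cycle can be approximated from the command line with the --convert-to mode of soffice; all paths and the choice of target formats are made up for the example:

    # open each file, save it in another format, then re-open the saved copy;
    # a non-zero exit status (or a timeout) is treated as a failure
    for f in files/*; do
        timeout 120 soffice --headless --convert-to docx --outdir roundtrip/ "$f" \
            || { echo "crash on open/save: $f" >> crashes.log; continue; }
        saved="roundtrip/$(basename "${f%.*}").docx"
        timeout 120 soffice --headless --convert-to pdf --outdir reload/ "$saved" \
            || echo "crash on reload: $saved" >> crashes.log
    done

The real scripts do considerably more than this (many target formats, parallel workers, result reporting), but this is the basic shape of the loop.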
Okay, so what are the prerequisites for this work? First you need some hardware, a beefy system with many CPU threads, because there are a lot of files to test. And of course you need a bunch of files, tens of thousands of them, which can be downloaded using scripts in the core repository. Then you need the crash testing scripts themselves: you download them, configure them on the beefy system to build LibreOffice, run them on the set of files you have just downloaded, and interpret the results. This is how the beefy system looks in real life. It's not exactly in the backyard, but on the couch, let's say. It has some 40 CPUs or so and a lot of disk space as well.

The next step is downloading the files. The first script is called like this; it lives in the core repository and downloads user-made file attachments from public bug trackers: the TDF and Apache OpenOffice Bugzillas, Linux distribution bug trackers, and the bug trackers of other office software such as KOffice, Gnumeric and AbiWord. It has some less user-friendly properties: you need to install some extra Python modules, set an environment variable tailored to your hardware so that the download happens reasonably quickly, and run it from the download target directory. But that's about all there is to this script.
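As a rough sketch of this step, assuming the downloader is the get-bugzilla-attachments-by-mimetype script in core's bin/ directory and that the parallelism knob is something like the made-up PARALLELISM variable below (check the script itself for the real names):

    # install the extra Python modules the script imports first, then
    # run it from the download target directory
    mkdir -p ~/crashtest-files && cd ~/crashtest-files
    # hypothetical variable name: tune the download parallelism to your hardware
    export PARALLELISM=8
    python3 ~/src/core/bin/get-bugzilla-attachments-by-mimetype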
Next, the second script is our website scraper, called like this. It also needs some Python modules; you can give the target directory as a parameter, and some Microsoft Office themed forums need registration before it can work on them, so the login data needs to be stored in a configuration file.

Next is getting the crash testing scripts themselves. They are not in the core repository but in the separate dev-tools repository, in the test-bugzilla-files directory. So how do you make sense of that? Configuring the environment is also very important, and this is the most difficult part of this talk. Before you start running the scripts you need to configure the environment with this config file. It needs to be placed at that path. There are some defaults in the dev-tools repository, but you should override those.
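As a sketch of getting the scripts, assuming the public dev-tools repository on LibreOffice's gerrit is the one meant here (the config file's name and its expected path are on the slide, so placeholders are used below):

    # clone the repository holding the crash-testing scripts
    git clone https://gerrit.libreoffice.org/dev-tools
    cd dev-tools/test-bugzilla-files
    # copy the default config shipped in the repository to the path the
    # scripts expect, then override the defaults for your machine
    cp <default-config-in-repo> <path-expected-by-the-scripts>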
The most important settings are these. First, the compiler: GCC or Clang, they work the same in this regard, but you need to take care that the old version of LibreOffice you want to compile actually builds with your compiler. You need to set the parallelism, how many CPU threads you have, with the workers variable. And the most important thing is the paths for the scripts: the location of the files to test, which were downloaded by the two scripts from two slides ago; then you need to hard-code the dev-tools repository path with its variable; next is the source path for the LibreOffice core repository clone, which you also need to compile; and you need a build directory where the output of the compilation will go. In the build directory you also need to place the autogen.input file, which is likewise in the dev-tools repository. And of course you don't want to send the usual email or upload the results to the TDF site, because this run is internal to the company, so you need to set the last two variables accordingly.

Okay, next it's easy: there is a crash test data variable for the downloaded files; you copy your files there and execute the driver shell script, which does all the heavy lifting, and that's basically it. The results will be in the crashtest data directory under logs, and the mail text file will be the summary of the run. The next step is finding what went wrong and fixing the actual crashes, which is just the usual backporting and bug fixing.
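Putting the configuration and the run together, here is a rough combined sketch. Every variable name, path and script name below is an assumption made for illustration; the authoritative names are in the defaults shipped in the dev-tools repository and on the slides:

    # --- config (names are illustrative, not the real variable names) -----
    CC=gcc                        # or clang; must be able to build the old LibreOffice branch
    WORKERS=40                    # parallelism: number of CPU threads to use
    FILES_DIR=~/crashtest-files   # the attachments downloaded by the two scripts
    DEVTOOLS_DIR=~/src/dev-tools  # hard-coded dev-tools repository path
    SOURCE_DIR=~/src/core         # LibreOffice core clone that gets built
    BUILD_DIR=~/build/crashtest   # compilation output; autogen.input goes here too
    SEND_MAIL=0                   # keep the run internal: no mail to the list...
    UPLOAD_RESULTS=0              # ...and no upload to the TDF site

    # --- run ---------------------------------------------------------------
    cp ~/crashtest-files/* "$CRASHTESTDATA"/   # variable name assumed
    ./run-crashtest.sh                         # placeholder for the driver script

    # --- results -------------------------------------------------------------
    # logs appear under the crashtest data directory, and a mail text file
    # summarizes the run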
So what are the gains for upstream from this work? There are some. I made these scripts a lot more configurable, so you can set them up more easily for other companies; before, they were only able to run on the TDF server, and it was kind of a pain to transplant them to another machine. There is also a little bit of a performance gain: there was a bottleneck, and upstream can now run this job more quickly as well. And that's all, thanks for your attention.