Improving LibreOffice quality together
Formal Metadata
Title: Improving LibreOffice quality together
Number of Parts: 561
License: CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/44342 (DOI)
Language: English
Transcript (English, auto-generated)
00:14
Yeah, so let's get started. Welcome, everyone.
00:23
My name is Xisco Faulí. I am the QA engineer at The Document Foundation; I started in this position two years ago. And in today's talk, I'm going to talk about
00:41
how to improve LibreOffice quality together. Basically, it's a summary of what happened in 2018, and what's coming in the near future in 2019. First, I'm going to talk about
01:03
what happened in Bugzilla, the bug tracker we use. Then I'm going to talk about the automation of things we are doing in QA, or how we can find ways
01:20
to automate things. Then I'm going to talk about QA events in 2018. And in the middle of 2018 we created a blog about QA, so I'm going to briefly talk about that and what we are using it for.
01:43
Basically, that's it. So let's start with what happened in Bugzilla last year. We got around 7,500 reports
02:05
in the whole year, from 3,100 people. 88% of those reports were bugs
02:20
and 11.5% were enhancement requests. On average, we get 550 reports every month.
02:45
Then in October, we got around 700 reports. The reason is that 6.1.3
03:00
was released about that time. I think that's when this release became stable, and then we got more people reporting issues. Then, we got almost 6,900 reports closed
03:25
by 520 people. This is slightly lower than the number of reports that were opened, but anyway, I think it's quite impressive to see so many bug reports in a year.
03:44
On average, that's slightly below 600 reports closed every month, and October was also the month where the most reports were closed.
04:01
Basically, when we get more reports, normally some of them are duplicates or not an issue, so they get closed as well. On this chart, we see that of those 6,900 bugs
04:25
closed, almost 32% were closed as FIXED, so there is a commit fixing them. Then around 24% were duplicate bugs,
04:42
which were triaged by QA. Around 18% were WORKSFORME, which means those bugs were triaged in the past and at some point they got fixed; then someone retested them,
05:01
and they were no longer reproducible, but we don't know the patch or commit fixing them, so we just close them as WORKSFORME. Then around 12% were closed as INSUFFICIENTDATA,
05:20
because we requested more info from the reporter but never got any back. Normally, when a bug is reported, it goes to UNCONFIRMED status.
05:41
From that point on, QA jumps in, triages it, and decides whether to move it to NEW, DUPLICATE, or whatever status it needs to go to. The important thing is that
06:00
the lower the number of UNCONFIRMED bugs, the better, because if this chart keeps going up all the time, it means we are not triaging those bugs in time.
06:21
If we don't triage them, it's going to take longer to fix them. So it's always a struggle to push this number back down: in March we got it to 300, then it went up to 500, then back down to 350.
06:41
Then it goes up again; it's always this kind of trend. At least the idea is that we keep at it, so it's not going up all the time. Then, this is an interesting one.
07:04
We see that over one year, the number of regressions went from 850 to more than 1,000. In April or May, more or less,
07:25
there was a drop there. I took a look in Bugzilla, and there was this huge change from Armin, AW8080. So then many regressions were introduced,
07:43
and at some point he fixed all of them, so we had a drop. But it's something to take into account, because while I was preparing the slides, I did the same analysis:
08:02
I created the same chart for the previous year, 2017, and it showed the same trend: we get more and more open regressions. I understand that
08:24
many people report bugs, and at some point we find out they are regressions, because things used to work in the past. But it's something to take into account. Looking at this chart, I thought, okay,
08:43
we should analyze when those regressions were introduced. We see that in 5.1 and 4.1, sorry, we still have around 100 open, and the same for
09:06
6.1. There it's kind of expected, because it's still a production release, not end of life, so we are still working on that release as well,
09:23
same as 6.2. Looking at this chart, I can say that half of them were introduced three
09:43
or four major releases ago. Those regressions were introduced so long ago that it's now difficult to get someone to look at them. It's not like when you introduce a regression
10:01
that is only two months old, which is easier to fix. We are carrying all these regressions from the past, so it's difficult to get someone to fix them. Then, highest priority bugs
10:22
from last year. This peak here also matches when the refactoring from Armin was done, so many crashes and many regressions were found.
10:43
When he fixed them, we got back to where we were, and now we are kind of stable here. And here, with high priority bugs,
11:01
it's kind of steady as well. That peak there is another refactoring from Armin where many problems were introduced, and then it got back to normal.
11:22
So that was it for Bugzilla. Regarding automation: two years ago, I talked about this script we are using now. Basically, what we do is,
11:42
well, we have a pool of documents, and we import them into LibreOffice, then export them to different formats, like doc, docx, and RTF. Then we open those
12:01
exported documents in Word or PowerPoint, and we create a PDF. So we have the reference PDF from Word, and the PDF created from LibreOffice after round-tripping it through Word.
12:22
It works by finding differences in those PDFs, so I use it to find regressions. Last year, we found 62 bugs
12:45
with this tool. The good news is that 70% of them are already fixed. It seems CIB and the NISZ team are using this tool as well.
13:01
Right now, in the TDF infrastructure, we use it to test Writer, Calc, and Impress. In Writer, we use a pool of 4,000 files; in Calc, 5,500 files;
13:23
and in Impress, 2,400 files. Those are random files downloaded from different bug trackers. Using a huge number of documents allows us to find real corner cases,
13:44
really strange or obscure documents. For normal features we already have test cases, but this tool allows us to find problems that otherwise we couldn't find.
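As a rough illustration of how such a round-trip harness can work, here is a minimal sketch in Python. It is not the actual TDF tool (which also renders reference PDFs with Word); it assumes soffice and poppler's pdftoppm are on PATH and Pillow is installed, and the docx target format and diff threshold are just examples:

```python
import subprocess
import tempfile
from pathlib import Path

from PIL import Image, ImageChops  # pip install pillow

def convert(infile, outdir, fmt):
    """Convert a document with LibreOffice in headless mode."""
    subprocess.run(["soffice", "--headless", "--convert-to", fmt,
                    "--outdir", str(outdir), str(infile)],
                   check=True, timeout=120)
    return Path(outdir) / (Path(infile).stem + "." + fmt)

def pdf_pages(pdf, outdir):
    """Render each PDF page to a PNG with poppler's pdftoppm."""
    subprocess.run(["pdftoppm", "-png", "-r", "72", str(pdf),
                    str(Path(outdir) / pdf.stem)], check=True)
    return sorted(Path(outdir).glob(pdf.stem + "-*.png"))

def pages_differ(png_a, png_b, threshold=0.01):
    """Flag a page pair whose ratio of differing pixels exceeds threshold."""
    a = Image.open(png_a).convert("RGB")
    b = Image.open(png_b).convert("RGB")
    if a.size != b.size:
        return True
    diff = ImageChops.difference(a, b).convert("L")
    changed = sum(1 for px in diff.getdata() if px > 16)
    return changed / (a.size[0] * a.size[1]) > threshold

def roundtrip_check(document):
    """Return the page numbers where the round-tripped copy diverges."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = Path(tmp)
        (tmp / "ref").mkdir()
        (tmp / "rt").mkdir()
        ref_pdf = convert(document, tmp / "ref", "pdf")  # reference rendering
        docx = convert(document, tmp, "docx")            # export to a foreign format
        rt_pdf = convert(docx, tmp / "rt", "pdf")        # re-import and render again
        ref = pdf_pages(ref_pdf, tmp / "ref")
        rt = pdf_pages(rt_pdf, tmp / "rt")
        if len(ref) != len(rt):
            return ["page count changed"]
        return [n for n, (a, b) in enumerate(zip(ref, rt), start=1)
                if pages_differ(a, b)]
```

Any flagged page is then a candidate for a human look and, if confirmed, for bisecting.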
14:04
So this is an example. On the left, you have the reference, and on the right, the document exported from LibreOffice. You can see that some characters are missing.
14:24
This way, we find right away that something is wrong, then we just bisect it and say, okay, this commit produced this regression. So it's faster.
14:41
Same here: this is from LibreOffice, and the background was white while it should be transparent. Or here, the bullets should be one specific size,
15:02
and there they were much smaller. Then, we also use some scripts to track what's going on in Bugzilla. As I said before, we get more than 5,000 reports every year.
15:25
You just need to create an account; it's as easy as having an email and a password. You can edit everything you want, as we don't restrict what users can do in Bugzilla. We have this philosophy that
15:42
anyone can edit anything. The downside is that we need to check that things are done the right way. In order to do that, I use this script.
16:02
We check 30 different things with this script. One of the ones I find most interesting is that it creates a report letting me know that a regression or a crash was just fixed.
16:21
So I get the list of those fixed regressions and crashes, and I just go there and verify they are fixed. Another interesting one: we are encouraging newcomers to confirm bugs.
16:40
But the problem is that if they just confirm the bug, we don't know if it's a regression or not. If it's a recent regression but no one checks that, the bug may remain open forever. With these reports, I know when a specific bug was moved to NEW without checking whether it's a regression or not.
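As an illustration of the kind of check such a script can perform, here is a hedged sketch against the standard Bugzilla REST API of bugs.documentfoundation.org; the field names follow stock Bugzilla custom search, and the real script's queries and checks may well differ:

```python
from datetime import datetime, timedelta

import requests  # pip install requests

BUGZILLA = "https://bugs.documentfoundation.org/rest/bug"

def recently_fixed_regressions(days=1):
    """Regressions resolved as FIXED within the last `days` days,
    candidates for QA to retest and mark as VERIFIED."""
    since = (datetime.utcnow() - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    params = {
        "product": "LibreOffice",
        "bug_status": "RESOLVED",
        "resolution": "FIXED",
        "last_change_time": since,  # only bugs touched since the cutoff
        # custom-search triplet: keyword field contains "regression"
        "f1": "keywords", "o1": "substring", "v1": "regression",
        "include_fields": "id,summary",
    }
    resp = requests.get(BUGZILLA, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["bugs"]

for bug in recently_fixed_regressions():
    print(f"verify? https://bugs.documentfoundation.org/show_bug.cgi?id={bug['id']} "
          f"{bug['summary']}")
```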
17:02
So I just double-check it, and that speeds up the process. And things like that. Right now, I run this script locally,
17:22
because it's something I've been working on this year. My idea, now that it's more or less working as expected, is to publish it somewhere, like on the wiki, or to have a website for it, so that any other contributor can read this report
17:45
and also help with that. I gave a talk at the conference in Tirana about it, so here's the link. This is another thing this script is good for: when I see a newcomer doing things in Bugzilla,
18:06
I get a notification from the script, and I just send them an email welcoming them and giving them some pointers and some interesting links.
18:22
Last year, I sent this email to 130 people. I do the same for old contributors: people come and go, so if I see that someone was contributing actively in the past
18:42
and after half a year this person is not contributing anymore, I just send an email saying, hey, we miss you; we would appreciate it if you could help us again anytime in the future. I sent this email to 150 old contributors,
19:04
and sometimes you get nice replies like, oh, I'm busy now, but whenever I have time again, I'll contribute again. So I think people appreciate it; not always, but sometimes.
19:22
Then we have UI tests as well. We have Zdeněk, raal, here. He did really impressive work last year: around 200 patches in 2018. We also have Markus Mohrhard, who wrote this framework.
19:44
Right now, we have 136 tests in Writer, 15 in Impress, five in Math, and none in Draw, so that's something to work on. Calc is the most covered, with 264.
20:03
That makes 427 tests. Considering this framework was introduced a year and a half ago, that's really impressive work and progress.
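The tests are written in Python on top of this framework; they live under uitest/ in core.git. Roughly, a test looks like the minimal sketch below (the helper names follow the framework, but look at the real tests in the repository for authoritative examples):

```python
from uitest.framework import UITestCase
from libreoffice.uno.propertyvalue import mkPropertyValues

class SimpleWriterTest(UITestCase):
    def test_type_and_undo(self):
        # open a new Writer document from the Start Center
        self.ui_test.create_doc_in_start_center("writer")
        xWriterDoc = self.xUITest.getTopFocusWindow()
        xWriterEdit = xWriterDoc.getChild("writer_edit")
        # simulate typing into the document body
        xWriterEdit.executeAction("TYPE", mkPropertyValues({"TEXT": "hello"}))
        document = self.ui_test.get_component()
        self.assertEqual(document.Text.String, "hello")
        # drive a UNO command, then check the model again
        self.xUITest.executeCommand(".uno:Undo")
        self.assertEqual(document.Text.String, "")
        self.ui_test.close_doc()
```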
20:22
Then, this is something I did at the Hackfest the other day. There was a regression in a dialog, and I thought, well, we have the make screenshot target that just prints all the dialogs to PNG files.
20:40
So maybe we could just compare those screenshots between different builds. If they differ, like in this case, there might be a false positive, but we could also find regressions much faster this way. That's something I'm doing now, so let's see if it works.
21:02
It's a simple script, but I'm going to have it running in a VM: I pull master, build it with the screenshots, then compare and see if some useful information comes out of it.
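A minimal sketch of that comparison step, assuming two directories of PNGs produced by make screenshot from two different builds (the directory names and the diff criterion here are illustrative):

```python
from pathlib import Path

from PIL import Image, ImageChops  # pip install pillow

def changed_dialogs(old_dir, new_dir):
    """Yield dialog screenshots that differ between two builds."""
    old_dir, new_dir = Path(old_dir), Path(new_dir)
    for old_png in sorted(old_dir.rglob("*.png")):
        new_png = new_dir / old_png.relative_to(old_dir)
        if not new_png.exists():
            yield old_png, "missing in new build"
            continue
        a = Image.open(old_png).convert("RGB")
        b = Image.open(new_png).convert("RGB")
        # any size change or any differing pixel makes it a candidate
        if a.size != b.size or ImageChops.difference(a, b).getbbox():
            yield old_png, "differs"

for png, why in changed_dialogs("screenshots-old", "screenshots-master"):
    print(why, png)  # manual review needed: some will be false positives
```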
21:22
Then, QA events. Normally, for alpha 1, beta 1, and RC 1, we organize a bug hunting session. So we had three bug hunting sessions for 6.1 and three more for 6.2.
21:40
We normally have dedicated sessions where we encourage participants to test some new features or a specific part of the program. In 6.1, we focused on the Firebird migration and the image handling refactoring.
22:02
And in 6.2, we did it for the KDE5 integration, the Notebookbar, and also the Firebird migration. Then there was a hackfest in Taiwan, organized by Franklin,
22:20
who was here just an hour ago, together with Chen Xia and Jeff Huang. This was an interesting event because around 70 students attended, and it was focused on QA.
22:41
We also have Muhammet here; he ran two bug hunting sessions in Ankara, if I'm right. Yeah. And then finally, the blog.
23:03
In August, we created a blog for QA. Right now, we're using it to announce pre-releases and also to publish monthly reports.
23:26
In the past, when we had pre-releases, we normally sent an email to the QA list, but that's something we stopped doing. I thought, well, the more we advertise them, the more people are going to download them,
23:41
and then we're going to have more people testing them. So I thought, okay, let's use the QA blog, which goes to the planet as well, and I can also share the link on Telegram or whatever channel. So right now, we announce
24:01
all pre-releases on the blog. What we do is point people to the Get Involved page, where they have the links to download the builds. On this chart,
24:21
you can see that yesterday I announced the final pre-release, RC3 of 6.2, which is going to be announced as final next week. Here, you have an average of 35 people
24:44
visiting the Get Involved page, but yesterday, as I announced this pre-release, we jumped to 70 people. I think it's important to announce it, because then we get more people testing.
25:06
And finally, we have monthly reports. On the one hand, I do them with a script, because we have some charts that can be generated
25:20
automatically. I'm going to show you this example from the last report, from December. We have here the number of reports,
25:44
triaged bugs and the people doing that, fixed bugs, the list of critical bugs like crashes and highest priority bugs, verified bugs; different information about QA.
26:01
Then we have this chart, similar to the one I used at the beginning of this presentation, with regressions, so you get an estimation of what's going on in the project. This information is generated automatically with a script.
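For example, such a chart takes only a few lines of Python once the counts have been fetched (e.g. with REST queries like the ones sketched earlier); the numbers below are placeholders for illustration, not real report data:

```python
import matplotlib.pyplot as plt

# placeholder monthly counts, purely illustrative
months = ["Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
open_regressions = [850, 880, 910, 960, 990, 1010]

plt.figure(figsize=(6, 3))
plt.plot(months, open_regressions, marker="o")
plt.title("Open regressions (illustrative)")
plt.ylabel("bug count")
plt.tight_layout()
plt.savefig("open-regressions.png", dpi=150)
```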
26:21
But then we have this part here, which is quite interesting, because I think it's been a topic for quite some time that we wanted a place to gather all the information about what's going on in development,
26:43
and, in this case, development and QA. So if you are not really following Git or what's going on in the repository, you can just come here and see what's going on in the project, like scan support,
27:00
things that are going on in master. So it's a human-curated way of knowing what's going on. That's it; thank you for attending.
27:23
Do you have any questions? Yeah? You mean if the bug is reopened?
27:48
Yeah, so what we do now is: someone reports a bug, and if we need some information, we request it and put the bug in NEEDINFO.
28:00
After six months, if we don't get any reply from the reporter, we send a reminder that the bug is going to be moved to RESOLVED INSUFFICIENTDATA in a month. If we don't get a response, we just close it, so bugs don't remain open forever in UNCONFIRMED or NEEDINFO or whatever.
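A hedged sketch of how such stale NEEDINFO bugs could be queried via the Bugzilla REST API; the real reminder workflow lives in TDF's QA scripts, this only illustrates the query (days_elapsed is stock Bugzilla's "days since bug changed" search field):

```python
import requests  # pip install requests

def stale_needinfo(days=180):
    """NEEDINFO bugs untouched for `days` days, candidates for a reminder."""
    params = {
        "product": "LibreOffice",
        "bug_status": "NEEDINFO",
        # custom search: days since the bug last changed exceeds the cutoff
        "f1": "days_elapsed", "o1": "greaterthan", "v1": str(days),
        "include_fields": "id,summary,last_change_time",
    }
    r = requests.get("https://bugs.documentfoundation.org/rest/bug",
                     params=params, timeout=30)
    r.raise_for_status()
    return r.json()["bugs"]
```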
28:20
Oh, a lot of questions.
28:47
Yeah, but that's because the bug was set to NEW and then nothing happened within a year, so we just send a reminder. The reminder is trying to get
29:05
the reporter to retest that bug, but sometimes we don't get a reply; that's what you mean, this long reminder that we send. Why not close it automatically? Because the bug was confirmed in the past,
29:23
so someone needs to retest it in order to close it. We cannot close it automatically, because then we are losing information. I mean, if those bugs are still reproducible in master and we just close them automatically,
29:40
then we're closing a bug incorrectly, because it's still reproducible. That's why we ask for input from a third person, or the reporter, or whoever can retest it, just to make sure whether it's still reproducible or not.
30:04
Yeah?