So you think your Python startup is worth $10 million...
Formal Metadata

Title: So you think your Python startup is worth $10 million...
Series: EuroPython 2016 (part 25 of 169)
Author: Marc-André Lemburg
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported
DOI: 10.5446/21181
Transcript: English (auto-generated)
00:00
Ladies and gentlemen, Marc-André Lemburg, give him a big hand. So thanks again for the very nice intro. It's going to be hard to live up to that. Right, so I'm going to talk about a project that we did at the end of last year. I'm Marc-André Lemburg, I've been around in the Python community for lots and lots of years.
00:24
I'm also one of the EuroPython organizers. The rest you can read here, it's not that important. So this is the agenda for the talk, quite a few slides. I hope I can show all of them. So what was the outset of this project?
00:42
We were asked last year by, let's say, a big company, because I can't give any details, I signed an NDA for this. So a big company wanted to buy a small company doing Python development internally. And they wanted to know how much that Python code was actually worth.
01:00
So what the value was of the IT that they had. And because they didn't have any IT skills specifically for Python internally, because they're mostly a Java company, they asked us to help them. And so we had very little time to do this, we just had two or three weeks to actually run the whole project.
01:21
So basically we knew nothing about valuation of companies. We had to come up with something, and we thought it might be a good idea to try a few different models, then average them, do some calculations, and come up with a value to give them. They liked that, and so we did that.
01:41
So first a disclaimer: the stuff that I'm talking about here, I'm not an expert in. We basically just did some research, chose some different methods, some different tools to use, and then went ahead with it and mixed all that with our experience in running these kinds of projects.
02:01
So what do you have to do if you want to assign a value to an IT startup? Well first of all you have the business value, that is something that I'm not going to talk about because the company did that themselves, they had experts for that. Plus of course if it's IT focused then you have the IT value of the company and that's where we came in.
02:21
So both sides of course have risks and so you need to address those risks when you value the company. Again the business side we did not really participate much in but what we did do is when we looked at the code base that they had created and the way that they had set up their systems we found a few issues with that.
02:46
Issues that went, for example, into the area of data security, or maybe patent and trademark infringement. And those were things that we basically asked the business side of the company to take into account and to factor in as a risk.
03:06
And then we came in to then judge the IT risks that they had. So this is just a list of IT risks that you usually find in larger projects. And so we had to look at the IT side of this valuation process.
03:25
So this is what we set out to do; we told the company, and they were fine with doing it that way. So first we sat down with the team, we tried to analyze the team, we tried to figure out whether the developers were any good, whether the system was any good, whether the data that they had was any good.
03:41
And it turned out to be very good. And then of course we wanted to add some scientific approach to this, so of course we used Wikipedia and searched around a bit for possible ways of doing valuation. We found this COCOMO model, which seems to be a standard in the industry for doing these kinds of things.
04:01
Of course it's based on C, C++ and Java, it's not based on Python, and we found out about that later on in the process. And as a second model we used an effort model, which is basically, well, I'm going to talk about it later on. Anyway, you then get some basic value for the whole thing, and then you add some soft value, or remove some if you find risks for example, from the value that you get.
04:27
And we call this added value. Then you get one final value, which is kind of the estimate based on the models, and then you go ahead and try to figure out what it would cost to rebuild this whole thing from scratch, and you give a value for that as well.
04:43
And then in the end the company has to decide whether to either buy the company and then maybe run with that company or maybe instead use the approach of building everything themselves which might actually be cheaper. So let's look at the analysis part.
05:01
As I said we have quite a few soft factors, we have some factors that we can actually measure. So you have code metrics and you always have to take into account that you cannot possibly look at everything in that short time frame. So you have to build in some risk buffer for inaccuracies that you know you're going to have in your estimate. And so for the first thing we just sat down with the team, we discussed everything, we tried to figure out as much as possible from them.
05:25
We had a list of questions, something like 200 questions for them to answer. We went through that in a meeting for a complete day and got all the information from them. And that's also how we found out about things that were like risks for example that they had not really addressed yet.
05:49
That the big company buying the small company would have to address in that small company in order to fit their own corporate standards. And then we had to check the code, that was fairly easy because there are tools for this.
06:03
And then again you have to throw in some experience to measure the risk buffer that you have to add to this. So let's have a look at how you can measure the code metrics. There's this nice Python tool called Radon which you just throw at your Python code and it just runs through the whole repository that you have
06:26
and it then takes all the different details from that code and gives you nice summary reports and outputs all the stuff that you need to know about. I think I just skipped a slide. So the standard terms that you have in code metrics are of course lines of code, then
06:44
source lines of code which is basically just lines of code without the comments and the docstrings. Logical lines of code is something special, it's actually just code that gets executed. So for example if you have a for statement then the for line itself is not
07:02
necessarily executed, it's just that the inner loop is executed and so you just count those lines. Then you have blank lines of course and especially important in Python because blank lines are white space and we love white space, right? So the more white space the better, the better you can read the code.
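To make the raw counts above concrete, here is a minimal sketch of collecting them with Radon's Python API (this assumes the radon package is installed, e.g. via pip install radon; the file name is just a placeholder, and the same numbers are also available on the command line with `radon raw <path>`):

```python
# Minimal sketch: raw code metrics with Radon (assumes `pip install radon`).
# "example.py" is a placeholder for a module from the repository under review.
from radon.raw import analyze

with open("example.py") as f:
    source = f.read()

raw = analyze(source)  # namedtuple with the counts discussed above
print("LOC  (all lines):      ", raw.loc)
print("SLOC (source lines):   ", raw.sloc)
print("LLOC (logical lines):  ", raw.lloc)
print("Comment lines:         ", raw.comments)
print("Blank lines:           ", raw.blank)
```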
07:21
You also have to look at lines of code per module, and then you can use that as a basis for judging how maintainable the code is just from these numbers. So the more lines of code you have per module, the more classes and methods and everything you have per module, the harder it usually gets to maintain that code. It's usually better to break modules into smaller pieces.
07:42
So Radon helps with this, it also helps with figuring out whether you have enough inline documentation which I find very important to have in a code base. I very often get to see code written by companies that don't have a lot of inline documentation and so basically all the documentation about the code itself is either somewhere else or it's just nowhere so it doesn't exist.
08:04
So having docstrings and inline comments is always a good thing, so they get a plus in added value for that if they can show it. In this particular case it was kind of average, not really that good.
08:22
Then you have two measurements, cyclomatic complexity and the maintainability index, that take all this data and also add some extra information from the code base: they actually parse the code and try to figure out how many decision nodes you have in your execution tree. So for example an if statement would be a decision node, and the more decision nodes you have per function
08:41
then the higher the complexity, and so higher values are worse and lower values are better. It's similar with the maintainability index, except that it's the other way around. The maintainability index takes the complexity, the density, the lines of code and everything, puts it all together into one big formula and gives you an index where higher values mean better maintainability.
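As a sketch of how those two derived measurements can be computed, again with Radon (the classic maintainability-index formula combines Halstead volume, cyclomatic complexity and source lines of code into one normalised score); the file name below is again just a placeholder:

```python
# Sketch: cyclomatic complexity and maintainability index with Radon.
from radon.complexity import cc_visit
from radon.metrics import mi_visit

with open("example.py") as f:
    source = f.read()

# One complexity entry per function/method/class; higher values are worse.
for block in cc_visit(source):
    print(f"{block.name}: complexity {block.complexity}")

# One maintainability index per module (0-100 scale); higher values are better.
print("Maintainability index:", mi_visit(source, multi=True))
```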
09:05
Again you can use Radon for this. So this is what we used as part of the input for the evaluation. Then we had a look at the test coverage so we had them give us all the output of the coverage.py. We also checked for end-to-end tests which are very important.
09:22
So those are things that you usually don't cover with unit tests so you actually have someone sitting there or maybe you have Selenium sitting there and entering the stuff into your web application and you check whether the end results, let's say the report that you get out of it later on actually matches what you expect. Those are very important to have.
09:42
In this case they did have a few, not that many, so that was a bummer. Then we also checked for randomized tests, because we found in other projects that if you don't use randomized tests you often end up with test cases that are biased towards one particular area of your code. So even though you have 100% test coverage, you're not actually testing 100% of what the possible input data could look like.
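As an aside, the coverage numbers mentioned here typically come from running something like `coverage run -m pytest` followed by `coverage report`. For the randomized tests the talk doesn't name a tool; one common way to do this in Python is property-based testing with the Hypothesis library, roughly like the sketch below (parse_amount is a made-up stand-in for real project code):

```python
# Illustrative randomized (property-based) test using Hypothesis.
# `parse_amount` is a hypothetical function standing in for real project code.
from hypothesis import given, strategies as st

def parse_amount(text: str) -> float:
    # placeholder implementation for the example
    return float(text.strip())

@given(st.floats(allow_nan=False, allow_infinity=False))
def test_parse_amount_roundtrip(value):
    # Hypothesis generates inputs a hand-written test suite might never hit,
    # so 100% line coverage no longer means "tested with one kind of input".
    assert parse_amount(str(value)) == value
```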
10:08
So you think everything is correct, but it's not necessarily so. So that's what you can do in terms of code metrics by just looking at numbers. Next was this COCOMO model that we basically read up on Wikipedia.
10:24
So this is a very old model, I think it's from the 70s or 80s. It's used to give an estimate for how much time you'd have to spend to write the code. The only parameter you enter is basically lines of code.
10:41
And then you have to choose one of these models. Of course nowadays most projects are organic projects in the sense of COCOMO: small teams, an agile process, so there's not much to decide there. Then you get these very simple formulas here with a few parameters. The small a, b, c, d are the parameters.
11:00
Those are predefined by the model, so you just look them up on Wikipedia and use the ones for the particular project category that applies, organic in our case. And then you have to use this adjustment factor to account for the efficiency of different languages. And what we found is that Wikipedia recommends 0.9 to 1.4 for Java and C.
11:22
Well, the numbers that came out of this did not really match reality so we had to use a different factor. So we used 0.5 for this which kind of indicates as sort of like a side result from this whole thing that Python is in fact more efficient than Java and C.
11:40
As if we didn't know. Right. So what you then get, you get development time and then you have to look at your development team, the number of senior developers and regular developers that you want to put on that team, how much money you have to spend for them, and then you get the value and that's your estimate.
12:04
And you do the same thing for the effort model, except that you don't estimate the effort from lines of code. You sit down with the project manager of that particular product and ask the company how much time it took to write this thing, how many developers they used and what problems they had, and then you use that as the input, and you get a similar kind of value.
12:30
So next comes the magic part which is added value. So you have these numbers and then you go through this list of soft factors that you have and you add some percentage
12:43
or you remove some percentage from the value depending on what you think is good quality work or good quality design and you factor in risks, extensibility, maintainability, various costs that you see in the questionnaire that you did.
13:02
You add the risk buffer, and in the end you come up with something that you can actually use in your calculation. So you take those two models. We just took a pragmatic approach because we didn't know better: we took the value that came out of the COCOMO model, we took the value from the effort model, and just used the average.
13:20
Then we added the value factor, and in that particular case it came out to something like 75% more than what the value from the models was, which is a good sign by the way. I mean, they really did a good job there. And then you come up with a final estimate.
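Put together, the pragmatic combination described here amounts to something like the following sketch (all figures are hypothetical; whether the risk buffer is applied as a discount, as here, or handled differently is a judgment call):

```python
# Sketch of the pragmatic combination: average the two model values, apply the
# soft-factor "added value" percentage and a risk buffer. All numbers are
# hypothetical illustration values, not those from the actual project.
cocomo_value = 1_150_000    # from the COCOMO-style estimate
effort_value = 850_000      # from the reported-effort model

base_value = (cocomo_value + effort_value) / 2

added_value = 0.75          # soft factors added roughly 75% in the talk's case
risk_buffer = 0.20          # allowance for the code that could not be reviewed

final_estimate = base_value * (1 + added_value) * (1 - risk_buffer)
print(f"Final estimate: {final_estimate:,.0f}")
```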
13:42
Right, and then in that particular case they also had valuable data in that company, which is something that you don't necessarily have in startups, but if a startup has worked for a number of years then they have usually gathered some data, and you need to assign a value to that as well. Now for that we had not found any good model like the COCOMO one to use, so we could only use the effort model.
14:06
So we basically sat down with them again, tried to figure out how much time it took them, how much they had to pay for those experts to gather that data, and then we had an estimate as well and we added everything together and then we had a final value to give to the big company to then use as estimate.
14:24
Now the next question was make or buy. For doing that you have to basically try to create a new company that does exactly the same thing. I mean I'm just leaving aside all the patent and infringement and stuff. I mean big company, you know, small company, so big company can do this.
14:43
Small company usually can't. So what you have to address, you have to recreate all the products, all the data, you have to get the experts in, which is usually one of the most difficult parts, and then of course you have to work and get the same market share in order to be able to compare those two companies.
15:01
So all of this costs money, especially the marketing; that costs a lot. But that's the business side, so we did not do that. We just focused on the IT side. And so you need to see how much money you would have to put in, say as a software shop like we are,
15:21
how much effort you would have to put in to basically rebuild everything that they did. And you also have to then look at how to recreate the data, which is not necessarily something that you do as a software shop but at least you have to provide some advice on how to do this. And then you have an offer for rebuilding everything and then you put everything together, give it to them and they decide.
15:43
And in this case they decided not to buy. So for them the analysis was maybe not exactly what they wanted. But I think in the end the outcome was good for everyone.
16:00
So how can you add value to your startup? Well, basically it's just all I've said: you work on all these different factors and improve them. And it's not really that hard. You need to write good code. It needs to have a good structure. The complexity should be low, the structure should be right, so it's better to use more, smaller modules than fewer, larger ones.
16:26
You need to have everything as flexible as possible when you design the whole product. It needs to be extensible because usually the big company wants to enter new markets, which the small company has not thought about.
16:40
So you need to be flexible at that end. And then of course you need to invest in things like data structures and algorithms. For the algorithms, for example, there are lots of CS books you can use. You don't always have to use the naive algorithm for everything, which many companies do.
17:02
Plus there's one important thing on the IT side for reducing risk: not depending too much on third-party packages. You don't have that much control over them, even though they might be very high quality. If your company is not capable of maintaining such a third-party package itself, in case for example the author just goes away or does something else,
17:26
or I don't know, the project stops, then you have a huge risk there. Right, and that's all I wanted to say.
17:47
Thank you. Okay, so do we have any questions? I've got loads of questions, so I hope you have the same ones that I have in my head because that will help. There you go. I wanted to ask about this radon tool or this automatic check on the code quality.
18:05
Isn't there usually the danger that you can play a game with it, that you optimize for the tool and not for actual better code? How reliable is this radon? If somebody tells me tomorrow there will be due diligence, just do some make up to the code so that the tool gives a better value.
18:23
Is that possible? I'm quite sure it is possible, yes. But the company in that case did not know about this tool and we gave it to them and said tomorrow we need the output. So they did not really have a chance of manipulating anything. So what we did in that case is we took the output of radon and of course it gives you the outputs per module.
18:43
And then you look at the modules that, say, have a high complexity or a low maintainability, and then you just check those, check the code of those, and analyze afterwards why the Radon tool gives those readings.
19:00
So of course you have to do some code review as well as part of that. But we simply did not have time to read the whole code base. So if you're given a code base that has a few hundred thousand lines of code, then there's no way to do it that fast. More questions? Hello. So did you look at any other alternatives to the COCOMO model, which
19:30
seemed to use lines of code as perhaps the starting point for measuring things? What other ways could you measure? What other signals from the code could you use to measure the value?
19:43
The signals from code? Basically what we did is we... You were measuring lines of code, weren't you? That's how COCOMO works. Yes, that's how the COCOMO model works. And the value that we got out of COCOMO was so much higher than the value that we estimated and the value that we got out of the effort model that we simply had to use a different factor for that.
20:04
Right. Okay. It just reminded me of a musical example where the BBC used to pay arrangers by the bar. And that was just an arbitrary measure. And so the arrangers figured out very quickly that if they wrote everything in 2/4 they got twice as much money, because there were twice as many bars.
20:21
And so there's a way you could game this, because if you understand what the thing is that you're measuring... Again, it's very similar to the question we just had before. You could game these things, but you answered that, I guess, when you said that you didn't tell them that you were coming. Yeah, I mean, of course you look at the code and then you see these things, right? So if you see that someone's been adding lots and lots of dummy lines to the code just
20:45
to get more lines of code to make it look more valuable, then of course you will detect that. Yeah, so, I mean, of course. Sorry, hey. Questions? Okay, sorry. Hello. You mentioned the risk buffer in your talk; what value is appropriate in your opinion?
21:09
This was hard to say. I mean, in our case, for example, we were not able to review all the code that they had. So we just had, we did not even have a chance to look at all the components that they had in their code.
21:20
So we just looked at the main, the central kind of component that had all the interesting bits in it, but we did not look at all the other components that were stuck on the side that did some UI stuff. So we knew that we were only covering, say, maybe 20% of what they actually wrote in their code, and based on those 20%, we kind of had to interpolate the rest, the quality of the rest.
21:46
And so that's what we used for the risk buffer. So the risk buffer was fairly sizable in that case. So, at what number did you? I can't tell you any number. More questions, yep.
22:07
Did you continue tracking the company to see whether your valuation was correct, or did you have any other way of knowing whether your valuation was correct or close to correct? Well, we know that the big company did not buy the small company, and that they're thinking about
22:20
actually doing it themselves, so on the make-or-buy question they're actually more on the make side. But they're still discussing that. Big companies take longer with these things. What was the effort of the review? Did you spend weeks with tens of people, or days?
22:40
No, no, no. We didn't have much time. We had something like two or three weeks, like I just said. And so we had one full day meeting with them, asking all these questions. Also doing a part of the review, because of course they did not give us the source code, so we had to sit there and just look at it.
23:04
And so we did not have that much information from them, so we had to base everything on that kind of input. More questions. It's not ideal, I have to say, but we simply, I mean, we were told that we had to give the big company an answer within those three weeks, and that was all we had.
23:26
So we needed to come up with something that made sense, and so basically that's what we did. As I said in the beginning, this is not necessarily, this is not how you should do it, right? This is just how we did it, and the numbers that came out, they did make sense.
23:45
Hi. So first, thanks for your talk. From what I understand, you basically evaluated a company based on their repository, which is, I would say, interesting. But I would also argue that perhaps the biggest weight in the company evaluation is the developer
24:04
team or the company structure, their processes that were created out of various needs and so on. So I would be interested in how did you approach to measure these kinds of, let's say, more soft things.
24:21
Well, we did not have a look at the business side of things, because we had a different part of the project doing that. So the big company had experts for doing these things, so analyzing the numbers and analyzing the team. All we could do is we could tell them whether we have the impression they have good developers or not, or good software designers, because that's our expertise.
24:44
And so we were not experts on business processes, so we cannot really put a value to that. So what we did do is we told the business side what we think about their team skills, and we told them about the risks that we found in the code base and in the structure of their systems.
25:03
But that was basically it about the business side of things. So it seems like the very common thread in both talk and your answers is you were under a lot of pressure and you made certain decisions just basically because of that.
25:20
Now, let's say you remove all the pressure, you have as long as you want, like the big company comes in and tells you, take as long as you want, have as many people as you want, do whatever you want with this. What would you have done differently? What would you have maybe not done or maybe done more thoroughly from the process? Well, I think we would have done more research on this whole valuation approach.
25:47
I mean, we basically just had this COCOMO model, which just came up on Wikipedia, and we used that. There were also some people in the company who wanted to use that model, so they obviously found it and found it useful, so it kind of made sense to use it.
26:05
We would have put more energy into that. We would have had more interviews with the team members. We would have done more code reviews. We would have had access to all the systems, all the components, so not just looking at a part of it.
26:22
I mean, basically talking to people is very valuable. We did not have time for that. We just talked to the chief developers, not the regular developers that they had. We also did not really have much look into the development structure and how they work, for example.
26:42
That is something that we completely left out. For example, we would look at how they do this agile process, how that works out for them, whether it's sufficient for them or not, this kind of thing. Because when buying the company, of course, you're not just buying the product. You're also buying the people. Of course, people are usually very valuable, but sometimes you need to restructure things
27:05
to make them more efficient because you're not necessarily buying the most efficient process there. Development process, I mean, not business process. Okay, I can't resist asking a question myself. You said you split it very clearly between the business decisions for the business and yours were just the technical ones.
27:21
But it occurs to me that some of the value you get from software isn't down to how complex it is or how much effort you put into making it, but rather the intellectual property, or finding the correct solution. It took 20 years for someone to eventually come up with Timsort as the heuristic algorithm that's really good for sorting lists.
27:41
But actually, it's only eight lines of code, and if you wanted to reimplement it and knew which one you were doing, it wouldn't take long. So did you have any way of evaluating or measuring that these guys had come up with a really good solution, and that coming up with it again from scratch, going through all the wrong solutions, would take ages or cost lots of money? Yes, yes. In that interview session that we did, we had them basically explain to us how they do this, what the solution looks like.
28:07
And then we found that they had a kind of clever way of doing things, not necessarily the most clever way of doing things. So there was some basis for improvement there and you can see where to improve things.
28:28
So that was something that we found was positive so that you can actually make it work better and scale better. And we've taken that into those added values as percentage. We found that they had reasonably good algorithms for everything.
28:45
But actually putting a value to what you're saying is more or less putting a value to say something like a patent or something like a mechanism that you come up with an idea and that we did not do. Any more questions? I know there's one at the front from someone who's already asked a question so we have time for one more.
29:02
If someone hasn't spoken up yet, they can have a burning question answered; otherwise it's a familiar-sounding voice, but we're very glad to have it. I'm not sure if I maybe missed it, but is eGenix even allowed to make an offer to the big company, given that you gained a lot of inside knowledge?
29:26
Isn't that cheating if you redo the project now after extracting all the information from them? Well, if big company asks us, of course we can. I mean, like I just said, this big company and small company. So yes, of course you do have these issues that you cannot take away the intellectual property of the small company and just redo everything.
29:48
But that's not really our decision, right? If they want it like that and they ask us to do it maybe a little differently so that you don't have these issues, of course big company has lots of lawyers and legal departments and everything. So they can make it work, which is not necessarily nice, but I mean, big companies are not necessarily nice, right?
30:09
Good. Well, that takes us to quarter to, and time for our last break. The next thing on the schedule is the lightning talks at five o'clock. Thank you very much, Marc.