Point Segmentation - Part II
Formal Metadata
Series: Image analysis (Part 8 of 21)
License: CC Attribution 3.0 Germany. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/59687
Transcript: English(auto-generated)
00:10
Well, good afternoon, welcome to the afternoon session on image analysis.
00:23
Let us continue with the topic of segmentation. In segmentation we extract geometrical primitives: points, edges and regions. We started with points, we defined what good points are,
00:42
and then we discussed a couple of methods. The first one, or the first really interesting one, so to speak, was based on an analysis of curvature. We interpret the gray values as a description of a 3D surface, we analyze the curvature and we search for windows
01:03
where, in the local image content, we have strong curvatures in different directions. This means that the principal curvatures, the minimum and the maximum curvature, should both be relatively large, and one metric that captures this is the Gaussian curvature: if one of them is small,
01:22
the Gaussian curvature will be small as well. That would be the case for a surface like this one: we have a high curvature in this direction, but a minimum curvature of zero in that direction, and zero times whatever the maximum is, is zero,
01:40
so we would have a small Gaussian curvature. The Gaussian curvature is related to the Hessian matrix, the matrix containing the second derivatives of the gray value function with respect to x and y. In the Gaussian curvature we have a normalization term in the denominator, which is essentially the square of the magnitude
02:02
of the normal vector of the surface. If we neglect this normalization and just use the determinant of the Hessian, we end up with the simplified criterion we discussed. Then there was a question about how different the two principal curvatures can be.
02:35
These two curvatures are determined via the eigenvalues of this matrix,
02:41
and they will in general be different. The principal curvatures are the maximum and the minimum curvature; they are not identical. There can only be one maximum curvature
03:04
and one minimum curvature, but the minimum might still be large. It only means that the minimum is a bit smaller than the maximum; if it is a lot smaller, then it is not a point, okay?
03:21
If the minimum curvature is small, we have something like this, where you have a strong curvature in one direction and hardly any curvature in the other direction, okay? These curvatures also have directions associated with them; I didn't mention this before.
03:44
Exactly, and if you look at what this is: we have something bright up here and something darker down there, so if you interpret the gray values as a surface,
04:00
a corner in the image gives you a gray value surface that looks something like this. There is always a bit of noise in the images, which explains the roughness of this picture. This is pretty much what a gray value corner looks like.
04:23
The related operator based on the determinant of the Hessian is essentially the same criterion, but we do not divide by the normalization term; it is set to one. So it is an approximation that, in effect,
04:42
neglects the slope of the gray value surface.
05:01
Then we discussed a second class of point detectors, based on the autocorrelation matrix. You take a window around every pixel, and then you ask yourself: how does the image content change if we shift the image window
05:21
by a very small vector (Δx, Δy)? You compute the sum of the squared gray value differences between the original and the shifted window, and then, after applying a Taylor series expansion,
05:47
you realize that this sum of squared differences can be expressed as a quadratic form. In the center of this quadratic form sits a matrix, and this matrix is called
06:01
the autocorrelation matrix. It contains the smoothed squares and mixed products of the gradients, that is, of the first derivatives of the gray value function. On the main diagonal we have the squares
06:20
of the gradients in x and in y, and on the off-diagonal we have the smoothed mixed products of the gradients. We take a weighted sum over all of these squares and mixed products inside a window of predefined size.
06:43
The weight function is very often a Gaussian, but it can also be a simple box filter. So these are the entries of the autocorrelation matrix, okay?
07:04
Note that we are doing a Taylor series expansion here, which means we assume the shift to be small, so that the linearization is a good approximation. Another thing to note:
07:21
do not confuse this matrix with the Hessian; the autocorrelation matrix contains squares and products of the first derivatives, not the second derivatives of the gray value function.
07:49
So this matrix tells us something about the local image content, and this is where we start today. It is also called the structure tensor
08:02
or structural tensor. The question now is: how can we use this matrix to extract points? The first method I'll discuss is the Harris operator, which is very widely applied. Starting from the squares and mixed products of the first derivatives,
08:23
we work with this matrix, the autocorrelation matrix or structure tensor, and we look at the eigenvalues of this matrix. Why? Well, using the eigenvalues makes the whole analysis rotation invariant.
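As a rough illustration of the quantities just introduced, here is a minimal sketch, not the lecturer's code, of how the smoothed squares and mixed products of the gradients can be computed per pixel; the function name, the Sobel gradients and the Gaussian weight window are assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(img, sigma_w=2.0):
    """Per-pixel entries of the autocorrelation matrix (structure tensor).

    Returns the smoothed squares and mixed product of the gradients:
    Nxx = w * gx^2, Nyy = w * gy^2, Nxy = w * gx*gy, where w denotes
    convolution with a Gaussian weight window of width sigma_w.
    """
    img = img.astype(np.float64)
    gx = sobel(img, axis=1)  # first derivative in x
    gy = sobel(img, axis=0)  # first derivative in y
    # weighted sums of squares and mixed products over the window
    Nxx = gaussian_filter(gx * gx, sigma_w)
    Nyy = gaussian_filter(gy * gy, sigma_w)
    Nxy = gaussian_filter(gx * gy, sigma_w)
    return Nxx, Nyy, Nxy
```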
08:44
What happens if we have an image window that looks like this, where more or less every pixel has almost the same gray value?
09:00
It is a homogeneous region. What happens to the gradients? They go to zero. What happens to the squares of the gradients? They go to zero. What happens to their mixed products? They go to zero. So what happens to the eigenvalues of this matrix? They go to zero.
09:23
So in such a window this matrix has two small eigenvalues. What if our window looks like this, containing an edge?
09:40
Well, then we have strong gradients, but these strong gradients are all parallel to each other, they are linearly dependent. This means that, depending on the direction of these strong gradients,
10:05
we may have large entries in the matrix, except if the edge happens to be aligned with one of the coordinate axes. So we can have large values here, but there is a constant ratio of the gradient components
10:22
in x and y, and as a consequence this matrix will have one eigenvalue that is very close to zero. We will have one strong eigenvalue, whose eigenvector points in the direction perpendicular to the edge,
10:43
and we will have one very small eigenvalue, whose eigenvector points in the direction of the edge. One large and one small eigenvalue: it is an edge, not a point. What happens in this case, a corner?
11:07
Well, then we will of course have two strong eigenvalues, two large eigenvalues. And the eigenvalues can never be negative,
11:24
because the matrix contains squares and mixed products of real gradients: the matrix is positive semi-definite. The eigenvalues
11:44
can be zero, but we cannot have any negative eigenvalues. So now we want to find points: we want a window where both eigenvalues are large.
12:05
Okay, so one could just compute this matrix for every pixel, compute the eigenvalues, and apply a threshold to the smaller eigenvalue. The problem, at the time this was proposed, was of course that it was too slow.
12:24
Because computing the eigenvalues means you have to take a square root, and computing a square root for every pixel of the image was then considered prohibitively expensive; it was simply too slow. So what did the authors of this paper,
12:42
Harris and Stephens, come up with? They came up with what they call the cornerness criterion R. It involves the two eigenvalues, but has the advantage that we never have to compute them explicitly: we need the product of the eigenvalues, but the product of the eigenvalues is identical to the determinant of the matrix.
13:03
So we just need to compute the determinant to obtain the product. And we need the sum of the eigenvalues, but the sum of the eigenvalues is identical to the trace of the matrix, so again we do not need the eigenvalues themselves. So the criterion is just the determinant minus
13:23
kappa times the square of the trace of the matrix: R = det(N) − κ · trace(N)². We compute this value for every pixel, just as we compute the matrix N for every pixel, so we get an R for every pixel;
13:42
R is large when both eigenvalues are large, and then we can just apply a threshold and say: every pixel where R is above the threshold is a point candidate. How do we select kappa? In principle it is a free parameter; the authors
14:02
recommend a value of about 0.04. It is a heuristic, but a heuristic that works rather well in practice,
14:21
and it is based on an interesting mathematical deduction: think about what happens to the eigenvalues of this matrix in the three cases we just discussed, and how these eigenvalues can be used to differentiate the cases.
14:44
Okay? Now we threshold R. And then, of course, let's assume you have a corner like this, and say the window size is seven by seven. What happens if you shift the window by one pixel? You still have a good corner inside.
15:02
Okay? And this means that we need a procedure called non-maximum suppression. You have to look for relative maxima of the cornerness R within a certain window, which is yet another parameter of the detector,
15:20
namely how large the area for the non-maximum suppression is. So this is the recipe for detecting points. The detector is what we call the Harris detector; it is somewhat heuristic, but it is still used every now and then.
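Tying this to the sketch above, here is a hedged example of the cornerness computed from the structure tensor entries; κ = 0.04 follows the recommendation mentioned in the lecture, while the function names and the relative threshold are purely illustrative.

```python
def harris_response(Nxx, Nyy, Nxy, kappa=0.04):
    """Cornerness R = det(N) - kappa * trace(N)^2, per pixel.

    det(N) equals the product of the eigenvalues and trace(N) their sum,
    so no explicit eigenvalue decomposition is needed.
    """
    det = Nxx * Nyy - Nxy * Nxy
    trace = Nxx + Nyy
    return det - kappa * trace**2

# Usage with the hypothetical structure_tensor() sketched earlier:
# Nxx, Nyy, Nxy = structure_tensor(img, sigma_w=2.0)
# R = harris_response(Nxx, Nyy, Nxy)
# candidates = R > 0.01 * R.max()  # illustrative, contrast-dependent threshold
```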
15:40
Any questions? Yes: N is a matrix,
16:01
and it is computed for every pixel. So for every pixel you compute such a matrix, and when you compute this matrix for a pixel, you take a certain neighbourhood into account, right? That is where the window comes into play: we take the weighted mean
16:21
of the squares and mixed products of the first derivatives, computed from the gradients of all pixels inside the window. Then we shift the window by one pixel and do the same thing. So for every pixel we get one matrix N. When you implement it, this is represented as three images, by the way:
16:42
one image for the smoothed gx², one for gy² and one for the mixed product, because N is symmetric. From these three images we get the matrix N and the cornerness R for every pixel, and then we apply non-maximum suppression to R, that's all.
17:01
Okay, one more question: is the window for the non-maximum suppression the same as the one used for N? No, this is another window. It is not the size of the window used for computing N, but the window used in the non-maximum suppression.
17:22
It would be of a similar order of magnitude as the window we used for computing N, but it can be different. You compare the R value to every other R value inside of that window, and if this one is the largest, then you keep this point.
17:42
You keep this window as containing one point and you throw away all the others; that is what non-maximum suppression does. One more question: are values like five and seven recommended window sizes, or just examples? They are just examples.
18:01
They are typical for some applications, but you can always do it otherwise. What you can also do is first compute all of the R values, build a histogram of the R values, and then keep a fixed number of the strongest responses; this is what is done in OpenCV.
18:26
OpenCV is an open-source library. So you do not fix the threshold for R, but you say how many relative maxima you want: first you compute all of the R values, and then you derive the threshold
18:41
from the desired number of points. Another question: if you have two large eigenvalues, how large can they actually be?
19:00
Well, the sum of the eigenvalues is identical to the trace of the matrix, and both eigenvalues are non-negative.
19:26
So if one eigenvalue is zero, then the other eigenvalue equals the trace, the sum of the diagonal elements. If both are of the same size, then each of them is identical to the average of these two diagonal elements.
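A small worked example of this bound (numbers chosen purely for illustration): for a diagonal matrix with entries 8 and 2 the eigenvalues are 8 and 2, and their sum equals the trace, 10; for a matrix with all four entries equal to 5, one eigenvalue is the trace, 10, and the other is 0; for a diagonal matrix with entries 5 and 5, both eigenvalues are 5, the average of the two diagonal elements.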
19:41
Let me show you an example.
20:02
Here the Harris operator has been applied to an image: you can see the detected points and the parameters that were used, such as kappa, the threshold and the window sizes.
20:22
Some of the detected points are not obvious corners when you look at the image; that is something you simply have to accept from this kind of detector.
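The non-maximum suppression and the data-driven selection of the strongest responses described above could look roughly like the following sketch; the maximum-filter trick and the quantile value are assumptions for illustration, not the implementation used in the lecture.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_points(R, nms_size=7, keep_quantile=0.999):
    """Keep pixels that are local maxima of R and among the strongest responses.

    nms_size is the side length of the non-maximum-suppression window; the
    threshold is derived from the data (a quantile of R) rather than fixed
    by hand, so it adapts to the image contrast.
    """
    local_max = (R == maximum_filter(R, size=nms_size))
    threshold = np.quantile(R, keep_quantile)
    ys, xs = np.nonzero(local_max & (R > threshold))
    return list(zip(xs, ys))
```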
20:40
One characteristic to note is that scale plays a role here. The scale enters through the selection of the sizes of the two smoothing operations involved. It is quite common to connect the Harris operator,
21:05
or it has been done many times, that people connect this Harris operator with the idea of scale space. How can this be done? By choosing an appropriate derivative operator. Here people use derivative-of-Gaussian filters
21:22
for computing the derivatives, with a so-called differentiation scale. So when you compute the gradients, you smooth the image and compute the first derivatives at the same time with a derivative-of-Gaussian filter,
21:40
and there we have to select the degree of smoothing, the width of the Gaussian; the width that is used here is called the differentiation scale. After having computed all of these derivatives in x and y, we have to smooth the squares and mixed products. Here again we typically apply a Gaussian filter today,
22:05
because it is optimal in some respects. And if you use a Gaussian filter, then it has yet another scale, and this scale is different from the first one; it is called the integration scale.
22:20
So we first compute the smoothed first derivatives using the differentiation scale, and then we smooth the squares and mixed products of these first derivatives using the integration scale. It means we first compute the squares and mixed products of our first derivatives
22:45
and obtain the barred, smoothed entries by convolving them with a Gaussian of a certain width. Then, of course, the effective window size
23:01
which is used to compute the elements of the autocorrelation matrix depends on the size of this Gaussian filter, and thus on the integration scale. Now, in principle we have two free scale parameters here,
23:21
but it does not make sense to make them too different from each other, so it is common to relate them by a constant factor: the differentiation scale is typically set to something like 0.7, roughly three quarters, of the integration scale. So the scale we use for differentiation
23:42
and the scale we use for integration are typically related and coupled. All right, so in this way we do not choose the two scales independently:
24:00
we fix the factor, choose one of the scales and derive the other one from it, which also means we can be pretty sure to get a meaningful relation between these two scales. And then we get a matrix M whose averaged entries
24:22
are representative for a specific scale, and we can compute it at whatever scale we choose, or at the scale suggested by the image information. Right? Any questions?
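A hedged sketch of the scale-adapted variant just described: derivative-of-Gaussian gradients at a differentiation scale, smoothing of the squares and mixed products at an integration scale, with the two scales coupled by a constant factor; the 0.7 used here is an assumption in line with the "roughly three quarters" mentioned above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_adapted_tensor(img, sigma_i=2.0, ratio=0.7):
    """Structure tensor at integration scale sigma_i.

    The differentiation scale is coupled to the integration scale,
    sigma_d = ratio * sigma_i, so only one scale has to be chosen.
    """
    img = img.astype(np.float64)
    sigma_d = ratio * sigma_i
    # derivative of Gaussian: smooth and differentiate in one step
    gx = gaussian_filter(img, sigma_d, order=(0, 1))
    gy = gaussian_filter(img, sigma_d, order=(1, 0))
    # smooth squares and mixed products at the integration scale
    Nxx = gaussian_filter(gx * gx, sigma_i)
    Nyy = gaussian_filter(gy * gy, sigma_i)
    Nxy = gaussian_filter(gx * gy, sigma_i)
    return Nxx, Nyy, Nxy
```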
24:45
The second operator I want to present to you that is based on the autocorrelation matrix is the Förstner operator; the autocorrelation matrix itself is just a matrix, not an operator. It is also based on an analysis
25:01
of this autocorrelation matrix. Here I use the same definition of M as for the scale-adapted Harris operator, but it does not have to be that way: we could just as well use a simple derivative operator and plain averages of the squared components and mixed products.
25:22
But here, let us consider the scale-adapted version. So we smooth the squares and the mixed products with a Gaussian filter: we first compute the gradients with derivative-of-Gaussian filters at the differentiation scale sigma_D,
25:40
and then we smooth the squares and mixed products with a Gaussian filter at the integration scale sigma_I, and this gives us our autocorrelation matrix at every position, at a specific scale. Note again that these are squares of first derivatives,
26:03
and the square of a first derivative is not a second derivative. It may be confusing, but this matrix is not the Hessian; it is not built from the second derivatives of the gray values,
26:22
it is built from the first derivatives. I cannot say that the second derivatives are the squares of the first derivatives; that would make no sense. Okay? No questions.
26:41
The Förstner operator has the same foundation, but it has a statistical interpretation and thus is not as heuristic as the Harris operator. Harris is based on this heuristic cornerness criterion that we have to tune somehow
27:02
via kappa. Here we get a statistical interpretation: it can be shown that the matrix M can be seen as the normal equation matrix of least-squares matching. In least-squares matching, as you may know, for an image window
27:20
you extract a window from one image and the corresponding window from another image, and you determine the transformation, here a shift, between the one and the other such that you minimize the square sum of gray value differences between corresponding pixels. That is called least-squares matching,
27:42
as I mentioned, and it leads to a normal equation matrix, or rather to an equation system that looks like this: some matrix times
28:01
our geometric shifts equals a right-hand side that depends on the gray value differences at every position. The interesting thing is that M is the system matrix of such a linear equation system, the normal equations in the sense of least-squares adjustment.
28:25
Now, geodesists are very familiar with this concept; for non-geodesists it may be less familiar. If you have a system like this, then the errors of the estimated vector can be analyzed via the inverse of this matrix.
28:42
The inverse of this matrix is proportional to the covariance matrix of the estimated shifts; that is something we do all the time in least-squares adjustment. And now, if we have this inverse, we can use it to find the maximum and minimum error,
29:01
and we can also find the directions of maximum and minimum error of our shifts. This is expressed in terms of an error ellipse. For geodesists this is their daily bread, so to speak; they have this every day. Perhaps for you this is unfamiliar. So what is an error ellipse?
29:21
An error ellipse gives you an idea about the uncertainty of a point. It can be derived from the covariance matrix of the point's coordinates: if the coordinates are estimated in an estimation process, the covariance matrix is proportional to the inverse of the normal equation matrix.
29:43
So the error ellipse has two main axes. The directions of the axes are given by the eigenvectors of the covariance matrix, and the length of each semi-axis is identical to the square root
30:01
of the corresponding eigenvalue: the semi-major axis corresponds to the larger eigenvalue and the semi-minor axis to the smaller eigenvalue. And of course the eigenvectors
30:21
give the directions of these axes of the error ellipse. Now, what happens in a homogeneous region? In a homogeneous region the coordinates of the point cannot be determined well: we get a very large
30:41
error ellipse. Here, along an edge, we get a very elongated, elliptical error ellipse, with one very large and one small semi-axis. For a good point, the error ellipse
31:01
should be small and circular. And this is actually the rationale behind the Förstner operator. What about the eigenvalues here? Note that it is the eigenvalues of the inverse that have to be small;
31:21
the eigenvalues of the inverse are one over the eigenvalues of M, which may sound familiar. Well, that is because it is a very similar concept as before, only that here we go via the inverse: we search for windows
31:41
corresponding to an autocorrelation matrix whose inverse has two small eigenvalues. So what is new compared to the Harris operator is this
32:00
geometrical interpretation: it is related to the uncertainty with which a point can be located, and we want this uncertainty to be small. So we focus on the inverse of the matrix, which describes this uncertainty,
32:22
and then select windows where this uncertainty is small: the two eigenvalues of the inverse should be small, and they should be similar to each other. That is the concept of the Förstner operator. We do not want this (a homogeneous region), we do not want this (an edge); we do want this last case, a point.
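To make the error ellipse concrete, here is a minimal sketch that derives the semi-axis lengths and directions from the covariance matrix of one window, taken as proportional to the inverse of M; the numbers in the usage comment are illustrative only.

```python
import numpy as np

def error_ellipse(M):
    """Semi-axes of the error ellipse for one window.

    The covariance matrix of the estimated shift is proportional to the
    inverse of M; the semi-axis lengths are the square roots of its
    eigenvalues and the axis directions are its eigenvectors.
    """
    cov = np.linalg.inv(M)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    semi_axes = np.sqrt(eigvals)            # minor and major semi-axis
    return semi_axes, eigvecs               # columns of eigvecs = axis directions

# Example: an edge-like window gives a strongly elongated ellipse
# M = np.array([[100.0, 0.0], [0.0, 1.0]])
# axes, dirs = error_ellipse(M)             # axes ~ [0.1, 1.0]
```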
32:40
At a point we expect the error ellipse to be small and circular. So how does it work? The eigenvalues of the covariance matrix are the squares of the semi-axes of the error ellipse. So we want the two eigenvalues of the covariance matrix, i.e. of the inverse of M,
33:02
to be small; equivalently, we want the eigenvalues of M to be large and the eigenvalues of the inverse to be small. This is really the same requirement, just expressed differently. Well, the first criterion
33:20
is that the size of the error ellipse has to be small, and this is the case if the sum of the eigenvalues of the inverse is small; that sum is the trace of the inverse of our matrix. So we define the point weight w, which is one over the trace of the inverse of this matrix, and it should be above a certain threshold.
33:41
The trace of the inverse of the matrix can be written as the trace of the matrix divided by its determinant, so the point weight is w = det(M)/trace(M), and again we have a criterion for which we do not need to compute the eigenvalues explicitly. This was also very important for Förstner,
34:01
because computing the eigenvalues for every pixel was simply too slow at the time; we just need the determinant and the trace. But the good thing is that now we have this geometrical interpretation, which is very useful.
34:20
So now we have this point weight, and it should be large; again, similar to the Harris criterion. But this is only the first part: the area of the error ellipse has to be small. However, an ellipse
34:40
can be very elongated and still have a very small area, yet it would not correspond to a good point. So the error ellipse should also be circular, and this means that essentially the ratio of the two eigenvalues
35:02
should be close to one, because then the ratio of the semi-axes of the ellipse is also close to one.
35:32
For this we use the second criterion, called the isotropy of the texture, q, which again can be written in a form that avoids computing the eigenvalues explicitly:
35:42
q = 1 − ((λ₁ − λ₂)/(λ₁ + λ₂))², which equals 4·det(M)/trace(M)². What do we have here? If the two eigenvalues are identical, the error ellipse is circular, the difference in the numerator is zero, so we have one minus zero,
36:02
so q is one. What if the difference between the two eigenvalues is large? Then the ratio of the difference to the sum approaches one, and q becomes small.
36:22
In the extreme case, when one of them is almost zero, we would have lambda one minus zero divided by lambda one plus zero, so the ratio is one; the square of one is one,
36:40
and one minus one is zero. So if the error ellipse is very elongated, q is close to zero; if the error ellipse is circular, q is one. So q lies between zero and one,
37:07
and we require it to be larger than a certain threshold. We first check whether the point weight w is above its threshold;
37:20
if this is the case, we check whether q is close to one, and if this is also the case, we keep the window as containing a point. We have then identified a window containing a point.
37:46
One detail: we may have a window where the sum of the eigenvalues, the trace, is zero,
38:02
namely in a perfectly homogeneous window. Then the computation of q involves a division by zero, which the computer program has to catch. If the trace is zero we cannot compute q, but then the window does not contain a point anyway, so we can simply reject it.
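A minimal sketch of the two Förstner criteria as just described, computed directly from the structure tensor entries: the point weight from the trace of the inverse and the isotropy q. The zero-trace guard mirrors the division-by-zero case mentioned above; the variable names and the thresholds in the usage comment are illustrative.

```python
import numpy as np

def foerstner_measures(Nxx, Nyy, Nxy, eps=1e-12):
    """Point weight w = det(M)/trace(M) (= 1/trace(M^-1))
    and isotropy q = 4*det(M)/trace(M)^2, per pixel."""
    det = Nxx * Nyy - Nxy * Nxy
    trace = Nxx + Nyy
    w = np.where(trace > eps, det / np.maximum(trace, eps), 0.0)          # ellipse size
    q = np.where(trace > eps, 4.0 * det / np.maximum(trace, eps)**2, 0.0) # roundness in [0, 1]
    return w, q

# Usage (hypothetical thresholds; see the discussion of threshold choice below):
# w, q = foerstner_measures(Nxx, Nyy, Nxy)
# candidates = (w > np.quantile(w, 0.995)) & (q > 0.75)
```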
38:31
The next thing we have to ask ourselves is: how do we choose the thresholds? Well, first we need
38:41
a threshold for the point weight w. We can say, okay, if w at a pixel is above the threshold, then we accept the pixel at the center of the window as a candidate, and then we also check
39:01
the isotropy q; otherwise it is not a candidate. But how do you choose this threshold? That is actually not so straightforward, because it is very difficult to select such a threshold off the top of your head.
39:20
Why would you choose a hundred? Why would you choose ten? Why any other value? No idea. Okay, you can fiddle around, but then along comes another image which has a different average contrast, and your threshold no longer fits. It is always a good idea to make such a threshold depend on
39:40
the image contrast, and one way is to first compute all of the w values and then say: take the 20% largest values of w and determine where the cut lies, i.e. what is the smallest w such that
40:00
20% of all values are larger than it. Or we first apply non-maximum suppression, which we have to do anyway, and then say, okay, keep for instance the 10,000 largest values of w.
40:21
Either way, the threshold adapts to the image content. It is a bit easier with q, because q is normalized while w is not:
40:40
w can lie anywhere between zero and infinity, and that makes selecting a threshold difficult, whereas q always lies between zero and one, so it is better, or easier,
41:02
to select a threshold here. You can for instance specify a ratio of the lengths of the two semi-axes of the error ellipse and derive a threshold for q from it; 0.75 or 0.5, for instance,
41:20
are typical values. Right? But we are not done yet, because after applying these thresholds to the w and q values of every pixel we have the same situation as earlier: if we shift the window slightly,
41:41
we will still have large w and q values, so we will actually get a cluster of point pixels for one such corner, and also for small blobs; the Förstner operator detects both corners and blobs. So to find the optimum
42:03
among these w and q values we have the same thing as before: we have to apply non-maximum suppression again. And even then we are not yet done. If we have such a nice corner, one can analyse
42:21
which window is the optimal one: the one where the error ellipse is smallest, because it contains the largest number of strong gradients, so we have the strongest texture
42:40
in both directions. But where is the center of that window? The center of that window is not on the corner itself, yet the corner is the point we want to have. Can we get the corner once we have identified
43:01
the window? The window alone does not identify the exact point, so what we actually do is: after we have identified the optimum windows, we search for the optimum point inside each window.
43:21
There are two models, one for a corner and one for a blob; we actually compute both and then select the one that fits better. So how does it work? We have a small window, and the corner model says: okay, inside of this window we have a corner point.
43:41
What do we do? For every pixel inside the window, for example this one, we define a straight line: the gradient at that pixel is the normal vector of that straight line, and the straight line is supposed to pass
44:01
through the corner. So we get one such straight line for every pixel of our window, and we require that all of these straight lines intersect in a single point. Now, of course, at some pixels we do not have a strong gradient,
44:21
and there the straight line may have an arbitrary direction, so we need to include a weighting function, and the weight is the length of the gradient, so that strong gradients have a strong effect on the result. If we do that, the normal equation matrix
44:40
for computing the two coordinates of the corner point turns out to be almost the autocorrelation matrix again; it is not exactly the same, because we did not use the window weight function to compute the elements,
45:01
and on the right-hand side there is something that depends on the positions of the pixels with strong gradients. We get such an equation system for every window that was identified as containing a point.
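A minimal sketch of the corner model just described, not the lecturer's implementation: for one detected window, each pixel contributes a line whose normal is its gradient, and the lines are intersected in a least-squares sense. The flat input arrays and the absence of an extra weight (beyond the gradient, which enters quadratically) are assumptions.

```python
import numpy as np

def localize_corner(gx, gy, xs, ys):
    """Sub-pixel corner as the least-squares intersection of lines whose
    normal at pixel (xs[i], ys[i]) is the gradient (gx[i], gy[i])."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for gxi, gyi, xi, yi in zip(gx, gy, xs, ys):
        n = np.array([gxi, gyi])             # line normal = gradient
        A += np.outer(n, n)                  # normal equation matrix
        b += np.outer(n, n) @ np.array([xi, yi])
    return np.linalg.solve(A, b)             # estimated corner (x0, y0)
```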
45:21
The second model, the blob model, says: okay, we have a blob. Again, for every pixel inside of the window we define a straight line, but this time the direction of the straight line is identical to the gradient direction. Before, the gradient was the normal of the straight line; here it is the direction
45:41
of the straight line. So if we have a gradient here in this direction, the straight line runs along it, and this is what the corresponding normal equations look like. Interestingly, the matrix is not exactly the autocorrelation matrix,
46:02
but we can derive it easily from the elements of the autocorrelation matrix, and then we can again compute the coordinates. What happens if we apply this model to the wrong kind of window? Well, then we try to force all of these straight lines to intersect at the same point,
46:22
but they cannot all intersect at a single point: the lines defined along one edge run in one direction, the lines along the other edge run in another,
46:42
so they only intersect each other somewhere in this area, and we will get an estimated point somewhere here, but with large residuals. If we look at the residuals of the two models, we can see that for the correct model
47:01
they are much smaller, whereas applying the wrong model to a window gives very large residuals. So the selection of the model can be based on the residuals, more precisely on the root mean square of the residuals,
47:20
the square root of the square sum of the residuals. We obtain two such values, one based on the corner model and one based on the blob model,
47:42
and, possibly supported by a statistical test, if one of the models fits significantly better than the other one, we choose that model. If neither model really fits,
48:00
then we say the window has undefined but high texture, and we cannot assign a precise model-based position. If one of the models does fit, the estimated position is very accurate,
48:20
because the least-squares estimation gives a sub-pixel position. Any questions?
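As a closing illustration, here is a hedged sketch of the blob model just contrasted with the corner model: each line now runs along the gradient, so its normal is the gradient rotated by 90 degrees, and the resulting matrix can be assembled from the same squared and mixed gradient terms. This is a derivation from the description above, not code shown in the lecture.

```python
import numpy as np

def localize_blob(gx, gy, xs, ys):
    """Sub-pixel blob centre: each pixel contributes a line ALONG its
    gradient, so the line normal is the gradient rotated by 90 degrees."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for gxi, gyi, xi, yi in zip(gx, gy, xs, ys):
        n = np.array([-gyi, gxi])            # normal of a line with direction (gxi, gyi)
        A += np.outer(n, n)
        b += np.outer(n, n) @ np.array([xi, yi])
    return np.linalg.solve(A, b)             # estimated blob centre (x0, y0)

# Model selection (as described above): fit both models, compare the
# root-mean-square residuals, and keep the model that fits clearly better.
```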