7. Linear Algebra: Vector Spaces and Operators (continued)
Formal Metadata
Number of Parts: 25
License: CC Attribution - NonCommercial - ShareAlike 4.0 International
Identifier: 10.5446/42649 (DOI)
Transcript: English (auto-generated)
00:00
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
00:24
OK, so let's get started. I just wanted to make one announcement before we start the lecture. So Professor Zwiebach is away again today, which is why I'm lecturing. And his office hours, he's obviously not going to have.
00:40
But Professor Harrow has kindly agreed to take them over. So today, I'll have office hours four to five. And then Professor Harrow will have office hours afterwards, five to six. So feel free to come and talk to us. So today, we're going to try and cover a few things. So we're going to spend a little bit of time talking
01:00
about eigenvalues and eigenvectors, finishing the discussion from last time. Then we'll talk about inner products and inner product spaces.
01:21
And then we'll introduce Dirac's notation, some of which we've already been using. And then, depending on time, we'll also talk a little bit more about linear operators.
01:54
So let's start with where we were last time. So we were talking about T-invariant subspaces.
02:03
So we had that U is a T-invariant subspace if the following is satisfied.
02:22
If T of U, which is the set of all vectors generated by acting with T on vectors that live in U, is contained inside U itself.
02:43
Right, so OK. And we can define this in general for any U. However, one class of these invariant subspaces
03:02
are very useful. So if we take U to be one-dimensional, that really means that I can write U
03:20
as U = {a u : a in F}, where F is whatever field I'm defining my vector space over; every element of this subspace U is just some scalar multiple of a single vector u. So this is a one-dimensional thing.
03:43
Now, if this one-dimensional subspace is going to be a T-invariant object, then we get a very simple equation that you have seen before.
04:05
So we're taking all vectors in U, acting on them with T. And if the result stays within U, then it has to be able to be written as T u = lambda u. So we have some operator acting on our vector space, producing something in the same vector space,
04:21
just rescaling it for some lambda, which we haven't specified. And you've seen this equation before in terms of matrices and vectors. This is an eigenvalue equation.
04:41
So these are eigenvalues, and these are eigenvectors. But now they're just an abstract version of what you've discussed before. And we'll come back to this in a moment.
05:00
But one thing that we just defined at the end is the spectrum of an operator. The spectrum of T is equal to all eigenvalues of that operator.
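As a concrete numerical aside (not part of the lecture), here is a minimal numpy sketch of the eigenvalue equation T u = lambda u and the spectrum; the matrix T below is made up purely for illustration.

```python
import numpy as np

# A concrete operator T on C^2, written as a matrix in some chosen basis
# (the entries are arbitrary, picked only for illustration).
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The spectrum is the set of eigenvalues; numpy also returns a matrix
# whose columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(T)
print("spectrum of T:", eigenvalues)              # [2. 3.]

# Check the eigenvalue equation T u = lambda u for each pair.
for lam, u in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(T @ u, lam * u))            # True, True
```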
05:33
So later on, this object will become important. But let's just concentrate on this
05:40
and ask, what does it mean? So if we have lambda being an eigenvalue, what does this equation tell us? Well, it tells us that (T minus lambda I) u = 0, where all I'm
06:07
doing is taking this term over to the other side of the equation and inserting the identity operator, I. So this is in itself an operator now.
06:26
And so this tells us also that this operator, because it maps something that's non-zero
06:44
to the null vector, is not injective. And you can even write that the null space of T minus lambda I
07:04
is equal to all eigenvectors with eigenvalue lambda.
07:23
So every eigenvector with eigenvalue lambda, T acting on it is just going to give me lambda times the eigenvector again. And so this will vanish, so for all eigenvectors with that eigenvalue.
07:41
And we've previously seen that if something is not injective, it's also not invertible.
08:06
So this lets us write something quite nice down. So there's a theorem, let me write it out.
08:21
So if we let T be in the space of linear operators acting on this vector space V, and we have a set of distinct eigenvalues of T: lambda 1, lambda 2, up to lambda n.
08:55
And there's corresponding eigenvectors,
09:06
which we will call U. So there's some set U1, U2, up to U n with a correspondence via their label. Cool?
09:21
So then we know that this list is a linearly independent set.
09:47
So we can prove this one very quickly, so let's do that. So let's assume it's false. So the proof is by a contradiction, so assume it's false.
10:03
And what does that mean? Well, that means that there is a non-trivial relation. I could write down some relation C1 U1 plus C2 U2
10:20
plus CK UK equals 0 without all the C's being 0. And what we'll do is we'll actually say, OK, let's let there be a value of K that's less than or equal
10:44
to n such that this holds for CI not equal to 0. So we're postulating that there is some linear dependence of some of these things.
11:01
So what we can do is then act on this vector here with T minus lambda K times the identity acting on this. So this is C1 U1 plus dot dot dot plus CK UK.
11:24
OK, and what do we get here? So we're going to get, if we act on this piece of it, this is an eigenvector. So T acting on this one will just give us lambda 1, right? And so we're going to get products of lambda 1
11:40
minus lambda K for this piece, et cetera. So this will give us C1 lambda 1 minus lambda K U1 plus dot dot dot up to CK minus 1 lambda K minus 1
12:05
minus lambda K UK minus 1. And then when we act on this one here, so this one has an eigenvalue of the eigenvalue corresponding to the eigenvector is lambda K.
12:20
So that last term gets killed, right? So we get plus 0 lots of UK, right? And we know this is still 0. And now we've established, in fact, these things here are just numbers, right? All of these things.
12:41
So we've actually written down a relation that involves fewer than K of the vectors. Actually, I should have said this: let there be a least K less than or equal to n such that we have linear dependence.
13:02
But what we've just shown is that, in fact, there's a smaller space that's also linear dependent, right? So we've contradicted what we assumed to start with. And you can just repeat this procedure, OK? And so this is a contradiction.
13:25
And so, in fact, there must be no non-trivial relation even for K equals n between these vectors, OK? OK, another brief theorem that we won't prove,
13:54
although we sort of will see why it works in a moment, is, again, for T in linear operators on V,
14:06
with V being a finite dimensional complex vector space, there is at least one eigenvalue, OK?
14:34
So I guess for this.
14:42
So T has at least one eigenvalue, OK? Now, remember, in the last lecture, we looked at a matrix, a 2 by 2 matrix, that was rotations in the xy plane and found there were, in fact, no eigenvalues.
15:01
But that's because we were looking at a real vector space. So we were looking at rotations of the real plane. OK, so this is something that you can prove. We will see why it's true, but we won't prove it.
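To see this point numerically, here is a small sketch (not from the lecture, assuming numpy): a real 2 by 2 rotation matrix has no real eigenvalues, but over the complex numbers its eigenvalues exist and sit on the unit circle. The angle is arbitrary.

```python
import numpy as np

theta = 0.3  # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Over R this rotation has no eigenvectors, but over C the characteristic
# polynomial always has roots: here exp(+i theta) and exp(-i theta).
eigenvalues = np.linalg.eigvals(R)
print(eigenvalues)
print(np.allclose(np.abs(eigenvalues), 1.0))          # True: they lie on the unit circle
print(np.allclose(eigenvalues.real, np.cos(theta)))   # True: real part is cos(theta)
```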
15:20
And so one way of saying this is to go to a basis. And so everything we've said so far about eigenvalues and eigenvectors has not been referring to any particular basis. And in fact, eigenvalues are basis independent. But we can use a basis.
15:41
And then we have matrix representations of operators that we've talked about. And so this operator statement, that (T minus lambda I) u equals 0 for some non-zero u,
16:04
is equivalent to saying, as we wrote up here, that this operator T minus lambda I is not
16:20
invertible. But that's also equivalent to saying that the matrix representation of it in any basis is not invertible.
17:01
And by this we just mean inverses as in the inverses that you've taken of many matrices in your lives. And so what that means, as I'm sure you remember, is that if a matrix is not invertible, it has a vanishing determinant. So det(T minus lambda I) has to be 0.
17:24
Now you can think of T minus lambda I as a matrix: it has minus lambda added to each diagonal entry, and then whatever entries T has wherever it wants. Expanding this determinant just gives us a polynomial in lambda.
17:42
So this gives us some f of lambda, which is a polynomial. And if you remember, this is called the characteristic polynomial. Characteristic, right?
18:02
And so we can write it, if we want, as f of lambda equal to (lambda minus lambda 1)(lambda minus lambda 2) ... (lambda minus lambda n).
18:26
I have to be able to write it like this; over the complex numbers I can always break it up into these factors, where the lambda i's, the zeros of this polynomial,
18:40
are, in general, complex and can be repeated. Now, what can happen is that you have, well, in the worst case, I don't know if it's the worst case, but in one case, you could have all of the singularities,
19:03
all of the 0's being at the same place. And you could have an eigenvalue that is n-fold degenerate here. So if we have, say, lambda 1 occurring twice in the sequence, then we say that's a degenerate eigenvalue.
19:21
And in principle, you could have just a single eigenvalue that's n-fold degenerate, but you can always write this. There has to be one lambda there, at least. One lambda i there, at least. And so you could see why this is true. Now, if you're in a real vector space,
19:43
you don't get to say that because this polynomial may only have complex roots, and then they're not part of the space you're talking about. So it can be repeated, and this is called degeneracy.
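As an illustrative aside (not from the lecture), here is a numpy sketch of the characteristic polynomial and a degenerate eigenvalue; the matrix is made up, and np.poly uses the convention det(lambda 1 - T), which has the same roots as the polynomial discussed above.

```python
import numpy as np

# A made-up matrix with a doubly degenerate eigenvalue 2 and a simple eigenvalue 5.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# Coefficients of the characteristic polynomial, highest power first:
# lambda^3 - 9 lambda^2 + 24 lambda - 20.
coeffs = np.poly(T)
print(coeffs)

# Its roots are the eigenvalues; the repeated root 2 is the degenerate one.
print(np.roots(coeffs))        # approximately [5., 2., 2.]
```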
20:07
OK. So are there any questions? Thank you. OK.
20:24
Thank you. I could have put the sign on the next line as well. OK. So any other questions? OK, so let's move on, and we can talk about inner products.
20:41
And so first, what is an inner product? So an inner product is a map, but it's a very specific map. So an inner product on a vector space V
21:16
is a map from V cross V to a field, F.
21:33
And that's really what it's going to be. Now, who has seen an inner product somewhere? OK, what do we call it?
21:45
A dot product, right. So we can learn a lot from thinking about this simple case. So the motivation for thinking about this is really the dot product. So we have a vector space Rn, and on that vector space,
22:10
well, we might have two vectors, A and B, where A I'm going to write as A1, A2, dot dot dot, An, and similarly for B.
22:26
So we have two vectors, and these are in vector space V. Then we can define the dot product, which is an example of one of these inner products.
22:41
Product, so A dot B. We can even put little vectors over these. And so our definition that we've used for many years is that this is A1 B1 plus A2 B2 plus dot dot dot An Bn.
23:04
And you see that this does what we want. So it takes two vectors, which live in our vector space. And from that, you get a number. So this lives in R. So this is a nice example of an inner product.
23:20
And we can look at what properties it gives us. So what do we know about this dot product? Well, one property that it has is that A dot B equals B dot A,
23:50
so it doesn't care which order you give the arguments in. Also, if I take the same vector,
24:02
I know that this has got to be greater than or equal to 0, because this is going to be our length. And the only case where it's 0 is when the vector is 0.
24:30
And it's also linear. We can write A dotted into, say, beta 1 B1 plus beta 2 B2.
24:45
So these betas are real numbers, and these b's are vectors. So this thing we can just write is equal to beta 1 A dot B1 plus beta 2 A dot B2.
25:06
Let me make them vectors everywhere. So we've got three nice properties. And you can write down more if you want, but these will be enough for us. And the other thing that we can do with this is we can
25:21
define the length of a vector. So we can say this defines a length.
25:41
And more generally, we're going to call this the norm of the vector. And that, of course, you know is that mod A squared is just equal to A dot A. So this
26:05
is our definition of the norm. So this definition over here is really by no means unique in satisfying these properties.
26:21
So if I wrote down something where instead of just A1 B1 plus A2 B2, et cetera, I wrote down some positive number times A1 B1 plus some other positive number times A2 B2,
26:42
et cetera, that would also satisfy all of these properties up here. So it's not unique. And so you could consider another dot product,
27:03
which we would write as just C1 A1 B1 plus C2 A2 B2 plus some Cn An Bn, where the C's are just
27:21
positive real numbers. That would satisfy all of the things that we know about our standard dot product. But for obvious reasons, we don't choose to do this because it's not a very natural definition to put these random positive numbers along here.
27:43
But we could. And I guess one other thing that we have is the Schwarz inequality. And so this is the A dot B. So the absolute value
28:08
of the dot product of A dot B is less than or equal to the product of the norms of the vectors.
28:25
And so one of the problems in the p set is to consider this in the more abstract sense. But this is very easy to show for real vectors. So this is all very nice. So we've talked about Rn.
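As a short numerical aside (not from the lecture, assuming numpy), here is a check of these properties for random real vectors, including the Schwarz inequality and a "weighted" inner product with arbitrary positive coefficients ci like the one just described.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# Symmetry and the Schwarz inequality for the standard dot product.
print(np.isclose(np.dot(a, b), np.dot(b, a)))                          # True
print(abs(np.dot(a, b)) <= np.linalg.norm(a) * np.linalg.norm(b))      # True

# A "weighted" version c1 a1 b1 + c2 a2 b2 + c3 a3 b3 with ci > 0 satisfies
# the same properties, even though it is a less natural choice.
c = np.array([1.0, 2.0, 3.0])

def weighted(x, y):
    return np.sum(c * x * y)

print(weighted(a, a) >= 0)                                             # True
print(abs(weighted(a, b)) <= np.sqrt(weighted(a, a) * weighted(b, b))) # True
```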
28:42
What we're really going to worry about is complex vector spaces. And so there we have a little problem. And the problem comes in defining what we mean by a norm. Because if I say now that this vector has complex components
29:01
and write this thing here, I'm not guaranteed that this is a real number. And so I need to be a little bit careful. So let's just now talk about complex spaces.
29:25
And we really want to have a useful definition of a length. So let's let z be in this thing, in n-dimensional complex space. So really my z is equal to z1, z2, zn, with the zi's
29:46
being in C. So how can we define a length for this object? Well, we have to do it sort of in two steps. So we've already known how to define
30:02
a length for a complex number. It's just the absolute value, the distance from the origin in the complex plane. But now we need to do this in terms of a more complicated vector space. And so we can really think of the length squared of z as equal to the sum of the squares
30:27
of the absolute values of these complex numbers, |z1|^2 + ... + |zn|^2, which if we write it out, looks like z1 star z1 plus dot dot dot plus zn star zn.
30:57
And so we should now, thinking about the inner product,
31:02
we should be thinking that the appearance of complex conjugation is not entirely unnatural. So if we ask about the length of a vector here, then that's going to arise from an inner product. This object we want to arise from our inner product.
31:22
So we can now define our general inner product with the following axioms. So firstly, we want to basically maintain the properties that we've written down here, because we don't want to make our dot product not
31:43
be an inner product anymore. That would be kind of silly. So let's define our inner product in the following way. So I'm going to write it in a particular way. So the inner product is going to be, again, a map.
32:03
And it's going to take our vector space, two elements of the vector space to the field. And I'm in a complex vector space.
32:21
So it's a map that I'm going to write like this, that takes v cross v to c. And what I mean here is you put the two elements of your vector space in these positions in this thing.
32:43
And so really, ⟨a, b⟩ is what I mean by this. So let me write it this way: this thing is in C, where a and b are in V.
33:02
Right? So these dots are just where I'm going to plug in my vectors. And so this inner product should satisfy some axioms. And they look very much like what we've written here. So the first one is a slight modification.
33:23
We want that ⟨a, b⟩ is equal not to ⟨b, a⟩, but to its complex conjugate, ⟨b, a⟩ star. OK? And this is related to what I was discussing here.
33:43
So from this, we can see that the inner product of a with itself is always real, because it and its complex conjugate are the same. So we know that ⟨a, a⟩ is real. And we're also going to demand of a definition
34:00
of this inner product that this is greater than or equal to 0. And it's only 0 if a equals 0. So that's pretty much unchanged. And then we want the same sort of distributivity.
34:22
We do want to have that ⟨a, beta 1 b1 + beta 2 b2⟩ should be equal to beta 1 ⟨a, b1⟩ plus beta 2 ⟨a, b2⟩,
34:54
where the beta i are just complex numbers.
35:03
And that's what we need to ask of this. And then we can make a sensible definition of it that will give us a useful norm as well. Now I'll just make one remark.
35:20
This notation here, this is due to Dirac. So it's very prevalent in physics. You will see in most purely mathematical literature,
35:40
you will see this written just like this. So let me write it as a, b and put these things in explicitly. And sometimes you'll even see a combination
36:02
of these written like this. They all mean the same thing. So this may seem, compared to what we've written up here,
36:21
this seems a little asymmetric between the two arguments. So we've got, well, firstly, these are asymmetric. And then down here, we demand something about the second argument, but we don't demand the same thing about the first argument.
36:40
So why not? Can anyone see? I mean, I guess what we would demand is exactly the same thing the other way around. So we would demand another thing
37:06
that would be sort of ⟨alpha 1 a1 + alpha 2 a2, b⟩ is equal to, well, something like this.
37:24
Well, we would actually demand that this equals alpha 1 star ⟨a1, b⟩ plus alpha 2 star ⟨a2, b⟩,
37:42
but I don't actually need to demand that, because that follows from number one. I take axiom one, apply it to this, and I automatically get this thing here. And notice what's arisen is, actually, let's just
38:03
go through that, because you really do want to see these complex conjugates appearing here. They are important. So this follows. So 1 plus 3 imply this, but let's just do this.
38:22
So let's start with this expression, ⟨alpha 1 a1 + alpha 2 a2, b⟩. And we know that this will then be given, by axiom one, by ⟨b, alpha 1 a1 + alpha 2 a2⟩, complex conjugated.
38:44
And then by this linearity of the second argument, we can now distribute this piece. We can write this as alpha 1 ⟨b, a1⟩ plus alpha 2 ⟨b, a2⟩,
39:06
all complex conjugated, which, putting all the steps in, is alpha 1 star ⟨b, a1⟩ star plus alpha 2 star ⟨b, a2⟩ star.
39:27
And then, again, by the first argument, the first axiom, we can flip these and get rid of the complex conjugation. And that gives us this one up here. So we only need to define this linearity
39:41
of distributive property on one side of this thing. We could have chosen to define it here and wouldn't have needed that one, but we didn't. So let's look at a couple of examples. And the first one is a finite dimensional example.
40:03
And we're going to take v is equal to cn. And our definition is going to be a pretty natural generalization of what we've written down before. So a and b are elements of cn. And this is just going to be a1 star b1 plus a2 star b2
40:30
plus dot dot dot plus an star bn. Let me get another piece of chalk.
40:41
So the only difference from dot product in real vector space is that we've put these complex conjugates here. And that, you can check, satisfies all of these axioms. Another example is actually an example of an infinite dimensional vector space.
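Before moving to that infinite-dimensional example, here is a quick numerical sketch of the C^n inner product (not part of the lecture); numpy's vdot happens to conjugate its first argument, matching the convention used here, and the vectors are arbitrary.

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])   # arbitrary vectors in C^2
b = np.array([2 - 1j, 1j])

# <a, b> = a1* b1 + a2* b2; np.vdot conjugates its first argument.
inner = np.vdot(a, b)
print(np.isclose(inner, np.sum(np.conj(a) * b)))    # True

# Axiom 1: <a, b> = <b, a>*.
print(np.isclose(inner, np.conj(np.vdot(b, a))))    # True

# Axiom 2: <a, a> is real and non-negative (here |1+2i|^2 + |3-i|^2 = 15).
print(np.vdot(a, a))                                # (15+0j)
```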
41:03
Let's take v is the set of all complex functions, all f of x that are in c with x
41:25
living in some finite interval. OK? And so a natural norm to define on this space, and this is something that we can certainly
41:42
talk about in recitations, is this: if I have f and g in this vector space V, then my definition of the inner product ⟨f, g⟩
42:00
is the integral from 0 to l of f star of x, g of x, dx. And if you think of this, this is arising from evaluating f at a set of discrete points,
42:21
and then where you've got a finite dimensional vector space and then letting the space between those points go to 0, this is kind of the natural thing that will arise. It's really an integral as a limit of a sum, and over here, of course I could write this one, is just the sum over i of ai star bi, i equals 1 to n.
42:44
And so this is the integral is the infinite dimensional generalization of the sum, and so we have this. And that might be something to talk about in recitations.
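A rough numerical sketch of this "integral as a limit of a sum" idea (not from the lecture): the functions f, g and the interval length L below are made up purely for illustration, and the discrete sum approaches the integral as the spacing shrinks.

```python
import numpy as np

# Made-up example functions on [0, L]: f(x) = exp(i x), g(x) = x.
L = 2.0
f = lambda x: np.exp(1j * x)
g = lambda x: x

# Discretize the interval; the sum over points of f*(x_i) g(x_i) dx
# approaches the integral of f*(x) g(x) as the spacing dx goes to zero.
for n in (10, 100, 10000):
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    print(n, np.sum(np.conj(f(x)) * g(x)) * dx)

# The exact integral of x exp(-i x) from 0 to L, for comparison.
print("exact:", (1j * L + 1) * np.exp(-1j * L) - 1)
```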
43:01
So we've gone from having just a vector space to having a vector space where we've added this new operation on it, this inner product operation. And that lets us do things that we couldn't do before. So firstly, it lets us talk about orthogonality.
43:34
So previously, we couldn't ask any question
43:42
about two objects within our vector space. This lets us ask a question about two objects. So if we have the inner product ⟨a, b⟩ for a and b
44:05
in some vector space V, then if this is 0, we say the vectors a and b are orthogonal.
44:31
And I'm sure you know what orthogonal means in terms of Rn, but this is just the statement of what it means in an abstract vector space.
44:43
This is the definition of orthogonality. And so if we have a set of vectors e1, e2, en such
45:06
that ⟨ei, ej⟩ is equal to delta ij, the Kronecker delta,
45:20
this set is orthonormal. Again, a word you've seen many times.
45:40
OK. So we can also define the components of vectors now in a basis-dependent way.
46:08
So we're going to choose ei to be a set of vectors in our vector space V. And we've previously
46:23
had things that form a basis of V. And if we also demand that they're orthonormal,
46:47
so then we can always decompose any vector in V in terms of its basis. But if it's also orthonormal, then we can write a, which is a is some vector in V,
47:02
a is equal to sum over i equals 1 to n of some ai ei. So we can do that for any basis. But then we can take this vector and form its inner product with the basis vectors.
47:23
So we can look at what ⟨ek, a⟩ is. So we have our basis vectors ek, and we take one of them. And we dot product into this vector here. And this is straightforward to see. This is going to be equal to the sum over i equals 1 to n
47:43
ai. And then it's going to be the inner product of ek with ei because of this distributive property here. OK? But we also know that because this is an orthonormal basis, this thing here
48:03
is a Kronecker delta, delta ik. And so I can, in fact, do this sum. And I get that this is equal to ak. And so we've defined what we mean by the components of this vector in this basis ei.
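As an aside (not from the lecture), here is a small numpy sketch of extracting components via a_k = ⟨e_k, a⟩ in an orthonormal basis; the basis and the vector are chosen arbitrarily for illustration.

```python
import numpy as np

# An arbitrary orthonormal basis of C^2, chosen just for illustration.
e1 = np.array([1, 1j]) / np.sqrt(2)
e2 = np.array([1, -1j]) / np.sqrt(2)
basis = [e1, e2]

# Check <e_i, e_j> = delta_ij (np.vdot conjugates its first slot).
print(np.round(np.array([[np.vdot(ei, ej) for ej in basis] for ei in basis]), 10))

# Components of an arbitrary vector a are a_k = <e_k, a> ...
a = np.array([2.0 + 1j, 3.0])
components = [np.vdot(ek, a) for ek in basis]

# ... and the vector is recovered as a = sum_k a_k e_k.
reconstructed = sum(ak * ek for ak, ek in zip(components, basis))
print(np.allclose(reconstructed, a))    # True
```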
48:25
They're defined by this inner product. OK. So we can also talk about the norm, which,
48:42
unsurprisingly, we are going to take to be given again by the norm of a squared equals ⟨a, a⟩, just as we did in Rn. But now it's the more general definition of my inner product that defines our norm.
49:05
And because of our axioms, so because of number two in particular, this is a sensible norm, right? It's always going to be greater than or equal to 0. OK, and conveniently, we can also
49:22
change this Schwarz inequality. So instead of the one that's specific to Rn, it becomes: the absolute value of ⟨a, b⟩ is less than or equal to the norm of a times the norm of b.
49:44
So let's cross that one out. This is what it becomes. And in the current p set, you've got to prove this is true, right? We can also write down a triangle inequality,
50:03
which is really something that norms should satisfy. So the norm of a plus b should be less than or equal to the norm of a plus the norm of b. And the R3 version of this is
50:22
that any side of a triangle is no longer than the sum of the other two sides, right? So this is fine. OK. So you might ask why we're doing all of this seemingly
50:42
abstract mathematics. Well, so now we're in a place where we can actually talk about the space where all of our quantum states are going to live. And so these vector spaces that we've given an inner product,
51:03
we can call them inner product spaces. So we have a vector space with an inner product
51:22
is actually we call a Hilbert space. And so this needs a little qualifier. So if this is a finite dimensional vector space, then this is just a Hilbert space.
51:46
Let me write it here. So let's write it as a finite dimensional vector space with an inner product is a Hilbert space. But if we have an infinite dimensional vector space,
52:02
we need to be a little bit careful. We need to, for an infinite dimensional vector space, we again need an inner product.
52:26
We need to make sure that this space is complete. And this is a kind of technical point that I don't want to spend too much time on. But if you think about, well, let me just write it down.
52:43
The space, let me write it here. And I haven't defined what this complete vector space means. But if we have an infinite dimensional vector space that is complete, or we make it complete,
53:01
and we have an inner product, we also get a Hilbert space. And all of our quantum mechanical states live in a Hilbert space. What is complete without a mean on it? OK.
53:21
Yes, that's true. So how's that? OK. So we need to define what we mean by complete, though. And I don't want to spend much time on this, but we can just do an example. For example, if we take the space of,
53:44
let v equal the space of polynomials on an interval,
54:00
0 to l, say. So this means I've got all polynomials pn of x equal to p0 plus p1 x plus dot dot dot plus pn x to the n.
54:26
There are things that will live in the completed vector space that are not of this form here. So for example, if I take n larger and larger, I could write down this polynomial.
54:41
I could write pn of x as the sum over i equals 0 up to n of x to the i over i factorial. And all of these pn's live in this space of polynomials.
55:04
But in their limit as n becomes large, there's a sequence of these, a Cauchy sequence, such that as n goes to infinity I generate something that's actually not a polynomial. I generate e to the x, which lives
55:25
in the completion of this, but it's itself not a polynomial. So don't worry about this too much, but in order to really define a Hilbert space, we have to be a little bit careful for infinite dimensional
55:42
cases. So a few more things that we can do to talk about. Well, how do we make an orthonormal basis?
56:02
So I presume you've all heard of the Gram-Schmidt procedure. Yeah, OK. So that's how we make an orthonormal basis. And just the way you do it in R3, you do it the same way in your arbitrary vector space.
56:23
So we have the Gram-Schmidt procedure. And yeah, so you can define this. So we have a list v1, v2, vn of just vectors
56:49
in our vector space that are linearly independent. So we can construct another list.
57:24
And this one is also orthonormal, and so it's a very useful thing for us to have. And so you can define it recursively. You can just write that ej is equal to vj minus the sum over i less than j of ⟨ei, vj⟩ ei,
58:04
So this thing divided by its length. And so you just apply this sum. You're orthogonalizing ej versus all of the previous ei's that you've already defined. And then you normalize it by dividing by its length.
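A minimal sketch of this recursive Gram-Schmidt procedure in numpy (not from the lecture), assuming the input vectors are linearly independent; the test vectors are made up.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent (possibly complex) vectors:
    e_j = (v_j - sum_{i<j} <e_i, v_j> e_i) / norm."""
    basis = []
    for v in vectors:
        # Subtract the projections onto the e_i constructed so far.
        w = v - sum(np.vdot(e, v) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# The result is orthonormal: <e_i, e_j> = delta_ij.
print(np.round(np.array([[np.vdot(x, y) for y in es] for x in es]), 10))
```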
58:23
So that's something that's very useful. And the last thing I want to say about these inner product spaces is that we can use them, these inner products at least, is that we can use them to define the orthogonal complement of something, of anything really.
58:51
So let's let u, so we have a vector space v.
59:01
And I can just choose some things in that and make a set. So U is a set of vectors that are in V. So it doesn't need to be a subspace. It's just a set.
59:21
So for example, if V is Rn, I could just choose vectors pointing along two directions. And that will give me my set. But that's not a subspace, because it doesn't contain some multiple of this vector plus some multiple of that vector, which
59:40
will be pointing over here. So this is just a set so far. We can define u perpendicular, which we'll call the orthogonal complement.
01:00:02
of u. And this is defined as u perpendicular is equal to the set of v's in v such
01:00:20
that ⟨v, u⟩ is equal to 0 for all u in U. So all of the things that live in this space are orthogonal to everything that lives in U.
01:00:40
And in fact, this one is a subspace automatically. So it is a vector space. So if I took my example of choosing the x direction and y direction for my set here,
01:01:02
then everything perpendicular to the x direction and y direction is actually everything perpendicular to the xy plane. And so that is actually a subspace of the R3. And so there's a nice theorem that you can think about,
01:01:23
but it's actually kind of obvious. So if U is a subspace, then I can actually write that V is equal to the direct sum of U and its orthogonal complement.
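An illustrative numpy sketch (not from the lecture) of the x-hat, y-hat example just mentioned: the orthogonal complement is read off from the SVD, and the dimensions add up as the direct-sum statement requires.

```python
import numpy as np

# Take U to be the span of x-hat and y-hat inside V = R^3 (rows span U).
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Vectors orthogonal to every row of U form the null space of U; the right
# singular vectors with (numerically) zero singular value span it.
_, s, Vt = np.linalg.svd(U)
rank = int(np.sum(s > 1e-12))
U_perp = Vt[rank:]               # here: a single row, proportional to z-hat
print(U_perp)

# dim(U) + dim(U_perp) = dim(V), consistent with V = U (+) U_perp.
print(rank + U_perp.shape[0] == 3)   # True
```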
01:01:49
So that one's kind of fairly straightforward to prove, but we won't do it now. So in the last little bit, I want to talk more about this notation
01:02:02
that I've introduced, that Dirac introduced. So, if I can find the eraser here, are there any questions about this? Yep. So when we're defining our space and the idea of a basis, why
01:02:22
is it that in quantum mechanics we decompose things into plane waves a lot when they're not actually in the space? Yes. So it's because basically it works. Mathematically, we're doing things that are not quite legitimate.
01:02:48
So we can generalize the Hilbert space a little bit, and such that these non-normalizable things can live in this generalized space. But really, the answer is that it works,
01:03:03
but no physical system is going to correspond to something like that. So if I take plane waves, that's not a physically realizable thing. But it gives us an easy way to, instead of talking about some wave packet that's some superposition of plane waves,
01:03:20
we can talk about the plane waves by themselves and then form the wave packet afterwards, for example. Does that answer the question a little bit, at least? Yeah. Yep. As in the sum of U and the complement of U,
01:03:42
why does U need to be a subspace? OK. So just think about the case that I was talking about. So if we're looking at R3, and we take U to be the set of the unit vector in the x direction,
01:04:05
unit vector in the y direction, that's not a subspace, as I said, because I can take the unit vector in the x direction plus the unit vector in the y direction that goes in the 45 degree direction. And it's not in the things that I've written down originally. So then if I talk about the subspace,
01:04:23
the things spanned by x hat and y hat, then I have a subspace, the whole xy plane. And the things that are orthogonal to it in R3 are just the things proportional to the z hat.
01:04:44
Right? And so then I've got the things in this x hat and y hat. And the thing that's in here is z hat. And so that really is the basis for my R3
01:05:01
that I started with. That contains everything. And more generally, the reason that I need to make this a subspace is just because I define u by some set of vectors
01:05:23
that I'm putting into it. The things that are orthogonal to that are automatically already everything that's orthogonal to it. So there's no combination of the things in the orthogonal complement that's not already in that complement. Right, because I'm saying that this
01:05:42
is everything in v that's orthogonal to these things in this subspace. So I could write down some arbitrary vector v, and I can always write it as a projection onto things
01:06:00
that live in here and things that don't live in this one. Right, and what I'm doing by defining this complement is I'm getting rid of the bits that are proportional to things in this. OK?
01:06:21
All right, any? Yep? So an orthogonal complement is automatically a subspace? Yes. But that doesn't necessarily mean that any random collection of vectors is a subspace? No. No.
01:06:41
All right, so let's move on and talk about the Dirac's notation. And let's do it here. So we've already, I mean, three or four lectures ago,
01:07:02
we started talking about these objects. And we were calling them kets, right? And they were things that live in our vector space v. So these are just a way of writing down our vectors.
01:07:23
And so when I write down the inner product, which we have on the board above, well, one of the bits of it looks a lot like this, right? So we can really think of a, b, the b being a ket.
01:07:40
I mean, we know that b is a vector, and here we're writing it in a particular way of writing things in terms of a ket. And what we can do is actually think about breaking this object, this inner product, up into two pieces. So remember, the dot product is taking two vectors, a and b.
01:08:03
One of them, well, we already have written it like a vector, because a ket is a vector. What Dirac did in breaking this up is he said, OK, well, this thing is a bracket. And so he's going to call this one a ket, and this is a bra.
01:08:25
So this object with something in it. So the things inside these you should think of as just labeling these things. Now, we already know this thing here. So these kets are things that live in,
01:08:41
and this is, I should say, this is Dirac notation. So we already know these kets are things that live in the vector space. But what are the bras?
01:09:01
Well, they're not vectors in V. So b is a vector. So maybe I should have called this one b to be a little less confusing. So b is a ket, and this is something that lives in our vector space V. This inner product
01:09:20
we're writing in terms of bra and a ket. The bra, what does it actually do? So I'm going to use it to make this inner product. And so what it's doing is it's taking a vector and returning a complex number.
01:09:44
So the inner product takes V cross V goes to C. But if I think of it as the action of this bra on this ket, then the action is that this bra eats a vector and spits back a complex number.
01:10:02
So a is actually a map. So these bras live in a very different place than the kets do, although they are
01:10:22
going to be very closely related. And so firstly, it's not in V. And you should be careful if you ever say that because it's not right. We actually say that it belongs to a dual space, which
01:10:46
we label as V star because it is very dependent on V. It's maps from V to C. And I should even say this is a linear map.
01:11:10
Now, what is V star? Well, at the moment, it's just the space of all linear maps from V to C. But it itself is a vector space.
01:11:22
So we can define addition of these maps. We can define addition on V star and also a scalar multiplication of these maps.
01:11:46
And so what that means is that I can define some bra w that's equal to alpha times another bra, a, plus beta times b.
01:12:01
And all of these live in this V star space. I guess I couldn't write that explicitly. So a, b, and w live in V star.
01:12:22
And the way we define this is actually through the inner product. We define it such that, for all vectors v in the vector
01:12:51
space big V, ⟨w|v⟩ is equal to alpha ⟨a|v⟩ plus beta ⟨b|v⟩. The definition of w is that this holds.
01:13:01
And then, basically, from the properties of the inner product, you inherit the vector space structure. So this tells us V star is a vector space.
01:13:33
And there's actually a correspondence between objects in the original vector space v
01:13:42
and those that live in V star. So we can say for any v in V, there's a unique. Sorry, I actually wrote it like this. Any ket v in the vector space,
01:14:02
there is a unique bra, which I'm also going to label by v. And this lives in V star. And so we can show uniqueness by assuming it doesn't work. So let's assume that there exists a v and a v prime
01:14:39
in here, such that ⟨v|w⟩ equals ⟨v prime|w⟩ for all w in V.
01:14:56
So we'll assume that this one is not unique,
01:15:01
but there are two things, v and v prime. Then we can construct from this. I can take this over to this side here, and I just get that ⟨v|w⟩ minus ⟨v prime|w⟩ is equal to 0,
01:15:22
which I can then use the conjugate symmetry of these objects to write as ⟨w|v⟩ minus ⟨w|v prime⟩, all starred.
01:15:46
So I've just changed the order of both of them. And then I can use the property that kets, I can combine them linearly. So I know that this is equal to ⟨w|v minus v prime⟩ star.
01:16:06
And essentially, that's it, because I know that this has to be true for every w in the vector space V. So this thing is equal to 0.
01:16:20
And so the only thing that can annihilate every other vector is going to be 0 from our definition, in fact, of the inner product. So this implies that v minus v prime equals 0, the null vector, which implies that v equals v prime.
01:16:43
And so our assumption was wrong, and so this is unique. OK, let's see. And so we actually have really a one-to-one correspondence between things in the vector space
01:17:02
and things in the dual space. And so we can actually label the bras by the same thing that's labeling the kets. So I can really do what I've done in the top line up there
01:17:24
and have something where everything is labeled by the same little v. Both the thing in the vector space big V and the thing in V star are labeled by the same thing. And more generally, I could say that,
01:17:48
for a ket alpha v, there's a correspondence between this ket and the bra alpha star ⟨v|.
01:18:08
And notice the stars appearing here, but they came out of how we defined the inner product.
01:18:21
OK, so really, in fact, any linear map you write down, any linear map like this defines one of these bras, because every linear map that takes v to c lives in v star. So there has to be an element that corresponds to it.
01:18:43
And just if you want to think about a kind of a concrete way of talking about these, if I think of this as a column vector v1 to vn,
01:19:07
right, the way I should think about the bras is that they are really what you want to write as row vectors. And they have the conjugates of the thing.
01:19:26
The components are conjugated. And now you can ask what the dot product looks like.
01:19:41
⟨alpha|v⟩ is then just this matrix multiplication. But it's matrix multiplication of a 1 by n thing by an n by 1 thing.
01:20:02
Alpha 1 star, alpha 2 star, up to alpha n star times this column here, v1 down to vn. And this is now just matrix multiplication, giving alpha 1 star v1 plus dot dot dot plus alpha n star vn.
01:20:34
I guess I can write it like this.
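As a concrete aside (not part of the lecture), a numpy sketch of this row-vector picture of bras: the bra is the conjugate transpose of the ket's column vector, and ⟨a|v⟩ is an ordinary 1 by n times n by 1 matrix product. The components below are arbitrary.

```python
import numpy as np

# An arbitrary ket |v> and ket |a> as column vectors in C^3.
v = np.array([[1.0 + 1j], [2.0], [0.5j]])
a = np.array([[2.0], [1j], [1.0 - 1j]])

# The bra <a| is the conjugate transpose: a 1 x n row vector with the
# components conjugated (alpha_1*, alpha_2*, ..., alpha_n*).
bra_a = a.conj().T

# <a|v> is then ordinary matrix multiplication of (1 x n) times (n x 1).
inner = (bra_a @ v)[0, 0]
print(inner)
print(np.isclose(inner, np.vdot(a, v)))   # True: same as the C^n inner product
```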
01:20:44
So really, they're as concrete as the kets are. So you can construct them as vectors, like just strings of numbers in this way. So I guess we should finish.
01:21:01
So I didn't get to talk about linear operators, but we will resume there next week. Are there any questions about this last stuff or anything? No? OK. See you next week. We'll see you tomorrow, some of you. Thanks.