
Finding purifications with minimal entanglement


Formal Metadata

Title: Finding purifications with minimal entanglement
Number of Parts: 18
License: CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Many interesting strongly interacting Quantum Field Theories are not amenable to analytical treatment. This workshop will focus on systematic numerical approaches to such theories relying on the quantum Hamiltonian, including Truncated Spectrum Approach, Light Front Quantization, Matrix Product States and Tensor Networks. Such methods provide a viable alternative to Lattice Monte Carlo simulations. Their advantage is the ability to access real-time observables, and to study Renormalization Group flows originating from strongly-interacting fixed points.
Transcript: English (auto-generated)
Okay, thanks to the organizers for inviting me and giving me the opportunity to present some of our work here. As was pointed out, I want to discuss finding purifications with minimal entanglement. This is work done together with Johannes Hauschild, who is a student in Munich, and it is published as arXiv:1711.01288.
So, let me first give a brief outline of what I want to do. I want to first say a little bit about matrix product states, because so far there has not been any introduction to them, and then say a few words about what the concept of purification actually is. Then I am going to bring these together and show how we can use matrix product states to simulate mixed quantum states. The main result that I want to show is a matrix product state (MPS) based method to iteratively minimize the entanglement of purifications.
So, let me now start by introducing matrix product states, because I assume that not everyone is familiar with this concept.
So, for everything in my talk,
I will focus on one-dimensional quantum systems, say of length L, with a local Hilbert space spanned by states |j_n>, where j_n runs from 1 to d; so we have a d-dimensional local Hilbert space. For such a system, a generic quantum state can be written as |psi> = sum_{j_1, ..., j_L} c_{j_1...j_L} |j_1, ..., j_L>, that is, a many-body amplitude for each configuration times the corresponding product-basis state. Every state of this many-body system can be expressed in this form. However, there are d^L complex numbers that we need to store (2^L for a spin-one-half system), and that makes it incredibly difficult to work in this full representation; this is the representation used in exact diagonalization. Now, this state can be rewritten in terms of a matrix product state (MPS) representation.
In the MPS representation, we take the amplitude of the many-body wave function and express it as a product of matrices: c_{j_1...j_L} = B_1[j_1] B_2[j_2] ... B_L[j_L]. These are different matrices carrying an extra index: the first one here would be, say, a 1 x d matrix and the last a d x 1 matrix, and generically the tensor at site n has dimensions chi_n x chi_{n+1} x d. This is for a one-dimensional system with open boundary conditions. In fact, every quantum state on this Hilbert space can be brought into this form by successively applying Schmidt decompositions: we start from the state in the full representation and successively do Schmidt decompositions at these bonds. I was not planning to spell out the exact sequence of steps of the algorithm, but every quantum state can be brought into this form.
Say it again, please: what are the dimensions of the intermediate matrices? Can you say it again for an example? So, say we take a generic quantum state and follow the procedure I just advertised, using the singular value decomposition. We would, for example, first do a bipartition between the first l spins and the remaining L - l spins and then do a Schmidt decomposition, writing the state as |psi> = sum_alpha lambda_alpha |alpha, left> |alpha, right>, where alpha runs from 1 to min(d^l, d^(L-l)). This gives the dimensions of the matrices: for the first bond, if I do this decomposition, we need d states; the next matrix is a d x d^2 matrix, and so on. Once we go across the center of the chain, the matrices become smaller again, and the last one is d x 1.
If you follow this generic procedure, it actually does not help us much: we had an exponentially increasing Hilbert space before, and with this representation of matrices the maximum of the chi_n is still of the order of d^(L/2). So that gives a scheme: every quantum state can be brought into matrix product state form, for example using this successive singular value (Schmidt) decomposition; however, the bond dimension grows exponentially with the system size.
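As an illustration (my own sketch, not from the talk; `state_to_mps` is a made-up name), the successive-SVD construction just described fits in a few lines of numpy, and reproduces exactly the d^(L/2) peak bond dimension:

```python
import numpy as np

def state_to_mps(psi, L, d):
    """Decompose a full state vector (d**L amplitudes) into an MPS
    B_1 ... B_L by successive SVDs at each bond.
    Returns tensors of shape (chi_left, d, chi_right)."""
    Bs = []
    R = psi.reshape(1, -1)           # left bond dimension starts at 1
    for n in range(L - 1):
        chi = R.shape[0]
        R = R.reshape(chi * d, -1)   # group (left bond, physical) indices
        U, S, Vh = np.linalg.svd(R, full_matrices=False)
        Bs.append(U.reshape(chi, d, -1))
        R = np.diag(S) @ Vh          # push singular values to the right
    Bs.append(R.reshape(-1, d, 1))
    return Bs

# check on a random 6-site spin-1/2 state (no truncation, so it is exact)
L, d = 6, 2
psi = np.random.randn(d**L) + 1j * np.random.randn(d**L)
psi /= np.linalg.norm(psi)
Bs = state_to_mps(psi, L, d)

# contract the MPS back into the full vector
full = Bs[0]
for B in Bs[1:]:
    full = np.tensordot(full, B, axes=1)   # contract right bond with left bond
assert np.allclose(full.reshape(-1), psi)

# bond dimensions peak at d**(L/2) in the middle, as stated in the talk
print([B.shape[2] for B in Bs])   # [2, 4, 8, 4, 2, 1]
```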
Well, it turns out that this way of expressing the wave function is efficient for slightly entangled systems. What this means is: if I cut the system into two halves, a left half and a right half, and look at how strongly the left part is entangled with the right part, and I find it is only slightly entangled, then I can get away with a much smaller bond dimension than what we would need for a random state or a strongly entangled state. In particular, the maximum bond dimension can then be much smaller than d^(L/2). In fact, we can then even go to infinitely long systems and get away with a small constant chi, much smaller than the exponentially growing dimension.
This is particularly true for ground states of gapped, local Hamiltonians in 1D, for which the area law holds. Moreover, for local Hamiltonians with gapless ground states, the maximum chi_n that we need grows only polynomially with the system size instead of exponentially. So this way of representing states gives us an efficient representation of quantum states whenever the states are slightly entangled. This is also the reason why numerical methods such as the density matrix renormalization group (DMRG) and the TEBD method by Vidal, which was mentioned already this morning, work so nicely for one-dimensional systems. This is particularly true for ground state properties, but it also helps for quantum quenches, as long as these quenches do not generate too much entanglement; in particular, if we do a quantum quench and stay at relatively short time scales, or have a rather low defect density, this method works extremely well.
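To make "slightly entangled" concrete, here is a small numerical check of my own (the model parameters are an arbitrary choice): the Schmidt spectrum across the middle cut of a gapped transverse-field Ising ground state decays very quickly, so truncating to a small chi loses almost no weight.

```python
import numpy as np

# 8-site transverse-field Ising chain in the gapped phase (g > 1),
# built densely just for this small check.
L, g = 8, 1.5
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op_at(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    mats = [np.eye(2)] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = sum(-op_at(sz, i) @ op_at(sz, i + 1) for i in range(L - 1))
H = H + sum(-g * op_at(sx, i) for i in range(L))
E, V = np.linalg.eigh(H)
gs = V[:, 0]                                   # ground state

# Schmidt values across the middle cut, and the cumulative weight
# captured by keeping only the largest chi of them
S = np.linalg.svd(gs.reshape(2**(L // 2), -1), compute_uv=False)
weight = np.cumsum(S**2)
print(weight[:4])   # already a very small chi captures almost all the weight
```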
Good. What I want to get at at some point is the dynamics, and also the statics, of mixed states; at this moment we are only talking about pure quantum states, and we want to go over to mixed states. For this I will need to do some acrobatics with these matrix product states, and it will be quite useful to adopt a schematic representation of matrix product states, and of tensors in general. Instead of writing matrices, scalars, et cetera, I am going to draw symbols. For example, a circle with nothing sticking out is just a complex number, say some c. A circle with one line sticking out is a vector with one index, v_i. If two indices stick out, it is a matrix, M_ij. We can also use this to contract various tensors: a matrix-matrix multiplication, for example, is drawn by taking two matrices and connecting one line; the connected line is the contraction over one index, giving the product of the matrices M and N. Using this tensor (Penrose) notation, we can draw the full many-body wave function as one big rank-L (order-L) tensor, like a brush with L legs sticking out; to represent this object we need d^L complex numbers. When we compress it in terms of a matrix product state, we write it instead as the matrices B_1 to B_L, and if we contract all of them together, we get back the blob that is the full wave function. I guess this notation is pretty clear to everyone; in fact, you also see it on the poster for the conference. Good. So now we have the representation of pure states in terms of matrix product states.
Now, if you follow this example, the first tensor has dimensions 1 x d x d, the next one d x d^2 x d, and so on. I guess you could attach a number to every edge; you need labels anyway. Well, this leg here would be j_{L-1}, and this one j_L; is that what you mean? No, I mean the dimension of every edge; the bonds in between have different dimensions, they are not the same states. Right: this bond has dimension d, this one dimension d^2, and so on; I could also just draw thicker lines for the larger bonds.
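The diagrammatic rules translate one-to-one into `np.einsum`, where each leg is an index letter and a joined leg is a summed index; a small sketch of my own:

```python
import numpy as np

# A joined leg between two tensors is a contraction over that index.
M = np.random.randn(3, 4)
N = np.random.randn(4, 5)
MN = np.einsum('ij,jk->ik', M, N)      # the matrix-matrix product diagram
assert np.allclose(MN, M @ N)

# Contracting a 3-site MPS (shapes chi_left x d x chi_right) back into
# the full wave-function "blob" with d^3 amplitudes.
d, chi = 2, 3
B1 = np.random.randn(1, d, chi)
B2 = np.random.randn(chi, d, chi)
B3 = np.random.randn(chi, d, 1)
psi = np.einsum('aib,bjc,ckd->aijkd', B1, B2, B3).reshape(d, d, d)

# the same contraction done pairwise with tensordot
tmp = np.tensordot(B1, B2, axes=(2, 0))    # shape (1, d, d, chi)
tmp = np.tensordot(tmp, B3, axes=(3, 0))   # shape (1, d, d, d, 1)
assert np.allclose(psi, tmp.reshape(d, d, d))
```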
Good. Okay, so this is how we can deal with pure quantum states, and now I want to come to how we can purify mixed states. In many cases we would like to work with mixed states: for example, if we want to look at dynamics at finite temperature, or at quenches at finite temperature, et cetera. So there are many cases where we are interested not in the dynamics of pure states but would like to simulate mixed states, while the framework so far only gives us a powerful way to simulate pure states. For this, I want to introduce the concept of purification. The statement is the following: we have a density matrix rho on some Hilbert space H_P, where P stands for the physical Hilbert space in which our system lives, and we represent it as a pure state |psi> living in an enlarged Hilbert space H_P tensor H_Q, such that rho = Tr_Q |psi><psi|. So we enlarge the Hilbert space such that the density matrix we are actually interested in is the reduced density matrix obtained from |psi><psi| by tracing out the subsystem Q. In fact, it is always sufficient to choose H_Q identical to the physical Hilbert space.
Formally, we can always find a purification |psi> by diagonalizing rho. For thermal states, this gives us the thermofield double, |psi(beta)> = (1/sqrt(Z)) sum_n e^{-beta E_n / 2} |n>_P |n>_Q, where the |n> are the eigenvectors of the density matrix (for a thermal state, the eigenvectors of the Hamiltonian) and the E_n are the corresponding energies. However, this is not a unique representation: the thermofield double is only one possible purification. In fact, the physical density matrix is independent of unitaries acting on the ancilla space, this unphysical space, because if we calculate the trace, any unitary acting only on Q is undone.
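Both statements are easy to verify numerically; here is a toy check of my own of the diagonalization construction and of the ancilla-unitary freedom:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# a generic density matrix: positive semidefinite with unit trace
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

# purify by diagonalization: |psi> = sum_n sqrt(p_n) |n>_P |n>_Q,
# stored as a matrix Psi[physical, ancilla]
p, U = np.linalg.eigh(rho)
Psi = U * np.sqrt(np.clip(p, 0, None))
assert np.allclose(Psi @ Psi.conj().T, rho)   # Tr_Q |psi><psi| = rho

# any unitary V acting only on the ancilla index leaves rho unchanged
V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
Psi2 = Psi @ V.T
assert np.allclose(Psi2 @ Psi2.conj().T, rho)
```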
If we are just interested in some purification to get our physical density matrix back, it does not matter which purification we choose. However, our goal is to use the matrix product state formalism for these purified states, and for that we want to keep the entanglement low: we want to be able to represent the purification with a relatively small bond dimension. So what we actually want, and this is the main goal of what I want to show, is to choose the unitary, which we can call U_Q or U_ancilla, such that it minimizes the entanglement. There is actually a name for this: the entanglement of the minimally entangled purification is called the entanglement of purification. This brings us to the third part, where we look at the purification in MPS form.
The entanglement that matters to us for the representability in terms of an MPS is the spatial entanglement between the degrees of freedom. Our purified state |psi> lives on the physical system with the degrees of freedom doubled on each site, and the entanglement we want to reduce is the bipartite entanglement for spatial cuts through the chain; I am going to make this precise in a moment. Is the unitary operator the entirety of the freedom that you have in choosing this purification? Yes. In a moment I will draw a schematic representation of what we are doing, and then it will become clearer what the degrees of freedom are that we have.
So now we have this way of purifying our state, and we can bring it into matrix product state form. The amplitudes of the purified state now carry degrees of freedom j_1^P, j_1^Q, ..., j_L^P, j_L^Q, and we write this as a matrix product state with two indices per site: the upper ones describe the physical states and the lower ones the ancilla, or auxiliary, states. To obtain the density matrix, rho is then just a trace over these extra indices: we take two copies, |psi> and <psi| (the latter flipped around), and you see that any unitary acting on the ancilla legs cancels, so the physical density matrix rho does not depend on any unitary operation acting on the ancilla states. So how do we work with these purifications? We now know how to express a density matrix in this form, but the construction I presented would require completely diagonalizing the density matrix, which is certainly not feasible for a sizable system. So how do we do this?
The idea is that certain purifications can be written down very easily. In particular, the infinite-temperature thermofield double is easy to represent. I call it |psi_0>, because beta is zero, and it can be expressed simply as a product state of maximally entangled physical-ancilla pairs: |psi_0> = product over all sites m of (1/sqrt(d)) sum over j_m of |j_m>_P |j_m>_Q. So the purification of the infinite-temperature state is really simple; in particular, it is a product state with bond dimension one.
So now we have the state at infinite temperature, which is incredibly simple to write down. In particular, it is very well suited to being represented as a matrix product state, because it is just a product state. To go from here to finite-temperature states, we can use well-known MPS techniques, for example the TEBD algorithm: finite-temperature states are obtained by imaginary time evolution. We take the infinite-temperature state and, assuming we only have nearest-neighbor interactions, say an Ising-type Hamiltonian, we evolve it in imaginary time by applying e^{-beta H / 2}, Trotterized into gates acting only on the physical indices, and we continue until we reach a sufficiently low temperature. I am not explaining the details of how to apply these gates to the MPS, but this is straightforward with,
like for example, the TEBD algorithm. So what is the role played by the infinite-temperature state; could one have started from pretty much any density matrix, or was it important to start from |psi_0>? Well, we cannot start from any density matrix: if, for example, you start from the density matrix of the ground state and act on it with e^{-beta H}, you are not going to a state at temperature beta. The point here is precisely that we start at infinite temperature and cool down. Does that make sense? There must still be some freedom; is there some freedom in where you start when you apply this? Yes, the freedom we have, and this is what I pointed out before, is on the ancilla indices: we can apply any unitary to the auxiliary degrees of freedom. And this will in fact be the main thing we use: we want to find the purification that is best suited for us, in particular a purification where the entanglement is reduced.
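Before moving on, the cooling scheme can be checked end to end on a system small enough for dense linear algebra (my own toy check; e^{-beta H / 2} is applied exactly here rather than Trotterized): cooling the identity purification and tracing out the ancilla reproduces the Gibbs state.

```python
import numpy as np

# small transverse-field Ising chain, built densely just for the check
L, g, beta = 4, 1.0, 0.7
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op_at(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    mats = [np.eye(2)] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = sum(-op_at(sz, i) @ op_at(sz, i + 1) for i in range(L - 1))
H = H + sum(-g * op_at(sx, i) for i in range(L))
E, V = np.linalg.eigh(H)

# beta = 0 purification: Psi0[physical, ancilla] = identity / sqrt(D)
D = 2**L
Psi0 = np.eye(D) / np.sqrt(D)

# cool by acting with e^{-beta H / 2} on the physical index only
cool = (V * np.exp(-beta * E / 2)) @ V.T   # V is real orthogonal here
Psi = cool @ Psi0
Psi /= np.linalg.norm(Psi)                 # keep <psi|psi> = 1

# tracing out the ancilla must give the Gibbs state e^{-beta H} / Z
rho = Psi @ Psi.conj().T
gibbs = (V * np.exp(-beta * E)) @ V.T
gibbs /= np.trace(gibbs)
assert np.allclose(rho, gibbs)
```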
And this is what I want to come to now: reducing the entanglement. The framework is now clear, we have this purification framework to represent a mixed state in terms of a pure state, and the objective is also clear: we want to use the degree of freedom we have to get a particularly nice purification for our system, i.e. to choose this unitary so as to reduce the entanglement of our state. For performing quenches, or for real-time evolutions, there was already a rather simple idea, going back to a paper by Christoph Karrasch et al. a while ago: perform a backward time evolution
on the ancilla indices. In particular, the idea was the following. Suppose we have some starting state, for example a thermal state obtained with the algorithm above, and we act on it with some local operator, say b, as a local quench, or because we want to calculate some dynamical correlation function. We then obtain a state b(t, beta) by applying the time evolution U(t) = e^{-i t H} to the physical indices. They showed numerically that in many cases the entanglement that builds up can be strongly reduced by simultaneously doing a backward time evolution on the ancillas: we evolve forward in time on the physical indices, and since we can do whatever we like to the ancilla indices, we evolve them backward in time, and this strongly reduces the entanglement growth of the system.
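The effect can be seen exactly on a toy example of my own (dense matrices, generic symmetric Hamiltonian): forward evolution on the physical index combined with backward evolution on the ancilla index leaves the thermofield double itself exactly invariant, so in equilibrium no entanglement is generated at all.

```python
import numpy as np

rng = np.random.default_rng(1)
D, beta, t = 6, 0.5, 1.3

# a random real symmetric "Hamiltonian" is enough for the check
A = rng.normal(size=(D, D))
H = (A + A.T) / 2
E, V = np.linalg.eigh(H)

Z = np.sum(np.exp(-beta * E))
Psi = (V * np.exp(-beta * E / 2)) @ V.T / np.sqrt(Z)   # thermofield double

Uf = (V * np.exp(-1j * E * t)) @ V.T   # forward evolution e^{-iHt} (physical)
Ub = (V * np.exp(+1j * E * t)) @ V.T   # backward evolution e^{+iHt} (ancilla)

# forward evolution on the physical index alone rotates the state ...
assert not np.allclose(Uf @ Psi, Psi)
# ... but forward + backward leaves the thermofield double invariant
assert np.allclose(Uf @ Psi @ Ub.T, Psi)
```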
But it turns out that, first of all, this is not ideal in many cases, and moreover it does not work at all for imaginary time evolution. So what I mainly want to propose now is how we can actually optimize this unitary: an algorithm that allows us to approximate the minimally entangled state.
The idea is the following. We use the method I just introduced, the bond-wise time evolution, and we follow each Trotter step by acting with a row of disentanglers. In particular, we minimize the second Renyi entropy, S_2 = -log Tr(rho_red^2). In the schematic I have shown before, we take our purified state, act with a two-site gate performing the time evolution, our U_time, and then iteratively try to remove as much entanglement as possible with some disentangler U_dis. We find it by numerically, iteratively minimizing the second Renyi entropy, and the reason we choose the second Renyi entropy is that we can calculate it nicely in a closed form.
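For a bipartition, Tr(rho_red^2) is just the fourth-power sum of the Schmidt values, which is why S_2 has such a convenient closed form; a quick sketch of my own (`renyi2` is a made-up helper name):

```python
import numpy as np

def renyi2(psi, dim_left):
    """Second Renyi entropy S2 = -log tr(rho_red^2) of a left/right cut,
    from the Schmidt values: tr(rho_red^2) = sum_a lambda_a**4."""
    lam = np.linalg.svd(psi.reshape(dim_left, -1), compute_uv=False)
    return -np.log(np.sum(lam**4))

prod = np.kron([1.0, 0.0], [1.0, 0.0])               # two-qubit product state
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # maximally entangled pair

assert np.isclose(renyi2(prod, 2), 0.0)       # no entanglement
assert np.isclose(renyi2(bell, 2), np.log(2)) # maximal for two qubits
```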
In particular, we are interested in calculating Tr(rho_red^2); minimizing S_2 means maximizing Tr(rho_red^2). We can calculate it the following way: we cut out the block we are actively working on and write the state in a mixed representation, where everything to the left of this blob is expressed in terms of the Schmidt states to the left, everything to the right in terms of the Schmidt states to the right, and the local states j and j prime, that is j^Q and j^P, appear explicitly. From this, I can write down the density matrix rho. Yes? I have a question: for the first Trotter step on the physical indices, do you already do a truncation there, or do you keep it exact? At this level I keep it exact: I just apply this gate, which increases the bond dimension at this bond, and then I choose the disentangler for it. Mm-hmm? Can I also ask a question: suppose you do not apply any of these disentanglers and just try to evolve; what is going to happen? Does it work, or does something break down? Oh, it works; I am going to show some data in a minute, and then you will see why it is favorable to do something on these extra indices. Frank, how do you define this rho? Is this rho the previous rho? Oh, this is the reduced rho that I want to calculate. But how do you define it? Well, thanks for pointing this out: it is the reduced density matrix relevant for the entanglement of this one-dimensional chain; I do a bipartition in such a way that I cut my system into a left part and a right part here. Good. Well, given that time is passing by, let me just quickly draw this picture, just for amusement.
So this is what we have now: the density matrix of the full state, |psi><psi|. To get the reduced density matrix for this bipartition, I trace out the right part, and here I insert the unitary that I want to optimize. Drawing this gives the reduced density matrix for the bipartition shown. But I want its square, so I take two copies and multiply them; this is rho_red times rho_red, and I get the trace by contracting the remaining indices. So this is Tr(rho_red^2), the expression I want to maximize. There are different ways to do this: I can use a gradient descent method, or, to optimize with respect to the constraint that the disentangler be unitary, one can use a trick where one calculates the derivative with respect to U and then takes a polar decomposition. Modulo some technical details, we then have an algorithm that converges relatively well to maximize this quantity, and thereby minimize the entropy. And now let me come to some of the results,
numerical results that we get by applying this algorithm. So the first part will be on
cooling down to a purified state. As I pointed out before on the board, we can simply start from an infinite-temperature state, which is just a product state and does not have any entanglement, and then we want to cool it down. Ideally, what we want
is to find a state where the ground state has a particular entanglement in this direction, and no entanglement between the ancilla degrees of freedom for the zero-temperature state. Now, if we just do it numerically with a thermofield double,
so we just do what you've suggested: we take this infinite-temperature state and cool it down by acting only on the physical degrees of freedom. Then we get this red line: we start at zero entanglement, and entanglement is built up
both between the physical degrees of freedom and the ancilla degrees of freedom. What we then find is twice the entanglement of the actual ground state, because the result is a direct product of the ground state on the physical degrees of freedom and the ground state on the ancilla degrees of freedom. If, however, we use this technique to iteratively minimize the entanglement,
then we get this purple, or bluish, line. At first it looks quite similar to what we get without doing anything, but then the entanglement is gradually reduced, and eventually we find exactly the entanglement of a single copy of the ground state.
We also see that this does not happen very continuously, which means that the convergence here is not perfect: the algorithm gets stuck for a while, and the entanglement is only removed a little later, so this can clearly use some further improvement, I guess.
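To make the optimization described above concrete, here is a minimal numerical sketch; this is not the speaker's actual code, and the toy state (a two-site "ground state" doubled onto two ancillas, with made-up Schmidt weights 0.8 and 0.2) and all names are illustrative assumptions. It maximizes Tr(rho_reduced^2) for the left/right cut over a unitary acting on the ancilla pair, by computing the environment (the derivative with respect to the conjugate of U) and replacing U with the unitary factor of its polar decomposition, iterated:

```python
import numpy as np

def purity(psi):
    """Tr(rho^2) for the cut (p1,a1) | (p2,a2); psi is indexed (p1,a1,p2,a2)."""
    m = psi.reshape(4, 4)
    rho = m @ m.conj().T                  # reduced density matrix of the left half
    return float(np.real(np.trace(rho @ rho)))

def apply_ancilla_unitary(psi, U):
    """Act with a 4x4 unitary U on the (a1, a2) indices only."""
    t = np.transpose(psi, (1, 3, 0, 2)).reshape(4, 4)   # rows = combined (a1,a2)
    out = (U @ t).reshape(2, 2, 2, 2)                   # now ordered (a1,a2,p1,p2)
    return np.transpose(out, (2, 0, 3, 1))              # back to (p1,a1,p2,a2)

# Doubled "ground state" |g>_phys (x) |g>_anc, as after thermofield-double cooling.
g = np.zeros((2, 2)); g[0, 0] = np.sqrt(0.8); g[1, 1] = np.sqrt(0.2)
psi = np.einsum('ac,bd->abcd', g, g)                    # indices (p1, a1, p2, a2)

U = np.eye(4, dtype=complex)
for _ in range(50):
    cur = apply_ancilla_unitary(psi, U)
    m = cur.reshape(4, 4)
    grad = 2.0 * (m @ m.conj().T) @ m                   # dC/d(conj M) for C = Tr(rho^2)
    g_t = np.transpose(grad.reshape(2, 2, 2, 2), (1, 3, 0, 2)).reshape(4, 4)
    psi_t = np.transpose(psi, (1, 3, 0, 2)).reshape(4, 4)
    env = g_t @ psi_t.conj().T                          # environment dC/d(conj U)
    w, _, vh = np.linalg.svd(env)
    U = w @ vh                                          # keep the unitary polar factor
```

In this toy example the purity starts at 0.68^2 (two copies' worth of entanglement across the cut) and the iteration disentangles the ancilla pair, converging to 0.68, the purity of a single copy.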
And why is this nice? If we can actually do this perfectly, so that we don't get artifacts from tails in the distribution of Schmidt values, then in principle the bond dimension that we require for efficiently representing the state is the square root of what we would need without this optimization.
So I don't understand this. There are theorems about the entanglement of low-lying states, right, that say that the entanglement is bounded in one dimension, but you're cooling down from a product state, initially passing through highly entangled states. Right, but there is indeed also a theorem,
or at least an argument, by Thomas Barthel, who argues that finite-temperature purifications can also be efficiently represented in terms of matrix product states. So we will have some maximum of the entanglement in between, where we have basically this kind of crossover
from classical fluctuations to quantum correlations, but this maximum will always be at a finite value. And it doesn't matter if your final ground state is critical or something?
Well, now I'm not a hundred percent sure that what I'm saying is correct, but what I suspect is that if we have a critical system, then we will only have a logarithmic divergence of the entanglement entropy at zero temperature,
and at any finite temperature it will actually obey an area law. I suspect that this is true, at least for one-dimensional systems. For the? The entropy of the reduced density matrix? No, the entropy of the purification.
No, this is true for the reduced density matrix; but for the purification, I think what I just said is true. Good. This is what I want to show next. Let me also say I have no idea whether there is something useful in this information,
but there is a particular way in which this entanglement is reduced. Here we start from a ground state and gradually reduce the entanglement; this plot shows the structure of how the entanglement is removed. It builds up some sort of cone structure, and this is actually for a state at criticality.
I find it interesting, though I don't exactly know how to interpret it, that it takes the longest to remove the entanglement from the center bonds. Good. So this is all that I want to say for steady states,
and then I'll come to these quenches; this again shows how the method helps. We do the following experiment: we start from an infinite-temperature state and act on it with a non-unitary operator. If we were to act with a unitary operator on an infinite-temperature state,
then of course there would be no entanglement growth in our optimized state; it would always stay zero, because we can always find a unitary that completely undoes whatever the unitary time evolution is doing. But if something non-unitary is involved, then we cannot completely remove the entanglement. And now we compare: this is the entanglement growth
that we would get without doing anything on these ancilla bonds. This is the entanglement that we get from a backward time evolution, the idea proposed by Karrasch and collaborators, and if we use the optimized form, this purple line is what we get.
So the entanglement is strongly reduced compared to what we would get from the other methods. That's the good news. The bad news is shown in this plot, which shows the growth of the bond dimension. In particular, we do the following: we run the simulation with a very large bond dimension, so that the time evolution is exact,
and then we see how much we can reduce the bond dimension with a certain truncation. It actually turns out that we only gain at short times; at long times, not really. The reason is that the Schmidt spectrum that we truncate develops some tails,
and this is because the Rényi entropy might not be, or certainly is not, the optimal cost function to minimize for reducing the number of states that we need here. So here we are actually still experimenting, trying different ways to do the disentangling.
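The problem with such tails can be illustrated with two toy Schmidt spectra of comparable Rényi-2 entropy; the numbers are made up for illustration and not data from the talk. A spectrum dominated by a few large values but carrying a long flat tail needs far more states for a fixed discarded weight, even though the Rényi-2 entropy barely notices the tail:

```python
import numpy as np

def renyi2(p):
    """Renyi-2 entropy of a probability (Schmidt-weight) distribution."""
    return float(-np.log(np.sum(p**2)))

def states_needed(p, eps):
    """Smallest number of Schmidt weights keeping the discarded weight below eps."""
    q = np.sort(p)[::-1]                     # largest weights first
    return int(np.searchsorted(np.cumsum(q), 1.0 - eps) + 1)

flat = np.array([0.5, 0.5] + [0.0] * 98)                      # no tail
tail = np.concatenate([[0.5, 0.4], np.full(98, 0.1 / 98)])    # long flat tail
```

Both spectra have Rényi-2 entropies within a few tenths of each other, yet for a discarded weight of 1e-6 the tail-free spectrum needs 2 states and the tailed one needs on the order of 100, which is why minimizing the Rényi-2 entropy alone need not minimize the bond dimension.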
All the data that I'm showing here is for a simple transverse-field Ising chain with a longitudinal and a transverse field. Lastly, I want to apply these ideas to a disordered system.
So I now switch from the Ising chain to a Heisenberg model: a nearest-neighbour spin-1/2 Heisenberg model with a disordered longitudinal field, where we choose these disordered fields from a uniform distribution between minus W
and plus W. This model is known to exhibit a many-body localization transition,
and the critical value of W is approximately 3.5 J. And then these plots here show the spatial entanglement distribution
of the purified state. This is what we get without any disentanglers: the entanglement grows rather uniformly and linearly throughout the entire system, independent of the disorder. This is the clean case, this is a weakly disordered case,
and this a strongly disordered case. But if we now turn on the disentangler, doing the same simulation but removing the entanglement, we get this picture. It turns out that there is a light cone inside which the entanglement in this purified state cannot be removed. In effect, for the clean case we see
a nice linear light cone, while in the strongly disordered regime we instead find a logarithmic light cone. And this is exactly compatible with the slow, logarithmic entanglement growth that one actually finds in these Hamiltonians.
I found it quite neat how the algorithm finds exactly this logarithmic growth, or is at least compatible with it. Yes? So this was at infinite temperature? This is at infinite temperature, yeah.
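The disordered model described above can be written down directly for small chains; this is a dense-matrix sketch at exact-diagonalization scale (function names, chain length, and the random seed are illustrative assumptions, with the coupling J set to 1):

```python
import numpy as np

# Spin-1/2 operators.
sx = np.array([[0, 1], [1, 0]]) / 2.0
sy = np.array([[0, -1j], [1j, 0]]) / 2.0
sz = np.array([[1, 0], [0, -1]]) / 2.0

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    mats = [np.eye(2)] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_disordered(L, W, rng):
    """Nearest-neighbour Heisenberg chain with fields h_i uniform in [-W, W]."""
    h = rng.uniform(-W, W, size=L)
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        for s in (sx, sy, sz):
            H += site_op(s, i, L) @ site_op(s, i + 1, L)
    for i in range(L):
        H += h[i] * site_op(sz, i, L)
    return H

H = heisenberg_disordered(6, 3.5, np.random.default_rng(0))
```

The Hamiltonian is Hermitian and conserves the total magnetization, which is also a quick sanity check on the construction.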
You have infinite temperature, and you apply a local quench by flipping a spin? Yes, as shown here: I just take my system and act with a non-unitary operator, like an S-plus operator, on a physical spin.
And then I look at the perturbation caused by this spin flip. So you can clean up a lot of entanglement outside this cone? We can only remove, right. But can you completely remove it? Outside the cone, we can completely remove it, as we would expect, right?
I mean, that would also be immediately clear from the brick-wall structure of these unitaries: whatever is away from this cut, we can completely remove. From this we would always get a light cone, but it turns out that if the system is localized, we can actually clean out even more.
That was precisely the intention of my question. So this cleanup that you can do, is it given by the structure of the circuit? No, it's given by the physics, by the physical speed at which information propagates. Because this is what you see here, and that's why I'm telling you this. Right, but in the first case,
you have propagation, so the circuit will have its own causal structure? Yes. And then you have the physical propagation? Right, but what we see is the physical one. So this was not given by the structure? No, the circuit is the same in all four cases here. Right, but in the first case, the circuit was not already giving you this causal cone;
it was slightly different. Oh yeah, of course. I mean, the structure of the circuit will actually depend on the time step. I think we used a time step of about 0.1 or something like that,
so the circuit would have already extended far beyond what we see. So there's something I didn't get. This circuit that you're using to disentangle, do you optimize it locally in time or globally in time? We tried out different ways, but for the data that you see here we just apply
one layer of real-time evolution, and then we do one layer of disentanglers on the ancilla. So it's just one time step in a sense?
Right, we do one time step in physical time, and then one disentangling sweep. But it doesn't really matter so much; we could also do a number of time steps and then remove the entanglement. The picture would then look different, though, because if we only disentangle
every nth step, the entanglement would again grow away from here. But then, if the goal is minimizing the final chi, the final bond dimension of the MPS, shouldn't the time-dependent variational principle already be optimal in this respect? Well, if you do this optimization, it is local in time.
If you do it globally, I can believe that it can be better. In fact, one thing that we thought about, but haven't implemented very carefully, is the following. If we have these purified states and use the time-dependent variational
principle, how would we implement it? We would act on the physical degrees of freedom with the real-time evolution. But the question is how to act on the ancilla degrees of freedom. One thing that we thought about, but haven't carefully implemented, is to define some sort of entanglement
Hamiltonian, which is basically the gradient towards the low-entanglement states, and then to use the time-dependent variational principle. But if we just use the time-dependent variational principle on the ancilla degrees of freedom, I think we would get a picture similar to what we see here.
In fact, I think this is all that I wanted to say.
Looking at this last plot, is this a different way to look for the many-body localization transition in large systems? Right, one thing that we haven't done very carefully yet: the reason I think this is nice for diagnosing many-body
localization is that we can also cool down. This plot is for infinite temperature, which basically includes all states, but we can also cool down to lower temperatures and then do this experiment, and we might then diagnose even the many-body mobility edge.
But there are even some predictions on its precise location; it can even be in the middle of the spectrum, where you could think of the temperature as corresponding to the middle of the spectrum. Yeah, it might give slightly better data than what we have seen before, but still, the problem is that if we are in
the regime where the system is not many-body localized, even the entanglement that we have within this cone is significant: the entanglement entropy still grows linearly with the size of the light cone,
which means that the number of states that we need to keep still grows exponentially with the linear size of the light cone. Yeah, it's not a complete solution, but it might still reach sizes beyond what exact diagonalization can do.
No, no, I do agree. I have hopes that this can push us a bit further than what we did before, but we haven't pushed the limits yet.