Avatars and motion capture for virtual fit fashion
Formal Metadata

Title: Avatars and motion capture for virtual fit fashion
Title of Series: re:publica 2016
Part Number: 177
Number of Parts: 188
License: CC Attribution - ShareAlike 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifier: 10.5446/20561 (DOI)
Transcript: English (auto-generated)
00:25
Okay, hello, good afternoon. Thanks for the introduction, and apologies for the slight technical mishap there. I'm Andrew, and this is my colleague Anthony. We're going to talk about applications for virtual fashion.
00:42
So first of all, to give you a little bit of an overview of body scanning technology: the technologies that we've worked with use infrared depth sensors, similar to what you'd find in a Microsoft Kinect. The scanner operates in something the size of a changing cubicle. It takes
01:03
multiple scans from multiple different sensors of the subject. The whole process takes less than 10 seconds to run, and the output from that is point cloud data, which is essentially a number of three-dimensional coordinate points.
01:23
Along with that, the software that ships with the scanners gives a list of body dimensions, body measurements, and in fashion applications this is used for body shape analysis,
01:40
clothing size standards research, product development and things related to that. The software that produces the measurements also uses them to reproportion a pre-made avatar. Again, that's used for the same sorts of applications.
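As a hedged illustration of what that point cloud output looks like in practice, here is a minimal sketch of how a girth measurement might be approximated from raw scan points. It assumes NumPy and SciPy, a cloud held as an N x 3 array, and an illustrative slice height and slab width; it is not the scanner vendor's actual algorithm.

```python
# Minimal sketch (not the scanner vendor's actual algorithm): approximate a
# girth measurement from body-scan point cloud data. Assumes the cloud is an
# N x 3 NumPy array of (x, y, z) points in metres, with z pointing up.
import numpy as np
from scipy.spatial import ConvexHull

def girth_at_height(points, height, slab=0.01):
    """Approximate the body circumference at a given height (metres).

    Takes all points within +/- `slab` of `height`, projects them onto the
    horizontal plane, and sums the edge lengths of their 2D convex hull.
    """
    band = points[np.abs(points[:, 2] - height) < slab]
    if len(band) < 3:
        raise ValueError("not enough points at this height")
    ring = band[:, :2][ConvexHull(band[:, :2]).vertices]   # hull vertices in order
    edges = np.diff(np.vstack([ring, ring[:1]]), axis=0)   # close the loop
    return float(np.linalg.norm(edges, axis=1).sum())

# Illustrative check with a synthetic cylinder of radius 0.15 m (girth ~0.94 m).
theta = np.random.uniform(0, 2 * np.pi, 20000)
z = np.random.uniform(0.0, 1.8, 20000)
cloud = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), z])
print(round(girth_at_height(cloud, height=1.0), 3))
```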
02:01
You can see the accuracy of the representation above, using two different scanners, one being the Size Stream and one being the TC2. The right-hand side is a solid representation of the point cloud data;
02:21
the figure to the left is the representative avatar. What you will see is that, with the reproportioning and the application to the avatar, whilst the body measurement points are accurate, the overall
02:41
shape of the figure is not necessarily a true representation. The problem with the point cloud data is that it's quite noisy. When we're looking at animating an avatar, one important consideration is the topology of the 3D model.
03:04
You can see on the left-hand side there the physical representation of the point cloud data, and you can see that noise in terms of its topology. When I talk about topology, we're talking about the anatomy of what makes up the 3D model
03:21
and its different points. This is really important when we come to further animate that model, as that topology defines the deformation points. Hard surface modeling is a very different workflow to
03:41
modeling for an organic, deformable mesh. So as you can see, the reconstruction of the point cloud data is noisy, which makes it quite unusable for animation. What we have there is the shaded view of it, without the topology lines on.
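The speakers do this clean-up and retopology work inside 3D modeling packages; purely as a sketch of the same noise problem in code, and assuming the Open3D library and a hypothetical scan.ply file, a raw scan might be denoised and roughly reconstructed like this. The result still has arbitrary triangulation, not an animation-ready topology, which is exactly the gap retopology fills.

```python
# Illustrative only: denoise a raw body-scan point cloud and reconstruct a
# rough surface with Open3D (an assumed library choice, and "scan.ply" is a
# hypothetical file). The output mesh still has arbitrary triangulation,
# i.e. it is NOT yet an animation-friendly topology; retopology adds that.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# Drop isolated noisy points: anything unusually far from its 20 nearest neighbours.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson reconstruction needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Turn the oriented points into a triangle mesh and save it.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("scan_rough_mesh.ply", mesh)
```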
04:00
Something that's been integrated with 3D modeling applications is a retopology mechanism and toolset. The model that you can see on the right-hand side is a retopologized version of the point cloud data, and
04:21
the difference between that and what you've seen on the previous slide, with the reproportioned avatars, is that you get a much truer representation of that point cloud data, of the actual scan of the subject. So being able to retopologize and define where those
04:42
lines of deformation are going to be makes for a much more usable mesh for animation. In terms of texturing, what we've seen so far is a sort of gray, untextured model. What we're able to do is
05:01
texture a 3D model, and the workflow with that involves breaking up and unfolding the 3D model onto a two-dimensional coordinate space, which you can see down here in the left-hand corner. Applied to the 3D model, this enables us to take a 2D image
05:24
and apply that to the 3D model. So what you can see there is a painted texture, using digital paint packages to colorize the texture and apply it to the 3D model.
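The unwrapping itself happens in a 3D package, but as a small sketch of what applying a 2D image through UV coordinates means, here is how per-vertex colors could be looked up from a texture. NumPy, Pillow and the texture.png file name are assumptions for the example, not part of the speakers' pipeline.

```python
# Sketch of what a UV texture lookup does (the unwrapping itself is done in a
# 3D package): given per-vertex UV coordinates in [0, 1], sample each vertex's
# colour from a 2D texture image. Pillow and "texture.png" are assumptions.
import numpy as np
from PIL import Image

def sample_texture(uvs, texture_path):
    """Nearest-neighbour lookup of an (N, 2) UV array into an RGB texture."""
    img = np.asarray(Image.open(texture_path).convert("RGB"))
    h, w, _ = img.shape
    u = np.clip(uvs[:, 0], 0.0, 1.0)
    v = np.clip(uvs[:, 1], 0.0, 1.0)
    cols = np.round(u * (w - 1)).astype(int)
    rows = np.round((1.0 - v) * (h - 1)).astype(int)   # v = 0 is the bottom row
    return img[rows, cols]                             # (N, 3) per-vertex RGB

# e.g. vertex_colours = sample_texture(mesh_uvs, "texture.png")
```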
05:40
As a development of that, we could use photographs of the subject and apply that image-based data onto the 3D representation, enabling a much more realistic and true representation of the subject that's been scanned. In terms of cloth texturing, and developing a 3D cloth model and simulation,
06:08
we can use that same process. So we can model a dress around a subject, or we can take an image of a flat textile that can be made into a 3D model,
06:24
and we can texture that and clothe the 3D model in the cloth. In terms of then animating that, we need to be able to develop physically accurate representations of how that cloth would behave,
06:42
thinking in terms of different textile properties, how they react to their environment, and how that cloth would appear in terms of virtual fashion. We can look at a couple of videos of the output of that, if we can get that up.
07:00
In the video that you see there, we've taken the 3D model of a garment and applied a breeze to it, so you can see how the cloth would simulate under various conditions, and how different textile properties would react in an environment.
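The garment simulations in the talk come from a full cloth solver inside a 3D package; as a hedged, toy-scale sketch of the underlying idea, here is a small mass-spring cloth stepped with Verlet integration under gravity and a gentle breeze. The stiffness and damping constants loosely stand in for the textile properties mentioned above, and every value here is illustrative.

```python
# Toy cloth sketch, not the solver used in the talk: a rectangular grid of
# particles joined by springs, stepped with Verlet integration under gravity
# plus a gentle "breeze". Stiffness and damping loosely stand in for the
# textile properties mentioned above; every constant here is illustrative.
import numpy as np

W, H = 20, 20                    # grid resolution
rest = 0.05                      # rest length between neighbours (m)
grid = np.stack(np.meshgrid(np.arange(W), np.arange(H), indexing="ij"), axis=-1)
pos = np.concatenate([grid * rest, np.zeros((W, H, 1))], axis=-1).astype(float)
prev = pos.copy()
pinned = pos[:, -1].copy()       # pin the top edge, like a curtain on a rail
gravity = np.array([0.0, -9.81, 0.0])
dt, damping, stiffness = 1.0 / 60.0, 0.02, 0.9

def satisfy_springs(p):
    """Nudge horizontally and vertically adjacent particles toward rest length."""
    for a, b in ((p[:-1], p[1:]), (p[:, :-1], p[:, 1:])):
        d = b - a
        length = np.linalg.norm(d, axis=-1, keepdims=True) + 1e-9
        corr = 0.5 * stiffness * (1.0 - rest / length) * d
        a += corr
        b -= corr

for frame in range(300):         # five simulated seconds at 60 steps per second
    wind = np.array([0.0, 0.0, 2.0 * np.sin(frame * dt)])   # breeze along z
    pos, prev = pos + (1.0 - damping) * (pos - prev) + (gravity + wind) * dt * dt, pos
    for _ in range(5):           # constraint-relaxation passes
        satisfy_springs(pos)
        pos[:, -1] = pinned      # keep the pinned edge fixed

print(pos[W // 2, H // 2])       # where the centre of the cloth ended up
```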
07:23
If we then apply that and overlay it on the avatar of the subject, in terms of virtual fashion, we can have a look at that. Okay, so what we have here is a rendered output
07:47
using the textured model, the textured representation of the scanned subject, overlaid with a cloth simulation. This enables us to view what that gown would look like on the individual under various lighting conditions,
08:04
using different gamma properties. So it opens up the virtual fit fashion experience, and my colleague Ant is going to talk later about the different technologies involved in the application of that directly.
08:26
So in terms of the cloth simulation, we've seen it applied to a static avatar that's not being animated. What we would like to be able to do is combine that
08:41
with motion capture technologies, to be able to animate the avatar as well as the cloth simulation. There are many different technologies for motion capture: Kinect systems, sensor-based systems, optical camera-based systems, all with different pros and cons.
09:03
What many of the technologies do allow for is real-time visualization, as you can see in the image there. It allows you to view in real time the output from the motion capture data, which we can store in data files and save for a later application.
09:22
So in terms of personalizing an avatar, as well as getting the body dimensions and the shape and the texture in, we can also record a subject's motion, store that, and apply it to an animated avatar, enabling us to really personalize that animation and that subject.
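Real capture systems store this data in their own formats (BVH, FBX and the like); purely as a minimal illustration of "record the motion, store it, apply it later", here is a sketch that writes per-frame joint rotations to a JSON file and replays them through simple forward kinematics on a two-bone arm. The joint names, angles and bone lengths are hypothetical.

```python
# Minimal illustration of "record the motion, store it, apply it later". Real
# capture systems use their own formats (BVH, FBX and the like); the joint
# names, angles and bone lengths here are hypothetical.
import json
import numpy as np

def record(frames, path="capture.json"):
    """frames: a list of {joint_name: rotation_in_degrees} dictionaries."""
    with open(path, "w") as f:
        json.dump(frames, f)

def replay(path="capture.json"):
    with open(path) as f:
        return json.load(f)

def wrist_position(shoulder_deg, elbow_deg, upper_len=0.30, fore_len=0.25):
    """Forward kinematics for a planar two-bone arm; returns the wrist point."""
    a = np.radians(shoulder_deg)
    b = np.radians(shoulder_deg + elbow_deg)
    elbow = np.array([upper_len * np.cos(a), upper_len * np.sin(a)])
    return elbow + np.array([fore_len * np.cos(b), fore_len * np.sin(b)])

# "Capture" 120 frames of a waving arm, store them, then replay them later.
captured = [{"shoulder": 20.0, "elbow": 60.0 + 30.0 * np.sin(i / 10.0)}
            for i in range(120)]
record(captured)
for frame in replay():
    wrist = wrist_position(frame["shoulder"], frame["elbow"])
    # ...in practice these values would drive the avatar's arm bones...
print(wrist)
```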
09:42
Applied to a cloth simulation, you can then see what a subject would look like in a particular garment. For the application of virtual fashion and visualization, it opens up the avenue for animated fashion shows, enabling clients to reach a global audience, and
10:07
opening up the possibilities for a virtual reality experience, which Ant is now going to talk about. Thank you very much. So hello, I'm the other speaker. My name is Anthony, and
10:20
so where does what Andrew has been talking about fit with regards to virtual fit technology? I'm going to briefly go through the state of play of virtual fit technology as it is currently, talk about the successes, talk about its current limitations and what problems those pose, as well as the possibilities and how we see this evolving over the coming years, over the next five to ten years or so, for the
10:46
future of what could potentially be real virtual fit experiences. So, looking at virtual fit as it exists currently, there are a lot of different options out there; this is just a small sample of what exists currently on the web, and
11:07
the first thing you'll notice is that none of them necessarily take information from body scan data that's been retopologized, with properly developed cloth placed on top, like Andrew was just talking about, which would be the ideal solution. In an ideal world we'd be able to do that, but none of these really do.
11:26
It's a great step from what was possible a few years ago; all these web integrations, app-based integrations and mobile integrations are great, but what we'd really like to be able to do is provide a more personalized, more in-depth, more technical experience, while keeping it easy for the user to use.
11:45
As I said, we've come a long way with regards to the original idea behind virtual fit fashion, and this idea isn't new at all; going back 15 years, the exact same idea existed,
12:01
but it was incredibly limited by technology. Technology moves along in leaps and bounds, and what we're talking about here is CPU power and GPU power in particular. So we can do wonderful virtual fit fashion given, you know, a week's notice and the body scan and all this information,
12:20
and we can make something that looks essentially real, given enough computing power, but you need a week to do it. What we need is to be able to do that in real time, and for someone to be able to interact with those clothes in real time and see how they would actually fit. But unfortunately we're still limited by technology. As I said, the idea has been around a long time; this is a very old implementation
12:46
from around 2000, 2001, in a bit of software called See Me, which is essentially the same idea we're still trying to achieve now: you put in some approximate measurements, drag and drop some clothes over the top, and there you go, you're good to go.
13:02
What it was was a set of automatically generated JPEGs that you simply clicked through; quite good at the time, but very poorly rendered by today's standards, and away you went. It's quite a poor implementation, but it was semi-revolutionary for the time. It looks horrendous, but we are going back 15 years.
13:22
The point is, we were limited by technology then and we're still limited now. While all the implementations that exist currently are good, they exist in the format they do because there's still not enough computing power to do what we'd like to do. So if you look at these again,
13:41
I'll just highlight some of the technologies they use. Primarily, a lot of these use WebGL technologies, and there's only one that really uses even a very basic 3D avatar, which is this one at the bottom here. But there are a couple of very good ideas and very clever bits of software in here as well. If we look at the one at the bottom right, for example, this uses a Kinect sensor, which is a
14:06
depth camera. So at the far bottom right we have one of these showpiece-type technologies: the person goes into the store, they stand in front of the TV screen, the Kinect picks them up, and it creates a very coarse, rudimentary 3D geometry.
14:24
They drag and drop, using gestures, which clothes they want to try on, and they move around, and what they get is a reasonably poor look at the clothes they're wearing appearing on the screen. The idea is very good, but unfortunately we can't simulate the movement of cloth fast enough, at a high enough resolution, for that to work.
14:43
And the computer has to be able to read in the data based on the person's motion, which it can't do at a high enough rate to be able to interpret the information, pass it to the cloth simulation and then display it in real time on the screen; that's very difficult to do. Another very clever bit of work that's been done is with this
15:02
mirror, which is very similar technology to the Kinect implementation, but its focus is more on being able to alter in real time the patterns and the colors in a mirror environment. And with the way things are currently going, with every single normal thing in your house becoming 'smart', in inverted commas,
15:21
it might be that the eventual solution is a smart mirror. That's in 15 years' time, and I do mean at least 15 years; this won't be a 10-year thing. The eventual solution may be a smart mirror whereby information is fed through to your home from retailers, and you do the whole gesture thing and see it there; a very Minority Report, sci-fi type,
15:42
futuristic vision for things. But that is a long way away; that may be the end output, and what we're interested in is what technology we can use now in order to potentially shape this space. The drive behind the current implementations is to improve the service,
16:05
reduce return rates, and, essentially, just by putting in approximate measurements, give people a reasonable idea, and provide a bit of wow factor for the store, for the brand in question, things like that. That's their aim, especially for the in-store implementations; it's for the wow factor,
16:22
for 'this is a very expensive piece of kit, and isn't it wonderful', that sort of thing. It's kind of a halfway house between online and in-store shopping. What if we could take technology from a different industry, technology that's been developed over a long period of time and already has an awful lot of characteristics we can use, in order to enhance the experience that we're after for this sort of work?
16:45
That industry in particular is the gaming industry. So this is the perfect place to be talking about this, and I hope everyone isn't VR'd out at this point, but what if VR were to take off? Now, I'm not saying it will; I actually don't think it will this generation. There was a wonderful talk earlier;
17:02
I don't know if anyone sitting here saw it. I missed her last name, unfortunately, but she's an entrepreneur by the name of Anna. She was talking about VR implementations for, you know, 360-degree views inside stores, where you can pick out items and all of that, though not virtual fit. But she also said that she was a bit hesitant about virtual fit, and I don't blame her, purely because of what I've just talked about:
17:25
it's not there yet. But what if high-end VR ended up becoming the norm? There are an awful lot of talks going on elsewhere here about content creation; there were a lot yesterday talking about VR content creation, how we generate that content and how we make that content accessible
17:45
in order for it to appeal to the mass market, and that's the key thing. The second key thing is that while there may be a lot of the, in inverted commas, 'cheap' VR headsets due to be sold (there's a forecast that 200 million of them are due to be sold by 2020), in reality a
18:04
lot of those will be the low-power ones, the mobile phone headsets, and even though power has come on in leaps and bounds over the last five years, we still won't be able to do this with those. What we need is high-end PCs, and for that to take off we need high-end VR to take off, which requires
18:23
media content creators to get behind it in a big way. So I don't think it will this generation; we're talking five, ten, fifteen years down the line. But in a future generation, the next set of Oculus headsets, the next set of OSVRs, the next set of HTC Vives, that's when it will probably gain a foothold,
18:42
because it's not quite ready yet; the power is not quite there. A good analogy to use is the difference between the PDA and the smartphone. The PDA had all the right ideas; it was everything you could do with a smartphone, but the technology wasn't there yet. It wasn't quite powerful enough to be able to do it, until we could get a smartphone to work, and it worked brilliantly and then it took off. Unfortunately the PDA got completely left behind, and took some companies with it, but that's a bit of history.
19:05
But, you know, in a couple of generations' time this is what we can use. What if, in real time at 30 frames a second, we could have a store presence, a virtual environment that had the fidelity of something like that?
19:21
That looks pretty good, and it's actually a year old, from an artist called Benoit, a very good Unreal artist, a 3D artist, and it runs in real time at, you know, 60 frames a second. This is the kind of fidelity we can get now in a VR headset. So if we can create a store looking like that, and we can move around it looking like that, and we have
19:44
headsets proliferating into households. And it is a big if: it does require a particular segment of the market to have these headsets within their households, the particular demographic that would be interested in this sort of work and in this sort of thing in terms of the VR
20:01
experience and virtual fit. It needs to be in their households in order for this to work. But we can get this level of fidelity now in a VR headset, rather than the current web implementations of WebVR. So, to be honest,
20:22
and this is of course just my opinion, it's not really a question of if or how, but when this will happen; once we have the power, it will happen. Leaps and bounds are happening all the time, especially in terms of GPU power. We're not very far away in terms of getting exceptionally high-quality real-time cloth simulation, which is kind of the crux of where this sits; it's the apex of whether this will work or not.
20:45
As I say, cloth simulation is the most difficult thing here. We need it running in real time at very high resolution. It can run very well in real time at a reasonable resolution, which works for far-away camera shots, but when you're looking at how you want garments to sit
21:01
on what is essentially your body, if we're talking about body measurements, it needs to be exceptional. It needs to be what currently takes a few days, or a few hours, to simulate, but running in real time at at least 30 frames a second. And the other problem when we talk about avatars, especially in the virtual reality space, is the problem known as the uncanny valley,
21:21
which I'm not sure if everyone's familiar with, but it's this idea that if something's not quite right in a virtual space, and you're aware that it's virtual, you're aware that it's not real, it can cause a sense of repulsion. That's not something you want your users hitting if you want them to buy clothes; you want them to enjoy the experience, and you do not want this sense of repulsion.
21:43
So it's this question of what do you do? Do you make this perfect avatar (and it would have to be perfect; the person would have to be unable to tell whether it's them or not, which is nigh-on impossible to achieve completely), or do you go for a generic avatar, or nothing at all?
22:01
Do you completely remove the 3D mesh and just focus on how the clothes would fit on a person's body, which would be easy to do? That's the kind of complication we're looking at with this sort of thing. But there are potentially huge advantages. It could be a very close replication of the shopping experience, if we wanted it to be, with a very quick, very easy update process.
22:20
Someone could create an entire store in ten years' time and change the entire layout of it from one day to the next, and you can have direct contact with your consumer, telling them the store's been updated, there are new clothes in. You can get there at any time of day; it's always open, and there's no problem getting there. And because of gaming technology, the netcode is already there and available to allow us to do this, so you could actually
22:44
have the whole experience with other people there at the same time. You can choose to shop with other people or without other people; you can have the shop empty if you wanted to, or see everyone else who's there if you wanted to. And there's also the added advantage, with this sort of work, with regards to people that may want to experience this sort of thing
23:00
but can't, because they may struggle with disabilities or with mental health issues; they may have social anxiety, for example. People may not want to go into stores, people may not even like leaving the house; we're talking about agoraphobics, things like that. So there are potential advantages for that sort of thing as well. And there's potentially higher spend: gaming has been experimenting with this for a few years with regards to
23:24
introducing monetization; you can get people to spend a higher amount of money, as it were, on sort of non-tangible things. Now, you will miss the touch and the feel of it; that's what we can't quite achieve yet, but
23:41
that would have to come if we created it and tested it, and we'd have to see over many years what exactly happened in terms of spending and expenditure. But it'd be very interesting. Like I say, as technology develops, this is the sort of area we can push into, which would be very fun and very entertaining. But yes, that's it. Thank you very much. Are there any questions? I think we've got a couple of minutes for questions. Yeah.
24:06
Yeah, a few minutes for questions, if anybody's got any questions on any of that, or any ideas. Yeah. Sorry, I was looking for the loudspeaker. Thank you. What are you doing exactly with the Kinect? Do you measure the people, and...
24:27
We're actually not using the Kinect; we have some work going on in the department with the Kinect, but that's not our work with the Kinect on there; that's pre-existing work that someone else has implemented. But it's a depth sensor, and
24:40
it generates a very coarse 3D model over the top of the person. They can drag clothes on top of themselves and move around in real time. It doesn't work perfectly, but that's kind of how it works: it's a depth-sensing camera that creates a 3D model of the person. What we use is very similar to the Kinect; it's what I was talking about with the body scanner system. It uses that depth sensing
25:02
just to scan the subject; it outputs the point cloud data, the 3D coordinate points that make up that point cloud. So whilst it's not motion capture and interacting directly in front of a Kinect, it uses the same technology to take a representation of the subject. Yeah, anybody else?
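For readers who want to see concretely what "a depth-sensing camera that outputs 3D coordinate points" means, here is a hedged sketch of the standard pinhole back-projection from a depth image to a point cloud. The intrinsics are illustrative, roughly Kinect-like numbers, and the input file is hypothetical; neither is taken from the speakers' scanner.

```python
# Sketch of how a depth image becomes the 3D coordinate points mentioned above:
# standard pinhole back-projection. The intrinsics are illustrative, roughly
# Kinect-like numbers, not measured values from the speakers' scanner.
import numpy as np

def depth_to_points(depth_m, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Convert an (H, W) depth image in metres into an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]      # drop pixels with no depth reading

# e.g. cloud = depth_to_points(np.load("depth_frame.npy"))   # hypothetical file
```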
25:23
Well then, thank you very much, guys. And now we're going to have a little break, 15 minutes, and afterwards we're going to continue with the future of e-commerce.