
IV FMA 2018 - Work Session II


Formal Metadata

Title: IV FMA 2018 - Work Session II
Number of Parts: 13
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract:
# 05 – “Generating forms via informed motion, a flight inspired method based on wind and topography data”, Demircan Tas and Osman Sumer
# 06 – “Shapes and attributes”, Rudi Stouffs
# 07 – “Generative biomorphism”, Ricardo Massena Gago
Transcript: English(auto-generated)
Good afternoon, I'm Demircan and I'm going to present the work that we have developed during our master studies. Our main goal in establishing this work was reducing the design gap between physical
models, processes, especially digital processes, and spatial data. While doing this, we have followed a twofold approach where our first aim was to create
algorithms and generative systems that are quite abstract in nature to allow for more freedom and speed through the design process. The other part of the system is preserving a very explicit connection between our generative
systems and the spatial data. For the trials, we have used data for wind velocity, which was divided into two dimensions of speed and direction.
And since our focus was on generating design and generating not a single design but a design method, we created our own data. It's quite arbitrary. We just created imaginary stations for wind values and we started from there.
So for the trials, we have used wind data, but in the end, it could be data from anything. It could probably be results of spatial analysis as well. And so our imaginary stations were placed on a grid, and to work with them, we needed raster data. So we needed data for every point in between the stations, and for that, we used the simplest method that we could find. We went for inverse distance weighting interpolation by Donald Shepard.
It's quite an old system and since it was very simple, we preferred it. And using IDW, we came up with raster maps for wind speeds and this is the linear map that we used finally in Maya and this is a more human legible version of it.
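The interpolation step can be sketched in a few lines of Python; the station layout and power parameter below are invented purely for illustration, not taken from the talk:

```python
import math

def idw(x, y, stations, power=2):
    """Shepard's inverse distance weighting: estimate a value at (x, y)
    from scattered stations given as (sx, sy, value) triples."""
    num = den = 0.0
    for sx, sy, value in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:                 # exactly on a station: use its value
            return value
        w = 1.0 / d ** power         # closer stations weigh more
        num += w * value
        den += w
    return num / den

# Rasterise: sample the interpolant on a regular grid of, say, wind speeds.
stations = [(0, 0, 2.0), (10, 0, 8.0), (0, 10, 5.0)]
raster = [[idw(i, j, stations) for i in range(11)] for j in range(11)]
```

Because the result is a weighted average of the station values, every interpolated cell stays within the range of the input data, which is what makes the resulting raster maps directly legible.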
And this is our raster data for wind direction from zero to 360 and this is the legible version. And after generating the data, we carried it into Maya, which is an animation software
and it was preferred throughout the whole system. And in Maya, we used XGen, which is a plugin for Maya that is normally used to create
hair and animate hair in movies, but we kind of repurposed it with minor tweaks. And we used some MEL expressions to connect the orientation of placeholders to the raster data and then we connected their affine scale in the local z-axis to the wind speed data
to set up our basic system. And then we replaced the placeholders with generated models that have shape-changing rules based on the data and then we have interpolated.
These are the different specimens that we have tried and the main limitation here is that the geometric topology has to be solid and it shouldn't change, but we can morph the points however we like and this was quite an intuitive approach.
And then we have interpolated between the rules and so we have created the keyframe animations and this allowed us to create new shapes for any point that we want between
the starting and the ending positions. And then we have connected them to our existing system and this has created a situation where the keyframe of each object was connected to wind speed and their alignment was connected
to the wind direction. And then we have generated an arbitrary elevation model. This is completely made up, but we have used it to try if this system can generate form on these conditions.
And this is the rendering of the situation. And it only began to get interesting when we started playing with the pivot positions of the original geometry. When we changed the pivot position and orientation, it started to become more visible. The main shape was emergent and it certainly did show characteristics of the underlying
data and the designer could play with parameters to shape it to a certain degree. And this is another rendering of the situation.
And what amazed us at this point was that there is not a single parameter here that is random. It's completely taken from the underlying data and parameters are explicit. And then we went on to our second trial and here we have moved on from a morphing
approach to IK, inverse kinematics. Inverse kinematics is commonly used in animation and visual effects. It's actually based on the creation of conceptual skeletons, and we can replace parts
of the skeletons with any geometry that we like and we can animate the end point of the skeleton and the system can interpolate all the in-between parts. And we have combined this with long exposure photography, the concept of long exposure
where we can create form from the motion of the inverse kinematics system. I'd like to quote Bernard Tschumi here. He actually describes something similar for the logic of choreographic movements.
Notation ultimately suggests real corridors of space as if the dancer has been carving space out of a pliable surface or the reverse, shaping continuous volumes as if movement has literally been solidified. And this approach was used before by artists and mainly in context of art, but we didn't
want the designer to shape the thing. We want data to shape the whole thing, but I'll come to that later. But these are our initial models based on the wing motion of birds.
So here we can use these inverse kinematics to create form and the input here is very simple. We just need to animate the end point and when we have the system work on it, we can generate extremely complex forms and this is another example of it.
So we can move the whole shape on a curve and then we can animate the end point to come up with different shapes.
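A minimal sketch of the idea that animating only the end point is enough: for a planar two-joint chain there is even a closed-form solver (the link lengths and target path below are arbitrary illustration values, not the speaker's setup):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-joint skeleton:
    given a target end point (x, y), return the joint angles (t1, t2)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))       # clamp so unreachable targets snap
    t2 = math.acos(c2)                 # "elbow" bend
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def end_point(t1, t2, l1, l2):
    """Forward kinematics: joint angles back to the end-point position."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

# Animate only the end point along a path; the solver supplies every
# in-between joint pose, and sampling those poses over time yields a
# swept, long-exposure-like form.
poses = [two_link_ik(1.0 + 0.05 * k, 0.5, 1.0, 1.0) for k in range(10)]
```

Running forward kinematics on a solved pose recovers the target exactly, which is why the designer only ever has to specify the end point's trajectory.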
These are our first results, and in this case, it's not what we ultimately wanted, because, as I've said before, we wanted data to shape the design, not animation defined in an ambiguous way. And so in this case, we also wanted additional data. We wanted these shapes to be formed on a terrain, and that's why we wanted to work very fast. We didn't survey real terrain, so we went on to create conceptual surfaces from different
materials including plaster and fabric. We just wanted surfaces to have some inner structure, but we wanted to work fast and free.
So we came up with a few models to have our algorithms work on and this is the main one and we have used cloth and some rugs underneath the cloth and after attaining tension on the cloth, we covered it in stretch film and then we poured plaster and bandages
on the form and it was something like this, some kind of testing landscape. And then we used photogrammetry, we shot it from different angles and it was an off the shelf approach, so that was very fast as well.
So we took the photos into the PhotoScan software and we came up with a digital landscape where we could test our algorithms further, and then we carried it into Maya. It had a nature which is much richer than our initial digital elevation model, which was just interpolated random points. This has some implicit properties of the physical material, and this topography alone has supplied us with a good amount of data.
And then our initial test was to just project curves on this and have our system run on the curves and this generated forms like this, but in this case the animation is not connected to anything, it's connected to basically time, so they run the animation in a normal way and then they change their position on the U coordinates of the curve and generate
this form. But coming back again, we want the data to shape it, not curves or user-based input, alone at least. So in our third trial, we went on to try boids, a type of swarm intelligence algorithm put together by Craig Reynolds. This algorithm was initially created for animating flocks of birds and schools of fish. So it's by nature very simple, there are only three rules and we can change these
rules and the results create complex situations. And I will quote Greg Lynn here; he uses in his book, Animate Form, the metaphor of a frisbee, a dog, and the landscape, where, if the dog wants to catch the frisbee, he can't run a linear path; he has to follow the frisbee and change his heading accordingly. We really liked this approach, and I'll borrow his terms: we aim for the designer to throw a frisbee and the data to shape its trajectory, and we didn't want to use a single dog because in AI it's very difficult to create a dog with intelligence, so we used the swarm to solve the situation. And this is one of the results, so here there is embedded wind data, the topography
data of our physical model and the long exposure of our system running on it and these are some videos showing the process, so I'll show three variations.
The good thing is that we can change the three parameters of the boids system and the user can animate them, or these parameters can be connected to underlying data as well, and this is another situation with different parameters.
And what excited us here was the agents are not very intelligent, they are almost randomly trying to navigate the terrain, but one of the rules forces them to go for the average center of the whole swarm and for that reason, if one member of the swarm
finds a solution, the center is drastically moved so the rest of the swarm tries to follow and that allows them to navigate the terrain in a more intelligent way than each member.
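The three rules and the pull toward the swarm's centre can be sketched as a toy 2D simulation; the parameter values, neighbourhood radius, and seed below are arbitrary, and real boids implementations are more elaborate:

```python
import random

def step(boids, cohesion=0.01, alignment=0.05, separation=0.05):
    """One update of Reynolds' three rules: steer toward the swarm
    centre (cohesion), match the average velocity (alignment), and
    push away from close neighbours (separation).
    Each boid is a tuple (x, y, vx, vy)."""
    n = len(boids)
    cx = sum(b[0] for b in boids) / n
    cy = sum(b[1] for b in boids) / n
    avx = sum(b[2] for b in boids) / n
    avy = sum(b[3] for b in boids) / n
    out = []
    for x, y, vx, vy in boids:
        vx += cohesion * (cx - x) + alignment * (avx - vx)
        vy += cohesion * (cy - y) + alignment * (avy - vy)
        for ox, oy, _, _ in boids:          # crude O(n^2) separation
            if 0.0 < abs(x - ox) + abs(y - oy) < 1.0:
                vx += separation * (x - ox)
                vy += separation * (y - oy)
        out.append((x + vx, y + vy, vx, vy))
    return out

random.seed(0)
flock = [(random.uniform(0.0, 50.0), random.uniform(0.0, 50.0), 0.0, 0.0)
         for _ in range(20)]
start_cx = sum(b[0] for b in flock) / 20
start_spread = max(b[0] for b in flock) - min(b[0] for b in flock)
for _ in range(200):
    flock = step(flock)
```

Because the cohesion rule references the average position, one agent finding a good region shifts that average and drags the rest of the swarm after it, which is exactly the behaviour described above.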
And this is how we sampled the data and for the system to work faster, we didn't sample each agent, but we took a sample from their average position center and we
used the data to run the keyframe animation, so the animation is not connected to time, but it's connected to wind speed data and the direction actually changes their trajectory and these are another set of results and the system, by changing different parameters,
creates a huge amount of shapes that we can use for the design process. And then we wanted, if I go back to the beginning, we wanted this system to work on physical models as well because we value the physical modeling part of design and for this
we used video mapping and since our photography was quite accurate, we could re-project the results onto the physical model and this is a video showing the results, so the user can see the results on the surface, albeit a little flat, but it's possible here to
see the results, make additional changes and run the algorithm recursively. And at this point, in the future, we want instead of video mapping, because it's kind
of limited, even documenting is a little problematic, we took many video shots of the video mapping process, but it never creates the same effect that is in the actual studio.
So for future work, we instead want this system to drive additive or subtractive manufacturing processes, to both document the process better and to work more intuitively on it. That's all on the study. Thank you for listening.
Thank you very much. So now we will have Rudi Stouffs and Asba Safar from the National University of Singapore and Istanbul Technical University. The speaker will be Rudi Stouffs. He will talk to us about shapes and attributes. Thank you. This is a much more theoretical presentation than the one I held this morning.
It relates to shape grammars and it's really trying to find a way for people to adapt shape grammars to their own needs and requirements.
So for those of you who may not know exactly what shape grammars are, this is a very typical, simple example of a shape grammar. There are two rules actually only. One takes a square and moves the square diagonally.
The other one takes an L-shape and moves it also diagonally. And then there's an initial shape, which is shown in the middle, which is the two L-shapes touching each other. And then, starting from that initial shape, you can have a derivation. So for example, at first you can only apply rule two because you only have L-shapes.
It doesn't matter which one you move, they will overlap. As a result, they will create five squares. So you could take any square, apply rule one, in this case the center one is taken, it is moved diagonally.
As a result, another L-shape is created. So we can try rule two again, move the L-shape, move it once more, new squares are formed, etc. So this is a typical example, except that this is rather, well it is simple in the sense that it actually only uses geometry.
There are only lines in this. In general, shape grammars often have other information included.
Let's say labeled points, for example, to guide the rule application process. And there are other examples as well. Here are a few from the literature. So George Stiny's introduction to shape and shape grammars relies on line segments and labeled points.
Later on he has a paper in which he introduces numeric weights to define line thicknesses or surface tones if we're working with surfaces. Terry Knight introduces other qualitative aspects of design.
For example, color. I'll get back to that. And Stiny himself also introduces the notion of descriptions as verbal descriptions of aspects of design. And these of course are only the most important and most well-known examples.
There are many other examples of people who create shape grammars and decide that they need some information that they can't express in this, and so expand it with additional types of information.
But in order to make the problem understandable, I'm going to use this case study. It's a different shape grammar this time, by Stiny. It has three rules.
And basically the idea is to take a square and inscribe a smaller, rotated square that touches it. So the first rule actually just takes an initial marker, or point, and creates a square. The square is also marked with a point in order to reduce the rotational symmetry.
Then rule two is the one rule that inscribes a rotated square, which can be applied onto the original square and then recursively on the smaller square that was most recently defined.
And then the last rule is just to take away the marker because the marker is something that isn't part of the design, it's only part of the process. Now if we take this example and instead we, rather than just having inscribed squares,
we want to color these squares alternatively with white and black, then there are already a few different ways to approach this. And the first approach I want to show is inspired by Terry Knight's work on color grammars.
So it uses enumerative colors as attributes. So it basically enumerates two colors. They're called black and white and they adopt an opaque ranking. So ranking is the notion that she uses to deal with these enumerative colors.
Opaque just means that whatever color you apply to it will overwrite whatever color is underneath it because it's opaque, you can't see through it. Now in order to match, so in order to have the rule match, you need to have the same enumerative values or the same color as is there originally.
And as such we have two alternative versions of the second rule. One is a rule that inscribes a white square in a black one and the other one inscribes a black square in a white one, obviously,
because we need to decide which color to inscribe. But the rest of the derivation is quite the same and so this is one way of getting this alternative infill to work. But in fact we can use a different way; for example, we could use weights, as Stiny suggested.
Now weights are numerical values, so they can go from, for example, 0 to 1. And in this case we could assign 0 to be white and 1 to be black. I mean if you prefer the other way around it could also be,
it depends whether you're considering the background to be white or the background to be black of course. And the rule here is that with numerical values, matching requires either an equal or a higher value.
So if you're looking for a black square, black being 1, then it has to be black because it has to be at least 1. But if you're looking for a white square with a value of 0, then actually any square that would have a value between 0 and 1 would apply.
And as a result of that we not only need one extra rule, but we also have to distinguish the markers between white and black and we assign the color to the marker the opposite of the color we assign to the square.
So even though, when you're looking for a white square, a black square could do, the markers having the opposite color means the matching wouldn't occur anymore. And as a result, because of the markers having different colors,
we at the end also have to have two rules for termination, one that removes a white marker and one that removes a black marker. So the conclusion from this is that it's important to know it's not just about, well we're using black and white and so we can alternate and that's it.
If you really want to implement a shape grammar, you would have to know how you want to treat these colors. Do you want to treat them as weights? Do you want to treat them as enumerative values? Maybe you want to treat them in another way. And that's really what this research is about. So what I've done is look at how you can generalize this.
So even though there are multiple ways of looking at color, how can we generalize, how can we get a uniform description for a behavior of colors, even if the colors can be treated differently?
So this is the outcome. I'll simplify it a little bit. So basically, we've got two values. In this particular case, we just have two colors and they're defined by enumerative values, C and C prime.
And so they are singletons because every two colors automatically combine, so there's always only one value. And all we're interested in is what is the behavior on the sum, what's the behavior on the difference, and what's the behavior on the product,
and how do they compare? And so those four is really the, let's say, the formal approach to defining this behavior for colors. And so in this particular case with enumerative values, you could draw this little table here.
At the top, you have, well, let's say, on the left-hand side, you have the existing color, X, which can be black or white, and at the top, you have the new color that is being added, Y, which can also be black or white.
And since we're using an opaque ranking in this particular example, the new color overrides the old one, and so in our table, black and black gives black, black and white gives white, and et cetera. And we can represent this mathematically using this XY table
with a row and column, C and C prime, so our values are the indices into the table, and so when we're adding two colors together, all we have to look is what is the value according to the row and column in the table, and that is the result of addition.
For subtraction and for intersection, we've said that it only applies if the color is the same, so of course if you subtract the same color, you will get nothing, and otherwise you keep the original color, and the opposite for intersection: if they're the same, then you keep the color for the product, and if they're different, then you'll have nothing because there's nothing in common. Then we can do the same thing for weights. So as I said, we have numerical values, and in matching, everything has to be at least the same,
so the behavior basically is that when you want to add two weights, you take the maximum; if you want to take the product, you take the minimum; and Stiny says that when you take the difference, you should really take the arithmetic difference of the two values, and that's what is expressed there. But is that so? I mean, it's one way of interpretation. A while back, I actually hadn't read Stiny quite carefully, and I thought that if adding two weights,
if adding a smaller weight than the weight you already have doesn't do anything, then why should subtracting a smaller weight have a result? And that's what's shown here at the bottom. So, all the way on the left-hand side,
if you subtract the bigger, well, in this case, the weights are expressed as line thicknesses, so if you subtract a line with a thicker thickness from a smaller thickness, then you will get nothing, but if you subtract a smaller thickness,
in Stiny's case, you would take the arithmetic difference, while in this variant, nothing would happen, because it's a different interpretation. Maybe it has a particular meaning for somebody. Now what's the result of that? That is interesting to see. So in Stiny's case, if you have a certain thickness and you subtract an infinitesimally small thickness, and you subtract it, and you add it, and you subtract it, and you add it, then because the adding never does anything, the thickness being so small, the subtracting eventually makes your line thickness empty,
and so you alternate between the smallest thickness and nothing, and it goes on like that. And in this case, if you subtract an infinitesimally small thickness, nothing happens. And you add it again, and nothing happens.
And it's really about, so how could you explain it? Well, if you subtract something infinitesimally small, it's kind of like subtracting zero. So in this case, you're saying, well, it's really zero, so nothing happens. In the other case, it's like you're subtracting something so small, and then after a while, there's nothing left.
So there, the original thickness doesn't play a role. And as I said, I'm not trying to say that one is better than the other. Don't get me wrong. It's really that, in some situations, one may be more appropriate than the other.
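The two interpretations of weight subtraction can be written out side by side; the function name and the mode labels below are my own, used only to contrast the behaviors described:

```python
def combine_weights(x, y, op, mode="stiny"):
    """Behavior for numeric weights (e.g. line thicknesses) in [0, 1]:
    sum takes the maximum, product the minimum; the difference is where
    the two interpretations diverge."""
    if op == "sum":
        return max(x, y)
    if op == "product":
        return min(x, y)
    # difference:
    if y >= x:
        return 0.0          # subtracting an equal or thicker line empties it
    if mode == "stiny":
        return x - y        # Stiny: arithmetic difference
    return x                # variant: a thinner subtrahend is a no-op

# Repeatedly subtracting a small weight from a 0.5-thick line:
w_stiny = w_variant = 0.5
for _ in range(10):
    w_stiny = combine_weights(w_stiny, 0.1, "difference", "stiny")
    w_variant = combine_weights(w_variant, 0.1, "difference", "variant")
```

Under Stiny's reading the repeated subtraction eventually empties the line; under the variant the thinner subtrahend never does anything, which is exactly the contrast drawn in the talk.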
And then I looked on, and I said, well, you know, colors, of course, if we go beyond black and white, we might want an RGB space, or maybe we want an HSV space. But for the rest, we could do something similar to weights. So if we add two colors, we just take the maximum
of the respective R, G, and B values. And if we subtract, we could use mathematical subtraction and the minimum, and the product is the minimum. And so this is one way of looking at it. And maybe a different way of looking at this is, well, what if we take colors as paints
and we kind of acknowledge the fact that if you take a darker color and a lighter color and you mix them, you get something in between. So maybe we should just take the average. And so this is an alternative behavior for colors where sum is just taking the average.
And then subtracting might just also be averaging, because it's kind of similar to adding. And for the product, we'll just keep it at the minimum. And maybe we want to consider colors as a four-dimensional space with transparency,
with an alpha value. And as you're aware, well, as you may be aware, for transparency values, if you're working with a foreground and a background, and your foreground has a transparency value, then the background, of course, comes through
as one minus that transparency value. But rather than trying to understand foreground and background, basically the idea was: whatever the background, if we have two foreground colors, each with their own transparency, how would we combine them? I won't go through the full result here.
I said, well, there's some function called over xy, and this is really behavior of over. So if the transparency of the first color is zero, so it is fully transparent, or the second color is fully opaque,
then obviously it's the second color completely. On the other hand, if the second color is fully transparent, then of course it's just the first color. And if the first color is fully opaque, but the second one is transparent, then of course the transparency of the first one doesn't play a role,
and so it simplifies it a little bit, and otherwise you've got the whole, the entire formula of trying to define what are the values and the transparency values. And so once we've done this, and we now understand how we can describe
in many different ways what the sum, product, and differences of two attribute values, then we can actually say, okay, now if we have some geometry, and I just simplified here with points, and they have some attribute, and the attribute, the behavior of the attribute is really not that important, except that we know how sum,
product, and difference are described, then we can come up with this kind of formula, and just to try to explain it a little bit, so if we have two sets of, two shapes, or two sets of points,
P and P prime, with two sets of attributes, A and A prime, then, well, basically we have to distinguish between the points in P that are not in P prime, the points in P prime that are not in P, and then the intersection of it.
And each of them, for, of course, P minus P prime, since there's no commonality with P prime, then of course the attribute will just be A, and et cetera, and then otherwise the attribute is defined as the sum, the difference, or the product. And then these M and Es are just kind of like,
have to do with the fact that, well, you never want to have a point without an attribute. Well, sometimes you don't want that, and so if P is zero, then the whole thing should be zero, rather than just having an attribute.
And M is about the fact that we don't know, it's kind of hard to explain how this thing happens when you have multiple points and multiple points, and so we just say, well, we've got one set of points and then we keep on adding one at a time, and that's easy to describe. But if you want the details of it,
you can find it in the paper. So really what this whole presentation is about is that there's no right behavioral specification. This is just about colors, but we can think about a lot of other attributes
that could be relevant, and there's really, everybody could come up with their own specification based on the context you're working within, what you want to achieve, et cetera, and I think this should be supported. And so in order to support that, not just theoretically,
the work here is basically feeding into a modular implementation of a shape grammar interpreter. So because it's modular, you can actually add new behaviors as little plugins, as little modules that you just throw in there,
and then the shape grammar interpreter can deal with this. So it's like you try to use a shape grammar interpreter, but it only allows for A and B and C, but you want D, then it would be complicated. You would have to redevelop it. So by using this kind of uniform characterization,
allows you firstly to specify explicitly how this information should be dealt with, and secondly, being able to translate it into a module that could be added to a shape grammar interpreter.
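A pluggable attribute behavior of this kind might look roughly as follows; the dict-of-points representation and the function names are my own illustration, not the actual API of the sortal interpreter:

```python
def combine(P, Q, op, behavior):
    """Combine two attributed point sets (dict: point -> attribute).
    Points unique to one set keep their attribute where the operation
    allows it; shared points get the pluggable behavior's verdict.
    behavior(op, a, b) returns the resulting attribute, or None if
    the point should vanish."""
    out = {}
    for p, a in P.items():
        if p not in Q:
            if op in ("sum", "difference"):   # product keeps only shared points
                out[p] = a
        else:
            r = behavior(op, a, Q[p])
            if r is not None:
                out[p] = r
    if op == "sum":                            # sum also keeps Q-only points
        for q, b in Q.items():
            if q not in P:
                out[q] = b
    return out

def opaque(op, a, b):
    """Enumerative colors under an opaque ranking: the added color wins."""
    if op == "sum":
        return b
    if op == "difference":
        return None if a == b else a
    return a if a == b else None               # product: only a common color

white_sq = {(0, 0): "white", (1, 1): "white"}
black_sq = {(1, 1): "black", (2, 2): "black"}
merged = combine(white_sq, black_sq, "sum", opaque)
```

Swapping `opaque` for a weight-based or averaging behavior changes how shared points combine without touching the geometry logic, which is the modularity being argued for.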
And so if anybody is interested in that part, which is the more practical part, you can always visit www.sortl.org where the shape grammar interpreter is available, and I'd always be happy to work with people
to expand its possibilities. Thank you. Thank you very much. So now we will have Ricardo Massena Gago from Lisbon University.
He's going to speak to us about generative biomorphism. Thank you. So the title of my research is generative biomorphism. It's a research in a bio-inspired design field which deals with the generation of human structures.
So as I mentioned, this generative biomorphism is a research in a bio-inspired design field. More precisely, we deal with the generation of human structures by following the design requirements
that characterize the morphological identity of biological structures. Despite their shape diversity, biological structures achieve their ecological performance through morphological covariance. It happens because they exist for the benefit of a common whole.
So it reveals that the ecological purpose follows a common geometrical pattern, in order to allow interaction and cooperation between structures. Most of the solutions developed by human design strategies do not reflect these qualities.
So this research aims to increase the morphological coherence between biological and human structures by following the generative design process that characterizes biological structures.
Biological structures are perfectly recognizable in the environment regardless of their shapes; their shapes inform us that they have these qualities. It means that the phenomenon of life imposes a kind of signature on their structures, an identity.
So according to Sasso Vieira, the identity lies in the process. The object only shows the signature. So the identity resides in the geometrical pattern of the shapes,
but the geometry by itself is not enough to generate identity. It requires an organizational pattern, flexible to change in order to allow the shape diversity.
So in geometrical terms, the implementation of the biological identity in human structures demands at least three geometrical requirements,
a generative design process, growth mechanisms, and a geometrical pattern. So why should bio-inspired structures follow a generative design process? Because biological structures change without compromising their morphological identity.
It means that this identity is a stable quality; what generates shape diversity is how the geometrical vocabulary is sequenced. What is the role of growth in the generation process?
At the geometrical level, it imposes a structural organization. Growth expands through a centroidal configuration, which is characterized by levels and sublevels of expansion
that follows a gradient pattern, and which generates a force field that highlights the composition centers. About the biological geometry.
Biological geometry emerges from a repetitive process of element addition that follows a particular configuration. Both follow geometrical and proportional requirements.
So the elements require geometrical qualities: shapes of soft triangular derivation that reveal local symmetries. Their arrangement in space should be made by simple aggregation
and should be organized through an expansion pattern. As regards the proportional requirements, the order of magnitude of the elements and shapes should follow the harmonious values of the golden ratio.
To implement this geometrical pattern in human structures, it requires the generative design process able to decode these geometrical qualities into rules.
For that purpose, I use shape grammars. So the shape generation process is composed of a main grammar, which defines the structural base of the shape, and supplementary grammars, which enrich that structural base
with textures, roughness and three-dimensional configurations. So, as I mentioned before, the main grammar will define the structural base.
Its definition is divided into two phases. In the first phase, I use the eigen rules and they will define the expression pattern of this shape.
Its definition requires levels of expression that will control the proportional relations between shapes and the elements.
Over the expression levels, a referential shape is defined. Its definition requires an expression angle and the boundaries of this shape should follow a concave and convex pattern.
The number of expression levels and the amplitude of the expression angle will influence the shape diversity. In the second phase, the meshing rules
will define the structural elements of the shapes. These elements are defined using the geometrical principles of the Voronoi diagram, for a centroidal configuration.
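As a hedged illustration of this step (not the author's implementation), a Voronoi partition simply assigns every point of a region to its nearest seed; on a small grid this can be sketched without any geometry library:

```python
# Minimal, dependency-free sketch of partitioning a 2D grid into
# Voronoi cells around seed points, the kind of centroidal mesh the
# talk derives its structural elements from. Illustrative only.
def nearest_seed(p, seeds):
    """Index of the seed closest to point p (squared Euclidean distance)."""
    return min(range(len(seeds)),
               key=lambda i: (p[0] - seeds[i][0]) ** 2 + (p[1] - seeds[i][1]) ** 2)

def voronoi_labels(width, height, seeds):
    """Label every grid point with the index of its nearest seed."""
    return [[nearest_seed((x, y), seeds) for x in range(width)]
            for y in range(height)]

seeds = [(1, 1), (6, 2), (3, 6)]
labels = voronoi_labels(8, 8, seeds)
print(labels[1][1], labels[2][6], labels[6][3])  # → 0 1 2
```

A production version would use an exact Voronoi construction (for instance `scipy.spatial.Voronoi`) rather than a grid rasterization.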
The supplementary grammars are composed of three distinct grammars; the spatial grammar is one of them. The spatial rules transfer the geometrical mesh defined by the main grammar onto a curved surface in space.
The shape of this curved surface cannot be random: it should follow a centroidal configuration and reveal concave and convex patterns.
Given these geometrical qualities, three geometrical surfaces are used: the cone, the sphere and the torus. The rules of the second supplementary grammar
aim to increase the morphological vibration and dynamism of the shape by adding some irregularity at the boundaries of the structural elements.
This irregularity can be implemented in two distinct ways: by node displacement, or by curved nodes and curved boundaries. And finally, the texture rules explore the shape's appearance.
The geometrical mesh works as a referential base for their definition. The textures are defined over the structural elements
and should follow two distinct geometrical patterns: a ramification pattern and a compact pattern. Their definition should also follow a geometrical pattern based on an automated repetition
characterized by contrast strategies. The implementation of these patterns should be made at a two-dimensional level,
and when they are transferred onto a curved surface they should reveal a three-dimensional configuration, which can be obtained by extrusion. In this slide, I have one example of a shape generated with the ramification pattern.
This other one is a work that I did for a land art competition, where I also used the ramification pattern. In this example, I used the same geometrical mesh but with the compact pattern,
and I developed two distinct geometries, one by extruding single cells and the other by extruding groups of cells. This other one is another shape that I generated with the compact pattern,
by extruding groups of cells. And now the conclusions, sorry. The drawing tool is able to implement these geometrical qualities simultaneously in the structures.
It is able to generate shape diversity without compromising the morphological identity, and the structural organization imposed by growth is crucial to generate structures with structural fluidity, element dependency and relative balance. The future goals of this research
are to continually enrich the drawing tool with other geometrical qualities, to explore the design tool on a 3D platform, and to develop a fabrication process associated with it. So, that's all, thank you.
And I apologize for my nervousness. And so, finally, from Lisbon University Institute we are going to have
Ricardo Mendes Correia, Alexandra Paio and Filipe Brandão; the speaker will be Ricardo Correia, who will speak to us about transdisciplinary digital change. Thank you very much. Hello, my name is Ricardo Mendes Correia
and I'm here to talk about transdisciplinarity. The title of this paper is Transdisciplinary Digital Change: Science and Architecture. It's about my PhD thesis.
Transdisciplinarity is a rather young concept, almost as young as the digital concept, with less than 50 years. It was used for the first time in 1970, 48 years ago, by Jean Piaget at an OECD congress,
and also by Erich Jantsch and André Lichnerowicz. Piaget had coined the concept a year earlier and asked the others to talk about this all-new concept of transdisciplinarity. Transdisciplinarity in architecture:
these are some of the architects with research in transdisciplinarity; the first one is somewhat well known for the Barcelona work. All of these are architects who work and research in transdisciplinarity.
This research proposal outlines the historic perspective of transdisciplinary digital architecture through the work of key personalities, establishing links between them. There are certain key personalities
that I have considered for this study. The main proposal is not to make 13 monographic studies about them, but to trace the links between them to help us to tell this story about transdisciplinarity or this part of the story
of transdisciplinarity in architecture. This is what they have done. These are the links of each one of them, divided into the decades in which they worked,
and they are also linked, on a geographical map, to the universities where they taught. These are the relationships between them,
where each one of these arcs represents a relation: being the teacher, being the student, being the supervisor, or being the advisee. In this particular paper, I'm going to talk about
only three of these key figures. The first one, Steven Coons. The second one, Ivan Sutherland. And the third one, Nicholas Negroponte. Only the last one is an architect. The first key figure: Steven Coons.
Coons isn't one of the most well-known figures in architectural design. He wasn't even an architect: he was a design teacher in mechanical engineering at MIT. Yet he's one of the most influential scholars in the development of digital architectural design
as we know it nowadays. Coons made important contributions to the introduction of a technology culture in architectural design. Some authors in academic research even consider Coons an essential element in the development of CAD and of the computer graphics that are used today
in the 21st century. Just a moment to get some water, I'm sorry. He's well known as the man who invented the Coons patches, a mathematical formulation technique
for representing and manipulating 3D surfaces in a computer. Originally they were defined in a non-parametric way; later they were based on polynomials. Each patch is defined by four boundaries,
as you can see there, and their intersections, and can be manipulated to describe any surface inside. This is the basis of almost all the 3D representation that we use today.
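The bilinearly blended Coons patch can be written down directly from its standard definition; this is a generic sketch of the formula, not Coons' original code:

```python
# Bilinearly blended Coons patch: a surface interpolating four boundary
# curves c0(u), c1(u) (bottom/top) and d0(v), d1(v) (left/right), with
# compatible corners. Generic sketch from the standard definition.
def coons(c0, c1, d0, d1, u, v):
    """Evaluate the Coons patch at (u, v), with 0 <= u, v <= 1:

    S(u,v) = (1-v)·c0(u) + v·c1(u) + (1-u)·d0(v) + u·d1(v)
             minus the bilinear interpolation of the four corners.
    """
    def lerp(a, b, t):
        return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

    ruled_u = lerp(c0(u), c1(u), v)            # blend bottom/top curves
    ruled_v = lerp(d0(v), d1(v), u)            # blend left/right curves
    corners = lerp(lerp(c0(0), c0(1), u),      # bilinear corner correction
                   lerp(c1(0), c1(1), u), v)
    return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, corners))

# Sanity check with straight edges: the patch reduces to a flat unit square.
c0 = lambda u: (u, 0.0, 0.0)
c1 = lambda u: (u, 1.0, 0.0)
d0 = lambda v: (0.0, v, 0.0)
d1 = lambda v: (1.0, v, 0.0)
print(coons(c0, c1, d0, d1, 0.5, 0.5))  # → (0.5, 0.5, 0.0)
```

With curved boundary functions instead of straight edges, the same formula yields the free-form surfaces the talk refers to.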
The sound is bad, okay. Nowadays, digital architecture can be traced back to this: the original film about Sketchpad, a man-machine graphical communication system, which was the PhD thesis of Ivan Sutherland in 1963.
This is important to architecture because it was the first interactive CAD system. With CAD started a new disciplinary territory that changed design practices. It's not me saying this; it's my co-supervisor.
Project CAD: the name computer-aided design, with a hyphen, was coined for an MIT project that lasted from 1959 to 1967. If Sutherland's inventiveness must be highlighted,
Sketchpad had a background: it used the knowledge of MIT's Project CAD, which meant computer-aided design, of course. The project was based on Coons' idea of a design system to be used by architects and other creative designers
without the knowledge to write computer code, because until then you could only draw on a computer with machine code. Yes, it was a generative process, of some sort. That idea of an interactive design system was given by Coons, who with this simple vision
helped to change the way architects design. Project CAD's main objective was to produce a design machine; it also considered the investigation of techniques for the representation and manipulation of design information.
It was a project that included the development of communication between the human and the computer, because until then, when you wanted a computer to make a drawing, you had to write it in code, punch the code
onto a punched card or a punched tape, and then feed it to the computer. Project CAD was also responsible for creating the scientific basis for a series of digital innovations, such as interactive graphical communication, 3D computational graphics and object-oriented programming.
You can hear all of this well because this is live TV in 1951. This was the first computer with a graphical display. It was called Whirlwind and was funded with military money, together with research at MIT.
This was the beginning of a computer that could show a picture. Sorry, I didn't know
if the sound was going to come up. This language was called APT, for Automatically Programmed Tools, and was made to work on the computer we saw before, not the computer itself but that screen. Through
punched cards they could make these drawings, the ones below, and with that computer they could also do a kind of CAM, controlling the milling machine that's over there.
So, I've talked about Project CAD. Project CAD had two directors with two different visions: one interactive, that man on the right, Steven Coons; the other man wanted an automated design with punched cards. Coons wanted to use a light pen, because
there was no mouse; Engelbart hadn't invented it yet in 1960. This is how a non-interactive design worked.
A typist had to put the code into that computer, then a person had to check it, then it was plotted; it was an awful process. This is Steven Coons; sorry, this was a film made for national television in 1964, so that all
the people could know that it was possible to draw on a computer.
But Coons went further in spreading the word to creative designers. At a conference for art teachers and researchers in 1966, at the peak of the civil rights movement, he dared to say that in the creative process man had the perfect slave, and he was
referring to the computer. The idea was to make the computer do all the repetitive work, while the designer could have more time for the creative work. Besides patches and CAD, Coons would be vital to a series of technical developments in
PhD theses related to B-splines, NURBS, rendering and 3D animation. Coons foresaw architects using the computer as an interactive medium, establishing a dialogue with it through design. With this idea of the reconfiguration of design, he cast a pioneering digital architect, Nicholas
Negroponte, as his successor as CAD teacher at MIT.
Okay, Kevin Lynch everybody knows; the other two are less well known. Nicholas Negroponte, although an architect, is mainly considered one of the people who turned the computer into an interactive medium; he transformed the computer into
the cultural machine it is today. But he also played an important part as an architectural design researcher and teacher. He was chosen by Coons as his replacement as CAD teacher in mechanical engineering in 1967,
and in 1968 he was teaching CAD to architects. The significance of the intellectual and professional relationship between Coons and Negroponte has important consequences for architecture. At MIT he established the Architecture Machine Group in 1968, and in 1970 he published
a book with the same name, The Architecture Machine, which completely transformed the ideas of architecture, out of the boundaries of pure Beaux-Arts architecture. Some of the Negroponte projects; I don't have videos of those. This is URBAN5,
an early CAD from 1968, completely interactive, with a board of buttons. And this is the most extraordinary project, because it is a mix of CAD and artificial
intelligence, with a mechanical arm, block worlds and gerbils: the gerbils displaced the blocks, and the mechanical arm rearranged the block world for the gerbils as a changeable
environment. This was in 1970. And this was Negroponte explaining his early architecture projects in a 2014
TED talk. But the idea of linking architecture, art and computer science continued with Negroponte to the MIT Media Lab, which he founded with a vision almost like an interactive digital version of the Bauhaus. The link to the Bauhaus isn't Coons' heritage but comes from the visual artist György Kepes, who
worked at the New Bauhaus with László Moholy-Nagy. Coons gave Negroponte the chance to reconfigure design as a teacher and not only as a researcher. Soon after Negroponte's master's was concluded, Coons chose him, an architect, as a CAD teacher.
Okay: Coons went off, Negroponte went on. CAD came from an engineering environment to an architectural environment.
This was in 1968. Okay, I'm just running out of time.
This research aims to trace the evolution of architecture from the Bauhaus to Sketchpad, and from there to nowadays' transdisciplinary digital architecture, focusing on the relationships
and interactions between people, places and institutions, and building an interactive platform to present a new vision of transdisciplinary digital architecture through the work of key personalities. These are some topics about my methodology; I'm running out of time, so I'm going
past it. This is just an example of a short methodology to research the links between all those people. The second stage is a database of the universities where all these key personalities
taught, with their names and the periods in which they taught. And we can animate this in space and in time; it's a bit fast, but the years are
at the top. As you could see, most of these people taught in the same places at the same time; they interacted. Discussion and preliminary findings.
The scientists Coons and Sutherland reconfigured architectural design through Sketchpad, the first interactive CAD. Coons managed to pass those ideas to Negroponte, a teacher of architectural design. So we've got key figures: Coons had the computer-aided design concept,
Sutherland used that concept to make the first interactive computer-aided design software, and Negroponte brought the computer-aided design concept into architecture. The other key figures that I am studying in my PhD thesis relate in this way.
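As a side note, the kind of link database described here can be sketched as a tiny directed graph. The links below are ones mentioned in the talk; a real database would also store institutions and periods:

```python
# Illustrative sketch: encoding the talk's teacher/student/successor
# links as a small directed graph so relations can be queried or
# animated in space and time. Link labels paraphrase the talk.
links = [
    ("Steven Coons", "Nicholas Negroponte", "chose as successor as CAD teacher"),
    ("Steven Coons", "Ivan Sutherland", "Project CAD knowledge fed Sketchpad"),
]

def successors_of(person):
    """Everyone that `person` points to in the link graph."""
    return [target for source, target, _ in links if source == person]

print(successors_of("Steven Coons"))
# → ['Nicholas Negroponte', 'Ivan Sutherland']
```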
There was a first generation that worked at the Bauhaus, which communicated with a second generation, which communicated with a third generation that also includes Christopher Alexander, William Mitchell, Lionel March and Charles Eastman: the third generation overall, and the
first generation of digital architects. These are the references. Thank you. Does anyone want to speak?
Is that mechanism already implemented in SortalGI? All of these weights and parameters and alpha channels, are they already in modules there? Most of them are, not all. From the ones I presented, the only one that is not present is colors in RGB
space using weights, where the weights are kind of used as an intermediary to get from the one-dimensional to the three-dimensional. All the others are.
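To make the missing case concrete, here is one plausible rule for RGB colors carried together with a weight, where the weight mediates the merge. This is purely illustrative, not SortalGI's implementation:

```python
# Hedged sketch: RGB colors carrying a weight channel. On merging,
# the hue is the weight-proportional mixture and the combined weight
# is the max. One plausible rule among many, not the real module.
def blend(c1, w1, c2, w2):
    """Weighted merge of two RGB colors, returning (rgb, weight)."""
    total = w1 + w2
    rgb = tuple((w1 * a + w2 * b) / total for a, b in zip(c1, c2))
    return rgb, max(w1, w2)

color, weight = blend((1.0, 0.0, 0.0), 1.0, (0.0, 0.0, 1.0), 3.0)
print(weight)  # → 3.0
```

Here the one-dimensional weight acts exactly as the intermediary mentioned: it decides how the three-dimensional color values combine.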
Well, thank you very much. Does anyone else want to speak? Some years ago, in the 90s, we began just with parametric design. Well, we have had many discussions about parametric design.
There are many senses of parametricism, but I'm talking about parametric design as poetics, like Zaha Hadid and company. Sorry. Well, this discussion is, I would say, the question is why.
I think this is a consequence of what you said, Ricardo, in the last talk.
You said that CAD performed a big change in architectural practice.
Well, what I said in the beginning is that it did not. The change was in the drawing practice, not in architectural practice. Of course, the architect has to draw; not him alone, but the complete staff of the architectural office.
But that is not the architect's place. The architect's place is not in the drawing. And CAD really didn't bring any development to architectural practice in its real meaning.
The narrowness that CAD introduced evolved into the creation of a sort of capacity for
the production of drawings that have, I would say, no denotational semantics.
It's very good in syntax, because it's formal; good in connotational semantics, because it has a very developed geometrical semantics. It can create shapes, but those shapes have no correspondence to reality.
And reality is human life, social life and nature. Well, I'm going to give you an example.
I'm a structural engineer first; I still teach structural engineering classes in our school. I tell my students that straight lines and flat surfaces
are not the rationality, as some people say. An optimized structure always gives curved shapes, complex curves.
But they have those curves, if they are optimized, because their shapes contain something more than the three dimensions of geometrical space. They have other dimensions, for example movement and energy.
If you only work in 3D geometrical space, which is what CAD and parametric CAD do, it has no correspondence to reality.
You must input other variables there. So I ask: why those parametric forms? You said biomorphic forms. Well, there is a famous sentence from Mies van der Rohe,
who said he was a very organic architect; only he didn't make buildings in the form of organs. Well, this is the question I would like to ask you:
why those forms? What do they bring to people? I have to say something on behalf of my supervisor, Daniel Cardoso Llach: he made that statement, not me,
about architectural design being changed by CAD. And by that 60s CAD, not 80s AutoCAD, because it happened in the 60s, and it was a much more developed CAD than the 80s AutoCAD.
Some of those guys called the 80s AutoCAD dirty dots, because it wasn't as developed as that. There's a paper by William Mitchell about it; I think it's called Roll Over Euclid.
It is a citation, not my words, but I agree with all that he said, because that way of seeing architecture was completely different from nowadays. In the 60s they were already using parametric design, they were already using object-oriented programming,
and they were already using topology. AutoCAD has no topology even nowadays. So it's that difference. Can I pass to the morphology, or is there something that you want to explain? I could add something.
Okay. Well, I think the main advantage these tools can provide architects with is mainly heuristics, because it's very hard to define architectural problems the way engineering problems are defined; there are many factors involved.
And even the idea of knowledge, I think, is very problematic to define. So I think the way these systems could be beneficial is to use both digital simulations and physical models, especially for that reason.
And that's why we focused on always switching between the digital and the physical; that way we could evaluate these shapes under different circumstances.
In my case, I'm using these kinds of tools for one reason: biology is a natural phenomenon that everybody knows,
and it is a phenomenon that uses these kinds of tools. If I want to achieve ecological goals in my architecture, I need to use the same process that nature uses.
That's my point of view on the use of these kinds of drawing tools. Thank you. But nature, as a purpose, uses for example the golden ratio
where there is a development that numerically uses the golden ratio; other processes that don't have that development
don't use it. So you should use the golden ratio only when you have that kind of development. Why use the golden ratio in every situation? No, it's not in every situation. If I want to reproduce the qualities
of biological structures, I need to use the values that they use. It's as simple as that. So you can point to a different thing.
There are two kinds of geometry in biological structures. One kind of geometry is the identity: everything that we know has this kind of geometrical pattern. Then there is a second phase of the geometry
that is imposed by a cognitive entity that no one knows. This entity works with all the elements
of nature, assigns functions, and makes everything work as a whole. In my work, I deal only with the first phase of the geometry,
because it is the geometry present in all the objects that we know. About the other stage I cannot develop anything,
because no one knows this cognitive entity; it needs a lot of research in this field. I don't know if I answered you, but judging by your face, I think not.
Sorry. Well, thank you very much. So I think we have time for one last question, if anyone wants to ask anything; please be brief. I have a question for the last Ricardo. Could you explain a bit better
the goal of the research? I understand the importance of the relations that you are trying to establish. In a way, I think they are a bit obvious, because we connect to each other in common spaces, and it's natural that we share experience
and that the kinds of things you explained happen. But at the end of the research, what is your goal? To give those figures the right recognition, to bring their work forward again
as very important work? That final stage of your research I haven't understood quite well. What are you trying to prove with those relations that you are seeing between these figures, which obviously have this kind of link between them? Okay, they have.
We all have links here, if you want. After this, we all go back to our contexts, and then we can say that for three days we had this relation. So what's the point? The point is, there are many studies about historical perspectives on digital architecture,
but not as many that focus on transdisciplinary digital architecture. Also, on the way the use of science changed architecture and gave it the tools to work with computer science,
with computation, not only with the computer; not just using the computer as a drafting tool, as was said, but the way computation changed architecture. And that is what I am trying to explain
from a historical perspective: how it changed, from first using the machine at the Bauhaus, teaching in an industrial way that hadn't been used in design and architecture, then to the New Bauhaus,
where they used light and came to animation through light, because in the 40s they didn't have the tools that we have today to deal with movement. And then came a new perspective in the 60s
with the whole digital change. First, it was a non-interactive digital change, as in the case of Christopher Alexander, who had to program the computers himself, because he was a mathematician, to make architecture, to make urban planning. And then to relate that to the people
who used interactive architecture, interactive digital architectural drawing. I want to make this connection. I think this connection wasn't made before.
But perhaps somebody can tell me another way to do it. I think I can establish the link this way. Have I answered? No. A bit.
Okay. Yes, one last question. What you presented today works with numeric values; it's very straightforward in the sense that it's mathematical, almost binary operations: black and white, and so on.
The only semantics comes when you decide whether to subtract or not in a mathematical form, or to use the same operations in a different way. Do you think the structure that you presented here today, which is plainly numerical, can be used to convey
semantic information? For instance, if during the development some layer has a semantic value, so that the application of one rule may be combined with another and given a specific meaning, do you think this is feasible?
Yes. I mean, it's easier doing it with numbers, because taking away the semantics gives it a broader applicability.
So in a way the point is, by envisioning these particular uses numerically, I'm not imposing a particular viewpoint. I'm just saying, look, this is a way you can deal with it.
If you want to use it, use it. As soon as you step into the semantics, it becomes obviously much more specific, and I think probably much more valuable. The only question then is,
okay, why do you do it this way and not that way? So, for example, I think the enumerative colors take a step in that direction, in the way Terry Knight uses them. For example, she talks about two materials,
aluminum and wood, and then she specifies in a table what happens when these two materials come together. Now, whether it is realistic or logical is a different thing; it's an interpretation, but I think it does
add some semantics to it. But it is limited, of course, because you just have two elements with some meaning, you put them together, and then you say, well, this happens, that happens. I think it would be very powerful
and very valuable to dive deeper into this and see how other, more meaningful approaches can be embedded into it. But I think it would have to come from specific case studies
or specific goals or purposes, so it doesn't really matter that it works for you and might not work for somebody else. It is just as valuable, even if it cannot be generalized.
But from my point of view, looking more at the overall applicability rather than at particular use cases, that makes it more difficult. So, very well,
I think we can finish this session. I thank you all very much for your presence here. Thank you very much. We're going to take a short break.