
Beyond pre-programmed robots for repetitive tasks


Formal Metadata

Title: Beyond pre-programmed robots for repetitive tasks
Number of Parts: 36
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract: Beyond pre-programmed robots for repetitive tasks
Transcript: English (auto-generated)
How was the party yesterday? I see very few Plone faces here. I guess some of them are still in bed.
So let's start with the last keynote of the Plone Conference. Just as a reminder, today we will have a third track. In the morning there will be a local track. Mostly our clients and our friends will be presenting their work with Plone.
And in the afternoon we will have two or three more talks from the standard program. The third track room is just behind the conference desk. With me here is Iñaki Maurtua. He holds a PhD in Industrial Engineering from the University of the Basque Country,
and he leads the Automation and Robotics section at Tekniker, a research and development organization here in Eibar. He will explain more about that, and he is going to talk about going beyond robots for repetitive tasks.
So, thank you Iñaki. Good morning. As most of you will not know anything about Tekniker,
I will give just a short introduction to the research center that I work for. Tekniker is a research center located here in Eibar. If you go out of Eibar towards San Sebastián, you will see this building on top of the mountain.
We are around 280 people working in our center. We are a private research center. Half of our income comes from industrial projects, the other half from competitive research projects.
The center was founded 42 years ago. Here are some figures, just to show what is most relevant: we are very active in European projects, many times leading the project, and we have a lot of industrial projects, as you can see here.
We are a foundation, and there are many companies that are members of this foundation, supporting us and collaborating with us. We are also members of the BRTA, the Basque Research and Technology Alliance.
We have four main areas of research. The first one is advanced manufacturing. Here we are working on new manufacturing processes, but also on the mechatronics and precision engineering
needed to create machines that make it possible to industrialize these new processes. We also work on production management. In the field of surface engineering, the group is working on designing new materials
and new coatings that provide new functionalities to products, but also on developing the processes for applying these coatings, and new machines specifically for those processes. ICT for production is a big area where we include many different things,
starting from sensor development, IoT, robotics, automation, vision, and also power electronics. Finally, in production engineering, what we are doing is helping local and national industries
to transfer the results of the research into products and processes. The results of this research are combined, and we put them into industry
in the form of, say, mechatronic systems or inspection systems. We are present in many different sectors, from aeronautics to health; I think most sectors, in fact.
We also provide technological services. In fact, the origin of Tekniker was to provide technological services, although now this is not so relevant for us. For instance, we calibrate machines and we measure parts for third parties.
My talk is about robotics. The first thing is, when you think of a robot, it could be something like this one: an industrial robot in a car factory, doing repetitive tasks, because robots are very good at doing repetitive actions.
Perhaps, when you think of a robot, it is something like this one, from a video released some time ago. It is a robot developed by Boston Dynamics, and what the robot is able to do is very spectacular.
In fact, there was some controversy about whether this was a real robot or a fake one; it is a real robot. There is a lot of science, a lot of technology in this robot. We will come back to this video at the end of the presentation.
Perhaps you think of a robot like this one. This is an autonomous robot, developed by the German Research Center for Artificial Intelligence (DFKI), that is able to navigate on very unstructured terrains.
What is interesting is that most of the technologies behind this mobile robot are what is nowadays included in the autonomous car. Or perhaps a robot is one of these drones
that are picking apples in Chile. But there are many other examples, and all of them are robots. You can think of a companion robot, or of a robot in the medical sector, like the da Vinci robot,
or you can think of the Roomba that many of you will have at home; the new versions are very close to being a robot. So I am going to speak about some technologies and some examples of how robots can be more flexible
than the first one we saw in the car factory. I will speak about how they learn, how they can interact or collaborate with humans, and give some more examples of robots outside the factory.
So what can a robot learn? They can learn a lot of things, probably more in the future. They can learn to move. They can learn to understand what is happening around them. They can learn how humans behave. They can learn how to interact with humans,
how to collaborate with humans, how to mimic what humans are doing; you can imagine almost everything that they can learn, or will be able to learn, in the future. Let me focus on a specific task: manipulation. In this example there is a scene with a lot of small parts
that belong to a local industry. For the robot, the question is: which of these parts do I have to take, and how?
The normal procedure nowadays is this one. There is an expert that uses the CAD model of the part and the CAD model of the gripper and, using his or her expertise,
decides how to pick the object. Then they set up the real scenario, with the robot and the cameras, do some adjustments and some iterations, and at the end you have a robot that is able to pick this object.
Of course, it is not only about deciding how to pick. You need a perception system that allows you to identify where each part is, the pose of the part, and so on. This is how we develop applications nowadays. But think of other scenarios
where you have hundreds or thousands of different parts, so that it is not possible to configure each of them. This is a real case: we are now in a project where one of the companies involved manages more than 60,000 different references.
It is impossible to do this exercise for each of them. So what are people doing? They are introducing artificial intelligence to help robots learn. This is an example from Google.
They have a group of researchers working on robotics. Here, they have a team of real robots doing real trial-and-error pick operations. They use reinforcement learning
for the robots to learn how to pick objects. It is a very long process, but at the end the robot is able to pick objects that it has never seen before. It is also able to cope with disturbances
that can happen around the system. You can imagine that this is quite a complex setup: you need a lot of time and a lot of physical resources, a lot of robots doing that. You also have the risk of the robot damaging the product,
or even the robot itself being damaged, because the process is not so well controlled. The alternative is this one. Here, the researchers are training a robotic hand to manipulate an object
using all the fingers. What these researchers are doing is using not the real hand but a simulated one. Using simulation, you can have a lot of robot hands in parallel,
in this case doing the same task with a trial-and-error approach, and using reinforcement learning you also obtain, at the end, a model of how to manipulate this object. At Tekniker,
we are coordinating a European project called HARTU, which in Basque means "pick", and we are addressing these two problems. The first one is how to pick an object based on the CAD models of the part and the gripper.
And second: among a lot of different parts, which of them do I have to take? For the first problem, what we are doing is taking the CAD model of the part
and segmenting it into basic geometric primitives, primitives like a cylinder or a cube, for which somebody can decide the best way to pick. Then, using these grasping points taken from the primitives,
and introducing some sampling, we choose which grasping points are geometrically valid for this product. Then we test each of them in simulation, as you see here, and we select only those candidates
that provide a robust grasp. For the second problem, selecting which of the parts to take, we are also using simulation. We create random scenes of many different products
randomly placed in the box or in the scene, and since we are working in simulation, we immediately know where each part is and its pose. Then we start
trying the first candidate, picking the part, and, based on several criteria, we reward or penalize the reinforcement-learning agent that is controlling the system. At the end, we have the model.
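As an aside, the reward-and-penalty loop just described can be sketched as a tiny bandit-style learner. This is only an illustrative toy, not the project's software: the grasp candidates, their hidden success rates, and the learning parameters are all invented for this sketch.

```python
# Toy reward/penalty loop for choosing a grasp candidate.
# All names and probabilities are invented for illustration.
import random

random.seed(0)

def simulate_grasp(candidate):
    """Stand-in for the physics simulation: each candidate has a hidden
    success probability; return +1 reward on success, -1 penalty on failure."""
    success_prob = {"top": 0.9, "side": 0.5, "edge": 0.2}[candidate]
    return 1.0 if random.random() < success_prob else -1.0

values = {c: 0.0 for c in ("top", "side", "edge")}  # estimated value per grasp
alpha, epsilon = 0.1, 0.2                           # learning rate, exploration

for _ in range(2000):
    if random.random() < epsilon:                   # sometimes explore randomly
        candidate = random.choice(list(values))
    else:                                           # otherwise exploit the best so far
        candidate = max(values, key=values.get)
    reward = simulate_grasp(candidate)              # reward or penalize the agent
    values[candidate] += alpha * (reward - values[candidate])

best = max(values, key=values.get)
print(best)
```

In the project the "simulation" is a physics engine and the candidates come from the CAD primitives; here both are reduced to a dictionary of made-up probabilities.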
We will have the model; this is a running European project. In the same project, we are also addressing the problem of assembly. In this case, what we want is for the robot to learn the sequence of operations
for doing an assembly, and also to adapt the controller of the robot based on different factors, mainly the contact forces on the part we are assembling. There are two types of teaching from demonstration, or learning from demonstration. One is kinesthetic: that is this one, where you move the robot
and the robot learns the trajectory directly. The alternative is mimicking: you use an additional system to see what the human being is doing,
and then the robot mimics the same operations. In the same project, we are also dealing with a gripper. This is a new technology called electro-adhesion that our colleagues from Omnigrasp and the University of Bari in Italy are developing. It allows
the same gripper, applying only very small forces to the product, to manipulate many different products of different shapes and materials without the risk of damaging them, and it means you do not have to change
the gripper from one part to another. Continuing with this idea of flexibility: nowadays a lot of people are speaking about collaboration, and in the coming future
robots will really collaborate with humans. This is an example of a project that we developed some years ago, where you can see a worker and a robot working on the same part. But this is not really collaboration; what they are doing is coexisting.
They are coexisting in the same scenario. Collaboration means something else: that the robot understands what the human is doing, that the human also has some cues about what the robot will do next, and so on.
This example, which is also quite old, was also done at Tekniker with a local company. Here, the challenge was to disassemble a sewing machine: starting from unboxing the sewing machine,
removing all the parts, unscrewing some bolts, and at the end having all the components separated. We have two robots here, but they are not collaborating; they are coordinating their actions. It was hard-coded programming,
where both arms were programmed more or less simultaneously to coordinate their actions. True collaboration in this kind of scenario means that each robot knows what it is able to do,
is able to negotiate with the other robots about what each of them can do, and, when they have a task, they are able to split it into different sequences, different sub-tasks, and to negotiate who is doing what.
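The negotiation idea just described can be sketched, very roughly, as a greedy task-allocation routine. This is a hypothetical illustration, not the system used in the project: the robot names, skills, and durations are made up.

```python
# Hypothetical greedy stand-in for task negotiation between two robot arms.
from dataclasses import dataclass, field

@dataclass
class Robot:
    name: str
    capabilities: set
    load: float = 0.0                       # accumulated estimated work time
    assigned: list = field(default_factory=list)

def negotiate(task, robots):
    """Split a task into sub-tasks and assign each one to the capable robot
    with the lowest current load (a greedy stand-in for real negotiation)."""
    for subtask, skill, duration in task:
        candidates = [r for r in robots if skill in r.capabilities]
        if not candidates:
            raise ValueError(f"no robot can do {subtask!r}")
        chosen = min(candidates, key=lambda r: r.load)
        chosen.assigned.append(subtask)
        chosen.load += duration

left = Robot("left_arm", {"unscrew", "hold"})
right = Robot("right_arm", {"unscrew", "pick"})
disassembly = [("hold casing", "hold", 5.0),
               ("remove bolt 1", "unscrew", 2.0),
               ("remove bolt 2", "unscrew", 2.0),
               ("pick cover", "pick", 1.0)]
negotiate(disassembly, [left, right])
print(left.assigned, right.assigned)
```

A real system would negotiate over estimated costs and online feedback during execution rather than fixed durations, but the split-and-assign structure is the same.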
During operation, each of them collaborates, or coordinates its actions, based on what the other is doing. If we speak about collaboration, interaction is a must. Here are some examples of people interacting with a robot
not in the traditional way, but using gestures, using voice commands, or even using natural dialogue with the robot. Perhaps the robot has not understood a command from the user and asks for more information to complete the action. This is explicit interaction,
the kind that all of us use, but we humans also have other, implicit mechanisms for interacting. In the future, robots should be able to understand our actions, understand our state, understand our behavior,
but this kind of functionality is not yet available. We are also using augmented reality for programming robots. In this example, you see the view of the operator wearing the head-mounted glasses.
The human has manipulated the virtual model of the robot and validated that the robot follows a safe trajectory; later on, with the same controller,
the real robot performs the same operation. This kind of technology is interesting, for instance, if you have to program a robot while the robot is doing something else, for teleoperation, or for education, because real robots are too expensive to have many of them in schools;
if you can train the students using the virtual version of the robot, later on they can go to the real robot and do things faster. Now, some examples of robots outside the factory.
We were coming from Valencia to Eibar because of a European project, and we recorded this video on the left. What you see on the road, these dark marks, are cracks, cracks that have been repaired
by the operator of the motorway. How do they do it? There is a human operator that, using a tool, applies a sealant into the cracks. He follows the cracks and does this operation. It is a very simple operation, but it has some risks,
because most of the time the operator does not stop the traffic; only half of the lanes are closed for the repair. So there is a risk of a car having an accident with the operator.
But there are other operations in the maintenance of motorways. For instance, you have to remove the painting, you have to clean assets like signs, you have to repair safety barriers, and you always have to do some
signaling of the work area. In another European project called OMICRON (it has nothing to do with COVID; we had the name first), we are developing this concept of a truck-mounted robot that is able to do
most of these operations. For instance, in the top simulation, what it is doing is identifying the cracks and then applying the sealant to them. Or helping the operator to move these safety barriers, which are quite heavy.
Or placing and removing the cones during signaling. Signaling is very important, and in particular there is one operation of signaling the work area that is very risky: signaling the
left side of the road. There is still traffic, because you are only signaling that you will start doing an operation, and at the beginning there is nothing in place. So you have to put some signs on the left side of the road, and you have to cross the road while there is traffic. This is a source of
accidents. So what are we doing in this same OMICRON project? We are validating this concept of a mobile robot transporting signs from one side to the other, or even a robotized sign.
Other examples concern the maintenance of infrastructures. Once again, drones; but in this case, drones that have to do an inspection operation using ultrasound technology, which requires
the sensors to be in contact with the surface you are inspecting. This represents a very complex control problem: controlling the drone while maintaining contact with the surface. And now the last example outside the factory.
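The "hold a contact force while flying" problem mentioned above can be illustrated with a toy proportional-integral loop in one dimension. This is a deliberately simplified sketch, not the project's controller: the gains, the surface-stiffness model, and the time step are all assumed.

```python
# Toy 1-D contact-force loop: assumed gains and surface model, only to
# illustrate the control problem, not the real drone controller.
def run_contact_controller(target_force=5.0, steps=500, dt=0.01):
    kp, ki = 2.0, 5.0            # proportional and integral gains (assumed)
    stiffness = 50.0             # toy surface stiffness: force per unit penetration
    position, integral, force = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target_force - force                    # distance to the target force
        integral += error * dt
        position += (kp * error + ki * integral) * dt   # command pushes toward the surface
        force = max(0.0, stiffness * position)          # contact force while touching
    return force

print(round(run_contact_controller(), 2))   # settles at the target force
```

A real drone also has to stabilize its attitude and position under aerodynamic disturbances at the same time; the sketch keeps only the force loop.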
This is also another project we developed some years ago: a robotic arm mounted on an autonomous mobile platform. The activity was related to the identification and treatment
of pests in a tomato greenhouse. The robot was able to navigate through the different corridors in the greenhouse, and using deep learning we developed a system to identify the presence of some pests on the tomato plants.
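Downstream of such a detector, the decision of where to spray can be sketched as a simple thresholding step that groups adjacent infested plants into one stop. This is a hypothetical illustration: the function name, scores, and threshold are invented, not taken from the project.

```python
# Hypothetical post-processing of per-plant detector scores.
def plan_spray_stops(pest_scores, threshold=0.7):
    """pest_scores: detector confidence per plant along a greenhouse corridor.
    Returns contiguous (first, last) plant-index ranges the robot should treat."""
    stops, current = [], None
    for pos, score in enumerate(pest_scores):
        if score >= threshold:
            # extend the current stop, or open a new one at this plant
            current = (current[0], pos) if current else (pos, pos)
        elif current:
            stops.append(current)
            current = None
    if current:                      # close a stop that reaches the corridor end
        stops.append(current)
    return stops

scores = [0.1, 0.2, 0.9, 0.8, 0.3, 0.95, 0.1]
print(plan_spray_stops(scores))   # → [(2, 3), (5, 5)]
```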
And later on, the robot was able to go to these locations and spray the phytosanitary products in a controlled way. And just to finish this
talk, I will come back to the video we saw at the beginning, to tell you a story that I heard from people at Boston Dynamics. They explained that when they presented this Atlas robot, they received a lot of
inquiries from warehouse managers, looking for a solution based on it for their specific needs in the warehouses. Of course, this was much too complex and much too expensive to go into a warehouse. So, using
part of this technology, they developed a second version of this robot specifically for the needs of the warehouses. You see that, for instance, the locomotion is completely different, but they proved that it was able to do what the managers
were asking. But even this one was expensive and very complex. So what they have done is develop the commercial version of that robot, which is able to unload boxes from a truck
and uses some of the technologies that were present in the other two; this is the commercial version. In fact, even though Boston Dynamics is very famous for these nice videos, they only commercialize two types of robots: this one, which is the latest, and
the dog that you have probably also seen in some videos. So with that, I would like to underline this idea that we have to innovate thinking of the future, but on the way toward this future there are a lot of opportunities, a lot of partial
results that we can exploit. And that's all. Thank you.
Does anybody want to ask a question? Everybody is falling asleep because of the party, right? I am sorry to say that the Bari robot was doing it wrong: it took
the tomato first and then put the metal can on top of the tomato. Mistake. Sorry. One thing, it's a joke.
We saw the video with this University of Bari robot picking things. It took a tomato first, and then the metal can went on top of the tomato. You never do that. I understand your point. No Italian human
would do that, I think. They were Italians. We can ask some Italians if they do this. But this is a very nice technology that has a lot of interest, not only because you can manipulate different parts, but also because you can control the friction between the gripper
and the part. It has a lot of... You would like to choose. You would like to choose, depending on the surface and the weight of the part. Okay. Any more questions?
Thank you, Iñaki. (Applause)