Execution of Multi-Perspective Declarative Process Models Using Complex Event Processing
Formal Metadata

Title: Execution of Multi-Perspective Declarative Process Models Using Complex Event Processing
Number of Parts: 30
License: CC Attribution 4.0 International: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/53688 (DOI)
Transcript: English (auto-generated)
00:00
I want to give you a little deep dive, as Mr. Philippiac already said, into the execution of multi-perspective declarative process models using complex event processing, an approach developed by me and Stefan Schönig at the University of Petersburg. As a brief introduction and motivation, I want to take a look at such
00:25
novel scenarios, illustrated here by the example of predictive maintenance. On the left side we have a smart factory with machines that are equipped with sensors, and we use predictive maintenance, that is, event processing at T1, to predict a machine
00:44
failure at T3. What we now want to do is trigger a human task at T2, for example the maintenance of this machine, to avoid the failure at T3. So we prevent the machine failure and ensure that production can continue,
01:07
and we don't need to stop production and cause, for example, economic loss. So we have the event processing, then the task is triggered, namely the maintenance, and in addition we have a kind of workflow which is executed in the sense that, if the maintenance
01:25
cannot be executed within a specific time span, we need to trigger a plan B, for example switching to another machine. With scenarios like this, we are combining two
01:40
paradigms: business process management on the one hand and complex event processing on the other hand. What we can see on this slide is a visualization from the research paper of Soffer et al. They divided this topic into four quadrants. The first two quadrants cover the direction from the event stream to the process model, for
02:04
example using CEP constructs for process mining. In addition, on the left side, we have quadrants three and four: quadrant three is deriving CEP rules from process models, and quadrant four is executing business processes via CEP rules.
02:26
Here, the challenge is the transformation of process models into CEP rules, and that is also our main focus. If we combine these two topics, we end up with event-driven
02:45
systems, which is, in fact, the integration of complex event processing into business process management, and there we have three main steps: sense, process, and respond. Let me explain them briefly. The first step, sense, is detecting patterns and events
03:02
in an ongoing stream of data. Afterwards, in the process step, we analyze these events and search for correlations to understand what is happening, for example to do predictive maintenance. In the last step, respond, we can react with predefined actions or call, for example,
03:23
other systems; in any case, there is some kind of answer to a correlation that we found. Let me give you a little example. Suppose we have a plant which is equipped with temperature sensors, and we define that if three of these sensors detect a temperature
03:43
of at least 80 degrees, then it is obviously not just a measurement error, but there must be a fire in the plant, and we should definitely trigger the fire alarm and all the actions that should be taken during a fire in the plant.
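A rule of this kind could be expressed roughly as follows in an Esper-style EQL statement; this is only a minimal sketch, and the event type TemperatureReading with its attributes sensorId and celsius is an assumption made for illustration, not something defined in the talk:

    // Assumed event type: TemperatureReading(sensorId string, celsius double)
    // Raise an alert once at least three distinct sensors report more than
    // 80 degrees within a one-minute window.
    @Name('FireAlert')
    select count(distinct sensorId) as hotSensorCount
    from TemperatureReading(celsius > 80)#time(1 min)
    having count(distinct sensorId) >= 3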
04:04
When we are talking about BPM, we need to understand that there are two main approaches to process modeling. On the left side we have imperative process modeling, which gives a flow-oriented representation of reality.
04:20
We can describe exactly that we have a flow of actions, and every action is followed by another action. This is pretty static, but it is very suitable for predictable processes where there are only a few possible executions. I guess the most common example of this approach
04:43
is, of course, BPMN; you can see a little visualization of a process model in the left-hand corner here. On the other hand, on the right side, we have the declarative process modeling approach, where we don't represent reality in a flow-oriented way
05:04
but as a constellation of different constraints and business rules which must not be violated during execution. This makes the modeling pretty flexible, as we can continuously add constraints to the model. It is more suitable for complex processes where we
05:26
cannot model the process end to end. An example of this is the modeling language Declare. We have a little visualization at the bottom, too. We also, of course,
05:40
have a start and a goal which we want to achieve, and in between we have this red area, which represents the business rules and constraints in this example that must not be violated. Every path, or every process execution, that does not touch this area is allowed, but as soon as you touch the red area, a rule would be violated. Since we are
06:07
moving more into the context of IoT environments and Industry 4.0, we chose declarative process models, which are more suitable for event-driven micro-processes in the context of IoT, as was
06:21
already pointed out in the challenge paper of Janiesch et al. That's why we concentrated on this approach, and we did not use Declare itself but an extension of Declare, which is MP-Declare, multi-perspective Declare. It extends Declare with the
06:43
payload of each event, so we include the data and the time perspective, and this is more suitable for data-intensive processes and applications in the IoT context, because this additional information, like when something happened and other payload, is very often important
07:05
for applications in IoT. We have a little table on the right side where we can see all the constraints that are available in MP-Declare, for example response, which is represented in this table in temporal-logic (LTL) semantics. I will go into more detail on
07:26
some rules to give you more insight, but the main research question was: how can multi-perspective declarative process models be executed solely by complex event processing, that is,
07:40
without using any additional technology. Let's go into a bit more detail with the example of response here. We need to understand that MP-Declare constraints always consist of two components: on the left side the activation and on the right side the target, so when the activation occurs, the target needs to occur too. In addition, we have some
08:04
conditions. On the activation side we have the activation condition, and on the target side there are two additional conditions we can add: the correlation condition and the target condition. To give you an example, if we express
08:26
a response constraint in our own words, it could read: when the failure of a production-critical machine occurs, a maintenance by the same company as the manufacturer of the machine, which needs to be available, must occur afterwards. In this example, production-critical
08:43
machine failure is the activation, production-critical is the activation condition, the fact that the manufacturer and the maintenance company should be the same is the correlation condition, and the target condition is that the maintenance company should be available, because if it is a production-critical machine, it doesn't make sense if the company is only available in two months.
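Written a bit more formally, in the style of the temporal-logic semantics commonly used for MP-Declare templates (a sketch, with the symbols phi_a, phi_c, and phi_t standing for the activation, correlation, and target condition), the response template reads roughly:

    \mathbf{G}\Big( \forall x.\ \big(A(x) \wedge \varphi_a(x)\big) \rightarrow \mathbf{F}\,\exists y.\ \big(B(y) \wedge \varphi_c(x,y) \wedge \varphi_t(y)\big) \Big)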
09:03
Okay, for our approach, we need to understand that we can divide these constraints into two kinds. The first kind are the mandatory constraints. These constraints can
09:22
only be fulfilled, never violated. Let's take the example of response again: once it is activated, so a machine failure occurs, it cannot be violated; it can only be not yet fulfilled. If you wait, for example, 10 minutes and the
09:41
maintenance hasn't been executed, that doesn't mean the constraint is violated, because maybe in five minutes there will be a maintenance; it is still activated and we are still waiting for the target to occur. On the other side, we have the negation or violable
10:00
constraints. These cannot only be fulfilled, but can also be violated directly. Take chain response, for example, which is an extension of response. Here we say: when A occurs, B must occur next, which means there must not be any other event in
10:22
between. That means if another event, for example C, occurs after A, then the constraint is automatically violated. The question now, of course, is how we can detect activations, targets, violations and so on in an ongoing stream of data. We are using
10:45
complex event processing to do this. On the lowest level, we see the raw event stream, where new events occur all the time, and we now use CEP
11:00
to detect the left side and the right side of a constraint, so the activation and the target. We use two additional layers of abstraction to store these activations and targets, and if there is a fulfillment, we can react to that. Let's give an example. We can see
11:22
this gray case here, which looks a bit like a moon. This is the activation, and CEP detects: hey, there is an activation. We lift this activation up into the second layer, and now it is stored on this layer, but we still need to wait for the target. This is what
11:43
we are doing here: we see, oh yes, here is the suitable target. We lift up this one too, and on the upper abstraction level we have an additional stream which is checking: here is an activation, and is this activation also followed by the target?
12:02
If this is the case, the constraint is fulfilled. The same holds for the light gray example here. The green one is an example where we have an activation which is lifted up, but we cannot yet lift it up to the highest-level stream because
12:22
we are still waiting for the target. In contrast to that, I was talking about the mandatory and the negating or violable constraints; the second type we can see here. Let's take again the example of chain response. We say: there is an activation,
12:45
and the target of this activation must occur next, so after A, B must occur directly, with no other events in between. Here we have the activation, and here we would have the suitable target, but there is another event in between. In fact, there are a few
13:04
other events in between, but the first one already violates our rule, so we have a violation, and we can store that in our highest level of abstraction and note that this constraint is violated and that we need to react to it.
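On the stream level, such a violation could be caught with a pattern along the following lines; again this is only a sketch, where the flat event type ActivityEvent with attributes caseId and activity, and the activity names 'A' and 'B', are assumptions for illustration:

    // Assumed event type: ActivityEvent(caseId string, activity string)
    // Chain response A -> B is violated if, after A, some event other than B
    // arrives in the same case before any B does.
    @Name('ChainResponseViolation')
    select a.caseId as caseId, v.activity as violatingActivity
    from pattern [
        every a=ActivityEvent(activity = 'A')
              -> ( v=ActivityEvent(caseId = a.caseId, activity != 'B')
                   and not ActivityEvent(caseId = a.caseId, activity = 'B') )
    ]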
13:21
How did we implement this? We built a little prototype where we used Esper, which is, in fact, a Java-based open-source complex event processing engine, and the key question was how to transform MP-Declare
13:41
into a language that can be processed by the complex event processing engine, which, in the case of Esper, is EQL. It stands for event query language and is, in fact, quite similar to SQL, as we can already see here. If we take a look at the example of response,
14:03
we always execute a select query on the stream of data. Here we can see the select of the ID and the company; we need those to check the payload and the conditions. Then comes the pattern, and in this pattern we can describe which event
14:23
should be followed by which other event. So we have here the pattern: every A is a machine failure event, which is followed by a maintenance event. In these filter expressions we also have the activation condition and the target condition, and in addition we have the where clause, where we check the correlation condition, so the manufacturer needs to be the same as the company.
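Reconstructed from that description, such a response statement could look roughly like the following; the event types MachineFailureEvent and MaintenanceEvent and their attributes (id, productionCritical, manufacturer, company, available) are assumed names used for illustration, not necessarily the ones on the slide:

    // Response: a production-critical machine failure (activation) must eventually be
    // followed by a maintenance whose company equals the machine's manufacturer (target).
    @Name('ResponseConstraint')
    select a.id as machineId, a.manufacturer as manufacturer, b.company as company
    from pattern [
        every a=MachineFailureEvent(productionCritical = true)   // activation + activation condition
              -> b=MaintenanceEvent(available = true)            // target + target condition
    ]
    where a.manufacturer = b.company                              // correlation condition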
14:42
where we can also check the target condition, so manufacturer needs to be the same as company. If we go back to our streams, a different level of extractions back to, we have now here the lowest level of extraction, where we have the machine fail event and the auto maintenance event,
15:02
and in the example of response, we define that this is the activation, which is now lifted up to the middle stream, and in addition we have here the target, which we also lift up into that stream, and we now examine this stream to check whether the activation is followed by a suitable target.
15:26
The ID is the same, the constraint is the same, it is an activation which is followed by a target, and the correlation condition also holds, so we can definitely say this is a fulfillment of the constraint. Here we can see, and I don't want to go into too much detail, because it looks
15:47
pretty much the same as the other EQL statements, that we again use an EQL statement to examine the middle stream and check: is the activation followed by a suitable target? If this is the case, the respective response constraint is fulfilled.
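The lifting between the abstraction levels can be sketched with Esper's insert into clause; the stream names ActivationStream, TargetStream, and FulfillmentStream, as well as the attributes, are again assumptions used only to illustrate the idea:

    // Lift detected activations and targets from the raw stream into intermediate streams.
    insert into ActivationStream
    select id, 'response' as constraintName, manufacturer
    from MachineFailureEvent(productionCritical = true);

    insert into TargetStream
    select id, 'response' as constraintName, company
    from MaintenanceEvent(available = true);

    // On the upper level, report a fulfillment once an activation is followed by a
    // target that satisfies the correlation condition.
    insert into FulfillmentStream
    select a.id as id, a.constraintName as constraintName
    from pattern [
        every a=ActivationStream
              -> t=TargetStream(constraintName = a.constraintName,
                                company = a.manufacturer)
    ];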
16:05
We also implemented a graphical user interface where you can select the constraints and enter all the information: what is the activation, what is the target, what kind of constraint it is, and all the conditions. Once you have done the configuration, you can start the process,
16:22
and the CEP engine then listens to the stream and gives you a monitoring screen for all the constraints that are active. The green ones are fulfilled, the red ones are violated, and the yellow ones are activated but not yet
16:41
fulfilled, but also not violated, so they are still waiting for a violation or a fulfillment. As a conclusion, we now have an efficient, scalable and reliable tool to execute MP-Declare constraints using CEP. We have, as I told you, a graphical user interface
17:03
to give the user maximum flexibility and a way to intuitively define these constraints and start the process. We also have the possibility, inside the code, to predefine actions that should be triggered if a rule is violated, for example, and we also
17:24
built a little test environment where we used sensors on a Raspberry Pi, for example temperature sensors, and sent these events via MQTT to our environment to examine the data. And regarding what was, of course, the main research question, we have now shown that it is possible
17:46
to execute MP-Declare solely by CEP. Last of all, as future work, we should integrate this approach into a real Industry 4.0 environment to really prove that it is also reliable in the context of big data and that even larger and multiple
18:07
streams can be processed successfully, and not just the test environments that we used. So, thank you for your attention, and I guess we have some time left now for questions, if there are any.