Formal Metadata

Title: Drools
Number of Parts: 97
License: CC Attribution 2.0 Belgium. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Transcript: English (auto-generated)
...rule engines, the capabilities, the breadth of the stuff, without hopefully losing anyone and without getting too bogged down. So it's going to be quite intense; if anyone's looking to go to sleep, you can go to the Ruby session instead. Okay, so let's get started. Drools is currently made up of four modules. There's Guvnor, which is the server-side management side, offering all the web stuff for the operations. Expert is the rule engine side, the logic side. Fusion is the event processing side. And Flow: we started off with a thing called rule flow, and it quickly became obvious that rules and workflow sit side by side, so Flow has extended that into full-blown workflow. (Can everyone just pass the little clicker down? Thank you.) The topics today are Drools Expert with quick examples and then a little deeper; a very small amount of Drools Flow, as we don't have much time; back onto Drools Expert for conditional elements, timers and calendars; truth maintenance and inference; and Drools Fusion if we get time. We'll see how it goes.

Right, so let's start with a class: Applicant. Everyone should understand what it is; I haven't put the getters and setters in there. We have a rule, "Is of valid age". It's simply saying: when the applicant has an age of less than 18, then we set valid to false. Very simple, everyone should understand it. So rules are of that form: when something is matched, when we recognize a set of data, we can execute on that.
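As a rough sketch, that rule would look something like this in DRL (the Applicant property names here are assumptions based on the description):

```drl
rule "Is of valid age"
when
    $a : Applicant( age < 18 )
then
    $a.setValid( false );
end
```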
As we've gone beyond just rules, we've started to include workflows and class definitions (what we call type declarations). We needed to build a generic API around knowledge, because all these things are knowledge: your rules, your workflows and your type declarations are all types of knowledge definitions. So you have a knowledge builder to build these definitions. The first thing you do is create a knowledge builder, and then you just keep adding different resources. Here the resource type says this resource contains rules; if you wanted to load in a workflow, the only thing that would change is the resource type, which would be the rule flow type instead. So we've got a generic API to let you load and build lots of different types of knowledge, with a simple way to do the validation, and then you build up the knowledge base. The knowledge base is the compiled, executable form of all this knowledge. One of the things that's different about us is that you do not choose the engine it executes on. You build your knowledge base, and then it executes on top of that; whether it executes with the flow engine or the rule engine is all taken care of invisibly.
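A minimal sketch of that API (Drools 5-style names; the resource path is illustrative, and this would normally sit inside a method):

```java
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

// build the definitions (rules here, but flows, decision tables etc. use the same API)
KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kbuilder.add( ResourceFactory.newClassPathResource( "licenseApplication.drl" ),
              ResourceType.DRL );
if ( kbuilder.hasErrors() ) {
    System.err.println( kbuilder.getErrors().toString() );  // the simple validation step
}

// the knowledge base is the compiled, executable form of all that knowledge
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );
```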
We also have Spring integration. This is the XML equivalent of what you just saw: it declares resources of a rule flow type from a particular URL, it adds in a decision table, and it could also add in a rule resource. It builds up a knowledge base from all these different types of knowledge, all compiled into one executable form. We then build sessions from these. The session is the short-term way to operate on that knowledge; you can think of it in process terms as the process definition versus the process instance, or in rules terms as the rule base versus the working memory. We use the word session as a very generic way of working with this. Once you have everything within Spring, you can then get Camel integration for out-of-the-box services, which is something we're working on at the moment. Camel allows you to chain together what they call processors, connect those up to web services and HTTP, and then channel the data into your knowledge sessions. It gives you declarative sessions out of the box for rules, workflow, whatever; you just specify the type of incoming data and the transformation. It's that simple. We're running behind time already, so let's go into a bit more detail.
In the example you saw how you write definitions, how you build them, and how you execute them. That was a stateless session. Stateless means it executes once and is thrown away; it's like a function. You give it the data, it executes, and when it finishes executing it returns. So it's very simple: you get the session, you get your data (we're just doing a little bit of setup here, checking the data beforehand), you execute it on the session, our working memory or process instance, that returns, and now we can check the values. It's a very simple way of processing your data, quite common in validation: mortgage applications, insurance.
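Roughly, with the Drools 5 API (a sketch; the Applicant constructor and values are illustrative):

```java
import org.drools.runtime.StatelessKnowledgeSession;

StatelessKnowledgeSession ksession = kbase.newStatelessKnowledgeSession();

Applicant applicant = new Applicant( "Mr John Smith", 16 );
ksession.execute( applicant );   // fires all matching rules, then the session is done

assert !applicant.isValid();     // under 18, so the rule marked the application invalid
```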
So let's go a bit more complex now. You have rooms, you've got sprinklers, you have fires and an alarm. This is building the definitions for a fire control system. The first rule: when there is a fire, turn the sprinkler on. There's a fire pattern and a sprinkler pattern. These are bindings, by the way; they give you a reference to a field or a reference to the whole object, and we make a join between the fire and the sprinkler (I'll come on to more of this later). The constraint is that 'on' is false, because the sprinkler isn't on yet; we turn the sprinkler on and print a "turn on the sprinkler for room" notification. The second rule: when the fire is gone, turn the sprinkler off. The difference is 'on' false versus 'on' true, and here we're checking that there is no fire, that the fire does not exist, and then we turn the sprinkler off. (I'll come back to the bindings in a minute for those who are a bit confused.) Not only do we want to turn sprinklers on and off per room, we also want to raise alarms, and we don't want a separate alarm every time there's a fire in some room. So in the same way you saw 'not', we don't say "for every fire, raise an alarm"; we say "the first time there is any fire at all, raise the alarm". And likewise, when there are no more fires in the building, turn the alarm off, retract it.
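Sketches of what those rules look like in DRL (class and field names are assumptions based on the description):

```drl
rule "When there is a fire turn on the sprinkler"
when
    Fire( $room : room )
    $sprinkler : Sprinkler( room == $room, on == false )
then
    modify( $sprinkler ) { setOn( true ) }
    System.out.println( "Turn on the sprinkler for room " + $room.getName() );
end

rule "Raise the alarm when we have one or more fires"
when
    exists Fire()
then
    insert( new Alarm() );
    System.out.println( "Raise the alarm" );
end

rule "Cancel the alarm when all the fires have gone"
when
    not Fire()
    $alarm : Alarm()
then
    retract( $alarm );
    System.out.println( "Cancel the alarm" );
end
```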
That is the ability to quantify over collections of facts; it's called first-order logic, for anyone who thinks in academic terms. And finally, when there are no alarms and no sprinkler is on, everything is okay. The application code just creates the different rooms and sprinklers and puts them into the working memory. I'll put these slides online; I don't have time to go through this line by line, but it basically puts all the data into the session, into the working memory, and calls fireAllRules. That's important: when you put data into the engine, especially on the rules side, the matching calculation is done then, but none of the consequences, none of the actions, are executed until you tell it to with fireAllRules. So it runs through and says "everything is okay". If we then create a fire in the kitchen and a fire in the office and call fireAllRules again: raise the alarm, turn the sprinkler on for the kitchen, turn the sprinkler on for the office. If we then remove the fire from the kitchen, remove the fire from the office, and tell it to go again: turn the sprinkler off for the office, turn the sprinkler off for the kitchen, cancel the alarm, everything is okay. So what you have there is, first, an example of a validation system, as in mortgage or insurance type applications, and then a monitoring type system based on alarms.

Let's go a little bit deeper: Account, CashFlow, AccountPeriod. I told you this was going to be quick, so you're going to have to concentrate, there's a lot to do. What we're going to do is have debits and credits for an accounting period for a given account. These are all simple examples.
I'm going to start with SQL because it will help you understand how the engine actually works. So: select * from Account, CashFlow and AccountPeriod. These are your tables, this is your data. We have a join where the account number of the cash flow equals the account number of the account. This is standard SQL; everyone here should be able to understand it. We're checking for when the cash flow type is CREDIT, and we've got a date range to correlate the select to a given quarter. What we're doing here is effectively creating a view: a view that gives us the cash flow credits for that quarter. We then create a corresponding view for the cash flow debits for that quarter. As you know with views, rows materialize in the view based upon the data in the tables. So if we populate these tables with these rows of data, we get two rows in one view and one row in the other: two credits and one debit. If we were to put triggers on the views, each materialized row would execute its trigger: one trigger increases the balance, the other decreases it, and you end up with a balance of minus 25. Now, you couldn't actually do this on a database, because databases have a problem called mutating tables: you can't change tables you are selecting from. But in this imaginary world it would work.
So what is a rule? You've already seen the format: you have 'rule', then a name (quotes are optional if the name has no spaces), some attributes which control the rule's execution behavior, then the left-hand side, which is the 'when', and the right-hand side, which is the 'then'. When something happens, then do this, do these actions. So there's a left-hand side, where the person's name equals "mark", and a right-hand side, print "hello mark". What's the difference between this and a method? Methods must be called directly: you pass a specific instance in, it is imperative, do this now, do this now with this data. It is a command. Rules can never be called directly. Just like a view, you cannot put data into a rule directly; you cannot say "I want this row in this view". You have to put the data into the table, and it materializes into the view. It's exactly the same here: you put data into the working memory and it materializes against the rule. So a rule is like a view, and you cannot pass it specific instances. Then we have a thing called patterns. A pattern starts with an object type: you have tables in SQL, you have classes in Java, and in rule engines you have object types. In a very simple pattern you have the object type, and it is made up of one or more field constraints; a field constraint consists of a field name and a restriction on its value. That's a very, very simple pattern, and it's the foundation of all this.
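For example, a pattern with two field constraints and a binding might look like this (a sketch with made-up class and field names):

```drl
rule "Greet Mark"
when
    $p : Person( name == "mark", age > 17 )   // object type + field constraints; $p is a binding
then
    System.out.println( "hello " + $p.getName() );
end
```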
So now we're going to take this simple SQL and convert it into a rule. Everyone who understands how databases work, how views work, how views calculate cross products when you have joins: standard SQL theory should help you understand the rule engine. This rule is doing the credits. It's saying select * from AccountPeriod, select * from Account. The difference is that rule engines are a superset of SQL; they are a more powerful SQL. Some of that power comes from bindings: $ap, $acc. Imagine you could select a row from your table and create a variable that points to a column in that row; that's what a binding is doing. It says select * from AccountPeriod, and $ap now gives you the ability to reference that row. Then it says select * from Account, and it creates a binding both on the whole object and on a field. Then select * from CashFlow, with a literal constraint, type equals CREDIT. And there's our first join: the cash flow's account number equals $accountNumber, the binding we made on the account's field. That's exactly the same as the join in the SQL, and the execution semantics are exactly the same as SQL. And there's our date range, where the date is greater than ap.start and less than ap.end, the same as the two date conditions in the SQL.
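A sketch of roughly what those two rules look like (class, field and enum names are assumed from the description):

```drl
rule "increase balance for credits"
when
    ap  : AccountPeriod()
    acc : Account( $accountNo : accountNo )
    CashFlow( type == CashFlowType.CREDIT,
              accountNo == $accountNo,
              date >= ap.start, date <= ap.end,
              $amount : amount )
then
    acc.setBalance( acc.getBalance() + $amount );
end

rule "decrease balance for debits"
when
    ap  : AccountPeriod()
    acc : Account( $accountNo : accountNo )
    CashFlow( type == CashFlowType.DEBIT,
              accountNo == $accountNo,
              date >= ap.start, date <= ap.end,
              $amount : amount )
then
    acc.setBalance( acc.getBalance() - $amount );
end
```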
So here we have our credit rule and our debit rule. Given the data, one produces two rows and the other one row. In a rule engine we call each materialized row an activation: an activation is the rule plus the matching data. So here we have two activations, and here one activation. Those activations will fire, executing their consequences, which increase and decrease the balance: balance minus 25. All simple stuff. Just to show you a few more patterns: this is a literal constraint, this is a variable constraint, this is multiple restrictions on one field, and this combines the two. It gets a little more complicated once you start to do expressions. These will be on the slides, you can come back to them later, and it gets fairly complex still. Believe it or not, this is something we're quite proud of, because whether it's Jess or CLIPS or ILOG, no one has a rule engine that is as expressive as this, with all these ANDs and ORs working on nested accessors, maps and arrays. Someone coming from Java or JavaScript will just say "yeah, I deal with this all the time", but rule engines are still back in the 1990s. So one of the things we're doing with Drools is dragging these systems into something Java developers and Groovy developers can work with. It doesn't feel like it's built on LISP the way the other systems are.
Very quickly, what is a rule engine? More precisely, Drools is a production rule system, because you could have a little JavaScript function doing validation and call that a rule engine too. So to classify: we are an expert system. There are different types of expert systems; we are a production rule expert system. We're actually working towards becoming a hybrid, both a production rule system and a backward chaining, Prolog-like system, but that's our research. You have a production memory, which is your rules, your views. You have your working memory, which is like your tables, and you insert, update and retract into it just as you would into tables. Then there is a process which takes the tables and the rules and combines them to create an agenda, which I'll show in a second; it's basically how a view works to materialize rows from the tables. So: table, table, table; object type, object type, object type. Two views, two rules. Now this is the bit that makes it a little different. Imagine one view that could aggregate all your other views, a list of all the rows of all my views: that's what the agenda is.
The agenda basically says: I've got two rows in my credits rule and one row in my debits rule, two activations and one activation, so the agenda has three activations on it, and it determines the order in which they fire. Here I've brought in a new option; remember, attributes control the execution behavior. I've set the salience here to minus 50. Salience is a form of priority; the default is zero, so the credits and debits rules are all on zero. They are what's called in conflict, and because they all have the same priority, they can execute in an arbitrary order. And with rules, the more arbitrary you can leave the execution order, the better: you do not want imperative control in your rules, you want to limit that. But this rule here clearly can't print the balance until the calculations are done, so we have to give it some control. By giving it a salience of minus 50 we say: do those three activations first, I don't care in which order, but once they have all happened, then you can print the balance. That's conflict resolution with salience.
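As a sketch, that print rule might look like this (the salience value follows the talk; names are assumed):

```drl
rule "Print balance for AccountPeriod"
    salience -50        // lower priority: fires after the zero-salience credit/debit rules
when
    ap  : AccountPeriod()
    acc : Account()
then
    System.out.println( "Account " + acc.getAccountNo() +
                        " - balance: " + acc.getBalance() );
end
```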
Very quickly, to get over the mutating tables issue we have a two-phase system. I'm not going to go into this too much, but it means that from Java land you populate your tables, your object types: insert, modify, retract. That's the working memory action phase, and it builds up an agenda of all your possible activations. Then the engine goes to the agenda, applies the conflict resolution strategy you just saw, pops the first activation off, and goes into the consequence, the right-hand side, to evaluate it. Inside the consequence it's back in the working memory action phase, so it can insert, modify and retract again. That means if I put some data in and it created 100 activations, and the first one popped off the agenda changed some data so that 99 of the other activations are no longer true, those 99 are taken off the agenda. That lets the rule engine get over the mutating table issue and makes sure that no rule fires when it is no longer true. So just because something is true and activates doesn't mean it necessarily fires: it has to still be true when it attempts to fire.
Okay, so now rule flow. This is another way of controlling execution flow. Quite simply, it's a way of saying when a rule is allowed to fire: a declarative format that gives procedural control over when rules may fire. Rules just say "when this is true, do this"; they have no idea of 'when', of 'now', of order. So you can think of rule flow as rule orchestration. We add a new attribute, ruleflow-group "calculation". That means this rule might be true, but it still can't fire until the process engine reaches the corresponding node in the flow. When the process gets there, the rules in that group are allowed to fire.
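A sketch of the attribute (the group name is the one mentioned in the talk, the rest is illustrative):

```drl
rule "calculate credit total"
    ruleflow-group "calculation"   // may only fire while the flow is in its "calculation" node
when
    CashFlow( type == CashFlowType.CREDIT, $amount : amount )
then
    // ... update the running total ...
end
```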
So it allows the workflow to control the rules. We have many more ways in which rules and processes interact, far too much to cover today, so I'm just highlighting the ruleflow-group there. Here's a number guess example that combines processes with rules, and you can see processes and rules side by side, because we now have rules that control the processes as well. And we have all sorts of other stuff; I won't even go there, I haven't got time. One of the things we say with Drools is that we're not just a rule engine any more, or a process engine, because rules and processes and event processing and semantic ontologies do not live on their own. Each vendor obviously starts from one base and tries to grow it, and everything else ends up quite weak. So about three years ago we made a decision and said: no, we're going to have rules, processes and event processing all as first-class citizens, and we need to define a generic API to make them work together, so that no one of them is perceived as stronger than the others. And we want to support a range of modeling techniques. You have the typical SOA approach, decision services, where your rule engine and your process engine are completely decoupled and one calls out to the other; typically the rule engine is used in the stateless form you saw early on, doing validation and calculation. That's the simple case; it's why SOA was invented. SOA was invented because people couldn't get more complex modeling working. That doesn't mean there isn't value in the complex modeling; it just means systems haven't been good enough to make it possible. Drools lets you go right to the other end as well, to work with very tightly coupled rules and processes, and anywhere along that spectrum. This is what we call behavioral modeling.
It's about taking an application, looking for behavior and modeling it. We do not make you go process-oriented or rules-oriented; you use the software to solve the problem the way you want to solve it. We recognize that rules, processes and event processing all follow the same life cycle: you design, you simulate, you test, you integrate, you collaborate, you deploy, you execute, you audit, and you have human task interaction management. Human task interaction is not just a process thing, it's also a rules thing, so you have to start thinking about things in a different way. At the moment, because BPM is the big baby, everyone tends to push everything towards BPM, with rules as an add-on; you have to think differently to get the best out of these systems.

We have fully integrated debugging and auditing. What that means is that when the rules and processes execute, we capture all the events in the system. Everything in our system emits events: when you start a process, when you enter a node, when you exit a node, when you insert data into the working memory, when you fire a rule, it all emits events. If you collect all these events, you can create a correlated log of causality. Causality means what caused something to happen: this rule did something and it caused this process to start; this process executed and it caused these rules to fire. If you're working under Sarbanes-Oxley, or anywhere companies have dedicated people whose job is just to correlate reports, what we're saying is that if you use our software you get this out of the box. You can fire that guy. He's gone. Or you can make him do something useful, which would be nicer. So here you can see it; I think I can zoom in a little, there we go. Here the rule is activated, the row is materialized; here it fires; and because it's part of a ruleflow-group, the ruleflow-group has to activate in order for the rule to fire. So you start to see the correlation between the rules and the processes.
One of the big things we push is domain-specific processes. Because Drools is all about declarative modeling, all about behavior modeling, BPMN, and even BPMN2, does not cut it. One of our growth areas is the medical domain. We need to build processes that have the language, semantics and dialogs that the technician, the skilled person, understands. That means the icons in the left-hand palette need to be icons they know and words they know; when they drag one across, the dialog box has to show things they know. Drools makes it trivial to design domain-specific processes: literally, you can be up and going in 15 minutes. It's a META-INF file; you drop it in and it just says "this is the icon, this is the dialog box", and it appears in the palette in Eclipse. It's trivial. We ship a number of what we call work items with Drools as examples. Here we're using a series of Apache Commons libraries for automation. This one finds files on disk, then there's a for-each composite node: for each file it finds, it logs it, runs some rules to check it (it might be checking dates, or whatever it needs to decide what should happen), then uses an archive work item, so the results all go into an archive; we then copy it, we email it, and we end. And if you click on one of these nodes, its associated dialog box pops up.
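At runtime each of those domain-specific nodes is backed by a handler registered against the session. A minimal sketch, assuming the Drools 5 work-item API and an imaginary "Email" work item:

```java
import org.drools.runtime.process.WorkItem;
import org.drools.runtime.process.WorkItemHandler;
import org.drools.runtime.process.WorkItemManager;

public class EmailWorkItemHandler implements WorkItemHandler {
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        String to = (String) workItem.getParameter("To");     // parameter defined by the work item's dialog
        // ... send the email here ...
        manager.completeWorkItem(workItem.getId(), null);      // tell the flow engine the node is done
    }
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        manager.abortWorkItem(workItem.getId());
    }
}

// registered against a session, roughly:
// ksession.getWorkItemManager().registerWorkItemHandler("Email", new EmailWorkItemHandler());
```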
This is incredibly important. It's good for you developers as well, because it keeps you in a job. It means you're going to work with a business analyst and ask: how would you like to capture your knowledge, your business? What is the codification of your company's business knowledge? What do you want your processes to say? It's really important. It means we now give you the tools to work hand in hand, to tailor the process development environment for your business analysts, and they get a tool which lets them work in a way they can understand. To be honest, even BPMN2 is so low level; you'll understand with the next example. If you are a medical technician, you understand "I need to take a blood pressure, I need to prescribe some BP medication, I need to notify the GP, I need to do a BP follow-up", and you understand the order and orchestration of these. Most of these workflows are not complex, and the technicians can understand the procedural nature of them, but they can't start with gateways and splits and all the really complex stuff. They need icons they are trained on. They will tell the developer: this is what I want my workflow to look like, this is what you need to develop so I can get my job done. And then they can do their work quicker and more efficiently. If you develop these domain-specific processes, it also allows you to create internal methodologies, which means when a new person joins, they don't have to learn BPMN2, they just learn your domain-specific workflows. So it's an incredibly powerful tool, it's not something anyone else in the market addresses at the moment, and it's a big growth area for us. Big organizations build their own workflow tools, because they can't use the commercial ones; they can't buy something this flexible, and when they build it themselves it costs a lot of money. So to have something generic and powerful, that comes with rules integration and everything else we do, they absolutely love it. We have an announcement due in October: the US Navy healthcare people saw this, and OSDEs, and they said "we have to have this", and they invested massively in the project, allowing us to hire ten people to work on this side of things. So it's incredibly important. Anyway, back into the hard stuff; I only wanted to touch on processes.
You saw 'not' earlier on. This is saying "when I have no red buses". These are called conditional elements: you have the pattern, and then this little keyword in front of it. 'Not': when there are no red buses in my session, my working memory. 'Exists': when there is one or more red bus. 'Forall': when all my buses are red, or when all my buses that have two floors are red. It's simply the ability to look at sets of data and make decisions based on sets of data.
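Sketches of those conditional elements (the Bus class and its fields are assumptions):

```drl
rule "no red buses"
when
    not Bus( color == "red" )
then
    System.out.println( "There are no red buses" );
end

rule "at least one red bus"
when
    exists Bus( color == "red" )
then
    System.out.println( "There is at least one red bus" );
end

rule "all double-decker buses are red"
when
    forall( $bus : Bus( floors == 2 )
                   Bus( this == $bus, color == "red" ) )
then
    System.out.println( "All double-decker buses are red" );
end
```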
Typically, rule engines, coming from their LISP lineage, do not work well with Java classes. If you work with Jess or with ILOG, what they do is keep an internal representation, basically a list, an array, and they map your classes onto that. It makes it almost impossible to work with nested objects; with those systems you end up working with strings and numbers.
You can't work with more complex object graphs, as we call them. So Drools introduced this thing called 'from'. 'From' lets you treat a pattern as just a type of filter: it filters the data coming through it, and that data doesn't have to come from the working memory, it can come from anywhere. So why can't I just say: I've got a person in the working memory, match him, I've got a binding to him, and now filter the results of an expression against him? The expression here is an MVEL expression; you evaluate it, it returns a collection, and the elements of that collection are iterated through the pattern. It's a bit like a correlated subquery. In Oracle you can do joins with data that's not in your current database; it's a bit like that, you're doing joins with data that's not in the working memory. This gets more powerful still, because whereas before we were joining against something in the working memory, we can also have what we call globals. A global is something that's not in the working memory.
It's like a service variable that is simply available. Here we have a Hibernate session, globally available. The situation is: I have zip codes in my system, but I can't put all my people into the working memory, it's too big; I can put the zip codes in. So what I want to do is go through the system zip code by zip code, and for each zip code pull out the people and process them locally within that rule, just for that zip code. So it's a Hibernate named query; it sets the parameters; see that $zipCode there? That's joined from the zip code pattern. It's saying select * from ZipCode, and for each row that materializes, it then does a correlated, nested query against the results of this Hibernate query.
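A sketch of that kind of rule, assuming a ZipCode fact, a Person entity and a named query called "people by zip" (all illustrative):

```drl
global org.hibernate.Session hbnSession;   // a global: available to the rules, but not a fact

rule "process people zip code by zip code"
when
    $zip : ZipCode( $zipCode : code )
    $p   : Person( age > 65 )
           from hbnSession.getNamedQuery( "people by zip" )
                          .setParameter( "zipCode", $zipCode )
                          .list()
then
    // process $p locally, just for this zip code
end
```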
And we're just going to filter all the people. It's very, very simple. A more powerful conditional element is 'collect'. It collects everything it can find that matches the pattern you give it. Here I'm saying collect all my red buses; it returns a list, and you can then evaluate the size of that list.
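Roughly (again with an assumed Bus class):

```drl
rule "more than two red buses"
when
    $list : java.util.ArrayList( size > 2 )
            from collect( Bus( color == "red" ) )
then
    System.out.println( "Found " + $list.size() + " red buses" );
end
```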
Then you have 'accumulate'. Accumulate allows you to do aggregation, and it's fully pluggable: we ship sum, average, count and so on, but you can plug in your own functions. FedEx are using this to do geospatial analysis; they have a system (there are online demos of it) that analyzes their environments and uses aggregations to monitor their vehicles. Is the vehicle's rate of climb too steep? Certain cargo is very sensitive and can't go up too fast. Is the temperature too hot? They use these aggregations to continually summarize the environment and feed the results into other subsystems for those continuous calculations; this is part of Drools Fusion, which I'll come on to in a minute. So this rule is saying: get all my red buses, take a reference to the takings and put it into this accumulate function; the result of the function is returned. It's a number in this case (it doesn't have to be), and if the number is greater than 100, do something. So it allows me to look at sets of data and perform calculations over them. This starts to take us into functional programming territory: the 'from' keyword is effectively letting us take a production rule system and extend it with a functional style, so these conditional elements can be nested and chained.
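A sketch of that accumulate (Bus.takings is an assumed field):

```drl
rule "red buses are taking good money"
when
    $total : Number( doubleValue > 100 )
             from accumulate( Bus( color == "red", $t : takings ),
                              sum( $t ) )
then
    System.out.println( "Total takings for red buses: " + $total );
end
```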
And this is quite a complex example showing how we can combine data that's in the working memory with data from a Hibernate named query, all aggregated by an accumulate, using a more functional approach to programming. I won't go into the details; just know it's there.

Timers. One thing that often happens is that "when this is true, do this" is not enough. Sometimes it's "when this is true, do this, but do it every hour, because it's an alarm", or "do it every month". So we introduced timers. Drools has long had a thing called duration, which was a way of delaying the firing: when this is true, wait 30 seconds before actually doing it, and if it stops being true before the 30 seconds are up, don't do it at all. So what we're saying here is: when the light is on, after a minute and 30 seconds, turn the light off. It's a timer that turns the light off after a minute and 30 seconds if the light is still on. We also have interval-based semantics, modeled on the JDK Timer utility: an initial delay and then repeated iterations after that.
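Sketches of those two forms using the Drools 5 timer attribute (the Light class and the helper call are assumptions):

```drl
rule "turn the light off after a delay"
    timer (int: 1m30s)            // delay-style: fire 90 seconds after the rule becomes true
when
    $l : Light( on == true )
then
    modify( $l ) { setOn( false ) }
end

rule "remind every hour"
    timer (int: 0 1h)             // interval-style: no initial delay, then repeat every hour
when
    Alarm( active == true )
then
    sendReminder();               // hypothetical helper
end
```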
This one is just the repeat interval, and this one is the equivalent with an initial delay before the continual iteration starts. So it's basically a way to make a rule fire continuously based on delays and iteration periods. Of course, the next thing is: if I can do that, why can't I combine it with cron scheduling? So now I can have rules that, when the rule is true, fire based upon a given cron definition. I told you you'd like this stuff. Does everyone know what cron is? Cool. Cron is just a way of defining schedules; it's incredibly complex and fiddly, but very powerful. So we can now include cron expressions.
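For instance, roughly (the cron expression here is illustrative, in the Quartz second/minute/hour format):

```drl
rule "hourly alert"
    timer (cron: 0 0 * * * ?)     // every hour, on the hour
when
    Alarm( active == true )
then
    sendEmail( "alarm is still active" );   // hypothetical helper
end
```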
It means that when this rule is true, it will fire at regular intervals based on the cron definition. We can combine this further with calendars, because not only do we want something to fire at regular intervals, we also need to say when it is allowed to fire at all. I've got an alarm and I want it to fire every hour, but really that's only true on weekdays. And this is something that belongs in the rule engine, not just in process engines. So this rule says "only on weekdays". These calendars are fully pluggable and based upon Quartz calendars: you hand a Quartz calendar to Drools and you don't have to write any of the calendar handling yourself; Quartz is pretty impressive. Based on the weekday calendar, this rule fires every hour, because on weekdays we want the higher frequency. Then I can have an opposite rule that's mutually exclusive, because its calendar is weekends, and give it a different interval: every four hours. So we have two rules that are mutually exclusive based on their calendar definitions. You can use the standard Quartz calendars or define your own. So now you have timers, both interval and cron based, and calendaring, all within your rule engine.
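A sketch of that pair of rules using the calendars attribute (the calendar names are whatever you register against the session; the helper is hypothetical):

```drl
rule "weekday alarm check"
    calendars "weekday"           // may only fire on days covered by the "weekday" calendar
    timer (int: 0 1h)             // every hour
when
    Alarm( active == true )
then
    notifySecurity();             // hypothetical helper
end

rule "weekend alarm check"
    calendars "weekend"
    timer (int: 0 4h)             // every four hours
when
    Alarm( active == true )
then
    notifySecurity();
end
```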
And of course, because the process engine is fully integrated, it means that based on those calendaring instructions you can start processes, you can have checks and wait states. And unlike the typical purely process-oriented systems, you're not only saying "do this at the weekend", you're also giving it conditional information about when to do it.
It creates a much more powerful environment, much more powerful. Is everyone's brain hurting? I said this would be intense. How are we doing? Okay, we should just make it; we're going to notch it up a little now. I'm going to try to teach a bit more about rule design. There's a thing called truth maintenance (TMS) and inference. Inference is a scary word. You have companies like Corticon going around saying you don't need inference, it's too scary, it's useless (because their tools don't do it), and they use really complicated examples to show why inference is something you don't need. Actually, inference is very, very simple and very, very useful, and if you explain it right it becomes quite obvious and stops being a scary word. So let's go through an example about issuing bus passes. Children get child bus passes and adults get adult bus passes. What we're saying here is: when I have a person whose age is under 16, create a child bus pass for them; when I have a person who is 16 or older, give them an adult bus pass. But this couples the logic. Imagine my company is split in two: one department chooses the policy, who is a child and who is not; another department issues the bus passes.
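The coupled version looks roughly like this (class names assumed):

```drl
rule "Issue Child Bus Pass"
when
    $p : Person( age < 16 )
then
    insert( new ChildBusPass( $p ) );
end

rule "Issue Adult Bus Pass"
when
    $p : Person( age >= 16 )
then
    insert( new AdultBusPass( $p ) );
end
```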
The people who issue the bus passes don't necessarily care whether a person is 15 or 16; they're just told "issue bus passes for children, issue bus passes for adults". The decision making belongs to two different departments, but what I've done here is tightly couple the department that issues the bus passes to the decision of whether someone is 16 or over. Not only is it tightly coupled, they are also exposed to it: it's leaking information to them which they should not have to care about. And it makes the change process complex, because the department that decides the policy, 16 and over or under 16, has to issue change requests, go to the other department, and have them apply the change. It makes for a very brittle process for passing knowledge through your company and changing policy decisions. And what happens when the child turns 16? So it's monolithic (that actually shows up better in the decision table version) because it brings all your logic into one place. It's leaking, because the person making one decision sees information they should not need to care about. And it's brittle, because what happens when a person stops being a child? So how do we get over this? Truth maintenance has a thing called logical insertions.
A logical insertion says: not only am I going to create something and put it into the working memory, but its lifetime will depend upon a condition. Only while that condition is true will it exist. So only while this person is under 16 will we have this fact called IsChild. That IsChild fact is what we call an inference: a fact that represents the result of a decision, a fact that represents "this person is under 16". We have an object which encapsulates some decision-making in a way that gives it semantic intent ("this is a child") and in a way that gives it encapsulation, because you don't have to know what makes someone a child; you just know that this person is a child. So it gives encapsulation and decoupling. And because it's logically inserted, this inference we've made, that the person is a child, will only exist while the person is under 16. As soon as the person is no longer under 16, the fact is automatically retracted, removed from the system. Then we have a mutually exclusive rule (otherwise it can get a bit weird) that says when the person is 16 or over, we logically insert that they are an adult. So we create an object which encapsulates a decision.
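A sketch of the decoupled version with logical insertions (IsChild, IsAdult and the pass classes are assumed names):

```drl
rule "Infer Child"
when
    $p : Person( age < 16 )
then
    insertLogical( new IsChild( $p ) );   // exists only while the condition stays true
end

rule "Infer Adult"
when
    $p : Person( age >= 16 )
then
    insertLogical( new IsAdult( $p ) );
end

// the bus pass department only works against the inferred facts:
rule "Issue Child Bus Pass"
when
    $p : Person()
    IsChild( person == $p )
then
    insertLogical( new ChildBusPass( $p ) );
end
```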
That creates the inference, and we use the logical insertion to maintain the truth of that inference. This thing we've inferred, this bit of information we've decided on, we always know it's true based upon some other logic. So now we have one department saying when someone is a child and when someone is an adult; that's their responsibility, and they publish those rules, which create the inferences. We have another department responsible for issuing bus passes. Now this looks much better: they just say "when a person is a child, give them a child bus pass". We've got decoupling, we've got encapsulation, we've removed the brittleness and the leakiness; much nicer. And the same here: when the person is an adult, issue an adult bus pass. We can take this further, because when we've given out a child bus pass and it stops being valid, we obviously want to request it back. So now we can say: when there is no longer a child bus pass for this person, issue a return request. And this all happens automatically. You logically insert the child bus pass; that child bus pass is based upon the inference made by the other department, that the person is under 16. When the person turns 16, the inference is automatically retracted; and because the inference is retracted, the child bus pass is automatically retracted too. These inferences form a chain of truth that cascades, and if the truth breaks at any point, everything below that point is automatically retracted as well. I know this is a little bit complicated; if you just get a bit of it, that's great, you've all done very well, we're almost there. Anyway, this rule is basically saying: when the logic has cascaded back to this specific point, issue a request to get the pass back. It's all done automatically by the system.
So truth maintenance and inference give you decoupling of knowledge responsibilities and encapsulation of your knowledge, and they provide semantic abstractions for those encapsulations. Because "a person is 16 or over" or "under 16", what does that mean? It means different things to different people; if it were the age of consent, that's different in different countries. So it allows you to encapsulate that and give it semantic meaning. That's quite important: it makes things readable and more maintainable, and the truth maintenance also adds integrity and robustness. I hope that's a useful little smattering of rule-based theory. Okay, this is the last bit. This is about Drools Fusion.
Drools Fusion extends our existing rule language with event processing, or complex event processing, capabilities. A number of engines have evolved for this that are called query-based engines. Query-based engines are typically based on SQL: they take a stream of data, look for changes in that stream, and use a design pattern called event-condition-action, so they emit an event based upon something they found. For me this is actually quite limited, and I do another, longer talk where I show comparisons between Drools and Esper. Not only does it mean you have to learn two different languages to do the same thing (and one language is very much a subset of the other), but these query-based systems do not support side effects, i.e. what happens if the information you're querying changes, which rule engines do handle, and they don't have the powerful features rule engines have, like truth maintenance. And on top of that, you have to learn two different APIs, two different ways of building things. If you used a process engine, a rule engine and an event processing engine combined, you'd have to learn the API, the language and the nuances of each one. And as you know, you can learn one thing quite efficiently; by the next one you're less efficient, and by the third one less efficient again. You'd spend weeks learning how to do the same things (how to build something, deploy something, debug something, check the errors) in three different systems. When you're in your 20s you don't mind doing that; when you get to your 30s you get sick and tired of it. I don't want to deal with this, I just want to do my job; it's not interesting, it's not fun anymore. It's like when I was 24 I would install a different Linux distribution every week. Now... I'd sooner go back to Windows. Oh, sorry, I work for Red Hat. I don't use Windows. I use Linux on this; this is not a Windows desktop. So anyway, we haven't much time.
Drools uses a fully rule-based approach to complex event processing; TIBCO do as well. Many of the vendors have more limited systems and put out a lot of FUD about why you need their special system, how maybe only their special system can do this (the likes of Apama and Esper), and many of them try to put down rule engines. It's all just bollocks. It's simply that the existing rule engines had not yet been designed to do this; it does not mean they can't. If you took Jess and tried to do complex event processing on it, of course it's going to blow up. It's not that Jess can't do this, it just hasn't been extended for it yet, and there's no reason why these systems can't be extended. So we've taken many of the things needed to address complex event processing and made them possible in a rule engine. First of all, a rule engine traditionally has a single point of insertion, which creates a bottleneck: if you've got ten different queues and streams of data coming in, you don't want them all funnelling through one point. The other problem people raise is that the working memory sees everything, and if everything has to be evaluated against everything, it can be slow and cumbersome. So we created these things called entry points. An entry point is a named way to partition the working memory; typically each stream of data you have becomes an entry point. So here we have the home broker stream; that would be connected up to JMS or HTTP or whatever you want, with a little producer/consumer that inserts into this named entry point. That's the API side. On the language side, we can now have a pattern 'from entry-point'.
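Roughly (the stream name follows the talk; event class and field names are assumptions):

```drl
rule "correlate order with home broker stream"
when
    $order : BuyOrder( $id : id )
    BuyAck( relatedId == $id ) from entry-point "Home Broker Stream"
then
    // ... the acknowledgement arrived on that stream ...
end
```

On the API side, the producer for that stream would, as I understand the Drools 5 API, insert via something like ksession.getWorkingMemoryEntryPoint("Home Broker Stream").insert(event).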
What it means is that this pattern doesn't filter everything in the working memory; it only filters the stuff on this stream. And because of the way it works, these streams, these entry points, can be correlated with other entry points. We also try to make sure that each entry point does as much as possible on its own thread: each entry point is automatically on its own thread, we take care of the synchronization of the joins between threads, and we try to do as much work as possible within a single thread to get the throughput. So that takes care of that. Next: rule engines traditionally don't have temporal comparators, so it's very difficult to do comparisons in time. Drools has now been extended with all 13 temporal operators, which allow you to model any relationship in time. You use them as comparators in the pattern, and different comparators take different arguments, because you can say "this happens between 1 second and 10 seconds after that". So this is saying: when I have a buy-acknowledgement event that occurs between 1 second and 10 seconds after a buy-order event. To help you understand it visually: A before B, A meets B, A overlaps B, A finishes B; the slide shows the visual connotation of each. We support all 13 of those.
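A sketch (event class names assumed):

```drl
rule "acknowledgement within the expected window"
when
    $o : BuyOrder()
    $a : BuyAck( this after[1s, 10s] $o )   // $a occurred 1 to 10 seconds after $o
then
    // ... all good ...
end
```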
It basically means you can express everything. And it's not just important to know when something happens; it's often more important to know when something does not happen. What does that mean? If I have a buy-order event, what I really want to know is when the buy-acknowledgement does not happen. So what I'm saying here is: when I have a buy-order event, and the buy-acknowledgement event does not happen between 1 and 10 seconds after it, then I need to do something. And notice we now effectively have three different areas: my working memory (which is actually the default entry point), my home broker stream and my stock trader stream, and it's correlating all three of these, checking for the absence of events. How much code do you have to write to do this in RHQ? A lot, and you have to maintain it. I showed this to Deutsche Bank; banks are all about time, everything they do is about time, and they write these complex systems to analyse the changes in stocks over time, the aggregations over time, and then they have to try to test all of it. When you have data and rules working with time it's incredibly brittle and very hard. When you take someone who spends day in, day out dealing with wait states, timers, synchronisation and all the low-level code to make these things happen, and you show them this, they fall instantly in love.
So for Deutsche Bank it was like a revelation when they saw this. But it gets even better: we can take these patterns and create windows. We can say "when this pattern happens over a period of time", over a time window of five seconds, or we can go for counts, "when this stock tick happens over a length of a thousand ticks". What we do is orthogonal: you learn the rule language, and you only have to learn a few keywords to extend it with event processing capabilities. You've already learned accumulate, you've already learned from, you've already learned the pattern language; when you're writing these new rules you don't have to go and learn a whole new thing for CEP, it's just an 'over' keyword plus the 13 temporal operators, and that's it: now you've got full CEP capabilities on top of your existing knowledge. So we're letting you do more by learning less, and if you're doing more by learning less, that's great. This rule is going to produce the average stock price over a time window of five seconds, and if that average is more than 100, do something. As I said, these accumulate functions are fully pluggable: there's a Java interface, you register your implementation, and you can do anything you want in there, so it will work with any subsystem you like that does statistical analysis.
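A sketch of that rule (symbol and field names are illustrative):

```drl
rule "average price over the last five seconds"
when
    $avg : Number( doubleValue > 100 )
           from accumulate(
               StockTick( symbol == "RHT", $p : price ) over window:time( 5s ),
               average( $p ) )
then
    System.out.println( "5 second average for RHT is " + $avg );
end
```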
To show a little of how this works with processes: once you have something with built-in event capabilities, you want to start designing for event-driven architecture. What does that mean? It means you want to make sure that everything in your system emits an event. Do everything with events: everything that happens, whether anyone needs to consume it or not, every state change, emit an event. Within the rule engine, every time you insert something, every time you start a process, every time a rule fires, an event is emitted. If this were a business application you would capture business events too: someone is fired, someone is hired, you buy a stock; all of these are events. If you can model everything that happens, every state change in your system, and just emit these events, it allows you to create systems that are far less brittle, systems that let you do things you might not necessarily have intended up front. So you design a good event model up front and then build different correlations on top. Think about what a process actually is: a process is just a correlation, a sequential correlation of events. As it goes from node to node to node, each node is an event, each state change is an event, and if you emit events from these, the complex event processing side can suck it all in and analyse it. So here's a very, very simple one. I have process-started events: I have an order process, and every time an order process starts, this event is emitted automatically by the system. And I'm saying: over a time window of one hour, do a count aggregation.
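A sketch of that kind of rule (the event class and process id here are assumptions, not necessarily what Drools emits):

```drl
rule "order process started too often"
when
    $n : Number( intValue > 1000 )
         from accumulate(
             $e : ProcessStartedEvent( processId == "com.example.OrderProcess" )
                      over window:time( 1h ),
             count( $e ) )
then
    // more than a thousand order processes started in the last hour: do something
end
```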
If over a period of an hour this process has started more than a thousand times, then do something. One of the directions systems are going in (I'm pretty much at the end now, I think we've got five minutes left) is towards what we call dynamic and adaptive. Dynamic means the system can change on the fly. Rule engines have always been dynamic: you can add rules and remove rules in a stateful system while it's running; you don't have to take it down and bring it back up again. And it's the same with processes in Drools: you can add and remove processes. Remember, the knowledge base is composite knowledge, and you can change any part of it; not only that, our processes allow you to change sub-parts of a process, which we call dynamic fragment orchestration. That's the dynamic side. Now, if you have something that monitors itself, you get something that's adaptive, because it can monitor what it's doing and then change itself. You start to get something that's both dynamic and adaptive, and the systems of the future will be just that: dynamic, adaptive and self-monitoring.

Questions? Thank you very much for your patience. If anyone has a headache, I'm very, very sorry; I said it was a hell of a lot to go through, and there's so much more. It takes about three or four hours to do this justice. I hope you understood at least a small amount of it, and I hope people have learned something today. Questions?