Implementing Parallel Programming Design Patterns using EFL for Python
Formal Metadata

Title: Implementing Parallel Programming Design Patterns using EFL for Python
Series: EuroPython 2016, Part 119 of 169
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported
DOI: 10.5446/21197
Transcript (English, auto-generated)
00:01
Good morning. Please welcome our next speaker, Honza Kral, who will be talking about designing a Pythonic interface. This presentation is about our research and development project, which we are working on with David,
00:24
and with students of our laboratory, as well as our third partner, Rafael Igezke. The agenda of this presentation covers the motivation behind the Embedded Flexible Language,
00:43
our objectives, the programming model of EFL, the execution model, the implementation of these ideas, and then the focus of this lecture: how we can implement parallel design patterns in EFL, followed by conclusions and further work.
01:05
First, the motivation. Today we have a huge heterogeneity of incompatible parallel programming platforms: MPI, OpenMP, Python threads, Python multiprocessing, etc.
01:24
There is a need for a common approach that makes it easier to program parallel systems and frees programmers from each platform's technical intricacies. Our major objective has been to develop a straightforward language which implements that common approach
01:49
and allows implicit, instead of explicit, parallel programming, making it easier to program parallel systems. This should allow flexible computation, which we will talk about further,
02:05
in which sequential and parallel executions produce identical, deterministic results. It doesn't matter whether we run the same program sequentially or in parallel; we ensure that the result values will be the same.
02:25
The execution is completely deterministic. To facilitate this, a deterministic parallel programming tool has been developed; its name is Embedded Flexible Language, EFL.
02:41
The programming model of EFL. Suppose we have a sequential program and we decide that, to get better performance, we want to parallelize part of the code. In the parts the programmer wants to parallelize, we embed blocks of EFL, like you see here on the slide.
03:16
And the sequential parts of the program are written in the host language, maybe Python, maybe C, maybe any programming language.
03:27
The parts of the program which are to be executed in parallel are written as embedded EFL code. The EFL syntax is C-style. Why? Because we wanted something universal, so that the same embedded language could be used with any host language.
03:47
And the most widely used syntax in programming is C-style, so we decided that the EFL syntax would be C-style, and host-language independent, to be universal.
04:04
The semantics of EFL is deterministic, like in functional programming. Memory management is left to the host language. The semantics is implemented
04:21
by translating embedded EFL blocks of code into parallel code of the host language. Then, what are the principles of the EFL programming model that ensure deterministic parallelization?
04:40
First, the programmer should call pure functions, functions that don't have side effects, ensuring the functional programming concept of referential transparency. Variables used inside EFL blocks may be of only two kinds, in or out, but not in-out.
05:04
That is, variables we can only read from, or only write to, but not both. And there is the important concept of once-only assignment, which is connected to the principle that variables may be only in or out.
05:25
Now, the execution model of EFL. The key aspect is that parallel and/or sequential execution orders of a program written according to the EFL programming model will yield identical, deterministic values.
05:49
And because of that flexibility of execution orders, we call our execution model flexible computation.
06:00
Now we will try to understand why we need once-only assignment. In this code example we initialize x and y with one and three, and here in the EFL block we have three lines:
06:22
x receives the value of f(a), y the value of f(b), and x equals x plus f(c). If we execute this code sequentially, line one, then line two, then line three, the final values are x = f(a) + f(c) and y = f(b).
06:56
But in a parallel execution, every line can be executed in any order, and if the
07:05
order is three, two, one, we get a completely different value than in the sequential execution. So allowing x to be both in and out leads to nondeterministic results.
07:22
If we don't allow in-out, only in or out, the same code will be written this way: x is only an out variable, and y is also an out variable.
07:43
Then the sequential execution, and also any parallel execution, will give exactly the same values. Once-only assignment prevents the nondeterministic results that we had in the non-EFL code.
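The same discipline can be mimicked in plain Python: each EFL assignment becomes an independent task with its own write-once "out" variable, so any completion order yields the same values. A minimal sketch, where f and the values of a, b, c are stand-ins for the slide's example:

```python
from concurrent.futures import ThreadPoolExecutor

def f(v):            # stand-in for the talk's pure function f
    return v * 10

a, b, c = 1, 3, 7

# EFL-style: x1, x2, y are write-once "out" variables, so the three
# calls may finish in any order without changing the final result.
with ThreadPoolExecutor() as pool:
    fut_x1 = pool.submit(f, a)
    fut_y = pool.submit(f, b)
    fut_x2 = pool.submit(f, c)
    x = fut_x1.result() + fut_x2.result()
    y = fut_y.result()

print(x, y)  # deterministic: 80 30
```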
08:13
So, how do we implement the idea of EFL? With the EFL pre-compiler.
08:23
Let's see two views here. The view of the implementer, who writes the syntax and semantics of EFL: we used a tool called JavaCC, the Java Compiler Compiler.
08:40
JavaCC, according to that syntax and semantics, will generate a platform-specific EFL pre-compiler. So far we have two pre-compilers; we will talk about them in later slides.
09:06
The programmer's view: he writes EFL-embedded host language source code. That code is translated by the pre-compiler into parallelized host language code,
09:20
and then the host language runtime platform runs that parallelized code. Our approach is that if we have a specific pre-compiler of EFL, we can write with EFL in any language,
09:43
but our implementations so far are for Python. We implemented a pre-compiler for Python's multiprocessing Pool module.
10:02
What are the characteristics here? A Pool object is a collection of a fixed number of child processes, where the number of child processes defaults to the number of cores in the computer. The Pool mechanism serves as the scheduler of the parallel execution,
10:24
and this Python module has built-in map functionality; we will see later why that functionality is important. We modified the Pool module to allow an unlimited hierarchy of non-daemonic processes,
10:46
because the original Python Pool generates only daemonic processes, and that constrains the hierarchy, the nesting of parallelism. The pool-based scheduling is what lets us manage all the scheduling of the parallel execution.
11:15
The second implementation is an MPI version based on a module developed at the University of Montreal,
11:26
called DTM, which is part of a package of evolutionary algorithms they developed called DEAP. DTM is a Python module written using the mpi4py module.
11:46
It's a layer over mpi4py. DTM allows EFL implicit parallel programming at a similar level of abstraction to what we get from Python's multiprocessing module.
12:06
Even the syntax in DTM is very similar to the syntax of the multiprocessing module. DTM also has map functionality, and here the number of child processes defaults to the number of cores in the computer cluster on which we run MPI.
12:26
A scheduling mechanism is also built into DTM. Now we will focus on how we implement parallel design patterns in EFL.
12:41
We will talk about the implementation of the fork-join pattern and the constructs in EFL that allow it: the master-worker pattern with the for block; the map pattern, where we have three
13:02
alternatives, map-loop, loop, and for; the reduce pattern; and the filter pattern. We will also see a construct that may be very useful but is not connected to any specific pattern, the if block. First, the fork-join pattern; maybe some of you know the idea of fork-join.
13:33
Here we have a program with, until now, a sequential control flow; then there is a fork that generates n child tasks.
13:44
The parent task waits until all the child tasks are finished, then joins all their partial results, and then continues its sequential control flow.
14:09
The first EFL construct that lets us implement the idea of the fork-join pattern very easily is the assignment block.
14:23
In the assignment block, we have n assignments whose right-hand sides are all executed in parallel. When all the child tasks finish their job, all the variables receive their
14:47
values, and then the program continues after the end of the EFL block. Here we have two examples. In the first example, my_value1 equals 5 and my_value2 equals f(5).
15:07
We decided that if an assignment block contains this kind of assignment of a plain value, it will not generate a child task; that would be a waste of time.
15:21
This kind of assignment will be executed sequentially, like in a sequential program. Here, because we have a call to a function, this call is executed in parallel.
15:42
Here we have another example, calling two compute-intensive functions. In this case, both run in parallel, each generates a child task, and when both return, my_value1 and my_value2 receive their values, and then the program continues.
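An assignment block like this one can be emulated with concurrent.futures: the two right-hand sides run concurrently, and reading both results is the implicit join. The function names here are hypothetical stand-ins for the slide's compute-intensive calls, and a thread pool stands in for EFL's process pool:

```python
from concurrent.futures import ThreadPoolExecutor

def intensive1():                      # stand-in compute-intensive call
    return sum(i * i for i in range(10_000))

def intensive2():                      # stand-in compute-intensive call
    return sum(i * 3 for i in range(10_000))

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(intensive1)       # child task 1
    f2 = pool.submit(intensive2)       # child task 2
    my_value1 = f1.result()            # implicit join: wait for both
    my_value2 = f2.result()
```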
16:08
Another option for fork-join is the pif: if we have, like in C
16:21
or in Python, if, else if, etc., all the boolean expressions are executed in parallel, and the first one, in sequential order, that is true launches the body of that option.
16:43
But there may be a problem with the pif. If we have a case like this, and a is zero, it may provoke a divide-by-zero exception. The danger of the pif is that if the programmer is not aware
17:06
of it, one of the options, which are all executed in parallel, may provoke an exception. Next, the master-worker pattern. We implement the master-worker pattern with the for construct.
17:28
The for construct looks like a regular for loop in C, right? But every instance of the body of the for is executed in parallel.
17:41
Suppose we have m processors or m cores in the system, and n is the number of tasks that are generated. When n is greater than m, the scheduling built into the pool modules,
18:01
the Pool module of multiprocessing and the task manager of DTM, implicitly implements the master-worker pattern. At every moment we will have m processes running, and all the others are waiting to be executed.
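This n-tasks-on-m-workers behaviour can be sketched directly: a pool with m workers schedules at most m tasks at a time and queues the rest. The worker function below is a hypothetical stand-in, and a thread pool stands in for EFL's process pool:

```python
from concurrent.futures import ThreadPoolExecutor

def work(task_id):            # stand-in for one worker's job
    return task_id * task_id

n, m = 20, 4                  # n tasks, m workers

# The pool never runs more than m tasks at once; the remaining
# n - m tasks wait in the queue, which is the master-worker pattern.
with ThreadPoolExecutor(max_workers=m) as pool:
    results = list(pool.map(work, range(n)))
```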
18:24
If n equals m, the pattern is actually a fork-join pattern, which is also implemented by the for construct. The map pattern. The idea of map is that we have an input sequence of length n: maybe a list, maybe a tuple, maybe an array in other languages.
18:55
We have a function that is applied to all n elements of the
19:03
input sequence, and map generates an output sequence of exactly the same length, whose elements are the result of applying the function to the corresponding element of the input sequence.
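In plain Python the map pattern looks like this: apply a function elementwise, preserving length and order, so the pool-based version gives the same output as the builtin map. That equality is exactly the sequential-equals-parallel determinism EFL guarantees. The square function is an assumed example; a thread pool stands in for EFL's process pool:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):                 # assumed example function
    return x * x

seq_in = [1, 2, 3, 4, 5]

with ThreadPoolExecutor() as pool:
    # Executor.map preserves the input order in its output.
    seq_out = list(pool.map(square, seq_in))

# parallel result == sequential result
assert seq_out == list(map(square, seq_in))
```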
19:23
Here we have the syntax of the map-loop; it receives a function and the input sequence. Another construct, with a completely different kind of execution, that also implements the map pattern is the loop block.
19:47
In the loop block we have here a label that is like the name of a function, and here there is a recursive call to the label, and then recursively the loop block generates n instances of calling the CPU-intensive function according to the value of i.
20:14
Then also here we will have n processes, n tasks that are run in parallel, but their launching is like a recursive call.
20:35
Now we will see here how actually the EFL pre-compiler works.
20:43
Say we have a program that receives a list of numbers and returns the square root of every one of those numbers. The par_map function, which receives the input sequence, will generate an output sequence that is the result of the map-loop on seq_in
21:13
(it should be seq, not seq_in here) and map_func, and then the result is printed and the program ends.
21:30
Implicitly, you see, we have parallelized the n calls of map_func.
21:44
How does it look after translation with the pre-compiler? This example is with the multiprocessing pre-compiler. You see here we have par_map; here the original EFL block starts; then an object of our non-daemonic Pool module is created,
22:12
a multiprocessing Manager is created, a queue is created from the Manager, and then
22:24
the map_async method of the pool is called with map_func and the input sequence. After all the subtasks are launched, the get method waits until all the subtasks have ended, and
23:01
map_out receives the output sequence; then we close the pool, join all the subtasks that were created, and return map_out. You see, the programmer doesn't have to deal with all this complex multiprocessing code.
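The generated code described here follows the standard Pool API shape: map_async launches the subtasks, get blocks until they finish, then close and join tear the pool down. A sketch of that shape, using multiprocessing.pool.ThreadPool, which shares the Pool interface, so the snippet stays self-contained; the real pre-compiler targets the modified non-daemonic process pool:

```python
from multiprocessing.pool import ThreadPool

def map_func(x):                # stand-in for the user's function
    return x ** 0.5

seq_in = [1, 4, 9, 16]

pool = ThreadPool()                              # create the pool
async_result = pool.map_async(map_func, seq_in)  # launch all subtasks
map_out = async_result.get()    # block until every subtask has ended
pool.close()                    # no more tasks will be submitted
pool.join()                     # join all the child workers

print(map_out)  # [1.0, 2.0, 3.0, 4.0]
```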
23:28
He wrote it very simply, like a sequential program, right? Okay, the reduce pattern. The reduce pattern works like you see here on the slide: we
23:49
have an input sequence, and the function should be an associative function, like plus or multiply.
24:02
Every two elements are passed to the function, and we get a temporary sequence of the results of the first level,
24:26
and then every pair is passed to the function again, until we receive the reduced value, the result of the whole reduction.
24:43
For that we have a construct called log-loop. It receives the input sequence and the associative function, which needs to take two parameters to be able to do what you see in this picture.
25:07
Here we have the algorithm of the log-loop; in the questions, if you like, we can analyze the algorithm.
25:21
Now here is an example. We have a list of eight numbers, and with the add method of Python's int class, we call log-loop with L and the integer add.
25:46
And what happens at run time? One plus two is three, three plus four is seven, et cetera. Then in the next round, three plus seven is 10, 11 plus 15 is 26, and then we have the result, 36.
26:08
That is how the log-loop works. But we can use the add of the list class or of the string class, and then we can do reduce with any kind of values.
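The log-loop reduction described above can be written directly: combine adjacent pairs, then repeat on the halved sequence until one value remains, taking about log2(n) rounds for an associative function. This is a sequential sketch of the pairing logic only; in EFL, each round's pair combinations would run as parallel child tasks:

```python
import operator

def log_loop(seq, func):
    """Pairwise tree reduction; func must be associative."""
    level = list(seq)
    while len(level) > 1:
        # combine adjacent pairs (in EFL, these run in parallel)
        nxt = [func(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:        # an odd leftover carries over
            nxt.append(level[-1])
        level = nxt
    return level[0]

# The slide's example: [1..8] with integer addition.
# Round 1: 3, 7, 11, 15 -> Round 2: 10, 26 -> Round 3: 36
print(log_loop([1, 2, 3, 4, 5, 6, 7, 8], operator.add))  # 36
```

Because the function only needs to be associative, the same log_loop works with list or string concatenation, as the speaker notes.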
26:31
The filter pattern. The filter pattern is implemented using map. The map function essentially wraps a boolean function: every element for which the boolean function is true
27:00
remains the same element; for all the others, where the boolean function is not true, the map function returns None. In map_out, every place in the sequence where the boolean function was true
27:21
holds the original element, and all the others are None. Then, you see, after the EFL block we have a list comprehension that gets rid of all the elements in map_out that are None.
27:41
Then in seq_out we have only those elements for which the boolean function is true. In that way, we can implement the filter pattern using EFL.
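The filter-via-map trick in plain Python: the mapped function returns the element when the predicate holds and None otherwise, and a list comprehension afterwards drops the Nones. The predicate here is an assumed example; a thread pool stands in for EFL's process pool:

```python
from concurrent.futures import ThreadPoolExecutor

def keep_if_even(x):
    # boolean function wrapped as a map function: the element
    # survives if the predicate holds, otherwise None
    return x if x % 2 == 0 else None

seq_in = [1, 2, 3, 4, 5, 6]

with ThreadPoolExecutor() as pool:
    map_out = list(pool.map(keep_if_even, seq_in))

# after the parallel block: drop the None placeholders
seq_out = [x for x in map_out if x is not None]
print(seq_out)  # [2, 4, 6]
```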
28:01
And the if block is like a sequential if: in this case every boolean expression is evaluated sequentially, and for the first one that is true, the body is executed; but the whole EFL block is executed in parallel with the rest of the program.
28:33
Now we will see two EFL programming examples, the first using the assignment block and the if block.
28:41
You see here that in the first block, a and b are out variables for that block; they will hold the result of f(x) and the result of g(x). Then we leave that block, and in the next block we can use a and b as in variables.
29:07
In the second block we can read from a and from b, because there a and b are in variables, not out variables.
29:22
The second example also shows how we can very easily implement the nesting pattern. Nesting means we have n child tasks, and in every child task we also launch parallel tasks.
29:53
Here we have a 2D matrix and a vector. In the main function, we scan by rows with the for inside the EFL block.
30:08
Every row is passed to mult, which computes, also in parallel, the product of every element of the row with the corresponding element of the vector.
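This nested matrix-vector example can be sketched as an outer parallel loop over rows, with an inner parallel loop over the elementwise products of each row. A thread-based sketch (threads nest freely; with processes, this nesting is exactly what EFL's modified non-daemonic pool makes legal):

```python
from concurrent.futures import ThreadPoolExecutor

def mult(row, vec):
    # inner level: elementwise products of one row, in parallel
    with ThreadPoolExecutor() as inner:
        return sum(inner.map(lambda pair: pair[0] * pair[1],
                             zip(row, vec)))

matrix = [[1, 2], [3, 4]]
vector = [10, 20]

# outer level: one child task per row (master-worker over rows)
with ThreadPoolExecutor() as outer:
    result = list(outer.map(lambda r: mult(r, vector), matrix))

print(result)  # [50, 110]
```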
30:29
So we see a nesting of instances of the master-worker pattern: in main, the for
30:44
loop iterates over the rows of the matrix, and the second, nested one, over the items of the row. Conclusions. Two EFL pre-compilers were implemented. Safe and efficient parallelism has been made possible
31:04
by the EFL framework, and parallel design patterns have been shown to be implementable using EFL. In the electronics department of our college, two students built this cluster of 64 Raspberry Pi processors, and
31:23
after we go back home, we will try to test EFL scalability with that 64 Raspberry Pi cluster, with every one of them running the MPI version of EFL.
31:45
Further work: we are developing an EFL curriculum to teach how to implement serial and parallel, flexible algorithms with EFL. We also want to implement concurrent data structures using EFL.
32:03
We also want to answer the question: are purely functional data structures EFL-compatible? And in a joint project with Professor Miroslav Popovic from Serbia, we are rewriting an
32:21
algorithm for predicting protein structure, called DeepSam, using EFL and software transactional memory. And then the invitation: like in the lightning talks here, we invite you to join us, to collaborate with us, to redesign EFL with Python
32:45
-like syntax and make the whole crowd here at the Python conference happy with an EFL version that is completely, purely Python.
33:00
We also want to implement new versions of the EFL pre-compilers for other parallel programming platforms and other host programming languages, and to implement the whole basic kernel of parallel design patterns using EFL.
33:25
And we also have to answer the question: are there patterns that cannot be implemented within the EFL framework? Maybe we have to research that. Finally, our laboratory is the FlexComp Lab.
33:51
Here you have the website, you are invited to enter the website and download the installation kit of EFL.
34:02
The faculty in the laboratory are me; David Dayan, who is here in the audience; Dr. Rafael Yajeskel; and Dr. Shimon Mizrahi from the electronics department, whose students built that Raspberry Pi cluster. And those are our students; some of them have already graduated.
34:23
Elad, Moshen, Bosni, Levi, and Aman are finishing the development of the MPI version of the pre-compiler. And Miroslav Popovic from Novi Sad University is our European research partner.
34:47
And here you have a picture of our campus, the Lev Academic Center, which is also called the Jerusalem College of Technology. Thank you very much. Questions?
35:08
Thank you. Yes, we have time for a few questions. No questions? Nobody wants to collaborate with us in our project?
35:33
It's okay. Yes?
35:41
So I was wondering, I know you use C, but it seems like a hardware description language might be a better syntax for the EFL parts. Do you know of any use of like more hardware description language or did you just use C because that's more familiar to computer programmers?
36:04
We developed EFL especially because we ourselves felt that it was not easy to program directly, explicitly,
36:26
with the parallel tools in Python or in other languages. So designing a layer over the parallel tools of a specific language will make it easier for the programmer.
36:53
And we can ask whether the code generated by the pre-compiler will be as efficient as if we wrote it by hand.
37:14
That is the same question as at the end of the 1960s, when people wrote operating systems in assembly:
37:28
Ritchie and Kernighan at Bell Laboratories developed the C language and argued that we can implement an operating system in a high-level language.
37:42
And this is what we have today. Today nobody writes an operating system in assembly. And also here we would like to allow everybody to write parallel code very easily and then
38:08
utilize all the computing power that we now have in multi-core machines and clusters of computers. Did I answer your question? Okay.
38:25
Any other questions? Yes. I didn't quite understand whether there is already some sort of Python module in some beta
38:41
version, or are you going to start implementing it from scratch; what's the current state? We can consider that idea, but our idea, at least for now, is to have a language that is embedded in a host language.
39:07
And it uses all the platform's possibilities. Maybe in the future we will think about transforming EFL into a Python module.
39:26
That is what you are asking, right? That EFL would be a module in Python. We may think about that, but as I said at the beginning, our
39:40
idea is to have a universal solution for parallel programming anywhere, in any language. But we can consider the possibility that EFL will be a module for Python. So, we have time for one more question. If there are no more questions, please thank our speaker again.
40:18
If you want to talk with me after the lecture, please, I am here.