Image processing with scikit-image and Dash
Formal Metadata
Title: Image processing with scikit-image and Dash
Title of Series: EuroPython 2019 (4 / 118)
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/44868 (DOI)
Language: English
Transcript: English (auto-generated)
00:03
So I'm going to talk about image processing and about two tools, Scikit-image and Dash, which I hope can help you to build very modular pipelines and applications for your own image processing needs. So I'm a member of the Scikit-image Core Dev team,
00:25
and the work on Dash which I'm gonna present has been sponsored by Plotly, a Canadian company which I'm gonna join in the fall. So as you may know, in the PyData community,
00:41
images are a very common form of data, and you have needs for image processing in various domains of science and industry. For example, in biology where you have microscopy images or for satellite imaging, quality inspection,
01:02
but also for autonomous cars, for example, where you want automatic segmentation and so on. So as you see, the needs are very different. They span a very large number of fields, and in all these fields, you have various tools which exist.
01:22
Some tools are libraries, some tools are user interfaces, and all these tools are quite different. Some of them are written in Python, and the tools I'm gonna present today are not meant to cater for one very specific need,
01:42
but rather to help you build your own image processing application when you have a specific need. So let me start with Scikit-image. So just out of curiosity, how many people here in the audience are using Scikit-image? Can you please raise your hand? I am.
02:01
Okay, so some people. So please come and talk to me after the talk to tell me about all your bugs with Scikit-image. I will love that. So Scikit-image is a generic image processing library. When I say generic, it's not for one application in particular, but its mission is more to process scientific images
02:23
rather than, for example, Instagram filters. It's really more for scientific needs. It's open source. All the content I'm gonna cover today will be open source, BSD, or MIT licensed. And it's for Python, obviously, here at EuroPython,
02:42
using NumPy data arrays as images. Compared to other image processing tools, one specificity is that Scikit-image works well with 2D, but also with 3D images, sometimes with ND images. Like in science, you have MRI, CT, a lot of modalities where you have 3D images.
03:05
And last but not least, Scikit-image tries to have consistent and simple API and also good documentation, gentle learning curve, so that when you're getting started with image processing,
03:21
you can get started quite smoothly and learn by yourself. So I'm gonna cover this a little bit. In this slide, here's a short overview about what you can do with Scikit-image. So it's image processing for science, basically manipulations of images in order to transform them for other purposes,
03:43
like when you want to filter them. Here you have a denoising example, or when you want to extract information, like feature extraction for further classification. When you want to extract objects, this is called segmentation. Or after some processing, when you want to measure
04:03
the size of objects, the shape, that is to transform your images into numbers out of which you can do science. So this is what Scikit-image does, and this is what Scikit-image is not. It's not a deep learning library, I'm afraid. You have really great deep learning libraries
04:22
with image processing capabilities, like Keras, for example, has some nice image processing examples. So the reason why there is no deep learning in Scikit-image is mostly because of architecture and maintenance choices. We choose to be a very maintainable library,
04:42
very well integrated into the NumPy, SciPy ecosystem. That is, all the code is in Python or Cython. So there is no GPU-specific code, for example. However, Scikit-image interacts well with machine learning and deep learning,
05:01
both for the pre-processing and for the post-processing parts where you can do normalization, data augmentation, or after deep learning, you can improve your segmentation, do some cleaning of instances, and so on and so forth.
05:20
Also, one thing that we do not want to do in Scikit-image is to have a lot of very bleeding edge algorithms, like the one that you just published during your PhD six months ago. It might be a really cool algorithm, but if we do this, we'll end up with like 100 denoising filters,
05:41
and then how will our users find their way through a library? We want to have a short API so that it's easier to find the functions, and therefore we let time do the Darwinian selection and choose the algorithms which we include. Scikit-image is a full-fledged component
06:02
of the scientific Python ecosystem, and as such, it works with NumPy arrays, which are the images we process. So it interacts also really well with, so this pointer does not work.
06:20
Yeah, it's very weak. It interacts also really well with Scikit-learn because you can pass NumPy arrays from Scikit-image to Scikit-learn and vice versa, and also it interacts also really well with the visualization libraries of this ecosystem because once again, it's this NumPy array object,
06:42
which is kind of the lingua franca of the SciPy ecosystem, which we exchange and pass between all these modules. So here is a very short glimpse into the kind of code that you would write with Scikit-image. I'm not going to make a big demo.
07:02
You can find a lot of tutorials on YouTube, for example, but what you can see is that you first import submodules, so the functions are typically inside submodules. For example, the io submodule for input/output, for reading an image from a file.
07:20
This image will be a NumPy array. You see here I'm asking for its shape, and then the syntax, the API is that you have functions like this thresholding function, which take as input NumPy arrays and they return either numbers or filtered images,
07:42
which are once again NumPy arrays, like this function, for example, which labels the connected components of a binary image. The input is a NumPy array, and here the output is a NumPy array, as you can see here.
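Here is a minimal sketch of this kind of code, assuming a 2-D grayscale image; the file name and the specific thresholding function are illustrative, not necessarily the ones shown on the slide.

```python
from skimage import io, filters, measure

# Read an image from a file; the result is a NumPy array.
image = io.imread("coins.png")
print(image.shape)

# Functions take NumPy arrays and return numbers or arrays.
threshold = filters.threshold_otsu(image)   # a single number
binary = image > threshold                  # a binary (boolean) image
labels = measure.label(binary)              # connected components, also a NumPy array
```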
08:01
So the NumPy array has actually all that we need for image processing, because pixels are just array elements. And so our API is really only functions working on images and returning images. The first argument is always a NumPy array,
08:21
and then we have optional parameters, which are keyword arguments, if you want to tune the behavior of your function; we try to have sensible default values. Also, here I have an example with a 2D image in this block of code, but it would work with exactly the same syntax
08:42
if you were to have a 3D array, because we have exactly the same syntax. So pixels are array elements, and it allows us to use all the machinery of NumPy. So here it's just pixel indexing, changing the values of pixels,
09:01
accessing a channel of an RGB image, which is a three-dimensional array with three channels. But you can also do masking, fancy indexing, and so on.
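A minimal sketch of this plain NumPy machinery on images; the sample images and values are illustrative.

```python
import numpy as np
from skimage import data

rgb = data.astronaut()          # an RGB image: shape (512, 512, 3)
red_channel = rgb[..., 0]       # accessing one channel
rgb[0, 0] = [255, 0, 0]         # changing the value of a single pixel

gray = data.camera()            # a 2-D grayscale image
mask = gray > 128               # boolean mask from a comparison
gray[mask] = 255                # masking / fancy indexing
```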
09:21
We have just submodules, and these submodules have functions taking a NumPy array as input, 2D, 3D, sometimes ND, and the output is a number or an array. For the API, through time, we have converged to a quite consistent API.
09:43
If you started using scikit-image like five years ago, maybe it was a bit more chaotic, but now, for example, all the denoising filters start with denoise so that you can try to discover new functions, new filters, just by browsing the API
10:00
and looking at the docstrings of these functions. I will show also the gallery, which is another way of exploring scikit-image. We try to be consistent also for the variable names and also inside the code for, for example, how we name indices, something as stupid as,
10:24
are you using x, y, z, or are you using plane, row, column? We have heated discussions on GitHub to try to find some consistency for this. Here is a short example to show you that scikit-image and scikit-learn interact really well.
10:42
It's an image that I acquired for my research with my team. So it's a grain of gypsum, the material plasterboard is made of, and part of it has been dehydrated, the part which is textured,
11:01
and part of it is still intact, and we wanted to do automatic segmentation of this. For this, we extracted features using the feature submodule of scikit-image in these two regions, and then we fed these features to a random forest classifier from scikit-learn,
11:21
which gave us a first segmentation, but it was not really good; it had a lot of mistakes, so we cleaned this segmentation using traditional image processing, like Gaussian filtering, thresholding, and mathematical morphology. This shows you really quickly the interplay between machine learning and image processing; a minimal sketch of such a pipeline is shown below.
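A minimal sketch of such a pipeline, using a bundled sample image, a single hand-rolled feature, and illustrative labels rather than the actual data and features from the talk.

```python
import numpy as np
from skimage import data, filters, morphology
from sklearn.ensemble import RandomForestClassifier

image = data.camera()

# One per-pixel feature: the local gradient magnitude (illustrative choice).
features = filters.sobel(image).reshape(-1, 1)

# A few hand-labelled pixels for the two regions (indices are illustrative).
train_idx = np.array([0, 1, 2, 200_000, 200_001, 200_002])
train_labels = np.array([0, 0, 0, 1, 1, 1])

clf = RandomForestClassifier(n_estimators=50)
clf.fit(features[train_idx], train_labels)

# Predict a label for every pixel, then clean up the result with
# traditional image processing (mathematical morphology).
segmentation = clf.predict(features).reshape(image.shape).astype(bool)
cleaned = morphology.remove_small_objects(segmentation, min_size=64)
```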
11:41
A few facts about scikit-image. It's at release 0.15. We have more than 200 contributors, but between five and ten maintainers,
12:01
so we really try to welcome new contributors, and I would be happy to talk with you if you might be interested in contributing to scikit-image, or reviewing pull requests, and we always need a lot of enthusiastic people. Our community is quite large.
12:24
We have 20,000 unique visitors per month on the scikit-image website, scikit-image.org. That's how we estimate the number of active users, and if you go to the scikit-image.org website, you will find one of our most beloved features,
12:43
which is the gallery of examples, which allows you to browse through thumbnails, showcasing image processing applications, and you can select one and open an example. I would like to give a brief shout out
13:01
to the underlying package of this gallery. It's called Sphinx Gallery. If you're building your documentation with Sphinx, you can just import it as a Sphinx extension, and get such a gallery just from Python scripts. Here is one rendered example with the code,
13:24
the image generated by the code, some explanations. The gallery of example is really the part of the scikit-image website, which is visited the most because our users will come to the gallery and say, I want to measure the size of images,
13:46
the size of objects in an image, and they will do a Ctrl+F on the gallery, or something like that, and open an example. We also have "see also" links, sometimes, between examples. Sphinx Gallery also gives you nice features,
14:02
like at the end of the docstring in the API documentation, it will create mini galleries, like this one with all the examples using a specific function. So this comes for free when you just import Sphinx Gallery and also in the examples you have here,
14:25
you have links to the API documentation. So it's a lot of redundancy, cross-linking between the different parts of the documentation, and it helps your users not to be lost in some dead end somewhere in the documentation.
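As a sketch of what enabling Sphinx Gallery looks like, here is a minimal excerpt for a Sphinx conf.py; the directory names are illustrative, not taken from the scikit-image repository.

```python
# conf.py (Sphinx configuration) -- minimal sketch
extensions = [
    "sphinx_gallery.gen_gallery",
]

sphinx_gallery_conf = {
    "examples_dirs": "examples",      # where the example Python scripts live
    "gallery_dirs": "auto_examples",  # where the rendered gallery pages are written
}
```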
14:41
So I really recommend giving a try to this Sphinx Gallery package. So let's say, for example, that you want to denoise an image, like this is an image that I acquired during one of my experiments, and it was very noisy.
15:02
And so how can I denoise it? When I go to the gallery, there is an example showing how to denoise with several different filters. So one shortcoming of our gallery is that, at the moment, it shows mostly pictures, like cat pictures, car pictures, pictures of people,
15:23
and we miss examples with real data sets, but we're working on this. And if you have good open data to contribute, we might be interested. So there are explanations about all these different filters, and here you can see,
15:42
so that was on my image this time, that with just one line of code, I can try one filter, tuning the parameters with keyword arguments, and you can see that, from this noisy image, for example, when you use a quite specific filter,
16:00
which is this one, the total variation filter, the histogram gets really peaked, so you can start having good results with very generic filters, like the median filter, here in green, but it gets much better when you try the more advanced one, of course, sometimes at the cost of longer execution time.
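A minimal sketch of trying such filters, one line each; the sample image, noise level and weight are illustrative. In scikit-image the denoising filters live in skimage.restoration and their names start with denoise_.

```python
import numpy as np
from scipy import ndimage
from skimage import data, img_as_float, restoration

noisy = img_as_float(data.camera())
noisy = np.clip(noisy + 0.1 * np.random.standard_normal(noisy.shape), 0, 1)

median = ndimage.median_filter(noisy, size=3)             # very generic filter
tv = restoration.denoise_tv_chambolle(noisy, weight=0.1)  # total variation
wavelet = restoration.denoise_wavelet(noisy)              # wavelet denoising
```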
16:24
Something which we want to improve in the future is the speed of execution, and the parallelization, because some other packages use GPUs, for example, but we use only NumPy code, once again,
16:44
for maintainability, so in the future, we want to experiment with NumPy and Python, for example. But at the moment, what we do is chunking into blocks; a minimal sketch of block-wise processing is shown below.
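A minimal sketch of that block-wise approach: skimage.util.apply_parallel chunks the image and runs the function on each block (via dask). The chunk size, depth and keywords here are illustrative, and the exact signature may vary between versions.

```python
from skimage import data, filters
from skimage.util import apply_parallel

image = data.camera()

# Apply a filter block by block; `depth` adds overlap between blocks
# to avoid artefacts at block edges.
smoothed = apply_parallel(filters.gaussian, image,
                          chunks=(128, 128), depth=8,
                          extra_keywords={"sigma": 2})
```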
17:03
So I would like now to move on to the part about interacting with images, because why do we want fast execution? It's because sometimes when you do image processing, you don't really know what the workflow will be, what the pipeline will be, and then you need to tinker a bit with your images, you need to try different parameters,
17:21
and for this, for example, you can use widgets. Here, for example, I have used the ipywidgets package and its interact decorator, and if I want to choose the best Gaussian filter width,
17:42
I can just use this slider and select my best parameter, so you gain a lot of time by having this kind of interactivity, and this you get with the widgets; a minimal sketch of such a slider is shown below.
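A minimal sketch of such a slider with the ipywidgets interact decorator, assuming a Jupyter environment; the image and the sigma range are illustrative.

```python
import matplotlib.pyplot as plt
from ipywidgets import interact
from skimage import data, filters

image = data.camera()

@interact(sigma=(0.0, 10.0, 0.5))
def smooth(sigma=1.0):
    # Re-runs on every slider move, so you can pick the best width visually.
    plt.imshow(filters.gaussian(image, sigma=sigma), cmap="gray")
    plt.axis("off")
```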
18:01
But sometimes you need another kind of interaction with your images: you don't want to change the parameters generating your image, but you want to draw directly on images, for example, to have markers for a segmentation, to identify an object to be removed from the background, to delineate roads on a satellite image, or to have bounding boxes for a training set
18:26
for further classification, and for this, we have developed this Dash Canvas package. I will give you one example, which is this tool,
18:41
which is integrated into web applications thanks to the Dash web application framework and so you have the different components of the web application here, I can increase the size of the brush, I can change the color, and so on,
19:02
and then I can perform a segmentation based on my annotations, okay? And so on this annotation tool, I have other features like rectangle, lines,
19:22
undo, and so on and so forth. So what is this tool? First of all, the web application framework here is called Dash, it's developed by Plotly, and the tagline of Dash is no JavaScript,
19:42
so it's a web application framework in which you write only in Python, and so all the components I showed you before are Python code, I will give a few examples, and it can be quite heavily customized
20:05
so that you can really tune the layout. So Dash uses a Flask server to run the applications and also all the components are based on the React JavaScript framework, so there is JavaScript behind the scenes,
20:20
but the principle is that you write only Python, and I have a few examples of Dash code. So where is that? No, it's here. So here I'm using the JupyterLab extension for Dash,
20:41
so you see that I write some Python code here, and when I execute it, I have my reactive graph, I have these radio item buttons here, and each of these elements is defined in the layout here,
21:03
okay, and when I want to add some interaction between these elements, I can do this using the callback decorator of the app, of the Dash app, and when I do this, like when I change, for example, the value inside this text box,
21:23
then this text paragraph is also changed, so this is defined here in this callback mechanism, and if I go back here to my app, for example,
21:43
then there is this, in the dev tools of Dash, you can see the graph of callbacks, which is a bit more complicated, because I have more elements, but it's exactly the same principle as my little examples, okay?
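A minimal sketch of this layout-plus-callback mechanism, written in the Dash 1.x style current at the time of the talk; the component ids are illustrative.

```python
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)

# The layout declares the elements of the page.
app.layout = html.Div([
    dcc.Input(id="my-input", value="hello", type="text"),
    html.P(id="my-output"),
])

# The callback decorator wires an input property to an output property.
@app.callback(Output("my-output", "children"),
              [Input("my-input", "value")])
def update_text(value):
    # Called whenever the text box changes; the return value becomes
    # the content of the paragraph.
    return "You typed: {}".format(value)

if __name__ == "__main__":
    app.run_server(debug=True)
```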
22:01
So, which components can you use in the Dash apps? You have the normal HTML elements, which are provided by the Dash HTML components, the reactive components are found in the Dash core components, for example, the sliders, the dropdowns, the radio items,
22:21
which I just quickly showed. You also have reactive charts, Plotly charts but not only; so if I go back here, for example, I have one graph, and just clicking will populate
22:45
the hover data, I can select also, which will change this part, and it's quite classical to have figures which can be changed for user interaction, but here, it's a user interaction with the figures
23:02
which modifies other components, which is a bit more tricky to do. You also have interactive data tables, which you can include, specialized libraries for specific components, like for engineering or biology,
23:20
and basically, every time you have a React JavaScript library, you can wrap it with Dash, and this is what I did for this Dash Canvas package. There was a very neat JavaScript package called ReactSketch, and I just wrote a wrapper around it,
23:41
adding these little buttons, and this is how it was quite easy to create the Dash Canvas package. So, Dash Canvas provides you with two things. One of them is the DashCanvas object, which is a modular tool for annotations and selections,
24:01
so you see here some samples of such annotations, and also you have functions to transform these annotations, that is, to make, for example, NumPy arrays, masks, out of these annotations, which will then be processed by scikit-image; a minimal sketch of that idea is shown below.
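A heavily hedged sketch of that idea: the json_data property and the parse_jsonstring helper below are recalled from the dash-canvas package and should be treated as assumptions, as should the component ids.

```python
import dash
import dash_html_components as html
from dash.dependencies import Input, Output
from dash_canvas import DashCanvas
from dash_canvas.utils import parse_jsonstring
from skimage import morphology

canvas_size = 300
app = dash.Dash(__name__)
app.layout = html.Div([
    DashCanvas(id="canvas", width=canvas_size),
    html.Div(id="result"),
])

@app.callback(Output("result", "children"),
              [Input("canvas", "json_data")])
def annotations_to_mask(json_data):
    if not json_data:
        return "Draw an annotation"
    # The JSON string of brush strokes becomes a boolean NumPy mask...
    mask = parse_jsonstring(json_data, (canvas_size, canvas_size))
    # ...which scikit-image can then process further.
    mask = morphology.binary_dilation(mask)
    return "Annotated pixels: {}".format(int(mask.sum()))

if __name__ == "__main__":
    app.run_server(debug=True)
```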
24:21
Dash Canvas depends on Scikit-image. So, for example, this is how I could use these annotations to do the segmentation of these objects in the demo. If you're interested, there is a gallery of examples on dashcanvas.plotly.host.
24:42
I can show a few examples of this, so here is the gallery. You have one example with just bounding boxes and then populating a table, a numerical table,
25:01
like when you want to build a training set for machine learning. There is one example in which you want to remove the background from just one person. And then, what you do, okay, so since it's not perfect,
25:30
you can just improve it. That's really the benefits of interactivity and so on and so forth. So, I think I'm running out of time, so I will wrap up.
25:44
So, this was a quick introduction to Dash Canvas, which is quite a new project. It started at the beginning of this year. The roadmap which we have is to improve the interaction with images, having, for example, annotations
26:04
which can be loaded from a given geometry from a file and not only from the user drawing the annotations, also annotations triggering directly callbacks without having to press a button. And I would be very interested also in handling 3D images
26:20
and time series, for example, for segmentation of objects in 3D, like what you have in the medical sciences. Adding more examples for the gallery as well. And since this interactive component is based on JavaScript, it could also be useful for other
26:41
packages, like some libraries using widgets and so on. So, we can talk about it if you're interested. So, thank you very much. Feedback is very welcome on these two tools, scikit-image, Dash and Dash Canvas, and please be in touch. Thank you.
27:09
Any questions? Feel free to go to the microphone, please, thank you. Hi, I think this looks very interesting. When you added a little interaction from this input field
27:22
and showing the text, when you changed the input field, does it actually go through the server and back to JavaScript, or does it all happen on the client? It goes through the server, I think. Okay. Let me, yes, yeah. Yes, I see, thanks.
27:40
You don't have computations done like on your local machine, but... So, I didn't speak about deployment; the app which I was running with the segmentation of the cells, actually its server was a local server on my machine. This you can do.
28:00
You can also add the apps to an existing Flask application, and you can also deploy using Gunicorn, for example, on Heroku; a minimal sketch of that setup is shown below.
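A minimal sketch of that setup; the module and variable names are illustrative. The Dash app exposes its underlying Flask server, which Gunicorn can serve in production.

```python
# app.py -- minimal sketch of a deployable Dash app
import dash
import dash_html_components as html

app = dash.Dash(__name__)
server = app.server            # the underlying Flask instance, used by Gunicorn
app.layout = html.Div("Hello from Dash")

if __name__ == "__main__":
    # Local development server; in production run e.g.:
    #   gunicorn app:server
    app.run_server(debug=True)
```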
28:20
So, this is the only commercial part: Plotly also commercializes deployment solutions for Dash applications. That's the business model around Dash, which is otherwise completely open source, yeah. Hey, thanks. Can you tell us more about the medical images?
28:42
What are the challenges that scikit-image has regarding this type of images? The question is about medical images. Yeah, exactly. So, for medical images, we have identified several challenges.
29:02
One of them is to add more examples using real life-science datasets, because sometimes I go to conferences and a biology person will tell me, oh, I never thought that scikit-image is for me,
29:21
because I never saw a cell image on the gallery, for example. So, this is one thing which we want to do. Also, a lot of 3D images are quite large datasets, like acquired automatically with, I don't know, light-sheet microscopy, CT, and so on.
29:42
And for this, improving the speed of execution through automatic parallelization is really something which we want to work on. And also, the Dash Canvas part, it's not scikit-image.
30:01
There are people in common between the two teams, but it's something which I see on top of scikit-image, really adding some user interaction to play with images, to annotate them. So, I also see this as something which can be useful to the life-science community, because you have a lot of people using ImageJ
30:21
to do measurements or to do just manual segmentation. And this you could do with scikit-image and Dash Canvas as well. Okay, thank you, time is over. So, now five minutes, and we start again at 35.