Client-side versus server-side geoprocessing: Benchmarking the performance of web browsers processing geospatial data using common GIS operations.

Video available in the TIB AV-Portal.

Formal Metadata

Title: Client-side versus server-side geoprocessing: Benchmarking the performance of web browsers processing geospatial data using common GIS operations
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Publisher: Open Source Geospatial Foundation (OSGeo)
Production Place: Portland, Oregon, United States of America

Content Metadata

Abstract

Are web browsers ready to handle a larger portion of the processing load in our GIS applications? Web-based GIS and mapping applications are traditionally based on a client-server model, where most of the data processing work is placed on the server. This study examines what happens when that processing load is shifted to the client, using JavaScript to process geospatial data with GIS operations directly in the browser. The times needed to complete common GIS tasks using the JavaScript library JSTS Topology Suite were benchmarked in popular web browsers, including Chrome, Firefox, Internet Explorer, Opera, and Safari. The GIS operations buffer, union, and Voronoi diagram were tested with a suite of points, lines, and polygons ranging in size from 10 up to 100,000 vertices. The testing platforms included Windows, Mac, and Linux desktops and laptops. The same geoprocessing tests were conducted on a cloud-based Linux server using the Java library JTS Topology Suite as a performance comparison for server-side processing applications. The various testing configurations were then analyzed to see how browsers stack up against the performance of traditional client-server applications.
Keywords: Benchmark, JSTS, JTS, JavaScript, Java, browser, geoprocessing
Transcript

Alright, let's get started. Thank you all for coming; I know it's late in the day, so I appreciate everyone attending. My name is Aaron Hamilton. I just graduated from the University of Wisconsin-Madison, where I was studying GIS and cartography, and this is the research I did for my master's thesis: client-side versus server-side geoprocessing. I became interested in this topic as a new developer, noticing a trend away from the traditional client-server model on the web, where a client sends a request, the server does some work, and the results are returned to the client. Instead, heavier reliance is being placed on the browser, particularly using JavaScript to do the processing work, and part of that shift is driven by improvements in the browsers' JavaScript engines.
All of you are probably familiar with these JavaScript mapping libraries; you've seen talks on all of them. These are primarily used for the visual display of data. However, there is now a trend toward doing heavier GIS analysis tasks in the browser, demonstrated by the appearance of JavaScript geoprocessing libraries. JSTS I'll talk about more during this talk, and some others deserve mention as well: Turf, which, if you saw the earlier talk, is made for Node and is a lightweight alternative to JSTS; shapely, a port of the Python library Shapely; and jQuery Geo, which is more for display and mapping but also contains some GIS analysis tasks.
Based on the rise of these processing libraries, I had some research questions I wanted to answer. First, I wanted to know how the various browsers performed these tasks, so I wanted to test geoprocessing in all the major browsers. Next, I wanted to test various operating systems, memory sizes, and processors to see how the client machine might change the browsers' results. Then I wanted to compare the same processing tasks on a server, and see what happened when the server was removed from the picture. Finally, I wanted to know whether these processing tasks completed in an acceptable time range for incorporation into web applications.
So in order to test this, I needed to build a custom application. It needed to use common geoprocessing operations that you might find in any GIS software; it needed a large suite of data, with sizes ranging from small to fairly large; it needed to run in all the major browsers, tested on different operating systems and processors; and it also needed to run the same processing tasks on a server. To meet these requirements, I needed a geoprocessing library that had a client-side port. For that I chose the JTS Topology Suite, which I'm sure many of you are familiar with: it's the processing library behind GeoServer, OpenJUMP, and uDig. It also has a JavaScript port, the JSTS Topology Suite, which was originally developed for incorporation into OpenLayers (version 2.11, I think) but is now a stand-alone processing library.
The geoprocessing operations I chose were based on some research into the core functionality of a GIS. Those were proximity analysis (buffer), overlay analysis (union), and triangulation (here, Voronoi diagrams).
Creating a suite of test data was actually fairly challenging. I needed a large set of data that all came from the same source, and data like that doesn't really occur naturally, so I had to create it. I took data from Miller County GIS: points, buildings, and road centerlines. The data format I used for this study was well-known text (WKT), because both libraries can parse it and it is easy to store. I then used Python scripts to create, from these source files, data sizes ranging from 10 vertices up to 100,000 vertices, which is roughly 444 bytes up to about 20 megabytes in size. To give an idea of how those sizes compare to typical data you might use, I compared them to data from Natural Earth, which you can download at large, medium, and small scales. The small-scale, highly generalized data is roughly equivalent to my 10 to 10,000 vertex range; the medium-scale data, a little more complex, is roughly equivalent to about 10,000 to 50,000 vertices; and the large-scale data corresponds to roughly 50,000 vertices and up.
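The talk's test data was generated with Python scripts, but the same idea can be sketched in JavaScript. This is a minimal, illustrative generator (the function name and geometry are my own, not from the study): it builds a closed WKT polygon ring with a given vertex count, so you can see how the byte size grows with the number of vertices.

```javascript
// Sketch: generate a synthetic WKT polygon with n vertices, similar in
// spirit to the Python scripts the talk describes (names are illustrative).
function makeWktPolygon(n, radius = 100) {
  const pts = [];
  for (let i = 0; i < n; i++) {
    const a = (2 * Math.PI * i) / n;
    pts.push(`${(radius * Math.cos(a)).toFixed(2)} ${(radius * Math.sin(a)).toFixed(2)}`);
  }
  pts.push(pts[0]); // WKT rings must be closed: first point repeated last
  return `POLYGON ((${pts.join(', ')}))`;
}

const wkt = makeWktPolygon(10);
console.log(wkt.slice(0, 8));        // "POLYGON "
console.log(Buffer.byteLength(wkt)); // byte size grows with vertex count
```

A geometry like this can then be fed to a WKT parser (JSTS on the client, JTS on the server) for buffer, union, or Voronoi tests.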
On the client side, my application of course used JSTS. I also used microAjax for communication with the server, and because I needed only one process to run at a time, I used async.js to run the tests in series. Then, to measure bandwidth and latency, I used the Yahoo library Boomerang.
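The one-test-at-a-time constraint was handled with async.js in the original application; a minimal sketch of the same idea using modern async/await (function names here are illustrative, not from the original code) looks like this:

```javascript
// Sketch: run benchmark tasks strictly one at a time, as async.js's series
// helpers do. Each task is a function returning a promise.
async function runInSeries(tasks) {
  const results = [];
  for (const task of tasks) {
    results.push(await task()); // next task starts only after this one resolves
  }
  return results;
}

const order = [];
const tasks = [1, 2, 3].map(n => async () => { order.push(n); return n * 10; });
runInSeries(tasks).then(r => console.log(r, order)); // [10,20,30] then [1,2,3]
```

Serial execution matters for benchmarking: overlapping tests would contend for the same CPU and skew the measured times.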
This is a diagram of the flow of my testing application. I don't know if the blue color shows up, but everything in blue is a step that I measured and totaled together as the processing time. It starts with the client test, which makes a request to the server to retrieve the data; the data is returned and parsed, the process is run, and the output is parsed.
On the server side, I used the Amazon Elastic Compute Cloud (EC2) platform with a Linux server. There I of course used JTS, along with a SQL database for my WKT data, and I used Jersey (JAX-RS) to create custom RESTful web services to communicate with the client.
The server had a similar flow to the client: starting the test, retrieving the data, parsing it, running the process, parsing the output, and measuring the time it took to return the WKT to the client.

I tested all the major browsers: Chrome, Firefox, Internet Explorer, Opera, and Safari. Listed here are the versions I used, along with their JavaScript engines.
I also tested on Windows, Linux, and Apple computers, both desktops and laptops. Unfortunately, on a grad-school budget I was limited to what was available to me, so these ranged in specs and age. To get a better idea of how they compared, I ran benchmarking tests on them; you'll see those later in the results.
Of course, a faster processing time in milliseconds equals better performance. However, to judge how the browsers themselves were handling the data, not just how they compared to the server, I needed a different metric for what counted as good performance. I couldn't rely on the browsers' unresponsive-script timeouts, because those vary so widely: some pop up after, say, 10 seconds, and some browsers just crash spectacularly, so it isn't consistent. Instead, I decided to use common web usability metrics: after 1 second, the user will notice the lag, and after 10 seconds, the user is likely to abandon the task if nothing has happened. I looked at my results in terms of those times as well.
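Those two usability thresholds are easy to encode. This is a small illustrative helper (the names are mine, not from the study) that times a task and labels the result against the 1-second and 10-second cutoffs the talk uses:

```javascript
// Classify an elapsed time against common web usability thresholds:
// under 1 s the user doesn't notice, under 10 s it's noticeable lag,
// beyond 10 s the user is likely to abandon the task.
function classifyLatency(ms) {
  if (ms < 1000) return 'unnoticed';
  if (ms < 10000) return 'noticeable';
  return 'abandoned';
}

function timeTask(fn) {
  const start = Date.now(); // performance.now() in browsers is finer-grained
  fn();
  const elapsed = Date.now() - start;
  return { elapsed, rating: classifyLatency(elapsed) };
}

console.log(classifyLatency(250));   // "unnoticed"
console.log(classifyLatency(4000));  // "noticeable"
console.log(classifyLatency(12000)); // "abandoned"
```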
So, my first question: how do the various web browsers compare in geoprocessing performance? Let me walk through this graph a little. We have data size on the x-axis and processing time in milliseconds on the y-axis. You probably can't see the differences at the small end, so I'll just point out some of the more interesting aspects. Internet Explorer and Safari were the slowest overall: at about 9,000 vertices they were already 30 seconds slower than all the other browsers. Firefox was the only browser to process data sizes larger than 30,000 vertices without crashing. And one thing I found interesting is that Opera and Chrome are both built on the V8 engine, yet they had slightly different results: Opera was about 1 second faster at 4,000 vertices and 20 seconds faster at 20,000 vertices.
computers with different operating system processes the memory size compare and your processing performance like I said and using just pure numbers of memory and CPU 1 a good indication
of and how the computers performed so instead I used as clench marking us costs to take get a better idea and are for example the largest CPU memory combination of a Windows Linux although machine actually had the slowest up processing time is about 45 per cent slower than the fastest um machine which was a Mac Mini in actually had the smallest CPU memory combination but when you look at the benchmark scores from those on From geek bench the apple had a score of 2007 a and the Windows machine only had a score of 1 thousand 637 so um that kind of made those results make more sense and I I believe this was just due to the age of the Windows machine the the so how the various client test configurations compared to service idea processing performance so it's probably
So how did the various client test configurations compare to server-side geoprocessing performance? It's probably hard to see from here: these are the browser results again, and along the bottom is the server. The server was at least an order of magnitude faster than the client. The one caveat is that this was a single user, not multiple users at once, which would obviously change these results; but that is a huge difference. You see the same thing when looking at the client platforms: the same line. In fact, the server processed the data size of 100,000 vertices in under 4 seconds, which is pretty fast, while the best the browsers could handle was 50,000 vertices in 4 minutes.
Finally, are the geoprocessing times in an acceptable range for incorporation into web applications? If we look at the browser results, at 1 second the data sizes that could be processed ranged from about 1,000 to 3,000 vertices, and at 10 seconds the best result was between 4,000 and 10,000 vertices. The same held for the client platforms: at 1 second, 2,000 to 3,000 vertices was the range of data processed, and at 10 seconds, 7,000 to 10,000 vertices. Looking at the individual geoprocessing operations, at 1 second the best the union managed was 900 vertices, while buffers on lines and polygons made it to 6,000 vertices; and at 10 seconds, again, 10,000 vertices was really the best data size I saw.
So, in conclusion: the server was faster than the client in all testing scenarios, hands down, with the caveat that this was a single user; I would like to see the performance with multiple users. The best I saw from the web browsers was that they were limited to processing data of about 7,000 to 10,000 vertices, and this was under ideal test conditions: only one tab open, no other background processes running, and no other interaction happening. Outside those conditions, the results would probably not be as good. And 7,000 to 10,000 vertices is roughly equivalent to the Natural Earth data at the small-scale range, so pretty generalized data. That's all I have. Thank you.
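One practical way to act on this conclusion is a simple routing rule: keep small jobs in the browser and send everything else to the server. The threshold and names below are illustrative, drawn from the 7,000-vertex lower bound reported in the talk rather than from the study's code:

```javascript
// Sketch: route geoprocessing jobs by data size, following the talk's
// finding that browsers handled roughly 7,000-10,000 vertices under
// ideal conditions. The exact cutoff is an assumption.
const CLIENT_VERTEX_LIMIT = 7000; // conservative end of the reported range

function chooseProcessor(vertexCount) {
  return vertexCount <= CLIENT_VERTEX_LIMIT ? 'client' : 'server';
}

console.log(chooseProcessor(1500));  // "client"
console.log(chooseProcessor(50000)); // "server"
```

In a real application the cutoff would need tuning per operation and per target browser, since the study found union far slower than Voronoi diagrams at the same vertex count.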
[Applause] Thanks for a great presentation. A question for you: were you able to determine whether there were some operations that would be OK to do on the client side, that is, to distinguish operations that performed better or worse than others, say union compared to buffer compared to the rest?

I think Voronoi diagrams ran the best; I forgot to mention that in the web browser comparison I was only comparing Voronoi diagrams, and the same in the platform comparison, and that's why I chose that one: it ran the best. Union was the slowest, but that's also because it combines two different datasets, so it's roughly double the data size. I think it really comes down to data size, and I'm not sure how to handle that: if you're building a web application, you either have to constrain the data size or send the work to a Web Worker. That's something I'd like to discuss if people have ideas. But in terms of what I tested, Voronoi diagrams performed the best. Thank you.
Thanks. I have two questions. One is whether you have any sense of the difference in performance between some of the different JavaScript libraries you looked at; the other question escapes me for the moment.

I have not tested the other JavaScript libraries. I'm really curious to know how Turf compares, and I would like to know if anyone has done tests on that, because it seems to be gaining popularity.

OK, and I remember my second question. It was:
In the client-side processing tests, how much of the time went into transmission and parsing of the data, as opposed to the geoprocessing itself? In other words, if you're running a lot of geoprocessing tasks on the same data, are you likely to save significant time by sending the data only once?

For network transmission time, I ended up running about 90 tests and averaging them, because it varied so widely depending on where you're located, how many users are on the Internet at one time, and how far you are from the server; so what I used was an average test load. It is going to vary, and if you have really terrible Internet, that's obviously a factor you need to think about. The parsing was not a huge chunk of the total processing time. I can't give you the percentage offhand, but it didn't stick out to me as a significant share of the whole processing time. Thank you.
Sorry if I missed this, but what did you use to measure the time of each geoprocessing run on the client side and on the server side?

That was a little challenging, because in Java the function for measuring time is a lot more accurate than in JavaScript. The JavaScript function is escaping me right now; it's the native timing function. It wasn't a big concern, but in Java you can measure down to the nanosecond, I think, while in JavaScript you're stuck with milliseconds. I'll have to look it up, and I can get you the answer if you want to talk afterward.
Two quick questions. First, what were the specs of the server?

I was using the m1.medium instance on Amazon EC2. I can't recall the exact specs off the top of my head, but I have them on my computer here somewhere, so I can look them up afterward.
Did you do any profiling to figure out where the choke points were? Because Firefox is usually not faster than Chrome unless something is de-optimizing Chrome, which makes me wonder whether you've ever looked through JSTS for something that might be de-optimizing.

No, I didn't do any profiling; I've never had to do that before, so I would love to know how, but I didn't for this test. Chrome and Opera were faster than Firefox up to that 30,000-vertex mark, and Firefox continued on past it.

How big is 30,000 vertices in terms of megabytes? I have the graph here.
Sorry for the delay. It looks like about 1.2 megabytes, I believe.

In December we ported the fast noding code from JTS into JSTS, so I was wondering which version of JSTS you ran.

I ran the tests in February, so I should have had the latest version.
Have you looked at what happens with mobile browsers, on a mobile platform? Did you try it?

No, but that's really the next thing I would want to look at; I was planning to do it but ran out of time. I did briefly try running this on a phone, and it actually works, though I haven't looked into the compatibility of the different mobile browsers. I ran it on Android, if you're interested in working on that. It was just a very simple, small test, so I only ran maybe a hundred vertices, and it was slow going far above that. I'd be very curious to see more.
Two questions. One: did you run any tests using JSTS plus Node, to isolate the language differences and take the browser out of the model? And second, where is your paper being published, so the rest of us can save some time?

I did not run any tests in Node, and that is something I would be curious to see as well. I'm not sure what part of my results is due simply to the differences between the languages; my assumption is that Java is going to beat JavaScript every time at this kind of processing, but I would like someone to show me how they actually compare.
I haven't published my paper yet, but you can find it on my website, Aaron Hamilton dot me, slash portfolio; it's the first thing under there. Or just come up later and I'll give you my card, which has my portfolio page on it. Thank you.