UrbanFootprint: Next-Gen Scenario Planning Tool
Formal Metadata
Title: UrbanFootprint: Next-Gen Scenario Planning Tool
Number of Parts: 188
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/32092 (DOI)
Production Year: 2014
Production Place: Portland, Oregon, United States of America
Transcript: English (auto-generated)
03:33
Hi, how are you doing?
06:53
There's also a DVI to VGA adapter here if you want to use it. I don't know what you have for the,
07:02
oh, you can probably just use that, yeah. So if you want to set up on that right now. And there's also, there's a power cable here. I think so.
07:41
Actually, it was the weather back.
08:07
Yeah, I guess you're not a kind of a speckle. It was just on the outside of the river.
08:22
I'm not sure what the neighborhood was. Very small apartment. With a friend. Oh, the office, yeah. We went to the four office. Four office, yeah. It was a couple times in Austria. It was nice.
08:59
OK, I'm going to get started. It's 1 o'clock.
09:03
So let me know if this mic doesn't pick me up. It sounds like it's OK from here. Is that picking up OK? OK. So this is a presentation about an urban planning platform. How many of you are in the urban planning business? OK, quite a few. Great.
09:22
So some of you, and how many people are familiar with Urban Footprint or have just heard the word? OK, a few of you. So Urban Footprint is an open source scenario planning platform. And what is a scenario planning platform? Well, it's a suite of tools that
09:40
let cities and regions compare alternative land use futures. So for instance, alternative land use futures might mean deciding whether future development should be business as usual, sort of sprawling and auto-oriented, versus greener, compact growth around transit. So the goal of Urban Footprint is
10:01
to help cities and municipal regions either create or take the existing future land use plans they have, show them on a map, look at the feature data, and run analysis to see both what the differences are between the base year, the study year might be 2012, for instance, when they gathered their data,
10:21
and a future scenario year, such as 2050 or 2030. And then I'll get more into this, but the goal is to see how those different future scenarios compare. So my name's Andy Likuski. I'm not Garlynn Woodsong, who's listed in the program. I'm the primary software developer for Urban Footprint.
10:44
Calthorpe Associates is an urban design and planning firm located in Berkeley, California. And we're a small team with a big product. So hopefully there's something here of interest to you.
11:01
So this kind of program is often called a sketch tool because it literally lets you sketch futures on the map, which we'll get into with the interface. You're going to have base parcels on the map that show you what's on the ground and then be able to sketch a future scenario
11:22
and run analytics. The project is also exciting because it is fully open source. The stack is an open source stack, and I'll get into details on that. Since I'm the software guy and less the presenter planner guy, feel free to ask me questions on anything,
11:40
but I'm going to know the software side best. I'm going to be able to give you a lot of information on the software, and I might have to defer a little bit on some of the planning elements of it. So Calthorpe Associates has been around for a long time. They have decades of experience with scenario planning, and you can see some of the regions that we've worked in. Our focus is on transit-oriented development
12:02
and compact growth, trying to meet some of these global and regional climate environmental health goals that a lot of you are surely interested in. We've been leveraging geospatial software technology for a long time with the goal of improving the efficiency and the visibility of the planning
12:22
process. We specialize in regional planning, and we also work with individual cities and towns. What got Urban Footprint started was a project called Vision California, which took place from 2008 to 2012, approximately. And this was a project that was funded by the California High Speed Rail Authority, along with the California
12:42
Strategic Growth Council, to basically show how these different future scenarios could have environmental, fiscal, and health impacts, and really to tie in the interplay with transportation and land use
13:02
investments. So rather than looking at them in a silo, it shows how where you build can lead to either positive or negative results. We modeled data in the five major population regions of California.
13:21
And when you're doing this kind of geographic analysis with this many features, you tend to hit some limits of traditional modeling software. And we actually broke the proprietary software that we were working with. We were having 12-hour-long runs of analysis that would go overnight and then end up
13:42
with a very mysterious error message. And so the team was actually forced into an open source stack. And we're very happy that we moved in that direction. So we moved to PostGIS and Django,
14:01
and we were using OpenLayers. That was the original Vision California product. So here's just one of the many outputs of the Vision California project. We analyzed a number of different future scenarios, but this shows the two extremes: business as usual, the sprawling, auto-oriented, business-park-in-the-suburbs kind
14:24
of growth, versus growing smart, which is really a kind of a package of more aggressive green scenarios designed to concentrate growth around existing and future transit networks. So you can see on the left, a lot of that pink area
14:41
is near population centers, but really sprawled out. And on the right, you can see much more compact growth around the planned high speed rail corridor from San Francisco, Sacramento, to LA and San Diego. And very happily, that high speed rail project is now finally being constructed.
15:03
I'll also show some slides at the end of the presentation that demonstrates the impact of these different scenarios on important analysis, like greenhouse gas emissions, public health, fiscal impacts, water and energy. And please let me know if you have questions.
15:21
I know some of this stuff, especially for non-planners, can go right over your head. So stop me if any of this doesn't make sense. So here's a look at the two software products that came out of the Vision California process. The one on the bottom is actually called Rapid Fire, and that's actually a spreadsheet based model. It's for non-geospatial comparison of scenarios.
15:45
This was developed for regions that either don't have access to geospatial data or don't need to do geospatial comparison. They might just have a few different future scenarios that they're looking at, compact sprawl or something in the middle, and they have some policy packages. They wanna compare what the results are.
16:01
So Rapid Fire was designed to work without geospatial analysis. Urban Footprint, the one on the top, is the one that this talk is gonna focus on, and that is the web-enabled, open-source, and geospatially, obviously, aware platform. So Urban Footprint, like a lot of GIS applications,
16:24
you gotta start with your data, and you have to get your data together and organize it. And then the other two parts of Urban Footprint are taking that base data, developing scenarios and running analysis. So in terms of data development and organization,
16:43
our clients, which are typically the regional entities or cities, typically provide us with feature data. It's often parcels, but it could be something bigger like transportation areas. And that data, at a minimum, needs to contain information about employment categories on the parcels or other features,
17:01
as well as dwelling unit categories and probably some kind of land use code so that we can categorize it and show it on the map. They might also need to supply us with additional data, such as information about water and energy usage, so that we can run certain types of analysis. Other data we can sometimes infer from census data.
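As a rough illustration of the kind of base feature data just described, here is a minimal GeoDjango sketch of a parcel carrying dwelling unit counts, employment counts, and a land use code. The model and field names are hypothetical, not UrbanFootprint's actual schema.

```python
# Minimal GeoDjango sketch of the kind of base feature just described: parcels
# carrying dwelling unit counts, employment counts, and a land use code. Model
# and field names are hypothetical, not UrbanFootprint's actual schema.
from django.contrib.gis.db import models


class BaseParcel(models.Model):
    geometry = models.MultiPolygonField(srid=4326)       # parcel footprint
    land_use_code = models.CharField(max_length=32)      # client's classification
    dwelling_units = models.IntegerField(default=0)      # total dwelling units
    employment_retail = models.IntegerField(default=0)   # one employment category
    employment_office = models.IntegerField(default=0)   # another category
    source_year = models.IntegerField(default=2014)      # base (study) year

    class Meta:
        app_label = "scenarios"
```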
17:23
So what we get depends on what the client has available and what we can get from elsewhere. So once you have that base data loaded, it gets normalized into our system, and at that point the client is able to visualize one or more layers on the map, usually their base data parcels, we would say,
17:43
but perhaps also transportation networks that they've given us. So that's sort of a starting point where you can review the features that you have on the ground, representing a year such as 2014. The next step is to actually create or import the future scenarios,
18:00
which are gonna be your two or more alternative scenarios that you wanna analyze. Oftentimes, a client will already have future scenarios on the books. A lot of regional entities, especially in California, are required by law to come up with future plans every few years for a certain target year. So they might have a few different alternative plans
18:22
for 2040, say, already on the books. And then it's our job to take those plans and translate them into urban footprint. And that gives us scenario development. Now, it might also be the case that you have no future plans or you wanna modify one, in which case you might start with the base scenario
18:40
and create a future scenario and literally paint futures on the map. You select certain parcels and say, well, this used to be this, I want it in 2040 to be this. So one might do a query and say, I want all the low density single family homes that are within this distance of a transit stop and we wanna upgrade them to mixed use development
19:01
for the target year. That way, certain employment categories and dwelling unit categories increase, whereas other ones might decrease. So either way you do it, you're going to get future scenarios, whether imported, made from scratch, or a combination of the two.
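To make the query-driven painting idea concrete, here is a hedged GeoDjango sketch that selects low-density single-family parcels within a given distance of transit stops and reassigns them to a mixed-use code. The ScenarioParcel and TransitStop models are assumed for illustration, geometries are assumed to be in a projected SRID so meters make sense, and the real UrbanFootprint painting logic is considerably more involved.

```python
# Hedged sketch of query-driven "painting": pick low-density single-family
# parcels near transit and reassign them to a mixed-use code for the target
# year. ScenarioParcel and TransitStop are assumed models with geometries in a
# projected SRID; the real UrbanFootprint workflow is more involved.
from django.contrib.gis.db.models import Union
from django.contrib.gis.measure import D

from scenarios.models import ScenarioParcel, TransitStop  # hypothetical models


def paint_mixed_use_near_transit(scenario_id, radius_m=800):
    """Upgrade low-density single-family parcels near transit to mixed use."""
    # Merge all transit stop geometries into one shape to test distance against.
    stops = TransitStop.objects.aggregate(merged=Union("geometry"))["merged"]

    targets = ScenarioParcel.objects.filter(
        scenario_id=scenario_id,
        land_use_code="SF_LOW",                      # low-density single family
        geometry__dwithin=(stops, D(m=radius_m)),    # within radius of any stop
    )

    # "Paint" the selection: new land use code and an illustrative density bump.
    return targets.update(land_use_code="MIXED_USE_MED", dwelling_units=40)
```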
19:21
While you're making these future scenarios, you have the opportunity to run analysis on them. And some of the analysis is very simple. It's just saying, well, what's the delta from the base year to the future year? How much more employment of certain types do we have? How many more dwelling units do we have? Other analysis, all the modules shown here
19:44
are somewhat more complex or very complex. So we have things like what are the local fiscal impacts? What are the public health impacts? Transportation, you can do VMT analysis, vehicle miles traveled. And that's a much more complicated one that requires analysis of travel networks.
20:03
So the rest of the presentation is gonna focus on these three different parts: data, scenario development, and analysis. So let's start with the software stack for those that are interested in this. Starting from the top going down to the bottom, on the web side of things,
20:21
we're currently using Polymaps for the maps. That's transitioning to Leaflet, since Polymaps is no longer supported. We use a JavaScript framework called SproutCore which enables us to do a single-page web app. It's a very powerful model-view-controller framework, for those of you who know what that is.
20:43
It lets us do a single-page app with a lot of complex functionality embedded into the system. On the server side, we've got Django running, Python, Postgres, PostGIS on top of Ubuntu, the usual suspects in this world. We also work with D3 for our charting
21:02
and we do some Socket.IO code to send messages back and forth to the client. And we also use TileStache to render our tiles on the server side. Let me know if you have more questions on this. I can talk forever about that.
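As a small illustration of how a Django backend in a stack like this might hand aggregate numbers to D3 charts on the client, here is a hedged sketch of a JSON endpoint. The view, model, and field names are assumptions, not UrbanFootprint's actual API.

```python
# Hedged sketch of a Django view that aggregates scenario numbers in Postgres
# and returns JSON that a client-side D3 chart could consume. Model and field
# names are assumptions, not UrbanFootprint's actual API.
from django.db.models import F, Sum
from django.http import JsonResponse

from scenarios.models import ScenarioParcel  # hypothetical model


def scenario_totals(request, scenario_id):
    """Return aggregate counts for one scenario, e.g. for a comparison chart."""
    totals = ScenarioParcel.objects.filter(scenario_id=scenario_id).aggregate(
        dwelling_units=Sum("dwelling_units"),
        employment=Sum(F("employment_retail") + F("employment_office")),
    )
    return JsonResponse({"scenario_id": scenario_id, **totals})
```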
21:20
So here's a first look at Urban Footprint. This is a map of the city of Irvine in the Los Angeles region. It shows their parcels, and it's colored by the land use codes that they gave us. The map, of course, is the center of attention. We wanna make sure that most of the interaction
21:42
involves looking at the map. We support a limited set of features on the map such as navigation of course, but also selecting features and very targeted editing of those features. So we don't allow users to just edit any feature they want. We usually create specific editors
22:01
that are tailored to their workflow. In the case of Irvine, they need to be able to review their land use codes, edit them, and then comment on the edits they made. So this little edit interface on the right is doing just that. For the rest of the app here, you've got your various layers on the left,
22:22
and there's an unlimited number of layers they can bring into the system. We typically pre-configure layers for them at this point because of the complexity of it. But we also have the ability to import certain types of layers and export layers to various formats.
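As one common way such an import could work, here is a hedged sketch that shells out to GDAL's ogr2ogr to load a client shapefile into PostGIS; this is not necessarily how UrbanFootprint's importer is implemented, and the connection details are placeholders.

```python
# Hedged sketch of one common way to pull a client's shapefile layer into
# PostGIS by shelling out to GDAL's ogr2ogr. Not necessarily how the actual
# importer works; connection details and table name are placeholders.
import subprocess


def import_shapefile(shp_path, table_name, dbname="urbanfootprint"):
    subprocess.check_call([
        "ogr2ogr",
        "-f", "PostgreSQL",
        f"PG:dbname={dbname}",
        shp_path,
        "-nln", table_name,           # target table name
        "-t_srs", "EPSG:4326",        # reproject on the way in
        "-nlt", "PROMOTE_TO_MULTI",   # normalize polygon vs. multipolygon
        "-overwrite",
    ])
```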
22:41
We don't yet have a styling tool or a map legend tool, but that's on our short-term roadmap. So we're really hoping to have a pretty full-featured input, output, and editing process for the layers in the near term. There's also a layer organizer
23:02
so you can drag and drop layers to get the order on the map that you want. On the top is a query editing interface. This is one of the neatest parts of the app. This is very powerful SQL based querying of the map features. And you can do all the typical SQL style querying.
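As an example of the kind of SQL such a query interface might generate, here is a hedged sketch that groups parcels by a jurisdiction code and aggregates dwelling units and employment. The table and column names are assumptions, not UrbanFootprint's real schema.

```python
# Hedged sketch of the kind of SQL such a query interface might issue: group
# parcels by jurisdiction and aggregate dwelling units and employment. Table
# and column names are assumptions, not UrbanFootprint's real schema.
from django.db import connection

GROUP_BY_JURISDICTION_SQL = """
    SELECT jurisdiction_code,
           SUM(dwelling_units)                         AS total_dwelling_units,
           AVG(employment_retail + employment_office)  AS avg_employment
    FROM scenarios_scenarioparcel
    WHERE scenario_id = %s
    GROUP BY jurisdiction_code
    ORDER BY jurisdiction_code;
"""


def employment_by_jurisdiction(scenario_id):
    """Run the aggregate query and return one row per jurisdiction."""
    with connection.cursor() as cursor:
        cursor.execute(GROUP_BY_JURISDICTION_SQL, [scenario_id])
        return cursor.fetchall()
```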
23:22
You can do filters, you can join any two layer tables together geographically or by attribute. You can also do aggregation. So you can sum your dwelling units or employments or average them. And then you can group by other attributes. So you might, for instance, have several jurisdictional codes
23:41
that represent different cities in your region. You might wanna group by those jurisdictional codes and say, well, I wanna know what my average employment is or break it down by certain employment categories and get the selections on the map. So these are query results we're seeing here
24:01
for individual features, but you can also get aggregate results and show them there and export them to CSV and other formats. And then finally on the right, I showed that before, that's the pop out editor area, but that also is the area where you can run analysis. So we do a lot of different projects
24:23
for different clients based on their needs. For instance, we just did a project for the Southern California Association of Governments, also known as SCAG. And SCAG is the largest metropolitan planning organization in the United States. They have 190 cities in six counties.
24:41
So whenever you do work with SCAG, you're dealing with immense amounts of data. So for this project, it was a data review pilot. And what they wanted to do was take the data that they had at the regional level and expose it to the various cities in the region. So we picked several cities out from Orange County
25:01
and LA County to take a look at the data that SCAG had and check over the land use codes, make sure things were correct, and if not, make changes to those codes and add comments in. So this was a neat project because it forced us to, on the fly, implement some new features like a complete user permission system.
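A hedged sketch of the region-scoping idea just described: a logged-in city user only gets features inside their assigned region, even though a single server hosts everything. The profile and region models here are assumptions for illustration.

```python
# Hedged sketch of region-scoped permissions: a logged-in city user only sees
# features inside their assigned region, even though one server hosts every
# region. The profile and Region models are assumptions for illustration.
from scenarios.models import ScenarioParcel  # hypothetical model


def parcels_visible_to(user, scenario_id):
    """Limit a parcel query to the region assigned to the user's profile."""
    region = user.profile.region  # assumed profile object carrying a Region
    return ScenarioParcel.objects.filter(
        scenario_id=scenario_id,
        geometry__within=region.boundary,  # spatial containment test in PostGIS
    )
```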
25:23
So now we have user permissions that limit what region a certain client sees. So if you log in as the city of Irvine, you're only gonna see the Irvine city parcels, even though it's just one server serving up everything. And we also have the ability to limit what is editable
25:42
and who has administrative access. The other neat thing about this project is because it was designed to have low-level people at a certain jurisdiction making edits and then managers in that jurisdiction or the regional entity review the parcels,
26:02
we implemented a data review system so that when somebody edits a land use code and comments on it, then the manager can go and take a look at everything that's been edited and decide whether they want to approve that and merge it into the master copy, which we would call the, say, the master scenario
26:21
versus the draft scenario. So our focus has really been to implement features based on what our current clients need, and because it's open source, those features are all fully available to the other clients. We also implemented a
26:40
really primitive data versioning system for this release: whenever any feature is edited, a revision is created, and you can go back and take a look at your revision history. So in some ways, we're trying to get a little bit toward a Git-style repository system for the feature data, where we can show versions of the data
27:02
and allow merging from a draft scenario to a master scenario. And this kind of thing is really important for government agencies that have a lot of auditing requirements and want to keep track of what's happening with a large number of jurisdictions.
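A much-simplified, hypothetical stand-in for the revisioning behavior described here: every save of an edited feature writes a revision row that a manager can later review and approve. Model names are illustrative, and the actual UrbanFootprint implementation differs.

```python
# Much-simplified, hypothetical stand-in for the revisioning described above:
# every save of a parcel writes a revision row that a reviewer can inspect and
# later approve into the master scenario. Model names are illustrative.
from django.db.models.signals import post_save
from django.dispatch import receiver

from scenarios.models import FeatureRevision, ScenarioParcel  # hypothetical


@receiver(post_save, sender=ScenarioParcel)
def record_revision(sender, instance, **kwargs):
    """Append an immutable revision row each time a feature is edited."""
    FeatureRevision.objects.create(
        parcel=instance,
        land_use_code=instance.land_use_code,
        comment=getattr(instance, "edit_comment", ""),  # set by the editor UI
        approved=False,                                 # pending manager review
    )
```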
27:24
This next slide is actually about sketching futures. This is sort of the second part of the three tiers I showed you. So first we had the analysis of what was on the ground, and this is actually a project for the San Diego region, the San Diego regional planning agency. They already had a number of year-2050 future scenarios
27:45
on the books outside of Urban Footprint. So you can see on the top left about six different scenarios that we brought into the system. And shown on the map is one of those future scenarios. You can see the parcels are colored, along with the transit networks there.
28:02
And here's a first view at some of the D3 charts we create. And these D3 charts are great because as the users update their parcels, as they paint them with new land uses, we constantly run analysis on the backend and update the charts. So each time that you paint one or more parcels, you'll see these charts updated
28:21
and you can see, well, now I have a different delta between my base year and my future year in terms of employment. And we also have charts that allow you to compare one or more scenarios side by side. So you can see which scenario is performing better or worse in a certain category. And then finally,
28:43
there's managing all of this: each one of these scenarios typically contains a unique set of feature layers, so it's a lot of data to manage. Some of the data that doesn't change can be shared. The goal of this is to allow clients to rapidly experiment with new scenarios. So take one scenario, clone it a couple of times,
29:03
make some changes and see how it performs. So we're constantly trying to make the system faster and more distributed on the backend. Clients have the ability to choose land use codes when they're painting their future parcels.
29:22
This is something that Calthorpe Associates really specializes in: a hierarchy of land use and building types. So what we do is we actually take sample buildings that exist in the real world. We model them in the system. You can see the editor in back there for a certain building. And then we combine similar types of buildings
29:42
into what's called a building type. And that is what a client will typically use to paint their future scenarios. When clients are painting bigger areas than parcels, we also have something called a place type, which allows them to combine building information with other urban forms, such as streets and parks,
30:02
into a combination of different assets on the ground. In the front image here, you can see a visualization of one of those place types. So we have some aggregate attribute information as well as some D3 charts based on that information.
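To sketch the building, building type, and place type hierarchy in code, here is a hedged example using plain dataclasses, where a building type averages its sample buildings and a place type mixes building types by land share. Attribute names and the simple weighting are illustrative, not Calthorpe's actual models.

```python
# Hedged sketch of the building -> building type -> place type hierarchy using
# plain dataclasses. Attribute names and the simple averaging/weighting are
# illustrative; the real Calthorpe models carry far more detail.
from dataclasses import dataclass
from statistics import mean
from typing import List, Tuple


@dataclass
class Building:
    dwelling_units_per_acre: float
    jobs_per_acre: float


@dataclass
class BuildingType:
    """A group of similar real-world sample buildings."""
    buildings: List[Building]

    @property
    def dwelling_units_per_acre(self) -> float:
        return mean(b.dwelling_units_per_acre for b in self.buildings)


@dataclass
class PlaceType:
    """Building types mixed with other urban elements, weighted by land share."""
    components: List[Tuple[BuildingType, float]]  # (building type, share of area)

    @property
    def dwelling_units_per_acre(self) -> float:
        return sum(bt.dwelling_units_per_acre * share for bt, share in self.components)
```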
30:25
So that was kind of phase two, the scenario development. And then finally, we have to test the impacts using the various analysis. So again, you see public health, fiscal impacts, et cetera. These are analysis modules that are developed
30:42
in-house at Calthorpe or outside, and they are all peer reviewed by the academic and scientific community and continually upgraded as new information becomes available. Since it's an open source project, our goal is to make these as transparent as possible and to provide feedback when these analysis modules run
31:01
so that users of the system and the public can understand how the numbers are being generated and have a reasonable amount of confidence in the results. Here's an example of one of the analysis modules. This is the vehicle miles traveled module running.
31:20
This is a big one that takes a long time. We do run distributed processes on the back end to speed it up. So when you're developing scenarios, you can run this analysis at any time and get the aggregate results here. So this is showing some total VMT for the region in aggregate form.
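As a toy illustration of fanning an expensive per-zone analysis across worker processes, here is a hedged sketch using Python's multiprocessing; the talk does not name the actual task distribution tool, and the per-zone math below is a placeholder, not the real travel model.

```python
# Toy sketch of fanning an expensive per-zone analysis across worker processes,
# in the spirit of the distributed VMT run described above. The per-zone math
# is a placeholder, not the real travel model.
from multiprocessing import Pool


def vmt_for_zone(zone):
    """Placeholder calculation; the real module analyzes travel networks."""
    households, avg_daily_vehicle_miles = zone
    return households * avg_daily_vehicle_miles


def total_vmt(zones, workers=4):
    with Pool(processes=workers) as pool:
        return sum(pool.map(vmt_for_zone, zones))


if __name__ == "__main__":
    # (households, average daily vehicle miles per household) per zone, made up.
    print(total_vmt([(1200, 42.0), (800, 18.5), (450, 61.0)]))
```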
31:41
And then we also have the ability, of course, to show the results on the map, in this case going from green, the least VMT near the city centers, to red, the most VMT away from the city centers. And we can, of course, also represent it with specific custom VMT charts up on top there.
32:04
So these analysis modules are pretty exciting because they can be updated, extended, and we're always looking to add new ones to make the system more powerful and useful to our clients. We're also working on APIs to better expose the modules
32:20
to the front end and to allow contribution and collaboration. And then finally, I'm just gonna quickly show some of the results of the Vision California project. This is actually from the spreadsheet-based model, the Rapid Fire model, just to give you an idea of how persuasive some of these results can be to decision makers. So this is for California in 2050,
32:42
showing the amount of land that can be saved by adopting a smart growth policy instead of a business as usual policy. Enough land saved to match Delaware and Rhode Island combined. And then similarly, here's one for VMT showing just a tremendous number of miles traveled reduced
33:02
by adopting the smart growth policy. Here's an Urban Footprint result from Vision California, again showing another VMT map, showing how much VMT has increased in the LA region in the outlying areas versus near the city centers.
33:26
Here's a couple more: one showing how much water can be saved by 2050, 50 times Hetch Hetchy. Similarly, building energy: enough power for all homes in California for eight years.
33:43
Household savings from energy and water, and auto fuel and ownership savings, by concentrating land use. Greenhouse gas emissions: significant savings by building more compactly,
34:02
building closer to transit, and reducing passenger vehicle miles. And there's a lot more information on these results if anybody's interested. Then real quick, a couple of next steps for Urban Footprint. We're really working on scaling this up for multiple users as we did for the LA region,
34:24
Providing customer support is a big goal of ours; when you have a lot of different municipal workers like planners and other people using the software, you need to have basically 24-hour support. And then also constant user interface enhancements.
34:42
People give us suggestions all the time, and we're always trying to make the product better. We're also always working to improve the analytic modules. We're working on social equity indicators, for instance, climate adaptation and resilience, and conservation and ecosystem services.
35:01
And we have a few other ones that are in the works as well. And then the last thing is we're really working on bringing this more to the local planning process. So during community reviews of plans, we can get the community engaged by showing the software to them and allowing them to see how their decisions, or decision makers' decisions,
35:22
can impact their quality of life. We're also hoping to help out with the general planning process for cities and regions, as well as assessing health impacts and helping with climate action planning. So there's a lot going on here. That's a super quick overview. Let me know if you have specific questions
35:42
either about what we do in terms of planning or in terms of software. Yeah. So is it working?
36:02
Yeah. Okay. So I sort of understood that the land use is associated with a land parcel. What if, when you're dealing with future scenarios, there's a huge land parcel which is now agricultural land, and you would like to, I think the urban planner would like,
36:22
just to split that into different sections. How do you deal with that? We don't support editing the parcel geometry right now in the software; that will probably be supported in the future. There are, I think, some ways to do it on the backend. So even if they can't split it up, we can bring it into the system so that it's different than the base year.
36:42
So currently, what the software does, when you were talking about painting, it means that there is a swath of land which is associated with a given land use, right? All the parcels within that bounding box or whatever they already selected, right?
37:00
So whole parcels, not portions of land. Whole parcels, although the way the analysis works, if you were to take an agricultural parcel and turn it into mixed use development, the system's smart enough to know that that's not gonna be a single parcel. It's gonna analyze it on the assumption that it would be broken up into many parcels. Okay, so we have one more technical question.
37:22
Since the software is, of course, multi-user, how do you deal with people designing different scenarios on the same dataset? We typically clone the feature sets that are going to be edited.
37:42
That's where really the data review and merging process, sort of the Git-like versioning becomes really important. But each scenario, we clone the minimum. We don't wanna clone anything more than we have to, but obviously if they're editing, we have to make a new copy. So when you're talking about cloning,
38:00
you mean cloning including the geometry, or just the land use? Yeah, we clone the geometry too. Isn't that a little bit expensive? It is, yeah. All these things can be optimized. In the future, the goal would be to have the geometry normalized and just reference it separately.
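A hedged sketch of that geometry normalization idea: store each geometry once and have per-scenario feature rows reference it by foreign key, so cloning a scenario copies cheap attribute rows rather than geometries. Model names are illustrative only.

```python
# Hedged sketch of geometry normalization: store each geometry once and have
# per-scenario feature rows reference it, so cloning a scenario copies cheap
# attribute rows instead of geometries. Names are illustrative only.
from django.contrib.gis.db import models


class ParcelGeometry(models.Model):
    geometry = models.MultiPolygonField(srid=4326)

    class Meta:
        app_label = "scenarios"


class ScenarioFeature(models.Model):
    scenario_id = models.IntegerField()
    geometry_ref = models.ForeignKey(ParcelGeometry, on_delete=models.PROTECT)
    land_use_code = models.CharField(max_length=32)
    dwelling_units = models.IntegerField(default=0)

    class Meta:
        app_label = "scenarios"
```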
38:21
We do a little bit of that, and it's gonna keep getting better as time goes on. Thanks. So is this, as an open source project,
38:41
is this something that somebody would need to go through you as a client for, or is it something that somebody could just download and implement at their city? At this point, you have to go through us. Our goal is to make it more and more friendly. I mean, it's a complicated process and it's usually gonna need some amount of help, but the goal, as an open source
39:03
program, is to make it more and more accessible, so there's more plug and play to it, and so that we start publishing the data standards so you know how to set it up and get the data into your system. But it does run on the Amazon cloud; that's the way we typically run it. And it's getting to the point
39:20
where it's very distributed. And so typically it requires some amount of assistance to get set up and running.
39:40
Okay, well, I think my time is up, so thank you very much.