
Using Street-Level Imagery to Save Cyclists


Formal Metadata

Title
Using Street-Level Imagery to Save Cyclists
Title of Series
Number of Parts
27
Author
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language
Transcript: English (auto-generated)
Hello, everyone. My name is David Greenwood. I'm the founder of Trekview. For about a year now, we've been using street-level imagery to understand the world around us, from species of trees in a certain location to the impact of coastal erosion. We do this by going out with action and 360 cameras
and setting them to capture time-lapse images and videos of the walk, ride, or adventure. Here's me on the South Downs Way, close to Brighton in the UK. We've also shared a variety of camera setups others can recreate, from helmet-mounted cameras to underwater bubbles.
We use Mapillary to host our images. For those unfamiliar, Mapillary is a street-level image platform that anyone can contribute imagery to. It's mainly home to dashcam images, but there's a growing amount of non-road-based imagery, too. What's really useful about Mapillary is their integrations with OSM editors.
Their plug-ins make it easier for contributors to improve OSM using street-level imagery for context. Think adding a surface type where satellite imagery isn't clear. This year, I visited the so-called model city for cycling, Amsterdam, and I started to ask questions like,
how many cars does a cyclist encounter? What are the main hazards besides other vehicles? To try to answer these questions and others, I used a process known as semantic segmentation to extract information from the images I captured using a GoPro MAX camera mounted to my helmet.
In short, semantic segmentation assigns a classification to each pixel in an image. Here's an example showing pixels that have been assigned as cars. Freely available models can detect many other objects, including street furniture and signage. Right now, OSM can be used
to mark and identify bike lanes and paths, but this data does not address questions like, is there enough separation between road and bike lane? Is the signage clear? It doesn't account for temporary hazards. It doesn't account for other data sets like accident reports. All of which are vital in designing suitable bike lanes,
which are more important than ever with many of us taking bicycles over public transport as the COVID-19 pandemic continues. By mixing these data sets, we want to construct a model of the perfect bike lane, if such a thing can be created.
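As a rough illustration of the per-pixel classification idea described above: a segmentation model outputs a mask the same size as the image, with one class ID per pixel, and summary statistics (such as how much of a frame is occupied by cars) fall out of simple counting. This is a minimal sketch, not Trekview's actual pipeline; the class IDs and names here are hypothetical, since real models (for example those trained on the Mapillary Vistas dataset) define their own label maps.

```python
import numpy as np

# Hypothetical class IDs for illustration only; a real segmentation
# model ships its own label map.
CLASS_IDS = {"road": 7, "car": 13, "sign": 20}

def class_fractions(mask: np.ndarray) -> dict:
    """Given a per-pixel class mask (H x W array of class IDs),
    return the fraction of the image each class of interest covers."""
    total = mask.size
    return {name: float((mask == cid).sum()) / total
            for name, cid in CLASS_IDS.items()}

# Toy 4x4 "model output": half road, a quarter car, an eighth sign
mask = np.array([[ 7,  7,  7,  7],
                 [ 7,  7,  7,  7],
                 [13, 13, 20, 20],
                 [13, 13,  0,  0]])
print(class_fractions(mask))  # {'road': 0.5, 'car': 0.25, 'sign': 0.125}
```

Aggregating fractions like these across every frame of a ride is one way such imagery could feed questions like "how many cars does a cyclist encounter?"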
And you can help. You can follow our guidelines online to contribute imagery to the project. To start with, all you need is a mobile phone and a bike mount to hold it. The photos you upload will be accessible to other OSM editors. Ideally, we'd like to start contributing this data automatically back to OSM,
but we need your photos to improve our data models and increase their accuracy. We're already learning some interesting things. As you might expect, a lack of signage correlates directly with an increase in accidents; conversely, too much signage has the same effect.
I'm happy to take questions and constructive feedback today, but I'll end by saying you can start contributing to improving the map today. Visit safer.bike for all the information you'll need. Happy cycling, everyone. Thank you.