
Frontera: open source large-scale web crawling framework

Formal Metadata

Title
Frontera: open source large-scale web crawling framework
Title of Series
Part Number
130
Number of Parts
173
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt, copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language
Production Place
Bilbao, Euskadi, Spain

Content Metadata

Subject Area
Genre
Abstract
Alexander Sibiryakov - Frontera: open source large-scale web crawling framework. In this talk I'm going to introduce Scrapinghub's new open source framework, [Frontera]. Frontera allows building real-time distributed web crawlers as well as website-focused ones, offering: customizable URL metadata storage (RDBMS or key-value based), crawling strategies management, transport layer abstraction, and fetcher abstraction. Along with a description of the framework, I'll demonstrate how to build a distributed crawler using [Scrapy], Kafka and HBase, and hopefully present some statistics on the Spanish internet collected with the newly built crawler. Happy EuroPythoning!
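The "crawl frontier" concept at the heart of the framework can be illustrated with a toy, in-memory sketch (hypothetical names, not Frontera's actual API): a queue of URLs to fetch plus a seen-set for deduplication. Frontera's real backends persist this state in an RDBMS or key-value store such as HBase and partition it across workers.

```python
from collections import deque

class ToyFrontier:
    """Minimal in-memory crawl frontier: a FIFO queue plus a seen-set.

    Illustrative sketch only; a production frontier persists this
    state (e.g. in HBase) and shards it across crawler workers.
    """

    def __init__(self, seeds):
        self._seen = set()
        self._queue = deque()
        for url in seeds:
            self.add(url)

    def add(self, url):
        # Deduplicate: schedule each URL at most once.
        if url not in self._seen:
            self._seen.add(url)
            self._queue.append(url)

    def next_batch(self, n):
        # Hand the fetcher up to n URLs to download next.
        batch = []
        while self._queue and len(batch) < n:
            batch.append(self._queue.popleft())
        return batch

frontier = ToyFrontier(["http://example.com/"])
frontier.add("http://example.com/a")
frontier.add("http://example.com/")   # duplicate, ignored
print(frontier.next_batch(10))
```

In a distributed setup along the lines described in the abstract, the `add` and `next_batch` calls would travel over a message bus such as Kafka between the spiders (fetchers) and the workers that own the frontier state.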
Keywords