
Data Formats for Data Science

Formal Metadata

Title
Data Formats for Data Science
Title of Series
Part Number
84
Number of Parts
169
Author
Valerio Maggio
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt, copy, distribute and transmit the work or content, in adapted or unchanged form, for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content, also in adapted form, is shared only under the conditions of this license.
Identifiers
Publisher
Release Date
2016
Language
English

Content Metadata

Subject Area
Genre
Abstract
Valerio Maggio - Data Formats for Data Science

CSV is the most widely adopted data format, used to store and share *not-so-big* scientific data. However, it is not particularly suited to data that require any sort of internal hierarchical structure, or to data that are simply too big. In those cases, other data formats must be considered. In this talk, different data formats will be presented and compared with respect to their use in scientific computations, along with the corresponding Python libraries.

-----

*Plain text* is one of the simplest yet most intuitive formats in which data can be stored. It is easy to create, human and machine readable, *storage-friendly* (i.e. highly compressible), and quite fast to process. Textual data can also be easily *structured*; in fact, to date the CSV (*Comma Separated Values*) format is the most common among data scientists. However, CSV is not well suited to data that require an internal hierarchical structure, or to data too big to fit on a single disk. In these cases other formats must be considered, according to the shape of the data and the specific constraints imposed by the context. These formats may leverage *general purpose* solutions, e.g. [No]SQL databases or HDFS (the Hadoop Distributed File System), or may be specifically designed for scientific data, e.g. HDF5, ROOT, NetCDF.

In this talk, the strengths and flaws of each solution will be discussed, focusing on their use in scientific computations. The goal is to provide practical guidelines for data scientists, derived from a comparison of the different Pythonic solutions presented for the case study analysed. These will include `xarray`, `pyROOT` *vs* `rootpy`, `h5py` *vs* `PyTables`, `bcolz`, and `blaze`. Finally, a few notes on the new trend of **columnar databases** (e.g. *MonetDB*) for very fast in-memory analytics will also be presented.
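As a rough illustration of the contrast the abstract draws between flat CSV and hierarchical scientific formats, here is a minimal sketch using `pandas` and `h5py`. It is not taken from the talk; the file names, group layout, and attributes are invented for illustration.

```python
# Minimal sketch (assumed example, not from the talk): flat CSV vs hierarchical HDF5.
import numpy as np
import pandas as pd
import h5py

# Flat, tabular data: CSV via pandas is usually enough.
df = pd.DataFrame({"time": np.arange(5), "signal": np.random.rand(5)})
df.to_csv("run_001.csv", index=False)

# Hierarchical data (groups of runs, each with arrays and metadata):
# HDF5 stores the structure explicitly and supports compression.
with h5py.File("experiment.h5", "w") as f:
    grp = f.create_group("run_001")            # hypothetical group name
    grp.attrs["detector"] = "demo"             # hypothetical metadata attribute
    grp.create_dataset("time", data=df["time"].values)
    grp.create_dataset("signal", data=df["signal"].values, compression="gzip")

# Read back only the piece that is needed, without loading the whole file.
with h5py.File("experiment.h5", "r") as f:
    signal = f["run_001/signal"][:]
    print(signal.shape, f["run_001"].attrs["detector"])
```

The same hierarchical layout could equally be written with `PyTables` or wrapped in labelled arrays with `xarray`; the point of the sketch is only that nested structure and per-dataset compression have no natural equivalent in a single CSV file.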