
PySpark - Data processing in Python on top of Apache Spark.

Formal Metadata

Title
PySpark - Data processing in Python on top of Apache Spark.
Series Title
Part
115
Number of Parts
173
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt, and copy, distribute and make the work or content publicly accessible, in unchanged or adapted form, for any legal and non-commercial purpose, as long as you credit the author/rights holder in the manner specified by them and pass on the work or this content, including in adapted form, only under the terms of this license.
Identifiers
Publisher
Publication Year
Language
Production Place
Bilbao, Euskadi, Spain

Content Metadata

Subject Area
Genre
Abstract
Peter Hoffmann - PySpark - Data processing in Python on top of Apache Spark. [Apache Spark] is a computational engine for large-scale data processing. It is responsible for scheduling, distributing, and monitoring applications that consist of many computational tasks across many worker machines on a computing cluster. This talk gives an overview of PySpark with a focus on Resilient Distributed Datasets and the DataFrame API. While Spark Core itself is written in Scala and runs on the JVM, PySpark exposes the Spark programming model to Python. It defines an API for Resilient Distributed Datasets (RDDs). RDDs are a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are immutable, partitioned collections of objects. Transformations construct a new RDD from a previous one; actions compute a result based on an RDD. Multiple computation steps are expressed as a directed acyclic graph (DAG). The DAG execution model is a generalization of the Hadoop MapReduce computation model.

The Spark DataFrame API was introduced in Spark 1.3. DataFrames evolve Spark's RDD model and are inspired by Pandas and R data frames. The API provides simplified operators for filtering, aggregating, and projecting over large datasets. The DataFrame API supports different data sources such as JSON data sources, Parquet files, Hive tables, and JDBC database connections.
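A minimal PySpark sketch of the RDD model described above, assuming a local Spark installation; the app name and data are illustrative choices, not taken from the talk. Transformations are lazy and only extend the DAG, while the closing action triggers execution.

from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-sketch")

# parallelize() creates an immutable, partitioned RDD from a local collection.
numbers = sc.parallelize(range(10))

# Transformations (map, filter) are lazy: each constructs a new RDD
# and extends the DAG without computing anything yet.
squares = numbers.map(lambda x: x * x)
evens = squares.filter(lambda x: x % 2 == 0)

# An action (collect) triggers execution of the whole DAG.
print(evens.collect())  # [0, 4, 16, 36, 64]

sc.stop()

And a sketch of the DataFrame operators mentioned in the abstract (projection, filtering, aggregation). It uses the SparkSession entry point of current PySpark releases rather than the SQLContext of the Spark 1.3 era the talk covers; "people.json" and its columns are hypothetical example data.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-sketch").getOrCreate()

# DataFrames can be loaded from sources like JSON, Parquet, Hive, or JDBC.
df = spark.read.json("people.json")

# Projection, filtering, and aggregation run on the same DAG
# execution engine as RDD operations.
(df.select("name", "age")
   .filter(df.age > 30)
   .groupBy("age")
   .agg(F.count("*").alias("n"))
   .show())

spark.stop()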
Keywords