
CK: an open-source framework to automate, reproduce, crowdsource and reuse experiments at HPC conferences

Formal Metadata

Title
CK: an open-source framework to automate, reproduce, crowdsource and reuse experiments at HPC conferences
Series Title
Number of Parts
561
Author
License
CC Attribution 2.0 Belgium:
You may use, modify, reproduce, distribute and make the work or its content publicly available, in unchanged or changed form, for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
Validating experimental results from articles has finally become the norm at many HPC and systems conferences: nowadays, more than half of accepted papers pass artifact evaluation and share related code and data. Unfortunately, the lack of a common experimental framework, research methodology and formats places an increasing burden on evaluators, who must validate a growing number of ad-hoc artifacts. Furthermore, having too many ad-hoc artifacts and Docker snapshots is almost as bad as not having any (!), since they cannot be easily reused, customized and built upon.

While reviewing more than 100 papers during artifact evaluation at HPC conferences, we noticed that many of them use similar experimental setups, benchmarks, models, data sets, environments and platforms. This motivated us to develop Collective Knowledge (CK), an open workflow framework with a unified Python API to automate common research tasks such as detecting software and hardware dependencies, installing missing packages, downloading data sets and models, compiling and running programs, performing autotuning and co-design, crowdsourcing time-consuming experiments across computing resources provided by volunteers (similar to SETI@home), reproducing results, automatically generating interactive articles, and so on: http://cKnowledge.org.

In this talk I will introduce CK concepts and present several real-world use cases from the Raspberry Pi Foundation, ACM, General Motors, Amazon and Arm on collaborative benchmarking, autotuning and co-design of efficient software/hardware stacks for emerging workloads, including deep learning. I will also present our latest initiative to create an open repository of reusable research components and workflows at HPC conferences; we plan to use it to automate the Student Cluster Competition Reproducibility Challenge at the Supercomputing conference.
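The abstract refers to CK's unified Python API for automating tasks such as compiling and running shared benchmarks. The snippet below is a minimal sketch of what driving such a workflow might look like, assuming the CK framework is installed (e.g. via pip); the repository name "ck-crowdtuning" and program name "cbench-automotive-susan" are illustrative examples drawn from CK's public documentation and may differ from the setups shown in the talk.

    # Minimal sketch of invoking a CK workflow through its Python API.
    # Assumes the CK framework is installed; repository and program
    # names are illustrative examples, not part of this talk's material.
    import ck.kernel as ck

    # Pull a shared CK repository with benchmarks and workflows
    r = ck.access({'action': 'pull',
                   'module_uoa': 'repo',
                   'data_uoa': 'ck-crowdtuning'})
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'CK pull failed'))

    # Compile a shared benchmark; CK resolves software/hardware
    # dependencies and installs missing packages along the way
    r = ck.access({'action': 'compile',
                   'module_uoa': 'program',
                   'data_uoa': 'cbench-automotive-susan',
                   'speed': 'yes'})
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'CK compile failed'))

    # Run the benchmark and record the experiment in a unified format
    r = ck.access({'action': 'run',
                   'module_uoa': 'program',
                   'data_uoa': 'cbench-automotive-susan'})
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'CK run failed'))

    print('Benchmark compiled and executed via the CK workflow.')

The same actions are also exposed through the ck command-line front end, so the Python calls above roughly correspond to commands such as "ck pull repo:ck-crowdtuning" and "ck run program:cbench-automotive-susan".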