
libamicontained: a low-level library for reasoning about resource restriction

Formal Metadata

Title
libamicontained: a low-level library for reasoning about resource restriction
Series Title
Number of Parts
779
Author
License
CC Attribution 2.0 Belgium:
You may use and modify the work or its content for any legal purpose, and reproduce, distribute, and make it publicly available in unmodified or modified form, provided you credit the author/rights holder in the manner they specify.
Identifiers
Publisher
Year of Publication
Language

Content Metadata

Subject Area
Genre
Abstract
A common question for language runtimes is: how many resources do I have access to? They want to know, for example, how many threads they can run in parallel when sizing a thread pool, how many thread-local memory arenas to allocate, and so on. The kernel offers many endpoints for querying this information: /proc/cpuinfo, /proc/stat, sched_getaffinity(), sysinfo(), the cpuset cgroup hierarchy's cpuset.cpus.effective, the isolcpus kernel command line parameter, and /sys/devices/system/cpu/online. Further, libcs offer divergent implementations of sysconf(_SC_NPROCESSORS_ONLN). As a bonus, the kernel scheduler may be configured to limit resources using CPU "shares" or CPU quotas, so a task may be able to run on all cores yet be rate-limited in a way that is not reflected in the set of physical cores it is allowed to run on.

In this talk, we propose a new library, "libamicontained", to consolidate the logic for answering this question in one place: a statically linked, zero-dependency library exporting a C ABI that is aware of all of these different runtime configurations and answers questions about CPU counts and the like in correspondingly reasonable ways. Of course, the real challenge here is adoption. Ideally we can pitch such a library as coming "from the container people", which makes it an easier sell to language runtimes. We are seeking feedback on all points (heuristics for reasoning about CPU counts, design goals, etc.) from container people as a first step.
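To make the divergence concrete, here is a minimal C sketch, purely illustrative and not the proposed libamicontained API, that compares three of the sources named above: sysconf(_SC_NPROCESSORS_ONLN), the sched_getaffinity() mask, and the cgroup v2 CPU quota, and derives a single rounded-up "effective" CPU count. The hard-coded /sys/fs/cgroup/cpu.max path and the cap-by-quota rule are simplifying assumptions; a real implementation would resolve the process's cgroup via /proc/self/cgroup and weigh the other sources as well.

/* cpucount.c: compare several CPU-count sources and derive one number.
 * Illustrative sketch only; not the libamicontained API. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Read the cgroup v2 quota from /sys/fs/cgroup/cpu.max, whose content is
 * either "max <period>" (unlimited) or "<quota> <period>". Returns a
 * fractional CPU count, or -1 if there is no limit or the file is absent.
 * Assumes the unified cgroup v2 mount point; real code should not. */
static double cgroup_cpu_limit(void)
{
    FILE *f = fopen("/sys/fs/cgroup/cpu.max", "re");
    if (!f)
        return -1.0;

    char quota[32];
    long period;
    double limit = -1.0;
    if (fscanf(f, "%31s %ld", quota, &period) == 2 && quota[0] != 'm')
        limit = (double)atol(quota) / (double)period;
    fclose(f);
    return limit;
}

int main(void)
{
    /* Source 1: libc's view of online CPUs. */
    long online = sysconf(_SC_NPROCESSORS_ONLN);

    /* Source 2: the CPUs this task is actually allowed to run on. */
    cpu_set_t set;
    CPU_ZERO(&set);
    long affinity = -1;
    if (sched_getaffinity(0, sizeof(set), &set) == 0)
        affinity = CPU_COUNT(&set);

    /* Source 3: the scheduler rate limit, which may be fractional. */
    double quota = cgroup_cpu_limit();

    /* Heuristic (an assumption, not the library's): start from the affinity
     * mask, fall back to the online count, then cap by the quota, rounding
     * up so that a 0.5-CPU quota still reports one usable CPU. */
    double effective = (affinity > 0) ? (double)affinity : (double)online;
    if (quota > 0 && quota < effective)
        effective = quota;
    long rounded = (long)effective;
    if (effective > (double)rounded)
        rounded++;

    printf("online: %ld, affinity: %ld, quota: %.2f, effective: %ld\n",
           online, affinity, quota, rounded);
    return 0;
}

Run inside a container started with a CPU quota (for example Docker's --cpus=0.5), the three sources already disagree with one another; hiding that disagreement behind a single, sensible answer is exactly what the proposed library is for.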