
libamicontained: a low-level library for reasoning about resource restriction

Formal Metadata

Title
libamicontained: a low-level library for reasoning about resource restriction
Title of Series
Number of Parts
779
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
A common question language runtimes have is: how many resources do I have access to? They want to know, for example, how many threads they can run in parallel when sizing their thread pools, or how many thread-local memory arenas to allocate. The kernel offers many endpoints to query this information: /proc/cpuinfo, /proc/stat, sched_getaffinity(), sysinfo(), the cpuset cgroup hierarchy's cpuset.cpus.effective, the isolcpus kernel command line parameter, and /sys/devices/system/cpu/online. Further, libcs offer divergent implementations of sysconf(_SC_NPROCESSORS_ONLN). As a bonus, the kernel scheduler may be configured to limit resources using CPU "shares" or CPU quotas, so a task may be able to run on all cores but still be subject to a rate limit that is not reflected in the physical cores it is allowed to run on.

In this talk, we propose a new library, "libamicontained", as one place to consolidate the logic for answering this question: a statically linked, zero-dependency library exporting a C ABI, which is aware of all of these different runtime configurations and answers questions about CPU counts and the like in accordingly reasonable ways.

Of course, the real challenge here is adoption. Ideally we can pitch such a library as coming "from the container people", so it is an easier pitch to language runtimes. We are seeking feedback on all points (heuristics for reasoning about CPU counts, design goals, etc.) from container people as a first step.