The Technical BSD Conference 2019
2019 · 34 results · 1 day 4 hours
1:11:43
Bluhm, Alexander: When you try a new software version, something may be wrong or slow. You want to figure out when the regression was introduced. After updating OpenBSD you might see that something does not work as it used to. To simplify debugging it is helpful to determine the point in time when the change was introduced and search for the relevant commit. For functional regressions a test suite that is executed on a daily basis is sufficient. By providing recent test results, the relevant day can be seen and the responsible developer of that area is informed. Making statements about performance is more difficult. The requirements for measurements may change, new test programs are needed, and test hardware is only available for a limited time. So it is not sufficient to store historic data on a daily basis; you want to change granularity or look back into the past. For that purpose I have created a system that can measure the performance of an OpenBSD kernel as it existed at any point in the past. The kernel is compiled from a certain CVS checkout. That may sound easier than it is, as the OpenBSD kernel is not self-contained. It belongs to a base system, and there may be incompatibilities with userland. The performance also depends on the compiler version, which changes over an OpenBSD development cycle. My solution installs an OpenBSD release and updates the kernel from source in fixed time steps. If there is a relevant userland change within a step, the necessary parts of the system are updated. This allows quick progress while compiling the kernel without rebuilding the whole system. With each new kernel the performance is measured.
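The fixed-time-step idea lends itself to a small sketch. The helper below is purely illustrative and not the talk's actual tooling; at each yielded timestamp the real system would check out, build, and benchmark a kernel:

```python
from datetime import datetime, timedelta

def checkout_steps(start, end, step_hours):
    """Yield CVS checkout timestamps at a fixed interval (invented helper;
    the real system builds and benchmarks a kernel at each step)."""
    t = start
    while t <= end:
        yield t
        t += timedelta(hours=step_hours)

# e.g. two days of history in 12-hour steps: five kernels to build and measure
steps = list(checkout_steps(datetime(2018, 10, 18), datetime(2018, 10, 20), 12))
```

Once a step shows the regression, the same loop can be rerun over that step with a smaller interval, which is the "change granularity" the abstract mentions.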
2019 · Berkeley Software Distribution (BSD)
53:35
Obser, Florian: DNS is easy. You type bsdcan.org in your browser's address bar, hit enter and you will be greeted by your favorite BSD conference's start page. Actually... We will start by giving a short introduction to DNS from the perspective of a client. We will explore:
- where to send questions: upstream resolvers learned from DHCP / router advertisements / static quad-x resolvers, vs. doing recursion ourselves
- what questions to ask: qname-minimization (yes or no)
- what to do with the answer: benefits and limitations of DNSSEC
We will then introduce unwind(8) - an always-running, validating DNS recursive nameserver, answering queries on localhost (127.0.0.1). We will explain its privilege-separated design and show that it is secure to run this daemon by default. We will then show how its novel approach of observing changes in network location and actively probing the quality of the local network improves the user experience of DNS resolution. The focus will be on laptops that move through many networks, some good, some bad, some outright hostile. We will compare unwind(8) to prior solutions and show how its design enables it to run without user intervention. While unwind(8) is developed on OpenBSD, it is intended to be portable. We will give pointers on a few OpenBSD-specific features.
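Qname minimization, one of the questions the talk raises, is easy to illustrate: instead of revealing the full name to every nameserver, a minimizing resolver exposes one label at a time, starting near the root. A simplified sketch (it ignores the query-type details of RFC 7816):

```python
def minimized_queries(fqdn):
    """Successive QNAMEs a qname-minimizing resolver asks, root-ward first."""
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[i:]) + "." for i in range(len(labels) - 1, -1, -1)]

minimized_queries("bsdcan.org")   # ["org.", "bsdcan.org."]
```

The root and .org servers thus only ever see the labels they need to delegate, which is the privacy benefit the "yes or no" debate is about.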
55:19
Beck, Bob: Unveil in OpenBSD. The unveil() system call was introduced last year in OpenBSD 6.4. Unveil has continued to evolve and be included in more programs in OpenBSD base, as well as some in the OpenBSD ports tree. Unveil has gained new features and semantics as a result, with some of the underlying implementation details in the kernel changing considerably. This talk discusses unveil, how it is used in programs, and the new changes to its semantics. We will also touch on how unveil() is implemented in the kernel to handle some of these changes.
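The semantics can be modeled in a few lines: each unveil() call exposes a path with a permission set, and anything outside the unveiled set is invisible to the process. This is a toy model of the behavior only, not the kernel implementation; the real C API is `unveil(path, permissions)`:

```python
import os.path

class UnveilView:
    """Toy model of unveil(2) semantics (not the kernel implementation):
    only unveiled paths are visible, with per-prefix permissions."""
    def __init__(self):
        self.rules = {}                       # path prefix -> set of "rwxc" perms

    def unveil(self, path, perms):
        self.rules[os.path.normpath(path)] = set(perms)

    def allowed(self, path, perm):
        path = os.path.normpath(path)
        best = None                           # longest matching unveiled prefix
        for prefix in self.rules:
            if path == prefix or path.startswith(prefix + "/"):
                if best is None or len(prefix) > len(best):
                    best = prefix
        return best is not None and perm in self.rules[best]

v = UnveilView()
v.unveil("/var/www", "r")                     # expose a read-only subtree
v.allowed("/var/www/index.html", "r")         # True
v.allowed("/etc/passwd", "r")                 # False: never unveiled
```

The longest-prefix rule is the interesting part: a broad unveil can be narrowed by a more specific one deeper in the tree.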
54:32
Provost, Kristof: Automatically testing the pf firewall, using network stack virtualisation. For fun and profit, or at least for the sake of fewer bugs. We're all convinced that automated tests are a good idea. For some applications (e.g. grep, awk, cc, ...) this is very straightforward. Others are a lot harder to test, for example firewalls. Typically testing firewalls takes two to three hosts: one to send traffic, the firewall test host, and one to receive traffic. This makes automated test orchestration complex and brittle. This in turn means that tests either don't get written, are difficult to write, and/or suffer random failures unrelated to issues in the firewall itself. Virtualisation has made this all somewhat easier, but it's still fiddly and difficult to make robust. It's also slow. FreeBSD 12 will ship with network stack virtualisation (known as VIMAGE or vnet). This is an important feature for many applications, one of which is automated network stack and firewall testing. As of FreeBSD 12, PF fully supports VIMAGE, allowing users to configure a firewall for each jail. This talk will introduce VIMAGE and show how it can be used to easily write firewall tests. If there's time, a few interesting bugs and their test cases will also be discussed.
56:57
Poffenberger, Aaron: Many of us spend days and weeks planning and implementing strategies for disaster recovery in the server room, whether corporate or personal. How many of us, however, can really say we're prepared to recover our primary computing system in the office ... or on the go? Or that it's secure enough that if it were lost, stolen, or seized we wouldn't lose important data, or lose sleep? In this talk we'll look at the various strategies available to us to ensure our laptops are secure, synchronized, backed up and ready to recover. Whether at the office or at home, we spend a lot of time ensuring our systems are secure and backed up, ready to recover in case of disaster. When it comes to our workstations, which are often laptops, we've often done little more than enable full-disk encryption and perhaps the occasional rsync for backups. Full-disk encryption is usually adequate protection against data loss due to opportunistic theft or casual loss. Ten years ago that might have been enough. But the times have changed. Today our laptops carry more than just our working files. They often include the entire corporate code repository, passwords and authentication keys, as well as personal files and data. Are our portable computers hardened against directed attack? Are we prepared for border-patrol agents or other state officials demanding passwords or unfettered access to our computing systems ... or online accounts? We're also more mobile. We expect to work when and where we want. How many of us can honestly say we could recover all -- or enough -- of our computing environment from bare metal in a day, half-day, or hour to be productive ... halfway across the globe? In this talk we'll look at the risks to the vast amounts of data we so casually carry around. We'll review strategies and techniques to reduce or mitigate those risks, as well as prepare our systems for easier recovery, at rest or on the go.
We'll look at:
- Risks
- Machine physical security
- Encryption
- Data synchronization options
- On-the-go backup solutions
- On-the-go recovery
- Preparing to cross international borders
12:37
Langille, Dan: There will be a few short announcements before and after the keynote. The opening session will have some magic giveaways of hardware. Please be there to win. Or perhaps not.
48:37
Bernstein, Ori: As of OpenBSD 6.4, VMD supports QEMU's QCOW2 disk format. This talk will go over what QCOW2 is and how it's implemented internally. Until recently, OpenBSD's VMD only supported raw disk images. Raw images are large, lack snapshot support, and are clunky overall. In OpenBSD 6.4, support for QCOW2 disk images landed. QCOW2 is a copy-on-write disk format that supports lazy growth and external snapshots, among other features. It does this by keeping a page-table-like cluster map. This keeps space use down, and allows a lot of nifty snapshotting features. But there's no such thing as a free lunch: QCOW2 images pay a price in both performance and robustness. In this talk, I'll give an overview of QCOW2 features before making a sharp turn into the details of the disk format, how to use it, and how I implemented it on OpenBSD.
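The page-table-like cluster map splits a guest disk offset into an L1 index, an L2 index, and an offset within a cluster, much like virtual-address translation. A sketch of the arithmetic, assuming the default 64 KiB clusters and 8-byte table entries (simplified; not the vmd code):

```python
def qcow2_indices(guest_offset, cluster_bits=16, entry_size=8):
    """Split a guest disk offset into (L1 index, L2 index, in-cluster offset),
    mirroring QCOW2's two-level cluster map. Illustrative sketch only."""
    cluster_size = 1 << cluster_bits             # 64 KiB with the default bits
    l2_entries = cluster_size // entry_size      # entries per L2 table
    cluster_no = guest_offset >> cluster_bits
    return (cluster_no // l2_entries,            # which L1 slot (which L2 table)
            cluster_no % l2_entries,             # which L2 slot (which cluster)
            guest_offset & (cluster_size - 1))   # offset inside the cluster

qcow2_indices(0x12345678)                        # → (0, 4660, 0x5678)
```

Because a cluster is only allocated when its L2 entry is first written, the image grows lazily; snapshots work by pointing a new image's unallocated entries at a backing file.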
42:39
McGrew, Zachary et al.: While NetBSD runs on 16 different types of CPU architectures, it did not run on RISC-V. In order to live up to the slogan “Of course it runs NetBSD”, the project of completing the port of the NetBSD kernel to the new RISC-V architecture was started. Adapting the kernel to take advantage of the new platform features while still maintaining NetBSD’s portability was challenging, but became an interesting problem to solve. While many issues were discovered in the process, the final outcome of booting a kernel on a new architecture was informative and rewarding.
44:27
Klemkow, Jan: OpenBSD is a general purpose operating system. It's common practice to use it as a router, firewall, or as a server for other back-end services. Fewer people use it as a desktop system. This talk shows a way to introduce OpenBSD as a desktop system in your company's network without changing the existing OS installation. The lecture shows how easy it is to set up and run a single server that serves diskless(8) OpenBSD desktop systems for a whole company network. Our company has used Unix-like systems for workstations since it was founded. We started with BSD/OS in the 90's and have been using Linux from the 2000's until today. With that experience and the infrastructure created around it over the years, we were able to test an OpenBSD setup on a production network without ruining the existing installations. Thus, every employee was able to test this experimental setup without any fear of data loss or waiting time for an installation. In this talk I will give a walk-through of our diskless(8) server setup. Also, I will show how we filled some gaps in the diskless(8) manpage. Further, I will give some tips and best practices on how to centralize the administration of such desktop systems. At the end, I will talk about my personal experience with an OpenBSD network-booted desktop system in daily usage over several months.
1:02:44
Looney, Jonathan: Netflix has built a CDN to distribute streaming media through most of the world. The content caches run a lightly customized version of the FreeBSD operating system's head branch. This presentation will describe Netflix's development process, and give some reflections on Netflix's experience using FreeBSD head in production. Netflix has built a CDN, called Open Connect, to distribute streaming media through most of the world. According to Sandvine, Netflix accounts for approximately 15% of all downstream traffic volume across the entire internet. The content caches in the Netflix CDN, also known as Open Connect Appliances (or, simply, OCAs), run a lightly customized version of FreeBSD. In some ways, this is nothing special: many products are based on an open-source operating system. However, Netflix does something slightly unusual, in that its OCA operating system code closely tracks the FreeBSD "head" branch (their development branch). In fact, a commit to the upstream FreeBSD development branch will usually be fully deployed across Netflix's CDN within 4-12 weeks. Although it might seem scary to run "development" code in production, we find that it works very well in practice. The FreeBSD development branch is usually quite stable. Additionally, we expect that we will find some bugs. However, we find that it is much better to find and fix those sooner, rather than later. Also, Netflix is committed to upstreaming most of our customizations that have general applicability. Tracking the upstream development branch keeps us in the best position to easily upstream our changes. In this presentation, Jonathan Looney will explain how Netflix uses the FreeBSD development branch code to help Netflix produce the robust operating system which supports the Netflix CDN. The presentation will describe Netflix's development process, and give some reflections on Netflix's experience using FreeBSD head in production.
51:38
Mens, Jan-Piet: They say MQTT is a PUB/SUB protocol for the Internet of Things, which it was originally designed for, but it's also well suited for monitoring machines and services, in other words, for our daily bread. We take a close look at the MQTT protocol and show you applications for MQTT, focusing on putting it to use for systems administration tasks, for monitoring, and for connecting microcontrollers to your server farms (strike that part when you show this to your boss). We'll divulge what a last will and testament has to do with MQTT and monitoring, and we'll also discuss some real-world integrations and applications of MQTT and Unix system utilities. Attendees will be able to understand how they can profit from using MQTT for monitoring programs, and they will benefit from being able to use the protocol for lightweight messaging. Be warned: there will be blinkenlights!
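Monitoring with MQTT leans heavily on hierarchical topic names and the two subscription wildcards: `+` matches exactly one level and `#` matches everything below. A simplified matcher, as a sketch of the rule (it ignores `$`-prefixed system topics and shared subscriptions):

```python
def topic_matches(filter_, topic):
    """MQTT topic-filter matching: '+' matches one level, '#' the remainder.
    Simplified sketch of the spec's matching rules."""
    f, t = filter_.split("/"), topic.split("/")
    for i, part in enumerate(f):
        if part == "#":                  # multi-level wildcard, must be last
            return True
        if i >= len(t) or (part != "+" and part != t[i]):
            return False
    return len(f) == len(t)

topic_matches("servers/+/load", "servers/web1/load")    # → True
topic_matches("servers/#", "servers/web1/disk/sda")     # → True
```

A monitoring dashboard typically subscribes to something like `servers/#`, while each host publishes under its own subtree; the broker's last-will mechanism then flags hosts that disappear.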
58:31
Hendryx-Parker, Calvin: In this day and age, using hand-crafted servers is quickly becoming a losing proposition. Debugging a Python application that runs on custom-made servers gets extremely tricky, and since apps no longer run on a single server, releases become increasingly resource intensive. It is therefore vital to have a process to control multiple FreeBSD servers simultaneously in the cloud from a single command. This talk goes through real-life, practical examples of rolling out orchestration and configuration management projects using Salt on FreeBSD. After going over the pains of manual deployments, we will introduce the open source tools SaltStack and Cloud Custodian to automate the management of FreeBSD servers in a public cloud. We will then walk attendees through a real case study to give a practical understanding of what rolling out orchestration and configuration management means. We will end with hard-learned tips to guard against common pitfalls. SaltStack is an event-driven orchestration and configuration management platform written in Python. It lets you run a command across thousands of servers at once through its remote execution function. Its master can listen to events such as user logins or CPU usage thresholds and automatically have minions take corrective actions. Cloud Custodian is an event-driven compliance platform that was originally developed in Python by Capital One. It allows an organization to ensure compliance in its cloud infrastructure via policies. Cloud Custodian can make sure the correct encryption keys and ciphers are used. It can ensure new applications are always built on the latest machine images. Cloud Custodian can dramatically reduce costs by automatically removing unused cloud resources. With a mandate to migrate all of their applications to the AWS public cloud, the client had to find a way to move their highly custom website hosted on FreeBSD.
We used Salt to build the new cloud infrastructure using the APIs provided by AWS. It allowed Notre Dame to deploy new environments for testing or, in an emergency, spin up instances in another region that identically match the current production environment. We also used Salt to power the release process so that the web team can release to the multiple servers that make up the environment with a single command orchestrated from the Salt master. They do not have to have deep knowledge of the servers and command line to release changes to the site. Upfront choices about the operating system or distribution are critical to making this deployment process simpler. The natural choice for deploying onto AWS might be to use Amazon Linux, but it turns out it has an older, more limited package manager. If you are expecting to use newer features of certain software such as HAProxy, you will need to ensure your distribution supports it out of the box. Managing extra builds and compiling from source is a short-term solution, as it will add the overhead of maintaining the environment.
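As a taste of what a Salt-managed setup looks like, here is a minimal state written with Salt's pure-Python (`#!py`) renderer, where `run()` returns ordinary highstate data. The package and service names are illustrative, not taken from the case study:

```python
#!py
# A Salt state using the pure-Python renderer: run() returns highstate data.
# Package/service names below are invented for illustration.

def run():
    return {
        "haproxy": {
            "pkg.installed": [],
        },
        "haproxy_service": {
            "service.running": [
                {"name": "haproxy"},
                {"require": [{"pkg": "haproxy"}]},
            ],
        },
    }
```

Applied from the master with a single `state.apply`, the same declaration converges every targeted minion, which is what makes one-command releases across many FreeBSD servers practical.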
1:01:01
Stanek, Michal et al.: The talk describes recent security additions to the FreeBSD boot process. TPM 2.0 devices are now supported in FreeBSD. They are most often referred to in the context of measured boot, i.e. secure measurements and attestation of all images in the boot chain. The TPM 2.0 specification defines versatile HSM devices which can also strengthen the security of various other parts of your system. We will describe the basic features of TPM and mention some caveats and shortcomings which may have contributed to its limited adoption. The presentation will include practical TPM use cases such as hardening Strongswan IPSec tunnels by performing IKE-related cryptographic operations within the TPM, using private keys which never leave the device. Another example will be sealing secrets in TPM NVRAM with specific boot measurements (hashes) stored in PCR registers so that the secrets are locked in to a specific boot chain.
The second part of the talk will describe UEFI Secure Boot support in the FreeBSD loader and kernel. The loader is now able to parse UEFI databases of keys and certificates which are used to verify a signed FreeBSD kernel binary, using BearSSL as the cryptographic backend.
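The "locked in to a specific boot chain" property comes from how PCRs behave: a PCR can only be extended, never set directly, so its final value encodes the exact sequence of measurements. A sketch of the SHA-256 extend operation (simplified from the TPM 2.0 model; stage names are invented):

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM-style extend: PCR_new = H(PCR_old || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)                           # SHA-256 PCRs start zeroed
for stage in [b"loader", b"kernel"]:      # measure each boot stage in order
    pcr = pcr_extend(pcr, stage)
# any change to a stage, or to the order, yields a different final PCR value,
# so a secret sealed to this value only unseals on this exact boot chain
```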
36:27
Testart, Jason: A story about building a capture-the-flag (CTF) penetration testing challenge using OpenBSD for the Hack The Box platform, where most of the challenges are either based on Linux or Windows. Hack The Box is an online platform allowing you to test your penetration testing skills and exchange ideas and methodologies with other members of similar interests. It contains several challenges that are constantly updated. Some of them simulate real-world scenarios and some of them lean more towards a CTF style of challenge. After joining Hack The Box and completing several challenges, I noticed some technologies were missing or underrepresented among the challenges available. In this talk, I will go over the process of developing a CTF challenge, the learning objectives for participants, what went well, what did not go well, and the virtues of OpenBSD as an OS for a CTF challenge. I will also walk through the challenge itself, share what I learned building it and the feedback I received, and identify other OpenBSD features for future challenges.
50:17
Turner, Andrew: Modern C compilers include support for tools to help find bugs in code. These tools, the sanitizers, add instrumentation to the generated code that can be compiled into the kernel to help kernel developers. In early 2018 I became interested in using these in the FreeBSD kernel to assist bug finding and debugging. This talk will discuss the current state of kernel sanitizers on FreeBSD. This will include the kernel coverage sanitizer that can be used with fuzzers, the undefined behaviour sanitizer to warn when code relies on undefined behaviour, and the address sanitizer to detect out-of-bounds accesses. It will also discuss future work to port new sanitizers and to use hardware-based acceleration. The main fuzzer to use these sanitizers is the syzkaller fuzzer from Google. I will talk about my experiences using this, bugs it has found, and future work to port other fuzzers to work with the kernel.
45:03
Phillips, D. Scott: Although I started my hacking as a college student on FreeBSD, I soon moved to Linux and remained a professional Linux developer for many years, a majority of the time as a graphics driver developer for i915. Returning now to FreeBSD development, later in my career, I bring a rare perspective and an opportunity to highlight what's done well and what could use some improvement. With all due respect, many of the FreeBSD developers are insular folks who've worked in the project for their entire careers. It's not surprising, given their passion, that they haven't had much time to stick their heads up and see how other projects do things. The obvious comparison is Linux, which from a user adoption perspective has been far more successful. I don't intend to propose how to make FreeBSD as successful as Linux, but I do hope to shed some light on how to make the FreeBSD operating system more appealing to both developers and users. I'll be using my experience as a Linux graphics driver developer and active Linux kernel contributor to highlight specific hurdles (as well as pleasantries) I've encountered in my switch to full-time FreeBSD development. While some of these hurdles may seem trivial to grizzled FreeBSD veterans, it's often death by a thousand cuts that dissuades fresh faces.
51:07
Tuffli, Chuck: Emulated devices in FreeBSD's bhyve hypervisor exist to provide compatibility with older operating systems. But with a few small changes, we can take a Dr. Frankenstein approach and allow the user to dynamically modify their behavior to create something new and beautiful. Just kidding: the resulting havoc this wreaks on the guest operating system will bring out the villagers with their torches and pitchforks. This talk describes why you would want to do this, and an implementation using the NVMe emulation. There is a programmer's adage that says untested code will not work entirely the way you expect. But how do you test code which handles hardware errors? This presupposes having hardware that misbehaves in predictable and repeatable ways. Since this sort of unicorn does not exist, programmers will often create error injection hooks in the code. But this approach is problematic, as the hooks tend to be static, work at cross-purposes with the actual code, and do not have access to user-space libraries and scripting languages that can help simplify the task. FreeBSD's bhyve hypervisor and its emulated devices offer an alternative approach. To the guest software (a.k.a. the operating system), these appear to be actual hardware when in reality they are small user-space programs. By adding hooks to the device emulation code, we can allow user-defined plug-ins to modify the device behavior and allow better testing of device drivers and the code which relies on them. This talk will describe an experimental plug-in framework for bhyve's NVMe device emulation, example scripts, and the havoc that they can cause.
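The plug-in idea can be sketched without any bhyve internals: the emulation consults registered hooks before completing an I/O, and a hook may substitute an error status. Everything below is invented for illustration; the real experimental framework hooks the NVMe emulation in C:

```python
class EmulatedDisk:
    """Toy user-space device emulation with plug-in hooks (names invented;
    the real framework extends bhyve's NVMe emulation)."""
    def __init__(self):
        self.hooks = []           # plug-ins registered by the user
        self.data = {}            # lba -> block contents

    def register(self, hook):
        self.hooks.append(hook)

    def read(self, lba):
        for hook in self.hooks:   # a hook may substitute an error completion
            status = hook("read", lba)
            if status is not None:
                return status
        return ("OK", self.data.get(lba, b"\x00"))

def fail_lba_7(op, lba):
    """Plug-in: inject a media error on reads of LBA 7."""
    return ("MEDIA_ERROR", None) if op == "read" and lba == 7 else None

disk = EmulatedDisk()
disk.register(fail_lba_7)
disk.read(7)                      # → ("MEDIA_ERROR", None)
```

Because the hook runs in user space, it can be driven by scripts, making the "misbehaving hardware" predictable and repeatable for driver tests.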
59:27
Zeeb, Björn Alexander: In 1998 the KAME project started to provide an IPv6/IPsec reference implementation for *BSD. Where have we gotten to with IPv6 in FreeBSD? The idea for this talk originated with me spotting my badge from BSDCan 2009 hanging on my office wall. For starters we'll go through the time from around back then until today, showing how the IPv6 stack and related features evolved. The main course will focus on a bit of advocacy (hey, do you still need to be convinced to do IPv6?), a big pile of what you can do with all the knobs and features (here's to all sysadmins), and will be completed with some developer's talk garnish (in case some things need improving or you want to contribute or do some research). For dessert you'll learn what else is on the menu and what's currently cooking in FreeBSD's IPv6 kitchen. Hungry for more bits? Then join for the bigger addresses and crunchy colons and learn what that 10-year-old badge said.
55:09
Joyner, Eric et al.: We needed to create a new driver for a new product, but we wanted to develop it in a way that reduced the number of bugs and would keep the code base maintainable in the future. We tried Test-Driven Development. Testing a network driver on real hardware is painful: bugs can result in essential subsystems behaving strangely or cause the kernel to panic, and regardless of the presence of bugs, physical hardware needs to spend time being configured, rebooting, or sending/receiving network traffic. With the number and complexity of projects our team was tasked with increasing, we needed to find a way to reduce the amount of time spent fixing bugs or verifying that features work. So, we turned to the TDD (test-driven development) methodology, which promises that more bugs can be discovered and fixed before they make it to testers (and users), while improving code maintainability. This talk focuses on how we made the CppUTest framework work with and compile our FreeBSD driver code, other frameworks and utilities we needed, the problems we encountered trying to get kernel source code to compile in C++ and run in userspace, and future plans and areas that we want to investigate with TDD.
47:59
Zaborski, Mariusz: If you are buying an appliance for your corporate network, you probably expect that it meets the highest standards, right? When you buy a security appliance the expectations are even higher. Let’s discuss why FreeBSD is the ideal operating system for building such devices. FreeBSD is one of the most popular Unix-like operating systems, though there are not many appliances built around it. The situation looks even more pessimistic once we consider security appliances. The speaker, in his daily job, has spent the last four years building the most advanced PAM solution in the world, which is based on the FreeBSD operating system. In this presentation we will discuss which - and, more importantly, how - FreeBSD features can be used to build appliances. The presentation will only cover features that are available in the base system, not third-party programs. The speaker looks forward to presenting all the nuances and best practices of using FreeBSD as the main component of an appliance. One of the major reasons for using FreeBSD is its best-in-class support for ZFS and all its features. Another reason is the GELI and GBDE encryption methods, which have never been breached. Furthermore, we also have Capsicum, which helps with compartmentalization. There are a host of other benefits of using FreeBSD which we will discuss in the presentation.
1:00:15
Davis, Brooks et al.: Memory safety bugs such as buffer overflows are an ongoing source of security vulnerabilities. CheriABI is a new process model for FreeBSD on the Capability Hardware Enhanced RISC Instructions (CHERI) hardware platform which eliminates the vast majority of buffer overflows and significantly increases the difficulty of control-flow attacks such as return-oriented programming. Our protections cover programs, the C run-time environment including the dynamic linker, and kernel access to user memory. We have ported virtually all of the FreeBSD user space to this platform, demonstrating that memory safety can be fitted to existing C software.
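The core mechanism is easy to model: a CHERI capability carries bounds alongside the address, and the hardware traps on any dereference outside them. A toy model of that behavior (the real checks happen in hardware on every load and store, with no software overhead like this class):

```python
class Capability:
    """Toy model of a CHERI capability: an address plus hardware-checked bounds."""
    def __init__(self, mem, base, length):
        self.mem, self.base, self.length = mem, base, length

    def load(self, offset):
        if not 0 <= offset < self.length:
            # on CHERI hardware this is a trap, not a silent overread
            raise MemoryError("capability bounds violation")
        return self.mem[self.base + offset]

mem = bytearray(b"AAAABBBB")      # pretend two adjacent allocations
buf = Capability(mem, 0, 4)       # a pointer covering only the first 4 bytes
buf.load(3)                       # in bounds: fine
# buf.load(6) would trap instead of silently reading the neighboring object
```

Turning every overread into a deterministic trap is what eliminates the classic buffer-overflow exploit primitive.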
41:50
Lucas, Michael W.: Based on the book "FreeBSD Mastery: Jails". Jails started as a limited virtualization system, but over the last two years they've become more and more powerful. This talk takes you through what modern jails can do, discarding the limits of what they were and demonstrating what they can be today. We'll cover jails using the base system and the new iocage toolkit, discussing:
- jails as VMs
- configuring the jail host
- properties and parameters
- jail management
- packages and upgrades
- base jails
- virtual networking with VNET
- firewalls in jails
- jails in jails
- resource restrictions
You'll leave with an understanding of what modern jails can and cannot do, and hints for future development.
35:10
Rashish, Maya: Starting out with no knowledge of Go or ARM64, how do you begin porting a language? Go might be one of the hardest languages to port. Having done that, I came back to tell the tale of how. We'll cover:
- unusual characteristics of Go
- rules for writing the 400 lines of assembly code necessary for a port
- debugging in the absence of a functional debugger
The talk will describe how to start on such a large project, how to find your way around a large unfamiliar codebase, what a calling convention is, how to debug mistakes in your Go port, and some notes about why Go chose to make such strange choices.
50:20
Buehler, Philipp: The sysadmin view of virtualization usually starts at a hypervisor running some kind of "image". Packer is a framework to create such an image using various host and virtualized operating systems and adding some more bolts. This talk shows the efforts and pitfalls of building a plugin for Packer using the VMM framework on OpenBSD. Some details go down the rabbit hole (or reduce it) to produce a Go binary runnable as a plugin. For ease of installation, we show how to package this as an OpenBSD port. On top of that, a bigger picture is provided of how to produce configurable OpenBSD images "at scale" by using the above accomplishments.
52:48
Armstrong, Jeffrey: Do you have a spare mainframe lying around? In this talk, we'll take a look at installing, running, and attempting to use NetBSD on actual VAXen. Trying to perform modern computing tasks on this aging architecture is becoming increasingly infeasible with the rise of encrypted connections as the norm. We'll look at what is actually possible on your spare VAX. NetBSD still supports the VAX architecture from Digital Equipment Corporation, and adventurous users can install the latest NetBSD release on these machines. Would you really want to, though? We'll consider two VAX machines, a relatively common VAXstation 3100 (which will be present) and a far more powerful VAX 4000/200 QBus machine (which won't fit in my car), as example systems. Starting with installation, we'll look at the challenges these machines present simply due to their limited processing power. While NetBSD does support these systems, using pkgsrc to get your software can be tedious. Setting up a VAX for file sharing at home? Prepare to wait 72 hours or more while Samba and its dependencies compile. Building almost anything takes days, and we'll just ignore graphical programs outright. With the modern Internet shifting to encrypted connections for everything, this dated hardware again presents a challenge. Trying to ssh to another server from your VAX? On a VAX 4000, your chances of finishing the handshake before a timeout are 50% at best. Web browsing from the console is painfully slow with every HTTPS connection encountered. Even the first boot after installation is an exercise in patience as SSH keys are generated. The VAX does still offer up some fun and questionable practicality for those interested, though. With some lowered expectations, anyone can navigate the modern computing world on their spare mainframe.
2019Berkeley Software Distribution (BSD)
45:58
2Carabas, MihaiInterrupts are used in modern systems to signal events that require immediate action. Current CPUs implement interrupts using some type of controller circuit. As such, the ARM architecture uses a system called the Generic Interrupt Controller to manage interrupts. In order for virtualization to be possible on ARM hardware, a Virtual Generic Interrupt Controller needs to be present to manage interrupts for guest operating systems. This research project describes implementing such a system for an ARMv7 processor running the FreeBSD hypervisor - bhyve. Also, to have a fully responsive guest, we present the timer virtualization implementation. In the past, handling of external events - such as keyboard input - was done by polling the respective device in order to check whether any processing was required. This method was inefficient due to the difficulty of balancing the promptness of responding to an event versus the time spent in polling. Furthermore, the method did not scale well with the number of sources of possible events. The alternative developed in order to overcome these shortcomings consists of using an interrupt system. In such a system, an interrupt controller is connected to both the CPU and any devices that may generate events that require handling by the operating system. Whenever such an event occurs, the interrupt controller signals the CPU that an interrupt has occurred and sends a number which identifies the interrupt. In response, the processor saves the current execution state and jumps to the interrupt handler associated with the identifier it has received. Actual implementation details depend on hardware. However, all platforms use some type of interrupt controller in order to manage interrupts. Intel processors use an interrupt controller known as the Advanced Programmable Interrupt Controller (APIC). ARM uses a different system, called the Generic Interrupt Controller, which is further discussed in this paper.
Timed events are a core element of many software systems. Their utility ranges from pre-empting processes while in kernel space to scheduling events in high-level programming in user space. It is clear that these types of functionality are also desirable when running software in a virtualized environment. The need for keeping time has brought about the introduction of new timer hardware, such as the Programmable Interval Timer (PIT), the Real Time Clock (RTC), the Advanced Configuration and Power Interface (ACPI) and the High Precision Event Timer (HPET), each with their own utility.
2019Berkeley Software Distribution (BSD)
52:03
3Dzonsons, KristapsOpenBSD has as much an association with hiking as it does with security and stability. For those of us spending more time below the waterline than above, how can we leverage our favourite operating system? For those of us spending more time below sea-level than above, OpenBSD---and indeed any open source operating system---is a perfect fit in the infrastructure required by diving: photo/video editing and storage, dive planning, and dive computer analysis. In this image-rich talk, I'll discuss how OpenBSD (and open source in general) fits into the fields of free and SCUBA diving. My talk will focus on dive planning, which is integral to technical diving; dive computer analysis, integral to free and technical diving; and most of all, underwater photography, which has its place throughout. I'll pay lip service to videography, but that doesn't quite intersect with my skills. All of the images used throughout the talk---from humpback whales in the Pacific to manta rays in the Indian Ocean---were produced on a fully open source chain of components. As were all of the dives themselves, free and SCUBA, backed by an open source toolchain for planning and analysis. Beyond discussing the tools available, I'll also discuss how open source is important to the diving ecosystem itself, from hardware to nitty-gritty decompression algorithms. I'll update the EuroBSDCon 2018 version with more discussion on practical usage of tools (e.g., darktable) and new developments (e.g., the availability of Subsurface as a port, colour correction in kdenlive, etc.).
2019Berkeley Software Distribution (BSD)
1:02:53
2Dexter, Michael"No OS left behind" would be an excellent mantra in the Open Source community but the reality is that there are "Tier 1" and "Tier >1" operating systems, file systems and other components throughout the Open Source ecosystem. The OpenZFS file system provides an excellent example and opportunity: a fundamental computing component that enjoys every positive aspect of Open Source, but is relegated to "non-tier-1" status on platforms that are not explicitly essential to commercial endeavors. These "not-yet-essential" platforms include NetBSD and Windows, not to mention non-Intel architectures such as ARM, PowerPC, and FreeBSD/Sparc64. This talk will explore exactly how a POSIX Unix environment and specific utilities can provide a common testing environment for OpenZFS on all supported OpenZFS platforms, including Microsoft Windows. This "parallel" testing approach dictates that all targeted platforms are tested simultaneously on identical hardware for instantaneous comparative results. Two additional dimensions arise from this approach: the testing of FreeBSD across multiple versions as far back as 5.0 and even older, plus the testing of non-OpenZFS code on each supported OpenZFS platform. FreeBSD was chosen as the laboratory host operating system for its unique ability to institutionally provide "Jail" container support for previous versions of FreeBSD, and bhyve hypervisor support for not only FreeBSD releases but non-FreeBSD operating systems. Implementation details include the use of Jail and bhyve for system containment and virtualization, and OpenSSH, Cygwin, and the "rtools" (net/bsdrcmds) utilities for remote system control. Unexpected dependencies include a global effort to rebuild FreeBSD's release history; parallel efforts to implement BSD-licensed S.M.A.R.T., Git and rsync utilities; cross-platform data generation tools; and continuous validation of "guest" operating systems on the bhyve hypervisor.
Urgency factors include the transition of non-Illumos/non-Linux OpenZFS platforms from Illumos to "ZFS on Linux" as their upstream source of truth, warranting expanded testing of parallel code bases of formerly-singular OpenZFS ports. Motivations for this talk include encounters with countless operating system and file system regressions in a commercial support environment, with a particular focus on performance "cliffs" over which acceptable performance travels to an unfortunate collision with lower, unacceptable levels.
2019Berkeley Software Distribution (BSD)
59:33
10Sperling, StefanThis talk presents the results of a use case study I performed together with my friend Maurice throughout the first quarter of 2019. Maurice is an actor who survived a brain hemorrhage in 1996 and has since lived with severe physical and cognitive disabilities. His ability to use computers is restricted in many ways. For example, while Maurice can read text just fine, he is unable to fluently spell words and he can only use the fingers on one of his hands. Computers Maurice has at his disposal are a PC and a netbook running Windows in standard configuration. He depends on friends and caretakers to perform basic tasks such as writing email messages. Security of these systems is of course on the lower end of the spectrum. Fooling Maurice into clicking a wrong button and installing malware is not hard. Maurice is certainly not a typical OpenBSD user, but we wanted to find out to which degree we could shape an OpenBSD system towards his needs. We had to find suitable hardware with good device support, configure the base system, a desktop environment, typical desktop programs, system backup and restore, and allow for secure remote system administration. We also looked into porting accessibility software, such as open source speech-to-text systems, to OpenBSD. Any resulting changes and enhancements to both the base system and the ports tree were submitted back to the OpenBSD project. Maurice runs -current!
2019Berkeley Software Distribution (BSD)
1:06:42
5Jude, AllanAn overview of the forthcoming changes to the OpenZFS Project, and how FreeBSD will interact with the OpenZFS Project. This talk will discuss: how the OpenZFS project has changed; new problems as ZFS has matured (deprecation policy); how the OpenZFS project is working to reduce the differences across platforms (command line switches, NFS interoperability); interoperability improvements (feature flag 'compatibility' groups); new procedures to prevent divergence and coordinate development across platforms (reserving flags, wider discussion before names for features/flags are decided); and the monthly ZFS Leadership Call. It will then switch gears and cover FreeBSD-specific issues: the switch to ZoL as upstream; why we are making the change; what we get out of it; how it is better for all of OpenZFS; a status report; ZoL is OpenZFS, not Linux; there is no LinuxKPI in ZFS (kill the FUD); and what OpenZFS has done for me lately. Then an overview of upcoming changes and features in ZFS.
2019Berkeley Software Distribution (BSD)
40:04
6Vadot, EmmanuelIn this talk I will describe the steps needed to write a DRM (as in Direct Rendering Manager) driver on FreeBSD for an arm64 board. DRM is the de facto standard for graphics on a modern system using the standard software stack like Xorg or Wayland. While FreeBSD amd64 has decent support for DRM drivers, arm and arm64 are way behind: except on some systems like the Raspberry Pi, where a simple framebuffer is available, or NVIDIA Tegra, where a DRM driver is available, the only way to have graphics is to use EFIFB. Both EFIFB and the simple framebuffer on the RPI have severe limitations: you cannot have 2D acceleration or a multi-screen setup, or even change the resolution at runtime. Writing a DRM driver can be hard and scary, especially when it is the first one you are writing and you don't know the DRM subsystem API. During this talk I'll explain what a DRM driver consists of: framebuffers, planes, CRTCs and encoders won't have any more secrets for you. You will see why I chose to write my first DRM driver for the Allwinner A64 (an arm64 system on chip present on the Pine64 board and on the Pinebook, a cheap laptop), what problems I had and how I solved them. Of course, a driver-related talk would not be complete without a tips-and-tricks part that will help you get graphics shown on your screen more quickly.
2019Berkeley Software Distribution (BSD)
25:11
20Buehler, TheoThe TLS 1.3 handshake is the protocol used for negotiating a TLS 1.3 connection between a client and a server. During the handshake the configuration for the session is agreed upon, ephemeral secrets are exchanged and the server is authenticated. This protocol is encoded in a state machine. After a general discussion of TLS and in particular a comparison of TLS 1.2 and TLS 1.3, this talk will review the TLS 1.3 handshake state machine and discuss its implementation in LibreSSL. Benefits and drawbacks of both the handshake protocol and LibreSSL's implementation will be discussed. We will also elaborate on the way we verify and guarantee our implementation's correctness using regression testing and other methods.
2019Berkeley Software Distribution (BSD)
51:10
16Dengg, AlbertNowadays, container technologies like Docker are the first thing you hear when the question comes up of how to deploy and manage (micro)services. However, FreeBSD already has many features out of the box that can be used to implement much of what is wanted, but there is still a need for glue code to integrate it all into a complete solution. Ansible is a powerful configuration automation and management system with a relatively low set of initial requirements. It uses mostly Python and ssh, the latter of which is needed in most cases anyway to remotely manage the systems. This means that not only is the overhead comparatively low, it also does not have too many dependencies that will break over time with new software releases. Utilizing the flexible template engine and the already available modules to manage features like ZFS, the firewall and jail.conf, we will be able to automatically deploy a system that includes: creating read-only templates for service jails; configuring the network; configuring the firewall; creating (multiple) running service jails from these templates; duplicating jails; and scripting the upgrade and restart of the base and service jails. With that, the talk will show how to host multiple managed, partly customized applications for multiple distinct user groups with minimal overhead for managing updates and setting up new instances.
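A deployment of the kind the abstract lists could look roughly like the following Ansible playbook fragment. Dataset names, paths, and the template file are hypothetical; this is a sketch in the spirit of the talk, not its actual code.

```yaml
# Hypothetical playbook fragment: create a read-only ZFS dataset to hold
# jail templates, then render jail.conf for the service jails.
- hosts: jailhosts
  tasks:
    - name: Create base dataset for jail templates
      community.general.zfs:
        name: zroot/jails/templates/base
        state: present
        extra_zfs_properties:
          readonly: "on"

    - name: Render jail.conf for all service jails
      ansible.builtin.template:
        src: jail.conf.j2
        dest: /etc/jail.conf
```

Because Ansible only needs ssh and Python on the managed host, the same playbook can drive many jail hosts without installing an agent on them.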
2019Berkeley Software Distribution (BSD)