Into the Core…OS

At some point in March I received an email stating that I had roughly $40 in DigitalOcean credit (referral link – get $10 credit!) which was going to expire on the first of May. I wanted to do something cool, learn new things, and use up a large portion of my remaining credits.

My initial plan was to spin up a CoreOS cluster, as it is something that I have had my eye on. Once it was running, I would get Kubernetes going for management and orchestration of deployed applications. During this journey I learned a lot of new things, had a lot of fun, and even got a cool cluster running. However, I never made it to the end goal of Kubernetes in time for my presentation (and this post). Looking back, I don’t consider this a defeat: I learned a lot of new concepts along the way, and I am certain I will put that knowledge to use in the future!

CoreOS: Linux for massive deployments

CoreOS is a Linux distribution intended to be deployed in large configurations. It can run on bare metal or as virtual instances on almost every major cloud provider. For my experiments I chose the alpha release channel of CoreOS. It is nominally less stable, but bleeding-edge was fine for my purposes, and in practice it proved very stable anyway.

There are a few things that really intrigued me about CoreOS. The two most interesting from an operating systems perspective were cloud-config and the update mechanism.

cloud-config

Cloud-init is an initialization system that reads a file known as a cloud-config, which can be passed into a booting instance. It arrives as user data on AWS and DigitalOcean, or as a mounted drive or kernel options on a bare-metal setup. It provides boot-time information about how to join a given instance to the cluster, which services to start, and overall system configuration.

I really liked the ability to specify this once for my entire cluster. My architecture allowed for this because each node in the cluster was an etcd node (more on etcd later!). It made adding and removing instances easy and seamless.

Cloud-config is a YAML document that describes the basic configuration of the system. While I was learning about cloud-init, an improved alternative called Ignition was announced. Ignition uses JSON instead of YAML and executes different parts of the script at different times during boot. Since cloud-config was simpler and I was already diving in, I stuck with it.

Example cloud-config file
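
A sketch of the file I used, reconstructed from the standard etcd2 setup (the discovery token is a placeholder; $private_ipv4 is substituted by CoreOS at boot on providers like DigitalOcean):

```yaml
#cloud-config

coreos:
  etcd2:
    # generate a fresh token per cluster at https://discovery.etcd.io/new?size=3
    discovery: https://discovery.etcd.io/<token>
    # client traffic on 2379, peer traffic on 2380
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```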

This cloud-config tells the booting instance to configure etcd to use the provided discovery URL, as well as which IPs and ports to listen or connect on. It is also responsible for starting two services: etcd2 and fleet.

Cloud-config can also specify other options, such as authorized SSH public keys or alternate user accounts. The full documentation is available at the CoreOS cloud-config page.

Updates

Updates in CoreOS are handled by the same updater as ChromeOS. The disk holds two root filesystem images (“A” and “B”), and the active root filesystem is mounted read-only. Updated root filesystems, which contain all of the operating system’s software, are downloaded to the inactive image (if your instance booted from root A, the update lives on B, for example).

Once the update is downloaded, the cluster coordinates a rolling restart of nodes, and each reboot switches to the newly updated partition. This makes it much easier to keep a cluster of hundreds of nodes up to date.
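
The reboot behavior is selectable per node via cloud-config. A minimal sketch, assuming the etcd-lock strategy (each node takes a cluster-wide lock in etcd before it reboots, so only one node restarts at a time):

```yaml
#cloud-config

coreos:
  update:
    # other strategies: reboot, best-effort, off
    reboot-strategy: etcd-lock
```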

Core Components

CoreOS is made up of a number of projects that all work together to make everything possible. These components are etcd, fleet, flannel, and docker/rkt.

etcd

Etcd is a distributed key-value store. It can run as a single source of truth with read proxies, or as a clustered group of nodes that uses a consensus protocol (Raft) to keep values consistent no matter where they are written.

Uses of etcd include configuring applications, service discovery, and managing the state of the overall cluster. You can even subscribe to a key in etcd and handle an event when it changes. While researching it I found tools that generate config files and restart services when a key is updated. Another use I looked into was the Vulcan load balancer, which uses etcd as a backend for HTTP/HTTPS load balancing.
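
As a quick sketch of those basic operations (the key name here is hypothetical), the etcd2-era etcdctl exposes reads, writes, and a blocking watch:

```sh
# write a value from any node in the cluster
etcdctl set /services/web/host 10.0.0.2

# read it back from any other node
etcdctl get /services/web/host

# block until the key changes, then print the new value
etcdctl watch /services/web/host
```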

Docker and Rkt

On CoreOS everything runs in a container, which means that no software is actually installed on the base system. Because of this, the base system is identical across every CoreOS deployment anywhere. It also makes running applications much easier: your cluster does not require any specific configuration to support them. You can run a Node webapp just as easily as a Java or Rails app.

Being a big fan of Docker, I chose to use it during my experiments. Rkt, pronounced “rocket”, is a container format and runtime built for and by the CoreOS project. It can even run existing Docker images, so it should be easy to try out.
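
Running a container on a CoreOS node looks like Docker anywhere else. A minimal sketch using the coreos/example image discussed below (the published port is an assumption):

```sh
# fetch and run the example webapp, exposing its web port on the host
docker run -d --name example -p 80:80 coreos/example
```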

fleet

Fleet is a distributed init system backed by etcd. It schedules systemd unit files to run across the cluster.

Besides simply allowing you to run a service unit file, fleet lets you ensure that a specific number of instances of a given service are running across the cluster. You can ensure that instances of a service land on different nodes, as well as target specific types of nodes (based on metadata).

Example unit file
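
A sketch of what example@.service looked like (the image’s port and the pre-start cleanup details are assumptions; the leading “-” tells systemd to ignore failures from those commands):

```ini
[Unit]
Description=Example web app
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
# remove any leftover container from a previous run of this instance
ExecStartPre=-/usr/bin/docker kill example.%i
ExecStartPre=-/usr/bin/docker rm example.%i
ExecStartPre=/usr/bin/docker pull coreos/example
ExecStart=/usr/bin/docker run --name example.%i -p 80:80 coreos/example
ExecStop=/usr/bin/docker stop example.%i

[X-Fleet]
# never schedule two instances of this template on the same node
Conflicts=example@*.service
```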

This unit file runs the Docker image coreos/example. It is just a static web page that runs in a container, and is great for testing things on your cluster.

The @ in the service file’s filename means that this is a template unit. You can launch multiple instances by specifying a number following the @: fleetctl start example@1.service will start an instance of this on the cluster. To add a second instance you would run fleetctl start example@2.service.

The [X-Fleet] section contains fleet-specific extensions to the unit file format. In this file it specifies that our example conflicts with other instances of itself, which prevents multiple instances from being scheduled on the same node.
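
Putting it together, a typical session with the template above looks something like this sketch:

```sh
# make the template known to the cluster, then launch two instances
fleetctl submit example@.service
fleetctl start example@1.service
fleetctl start example@2.service

# show each unit and the machine fleet scheduled it on
fleetctl list-units
```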

Flannel

Flannel is a container networking layer. It uses etcd to coordinate an overlay network that gives containers routable addresses across the cluster. This makes service discovery and availability pretty easy to get up and running. During my experiments I set flannel up, but ended up not using it in my demos or examples.
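
Setting flannel up is mostly a matter of writing the overlay network’s configuration into etcd before flanneld starts. A sketch, with an assumed address range:

```sh
# flanneld reads its network config from this well-known etcd key
etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
```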

CoreOS deployment/cluster architecture

A CoreOS deployment is flexible and can be architected in many different ways. A deployment may be as simple as a single node running as a virtual machine on a laptop for testing containers. A small deployment of 5-6 machines/instances may all serve as etcd nodes. Finally, a larger deployment may use a small cluster of 3-9 etcd nodes which serve hundreds of “worker” instances that do nothing but run containers and have an etcd proxy running.
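
In that last topology the workers join through a local etcd proxy rather than as full members. A sketch of such a worker’s cloud-config (same placeholder discovery token as before):

```yaml
#cloud-config

coreos:
  etcd2:
    # proxy mode: forward client requests to the real etcd cluster
    proxy: on
    discovery: https://discovery.etcd.io/<token>
    listen-client-urls: http://localhost:2379
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
```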

Larger deployments of CoreOS are very fault tolerant. Even in an example cluster of 8 nodes, I was able to demonstrate how the cluster would reschedule instances of coreos/example as I added and removed nodes. Upgrades to CoreOS are just as smooth and seamless, thanks to etcd serving as a lock system for coordinating zero-downtime rolling reboots.

There are some really good diagrams of cluster architectures available at the CoreOS documentation site.

Where to go from here

CoreOS is great for managing raw compute power. It seems like it would also be good for running a small collection of containerized apps and services. For large deployments of complex applications, blue-green deployments, and more advanced cluster functionality, I would love to dive into Kubernetes.

As I mentioned in my introduction, I originally wanted to spin up a cluster and run Kubernetes on top. During that exploration I decided that I should learn and understand the underlying concepts and components of CoreOS first. By the time I had built that understanding, my deadline had arrived, so Kubernetes will have to wait for a future experiment.
