Ceph storage driver in LXD


Even before LXD gained its new powerful storage API, which allows LXD to administer multiple storage pools, one frequent request was to extend the range of available storage drivers (btrfs, dir, lvm, zfs) to include Ceph. We are happy to announce that this request has now been fulfilled: as of release 2.16, LXD ships with a Ceph storage driver.

The command line experience for Ceph is similar to that of the other storage drivers, so anyone who has played with the storage API should feel at home right away. Without going into too much detail about the inner workings of Ceph, there are a few things to keep in mind. LXD is not concerned with administering the Ceph cluster itself. Instead, LXD can be used to create and administer OSD storage pools in an existing Ceph cluster. The OSD storage pool is then used by LXD to create RBD storage volumes for images, containers, and snapshots, just as with any other storage driver.

Creating OSD storage pools in Ceph clusters

Like any other storage driver, the Ceph storage driver is supported through lxd init, so creating a Ceph storage pool is as easy as answering its questions and picking ceph as the storage backend.
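For reference, the non-interactive equivalent is a single command; a minimal sketch, with my-ceph as an illustrative pool name:

lxc storage create my-ceph ceph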

For more advanced use cases, it's possible to use our lxc storage command line tool to create further OSD storage pools in a Ceph cluster. Users can fine-tune several parameters when doing so. For example, it is possible to specify the Ceph user via ceph.user.name and the cluster to use via ceph.cluster_name. So say you wanted to create a new OSD storage pool in the cluster my-cluster for a user called my-user. This can be done with:

lxc storage create my-osd ceph ceph.user.name=my-user ceph.cluster_name=my-cluster

In the following asciinema I'm going to use the default admin for ceph.user.name and ceph for ceph.cluster_name, just to illustrate the use of these properties when creating a new OSD storage pool. I will also make use of the ceph.osd.pool_name property. It tells LXD that the name LXD uses to represent the OSD storage pool to the user should differ from the name of the OSD storage pool on disk. This is typically useful either when an OSD storage pool with the desired name already exists on disk and you would like LXD to use it, or when the name you want the OSD storage pool to have on disk is already taken by another LXD storage pool. The final property I'm going to specify is ceph.osd.pg_num, which sets the number of placement groups the OSD storage pool will use.
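Put together, such a command would look roughly like this (the pool names and placement group count below are illustrative, not the exact values from the recording):

lxc storage create my-other-osd ceph ceph.user.name=admin ceph.cluster_name=ceph ceph.osd.pool_name=my-other-osd-on-disk ceph.osd.pg_num=32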

asciicast

Creating images, containers, and snapshots on an OSD storage pool

Now that we have created two OSD storage pools, we are ready to create containers on them. Let's see if it all goes well.
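As a rough sketch, launching a container on a specific OSD storage pool uses the -s flag to select the pool (the image and container names here are illustrative):

lxc launch images:ubuntu/xenial ceph-xen1 -s my-osd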

asciicast

OSD storage pools use the RBD kernel driver to create and administer storage volumes. RBD storage volumes are conceptually similar to LVM logical volumes and ZFS datasets and share some properties with both. Similar to logical volumes, RBD storage volumes are block devices. This means the user can determine which filesystem to use for the storage volumes that are created. By default, LXD will use ext4 for all new storage volumes, but it is possible to tell LXD to use xfs instead. Let's create a new storage pool that uses xfs as its default filesystem for all new storage volumes.
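Roughly, that amounts to setting volume.block.filesystem at pool creation time; a sketch with an illustrative pool name:

lxc storage create my-osd-xfs ceph volume.block.filesystem=xfs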

asciicast

But as I said, RBD also shares features that make it similar to ZFS. For example, RBD supports the concept of clones. Clones are space-efficient storage volumes based on protected snapshots of other storage volumes. Internally this leads to a more complex storage pool structure, but LXD is smart enough to figure out the right dependencies and keeps track of any storage volumes that need to stay around even after the container has been deleted. The good news is that these clones are not just space-efficient, they are also super fast. Let's try to copy an existing container; LXD will use an RBD clone for that.
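A plain copy is enough to see this in action; a sketch reusing the illustrative container name from above:

lxc copy ceph-xen1 ceph-xen1-copy

The copy completes almost instantly because only a clone of a snapshot is created rather than a full copy of the data.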

Summary

By adding the Ceph storage driver to the storage API, LXD gains support for distributed storage. This makes LXD even more suitable for use in critical production environments and for running containers at very large scale. Administration is easy and intuitive through our storage API. I hope that this short introduction has given you a good impression of what the Ceph storage driver is currently capable of. We have more documentation available in our GitHub repository and are always open to feature requests and happy to lend support. The Ceph storage driver was fun to implement. I hope you have as much fun using it as I had writing it.

Take care
Christian


Storage management in LXD 2.15

 


For a long time LXD has supported multiple storage drivers. Users could choose between zfs, btrfs, lvm, or plain directory storage pools, but they could only ever use a single storage pool. A frequent feature request was to support not just a single storage pool but multiple storage pools. This way users would, for example, be able to maintain a zfs storage pool backed by an SSD for very I/O-intensive containers and a simple directory-based storage pool for other containers. Luckily, this is now possible since LXD gained its own storage management API a few versions back.

Creating storage pools

A new LXD installation comes without any storage pool defined. If you run lxd init, LXD will offer to create a storage pool for you. The storage pool created by lxd init will be the default storage pool on which containers are created.

asciicast
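For illustration, once lxd init has finished you can inspect the result with the storage command; a quick sketch using the pool name from the examples below:

lxc storage list
lxc storage show my-first-zfs-pool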

Creating further storage pools

Our client tool makes it really simple to create additional storage pools. To create and administer new storage pools you can use the lxc storage command. So if you wanted to create an additional btrfs storage pool on a block device /dev/sdb, you would simply use lxc storage create my-btrfs btrfs source=/dev/sdb. But let's take a look:

asciicast

Creating containers on the default storage pool

If you started from a fresh install of LXD and created a storage pool via lxd init, LXD will use this pool as the default storage pool. That means if you run lxc launch images:ubuntu/xenial xen1, LXD will create a storage volume for the container's root filesystem on this storage pool. In our examples we've been using my-first-zfs-pool as our default storage pool:

asciicast

Creating containers on a specific storage pool

But you can also tell lxc launch and lxc init to create a container on a specific storage pool by simply passing the -s argument. For example, if you wanted to create a new container on the my-btrfs storage pool, you would do lxc launch images:ubuntu/xenial xen-on-my-btrfs -s my-btrfs:

asciicast

Creating custom storage volumes

If you need additional space for one of your containers, for example to store additional data, the new storage API lets you create storage volumes that can be attached to a container. This is as simple as doing lxc storage volume create my-btrfs my-custom-volume:

asciicast

Attaching custom storage volumes to containers

Of course, this feature is only helpful because the storage API lets you attach those storage volumes to containers. To attach a storage volume to a container you can use lxc storage volume attach my-btrfs my-custom-volume xen1 data /opt/my/data:

asciicast

Sharing custom storage volumes between containers

By default, LXD will make an attached storage volume writable by the container it is attached to. This means it will change the ownership of the storage volume to the container's id mapping. But storage volumes can also be attached to multiple containers at the same time, which is great for sharing data among them. However, this comes with a few restrictions. For a storage volume to be attached to multiple containers, they must all share the same id mapping. Let's create an additional container xen-isolated that has an isolated id mapping. This means its id mapping will be unique to it in this LXD instance, so no other container has the same id mapping. Attaching the same storage volume my-custom-volume to this container will now fail.
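One way to create such an isolated container is via the security.idmap.isolated config key; a rough sketch of the creation and the attach attempt that will be refused (the exact steps used in the recording may differ):

lxc launch images:ubuntu/xenial xen-isolated -c security.idmap.isolated=true
lxc storage volume attach my-btrfs my-custom-volume xen-isolated data /opt/my/data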

asciicast

But let’s make xen-isolated have the same mapping as xen1 and let’s also rename it to xen2 to reflect that change. Now we can attach my-custom-volume to both xen1 and xen2 without a problem:

asciicast

Summary

The storage API is a very powerful addition to LXD. It provides a set of essential features that are helpful in dealing with a variety of problems when using containers at scale. This short introduction hopefully gave you an impression of what you can do with it. There will be more to come in the future.