Born in 2008, LXC (pronounced "lexy") is a userspace interface for the Linux kernel's containment features that enables the creation and management of application containers.
LXC leverages a number of kernel features to contain processes:
- Kernel namespaces (ipc, uts, mount, pid, network and user)
- Apparmor and SELinux profiles
- Seccomp policies
- Chroots (using pivot_root)
- Kernel capabilities
- CGroups (control groups)
LXC containers are often considered as something in between a chroot and a full-blown VM. The goal of LXC is to create an environment as close as possible to a standard Linux installation, without the need for a separate kernel.
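A quick way to see namespaces in action: every process's namespace memberships appear as symlinks under `/proc`, and a process inside a container shows different namespace IDs to the host (PID 12345 below is a placeholder for a process inside a running container):

```
# Host init's namespaces
ls -l /proc/1/ns/

# A containerised process's namespaces (12345 is a placeholder PID)
ls -l /proc/12345/ns/

# Differing link targets, e.g. pid:[4026531836] vs pid:[4026532219],
# mean the two processes live in separate namespaces.
```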
Kudos to Dave Cohen, who put together a nice LXC video series that gets straight to the point. What follows are my notes based on his videos. Dave's videos are in turn based on Stéphane Graber's 2013 LXC 1.0 blog post series.
LXC in 5 steps
- Install it. On a Red Hat based distro: `yum install epel-release`, then `yum install lxc lxc-extra lxc-templates`. `lxc-extra` includes helpful scripts such as `lxc-ls`. Verify with `lxc-checkconfig`.
- Create containers like this: `lxc-create -t centos -n web1`. The available container templates (`-t`) can be found in `/usr/share/lxc/templates`. On first use, `lxc-create` will download the base file system for the chosen template, dumping it in `/var/cache`, e.g. `/var/cache/lxc/centos/x86_64/7/`. LXC containers are stored in `/var/lib/lxc`. A temporary root password to log into the container is placed in `/var/lib/lxc/web1/tmp_root_pass`.
- Spark it up with `lxc-start -n web1 -d`. The `-d` runs it in daemon mode, preventing it from taking over stdout and stdin.
- Get a state of play with `lxc-ls -f` (`-f` for the fancy option).
- When you're ready to jump into a container, `lxc-attach -n web1`. The whole sequence is sketched end to end below.
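For reference, here are the five steps rolled into one session, exactly as above (container name `web1` and the CentOS template are the running example):

```
# 1. Install LXC from EPEL (Red Hat family)
yum install epel-release
yum install lxc lxc-extra lxc-templates
lxc-checkconfig                 # verify kernel support

# 2. Create a CentOS container named web1 (first run downloads the rootfs)
lxc-create -t centos -n web1

# 3. Start it detached
lxc-start -n web1 -d

# 4. Check the state of play
lxc-ls -f

# 5. Jump in
lxc-attach -n web1
```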
Bonus Options
Binding to a specific TTY
```
lxc-start -n web1 -c /dev/tty4 &
```
Freezing
LXC employs the freezer cgroup to freeze all running processes within a container, placing them in a blocked state until thawed. The freezer state of an LXC container can be queried through the `/sys` (like `/proc`) virtual file system, for example:
```
# cat /sys/fs/cgroup/freezer/lxc/web1/freezer.state
THAWED
```
`lxc-freeze -n web1` will initiate the freeze, and `lxc-unfreeze -n web1` will defrost it again.
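Putting that together, a freeze/thaw round trip can be watched through the same cgroup file:

```
lxc-freeze -n web1
cat /sys/fs/cgroup/freezer/lxc/web1/freezer.state    # FROZEN
lxc-unfreeze -n web1
cat /sys/fs/cgroup/freezer/lxc/web1/freezer.state    # THAWED
```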
Kill a container (lxc-stop -k)
`lxc-stop -n web1 -k`. The optional `-k` switch skips the clean shutdown request and instead kills all of the container's processes outright.
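For contrast, a clean shutdown versus the hard kill (the `-t` timeout flag is per the `lxc-stop` man page):

```
# Request a clean shutdown, giving the container up to 60 seconds
lxc-stop -n web1 -t 60

# No negotiation: kill every process in the container immediately
lxc-stop -n web1 -k
```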
Configuration
Templates
Available LXC templates, which are just shell scripts, live in `/usr/share/lxc/templates`.
```
# ls /usr/share/lxc/templates
lxc-alpine    lxc-archlinux  lxc-centos  lxc-debian    lxc-fedora  lxc-openmandriva  lxc-oracle  lxc-sshd    lxc-ubuntu-cloud
lxc-altlinux  lxc-busybox    lxc-cirros  lxc-download  lxc-gentoo  lxc-opensuse      lxc-plamo   lxc-ubuntu
```
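Since templates are scripts, most accept their own options after a `--` separator. As a sketch (arguments taken from the `download` template's help), the `download` template in the listing above fetches prebuilt images by distribution, release and architecture:

```
# Fetch a prebuilt CentOS 7 amd64 image instead of building locally
lxc-create -t download -n test1 -- -d centos -r 7 -a amd64
```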
rootfs
Each LXC container created gets a rootfs placed under `/var/lib/lxc`. For a container named web1:
```
# ls /var/lib/lxc/web1/rootfs
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  selinux  srv  sys  tmp  usr  var
```
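Since the rootfs is just a directory tree on the host, container files can be inspected or seeded from the host side, even while the container is stopped (the file paths here are illustrative):

```
# Peek at a file inside the container from the host
cat /var/lib/lxc/web1/rootfs/etc/redhat-release

# Drop a note into the container's root home directory
echo "provisioned from the host" > /var/lib/lxc/web1/rootfs/root/note.txt
```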
config
Each LXC container is given its own `config` file, for example `/var/lib/lxc/web1/config`. For documentation on this config, check out `man 5 lxc.container.conf`. The boilerplate config will look like this:
```
# Template used to create this container: /usr/share/lxc/templates/lxc-centos
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.hwaddr = fe:c0:1f:81:f8:26
lxc.rootfs = /var/lib/lxc/web1/rootfs

# Include common configuration
lxc.include = /usr/share/lxc/config/centos.common.conf
lxc.arch = x86_64
lxc.utsname = web1
lxc.autodev = 1

# When using LXC with apparmor, uncomment the next line to run unconfined:
#lxc.aa_profile = unconfined

# example simple networking setup, uncomment to enable
#lxc.network.type = veth
#lxc.network.flags = up
#lxc.network.link = lxcbr0
#lxc.network.name = eth0
# Additional example for veth network type
# static MAC address,
#lxc.network.hwaddr = 00:16:3e:77:52:20
# persistent veth device name on host side
# Note: This may potentially collide with other containers of same name!
#lxc.network.veth.pair = v-web1-e0
```
Let's make some changes to get a static IP, control the boot order, assign the container to one or more groups, and auto-start it when the host boots.
```
lxc.network.ipv4 = 192.168.124.140/24 192.168.124.255
lxc.network.ipv4.gateway = auto
lxc.start.auto = 1   #1 = boot at host startup
lxc.start.delay = 15 #15 second delay prior to boot
lxc.start.order = 10 #the higher the number, the higher the priority
#lxc.group = web,dns #assigns the container to 1 or more groups, which can then be managed in chunks
```
Beware of `lxc.group`: once a container belongs to a group it leaves the default (unnamed) group, and the boot-time autostart only processes that default group unless a group is explicitly requested, so `lxc.start.auto` on its own stops taking effect.
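Once grouped, containers are driven in chunks with `lxc-autostart`, using the `web` group from the config above:

```
# Start every autostart-flagged container in the web group
lxc-autostart -g web

# Cleanly shut the group down (-s), waiting up to 30 seconds (-t)
lxc-autostart -g web -s -t 30

# Act on all containers, regardless of group
lxc-autostart -a
```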
The final step to getting a static IP is to disable DHCP in the network config within the container itself. On CentOS, for example:

```
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
```
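A minimal `ifcfg-eth0` for this setup might look as follows; this is a sketch that assumes LXC itself applies the address from `lxc.network.ipv4`, so the container side only needs DHCP switched off:

```
DEVICE=eth0
BOOTPROTO=none   # was "dhcp"; the static address comes from the LXC config
ONBOOT=yes
```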
Managing Auto Start
LXC containers, as shown, can be conveniently grouped and auto-started, easing the pain of starting and stopping individual containers all the time. The `lxc-autostart` command supports a `-L` (list) option that shows, in priority order, the auto-boot sequence it is aware of.
```
# lxc-autostart -L
psql-1 15
psql-2 15
rabbit-mq 15
nginx-1 15
nginx-2 15
```
The order of the list shows the priority. The value that follows each container name is the number of seconds to delay before proceeding to auto-boot the next container.
Unprivileged Containers
uidmap