Container OS Architecture Explained: How Containers Use Host Kernel While Providing OS-like Environments


When you run a CentOS container on an Ubuntu host, you're seeing an interesting abstraction. The container doesn't contain a kernel at all, but it does include:

  • Userland binaries and libraries (e.g., /bin, /usr/bin, /lib)
  • Configuration files (e.g., /etc)
  • Package managers (e.g., yum, dnf)
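
You can peek at that userland directly. A quick check (assuming Docker and the centos:7 image are available locally):

# List the top-level filesystem shipped in the image (files only, no kernel)
docker run --rm centos:7 ls /

# The familiar configuration tree is there too
docker run --rm centos:7 ls /etc | head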

Here's how you get a shell prompt without a full OS:

# Run a CentOS container interactively
docker run -it centos:7 /bin/bash

# Inside the container, you'll see:
[root@container-id /]# 

This works because:

  1. The container image includes minimal binaries like bash
  2. The host kernel provides process isolation and resource management
  3. Namespaces create the illusion of an independent environment
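
Because the kernel is shared, a simple sanity check (assuming a Linux host running Docker natively, not a Docker Desktop VM) is to compare kernel versions inside and outside the container:

# On the Ubuntu host
uname -r

# Inside the CentOS container: same kernel version, different userland
docker run --rm centos:7 uname -r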

Traditional init systems (systemd, upstart) aren't typically needed in containers. The usual approach is to run the service itself in the foreground as the container's main process:

# Dockerfile example for running a service
FROM centos:7
RUN yum install -y httpd
# Run httpd in the foreground so it stays the container's main process
CMD ["httpd", "-D", "FOREGROUND"]

When running a CentOS container on Ubuntu:

# Pull CentOS image on Ubuntu host
docker pull centos:7

# Verify the container's userland
docker run centos:7 cat /etc/redhat-release
# Output: CentOS Linux release 7.9.2009 (Core)

The container includes:

  • CentOS-compatible glibc version
  • CentOS-specific binaries and paths
  • RedHat-style package management
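
A few quick checks of that userland (assuming the centos:7 image is available locally):

# glibc version bundled with the image
docker run --rm centos:7 rpm -q glibc

# RedHat-style package tooling is present inside the container
docker run --rm centos:7 yum --version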

Key things to remember:

  • Containers share the host kernel but maintain their own userland
  • Container images are distribution-specific but kernel-agnostic
  • Process isolation is handled by kernel features (cgroups, namespaces)

For example, a container reports its image's distribution even though the kernel is the host's:

# View container's apparent OS info
docker run --rm alpine cat /etc/os-release
# Different output than host's /etc/os-release
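
Process isolation is just as visible: the container's PID namespace contains only the container's own processes.

# A fresh Alpine container sees only itself
docker run --rm alpine ps
# Typically lists just PID 1 (the ps command itself)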

Containers fundamentally differ from virtual machines in their architecture. While VMs require a full guest operating system, containers share the host machine's kernel while maintaining isolated user spaces. This explains why we can run a CentOS container on an Ubuntu host: the container only needs the CentOS userland binaries and libraries.

When you get a shell prompt in a container, you're interacting with the container's user space environment, not a full OS. Service-management utilities like systemctl and service behave differently here (see the quick check after this list); they are typically either:

  1. Absent from minimal container images altogether
  2. Present but non-functional, because no init system such as systemd is running as PID 1
  3. Managing only the processes inside the container's own namespace, when an init process is explicitly run
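
A quick way to see point 2 in practice (assuming systemctl is present in the image, as it is in typical centos:7 images; the exact error text may vary):

# systemd is not running as PID 1 here, so systemctl has nothing to talk to
docker run --rm centos:7 systemctl status
# Typically fails with a D-Bus connection error instead of listing units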

For example, here's how a minimal Alpine container might include just the shell and core utilities:

FROM alpine:latest
RUN apk add --no-cache bash coreutils
CMD ["/bin/bash"]

Running a CentOS container on Ubuntu works because:

  • The container image contains CentOS's /bin, /lib, and /etc directories
  • All system calls route through the Ubuntu host's kernel
  • Package managers like yum operate within the container's filesystem

Here's how you might run different distributions simultaneously:

# On an Ubuntu host (e.g., in separate terminals)
docker run -it centos:7 /bin/bash
docker run -it alpine:latest /bin/sh
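
To run them side by side, you can also start both detached and query each one; a sketch (the container names are arbitrary):

docker run -d --name c1 centos:7 sleep 300
docker run -d --name c2 alpine:latest sleep 300
docker exec c1 cat /etc/redhat-release
docker exec c2 cat /etc/os-release
docker rm -f c1 c2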

Each container maintains its own:

  • Package database (/var/lib/rpm for CentOS)
  • Configuration files
  • Initialization system (if present)
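
For instance, each image carries its own package database inside its own filesystem (paths assume the stock centos:7 and alpine images):

# RPM database in the CentOS container
docker run --rm centos:7 ls /var/lib/rpm

# apk database in the Alpine container
docker run --rm alpine ls /lib/apk/db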

While containers appear to have a full OS, they're actually just bundles of:

  1. Application binaries
  2. Shared libraries
  3. Configuration files
  4. Optional init systems
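
You can verify this by exporting a container's filesystem and inspecting it as an ordinary tarball (the container name is arbitrary):

docker create --name peek centos:7
docker export peek | tar -tf - | head -n 20
docker rm peek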

The key takeaway: containers provide distribution userlands, not complete operating systems. This explains both their lightweight nature and their compatibility across different hosts.