

A Benchmark for Container Orchestration Systems

Design of the Container Orchestration Benchmark

This document discusses the design of the CNBM-CO benchmark, how we arrived at it, and related efforts.

Architecture of CNBM-CO

The benchmark is executed as follows (cf. the architecture above):

  1. The user provisions a cluster and provides the running cluster to the benchmark.
  2. The benchmark itself runs in the cluster, triggered by the local cnbm-co command.
  3. Results are dumped locally to stdout as CSV/JSON.
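The result-dumping step (3) can be sketched as follows; the emit_results helper and its field names are illustrative assumptions, not the actual cnbm-co output schema:

```python
import csv
import io
import json

def emit_results(run_type, samples, fmt="json"):
    """Serialize per-sample timings (seconds) as JSON or CSV for stdout."""
    rows = [{"run_type": run_type, "sample": i, "seconds": s}
            for i, s in enumerate(samples)]
    if fmt == "json":
        return json.dumps(rows, indent=2)
    # CSV with a header row, one line per sample
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["run_type", "sample", "seconds"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(emit_results("scaling", [1.2, 1.4, 1.1], fmt="csv"))
```

Emitting to stdout keeps the benchmark itself stateless; any post-processing can be piped into other tools.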


Supported targets at the moment are:

Benchmark run types

The CNBM-CO benchmark is composed of a number of micro-benchmarks, called benchmark run types (or simply run types), which cover different areas. See here for a more formal definition.


One run type measures the following sequence:

  1. Start N containers, potentially with different runtimes (Docker, UCR, CRI-O); measured in seconds.
  2. Stop the N containers; measured in seconds.
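A minimal sketch of timing such a start/stop sequence; the container operations below are simulated stubs standing in for real orchestrator calls:

```python
import time

def timed(op, n):
    """Run op() n times and return total wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    return time.perf_counter() - start

# Stand-ins for a real orchestrator client: starting/stopping a
# container is simulated with a short sleep.
def start_container():
    time.sleep(0.001)

def stop_container():
    time.sleep(0.001)

N = 10
start_secs = timed(start_container, N)
stop_secs = timed(stop_container, N)
print(f"start {N}: {start_secs:.3f}s, stop {N}: {stop_secs:.3f}s")
```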


Launches N containers and measures their distribution over the nodes as a map: node ID -> set of containers.
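Building that map can be sketched as follows; the placement tuples are hypothetical, standing in for what the orchestrator reports after the launch:

```python
from collections import defaultdict

def distribution(placements):
    """Group launched containers by the node they landed on:
    node ID -> set of containers."""
    by_node = defaultdict(set)
    for container, node in placements:
        by_node[node].add(container)
    return dict(by_node)

# Hypothetical placements reported by the orchestrator after launching N=5.
placements = [("c1", "node-a"), ("c2", "node-b"), ("c3", "node-a"),
              ("c4", "node-c"), ("c5", "node-b")]
dist = distribution(placements)
print(dist)
```

From such a map one can derive spread metrics, for example the difference between the most- and least-loaded node.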


Measures the latency of API calls from within the cluster, in seconds:
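Raw per-call latencies are typically reduced to summary statistics; a sketch with hypothetical sample values:

```python
import math
import statistics

def summarize(latencies):
    """Reduce raw API-call latencies (seconds) to min/mean/p95/max."""
    ordered = sorted(latencies)
    p95_idx = math.ceil(0.95 * len(ordered)) - 1
    return {
        "min": ordered[0],
        "mean": statistics.mean(ordered),
        "p95": ordered[p95_idx],
        "max": ordered[-1],
    }

# Hypothetical latencies from repeated in-cluster API calls.
print(summarize([0.012, 0.015, 0.011, 0.040, 0.013]))
```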


Measures service discovery time, in seconds:
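At its core this is timing a name lookup; a sketch using localhost as a stand-in for an in-cluster service name:

```python
import socket
import time

def resolve_seconds(name, port=80):
    """Time a single name resolution, the core of a service-discovery probe."""
    start = time.perf_counter()
    socket.getaddrinfo(name, port)
    return time.perf_counter() - start

# "localhost" stands in for a cluster-internal service name here.
print(f"resolved in {resolve_seconds('localhost'):.6f}s")
```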


Measures recovery performance in case of re-scheduling a pod (container), in seconds.
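Measuring recovery amounts to polling until the replacement is ready; a sketch with a simulated orchestrator, where the replacement becomes ready roughly 50 ms after the original is killed:

```python
import time

def time_until(predicate, poll_interval=0.01, timeout=5.0):
    """Poll until predicate() is true; return elapsed seconds, else raise."""
    start = time.perf_counter()
    while time.perf_counter() - start < timeout:
        if predicate():
            return time.perf_counter() - start
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within timeout")

# Simulated orchestrator: the replacement container becomes ready
# ~50 ms after we "kill" the original one.
killed_at = time.perf_counter()
rescheduled = lambda: time.perf_counter() - killed_at > 0.05
print(f"recovered in {time_until(rescheduled):.3f}s")
```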


For individual run types we consider one or more of the following dimensions:

While no prior art exists with the same scope as the CNBM-CO benchmark, there are a number of related efforts we reviewed and learned from: