OpenStack and Python

Dmitry Tantsur (Principal Software Engineer, Red Hat)


  • What is OpenStack?
  • Tools:
    • pbr - setup.py made better
    • tox - managing virtual environments
    • reno - when ChangeLog is too messy
  • Libraries:
    • stevedore - power of entry points
    • oslo.config - configuration options
  • Our approach to
    • requirements
    • releases

What is OpenStack?

What is OpenStack?

Free and open source software to create public and private clouds.

Implements IaaS (infrastructure as a service) - on-demand:

  • virtual and bare metal machines,
  • NICs, networks, routers,
  • object and block storage,
  • shared file systems,
  • monitoring and billing for all this,
  • and many more.

Provides a complete and consistent API, as well as a web UI.

What is OpenStack?

An example of routine tasks achievable via API/UI:

Create two small virtual servers with Ubuntu 16.04 and one large bare metal server with CentOS 7. Connect the Ubuntu machines with a private network, route it to the internet, and expose port 80 of the Ubuntu machines through a load balancer. Connect all machines with another private network. Put my SSH public key on all machines.

OpenStack in numbers

  • > 50 teams contributing to > 600 repositories
  • ~ 2000 contributors from > 180 companies *
  • > 20 public clouds based on OpenStack

* in the last release codenamed "Pike"

So what about Python?

OpenStack is nearly entirely written in Python!

Why Python?

OpenStack is all about gluing together various great bits of free software and providing a solid API on top of them.

Python is very good at gluing things together with its rich collection of network libraries and its good ability to interface with C code.

Performance is rarely a concern, as most of the actual logic lives in other projects.

Tools: pbr

Tools: pbr

Why? setup.py is bad:

  • running code (often as root) at install time
  • people get creative in their setup.py files, complicating life for users and especially packagers *
  • custom code instead of improving common tools

* please, PLEASE, don't execute pip install from within your setup.py!

Tools: pbr

pbr means Python Build Reasonableness.

Addon on top of setuptools

  • allowing declarative configuration via setup.cfg
  • solving common problems, adding missing features
  • reduces your setup.py to:

import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)


This setup.py gets committed to your repository and only ever changes to bump the required version of pbr.

Tools: pbr

The real magic happens in setup.cfg:

[metadata]
name = coolcats
summary = I wrote this project because I'm good at writing
description-file =
    README.rst
author = dtantsur
author-email =
home-page =
classifier =
    Environment :: OpenStack
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License

Tools: pbr

Handles packages (detects modules recursively!), scripts and data files:

[files]
packages =
    coolcats

scripts =

data_files =
    share/pyfoo/ = images/funny-cats/*

Tools: pbr

Generates versions based on git tags:

$ python setup.py --version

Generates nice AUTHORS and ChangeLog files from git.

Generates MANIFEST based on the files added to git.

Tools: pbr


Downsides:

  • Requires git
  • Opinionated about versions

Tools: tox

Tools: tox


Managing virtual environments in a repeatable way is annoying:

$ virtualenv venv
$ venv/bin/pip install -r requirements.txt
$ venv/bin/pip install -e .
$ venv/bin/python -m unittest discover coolcats.tests  # for example

Now repeat for every supported Python version...

Tools: tox

tox simplifies the routine tasks of managing virtual environments and running commands in them.

Run unit tests on Python 2.7:

$ tox -epy27

Run unit tests on the default Python 3:

$ tox -epy3

Build a generic environment and run some commands there:

$ tox -evenv -- python -m some.package

Tools: tox

Configuration in tox.ini:

[tox]
envlist = py3,py27

[testenv]
usedevelop = True
deps =
commands =
    python -m unittest discover coolcats.tests
setenv =

Tools: tox

Custom environments doing anything:

[testenv:pep8]
basepython = python2.7
deps =
    flake8
    doc8
commands =
    flake8 coolcats
    doc8 README.rst doc/source

[testenv:venv]
commands = {posargs}

Tools: reno

Tools: reno


pbr can generate a ChangeLog.

But user/operator facing release notes are a different thing: you need to highlight important changes and hide technical details.

Writing release notes by project maintainers does not scale.

Appending them to a single file is messy on merges and backports.

Tools: reno

reno allows a commit author to create a simple yaml file with associated release notes for their change:

---
features:
  - |
    Introduces support for downloading funny cats pictures.
upgrade:
  - |
    Make sure to enable downloading funny cats pictures or
    we'll set your hard drive on fire.
deprecations:
  - |
    Not watching funny cats is lame and deprecated.

Tools: reno

When building release notes, the reno tool:

  • takes all commits from a requested git branch,
  • extracts release note files,
  • splits them over git tags,
  • combines them in one yaml,
  • converts it to rst using per-project templates,
  • passes the rst to Sphinx.
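The splitting step can be sketched in a few lines of plain Python. This is a toy model, not reno's actual code; the commit data is made up:

```python
def split_notes(commits):
    """Group release notes by the first git tag at or after their commit.

    ``commits`` is an oldest-to-newest list of (tag_or_None, note_or_None)
    pairs -- a toy stand-in for walking a git branch.
    """
    grouped, pending = {}, []
    for tag, note in commits:
        if note:
            pending.append(note)
        if tag:
            # A tag "closes" every note accumulated since the previous tag.
            grouped[tag] = pending
            pending = []
    if pending:
        grouped["unreleased"] = pending
    return grouped
```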


Entry points and stevedore

Entry points

Entry points are great, let's have MORE of them!

This is a great, but often overlooked, feature of setuptools.

Essentially, a collection of dictionaries, mapping short names to Python objects.

Entry points of the same group from different Python projects are merged by setuptools, which makes this feature perfect for plugins!

Entry points and pbr

pbr supports entry points in setup.cfg:

[entry_points]
console_scripts =
    make-cat-photo = coolcats.cli:my_cat_photo
    post-cat-photo = coolcats.cli:my_cat_photo

coolcats.cats =
    small-and-cute = coolcats.cats:SmallAndCute
    fat-and-awesome = coolcats.cats:FatAndAwesome

The standard console_scripts group simplifies creating scripts a lot.


Libraries: stevedore

stevedore is a library simplifying interaction with entry points. It provides convenient classes for common patterns:

  • driver - pick one named entity from a choice of several, for example, database drivers
  • hooks - list of entities under one name
  • extensions - collection of named entities

Each type can be enabled automatically (when a package with it is installed) or explicitly (e.g. via configuration).

Libraries: oslo.config

Configuration options

Why bother when we have standard configparser?

  • No way to define a schema
  • Limited support for types
  • No way to generate documentation
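The first two gaps are easy to demonstrate with the standard library alone. A minimal sketch; the [api] section and port option are illustrative:

```python
import configparser

cp = configparser.ConfigParser()
cp.read_string("""
[api]
port = 6385
""")

# Everything comes back as a string; there is no schema saying
# that port must be an integer between 0 and 65535.
assert isinstance(cp["api"]["port"], str)

# Type conversion is opt-in at every access site, and nothing
# validates ranges or rejects unknown options.
port = cp["api"].getint("port")
assert port == 6385
```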

oslo.config library

Options schema defined in Python:

opts = [
    cfg.StrOpt('host_ip',
               default='',
               help=_('The IP address on which ironic-api listens.')),
    cfg.PortOpt('port',
                default=6385,
                help=_('The TCP port on which ironic-api listens.')),
    cfg.IntOpt('max_limit',
               default=1000,
               help=_('The maximum number of items returned in a single '
                      'response from a collection resource.')),
]

A real example from one of our projects.

oslo.config library

Options are accessed via a global object:

from oslo_config.cfg import CONF

host_port = "{}:{}".format(CONF.api.host_ip, CONF.api.port)

Using a global CONF object is ugly, but simplifies things.

oslo.config library

  • Many built-in types, including lists, IP addresses and ports.
  • Validation of loaded options.
  • Option loading from files and command line.
  • Support for deprecating options and renaming with deprecation.

oslo.config library

Bonus: generating example configurations:


# The IP address on which ironic-api listens. (string value)
#host_ip =

# The TCP port on which ironic-api listens. (port value)
# Minimum value: 0
# Maximum value: 65535
#port = 6385

# The maximum number of items returned in a single response
# from a collection resource. (integer value)
#max_limit = 1000

oslo.config library

Bonus: generating documentation

Our approach to requirements

Requirements are easy

Just populate a requirements.txt, pbr will pick it up automatically:

requests>=2.0
That's all, right?

Requirements are easy

Major* versions tend to break things.

* if you're breaking things in non-major versions, please STOP.

Still easy, insert an upper cap:

requests>=2.0,<3.0
That's all now, right? RIGHT?

Requirements are easy

Your fellow project may want a newer or older major version:

requests>=3.0
What if somebody tries to install both projects at the same time?

Requirements are NOT easy at all

Requirements in OpenStack

Centralized requirements handling:

  • There is a separate repository with all requirements from all projects.
  • A bot periodically updates versions in requirements.txt in other repositories.

This gets us:

  • versions that never conflict,
  • a central place to review new requirements,
  • running CI jobs on any updates.

Stable branches handling

Requirements on stable branches* should not change much.

* git branches and release series produced from them that only receive important bug fixes

Upper caps to the rescue?

requests>=2.10,<2.15
Stable branches handling

Upper caps to the rescue?

requests>=2.10,<2.15
What if e.g. requests 2.15.1 is a critical bug fix?

What if a non-OpenStack project on the same machine requires requests>=2.16?

Stable branches handling

After years of struggling we decided to mostly stop using upper caps, except for known major breaking changes.

We let downstream consumers decide on appropriate versions.

But what to do with the CI?

Requirements and CI

We still don't want new versions of random projects to break our CI.

Especially on stable branches.

We also want to give downstream consumers a recommendation on which versions are known to work.

Solution: upper constraints.

Upper constraints

Poll: who knows what -c flag for pip install does?
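For the record: -c (--constraint) limits the versions of packages if they get installed, without requiring them. A minimal sketch of a CI-style install; the file names are illustrative:

```shell
# Install the project's requirements, but pin anything that actually
# gets installed to the exact versions listed in the constraints file.
python -m pip install -r requirements.txt -c upper-constraints.txt
```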

Upper constraints

Upper constraints complement requirements with stricter limits:

requests===2.14.2

Upper constraints are not synced to projects and are not enforced outside of the CI.

Upper constraints

On the master (current development) branch a bot periodically proposes updates to upper constraints, which then go through the CI.

On stable branches upper constraints are only updated automatically for other stable OpenStack component releases.

For other releases upper constraints on stable branches can only be updated manually.

Our approach to releases


Typical release of a Python project:

$ git tag -s 1.0.0
$ git push origin 1.0.0
$ python setup.py sdist upload

Problem solved?


Problems with manual releases:

  • Generating tarball in a potentially "dirty" environment
  • Managing credentials for PyPI
  • No peer reviews for releases


Releases through CI:

  • Create and push a tag
  • A CI job validates it
  • Another CI job builds a tarball and stores it
  • The third CI job publishes it to PyPI
  • Bonus: we have a CI job to do a release announcement :-)

What about peer review and validation before tagging?


Repository for review requests:

team: nova
type: service
releases:
  - version:
    projects:
      - repo: openstack/nova
        hash: af4703cb38580a8cb9c9b293dd4b1637f2734cad


Allows running CI jobs on release requests!

On merging a new release request:

  • Create a git tag and push it
  • Continue with the same actions

Bonus: stable branches are created the same way:

branches:
  - location:
    name: stable/pike