All posts by rcritten

Future thoughts on host groups, Foreman, OpenStack and IPA

Get ready for a ramble…

IPA has hostgroups. Foreman has hostgroups. Openstack-Foreman-Installer (aka astapor) has hostgroups. Wouldn’t it be great to somehow link them together into one cohesive package?

Foreman already has some integration via its realm smart proxy. When provisioning a host you can set a class on it which, via the magic of automember rules in IPA, will add the host to the appropriate hostgroup. But this is really separate from Foreman's own host group handling.

Foreman has a host group concept which defines the list of puppet modules and other environment for a group of hosts.

Might there be a way to combine the two, so that hosts could have consistent naming and be associated with the proper IPA hostgroups? If so, some more interesting policies could be applied, including:

  • Unified HBAC policies on the hosts to control access
  • The ability to have ipa-getkeytab re-fetch a keytab to maintain naming consistency for load-balancing.
  • Once IPA has support for multiple certificate profiles, providing hostgroup-specific profiles for certain types of service hosts within OpenStack

Enabling SSL or tls-proxy in devstack

If you want to create an OpenStack environment using devstack with most endpoints protected by SSL there are two ways to do it: native SSL or a TLS proxy (aka an SSL terminator). Both are supported in devstack.

To enable native SSL, add this to your local.conf
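(The snippet didn't survive here; assuming the devstack of that era, native SSL was switched on with a single variable in the localrc section — check your devstack version for the current name.)

```
[[local|localrc]]
USE_SSL=True
```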


To enable via TLS Proxy (stud in this case), add this to your local.conf
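(This snippet is also missing; assuming the devstack of that era, the proxy was enabled as a service in the localrc section.)

```
[[local|localrc]]
enable_service tls-proxy
```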


This will enable SSL endpoints for:

  • keystone
  • nova
  • cinder
  • glance
  • swift
  • neutron

devstack will generate its own CA certificate and add it to the global trust so all clients on the local machine should just work(tm).

Kernel panic from Solaris 10 x86 installation on KVM

I was trying to install Solaris 10 from sol-10-u11-ga-x86-dvd.iso today and it wouldn't boot on a Generic/Generic/1GB RAM/8GB disk x86_64 KVM VM. It failed with a kernel panic.

It seems related to the amount of RAM because I bumped it up to 2GB and the VM booted and I’ve started the installation.

As a note to self, 8GB is not enough for a Developer install either. I went with 14GB.

Keystone and HAProxy

I’m trying to get the astapor puppet module (used in the Openstack Foreman Installer and Staypuft) to configure SSL via a proxy. I’m going to use haproxy since it may already be available on the system and it supports SSL termination.

I’m starting with Keystone, as usual, since it is the core of things. Here are some notes from my first crack at doing it manually.

I cheated a bit and used this blog entry to get the basic gist of configuring haproxy for SSL termination. I just copied the default haproxy.cfg to keystone.cfg, deleted the default listeners and added this block:

frontend main
    bind *:5000 ssl crt /etc/pki/tls/private/combined.pem
    default_backend keystone-backend

frontend admin
    bind *:35357 ssl crt /etc/pki/tls/private/combined.pem
    default_backend admin-backend

backend keystone-backend
    redirect scheme https if !{ ssl_fc }
    server keystone1 check

backend admin-backend
    redirect scheme https if !{ ssl_fc }
    server admin1 check

I started it with:

# haproxy -f /etc/haproxy/keystone.cfg

And of course it failed because keystone is already listening on those ports. So I left it dead for now. I switched gears and started following my previous blog post on configuring keystone for SSL. The difference is that I just need to create the new secure endpoint, then re-configure keystone.cfg to listen on ports 5001 and 35358 instead.

Note: HAProxy only takes a single certificate file for SSL so you need to concatenate the public cert, private key and CA cert(s) into a single file and use that. When I generate these certs using certmonger I'll probably end up using a post-save script to do this concatenation.
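A post-save script along these lines would do the job (hypothetical paths; adjust to wherever certmonger writes your cert and key):

```shell
#!/bin/sh
# Hypothetical certmonger post-save hook: HAProxy wants a single PEM,
# so concatenate the server cert, its private key and the CA chain.
cat /etc/pki/tls/certs/keystone.crt \
    /etc/pki/tls/private/keystone.key \
    /etc/ipa/ca.crt > /etc/pki/tls/private/combined.pem
# The combined file contains the private key, so keep it private.
chmod 600 /etc/pki/tls/private/combined.pem
```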

So I did that, deleted the original keystone endpoint, restarted the openstack-keystone service, and finally I was able to start up haproxy.

I then fixed my adminrc to use SSL and include OS_CACERT=/path/to/ca and then tried a keystone endpoint-list only to get an SSL failure.

The problem is in python-backports-ssl_match_hostname. The puppet manifests I’m using currently put IP addresses in for everything and I’ve no time or skill to track all that down so I figured I could cheat for a bit and use an IP Address SAN. The problem is that this is explicitly not allowed in match_hostname so the request fails. For now I added some matching code so it works:

if key == 'IP Address':
    # Treat an exact IP Address SAN match like a DNS name match
    if value == hostname:
        return

So with that in place I can now run keystone endpoint-list successfully. I then moved onto the rest of my previous blog on manually converting to secure Keystone and was able to get nova, glance and cinder working. I’m just about ready to fire up a VM at this point.

CA verification and requests

I’ve seen several projects that use requests that try to pass in local CA information. This is fine and generally pretty functional for those that use self-signed certificates, but the fallback when no CA is provided tends to be None. This causes requests to check two environment variables: REQUESTS_CA_BUNDLE and CURL_CA_BUNDLE. If neither is set then you get no CA validation at all which basically dooms the request to failure.

Instead, IMHO, verify should be set to requests.certs.where() if no CA is provided by the client. Really this should be the default in requests.
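In the meantime, a client-side workaround is to point one of those environment variables at the system bundle yourself (the path shown is the Fedora one):

```shell
# Make requests fall back to the system CA bundle when the application
# passes verify=None; CURL_CA_BUNDLE works the same way.
export REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
```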

Adding CAs to the global store is easier than ever and generally a lot easier to handle than copying PEM files all over the place and referencing long paths in potentially multiple configuration files (in the case of OpenStack).

devstack, CA_BUNDLE, requests and pip

In the SSL patches I'm working on for OpenStack in devstack I'm trying to move away from relying on client-specific CA file options. There has been pushback from upstream projects on adding new options for every server -> server connection (e.g. glance -> cinder, glance -> swift, etc).

The system CA bundle was working nicely until I stood up a new dev box. Suddenly I was seeing a bunch of SSL verification errors.

The problem turned out to be requests. I was using the pip-installed requests which uses its own CA bundle by default, rather than the Fedora python-requests package which uses the system bundle in /etc/pki/tls/certs/ca-bundle.crt. The module contains this comment:

If you are packaging Requests, e.g., for a Linux distribution or a managed
environment, you can change the definition of where() to return a separately packaged CA bundle.

We return “/etc/pki/tls/certs/ca-bundle.crt” provided by the ca-certificates package.

So if you are having problems with trust, try installing the distro-specific package. It worked for me.

SSL endpoints for nova, glance and cinder

Continuing on the theme of adding SSL endpoints in OpenStack, let's do a few more. Note that I'm using native SSL here. It is believed that this will suffer from rather bad performance in production. You've been warned.

You are going to need to obtain a bunch of SSL server certificates for this to work. It is possible to use the same certificate for each service but it's bad practice. In my case I've used my local IPA server to obtain the certificates, YMMV. Feel free to skip over the IPA parts. In order for this to work with IPA, you need to enroll your system(s) with ipa-client-install.

I’m demonstrating this with a packstack installation using nova networking.

Before doing anything, I’d strongly recommend booting an image and ensuring that OpenStack is properly functioning.

Start by securing the Keystone endpoint.

Next add the CA to the global trust.

Let’s start with Cinder.

Create a certificate for us to use. IPA associates a certificate with a service, so we’ll create a service in IPA to store the certificate:

# kinit admin
# ipa service-add cinder/
# ipa-getcert request -f /etc/pki/tls/certs/cinder.crt -k /etc/pki/tls/private/cinder.key -K cinder/

However you obtained the certificate, make sure the cinder user can read the certificate and key:

# chown cinder /etc/pki/tls/certs/cinder.crt
# chown cinder /etc/pki/tls/private/cinder.key

Find the cinder service endpoints; there will be two: one for the v1 API and one for the v2 API:

# keystone endpoint-list|grep 8776

Delete the existing endpoints:

# keystone endpoint-delete <id>
# keystone endpoint-delete <id>

Now re-create the endpoints using the system FQDN and https:

# keystone endpoint-create --publicurl "" --adminurl "" --internalurl "" --service cinder_v2
# keystone endpoint-create --publicurl "" --adminurl "" --internalurl "" --service cinder
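The URLs didn't survive the formatting above; with a (hypothetical) controller at server.example.com, the three URLs for each endpoint would all take roughly this form:

```
cinder_v2: https://server.example.com:8776/v2/%(tenant_id)s
cinder:    https://server.example.com:8776/v1/%(tenant_id)s
```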

Edit /etc/cinder/cinder.conf to add the SSL options.

ssl_cert_file = /etc/pki/tls/certs/cinder.crt
ssl_key_file = /etc/pki/tls/private/cinder.key

Restart the Cinder API service:

# service openstack-cinder-api restart

Edit /etc/nova/nova.conf to tell it how to talk to Cinder:

cinder_endpoint_template =
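The value was lost here; it follows the same pattern as the endpoint URL, e.g. with a hypothetical FQDN:

```
cinder_endpoint_template = https://server.example.com:8776/v1/%(project_id)s
```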

Restart the Nova API:

# service openstack-nova-api restart

Test to be sure things still work:

# cinder list
# nova volume-list

Now we move onto the Glance service.

Get a certificate from IPA:

# ipa service-add glance/
# ipa-getcert request -f /etc/pki/tls/certs/glance.crt -k /etc/pki/tls/private/glance.key -K glance/

Fix the permissions on the certificate and key files:

# chown glance /etc/pki/tls/certs/glance.crt
# chown glance /etc/pki/tls/private/glance.key

Find and delete the glance endpoint:

# keystone endpoint-list |grep 9292
# keystone endpoint-delete <id>

And add back the endpoint using the FQDN and https:

# keystone endpoint-create --publicurl --internalurl --adminurl --service glance
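Again the URLs were lost; Glance takes no tenant substitution, so all three would look something like this (hypothetical FQDN):

```
https://server.example.com:9292
```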

Edit /etc/glance/glance-api.conf.

In [DEFAULT] add:

cert_file = /etc/pki/tls/certs/glance.crt
key_file = /etc/pki/tls/private/glance.key

Restart the Glance API service:

# service openstack-glance-api restart

And test that the Glance client works:

# glance image-list

Update Nova to tell it about the secure Glance API. Edit /etc/nova/nova.conf, in [DEFAULT]:
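The option itself is missing above; for the Nova of that era, pointing at the secure Glance API meant something like this (hypothetical FQDN):

```
glance_api_servers = https://server.example.com:9292
```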


Restart the Nova API:

# service openstack-nova-api restart

And test that Nova can talk to Glance:

# nova image-list

Finally, secure the Nova service (just Nova for now, not EC2 or S3).

Get a certificate for Nova:

# ipa service-add nova/
# ipa-getcert request -f /etc/pki/tls/certs/nova.crt -k /etc/pki/tls/private/nova.key -K nova/

Fix the permissions:

# chown nova /etc/pki/tls/certs/nova.crt
# chown nova /etc/pki/tls/private/nova.key

Find and delete the nova endpoint:

# keystone endpoint-list|grep 8774
# keystone endpoint-delete <id>

Re-create the endpoint with the FQDN and https:

# keystone endpoint-create --publicurl "" --adminurl "" --internalurl "" --service nova
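The URLs were again lost in formatting; each would look roughly like this (hypothetical FQDN):

```
https://server.example.com:8774/v2/%(tenant_id)s
```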

Edit nova.conf, in the [DEFAULT] section:
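The option block did not survive here; based on the Nova options of that era, native SSL was typically enabled with something like the following — verify the option names against your release:

```
enabled_ssl_apis = osapi_compute
ssl_cert_file = /etc/pki/tls/certs/nova.crt
ssl_key_file = /etc/pki/tls/private/nova.key
```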


Restart the Nova API service:

# service openstack-nova-api restart

And finally, verify that Nova works:

# nova list

When I did this, just to be sure, I restarted the world:

# openstack-service restart

Up to this point we’ve only done some very basic validation of each service as we’ve secured them. Now for the real test, fire up a VM:

# nova boot --flavor <flavor> --image <image> ssltest

Make sure you got an address, the image came up, and you can ssh into it.

I’m working on adding this native SSL support, plus via a TLS Proxy, to devstack in bug