Oracle OpenStack for Oracle Linux Release 2 installation in Virtualbox (part 2)



I have to say that many of the issues I faced were linked to the VirtualBox installation I performed. If you install Oracle OpenStack on a real physical server (or in the non-free VMware Player product) you might not face any issue.

I say this because VirtualBox, contrary to VMware Player, does not support nested virtualization technology (Intel VMX or AMD-V), and apparently this will not be implemented any time soon.

As the post would be really too long, I have decided to split it in two parts: part 1 and part 2.

Oracle OpenStack preparation steps

The first thing I did was to configure the mandatory network for my instances. I chose the same range as my virtual machines, with a DHCP allocation around the beginning of the possible IP addresses:


To finally get the network configuration below:


The second step is to download OpenStack images from

Via the web interface, I downloaded and uploaded to my OpenStack server the Cirros image for rapid testing and the Fedora 23 image that I know well:


Launch instances

Once you have loaded an image you can create an instance of it directly from the image page. I tested it with the Cirros image, which is by design a lightweight testing Linux distribution. Choose a name for your instance and associate a flavor. A flavor is the system allocation given to your instances (root disk space, cores and memory); you can obviously create as many as you like. I chose the tiny one, which suits Cirros perfectly:


I left everything else at its default value except the network, without which your instance cannot be created:


But unfortunately it did not go well:


In clear text I got the error message below:


No valid host was found. There are not enough hosts available.
  File "/usr/lib/python2.7/site-packages/nova/conductor/", line 671, in build_instances
    request_spec, filter_properties)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/", line 337, in wrapped
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/", line 52, in select_destinations
    context, request_spec, filter_properties)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/", line 37, in __run_method
    return getattr(self.instance, __name)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/client/", line 34, in select_destinations
    context, request_spec, filter_properties)
  File "/usr/lib/python2.7/site-packages/nova/scheduler/", line 120, in select_destinations
    request_spec=request_spec, filter_properties=filter_properties)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/", line 156, in call
    retry=self.retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/", line 90, in _send
    timeout=timeout, retry=retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/", line 350, in send
    retry=retry)
  File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/", line 341, in _send
    raise result

At that point you have to dig a bit inside Docker, and reading the official Docker documentation can be useful. My Docker container list is the following; the container to debug for the failed instance is nova_compute:

[root@server1 ~]# docker ps
CONTAINER ID        IMAGE                                                                         COMMAND                  CREATED             STATUS              PORTS                    NAMES
2c73c27aefcc                "/"              2 days ago          Up 47 hours                                  nova_compute
bcc4123352f9            "/"              2 days ago          Up 2 days                                    mysqlcluster_api
eab9c687c27a                  "/"              8 days ago          Up 2 days                                    murano_api
f5e4210493ec               "/"              8 days ago          Up 2 days                                    murano_engine
240d3e54c1ae                     "/"              8 days ago          Up 2 days                                    horizon
76c359cdb204                 "/"              8 days ago          Up 2 days                                    heat_engine
bcd0493c4cff                "/"              8 days ago          Up 2 days                                    heat_api_cfn
b51c03e8317f                    "/"              8 days ago          Up 2 days                                    heat_api
fde6baa590a9               "/"              8 days ago          Up 2 days                                    cinder_volume
23b64b04f2c0            "/"              8 days ago          Up 2 days                                    cinder_scheduler
6a002a027c79               "/"              8 days ago          Up 2 days                                    cinder_backup
c0b5b6dafa42                  "/"              8 days ago          Up 2 days                                    cinder_api

I attached to nova_compute to try to debug a bit:

[root@server1 ~]# docker attach nova_compute
2016-02-29 16:04:24.609 1 ERROR nova.virt.libvirt.driver [req-6ebd375d-c462-48e1-90aa-caf0540c1034 19b412645f174ae3a0318995906541d5 d3629db6f64a4127b1c9bb6e2ebb063c - - -] Error defining a domain with XML: <domain type="kvm">
    <nova:instance xmlns:nova="">
      <nova:package version="2015.1.3"/>
      <nova:creationTime>2016-02-29 16:04:22</nova:creationTime>
      <nova:flavor name="m1.tiny">
        <nova:user uuid="19b412645f174ae3a0318995906541d5">admin</nova:user>
        <nova:project uuid="d3629db6f64a4127b1c9bb6e2ebb063c">admin</nova:project>
      <nova:root type="image" uuid="06953a9c-54f4-4b89-809c-5220e9d33e35"/>
  <sysinfo type="smbios">
      <entry name="manufacturer">OpenStack Foundation</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2015.1.3</entry>
      <entry name="serial">7ab3c2a9-7e82-451d-9866-6d5695d252be</entry>
      <entry name="uuid">5e0a6963-d585-4a77-8ca8-3cf8ee5088a9</entry>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="hpet" present="no"/>
  <cpu mode="host-model" match="exact">
    <topology sockets="1" cores="1" threads="1"/>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/5e0a6963-d585-4a77-8ca8-3cf8ee5088a9/disk"/>
      <target bus="virtio" dev="vda"/>
    <interface type="bridge">
      <mac address="fa:16:3e:36:7d:b8"/>
      <model type="virtio"/>
      <source bridge="qbr57f672a8-e9"/>
      <target dev="tap57f672a8-e9"/>
    <serial type="file">
      <source path="/var/lib/nova/instances/5e0a6963-d585-4a77-8ca8-3cf8ee5088a9/console.log"/>
    <serial type="pty"/>
    <input type="tablet" bus="usb"/>
    <graphics type="vnc" autoport="yes" keymap="en-us" listen=""/>
      <model type="cirrus"/>
    <memballoon model="virtio">
      <stats period="10"/>
2016-02-29 16:04:24.610 1 ERROR nova.compute.manager [req-6ebd375d-c462-48e1-90aa-caf0540c1034 19b412645f174ae3a0318995906541d5 d3629db6f64a4127b1c9bb6e2ebb063c - - -] [instance: 5e0a6963-d585-4a77-8ca8-3cf8ee5088a9] Instance failed to spawn

Looks like the error I have already seen, where KVM cannot be used inside a virtual machine because nested virtualization technology (Intel VMX in my case) is not available…
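A quick way to confirm this (a generic Linux check, not specific to Oracle OpenStack) is to count the hardware-virtualization CPU flags the guest can see; on a VirtualBox guest the count comes back as 0 because VMX/SVM are not exposed:

```shell
# Count the hardware-virtualization CPU flags exposed to this machine.
# 0 means KVM cannot work here; only plain QEMU emulation is possible.
grep -Ec '(vmx|svm)' /proc/cpuinfo
```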

I investigated a bit in the nova.conf file of my nova_compute container:

[root@server1 ~]# docker exec -ti nova_compute tail /etc/nova/nova.conf
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = password
virt_type = kvm

My idea is to change the last line and replace virt_type = kvm with virt_type = qemu. Why QEMU? Mainly because this hypervisor does not need hardware virtualization technology, with an obvious performance drawback.

After a few tests I exported the running environment of the nova_compute container created by Oracle:

[root@server1 ~]# docker inspect nova_compute > /tmp/nova_compute.txt
[root@server1 ~]# head /tmp/nova_compute.txt
    "Id": "2c73c27aefcc3538613418427d82f6204a71440d1ef045436ff409179783524b",
    "Created": "2016-03-02T11:22:51.438767627Z",
    "Path": "/",
    "Args": [],
    "State": {
        "Status": "running",
        "Running": true,
        "Paused": false,


[root@server1 ~]# docker inspect -f "{{ .Config.Env }}" nova_compute
[KOLLA_CONFIG_STRATEGY=COPY_ALWAYS PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HTTPS_PROXY= https_proxy= HTTP_PROXY= http_proxy= PIP_TRUSTED_HOST= PIP_INDEX_URL= KOLLA_BASE_DISTRO=oraclelinux KOLLA_INSTALL_TYPE=source]

I connected to the container by running bash:

[root@server1 ~]$ docker exec -ti nova_compute bash

Then I edited the /etc/nova/nova.conf file with vi. You can check that the change has been made, after exiting the container, with:

[root@server1 ~]# docker exec -ti nova_compute tail /etc/nova/nova.conf
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = password
#virt_type = kvm
virt_type = qemu

Once the modification was effective I committed the change and created a new image inside my local registry (keeping the same name):

[root@server1 ~]# docker commit nova_compute
[root@server1 ~]# docker images |grep compute
                    0b2d98b31ec9        3 seconds ago       1.678 GB
                 2.0.2               d7df5396a5a7        11 weeks ago        1.678 GB
oracle/ol-openstack-nova-compute                                         2.0.2               d7df5396a5a7        11 weeks ago        1.678 GB

The idea is then to start a nova_compute-like container using the newly created image (notice the command to be executed at startup). After many unsuccessful tests I discovered I had to bind filesystems between the main OS and the container; these bind mounts are not carried over by the docker commit command to the newly created image:

[root@server1 ~]# docker run --detach  --volume="/dev:/dev:rw" --volume="/run:/run:rw" --volume="/etc/kolla/nova-compute/:/opt/kolla/nova-compute/:ro" \
--volume="/lib/modules:/lib/modules:ro" --hostname="" --name nova_compute_new oracle/ol-openstack-nova-compute: "/"

But when displaying /etc/nova/nova.conf the chosen hypervisor was still kvm and not qemu, so my modification had not been taken into account…

I investigated the / command run at container startup (so inside the nova_compute container):

[root@server1 ~]# docker exec -ti nova_compute bash
[root@server1 /]# cat /
set -o errexit
# Loading common functions.
source /opt/kolla/
# Execute config strategy
# Load nbd explicitly
modprobe nbd
exec $CMD $ARGS
[root@server1 /]# cat /opt/kolla/
set_configs() {
            source /opt/kolla/
            if [[ -f /configured ]]; then
                echo 'INFO - This container has already been configured; Refusing to copy new configs'
                source /opt/kolla/
                touch /configured
            echo '$KOLLA_CONFIG_STRATEGY is not set properly'
            exit 1

COPY_ALWAYS is the option chosen to run the nova_compute container (see the docker inspect command above), so /opt/kolla/ is sourced as well:

[root@server1 /]# cat /opt/kolla/
if [[ -f "$SOURCE" ]]; then
    chown ${OWNER}: $TARGET
    chmod 0644 $TARGET

Here is the trick: the file is overwritten at each restart from /opt/kolla/nova-compute/nova.conf, which in fact lives on my main host thanks to the bind mounts used to run the nova_compute container!!! So the file to modify was simply /etc/kolla/nova-compute/nova.conf on the main host running Docker; then restart the nova_compute container and confirm that /etc/nova/nova.conf has been updated as well (you can also delete the newly created image as it is not needed anymore):

[root@server1 ~]# tail /etc/kolla/nova-compute/nova.conf
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = password
#virt_type = kvm
virt_type = qemu
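The whole fix can therefore be done from the host. A small sketch, assuming the same host-side path as above (the set_virt_type_qemu helper name is mine, purely illustrative):

```shell
# Hypothetical helper: switch virt_type from kvm to qemu in a
# nova.conf-style file passed as the first argument.
set_virt_type_qemu() {
    sed -i 's/^virt_type = kvm/virt_type = qemu/' "$1"
}

# On the host (paths from my setup), then restart so Kolla's
# COPY_ALWAYS strategy copies the new config into the container:
#   set_virt_type_qemu /etc/kolla/nova-compute/nova.conf
#   docker restart nova_compute
#   docker exec nova_compute grep '^virt_type' /etc/nova/nova.conf
```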

And this time my Cirros instance started successfully:


As written on the CirrOS image console, the login account is cirros and the password is cubswin:). Typing it was a challenge between the default keyboard layout and my French keyboard, so I used the Alt key plus codes from the ASCII table to type the : and ) characters. My first action was to change the password and confirm that root sudo was working:


