Additional OpenStack use cases

OpenStack has a wealth of detailed documentation available online. One of the more useful topics is the architecture and design examples, which explain both the usage scenarios and the reasoning behind how a particular scenario can be solved using OpenStack infrastructure. We are going to focus on two different edge cases when we deploy our test OpenStack, but a few things need to be said first about configuring and running an OpenStack installation.

OpenStack is a complex system that encompasses not only computing and storage but also a lot of networking and supporting infrastructure. You will notice this as soon as you see that the documentation itself is neatly divided into administration, architecture, operations, security, and virtual machine image guides. Each of these subjects is practically a topic for a book of its own, and much of what the guides cover is part experience, part best-practice advice, and part assumption based on educated guesses.

There are a couple of things that are more or less common to all these use cases. First, when designing a cloud, you must try to get all the information about possible loads and your clients as early as possible, even before the first server is booted. This will enable you to plan not only how many servers you need, but also their location, the ratio of computing to storage nodes, the network topology, energy requirements, and all the other things that need to be thought through in order to create a working solution.

When deploying OpenStack, we are talking about a large-scale enterprise solution that is usually deployed for one of three reasons:

  • Testing and learning: Maybe we need to learn how to configure a new installation, or we need to test a new computing node before we even go near production systems. For that reason, we need a small OpenStack environment, perhaps a single server that we can expand if there is a need for that. In practice, this system should be able to support probably a single user with a couple of instances. Those instances will usually not be the focus of your attention; they are going to be there just to enable you to explore all the other functionalities of the system. Deploying such a system is usually done the way we described in this chapter – using a readymade script that installs and configures everything so that we can focus on the part we are actually working on.
  • Staging or pre-production: Usually, this means that we either need to support the production team so they have a safe environment to work in, or we are trying to keep a separate test environment for storing and running instances before they are pushed into production.

    Having such an environment is definitely recommended, even if you don’t have one yet, since it enables you and your team to experiment without fear of breaking the production environment. The downside is that this installation requires an environment with some resources actually available for the users and their instances. This means we are not going to be able to get away with using a single server. Instead, we will have to create a cloud that will be, at least in some parts, as powerful as the production environment. Deploying such an installation is basically the same as a production deployment since once it comes online, this environment will, from your perspective, be just another system in production. Even if we are calling it pre-production or test, if the system goes down, your users will inevitably call and complain. This is the same as what happens with the production environment; you will have to plan downtime, schedule upgrades, and try to keep it running as best as you can.

  • For production: This one is demanding in another way – maintenance. When creating an actual production cloud environment, you will need to design it well, and then carefully monitor the system to be able to respond to problems. Clouds are a flexible thing from the user’s perspective since they offer scaling and easy configuration, but being a cloud administrator means that you need to enable these configuration changes by having spare resources ready. At the same time, you need to pay attention to your equipment, servers, storage, networking, and everything else to be able to spot problems before the users see them. Has a switch failed over? Are the computing nodes all running correctly? Have the disks degraded in performance due to a failure? Each of these things, in a carefully configured system, will have minimal to no impact on the users, but if we are not proactive in our approach, compounding errors can quickly bring the system down.

Having distinguished between a single server and a full installation in two different scenarios, we are going to walk through both. The single-server deployment will be done manually using scripts, while the multi-server deployment will be done using Ansible playbooks.

Now that we’ve covered OpenStack in quite a bit of detail, it’s time to start using it. Let’s start with some small things (a small environment to test) in order to provision a regular OpenStack environment for production, and then discuss integrating OpenStack with Ansible. We’ll revisit OpenStack in the next chapter, when we start discussing scaling out KVM to Amazon AWS.

Creating a Packstack demo environment for OpenStack

If you just need a Proof of Concept (POC), there’s a very easy way to install OpenStack. We are going to use Packstack as it’s the simplest way to do this. Using Packstack on CentOS 7, you can have OpenStack configured in 15 minutes or so. It all starts with a simple sequence of commands:

yum update -y
yum install -y centos-release-openstack-train
yum update -y
yum install -y openstack-packstack
packstack --allinone
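The --allinone switch accepts all of Packstack's defaults. If you want to tweak the deployment instead, Packstack can first write its options out with --gen-answer-file, which you edit and then feed back in with --answer-file. The sketch below uses a hand-made sample file with a few representative options (the real generated file is much longer, and the values shown here are examples, not defaults you should rely on):

```shell
# Generate the full answer file on a real system (requires packstack installed):
#   packstack --gen-answer-file=/root/answers.txt
# A few representative lines from such a file, for illustration:
cat > /tmp/answers.txt <<'EOF'
CONFIG_PROVISION_DEMO=y
CONFIG_HEAT_INSTALL=n
CONFIG_NTP_SERVERS=
EOF

# Edit options in place before deploying, for example to skip the demo
# project and point the nodes at an NTP server:
sed -i 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' /tmp/answers.txt
sed -i 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=0.pool.ntp.org/' /tmp/answers.txt

grep '^CONFIG_PROVISION_DEMO' /tmp/answers.txt

# Then run the customized deployment:
#   packstack --answer-file=/tmp/answers.txt
```

This is worth doing even for a demo box, because re-running packstack with the same answer file gives you a reproducible installation.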

As the process goes through its various phases, you’ll see various messages, such as the following, which are quite nice as you get to see what’s happening in real time with a decent verbosity level:

Figure 12.7 – Appreciating Packstack's installation verbosity


After the installation is finished, you will get a report screen that looks similar to this:

Figure 12.8 – Successful Packstack installation


The installer has finished successfully, and it gives us a warning about NetworkManager and a kernel update, which means we need to restart our system. After the restart and checking the /root/keystonerc_admin file for our username and password, Packstack is alive and kicking and we can log in by using the URL mentioned in the previous screen’s output (http://IP_or_hostname_where_PackStack_is_deployed/dashboard):
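The same keystonerc_admin file also gives you command-line access: it is just a shell snippet that exports the OS_* environment variables the OpenStack clients read. The sketch below uses hypothetical contents written to /tmp (on a real deployment you would simply `source /root/keystonerc_admin` and the password and addresses would be your own):

```shell
# Hypothetical contents of /root/keystonerc_admin (your values will differ):
cat > /tmp/keystonerc_admin <<'EOF'
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='3f9c2a1b8d7e4f60'
export OS_AUTH_URL=http://192.168.122.10:5000/v3
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
EOF

# Source it and the openstack CLI picks up the credentials automatically
source /tmp/keystonerc_admin
echo "CLI will authenticate as ${OS_USERNAME} against ${OS_AUTH_URL}"

# With the variables set, commands like this work without extra flags:
#   openstack service list
```

The OS_PASSWORD value in that file is also what you use to log in to the dashboard as admin.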

Figure 12.9 – Packstack UI


There’s a bit of additional configuration that needs to be done, as noted in the Packstack documentation. If you’re going to use an external network, you need a static IP address configured without NetworkManager, and you’ll probably want to either configure firewalld or stop it altogether. Other than that, you can start using this as your demo environment.
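As a sketch of what the static-address part of that looks like on CentOS 7, here is a hypothetical interface configuration file (the interface name eth0 and all addresses are example values; these files live under /etc/sysconfig/network-scripts/):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none        # static addressing, no DHCP
NM_CONTROLLED=no      # keep NetworkManager away from this interface
IPADDR=192.168.122.10
PREFIX=24
GATEWAY=192.168.122.1
DNS1=192.168.122.1
```

The NM_CONTROLLED=no line is the key piece: it tells NetworkManager to leave the interface alone so that the classic network service manages it, which is what the Packstack external-network setup expects.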
