Introduction to OpenStack

In its own words, OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources in order to provide all the essential services for Infrastructure-as-a-Service (IaaS) and orchestration.

But what does this mean? OpenStack is designed to control all the resources in a data center, and to provide both central management and direct control over anything that can be used to deploy its own or third-party services. Basically, for every service that we mention in this book, there is a place in the OpenStack landscape where that service is or can be used.

OpenStack itself consists of several interconnected services or service parts, each with its own set of functionalities, and each with its own API that enables full control of the service. In this part of the book, we will try to explain what the different parts of OpenStack do, how they interconnect, what services they provide, and how to use those services to our advantage.

OpenStack exists because there was a need for an open source cloud computing platform that would enable the creation of public and private clouds independent of any commercial cloud platform. All parts of OpenStack are open source and are released under the Apache License 2.0. The software was created by a large, mixed group of individuals and large cloud providers. Interestingly, the first major release was the result of NASA (a US government agency) and Rackspace Technology (a large US hosting company) joining their internal compute and storage infrastructure solutions. These components were later named Nova and Swift, respectively, and we will cover them in more detail later.

The first thing you will notice about OpenStack is its services: there is no single OpenStack service, but an actual stack of services. The name OpenStack comes directly from this concept, as it correctly identifies OpenStack as an open source stack of services that are, in turn, grouped into functional sets.

Once we understand that we are talking about autonomous services, we also need to understand that services in OpenStack are grouped by their function, and that some functions have more than one specialized service under them. We will try to cover as much as possible about the different services in this chapter, but there are simply too many of them to even mention all of them here. All the documentation and whitepapers can be found at http://openstack.org, and we strongly suggest that you consult them for anything not mentioned here, and even for things that we do mention but that could have changed by the time you read this.

The last thing we need to clarify is the naming – every service in OpenStack has its own project name and is referred to by that name in the documentation. This might, at first glance, look confusing, since some of the names are completely unrelated to the function a particular service performs in the overall project, but using names instead of official function designators is far easier once you start using OpenStack. Take, for example, Swift. Swift’s full name is OpenStack Object Store, but that name is rarely used in the documentation or in practice. The same goes for the other services or projects under OpenStack, such as Nova, Ironic, Neutron, Keystone, and over 20 other services.
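The project-name-to-function mapping can be sketched as a simple lookup table. The pairs below are the documented service types for some of the best-known projects; any given deployment will typically run only a subset of these, and the full list is much longer:

```python
# Mapping of OpenStack project (code) names to the service type
# each project provides. This is a small, illustrative subset of
# the full project list.
OPENSTACK_PROJECTS = {
    "Nova": "compute",
    "Swift": "object-store",
    "Ironic": "bare-metal",
    "Neutron": "network",
    "Keystone": "identity",
    "Glance": "image",
    "Cinder": "block-storage",
    "Heat": "orchestration",
}

def service_type(project_name: str) -> str:
    """Return the service type behind a given OpenStack project name."""
    return OPENSTACK_PROJECTS[project_name]

print(service_type("Swift"))   # object-store
```

This is exactly the kind of indirection the documentation relies on: once you know that "Swift" means the object store and "Neutron" means networking, the project names become shorthand rather than an obstacle.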

If you step away from OpenStack for a second, you need to consider what cloud services are all about. The cloud is all about scaling – in terms of compute resources, storage, networking, APIs – whatever. But, as always in life, as you scale things, you’re going to run into problems. These problems have their own names and solutions, so let’s discuss them for a minute.

The basic cloud provider scalability challenges can be divided into three groups of problems that need to be solved at scale:

  • Compute problems (compute = CPU + memory power): These are pretty straightforward to solve – if you need more CPU and memory power, you buy more servers, which, by design, means more CPU and memory. If you need a quality of service/service-level agreement (SLA) type of concept, you can introduce compute resource pools so that you can slice the compute pie according to your needs and divide those resources between your clients. It doesn’t matter whether a client is a private person or a company buying into cloud services – in cloud technologies, we call our clients tenants.
  • Storage problems: As you scale your cloud environment, things become really messy in terms of storage capacity, management, monitoring and – especially – performance. The performance side of the problem has a few commonly used metrics: read and write throughput, and read and write IOPS (I/O operations per second). When you grow your environment from 100 hosts to 1,000 hosts or more, performance bottlenecks become a major issue that is difficult to tackle without proper concepts. The storage problem can therefore be solved by adding more storage devices and capacity, but it is much more involved than the compute problem, as it needs much more configuration and money. Remember, every virtual machine has a statistical influence on the performance of the other virtual machines (the noisy neighbor effect), and the more virtual machines you have, the greater that influence becomes. This is the most difficult process to manage in storage infrastructure.
  • Network problems: As the cloud infrastructure grows, you need thousands and thousands of isolated networks so that tenant A’s network traffic can never reach tenant B’s. At the same time, you still need to offer the capability of having multiple networks per tenant (usually implemented via VLANs in non-cloud infrastructures), with routing between these networks if that’s what the tenant needs.
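The storage point above can be made concrete with some back-of-the-envelope arithmetic. All the numbers below – backend IOPS, VM density – are illustrative assumptions, not measurements, but they show how the fair per-VM share of a shared backend shrinks as the environment grows:

```python
# Illustrative only: the fair per-VM share of a shared storage
# backend as the cloud scales out. All figures are assumptions.
BACKEND_IOPS = 500_000   # assumed aggregate IOPS of the backend
VMS_PER_HOST = 20        # assumed average VM density per host

for hosts in (100, 500, 1000):
    vms = hosts * VMS_PER_HOST
    per_vm = BACKEND_IOPS / vms
    print(f"{hosts:>5} hosts, {vms:>6} VMs -> {per_vm:>6.1f} IOPS per VM")
```

Going from 100 to 1,000 hosts cuts the per-VM share by a factor of ten, and that is before accounting for the uneven, bursty access patterns that make real noisy-neighbor effects far worse than the average suggests.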

This network problem is a scalability problem rooted in the technology itself: VLANs (IEEE 802.1Q) were standardized years before anyone expected the 12-bit VLAN ID – which allows for only 4,094 usable networks – to become a scalability bottleneck.
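To see why the VLAN ID space runs out, compare it with the segment ID used by VXLAN, the overlay technology commonly used in SDN-based clouds. The arithmetic follows directly from the field widths in the respective standards:

```python
# IEEE 802.1Q VLAN tag: 12-bit VLAN ID, with IDs 0 and 4095 reserved.
vlan_id_bits = 12
usable_vlans = 2 ** vlan_id_bits - 2
print(usable_vlans)        # 4094

# VXLAN (RFC 7348): 24-bit VXLAN Network Identifier (VNI).
vni_bits = 24
vxlan_segments = 2 ** vni_bits
print(vxlan_segments)      # 16777216
```

A few thousand isolated networks is nowhere near enough for a public cloud with many tenants, each wanting several networks; the roughly 16 million VXLAN segments are what make tenant isolation workable at cloud scale.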

Let’s continue our journey through OpenStack by explaining the most fundamental subject of cloud environments: scaling cloud networking via software-defined networking (SDN). The reason for this is really simple – without SDN concepts, the cloud wouldn’t be scalable enough to keep customers happy, and that would be a complete showstopper. So, buckle up and let’s do an SDN primer.
