Howto: Set up a jekyll-based gh-pages site

I’m part of the team developing WebVirt, a web-based graphical libvirt aggregator. We decided to take advantage of GitHub Pages’ built-in support for Jekyll, a popular Ruby-based static site generator. Here is (roughly) how the process went:

Step 1: Create your directory structure

When run, Jekyll crawls the directory structure you specify and generates a website from it. By creating subfolders that correspond to “categories” of articles, a clearer picture begins to emerge:

.
├── _layouts
├── _includes
|
├── js
├── css
|
├── architecture
│   └── _posts
│       
├── install
│   └── _posts
|
├── reference
│   ├── managerapi
│   │   └── _posts
│   ├── managerconfig
│   │   └── _posts
│   ├── nodeapi
│   │   └── _posts
│   └── nodeconfig
│       └── _posts
├── requirements
│   └── _posts
└── userguide
    └── _posts

Each category I intend to document has a subfolder called _posts that stores the copy (think print copy) I will be displaying. The _layouts folder at the top of the tree holds repeated page structures, while _includes holds reusable code snippets.
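
As a quick sketch, the whole structure above can be scaffolded in one go (the paths are taken straight from the tree):

# Scaffold the site structure shown above
$> mkdir -p _layouts _includes js css
$> mkdir -p architecture/_posts install/_posts requirements/_posts userguide/_posts
$> mkdir -p reference/managerapi/_posts reference/managerconfig/_posts
$> mkdir -p reference/nodeapi/_posts reference/nodeconfig/_posts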

Step 2: Set up YAML metadata

Using the Liquid templating system, an HTML shell can be created as a layout, with special Liquid syntax indicating where content goes:

<!DOCTYPE html>
<html>
  <head>
    <title>WebVirt Documentation - {{page.title}}</title>
  </head>
  <body data-spy="scroll"  data-offset="25" data-target=".sidebar-nav">
    <div id="virshmanagerapp">
      <div id="main" class="container">
        <div class="row">

            {{ content }}

          <!-- Footer -->
          <footer>
            <div class="row-fluid">
              <div class="span12">
                <hr>
                <p class="muted">Designed and Maintained by <abbr title="Centre for the Development of Open Technology @ Seneca College">CDOT</abbr> | Built on {Bootstrap.js Jekyll}</p>
              </div>
            </div>
          </footer>
          <script src="/js/plugins/jquery-1.9.1.min.js"></script>
          <script src="/js/plugins/underscore.js"></script>
          <script src="/js/plugins/toastr.min.js"></script>
          <script src="/js/plugins/bootstrap.min.js"></script>
        </div>
      </div>
    </div>
  </body>
</html>

In addition to a _posts folder, each content category contains an “index.html” file that is loaded when the directory is accessed. This index.html file uses YAML front matter to indicate a few things to Jekyll – mainly which layout to use:

---
layout: default
title: Node API
---

{% include topnav.html %}

<!-- HTML HERE -->

The layout: default line indicates that everything below the closing ‘---’ will replace {{ content }} in the layout shown earlier. The {% include topnav.html %} line pulls in an HTML snippet from _includes, and makes maintaining common code laughably easy!

Step 3: Set up posts

In order for the posts to be picked up properly, the markdown files must be saved with a specific filename format:

YYYY-MM-DD-this-is-the-title.markdown

where YYYY is the year, MM the month and DD the day. After creating these files in the appropriate _posts folder, we add YAML front matter at the top of each markdown file for the Liquid templating system:

---
title: example1
---

## Markdown goes here
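
For example, a post for the Node API reference could be created like this (the date and title below are made up):

# Hypothetical post in the nodeapi category (lives under reference/nodeapi/_posts)
$> touch reference/nodeapi/_posts/2013-04-01-example1.markdown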

Step 4: Use Liquid to display content

On the index.html page for each category, we can use Liquid’s tag syntax to perform basic operations on the template variables Jekyll uses to store data programmatically:

---
layout: default
title: Node API
---
<div class="row">
  <div class="span2">
    <!-- Navigation -->
    <div class="well sidebar-nav affix">
      <ul class="nav nav-list"> 
        {% for post in site.categories.nodeapi %}
          <li><a href="#{{ post.title }}">{{ post.title }}</a></li>
        {% endfor %}
      </ul>
    </div>
  </div>
  <div class="span10">
    <div id="heading"></div>
    <div class="row">
      <div data-offset="25" class="span10">
	{% for post in site.categories.nodeapi %}
	  <article class="well post" id="{{ post.title }}">
	    {{ post.content }}
	  </article>
	{% endfor %}
      </div>
    </div>
    <div id="pagination" class="pagination pagination-right"></div>
  </div>
</div>

By iterating through each post, we can display information in a variety of ways. It becomes very simple to create an API reference. For example, we are using this markdown template:

---
title: example1
---

## API Call `call\format` ##

### Purpose ###

### Usage ###

### Return ###

### Snippets ###

All we have to do is fill in the information, save the files with the appropriate names in the appropriate places, and the site will generate itself!
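
As a rough sketch of the last mile (assuming the repository already exists on GitHub, with the usual “origin” remote), previewing locally and publishing to Pages looks something like this:

# Preview the generated site locally ("jekyll --server" on pre-1.0 versions)
$> gem install jekyll
$> jekyll serve
# For a project site, GitHub Pages builds whatever lands on the gh-pages branch
$> git checkout -b gh-pages
$> git add .
$> git commit -m "Add Jekyll documentation site"
$> git push origin gh-pages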

Brilliant. Next post will be an accounting of some common bugs I ran into!

Tutorial: Readme Construction Part 1

I’m designing the readme for our Webvirsh app, and I thought I would document the process of me… documenting stuff. I can’t say I’ve done this before, but there is a certain method to my madness.

Step 1: Define the purpose of the document, then split into logical divisions

By identifying the need that caused us to require a document like this, we can define what function it serves. In this case it’s very simple: software solutions to real-world problems are complex, often requiring specialized knowledge to properly set up and operate.

A good manual that provides detailed information on all aspects of our software could allow someone to become an expert on the software in a short space of time. Ideally, it would also serve as a reference, meaning that someone looking for a specific piece of information could easily find it.

Our software has five logical sections to it: Installation requirements, installation instructions, software functionality, software architecture and supporting technologies (like APIs). Therefore, our readme’s purpose is to clearly define:

  • The prerequisites for the software’s optimal operation
  • The installation & configuration processes
  • The functionality of the app itself
  • The architecture powering it
  • The APIs created to support its operation

Step 2: Investigate each division, taking note of information necessary to document it properly

Software prerequisites

Our software has three kinds of requirements:

  1. Software
  2. Hardware
  3. Network

Software

  1. Supported OS + OS Versions
  2. Critical dependencies

Hardware

  1. CPU/Memory/Storage Requirements
  2. Number of physical nodes required

Network

  1. All nodes must have a direct route to the manager and back

Installation & Configuration

Installation Procedure

The procedure may be slightly different for the node installation and the manager installation. Basic details required:

  • What information must the user collect before installation starts? (e.g. subnet ranges, etc)
  • How to get the software + prerequisites (e.g. git, ssl headers)
  • How to configure the installation (setting any pre-install configuration)
  • How to install the server of choice (node vs manager)
  • How to quickly identify installation problems (troubleshooting)

Configuration

A full reference of all configuration details would be very useful. Basic info required:

  • Which configuration files exist, and what they configure
  • What each setting in the files does, and what its options are
  • Common issues that could be encountered (troubleshooting)

App Functionality

Key details:

  • What features does it have?
  • How does the user use each feature?
  • Site-map/use-case chart

Architecture

Key details:

  • What is the network flow of each major function?
  • What ports, layers and technologies underlie each function?
  • What software architecture was the application built on?
  • What system & NPM packages are used?

APIs

  • How many different APIs are being used?
  • What does each call look like? What do they return?
  • How is each call transported?

Step 3: Build an outline of the entire document

Because we know what information we need to document each section, we also know what information is going to be displayed in them! At this point, it should be possible to construct an approximation of the structure of the final document.

Overview                 - h1

Table of Contents        - h1
...

Section 1: Prerequisites - h1
 Hardware                 - h2
  ...
 Software                 - h2
  ...
 Network                  - h2
  ...

Section 2: Installation
      & Configuration    - h1
 Common Configuration     - h2
  ...

 Node Setup               - h2
  Node Configuration       - h3
   ...
  Node Installation        - h3
   ...

 Manager Setup            - h2
  Manager Configuration    - h3
   ...
  Manager Installation     - h3
   ...

Section 3: Functionality - h1

 Feature 1: Node 
            Management    - h2
  Adding hosts (auto)      - h3
   ...
  Adding hosts (manual)    - h3
   ...
  Common Issues            - h3
   ...

 Feature 2: Dashboard     - h2
  Host information         - h3
   ...
  Viewing Instances        - h3
   ...
  Instance Actions         - h3
   ...

 Feature 3: Server Logs   - h2
  Filtering Logs           - h3
   ...
  Common Issues            - h3
   ...

Section 4: Architecture  - h1

 Server-side Technology   - h2
  Overview                 - h3
   ...
  Node                     - h3
   ...
  Redis                    - h3
   ...

 Client-side Technology   - h2
  Overview                 - h3
   ...
  Backbone.js              - h3
   ...
  Bootstrap.js             - h3
   ...
  Toastr.js                - h3
   ...

 Server-side Architecture - h2
  Node                     - h3
   ...
  Manager                  - h3
   ...

 Client-side Architecture - h2
  Backbone                 - h3
   ...
  Bootstrap                - h3
   ...

 Network Architecture     - h2
  Routing                  - h3
   ...
  Ports                    - h3
   ...
  NMAP                     - h3
   ...

Section 5: Reference     - h1

 Node API                 - h2
  ...

 Manager API              - h2
  ...

 Node Configuration       - h2
  ...

 Manager Configuration    - h2
  ...


Section 6:
        Troubleshooting  - h1
 Installation             - h2
  Node                     - h3
   ...
  Manager                  - h3
   ...
  General                  - h3
   ...

 Configuration            - h2
  Node                     - h3
   ...
  Manager                  - h3
   ...
  General                  - h3
   ...

 Networking               - h2
  Node                     - h3
   ...
  Manager                  - h3
   ...
  General                  - h3
   ...

Section 7: Appendices    - h1
  ...

Conclusion

The goal of the project was to provide a set of APIs that would allow a cloud administrator to remotely access aggregated data about virtual machines running on their hardware, as well as send commands directly to those virtual machines. Our software’s usefulness could vary, depending on who’s looking at it, but the simplicity of its operation was a design goal from the beginning. This means that only a small amount of explanation is needed to understand how to use it, so the focus must be on providing a useful resource.

So far, I’ve determined how this readme is going to be used, what information it needs to be useful, and a basic structure for presenting that information in a clear and useful way. The next step is, of course, collecting the information. Then, it’s down to the iterative process of writing the readme copy and refining it. This will be detailed in a second blog post later this week.

WebVirsh: Migrating Virtual Machines using Libvirt

(See end of post for sources)

Libvirt, and some hypervisors, have the ability to migrate virtual machines from one host to another. I find this extremely impressive, so I’d like to go over the basic concepts of this process. Finally, I will conclude with a description of Libvirt’s migration capabilities in some detail.

Migration

Migration is a three-step process.

FIRST, the virtual machine is suspended, at which point the state of all of its running applications and processes is saved to a file. These files can be referred to as snapshots, since they store information about a VM’s activities at a particular point in time.

SECOND, the snapshot is transferred to the destination host, along with the VM details (usually in the form of an XML file). These details provide information necessary to properly emulate the VM, like the kind of hardware being emulated.

THIRD, the virtual machine is resumed on the destination machine, which constructs an emulation of the hardware from VM details and loads the snapshot into memory. At this point, all network connections will be updated to reflect the new MAC address associated with the VM’s virtual network interfaces and their associated IPs.
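
Libvirt exposes each of these steps directly, so a rough manual equivalent looks something like this (host names and paths are placeholders, and it assumes the VM’s disk image is reachable at the same path on the destination):

# 1. Suspend the VM and save its state to a file, plus export its details as XML
$> virsh save myInstance /tmp/myInstance.state
$> virsh dumpxml myInstance > /tmp/myInstance.xml

# 2. Transfer the snapshot and the VM details to the destination host
$> scp /tmp/myInstance.state /tmp/myInstance.xml destinationHost:/tmp/

# 3. On the destination: define the VM from its XML, then resume it from the saved state
$> virsh define /tmp/myInstance.xml
$> virsh restore /tmp/myInstance.state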

Live Migration

Live migration is the holy grail of virtualization technology, and solves the problem of High Availability in many cases. When live migration is possible, it means that an administrator can change which physical machine is hosting a VM without interrupting the VM’s operations for more than 60-100 milliseconds. Quite a feat! And very useful for balancing load and energy costs without affecting whatever service the VM was providing.

The steps are similar to static migration, but involve some serious trickery when migrating four key components:

  1. CPU state: Required for process migration, so as not to interrupt/corrupt execution.
  2. Storage content: Required for persistent storage access.
  3. Network connections: Required for network activity migration, so as not to interrupt the transport layer of the VM’s network stack.
  4. Memory content: Required for RAM migration. The trickiest part of them all, because the VM is likely to continue to modify its memory even as the migration is occurring.

CPU

As fascinated as I am by the idea, this is too complex a topic to research and explain here.

Storage

Because transferring a virtual hard disk can be time consuming (50-120 seconds, possibly more), cloud providers have side-stepped the problem by having all hosts use a shared storage pool mounted on each host in an identical manner. This way, the transfer can be skipped completely and all machines connected to the pool become (from a storage standpoint) potential hosts.
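
As an illustration, an NFS-backed pool could be defined identically on every host with something along these lines (the server name and paths are made up):

# Define, start and autostart an identical NFS storage pool on each host
$> virsh pool-define-as vmpool netfs --source-host nfs.example.com \
     --source-path /export/vmimages --target /var/lib/libvirt/images
$> virsh pool-start vmpool
$> virsh pool-autostart vmpool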

Network

If the source and destination nodes are on the same subnet, it is as easy as updating the MAC address of the IP associated with the VM’s virtual interface and sending an ARP broadcast to ensure all other machines on the network are aware of the change. If the machines are on separate subnets, there is no way to accomplish this without severely degrading performance.

RAM

For a live migration, the entirety of the VM’s memory (in the form of multiple memory ‘pages’) is copied to the destination host. Memory pages will be repeatedly copied as the VM makes changes to pages on the source host. When the rate of the copying matches the rate of the dirtying, leaving a small number of dirty pages left to be copied, the VM is suspended on the source host. Immediately after suspension, the final “dirty” pages are copied to the destination and the VM is resumed on the new machine. The time between suspension on the source machine, and resumption on the destination machine is trivial with this method – from a few milliseconds to a second or two depending on the size of the paging files. Though the entire process may take longer, the downtime is measured in between these two specific events.

Libvirt Migration

In the same way that Libvirt provides a standard method of sending commands to VMs hosted on a range of hypervisors, there is a standard Libvirt command for migrating a VM from one host to another:

# The most basic form
$> virsh migrate (flags) instanceName destinationURI

In this case, the machine running the command is considered the client, the machine hosting the VM is considered the source and the machine being migrated into is considered the target. In this command, because only the destination URI is specified, Libvirt assumes the client is also the source.
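
A slightly more concrete (and hypothetical) example, run from the source host with an SSH connection to the target:

# Live-migrate "myInstance" from the local host to the host "target01"
$> virsh migrate --live myInstance qemu+ssh://target01/system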

Prerequisites

Migration, live or otherwise, has basic requirements for the source and target machines. Without meeting these, migration will simply fail. Broadly speaking, both machines must have hardware supporting the VM’s emulated environment. Usually this means a similar CPU and a compatible motherboard. The other major requirement is identical storage & networking setups on both machines. On the storage side, the path of the VM’s image must be the same on both hosts. On the network side, all bridged connections to hardware interfaces must be named identically. Ideally, both machines would have identical components; at a minimum, the networking, storage and basic hardware must match.
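
Libvirt can help with the CPU side of this check. A minimal sketch (the file name is arbitrary):

# On the source host, save the <cpu> element printed by "virsh capabilities"
# into a file (here, source-cpu.xml), then on the target host run:
$> virsh cpu-compare source-cpu.xml
# virsh reports whether the target CPU is incompatible, identical or a superset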

Options

To ensure security, Libvirt can use its daemons on each host to tunnel all of the data transferred during the migration. This is not the default method of migration. A tunnelled connection also requires an explicit peer-to-peer declaration – that is, the P2P flag must be enabled as well, making the command look like this:

# P2P, Tunneling enabled
$> virsh migrate --p2p --tunneled instanceName destinationURI

Conclusion

All in all, this is one impressive piece of technology. Libvirt makes the process quite easy, and offers further options, such as letting the hypervisor manage the migration instead of Libvirt itself. See the man page (linked below) for details.

Sources:
Pradeep Padala’s Blog
Virsh MAN Page

CDOT’s VirshManager API: Application Architecture

No, Watson, this was not done by accident, but by design.
— Sherlock Holmes

It’s no small task to create a plan for something entirely consistent with itself.  This really is the definition of good design: consistency.  So, with that in mind, my research partner and I began planning how we were going to take a problem – “We need an API for interacting with Libvirt on multiple hosts!” – and come up with a solution.

First: Break it down

We needed a way to:

  • Easily detect libvirt-hosts on a network, and store their IP addresses
  • Directly communicate with those libvirt instances to check on the health and status of their guest VMs
  • Separate this API from other applications and platforms running concurrently to ours

Identifying the Libvirt Hosts

Our first thought was to create a daemon that would be installed on each libvirt host in the cluster.  This way, RESTful calls could be made to the daemon, which would then run the relevant virsh command.  This presented the problem of access though, meaning one would have to have direct access to the local network of the libvirt-hosts in order to use the API.

To solve this problem, we decided we would have to write a web-server program to act as the host of whatever interface the client wanted for the API. So long as the interface host was on the same network as the libvirt hosts, it would have a path to them. Also, if the user chose to connect the interface host to a public network as well, an admin could use the API even if they were woken up in the middle of the night by an emergency VM failure.

On a practical level, this led to another problem: How would the web-server know which IPs on the network were libvirt hosts? We had three options:

  1. Manually enter each libvirt host IP into the interface’s configuration
  2. Manually enter the interface-server’s IP address in the configuration file of the daemon on each libvirt host
  3. Automate the process in some way

We chose number 3, and here’s how we did it:

The Daemon (Or, a Story of Nmap-ing)

First, we had to make it so that the daemon would be listening for API calls through a TCP port that wasn’t used by any other application on any of the machines in the cluster.  This could be set to an obscure default and/or defined at install-time.

Second, we theorized that all the interface-server would have to do is test each IP on the cluster network to see if that particular port was open on whichever IP it was testing – if it was, the daemon was installed and it was hosting virtual machines.  If not, it was an active host on the cluster that had another function.  Either way, this would automate the discovery of the nodes our API was to manage.

With this approach, no special configuration was needed for the daemons other than to specify which port to keep open, and even then only if the default was unavailable.

The Crawler

We decided to write a program, the Crawler, that would use the extremely powerful NMAP linux utility to quickly test a client-specified CIDR range in the manner described above.  But in a huge cluster of possibly hundreds (or more) machines, could this cause network congestion?  We weren’t sure, but just in case we split the Crawler’s function into four:

  1. Scan the entire network, recording IPs of hosts running libvirt, and hosts that were not.
  2. Scan the remainder of the CIDR range that was not originally found to contain any active IPs for new hosts of either kind
  3. Scan the hosts that were not running libvirt to see if there was a change
  4. Periodically probe the libvirt-hosts the Crawler had discovered to ensure there was an active connection with the daemon

Though we may have been overthinking it, we figured that this would cover every possible problem arising from this automated system.
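
The first of those scans, for instance, boils down to a single Nmap invocation (the port number and CIDR range here are just examples):

# List the hosts in the range that have the daemon's port open
$> nmap -p 8080 --open -oG - 192.168.100.0/24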

The Agent

The interface-server would need a way to transmit calls to the daemons of specific hosts, and receive data back from those calls.  For this, we conceived of the Agent, which acts as a proxy for calls from the user interface to the daemons, and results from the daemons to the user interface.
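
From the client’s point of view, a call proxied through the Agent might look something like this (the endpoint path is purely illustrative, not our final API):

# Ask the interface-server for the instances on one libvirt host; the Agent
# relays the request to that host's daemon and returns the daemon's response
$> curl http://interface-server:8080/hosts/192.168.100.12/instances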

The Interface

The final piece of the conceptual puzzle was a way for the client to actually use this RESTful API in a meaningful and streamlined fashion.  Seeing as we already had an interface-server to allow for external access to the interface, a web-based application seemed to make the most sense.

I began the design process yesterday, and began developing the interface this afternoon.  You can read about it here.

The Details

We now needed:

  1. A way for the web-server to access and store vital information without unreasonably increasing the resource footprint of the application
  2. A way to encapsulate a web-server and the scripts that make it do what we need it to do

For the first point, Diogo discovered Redis – a noSQL database that uses RAM to store its data and calculations while the computer is on.  He’s written a post about it here.
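
As a tiny illustration (the key name is made up), the Crawler’s findings could be stored and read back like so:

# Record a discovered libvirt host, then read the full set back
$> redis-cli SADD libvirt:hosts 192.168.100.12
$> redis-cli SMEMBERS libvirt:hosts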

For the second, we decided to use Node.JS to write a robust, efficient and portable web server.  This will likely be detailed in a later post.

Conclusion

Now the work begins! Diogo has been furiously writing the logic for our server (using Node.JS) while I’ve been busy developing the user interface.

Expect more to come!

VirshManager: Web Interface

Our research work has taken a more distinct form in the last few weeks.  Our focus has solidified into a server/daemon API that can be put onto any network of cloud virtualization servers and keep track of them, completely independently from the cloud platform in use.  This allows administrators to have a reliable tool for monitoring VMs and hypervisors without fearing that the Cloud Platform is causing the problems.

For a high level overview of how our software directly interacts with VIRSH and keeps track of hypervisors and instances, check out my post on the subject.  We’re quite proud of this work!

However, an API is useless without a means of interacting with it, and because the technology we’re using to power the functionality of the API is server based, we decided to create a web interface for the application.

I was put in charge of developing this interface, and will be using some interesting technologies to accomplish it:

Bootstrap

Originally created at Twitter, Bootstrap is a front-end toolkit of CSS and JavaScript components that makes developing functional, cross-browser, professional web designs a matter of hours instead of days.  The functionality really is quite impressive – aside from the design work (which I will document separately), implementing a proof-of-concept version of the interface was laughably straightforward.

In just a few hours, I was able to learn the API of the library, and create a dynamic, fluid-width layout for the interface – complete with collapsing areas for data display, colour-based visual cues indicating hypervisor health and more.

[Screenshots: before-and-after views of the Bootstrap-based interface]

Not bad eh?

jQuery & AJAX

For data manipulation, display and retrieval, I’ll be coding the more dynamic elements using jQuery and AJAX calls.  Most of the data the interface will display is variable, and things like font and button colours will have to be set dynamically according to the result of calling our new API to check on a virtual machine.

At this point, the design is being developed fully, after which the advanced jQuery & AJAX components can be inserted so that the interface is functional.

Stay tuned for updates, and a full overview of Bootstrap itself!

Basic OpenStack Folsom Overview (working)

System Architecture

OpenStack is built on a shared-nothing, messaging-based architecture. You can run all of
the major components on multiple servers including a compute controller,
volume controller, network controller, and object store (or image service).
A cloud controller communicates with the internal object store via HTTP
(Hyper Text Transfer Protocol), but it communicates with a scheduler, network
controller, and volume controller via AMQP (Advanced Message Queue Protocol).
To avoid blocking each component while waiting for a response, OpenStack
Compute uses asynchronous calls, with a call-back that gets triggered when a
response is received.

  1) Cloud Controller
    - Represents the global state and interacts with all other components. 
  2) API Server
    - Acts as the web services front end for the cloud controller
  3) Compute Controller
    - Provides server resources to Compute and typically contains the compute service itself
  4) Object Store
    - Optional, provides storage services
  5) Identity Service
    - Provides authentication and authorization services 
  6) Volume Controller
    - Provides fast and permanent block-level storage for the compute servers
  7) Network Controller
    - Provides virtual networks to enable compute servers to interact with each other and with the public network
  8) Resource Scheduler
    - Selects the most suitable compute controller to host an instance

High-level Overview of OpenStack components:
Conceptual Architecture

  1) Dashboard (Horizon)
     - Modular Django web application
     - Administrator interface for OpenStack
     - Usually deployed via mod_wsgi (a python module that supports the Web Server Gateway Interface standard) in Apache
     - Code is separated into reusable python modules with most of the logic and presentation
     - Can be customer accessible
     - Can communicate w/ each service's public APIs
     - Can also administer functionality for other services (thru admin api endpoints)
  2) Compute (Nova)
     - Most distributed component of OpenStack
     - Turns end user API requests into running VMs
     [Nova-api] 
       * Accepts and responds to end user compute API calls
       * Initiates most VM orchestration (such as running an instance)
       * Enforces some policy (mostly quota checks)
     [Nova-compute]
       * Worker daemon that communicates with hypervisor APIs to create and terminate instances
       * Updates nova database containing status of instances
     [Nova-schedule]
       * Conceptually simplest: Takes a request for an instance from the queue and determines which server host it should run on
     [queue]
       * Central hub for passing messages between daemons
       * Usually implemented w/ RabbitMQ but could be any AMQP (Advanced Message Queuing Protocol) message queue
     [sql database] 
       * Stores most of the build-time + run-time states for a cloud infrastructure
       * Includes instance types, instances in use, networks available and projects
       * Theoretically supports any database supported by SQL-Alchemy (most common are sqlite3, MySQL and PostgreSQL)
     [nova-consoleauth nova-novncproxy nova-console]
       * Console services to allow end users to access their virtual instance's console through proxy.
  3) Object Store (Swift)
    - Built to be very distributed to prevent any single point of failure
    - Can use up to 3 servers (Account management server, Container management server, Object management server)
  4) Image Store (Glance)
    - [glance-api] Accepts API calls for image discovery, retrieval and storage
    - [glance-registry] stores, processes and retrieves metadata about images (size, type, etc.) from the database
    - [glance database] stores image metadata
    - [storage repo] stores the actual image files. 
       - Can be configured to use Swift, normal filesystems, RADOS block devices, Amazon S3 or HTTP
    - [replication services] ensure consistency and availability throughout the cluster.
      *** GLANCE SERVES A CENTRAL ROLE TO OVERALL IaaS ***
  5) Identity (Keystone)
    - Single point of integration for OpenStack policy, catalog, token and authentication
    - [keystone] handles API requests and configurable authentication services
    - Each [keystone] function has a pluggable backend - most support LDAP, SQL or KVS(Key Value Stores)
  6) Network (Quantum)
    - Provides "network connectivity as a service" between interface devices managed by other OpenStack services (usually the Nova suite)
    - Allows users to create their own virtual networks and then attach interfaces to them
    - Highly configurable due to its plugin architecture
    - [quantum-server] accepts API requests and routes them to the appropriate plugin
    - [quantum-*-plugin] performs the actual networking actions
    - Supports plugins for Cisco virtual and physical switches, Nicira NVP product, NEC OpenFlow products, Open vSwitch, Linux bridging and the Ryu Network Operating System
    - Commonly uses an L3 agent and a DHCP agent in addition to the specific plug-in agent
    - Most installations use a messaging queue to route information between [quantum-server] and  agents in use
  7) Block Storage (Cinder)
    - Allows for manipulation of volumes, volume types and volume snapshots
    - [cinder-api] accepts API requests and routes them to [cinder-volume]
    - [cinder-volume] acts upon requests by recording to the Cinder database to maintain state and interacting with other processes through a message queue
      - Has driver support for storage providers: IBM, SolidFire, NetApp, Nexenta, Zadara, linux iSCSI and others
    - [cinder-scheduler] picks the optimal block storage provider node to create the volume on.
    - Mainly interacts with the [nova] suite, providing volumes for its instances

High-level Overview of Important OpenStack capabilities:

  1) Hypervisors
    - Supports KVM, LXC, QEMU, UML, VMWare ESX/ESXi 4.1 update 1 and Xen
  2) Users & Tenants(projects)
    - OpenStack is designed to be used by many different cloud computing consumers or customers, basically tenants on a shared system, using role-based access assignments
    - Roles control the actions that a user is allowed to perform, and are highly customizable
    - A user's access to particular images is limited by tenant, but usernames and passwords are assigned per user
    - Key pairs granting access to an instance are enabled per user, but quotas to control resource consumption across available hardware resources are per tenant
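
As a rough illustration of the tenant/user/role model (the names are placeholders, and flag spellings vary slightly between client releases):

# Create a tenant, create a user, then grant the user a role within that tenant
$> keystone tenant-create --name demo
$> keystone user-create --name alice --pass secret
$> keystone user-role-add --user-id <user-id> --role-id <role-id> --tenant-id <tenant-id>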

Storage on OpenStack

  1) Ephemeral Storage
    - Is associated with a single unique instance. Its size is defined by the template of the instance
    - Ceases to exist when the instance it is associated with is terminated permanently

  2) Volume Storage
    - Volumes are independent of any particular instance and are persistent
    - User created and, within quota and availability limits, may be of any arbitrary size
    - Do NOT provide concurrent access from multiple instances
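
In practice, persistent volumes are created and attached along these lines (the size and device name are just examples):

# Create a 10 GB volume, then attach it to a running instance as /dev/vdc
$> cinder create --display-name data-vol 10
$> nova volume-attach <instance-id> <volume-id> /dev/vdc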

OpenStack Network Infrastructure

[Diagram: Basic Network Overview]

[Diagram: Quantum's Role]

Bugfix: Bad Floating IP Address – OpenStack Folsom Basic Install

I ran into a problem in the final step of this tutorial, where I had to assign a floating IP address to a newly created VM. Following the instructions gave me this error:

Bad floatingip request: Cannot create floating IP and bind it to Port ......., since that port is owned by a different tenant.

The problem was running the command on the Controller Node without properly re-sourcing the environment variables involved. To fix it, I needed to change the [novarc] file the tutorial had me create when I set up the Controller Node.

This:

export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://localhost:5000/v2.0"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=password

needs to be changed to:

export OS_TENANT_NAME=demo
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://localhost:5000/v2.0"
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=password

then re-sourced before running the command:

source novarc
quantum floatingip-create ...
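
With the environment fixed, the remaining step of binding the new floating IP to the instance’s port looks roughly like this (the IDs are placeholders):

# Associate the newly created floating IP with the VM's port
$> quantum floatingip-associate <floatingip-id> <port-id>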

And how!