Posts Tagged ‘Ubuntu Server’

Features are in: the diablo-4 milestone

// August 31st, 2011 // 1 Comment » // Uncategorized

August was very busy for OpenStack Nova and Glance developers, and the culmination of those efforts is the delivery of the final feature milestone of the Diablo development cycle: diablo-4.

Glance gained final integration with the Keystone common authentication system, support for sharing images between groups of tenants, a new notification system and i18n. Twelve feature blueprints were completed in Nova, including final Keystone integration, the long-awaited capacity to boot from volumes, a configuration drive to pass information to instances, integration points for Quantum, KVM block migration support, as well as several improvements to the OpenStack API.

With diablo-4, Diablo is now mostly feature-complete: a few blueprints for standalone features were granted exceptions and will land post-diablo-4, such as volume types and virtual storage arrays in Nova, and SSL support in Glance.

Now we race towards the release branch point (September 8th), when the Diablo release branch will start to diverge from the newly-opened Essex development branch. The focus is on testing, bug fixing and consistency… up until September 22, the Diablo release day.


Ensemble: the Service Orchestration framework for hard core DevOps

// August 19th, 2011 // Comments Off // Uncategorized

I've seen Ensemble evolve from a series of design-level conversations (Brussels, May 2010), through a year of fast-paced Canonical-style development, and participated in Ensemble sprints (Cape Town, March 2011, and Dublin, June 2011).  I observed Ensemble at first as an outsider, then provided feedback as a stakeholder, and have now contributed code to Ensemble as a developer and authored Formulas.


Think about bzr or git circa 2004/2005, or apt circa 1998/1999, or even dpkg circa 1993/1994...  That's where we are today with Ensemble circa 2011. 

Ensemble is a radical, outside-of-the-box approach to a problem that the Cloud ecosystem is just starting to grok: Service Orchestration.  I'm quite confident that in a few years, we're going to look back at 2011 and the work we're doing with Ensemble and Ubuntu and see a clear inflection point in the efficiency of workload management in The Cloud.

From my perspective as the leader of Canonical's Systems Integration Team, Ensemble is now the most important tool in our software tool belt when building complex cloud solutions.

Period.

Juan, Marc, Brian, and I are using Ensemble to build modern solutions around new service deployments to the cloud.  We have already contributed many formulas to Ensemble's collection, and continue to do so every day.

There are a number of novel ideas and unique approaches in Ensemble.  You can deep dive into the technical details here.  For me, there's one broad concept in Ensemble that just rocks my world...  Ensemble deals in individual service units, with the ability to replicate, associate, and scale those units quite dynamically.  Service units in practice are cloud instances (or, if you're using Orchestra + Ensemble, bare metal systems!).  Service units are federated together to deliver a (perhaps large and complicated) user-facing service.

Okay, that's a lot of words, at a very high level.  Let me try to break that down into something a bit more digestible...

I've been around Red Hat and Debian packaging for many years now.  Debian packaging is particularly amazing at defining prerequisite packages and pre- and post-installation procedures, and it's just phenomenal at rolling upgrades.  I've worked with hundreds (thousands?) of packages at this point, including some mind-bogglingly complex ones!

It's truly impressive how much can be accomplished within traditional Debian packaging.  But it has its limits.  These limits really start to bare their teeth when you need to install packages on multiple separate systems, and then federate those services together.  It's one thing if you need to install a web application on a single, local system:  depend on Apache, depend on MySQL, install, configure, restart the services...

sudo apt-get install your-web-app

...

Profit!

That's great.  But what if you need to install MySQL on two different nodes, set them up in a replicating configuration, install your web app and Apache on a third node, and put a caching reverse proxy on a fourth?  Oh, and maybe you want to do that a few times over.  And then scale them out.  Ummmm.....

sudo apt-get errrrrrr....yeah, not gonna work :-(

But these are exactly the type(s) of problems that Ensemble solves!  And quite elegantly in fact.

Once you've written your Formula, you'd simply:

ensemble bootstrap

ensemble deploy your-web-app
...
Profit!
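
And for the multi-node scenario above, the flow is only slightly longer.  Here's a sketch (mysql and your-web-app are hypothetical formula names standing in for whatever is in your formula repository), using Ensemble's add-relation and add-unit commands to associate the services and scale them out:

ensemble bootstrap
ensemble deploy mysql
ensemble deploy your-web-app
ensemble add-relation mysql your-web-app
ensemble add-unit your-web-app
...
Profit!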

Stay tuned here and I'll actually show some real Ensemble examples in a series of upcoming posts.  I'll also write a bit about how Ensemble and Orchestra work together.

In the meantime, get primed on the Ensemble design and usage details here, and definitely check out some of Juan's awesome Ensemble how-to posts!

After that, grab the nearest terminal and come help out!

We are quite literally at the edge of something amazing here, and we welcome your contributions!  All of Ensemble and our Formula Repository are entirely free software, building on years of best practice open source development on Ubuntu at Canonical.  Drop into the #ubuntu-ensemble channel in irc.freenode.net, introduce yourself, and catch one of the earliest waves of something big.  Really, really big.

:-Dustin

PowerNap Your Data Center! (LinuxCon 2011 Vancouver)

// August 18th, 2011 // 3 Comments » // Uncategorized


I was honored to speak at LinuxCon North America in beautiful Vancouver yesterday, about one of my favorite topics -- energy efficiency opportunities using Ubuntu Servers in the data center (something I've blogged about before).

I'm pleased to share those slides with you today!  The talk is entitled PowerNap Your Data Center, and focused on Ubuntu's innovative PowerNap suite, from the system administrator's or data center manager's perspective.

We discussed the original Cloud motivations for PowerNap, its evolution from the basic process monitoring and suspend/hibernate methods of PowerNap1, to our complete rewrite, PowerNap2 (thanks, Andres!), which added nearly a dozen monitors and the ubiquitously useful PowerSave mode.  PowerNap is now more useful and configurable than ever!

Flip through the presentation below, or download the PDF here.





Stay tuned for another PowerNap presentation I'm giving at Linux Plumbers next month in California.  That one should be a bit deeper dive into the technical implementation, and hopefully generate some plumbing layer discussion and improvement suggestions.

:-Dustin

Howto: Install the CloudFoundry Server PaaS on Ubuntu 11.10

// August 8th, 2011 // Comments Off // Uncategorized



I recently gave an introduction to the CloudFoundry Client application (vmc),  which is already in Ubuntu 11.10's Universe archive.

Here, I'd like to introduce the far more interesting server piece -- how to run the CloudFoundry Server, on top of Ubuntu 11.10!  As far as I'm aware, this is the most complete PaaS solution we've made available on top of Ubuntu Servers, to date.

A big thanks to the VMWare CloudFoundry Team who has been helping us along with the deployment instructions.  Also, all of the packaging credit goes straight to Brian Thomason, Juan Negron, and Marc Cluet.

For testing purposes, I'm going to run this in Amazon's EC2 Cloud.  I'll need a somewhat larger instance to handle all the services and dependencies (i.e., Java) required by the platform.  I find an m1.large seems to work pretty well, for $0.34/hour.  I'm using the Oneiric (Ubuntu 11.10) AMIs listed at http://uec-images.ubuntu.com/oneiric/current/.

Installation

To install CloudFoundry Server, add the PPA, update, and install:

sudo apt-add-repository ppa:cloudfoundry/ppa

sudo apt-get update
sudo apt-get install cloudfoundry-server


During the installation, there are a couple of debconf prompts, including:
  • a MySQL password
    • required for configuration of the MySQL database (entered twice)
All in all, the install took me less than 7 minutes!

Next, install the client tools, either on your local system, or even on the server, so that we can test our server:

sudo apt-get install cloudfoundry-client


Configuration

Now, you'll need to target your vmc client against your installed server, rather than CloudFoundry.com, as I demonstrated in my last post.

In production, you're going to need access to a wildcard-based DNS server, either your own, or a DynDNS service.  If you have a DynDNS.com standard account ($30/year), CloudFoundry actually supports dynamically adding DNS entries for your applications.  We've added debconf hooks in the cloudfoundry-server Ubuntu packaging to set this up for you.  So if you have a paid DynDNS account, just run sudo dpkg-reconfigure cloudfoundry-server.

For this example, though, we're going to take the poor man's approach, and just edit our /etc/hosts file, BOTH locally on our laptop and on our CloudFoundry server.

First, look up your server's external IP address.  If you're running Byobu in EC2, it'll be in the lower right corner.

Otherwise, grab your IPv4 address from the metadata service.

$ wget -q -O- http://169.254.169.254/latest/meta-data/public-ipv4

174.129.119.101

And you'll need to add an entry to your /etc/hosts for api.vcap.me, AND for every application name you deploy.  Make sure you do this both on your laptop and on the server!  Our test application here will be called testing123.  Don't forget to change my IP address to yours ;-)

echo "174.129.119.101  api.vcap.me testing123.vcap.me" | sudo tee -a /etc/hosts


Target

Now, let's target our vmc client at our vcap (CloudFoundry) server:

$ vmc target api.vcap.me

Succesfully targeted to [http://api.vcap.me]

Adding Users

And add a user.

$ vmc add-user 

Email: kirkland@example.com
Password: ********
Verify Password: ********
Creating New User: OK
Successfully logged into [http://api.vcap.me]

Logging In

Now we can log in.

$ vmc login 

Email: kirkland@example.com
Password: ********
Successfully logged into [http://api.vcap.me]

Deploying an Application


At this point, you can jump over to my last post on the vmc client tool for a more comprehensive set of examples.  I'll just give one very simple one here, the Ruby/Sinatra helloworld + environment example.

Go to the examples directory, find an example, and push!

$ cd /usr/share/doc/ruby-vmc/examples/ruby/hello_env

$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: testing123
Application Deployed URL: 'testing123.vcap.me'?
Detected a Sinatra Application, is this correct? [Yn]: y
Memory Reservation [Default:128M] (64M, 128M, 256M, 512M, 1G or 2G)
Creating Application: OK
Would you like to bind any services to 'testing123'? [yN]: n
Uploading Application:
  Checking for available resources: OK
  Packing application: OK
  Uploading (0K): OK
Push Status: OK
Staging Application: OK
Starting Application: OK

Again, make absolutely sure that you edit your local /etc/hosts to map testing123.vcap.me to the right IP address, and then just point a browser to http://testing123.vcap.me/
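
If you'd rather check from a terminal before firing up the browser, a quick curl against the deployed URL works just as well (assuming curl is installed):

curl -s http://testing123.vcap.me/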


And there you have it!  An application pushed, and running on your CloudFoundry Server  -- Ubuntu's first packaged PaaS!

What's Next?

So the above setup is a package-based, all-in-one PaaS.  That's perhaps useful for your first CloudFoundry Server, and your initial experimentation.  But a production PaaS will probably involve multiple, decoupled servers, with clustered databases, highly available storage, and enterprise grade networking.

The Team is hard at work breaking CloudFoundry down to its fundamental components and creating a set of Ensemble formulas for deploying CloudFoundry itself as a scalable service.  Look for more news on that front very soon!

In the meantime, try the packages at ppa:cloudfoundry/ppa (or even the daily builds at ppa:cloudfoundry/daily) and let us know what you think!

:-Dustin


A Formal Introduction to The Ubuntu Orchestra Project

// August 5th, 2011 // 1 Comment » // Uncategorized



Today's post by Matthew East, coupled with several discussions in IRC and the Mailing Lists, has made me realize that we've not communicated the Ubuntu Orchestra Project clearly enough to some parts of the Ubuntu Community.  Within Ubuntu Server developer circles, I think the project's goals, design, and implementation are quite well understood.  But I now recognize that our community stretches both far and wide, and our messages about Orchestra have not yet reached all corners of the Ubuntu world :-)  Here's an attempt at that now!

History

Disorganized concepts of Ubuntu Orchestra have been discussed at every UDS since UDS-Intrepid in Prague, May 2008.  In its current form, I believe these were first discussed at UDS-Natty in Orlando in October 2010, in a series of sessions led by Mathias Gug and me.  Mathias left Canonical a few weeks later for a hot startup in California called Zimride, but we initiated the project during the Natty cycle based on the feedback from UDS, pulling together the bits and pieces.

The newly appointed Server Manager (and Nomenclature-Extraordinaire) Robbie Williamson suggested the name Orchestra (previously, we were calling it Ubuntu Infrastructure Services).  Everyone on the team liked the name, and it stuck.  I renamed the project, the packages, the branding, and everything else around Ubuntu Orchestra, or just Orchestra for short.  Hereafter, we may say Orchestra, but we always mean Ubuntu Orchestra.

We had packages in a little-publicized PPA for Natty, but we never pushed the project into the archive for Natty.  It just wasn't baked yet, and due to other priorities, it didn't land before the cycle's Feature Freeze.  Still, it was a great idea, we had a solid foundation, and the seed had been planted in people's minds for the next UDS in Budapest...

Right around UDS-Oneiric in Budapest (May 2011), I left the Ubuntu Platform Server Team to manage a new team in Canonical Corporate Services, called the Solutions Integration Team (we build solutions on top of Ubuntu Server).  Two rock stars on that team (Juan Negron and Marc Cluet) had been hard at work on a project called the SI-Toolchain -- a series of Puppet Modules and mCollective plugins that can automate the deployment of workloads.  This was the piece we were missing from Orchestra, the key feature that kept us from uploading Orchestra to Natty.  I worked extensively with them in the weeks before and after UDS, merging their functionality into Orchestra, at which point we had a fully functional system for Oneiric.  Since that time, some of that functionality has been replaced with Ensemble, which aligns a bit better with how we see Service Orchestration in the world of Ubuntu Servers (more on that below).

Okay, history lesson done.  Now the technical details!

The Problem


Traditionally, the Ubuntu Server ships and installs from a single ISO.  That's fine and dandy if you're installing one or two servers.  But in the Cloud IaaS world where Ubuntu competes, that just doesn't cut the mustard.  Real Cloud deployments involve installing dozens, if not hundreds or thousands, of systems.  And then managing, monitoring, and logging those systems for their operational lives.

I've installed the Ubuntu Enterprise Cloud literally hundreds of times in the last 3 years.  While the UEC Installer option in the Server ISO was a landmark achievement in IaaS installations, it falls a bit short on large scale deployments.  With the move to OpenStack, we had a pressing need to rework the Ubuntu Cloud installation.  Rather than changing a bunch of hard coded values in the debian-installer (again), we opted to invest that effort instead into a scalable and automatable network installation mechanism.

Ubuntu Orchestra is an ambitious project to solve those problems for the modern system administrator, at scale, using the best of Ubuntu Server Open Source technologies.  It's tightly integrated with Ubuntu Ensemble, and OpenStack is Orchestra's foremost (but not only) workload.

The Moving Parts

In our experience, anyone who has more than, say, a dozen Ubuntu Servers has implemented some form of a local mirror (or cache), a pxe/tftp boot server, dhcp, dns, and probably quite a bit of Debian preseed hacking, etc. to make that happen.  Most server admins have done something like this in their past.  And almost every implementation has been different.  We wanted to bundle this process and make it trivial for an Ubuntu system administrator to install Orchestra on one server, and then deploy an entire data center effortlessly.

To do this, we wanted to write as little new code as possible and really focus on Ubuntu's strength here -- packaging and configuring the best of open source.  We reviewed several options in this space.

The Ubuntu Orchestra Server

At a general level, the pieces we decided we needed were:
  • Provisioning Server
  • Management Server
  • Monitoring Server
  • Logging Server
There exist excellent implementations of each of these in Ubuntu already.  The ultimate goal of Orchestra is to tie them all together into one big happy integrated stack.

If you're conversant in Debian control file syntax, take a look at Orchestra's control file, and you'll see how these pieces are laid out.  Much of Orchestra is just a complicated, opinionated meta package with most of the "code" being in the post installation helper scripts that get everything configured and working properly together.

As such, the ubuntu-orchestra-server package is a meta package that pulls in:

  • ubuntu-orchestra-provisioning-server
  • ubuntu-orchestra-management-server
  • ubuntu-orchestra-monitoring-server
  • ubuntu-orchestra-logging-server
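
So if you want the whole stack on a single machine, one install should pull in all four (a minimal sketch; the component meta packages above can also be installed individually):

sudo apt-get install ubuntu-orchestra-server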

Let's look at each of those components...

The Ubuntu Orchestra Provisioning Server

We looked at a hacky little project called uec-provisioning, which several of us were using to deploy our local test and development Eucalyptus clouds.  (In fact, uec-provisioning provides several of the fundamental concepts of Orchestra, going back to the Lucid development cycle -- but those were quick hacks, not a fully designed solution.)  We also examined FAI (Fully Automated Install) and Cobbler.  We took a high level look at several others, but really drilled down into FAI and Cobbler.

FAI was already packaged for Debian and Ubuntu, but its dependency on NFS was a real limitation on what we wanted to do with large scale enterprise deployments.

Cobbler was a Fedora project, popular with many sysadmins, with a Python API and several users on its public mailing lists asking for Ubuntu support (both as a target and host).  All things considered, we settled on Cobbler and spent much of the Natty cycle doing the initial packaging and cleaning up the Debian and Ubuntu support with the upstream Fedora maintainers.  For Natty, we ended up with a good, clean Cobbler package but, as I said above, fell a little short on delivering the full Orchestra suite.  It's well worth mentioning that Cobbler is an excellent open source project with very attentive, friendly upstreams.

Cobbler is installable as a package, all on its own, on top of Ubuntu, and can be used to deploy Debian, Ubuntu, CentOS, Fedora, Red Hat, and SuSE systems.
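
So if you'd like to kick Cobbler's tires on its own, outside of Orchestra, it's just an install away:

sudo apt-get install cobbler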

But the ubuntu-orchestra-provisioning-server is a special meta package that adds some excellent enhancements to the Ubuntu provisioning experience.  It includes a squid-deb-proxy server, which caches local copies of installed packages, such that subsequent installations will occur at LAN speeds.  The Ubuntu Mini ISOs are automatically mirrored by a weekly cronjob, and automatically imported and updated in Cobbler.  Orchestra also ships specially crafted and thoroughly tested preseed files for Orchestra-deployed Ubuntu Servers.  These ensure that your network installations operate reliably unattended.

The Ubuntu Orchestra Management Server

In Orchestra's earliest (1.x) implementations, the Management Server portion of Orchestra was handled by a complicated combination of Puppet, mCollective, and over a dozen mCollective plugins (all of which we have now upstreamed to the mCollective project).  This design worked very well in the traditional "configuration management" approach to data center maintenance.

In the Orchestra 2.x series, however, we're taking a very modern, opinionated approach to the future of the data center.  We have adjusted our design from that traditional approach to a more modern "service orchestration" approach, which integrates much better into the overarching Ubuntu Cloud strategy.  Here, we're using Ensemble to provide a modern, Cloud-ready approach to today's data center.  Like Orchestra, Ensemble is a Canonical-driven open source project, driven by Ubuntu developers, for Ubuntu users.

The Ubuntu Orchestra Monitoring Server

We believe that Monitoring is an essential component of a modern, enterprise-ready data center, and we know that there are some outstanding open source tools in this space.  After experimentation, research, and extensive discussions at UDS in Budapest, we have settled on Nagios as our monitoring solution.  Nodes deployed by Orchestra will automatically be federated back to the Monitoring Server.  The goal is to make this as seamless and autonomic as possible, and as transparent to the system administrator as possible.

The Ubuntu Orchestra Logging Server

Similar to, but slightly separate from, the Monitoring Server is the need most sysadmins have for comprehensive remote logging.  Data center servers are necessarily headless.  Orchestra currently uses rsyslog to provide this capability, also configured automatically at installation time.
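
To give you a flavor of what that automatic configuration amounts to: in stock rsyslog, forwarding everything to a central log server is a one-line, client-side snippet like the following (an illustration of standard rsyslog syntax, not necessarily the exact configuration Orchestra writes; the file name and server name here are hypothetical):

# /etc/rsyslog.d/99-orchestra-example.conf
# Forward all facilities and priorities to the logging server over TCP
*.* @@orchestra-logging-server:514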

The Ubuntu Orchestra Client

Servers provisioned by Orchestra, before they're managed by Ensemble, should all look identical.  We have modeled this behavior after Amazon EC2.  Every instance of Ubuntu Server you run in EC2 looks more or less the same at initial login.  We want a very similar experience in Orchestra-deployed servers.

The default Orchestra deployed server looks very much like a default Ubuntu Server installation, with a couple of notable additions.  The preseed also adds the ubuntu-orchestra-client meta package, which pulls in: 
  • byobu, capistrano, cloud-init, ensemble, openssh-server, nagios, powernap, rsyslog, and squid-deb-proxy-client 
Note that administrators who disagree with these additions are welcome to edit the conffile where this is specified, /etc/orchestra/ubuntu-orchestra-client.seed.  But these are the client side pieces required by the rest of Orchestra's functionality.

In Comparison to Crowbar

Crowbar is a solution linking Dell and the OpenStack project that we've been following for some time.  I discussed the design of Orchestra at length with Crowbar's chief architect, Rob Hirschfeld, at the 2nd OpenStack Developer Summit in San Antonio in November 2010.  I've also seen the excellent presentation/demonstration on Crowbar by OpsCode's Matt Ray at the Texas Linux Fest.

Orchestra and Crowbar are similar in some respects, in that they both deploy OpenStack clouds, but differ significantly in others.  Notably:
  • Crowbar was designed to deploy OpenStack (just yesterday, they announced that they're working on deploying Hadoop too).  Orchestra is designed to deploy Ubuntu Servers, and then task them with jobs or roles (which might well be OpenStack compute, storage, or service nodes).
  • Crowbar was designed and optimized for Dell Servers (which allows it to automate some low-level tasks, like BIOS configuration), but has recently started deploying other hardware too.  Orchestra is designed to work with any hardware that can run Ubuntu (i386, amd64, and even ARM!).
  • Crowbar uses Chef for a configuration-management type experience, and while initially implemented on Ubuntu, should eventually work with other Linux OSes.  Orchestra uses Ensemble for a service-orchestration style experience, and while other OSes could be provisioned by Orchestra, it will always be optimized for Ubuntu.
  • Crowbar has only recently been open sourced.  Orchestra is, and has been, open source (AGPL) since January 2011.
None of these points should disparage Crowbar.  It sounds like an excellent solution to a specific problem -- getting OpenStack running on a rack of Dell Servers.  In the demos we've seen of Crowbar, they're using Ubuntu as the base OS, and that's great.  We (Ubuntu) will continue to do everything in our power to ensure that Ubuntu is the best OS for running your OpenStack cloud.  In fact, we can even see Orchestra being used to deploy your Crowbar server, which then deploys OpenStack to your rack of Dell Servers, if that's your taste.  In any case, we're quite excited that others are tackling the hard problems in this space.

In Conclusion

Ensemble is how you deploy your workloads into the Cloud.  And Orchestra is how you deploy the Cloud.  Orchestra is a suite of best practices for deploying Ubuntu Servers, from Ubuntu Servers.  After deployment, it provides automatic federation and integrated management, monitoring, and logging.


Orchestra is shorthand for The Ubuntu Orchestra Project.  It's an Ubuntu Server solution for the Ubuntu community and users, as well as Canonical customers, designed and implemented by Ubuntu developers and aspiring Ubuntu developers.



:-Dustin


Summer of OpenStack: the diablo-3 milestone

// July 29th, 2011 // 1 Comment » // Uncategorized

No rest for the OpenStack developers: today saw the release of the July development efforts for Nova and Glance, the Diablo-3 milestone.

Glance gained two performance options: API servers can now cache image data on the local filesystem, and a delayed delete feature allows image deletion to happen asynchronously.

With a bit more than 100 trunk commits over the month, Nova gained support for multiple NICs, FlatDHCP network mode now supports a high-availability option (read more about it here), instances can now be migrated, and system usage notifications were added to the notification framework. The network code was also refactored to facilitate integration with the new networking projects, and countless fixes were made to OpenStack API 1.1 support.

We have one more milestone left (diablo-4) before the final 2011.3 release… still a lot to do!


The Obligatory DevOps Blog Post

// July 27th, 2011 // Comments Off // Uncategorized

Any business with half a need for computing resources has traditionally employed or contracted a team of professionals -- usually of the species Systemus Administratus (SysAdmin in the lingua franca) -- to manage those resources.  SysAdmins are distinct from their computer-resource hunting/gathering predecessors in their ability to use tools, construct new ones, and most importantly, cultivate farms of local servers.  SysAdmins have ruled the landscape of the IT industry for nearly 30 years.  But the extensive manual labor previously required to provision and maintain entire data centers looks quite different now, amid the industrial revolution of cloud technologies.  The dawn of the cloud computing age has yielded demand for a different IT skill set.
 
More recently, we have witnessed the rapid emergence of a successful new species, Developus Operatus, or DevOps for short.  DevOps embody a different collection of technical skills, distinct from their SysAdmin counterparts, finely honed for cloud computing environments and Agile development methodologies.  DevOps excel at data center automation, develop for cloud-scale applications, and utilize extensive configuration management to orchestrate massive systems deployments.  DevOps are not exactly pure developers, engineers, testers, or tech operators, but in fact incorporate skills from each of these areas of expertise.  Some SysAdmins have consciously migrated toward DevOps professions, while others have subconsciously transformed.

With the accelerating adoption of cloud platforms, DevOps professionals are perhaps the most influential individuals in the technology industry.  The cloud’s first colonists and earliest adopters, DevOps technologists are thought leaders and key innovators in this thriving market.  Expert DevOps collaboration is now essential in any Agile development shop, with DevOps stakeholders providing vital guidance to design discussions, platform adoption, and even procurement decisions.

Linux and UNIX server distributions with decades of tradition are hard wired directly into the DNA and collective memory of many SysAdmins.  For veterans who measure system uptime in decades, the Ubuntu Server is still quite a newcomer to this SysAdmin camp, and is often (and unfortunately) treated with inescapable skepticism.

On the other hand, the Ubuntu Server seems rather more attractive to the DevOps guild, as it presents interesting, advantageous opportunities as an ideal Linux platform.  DevOps demand dynamic, cloud-ready environments that older Linux/UNIX distributions do not yet deliver.  The Ubuntu Server is uniquely positioned to appeal to the hearts and minds of the DevOps discipline, who require a unique balance of stability, security, and timely releases, yet also the latest and greatest features.  Ubuntu builds on the foundation of Debian's Linux/UNIX tradition, but continuously integrates the latest application enhancements with high quality, releasing every six months.  On time.  Every...single...time.

I believe that Ubuntu is already appealing to the DevOps crowd as a comprehensive, complementary platform, particularly in contrast to some of the other industry players.  Never complete, the Ubuntu platform continues to evolve alongside the DevOps community.

I know that we in Ubuntu are working to ensure that the Ubuntu Server is the ideal Linux platform for the greater DevOps community for many years to come.  Stay tuned to hear how Ubuntu's Orchestra and Ensemble projects are aiming to do just that...






Cheers,
:-Dustin

Getting started with the CloudFoundry Client in Ubuntu

// July 26th, 2011 // Comments Off // Uncategorized


I'm pleased to introduce a powerful new cloud computing tool available in Ubuntu 11.10 (Oneiric), thanks to the hard work of my team (Brian Thomason, Juan Negron, and Marc Cluet), as well as our partners at VMWare -- the cloudfoundry-client package, ruby-vmc, and its command line interface, vmc.

CloudFoundry is a PaaS (Platform as a Service) cloud solution, open sourced earlier this year by VMWare.  Canonical's Systems Integration Team has been hard at work for several weeks now packaging both the client and server pieces of CloudFoundry for Ubuntu 11.10 (Oneiric).  We're at a point now where we'd love to get some feedback from early adopting Oneiric users on the client piece.

PaaS is a somewhat new area of Cloud Computing for Ubuntu.  Most of our efforts up to this point have been focused on IaaS (Infrastructure as a Service) solutions, such as Eucalyptus and OpenStack.  With IaaS, you (the end user of the service) run virtual machine instances of an OS (hopefully Ubuntu!), and build your solutions on top of that infrastructure layer.  With PaaS, you (the end user of the service) develop applications against a given software platform (within a few constraints), but you never actually touch the OS layer (the infrastructure).

CloudFoundry is one of the more interesting open source PaaS offerings I've used lately, already supporting several different platforms (Ruby, NodeJS, Spring Java) and several backend databases (MySQL, MongoDB, Redis), with support for other languages/databases under rapid development.

VMWare is hosting a free, public CloudFoundry server at cloudfoundry.com (though you need to request an invite; mine took less than 48 hours to arrive).  However, we're rapidly converging on a cloudfoundry-server package in a PPA, as well as an Ensemble formula.  Stay tuned for a subsequent introduction on that, and a similar how-to in the next few days...

In the meantime, let's deploy a couple of basic apps to CloudFoundry.com!

Installing the Tools

The tool you need is vmc, which is provided by the ruby-vmc package.  We in the Canonical SI Team didn't find that package name very discoverable, so we created a meta package called cloudfoundry-client.

sudo apt-get install cloudfoundry-client

Setting the Target

First, you'll need to set the target server for the vmc command.  For this tutorial, we'll use VMWare's public server at cloudfoundry.com.  Very soon, you'll be able to target this at your locally deployed CloudFoundry server!

$ vmc target https://api.cloudfoundry.com
Succesfully targeted to [https://api.cloudfoundry.com]

Logging In

Next, you'll log in with your credentials.  As I said above, it might take a few hours to receive your credentials from CloudFoundry.com, but once you do, you'll log in like this:

$ vmc login
Email: kirkland@example.com
Password: **********
Successfully logged into [https://api.cloudfoundry.com]

Deploying Your First Applications

Your friendly Canonical Systems Integration Team has developed and tested a series of simple hello-world applications in each of CloudFoundry's supported languages.  Each of these applications simply prints a welcome message and displays all of the environment variables available to the application.  The latter bit (the environment variables) is important, as several of them (those starting with VCAP_*) serve as a sort of metadata service for your applications.

Our sample apps are conveniently placed in /usr/share/doc/ruby-vmc/examples.
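
A quick listing of that directory shows the layout we'll be working from for the rest of this post:

ls /usr/share/doc/ruby-vmc/examples

You should see (at least) ruby, nodejs, and springjava subdirectories, each containing a hello_env example.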

Deploying a Ruby Application

To deploy our sample Ruby application:

$ cd /usr/share/doc/ruby-vmc/examples/ruby/hello_env
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example101
Application Deployed URL: 'example101.cloudfoundry.com'?
Detected a Sinatra Application, is this correct? [Yn]: y
Memory Reservation [Default:128M] (64M, 128M, 256M, 512M or 1G) 128M
Creating Application: OK
Would you like to bind any services to 'example101'? [yN]: n
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (0K): OK
Push Status: OK
Staging Application: OK
Starting Application: OK

And now, I can go to http://example101.cloudfoundry.com/ and see my application working.

Deploying a NodeJS Application

Next, I'm going to deploy our sample NodeJS application:

$ cd /usr/share/doc/ruby-vmc/examples/nodejs/hello_env
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example102
Application Deployed URL: 'example102.cloudfoundry.com'?
Detected a Node.js Application, is this correct? [Yn]: y
Memory Reservation [Default:64M] (64M, 128M, 256M, 512M or 1G) 64M
Creating Application: OK
Would you like to bind any services to 'example102'? [yN]: n
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (0K): OK
Push Status: OK
Staging Application: OK
Starting Application: OK

And now, I can go to http://example102.cloudfoundry.com/ and see my simple NodeJS application running.

Deploying a Java Application

Now, we'll deploy our sample Java application.

$ cd /usr/share/doc/ruby-vmc/examples/springjava/hello_env

As with anything that involves Java, it's hardly as simple as our other examples :-)  First, we need to install the Java tool chain and compile our jar file.  I recommend you queue this one up and go brew yourself a gourmet pot of coffee.  (You might even make it to Guatemala and back.)  Also, note that we'll make a copy of this directory locally, because the maven build process needs to be able to write to the local directory.

$ sudo apt-get install openjdk-6-jdk maven2
...
$ cd $HOME
$ cp -r /usr/share/doc/ruby-vmc/examples/springjava .
$ cd springjava/hello_env/
$ mvn clean package
...
$ cd target
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example103
Application Deployed URL: 'example103.cloudfoundry.com'?
Detected a Java Web Application, is this correct? [Yn]: y
Memory Reservation [Default:512M] (64M, 128M, 256M, 512M or 1G) 512M
Creating Application: OK
Would you like to bind any services to 'example103'? [yN]: n
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (4K): OK
Push Status: OK
Staging Application: OK
Starting Application: OK

All that for a Java hello-world ;-) Anyway, I now have it up and running at http://example103.cloudfoundry.com/.

Deploying a More Advanced Application

Hopefully these hello-world style applications will help you get started quickly and deploy your first CloudFoundry apps effortlessly. But let's look at one more complicated example -- one that requires a database service!

In digging around the web for some interesting NodeJS applications, I came across the Node Knockout programming competition. I found a few interesting apps, but had a lot of trouble tracking down the source for some of them. In any case, I really liked a shared-whiteboard application called Drawbridge, and I did find its source in github. So I branched the code, imported it to bzr, and made a number of changes (with some awesome help from my boss, Zaid Al Hamami). I guess that's an important point to make here -- I've had to do some fairly intense surgery on pretty much every application I've ported to run in CloudFoundry, so please do understand that you'll very likely need to modify your code to port it to the CloudFoundry PaaS.

In any case, let's deploy Drawbridge to CloudFoundry!

$ cd $HOME
$ bzr branch lp:~kirkland/+junk/drawbridge
$ cd drawbridge
$ vmc push
Would you like to deploy from the current directory? [Yn]: y
Application Name: example104
Application Deployed URL: 'example104.cloudfoundry.com'?
Detected a Node.js Application, is this correct? [Yn]: y
Memory Reservation [Default:64M] (64M, 128M, 256M or 512M) 128M
Creating Application: OK
Would you like to bind any services to 'example104'? [yN]: y
Would you like to use an existing provisioned service [yN]? n
The following system services are available::
1. mongodb
2. mysql
3. redis
Please select one you wish to provision: 2
Specify the name of the service [mysql-4a958]:
Creating Service: OK
Binding Service: OK
Uploading Application:
Checking for available resources: OK
Processing resources: OK
Packing application: OK
Uploading (77K): OK
Push Status: OK
Staging Application: OK
Starting Application: OK

Note that vmc provisioned and linked a new MySQL instance to the app!

Now, let's see what Drawbridge is all about. Visiting http://example104.cloudfoundry.com in my browser, I can work on a collaborative whiteboard (much like Gobby or Etherpad, except for drawing pictures). Brian Thomason helped me create this Pulitzer-worthy doodle:


Listing Apps and Services

Now that I have a few apps and services running, I can take a look at what I have running using a few basic vmc commands.

Here are my apps:

$ vmc apps 
+-------------+----+---------+-----------------------------+-------------+
| Application | # | Health | URLS | Services |
+-------------+----+---------+-----------------------------+-------------+
| example102 | 1 | RUNNING | example102.cloudfoundry.com | |
| example103 | 1 | RUNNING | example103.cloudfoundry.com | |
| example101 | 1 | RUNNING | example101.cloudfoundry.com | |
| example104 | 1 | RUNNING | example104.cloudfoundry.com | mysql-4a958 |
+-------------+----+---------+-----------------------------+-------------+

And my services (available and provisioned):

$ vmc services 
============== System Services ==============
+---------+---------+-------------------------------+
| Service | Version | Description |
+---------+---------+-------------------------------+
| redis | 2.2 | Redis key-value store service |
| mongodb | 1.8 | MongoDB NoSQL store |
| mysql | 5.1 | MySQL database service |
+---------+---------+-------------------------------+
=========== Provisioned Services ============
+-------------+---------+
| Name | Service |
+-------------+---------+
| mysql-4a958 | mysql |
| mysql-5894b | mysql |
+-------------+---------+

In this post, I've demonstrated a couple of frameworks (Ruby/Sinatra, NodeJS, Spring/Java), but here I can see that there are several others supported:

$ vmc frameworks 
+---------+
| Name |
+---------+
| rails3 |
| sinatra |
| lift |
| node |
| grails |
| spring |
+---------+

Scaling Instances

One of the huge advantages of PaaS deployment is how trivial application resource scalability can actually be. Let's increase the memory available to one of these applications:

$ vmc mem example101
Update Memory Reservation? [Current:128M] (64M, 128M, 256M or 512M) 512M
Updating Memory Reservation to 512M: OK
Stopping Application: OK
Staging Application: OK
Starting Application: OK
$ vmc stats example101
+----------+-------------+----------------+--------------+--------------+
| Instance | CPU (Cores) | Memory (limit) | Disk (limit) | Uptime |
+----------+-------------+----------------+--------------+--------------+
| 0 | 0.1% (4) | 16.9M (512M) | 40.0K (2G) | 0d:0h:1m:22s |
+----------+-------------+----------------+--------------+--------------+

Done! Wow, that was easy!

Now, let's add some additional instances; I suspect it'll crash once my billions of blog readers start pounding Drawbridge, and with a few more instances, maybe it'll stay up a bit longer :-)

$ vmc instances example104 4
Scaling Application instances up to 4: OK
$ vmc stats example104
+----------+-------------+----------------+--------------+---------------+
| Instance | CPU (Cores) | Memory (limit) | Disk (limit) | Uptime |
+----------+-------------+----------------+--------------+---------------+
| 0 | 0.0% (4) | 21.0M (128M) | 28.0M (2G) | 0d:0h:19m:33s |
| 1 | 0.0% (4) | 15.8M (128M) | 27.9M (2G) | 0d:0h:2m:38s |
| 2 | 0.0% (4) | 16.3M (128M) | 27.9M (2G) | 0d:0h:2m:36s |
| 3 | 0.0% (4) | 15.8M (128M) | 27.9M (2G) | 0d:0h:2m:37s |
+----------+-------------+----------------+--------------+---------------+

In Conclusion

I hope this helps at least a few of you with an introduction to PaaS, CloudFoundry, and the CloudFoundry-Client (vmc) in Ubuntu.  As I said above, stay tuned for a post coming soon about hosting your own CloudFoundry-Server on Ubuntu!

:-Dustin