Archive for April, 2012
// April 26th, 2012 // 7 Comments » // Uncategorized
It’s time for another Juju Charm Contest, where you can submit your charms and win fabulous prizes! This contest is for Ubuntu Developer Summit attendees, and our prizes are three sexy Dell XPS 13 ultrabooks, which we’ll award to the three lucky winners of the contest.
So how can you win yourself one of these? Well, with 66 services already available to deploy on the cloud, we’re always looking for more. Have a look at what you think is missing from the Juju Charm Store and submit your charm as an entry.
We’ve got step-by-step instructions on how to write your own charm. We’re looking for the kinds of things DevOps teams deploy to the cloud, so be creative! You have from now until May 09 to submit your charm. At that point we’ll judge the entries and give out the Dell XPS 13s during the last day of the Ubuntu Developer Summit. So if you’re missing your favorite service from the Charm Store, submit an entry and you’ll automatically be entered in the contest.
The full contest rules, including the judging criteria, are here; you’ll want to read them before you get started. Happy Charming!
// April 24th, 2012 // Comments Off on juju client now available for Mac OSX // Uncategorized
Brandon Holtsclaw has published a Mac port of juju. This will enable Mac users to deploy to their Ubuntu Servers from the comfort of their home operating system. Brandon adds:
Pull Requests or filing Issue’s are more than Welcome’d from anyone.
Need help getting started with juju? Check out our Getting Started documentation and then browse through the Charm Store to see what you can start deploying today!
// April 19th, 2012 // Comments Off on OpenStack in Ubuntu Server 12.04 LTS // Uncategorized
With the release of Ubuntu Server 12.04 LTS quickly approaching, the Ubuntu Server Team has been working extremely hard to ensure that OpenStack Essex is of high quality and tightly integrated into Ubuntu Cloud. As with prior Long Term Support releases, Canonical commits to maintaining Ubuntu Server 12.04 LTS for five years, which means users receive five years of maintenance for the OpenStack Essex packages we provide in main.

That said, we recognize that OpenStack is still a relatively young project moving at a tremendous rate of innovation, with features and fixes already planned for Folsom that some users require for their production deployments. In the past, these users would have had to upgrade off the LTS to get maintenance for the OpenStack release they need on Ubuntu Server, thus forgoing the five years of maintenance they want and need for their production deployment. We wholeheartedly believe there are situations where moving to the next release of Ubuntu (12.10, 13.04, etc.) for newer OpenStack releases works just fine, especially for test/dev deployments. However, we also know there will be many situations where users cannot afford the risk and/or the cost of upgrading their entire cloud infrastructure just to get the benefits of a newer OpenStack release, and we need a solution that fits their needs.

After thinking about what users want and where most people expect OpenStack to go in terms of continued innovation and stability, we have decided to provide Ubuntu users with two options for maintenance and support in 12.04 LTS.
The first option is that users can stay with the shipped version of OpenStack (Essex) and remain with it for the full life of the LTS. As per the Ubuntu LTS policy, we commit to maintaining and supporting the Essex release for 5 years. The point releases will also ship the Essex version of OpenStack, along with any bug fixes or security updates made available since its release.
Introducing the Ubuntu Cloud Archive
The second option involves Canonical’s Ubuntu Cloud archive, which we are officially announcing today. Users can elect to enable this archive and install newer releases of OpenStack (and their dependencies) as they become available, up through the next Ubuntu LTS release (presumably 14.04). Bug processing and patch contributions will follow standard Ubuntu practice and policy where applicable. Canonical commits to maintaining and supporting new OpenStack releases for Ubuntu Server 12.04 LTS in our Ubuntu Cloud archive for at least 18 months after their release. Canonical will stop introducing new releases of OpenStack for Ubuntu Server 12.04 LTS into the Ubuntu Cloud archive with the version shipped in the next Ubuntu Server LTS release (presumably 14.04). We will maintain and support this last updated release of OpenStack in the Ubuntu Cloud archive for 3 years, i.e. until the end of the Ubuntu 12.04 LTS lifecycle.
To allow for relatively easy upgrades, and still adhere to Ubuntu processes and policy, we have elected to make archive.canonical.com the home of the Ubuntu Cloud archive. We will enable update paths for each OpenStack release.
- e.g. Enabling “precise-folsom” in the archive will provide access to all OpenStack Folsom packages built for Ubuntu Server 12.04 LTS (binary and source), any updated dependencies required, and bug/security fixes made after release.
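As a rough sketch of what enabling a pocket might look like in practice (the deb line, pocket name, and package below are illustrative assumptions, not final published instructions):

```shell
# Hypothetical sketch: build the apt source entry for the
# "precise-folsom" pocket of the Ubuntu Cloud archive.
pocket="precise-folsom"
entry="deb http://archive.canonical.com/ubuntu $pocket main"
echo "$entry"
# With root, you would then drop it into place and update:
#   echo "$entry" | sudo tee /etc/apt/sources.list.d/ubuntu-cloud.list
#   sudo apt-get update && sudo apt-get install nova-compute
```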
As of now, we have no plans to build or host OpenStack packages for non-LTS releases of Ubuntu Server in the Ubuntu Cloud archive. We have created the chart below to help better explain the options.
Why Not Use Stable Release Updates?
Ubuntu’s release policy states that once an Ubuntu release has been published, updates must follow a special procedure called a stable release update, or SRU, and are delivered via the -updates archive. These updates are restricted to a specific set of characteristics:
- severe regression bugs
- security vulnerabilities (via the -security archive)
- bugs causing loss of user data
- “safe” application layer bugs
- hardware enablement
- partner archive updates
Exceptions to the SRU policy are possible. However, for this to occur the Ubuntu Technical Board must approve the exception, which must meet their guidelines:
- Updates to new upstream versions of packages must be forced or substantially impelled by changes in the external environment, i.e. changes must be outside anything that could reasonably be encapsulated in a stable release of Ubuntu. Changes internal to the operating system we ship (i.e. the Ubuntu archive), or simple bugs or new features, would not normally qualify.
- A new upstream version must be the best way to solve the problem. For example, if a new upstream version includes a small protocol compatibility fix and a large set of user interface changes, then, without any judgement required as to the benefits of the user interface changes, we will normally prefer to backport the protocol compatibility fix to the version currently in Ubuntu.
- The upstream developers must be willing to work with Ubuntu. A responsive upstream who understands Ubuntu’s requirements and is willing to work within them can make things very much easier for us.
- The upstream code must be well-tested (in terms of unit and system tests). It must also be straightforward to run those tests on the actual packages proposed for deployment to Ubuntu users.
- Where possible, the package must have minimal interaction with other packages in Ubuntu. Ensuring that there are no regressions in a library package that requires changes in several of its reverse-dependencies, for example, is significantly harder than ensuring that there are no regressions in a package with a straightforward standalone interface that can simply be tested in isolation. We would not normally accept the former, but might consider the latter.
Once approved by the Tech Board, the exception must have a documented update policy, e.g. http://wiki.ubuntu.com/LandscapeUpdates. Based on these guidelines and the core functionality OpenStack serves in Ubuntu Cloud, the Ubuntu Server team did not feel it was in the best interest of their users, nor Ubuntu in general, to pursue an SRU exception.
What about using Ubuntu Backports?
The Ubuntu Backports process (which excludes the kernel) provides a mechanism for releasing package updates for stable releases that provide new features or functionality. Changes were recently made to `apt` in Ubuntu 11.10, whereby it now only installs packages from Backports when they are explicitly requested. Prior to 11.10, `apt` would install everything from Backports once it was enabled, which led to packages being unintentionally upgraded to newer versions. The primary drawbacks of using the Backports archive are that the Ubuntu Security team does not provide updates for it, enabling per-package updates is a bit of a hassle, and Canonical doesn’t traditionally offer support services for the packages hosted there. Furthermore, each new release of OpenStack depends on other applications that must also be at certain levels. By having more than one version of OpenStack in the same Backports archive, we run a huge risk of backward-compatibility issues with these dependencies.
How Will You Ensure Stability and Quality?
In order for us to ensure users have a safe and reliable upgrade path, we will establish a QA policy where all new versions and updated dependencies are required to pass a specific set of regression tests with a 100% success rate. In addition:
- Unit testing must cover a minimum set of functionality and APIs
- System test scenarios must be executed for 24, 48 and 72 hours uninterrupted.
- Package testing must cover: initial installation, upgrades from the previous OpenStack release, and upgrades from the previous LTS and non-LTS Ubuntu release.
- All test failures must be documented as bugs in Launchpad, with regressions marked Fix Released before the packages are allowed to exit QA.
- Test results are posted publicly and announced via a mailing list created specifically for this effort.
Only upon successfully exiting QA will packages be pushed into the Ubuntu Cloud archive.
What Happens With OpenStack Support and Maintenance in 14.04?
Good question. The cycle could repeat itself; however, at this point Canonical is not making such a commitment. If the rate of innovation and growth of the OpenStack project matures to a point where users become less likely to need the next release for its improved stability and/or quality, and instead just want it for a new feature, then we would likely return to our traditional LTS maintenance and support model.
// April 18th, 2012 // Comments Off on Want to mess with SPDY easily? Come experiment with it via juju // Uncategorized
(This is half broken and not ready, but it’s too cool to not tell you about right away.)
The folks over at Google released a new snapshot of mod_spdy for use with Apache.
Thinking this would be a cool way to show off juju’s subordinate feature, Clint Byrum got to work and hacked together a mod_spdy subordinate charm, which means (assuming it works) that we can just tack it onto things serving via Apache and get mod_spdy relatively easily for all the charms in the store that would use it. Neat, huh?
Here’s how you’d test it:
juju deploy wordpress
juju deploy mysql
juju add-relation mysql wordpress
juju expose wordpress
juju deploy cs:~clint-fewbar/precise/mod-spdy
juju add-relation mod-spdy wordpress
Clint realized that juju does not allow subordinates to open ports for their primaries, so you have to use the open-port script in juju-jitsu.
To do that, after you’ve done the steps above:
bzr branch lp:juju-jitsu
juju-jitsu/sub-commands/open-port your-primary-service 443
https://your-public-ip/ should now be served over SSL, and should be using SPDY if you try it in Chrome/Chromium/Firefox.
This is a rough cut and not in the charm store yet for obvious reasons; when I tested it, it didn’t even serve the right page, but at least the error was served over SPDY, heh. Still, you can immediately see that by using a subordinate charm you can add a feature to an existing charm, making it a nice way to test something new and shiny.
We’re in #juju on freenode if you want to start whacking on this: if you make it more deployable, find bugs, and finish the implementation, we can then make this a nice option for Ubuntu Server users. Happy SPDYing!
// April 17th, 2012 // Comments Off on Juju constraints unbinds your machines // Uncategorized
This week, William “I code more than you will ever be able to” Reade announced that Juju has a new feature called ‘Constraints’.
This is really, really cool and brings juju into a new area of capability for deploying big and little sites.
To be clear, this lets you abstract away machine selection pretty effectively: you describe the resources a service needs, and juju picks the instances.
juju deploy mysql --constraints mem=10G
juju deploy statusnet --constraints cpu=1
This will result in your mysql service being on an extra large instance, since that has 15GB of RAM. Your statusnet instances will be m1.small instances, since those have just 1 ECU.
Even cooler: if you now want a mysql slave in a different availability zone:
juju deploy mysql --constraints ec2-zone=a mysql-a
juju deploy mysql --constraints ec2-zone=b mysql-b
juju add-relation mysql-a:master mysql-b:slave
juju add-relation statusnet mysql-a
Now, if mysql-a goes down:
juju remove-relation statusnet mysql-a
juju add-relation statusnet mysql-b
Much more is possible, but this really does make juju even more compelling as a tool for simple, easy deployment. Edit: fixed ec2-zone to be the single character, per William’s feedback.
// April 10th, 2012 // Comments Off on Announcing the Ubuntu Cloud Summit, 8 May, Oakland, California // Uncategorized
Canonical, in collaboration with Redmonk, will be hosting “The Ubuntu Cloud Summit” – a one day event for both technology and business attendees interested in how open-source cloud computing can help their organisations.
The event takes place on Tuesday 8th May, at the Oakland Marriott City Center Hotel, and runs in conjunction with UDS.
The agenda is still being defined, but the sessions will cover some interesting ideas, challenges and trends around cloud computing and how attendees can deploy an open cloud in their organisation.
Topics will include:
- The Open Cloud – The role of open source in cloud computing—particularly how an open cloud enables a more flexible, vendor-neutral approach.
- Lessons from cloud deployments – Open cloud deployments are real and growing. We’ll discuss and illustrate through case studies the best approaches to deploying and maximising an open cloud.
- Open-source cloud technologies – With Ubuntu including technologies such as OpenStack, MAAS and Juju, we’ll examine how they come together to form an open cloud.
For more information, visit: http://uds.ubuntu.com/cloud-summit/
The cost of a ticket for this event is $100, which includes lunch and refreshments.
// April 10th, 2012 // 1 Comment » // Uncategorized
The community submitted over 10 charms as part of the juju charm contest. The judges have deliberated and have picked the following winners:
The Grand Prize ($300 Amazon Gift Card) goes to Jimmi Andersen for his charm that deploys Appflower, a Rapid Application Development (RAD) tool for building web applications. You can check out the charm in the store for deployment instructions. The judges were impressed by how complete the charm is and how it brings software to Ubuntu that was previously only available by installing it by hand.
The 2 runners-up (in no particular order) are Kees Cook for sbuild, and Ben Kerensa for Subway. sbuild provides a build environment for developers to test packages against, and has been used for portable “hackathons” where building packages on the cloud is quicker and more convenient than building on your local machines. The Subway charm deploys the Subway IRC client, a sexy web-based IRC client that uses Node.js and MongoDB. Thanks Ben and Kees, you’ll each receive a $100 Amazon gift card.
The charm store continues to grow as we now have over 73 total charms. The following people contributed charms to the contest and will each receive a Juju t-shirt and Ubuntu travel mug: Patrick Hetu (znc and OpenERP), Atul Jha (OwnCloud), Nathan Osman (StackMobile), shazzner (Gitolite), and Brandon Holtsclaw (Drupal). Honorable mention goes to Ryan Kather, who attempted Moodle but was not able to complete it in time. Maybe next time! We’ll go deeper into these charms and show off their examples throughout the coming weeks.
Not finding what you need in the Charm Store? Well, you can always contribute your own charms; here’s how you can get started.
// April 9th, 2012 // Comments Off on Uploading Known ssh Host Key in EC2 user-data Script // Uncategorized
The ssh protocol uses two different keys to keep you secure:
- The user ssh key is the one we normally think of. It authenticates us to the remote host, proving that we are who we say we are and allowing us to log in.
- The ssh host key gets less attention, but is also important. It authenticates the remote host to our local computer, ensuring that the ssh session is encrypted so that nobody can be listening in.
Every time you see a prompt like the following, ssh is checking the host key and asking you to make sure that your session is going to be encrypted securely.
The authenticity of host 'ec2-...' can't be established.
ECDSA key fingerprint is ca:79:72:ea:23:94:5e:f5:f0:b8:c0:5a:17:8c:6f:a8.
Are you sure you want to continue connecting (yes/no)?
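As a quick local illustration (throwaway paths, nothing EC2-specific), the fingerprint in that prompt is just the fingerprint of the host’s public key, which you can compute yourself with ssh-keygen:

```shell
# Generate a throwaway ECDSA key pair and print its fingerprint --
# the same kind of string ssh shows in the host-key prompt.
demo=$(mktemp -d)
ssh-keygen -q -t ecdsa -N "" -C "" -f "$demo/host_key"
ssh-keygen -lf "$demo/host_key.pub"
```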
If you answer “yes” without verifying that the remote ssh host key fingerprint is the same, then you are basically saying:
I don’t need this ssh session encrypted. It’s fine for any
man-in-the-middle to intercept the communication.
Ouch! (But a lot of people do this.)
Note: If you have a line like the following in your ssh config file, then you are automatically answering “yes” to this prompt for every host:

# DON'T DO THIS!
StrictHostKeyChecking no
Caring about security

Since you do care about security and privacy, you want to verify that you are talking to the right server using encryption and that no man-in-the-middle can intercept your session.
There are a couple of approaches you can take to check the fingerprint for a new Amazon EC2 instance. The first is to wait for the console output to be available from the instance, retrieve it, and verify that the ssh host key fingerprint in the console output is the same as the one being presented to you in the prompt.
Scott Moser has written a blog post describing how to verify ssh keys on EC2 instances. It’s worth reading so that you understand the principles and the official way to do this.
The rest of this article presents a different approach that lets you in to your new instance quickly and securely.
Passing ssh host key to new EC2 instance
Instead of letting the new EC2 instance generate its own ssh host key and waiting for it to communicate the fingerprint through the EC2 console output, we can generate the new ssh host key on our local system and pass it to the new instance. Using this approach, we already know the public side of the ssh key, so we don’t have to wait for it to become available through the console (which can take minutes).
Generate a new ssh host key for the new EC2 instance:

tmpdir=$(mktemp -d /tmp/ssh-host-key.XXXXXX)
keyfile=$tmpdir/ssh_host_ecdsa_key
ssh-keygen -q -t ecdsa -N "" -C "" -f $keyfile
Create the user-data script that will set the ssh host key:

userdatafile=$tmpdir/set-ssh-host-key.user-data
cat <<EOF >$userdatafile
#!/bin/bash -xeu
cat <<EOKEY >/etc/ssh/ssh_host_ecdsa_key
$(cat $keyfile)
EOKEY
cat <<EOKEY >/etc/ssh/ssh_host_ecdsa_key.pub
$(cat $keyfile.pub)
EOKEY
EOF
Run an EC2 instance, say Ubuntu 11.10 Oneiric, passing in the user-data script. Make a note of the new instance id:

ec2-run-instances --key $USER --user-data-file $userdatafile ami-4dad7424
instanceid=i-...
Wait for the instance to get a public DNS name and make a note of it:

ec2-describe-instances $instanceid
host=ec2-...compute-1.amazonaws.com
Add the new public ssh host key to our local ssh known_hosts, after removing any leftover key (e.g., from a previous EC2 instance at the same IP address):

knownhosts=$HOME/.ssh/known_hosts
ssh-keygen -R $host -f $knownhosts
ssh-keygen -R $(dig +short $host) -f $knownhosts
(
  echo -n "$host "; cat $keyfile.pub
  echo -n "$(dig +short $host) "; cat $keyfile.pub
) >> $knownhosts
When the instance starts running and the user-data script has executed, you can ssh in to the server without being prompted to verify the fingerprint.
Don’t forget to clean up and to terminate your test instance.
rm -rf $tmpdir
ec2-terminate-instances $instanceid
There is one big drawback in the above sample implementation of this approach: we have placed secret information (the private ssh host key) into the EC2 user-data, which I generally recommend against. Any user who can log in to the instance, or who can cause the instance to request a URL and get the output, can retrieve the user-data. You might think this is unlikely to happen, but I’d rather avoid or minimize unnecessary risk.
In a production implementation of this approach, I would take steps like the following:
- Upload the new ssh host key to S3 in a private object.
- Generate an authenticated URL to the S3 object and have that URL expire in, say, 10 minutes.
- In the user-data script, download the ssh host key with the authenticated, expiring S3 URL.
Now there is only a short window of exposure, and you don’t have to worry about protecting the user-data after the URL has expired.
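A minimal sketch of that production variant, assuming the pre-signed URL comes from whatever S3 tooling you use (the URL and file names below are placeholders, not real values):

```shell
# Build a user-data script that fetches the private host key from a
# short-lived, pre-signed S3 URL at boot, instead of embedding the
# key itself in the user-data. PRESIGNED_S3_URL is a placeholder.
tmpdir=$(mktemp -d)
cat <<'EOF' >"$tmpdir/fetch-host-key.user-data"
#!/bin/bash -eu
wget -qO /etc/ssh/ssh_host_ecdsa_key "PRESIGNED_S3_URL"
chmod 600 /etc/ssh/ssh_host_ecdsa_key
EOF
cat "$tmpdir/fetch-host-key.user-data"
```

Once the URL expires, the user-data no longer gives away anything sensitive.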
// April 5th, 2012 // 4 Comments » // Uncategorized
UK cloud provider Brightbox would like to announce that they now have daily images of Ubuntu 12.04 available for testing. Brightbox has an EC2-compatible metadata service that works with Ubuntu’s cloud-init; you can find out more about that in the documentation.
As a thank you to the Ubuntu community, Brightbox is running a special through to September: a 10% discount for casual testers, and a 50% discount for anyone registering using their @ubuntu.com address.
Testers can sign up at http://brightbox.com/, and to claim the discount you just need to email email@example.com with your account id and let them know how you’re testing out Ubuntu.
Here’s their Getting Started guide. Happy testing!
// April 5th, 2012 // Comments Off on Don’t miss the inaugural Ubuntu Cloud Summit // Uncategorized
Kicking off this May, the Ubuntu Cloud Summit is a one day event for both technology and business people interested in what cloud computing can do for their organisations.
Hosted by Canonical and Redmonk, we’ll be looking at how open source is playing a critical role in the move to cloud computing. Delegates will also hear how enterprises have made the most of the move to the cloud using open source. There will be plenty of opportunity for discussion and debate, ensuring you have all the information you need to deploy an open cloud.
The day will include a keynote from Mark Shuttleworth and others, plus a panel discussion chaired by Stephen O’Grady of Redmonk, before closing with cocktails and canapes.
The Ubuntu Cloud Summit takes place on Tuesday 8th May, at the Oakland Marriott City Center in Oakland.
The event is sure to be popular, so don’t miss your chance to be there.
To find out more, go to uds.ubuntu.com/cloud-summit/
Hope to see you there!