Archive for November, 2011

Experimenting with juju and Puppet

// November 8th, 2011 // Comments Off // Uncategorized

Painless Hadoop / Ubuntu / EC2

// November 8th, 2011 // 2 Comments » // Uncategorized

#########################################################
NOTE: Repost

The Ubuntu project “ensemble” is now publicly known as “juju”. This is a repost of an older article, Painless Hadoop / Ubuntu / EC2, to reflect the new names and updates to the API.

#########################################################

Thanks to Michael Noll for the posts where I first learned how to do this stuff.

I’d like to run his exact examples, but this time around I’ll use juju for hadoop deployment/management.


The Short Story

Setup

install/configure juju client tools

$ sudo apt-get install juju charm-tools
$ mkdir ~/charms && charm getall ~/charms

run hadoop services with juju

$ juju bootstrap
$ juju deploy --repository ~/charms local:hadoop-master namenode
$ juju deploy --repository ~/charms local:hadoop-slave datanodes
$ juju add-relation namenode datanodes

optionally add datanodes to scale horizontally

$ juju add-unit datanodes
$ juju add-unit datanodes
$ juju add-unit datanodes

(you can add/remove these later too)

Scaling is so easy that there's no point in separate standalone vs. multi-node versions of the setup.
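
Removing units works the same way if you overshoot; a hedged example (the unit name is illustrative, assuming this juju release's remove-unit subcommand):

$ juju remove-unit datanodes/3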

Data and Jobs

Load your data and jars

$ juju ssh namenode/0

ubuntu$ sudo -su hdfs

hdfs$ cd /tmp
hdfs$ wget http://files.markmims.com/gutenberg.tar.bz2
hdfs$ tar xjvf gutenberg.tar.bz2

copy the data into hdfs

hdfs$ hadoop dfs -copyFromLocal /tmp/gutenberg gutenberg

run mapreduce jobs against the dataset

hdfs$ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount -Dmapred.map.tasks=20 -Dmapred.reduce.tasks=20 gutenberg gutenberg-output

That’s it!


Now, again with some more details…

Installing juju

Install juju client tools onto your local machine…

$ sudo apt-get install juju charm-tools

We've got the juju CLI in MacPorts now too for Mac clients (Homebrew is in progress).

Now generate your environment settings with

$ juju

and then edit ~/.juju/environments.yaml to use your EC2 keys. It’ll look something like:

environments:
  sample:
    type: ec2
    control-bucket: juju-<hash>
    admin-secret: <hash>
    access-key: <your ec2 access key>
    secret-key: <your ec2 secret key>
    default-series: oneiric

In real life you'd probably also want to bump the default instance type to at least m1.large, but I'll give some examples of that in later posts.
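
For the impatient, here is a hedged sketch of what that tweak might look like (the exact key name, default-instance-type here, varied across early juju releases, so check the docs for your version):

environments:
  sample:
    type: ec2
    default-series: oneiric
    default-instance-type: m1.large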

Hadoop

Grab the juju charms

Make a place for charms to live

$ mkdir -p ~/charms/oneiric
$ cd ~/charms/oneiric
$ charm get hadoop-master
$ charm get hadoop-slave

(optionally, you can charm getall but it’ll take a bit to pull all charms).
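
Either way, the layout matters: juju resolves local: charms by looking for a series subdirectory under the path you pass to --repository, so you should end up with something roughly like

$ ls ~/charms/oneiric
hadoop-master  hadoop-slave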

Start the Hadoop Services

Spin up a juju environment

$ juju bootstrap

wait a minute or two for EC2 to comply. You’re welcome to watch the water boil with

$ juju status

or even

$ watch -n30 juju status

which’ll give you output like

$ juju status
2011-07-12 15:20:54,978 INFO Connecting to environment.
The authenticity of host 'ec2-50-17-28-19.compute-1.amazonaws.com (50.17.28.19)' can't be established.
RSA key fingerprint is c5:21:62:f0:ac:bd:9c:0f:99:59:12:ec:4d:41:48:c8.
Are you sure you want to continue connecting (yes/no)? yes
machines:
  0: {dns-name: ec2-50-17-28-19.compute-1.amazonaws.com, instance-id: i-8bc034ea}
services: {}
2011-07-12 15:21:01,205 INFO 'status' command finished successfully

Next, you need to deploy the hadoop services:

$ juju deploy --repository ~/charms local:hadoop-master namenode
$ juju deploy --repository ~/charms local:hadoop-slave datanodes

now you simply relate the two services:

$ juju add-relation namenode datanodes

Relations are where the juju special sauce is, but more about that in another post.

You can tell everything’s happy when juju status gives you something like (looks a bit different, but basics are the same):

$ juju status
2011-07-12 15:29:20,331 INFO Connecting to environment.
machines:
  0: {dns-name: ec2-50-17-28-19.compute-1.amazonaws.com, instance-id: i-8bc034ea}
  1: {dns-name: ec2-50-17-0-68.compute-1.amazonaws.com, instance-id: i-4fcf3b2e}
  2: {dns-name: ec2-75-101-249-123.compute-1.amazonaws.com, instance-id: i-35cf3b54}
services:
  namenode:
    formula: local:hadoop-master-1
    relations: {hadoop-master: datanodes}
    units:
      namenode/0:
        machine: 1
        relations:
          hadoop-master: {state: up}
        state: started
  datanodes:
    formula: local:hadoop-slave-1
    relations: {hadoop-master: namenode}
    units:
      datanodes/0:
        machine: 2
        relations:
          hadoop-master: {state: up}
        state: started
2011-07-12 15:29:23,685 INFO 'status' command finished successfully

Loading Data

Log into the master node

$ juju ssh namenode/0

and become the hdfs user

ubuntu$ sudo -su hdfs

pull the example data

hdfs$ cd /tmp
hdfs$ wget http://files.markmims.com/gutenberg.tar.bz2
hdfs$ tar xjvf gutenberg.tar.bz2

and copy it into hdfs

hdfs$ hadoop dfs -copyFromLocal /tmp/gutenberg gutenberg
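
To double-check that the copy landed, list it back out

hdfs$ hadoop dfs -ls gutenberg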

Running Jobs

Similar to above, but now do

hdfs$ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount gutenberg gutenberg-output

you might want to explicitly call out the number of map and reduce tasks to use…

hdfs$ hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount -Dmapred.map.tasks=20 -Dmapred.reduce.tasks=20 gutenberg gutenberg-output

depending on the size of the cluster you decide to spin up.
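
When the job finishes, the results land in gutenberg-output in hdfs; here is a quick peek (the part-file naming varies a bit between Hadoop versions, hence the glob)

hdfs$ hadoop dfs -ls gutenberg-output
hdfs$ hadoop dfs -cat gutenberg-output/part-* | head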

You can look at logs on the slaves by

$ juju ssh datanodes/0
ubuntu$ tail /var/log/hadoop/hadoop-hadoop-datanode*.log
ubuntu$ tail /var/log/hadoop/hadoop-hadoop-tasktracker*.log

similarly for subsequent slave nodes if you’ve spun them up

$ juju ssh datanodes/1

or

$ juju ssh datanodes/2

Horizontal Scaling

To resize your cluster,

$ juju add-unit datanodes

or even

$ for i in {1..10}
$ do
$   juju add-unit datanodes
$ done

Wait for juju status to show everything in a happy state and then run your jobs.

I was able to add slave nodes in the middle of a run… they pick up load and crank.

Check out the juju status output for a simple 10-slave cluster here.

Monitoring Hadoop Benchmarks TeraGen/TeraSort with Ganglia

// November 8th, 2011 // Comments Off // Uncategorized

#########################################################
NOTE: Repost

The Ubuntu project “ensemble” is now publicly known as “juju”. This is a repost of an older article, Monitoring Hadoop Benchmarks TeraGen/TeraSort with Ganglia, to reflect the new names and updates to the API.

#########################################################

Here I’m using new features of Ubuntu Server (namely juju) to easily deploy Ganglia alongside a small Hadoop cluster to play around with monitoring some benchmarks like Terasort.

Short Story

Deploy hadoop and ganglia using juju:

$ juju bootstrap
$ juju deploy --repository "~/charms"  local:hadoop-master namenode
$ juju deploy --repository "~/charms"  local:ganglia jobmonitor
$ juju deploy --repository "~/charms"  local:hadoop-slave datacluster
$ juju add-relation namenode datacluster
$ juju add-relation jobmonitor datacluster
$ for i in {1..6}; do
$   juju add-unit datacluster
$ done
$ juju expose jobmonitor

When all is said and done (and EC2 has caught up), run the jobs

$ juju ssh namenode/0
ubuntu$ sudo -su hdfs
hdfs$ hadoop jar hadoop-*-examples.jar teragen -Dmapred.map.tasks=100 -Dmapred.reduce.tasks=100 100000000 in_dir
hdfs$ hadoop jar hadoop-*-examples.jar terasort -Dmapred.map.tasks=100 -Dmapred.reduce.tasks=100 in_dir out_dir

While these are running, we can run

$ juju status

to get the URL for the jobmonitor ganglia web frontend

http://<jobmonitor-instance-ec2-url>/ganglia/
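
If you would rather not pick the jobmonitor address out of the full status output by eye, a rough shortcut is to grep the same status text shown in the earlier post

$ juju status | grep dns-name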

You'll see the Ganglia graphs start to fill in as the cluster comes up, and again a little later as the jobs run.

Of course, I’m just playing around with ganglia at the moment… For real performance, I’d change my juju config file to choose larger (and ephemeral) EC2 instances instead of the defaults.

A Few Details…

Let’s grab the charms necessary to reproduce this.

First, let's install juju and set up our charms.

$ sudo apt-get install juju charm-tools

Note that I'm describing all this using an Ubuntu laptop to run the juju CLI because that's how I roll, but you can certainly use a Mac to drive your Ubuntu services in the cloud. The juju CLI is already available in MacPorts, though I'm not sure which version. Homebrew packages are in the works. Windows should work too, but I don't have a clue.

$ mkdir -p ~/charms/oneiric
$ cd ~/charms/oneiric
$ charm get hadoop-master
$ charm get hadoop-slave
$ charm get ganglia

That’s about all that’s really necessary to get you up and benchmarking/monitoring.

I’ll do another post on how to adapt your own charms to use monitoring and the monitor juju interface as part of the “Core Infrastructure” series I’m writing for charm developers. I’ll go over the process of what I had to do to get the hadoop-slave service talking to monitoring services like ganglia.

Until then, clone/test/enjoy… or better yet, fork/adapt/use!

Hadoop cluster with Ubuntu server and Juju

// November 8th, 2011 // Comments Off // Uncategorized

A while back I started experimenting with Juju and was intrigued by the notion of services instead of machines.

A bit of background on Juju from their website:

  • Formerly called Ensemble, juju is DevOps Distilled™. Through the use of charms (renamed from formulas), juju provides you with shareable, re-usable, and repeatable expressions of DevOps best practices. You can use them unmodified, or easily change and connect them to fit your needs. Deploying a charm is similar to installing a package on Ubuntu: ask for it and it's there, remove it and it's completely gone.

I come from a DevOps background and know firsthand the trials and tribulations of deploying production services, webapps, etc.  One that's particularly "thorny" is hadoop.

To deploy a hadoop cluster, we would need to download the dependencies (Java, etc.), download hadoop, configure it and deploy it.  This process is somewhat different depending on the type of node you're deploying (i.e. namenode, jobtracker, etc.).  It's a multi-step process that requires too much human intervention, and one that is difficult to automate and reproduce.  Imagine a 10, 20 or 50 node cluster built this way: it gets frustrating quickly and is prone to mistakes.

With this experience in mind (and a lot of reading), I set out to deploy a hadoop cluster using a Juju charm.

First things first, let's install Juju.  Follow the Getting Started documentation on the Juju site here.

According to the Juju documentation, we just need to follow some file naming conventions for what they call "hooks" (executable scripts in your language of choice that perform certain actions).  These "hooks" control the installation, relationships, start, stop, etc. of your charm.  We also need to summarize the formula in a file called metadata.yaml.  The metadata.yaml file describes the formula, its interfaces, and what it requires and provides, among other things.  More on this file later when I show you the one for hadoop-master and hadoop-slave.

Armed with a bit of knowledge and a desire for simplicity, I decided to split the hadoop cluster in two:

  • hadoop-master (namenode and jobtracker )
  • hadoop-slave ( datanode and tasktracker )
I know this is not an all-encompassing list, but it will take care of a good portion of deployments, and the Juju charms are easy enough to modify that you can work your changes into them.

One of my colleagues, Brian Thomason, did a lot of packaging for these charms, so my job is now easier.  The configuration for the packages has been distilled down to three questions:

  1. namenode ( leave blank if you are the namenode )
  2. jobtracker ( leave blank if you are the jobtracker )
  3. hdfs data directory ( leave blank to use the default: /var/lib/hadoop-0.20/dfs/data )
Due to the magic of Ubuntu packaging, we can even "preseed" the answers to those questions to avoid being asked about them (and stopping the otherwise automatic process). We'll use the utility debconf-set-selections for this.  Here is a piece of the code that I use to preseed the values in my charm:

echo debconf hadoop/namenode string ${NAMENODE}| /usr/bin/debconf-set-selections
echo debconf hadoop/jobtracker string ${JOBTRACKER}| /usr/bin/debconf-set-selections
echo debconf hadoop/hdfsdatadir string ${HDFSDATADIR}| /usr/bin/debconf-set-selections

The variable names should be self-explanatory.

Thanks to Brian's work, I now just have to install the packages ( hadoop-0.20-namenode and hadoop-0.20-jobtracker).  Let's put all of this together into a Juju charm.

  • Create a directory for the hadoop-master formula ( mkdir hadoop-master )
  • Make a directory for the hooks of this charm ( mkdir hadoop-master/hooks )
  • Let's start with the always needed metadata.yaml file ( hadoop-master/metadata.yaml ):
ensemble: formula
name: hadoop-master
revision: 1
summary: Master Node for Hadoop
description: |
  The Hadoop Distributed Filesystem (HDFS) requires one unique server, the
  namenode, which manages the block locations of files on the
  filesystem.  The jobtracker is a central service which is responsible
  for managing the tasktracker services running on all nodes in a
  Hadoop Cluster.  The jobtracker allocates work to the tasktracker
  nearest to the data with an available work slot.
provides:
  hadoop-master:
    interface: hadoop-master

  • Every Juju charm has an install script ( in our case: hadoop-master/hooks/install ).  This is an executable file in your language of choice that Juju will run when it's time to install your charm.  Anything and everything that needs to happen for your charm to install, needs to be inside of that file.  Let's take a look at the install script of hadoop-master:
#!/bin/bash
# Here do anything needed to install the service
# i.e. apt-get install -y foo  or  bzr branch http://myserver/mycode /srv/webroot


##################################################################################
# Set debugging
##################################################################################
set -ux
juju-log "install script"


##################################################################################
# Add the repositories
##################################################################################
export TERM=linux
# Add the Hadoop PPA
juju-log "Adding ppa"
apt-add-repository ppa:canonical-sig/thirdparty
juju-log "updating cache"
apt-get update


##################################################################################
# Calculate our IP Address
##################################################################################
juju-log "calculating ip"
IP_ADDRESS=`hostname -f`
juju-log "Private IP: ${IP_ADDRESS}"


##################################################################################
# Preseed our Namenode, Jobtracker and HDFS Data directory
##################################################################################
NAMENODE="${IP_ADDRESS}"
JOBTRACKER="${IP_ADDRESS}"
HDFSDATADIR="/var/lib/hadoop-0.20/dfs/data"
juju-log "Namenode: ${NAMENODE}"
juju-log "Jobtracker: ${JOBTRACKER}"
juju-log "HDFS Dir: ${HDFSDATADIR}"

echo debconf hadoop/namenode string ${NAMENODE}| /usr/bin/debconf-set-selections
echo debconf hadoop/jobtracker string ${JOBTRACKER}| /usr/bin/debconf-set-selections
echo debconf hadoop/hdfsdatadir string ${HDFSDATADIR}| /usr/bin/debconf-set-selections


##################################################################################
# Install the packages
##################################################################################
juju-log "installing packages"
apt-get install -y hadoop-0.20-namenode
apt-get install -y hadoop-0.20-jobtracker


##################################################################################
# Open the necessary ports
##################################################################################
if [ -x /usr/bin/open-port ]; then
   open-port 50010/TCP
   open-port 50020/TCP
   open-port 50030/TCP
   open-port 50105/TCP
   open-port 54310/TCP
   open-port 54311/TCP
   open-port 50060/TCP
   open-port 50070/TCP
   open-port 50075/TCP
   open-port 50090/TCP
fi


  • There are a few other files that we need to create (start and stop) to get the hadoop-master charm installed.  Let's see those files:
    • start
#!/bin/bash
# Here put anything that is needed to start the service.
# Note that currently this is run directly after install
# i.e. 'service apache2 start'

set -x
service hadoop-0.20-namenode status && service hadoop-0.20-namenode restart || service hadoop-0.20-namenode start
service hadoop-0.20-jobtracker status && service hadoop-0.20-jobtracker restart || service hadoop-0.20-jobtracker start

    • stop
#!/bin/bash
# This will be run when the service is being torn down, allowing you to disable
# it in various ways..
# For example, if your web app uses a text file to signal to the load balancer
# that it is live... you could remove it and sleep for a bit to allow the load
# balancer to stop sending traffic.
# rm /srv/webroot/server-live.txt && sleep 30

set -x
juju-log "stop script"
service hadoop-0.20-namenode stop
service hadoop-0.20-jobtracker stop
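
One note on the open-port calls in the install hook above: they only declare which ports the service wants reachable. Nothing is opened to the outside world until you expose the service from the juju client, e.g. if you wanted the master's web UIs reachable:

$ juju expose hadoop-master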

Let's go back to the metadata.yaml file and examine it in more detail:

ensemble: formula
name: hadoop-master
revision: 1
summary: Master Node for Hadoop
description: |
  The Hadoop Distributed Filesystem (HDFS) requires one unique server, the
  namenode, which manages the block locations of files on the
  filesystem.  The jobtracker is a central service which is responsible
  for managing the tasktracker services running on all nodes in a
  Hadoop Cluster.  The jobtracker allocates work to the tasktracker
  nearest to the data with an available work slot.
provides:
  hadoop-master:
    interface: hadoop-master

The emphasized section ( provides ) tells juju that this formula provides an interface named hadoop-master that can be used in relationships with other charms ( in our case we'll be using it to connect the hadoop-master with the hadoop-slave charm that we'll be writing a bit later ).  For this relationship to work, we need to let Juju know what to do ( More detailed information about relationships in charms can be found here ).
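
For context, the hadoop-slave side of this relationship declares that it requires the same interface. A rough sketch of what that stanza in the slave's metadata.yaml might look like (the actual charm linked below is the authoritative version):

requires:
  hadoop-master:
    interface: hadoop-master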

Per the Juju documentation, we need to name our relation hook hadoop-master-relation-joined, and it should also be an executable script in your language of choice.  Let's see what that file looks like:

#!/bin/sh
# This must be renamed to the name of the relation. The goal here is to
# affect any change needed by relationships being formed
# This script should be idempotent.

set -x

juju-log "joined script started"

# Calculate our IP Address
IP_ADDRESS=`unit-get private-address`

# Preseed our Namenode, Jobtracker and HDFS Data directory
NAMENODE="${IP_ADDRESS}"
JOBTRACKER="${IP_ADDRESS}"
HDFSDATADIR="/var/lib/hadoop-0.20/dfs/data"

relation-set namenode="${NAMENODE}" jobtracker="${JOBTRACKER}" hdfsdatadir="${HDFSDATADIR}"



juju-log "$JUJU_REMOTE_UNIT joined"

Your charm directory should now look something like this:

natty/hadoop-master
natty/hadoop-master/metadata.yaml
natty/hadoop-master/hooks/install
natty/hadoop-master/hooks/start
natty/hadoop-master/hooks/stop
natty/hadoop-master/hooks/hadoop-master-relation-joined

This charm should now be complete...  It's not too exciting yet, as it doesn't have the hadoop-slave counterpart, but it is a complete charm.

The latest version of the hadoop-master charm can be found here if you want to get it.

The hadoop-slave charm is almost the same as the hadoop-master charm with some exceptions.  Those I'll leave as an exercise for the reader.

The hadoop-slave charm can be found here if you want to get it.

Once you have both charms (hadoop-master and hadoop-slave), you can easily deploy your cluster by typing:

  • juju bootstrap   # ( creates/bootstraps the ensemble environment)
  • juju deploy --repository . local:natty/hadoop-master # ( deploys hadoop-master )
  • juju deploy --repository . local:natty/hadoop-slave # ( deploys hadoop-slave )
  • juju add-relation hadoop-slave hadoop-master # ( connects the hadoop-slave to the hadoop-master )
As you can see, once you have the charms written and tested, deploying the cluster is really a matter of a few commands.  The above example gives you one hadoop-master (namenode, jobtracker) and one hadoop-slave (datanode, tasktracker).

To add another node to this existing hadoop cluster, we add:

  • juju add-unit hadoop-slave # ( this adds one more slave )
Run the above command multiple times to continue to add hadoop-slave nodes to your cluster.
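
Or, borrowing the loop from the earlier posts, grow the cluster in one go:

$ for i in {1..5}; do juju add-unit hadoop-slave; done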

Juju allows you to catalog the steps needed to get your service/application installed, configured and running properly.  Once your knowledge has been captured in a Juju charm, it can be re-used by you or others without much knowledge of what's needed to get the application/service running.

In the DevOps world, this code re-usability can save time, effort and money by providing self-contained charms that deliver a service or application.

Quick UDS-P Wrapup

// November 5th, 2011 // 2 Comments » // Uncategorized

It’s late, I’m tired, so this is going to be brief.  But if I didn’t put something up now, chances are I’d procrastinate to the point where it didn’t matter anymore, so something is better than nothing.

JuJu

So the buzz all week was about Juju and Charms.  It's a very cool technology that I think is really going to highlight the potential of cloud computing.  Until now I always had people comparing the cloud to virtual machines, telling me they already automate deploying VMs, but with Juju you don't think about machines anymore, virtual or otherwise.  It's all about services, which is really what you want: a service that is doing something for you.  You don't need to care where, or on what, or in combination with what other thing; Juju handles all that automatically.  It's really neat, and I'm looking forward to using it more.

Summit

Summit worked this week.  In fact, this is the first time in my memory where there wasn't a problem with the code during UDS.  And that's not because we left it alone, either.  IS actually moved the entire site to a new server the day before UDS started.  We landed several fixes during the week to address minor inconveniences experienced by IS or the admins.  And that's not even taking into consideration all the last-minute features that were added by our Linaro developers the week prior.  But through it all, Summit kept working.  That, more than anything else, is a testament to the work the Summit developers put in over the last cycle to improve the code quality and development processes, and I am very, very proud of that.  But we're not taking a break this cycle.  In fact, we had two separate sessions this week about ways to improve the user experience, and we will be joined by some professional designers to help us toward that goal.

Ubuntu One eBook syncing

So what started off as a casual question to Stuart Langridge turned into a full-blown session about how to sync ebook data using Ubuntu One.  We brainstormed several options for what we could sync, including reading position, bookmarks, highlights and notes, as well as ways to sync them in an application-agnostic manner.  I missed the session on the upcoming Ubuntu One Database (U1DB), but we settled on that being the ideal way of handling this project, and on this project being an ideal test case for U1DB.  For reasons I still can't explain, I volunteered to develop this functionality at some point during the next cycle.  It's certainly going to be a learning experience.

Friends

Friends!  It sure was good to catch up with all of you.  Both friends from far-away lands, and those closer to home.  Even though we chat on IRC almost constantly, there’s still nothing quite like being face to face.  I greatly enjoyed working in the same room with the Canonical ISD team, which has some of the smartest people I’ve ever had the pleasure of working with.  It was also wonderful to catch up with all my friends from the community.  I don’t know of any other product or project that brings people together the way Ubuntu does, and I’m amazed and overjoyed that I get to be a part of it.

Learn some JuJu at the UDS Charm School

// November 4th, 2011 // 2 Comments » // Uncategorized

If you’ve been doing anything with Ubuntu lately, chances are you’ve been hearing a lot of buzz about Juju.  If you’re attending UDS, then there’s also a good chance that you’ve been to one or more sessions about Juju.  But do you know it?

The building blocks for Juju are its "charms", which detail exactly how to deploy and configure services in the cloud.  Writing charms is how you harness the awesome power of Juju.  Tomorrow (Friday) there will be a two-hour session all about writing charms, covering everything from what they do and how they work to helping you get started writing your own.  Questions will be answered, minds will be inspired, things will be made, so don't miss out.

http://summit.ubuntu.com/uds-p/meeting/19875/juju-charm-school/

(Photo courtesy of http://www.flickr.com/photos/slightlynorth/3977607387/)

AWS Virtual MFA and the Google Authenticator for Android

// November 3rd, 2011 // 2 Comments » // Uncategorized

Amazon just announced that the AWS MFA (multi-factor authentication) now supports virtual or software MFA devices in addition to the physical hardware MFA devices like the one that’s been taking up unwanted space in my pocket for two years.

Multi-factor authentication means that in order to log in to my AWS account using the AWS console or portal (including the AWS forums) you not only need my secret password, you also need access to a device that I carry around with me.

Before, this was a physical device attached to my key ring. Now, this is my smart phone which has the virtual (software) MFA device on it. I already carry my phone with me, so the software doesn’t take up any additional space.

To log in to AWS, I enter my password and then the current 6 digit access code displayed by the Android app on my phone. These digits change every 30 seconds in an unguessable pattern, so this enhances the security of my AWS account.

I started by using Amazon’s AWS Virtual MFA app for my Android phone, but had some complaints about it including:

  • You have to click on an account name to see the current digits instead of just having them shown when the app is run. There’s nothing else for the app to do but show me these digits. Just do it!

  • The digits disappear from the screen too fast. Sometimes I want to glance back and see if I typed them in correctly, but they’re gone and I have to click again, hoping that they haven’t changed yet.

  • It’s hard to choose your own account names so that you know which entry to use for different AWS accounts.

I then noticed some cryptic information in the announcements: the new feature will work with “any application that supports the open OATH TOTP standard”.

Hmmm…

Sure ‘nuff!

I already use the Google Authenticator app on my Android phone so that my Google logins can use MFA. As it turns out, Google Authenticator also works seamlessly with AWS Virtual MFA.

  • Google Authenticator shows the codes as soon as it is run with a little timer showing me when they will change.

  • Google Authenticator lets me easily edit the displayed name so that I know at a glance which code is for my personal AWS account and which one is for my company AWS account.

This also means that I only have to run one app to get access to my devices for Google accounts and for AWS accounts. Amazon may improve their Android app over time, but by using open standards users can pick whatever works best for them at the time.
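
To see just how open the standard is, any OATH TOTP tool can generate the same six-digit codes from the shared secret. For example, with oathtool from the oath-toolkit package (the base32 secret below is a made-up placeholder):

$ oathtool --totp -b JBSWY3DPEHPK3PXP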

I love the fact that Amazon now supports Virtual MFA. I’ve already thrown away my hardware token and my pocket feels less full.

I love the fact that Amazon implemented this as a service based on existing standards so that I can use Google’s Android app to access my account.

I love open standards.

Update: I just found this great starting page which even links to Google Authenticator as a client for Android and iPhone:

http://aws.amazon.com/mfa/

Original article: http://alestic.com/2011/11/aws-virtual-mfa

eBook syncing with Ubuntu One

// November 3rd, 2011 // 2 Comments » // Uncategorized

Are you both an Ubuntu user and a bibliophile?  Want to keep your ebooks, including bookmarks and reading position, synced across all your connected devices?  If so, join us for this UDS session Thursday, Nov 3rd, where we'll be talking about how to add that functionality to Ubuntu One.

http://summit.ubuntu.com/uds-p/meeting/19820/other-p-u1-book-sync/

Ubuntu & HP’s project Moonshot

// November 2nd, 2011 // 4 Comments » // Uncategorized


Today HP announced Project Moonshot, a programme to accelerate the use of low-power processors in the data centre.

The three elements of the announcement are the launch of Redstone, a development platform that harnesses low-power processors (both ARM and x86); the opening of the HP Discovery lab in Houston; and the Pathfinder partnership programme.

Canonical is delighted to be involved in all three elements of HP’s Moonshot programme to reduce both power and complexity in data centres.

The HP Redstone platform unveiled in Palo Alto showcases HP's thinking around highly federated environments and Calxeda's EnergyCore ARM processors. The Calxeda system-on-chip (SoC) design is powered by Calxeda's own ARM-based processor and combines mobile-phone-like power consumption with the attributes required to run a tangible proportion of hyperscale data centre workloads.

HP Redstone Platform

The promise of server-grade SoCs running at less than 5W and achieving per-rack densities of 2800+ nodes is impressive, but what about the software stacks that are used to run the web and analyse big data? When will they be ready for this new architecture?

Ubuntu Server is increasingly the operating system of choice for web, big data and cloud infrastructure workloads. Films like Avatar are rendered on Ubuntu, Hadoop is run on it and companies like Rackspace and HP are using Ubuntu Server as the foundation of their public cloud offerings.

The good news is that Canonical has been working with ARM and Calxeda for several years now, and we released the first version of Ubuntu Server ported to ARM Cortex-A9 class processors last month.

The Ubuntu 11.10 release (download) is a functioning port, and over the next six months we will be working hard to benchmark and optimize Ubuntu Server and the workloads that our users prioritize on ARM.  This work, by us and by upstream open source projects, is going to be accelerated by today's announcement and by access to hardware in the HP Discovery lab.

As HP stated today, this is the beginning of a journey toward re-inventing a power-efficient and less complex data centre. We look forward to working with HP and Calxeda on that journey.