Posts Tagged ‘cloud computing’

Amazon brings single sign-on to AWS management

// January 20th, 2012 // 1 Comment » // Uncategorized

Amazon has made it easier for authorized business users to manage their Amazon Web Services infrastructure after signing on — once — to their corporate network.

This is the latest in a steady drip, drip, drip of functionality that Amazon adds to its services over time. Just this week, for example, Amazon added free Windows “micro” instances to its EC2 Elastic Compute Cloud service on Sunday, and three days later added the DynamoDB NoSQL database to its roster.

In this case, the aim is to make it easier for authorized users to maintain and tweak their Amazon-based services. Once the user is identified and authenticated by whoever manages the AWS account, he or she can sign onto the corporate network using existing credentials, then navigate to the AWS Management Console without re-entering a password, according to an AWS blog posted late Thursday. Previously, users had to sign into the AWS Management Console separately.

When that user requests entry into the management console, the identity broker “validates that user’s access rights and provides temporary security credentials which includes the user’s permissions to access AWS. The page includes these temporary security credentials as part of the sign-in request to AWS,” according to the blog.

This all requires up-front work. The person in charge of a company’s AWS account must set up the user’s identity and federate it to the appropriate services. When the user signs into the corporate network, the identity broker pings Amazon’s Security Token Service (STS) to request temporary security credentials. Until now, those credentials gave specified users access to Amazon services for a set period of time (up to 36 hours). Now those same credentials will be good for the AWS Management Console as well.
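
To make the moving parts concrete, here is a rough sketch of that broker flow in Python. It uses boto3 and requests, which postdate this post, and the user name, policy and issuer URL are illustrative assumptions rather than Amazon’s own sample code.

```python
# Sketch of the identity-broker flow described above: trade corporate-side
# identity for temporary STS credentials, then for a console sign-in URL.
# boto3/requests postdate this post; names, policy and issuer are assumptions.
import json
import urllib.parse

import boto3
import requests

FEDERATION_ENDPOINT = "https://signin.aws.amazon.com/federation"

def console_signin_url(user_name, policy, duration_seconds=3600):
    # 1. The broker (holding long-lived IAM credentials) asks STS for
    #    temporary credentials scoped to this user's permissions.
    creds = boto3.client("sts").get_federation_token(
        Name=user_name,
        Policy=json.dumps(policy),
        DurationSeconds=duration_seconds,  # STS allows up to 129600s (36 hours)
    )["Credentials"]

    # 2. Exchange the temporary credentials for a short-lived sign-in token.
    session = json.dumps({
        "sessionId": creds["AccessKeyId"],
        "sessionKey": creds["SecretAccessKey"],
        "sessionToken": creds["SessionToken"],
    })
    signin_token = requests.get(
        FEDERATION_ENDPOINT,
        params={"Action": "getSigninToken", "Session": session},
    ).json()["SigninToken"]

    # 3. Build the URL the broker redirects the browser to -- the user lands
    #    in the AWS Management Console without typing an AWS password.
    return FEDERATION_ENDPOINT + "?" + urllib.parse.urlencode({
        "Action": "login",
        "Issuer": "https://intranet.example.com",        # assumed corporate portal
        "Destination": "https://console.aws.amazon.com/",
        "SigninToken": signin_token,
    })

if __name__ == "__main__":
    ec2_read_only = {"Statement": [
        {"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"},
    ]}
    print(console_signin_url("alice", ec2_read_only))
```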

The bulk of Amazon’s services — including Amazon EC2, Amazon S3, VPC and ElastiCache — support that identity federation to the management console. The company is working to add the new Amazon DynamoDB NoSQL database service to that list, said Amazon Web Services Evangelist Jeff Barr in the post.

As Microsoft beefs up its Azure cloud offering with expected Infrastructure-as-a-Service capabilities, and more OpenStack-based IaaS offerings come online, the competition to provide cloud services will only heat up.

Feature photo courtesy of Flickr user Will Merydith


Developers will flock to public cloud in 2012

// January 12th, 2012 // 3 Comments » // Uncategorized

The public cloud is looking pretty good as a development platform this year and gaining cloud development skills is a top priority for many developers, according to new research.

Of the 3,335 developers surveyed by Zend Technologies about what work they expect to do this year, 61 percent said they expect to use a public cloud for their projects. And, of those going that route, 30 percent named Amazon Web Services as their public cloud of choice; 28 percent did not yet know which cloud they would use; 10 percent named Rackspace; and 6 percent cited Microsoft Azure. “Other” public clouds came in at 5 percent and IBM Smart Cloud at 3 percent.

These numbers come courtesy of the Zend Developer Pulse. (Zend, a provider of PHP tools, is the same company that broke the news that a surprising number of PHP developers are also Metallica fans.)

In terms of overall types of projects, a whopping 66 percent of respondents said they will be doing mobile development this year – hardly surprising given the glut of smartphones, tablets and app stores flooding the market.

Forty-one percent said they expect to work on cloud-based development and 40 percent said they see big data work in their immediate future. Those cloud and big data numbers seem pretty low given the level of interest around both topics, but then again “mobile development” is a broad term that could be interpreted to include cloud work.

Nearly half of those surveyed (48 percent) said they will work on APIs and 45 percent said they will work on social media integration this year. Zend surveyed the developers in November 2011 and plans to make the Zend Developer Pulse an annual event.

Photo courtesy of Flickr user skuds


Cloud Complexity? It’s a Wrench.

// January 10th, 2012 // 1 Comment » // Uncategorized

A new year, an old topic. Complexity. “CLOUD IS COMPLEX” screamed the headlines of two recent blog posts from my Clouderati alumni, James Urquhart and Sam Johnston. Really? Who would have thought? There really is nothing that gets past these two guys, is there? Joking aside, their respective copy brings a sharp focus to a topic that has, in my opinion, two very different sets of meanings and two very different sets of challenges, depending upon which side of the proverbial cloud fence one is resting one’s posterior upon.

If, dear reader, you are a vocal proponent of public cloud, renowned and famed for, upon occasion, theatrically waving your arms around while openly condemning the very notion of private cloud as evil and reprehensible, then I would metaphorically place you in the “don’t know, don’t care” bucket when it comes to understanding how (to quote Sam Johnston) “the delivery of information technology as a service rather than a product” is brought to your browser and your credit card, respectively. Hmm, the classic “power grid” analogy – just plug it in and it works. Nothing wrong with that. Not at all.

If, however, like this author, you are somewhat willing to entertain the concept of private cloud, either through experience or hope, or even as a much needed and logical evolution of the galling monotony and crippling legacy of today’s large enterprise IT environments, then I might suggest that the complexity thrust upon you via a veritable pot-pourri of technologies, services, operating models and organizational challenges will place you, either wittingly or unwittingly, somewhere between Levels 1 and 2 of the Conscious Competence Ladder.

I’ve never been a particularly big fan of the “cloud / utility computing is the same as the move away from building your own power station to the public grid” analogy. It’s fine for an incredibly basic mental picture of the difference between having a substation located at the bottom of your garden versus a medium voltage cable and a meter connected to the local provider, but as far as the depth of the analogy’s relevance to the practical application of any cloud strategy goes, it would be easier to say “someone else provides the capacity”. Job done. Same result. Not much use at all.

The major flaw in the analogy is that in today’s rapidly changing enterprise IT world, where virtualization has only just begun to take hold (arguably) but is widely accepted as being a cornerstone of any cloud, it sadly isn’t as simple as just sticking a fat pipe into a service provider and letting someone else provide the capacity. If it was, then everyone would do it. Imagine if every organization since time immemorial had asked the electricity provider to take over running the rest of the machinery, plant or robotic equipment that its invisible juice powered. Well, to me, that’s the crux of the application of cloud infrastructure. Hardly apples to apples.

There is complexity in public cloud and there is complexity in private cloud – it’s simply a case of who owns and manages the complexity and how much appetite you have for running your services on each. But equally, as service providers do a better and better job of managing “simplexity”, most enterprises continue to wrestle with their strategies, egged on by a myriad of vendors who now have the word “cloud” in every piece of marketing literature. It’s not a one-size-fits-all model; there is no either/or.
In my incredibly humble opinion, it is increasingly arguable that the case for private clouds is stronger than ever, yet, as the struggle to keep up to date with technology trends and models gains momentum, I don’t see any sign of the landscape becoming simpler to design, implement or operate. In fact, I think many enterprises, in their best efforts to implement all that they are told they can’t do without, are heading for more complexity than they ever dreamed possible – creating an environment so complex that even Rube Goldberg might raise a mechanical eyebrow.

Physical servers, virtual servers, physical switches, virtual switches, physical interfaces, virtual interfaces, physical storage, virtual storage, physical load balancers, virtual load balancers, physical firewalls, virtual firewalls, physical networks (?), virtual networks, physical interfaces, virtual interfaces, physical IP addresses (?), virtual IP addresses, physical data centers, virtual data centers, physical people, virtual people, Mechanical Turks. Mechanical what? Mechanical Turks. I thought that’s what you said.

And so, it goes on and on, something like this:

“IT? Where’s my server? Oh where did it go?” (We are all losing money and patience is low)
“I want it to work, you must call the Turk.” (We should call the Turk, he’ll get it to work)
“We have something to tell you, the Turk isn’t real.” (The Turk isn’t real? That’s quite a big deal.)
“The server is somewhere, it just can’t have gone.” (We just need to find out which storage it’s on)
“The CMDB! Now this one’s in the bag.” (But the CMDB has just waved a white flag)
“It can’t be the DevOps, it can’t be those guys!” (The DevOps are admins? That’s quite a surprise)
“So it must be the network, it’s eaten my app.” (As the Net guys will tell you, that’s monstrous crap)
“We’ve found it, don’t worry, we’ll just bring it back.” (Now several young admins are facing the sack)
“That downtime has killed us, we’ve lost fifty grand.” (..as the CEO enters still waving his hand)
“It’s much more efficient!” IT screams out loud. (But the moral is simply “shit happens in cloud”)

Today, there isn’t a CMDB tool on earth (yet) that can realistically and efficiently keep pace with the inherent fluidity, agility and flexibility of even the most well-intended cloud deployments, and the ditty above is a not-so-tongue-in-cheek example of what happens when complexity is mixed with a lack of clear visibility.

Interestingly, this problem isn’t unique to technology, nor to cloud. In my everyday life, I come across an E&C (Engineering & Construction) industry-wide problem that relates to a concept called “wrench time”. Typically, wrench time is a measure of crafts personnel at work, using tools, in front of jobs. Wrench time does not include obtaining parts, tools or instructions, or the travel associated with those tasks. In some cases, wrench time can be as low as 20% of a total working week, meaning roughly 8 hours spent fixing problems, with the remaining 32 hours spent on non-value-added tasks such as finding and qualifying maintenance record information.

The parallels are obvious. Difficulty in finding and qualifying information, in and amongst these complex systems – clouds or power stations – leads to inefficient maintenance, poor RTO times and, eventually, to revenue or reputation loss. Spanner in the works, anyone?
 

(Cross-posted @ The Loose Couple's Blog)

Automated deployment of Ubuntu with Orchestra

// October 27th, 2011 // 1 Comment » // Uncategorized

 

Orchestra is one of the most exciting new capabilities in 11.10. It provides automated installation of Ubuntu across sets of machines. Typically, it’s used by people bringing up a cluster or farm of servers, but the way it’s designed makes it very easy to bring up rich services, where there may be a variety of different kinds of nodes that all need to be installed together.

There’s a long history of tools that have been popular at one time or another for automated installation. FAI is the one I knew best before Orchestra came along and I was interested in the rationale for a new tool, and the ways in which it would enhance the experience of people building clusters, clouds and other services at scale. Dustin provided some of that in his introduction to Orchestra, but the short answer is that Orchestra is savvy to the service orchestration model of Juju, which means that the intelligence distilled in Juju charms can easily be harnessed in any deployment that uses Orchestra on bare metal.

What’s particularly cool about THAT is that it unifies the new world of private cloud with the old approach of Linux deployment in a cluster. So, for example, Orchestra can be used to deploy Hadoop across 3,000 servers on bare metal, and that same Juju charm can also deploy Hadoop on AWS or an OpenStack cloud. And soon it should be possible to deploy Hadoop across n physical machines with automatic bursting to your private or favourite public cloud, all automatically built in. Brilliant. Kudos to the conductor :-)

Private cloud is very exciting – and with Ubuntu 11.10 it’s really easy to set up a small cloud to kick the tires, then scale that up as needed for production. But there are still lots of reasons why you might want to deploy a service onto bare metal, and Orchestra is a neat way to do that while at the same time preparing for a cloud-oriented future, because the work done to codify policies or practices in the physical environment should be useful immediately in the cloud, too.

For 12.04 LTS, where supporting larger-scale deployments will be a key goal, Orchestra becomes a tool that every Ubuntu administrator will find useful. I bet it will be the focus of a lot of discussion at UDS next week, and a lot of work in this cycle.

Ubuntu Cloud Live 11.10 is Available.

// October 14th, 2011 // 12 Comments » // Sticky Posts

Ubuntu Cloud Live

The much talked about Ubuntu Cloud Live 11.10 image given away at the OpenStack Essex Conference is now available for download at:

http://cdimage.ubuntu.com/ubuntu-cloud-live/releases/11.10/ubuntu-11.10-cloud-live-amd64.img

The image uses OpenStack Diablo, requires an x86_64 compatible desktop/laptop machine, and is approximately 560MB in size. We recommend flashing to a 4GB USB drive (or larger) to allow for proper setup and use of the cloud. Use the ‘dd’ command to copy the image over to your USB drive. For example, if your USB drive is connected to /dev/sdb, make sure the drive isn’t mounted, and then run `dd if=ubuntu-11.10-cloud-live-amd64.img of=/dev/sdb`. WARNING: THIS COMMAND WILL ERASE ALL DATA PREVIOUSLY STORED ON THE TARGET DEVICE. MAKE SURE YOU HAVE THE CORRECT DEVICE WHEN FLASHING.
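
If you would rather script that copy than type the dd line by hand, a minimal Python equivalent might look like the sketch below; /dev/sdb is only an example device name, and the same warning applies because the target device is overwritten.

```python
# A minimal Python stand-in for the `dd` step above (a sketch, not the
# recommended method). /dev/sdb is an example -- the target device is
# completely overwritten, so double-check the device name and run as root.
import shutil

IMAGE = "ubuntu-11.10-cloud-live-amd64.img"
DEVICE = "/dev/sdb"  # example: your unmounted USB drive

with open(IMAGE, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # copy in 4 MB chunks
```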

Once flashed, simply boot your laptop/desktop from the USB drive and follow the instructions displayed on the desktop.

Another week of OpenStack stabilisation

// February 21st, 2011 // Comments Off // Uncategorized

A week into OpenStack’s third release cycle…

// February 12th, 2011 // Comments Off // Uncategorized

It only took me 20 years…

// January 14th, 2011 // Comments Off // Uncategorized

tl;dr: I now have daily backups of my laptop, powered by Rackspace Cloud Files (powered by OpenStack), Deja-Dup, and Duplicity.

I’ve been using computers for a long time. If memory serves, I got my first PC when I was 9, so that’s 20 years ago now. At various times, I’ve set up some sort of backup system, but I always ended up

  • annoyed that I couldn’t actually *use* the biggest drive I had, because it was reserved for backups,
  • annoyed because I had to go and connect the drive and do something active to get backups running, because having the disk always plugged into my system might mean the backup got toasted along with my active data when disaster struck,
  • and annoyed at a bunch of other things.

Cloud storage solves the hardest part of this. With Rackspace Cloud Files, I have access to an infinite[1] amount of storage. I can just keep pushing data, Rackspace keeps it safe, and I pay for exactly how much space I’m using. Awesome.

All I need is something that can actually make backups for me and upload them to Cloud Files. I’ve known about Duplicity for a long time, and I also knew that it’s been able to talk to Cloud Files for a while, but I never got into the habit of running it at regular intervals. Running it from cron was annoying, because maybe I didn’t have my laptop on when it wanted to run, and if I wasn’t logged in, my homedir would be encrypted anyway, etc. etc. Lots of chances for failure.

Enter Deja-Dup! Deja-Dup is a project spearheaded by my awesome former colleague at Canonical, Mike Terry. It uses Duplicity on the backend and gives me a nice, really simple frontend to get it set up. It has its own timing mechanism that runs in my GNOME desktop session. This means it only runs when my laptop is on and I’m logged in. Every once in a while, it checks how long it’s been since my last backup. If it’s more than a day, an icon pops up in the notification area that offers to run a backup. I’ve only been using this for a day, so it’s only asked me once. I’m not sure if it starts on its own if I give it long enough.
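
For the curious, the sketch below shows roughly what a Duplicity run against Cloud Files looks like when driven from a script, which is more or less what Deja-Dup arranges for me. The cf+http:// backend URL and the CLOUDFILES_* environment variables follow the duplicity documentation of this era, and the credentials, passphrase and container name are placeholders.

```python
# Roughly the Duplicity invocation Deja-Dup drives for me, as a script.
# The cf+http:// backend and CLOUDFILES_* variables follow the duplicity
# docs of this era; credentials, passphrase and container are placeholders.
import os
import subprocess

env = dict(
    os.environ,
    CLOUDFILES_USERNAME="my-rackspace-user",   # placeholder
    CLOUDFILES_APIKEY="my-api-key",            # placeholder
    PASSPHRASE="my-gpg-passphrase",            # duplicity encrypts by default
)

subprocess.check_call(
    [
        "duplicity",
        "--full-if-older-than", "30D",          # take a fresh full backup monthly
        os.path.expanduser("~"),                # what to back up
        "cf+http://laptop-backups",             # Cloud Files container (placeholder)
    ],
    env=env,
)
```

Deja-Dup’s real value is the scheduling and the credential storage it handles for you; the script is only to show what happens underneath.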

A couple of caveats:

  • Deja-Dup needs a very fresh version of libnotify, which means you need to either be running Ubuntu Natty, use backported libraries, or patch Deja-Dup to work with the version of libnotify in Maverick. I opted for the last of these.
  • I have a lot of data. Around 100GB worth. Some of it is VMs, some of it is code, some of it is various media files. Duplicity doesn’t support resuming a backup if it breaks halfway, and I “only” have 8 Mbit/s upstream bandwidth. That meant I had to stay connected to the Internet for 28 hours straight (in a perfect world) and not have anything unexpected happen along the way. I wasn’t really interested in that, so I made my initial backup to an external drive and I’m now copying the contents of that to Rackspace at my own pace. I can stop and resume at will. The tricky part here was to get Deja-Dup to understand that the backup it thinks is on an external drive really is on Cloud Files. I’ll save that for a separate post.

[1]: Maybe not actually infinite, but infinite enough.

Introducing Telematica’s New Website, and My New Blog

// December 6th, 2010 // 2 Comments » // Uncategorized

Over the course of the next week, I will be moving the majority of my professional blogging to the new website for Telematica. As many of you know, I've re-established the consulting practice of Telematica, and over the past year have focused my energies on cloud computing, its infrastructure and platforms, as well as the requirements for an 'intercloud' --...

Dell’s hyper scale cloud efforts — Everything you wanted to know in 3 minutes

// October 15th, 2010 // Comments Off // Uncategorized

Last week a couple of us went down to San Antonio to help represent the OpenStack project at Rackspace’s partner summit. While there I met up with the VAR Guy. Mr. Guy got me chatting about Dell’s Data Center Solutions group, where we’ve been and where we’re going. Below is the resulting video he put together featuring me and San Antonio’s greenery. (See the original article this came from).

Some of the topics I tackle:

  • How Dell’s Data Center Solutions Group is designing servers for high-end cloud computing
  • How Dell is integrating hardware with software in cloud servers
  • Coming soon: Dell Cloud Solution for Web Applications, leveraging Joyent’s software
  • Dell’s cloud partner program – where Ubuntu Enterprise Cloud, Aster Data and Greenplum fit in.
  • Dell’s commitment to OpenStack

Extra-credit reading:

Pau for now…