Posts Tagged ‘cloud’
// August 8th, 2012 // Comments Off on Juju and Nagios, sittin’ in a tree.. (Part 1) // Uncategorized
Monitoring. Could it get any more nerdy than monitoring? Well, I think we can make monitoring cool again…
If you’re using Juju, Nagios is about to get a lot easier to fold into your environment. Anyone who has ever tried to automate their Nagios configuration knows that it can be daunting. Nagios is so flexible and has so many options that it’s hard to get right even by hand; automating it requires even more thought. Part of this is because monitoring itself is a bit hard to generalize. There are lots of types of monitors. Nagios really focuses on two of these:
- Service monitoring – Make a script that pretends to be a user and see if your synthetic monitor sees what you expect.
- Resource monitoring – Look at the counters and metrics afforded to a user of a normal system.
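Each of those check types eventually becomes a hand-written Nagios object definition. As a rough sketch (the host and command names here are illustrative, not from any real config):

```
define service {
    use                  generic-service
    host_name            wiki-app-0
    service_description  HTTP    ; service check, probed from outside
    check_command        check_http
}

define service {
    use                  generic-service
    host_name            wiki-app-0
    service_description  Disk    ; resource check, needs an agent on the host
    check_command        check_nrpe!check_disk
}
```

Multiply that by every host and every check, and the appeal of automating it becomes obvious.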
The trick is that service monitoring wants to interrogate the real services from outside the machine, while resource monitoring wants to see things only visible with privileged access. This is why we have NRPE, the “Nagios Remote Plugin Executor” (and NSCA, and munin, but ignore those for now). NRPE is a little daemon that runs on a server and, when asked by Nagios, runs a Nagios plugin script and returns the result. With this you get those privileged things like how much RAM and disk space is used.

Normally when you want to use Nagios, you need to sit down and figure out how to tell it to monitor all of your stuff. This involves creating generic objects, figuring out how to get your list of hosts into Nagios’s config files, and how to get the classifications for those hosts into Nagios. Does anybody trying to make sure their pager goes off when things are broken actually want to learn Nagios?

So, here’s how to get Nagios into your Juju environment. First, let’s assume you have deployed a stack of applications:
juju deploy mysql wikidb # single MySQL db server
juju deploy haproxy wikibalancer # and single haproxy load balancer
juju deploy -n 5 mediawiki wiki-app # 5 app-server nodes to handle mediawiki
juju deploy memcached wiki-cache # memcached
juju add-relation wikidb:db wiki-app:db # use wikidb service as r/w db for app
juju add-relation wiki-app wikibalancer # load balance wiki-app behind haproxy
juju add-relation wiki-cache wiki-app # use wiki-cache service for wiki-app
This gives you a nice stack of services that is pretty common in most applications today: a DB and cache for persistent and ephemeral storage, and many app nodes to scale out the heavy lifting.
Now you have your app running, but what about when it breaks? How will you find out? Well, this is where Nagios comes in:
juju deploy nagios # custom nagios charm
juju add-relation nagios wikidb # monitor wikidb via nagios
juju add-relation nagios wiki-app # ""
juju add-relation nagios wikibalancer # ""
You should now have Nagios monitoring things. You can check it out by exposing the service and browsing to the hostname of the Nagios instance at ‘http://x.x.x.x/nagios3’. You can find the password for the ‘nagiosadmin’ user by catting a file that the charm leaves for this purpose:
juju ssh nagios/0 sudo cat /var/lib/juju/nagios.passwd
Now, the checks are very sparse at the moment. This is because we have used the generic monitoring interface, which can only monitor the basic things (SSH, ping, etc.). We can add some resource monitoring by deploying NRPE:
juju deploy nrpe # create a subordinate NRPE service
juju add-relation nrpe wikibalancer # Put NRPE on wikibalancer
juju add-relation nrpe wiki-app # Put NRPE on wiki-app
juju add-relation nrpe:monitors nagios:monitors # Tells Nagios to monitor all NRPEs
Now we will get memory stats, root filesystem, etc.
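To make the NRPE side concrete, here is a minimal sketch of what one of those plugin scripts looks like (the check name and thresholds are made up; real deployments use the stock scripts from the nagios-plugins package):

```shell
#!/bin/sh
# A minimal sketch of a Nagios-style resource check, like the ones NRPE runs.
# Nagios plugins speak a tiny protocol: one status line on stdout, plus an
# exit code of 0 (OK), 1 (WARNING) or 2 (CRITICAL). Thresholds are invented.
check_root_disk() {
  used=$(df -P / | awk 'NR==2 { gsub("%", ""); print $5 }')
  if [ "$used" -ge 95 ]; then
    echo "CRITICAL - root filesystem ${used}% full"; return 2
  elif [ "$used" -ge 85 ]; then
    echo "WARNING - root filesystem ${used}% full"; return 1
  else
    echo "OK - root filesystem ${used}% full"; return 0
  fi
}

check_root_disk
echo "exit code: $?"
```

NRPE then just maps a command name to a script like this in its nrpe.cfg, and Nagios asks for results by that name over the wire.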
You may have noticed we left off wikidb. That is because you will get an ambiguous relation error when you try this:
juju add-relation nrpe wikidb # Put NRPE on wikidb
ERROR Ambiguous relation 'nrpe mysql'; could refer to:
'nrpe:general-info mysql:juju-info' (juju-info client / juju-info server)
'nrpe:local-monitors mysql:local-monitors' (local-monitors client / local-monitors server)
This is because mysql has special support for specifying its own local monitors in addition to those in the usual basic group (more on this in part 2). To get around this we use:
juju add-relation nrpe:local-monitors wikidb:local-monitors
This is a perfect example of how Juju’s encapsulation around services pays off for re-usability. By wrapping a service like Nagios in a charm, we can start to really develop a set of best practices for using that service and collaborate around making it better for everyone.
Of course, Chef and Puppet users can get this done with existing Nagios modules; Puppet, in particular, has really great Nagios support. However, I want to take a step back and explain why I think Juju has a place alongside those methods and will accelerate systems engineering in new directions.
While there is some level of encapsulation in the methods that Chef and Puppet put forth, they’re not fully encapsulated in the way that they interact with other components in a Chef or Puppet system. In most cases, you still have to edit your own service configs to add specific Nagios integration. This works for the custom case, but it does not make it easy for users to collaborate on the way to deploy well known systems. It will also be hard to swap out components for new, better methods as they emerge. Every time you mention Nagios in your code, you are pushing Nagios deeper into your system engineering.
With the method I’ve outlined above, any charmed service can be monitored for basic stats (including the 80 or so charms in the official charm store). You might ask, though: what about custom Nagios plugins, or more elaborate but still somewhat generic service checks? That is all coming; I will show some examples in my next post. I will also go on to show how Nagios + NRPE can be replaced with collectd, or some other system, without changing the charms that have implemented rich monitoring support.
So, while this at least starts to bring the official Nagios charm up to par with configuration management’s rich Nagios ability, it also sets the stage for replacing Nagios with other things. The key difference here is that as you’ll see in the next few parts, none of the charms will have to mention “Nagios”. They’ll just describe what things to monitor, and Nagios, Collectd, or whatever other system you have in place will find a way to interpret that and monitor it.
// June 12th, 2012 // Comments Off on JUJU Everywhere! // Uncategorized
I’ve just published the first iteration of RPMs targeting Fedora, along with the .spec file used to build them. It’s available on GitHub at http://github.com/jujutools/rpm-juju (alongside the Mac port), so Fedora users, go forth and test, please! Feedback is very welcome, along with any patches or contributions.
The goal is to have these added to the official Fedora, CentOS, and SuSE repositories as they mature and the early kinks get worked out of the packages.
// May 24th, 2012 // 2 Comments » // Uncategorized
Since Calxeda demonstrated a real ARM server running Ubuntu with MAAS and Juju at the Ubuntu Developer Summit, interest in the technology has continued to build.
Today we made an Ubuntu ARM Server AMI available on Amazon’s EC2. This is a 12.04 armhf image running on an emulated Calxeda system. Thanks to Dann Frazier for doing a bunch of the heavy lifting; you can find information on the image here:
This AMI is primarily for developers wishing to experiment with Ubuntu ARM Server. Performance is limited due to the emulation overhead. Look for AMI ID `ami-aef328c7`.
Note: this AMI requires the use of an m1.large instance type due to memory requirements.
// May 22nd, 2012 // Comments Off on Juju, MAAS, and VirtualBox // Uncategorized
I’ve been meaning to use MAAS for quite some time. In fact, I’ve been excited about its release since I stumbled upon it a few weeks before its announcement in the package repo. I originally started by trying to install Xen on my Desktop as it’s what I’ll be using in production. That didn’t quite work out, so I took my chances with VirtualBox instead. I skimmed the Testing MAAS section of the documentation and felt confident enough that VirtualBox could handle something like MAAS. To start, I created a few MAAS machines in VirtualBox and attached the 12.04 ISO as the install medium. I started the first one to install a MAAS “master” server.
On the installation screen I selected the “Multiple server install with MAAS” option, selected “Create a new MAAS on this server”, and followed the defaults from there. Toward the end of the install I was given an address through which I could view the MAAS control panel, 10.0.2.5. Needless to say I was pretty excited. Of course, the address didn’t work and I quickly realized that I couldn’t actually access that network. Reviewing the Networking settings for the VM I made the following changes:
Attached to: Bridged Adapter
Promiscuous Mode: Allow All
I updated each VM to reflect these settings. After the update, I had to restart the VM and reconfigure MAAS to use the new address. This was done simply with:
sudo dpkg-reconfigure maas
Then I updated the IP to reflect the new address within my network. After doing so, 192.168.5.27 became my MAAS master, and http://192.168.5.27/MAAS loaded the control panel!
Taking notice of what the Dashboard says, I ran the following two commands:
sudo maas createsuperuser
sudo maas-import-isos
The first command prompted me for a username, password, and email. The second ran for several minutes downloading and creating various Precise images. Once that finished, my dashboard still showed warnings about import-isos, but more importantly I was able to log in and see that I had 0 nodes!
This gave me the confidence to push forward. I started the “maas1” VM to begin the install process. Like before, I selected “Multiple server install with MAAS”. The next screen provided the option to enlist with the maas-master MAAS server, so I happily selected it; the machine then suddenly SIGKILLed all its processes and powered off. The victory was in the dashboard, though, as it now reflected 1 node!
I continued doing this for each of the “maas” VMs until all were registered in the maas-master dashboard. Unfortunately, during installation one of the nodes lost my naming scheme (I was trying to use maas-node0, maas-node1, … for each MAAS node) and ended up named maas2, which threw off the naming for the rest of the nodes. That aside, all of the initial nodes I wanted to enlist did so without any issues.
Now it was time to get some Juju goodness pushing against these machines. The first thing I did was hunt down my MAAS Key. I stumbled through a few sections of the dashboard before landing in the account preferences. I also noticed a section for SSH keys, to which I added my public key (for good measure). I copied my MAAS Key and created the following stanza in my juju environments file (I couldn’t find documentation on the Juju site for MAAS setup, but I found this URL in a screenshot of MAAS testing tools, which led me to the answer).
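For reference, a MAAS stanza in pyjuju’s ~/.juju/environments.yaml looked roughly like this (a sketch from memory of the docs of the era; the server address, key, and secret are placeholders you would substitute with your own):

```yaml
environments:
  maas:
    type: maas
    maas-server: 'http://192.168.5.27:80/MAAS'   # note the explicit port
    maas-oauth: '<MAAS Key from the account preferences page>'
    admin-secret: 'any-made-up-string'
    default-series: precise
```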
When I first tried to bootstrap the MAAS setup, I received several errors: the port needs to be specified for Juju to connect to the provider. When I attempted to bootstrap again, I received a whole mouthful of 409 CONFLICT errors. This is when I realized you need to Accept each machine in order for it to be provisioned. I stepped back and started reading the documentation, as my nodes wouldn’t commission properly (or at all). It was pretty clear that I didn’t have DNS set up properly; I recommend reading through the documentation to get a grasp on what you’ll need to do for your network. Once I installed and configured the maas-dhcp package, the ISOs needed to be regenerated to pick up the updated information; running `sudo maas-import-isos` remedied this for me. After all that, I needed to update each VirtualBox VM to include Network in the boot sequence. To do so, open each VM’s settings, go to System, and make sure Network boot is checked and at the top of the list.
After doing that, boot each VM and the PXE DHCP should find your MAAS Master and set up the VM properly. Each machine will turn off after successful setup and MAAS Dashboard will update. The end result is quite glorious:
Now it’s really time to get Juju working with these lovely MAAS machines! After several false starts, I created a new account in the MAAS Dashboard with the same username as my local user and updated my Juju environments file to use that MAAS Key. After completing that, I issued a bootstrap:
juju bootstrap
and checked the dashboard after the command completed.
The dashboard now shows one of the nodes allocated to Juju for bootstrapping. I had to manually start each VM, as for some reason they don’t respond to Wake-on-LAN. Still, my goal of using Juju to deploy to MAAS was fulfilled. There is definitely room for improvement in the experience, but I have high hopes for when we start throwing real bare metal at MAAS.
// April 26th, 2012 // 7 Comments » // Uncategorized
It’s time for another Juju Charm Contest, where you can submit your charms and win fabulous prizes! This contest is for Ubuntu Developer Summit attendees, and our prizes are three sexy Dell XPS 13 ultrabooks, which we’ll be awarding to the three lucky winners.
So how can you win one of these? Well, with 66 services already ready to deploy on the cloud, we’re always looking for more; have a look at what you think is missing from the Juju Charm Store and submit your charm as an entry.
We’ve got step-by-step instructions on how to write your own charm, and we’re looking for things DevOps teams deploy to the cloud, so be creative! You have from now until May 9 to submit your charm. At that point we’ll judge the entries and give out the Dell XPS 13s during the last day of the Ubuntu Developer Summit. So if you’re missing your favorite service from the Charm Store, submit an entry and you’ll automatically be entered in the contest.
The full contest rules, including the judging criteria, are here; you’ll want to read them before you get started. Happy Charming!
// April 24th, 2012 // Comments Off on juju client now available for Mac OSX // Uncategorized
Brandon Holtsclaw has published a Mac port of juju. This will let Mac users deploy to their Ubuntu servers from the comfort of their home operating system. Brandon adds:
Pull requests and filed issues are more than welcome from anyone.
Need help getting started with juju? Check out our Getting Started documentation and then browse through the Charm Store to see what you can start deploying today!
// April 17th, 2012 // Comments Off on Juju constraints unbinds your machines // Uncategorized
This week, William “I code more than you will ever be able to” Reade announced that Juju has a new feature called ‘Constraints’.
This is really, really cool and brings juju into a new area of capability for deploying big and little sites.
To be clear, this lets you describe the resources a service needs rather than hard-coding a particular machine or instance type:
juju deploy mysql --constraints mem=10G
juju deploy statusnet --constraints cpu=1
This will result in your mysql service landing on an extra-large instance, since that has 15GB of RAM. Your statusnet instances will be m1.smalls, since those have just 1 ECU.
Even cooler than that: if you now want a mysql slave in a different availability zone:
juju deploy mysql --constraints ec2-zone=a mysql-a
juju deploy mysql --constraints ec2-zone=b mysql-b
juju add-relation mysql-a:master mysql-b:slave
juju add-relation statusnet mysql-a
Now, if mysql-a goes down:
juju remove-relation statusnet mysql-a
juju add-relation statusnet mysql-b
Much more is possible, but this really does make juju even more compelling as a tool for simple, easy deployment. Edit: fixed ec2-zone to be the single character, per William’s feedback.
// April 10th, 2012 // Comments Off on Announcing the Ubuntu Cloud Summit, 8 May, Oakland, California // Uncategorized
Canonical, in collaboration with RedMonk, will be hosting “The Ubuntu Cloud Summit”, a one-day event for both technology and business attendees interested in how open-source cloud computing can help their organisations.
The event takes place on Tuesday 8th May at the Oakland Marriott City Center Hotel, and runs in conjunction with UDS.
The agenda is still being defined, but the sessions will cover some interesting ideas, challenges and trends around cloud computing and how attendees can deploy an open cloud in their organisation.
Topics will include:
- The Open Cloud – The role of open source in cloud computing—particularly how an open cloud enables a more flexible, vendor-neutral approach.
- Lessons from cloud deployments – Open cloud deployments are real and growing. We’ll discuss and illustrate through case studies the best approaches to deploying and maximising an open cloud.
- Open-source cloud technologies – With Ubuntu including technologies such as OpenStack, MAAS and Juju, we’ll examine how they come together to form an open cloud.
For more information, visit: http://uds.ubuntu.com/cloud-summit/
The cost of a ticket for this event is $100, which includes lunch and refreshments.
// April 10th, 2012 // 1 Comment » // Uncategorized
The community submitted over 10 charms as part of the juju charm contest. The judges have deliberated and have picked the following winners:
The Grand Prize ($300 Amazon Gift Card) goes to Jimmi Andersen for his charm that deploys Appflower, a Rapid Application Development (RAD) tool for building web applications. You can check out the charm in the store for deployment instructions. The judges were impressed by how complete the charm is and how it brings software to Ubuntu that was previously only available by installing it by hand.
The 2 runners up (in no particular order) are Kees Cook for sbuild, and Ben Kerensa for Subway. sbuild provides a build environment for developers to test packages against and has been used for portable “hackathons” where having the packages build on the cloud is quicker and more convenient than building on your local machines. The Subway charm deploys the Subway IRC client, a sexy web based IRC client that uses Node.js and MongoDB. Thanks Ben and Kees, you’ll each receive a $100 Amazon gift card.
The charm store continues to grow, as we now have over 73 total charms. The following people contributed charms to the contest and will each receive a Juju t-shirt and Ubuntu travel mug: Patrick Hetu (znc and OpenERP), Atul Jha (OwnCloud), Nathan Osman (StackMobile), shazzner (Gitolite), and Brandon Holtsclaw (Drupal). Honorable mention goes to Ryan Kather, who attempted Moodle but was not able to complete it in time. Maybe next time! We’ll go deeper into these charms and show off their examples throughout the coming weeks.
Not finding what you need in the Charm Store? Well you can always contribute your own charms, here’s how you can get started.
// April 5th, 2012 // 4 Comments » // Uncategorized
UK cloud provider Brightbox would like to announce that they now have daily images of Ubuntu 12.04 available for testing. Brightbox has an EC2-compatible metadata service that works with Ubuntu’s cloud-init; you can find out more about that in the documentation.
As a thank-you to the Ubuntu community, Brightbox is running a special through to September: a 10% discount for casual testers, and a 50% discount for anyone registering using their @ubuntu.com address.
Testers can sign up at http://brightbox.com/, and to claim the discount you just need to email email@example.com with your account ID and let them know how you’re testing out Ubuntu.
Here’s their Getting Started guide. Happy testing!