Tag Archives: apt

A Temperamental Chef, or Something More APT for Configuration Management

If this makes sense to you, HELP, otherwise, nothing to see here, move along.

I’m trying to simplify the task of configuring and maintaining Linux servers at work, and I want to build on some existing configuration management system to do so. We are using the Ubuntu Linux distribution, and I was thinking of just building on the APT package management tools they’ve borrowed (among other things) from Debian, but I decided to look for something distro-agnostic.

I’ve spent a lot of time and frustration over the last day trying to get the server component of Chef working. Chef is a new configuration management system written in Ruby. I spent time scanning their bug tracker and asking for help in their IRC channel, to no avail. It still doesn’t work, and I have no more idea why than when I started.

I’m really doubting my decision:

  • Chef has only been packaged for the bleeding edge version of Ubuntu. Um yeah, great, I really want to use beta software on my SERVERS.
  • The installation documentation advises that I download and install Ruby Gems from a tarball because the version in the Ubuntu repositories isn’t to their liking. Great, I have to install extra shit by hand before I can use the software I want to use so I don’t have to install shit by hand. That’s efficient, right?
  • Chef relies on OpenID for authentication. Sweet! I can use my MySpace account to manage my servers! Well, I could, if only I could figure out the appropriate URL for the MySpace authentication endpoint (and I was batshit insane). As for how I integrate OpenID authentication with anything else I’m using, I’m sure it will be easy and obvious what to do, in a year or two.
  • Oh yeah, I forgot the most important thing:  It doesn’t work.  At least it doesn’t work for me.  I’ve installed all the prerequisites, I’ve run their “installer,” and I can even get to the login page of “chef-server” but when I actually try to log in, it falls down and goes BOOM.  I get a generic error page warning me about a socket error.  I tried to diagnose it myself to no avail, there wasn’t anything in the log files because…
  • Chef server truncates its log files willy nilly.  It actually writes a fair amount of info to its log file, but you’d never know by looking at it after the fact, because after every request, it ends up as a zero-length file.  Useful, huh?  The trick is to ‘tail -F’ the file before restarting chef-server.  This prints the output as it is written to the file, and reopens the file each time it gets truncated, which happens multiple times during the request.
    • For what it is worth, I figured out what was wrong here: for some bizarre reason, the hosts file on the machine was only readable by root, which caused lookups for localhost to fail when chef-server tried to connect to the CouchDB server.
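For anyone hitting the same wall, here’s roughly what the broken state looked like and how to check for it — demonstrated on a scratch copy rather than the live file, and /tmp/hosts.demo is just an illustrative path:

```shell
# Work on a throwaway copy of the hosts file so nothing breaks for real.
cp /etc/hosts /tmp/hosts.demo

# This is the broken state I found: readable by the owner (root) only,
# so any non-root process doing a localhost lookup gets nothing.
chmod 600 /tmp/hosts.demo
stat -c '%a' /tmp/hosts.demo    # prints 600

# The fix is just restoring the stock world-readable permissions.
chmod 644 /tmp/hosts.demo
stat -c '%a' /tmp/hosts.demo    # prints 644
```

On the real machine the same `chmod 644` (as root, on /etc/hosts itself) was all it took.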

Now, to be fair, the Chef site makes it clear in a nice green sidebar that Chef is young and a work in progress.   I knew that when I started with it.  I didn’t expect it to be production ready, but I thought it was far enough along to start working with.  Clearly, I’m reconsidering that.

I’m also reconsidering the assumption that sent me to Chef in the first place: that it was desirable, at this point, not to take a dependency on a specific Linux distribution by building off of APT, the package distribution and management system at the heart of Debian and Ubuntu.  The truth is, APT is awesome.  One of the reasons given for creating Chef was that Puppet, an earlier Ruby-based configuration management system, choked on dependency management.  I haven’t seen that complaint about APT, not lately; in fact, dependency handling is one of the things people love most about Debian and Ubuntu.  They love it so much that they say things like “I want apt to bear my children,” or words to that effect.

So, my thought is that I create my own apt repository.  I’ll create derivatives of the Ubuntu packages I need custom versions of, and I’ll create configuration packages derived from their configuration packages whenever possible.  Machine- and role-specific packages can be used to manage rollouts, and/or I can use different repository tiers for different classes of servers, in much the same way that Debian and Ubuntu have different tiers for stable, testing, unstable, etc.  I’m sure I’ll run into headaches along the way, but at least they will be headaches that other people have suffered, and I can learn from their experience.
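A rough sketch of what those tiers might look like on the client side — the repository host and suite names here are entirely made up, and I’m writing to /tmp rather than /etc/apt/sources.list.d for illustration:

```shell
# Each class of server gets a sources entry pointing at the tier it
# should track, mirroring Debian's own stable/testing suite naming.
cat > /tmp/my-company.list <<'EOF'
# Production boxes track the stable tier only:
deb http://apt.internal.example.com/ubuntu stable main
# Staging boxes would uncomment the testing tier as well:
# deb http://apt.internal.example.com/ubuntu testing main
EOF
```

Promoting a package from testing to stable then becomes a repository-side operation, with no changes needed on the servers themselves.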

How and Why to Create Ubuntu Metapackages

Yesterday I wrote about my plans to make deploying new Ubuntu servers both easier and more repeatable by relying on apt metapackages.

I’ve been working on the implementation, and it’s going pretty well so far.  This documents the basic steps:

First, I installed the equivs package:

aptitude install equivs

At this point, I’m not including any config files or post installation tasks, so I only have to worry about the control file.

equivs-control my-metapackage

This generates a control file template in the current directory that you edit. I want to make sure my system has ufw, munin-node, logrotate, rsync, mlocate and wget installed, and I cleaned out the options I didn’t need, so my control file looks like this:

### Commented entries have reasonable defaults.
### Uncomment to edit them.
Section: misc
Priority: optional
Standards-Version: 3.6.2

Package: my-metapackage
Version: 0.01
Maintainer: Your Humble Scribe <fake@fake.com>
Depends: ufw, munin-node, logrotate, rsync, mlocate, wget
Description: Depends on Useful Bits and Pieces
 This is a dependency package that we use to make sure that basics,
 like a firewall, rsync, munin-node, etc, are installed.

Next step is to turn that into a debian package:

equivs-build my-metapackage
dh_clean -k
dpkg-deb: building package `my-metapackage' in `../my-metapackage_0.01_all.deb'.
The package has been created.
Attention, the package has been created in the current directory,
not in ".." as indicated by the message above!

Now to test it out:

sudo dpkg -i my-metapackage_0.01_all.deb

This will install my-metapackage, but that’s not enough. Dpkg can’t satisfy remote dependencies, so it will throw errors for each of the dependencies you declared and leave your newly installed package in a broken state. You can then use aptitude to resolve these missing dependencies and complete the process:

sudo aptitude -fy --safe-resolver install

This tells aptitude to fix the broken install: -f fixes missing dependencies, -y assumes “yes” at any prompts, and --safe-resolver tells it to prefer solutions that don’t remove packages.
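Since every new server will need this same two-step dance, it can be wrapped in a small script — the filename and path here are just illustrative, and this is only syntax-checked, not battle-tested:

```shell
# Write the install-then-fix sequence as a reusable script.
cat > /tmp/install-metapackage.sh <<'EOF'
#!/bin/sh -e
# dpkg installs the local .deb but cannot fetch its remote dependencies,
# so it exits non-zero and leaves the package unconfigured; aptitude -f
# then pulls the missing dependencies and finishes configuration.
sudo dpkg -i "$1" || true
sudo aptitude -fy --safe-resolver install
EOF
chmod +x /tmp/install-metapackage.sh
sh -n /tmp/install-metapackage.sh   # syntax check only; does not execute it
```

Usage would then be: /tmp/install-metapackage.sh my-metapackage_0.01_all.deb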

That should be it.  I’ll note, though, that for some reason aptitude was insisting on removing my packages rather than fixing the broken dependencies.  I’m not sure why, and I’m not sure why it’s working now.