Quick Tip: NetworkManager and /etc/resolv.conf

If you have trouble with NetworkManager overwriting your `search` and `domain` configuration after every startup and you’re using DHCP, add the following line to your `/etc/dhclient.conf`:

append domain-name " company.local other.company.local";

So whenever your DHCP server doesn’t provide this information (the one at my company doesn’t), it’ll add this

domain company.local
search company.local other.company.local

to your `/etc/resolv.conf`.

Fixing MySQL / PDO error 2014

The following error in my current project at work gave me a lot of headaches today:

SQLSTATE[HY000]: General error: 2014 Cannot execute queries
while other unbuffered queries are active. Consider using
PDOStatement::fetchAll().
Alternatively, if your code is only ever going to run against
mysql, you may enable query buffering by setting the
PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.

So, yes, I already have `PDO::MYSQL_ATTR_USE_BUFFERED_QUERY` set to `TRUE`, so why is PDO still complaining? And why is it complaining now, when the same code that triggered the error today has run without problems for the past nine months?

After struggling a lot I found the cause: I had forgotten to close a statement that was reused in a loop!

So take care that you always call `PDOStatement->closeCursor()` in use cases like this:

$stmt = $con->prepare("SELECT * FROM doodle WHERE id = ?");

foreach (range(1, 10) as $id)
{
    $stmt->execute(array($id));
    $row = $stmt->fetch();
    $stmt->closeCursor();
}

Useful Gotcha #27

If you get the error ssl_error_rx_record_too_long when browsing an SSL-secured virtual host and wonder what the heck is going on (hey, it worked the day before), check whether your admins have changed the IP address of the machine, in which case you have to adapt the IP-based VHost configuration accordingly… not that this SSL error would have told you that in the first place!

Change svn:externals quickly

If you’ve worked with external repository definitions and branches before, you probably know the problem: if you create a new branch off an existing one, or merge one branch into another, Subversion is not smart enough to update svn:externals definitions that point into the same repository, but keeps them pointing to the old (wrong) branch. (I read they fixed that in SVN 1.5 by supporting relative URLs, but a couple of people might not be able to upgrade, and I prefer to stay rather explicit with external naming anyway.)

Anyways, today at work I was so sick of the problem that I decided to hack something together. Here is the result:

#!/bin/bash
export LANG=C
if [ $# -ne 2 ]
then
    echo "Usage:" $(basename $0) "<old> <new>"
    exit 1
fi

old=$1
new=$2
repo_root=`svn info | grep "Repository Root" | cut -f3 -d" "`;

if [ -n "$(svn info $repo_root/$new 2>&1 | grep "Not a valid URL")" ]
then
    echo "$repo_root/$new is not a valid URL"
    exit 1
fi

for ext in  $(svn st | grep -e "^X" | cut -c 8- | xargs -L1 dirname | uniq)
do
    externals=$(svn propget svn:externals $ext)
    if [[ "$externals" == *$repo_root/$old* ]]
    then
        externals=${externals//$repo_root\/$old/$repo_root\/$new}
        svn propset svn:externals "$externals" $ext
    fi
done

Save this into a file, make it executable, and you’re good to go! The script is smart enough to check that the target URL (based on the repository’s root and the given <new> path) actually exists, and it only changes those externals definitions that actually match the repository root.
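The actual rewrite in the loop above is done by bash’s pattern substitution. A minimal, self-contained sketch of what happens to a property value (repository URL and branch names are made up for illustration):

```shell
# Hypothetical svn:externals value, as the script would read it
# via "svn propget svn:externals"
externals='libs https://svn.example.com/repo/branches/old-branch/libs
tools https://svn.example.com/repo/branches/old-branch/tools'

# ${var//pattern/replacement} replaces ALL occurrences; a single
# slash (${var/pattern/replacement}) would only replace the first.
echo "${externals//branches\/old-branch/branches\/new-branch}"
# both lines now point to branches/new-branch
```

Note that the literal slashes inside the pattern have to be escaped, since an unescaped slash would be taken as the delimiter between pattern and replacement.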

Fun!

Global AJAX responders in Prototype

I encountered a small but ugly problem in our Symfony-driven project today: unauthenticated AJAX requests, which may happen for example when the session has timed out on the server but the user hasn’t reloaded the page in the meantime, are also forwarded to the globally defined login module / action. This of course leaves the HTML page, which is constructed from single HTML components, in a total mess. Ouch!

So yeah, rendering the complete login mask HTML as a partial to the client is stupid, but also relatively easy to fix:

public function executeLogin($request)
{
    if ($request->isXmlHttpRequest())
    {
        // renderJSON is a custom function which json_encode's
        // the argument and sets an X-JSON header on response
        return $this->renderJSON(array("error" =>
                                    "Your session expired"));
    }
    ...
}

This was of course only half of the fix. I still had to handle this and other special JSON responses on the browser side:

new Ajax.Request("mymodule/myaction", {
    onSuccess: function(response, json) {
        if (json.error)
        {
            // display the error
            alert(json.error);
            return;
        }
        // the actual callback functionality
    }
});

Uh, anyone screaming “spaghetti code”? Yeah, you’re right. I quickly headed for a more general implementation, also because we can’t do this for a couple of Symfony-specific Prototype helpers anyway, like update_element_function, whose JavaScript code is generated dynamically by Symfony. So how can this be generalized?

Ajax.Responders to the rescue

Actually, prototype already contains some kind of “global hook-in” functionality for all Ajax requests triggered by the library: Ajax.Responders.

While this seemed to support all the common callbacks (among them onCreate, onSuccess, onFailure and onComplete), some testing showed that, for example, the global onComplete callback was always called after the specific AJAX request’s onComplete callback, so this was pretty useless for me. After all, I also wanted to prevent the specific callback from executing at all when I encountered an error…

After diving through Prototype’s code for some hours I found a solution. Particularly helpful here is that Prototype signals every created Ajax request to the onCreate handler and passes the request and response objects handling this request as arguments. Time to overwrite Prototype’s responder code! Here it is:

Ajax.Responders.register({
    onCreate: function(request) {
        var oldRespondToReadyState = request.respondToReadyState;
        request.respondToReadyState = function(readyState) {
            var state = Ajax.Request.Events[readyState];
            var response = new Ajax.Response(this);
            var json = response.headerJSON;
            
            if (state == 'Complete' && json && json.error)
            {
                alert(json.error);
                return;
            }
            oldRespondToReadyState.call(response.request, 
                                           readyState);
        }
    }
});

Another particularly useful piece of knowledge I gathered today to make this work is how Function.prototype.call and Function.prototype.apply work (both have been available since JavaScript 1.3).
Basically, they allow the execution of a function in the scope of the object given as the first parameter (there is a nice introduction available here).

If you’ve ever wanted to “send an event to some object to make its listener fire” because the listener’s code depended on the this reference pointing to the object the event was fired upon, you now have a viable alternative:

Event.observe(myObj, 'click', myHandler);
// is call-wise equivalent to
myHandler.call(myObj);

No need to create custom mouse events and throw them around any longer… 😉

New server setup

I finally got sick of my SuSE 9.3 V-Server when a good friend of mine pointed me to a really fancy and sexy IMAP web frontend called RoundCube last week. Written entirely in PHP 5, it had no real chance of running easily on my oldish PHP 4.3 installation without me recompiling everything. I had wanted to upgrade to an Ubuntu LTS server in the short to middle term anyway, because I had grown equally sick of Plesk. But while I had been entertaining these upgrade thoughts for a while, the price tag for a temporary setup to make a clean transition was just too high: it would have cost me at least 65 Euro to get a throw-away V-Server for about three months, whereas two weeks would have been enough.

So, being a little kid about these things sometimes, in the sense of not being able to wait for stuff to happen, I did the one thing one really should not do at all: touch a running system.

I backed up all important stuff to a special directory inside the virtual machine and told the automatic installer procedure to start installing Ubuntu 8.04 LTS. After approximately two hours, my working setup was gone. No email, no webserver, nothing.

So I started with a clean Ubuntu server instance from scratch yesterday evening, and it took me the whole night to get some things working. While the mail setup is still a beast (I need to read up on exim and try out a couple of tutorials), I’m already quite proud of my Apache / FastCGI PHP setup. I copied a lot of ideas from the www setup at work, where we have implemented separated, secured VHost users, suexec-protected PHP wrappers and more.
Tonight I added another little puzzle piece to the mix: SFTP/SCP access for individual users to their virtual hosts.

Again, the Ubuntu community was very helpful – I found a HOWTO for getting an sftp server up and running in a jailed environment. There was actually very little I had to change to fit my use case: instead of using /home/chroot as the jail, I put the jail into /var/www/vhostjail and placed all websites / vhosts that should get file transfer access below that directory. The biggest plus of this setup – besides the security aspect (people can only sftp or scp into the jail and cannot break anything on the rest of the system) – is that the user who uploads files and the web user who executes the requests (i.e. the Apache user) are one and the same. No need to make files world-readable or even world-writable when setting up a web application that has to read or write data. No need to change the owner or the permissions of uploaded files because the web user could otherwise not read, write or execute them.
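As a side note, newer OpenSSH versions (4.9 and later) can express the chroot part of such a setup directly in sshd_config, without manually building a jail. This is only a hedged sketch of that alternative, not the configuration from the HOWTO I followed; the group name vhostusers is made up:

```
# /etc/ssh/sshd_config (sketch; requires OpenSSH >= 4.9)
Subsystem sftp internal-sftp

Match Group vhostusers
    ChrootDirectory /var/www/vhostjail
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

Note that sshd insists that the ChrootDirectory itself is owned by root and not writable by group or others, so the per-vhost directories below it are where the users actually get write access.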

Wow. I like that setup.

Now, if only exim were so easy to understand and master…

Qt Framework introduction

I did a small workshop on Qt today at my company, mainly to introduce the framework to my fellow developers. I think I did a good job, because I saw the glow in their eyes while I was presenting the Graphics View demos and the 2D paint engine, among many other things.

Anyways, if you’re interested in Qt as well and want to get a short (German) introduction, snag the workshop slides from the Stuff page.

Life and everything

So this was one of those mediocre days: you wake up far too early in the morning because your little son thinks it’s a good time to be awake. You do the usual things, like somehow waking yourself up in the bathroom, putting on some clothes (while picking the ones from the pile that don’t smell too bad), taking care of your son and your wife, eating something, later giving each a hug, “love you, bye!”, and going to work.

Well, work, this is probably what troubles me the most at the moment. Of course you can’t always do what you personally like or find good, you’re bound to projects, internal processes and stuff. Work used to be fun because I had and have great colleagues, but sometimes, like recently, this just doesn’t compare to or outweigh the actual project work, which simply annoys me.

Usually, there are several kinds of projects, and I’m speaking solely of software projects here. Two of these “types” are the reason for my current struggles:

The “I knew that this would happen and break our neck” project. Usually these projects are plainly brain dead, include immense code hacks to get something working, or are just built upon the wrong / dysfunctional components. Of course it’s not an option to just cancel the project; they have to be kept alive most of the time for political and / or reputational reasons.

The “We have to use this whether or not it is sufficient” project. This is something even more brain dead and normally applies to a component or software you need to build your own software upon. To make things even worse, the component or software is mostly closed source, meaning you or your customer had to pay a huge amount of money for it, which makes you solely dependent on this particular vendor. The whitepapers of those things always look great, but when it comes to the actual implementation details you may find out “whoops, it doesn’t do what we want, now what?” Sure, one could bug the vendor and beg him to implement the feature, and maybe he even does that (depending on the amount of money you’ve spent on licensing before), but this doesn’t always work out, and even if it worked out for one missing feature, it might not work out for another one you find later. So you sit there and try to hack it yourself, and while your “system” obviously has something they’d call an “API”, it is badly documented, and of course any support call to the vendor costs either a lot of struggle or, even worse, $$$.

For me personally, an even worse problem recently is that even if a project finished successfully, I didn’t get much satisfaction from it. Maybe this was because I couldn’t use the tools or environment I’d have liked to use (Windoze, anyone?), maybe it was because the resulting code quality didn’t match my own expectations (if you do project work you almost never have much time to get your tools right, think about a proper and extensible architecture, and so on), and maybe it was because I couldn’t tell anyone about the project because of NDAs or other political issues. How should I be proud of something I cannot show and explain in detail to anybody?

So all this made me think a lot in the past. At first I decided to get my joy of work back by doing personal Open Source projects in my spare time. This worked for quite some time and still kind of works, but the obvious problem is time. If you work 8+ hours, have a family with a little child, and then are supposed to find some time to get some serious hacking done, it usually fucks up your sleeping rhythm, because you start to shift your “personal” workload into evening and night sessions.

Shouldn’t it somehow be possible to get both satisfaction and money for something you create? I came to the conclusion that it should be possible; after all, these people seem to have a lot of fun with their day job. Is that because they produce Open Source software, which is not only licensed by many companies, but also free to use for anyone else? Is that because their employer, Trolltech, not only allows but encourages them to research personal interests and get in contact with the community?

I could be wrong, but I think the answers to the above questions are “yes”. That’s why I think my perfect job would be at a company which not only uses, but respects and lives Open Source. And that’s why I recently applied for a job offer from the Trolls, now let’s see how this one goes…