monotone-viz [updated]

I’ve recently packaged monotone-viz 1.0.2 for MacPorts (and soon also for openSUSE), a program to display monotone’s DAG of revisions and their properties. This comes in very handy if you need to do a complex (asynchronous) merge or want to know what exactly monotone has merged together for you. One example, shown on the right, is the graph of the “merge fest” we had in spring 2008 for the last summit.

complex merge in monotone

(Source: monotone website)

Merging in monotone is actually quite robust; while I’ve had a lot of “fuzzy” feelings in the past when doing complex merges with subversion or even CVS, merging in monotone is a no-brainer. Most of the time it does exactly what you want it to do. One exception, however, is the handling of deleted files, also known as the “die-die-die” merge fallout: if you merge together two distinct development lines where a file has been edited on the left side and deleted on the right side, the deletion always wins over the edit, and there is absolutely nothing you can do about it (well, apart from re-adding the file after the merge and losing the file’s previous history). Thankfully this is not a very common use case, and keeping an “Attic” directory for deleted, but possibly revivable, files is the medium-term solution until someone picks up the topic again.
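
Re-adding such a lost file by hand boils down to something like the following sketch (the revision id and the path are placeholders, and the file’s previous history is of course still gone):

 # fetch the last known content of the killed file and put it back under version control
 $ mtn cat -r OLD_REVISION src/feature.c > src/feature.c
 $ mtn add src/feature.c
 $ mtn commit -m "re-add feature.c which was lost in a die-die-die merge"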

But back to monotone-viz: there is one problem I couldn’t fix on MacPorts. It doesn’t draw the arrows of the graph properly, but rather puts them above the revisions, like this:

monotone-viz-drawing-bug

I’ve already asked the author about it, but he couldn’t find out what’s wrong, so I suspect something is broken in my gtk+ setup. If you have a hint where I should look, give me a pointer, I’d be very thankful. And if it works correctly for you, even better; drop me a note as well. I’ve uploaded a test monotone database with a simple merge to test the behaviour. Thanks!

[Update: As this bug points out, the rendering problem comes from Graphviz’ dot program – hopefully the patch will make it into a new release shortly.]

monotone / guitone update

Thanks to Timothy (and probably also thanks to the bad weather) we’ve seen quite some development activity for monotone recently. It started out with the “key-by-hash” changes: keys are no longer identified by their name, but by their unique ID, which finally solves the “I have lost my monotone private key – help!” issues from the past. This already went into monotone 0.45, which was released about a month ago.

Now the upcoming monotone release includes another long-awaited feature, which is in trunk as of yesterday: being able to query database contents from remote monotone servers. This is particularly useful if you want to check what branches are available server-side before you fetch them all. I’ve teamed up with Timothy and made a “single-shot” version of his new `automate remote_stdio` command, which can be used as follows:

 # this picks your default netsync server stored in the database
 $ mtn au remote branches
 # ... alternatively, give it an optional hostname
 $ mtn au remote --remote-stdio-host myserver.org branches

Since both `automate remote_stdio` and `automate remote` can execute any available remote automate command, a little Lua guard was implemented which lets the server administrator pick the commands he wants to make available. By default, no command can be executed:

function get_remote_automate_permitted(key_identity, command, options)

where `key_identity` identifies the calling user, `command` is a table which contains the command’s name and arguments, and `options` is another table which contains the options for the given command.
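
Such a guard could, for instance, permit a small read-only whitelist of commands for everybody. A minimal sketch (the whitelisted command names are just examples, and I assume the command name is the first entry of the `command` table):

function get_remote_automate_permitted(key_identity, command, options)
    -- allow only a couple of harmless read-only queries
    local whitelist = { branches = true, tags = true }
    if whitelist[command[1]] then
        return true
    end
    return false
end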

Now, what does all this have to do with guitone?

The changes in 0.45 in particular already forced me to pick my old pet project up again, because the key-related commands changed their output in an incompatible manner and my code therefore needed to adapt as well. I’m also planning to finalize other features in the upcoming weeks, among them netsync support (whose automate versions incidentally have to be implemented in monotone first as well), an improved changeset browser and probably other minor things and bug fixes which have been on hold since February.

Netsync dialog

The next guitone version won’t be out before monotone 0.46 hits the streets though, simply because I have to wait for (and implement) a couple of things in monotone first, and because I want to publish a release which does not explode the first time you look at it. But hey, since I’m the release manager for monotone as well, it’s in my hands when it will be out 🙂
It’s also likely that I’ll introduce a beta release cycle for the next guitone release and provide a couple of binaries, so people can get their hands on it more easily and earlier.

So, stay tuned for more updates on both projects!

Why the lucky stiff

I actually don’t know where this guy got his name from, nor how exactly I stumbled upon him or his excellent book on Ruby, but one thing is for sure: he’s one of those adorable people with more than one or two skills. You can already tell from the first few pages of the aforementioned book, which is worth a read even if you’re not interested in Ruby, if only for all its genius anecdotes, weird examples and funny cartoons. So if you have half an hour or two, look at it and think about it.

And yet again something repeats which I’ve already encountered in other areas of my life: by the time I get to know a particular band, or in this case a software evangelist, they have already retired from their work for unknown reasons. All the best to you, you lucky stiff. I’m sure you’ll do an excellent job in the “offline world” as well, wherever you are…

Fixing MySQL / PDO error 2014

The following error on my current project at work really gave me lots of headaches today:

SQLSTATE[HY000]: General error: 2014 Cannot execute queries
while other unbuffered queries are active. Consider using
PDOStatement::fetchAll().
Alternatively, if your code is only ever going to run against
mysql, you may enable query buffering by setting the
PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.

So, yes, I already have PDO::MYSQL_ATTR_USE_BUFFERED_QUERY set to TRUE, so why is PDO still complaining? And especially, why is it complaining now, when the same code which triggered the error today ran without problems for the past nine months?
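
For reference, this is roughly how that attribute is set in the first place (the DSN and credentials are placeholders):

$con = new PDO('mysql:host=localhost;dbname=doodle', $user, $password, array(
    // buffer result sets on the client side
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
));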

After struggling a lot I found the cause: I had forgotten to close a statement which was reused in a loop!

So take care that you always call PDOStatement::closeCursor() in use cases like this:

$stmt = $con->prepare("SELECT * FROM doodle WHERE id = ?");

foreach (range(1, 10) as $id)
{
    $stmt->execute(array($id));
    $row = $stmt->fetch();
    // free the result set before the statement is executed again
    $stmt->closeCursor();
}

Development of guitone has been abandoned

For several, mostly personal, reasons the development of guitone has been abandoned. There are a couple of unfinished features and bugfixes waiting in the trunk which will now not be released, but I’d rather make a major release than another minor one, and there is way too much left to do for a single person with way too little time to really make a sound release.

Maybe I’ll pick it up again in a few months, but I can’t guarantee that. For now, my thanks go to all the users and packagers for their time and for the support!

Change svn:externals quickly

If you’ve worked with external repository definitions and branches before, you probably know the problem: if you create a new branch off an existing one or merge one branch into another, subversion is not smart enough to update svn:externals definitions which point to the same repository, but rather keeps them pointing to the old (wrong) branch. (I read they fixed that in SVN 1.5 by supporting relative URLs, but still, a couple of people might not be able to upgrade, and I’d rather stay explicit with external naming anyway.)

Anyway, today at work I was so sick of the problem that I decided to hack something together. Here is the result:

#!/bin/bash
export LANG=C
if [ $# -ne 2 ]
then
    echo "Usage:" $(basename $0) "<old> <new>"
    exit 1
fi

old=$1
new=$2
# determine the repository root of the current working copy
repo_root=`svn info | grep "Repository Root" | cut -f3 -d" "`

# bail out early if the target branch does not exist in the repository
if [ -n "$(svn info $repo_root/$new 2>&1 | grep "Not a valid URL")" ]
then
    echo "$repo_root/$new is not a valid URL"
    exit 1
fi

# rewrite every external definition which points at the old branch
for ext in $(svn st | grep -e "^X" | cut -c 8- | xargs -L1 dirname | uniq)
do
    externals=$(svn propget svn:externals $ext)
    if [[ "$externals" == *$repo_root/$old* ]]
    then
        externals=${externals//$repo_root\/$old/$repo_root\/$new}
        svn propset svn:externals "$externals" $ext
    fi
done

Save this into a file, make it executable and you’re good to go! The script is smart enough to check whether the target URL (based on the repository’s root and the given <new> path) actually exists, and it only changes those external definitions which actually match the repository root.
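
Usage is then as simple as running the script from the top of your working copy, for example (script name and branch paths are made up):

 $ cd ~/checkouts/myproject
 $ ./switch-externals.sh branches/1.0.x branches/1.1.x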

Fun!

SSL Verification with Qt and a custom CA certificate

So I wanted to make my application updater for guitone SSL-aware the other day. The server setup was an easy job: add the new domain (guitone.thomaskeller.biz) to cacert.org, create a new certificate request with the new SubjectAltName (and all the other, existing alternative names – a procedure where this script comes in handy), upload it to CAcert, sign it there, download and install the new cert on my server, set up an SSL vhost for the domain – done!

Now, on Qt’s side of things using SSL is rather easy as well; the only thing you have to do is give the setHost method another parameter:

QHttp * con = new QHttp();
con->setHost("some.host.com", QHttp::ConnectionModeHttps);
con->get("/index.html");
// connect to QHttp's done() signal and read the response

This should actually work for all legit SSL setups if Qt (or, to be more precise, the underlying openssl setup) knows about the root certificate with which your server certificate has been signed. Unfortunately, CAcert’s root certificate is not installed in most cases, so you basically have two options:

  1. Connect QHttp’s sslErrors(...) signal to the QHttp::ignoreSslErrors() slot. This, of course, pretty much defeats the whole purpose of an SSL connection, because the user is not warned about any SSL error, so legitimate errors (an expired or malicious certificate) are just ignored as well. (*)
  2. Make the root certificate of CAcert known to the local setup, so the verification process can proceed properly.

I decided to do the latter. This is how the code now looks:

QHttp * con = new QHttp();
QFile certFile("path/to/root.crt");
// the file must be opened explicitly before it is handed to
// QSslCertificate; don't wrap the open() call in Q_ASSERT directly,
// otherwise it gets compiled out in release builds
bool certOpened = certFile.open(QIODevice::ReadOnly);
Q_ASSERT(certOpened);
QSslCertificate cert(&certFile, QSsl::Pem);
// this replaces the internal QTcpSocket QHttp uses; unfortunately
// we cannot reuse that one because Qt does not provide an accessor
// for it
QSslSocket * sslSocket = new QSslSocket(this);
sslSocket->addCaCertificate(cert);
con->setSocket(sslSocket);
con->setHost("some.host.com", QHttp::ConnectionModeHttps);
con->get("/index.html");
// connect to QHttp's done() signal and read the response

Particularly interesting to note here is that the QIODevice (in my case the QFile instance) has to be opened explicitly before it is given to QSslCertificate. I did not do this previously; Qt gave me neither a warning nor an error, but simply refused to verify my server certificate, just because it hadn’t loaded the root certificate properly.

(*) One could, of course, check the exact SSL error triggered via QSslError::error(), which in our case would be e.g. QSslError::UnableToGetLocalIssuerCertificate, but this is rather hacky and could certainly be abused by a man in the middle as well.
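
If you go that route anyway, a rough sketch of such a selective handler might look like the following; the class and member names (Updater, con) are made up for illustration:

// hypothetical slot, connected to QHttp's sslErrors(const QList<QSslError> &) signal
void Updater::onSslErrors(const QList<QSslError> & errors)
{
    foreach (const QSslError & err, errors)
    {
        // tolerate only the "unknown CA" case, abort on everything else
        if (err.error() != QSslError::UnableToGetLocalIssuerCertificate)
            return;
    }
    // all reported errors were acceptable, continue the request
    con->ignoreSslErrors();
}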

Qt Creator

Wow, I absolutely did not see this coming – finally the Trolls^WNokians offer a lean and nice cross-platform IDE for Qt which incorporates all the other Qt tools and a gdb frontend! Formerly dubbed “Project Greenhouse”, the baby just got a new name and a fancy logo: Qt Creator. There is a pre-release version available for download, licensed under a special preview license. The final product should be dual-licensed though, like the rest of the Qt tools.

Oh what a happy day for Qt users (ever wanted to look at the value of a QString in gdb…?) and what a sad one for all the other free Qt IDEs out there, like edyuk or QDevelop. Especially edyuk looked very promising since it provided a lot of features and a good user interface.

Global AJAX responders in Prototype

I encountered a small but ugly problem in our Symfony-driven project today: unauthenticated AJAX requests, which may happen e.g. when the session has timed out on the server but the user hasn’t reloaded the page in the meantime, are also forwarded to the globally defined login module / action. This of course leaves the HTML page, which is constructed from single HTML components, in a total mess. Ouch!

So yeah, rendering the complete login mask HTML as a partial to the client is stupid, but also relatively easy to fix:

public function executeLogin($request)
{
    if ($request->isXmlHttpRequest())
    {
        // renderJSON is a custom function which json_encode's
        // the argument and sets an X-JSON header on response
        return $this->renderJSON(array("error" =>
                                    "Your session expired"));
    }
    ...
}

This was of course only half of the fix. I still had to handle this special JSON response (and others) on the browser’s side:

new Ajax.Request("mymodule/myaction", {
     onSuccess: function(response, json) {
         // json is the evaluated X-JSON header (if any)
         if (json && json.error)
         {
             // display the error
             alert(json.error);
             return;
         }
         // the actual callback functionality
     }
});

Uh, anyone screaming “spaghetti code”? Yeah, you’re right. I quickly headed for a more general implementation, also because we can’t do that for a couple of Symfony-specific prototype helpers anyway, like update_element_function, whose Javascript code gets generated by Symfony dynamically. So how can this be generalized?

Ajax.Responders to the rescue

Actually, prototype already contains some kind of “global hook-in” functionality for all Ajax requests triggered by the library: Ajax.Responders.

While this seemed to support all common callbacks (among them onCreate, onSuccess, onFailure and onComplete), some testing showed that e.g. the global onComplete callback was always called after the specific AJAX request’s onComplete callback, so this was pretty useless for me. After all, I also wanted to prevent the specific callback from being executed when I encountered an error…

After diving through prototype’s code for some hours I found a solution. Particularly helpful here is that prototype signals every created Ajax request to the onCreate handler and gives it the request and response objects handling this request as arguments. Time to overwrite prototype’s responder code! Here it is:

Ajax.Responders.register({
    onCreate: function(request) {
        var oldRespondToReadyState = request.respondToReadyState;
        request.respondToReadyState = function(readyState) {
            var state = Ajax.Request.Events[readyState];
            var response = new Ajax.Response(this);
            var json = response.headerJSON;
            
            if (state == 'Complete' && json && json.error)
            {
                alert(json.error);
                return;
            }
            oldRespondToReadyState.call(response.request, 
                                           readyState);
        }
    }
});

Another particularly useful piece of knowledge I gathered today to make this work is how Function.prototype.call and Function.prototype.apply work (both have been available since Javascript 1.3).
Basically they allow the execution of a function in the scope of the object given as the first parameter (there is a nice introduction available here).

If you’ve ever wanted to “send an event to some object to make its listener fire” because the listener’s code depended on the fact that the this reference points to the object the event was fired upon, you should now have a viable alternative:

Event.observe(myObj, 'click', myHandler);
// is call-wise equivalent to
myHandler.call(myObj);
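
The only difference between the two is how additional arguments are passed on; a tiny made-up example:

// call takes the extra arguments one by one,
// apply takes them as a single array
myHandler.call(myObj, firstArg, secondArg);
myHandler.apply(myObj, [firstArg, secondArg]);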

No need to create custom mouse events and throw them around any longer… 😉

Windows binary available and Outlook

I’ve just uploaded a Windows binary for guitone 0.9 – sorry that it took a little longer this time. I’ve been quite busy during the past days, and having no Windows machine at home doesn’t help much either 😉
Of course, if there are other people willing to package guitone on Windows, drop me a note. It’s actually not much work; a detailed explanation and an InnoSetup installer script are already in place.

On a related note, I’m working on a couple of new features for guitone. The next version will be able to create new monotone databases and also create new projects from existing ones (basically a frontend for `mtn setup`). Furthermore, I decided I should finally implement some workspace commands, so at least the equivalents of `mtn add` and `mtn drop` should be possible, and probably `mtn revert` and `mtn rename` as well.
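
For the database and project creation part, what such a frontend wraps on the command line is roughly the following (database file, branch and directory names are just examples):

 $ mtn db init --db=newproject.mtn
 $ mtn setup --db=newproject.mtn --branch=org.example.newproject newproject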

The monotone additions for netsync automation still haven’t made it into trunk, mainly because I was not in the mood to finally fix the anticipated lua testing for stdio traffic (I really should not push this task further away, because the branch where the automate netsync stuff resides diverges more and more over time…). And of course, as long as this is not in monotone’s trunk, it makes no sense to implement it in guitone either – so yeah, if you’re particularly waiting for this feature, give me a kick in the butt so I finally get around to it.