New PGP Key

I think it was about time to get a new one. While I do not get much encrypted / signed email, the old one from 2003 that used a DSA/ElGamal combination was considered less secure by today’s standards. Since I had a couple of signatures on the old one, I ensured that I signed the new one with the old one to get at least “some” initial trust on this as well.

tl;dr Here is the new key: 0xCD45F2FD

And for those of you who want to span a more “social” web of trust with me, I’m also on and have a couple of invites left as you can see 🙂

Embed Confluence pages in Jira issues

[Updated 2014-05-15: adapt the iframe’s width as well and add an “Edit in Confluence” link]

There is a question in Atlassian’s Q&A tracker that hasn’t sufficiently been answered yet and that I stumbled upon today as well, namely embedding whole Confluence pages in Jira description (and any other rich-text enabled) fields.

The reason why one would want to do something like this is to avoid context-switching between both tools. Imagine you write your specification in a wiki, but want to use an issue tracker to manage your workload. And while Atlassian has a solution to link Jira issues to Confluence pages, there is no macro or other function to actually embed the content.

What I’ll show you today is basically a hack. We’ll generate a small HTML snippet dynamically in Confluence that includes an <iframe> element and lets the user copy it into a Jira description field, where it loads the page’s print view. This hack was tested under the following (software) conditions:

  • Jira 6.1.7
  • Confluence 5.4.2, with the documentation theme globally applied
  • both set up under the same domain

Now to the configuration. On Jira’s side only one thing is needed: the {html} macro must be enabled. By default it is disabled for security reasons (you can tell), and you should really only enable it if your Jira instance is not publicly available. Anyway, follow these steps:

  1. In Jira, go to Manage Add-Ons
  2. Then change the “Filter visible addons” drop down to show “All Add-Ons”
  3. Then search for the word “wiki” and expand the Wiki Renderer Macros Plugin
  4. Then click on the link on the right which says “7 of 8 modules enabled”
  5. Finally, click “enable” next to the last module which says “html”

Now, on Confluence’s side we want to generate an HTML snippet for a specific page, and we need a little UI for that. Usually, if you want to change the contents of a web page after it is rendered in the browser, you use some browser-specific mechanism, e.g. you write a Chrome extension or a Greasemonkey script for Firefox. But Confluence offers a better, cross-browser way to inject custom code – custom HTML!

  1. In Confluence, go to Custom HTML
  2. Click on “Edit” and paste the following code into the “At end of the HEAD” textbox
  3. Hit “Save”

Now to the code:

  AJS.toInit(function() {
    var meta = AJS.$('meta[name=ajs-page-id]');
    if (meta.size() > 0) {
      var list = AJS.$('<li class="ajs-button normal" />')
        .appendTo('#navigation ul');
      AJS.$('<a rel="nofollow" accesskey="q" title="Copy embed code (q)" />')
        .text('Embed code')
        .on('click', function() {
          window.prompt('Embed code: Ctrl+C, Enter', '{html}\u003Cscript type="text/javascript">function aR(fr){$f=AJS.$(fr);$f.height($f.contents().height());$p=$f.parents("*[data-field-id=description]");if($p.length>0){$f.width($p.width())}else{$p=$f.parents("div.mod-content");if($p.length>0){$f.width($p.width()-30)}}}\u003C/script>\u003Ca href="/confluence/pages/editpage.action?pageId=' + meta.attr('content') + '" style="float: right" target="confluence">\u003Csmall>Edit in Confluence \u003C/small>\u003C/a>\u003Ciframe src="/confluence/plugins/viewsource/viewpagesrc.action?pageId=' + meta.attr('content') + '" style="overflow:hidden;border:0" onload="aR(this);">\u003C/iframe>{html}');
        })
        .appendTo(list);
    }
  });

So what is this? Basically, AJS is the entry point for Atlassian’s AUI library (Atlassian User Interface), which contains a full version of jQuery, accessible via AJS.$. Once AJS is initialized, we query the page ID of the currently viewed page; it is embedded in the page as a meta tag named “ajs-page-id”.

Next, a new button is added to the main navigation that opens a window prompt containing the HTML code to be copied (\u003C is <; this was needed to keep the page valid HTML while still showing proper HTML tags in the prompt).

Let’s have a closer look at the dynamic code part that is later executed in Jira, this time broken into lines for better understanding:

<script type="text/javascript">
function aR(fr) {
  $f = AJS.$(fr);
  $f.height($f.contents().height());
  $p = $f.parents("*[data-field-id=description]");
  if ($p.length > 0) {
    $f.width($p.width());
  } else {
    $p = $f.parents("div.mod-content");
    if ($p.length > 0) {
      $f.width($p.width() - 30);
    }
  }
}
</script>
<a href="/confluence/pages/editpage.action?pageId=' + meta.attr('content') + '"
   style="float: right" target="confluence">
  <small>Edit in Confluence</small>
</a>
<iframe
  src="/confluence/plugins/viewsource/viewpagesrc.action?pageId=' + meta.attr('content') + '"
  style="overflow:hidden;border:0" onload="aR(this);">
</iframe>

You can see some JavaScript again, a link to edit the page externally in Confluence, and an iframe definition. The frame loads Confluence’s page source view (usually accessible from Tools > Show page source), and its width is set dynamically to the width of the outer container, i.e. either the detailed Jira issue view’s div.mod-content container or Jira Agile’s description container (targetable with dd[data-field-id=description]) in a Scrum-based board. For the former we have to subtract some pixels to avoid pushing the edit bar on the right of the description outside of the parent container.

Now, to avoid the contents of the iframe having to be scrolled separately from the browser’s viewport, we also set the height of the iframe, in this case dynamically to the height of the iframe’s contents as soon as they are loaded. Note that a pure CSS solution, like height: 100%, would not work here, because we don’t control the parent HTML containers in which the iframe is actually rendered, and giving the iframe a fixed height would be nonsense as well, since you don’t know the page length in advance.

And that’s it, now you can embed Confluence pages in Jira issues with only a few clicks! Have fun!

Batch-remove empty lines at the end of many Confluence pages

In a customer project we decided to collaboratively write a bigger bunch of documentation in Atlassian’s Confluence and export it with Scroll Office, a third-party Confluence plugin, into Word.

That worked fine so far, but soon we noticed that we had been kind of sloppy with empty lines at the end of each page, which were obviously carried over into the final document. So instead of going over each and every page and removing the empty lines there, I thought it might be easier to do this directly on the database, in our case MySQL.

The query was quickly developed, but then I realized that MySQL has no PREG_REPLACE function built in, so I first needed to install a UDF, a user-defined function. Luckily, this UDF worked out of the box, and so the query could be finalized:

UPDATE BODYCONTENT
SET BODY=PREG_REPLACE("/(<p>&nbsp;<.p>)+$/", "", BODY)
WHERE BODY LIKE "%<p>&nbsp;</p>";

This query updates all current pages (no old versions) from all spaces that end with at least one empty line <p>&nbsp;</p> – this is Confluence’s internal markup for that – and removes all of these empty lines from all matched pages.
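If you want to sanity-check the pattern before touching the database, the same replacement can be sketched in plain Java (the class name and sample body here are mine, not from the project; the regex uses an escaped slash instead of the dot trick from the SQL version):

```java
public class TrailingEmptyLines {
    // Same idea as the PREG_REPLACE call above: strip one or more
    // trailing <p>&nbsp;</p> blocks from the end of a page body.
    static String strip(String body) {
        return body.replaceAll("(<p>&nbsp;</p>)+$", "");
    }

    public static void main(String[] args) {
        System.out.println(strip("<p>Content</p><p>&nbsp;</p><p>&nbsp;</p>"));
        // prints: <p>Content</p>
    }
}
```

A body without trailing empty paragraphs passes through unchanged, which is exactly what the LIKE clause guarantees on the SQL side.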

This was tested with MySQL 5.5.35, lib_mysqludf_preg 1.2-rc2 and Confluence 5.4.2.

I don’t need to mention that it is – of course – highly recommended that you backup your database before you execute this query on your server, right?

Custom polymorphic type handling with Jackson

Adding support for polymorphic types in Jackson is easy and well-documented here. But what if neither the class-based nor the property-based (@JsonSubTypes) default type ID resolvers fit your use case?

Enter custom type ID resolvers! In my case a server returned an identifier for a Command that I wanted to match one-to-one on a specific “Sub-Command” class without having to configure each of these identifiers in a @JsonSubType configuration. Furthermore each of these sub-commands should live in the .command package beneath the base command class. So here is what I came up with:

@JsonTypeInfo(use = JsonTypeInfo.Id.CUSTOM,
              include = JsonTypeInfo.As.PROPERTY,
              property = "command")
@JsonTypeIdResolver(CommandTypeIdResolver.class)
public abstract class Command {
    // common properties here
}

The important part, beside the additional @JsonTypeIdResolver annotation, is the use argument, which is set to JsonTypeInfo.Id.CUSTOM. Normally you’d use JsonTypeInfo.Id.CLASS or JsonTypeInfo.Id.NAME. Let’s see how the CommandTypeIdResolver is implemented:

public class CommandTypeIdResolver implements TypeIdResolver {
    private static final String COMMAND_PACKAGE =
            Command.class.getPackage().getName() + ".command";
    private JavaType mBaseType;

    public void init(JavaType baseType) {
        mBaseType = baseType;
    }

    public Id getMechanism() {
        return Id.CUSTOM;
    }

    public String idFromValue(Object obj) {
        return idFromValueAndType(obj, obj.getClass());
    }

    public String idFromBaseType() {
        return idFromValueAndType(null, mBaseType.getRawClass());
    }

    public String idFromValueAndType(Object obj, Class<?> clazz) {
        String name = clazz.getName();
        if (name.startsWith(COMMAND_PACKAGE)) {
            return name.substring(COMMAND_PACKAGE.length() + 1);
        }
        throw new IllegalStateException(
                "class " + clazz + " is not in the package " + COMMAND_PACKAGE);
    }

    public JavaType typeFromId(String type) {
        String clazzName = COMMAND_PACKAGE + "." + type;
        Class<?> clazz;
        try {
            clazz = ClassUtil.findClass(clazzName);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException("cannot find class '" + clazzName + "'");
        }
        return TypeFactory.defaultInstance().constructSpecializedType(mBaseType, clazz);
    }
}

The two most important methods here are idFromValueAndType and typeFromId. For the first I get the class name of the class to serialize and check whether it is in the right package (the .command package beneath the package where the Command class resides). If this is the case, I strip off the package path and return that to the serializer. For the latter method I go the other way around: I try to load the class with Jackson’s ClassUtil by using the class name I got from the deserializer and prepending the expected package name in front of it. And that’s already it!
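Both mappings boil down to plain string manipulation; a minimal standalone sketch of the idea (the package and class names below are hypothetical, not from the actual project):

```java
public class TypeIdDemo {
    // Hypothetical base package; the real resolver derives this
    // from Command.class.getPackage() at runtime.
    static final String COMMAND_PACKAGE = "com.example.app.command";

    // serialize: fully qualified class name -> short type ID
    static String idFor(String className) {
        if (className.startsWith(COMMAND_PACKAGE)) {
            // +1 also skips the dot separating package and class name
            return className.substring(COMMAND_PACKAGE.length() + 1);
        }
        throw new IllegalStateException(className + " is not in " + COMMAND_PACKAGE);
    }

    // deserialize: short type ID -> fully qualified class name
    static String classNameFor(String id) {
        return COMMAND_PACKAGE + "." + id;
    }

    public static void main(String[] args) {
        System.out.println(idFor("com.example.app.command.LoginCommand"));
        // prints: LoginCommand
        System.out.println(classNameFor("LoginCommand"));
        // prints: com.example.app.command.LoginCommand
    }
}
```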

Thanks to the nice folks at the Jackson User Mailing List for pointing me into the right direction!

Runtime-replace implementations with Roboguice in functional tests

At work we’re relying heavily on unit and functional testing for our current Android application. For unit testing we’ve set up a pure Java-based project that runs on Robolectric to provide a functional Android environment, and we’ve also added Mockito to the mix to ease some code paths with spied-on or completely mocked dependencies. Moritz Post wrote a comprehensive article on how to set this up – if you have some time, it’s really worth a read.

Now our functional tests are based on what the Android SDK offers us – just that we’re using Robotium as a nice wrapper around the raw instrumentation API – and until recently I thought it would not be possible to mess around much with an unaltered, but instrumented, application at runtime. But while I was reading through the Android Testing Fundamentals I stumbled upon one interesting piece:

With Android instrumentation […] you can invoke callback methods in your test code. […] Also, instrumentation can load both a test package and the application under test into the same process. Since the application components and their tests are in the same process, the tests can invoke methods in the components, and modify and examine fields in the components.

Hrm… couldn’t that be used to just mock out the implementation of this one REST service our application uses? Yes, it could! Given the following implementation

public class RequestManager {
    public <I, O> O run(Request<I, O> request) throws Exception {
        // ... executes the REST call and returns the parsed response ...
    }
}

(where the Request object basically encapsulates the needed request data and input / output type information)

it was easy to create a custom implementation that would return predefined answers:

public class MockedRequestManager extends RequestManager {
    private Map<Request, Object> responses = new HashMap<Request, Object>();

    public <I, O> O run(Request<I, O> request) throws Exception {
        Object response = findResponseFor(request);
        if (response instanceof Exception) {
            throw (Exception) response;
        }
        return (O) response;
    }

    private Object findResponseFor(Request request) {
        return responses.get(request);
    }

    public void addResponse(Request request, Object response) {
        responses.put(request, response);
    }
}

Now that this was in place, the only missing piece was to inject this implementation instead of the original one. For that I created a new base test class and overrode the setUp() and tearDown() methods like this:

public class MockedRequestTestBase extends ActivityInstrumentationTestCase2<FooActivity> {
    protected Solo solo;
    protected MockedRequestManager mockedRequestManager = new MockedRequestManager();

    private class MockedRequestManagerModule extends AbstractModule {
        protected void configure() {
            bind(RequestManager.class).toInstance(mockedRequestManager);
        }
    }

    public MockedRequestTestBase() {
        super(FooActivity.class); // FooActivity stands in for your entry activity
    }

    protected void setUp() throws Exception {
        super.setUp();
        Application app = (Application) getInstrumentation()
                .getTargetContext().getApplicationContext();
        RoboGuice.setBaseApplicationInjector(app, RoboGuice.DEFAULT_STAGE,
                Modules.override(RoboGuice.newDefaultRoboModule(app))
                        .with(new MockedRequestManagerModule()));
        solo = new Solo(getInstrumentation(), getActivity());
    }

    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        RoboGuice.Util.reset();
        super.tearDown();
    }
}

It is important to note here that the module overriding has to happen before getActivity() is called, because that starts up the application and will initialize the default implementations as they’re needed / lazily loaded by RoboGuice. Since we explicitly bind a specific implementation of the RequestManager class beforehand, the application code will skip the initialization of the actual implementation and use our mocked version.

Now it’s time to actually write a test:

public class TestFileNotFoundException extends MockedRequestTestBase {
    public void testFileNotFoundMessage() throws Exception {
        Request request = new FooRequest();
        mockedRequestManager.addResponse(request,
                new FileNotFoundException("The resource /foo/1 was not found"));
        solo.clickOnText("request first foo");
        assertTrue(solo.waitForText("The resource /foo/1 was not found"));
    }
}

That’s it. Now one could probably also add Mockito to the mix by injecting a spied-on / completely mocked version of the original RequestManager, but I’ll leave that as an exercise for the reader…

Have fun!

Debugging with MacPorts PHP binaries and Eclipse PDT 3.0

You know those times when things should really go fast and easy, but you fall from one nightmare into the next? Tonight was such a night… but let’s start from the beginning.

To debug PHP you usually install the excellent XDebug and so did I with the port command sudo port install php5-xdebug. After that php -v greeted me friendly on the command line already:

PHP 5.3.8 (cli) (built: Sep 22 2011 11:42:56) 
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
  with Xdebug v2.1.1, Copyright (c) 2002-2011, by Derick Rethans

Eclipse Indigo and Eclipse PDT 3 were already installed, so I thought it would be easy to set up the XDebug debugging option in Eclipse. Under “PHP > PHP Executables” I therefore selected /opt/local/bin/php as my CLI version and selected “xdebug” as the debugging option.

A first test however showed me that the execution of a test script did not load any modules into the PHP interpreter beforehand (for reasons I could only guess at, because the Eclipse error log kept quiet). Comparing the output of phpinfo() from my test script with php -i from the command line showed me the difference: the PHP option “Scan this dir for additional .ini files” was empty when PHP ran inside Eclipse, but was properly set when PHP ran from the command line (or in an Apache context).

Asking aunt Google brought up this issue, which shed some light into my darkness: the directory where additional modules reside is configured as a compile-time option in PHP and defaults to /opt/local/var/db/php5 on MacPorts, and exactly this can be overridden either by calling PHP with the -n -c options or by setting the PHP_INI_SCAN_DIR environment variable.

Having no access to the actual PHP call from inside Eclipse, I tried to go down the environment route, but that did not lead to any success. While the variable was recognized as it should be on the normal command line (e.g. PHP_INI_SCAN_DIR= php -i disables the loading of additional modules), it was not recognized at all in the environment variables section of Eclipse’s run configuration dialog. I tried a little harder and configured the variable in ~/.MacOSX/environment.plist, logged out and in again, restarted Eclipse obviously, but had no luck either.

The only viable solution I came up with was to place all the individual extension= and zend_extension= entries directly into my php.ini and disable the individual module .ini files altogether. At least I can now run and debug properly, but this solution is of course far from ideal: as soon as I add a new PHP module or want to remove an existing one, I have to remember to edit php.ini myself.

By the way, I also tried to use Zend’s debugger (and PDT plugin) as an alternative. While somebody else has already ranted about the Zend guys being unable to provide the Zend Debugger for PHP 5.3 as a standalone download (which hasn’t changed to date), PHP 5.2 debugging worked nicely with the old Zend PDT plugin.

Of course, none of my needed PHP modules were loaded and I really needed PHP 5.3 support, so I had to follow the same route the other guy did and downloaded all of the ZendServer glory (a 137 MB download, yay) just to get the right binary. After extracting the .pax.gz archive from the installer package I quickly found it underneath usr/local/zend/lib/debugger/php-5.3.x/, copied it to my extension directory and added an .ini file to load that one instead, just to find out shortly afterwards that the Zend binary was i386 only while MacPorts of course compiled everything nicely as x86_64 – PHP was unable to load such a module.

Well, the moral of the story is: go for Xdebug and don’t lose track. And let us all hope that Eclipse PDT is developed further, so the remaining glitches like the one above get fixed.

Exception chaining in Java

If you catch and rethrow exceptions in Java, you probably already know about exception chaining: you simply pass the exception you “wrap” as the second argument to your exception, like this

try { ... }
catch (Exception e) {
  throw new CustomException("something went wrong", e);
}
and if you look at the stack trace of the newly thrown exception, the original one is listed as “Caused by:”. Now today I had the rather usual use case of cleaning up after a failing action, where the cleanup itself could throw as well. So I had two causing exceptions, and I wanted to preserve both of them, including their complete cause chains, in a new exception. Consider the following example:

try { ... }
catch (Exception e1) {
  try { ... }
  catch (Exception e2) {
     // how to transport e1 and e2 in a new exception here?!
  }
  throw e1;
}

My idea here was to somehow tack the exception chain of e1 onto the exception chain of e2, but Java offered no solution for this, so I rolled my own:

public static class ChainedException extends Exception {
  public ChainedException(String msg, Throwable cause) {
    super(msg, cause);
  }
  public void appendRootCause(Throwable cause) {
    Throwable parent = this;
    while (parent.getCause() != null) {
      parent = parent.getCause();
    }
    parent.initCause(cause);
  }
}

Now I only had to base the exceptions I actually want to chain on ChainedException and was able to do this (in fact I based all of them on this class):

try { ... }
catch (ChainedException e1) {
  try { ... }
  catch (ChainedException e2) {
    e2.appendRootCause(e1);
    throw new ChainedException("cleanup failed", e2);
  }
  throw e1;
}

Try it out yourself – you’ll see the trace of e1 at the bottom of the cause chain of e2. Quite nice, eh?
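For completeness, here is a tiny self-contained sketch of what happens under the hood, using nothing but java.lang (the class and messages are made up for illustration); initCause() works here because the root of the chain was constructed without a cause:

```java
public class ChainDemo {
    // Walk to the end of the cause chain and tack `cause` onto it,
    // just like appendRootCause above.
    static void appendRootCause(Throwable t, Throwable cause) {
        Throwable parent = t;
        while (parent.getCause() != null) {
            parent = parent.getCause();
        }
        parent.initCause(cause); // only legal if no cause was set yet
    }

    public static void main(String[] args) {
        Exception e1 = new Exception("original failure", new Exception("root cause"));
        Exception e2 = new Exception("cleanup failed");
        appendRootCause(e2, e1);
        // e2's chain now reads: cleanup failed -> original failure -> root cause
        for (Throwable t = e2; t != null; t = t.getCause()) {
            System.out.println(t.getMessage());
        }
    }
}
```

Note the caveat in the comment: initCause() throws an IllegalStateException if a cause has already been set, which is why appendRootCause first walks to the end of the chain.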

guitone license change

Guitone, my little GUI frontend for the monotone SCM, is currently licensed under the terms of the GPLv3+ and was previously – before version 0.8 – licensed under the GPLv2+. Newer development however forces me to re-license it again, this time under the slightly less restrictive “copyleft” terms of the LGPLv3.

The reason for this is my usage of the Graphviz library to render monotone revision trees. Graphviz is released under the EPL (older versions even under the CPL), and this license is strictly incompatible with any pure GPL version. I contacted the Graphviz guys as well as the legal affairs support of the FSF and checked the options I had, and the result is that I have to adapt, one way or another. I could either use the command-line interface of Graphviz or choose another license for guitone. I didn’t want to go the first route, simply because I already have a working implementation and because I didn’t want to make slow calls into external binaries, so a new license had to be chosen on my side.

So, starting with the upcoming version 1.0, guitone is licensed under the LGPLv3. I contacted the previous contributors of guitone and all parties are OK with the license change, so it will show up in the current development head shortly.

Thanks for your interest.

Access the Android menu in VirtualBox on a Mac host

If you’re desperately trying to get the Menu button working in an Android x86 installation under VirtualBox on a Mac OS X host – whose keyboard of course doesn’t have the “context” / “menu” key Windows keyboards have on the right – you might find the touch-only-device mode in Android x86 handy:

  1. Click on the clock in the status bar to enable / disable this mode altogether
  2. A swipe from the left to the right emulates the Menu button function
  3. A swipe from right to left emulates the Back button function
  4. Simply clicking on the status bar brings you to the Home screen