Embed Confluence pages in Jira issues

[Updated 2014-05-15: adapt the iframe’s width as well and add an “Edit in Confluence” link]

There is a question in Atlassian’s Q&A tracker that hasn’t been sufficiently answered yet and that I stumbled upon today as well: embedding whole Confluence pages in Jira description fields (and any other rich-text-enabled fields).

The reason why one would want to do something like this is to avoid context-switching between the two tools. Imagine you write your specification in a wiki but want to use an issue tracker to manage your workload. And while Atlassian has a solution to link Jira issues to Confluence pages, there is no macro or other function to actually embed the content.

What I’ll show you today is basically a hack. We’ll dynamically generate a small HTML snippet in Confluence that includes an <iframe> element and let the user copy it into a Jira description field, where it loads the page’s source view. This hack was tested under the following (software) conditions:

  • Jira 6.1.7
  • Confluence 5.4.2, with the documentation theme globally applied
  • both set up under the same domain, i.e. yourserver.com/jira and yourserver.com/confluence

Now to the configuration. On Jira’s side only one thing is needed: the {html} macro must be enabled. By default this is disabled for security reasons (you can tell), and you should really only enable it if your Jira instance is not publicly available. Anyway, follow these steps:

  1. In Jira, go to Manage Add-Ons
  2. Then change the “Filter visible addons” drop-down to show “All Add-Ons”
  3. Then search for the word “wiki” and expand the Wiki Renderer Macros Plugin
  4. Then click on the link on the right which says “7 of 8 modules enabled”
  5. Finally, click “enable” next to the last module which says “html”

Now, on Confluence’s side we want to generate the HTML snippet for a specific page, and we need a little UI for that. Usually, if you want to change the contents of a web page after it is rendered in the browser, you use some browser-specific mechanism, i.e. you write a Chrome extension or a Greasemonkey script for Firefox. But Confluence offers a better, cross-browser way to inject custom code – custom HTML!

  1. In Confluence, go to Custom HTML
  2. Click on “Edit” and paste the following code into the “At end of the HEAD” textbox
  3. Hit “Save”

Now to the code:

  var meta = AJS.$('meta[name=ajs-page-id]');
  if (meta.size() > 0) {
    var list = AJS.$('<li class="ajs-button normal" />')
      .appendTo('#navigation ul');
    AJS.$('<a rel="nofollow" accesskey="q" title="Copy embed code (q)" />')
      .text('Embed code')
      .on('click', function() {
        window.prompt('Embed code: Ctrl+C, Enter', '{html}\u003Cscript type="text/javascript">function aR(fr){$f=AJS.$(fr);$f.height($f.contents().height());$p=$f.parents("*[data-field-id=description]");if($p.length>0){$f.width($p.width())}else{$p=$f.parents("div.mod-content");if($p.length>0){$f.width($p.width()-30)}}}\u003C/script>\u003Ca href="/confluence/pages/editpage.action?pageId=' + meta.attr('content') + '" style="float: right" target="confluence">\u003Csmall>Edit in Confluence \u003C/small>\u003C/a>\u003Ciframe src="/confluence/plugins/viewsource/viewpagesrc.action?pageId=' + meta.attr('content') + '" style="overflow:hidden;border:0" onload="aR(this);">\u003C/iframe>{html}');
      })
      .appendTo(list);
  }

So what is this? Basically, AJS is Atlassian’s entry point for its AUI library (Atlassian User Interface), which contains a full version of jQuery, accessible via AJS.$. Once AJS is initialized, we query the page ID of the currently viewed page, which is embedded in the page as a meta tag with the name “ajs-page-id”.

Next, a new button is added to the main navigation that opens a window prompt containing the HTML code to be copied (\u003C is an escaped <; this was needed to keep the custom HTML page valid while still showing proper HTML tags in the prompt).
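The escaping trick is easy to verify: \u003C is just the Unicode escape for <, so once the string literal is evaluated, a real angle bracket comes out, while the raw page source never contains a literal closing-tag sequence. A quick Python sketch (for illustration only, not part of the Confluence code):

```python
# "\u003C" decodes to "<" in the string literal, so the evaluated
# string is identical to one written with a plain angle bracket.
escaped = "\u003Cscript type=\"text/javascript\">"
plain = "<script type=\"text/javascript\">"

assert escaped == plain  # identical after decoding
print(escaped)
```

The same decoding happens in JavaScript string literals, which is why the prompt shows proper HTML tags even though the inline script itself stays parseable.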

Let’s have a closer look at the dynamic code part that is later executed in Jira, this time broken down into lines for better understanding:

<script type="text/javascript">
function aR(fr) {
  $f = AJS.$(fr);
  $f.height($f.contents().height());
  $p = $f.parents("*[data-field-id=description]");
  if ($p.length > 0) {
    $f.width($p.width());
  } else {
    $p = $f.parents("div.mod-content");
    if ($p.length > 0) {
      $f.width($p.width() - 30);
    }
  }
}
</script>
<a href="/confluence/pages/editpage.action?pageId=' + meta.attr('content') + '"
   style="float: right" target="confluence">
  <small>Edit in Confluence</small>
</a>
<iframe style="overflow: hidden; border: 0"
        src="/confluence/plugins/viewsource/viewpagesrc.action?pageId=' + meta.attr('content') + '"
        onload="aR(this);">
</iframe>

You can see some JavaScript again, a link to edit the page externally in Confluence, and an iframe definition. The frame loads Confluence’s page source view (usually accessible from Tools > Show page source) and is dynamically set to the width of the outer container, i.e. either the detailed Jira issue view’s div.mod-content container or Jira Agile’s description container (targetable with dd[data-field-id=description]) in a Scrum-based board. For the former we have to subtract a few pixels to prevent the edit bar on the right of the description from being pushed outside of the parent container.

Now, to avoid having to scroll the contents of the iframe separately from the browser’s viewport, we also set the height of the iframe, in this case dynamically to the height of the iframe’s contents as soon as they are loaded. Note that a pure CSS solution, like height: 100%, would not work here, because we don’t control the parent HTML containers in which the iframe is actually rendered, and giving the iframe a fixed height would be nonsense as well, since you don’t know the page length in advance.

And that’s it, now you can embed Confluence pages in Jira issues with only a few clicks. Have fun!

Batch-remove empty lines at the end of many Confluence pages

In a customer project we decided to collaboratively write a larger body of documentation in Atlassian’s Confluence and export it with Scroll Office, a third-party Confluence plugin, into Word.

That worked fine so far, but soon we figured that we had been kind of sloppy with empty lines at the end of each page, which were obviously carried over into the final document. So instead of going over each and every page and removing the empty lines there, I thought it might be easier to do this directly on the database, in our case MySQL.

The query was quickly developed, but then I realized that MySQL has no PREG_REPLACE function built in, so I first needed to install a UDF, a user-defined function. Luckily, this UDF worked out of the box and the query could be finalized:

UPDATE BODYCONTENT
SET BODY=PREG_REPLACE("/(<p>&nbsp;<.p>)+$/", "", BODY)
WHERE BODY LIKE "%<p>&nbsp;</p>";

This query updates all current pages (no old versions) from all spaces that end with at least one empty line <p>&nbsp;</p> – this is Confluence’s internal markup for it – and removes all of these empty lines from all matched pages.
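The replacement logic can be dry-run outside MySQL before touching the database. Here is a small Python sketch of the same regex (the pattern is the one from the query; the sample body is made up):

```python
import re

# One or more trailing empty paragraphs - Confluence's "<p>&nbsp;</p>" -
# anchored at the end of the body, mirroring the PREG_REPLACE pattern.
# The "." in "<.p>" matches the "/" of the closing tag.
pattern = re.compile(r"(<p>&nbsp;<.p>)+$")

body = "<p>Some actual content</p><p>&nbsp;</p><p>&nbsp;</p>"
cleaned = pattern.sub("", body)
print(cleaned)  # -> <p>Some actual content</p>
```

Note that the $ anchor ensures only trailing empty paragraphs are removed; intentional empty lines in the middle of a page are untouched.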

This was tested with MySQL 5.5.35, lib_mysqludf_preg 1.2-rc2 and Confluence 5.4.2.

I don’t need to mention that it is – of course – highly recommended that you backup your database before you execute this query on your server, right?

jazzlib – an alternative for reading ZIP files in Java

Java has had ZIP-reading capabilities for a long time, naturally, because jar files are simply compressed ZIP files with some metadata. The needed classes reside in the java.util.zip package and are called ZipInputStream and ZipEntry.

Recently, however, ZipInputStream gave me a huge headache. My use case was as simple as

  • read the zip entries of a list of zip files (each varying in size, but usually around 20MB)
  • skip to the zip entry that has a certain name (a single text file with only two bytes of contents)
  • read the contents of this zip entry and close the zip
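The steps above are sketched here in Python for brevity (the file names and contents are made up); the Java version via ZipInputStream walks entry after entry in the same way:

```python
import io
import zipfile

def read_entry(zip_bytes, wanted_name):
    """Open a ZIP archive, skip to the entry with the given name,
    read its contents and close the archive again."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        # Python's ZipFile reads the central directory, so the lookup
        # is cheap; Java's ZipInputStream instead inflates entries
        # sequentially, which is where the profiled time went.
        return zf.read(wanted_name)

# Build a small test archive in memory (stand-in for the ~20MB files).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("version.txt", "42")
    zf.writestr("other.dat", "x" * 1000)

print(read_entry(buf.getvalue(), "version.txt"))  # -> b'42'
```

Nothing here should be expensive, which made the 20-second runtime described below all the more surprising.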

Doing this for about 25 files took my Pentium D (2 GHz) with 3 GB of RAM roughly 20 seconds. Wow, 20 seconds, really? I created a test case and profiled the code in question separately with YourKit (which is a really great tool, by the way!):

It got stuck quite a bit in java.util.zip.Inflater.inflateBytes – but that turned out to be native code, so I couldn’t profile any further.

So I went on and searched for an alternative to java.util.zip – and luckily I found one in jazzlib, which provides a pure Java implementation of ZIP compression and decompression. The library is GPL-licensed (with a small exception clause to prevent the pervasiveness of the GPL) and comes in two versions: one that duplicates the library classes underneath java.util.zip (as a drop-in replacement for JDK versions where this is missing) and one that lives in its own namespace, net.sf.jazzlib.

I went for the second version, restarted my test, and this time it only took about 7 seconds. At first I thought there must be some downside to this approach, so I checked the timings for a complete decompression of the archive, but they were on par with those of java.util.zip (roughly 5 seconds for a single 20MB file).

I haven’t tested compression speed, because it doesn’t matter much for my use case, but the decompression speed alone is astonishing. I wonder why nobody else has stumbled upon these performance problems before…

Debugging Java WebStart applications with Eclipse

The last couple of days I went through a little nightmare: I needed to debug a Java application which showed some weird behaviour only when loaded via WebStart, but not when executed from within Eclipse.

Multiple internet resources told me to use the JAVAWS_VM_ARGS environment variable to tell the Java VM to either create a debug socket itself or connect to a socket opened by Eclipse, then simply set a breakpoint and wait. But nothing worked for me, until I found two very important issues nobody had written about so far:

  • Under Java 6, JAVAWS_VM_ARGS seems to be completely ignored by javaws, on Linux as well as on Windows. The way to go was to pass separate -J options in the call to javaws – and given a running Eclipse debug server, you finally saw the threads of your running application.
  • Wait, did these threads really belong to your running application? This was the second issue. Apparently javaws forks away as soon as it is started and runs your application in a separate VM, which of course does not get the original debug options passed along. Again, some internet resources said it would be enough to download the jnlp file and start it locally, but that did not work out either. The magic option to pass to javaws here is -Xnofork – and finally, on the next run, I saw my application’s threads in Eclipse and my breakpoints magically worked as expected.

Here is the complete command line example again (given a running Eclipse debug server on port 8000):

javaws -Xnofork -J-Xdebug -J-Xnoagent \
    -J-Xrunjdwp:transport=dt_socket,server=n,address=8000 \
    <url-to-your.jnlp>

Hope that helps someone 🙂


Certified ScrumMaster

Today I completed my ScrumMaster certification. After the two days with Joseph Pelrine in early November, they finally sent me a login for their website scrumalliance.com, offered a short web-based test (which you couldn’t really fail) and tada – now I can call myself “Certified ScrumMaster”.

So everything seemed to look fine, but wait, why does this certificate have an expiry date tacked on it? Uh, yes, erm, I’m supposed to renew it every year beginning in 2012, for 50 bucks. It’s not like your driver’s license – once you’ve learned to drive your car you cannot unlearn it, right? All the knowledge about Scrum will of course be totally forgotten and buried in my head once the magic year 2012 is reached… now that’s business!

But hey, I got something in return. I’m listed with the other 112122 fellows on their website as a CSM and am “allowed” to use the term “CSM”, the “logo” (which I had to extract from the certificate itself via Photoshop, because they don’t offer a download) and the nice little Scrum community they have built around the website.

Thank God that Joseph was entertaining and educational enough during the two days in November that the (company) money was worth the certification. The certificate itself and the “services” around it are certainly not worth it.

The fuzzy cloud

Until now, “the cloud” (the computing paradigm) was for me a more or less fuzzily defined hype whose advantages I failed to really “see”: I delegate infrastructure resources to some external “hoster” and pay for that, but I instantly got this nagging fear that I would lose full control over my resources and data if I did so.

While I am (unfortunately) not in the right business to experience and work with “the cloud” myself, I’m still very interested in learning about the possibilities and changes cloud computing introduces, especially from the development point of view.

Today I stumbled upon Adrian Cockcroft’s outstanding presentation in which he outlines how Netflix, the company he works for, introduced cloud-based services step by step as a replacement for traditional data center services and monolithic architectures. After reading through the roughly one hundred slides, I am much more confident that this is the right direction for the future, also (but not solely) because it forces us developers to write much more domain-independent, easily serviceable, stripped-down code.

So if you have 30 minutes for a very good introduction to practical cloud computing, I urge you to go ahead and read the slides.

Symfony development

Last week the second incarnation of Symfony Live came to an end and I just found the time to check out a couple of slides shared from the event.

Definitely interesting stuff going on there, especially the preview release of Symfony 2.0, whose code has been available on GitHub for a couple of weeks and which makes major changes to the “good old way” one used to write symfony applications (actions are now “controllers”, extensions are “bundles”, and well, a dozen other things have changed as well, of course… you can read everything in detail here).

Also, Doctrine 2.0 seems to be the first PHP ORM which decouples the modelling approach from the actual database abstraction layer, skips the need for base classes and enables model definition via annotations. They also seem to be fighting the overly complex magic of Doctrine 1.x (one of my top complaints about Doctrine in comparison to, e.g., Propel) – maybe I’ll revisit Doctrine when the next version becomes stable.

The guys at Sensio Labs really have a fast development pace, and I increasingly get the impression that the Symfony ecosystem is the major competitor to the Zend Framework. Community-wise, I think Symfony is already much bigger than any other PHP framework.

Never trust doctrine:data-dump…

…and especially not if you get the impression that the dump will afterwards be readable by symfony’s doctrine:data-load command.

It was a costly lesson today when I tried to reimport a dump of a couple of Sympal tables. One of them, the one which models the menu items, has a nested set behaviour, and apparently this cannot be restored properly by Doctrine:

Unknown record property / related component "children" on "sfSympalMenuItem"

Apparently this particular issue has popped up a couple of times in the past for other people as well (Google for it), and while the help text of doctrine:data-dump (still, as of Doctrine 1.2) blatantly states

The doctrine:data-dump task dumps database data:

 ./symfony doctrine:data-dump

The task dumps the database data in data/fixtures/%target%.

The dump file is in the YML format and can be reimported
by using the doctrine:data-load task.

 ./symfony doctrine:data-load

(with the emphasis on “can be reimported”)

the author of Doctrine, Jonathan Wage, told me today on Sympal’s IRC (shortened):

<jonwage> we don’t want people to think you can dump and then restore
<jonwage> that is not what the data fixtures are for
<jonwage> b/c dumping and then loading will never work
<jonwage> an ORM modifies data on the way in and on the way out
<me> I mean the least thing doctrine could do there is that if it detects the nested set behaviour it should error out clearly on dump
<jonwage> so you can’t dump the data through an ORM and then try and reload it
<jonwage> i.e. hashed passwords
<me> if dumping is “never” going to work – why do you support dumping into yaml at all?!
<jonwage> if we do that then we would have to throw errors in sooooooo many other cases too
<jonwage> because it is at least a little bit of a convenience
<me> its like a half-baked feature then
<jonwage> we dump the raw data
<jonwage> and you can tweak it
<jonwage> thats my point though, it will ALWAYS be a half baked feature thats why we document it that way
<jonwage> it can NEVER work 100% the way you want it to
<jonwage> so if we fix that one thing, a million other things will be reported that we cannot fix
<jonwage> bc an ORM is not a backup and restore tool
<jonwage> it is impossible

Now I know that as well. My only problem was that I kept wondering “what is wrong with my fixtures” the whole time and never dared to ask “what is wrong with doctrine”…