
Debugging with Gradle

If you want to debug something that is built / run with Gradle, you have a few options.

In all cases, your IDE needs to be set up to listen to a remote JDWP (Java Debug Wire Protocol) connection. In IntelliJ this looks like this: Go to “Edit configurations”, hit the “+” button on the top left corner, select “Remote” and give your run configuration a name. Leave the other configuration options as-is. (As Gradle will always start the debug server for us, we leave “Attach to remote JVM” selected.) Finally, hit “OK”.

Now to the actual debugging.

Debugging JUnit tests

More often than not you cannot debug a unit test properly inside the IDE. Even if you use the Gradle builder in IntelliJ, for example, there are times when the IDE simply won’t get the classpath right and your tests fail with obscure errors.

Now with Gradle we don’t need to start the tests from within the IDE to debug them, luckily! All we need is this:

 ./gradlew app:testDebug --tests MyAwesomeTest --debug-jvm

This is an example from an Android project, but you can think of any other test flavor here. With --tests we define the test we’d like to run (to avoid having to wait for all tests of the module to be executed) and --debug-jvm lets Gradle wait for our debugger to attach, before the test is executed.

Now you can put breakpoints into your code and start the pre-configured “Gradle” run configuration in “Debug” mode. As soon as you see “… connected” in the IDE, the command line execution will continue, execute your test and eventually stop on your breakpoints.

Debugging Gradle build scripts

Debugging Gradle build scripts themselves is possible by starting any Gradle command with an additional, slightly different argument:

./gradlew myAwesomeTask -Dorg.gradle.debug=true

Here again, Gradle will start up and wait for your IDE to connect to the debug server, then continue executing your task and eventually stop on your breakpoints.

Not so fast, my breakpoints are not hit!

Well, it wouldn’t be Gradle if it were that easy, right? Right!

The issue is that in a “properly” configured Gradle project there are probably multiple things set up to speed up your build. First and foremost, a running Gradle daemon in the background might be reused, and you might fail to attach to that daemon again after you have disconnected from it once. So the best option here is to disable the use of a global daemon for your run and just spawn a dedicated daemon for the command you want to debug:

./gradlew --no-daemon ...

(There is also an org.gradle.daemon.debug option to debug the daemon itself, but I never found a useful way of working with this. Feedback on this one would be welcome :))

Secondly, you might have a build cache set up (either locally or remotely). If you run tasks that ran through successfully once, Gradle will just reuse the cached outputs and skip task execution completely. (You’ll usually notice that when the Gradle output says something like “x tasks executed, y tasks cached, …”.) So, disable caching temporarily as well:

./gradlew --no-daemon --no-build-cache ...

Lastly, specifically if you execute tests, you should remove the previous test results, so your test is actually executed again:

rm -rf app/build/reports && \
  ./gradlew --no-daemon --no-build-cache ...

Now your breakpoints should be hit for real. Happy debugging!

Magnet – an alternative to Dagger

I meant to write about this for a very long time, but never actually got around to it, mostly because of time constraints. But here we are, let’s go.

What is Magnet?

Magnet is a Java library that allows you to apply dependency injection (DI), more specifically Dependency Inversion, in your Java / Kotlin application.

Why another DI library?

Many libraries have done this job in the past, and many still do. In the Android world, where I mostly work, it all started out with Roboguice (an Android-friendly version of Google’s Guice), then people migrated to Square’s Dagger, and later Google picked it up once again and created Dagger2, which is still in wide use in countless applications.

I have my own share of experience with Dagger2 from the past; the initial learning curve was steep, but once you were into it enough it worked out pretty well, except for a few nuisances:

  • Complexity – The amount of generated code is hard to grasp, and so is the reason why this code generation sometimes fails because of an error on your side. While literally all code is generated for you, navigating between these generated parts proved to be very hard. On the other hand, understanding some of the errors the Dagger2 compiler spits out when you miss an annotation somewhere is, to put it mildly, not easy either.
  • Boilerplate – Dagger2 differentiates between Modules, Components, Subcomponents, Component relations, Providers, Custom Factories and what not, and comes with a variety of different annotations that control the behavior of all these things. This not only adds complexity, but because of the nature of the library you have to do a lot of footwork to get your first injection available somewhere. Have you ever asked yourself, for example, whether there is a real need to have both components and modules in Dagger2?

Now this criticism is not new at all, and with the advent of Kotlin on Android other projects emerged that try to provide an alternative to Dagger2, most prominently Kodein and Koin. However, when I played around with those it felt like they missed something:

  • In Kodein I disliked that I had to pass a kind of god object around to get my dependencies in place. Their solution felt more like a service locator implementation than a DI solution, as I was unable to have clean constructors without DI-specific parameters like I was used to from Dagger2 and others.
  • In Koin I disliked that I had to basically wire all my dependencies together by hand; clearly this is the task that the DI library should do for me!

Looking for alternatives I stumbled upon Magnet. And I immediately fell in love with it.

Magnet Quick-Start

To get up to speed, let’s compare Magnet with Dagger2 by looking at the specific terms and concepts both libraries use.

Dagger2 → Magnet

  • Component / Subcomponent / Module → Scope
  • @Inject → @Instance on class level
  • @Provides → @Instance on class level or provide method
  • @Binds → @Instance on class level or provide method
  • @Component + @Singleton → @Instance with scoping = TOPMOST
  • @Named("...") → @Instance / bind() with classifier = "..."
  • dagger.Lazy<Foo> → Kotlin Lazy<Foo>
  • dagger.Provider<Foo> → @Instance with scoping = UNSCOPED
  • Dagger Android → custom implementation needed, like this

Don’t be afraid, we’ll discuss everything above in detail.

Initial Setup

Magnet has a compiler and a runtime that you need to add as dependencies to each application module you’d like to use Magnet with:

dependencies {
  implementation "de.halfbit:magnet-kotlin:3.3-rc7"
  kapt "de.halfbit:magnet-processor:3.3-rc7"
}

The magnet-kotlin artifact pulls in the main magnet dependency transitively and adds a couple of useful Kotlin extension functions and helpers on top of it. The annotation processor magnet-processor generates the glue code to be able to construct your object tree later on. Besides that there are other artifacts available, which we’ll come back to later on.

Now that the dependencies are in place, Magnet needs an assembly point in your application to work. This tells Magnet where it should create its index of injectable things in your application, basically a flat list of all features included in the app inside of its gradle dependencies section. The assembly point can be written as an interface that you annotate with Magnet’s @Registry annotation:

import magnet.Registry   

@Registry
interface Assembly

The main application module is the module that usually contains this marker interface, but it could be as well the main / entry module of your library.

About scopes

Scopes in Magnet act similarly to Components in Dagger2. They can be seen as “bags of dependencies”, holding references to objects that have previously been provisioned. Scopes can contain child scopes, which in turn can again contain child scopes themselves.

There is no limit to how deep you can nest your scopes; in Android application development, however, you should usually have at least one scope per screen in addition to the root scope, which we’ll discuss in a second.

Scopes are very easy to create and also very cheap, so it can also be useful to create additional scopes for certain time-limited tasks, like a separate scope for a background service, or even a scope for a specific process-intensive piece of functionality that requires the instantiation of several classes which are not needed outside of this specific task. This way, memory used by these classes can quickly be reclaimed by letting the particular scope and all its referenced class instances become subject to garbage collection shortly after the task has finished.
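
As a rough sketch of that idea (ImportWorker is a made-up class, and createSubscope() / getSingle() are the calls we’ll look at in more detail below), a one-off task could get its own scope like this:

fun runImport(appScope: Scope) {
    val importScope = appScope.createSubscope()
    val worker: ImportWorker = importScope.getSingle()
    worker.run()
    // once importScope goes out of scope here, it and all instances created
    // in it become eligible for garbage collection
}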

Creating the Root Scope

With the assembly point in place, we can start and actually create what Magnet calls the Root Scope. This – as the name suggests – is the root of any other scope that you might create. In this way it is comparable with what you’d usually call the application component in Dagger2, so you should create it in your application’s main class (your Application subclass in Android, for example) and keep a reference to it there.
We do this as well, but at the same time add a little abstraction to make it easier to retrieve a reference to this (and possibly other) scopes later on:

// ScopeOwner.kt
interface ScopeOwner {
    val scope: Scope
}

val Context.scope: Scope
    get() {
        val owner = this as? ScopeOwner
            ?: error("$this is expected to implement ScopeOwner")
        return owner.scope
    }

// MyApplication.kt
class MyApplication : Application(), ScopeOwner {
    override val scope: Scope by lazy {
        Magnet.createRootScope().apply {
            bind(this@MyApplication as Application)
            bind(this@MyApplication as Context)
        }
    }
}

You see that the root scope is created lazily, i.e. on first usage. While Magnet scopes aren’t as heavy as Dagger2 components on object creation, it’s still a good pattern to do it this way.
In addition you see that – right after the root scope is created – we bind two instances into it, the application context and the application. The bind(T) method comes from magnet-kotlin and actually simply calls into a method whose signature is bind(Class<T>, T).

Creating subscopes

Once the root scope is available, you’re free to create additional sub-scopes for different purposes. This is done by calling scope.createSubscope(). A naive implementation of an “activity scope” could for example look like this:

open class BaseMagnetActivity : AppCompatActivity(), ScopeOwner {
    override val scope: Scope by lazy {
        application.scope.createSubscope()
    }
}

But unfortunately this wouldn’t bring us very far, since this scope would be created and destroyed every time the underlying Activity is restarted (e.g. on rotation). With AndroidX’s ViewModel library, however, we can create a scope that is not attached to the fragile Activity (or Fragment), but to a separate ViewModel that is kept around and only destroyed when the user finishes the component or navigates away from it. While the glue code to set up such a thing is not yet part of Magnet, it’s no big wizardry to write it yourself. You might want to take some inspiration from my own solution.
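
For illustration, a minimal sketch of that idea, assuming AndroidX ViewModel (ScopeViewModel is a made-up name, not part of Magnet):

// a ViewModel that owns the subscope, so it survives configuration changes
// and is dropped (and garbage collected) when the screen finally goes away
class ScopeViewModel(app: Application) : AndroidViewModel(app) {
    val scope: Scope by lazy { app.scope.createSubscope() }
}

open class BaseMagnetActivity : AppCompatActivity(), ScopeOwner {
    override val scope: Scope
        get() = ViewModelProvider(this).get(ScopeViewModel::class.java).scope
}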

Provisioning of instances

Above we’ve seen how we can bind existing object instances into a particular scope, but of course a DI library should also allow you to automatically create new instances of classes without you having to care about the details of the actual creation, like required parameters.

Magnet does this, of course, and in addition introduces a novel approach to where it places the resulting instances in your object graph: auto-scoping.

Auto-scoping means that Magnet is smart about figuring out what dependencies your new instance needs and in which scopes these instances themselves are placed. It then determines the top-most location in your scope tree your new instance can go to and places it exactly there. If the top-most location then happens to be the root scope, the instance becomes available globally. This mimics the behavior of Dagger2 when you annotate a type with @Singleton:

@Instance(type = Foo::class, scoping = Scoping.TOPMOST)
class Foo {}

It’s important to understand that with Magnet you only ever annotate types (and possibly plain functions, see below), but never constructors. This is a major gotcha when coming from Dagger2, where you place the @Inject annotation directly at the constructor that Dagger2 should use to create the instance of your type. This also means that Magnet is a bit picky and requires you to have only a single visible constructor for your type (package-protected or `internal` is possible as well), otherwise you’ll receive an error.
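
So instead of annotating a constructor as in Dagger2, a provided type with dependencies might look like this (Foo and Bar are just placeholder names); the single internal constructor is the one Magnet will use, and its parameters are resolved from the scopes automatically:

@Instance(type = Bar::class)
class Bar internal constructor(
    private val foo: Foo
)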

The option Scoping.TOPMOST that you see in the Foo example above, which triggers auto-scoping, is the default, so it can be omitted. Besides TOPMOST there are also DIRECT and UNSCOPED, which – as their names suggest – override the auto-scoping by placing an instance directly in the scope from which it was requested (DIRECT) or not in any scope at all (UNSCOPED). The latter is very useful as a factory pattern and can be compared with Dagger2’s Provider<Foo> feature.
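
As a small, hypothetical sketch of the factory-like use of UNSCOPED (ReportGenerator and Repository are made-up names): since such an instance is never stored in any scope, every request produces a fresh object, roughly the use case you would cover with Provider<ReportGenerator> in Dagger2.

@Instance(type = ReportGenerator::class, scoping = Scoping.UNSCOPED)
class ReportGenerator internal constructor(private val repository: Repository)

// each request creates a new instance, nothing is kept in the scope
val generatorA: ReportGenerator = scope.getSingle()
val generatorB: ReportGenerator = scope.getSingle() // a different instance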

Now, while this auto-scoping mechanism sounds awesome at first, there might be times where you want a little more control over what is going on. For example when you have a class that does not directly depend on anything in your current scope, but you still want to let it live in a specific (or “at-most-top-most”) scope, because it is not useful globally and would just take up heap space if kept around. This can be achieved as well, simply by tagging a scope with a limit and applying the same tag to the provisioning you want to limit:

const val ACTIVITY_SCOPE = "activity-scope"

val scope = ...
scope.limit(ACTIVITY_SCOPE)

@Instance(type = Foo::class, scoping = Scoping.TOPMOST, limitedTo = ACTIVITY_SCOPE)
class Foo {}

With all that information, let’s discuss a few specific examples. Consider a scope setup consisting of the root scope, the sub-scope “A” (tagged with SCOPE_A), which is a child of the root scope, and the sub-scope “B”, which is a child of the sub-scope “A”. Where do specific instances go?

  • New instance without dependencies and scoping = TOPMOST – Your new instance goes directly into the root scope.
  • New instance with dependencies that themselves all live in the root scope and scoping = TOPMOST – Your new instance goes directly into the root scope.
  • New instance with at least one dependency that lives in a sub-scope “A” and scoping = TOPMOST
    • If requested from the root scope, Magnet will throw an error at runtime, because the specific dependency is not available in the root scope
    • If requested from the sub-scope “A”, the new instance will go into the same scope
    • If requested from the sub-scope “B”, the new instance will go into sub-scope “A”, the “top-most” scope that this instance can be in
  • New instance without dependencies, scoping = TOPMOST and limitedTo = SCOPE_A
    • If requested from the root scope, Magnet will throw an error at runtime, because the root scope is not tagged at all and there is no other parent scope available that Magnet could look at to match the limit configuration
    • If requested from the sub-scope “A”, the new instance will go into the same scope
    • If requested from the sub-scope “B”, the new instance will go into sub-scope “A”, the “top-most” scope that this instance is allowed to be in because of its limit configuration
  • New instance with arbitrary dependencies and scoping = DIRECT – Your instance goes directly into the scope from which you requested it
  • New instance with arbitrary dependencies and scoping = UNSCOPED – Your instance is just created and does not become part of any scope

Provisioning of external types

Imagine you have some external dependency in your application that contains a class that one of your own classes depends on. In Dagger2 you have to write a custom provisioning method to make this type “known” to the DI. In Magnet this process is similar: like Dagger2, it does not use reflection to instantiate types, so you also have to write such provisioning methods.

But in case the library you’re integrating was itself built with Magnet, then Magnet has already created something it calls a “provisioning factory”, which was likely packaged within the library. In this case, Magnet will find that packaged provisioning factory and you don’t need to write custom provisioning methods yourself!

So how exactly are these provisioning methods written? Well, it turns out Magnet’s @Instance annotation is not only allowed on types, but on pure (static, top-level) functions as well:

@Instance(type = Foo::class)
fun provideFoo(factory: Factory): Foo = factory.createFoo()

A best practice for me is to add all those single provisions to a separate file that I usually call StaticProvision.kt and that I put in the specific module’s root package. There it is easy to find and will not only contain the provision methods, but also other global configuration / constants that might be needed for the DI setup.

Provision different instances of the same type

Magnet supports providing and injecting different instances of the same type in any scope. All injections and provisions we did so far used no classifier, Classifier.NONE, but this can easily be changed:

// Provision
internal const val BUSY_SUBJECT = "busy-subject"

@Instance(type = Subject::class, classifier = BUSY_SUBJECT)
internal fun provideBusySubject(): Subject<Boolean> =
    PublishSubject.create<Boolean>()

// Injection
internal class MyViewModel(
    @Classifier(BUSY_SUBJECT) private val busySubject: Subject<Boolean>
) : ViewModel() { ... }

Of course you can also bind instances with a specific classifier: the bind(...) method accepts an optional classifier parameter where you can “tag” the instance of the type as well. This is for example useful if you want to bind Activity intent data or Fragment argument values into your scope, so that they can be used in other places.
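
For illustration, binding an Activity’s intent data under a classifier could look roughly like this (USER_ID and the surrounding names are made up; the classifier is assumed to be the optional second parameter of bind() mentioned above):

const val USER_ID = "user-id"

// inside an Activity, when creating the screen's scope
override val scope: Scope by lazy {
    application.scope.createSubscope {
        bind(intent.getStringExtra("userId") ?: "", USER_ID)
    }
}

// ...and injected elsewhere via the same classifier
@Instance(type = UserPresenter::class)
internal class UserPresenter(
    @Classifier(USER_ID) private val userId: String
)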

Provisioning while hiding the implementation

You might have wondered why each @Instance provisioning configuration repeats the type that is provided – the reason is that you can specify another base type (and even multiple base types!) you want your instance to satisfy. This allows you to easily hide your implementation and have just the interface “visible” in your dependency tree outside your module.

Consider the following:

interface FooCalculator {
    fun calculate(): Foo
}

@Instance(type = FooCalculator::class)
internal class FooCalculatorImpl : FooCalculator { ... }

While this is obviously something that Dagger2 allows you to do as well, the configuration is usually detached. You specify an abstract provision of FooCalculator in a FooModule, which, with luck, lives near the interface and implementation, but often it does not, because Dagger2 modules are tedious to write and most people reuse existing module definitions for all kinds of provisionings.

Magnet’s approach here is clean, concise and simple, so simple actually that most of the time I no longer separate interface and implementation into different files, but keep them directly together.

Providing Scope

One not so obvious thing is that Magnet is able to provide the complete Scope a specific dependency lives in as a dependency itself. This might seem counter-intuitive at first, as it makes Magnet look like a service locator implementation, but there are use cases where this comes in handy.

Imagine you have a scheduled job to execute and the worker of this job needs a specific set of classes to be instantiated and available during the execution of the job. It might be the case, however, that multiple workers are kicked off in parallel, so each worker instance needs its own set of dependencies, as some of them also hold state specific to that worker. How would one implement these requirements with Magnet?

Well, it looks like we could create a sub-scope for each worker and keep them separated this way, like so:

@Instance(type = JobManager::class)
class JobManager(private val scope: Scope, private val executor: Executor) {
    fun start(firstParam: Int, secondParam: String) {
        val subScope = scope.createSubscope {
            bind(firstParam)
            bind(secondParam)
        }
        val worker: Worker = subScope.getSingle()
        executor.execute(worker)
    }
}

@Instance(type = Worker::class)
class Worker(private val firstParam: Int, private val secondParam: String) : Runnable {
    override fun run() { ... }
}

Injecting dependencies

Now we’ve talked in great length about how you provide dependencies in Magnet, but how do you actually retrieve them once they are provided?

Magnet offers several ways to retrieve dependencies:

  • Scope.getSingle(Foo::class) (or a simple dependency on Foo in your class’ constructor) – This will try to retrieve a single instance of Foo while looking for it in the current scope and any parent scope. If it fails to find an instance, it will throw an exception on runtime. If several instances of Foo can be found / instantiated, it will also throw an exception.
  • Scope.getOptional(Foo::class) (or a simple dependency on Foo? in your class’ constructor) – This will try to retrieve a single instance of Foo while looking for it in the current scope and any parent scope. If it fails to find an instance, it will return / inject null instead. If several instances of Foo can be found / instantiated, it will throw an exception.
  • Scope.getMany(Foo::class) (or a simple dependency on List<Foo> in your class’ constructor) – This will try to retrieve multiple instances of Foo while, again, looking for them in the current scope and any parent scope. If no instance is provided, an empty list is returned / injected instead. (A short sketch of all three styles follows below.)
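
In constructor form, and with made-up types, the three retrieval styles from the list above could look like this:

@Instance(type = CheckoutPresenter::class)
internal class CheckoutPresenter(
    private val api: CheckoutApi,           // getSingle: must be resolvable, otherwise Magnet throws
    private val tracker: Analytics?,        // getOptional: injected as null if nothing is provided
    private val validators: List<Validator> // getMany: an empty list if nothing is provided
)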

An important difference to Dagger2 here is that it is not the provisioning side that determines whether a list of instances is available (in Dagger2, annotated with @Provides @IntoSet), but the injection side that requests a list of things. Also, there is no way to provision a map of <key, value> pairs in Magnet, but this limitation is easy to circumvent by provisioning a List of instances of a custom type that carries both key and value:

interface TabPage {
    val id: String
    val title: String
}

@Instance(type = TabPagesHost::class)
internal class TabPagesHost(pages: List<TabPage>) {
    private val tabPages: Map<String, TabPage> = pages.associateBy { it.id }
}

Optional features

Now you might not have noticed it in the last section, but the ability to retrieve optional dependencies in Magnet is actually quite powerful.

Imagine you have two modules in your application, foo and foo-impl. The foo module contains a public interface that foo-impl implements:

// foo module, FooManager.kt
interface FooManager {
    fun doFoo()
}

// foo-impl module, FooManagerImpl.kt
@Instance(type = FooManager::class)
internal class FooManagerImpl : FooManager {
    override fun doFoo() { ... }
}

Naturally, foo-impl depends on the foo module, but in your app module it’s enough that you depend on foo for the time being to already make use of the feature:

// app module, build.gradle
android {
    productFlavors {
        demo { ... }
        full { ... }
    }
}
dependencies {
    implementation project(':foo')
}

// app module, MyActivity.kt
class MyActivity : BaseMagnetActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        ...
        findViewById<View>(R.id.some_button).setOnClickListener {
            val fooManager: FooManager? = scope.getOptional()
            fooManager?.doFoo()
        }
    }
}

Now if you then also make foo-impl available on the classpath (e.g. through a different build variant or a dynamic feature implementation), your calling code above will continue to work without changes:

// app module, build.gradle
dependencies {
    fullImplementation project(':foo-impl')
}

How cool is that?

Remember though that this technique only works on the specific module that acts as assembly point (see above), so in case you have a more complex module dependency hierarchy you can’t manage optional features in a nested manner.

App extensions

AppExtensions is a small feature that is packaged as an additional module of Magnet. It allows you to extract all the code you typically keep in your Application class into separate extensions, split by functionality, to keep the Application class clean and “open for extension and closed for modification” (Open-Closed Principle). Here is how you’d set it up:

// app module, build.gradle
dependencies {
  implementation "de.halfbit:magnetx-app:3.3-rc7"
}

Then add the following code into your Application subclass:

class MyApplication : Application(), ScopeOwner {
    ...

    private lateinit var extensions: AppExtension.Delegate

    override fun onCreate() {
        super.onCreate()
        extensions = scope.getSingle()
        extensions.onCreate()
    }

    override fun onTrimMemory(level: Int) {
        extensions.onTrimMemory(level)
        super.onTrimMemory(level)
    }
}

There are many AppExtensions available, e.g. for LeakCanary, and you can even write your own. Try it out!
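
Writing your own extension is then mostly a matter of providing one more AppExtension into the scope. The following is only a sketch, not copied from the magnetx sources, and it assumes the AppExtension type exposes the same overridable callbacks the delegate forwards, so check the actual type for the exact signatures:

// Hypothetical extension that initializes a crash reporting SDK on app start.
@Instance(type = AppExtension::class)
internal class CrashReportingExtension : AppExtension {
    override fun onCreate() {
        // initialize your crash reporting SDK here
    }
}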

Debugging Magnet

Due to its dynamic nature it might not always be totally obvious in which scope a certain instance lives. That is where Magnet’s Stetho support comes in handy.

First, add the following two dependencies to your app’s debug configuration:

// app module, build.gradle
dependencies {
    debugImplementation "de.halfbit:magnetx-app-stetho-scope:3.3-rc7"
    debugImplementation "com.facebook.stetho:stetho:1.5.1"
}

This will add an app extension to Magnet that contains some initialization code to connect to Stetho and dump the contents of all scopes to it. In order to have the initialization code executed, your Application class needs to contain the AppExtensions code as shown in the previous section.

Now when you run your application, you can inspect it with Stetho’s dumpapp tool (just copy the dumpapp script and stetho_open.py into your project tree from here):

$ scripts/dumpapp -p my.cool.app magnet scope

Note that you need an active ADB connection for this to work. If you stumble upon errors, first check whether adb devices shows the device you want to debug and, if necessary, restart the ADB server / reconnect the device. The output then looks like this:

[1] magnet.internal.MagnetScope@1daafe1
    BOUND Application my.cool.app.MyApplication@2906100
    BOUND Context my.cool.app.MyApplication@2906100
    TOPMOST SomeDependency my.cool.app.SomeDependency@2bd93c7
    ...
[2] magnet.internal.MagnetScope@f6213e5
    BOUND CompositeDisposable io.reactivex.disposables.CompositeDisposable@6f06eba
    ...
[3] magnet.internal.MagnetScope@4c964c8
    BOUND CompositeDisposable io.reactivex.disposables.CompositeDisposable@1250961
    TOPMOST SomeFragmentDependency my.cool.app.SomeFragmentDependency@7a6bc86
    ...
[3] magnet.internal.MagnetScope@d740574
    BOUND CompositeDisposable io.reactivex.disposables.CompositeDisposable@fc7da9d
    TOPMOST SomeFragmentDependency my.cool.app.SomeFragmentDependency@bf4ff74
    ...

The number in [] brackets indicates the scope level, where [1] stands for the root scope, [2] for an activity scope and [3] for a fragment scope in this example. Then the type of binding is written in upper-case letters: things that are manually bound to the scope via Scope.bind() are denoted as BOUND, things that are automatically bound to a specific scope / level are denoted as TOPMOST, and things that are directly bound to a specific scope are denoted as DIRECT. Instances that are scoped with UNSCOPED aren’t listed here because, as we learned, they are not bound to any scope.

Roundup

Magnet is a powerful, easy-to-use DI solution for any application, but it is primarily targeted at large multi-module mobile apps.

There are a few more advanced features that I haven’t covered here, like selector-based injection. I’ll leave this as an exercise for the reader to explore and try out for her/himself 🙂

Anyways, if you made it this far, please give Magnet a chance and try it out. Due to its non-pervasive nature it can co-exist with other solutions side-by-side, so you don’t have to convert existing applications all at once.

Many thanks to Sergej Shafarenka, the author of Magnet, for proofreading this blog.

New PGP Key

I think it was about time to get a new one. While I do not get much encrypted / signed email, the old one from 2003 that used a DSA/ElGamal combination was considered less secure by today’s standards. Since I had a couple of signatures on the old one, I ensured that I signed the new one with the old one to get at least “some” initial trust on this as well.

tl;dr Here is the new key: 0xCD45F2FD

And for those of you who want to span a more “social” web of trust with me, I’m also on keybase.io and have a couple of invites left as you can see 🙂

Embed Confluence pages in Jira issues

[Updated 2014-05-15: Adapt the iframe’s width as well and add an “edit in Confluence” link]

There is a question in Atlassian’s Q&A tracker that hasn’t sufficiently been answered yet and that I stumbled upon today as well, namely embedding whole Confluence pages in Jira description (and any other rich-text enabled) fields.

The reason why one would want to do something like this is to avoid context-switching between both tools. Imagine you write your specification in a wiki, but want to use an issue tracker to manage your workload. And while Atlassian has a solution to link Jira issues to Confluence pages, there is no macro or other function to actually embed the content.

What I’ll show you today is basically a hack. We’ll generate a small HTML snippet dynamically in Confluence that includes an <iframe> element and let the user copy it to a Jira description field, where it loads the page’s source view. This hack was tested under the following (software) conditions:

  • Jira 6.1.7
  • Confluence 5.4.2, with the documentation theme globally applied
  • both set up under the same domain, i.e. yourserver.com/jira and yourserver.com/confluence

Now to the configuration. On Jira’s side only one thing is needed, the {html} macro must be enabled. By default, this is disabled for security reasons (you can tell) and you should really only enable this if your Jira instance is not publicly available. Anyways, follow these steps:

  1. In Jira, go to Manage Add-Ons
  2. Then change the “Filter visible addons” drop down to show “All Add-Ons”
  3. Then search for the word “wiki” and expand the Wiki Renderer Macros Plugin
  4. Then click on the link on the right which says “7 of 8 modules enabled”
  5. Finally, click “enable” next to the last module which says “html”

Now, on Confluence’ side we want to generate some HTML snippet for a specific page and we need some little UI for that. Usually, if you want to change the contents of a web page after it is rendered in the browser, you use some browser-specific mechanism, i.e. you write a Chrome extension or a Greasemonkey script for Firefox. But Confluence offers a better, cross-browser way to inject custom code – custom HTML!

  1. In Confluence, go to Custom HTML
  2. Click on “Edit” and paste the following code into the “At end of the HEAD” textbox
  3. Hit “Save”

Now to the code:

<script>
AJS.toInit(function(){
  var meta = AJS.$('meta[name=ajs-page-id]');
  if (meta.size() > 0) {
    var list = AJS.$('<li class="ajs-button normal" />')
      .appendTo('#navigation ul');
    AJS.$('<a rel="nofollow" accesskey="q" title="Copy embed code (q)" />')
      .text('Embed code')
      .on('click', function() {
        window.prompt('Embed code: Ctrl+C, Enter', '{html}\u003Cscript type="text/javascript">function aR(fr){$f=AJS.$(fr);$f.height($f.contents().height());$p=$f.parents("*[data-field-id=description]");if($p.length>0){$f.width($p.width())}else{$p=$f.parents("div.mod-content");if($p.length>0){$f.width($p.width()-30)}}}\u003C/script>\u003Ca href="/confluence/pages/editpage.action?pageId=' + meta.attr('content') + '" style="float: right" target="confluence">\u003Csmall>Edit in Confluence \u003C/small>\u003C/a>\u003Ciframe src="/confluence/plugins/viewsource/viewpagesrc.action?pageId=' + meta.attr('content') + '" style="overflow:hidden;border:0" onload="aR(this);">\u003C/iframe>{html}');
      })
      .appendTo(list);
  }
});
</script>

So what is this? Basically, AJS is Atlassian’s entry point for its AUI library (Atlassian User Interface). It contains a full version of jQuery, accessible via AJS.$. Once AJS is initialized, we query the page ID of the currently viewed page, which is embedded in the page as a meta tag with the name “ajs-page-id”.

Next, a new button is added to the main navigation that opens a window prompt containing the HTML code to be copied (\u003C is <, this was needed to render a valid HTML page, while still showing proper HTML tags in the prompt).

Let’s have a closer look at the dynamic code part that is later executed in Jira, this time broken down into lines for better understanding:

{html}
<script type="text/javascript">
function aR(fr) {
  $f = AJS.$(fr);
  $p = $f.parents("*[data-field-id=description]");
  if ($p.length > 0) { 
    $f.width($p.width());
  } else {
    $p = $f.parents("div.mod-content");
    if ($p.length > 0) { 
      $f.width($p.width() - 30);
    }
  }
  $f.height($f.contents().height());
}</script>
<a href="/confluence/pages/editpage.action?pageId=' + meta.attr('content') + '" 
      style="float: right" target="confluence">
   <small>Edit in Confluence</small>
</a>
<iframe
  src="/confluence/plugins/viewsource/viewpagesrc.action?pageId=' + meta.attr('content') + '" 
  style="overflow:hidden;border:0"
  onload="aR(this);">
</iframe>
{html}

You can see some Javascript again, a link to edit the page externally in Confluence, and an iframe definition. The frame loads a Confluence page’s source view (usually accessible from Tools > Show page source) and is dynamically set to the width of the outer container, i.e. either the detailed Jira issue view’s div.mod-content container or Jira Agile’s description container (targetable with dd[data-field-id=description]) in a Scrum-based board. For the former we have to subtract some pixels to avoid the edit bar on the right of the description being pushed outside of the parent container.

Now, to avoid the contents of the iframe having to be scrolled separately from the browser’s viewport, we also set the height of the iframe, in this case dynamically to the height of the iframe’s contents, as soon as these are loaded. Note that a pure CSS solution, like height: 100%, would not work here, because we don’t control the parent HTML containers in which the iframe is actually rendered, and giving the iframe a fixed height would be nonsense as well, since we don’t know the page length in advance.

And that’s it, now you can embed Confluence pages in Jira issues with only a few clicks! Have fun!

Batch-remove empty lines at the end of many Confluence pages

In a customer project we decided to collaboratively write a bigger bunch of documentation in Atlassian’s Confluence and export it to Word with Scroll Office, a third-party Confluence plugin.

That worked fine so far, but soon we figured that we had been kind of sloppy with empty lines at the end of each page, which were obviously taken over into the final document. So instead of going over each and every page and removing the empty lines there, I thought it might be easier to do this directly on the database, in our case MySQL.

The query was quickly developed, but then I realized that MySQL has no PREG_REPLACE function built in, so I needed to install a UDF, a user-defined function, first. Luckily, this UDF worked out of the box and so the query could be finalized:

UPDATE BODYCONTENT 
JOIN CONTENT ON CONTENT.CONTENTID=BODYCONTENT.CONTENTID 
   AND CONTENTTYPE LIKE "PAGE" AND PREVVER IS NULL 
SET BODY=PREG_REPLACE("/(<p>&nbsp;<.p>)+$/", "", BODY) 
WHERE BODY LIKE "%<p>&nbsp;</p>";

This query updates all current pages (no old versions) from all spaces that end with at least one empty line <p>&nbsp;</p> – this is Confluence’s internal markup for that – and removes all of these empty lines from all matched pages.

This was tested with MySQL 5.5.35, lib_mysqludf_preg 1.2-rc2 and Confluence 5.4.2.

I don’t need to mention that it is – of course – highly recommended that you back up your database before you execute this query on your server, right?

Custom polymorphic type handling with Jackson

Adding support for polymorphic types in Jackson is easy and well-documented here. But what if neither the Class-based nor the property-based (@JsonSubTypes) default type ID resolvers fit your use case?

Enter custom type ID resolvers! In my case a server returned an identifier for a Command that I wanted to match one-to-one to a specific “sub-command” class, without having to configure each of these identifiers in a @JsonSubTypes configuration. Furthermore, each of these sub-commands should live in the .command package beneath the base command class. So here is what I came up with:

@JsonTypeInfo(use = JsonTypeInfo.Id.CUSTOM,
              include = JsonTypeInfo.As.PROPERTY,
              property = "command")
@JsonTypeIdResolver(CommandTypeIdResolver.class)
public abstract class Command
{
    // common properties here
}

The important part, beside the additional @JsonTypeIdResolver annotation, is the use argument that is set to JsonTypeInfo.Id.CUSTOM. Normally you’d use JsonTypeInfo.Id.CLASS or JsonTypeInfo.Id.NAME. Let’s see how the CommandTypeIdResolver is implemented:

public class CommandTypeIdResolver implements TypeIdResolver
{
    private static final String COMMAND_PACKAGE = 
            Command.class.getPackage().getName() + ".command";
    private JavaType mBaseType;

    @Override
    public void init(JavaType baseType)
    {
        mBaseType = baseType;
    }

    @Override
    public Id getMechanism()
    {
        return Id.CUSTOM;
    }

    @Override
    public String idFromValue(Object obj)
    {
        return idFromValueAndType(obj, obj.getClass());
    }

    @Override
    public String idFromBaseType()
    {
        return idFromValueAndType(null, mBaseType.getRawClass());
    }

    @Override
    public String idFromValueAndType(Object obj, Class<?> clazz)
    {
        String name = clazz.getName();
        if ( name.startsWith(COMMAND_PACKAGE) ) {
            return name.substring(COMMAND_PACKAGE.length() + 1);
        }
        throw new IllegalStateException("class " + clazz + " is not in the package " + COMMAND_PACKAGE);
    }

    @Override
    public JavaType typeFromId(String type)
    {
        Class<?> clazz;
        String clazzName = COMMAND_PACKAGE + "." + type;
        try {
            clazz = ClassUtil.findClass(clazzName);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException("cannot find class '" + clazzName + "'");
        }
        return TypeFactory.defaultInstance().constructSpecializedType(mBaseType, clazz);
    }
}

The two most important methods here are idFromValueAndType and typeFromId. For the first, I get the class name of the class to serialize and check whether it is in the right package (the .command package beneath the package where the Command class resides). If this is the case, I strip off the package path and return the rest to the serializer. For the latter method I go the other way around: I try to load the class with Jackson’s ClassUtil, using the class name I got from the deserializer with the expected package name prepended. And that’s already it!
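
To round this off, here is a small, hypothetical usage example (FooCommand is a made-up subclass, Jackson 2 package names assumed): the value of the “command” property is mapped to the class of the same name in the .command package, and back again on serialization.

// Lives in the ".command" package below Command, e.g. com.example.command
public class FooCommand extends Command
{
    public String payload;
}

// Somewhere in application code:
ObjectMapper mapper = new ObjectMapper();

// typeFromId("FooCommand") resolves the id to com.example.command.FooCommand
Command command = mapper.readValue(
        "{\"command\":\"FooCommand\",\"payload\":\"hello\"}", Command.class);

// idFromValueAndType() turns the class back into the id "FooCommand"
String json = mapper.writeValueAsString(command);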

Thanks to the nice folks at the Jackson User Mailing List for pointing me into the right direction!

Runtime-replace implementations with Roboguice in functional tests

At work we depend heavily on unit and functional testing for our current Android application. For unit testing we’ve set up a pure Java-based project that runs on Robolectric to provide a functional Android environment, and we also added Mockito to the mix to ease some code paths with spied-on or completely mocked dependencies. Moritz Post wrote a comprehensive article on how to set this up – if you have some time, it is really worth a read.

Now our functional tests are based on what the Android SDK offers us – just that we’re using Robotium as a nice wrapper around the raw instrumentation API – and until recently I thought it would not be possible to screw around much with an unaltered, but instrumented, application at runtime. But while I was reading through the Android Testing Fundamentals I stumbled upon one interesting piece:

With Android instrumentation […] you can invoke callback methods in your test code. […] Also, instrumentation can load both a test package and the application under test into the same process. Since the application components and their tests are in the same process, the tests can invoke methods in the components, and modify and examine fields in the components.

Hrm… couldn’t that be used to just mock out the implementation of this one REST service our application uses? Yes, it could! Given the following implementation

@ContextSingleton
public class RequestManager {
    ...
    public <I, O> O run(Request<I, O> request) throws Exception {
        ...
    }
}
(where the Request object basically encapsulates the needed request data and Input / Output type information)

it was easy to create a custom implementation that would return predefined answers:

public class MockedRequestManager extends RequestManager {
    private Map<Request, Object> responses = new HashMap<Request, Object>();
    ...
    public <I, O> O run(Request<I, O> request) throws Exception {
        Object response = findResponseFor(request);
        if (response instanceof Exception) {
            throw (Exception) response;
        }
        return (O) response;
    }
    ...
    public void addResponse(Request request, Object response) {
        responses.put(request, response);
    }
}

Now that this was in place, the only missing piece was to inject this implementation instead of the original one. For that I created a new base test class and overrode the setUp() and tearDown() methods like this:

public class MockedRequestTestBase extends ActivityInstrumentationTestCase2<MyActivity> {
    protected Solo solo;
    protected MockedRequestManager mockedRequestManager = new MockedRequestManager();
    ...
    private class MockedRequestManagerModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(RequestManager.class).toInstance(mockedRequestManager);
        }
    }
    ...
    public MockedRequestTestBase() {
        super(MyActivity.class);
    }
    ...
    @Override
    protected void setUp() throws Exception {
        super.setUp();
        Application app = (Application) getInstrumentation()
            .getTargetContext().getApplicationContext();
        RoboGuice.setBaseApplicationInjector(
            app, RoboGuice.DEFAULT_STAGE,
            Modules.override(RoboGuice.newDefaultRoboModule(app))
                .with(new MockedRequestManagerModule()));
        solo = new Solo(getInstrumentation(), getActivity());
    }
    ...
    @Override
    protected void tearDown() throws Exception {
        super.tearDown();
        RoboGuice.Util.reset();
    }
}

It is important to note here that the module overriding has to happen before getActivity() is called, because that starts up the application and will initialize the default implementations as they’re needed / lazily loaded by RoboGuice. Since we explicitly provide a specific implementation of the RequestManager class beforehand, the application code will skip the initialization of the actual implementation and will use our mocked version.

Now it’s time to actually write a test:

public class TestFileNotFoundException extends MockedRequestTestBase {
    public void testFileNotFoundMessage()
    {
        Request request = new FooRequest();
        mockedRequestManager.addResponse(
            request, 
            new FileNotFoundException("The resource /foo/1 was not found")
        );
        solo.clickOnText("request first foo");
        assertTrue(solo.waitForText("The resource /foo/1 was not found"));
    }
}

That’s it. Now one could probably also add Mockito to the mix, by injecting a spied-on / completely mocked version of the original RequestManager, but I’ll leave that as an exercise for the reader…
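
If you want a starting point for that exercise, here is a rough, hypothetical sketch (not something taken from our actual test suite):

public class MockitoRequestTestBase extends ActivityInstrumentationTestCase2<MyActivity> {
    protected Solo solo;
    // a Mockito mock replaces the hand-written MockedRequestManager
    protected RequestManager mockedRequestManager = Mockito.mock(RequestManager.class);

    private class MockedRequestManagerModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(RequestManager.class).toInstance(mockedRequestManager);
        }
    }

    public MockitoRequestTestBase() {
        super(MyActivity.class);
    }

    // setUp() and tearDown() stay exactly as in MockedRequestTestBase above;
    // individual tests then stub or verify behaviour, e.g.:
    //
    //   Mockito.when(mockedRequestManager.run(request)).thenReturn(response);
    //   Mockito.verify(mockedRequestManager).run(request);
}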

Have fun!

Debugging with MacPorts PHP binaries and Eclipse PDT 3.0

You know those times when things should really go fast and easy, but you fall from one nightmare into another? Tonight was such a night… but let’s start from the beginning.

To debug PHP you usually install the excellent XDebug, and so did I with the port command sudo port install php5-xdebug. After that, php -v already greeted me friendly on the command line:

PHP 5.3.8 (cli) (built: Sep 22 2011 11:42:56) 
Copyright (c) 1997-2011 The PHP Group
Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
  with Xdebug v2.1.1, Copyright (c) 2002-2011, by Derick Rethans

Eclipse Indigo and Eclipse PDT 3 were already installed, so I thought it should be easy to set up the XDebug debugging option in Eclipse. Under “PHP > PHP Executables” I therefore selected /opt/local/bin/php as my CLI version and selected “xdebug” as the debugging option.

A first test however showed me that the execution of a test script did not load any module into the PHP interpreter beforehand (for reasons I could only guess at, because the Eclipse error log kept quiet). Looking at the output of phpinfo() from my test script and php -i from the command line showed me the difference: the PHP option “Scan this dir for additional .ini files” was empty when PHP ran inside Eclipse, but was properly set when PHP ran from the command line (or in an Apache context).

Asking aunt Google brought up this issue that shed some light into my darkness: the directory where additional modules reside is configured as a compile-time option in PHP and defaults to /opt/local/var/db/php5 on MacPorts, and exactly this can be overridden by either calling PHP with the -n -c options or by setting the PHP_INI_SCAN_DIR environment variable.

Having no access to the actual PHP call from inside Eclipse, I tried to go down the environment route, but that did not lead to any success. While the variable was recognized as it should be on the normal command line (e.g. PHP_INI_SCAN_DIR= php -i disabled the load of additional modules), in Eclipse’s run configuration dialog, in the environment variables section, it was not recognized at all. I tried a little harder and configured the variable inside ~/.MacOSX/environment.plist, logged out and in again, restarted Eclipse obviously, but had no luck either.

The only viable solution I came up with was to place all the single extension= and zend_extension= entries directly into my php.ini and disable the individual module .ini files altogether. At least I can now run and debug properly, but this solution is of course far from ideal – as soon as I add a new PHP module or want to remove an existing one, I have to remember to edit the php.ini myself.

By the way, I also tried to use Zend’s debugger (and PDT plugin) as an alternative. While somebody else has already ranted about the fact that the Zend guys have been unable to provide the Zend Debugger for PHP 5.3 as a standalone download (which hasn’t changed to date), PHP 5.2 debugging worked nicely with the old Zend PDT plugin.

Of course, none of my needed PHP modules were loaded and I really needed PHP 5.3 support, so I had to follow the same route the other guy did and download all of the ZendServer glory (a 137MB download, yay) just to get the right ZendDebugger.so. After extracting the .pax.gz archive from the installer package I quickly found it underneath usr/local/zend/lib/debugger/php-5.3.x/, copied it to my extension directory and added an ini file to load that one instead. Shortly afterwards I found out that the Zend binary was i386 only, while MacPorts of course compiled everything nicely as x86_64, so PHP was unable to load such a module.

Well, the moral of the story is: go for Xdebug and don’t lose track. And let us all hope that Eclipse PDT is developed further, so the remaining glitches like the one above get fixed.

Exception chaining in Java

If you catch and rethrow exceptions in Java, you probably know about exception chaining already: you simply pass the exception you “wrap” as the second argument to your exception, like this

try { ... }
catch (Exception e) {
  throw new CustomException("something went wrong", e);
}

and if you look at the stack trace of the newly thrown exception, the original one is listed as “Caused by:”. Now today I had the rather “usual” use case of cleaning up after a failing action, and the cleanup itself was able to throw as well. So I had two causing exceptions and I wanted to preserve both of them, including their complete cause chains, in a new exception. Consider the following example:

try { ... }
catch (Exception e1) {
  try { ... }
  catch (Exception e2) {
     // how to transport e1 and e2 in a new exception here?!
  }
  throw e1;
}

My idea here was to somehow tack the exception chain of e1 onto the exception chain of e2, but Java offered no solution for this. So I hunted for my own one:

public static class ChainedException extends Exception {
  public ChainedException(String msg, Throwable cause) {
    super(msg, cause);
  }
  public void appendRootCause(Throwable cause) {
    Throwable parent = this;
    while (parent.getCause() != null) {
      parent = parent.getCause();
    }
    parent.initCause(cause);
  }
}

Now I only had to base the exceptions I actually want to chain on ChainedException and was able to do this (in fact I based all of them on this class):

try { ... }
catch (ChainedException e1) {
  try { ... }
  catch (ChainedException e2) {
    e2.appendRootCause(e1);
    throw new ChainedException("cleanup failed", e2);
  }
  throw e1;
}

Try it out yourself – you’ll see the trace of e1 at the bottom of the cause chain of e2. Quite nice, eh?