Kafril takes action against Google reviewers

The company Kafril in Lossa (Leipzig district) is taking legal action against people who leave bad Google reviews for the business, which plans to fill the Holzberg biotope, a piece of mining history laboriously reclaimed by nature over many years, with construction rubble.

The lawyer's letter in full

As the authorized representative of the above-named company, I demand that the reviews be deleted for the following reasons.

1.) Deletion due to a missing business connection
Customer contact between the client and the reviewer is a necessary precondition for the lawful publication of a review. My client denies any such customer contact with the reviewer for lack of knowledge.

I therefore request that you comply with the duties imposed on you by the Federal Court of Justice (judgment of 01.03.2016 – VI ZR 34/15) and

a) forward this complaint to the author of the review without delay
and
b) at the same time request the author to comment on this complaint, to describe the customer contact as precisely as possible, and to submit documents substantiating that contact, such as invoices,
and
c) forward the statement, including any supporting documents, to me.

In detail:

Re a): Duty to check the customer contact for plausibility

2.)
The applicant rules out that the reviewer is a current or former employee.

Upon such a complaint you are obliged to verify the customer contact, entirely irrespective of whether the review might be permissible in terms of its content.

In its judgment of 01.03.2016 (VI ZR 34/15), the Federal Court of Justice ruled, with regard to the verification duties of a review portal operator, that where the professional connection between the author and the reviewed […]

These verification duties exist for all reviews, not only for obviously unlawful or possibly permissible ones. Untrue factual claims can never be obvious […] (BGH, judgment of 25.10.2011 – VI ZR 93/10 – Blogger; BGH, judgment of 01.03.2016 – VI ZR 34/15). The European Court of Human Rights (ECtHR) likewise assumes in settled case law that every expression of opinion concerning a business […]

Re b): Immediate deletion if no statement is received or the customer contact appears implausible
Should the author respond, I already now request that you forward this statement to me immediately by e-mail. You are obliged to do so (BGH, loc. cit.).

If you fail to comply with this duty, the review, including all star ratings, is to be classified as unlawful and its publication must cease immediately (BGH, loc. cit.; OLG München, judgment of 17.10.2014 – 18 W 1933/[…]).

We have noted a deadline of 10.10.2023.

Furthermore, I ask you to refrain from your request to "flag the URL", as this would constitute an impermissible delay of the review procedure.

The URL to the review that we have already provided is sufficient, and the information you need for prompt processing can be found there.

Please also note the judgment (BGH, judgment of 25.10.2011 – VI ZR 93/10)
and the judgment of LG Hamburg of 24.03.2017 – case no. 324 O 148/16.

Testing HTTPS Requests with Wiremock and Robolectric

Prerequisites: OkHttp 3.x/4.x, Wiremock 2.x, Robolectric 4.9.x

// build.gradle.kts
dependencies {
  testImplementation("org.robolectric:robolectric:4.9.2"
  testImplementation("com.github.tomakehurst:wiremock-jre8-standalone:2.35.0")
  testImplementation("com.squareup.okhttp3:okhttp-tls:4.10.0")
}

Now for the actual test it’s important that your System Under Test (SUT) is able to configure its OkHttpClient instance:

class SomeHttpClient @VisibleForTesting internal constructor(
  private val client: OkHttpClient
) {

  constructor() : this(OkHttpClient())

  fun execute(request: Request): Response =
    client.newCall(request).execute()
}

…so that in your test you can configure the needed SSL building blocks:

...
val handshakeCertificates = HandshakeCertificates.Builder()
  .addInsecureHost("localhost")
  .build()
val hostnameVerifier = HostnameVerifier { hostname, _ -> hostname == "localhost" }

val sut = SomeHttpClient(
  OkHttpClient.Builder()
    .sslSocketFactory(handshakeCertificates.sslSocketFactory(), handshakeCertificates.trustManager)
    .hostnameVerifier(hostnameVerifier)
    .build()
)
...

For the Robolectric setup, you need to disable Robolectric’s Conscrypt implementation, because it does not work well with Wiremock:

@RunWith(RobolectricTestRunner::class)
@ConscryptMode(ConscryptMode.Mode.OFF)
class SomeHttpClientTest {
  @get:Rule
  val wiremockRule = WireMockRule(wireMockConfig().dynamicHttpsPort())

  // val sut = ... (see above)

  @Test
  fun shouldRequestSomething() {
    stubFor(
      get(urlPathMatching("/echo$"))
        .willReturn(aResponse().withBody("Hello World!"))
    )
    
    val response = sut.execute(
      Request.Builder()
        .url("https://localhost:${wiremockRule.httpsPort()}/echo")
        .get()
        .build()
      )

    // verify response
  }
}

Dagger Hilt Learnings

This is a loose list of learnings I had when I first came into contact with Dagger Hilt, especially with regard to testing. So, without further ado, let’s get into it.

Documentation

While the documentation on Dagger Hilt on developer.android.com is already quite exhaustive, I figured I missed a couple of important pieces of information and gotchas that I only got from the official Hilt documentation. So be sure to read through both thoroughly.

Scoping

It’s buried a bit in the documentation, but keep in mind that using one of the predefined components does not mean that all dependencies installed in that component are single instances. There is a corresponding scope for each and every component type that ensures that there is only one specific instance of your thing. This is particularly useful if your thing holds some kind of shared state:

@Module
@InstallIn(ActivityRetainedComponent::class)
object RetainedModule {
    @Provides
    @ActivityRetainedScope
    fun provideFlow() = 
        MutableStateFlow<@JvmSuppressWildcards SomeState>(SomeState.Empty)
}

Communication between different Android ViewModel instances comes to mind as a case where this is handy.
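
As a hedged sketch of that idea (the two ViewModel classes and the SomeState values are illustrative, not from a real project), two ViewModels hosted by the same Activity can share the retained flow simply by injecting it:

@HiltViewModel
class ProducerViewModel @Inject constructor(
    // the @ActivityRetainedScope flow provided by RetainedModule above
    private val state: MutableStateFlow<@JvmSuppressWildcards SomeState>
) : ViewModel() {
    fun publish(newState: SomeState) {
        state.value = newState
    }
}

@HiltViewModel
class ConsumerViewModel @Inject constructor(
    // the same instance as in ProducerViewModel, thanks to the scope annotation
    val state: MutableStateFlow<@JvmSuppressWildcards SomeState>
) : ViewModel()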

Since scoping comes with an overhead, also remember that you can use @Reusable in any component if you only need some instance of your otherwise stateless dependency, without the guarantee that it is a single one:

@Module
@InstallIn(SingletonComponent::class)
object SingletonModule {
    @Provides
    @Reusable
    fun provideHttpClient() = OkHttpClient.Builder()...
}

Dagger Hilt Debugging

Dagger Hilt is – under the hood – a beefed-up Dagger that comes with a couple of interesting features, like isolated dependency graphs for tests. But after all, it’s still just Dagger, implemented in Java. Which means your usual rules for making Dagger work apply here (@JvmSuppressWildcards to the rescue when dealing with generics, etc.), just with an extra level of complexity that hides the usual unreadable errors.

Since most of my issues revolved around understanding the removal / replacement of test dependencies, I figured out that the entry point for Hilt’s generated test sources is build/generated/hilt/component_sources. This directory is split into two parts: one contains the generated test components, one for each test class, underneath component_sources/{variant}UnitTest/dagger/hilt/android/internal/testing/root, and one collects an injector implementation, again one for each of your tests, residing in component_sources/{variant}UnitTest/package/path/to/your/tests.

The former directory is definitely the more interesting one, because you can check each generated test component to see whether it carries the modules you want your test to provide, i.e. you can check whether your modules are properly replaced via @TestInstallIn or removed via @UninstallModules.

Component Tests have to be Android Tests

I like to write blackbox component tests on the JVM for REST or DAO implementations. Sometimes this requires a complex dependency setup (mappers, libraries, parsers, …) where I’d like to use Dagger to create instances of my subject under test.

Dagger Hilt supports this, kind of, as long as you don’t mind rewriting your JUnit 5 component test in JUnit 4 (including all Extensions you might have written). The reason is that even though your test doesn’t need a single Android framework dependency, you still need to run it with Robolectric, because this is the only supported way of using Hilt in JVM tests as of now:

Even though we have plans in the future for Hilt without Android, right now Android is a requirement so it isn’t possible to run the Hilt Dagger graph without either an instrumentation test or Robolectric.

Eric Chang

UI Testing : Activity

Using Dagger Hilt for an Activity test is straightforward, you basically follow the documentation:

@HiltAndroidTest
@RunWith(RobolectricTestRunner::class)
@Config(application = HiltTestApplication::class)
internal class SomeActivityTest {
    
    @get:Rule(order = 0)
    val hiltRule = HiltAndroidRule(this)
    
    @get:Rule(order = 1)
    val activityScenarioRule = ActivityScenarioRule(SomeActivity::class.java)
    
    @Inject lateinit var dep: SomeDep
    
    @Before
    fun init() {
        hiltRule.inject()
    }
    
    @Test
    fun someTest() {
         // stub dep
         ...
         // launch
         activityScenarioRule.launchActivity()
    }
}

This works nicely if your dependency is in Singleton scope, because your test instance itself cannot inject anything but Singleton-scoped dependencies. But what if not, and we have to stub the aforementioned MutableStateFlow?

Now, Hilt has a concept called EntryPoints that we can define in a test-local manner. The entry point targets a specific component and can fetch dependencies from it. To find the right component for your entry point it helps to look at the component hierarchy. If our dependency lives in the ActivityRetainedComponent, it’s as easy as creating a new entry point into it for our test, right?

    ...
    
    @get:Rule(order = 0)
    val hiltRule = HiltAndroidRule(this)
    
    @EntryPoint
    @InstallIn(ActivityRetainedComponent::class)
    internal interface ActivityRetainedEntryPoint {
       val flow: MutableStateFlow<@JvmSuppressWildcards SomeState>
    }
    
    @Before
    fun init() {
        hiltRule.inject()
    }
    ...

Wrong. To get an instance of the entry point, you have to call EntryPoints.get(component, ActivityRetainedEntryPoint::class.java), where component is the instance of the thing that owns the component, i.e. an Application instance for SingletonComponent entry points, an Activity instance for ActivityComponent entry points, and so on. But what is the thing that owns the ActivityRetainedComponent, and where do we get access to it?

Turns out we don’t need it. Looking at the component hierarchy again, we see that ActivityComponent, FragmentComponent and a few others are direct or indirect child components of the ActivityRetainedComponent and therefore see all of its dependencies. So we “just” need an Activity or Fragment instance to get our dependency.

The Hilt docs state that the easiest way is to define a custom static activity class in your code, like this

@AndroidEntryPoint
class TestActivity : AppCompatActivity() {
    @Inject lateinit var flow: MutableStateFlow<@JvmSuppressWildcards SomeState>
}

but that Activity needs to go through the lifecycle first to be usable. Can’t we just use the Activity instance we launch anyway for this? Turns out we can; we just need to “extract” the actual Activity instance from the ActivityScenario:

fun <T : Activity> ActivityScenario<T>.getActivity(): T? {
    val field = this::class.java.getDeclaredField("currentActivity")
    field.isAccessible = true
    @Suppress("UNCHECKED_CAST")
    return field.get(this) as? T?
}

inline fun <reified E : Any> ActivityScenarioRule<*>.getActivityEntryPoint(): E =
    EntryPoints.get(
        getScenario().getActivity() ?: error("activity not started"),
        E::class.java
    )

so our complete test looks like this:

@HiltAndroidTest
@RunWith(RobolectricTestRunner::class)
@Config(application = HiltTestApplication::class)
internal class SomeActivityTest {
    
    @get:Rule(order = 0)
    val hiltRule = HiltAndroidRule(this)
    
    @get:Rule(order = 1)
    val activityScenarioRule = ActivityScenarioRule(SomeActivity::class.java)
    
    @EntryPoint
    @InstallIn(ActivityComponent::class)
    internal interface TestEntryPoint {
        val flow: MutableStateFlow<@JvmSuppressWildcards SomeState>
    }
    
    @Before
    fun init() {
        hiltRule.inject()
    }
    
    @Test
    fun someTest() {
         // launch
         activityScenarioRule.launchActivity()
         // get the flow and do things with it
         val flow = activityScenarioRule.getActivityEntryPoint<TestEntryPoint>().flow
    }        
}

The downside now is, of course, that the Activity must be launched (started, even!) before one gets access to the dependency. Can we fix that? Unfortunately not without moving the dependency up the component hierarchy and installing the original module that provided it. See Replacing Ad-hoc Dependencies for a way to do that.

UI Testing : Fragments

The first issue with Hilt-enabled Fragment testing is that there is no support for Hilt-enabled Fragment testing. The problem is that the regular androidx.fragment:fragment-testing artifact comes with an internal TestActivity that is not Hilt-enabled, so we have to write our own:

@AndroidEntryPoint(AppCompatActivity::class)
class TestHiltActivity : Hilt_TestHiltActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        val themeRes = intent.getIntExtra(THEME_EXTRAS_BUNDLE_KEY, 0)
        require(themeRes != 0) { "No theme configured for ${this.javaClass}" }
        setTheme(themeRes)
        super.onCreate(savedInstanceState)
    }

    companion object {
        private const val THEME_EXTRAS_BUNDLE_KEY = "theme-extra-bundle-key"

        fun createIntent(context: Context, @StyleRes themeResId: Int): Intent {
            val componentName = ComponentName(context, TestHiltActivity::class.java)
            return Intent.makeMainActivity(componentName)
                .putExtra(THEME_EXTRAS_BUNDLE_KEY, themeResId)
        }
    }
}

This is basically copied from the original TestActivity and adapted. I place it into a separate Gradle module because, like the original artifact, it has to become a debugImplementation dependency.
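
The wiring then looks something like this (a sketch; the ':testing-hilt-activity' module name is made up):

// app/build.gradle.kts
dependencies {
  debugImplementation(project(":testing-hilt-activity"))
}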

Now we need a separate FragmentScenario and FragmentScenarioRule as well, to use this new Activity. I’ll not paste the complete implementation for them here, but refer you to this gist where I collected them.

With FragmentScenario we have more control over the state in which the Fragment is launched. My implementation by default launches a Fragment in Lifecycle.State.INITIALIZED, which is basically the state the Fragment is in right after its instantiation and, more importantly, after Hilt has injected its dependencies!

So, we can now stub dependencies that are used during onCreate like so:

@EntryPoint
@InstallIn(FragmentComponent::class)
internal interface FragmentEntryPoint {
    val someFlow: MutableStateFlow<SomeState>
}

@get:Rule
val fragmentScenarioRule = HiltFragmentScenarioRule(SomeFragment::class)
private val entryPoint by lazy {
    fragmentScenarioRule.getFragmentEntryPoint<FragmentEntryPoint>()
}

...
val fragmentScenario = fragmentScenarioRule.launchFragment(R.style.AppTheme)
entryPoint.someFlow.tryEmit(SomeState.SomeValue)               
fragmentScenario.moveToState(Lifecycle.State.RESUMED)

Replacing Ad-hoc Dependencies

There are times when you don’t provision dependencies through dedicated modules that you could replace on the test side via @TestInstallIn or the like. A good example of this are UseCase classes.

I tend to test my View (Fragment or Activity) together with my Android ViewModel implementation, and the latter makes use of these UseCase classes to interface with my domain layer. Naturally, one wants to replace the UseCase implementation with a fake implementation or a mock, but how can one accomplish this with Hilt?

Turns out it’s quite easy: all you have to do is @BindValue your dependency in your test class. A dependency provisioned through this seems to take precedence over constructor-injected ad-hoc dependencies:

@HiltAndroidTest
@RunWith(RobolectricTestRunner::class)
@Config(application = HiltTestApplication::class)
internal class SomeActivityTest {

    @get:Rule(order = 0)
    val hiltRule = HiltAndroidRule(this)

    @get:Rule(order = 1)
    val activityScenarioRule = ActivityScenarioRule(SomeActivity::class.java)

    @BindValue
    val useCase: MyUseCase = mockk()

    @Before
    fun init() {
        hiltRule.inject()
    }
    ...
}

Lifecycle and Scoping in Tests

More often than not you might stumble into weird test issues when you follow the “good citizen” rule and provision even your test dependencies (e.g. mocks) with @Reusable. In some cases you might end up with two different instances: one in your test and one in your production code.

So, spare yourself a few headaches and just always annotate those test dependencies with the scope matching the component, e.g. @Singleton.
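
A minimal sketch of that advice (SomeDep is a placeholder; mockk() stands in for whatever test double you use): pin the test double to the scope matching its component instead of @Reusable:

@Module
@InstallIn(SingletonComponent::class)
internal object TestSomeDepModule {
    @Provides
    @Singleton // matches SingletonComponent, so everyone sees the same instance
    fun provideSomeDep(): SomeDep = mockk(relaxed = true)
}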

Module Size

The ability to uninstall certain modules per test has the nice “side effect” of training you to make your modules smaller: the larger a module is, and the more unrelated dependencies it provisions, the more work you have to do to provide the “other” dependencies you’re not interested in once you uninstall that module for a particular test case.

Well, at least Dagger tells you that something is missing by printing out its beloved compilation errors, right?!

Global Test Modules

Sometimes you want to remove some dependency from your graph that would otherwise wreak havoc during testing; think of a Crashlytics module suddenly sending crash reports on test failures, or a Logging module that prints garbage to your stdout. Usually you’d do something like this:

@Module
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [LoggingModule::class]
)
internal object TestLoggingModule {
    @Provides
    @Singleton
    fun provideLogger(): Logger = Logger { /* no-op */ }
}

All fine, but what if you now have a single test case where you want to check the log output? You can’t uninstall a module installed via @TestInstallIn, but there is a workaround: install a module that removes the dependency, then add another regular module that provides your no-op implementation:

@Module
@TestInstallIn(
    components = [SingletonComponent::class],
    replaces = [LoggingModule::class]
)
internal object TestLoggingRemovalModule

@Module
@InstallIn(SingletonComponent::class)
internal object TestLoggingModule {
    @Provides
    @Singleton
    fun provideLogger(): Logger = Logger { /* no-op */ }
}

Now, in your test you can remove that module and have a custom implementation that you can verify against:

@HiltAndroidTest
@RunWith(RobolectricTestRunner::class)
@UninstallModules(TestLoggingModule::class)
@Config(application = HiltTestApplication::class)
internal class SomeLoggingTest {
    @BindValue
    val logger: Logger = MyFakeLogger()
    ...
}

Code Coverage

If your @AndroidEntryPoints don’t show up as covered in Jacoco’s code coverage reports, even though you have tests for them, follow this excellent post and choose whether you want to keep using the Hilt Gradle plugin or not.

Wrap-up

Dagger Hilt makes testing a lot easier; the ability to replace dependencies for each test separately is a real game changer.

What’s also true is that it is still Dagger, i.e. the configuration is complex and the error messages are cryptic at best. And, new at least for me: Hilt compilation issues occasionally have to be fixed by cleaning your module, because there seem to be issues with incremental compilation. Not necessarily confidence-inspiring, but at least you know how to fix things.

I hope I could help you out with some of my learnings. Let me know what you think!

Disabling Samsung Android System Services – A post-mortem

I’m currently working on a project where we build an Android app that gets installed into the work profile of EMM (Enterprise Mobility Management) managed devices. The devices are mostly Samsung devices running Android 9 or Android 10.

More recently we got a big influx of crashes that left us puzzled. Apparently, when people took screenshots in the private profile, opened some app (like the browser) and then returned immediately to our app in the work profile, the application crashed as soon as they set the focus on an EditText field, with this:

Uncaught exception thrown in the UI: java.lang.SecurityException: No access to content://com.sec.android.semclipboardprovider/images: neither user 1010241 nor current process has android.permission.INTERACT_ACROSS_USERS_FULL or android.permission.INTERACT_ACROSS_USERS
at android.os.Parcel.createException(Parcel.java:2088)
at android.os.Parcel.readException(Parcel.java:2056)
at android.os.Parcel.readException(Parcel.java:2004)
at android.sec.clipboard.IClipboardService$Stub$Proxy.getClipData(IClipboardService.java:959)
at com.samsung.android.content.clipboard.SemClipboardManager.getLatestClip(SemClipboardManager.java:609)
at android.widget.EditText.updateClipboardFilter(EditText.java:316)
at android.view.inputmethod.InputMethodManager.startInputInner(InputMethodManager.java:2131)
... 

A quick Google search came back with almost nothing; only one Stack Overflow post suggested that one should simply add the missing permission with protectionLevel="signature", which is of course nonsense for a non-system app that is not signed with the same key as the rest of the system framework. So, what to do?

Staring at the stack trace, I fired up Google’s Android Code Search and checked the sources of EditText, just to find a possible way to somehow disable / prevent the call to updateClipboardFilter. However, to my surprise, this API was completely nonexistent in AOSP!

So apparently we had to deal with a completely proprietary Samsung API. Firing up Google for SemClipboardManager pointed me to a several-years-old repository that partially disassembled said class, so I could have a closer look at what was actually going on.

From what I saw there, the manager’s functionality could be disabled if I somehow found a way to override the isEnabled method of this class to permanently return false, which the method usually only does if the device is in “emergency” or “ultra low power” mode. OK, we have an attack vector!

From my usual Android trickery I knew that the easiest way to fiddle with system services is to create a custom ContextWrapper and wrap any given base context in my Activity’s attachBaseContext method, like so:

class SomeActivity : AppCompatActivity() {
  ...
  override fun attachBaseContext(newBase: Context) {
    super.attachBaseContext(FixSamsungStuff(newBase))
  }
  ...
}

Now, one could think: why deal with the internal service workings at all, wouldn’t it be enough to simply disable / null the service instead, i.e. like this?

class FixSamsungStuff(base: Context) : ContextWrapper(base) {
  override fun getSystemService(name: String): Any? {
    // the name is from `adb shell service list`
    return if (name == "semclipboard") {
      null
    } else {
      super.getSystemService(name)
    }
  }
}

But the fine folks at Samsung of course don’t check for the non-existence of their service, and instead of the above SecurityException I was presented with a NullPointerException.

So now it got interesting: how would I actually proxy a method of a class to return a different value? From my testing adventures I knew this must be possible, because Mockito.spy(instance) allows exactly that, on the JVM and on ART.

So I came across ByteBuddy for Android, by the fantastic Rafael Winterhalter. The example on the front page of the repo was easy enough to adapt to my use case:

class FixSamsungStuff(base: Context) : ContextWrapper(base) {
  override fun getSystemService(name: String): Any? {
    val service = super.getSystemService(name) ?: return null
    return if (name == "semclipboard") {
      interceptClipboardService(service)
    } else {
      service
    }
  }

  private fun interceptClipboardService(service: Any): Any {
    val strategy = AndroidClassLoadingStrategy.Wrapping(
      getDir("generated", Context.MODE_PRIVATE)
    )
    // subclass the service class and make isEnabled() always return false
    val dynamicType: Class<out Any> = ByteBuddy()
      .subclass(service.javaClass)
      .method(ElementMatchers.named("isEnabled"))
      .intercept(FixedValue.value(false))
      .make()
      .load(service.javaClass.classLoader, strategy)
      .loaded
    // constructor definition from the decompiled sources
    val constructor = dynamicType.getConstructor(
      Context::class.java, Handler::class.java
    )
    return constructor.newInstance(this, Handler())
  }
}

But when I tried to run this, I got a NoSuchMethodException because the given constructor was unknown. Hrm… well, I thought, maybe the decompiled sources were just too old, so I debugged into the code and checked service.javaClass.getConstructors() and service.javaClass.getDeclaredConstructors(), but both returned an empty list! How on earth could a Java class be instantiated without a constructor?!

I learned that there are ways, and that the JVM spec itself does not actually dictate the existence of a constructor for a class! So I contacted Rafael Winterhalter, and he told me that there was probably some native code trickery going on, so my best bet would be to use sun.reflect.ReflectionFactory on the JVM. But this, of course, was not available on Android.

A hint in the Android Study Group Slack then pointed me in the right direction: Objenesis! This magic wand allows you to create an instance of any class, regardless of whether it has a constructor or not. So instantiating my ByteBuddy fake instance was as easy as doing

val objenesis = ObjenesisStd()
return objenesis.newInstance(dynamicType)

And as awesome as it is, that worked instantly!

This was a struggle, but in the end a very worthwhile journey, and I learned quite a few things along the way.

Thanks for reading!

Debugging with Gradle

If you want to debug something that is built / run with Gradle, you have a few options.

In all cases, your IDE needs to be set up to listen to a remote JDWP (Java Debug Wire Protocol) connection. In IntelliJ this looks like this: Go to “Edit configurations”, hit the “+” button on the top left corner, select “Remote” and give your run configuration a name. Leave the other configuration options as-is. (As Gradle will always start the debug server for us, we leave “Attach to remote JVM” selected.) Finally, hit “OK”.

Now to the actual debugging.

Debugging JUnit tests

More often than not you cannot debug a unit test properly inside the IDE. Even if you use the Gradle builder in IntelliJ, for example, there are times where the IDE simply won’t get the classpath right and your tests fail with obscure errors.

Now with Gradle we don’t need to start the tests from within the IDE to debug them, luckily! All we need is this:

 ./gradlew app:testDebug --tests MyAwesomeTest --debug-jvm

This is an example from an Android project, but you can think of any other test flavor here. With --tests we define the test we’d like to run (to avoid having to wait for all tests of the module to be executed) and --debug-jvm lets Gradle wait for our debugger to attach, before the test is executed.

Now you can put breakpoints into your code and start the pre-configured “Gradle” run configuration in “Debug” mode. As soon as you see “… connected” in the IDE, the command line execution will continue, execute your test and eventually stop on your breakpoints.

Debugging Gradle build scripts

Debugging Gradle build scripts itself is possible by starting any Gradle command with an additional, slightly different argument:

./gradlew myAwesomeTask -Dorg.gradle.debug=true

Here again, Gradle will start up and wait for your IDE to connect to the debug server, then continue executing your task and eventually stop on your breakpoints.

Not so fast, my breakpoints are not hit!

Well, it wouldn’t be Gradle if it were that easy, right? Right!

The issue is that in a “properly” configured Gradle project there are probably multiple things set up to speed up your build. First and foremost, a running Gradle daemon in the background might be re-used, and you might fail to attach to that daemon again once you have disconnected from it. So the best option here is to disable the use of a global daemon for your run and just spawn a dedicated daemon for the command you want to debug:

./gradlew --no-daemon ...

(There is also an org.gradle.daemon.debug option to debug the daemon itself, but I never found a useful way of working with it. Feedback on this one would be helpful :))

Secondly, you might have a build cache set up (either local or remote). If you run tasks that already ran through successfully once, Gradle will just use the cached outputs and skip task execution completely. (You’ll usually notice this when the Gradle output says something like “x tasks executed, y tasks cached, …”.) So, disable caching temporarily as well:

./gradlew --no-daemon --no-build-cache ...

Lastly, specifically if you execute tests, you should remove the previous test results so your test is actually executed again:

rm -rf app/build/reports && \
  ./gradlew --no-daemon --no-build-cache ...

Now your breakpoints should be hit for real. Happy debugging!

Magnet – an alternative to Dagger

I have meant to write about this for a very long time but never actually came around to doing it, mostly because of time constraints. But here we are, let’s go.

What is Magnet?

Magnet is a Java library that allows you to apply dependency injection (DI), more specifically Dependency Inversion, in your Java / Kotlin application.

Why another DI library?

Traditionally there have been many libraries that do this job, and there still are. In the mobile area on Android, where I mostly work, it all started out with Roboguice (an Android-friendly version of Google’s Guice), then people migrated to Square’s Dagger, and later Google picked it up once again and created Dagger2, which is still in wide use in countless applications.

I have my own share of experience with Dagger2 from the past; the initial learning curve was steep, but once you were into it enough it worked out pretty well, except for a few nuisances:

  • Complexity – The amount of generated code is hard to grasp, as is the reason why this code generation sometimes fails because of an error on your side. While literally all code is generated for you, navigating between these generated parts proved to be very hard. On the other hand, understanding some of the errors the Dagger2 compiler spits out in case you missed an annotation somewhere is, to put it mildly, not easy either.
  • Boilerplate – Dagger2 differentiates between Modules, Components, Subcomponents, Component relations, Providers, custom Factories and whatnot, and comes with a variety of different annotations that control the behavior of all these things. This not only adds complexity; because of the nature of the library you have to do a lot of footwork to get your first injection available somewhere. Have you ever asked yourself, for example, whether there is a real need to have both components and modules in Dagger2?

Now, this criticism is not new at all, and with the advent of Kotlin on Android other projects emerged that try to provide an alternative to Dagger2, most prominently Kodein and Koin. However, when I played around with those, it felt like they missed something:

  • In Kodein I disliked that I had to pass a kind of god object around to get my dependencies in place. Their solution felt more like a service locator implementation than a DI solution, as I was unable to have clean constructors without DI-specific parameters like I was used to from Dagger2 and others.
  • In Koin I disliked that I had to basically wire all my dependencies together by hand; clearly this is the task that the DI library should do for me!

Looking for alternatives I stumbled upon Magnet. And I immediately fell in love with it.

Magnet Quick-Start

To get up to speed, let’s compare Magnet with Dagger2 by looking at the specific terms and concepts both libraries use.

Dagger2 → Magnet

  • Component → Scope
  • Subcomponent → Scope (sub-scope created via createSubscope())
  • Module → no direct equivalent; provisioning is declared at class level
  • @Inject → @Instance on class level
  • @Provides → @Instance on class level or provide method
  • @Binds → @Instance on class level or provide method
  • @Component → no direct equivalent; scopes are created via Magnet.createRootScope()
  • @Singleton → @Instance with scoping = TOPMOST
  • @Named("...") → @Instance / bind() with classifier = "..."
  • dagger.Lazy<Foo> → Kotlin Lazy<Foo>
  • dagger.Provider<Foo> → @Instance with scoping = UNSCOPED
  • Dagger Android → Custom implementation needed, like this

Don’t be afraid, we’ll discuss everything above in detail.

Initial Setup

Magnet has a compiler and a runtime that you need to add as dependencies to each application module you’d like to use Magnet with:

dependencies {
  implementation "de.halfbit:magnet-kotlin:3.3-rc7"
  kapt "de.halfbit:magnet-processor:3.3-rc7"
}

The magnet-kotlin artifact pulls in the main magnet dependency transitively and adds a couple of useful Kotlin extension functions and helpers on top of it. The annotation processor magnet-processor generates the glue code to be able to construct your object tree later on. Besides that there are other artifacts available, which we’ll come back to later on.

Now that the dependencies are in place, Magnet needs an assembly point in your application to work. This tells Magnet where it should create its index of injectable things in your application, basically a flat list of all features included in the app via its Gradle dependencies section. The assembly point can be written as an interface that you annotate with Magnet’s @Registry annotation:

import magnet.Registry   

@Registry
interface Assembly

The main application module is usually the module that contains this marker interface, but it could just as well be the main / entry module of your library.

About scopes

Scopes in Magnet act similarly to Components in Dagger2. They can be seen as “bags of dependencies”, holding references to objects that have previously been provisioned. Scopes can contain child scopes, which in turn can contain child scopes themselves.

There is no limit to how deep you can nest your scopes; in Android application development, however, you should usually have at least one scope per screen in addition to the root scope, which we’ll discuss in a second.

Scopes are very easy to create and also very cheap, so it can be useful to create additional scopes for certain time-limited tasks: a separate scope for a background service, for example, or even a scope for a specific process-intensive piece of functionality that requires the instantiation of several classes that are not needed outside of this specific task. This way, the memory used by these classes can be reclaimed quickly, by letting the particular scope and all its referenced class instances become subject to garbage collection shortly after the task has finished, as sketched below.
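
A small sketch of that idea (Worker is a placeholder type; the JobManager example further down shows the full pattern):

val taskScope = rootScope.createSubscope()
val worker: Worker = taskScope.getSingle()
worker.run()
// once no reference to taskScope is kept anymore, the scope and all
// instances it holds become eligible for garbage collection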

Creating the Root Scope

With the assembly point in place, we can actually create what Magnet calls the Root Scope. This, as the name suggests, is the root of any other scope that you might create. In this way it is comparable to what you’d usually call the application component in Dagger2, so you should create it in your application’s main class (your Application subclass on Android, for example) and keep a reference to it there. We do this as well, but at the same time add a little abstraction to make it easier to retrieve a reference to this (and possibly other) scopes later on:

// ScopeOwner.kt
interface ScopeOwner {
    val scope: Scope
}

val Context.scope: Scope
    get() {
        val owner = this as? ScopeOwner
            ?: error("$this is expected to implement ScopeOwner")
        return owner.scope
    }

// MyApplication.kt
class MyApplication : Application(), ScopeOwner {
    override val scope: Scope by lazy {
        Magnet.createRootScope().apply {
            bind(this@MyApplication as Application)
            bind(this@MyApplication as Context)
        }
    }
}

You see that the root scope is created lazily, i.e. on first use. While Magnet scopes aren’t as heavy on object creation as Dagger2 components, it’s still a good pattern to do it this way.
In addition, you see that right after the root scope is created, we bind two instances into it: the application context and the application itself. The bind(T) method comes from magnet-kotlin and simply calls into a method whose signature is bind(Class<T>, T).

Creating subscopes

Once the root scope is available, you’re free to create additional sub-scopes for different purposes. This is done by calling scope.createSubscope(). A naive implementation of an “activity scope” could, for example, look like this:

class BaseMagnetActivity : AppCompatActivity(), ScopeOwner {
    override val scope: Scope by lazy {
        application.scope.createSubscope()
    }
}

But unfortunately this wouldn’t bring us very far, since this scope would be created and destroyed every time the underlying Activity is restarted (e.g. on rotation). With AndroidX’s ViewModel library, however, we can create a scope that is not attached to the fragile Activity (or Fragment), but to a separate ViewModel that is kept around and only destroyed when the user finishes the component or navigates away from it. While the glue code to set up such a thing is not yet part of Magnet, it’s no big wizardry to write it yourself. You might want to take some inspiration from my own solution.
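
A rough sketch of the idea (assuming AndroidX ViewModel; the class name is illustrative): hold the subscope in a ViewModel so it survives Activity restarts:

class ScopeHolderViewModel(app: Application) : AndroidViewModel(app) {
    // created once, retained across configuration changes
    val scope: Scope = app.scope.createSubscope()
}

class BaseMagnetActivity : AppCompatActivity(), ScopeOwner {
    override val scope: Scope
        get() = ViewModelProvider(this).get(ScopeHolderViewModel::class.java).scope
}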

Provisioning of instances

Above we’ve seen how to bind existing object instances into a particular scope, but of course a DI library should also allow you to automatically create new instances of classes, without you having to care about the details of the actual creation, like required parameters.

Magnet does this, of course, and in addition introduces a novel approach to where it places the resulting instances in your object graph: auto-scoping.

Auto-scoping means that Magnet is smart about figuring out which dependencies your new instance needs and in which scopes those instances themselves are placed. It then determines the top-most location in your scope tree your new instance can go to and places it exactly there. If the top-most location happens to be the root scope, the instance becomes available globally. This mimics the behavior of Dagger2 when you annotate a type with @Singleton:

@Instance(type = Foo::class, scoping = Scoping.TOPMOST)
class Foo {}

It’s important to understand that with Magnet you only ever annotate types (and eventually pure functions, see below), but never constructors. This is a major gotcha when coming from Dagger2, where you place the @Inject annotation directly at the constructor that Dagger2 should use to create the instance of your type. This also means that Magnet is a bit picky and requires you to only have a single visible constructor for your type (package-protected or `internal` is possible as well), otherwise you’ll receive an error.

The option Scoping.TOPMOST that you see in the example above, which triggers auto-scoping, is the default, so it can be omitted. Besides TOPMOST there are also DIRECT and UNSCOPED, which, as their names suggest, override auto-scoping by placing an instance directly in the scope from which it was requested (DIRECT) or in no scope at all (UNSCOPED). The latter is very useful as a factory pattern and can be compared to Dagger2’s Provider<Foo> feature.
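
For illustration, a sketch of UNSCOPED used as a factory (ReportBuilder is a made-up type):

@Instance(type = ReportBuilder::class, scoping = Scoping.UNSCOPED)
internal class ReportBuilder

val scope = ...
// every retrieval creates a fresh instance that is not kept in any scope
val first: ReportBuilder = scope.getSingle()
val second: ReportBuilder = scope.getSingle() // a different instance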

Now, while this auto-scoping mechanism sounds awesome at first, there might be times when you want a little more control over what is going on. For example, you might have a class that does not directly depend on anything in your current scope, but that you still want to live in a specific (or “at-most-top-most”) scope, because it is not useful globally and would just occupy heap space if kept around. This can be achieved as well, simply by tagging a scope with a limit and applying the same tag to the provisioning you want to limit:

const val ACTIVITY_SCOPE = "activity-scope"

val scope = ...
scope.limit(ACTIVITY_SCOPE)

@Instance(type = Foo::class, scoping = Scoping.TOPMOST, limitedTo = ACTIVITY_SCOPE)
class Foo {}

With all that information, let’s discuss a few specific examples. Consider a scope setup consisting of the root scope, the sub-scope “A” (tagged with SCOPE_A), which is a child of the root scope, and the sub-scope “B”, which is a child of sub-scope “A”. Where do specific instances go?

  • New instance without dependencies and scoping = TOPMOST – Your new instance goes directly into the root scope.
  • New instance with dependencies that themselves all live in the root scope and scoping = TOPMOST – Your new instance goes directly into the root scope.
  • New instance with at least one dependency that lives in a sub-scope “A” and scoping = TOPMOST
    • If requested from the root scope, Magnet will throw an error at runtime, because the specific dependency is not available in the root scope
    • If requested from the sub-scope “A”, the new instance will go into the same scope
    • If requested from the sub-scope “B”, the new instance will go into sub-scope “A”, the “top-most” scope that this instance can be in
  • New instance without dependencies, scoping = TOPMOST and limitedTo = SCOPE_A
    • If requested from the root scope, Magnet will throw an error at runtime, because the root scope is not tagged at all and there is no other parent scope available that Magnet could check to match the limit configuration
    • If requested from the sub-scope “A”, the new instance will go into the same scope
    • If requested from the sub-scope “B”, the new instance will go into sub-scope “A”, the “top-most” scope that this instance is allowed to be in because of its limit configuration
  • New instance with arbitrary dependencies and scoping = DIRECT – Your instance goes directly into the scope from which you requested it
  • New instance with arbitrary dependencies and scoping = UNSCOPED – Your instance is just created and does not become part of any scope

Provisioning of external types

Imagine you have some external dependency in your application that contains a class that one of your own classes depends on. In Dagger2 you have to write a custom provisioning method to make this type “known” to the DI. The process is similar in Magnet: like Dagger2, it does not use reflection to instantiate types, so you also have to write such provisioning methods.

But in case the library you’re integrating was itself built with Magnet, then Magnet has already created something it calls a “provisioning factory”, which was likely packaged within the library. In this case Magnet will find that packaged provisioning factory, and you don’t need to write custom provisioning methods yourself!

So how exactly are these provisioning methods written? Well, it turns out Magnet’s @Instance annotation is allowed not only on types, but on pure (static, top-level) functions as well:

@Instance(type = Foo::class)
fun provideFoo(factory: Factory): Foo = factory.createFoo()

A best practice for me is to add all those single provisions to a separate file that I usually call StaticProvision.kt and that I put in the specific module’s root package. There it is easy to find, and it will contain not only the provision methods but also other global configurations / constants that might be needed for the DI setup.

Provision different instances of the same type

Magnet supports providing and injecting different instances of the same type in any scope. All injections and provisions we did so far used no classifier (Classifier.NONE), but this can easily be changed:

// Provision
internal const val BUSY_SUBJECT = "busy-subject"

@Instance(type = Subject::class, classifier = BUSY_SUBJECT)
internal fun provideBusySubject(): Subject<Boolean> =
    PublishSubject.create<Boolean>()

// Injection
internal class MyViewModel(
    @Classifier(BUSY_SUBJECT) private val busySubject: Subject<Boolean>
) : ViewModel() { ... }

Of course you can also bind instances with a specific classifier; the bind(...) method accepts an optional classifier parameter with which you can “tag” the instance of the type as well. This is useful, for example, if you want to bind Activity intent data or Fragment argument values into your scope so that they can be used in other places, as sketched below.
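
A hedged sketch of that technique (assuming the classifier overload of magnet-kotlin’s bind(); the constant and the extra name are made up):

const val USER_ID = "user-id"

// inside the Activity, when creating its scope
val scope = application.scope.createSubscope {
    bind(intent.getStringExtra("userId")!!, USER_ID)
}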

Provisioning while hiding the implementation

You might have wondered why each @Instance provisioning configuration repeats the type being provided. The reason is that you can specify another base type (and even multiple base types!) that you want your instance to satisfy. This allows you to easily hide your implementation and have just the interface “visible” in your dependency tree outside your module.

Consider the following:

interface FooCalculator {
    fun calculate(): Foo
}

@Instance(type = FooCalculator::class)
internal class FooCalculatorImpl : FooCalculator { ... }

While this is obviously something Dagger2 allows you to do as well, the configuration there is usually detached: you specify an abstract provision of FooCalculator in a FooModule that, with luck, lives near the interface and implementation, but often enough it does not, because Dagger2 modules are tedious to write and most people reuse existing module definitions for all kinds of provisionings.

Magnet’s approach here is clean, concise and simple; so simple, actually, that most of the time I no longer separate interface and implementation into different files, but keep them directly together.

Providing Scope

One not so obvious thing is that Magnet is able to provide the complete Scope a specific dependency lives in as a dependency itself. This might seem counter-intuitive at first, as it makes Magnet look like a service locator implementation, but there are use cases where it becomes handy.

Imagine you have a scheduled job to execute, and the worker of this job needs a specific set of classes to be instantiated and available during the execution of the job. It might be the case, however, that multiple workers are kicked off in parallel, so each worker instance needs its own set of dependencies, as some of them also hold state specific to the worker. How would one implement these requirements with Magnet?

Well, it looks like we could create a sub-scope for each worker and keep them separated that way, like so:

@Instance(type = JobManager::class)
class JobManager(private val scope: Scope, private val executor: Executor) {
    fun start(firstParam: Int, secondParam: String) {
        val subScope = scope.createSubscope {
            bind(firstParam)
            bind(secondParam)
        }
        val worker: Worker = subScope.getSingle()
        executor.execute(worker)
    }
}

@Instance(type = Worker::class)
class Worker(private val firstParam: Int, private val secondParam: String) : Runnable {
    override fun run() { ... }
}

Injecting dependencies

Now that we’ve talked at great length about how you provide dependencies in Magnet, how do you actually retrieve them once they are provided?

Magnet offers several ways to retrieve dependencies:

  • Scope.getSingle(Foo::class) (or a simple dependency on Foo in your class’ constructor) – This will try to retrieve a single instance of Foo while looking for it in the current scope and any parent scope. If it fails to find an instance, it will throw an exception at runtime. If several instances of Foo can be found / instantiated, it will also throw an exception.
  • Scope.getOptional(Foo::class) (or a simple dependency on Foo? in your class’ constructor) – This will try to retrieve a single instance of Foo while looking for it in the current scope and any parent scope. If it fails to find an instance, it will return / inject null instead. If several instances of Foo can be found / instantiated, it will throw an exception.
  • Scope.getMany(Foo::class) (or a simple dependency on List<Foo> in your class’ constructor) – This will try to retrieve multiple instances of Foo while, again, looking for them in the current scope and any parent scope. If no instance is provided, an empty list is returned / injected instead.
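
In code, the three retrieval styles look like this (Foo is a placeholder):

val single: Foo = scope.getSingle()       // throws if absent or ambiguous
val optional: Foo? = scope.getOptional()  // null if nothing is provided
val many: List<Foo> = scope.getMany()     // empty list if nothing is provided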

An important difference to Dagger2 here is that it is not the provisioning side that determines whether a list of instances is available (annotated in Dagger2 with @Provides @IntoSet); instead, the injection side requests a list of things. Also, there is no way to provision a map of <key, value> pairs in Magnet, but this limitation is easy to work around by provisioning a List of instances of a custom type that resembles both key and value:

interface TabPage {
    val id: String
    val title: String
}

@Instance(type = TabPagesHost::class)
internal class TabPagesHost(pages: List<TabPage>) {
    private val tabPages: Map<String, TabPage> = pages.associateBy { it.id }
}

Optional features

Now you might not have noticed it in the last section, but the ability to retrieve optional dependencies in Magnet is actually quite powerful.

Imagine you have two modules in your application, foo and foo-impl. The foo module contains a public interface that foo-impl implements:

// `foo` module, FooManager.kt
interface FooManager {
fun doFoo()
}

// `foo-impl` module, FooManagerImpl.kt
@Instance(type = FooManager::class)
internal class FooManagerImpl() : FooManager {
fun doFoo() { ... }
}

Naturally, foo-impl depends on the foo module, but in your app module it’s enough to depend on foo for the time being to already make use of the feature:

// `app` module, build.gradle
android {
    productFlavors {
        demo { ... }
        full { ... }
    }
}
dependencies {
    implementation project(':foo')
}

// `app` module, MyActivity.kt
class MyActivity : BaseMagnetActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        ...
        findViewById<View>(R.id.some_button).setOnClickListener {
            val fooManager: FooManager? = scope.getOptional()
            fooManager?.doFoo()
        }
    }
}

Now, if you also make foo-impl available on the classpath (e.g. through a different build variant or a dynamic feature implementation), your calling code above will continue to work without changes:

// `app` module, build.gradle
dependencies {
    fullImplementation project(':foo-impl')
}

How cool is that?

Remember, though, that this technique only works in the specific module that acts as the assembly point (see above), so if you have a more complex module dependency hierarchy, you can’t manage optional features in a nested manner.

App extensions

AppExtensions is a small feature that is packaged as an additional Magnet module. It allows you to extract all the code you typically keep in your application class into separate extensions, split by functionality, keeping the application class clean and “open for extension and closed for modification” (the Open-Closed Principle). Here is how you’d set it up:

// `app` module, build.gradle
dependencies {
  implementation "de.halfbit:magnetx-app:3.3-rc7"
}

Then add the following code into your Application subclass:

class MyApplication : Application(), ScopeOwner {
    ...

    private lateinit var extensions: AppExtension.Delegate

    override fun onCreate() {
        super.onCreate()
        extensions = scope.getSingle()
        extensions.onCreate()
    }

    override fun onTrimMemory(level: Int) {
        extensions.onTrimMemory(level)
        super.onTrimMemory(level)
    }
}

There are many AppExtensions available, e.g. for LeakCanary, and you can even write your own. Try it out!
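
As a loose sketch of what writing your own could look like (assuming the AppExtension interface exposes callbacks mirroring the Delegate shown above; StrictModeExtension is made up):

@Instance(type = AppExtension::class)
internal class StrictModeExtension : AppExtension {
    override fun onCreate() {
        // e.g. enable StrictMode for debug builds here
    }
}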

Debugging Magnet

Due to its dynamic nature, it might not always be totally obvious which scope a certain instance lives in. That is where Magnet’s Stetho support comes in handy.

First, add the following two dependencies to your app’s debug configuration:

// `app` module, build.gradle
dependencies {
    debugImplementation "de.halfbit:magnetx-app-stetho-scope:3.3-rc7"
    debugImplementation "com.facebook.stetho:stetho:1.5.1"
}

This adds an app extension to Magnet that contains some initialization code to connect to Stetho and dump the contents of all scopes to it. In order for the initialization code to be executed, your Application class needs to contain the AppExtensions code shown in the previous section.

Now when you run your application, you can inspect it with Stetho’s dumpapp tool (just copy the dumpapp script and stetho_open.py into your project tree from here):

$ scripts/dumpapp -p my.cool.app magnet scope

Note that you need an active ADB connection for this to work. If you stumble upon errors, first check whether adb devices shows the device you want to debug, and restart the ADB server / reconnect the device if it does not. The output then looks like this:

  [1] magnet.internal.MagnetScope@1daafe1
BOUND Application my.cool.app.MyApplication@2906100
BOUND Context my.cool.app.MyApplication@2906100
TOPMOST SomeDependency my.cool.app.SomeDependency@2bd93c7
...
[2] magnet.internal.MagnetScope@f6213e5
BOUND CompositeDisposable io.reactivex.disposables.CompositeDisposable@6f06eba
...
[3] magnet.internal.MagnetScope@4c964c8
BOUND CompositeDisposable io.reactivex.disposables.CompositeDisposable@1250961
TOPMOST SomeFragmentDependency my.cool.app.SomeFragmentDependency@7a6bc86
...
[3] magnet.internal.MagnetScope@d740574
BOUND CompositeDisposable io.reactivex.disposables.CompositeDisposable@fc7da9d
TOPMOST SomeFragmentDependency my.cool.app.SomeFragmentDependency@bf4ff74
...

The number in [] brackets indicates the scope level, where [1] stands for the root scope, [2] for an activity scope and [3] for a fragment scope in this example. After that, the type of binding is written in upper-case letters: things that are manually bound to the scope via Scope.bind() are denoted as BOUND, things that are automatically bound to a specific scope / level are denoted as TOPMOST, and things that are directly bound to a specific scope are denoted as DIRECT. Instances provisioned with UNSCOPED aren’t listed here because, as we learned, they are not bound to any scope.

Roundup

Magnet is a powerful, easy-to-use DI solution for any application, but primarily targeted on large multi-module mobile apps.

There are a few more advanced features that I haven’t covered here, like selector-based injection. I’ll leave this as an exercise for the reader to explore and try out for her/himself 🙂

Anyway, if you made it this far, please give Magnet a chance and try it out. Due to its non-pervasive nature it can co-exist with other solutions side by side, so you don’t have to convert existing applications all at once.

Many thanks to Sergej Shafarenka, the author of Magnet, for proofreading this blog.

New PGP Key

I think it was about time to get a new one. While I do not get much encrypted / signed email, the old key from 2003 used a DSA/ElGamal combination that is considered less secure by today’s standards. Since I had a couple of signatures on the old key, I made sure to sign the new one with the old one, to get at least “some” initial trust on it as well.

tl;dr Here is the new key: 0xCD45F2FD

And for those of you who want to span a more “social” web of trust with me, I’m also on keybase.io and have a couple of invites left as you can see 🙂

Batch-remove empty lines at the end of many Confluence pages

In a customer project we decided to collaboratively write a bigger bunch of documentation in Atlassian’s Confluence and export it with Scroll Office, a third-party Confluence plugin, into Word.

That worked fine so far, but soon we figured out that we had been kind of sloppy with empty lines at the end of each page, which were obviously carried over into the final document. So instead of going over each and every page and removing the empty lines there, I thought it might be easier to do this directly on the database, in our case MySQL.

The query was quickly developed, but then I realized that MySQL has no PREG_REPLACE function built in, so I first needed to install a UDF, a user-defined function. Luckily, this UDF worked out of the box, and so the query could be finalized:

UPDATE BODYCONTENT
JOIN CONTENT ON CONTENT.CONTENTID=BODYCONTENT.CONTENTID
AND CONTENTTYPE LIKE "PAGE" AND PREVVER IS NULL
SET BODY=PREG_REPLACE("/(<p><.p>)+$/", "", BODY)
WHERE BODY LIKE "%<p></p>";

This query updates all current pages (not old versions) across all spaces that end with at least one empty line – an empty <p></p> paragraph is Confluence’s internal markup for that – and removes all of these trailing empty lines from all matched pages.

This was tested with MySQL 5.5.35, lib_mysqludf_preg 1.2-rc2 and Confluence 5.4.2.

I don’t need to mention that it is – of course – highly recommended that you backup your database before you execute this query on your server, right?

Custom polymorphic type handling with Jackson

Adding support for polymorphic types in Jackson is easy and well documented here. But what if neither the class-based nor the property-based (@JsonSubTypes) default type ID resolvers fit your use case?

Enter custom type ID resolvers! In my case, a server returned an identifier for a Command that I wanted to match one-to-one to a specific “sub-command” class, without having to configure each of these identifiers in a @JsonSubTypes configuration. Furthermore, each of these sub-commands should live in the .command package beneath the base command class. So here is what I came up with:

@JsonTypeInfo(
    use = JsonTypeInfo.Id.CUSTOM, 
    include = JsonTypeInfo.As.PROPERTY,
    property = "command"
)
@JsonTypeIdResolver(CommandTypeIdResolver.class)
public abstract class Command {
    // common properties here
}

The important part, beside the additional @JsonTypeIdResolver annotation, is the use argument that is set to JsonTypeInfo.Id.CUSTOM. Normally you’d use JsonTypeInfo.Id.CLASS or JsonTypeInfo.Id.NAME. Let’s see how the CommandTypeIdResolver is implemented:

public class CommandTypeIdResolver implements TypeIdResolver {
    private static final String COMMAND_PACKAGE = Command.class.getPackage().getName() + ".command";
    private JavaType mBaseType;
    @Override
    public void init(JavaType baseType) {
        mBaseType = baseType;
    }

    @Override
    public Id getMechanism() {
        return Id.CUSTOM;
    }

    @Override
    public String idFromValue(Object obj) {
        return idFromValueAndType(obj, obj.getClass());
    }

    @Override
    public String idFromBaseType() {
        return idFromValueAndType(null, mBaseType.getRawClass());
    }

    @Override
    public String idFromValueAndType(Object obj, Class<?> clazz) {
        String name = clazz.getName();
        if (name.startsWith(COMMAND_PACKAGE)) {
            return name.substring(COMMAND_PACKAGE.length() + 1);
        }
        throw new IllegalStateException("class " + clazz + " is not in the package " + COMMAND_PACKAGE);
    }

    @Override
    public JavaType typeFromId(String type) {
        Class<?> clazz;
        String clazzName = COMMAND_PACKAGE + "." + type;
        try {
            clazz = ClassUtil.findClass(clazzName);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException("cannot find class '" + clazzName + "'");
        }
        return TypeFactory.defaultInstance().constructSpecializedType(mBaseType, clazz);
    }
}

The two most important methods here are idFromValueAndType and typeFromId. For the first, I get the name of the class to serialize and check whether it is in the right package (the .command package beneath the package where the Command class resides). If this is the case, I strip off the package path and return the rest to the serializer. For the latter method I go the other way around: I try to load the class with Jackson’s ClassUtil, using the class name I got from the deserializer with the expected package name prepended. And that’s already it!
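
To round it off, a hedged usage sketch (EchoCommand is a hypothetical subclass living in the .command package; written in Kotlin):

// given: class EchoCommand : Command() in the ".command" sub-package
val mapper = ObjectMapper()
val command: Command = mapper.readValue(
    """{"command": "EchoCommand"}""",
    Command::class.java
)
// command is an EchoCommand instance, resolved via CommandTypeIdResolver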

Thanks to the nice folks at the Jackson User Mailing List for pointing me into the right direction!