Nazi attack on a housing project in Dresden

I am actually opposed to any form of violence, no matter whether it comes from the left or the right. After [these shocking images from last weekend](http://www.spiegel.de/video/video-1110518.html), however, I am stepping back from that position.

People who refuse to learn must be stopped – and a horde of about 200 Nazis gone wild who pelt a residential building in Dresden with stones for 15 minutes, all the more so! How can it be that the police – positioned just a few metres away from the scene – are incapable of firing warning shots or intervening in any other way to put an end to these goings-on?

If lawmakers and the police remain so unwilling to effectively fight violent offenders with a National Socialist mindset, then in future I see the autonomous left as the last bastion against marauding right-wing extremism in this country. Then they have me as a supporter, because then I am clearly **in favour of vigilante justice**.

#112123

Today I completed my ScrumMaster certification. After the two days with [Joseph Pelrine in early November](http://www.thomaskeller.biz/blog/2010/11/05/why-i-do-open-source-development/) they finally sent me a login for their website scrumalliance.com, offered a short web-based test (which you couldn’t really fail) and tada – now I can call myself a [“Certified ScrumMaster”](/bewerbung/csm.pdf).

So everything seemed to look fine, but wait, why does this certificate have an expiry date tacked onto it? Uh, yes, erm, I’m supposed to renew it every year from 2012 onwards, for 50 bucks. It’s not like your driver’s license – once you’ve learned to drive your car you cannot unlearn it, right? All my knowledge about Scrum will of course be totally forgotten and buried somewhere in my head once the magic year 2012 arrives… now that’s business!

But hey, I got something in return. I’m listed with the other 112122 fellows on their website as a CSM, am “allowed” to use the term “CSM” and the “logo” (which I had to extract from the certificate itself with Photoshop, because they don’t offer a download), and I get the nice little Scrum community they have built around the website.

Thank god Joseph was entertaining and educational enough during those two days in November that the certification was worth all the (company) money. The certificate itself and the “services” around it certainly are not.

Sender verification failed – or How you should treat your customers correctly

For a couple of years now, one of the easiest yet very powerful anti-spam techniques has been sender verification. Spam is often sent from bogus email addresses which contain random strings and are therefore also hard to blacklist. With sender verification, the receiving mail server simply takes the sender address of every mail and asks the sending mail server whether it actually knows that user. If not, the receiving mail server will most likely discard the message right away with a “550: sender verify failed”. To avoid putting a high load on the sending server, the result is cached on the receiving one, so if you receive 20 mails from bob@foo.com, its sending server is probably only asked once (or not at all if it has been asked before).
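
On the wire such a check (a so-called callout) looks roughly like the following sketch – the host names are invented and the exact dialogue depends on the software involved. The receiving server opens an SMTP session to foo.com’s MX and probes the address without ever delivering a message:

    220 mx.foo.com ESMTP
    HELO mx.receiver.example          <- the receiving server introduces itself
    250 mx.foo.com
    MAIL FROM:<>                      <- empty envelope sender, as used for bounces
    250 OK
    RCPT TO:<bob@foo.com>             <- "does this user exist?"
    250 Accepted                      <- verification passes; a 550 here would fail it
    QUIT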

My exim instance has sender verification enabled by default and I like it, because ~90% of the spam doesn’t even need to be scanned by spamassassin, which in turn means lower server load. However, sender verification also causes problems sometimes, especially when automatically crafted emails from, let’s call them “legacy”, systems should reach me. You can of course replace “legacy” with “simple PHP mail script” or “shop frontend” if you like, as the administrators or developers of these systems are apparently completely unaware of the bad job they do when they fulfil the requirement “hey, the user should get this notification email, but make sure he won’t spam the support with questions about it, so use an invalid address…”

You know what follows: the novice, or sometimes not so novice, developer / administrator simply goes ahead and sets `noreply@host.com` as the `From:` address. Especially in shared hosting environments there is usually an MX server configured for the hosted domain which allows local relaying, so sending a mail from a PHP script like this

mail("joe@otherhost.com", "Hello", "It works!",
     "From: noreply@host.com\r\n");

seems so simple. Of course most of the time nobody remembers to tell the mail server of the `host.com` domain that there is suddenly a new (bogus) mail user within one of its managed domains! So how do you fulfil the “don’t spam the support” requirement then?

Well, the simplest way is to use an existing mail address which is known to the sending mail server as the `From:` address and to additionally add a `Reply-To:` header to your mail, which may then contain the bogus address. If the user clicks “reply” in his mail client, this reply-to address will pop up in the `To:` field and you achieve practically the same effect.
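
Applied to the snippet from above, that would look roughly like this – `support@host.com` being a stand-in for whatever real mailbox exists on the sending server:

    mail("joe@otherhost.com", "Hello", "It works!",
         "From: support@host.com\r\n" .      // an address the mail server actually knows
         "Reply-To: noreply@host.com\r\n");  // replies still go to the bogus address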

But probably the best way is of course to convince your management that it should not ignore customer inquiries by stupid procedures like this…

As a customer of several online services I have encountered this and similar mail problems a lot in the past. I cannot remember exactly when I stopped informing the individual webmaster or support team about the problems with their mail setup, simply because my inquiries were ignored most of the time. Consider this blog post a silent rant about all the crappily configured setups out there.

What is wrong with this picture?

The picture (source: AP) shows members of the German Bundestag walking up to cast their votes this Friday on the 22.4 billion euro emergency loan for Greece. I don’t even want to talk about the sense or nonsense of these loans; no, what puzzles me is quite specifically the situation captured in this picture:

The German Bundestag approves emergency aid in the billions at a time of empty coffers, declining tax revenues and a nation that has not yet fully recovered from the last economic crisis – and the members of parliament walk up, laughing and joking, to vote on an even bigger spending burden…?

Some of them seem to have lost touch with reality entirely – along the lines of “it’s not my money anyway…”. Dear representatives (for now), don’t be surprised if one day the people really do clean house in a big way.

Fuck php

Seriously, fuck it. Not only for its long-standing inconsistencies in the “user API” – no, I’ve also rarely seen a pile of source code with such a low comment-to-code ratio.

What am I trying to do? Debug `SoapClient` from `ext/soap` and figure out why it ignores my `typemap`. Yes, there is a trace mode – but that only fills the “private” `__getLastXXX()` methods, so I’m doing `printf()` debugging within PHP’s source, as if it were 1996 again. An amazing blast from the past.
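
For reference, this is roughly the kind of setup that gets ignored – a minimal sketch; the WSDL URL, namespace, type name and callbacks are placeholders, not my actual service:

    $client = new SoapClient('http://example.com/service?wsdl', array(
        'trace'   => 1,   // only fills __getLastRequest() / __getLastResponse()
        'typemap' => array(
            array(
                'type_ns'   => 'http://example.com/types',  // placeholder namespace
                'type_name' => 'SomeType',                  // placeholder type name
                'from_xml'  => function ($xml) { return simplexml_load_string($xml); },
                'to_xml'    => function ($obj) { return $obj->asXML(); },
            ),
        ),
    ));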

Oh, and in case you wonder whether PHP can finally fully interoperate with other standard SOAP servers like Apache’s Axis: no, it still can’t. The developer of ext/soap is busy with other tasks these days, quoting him: “[…] your WSDL uses overloaded functions […] and ext/soap doesn’t support them. I hardly believe it’ll support them in the future, in case nobody provide a patch.” – but hey, the aforementioned bug has only been open for five or so years, right?

Apparently the people who give PHP its “enterprise ready” reputation are just waiting until SOAP has died completely and everything has been replaced by the next best thing, or what? Stupid morons – they should look at the source code of their “product”, think about it for ten seconds, then run away screaming and beg the lord for forgiveness.

openSUSE madness

Just in case you wonder why a simple `sudo zypper install <package>` sometimes loads a dozen or more unneeded, but possibly related, packages: it’s not a bug, it’s a feature!

While Debian by default only points you to these additional packages during the install phase, openSUSE installs them all by default. Try it with `git` and you’ll get all of this: `cvsps git git-core git-cvs git-email git-gui gitk git-svn git-web libpurple-tcl subversion-perl tcl tk xchat-tcl`.

There are two ways to get rid of this nasty behaviour:

1. Temporarily, by adding the `--no-recommends` option to your call (see the example below)

2. Permanently by editing `/etc/zypp/zypper.conf` and configuring `installRecommends = no` in the `[solver]` section.
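
Both variants in one place, with `git` from the example above:

    # one-off: skip the recommended packages for this install only
    sudo zypper install --no-recommends git

    # permanent: /etc/zypp/zypper.conf
    [solver]
    installRecommends = no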

Hey, at least they offer an option to disable it, though it’s completely beyond me why anybody would want this enabled by default. Maybe they get a cookie for every additional package download…?

Twitter

A hail to all the Twitter users posting “I’m currently at XXX” status messages – without them, useful information like that on pleaserobme.com would not be possible.

Darling, do you happen to know where I’ve put my lock pick…?

Never trust doctrine:data-dump…

…and especially not if you get the impression that the dump will afterwards be readable by the `doctrine:data-load` command of symfony.

It was a costly lesson today when I tried to reimport a dump of a couple of Sympal tables. One of them, the one which models the menu items, has a nested set behaviour, and apparently this one cannot be restored properly by doctrine:

[Doctrine_Record_UnknownPropertyException]                                    
  Unknown record property / related component "children" 
  on "sfSympalMenuItem"

Apparently this particular issue has popped up a couple of times in the past for other people as well (Google for it), and while the help of `doctrine:data-dump` still (as of Doctrine 1.2) blatantly states

The doctrine:data-dump task dumps database data:

./symfony doctrine:data-dump

The task dumps the database data in data/fixtures/%target%.

The dump file is in the YML format and can be reimported
by using the doctrine:data-load task.

./symfony doctrine:data-load

(note the emphasis on “can be reimported”)

the author of Doctrine, Jonathan Wage, told me today on Sympal’s IRC (shortened):

<jonwage> we don’t want people to think you can dump and then restore
<jonwage> that is not what the data fixtures are for
<jonwage> b/c dumping and then loading will never work
<jonwage> an ORM modifies data on the way in and on the way out
<me> I mean the least thing doctrine could do there is that if it detects the nested set behaviour it should error out clearly on dump
<jonwage> so you can’t dump the data through an ORM and then try and reload it
<jonwage> i.e. hashed passwords
<me> if dumping is “never” going to work – why do you support dumping into yaml at all?!
<jonwage> if we do that then we would have to throw errors in sooooooo many other cases too
<jonwage> because it is at least a little bit of a convenience
<me> its like a half-baked feature then
<jonwage> we dump the raw data
<jonwage> and you can tweak it
<jonwage> thats my point though, it will ALWAYS be a half baked feature thats why we document it that way
<jonwage> it can NEVER work 100% the way you want it to
<jonwage> so if we fix that one thing, a million other things will be reported that we cannot fix
<jonwage> bc an ORM is not a backup and restore tool
<jonwage> it is impossible
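
His hashed-password example illustrates the point. A hypothetical sketch (nothing to do with Sympal’s actual code): if a model hashes its password in a `preSave()` hook, the dump contains the already-hashed value, and loading that dump runs it through the hook a second time:

    // hypothetical Doctrine 1 model, deliberately naive
    class User extends Doctrine_Record
    {
        public function preSave($event)
        {
            // runs on every save – so reimporting a dumped fixture hashes the hash
            $this->password = sha1($this->password);
        }
    }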

Now I know that as well. My only problem was that I kept wondering “what is wrong with my fixtures” the whole time and never dared to ask “what is wrong with doctrine”…

Doctrine Horror

My latest Symfony project uses Doctrine as ORM, which is considered to be a lot better than Propel by many people…

Well, not by me. Doctrine has a couple of very good concepts, amongst them built-in validators, a powerful query language and, last but not least, an easy schema language. (Though to be fair, Propel will gain most of these useful things in the future as well, or already has, e.g. with its `PropelQuery` feature.)

But Doctrine also fails in many areas; the massive use of overloads everywhere makes it very hard to debug and, even worse, it tries to outsmart you (the developer) in many places, which makes it even harder to debug the things Doctrine doesn’t get right.

A simple example – consider this schema:

Foo:
  columns:
     id: { type: integer(5), primary: true, autoincrement: true }
     name: { type: string }

Bar:
  columns:
     id: { type: integer(5), primary: true, autoincrement: true }
     name: { type: string }

FooBarBaz:
  columns:
     foo_id: { type: integer(5), primary: true }
     bar_id: { type: integer(5), primary: true }
     name: { type: string }

(I’ll skip the relation setup here, Doctrine should find them all with an additional `detect_relations: true`)

So what do you expect to see when you call this?

$obj = new FooBarBaz();
print_r($obj->toArray());

Well, I expected to get an empty object, with a `NULL`ed `foo_id` and `bar_id`, but I didn’t! For me `foo_id` was filled with a 1. Wait, where does this come from?

After digging deep enough into Doctrine_Record, I saw that this value is assigned automatically in the constructor, coming from a statically incremented `$_index` variable. I could undo it by writing my own constructor and calling `assignIdentifier()` like this:

class FooBarBaz extends BaseFooBarBaz 
{
   public function __construct()
   {
      parent::__construct();
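      // discard the identifier that Doctrine auto-assigned from its static $_index counter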
      $this->assignIdentifier(false);
   }
}

but now this object could no longer be added to a `Doctrine_Collection` (which is a bummer, because if you want to extend object lists with “default” empty objects, you most likely stumble upon a Doctrine_Collection, which is the default data structure returned for every SQL query).

So you might ask “Why the hell does all this impose a problem for you?”

Well, if you work with the `FooForm` that the doctrine plugin creates for you in Symfony and you want to add `FooBarBazForm` a couple of times via `sfForm::embedFormForEach` (similar to the use case described here), you suddenly have the problem that the embedded form for the appended new `FooBarBaz` object “magically” gets the `foo_id` of a wrong (maybe non-existing) `Foo` object, and you wonder where the heck that comes from…

I have learned my lesson over the last one and a half days. I promise I’ll never *ever* create a table with a multi-column primary key in Doctrine again, and I’m returning to Propel for my next project.

DHL outsmarts all

I sold a couple of things on eBay recently and went ahead to print labels for the packages on DHL.de. The user interface is clean and understandable up to a point, though they validate (or rather invalidate) my bank code for a GiroPay payment a bit late, and with a misleading error message (no, I have not entered a wrong bank code, it’s just that my bank isn’t set up for GiroPay). But this alone is not the reason for this post; the reason is what awaited me after I had been billed…

My expectations were simple: I thought I would be redirected to some page which would generate my parcel label on the fly, provide it as a simple PDF download, and that’s it. It actually was that easy with DHL’s German competitor Hermes, but no, DHL had to outsmart the whole process.

On the final page in DHL’s booking process you have three options:

1) Open the label in a Java applet for viewing and printing
2) Saving the label via the Java applet
3) Saving the label as PDF

The “default” way – using the Java applet – did not work for me at all on the Mac, neither in Safari nor in Firefox. The applet simply did not load, and “saving” the applet apparently meant, to the DHL guys, saving the HTML page in which the applet code is embedded… cool. It makes no difference whether the applet is embedded in an offline or an online web page if the applet *itself* does not work!

Anyway, the PDF link seemed the more familiar route, so I headed there and opened the downloaded file in OS X’s Preview.app. This was the result:

DHL pdf gone wild

The text in the red box on the left says that JavaScript is disabled in Adobe Reader. Are you serious, guys? Do I really have to install the bloatware Adobe Reader just to print out a simple label?

Apparently I had no other choice. If the money hadn’t already been theirs, I’d have stopped right there, but I was part of their process now. Adobe Reader 9.20 was only a 32 MB download away (thank god we all have broadband connections here – or do you think DHL would have paid for the roughly 90 minutes on a 56k dial-up connection?) and only 230 MB after installation, so I headed towards the big moment – would I finally be able to view and print my beloved label?!

(Image: dhl2)

Almost – now I recognized why JavaScript was needed here after all: the document was “dynamic” in the sense that it fetched the actual label data from a web service located at https://www.dhl.de/popweb/services/LabelService?wsdl. (You really thought you could hang up your dial-up connection after the Adobe Reader download?) All of the label’s contents are overprinted with “MUSTER” (German for “SAMPLE”), even after the data has been fetched. Printing only seemed to be possible once, via a special yellow “Postage print now” button which appeared right in the document once the data had been loaded. That button seemed to remove the “MUSTER” overprints from the final print, but I cannot tell you for sure, because I made the mistake of printing to a PDF printer in the print dialog – and Adobe’s Distiller then told me:

This PostScript file was created from an encrypted PDF file.
Redistilling encrypted PDF is not permitted.

Opening my original PDF a second time left me alone with my loss of roughly 6 euros:

There went my money

“This parcel label has already been printed”. Yeah, they got me. I played around and I lost. I tried to outsmart the process, but they outsmarted me. My last chance to see at least some of my money again is to contact their support – which I have done by now – but whatever comes out of it, they have definitely lost me as a customer.

I mean, seriously, this vendor lock-in is hilarious. It’s quite simple to create a unique pattern or scan code for a single package which cannot be re-used or tampered with – so what came over them to invent something this brain-dead instead? Either they’re overly paranoid, or their IT guys have never dealt with crypto, or they have some other huge pain in the ass which forces them and their customers into this process.