IntelliJ IDEA: German Keyboard Shortcuts Reference


Ultimate S Keyboard

Still trying to get used to IntelliJ IDEA.
One of the major gripes I still have: keyboard shortcuts. Not only do they differ from what I am used to (how dare they); quite often they simply do not work!

I am using a MacBook Air with OS X Lion, and many of the shortcuts used by IDEA (especially for debugging) are already claimed by OS X. So I switched off most of Lion’s shortcuts.

But still: I am using a German keyboard. A German laptop keyboard.
And IDEA insists on telling me that “Comment line with Line Comment” is


It is not. It is


On a German keyboard the underscore character sits on the key that holds the slash on American keyboards. No big deal, but all keyboard shortcuts using “[”, “]”, “{”, and “}” are useless. To type these characters on a German keyboard you have to press the “⌥” modifier, and that seems to severely confuse IDEA. Pressing the keys where these characters sit on an American keyboard doesn’t help either.
JetBrains seems to use some low-level keyboard access method that checks for the key pressed rather than the character delivered. I wouldn’t mind if they got their menus and keymap right, so that “⌘_” is displayed.

The company is located in the Czech Republic and in Russia. I would have expected them to have the same problem, but perhaps they only use American keyboards.

Instead of whining some more, I decided to create my own keyboard mapping. Simple enough. But among the many tools I am using on many different computers, I need a keyboard cheat sheet. And perhaps someone else does as well? Therefore I created a Word template that you may use to create your own cheat sheet:
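For reference, a custom keymap in IDEA is just an XML file in the keymaps folder of the preferences directory. A minimal sketch – the keymap name is my own invention, and you should verify the action id and keystroke via Preferences > Keymap:

```xml
<!-- Sketch of a custom IDEA keymap file. The name is made up;
     "meta MINUS" is the key that produces "_"/"-" on a German Mac keyboard. -->
<keymap version="1" name="Mac OS X German" parent="Mac OS X">
  <action id="CommentByLineComment">
    <keyboard-shortcut first-keystroke="meta MINUS"/>
  </action>
</keymap>
```

Once selected in the settings, the remapped shortcut also shows up correctly in the menus.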


Word template: IDEA-Keyboard-Shortcuts–1.0.docx
(The template is based on JetBrains’ own Reference Card)

I am quite sure that there are still some errors. So please put any corrections into the comments of this blog post.


I like to write documentation


VICTOR: I like to write documentation.

CHORUS: Hello, Victor!

Southland Paper mill, Kraft (chemical) pulp used in making newsprint, Lufkin, Texas (LOC)

There it is. I have admitted it. Yes, like most programmers who have suffered under heavy waterfall-like development processes, I still prefer running software over documentation. But whenever I get to work on a project with an existing code base, I really like to have some document that explains the most important concepts, the decisions, and the influences which led to these decisions. And I do not mean reference documentation like Javadoc. That is helpful (if done right), but you need to understand the code base before reference documentation can help you.

Even more important: when I am starting a new project, writing something down helps me clarify things in my own mind. And giving these drafts to other people makes it easier for them to spot the gaps in my reasoning. If I only have some pictures and scribbles on paper, I can wiggle out of a tough spot with eloquence (or what I consider eloquence) and some hand waving. I am good at that.

Writing forces me to think things through. And writing it down with a reader in mind helps me get the priorities right. I once had a summer “job” where I had to explain board games to other people. As fun as that sounds, explaining games really is good training in telling the important things first and putting yourself in someone else’s shoes. I still think this experience helped me become better at communicating – but writing is still a challenge.

When to write

We may discuss when we should write documentation. I still prefer to write most of it after the implementation, because then I am quite sure to get it right. But if there are multiple parties (vendors) involved, or if it is part of a proposal, it has to be written before the implementation starts. You may call that a “specification”, but for me it still is a kind of documentation, or it should become part of the documentation.

Which tool(s) to choose

Text-Style, off Brick Lane, East London

Now that’s where things get difficult. I do not know. This is one reason why I am writing this piece. If I write everything alone, but someone else must be able to edit it, I still propose Microsoft Word (TM). Yes, it still has its quirks (never use a floating text frame. Never!). If the document is mostly text, I really love MultiMarkdown. It is fast and simple to write (even on my iPad, like now). Being able to get beautiful documents via LaTeX is one of its major advantages.

But. But MultiMarkdown supports only so many styles. What if I need a style for shell output, a style for class and method names inside a paragraph, and so on? I can use “native” markup in LaTeX or HTML – but not for both. And it makes the text I am writing much harder to read. The beauty of (Multi)Markdown gets lost if there is too much markup in there.
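To illustrate (with class names I made up for this example): as soon as the extra styles are spelled out as raw HTML spans, a MultiMarkdown paragraph stops being pleasant to read in source form:

```markdown
After <span class="shell">mvn package</span> has run, call
<span class="methodname">run()</span> on the
<span class="classname">Deployer</span> instance.
```

Three inline styles in one sentence, and the plain-text flow is already gone.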

I could still switch over to LaTeX. But here comes the rub: not too many people are willing to switch with me. In many projects it has to be Word. I might be able to sell MultiMarkdown, but not LaTeX (not to mention DocBook, which is really awful).

The biggest problem with Word: Merging.

You can get around many problems with Word if you force everyone to use styles only. No manual formatting! You can even mark all manually formatted text.

But if you have to write a larger document with multiple contributors, it gets really difficult. Merging in Word has gotten better, but nothing compares to a really good editor that is capable of supporting a three-way merge. Sigh.

I recently came across Ulysses, which sounds exactly like what I was looking for. You can add your own styles (you just have to define the delimiters and the transformation into LaTeX and HTML). You can even create Word documents (but without styles). … if only there were a Windows version.

I am currently thinking of extending MultiMarkdown, but I still hope someone has a better solution …


QCon highlights


A QCon presentation style review (mostly)

At QCon this year the influence of Presentation Zen could be seen everywhere. But I must admit: it didn’t help much. Sure, the quality of the talks and their content was as great as ever, but I am not sure Garr Reynolds’s influence helped the presentations that much. It mostly felt artificial. The slides may have improved, but they looked so similar, so unoriginal. And the presenters were still craning their necks, standing behind the lectern, or, if they dared to stray away, they had to return immediately to push the forward button (please buy a decent presenter remote, like this for OS X or this for Windows). One of the exceptions: Sam Newman’s talk From Dev to Production: Better Living through Build Pipelines and Teamwork. He seemed so at ease with his material, so natural, that it was a pure delight to watch. I’ve been told that it might be difficult to get him to stop talking, but in this presentation it came together nicely. Thank you, Sam.

Simon Wardley gave a slick presentation as well. His explanation of the cloud “movement” from a historical and economic viewpoint was spot on. I will especially take with me his view that it’s not a question of ‘if’ you are going to move to the cloud, but ‘when’: computing power is becoming a commodity, and if you treat it otherwise you will fall behind, because you will not be able to innovate on top of it as fast as others do.

His presentation, on the other hand, was completely over the top: 400+ slides, well rehearsed, but after some time it became tiresome. He simply overdid it and seemed to enjoy himself so much that he might as well have done it in front of a mirror. Please cut it down next time.

The slides I probably liked most were Nat Pryce’s, simply because they were original: he did them completely by hand on a tablet PC, brushing them up with Inkscape. It surely helps that he has very legible handwriting.

He rushed a little through his different case studies; I think it would have been better to have at most two examples and to go a bit deeper into his material on testing asynchronous systems. Still, his advice was helpful.

The funniest presentation came from Dan North; I hope they publish the video ASAP. Without Dan’s delivery, the slides are only half as fun. That by no means says the slides are only half as good; it just means that Dan is a great presenter. You can follow him on Twitter – I think it’s worth it.

And I have to follow his advice and buy myself a rubber duck! Yes, a rubber duck: whenever I am furiously hacking on something, losing myself, I look at my rubber duck and ask it: is this worth it? And the rubber duck just looks back: duh? Ok, it’s not worth it, thank you. Very helpful.

Ralph Johnson, on the other hand, was rather boring, talking about refactoring; nothing new.

Optimizing CPU cache access in Java

The presentation that absolutely made my head explode was “LMAX – How to do over 100K contended complex business transactions per second at less than 1ms latency”. They really pushed performance optimization in Java beyond any limit I had ever considered. They even padded a data structure with additional bytes to ensure that two essential parts (the head and the tail of their own list implementation) would land on two different CPU cache lines, to reduce contention. Most of their other optimizations were not so far off, but – for me – unheard of in the Java world.
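A hedged sketch of that padding trick (class and field names are mine; the real LMAX code is more involved and uses memory barriers): on a CPU with 64-byte cache lines, seven unused longs on each side keep head and tail from sharing a line, so the producer core and the consumer core do not constantly invalidate each other’s caches (“false sharing”).

```java
// Sketch of cache-line padding to avoid false sharing.
// Assumes 64-byte cache lines; 7 longs * 8 bytes = 56 bytes of padding
// around each hot field. All names here are made up for illustration.
public class PaddedCounters {
    @SuppressWarnings("unused")
    private long p1, p2, p3, p4, p5, p6, p7; // padding before head
    private volatile long head;              // advanced by the consumer thread
    @SuppressWarnings("unused")
    private long q1, q2, q3, q4, q5, q6, q7; // padding between head and tail
    private volatile long tail;              // advanced by the producer thread

    // Single-writer style: each field is incremented by exactly one thread,
    // so the non-atomic increment is acceptable in this sketch.
    public long claimNext() { return tail++; }
    public long consume()   { return head++; }
}
```

Note that a JIT compiler may reorder or even eliminate unused fields, which is one reason the production implementations are more careful than this sketch.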

Facebook created their own PHP compiler & runtime

Where LMAX pushed the Java limits, Facebook created their own PHP compiler to improve the performance of their web site. And they open-sourced it (see: HipHopBlog and HipHop), because they had profited in so many ways from open source software that they wanted to give something back. Great move. But what really resonated with me was their “culture”. They …

  1. … create everything in very small teams
  2. … prefer to push new features out there and fix any (scalability) problems when they occur

Sure, the last principle does not work for everyone, and you have to weigh the image problems when something really goes wrong, but sometimes we simply are way beyond sanity. Sometimes it feels as if an IT department is mostly trying to avoid getting anything out the door. Yes, too often those who have to stand up in the middle of the night aren’t the ones who caused the problems in the first place. But what about taking Sam’s advice and wearing the pager yourself for a week after the release? Okay, then you have to have access to production systems yourself, and so on, yadda, yadda, yadda. Try it. At least ask (“we can’t possibly do that”), and then ask again (“we have never done it”) – and again (“well …”).

Let it REST

Stefan Tilkov is still trying to convince everyone to use REST. Luckily he has lost some of his zeal and even acknowledges that there are some difficulties in adopting the REST mindset. But most of his claims were less provocative than he might have thought, so I could agree with him most of the time. (And I must still thank him, because his zeal provoked me into blogging in the first place – see my very first blog entry.)

But I must admit that the best part was “stolen” from Jim Webber (as he explicitly and happily admitted himself; please check out the keynote by Jim and Martin Fowler from QCon 2008): Should an ESB be the base of your SOA? Or rather: which problem does an ESB solve?

If you happen to stumble upon the usual enterprise systems spaghetti landscape:

The idea of an ESB that cleans up the mess might be tempting. Everything now finally looks orderly; everything has its place. Something we like: order. Cleanliness.

But what happens if you open up the lid?


His best argument for REST (from my point of view) is this: when you look at most APIs, they are 90% CRUD, so why use something that might be better for the remaining 10%? This undermines my strongest counter-argument, that some important services really do not fit the REST style. Yes, they do not fit well – but is that a good argument for the WS-* stack?
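The 90/10 split can be made concrete with a toy mapping (the resource names and the class are made up for illustration): the plain CRUD operations translate mechanically onto HTTP verbs, and only the rest needs actual design work.

```java
import java.util.Map;

// Toy illustration of the "90% CRUD" argument: plain CRUD operations
// map mechanically onto HTTP verbs; anything else needs real design.
public class CrudMapping {
    private static final Map<String, String> VERBS = Map.of(
            "create", "POST",
            "read",   "GET",
            "update", "PUT",
            "delete", "DELETE");

    public static String toHttp(String crudOp, String resource) {
        String verb = VERBS.get(crudOp);
        if (verb == null) {
            // the remaining 10%: no natural verb mapping
            throw new IllegalArgumentException("not plain CRUD: " + crudOp);
        }
        return verb + " " + resource;
    }
}
```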


As usual I had to buy some books, which I will probably never read (have a look at my library at LibraryThing). But, oh, I already finished “Confessions of a Public Speaker” (Scott Berkun, 2010) – a nice little book; you will probably finish it in one or two evenings. Nothing really new, some good stories, even if it rambles along a little, but good entertainment after a full day of conference sessions.

The other one is “Web Design for Developers” (Brian Hogan, 2010). I am not yet sure whether it was a good choice, but I have still learned quite a bit about color schemes. I would have preferred fewer micro-recipes for Photoshop and more about the thought processes, but it is still a good read so far.


The idea that the business should learn IT is typical of technical people and completely misguided


Some time ago I wrote the following:

“The idea that the business should learn IT is typical of technical people and completely misguided” (Ron Palmer)

Yes, but then please don’t complain. Go away. You are talking about stuff you do not understand and I can’t take you seriously.
There is a Dilbert cartoon where the Pointy-Haired Boss is doing a project estimation: he claims that everything he doesn’t understand is simple and goes on: ‘Creating a multi-tier architecture for multiple channels … that will be three minutes’.

Do you listen to your IT department when they tell you that doing this project in the given time frame will make maintenance difficult? No, you won’t; you say that “they stifle the creative entrepreneurialism that is critical to advancing the state of the business”.
But would you skip maintenance on your car knowing that the risk of losing a tire will increase every month?

You do not listen … and then you complain. You create Christmas wish lists and then you complain about not getting everything – and especially about not getting the thing that wasn’t on your list but that you obviously needed.

So don’t learn about IT, but make it your responsibility that they understand your needs. Would you have an architect build your house without checking the plans to make sure that he understood your needs, without regularly checking the progress?

I wrote it when I was involved in a project where the business side (again) complained that our software did not do what they expected. And it was (again) a misunderstanding about a sentence in their specification that to us had a completely different meaning than what they intended (and vice versa). So perhaps you can forgive me the angry style. Instead of whining, let me try to improve on the situation.

Let’s draft a contract: It is our responsibility to understand your needs and translate them into software. We do that in the best way we know of. And “best” here means as fast and as cheap as possible (“cheap” in the long or short run: you choose). Because if you need something now and know the trade-offs of getting it ASAP, we will do it. If we (think we) see a better way to support your business, it is our obligation to propose such a solution. But to be able to do that, you must help us in any way you can, so that we are able to learn what is important to you and what is not. And this does not end with you handing over a “specification”. It involves you writing the specification (with us) in the best way you know of. It involves you when we have questions and need clarifications. It involves you listening to us when we explain the trade-offs of various solutions. It involves you thoroughly trying and testing the prototype, or the first version, once it is ready. Anything in that first version that you would like to change later will become more expensive to change over time. Adding a basement after the roof has been set is kind of difficult.


Does anyone read the Ivy documentation?


I know there are write-only programming languages, but isn’t there write-only documentation as well? Case in point: the Apache Ivy documentation.

Some years ago I stumbled upon Ivy and we tried to use it to replace our home-grown Ant solution. But after some weeks we simply gave up: nobody but our in-house build guru could understand it, and he refused to support the build if it used Ivy. Some months later, we/he gave in. I am not sure this was a wise move.

Now, searching again for something less convoluted than Maven, I am looking into Gradle and Buildr (interestingly, all but Gradle are Apache projects).

Since Gradle uses Ivy, I took another look at it, i.e. at its documentation:

<project xmlns:ivy="antlib:org.apache.ivy.ant" name="hello-ivy" default="run">
    <target name="resolve" description="--> retrieve dependencies with ivy">
        <ivy:retrieve/>
    </target>
</project>

“[…] Note that in this case we define a “resolve” target and call the retrieve task. This may sound confusing, actually the retrieve task performs a resolve (which resolves dependencies and downloads them to a cache) followed by a retrieve (a copy of those file in a local project directory)”

Yes, it sounds confusing. I had to read that twice just to be sure I hadn’t simply garbled the sentence in my head. We have a resolve target and a retrieve task that performs a resolve. Why isn’t the “resolve” target simply called “retrieve”? But directly after that sentence, there is the rescue: “Check the How does it work ? page for details about that.” Let’s move on.

First, we see a great-looking image:

and then (explaining “resolve”):

“The resolve time is the moment when ivy actually resolve the dependencies of one module. It first needs to access the ivy file of the module for which it resolves the dependencies.

Then, for each dependency declared in this file, it asks the appropriate resolver (according to configuration) to find the module (i.e. either an ivy file for it, or its artifacts if no ivy file can be found). It also uses a filesystem based cache to avoid asking for a dependency if it is already in cache (at least if possible, which is not the case with latest revisions).”

I am not a native speaker, so I won’t try to fix this gobbledegook, but could someone please review this text? I think it is nearly unchanged from (or even more complicated than) some years ago. But if I understand it correctly, “resolve” retrieves the files from the enterprise repository into the cache; “retrieve” then copies them to the project workspace:

“What is called retrieve in ivy is the fact to copy artifacts from the cache to another directory structure. This is done using a pattern, which indicates to ivy where the files should be copied.

For this, ivy uses the xml report in cache corresponding to the module it should retrieve to know which artifacts should be copied.”

Ok, I give up.
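To be fair, once decoded the model is simple enough. If my reading above is correct, the division of labour could be written down like this (the target names and the retrieve pattern are my own; only the ivy:resolve and ivy:retrieve tasks come from Ivy):

```xml
<!-- resolve: read ivy.xml and download dependencies into the local cache -->
<target name="resolve">
    <ivy:resolve/>
</target>

<!-- retrieve: copy the cached artifacts into the project, here into lib/ -->
<target name="retrieve" depends="resolve">
    <ivy:retrieve pattern="lib/[artifact]-[revision].[ext]"/>
</target>
```

One sentence like that in the documentation would have saved me an afternoon.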


Objective-C? plain ugly


Having used a Mac for over a year now, I would really like to do some programming for OS X. Just to learn something new (and do some “real” work instead of creating slides).

But Objective-C?

  int main(int argc, const char *argv[]) {
      NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
      NSLog(@"Programming is fun!");
      [pool drain];
      return 0;
  }

Plain ugly. Mixing Smalltalk-style message passing with C, throwing in brackets like the parentheses in Lisp? It gets worse: we again have to maintain an interface file and an implementation file for each and every class. And even though there is now a garbage collector in Objective-C 2.0, you cannot use it when programming for the iPhone. Duh!


So I looked elsewhere: what about MacRuby? From Objective-C (Smalltalk-style):

  [person setFirstName:first lastName:last]

to MacRuby’s

  person.setFirstName(first, lastName:last)

or

  person.setFirstName first, :lastName => last

To me that looks rather, hm, unbalanced? The first argument gets different syntactic sugar than the second. Ok, I understand that there are limits to Ruby’s malleability, and it sure is better than RubyCocoa’s:


which even has other problems. I am not sure that I would like

  person <= (firstName: first, lastName: last)

better, if that would be possible in Ruby.

Conclusion: undecided.


Some time ago, when I was looking for a Smalltalk implementation for OS X (and don’t start with that ugly Squeak), I stumbled upon a scripting language called F-Script. Its main goals are to provide

  • an embeddable scripting language for OS X applications
  • an interactive shell (REPL) to experiment with the APIs and to prototype solutions

It is not intended for writing complete applications, but John Sterling has a nice blog entry that shows how it could be done.

Perhaps this is the way to go. Remember: I do not want to create a real application, just to play around and have fun.


Cards on the Wall


“If you want to make God laugh, tell him about your plans”
(Woody Allen, probably)

When it comes to creating project plans, I still use the “cards on the wall” technique I first read about in 2001. This technique is rather old-fashioned (compared to the current agile mainstream), but I still like it.

The main dimensions I am using are people (aka “staff”, aka “resources”) and time:


In reality it looks much less clean:


Each task should be exactly 5 days, but that is rarely the case, so you have to mark the estimated effort on the index cards (either with a simple number or with a visual cue: just some boxes – at most five – and some empty placeholders). You can add an ID, some description (preferably on the back), and whatever else you like. Just keep it simple.


The main advantages for me are:

  • Everyone (up to 6-7 people) can participate, can understand the reasoning, and can discuss; finally the whole group decides on the plan.
  • It is much easier and faster to move tasks around, group them, put them aside for a moment, etc. On a computer screen I can never do that.

Sure, it’s still a plan (someone is laughing), and therefore it will be changed. The first planning round is just to see if the project is feasible, to discover blunders, previously overlooked dependencies, and especially missing tasks. And at the end of each iteration we will adapt the plan – sometimes even more often.

Much too often there are no user stories; there is just a plain old “functional requirements” document. So this is the best technique I could find to create such a plan. The alternative is much too grim: sitting at my computer, at best with one other person, and creating some monster of a wallpaper that no one ever wants to change and no one understands – me included.

But here’s the catch: in most cases someone wants this plan in electronic form and wants some progress report based on that plan. Let’s not start a discussion about whether that is the most effective way to track a project; let’s say it is just part of some contract or something like that. (In my experience you can convince people higher up that this plan is sufficient, as long as you take some pretty pictures like the one above for record keeping – but it takes time, and it takes at least one successful project.)

So how do I create that plan? Back to some project-planning monster? I haven’t yet found a program that has a view like the one above, where I can …

  1. … assign tasks by drag and drop to some “resources”
  2. … still move tasks around

At least for the first requirement I have finally found a tool, at least if you are using a Mac: OmniPlan. You can see it in action for yourself here. If only the dependencies were visible and I could still move a task around directly (change dependencies, prioritize, etc.) in that view.