Markdown Blog

It's been a while, but now my blog is up using a static HTML generator with Markdown as the underlying format \o/ The static HTML is generated by a couple of small Go scripts that wrap pandoc and Go's text/template. Converting all of my WordPress posts to Markdown was as much fun as such a task can be, which is to say quite tedious. However, it was still nice to read everything I have ever posted again, and it allowed me to take a more holistic view of everything I've done. It has definitely helped me with some life decisions this year.

I don't think this is the end of my efforts to improve this blog: I really should move off the Go scripts at some point, the content menu is very mobile-first (it only looks and works well on my phone screen), and source highlighting does not work yet.

The biggest positive take-away, apart from finishing this mini-project, was that committing all intermediate outputs to a repository during the conversion really helped with moving fast. Every time I changed something in the scripts or in the Markdown sources, I could see how it trickled down. With disciplined reviewing, this allowed me to avoid random breakages, and it made debugging quite easy and cheap because SmartGit's differ would do most of the work for me.

I used the opportunity to learn and write some simple Go code. I am split on the language. It feels crude and lacking in abstractions. Also, its structural typing (essentially compile-time duck typing) makes it harder to understand and explore code. On the other hand, I have been told all of this is intentional to keep things simple. Indeed, it was always straightforward to get the code to do the things I want, with few surprises, even though it usually did not feel particularly elegant.

My time estimate for the conversion is that I've spent about one week on research, one week converting the posts, and one week learning Go and writing the scripts. All of this was spread out over about two years with bursts of work now and then.

I'm not sure what the next steps are for this blog. It feels good to have it locked down in HTML and Markdown in a clean version that is easier to back up. I want to play around with other ways of publishing and organizing information. This blog can be a way to centralize all my efforts and keep pointers and write-ups.

Stay tuned! :-)

Writer's block


I don't know what to write... to write is to create though. So let's create:

I've been busy for the last two years. Busy with work and busy living. Life always takes you to different places, and for me this was Zurich (from Munich) and Google (from university). Both changes were not that big and yet quite big. They have taken me far away from where, 2.5 years ago, I imagined myself to be at this point in time. I was at a loss for words about this for a long time, but I think I'm finding my voice again. My life at Google is worth a series of articles at a later time, and Zurich is a whole different story with very little computer science in my spare time.

So let's go back to the topic: writer's block. How to overcome it? In a way, I ran out of creative steam during my master's thesis, and even though I am busy at Google, it's a different kind of busy to work in a big company and be told what to do. Definitely more boring than university usually :) In a way, my writer's block, which has kept this blog on hold for the last 24 months, has also been a creator's block. I have resolved to change this with a willingness to forego some sleep in the months to come and to show more discipline and restraint in my outgoing nature ;)

My writer's block in the last year is also a consequence of my reluctance to produce content for the WordPress platform I'm using, which has proven quite painful to maintain over time. The plugins and hand-written glue that keep parts of the blog together are not holding up as well as they used to, and I'm not able to fix all the issues in my spare time.

It's more fun to create than to maintain in general.

In this spirit, I want to come up with a new platform to run this blog on. Since nothing out there fits my peculiar taste, I will have to come up with something new from old parts. A Frankensteinian blogging system. My main idea is to apply unit testing and regression testing to the components and content of the blog to identify issues and maintain consistent quality overall. In particular, existing content should stay static and not break in unexpected ways when new content that is intended to be orthogonal is added.

Imitating PBRT-style literate programming in LaTeX

Today, I want to release another bit of code from my master's thesis. This time it won't be C++ code; instead, I'm going to release some LaTeX code that I used to display source code fragments.

Specifically the results look like this:


You can download the accompanying example PDF here.

This mirrors the style of PBRT, which uses literate programming to develop and explain an advanced raytracer. Only it doesn't require you to write source code that adheres to literate programming principles. I used it to take parts of the source code of my master's thesis and explain them.

The following LaTeX code is used to create the fragment above:

\defTaggedCurrentFileFragment<Another file fragment;recursive>+=

You can either inline code fragments using a special listing environment, or you can reference code fragments in an external file.

//<Another file fragment;recursive> =
int fac_r( int n ) {
    if( n <= 1 ) {
        return 1;
    }
    return n * fac_r( n - 1 );
}

Cross-referencing is allowed as well and forward page references are created just like in PBRT. However, I have not figured out how to create backward references.

You can find the LaTeX code on GitHub. I have not wrapped it into a LaTeX package because I don't consider it mature enough, but maybe someone finds it useful, and it eventually ends up in a package (that would be neat :)).

To close, I want to say that the book More Math into LaTeX1 has been very useful while creating these LaTeX macros. It is the first book I've found that covers creating custom macros and commands understandably and in depth. It feels good to finally understand a bit more about how you can customize LaTeX to fit your needs :)


  1. See also

Creative learning, Scratch and alternatives for learning how to create games

I'm still working at university as a student researcher for the Chair of Software Engineering in a very unconventional placement: I help prepare and teach courses about computer science topics to pupils, usually aged 12-16.

My university1 actually invests a lot of resources in promoting engineering degrees and applied sciences because the schools cannot. Schools here have only so many resources, and the computer science and mathematics curricula are mostly boring and uninteresting. For me, computer science is a lot about being creative and about identifying and solving problems. I'm not sure I would have been interested in computer science that much if it had been just another subject like biology or geography.

The courses we give are usually just one or two hours long for groups that visit our faculty, but we also have courses that last a whole day or two and cover topics in more detail. We usually choose topics that are more interesting than what you might learn at school and that show that computer science is not just about typing in code for hours without end but consists of analyzing real-world problems, devising algorithms and tinkering.

Most pupils already know a bit about computer science, and some even know how to code a bit in Java, but they are always positively surprised when we work out together how to get out of a labyrinth2, how to come up with sorting algorithms, or how to encrypt messages easily with some binary magic3.

Right now I am tasked with updating and recreating a two-day course about creating games that has been held twice in recent years. The original course was only intended for girls with no prior coding experience and used Scratch as the development tool.

The updated version should be a bit more advanced and introduce more elements of game development properly to be both fun and educational. For this, I examined Scratch and alternatives to determine how to proceed. It ended up consuming a whole day and when I was done I had a beautiful mind map covering quite a number of different tools and frameworks. Having some spare time, I decided to write it all up in a blog post :)


Scratch is a programming language with an IDE that is targeted at young people who have never coded before. It has been developed by the Lifelong Kindergarten group at the MIT Media Lab.


Coincidentally, I'm participating in an online lecture they are offering right now: Learning creative learning. It is free and quite interesting---and Scratch is mentioned and discussed there as well.

Scratch's user interface is very simple and intuitive to use. You can code programs without having to type code. Everything is based on blocks that can be dragged and dropped to create event-based behaviors. Users can draw their own sprites and animate them or easily import their own pictures and use them. Scratch has an active online community and it is easy to share your projects with others and get feedback, which encourages participation.

However, a more experienced user quickly finds many limitations; for example, it is not possible:

  • to create procedures, ie custom blocks, and the message sending blocks are too primitive to create really complex behaviors;
  • to rename variables, which makes refactoring impossible;
  • to change the sequence of statements easily; or
  • to create new sprites (game objects) dynamically.

Moreover, I already found a Heisenbug: I could not write a simple lock that keeps an event script from being run multiple times at run-time---it only works as described in the docs during step debugging (at reduced script execution speed).

Nonetheless, I consider it a very valuable tool that can be used to ease children into coding. The current 1.4 release needs to be installed on your computer, but they are working on a 2.0 release that will run entirely in the browser. You can give its beta a try here.

Creating programs using drag and drop is a very nice idea and I really appreciate Scratch's user interface. And I'm not the only one: Scratch has spawned many spin-offs that expand on its design. This has been possible because its source code has been released. You can find it here.

I have also examined some Scratch spin-offs that I have found:


Snap (formerly called BYOB, for Build Your Own Blocks) extends Scratch and adds support for custom blocks, first-class lists and procedures, and continuations. It becomes a lot more like a functional language this way. You can run it here. The available documentation explains the new concepts very well. A German translation is available for download, too. The last release is from 2011, so it might not be actively maintained anymore.


StarLogo TNG

StarLogo TNG extends Scratch into 3D and has been developed at MIT as well. It is targeted at multi-agent systems and older users. It is more complex, and there are some interesting tutorials and workshops available online: one shows how to build an ant population that knows how to pick up objects, and another shows how to develop an epidemic model.

MIT App Inventor 

MIT's App Inventor lets you build mobile apps easily using a Scratch-based programming interface. It supports a WYSIWYG UI editor as well. Apparently, this used to be Google App Inventor.


GameFroot's editor looks pretty polished. It is browser-based, and it is very simple to create a side-scrolling platformer out of the box. However, scripts are not that important and most behavior is predefined. So it's probably not a good tool to tinker with. The games that have been created with it naturally look very similar.



Stencyl is another game-creating tool, but it looks very mature and there is a pro version that is sold for 80-150 USD per year to publish your games to mobile devices and computers. The normal version only supports browser games. It is updated regularly but there is no localization support (yet).


The documentation is fairly complete, and it supports writing behavior scripts by using a Scratch-like interface, which supports custom procedures as well, or by writing JavaScript code.

Lots of different games have been created with Stencyl. Here are some which I've tried and which are really fun to play:

  • The Little Who

    The Little Who

  • Balls in Space

    Balls in Space

  • The Wish

    The Wish

I've also looked at IDEs that are not based on Scratch.


Etoys is similar to Scratch but not as polished. However, it has been translated into many different languages. It lacks the social community features that are so prominent in Scratch. Moreover, you cannot run projects in the browser straight away because a special plugin has to be installed first. All of this probably hinders its adoption.

When you actually give it a go, you will be surprised by how nice it is. It feels very different from Scratch because it uses a unified workspace. There is no special script pane like in Scratch. Everything is part of the workspace. For example, when you look through the tutorials, you not only see an elastic ball bouncing, but you also see the special workspace frame that contains all the different animation frames and the script code that assigns them to the ball one after the other to emulate its deformation. This way you don't have to imagine anything. It's all there in the open for you to see---and debug if necessary.

Sadly, there is not much documentation and for me it is not as intuitive as Scratch.


Alice is similar to Scratch and Etoys, except that Alice uses 3D scenes and is geared towards storytelling. It's quite big at over 600 MB, but in return you get quality models to play around with. I can't say much about its features. There is not much documentation available online. You probably need to buy their book "Learning to Program with Alice" to really learn more about it. I do not like this. However, there is some free material available. It is somewhat hidden, but here is a link.

Again, you only code using drag and drop, but you can switch to a Java mode which makes the blocks look like their equivalent Java statements.


Greenfoot can be used to teach programming with Java in a game-oriented context. Programming is done in a normal text editor. Greenfoot displays a class hierarchy and allows you to place instances of actors in a 2D scene. You can then control them manually or subclass actors and override their methods.

It's fairly basic but probably a good introduction to Java programming. If you start out with Scratch or Alice, it should be a small step from drag and drop coding to "real" coding.

Microsoft SmallBasic

I learnt programming using good ol' QBASIC on my parents' 386, so I wouldn't object to a Basic dialect as a first language. Microsoft gave it some love, and it looks polished. I'm not sure how many people are using it, because there is no social community linked to it and the last release is from 2011. There is no specific support for creating games, and the language is quite limited.

Microsoft TouchDevelop

This is really cool. Microsoft Research has developed a fully-fledged scripting language for mobile devices, so you can code using a touch-based interface. You can read more about it in the free online book "touchdevelop - programming on the go". Of course, it also runs in your browser, and you can try it yourself :)

I don't think it is the best choice for creating games, but you can certainly learn how to program with it by creating "serious" apps for your mobile device. Maybe this makes it interesting for adults who are new to programming.

That's it for today.


  1. Technische Universität München

  2. Maze solving algorithm

  3. Xor cypher

Al et WML

First, I have created some new pages regarding old university projects. Among them are a condensed page about light propagation volumes, which also made me update the project files to Visual Studio 2012, a page about my bachelor's thesis in mathematics (Discrete Elastic Rods) and a page about my master's thesis in computer science (Assisted Object Placement). I have not written about the latter two subjects before. Maybe I'll talk some more about them later and write a full post-mortem on them.

This is the first of a number of posts related to my master's thesis, or rather code drops from its code base. I wrote about 60k LoC in the 6 months of my master's thesis, and there are a few bits that might be useful in the future.

The first one that I want to talk about is a very simple file format I came up with. Devising new text file formats is not something I have been very keen on lately, especially since so many already exist. However, I have found none that really fits my requirements:

  • minimal clutter (preferably indentation-based),
  • support for raw text inclusion, and
  • good C++ support.

JSON has too much clutter and doesn't support raw text. YAML, on the other hand, sounds like the perfect choice, even though it's not that easy to find a good library for it. However, when it comes to raw text, you run into the issue that tab characters are never allowed as indentation. Moreover, I was not very happy with the API choices and some bugs in the libraries that I tried to use.

So I decided to develop a very simple text-based data storage format:

WML - Whitespace Markup Language

I'll just start with an example, ie my readme. You can find the full readme here.

'Whitespace Markup Language':

        * very simple
        * no clutter while writing
        * only indentation counts
        * empty lines have no meaning
        * embedding text is easy
        * everything is a map internally


        title       "test\t\t1"
        path        'c:\unescaped.txt'
        version     1

            unformated text

            newlines count here

            time-changed    10:47am
            flags   archive system hidden

                    some data

                    this is nested too

                    key names
                    dont have to
                    be unique (see stream)
                        users   andreas root

As you can see, it is a whitespace-based format inspired by Python: indentation is used to convey structure. A WML file represents a map structure: every node has a name and possibly multiple children. A node definition either consists of the node name followed by several children names on the same line (these children won't have any children themselves), the node name followed by one colon to signify that a nested definition follows (similar to JSON), or the node name followed by two colons to signify a raw text block. Two kinds of strings are supported: single-quoted raw strings that do not interpret escape sequences, and double-quoted C strings that do.
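To make the three node-definition forms concrete, here is a small made-up snippet (the key names are hypothetical, not from my actual files):

    settings:
        resolution  1920 1080
        readme::
            This text block is
            stored verbatim.

Here `settings` opens a nested map via the single colon, `resolution` is an inline entry with the two leaf children `1920` and `1080`, and `readme` holds a raw text block via the double colon.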

All in all, the grammar is very simple:

    INDENT, DEINDENT are virtual tokens that control the indentation level
    NEWLINE is a line break

    Indentation is done with tabs only at the moment.

    Here is a rough EBNF syntax for WML:

    root: map

    value: identifier | unescaped_string | escaped_string

    identifier: (!whitespace)+
    unescaped_string: '\'' (!'\'')* '\''
    escaped_string: '"' (!"\"")* '"' with support for \t, \n, \\, \', and \"

    key: value

    map: map_entry*

    map_entry: inline_entry | block_entry

    inline_entry: key value+ NEWLINE
    block_entry: key ':' ( ':' NEWLINE INDENT textblock DEINDENT | NEWLINE INDENT non-empty map DEINDENT )

    This file is itself a WML file and root["Whitespace Markup Language"]["Example"].data() is the example WML node

I've used this format for custom shaders as well as for my settings files and the declaration files of my test scenes:

        name 'platform, brown'
        size 20 2 20
        webColor 945412
        name 'platform, green'
        size 20 2 20
        webColor 129429
        name 'platform, muddy blue'
        size 20 2 20
        webColor 6488a5
        name 'platform, light blue'
        size 20 2 20
        webColor e5eaf1

I've uploaded the current code for WML to GitHub, and you can find the code here. The API supports an overloaded index operator to access children of a node and contains both a parser and an emitter for WML.


First, the current API isn't brilliant. It would be nice to separate the data model from the parser and emitter by using templates and type traits to improve abstraction. I think I might go and investigate different API types in the future and see which one works best for some simple cases.

Second, it would be possible to reduce clutter even more and remove the need for single colons to denote nested maps. An even simpler format could look like this:

    nodeA
        nodeB nodeC nodeD
        nodeE
        nodeF
        nodeG
            raw data
                with fixed indentation

        where the content below nodeG would be interpreted as "raw data\n\twith fixed indentation\n..."

This would yield the following JSON-equivalent:

{ "nodeA": { "nodeB": {} , "nodeC": {} , "nodeD": {}, "nodeE": {}, "nodeF": {}, "nodeG": { "raw data..." : {} } } }

This still separates raw text from normal data. A node that contains raw text can never contain other children this way. However, I cannot think of a good way to accomplish that without introducing a special character to end a raw text block.

That's it for now :)

Unity Prototype Project

For the last two days I've tried out the Unity engine with a friend from university, Andreas Ostermaier. If you don't know the Unity engine, go and check it out.

Its design is very similar to Torque 2D. I can't tell who used it first, but Unity's design is more mature than what I remember from Torque 2D 1.3 back when I was still using it in 2007. A scene is made up of game objects. A game object is a container of components (also called behaviors)1. A game object always contains a Transform component which places it in the scene (and in the scene hierarchy). Usually, it contains a Mesh component and a Physics component for handling rendering and collision detection. But sometimes empty game objects are useful as well: as respawn points for example. Custom scripts, written in JavaScript or C#, can be wrapped in a Script component and tied to game objects in the same way.

Templates/prototype objects2, called prefabs, are used to avoid creating all game objects by hand. Instances of a prefab link back to the prefab, that is, the original object, and automatically inherit changes to it (or to the children in its hierarchy). Torque 2D 1.3 lacked this feature: GarageGames had an editor plugin in development when I was doing contract work for them, but I'm not sure it has ever been released.

We have spent the last two days implementing a PoC for a game idea my friend came up with. The idea is about being able to switch between 2D and 3D in a platformer to solve puzzles. It is about reinterpreting a level when it is seen from a 2D perspective. For example, two separate tiles above an acid pool that cannot be reached by jumping from one to the other can suddenly be connected by switching to 2D because they look connected.

PoCDimRunner in 3D and in 2D

You can play the PoC build here, using Unity's webplayer, or download it here in a .zip archive. Use the left mouse button or 'c' to switch between 2D and 3D.

There are a few games that are similar to this idea: Echochrome, Super Paper Mario, and Crush (and its sequel Crush 3D).

You can find the code on GitHub (ie here).


  1. See eg here, here, and here for an overview.

  2. Prototype pattern

Presentation (seminar) about convex analysis

Last year (yes, I'm really slow with writing things up "lately") I gave a presentation about convex analysis as part of a seminar about inverse problems in imaging.

I want to write about it today.

The seminar was based on the book 'Handbook of Mathematical Methods in Imaging (Springer Reference)'. It's very expensive and, in my opinion, pretty useless for learning about any details. At least the chapter about duality and convex programming was very dense, and you could not learn anything from it without consulting other books.

The presentation was supposed to give an overview of duality and convex analysis and serve as an introduction. Summarizing 42 very concise pages in 45 minutes is impossible, so I had to choose a few topics that give an overview of the important concepts and go into detail there.

The book itself does not contain many proofs, which makes it hard to follow. As a mathematician, I like proofs. They usually help clear things up, and good ones give valuable insight into the methods of a theory.

But proofs suck when you see them on PowerPoint slides. They also suck when you're the presenter.

So I gave a blackboard presentation---using a few slides only to show some pictures and sketches which I could not draw well enough on a blackboard.

To prepare the presentation I used OneNote and my Bamboo tablet. I scribbled everything down over a weekend in one OneNote notebook and later extracted some sketches into another notebook that I used instead of a PowerPoint presentation. I exported the original notes as a PDF and used it as a handout after the presentation.

The presentation went alright. I had fun writing everything down on the blackboard and developing the subset of the theory I wanted to present, and the audience was content with it, too, because it was just at the right speed and like a lecture everybody was used to.

In retrospect I can say that using OneNote and a tablet to create a blackboard presentation was the best decision. I don't want to think of the time I would have lost if I had created everything with LaTeX or PowerPoint.

Long story short: here are my notes :)

Cheers, Andreas

Molecular Dynamics and CUDA---my interdisciplinary project

You have to do an interdisciplinary project for your Master's degree at the Technische Universität München. I decided to do mine at the Chair for Scientific Computing.

My topic was the Efficient Implementation of Multi-Center Potential Models in Molecular Dynamics with CUDA. For this, I have parallelized the molecule interaction calculations in the molecular dynamics simulator MarDyn with CUDA, optimized the code, and added support for more advanced potential models on the GPU.

What is molecular dynamics?

Molecular dynamics deals with the simulation of atoms and molecules. It estimates the interactions between a large number of molecules and atoms using force fields, which are modeled with quantum-mechanical equations.

Such simulations are important for many different areas of biology, chemistry and materials science. For example, you can examine the folding of proteins or the nanoscopic behavior of materials. The simulations are sped up using sophisticated approximations and algorithms.

Molecules only interact strongly with molecules that are nearby. One of the first approximations is to take only these strong interactions into account and ignore the weaker long-distance ones. This spatial locality of the calculations leads to the linked-cell algorithm: it divides the simulation space into a grid of cells, and only interactions between molecules inside the same cell and nearby cells are calculated.
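To illustrate the idea, here is a minimal 2D linked-cell sketch of my own (not MarDyn's code): particles are binned into cells at least as large as the cutoff radius, so interaction partners can only sit in the same cell or one of its eight neighbors.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec2 { double x, y; };

// Bin particles into square cells with edge length >= cutoff; then every
// interaction partner of a particle lies in its own cell or one of the 8
// neighboring cells. Returns the number of pairs closer than the cutoff.
int countPairsLinkedCell(const std::vector<Vec2>& particles,
                         double cutoff, double domainSize) {
    const int n = std::max(1, static_cast<int>(domainSize / cutoff));
    const double cellSize = domainSize / n;
    std::vector<std::vector<int>> cells(n * n);

    auto cellOf = [&](double coord) {
        return std::min(n - 1, static_cast<int>(coord / cellSize));
    };
    for (int i = 0; i < static_cast<int>(particles.size()); ++i)
        cells[cellOf(particles[i].y) * n + cellOf(particles[i].x)].push_back(i);

    int pairs = 0;
    for (int cy = 0; cy < n; ++cy)
        for (int cx = 0; cx < n; ++cx)
            // scan only the cell itself and its neighbors (no periodic boundary)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = cx + dx, ny = cy + dy;
                    if (nx < 0 || ny < 0 || nx >= n || ny >= n) continue;
                    for (int i : cells[cy * n + cx])
                        for (int j : cells[ny * n + nx]) {
                            if (i >= j) continue;  // count each pair once
                            double ddx = particles[i].x - particles[j].x;
                            double ddy = particles[i].y - particles[j].y;
                            if (ddx * ddx + ddy * ddy < cutoff * cutoff)
                                ++pairs;
                        }
                }
    return pairs;
}
```

A real simulator would accumulate forces instead of counting pairs and would handle periodic boundaries, but the cell bookkeeping is the same.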

Molecules are composed of atoms, which interact according to different physical principles. This is approximated by having different sites in a molecule that use different potential models. The most common and simplest potential model is the Lennard-Jones potential.
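For reference, the 12-6 Lennard-Jones potential is easy to write down:

```cpp
#include <cassert>
#include <cmath>

// 12-6 Lennard-Jones potential: V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6).
// eps is the depth of the potential well; sigma is the distance at which the
// potential crosses zero. The minimum -eps lies at r = 2^(1/6) * sigma.
double lennardJones(double r, double eps, double sigma) {
    const double sr6 = std::pow(sigma / r, 6.0);
    return 4.0 * eps * (sr6 * sr6 - sr6);
}
```

The r^-12 term models the short-range repulsion, the r^-6 term the attractive van der Waals interaction.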

What is MarDyn?

MarDyn is a molecular dynamics simulator that has been developed at the Technische Universität München. See Martin Buchholz's thesis for more information.

Its code is rather ugly and it features some crazy design decisions. I have been told that the people who initially developed it were learning C++ at the same time---and it shows :(

Previous work

When I started, there was a 'working' implementation of Lennard-Jones potentials for single-site molecules on the GPU. It used OpenCL and was the result of somebody else's IDP. However, it was not very useful: the code was crammed into one function, and one-letter variable names were used throughout.

The main aims of the project

The idea was to port the previous OpenCL implementation to CUDA and continue from there to add support for multiple sites with different potential models.

Chronology of my work

Porting the code to CUDA was a straightforward API change. However, I found the original code to be impossible to read, maintain, and optimize. The logic behind the code was neither clear nor well explained.

Consequently, my supervisor and I decided that I would rewrite everything and optimize it from the beginning. This took longer than expected, and while the resulting parallelism and the logic behind it were clear in the code, code complexity was an issue. The code consisted of three helper functions and two kernel functions in two files, and when I integrated support for multiple sites and potential models, it became clear that the design would not endure further feature additions and lacked the flexibility for quick changes.

Treating the old implementation as a prototype and knowing about many possible traps and fallacies, I set out and rewrote everything again. This time the focus was on modularity and separation of concerns instead of performance. Code architecture was the most important thing this time around. I scaffolded the new version around the old code, which was working correctly and already optimized. I embedded it into the new design step by step. Afterwards I optimized the code.

What I've done


  • worked with CUDA 1.x and 2.x,
  • used both the driver and the runtime API,
  • implemented a template-based, very modular code design,
  • tried lots of optimizations, and
  • learnt how to read PTX to find work-arounds for compiler bugs.

Sadly, some of the optimizations didn't improve the performance noticeably. I think the monolithic, all-in-one approach I've used is the main issue.

My code uses the driver API because it makes it easy to dump all the CUDA calls and replay them later for debugging purposes.

If I could iterate over the code one more time, I'd try to use many small kernels instead of only two rather large ones. Currently, the kernels sequentially calculate all the potential models that are needed for each molecule pair (for each site). This makes it difficult to measure performance improvements and isolate bottlenecks. And I suspect it's slower than it has to be.

I'd also create a sandbox application to test and develop the kernels instead of using MarDyn as starting point, because it is unnecessarily huge and it is not really needed to verify that the CUDA code works (or not).

Final paper & code

You can find the final paper that contains a description of my code and the performance results here, and my code here (I cannot include MarDyn's code because its source is closed, but at least I can share my code).


I just want to share two more links:

  • HOOMD-blue is a molecular dynamics framework that uses CUDA, too, and its code design looks like what I probably should have done :)
  • VMD is a molecular visualization application, which looks promising, too


A Long Journey: Acceler8 and TBB

A month of Intel's Acceler8 competition has passed, and it has finally come to an end. It's been a long way from implementing the first sequential algorithm to having a fully-fledged parallel version.

I had never worked with Intel's Threading Building Blocks library before, and this was a nice opportunity to examine it, since it offers a better abstraction than OpenMP or pthreads.

The documentation is very good, and you quickly learn how to work with the library. Remarkably, I didn't have to use a low-level synchronization construct even once during development, and everything worked fine without any race conditions or similar issues. The parallel_* functions (eg parallel_for, parallel_reduce, and parallel_scan), together with icc's C++0x support (lambda functions), allowed for very concise code and little programming overhead.

The implementation builds on Kadane's algorithm for the two-dimensional case using prefix sums. One implementation that gets across the basic idea can be found here. Mine is similar, and I simply parallelized as much as possible.

The outer two loops iterate over a two-dimensional range that is essentially the upper-right triangle of the full domain. For this I've implemented a custom range that allows for better load balancing. A range in TBB defines an iteration range of any kind and supports a split operation that the task scheduler uses internally to distribute the range dynamically across multiple threads as it sees fit.

Last but not least, I came up with a way to parallelize the 1D part of Kadane's algorithm by splitting the column range into linear subranges and merging the subsolutions into one solution, i.e. a classical divide-and-conquer approach.

Because it's the most abstract yet interesting part of our implementation, I'm going to go into more detail here. :-)

How can you find the maximum subarray of a 1D array if you know the maximum subarray of its two "halves" (they don't have to be split evenly)? Well, you can't; you need more information.

We calculate the following information for each chunk:

  • maximum subarray that starts at the beginning of the chunk
  • maximum subarray that ends at the end of the chunk
  • total sum
  • maximum subarray

It's easy to figure out how to merge these values for two neighboring chunks into the values of the merged chunk. The maximum subarray that starts at the beginning of the merged chunk is either that value for the left chunk, or the total sum of the left chunk plus that value for the right chunk. You can figure out for yourself how it works for the maximum subarray that ends at the end of the merged chunk :-) The merged maximum subarray is the biggest of the left chunk's maximum subarray, the right chunk's maximum subarray, and the left chunk's ending subarray joined with the right chunk's starting subarray.

Using this idea, you can parallelize Kadane's algorithm with a simple parallel_reduce.

Of course, there is some overhead, but, as always, for the right problem sizes this will be faster than the sequential algorithm.

Two more take-aways:

  • Always try to use language features like templates or lambda expressions to reduce duplicate code or make the code more concise.
  • Write unit tests. I have used googletest, which is a very small but very capable library, and it has spared me a lot of debugging trouble.

Cheers :-)

Reading Nonfiction Books Quickly

I like to read books, in particular nonfiction books, and especially nonfiction books that I do not agree with.

Speed Reading

For this I've decided to look into speed reading, an umbrella term for methods that increase your reading speed while keeping comprehension at a satisfactory level. A usual reading speed is 200-300 wpm (words per minute); speed reading promises speeds above 600 wpm.

I've searched around for a bit and read lots of blog posts and articles; here are my favorites: one is a good introduction to the main techniques of speed reading, includes some exercises, and is a short read; another describes the author's personal experience with fourhourworkweek's article.

There is also speed-reading software available. The idea is that your eye movement is the main hindrance to reading really fast, so this software displays the text in groups of several words in the center of the screen. This way you can read each group at once without having to move your eyes at all. As crazy as it sounds, it works to a great degree. One online reader works this way - give it a try! You can make it display the introduction text at 600 wpm or 800 wpm and see how well you understand it - you'll be surprised!

'The Speed Reading Workbook' is what I've been using to practise. It's decently written, has plenty of exercises, and includes timing and evaluation sheets, which are really useful for measuring your progress.

A good summary of many concepts can be found in the PowerPoint Presentation 'Double your Reading Speed in 25 Minutes'.

About Reading in General is a good read and has many good suggestions about how to increase your reading productivity.

Own Experience

I've been exercising with the workbook now and then for the past weeks. I've become faster, but this is still an ongoing process for me, so I'm going to blog about it later (or rather, update this post).

For what it's worth: my reading speed was 300 wpm and now it is around 550 wpm :-)