Image understanding combines traditional image processing with feature
extraction, analysis, and classification, in order to automatically
recognize semantic image structures relevant to a particular task. We
propose a framework in which a computer science expert and a medical
expert jointly create high-level computer programs for medical image
understanding, without having to write a single line of code. Rather,
the programs are generated automatically, as records of the way the
users solved a given image understanding task on a concrete example in a
dedicated visualization environment.
The interaction model is designed to provide immediate feedback that
guides each decision required from the users, and helps predict how well
the result obtained on a particular dataset will generalize to new data.
We demonstrate several examples in which the computer programs created
by our system can be successfully transferred to new data, and discuss
implications for a future machine learning-based system that learns not
only from the desired classification result, but also from the way
a human observer would achieve it.