Jordi Inglada

Posts tagged "otb":

27 May 2015

Installing OTB has never been so easy

You've heard about the Orfeo Toolbox library and its wonders, but urban legends say that it is difficult to install. Don't believe that. Maybe it was difficult to install, but this is not the case anymore.

Thanks to the heroic work of the OTB core development team, installing OTB has never been so easy. In this post, you will find the step-by-step procedure to compile OTB from source on a Debian 8.0 Jessie GNU/Linux distribution.

Prepare the user account

I assume that you have a fresh install. The procedure below has been tested in a virtual machine running under VirtualBox. The virtual machine was installed from scratch using the official netinst ISO image.

During the installation, I created a user named otb that I will use for the tutorial below. For simplicity, I give this user root privileges in order to install some packages. This can be done as follows. Log in as root or use the command:

su -

You can then edit the /etc/sudoers file by using the following command:

visudo

This will open the file with the nano text editor. Scroll down to the lines containing

# User privilege specification
root    ALL=(ALL:ALL) ALL

then duplicate the second line just below and replace root with otb:

otb     ALL=(ALL:ALL) ALL

Write the file and quit with Ctrl+O ENTER Ctrl+X. Log out and log in as otb. You are set!

System dependencies

Now, let's install some packages needed to compile OTB. Open a terminal and use aptitude to install what we need:

sudo aptitude install mercurial \
     cmake-curses-gui build-essential \
     qt4-dev-tools libqt4-core \
     libqt4-dev libboost1.55-dev \
     zlib1g-dev libopencv-dev curl \
     libcurl4-openssl-dev swig \
     libpython-dev 

Get OTB source code

We will install OTB in its own directory. So from your $HOME directory create a directory named OTB and go into it:

mkdir OTB
cd OTB

Now, get the OTB sources by cloning the repository (depending on your network speed, this may take several minutes):

hg clone http://hg.orfeo-toolbox.org/OTB

This will create a directory named OTB (so in my case, this is /home/otb/OTB/OTB).

Using mercurial commands, you can choose a particular version or you can go bleeding edge. You will at least need the first release candidate for OTB-5.0, which you can get with the following commands:

cd OTB
hg update 5.0.0-rc1
cd ../

Get OTB dependencies

OTB's SuperBuild is a procedure which deals with all the external libraries needed by OTB that may not be available through your Linux package manager. It is able to automatically download the source code of many external libraries and then configure, compile and install them.

Since the download process may fail due to servers which are not maintained by the OTB team, a big tarball has been prepared for you. From the $HOME/OTB directory, do the following:

wget https://www.orfeo-toolbox.org/packages/SuperBuild-archives.tar.bz2
tar xvjf SuperBuild-archives.tar.bz2

The download step can be looooong. Be patient. Go jogging or something.

Compile OTB

Once you have downloaded and extracted the external dependencies, you can start compiling OTB. From the $HOME/OTB directory, create the directory where OTB will be built:

mkdir -p SuperBuild/OTB

At the end of the compilation, the $HOME/OTB/SuperBuild/ directory will contain a classical bin/, lib/, include/ and share/ directory tree. The $HOME/OTB/SuperBuild/OTB/ directory is where the configuration and compilation of OTB and all the dependencies will be stored.

Go into this directory:

cd SuperBuild/OTB

Now we can configure OTB using the cmake tool. Since you are on a recent GNU/Linux distribution, you can tell the compiler to use the most recent C++ standard, which can give you some benefits even though OTB does not use it yet. We will also compile using the Release option (optimisations). The Python wrapping will be useful with the OTB Applications. We also tell cmake where the external dependencies are. The options chosen below for OpenJPEG make OTB use the gdal implementation.

cmake \
    -DCMAKE_CXX_FLAGS:STRING=-std=c++14 \
    -DOTB_WRAP_PYTHON:BOOL=ON \
    -DCMAKE_BUILD_TYPE:STRING=Release \
    -DCMAKE_INSTALL_PREFIX:PATH=/home/otb/OTB/SuperBuild/ \
    -DDOWNLOAD_LOCATION:PATH=/home/otb/OTB/SuperBuild-archives/ \
    -DOTB_USE_OPENJPEG:BOOL=ON \
    -DUSE_SYSTEM_OPENJPEG:BOOL=OFF \
    ../../OTB/SuperBuild/

After the configuration, you should be able to compile. I have 4 cores in my machine, so I use the -j4 option for make. Adjust the value to your configuration:

make -j4

This will take some time since there are many libraries which are going to be built. Time for a marathon.

Test your installation

Everything should be compiled and available now. You can set up some environment variables for an easier use of OTB. You can for instance add the following lines at the end of $HOME/.bashrc:

export OTB_HOME=${HOME}/OTB/SuperBuild
export PATH=${OTB_HOME}/bin:$PATH
export LD_LIBRARY_PATH=${OTB_HOME}/lib

You can now open a new terminal for this to take effect or use:

cd
source .bashrc 

You should now be able to use the OTB Applications. For instance, the command:

otbcli_BandMath

should display the documentation for the BandMath application.
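
For a quick test with actual data (the file names here are hypothetical), you can compute a simple band operation; in the expression, im1b1 refers to the first band of the first input image:

otbcli_BandMath -il input1.tif input2.tif -out sum.tif -exp "im1b1+im2b1"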

Another way to run the applications is to use the command line application launcher as follows:

otbApplicationLauncherCommandLine BandMath $OTB_HOME/lib/otb/applications/

Conclusion

The SuperBuild procedure makes it easy to install OTB without having to deal with different combinations of versions of the external dependencies (TIFF, GeoTIFF, OSSIM, GDAL, ITK, etc.).

This means that once you have cmake and a compiler, you are pretty much set. Qt4 and Python are optional dependencies which are useful for the applications, but they are not required for a base OTB installation.

I am very grateful to the OTB core development team (Julien, Manuel, Guillaume, the other Julien, Mickaël, and maybe others that I forget) for their efforts in the work done for the modularisation and the development of the SuperBuild procedure. This is the kind of thing which is not easy to see from the outside, but makes OTB go forward steadily and makes it a very mature and powerful software.

Tags: remote-sensing otb programming open-source free-software tricks
13 May 2015

A simple thread pool to run parallel PROSail simulations

In otb-bv, we use the OTB versions of the Prospect and Sail models to perform satellite reflectance simulations of vegetation.

The code for the simulation of a single sample uses the ProSail simulator configured with the satellite relative spectral responses, the acquisition parameters (angles) and the biophysical variables (leaf pigments, LAI, etc.):

ProSailType prosail;
prosail.SetRSR(satRSR);
prosail.SetParameters(prosailPars);
prosail.SetBVs(bio_vars);
auto result = prosail();

A simulation is computationally expensive and it would be difficult to parallelize the code. However, if many simulations are going to be run, it is worth using all the available CPU cores in the machine.

I show below how the C++11 standard support for threads makes it easy to run many simulations in parallel.

Each simulation uses a set of variables given in an input file. We parse the sample file in order to get the input parameters for each sample and we construct a vector of simulations with the appropriate size to store the results.

otbAppLogINFO("Processing simulations ..." << std::endl);
auto bv_vec = parse_bv_sample_file(m_SampleFile);
auto sampleCount = bv_vec.size();
otbAppLogINFO("" << sampleCount << " samples read."<< std::endl);
std::vector<SimulationType> simus{sampleCount};

The simulation function is actually a lambda which will sequentially process a sequence of samples and store the results into the simus vector. We capture by reference the parameters which are the same for all simulations (the satellite relative spectral responses satRSR and the acquisition angles in prosailPars):

auto simulator = [&](std::vector<BVType>::const_iterator sample_first,
                     std::vector<BVType>::const_iterator sample_last,
                     std::vector<SimulationType>::iterator simu_first){
  ProSailType prosail;
  prosail.SetRSR(satRSR);
  prosail.SetParameters(prosailPars);
  while(sample_first != sample_last)
    {
    prosail.SetBVs(*sample_first);
    *simu_first = prosail();
    ++sample_first;
    ++simu_first;
    }
};    

We start by figuring out how to split the simulation into concurrent threads. How many cores are there?

auto num_threads = std::thread::hardware_concurrency();
otbAppLogINFO("" << num_threads << " CPUs available."<< std::endl);

So we define the size of the chunks we are going to run in parallel and we prepare a vector to store the threads:

auto block_size = sampleCount/num_threads;
if(num_threads > sampleCount) // edge case: more threads than samples
  {
  num_threads = sampleCount;
  block_size = 1;
  }
std::vector<std::thread> threads(num_threads);

Here, I choose to use as many threads as there are cores available, but this could be changed by a multiplicative factor if we know, for instance, that disk I/O will introduce some idle time in each thread.

And now we can fill the vector with the threads that will process each block of simulations:

auto input_start = std::begin(bv_vec);
auto output_start = std::begin(simus);
for(size_t t=0; t<num_threads; ++t)
  {
  auto input_end = input_start;
  if(t == num_threads-1)
    // the last thread also processes the remainder of the division
    input_end = std::end(bv_vec);
  else
    std::advance(input_end, block_size);
  threads[t] = std::thread(simulator,
                           input_start,
                           input_end,
                           output_start);
  input_start = input_end;
  std::advance(output_start, block_size);
  }

The std::thread takes the name of the function object to be called, followed by the arguments of this function, which in our case are the iterators to the beginning and the end of the block of samples to be processed and the iterator of the output vector where the results will be stored. We use std::advance to update the iterator positions.

After that, we have a vector of threads which have been started concurrently. In order to make sure that they have finished before trying to write the results to disk, we call join on each thread, which results in waiting for each thread to end:

std::for_each(threads.begin(),threads.end(),
              std::mem_fn(&std::thread::join));
otbAppLogINFO("" << sampleCount << " samples processed."<< std::endl);
for(const auto& s : simus)
  this->WriteSimulation(s);

This may not be the most efficient solution, nor the most general one. Using std::async and std::future would have avoided dealing with block sizes, but with this solution we can easily decide the number of parallel threads to use, which may be useful on a server shared with other users.
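
For reference, here is a minimal sketch of what the std::async version could look like. This is not the code of otb-bv: simulate_one is a hypothetical function running the ProSail simulation for a single sample, and the <future> header is needed.

std::vector<std::future<SimulationType>> futures;
futures.reserve(bv_vec.size());
for(const auto& bv : bv_vec)
  // one task per sample: no block sizes, the runtime schedules the tasks
  futures.push_back(std::async(simulate_one, bv));
std::vector<SimulationType> results;
results.reserve(futures.size());
for(auto& f : futures)
  results.push_back(f.get()); // get() waits for the task and returns its result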

Tags: remote-sensing otb research programming open-source cpp algorithms
22 Apr 2015

Should we always normalize image features before classification?

The quick answer is of course not. But the reason I am writing this post is that I have myself been giving students a rule of thumb for a while without explaining the details.

When performing image classification (supervised or unsupervised) I always tell students to normalize the features. This normalization can be for instance just standardization (subtract the mean and divide by the standard deviation), although other approaches are possible.

The goal of this normalization is being able to compare apples and oranges: many classification algorithms are based on a distance in the feature space, and in order to give equal weight to all features, they have to have similar ranges. For instance, if we have 2 features like the NDVI, which ranges from -1 to 1, and the slope of a topographic surface which could be between \(0^\circ\) and \(90^\circ\), the Euclidean distance would give much more weight to the slope and the NDVI values would not have much influence in the classification.

This is why the TrainImagesClassifier and the ImageClassifier applications in the ORFEO Toolbox have the option to provide a statistics file with the mean and standard deviation of the features, so that the samples can be normalized.

This is needed for classifiers like SVM, unless custom kernels suited to particular sets of features are used.

Most OTB users have started using the Random Forest classifier since it was made available by the integration of the OpenCV Machine Learning module. And not long ago, someone told me that she did not notice any difference with and without feature normalization. Let's see why.

A Random Forest classifier uses a committee of decision trees. The decision trees split the samples at each node of the tree by thresholding one single feature. Learning the classifier basically amounts to finding, at each node, the threshold value which best splits the samples.

The decision trees used by the Random Forest implementation in OpenCV use the Gini impurity in order to find the best split1. At any node of the tree during the learning, the samples are sorted in increasing order of the feature. Then all possible threshold values are used to split the samples into 2 sets and the Gini impurity is computed for every possible split. The impurity index does not use the values of the features, but only the labels of the samples. The values of the features are only used in order to sort the samples.
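
As a reminder, the Gini impurity of a set of samples is \(G = 1 - \sum_{k} p_k^2\), where \(p_k\) is the proportion of samples of class \(k\) in the set: it depends only on the class proportions on each side of a candidate split, not on the feature values themselves.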

Any pre-processing of the features which is monotonic leaves the ordering of the samples unchanged: the candidate splits and their Gini impurities are identical, and the pre-processing therefore has no effect on the training of a Random Forest classifier.

Footnotes:

1

Actually, a variant of the Gini impurity is used, but this does not change the rationale here.

Tags: remote-sensing otb research algorithms
18 Mar 2015

Data science in C?

Coursera is offering a Data Science specialization taught by professors from Johns Hopkins. I discovered it via one of their courses which is about reproducible research. I have been watching some of the video lectures and they are very interesting, since they combine data processing, programming and statistics.

The courses use the R language, which is free software and one of the reference tools in the field of statistics, but it is also very much used for other kinds of data analysis.

While I was watching some of the lectures, I had some ideas to be implemented in some of my tools. Although I have used GSL and VXL1 for linear algebra and optimization, I have never really needed statistics libraries in C or C++, so I ducked a bit and found the apophenia library2, which builds on top of GSL, SQLite and Glib to provide very useful tools and data structures to do statistics in C.

Browsing a little bit more, I found that the author of apophenia has written a book "Modeling with data"3, which teaches you statistics like many books about R, but using C, SQLite and gnuplot.

This is the kind of technical book that I like most: good math and code, no fluff, just stuff!

The author sets the stage from the foreword. An example from page xii (the emphasis is mine):

" The politics of software

All of the software in this book is free software, meaning that it may be freely downloaded and distributed. This is because the book focuses on portability and replicability, and if you need to purchase a license every time you switch computers, then the code is not portable. If you redistribute a functioning program that you wrote based on the GSL or Apophenia, then you need to redistribute both the compiled final program and the source code you used to write the program. If you are publishing an academic work, you should be doing this anyway. If you are in a situation where you will distribute only the output of an analysis, there are no obligations at all. This book is also reliant on POSIX-compliant systems, because such systems were built from the ground up for writing and running replicable and portable projects. This does not exclude any current operating system (OS): current members of the Microsoft Windows family of OSes claim POSIX compliance, as do all OSes ending in X (Mac OS X, Linux, UNIX,…)."

Of course, the author knows the usual complaints about programming in C (or C++ for that matter) and spends many pages explaining his choice:

"I spent much of my life ignoring the fundamentals of computing and just hacking together projects using the package or language of the month: C++, Mathematica, Octave, Perl, Python, Java, Scheme, S-PLUS, Stata, R, and probably a few others that I’ve forgotten. Albee (1960, p 30)4 explains that “sometimes it’s necessary to go a long distance out of the way in order to come back a short distance correctly;” this is the distance I’ve gone to arrive at writing a book on data-oriented computing using a general and basic computing language. For the purpose of modeling with data, I have found C to be an easier and more pleasant language than the purpose-built alternatives—especially after I worked out that I could ignore much of the advice from books written in the 1980s and apply the techniques I learned from the scripting languages."

The author explains that C is a very simple language:

" Simplicity

C is a super-simple language. Its syntax has no special tricks for polymorphic operators, abstract classes, virtual inheritance, lexical scoping, lambda expressions, or other such arcana, meaning that you have less to learn. Those features are certainly helpful in their place, but without them C has already proven to be sufficient for writing some impressive programs, like the Mac and Linux operating systems and most of the stats packages listed above."

And he makes it really simple, since he actually teaches you C in one chapter of 50 pages (and 50 pages counting source code is not that much!). He does not teach you all C, though:

"As for the syntax of C, this chapter will cover only a subset. C has 32 keywords and this book will only use 18 of them."

At one point in the introduction I worried about the author bashing C++, which I like very much, but he actually gives a good explanation of the differences between C and C++ (emphasis and footnotes are mine):

"This is the appropriate time to answer a common intro-to-C question: What is the difference between C and C++? There is much confusion due to the almost-compatible syntax and similar name—when explaining the name C-double-plus5, the language’s author references the Newspeak language used in George Orwell’s 1984 (Orwell, 19496; Stroustrup, 1986, p 47). The key difference is that C++ adds a second scope paradigm on top of C’s file- and function-based scope: object-oriented scope. In this system, functions are bound to objects, where an object is effectively a struct holding several variables and functions. Variables that are private to the object are in scope only for functions bound to the object, while those that are public are in scope whenever the object itself is in scope. In C, think of one file as an object: all variables declared inside the file are private, and all those declared in a header file are public. Only those functions that have a declaration in the header file can be called outside of the file. But the real difference between C and C++ is in philosophy: C++ is intended to allow for the mixing of various styles of programming, of which object-oriented coding is one. C++ therefore includes a number of other features, such as yet another type of scope called namespaces, templates and other tools for representing more abstract structures, and a large standard library of templates. Thus, C represents a philosophy of keeping the language as simple and unchanging as possible, even if it means passing up on useful additions; C++ represents an all-inclusive philosophy, choosing additional features and conveniences over parsimony."

It is actually funny that I find myself using less and less class inheritance and leaning towards small functions (often templates); when I use classes, it is usually to create functors. This is certainly due to the influence of the Algorithmic journeys of Alex Stepanov.

Footnotes:

1

Actually the numerics module, vnl.

2

Apophenia is the experience of perceiving patterns or connections in random or meaningless data.

3

Ben Klemens, Modeling with Data: Tools and Techniques for Scientific Computing, Princeton University Press, 2009, ISBN: 9780691133140.

4

Albee, Edward. 1960. The American Dream and Zoo Story. Signet.

6

Orwell, George. 1949. 1984. Secker and Warburg.

7

Stroustrup, Bjarne. 1986. The C++ Programming Language. Addison-Wesley.

Tags: otb research programming books open-source free-software c cpp linux algorithms
26 Nov 2014

Is open and free global land cover mapping possible?

Short answer: yes.

In mid November, "Le Capitole du libre", a conference on Free Software and Free Culture, took place in Toulouse. The program this year was again full of interesting talks and workshops.

This year, I attended a workshop about contributing to Openstreetmap (OSM) using the JOSM software. The workshop was organised by Sébastien Dinot who is a massive contributor to OSM, and more importantly a very nice and passionate fellow.

I was very happy to learn to use JOSM and made 2 minor contributions right there.

During the workshop I learned that, in the past, OSM has been enriched with massive imports from open data sources, like for instance cadastral databases from different countries or the Corine Land Cover database. This has been possible thanks to the policies of many countries which have understood that the commons are important for the advancement of society. One example of this is the European INSPIRE initiative.

I was also interested to learn that what could be considered niche data, like agricultural land parcel databases such as the French RPG, has also been imported into OSM. Since I have been using the RPG at work for the last 4 years (see for example here or here), I sympathised with the difficulties of OSM contributors in efficiently exploiting these data. I understood that the Corine Land Cover import was also difficult and the results were not fully satisfactory.

As a matter of fact, roads, buildings and other cartographic objects are easier to map than land cover, since they are discrete and sparse. They can be pointed, defined and characterised more easily than natural and semi-natural areas.

After that, I could not avoid making the link with what we do at work in terms of preparing the exploitation of upcoming satellite missions for automatic land cover map production.

One of our main interests is the use of Sentinel-2 images. It is the weekend while I am writing this, so I will not use my free time to explain how land cover map production from multi-temporal satellite images works: I already did it in my day job.

What is therefore the link between what we do at work and OSM? The revolutionary thing from my point of view is the fact that Sentinel-2 data will be open and free, which means that the OSM project could use it to have a constantly up to date land cover layer.

Of course, Sentinel-2 data will come in huge volumes and a good amount of expertise will be needed to use them. However, several public agencies are paving the way in order to deliver data which is easy to use. For instance, the THEIA Land Data Centre will provide Sentinel-2 data which is ready to use for mapping. The data will be available with all the geometric and radiometric corrections of the best quality.

Actually, this is being done right now, for instance, for Landsat imagery. Of course, all these data are and will be available under open and free licences, which means that anyone can start learning how to use them right now.

However, going from images to land cover maps is not straightforward. Again, a good deal of expertise and efficient tools are needed in order to convert pixels into maps. This is what I have the chance to do at work: building tools to convert pixels into maps which are useful for real world applications.

Applying the same philosophy to tools as for data, the tools we produce are free and open. The core of all these tools is of course the Orfeo Toolbox, the Free Remote Sensing Image Processing Library from CNES. We have several times demonstrated that the tools are ready to efficiently exploit satellite imagery to produce maps. For instance, in this post here you even have the sequence of commands to generate land cover maps using satellite image time series.

This means that we have free data and free tools. Therefore, the complete pipeline is available for projects like OSM. OSM contributors could start right now getting familiar with these data and these tools.

Head over to CNES servers to get some Landsat data, install OTB, get familiar with the classification framework and see what could be done for OSM.

It is likely that some pieces may still be missing. For instance, the main approach for the map production is supervised classification. This means that we use machine learning algorithms to infer which land cover class is present at every given site using the images as input data. For these machine learning algorithms to work, we need training data, that is, we need to know beforehand the correct land cover class in some places so that the algorithm can be calibrated.

This training data is usually called ground truth and it is expensive and difficult to get. In a global mapping context, this can be a major drawback. However, there are interesting initiatives which could be leveraged to help here. For instance, Geo-Wiki comes to mind as a possible source of training data.

As always, talk is cheap, but it seems to me that exciting opportunities are available for open and free quality global mapping. This does not mean that the task is easy. It is not. There are many issues to be solved yet and some of them are still at the research stage. But this should not stop motivated mappers and hackers from starting to learn to use the data and the tools.

Tags: remote-sensing otb research open-source free-culture free-software
27 Oct 2014

xdg-open or "computer, do what I mean"

Command line nerds1 say that graphical user interfaces are less efficient, since the command line relies on the keyboard while graphical interfaces tend to require the mouse.

Graphical user interface fans say that command line based work-flows require the user to know more about the system. Although I don't see why this would be an issue (what's the problem with getting to know the system you use every day a little better?), I can see one clear advantage of the point-and-click work-flow.

When the user wants to open a file, double-clicking on it just works. On the other hand, on the command line, the user needs to call the appropriate application and pass the file name as argument:

evince my_document.pdf

That means that the user has to know which application will correctly deal with the given file. For example, this won't work:

evince my_movie.mp4

because evince is a document viewer and it only understands formats such as PDF, PostScript, DjVu or DVI, but it is not able to understand MPEG-4 videos.

So how can one replicate the point-and-click behavior on the command line? It would be nice to have something like:

open my_file

and have the appropriate application chosen auto-magically.

It is of course possible, and it can be done in the same way as the graphical desktop environment does it: using xdg-utils.

Inside a desktop environment (e.g. GNOME, KDE, or Xfce), xdg-open simply passes the arguments to that desktop environment's file-opener application (gvfs-open, kde-open, or exo-open, respectively), which means that the associations are left up to the desktop environment. Therefore, on the command line, one can do:

xdg-open my_file.pdf

and the appropriate application will be used (evince on GNOME, okular on KDE, etc.).

When no desktop environment is detected (for example when one runs a standalone window manager, e.g. stumpwm), xdg-open will use its own configuration files.

Sometimes, the default associations between applications and file types may not suit the user. For instance, in my case, MPEG-4 videos were opened with mplayer, but I prefer vlc. The xdg-mime tool is meant to help you change the default associations:

xdg-mime default vlc.desktop video/mp4

The vlc.desktop parameter is an xdg desktop file which describes the vlc application. On my Debian GNU/Linux system, these files are located in /usr/share/applications, so I was able to look in there to see which applications xdg-open could use.

The parameter video/mp4 passed to xdg-mime is the type of file we want to associate the application with. In order to know what type of file we are dealing with, xdg-mime can query it for you:

xdg-mime query filetype my_movie.mp4

And the answer is:

video/mp4

Now, what happens if there is no desktop file for an application I want to use? For instance, remote sensing image visualization is not possible with the standard image viewers available on GNU/Linux. I personally use the OTB IceViewer, which is a lightweight application using the rendering engine developed for Monteverdi2.

In this case, I have to create a desktop file for the application. In my case, I have created the otbice.desktop file in /home/inglada/.local/share/applications/ with the following contents:

[Desktop Entry]
Type=Application
Name=OTB Ice Viewer
Exec=/home/inglada/OTB/builds/Ice/bin/otbiceviewer %U
Icon=otb-logo
Terminal=false
Categories=Graphics;2DGraphics;RasterGraphics;
MimeType=image/bmp;image/gif;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-xbitmap;image/tiff;image/jpeg;image/png;image/x-xpixmap;image/jp2;image/jpeg2000;

The file has to be executable, so I have to do:

chmod +x /home/inglada/.local/share/applications/otbice.desktop

After that, I can associate the IceViewer with the image formats I want:

xdg-mime default otbice.desktop image/tiff

And it just works.
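
As a quick sanity check, you can query back the default application registered for the type:

xdg-mime query default image/tiff

which should answer otbice.desktop.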

Footnotes:

1

And I am one. See here, here and here for example.

Tags: otb open-source emacs linux tricks note-to-self
28 Jul 2014

Why C++?

Recently, on the otb-users mailing list, somebody asked why C++ was chosen to implement OTB.

The main reasons why I chose C++ for OTB were the following (in order of relevance):

  1. Most of the existing libraries on which OTB was going to rely were written in C++. Not only ITK, which is at the core of OTB, but also OSSIM and GDAL.
  2. C++ can be as efficient as C and has higher level abstraction mechanisms allowing for cleaner code and architecture (templates, classes, name spaces, etc.).
  3. C++ was the language I was most familiar with.

C++ was introduced in the late 70's and was initially called "C with classes", since C++ was designed to provide Simula's facilities for program organization together with C's efficiency and flexibility for systems programming. The class concept (with derived classes and virtual functions) was borrowed from the Simula programming language.

C++ supports different programming styles:

  • procedural and imperative programming (the C style),
  • object orientation (C with classes),
  • generic programming (templates and the STL),
  • functional programming.

Let's have a look at these different items.

Procedural and imperative styles: the C style

C++ has the same tools as C for imperative programming. So there are the classical control flow mechanisms such as for and while loops, if statements, etc.

C++ supports procedural programming via functions, which are like C functions but with stricter type checking of the arguments. Unlike C, function arguments can be passed by reference, which is like passing a pointer to the object but using the same syntax as if the object itself was passed. This makes it possible to pass large objects without copying them, as with pointers, but the programmer can choose to declare the reference as constant so that the object can't be modified inside the function.
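
As an illustration (generic C++, not OTB code), a function taking a large object by constant reference:

#include <vector>

// no copy is made, and the compiler rejects any modification of v
double first_element(const std::vector<double>& v)
{
  return v.front();
}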

Object orientation: C with classes

C++ has classes which are composite types (that is, they can contain other objects) like C structs, but they can also have associated methods, or member functions. At first, object orientation in C++ can seem very similar to Java's or Python's in terms of what a class can contain and how it is used. The main difference is that C++ uses value semantics. To put it clearly: when a variable of a given class is created in C++, it is a value, like an int or a double. In Java or Python, this is not the case. In these languages, what you get is a reference to an object, not the object itself, and therefore you can't manage memory as you would want to, and a garbage collector is needed in order to free memory when the object is not used anymore.

In C++ you can choose to have an object (the default behavior) and in this case, like an int or a double, it will be allocated on the stack and deallocated automatically when it goes out of scope (typically at the end of the block, when the closing curly brace is found). So there is no need for a garbage collector, since no garbage is generated. Java programmers will appreciate this joke …

On the other hand, you can choose to use pointers to the objects, and then the objects are allocated in the free store and the programmer is responsible for freeing the memory.
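
A short sketch with a hypothetical class to show both behaviours:

void f()
{
  MyClass a;                  // a value, allocated on the stack
  MyClass* p = new MyClass;   // an object on the free store
  // ... use a and *p ...
  delete p;                   // the programmer must free the memory
}                             // a is destroyed automatically here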

C++ offers multiple inheritance (unlike Java) and virtual functions (like Java).

C++ also offers operator overloading. This means that arithmetic operators like + for addition or * for multiplication can be redefined for each class. This is useful for instance for a matrix class. Other operators like [], -> can also be overloaded.
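
For instance, with a hypothetical 2D vector class:

struct Vec2
{
  double x, y;
};

// redefine + so that v1 + v2 reads like mathematical notation
Vec2 operator+(const Vec2& a, const Vec2& b)
{
  return {a.x + b.x, a.y + b.y};
}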

Generic programming: templates and the STL

Generic programming can be used in C++ thanks to templates. Class templates are classes defined in terms of generic types. That is, a class can contain an object of a generic type T which is left unspecified. All code, like member functions, etc. can be written in terms of this generic type.

When a programmer wants to use this class template, she has to state the concrete type that will be used instead of T. This is very useful when algorithms are exactly the same for different data types like integers, floating point numbers, etc. In languages which don't support generic programming, the same algorithm has to be rewritten for every different type, or the programmer has to choose the best type (for some definition of best) for which to write the algorithm.

There are also function templates, which are functions defined in terms of generic types.

The mechanism behind templates generates, at compile time, the code for the concrete types which will be as efficient as hand-written code for these types.
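
For example, a function template (illustration only) written once for any type providing operator<:

template <typename T>
const T& max_of(const T& a, const T& b)
{
  return (a < b) ? b : a; // works for int, double, std::string, ...
}

// max_of(3, 4) instantiates the int version,
// max_of(3.0, 4.0) the double version, with no runtime overhead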

C++ comes with the Standard Template Library, the STL, which provides generic containers and generic algorithms. Generic containers are containers like vectors, lists, etc. whose elements are generic (like ints, doubles, strings or any other type or class). Generic algorithms are algorithms which operate on generic types (like ints, doubles, strings or any other type or class). These algorithms operate on ranges inside a container, so they are the same for a vector of ints or for a list of doubles. These ranges are defined by iterators, which are like pointers or coordinates inside the container.

In this way, if you have N types of containers and M different algorithms, you don't have to write NxM versions of the code, but just N+M. So less code to write and maintain. I am a huge fan of templates.

Functional programming

Functional programming is not just programming using functions, but rather using functions as first class citizens. That means being able to pass functions as arguments to other functions and return them as results.

In C, one can achieve this by using pointers to functions, but this is risky and ugly, in terms of function signature.

In C++ we can define functions as types, and therefore they can be used as arguments to and return values from other functions. The way of creating a function type (in C++ we call them function objects or functors) is to create a class and use operator overloading. In the same way as + can be overloaded in a matrix class, the () operator can be overloaded and therefore if myfunc is an object of the class F where the operator has been overloaded, we can use myfunc() like a normal function call.

And therefore, now we can define functions which take or return objects of class F and implement functional programming.

The STL makes extensive use of function objects in order to tune the behavior of algorithms. For example, the STL find_if algorithm returns the position of the first element in a container for which a particular condition is true. It takes as one of its arguments the function which will be used to test whether the elements of the container verify the condition.
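
A small illustration (not STL internals) of find_if used with a functor that stores state:

#include <algorithm>
#include <vector>

// function object: the threshold is the state, operator() is the test
struct GreaterThan
{
  double threshold;
  bool operator()(double x) const { return x > threshold; }
};

double first_above(const std::vector<double>& v, double t)
{
  auto it = std::find_if(v.begin(), v.end(), GreaterThan{t});
  return it != v.end() ? *it : t; // first element greater than t, or t itself
}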

Conclusion

As far as I know, C++ offers the best trade-off between efficiency and abstraction level. Or as Bjarne Stroustrup said in a recent interview:

The essence of C++ is that it provides a direct map to hardware and offers mechanisms for very general zero-overhead abstraction.

Since C++ can be as efficient as C, I don't see any reason to use C for a new programming project. Learning C is still useful if one needs to modify existing C code. However, if one wants to use an existing C library in a new project, C++ seems a better choice.

The recent C++11 and C++14 versions of the language (which is by the way an ISO standard) have introduced many new features which make the use of C++ easier and this is giving a sort of renaissance to the language.

Nowadays, in addition to its classical uses in systems programming and scientific applications, C++ is starting to be used in mobile applications.

Tags: cpp programming-languages otb
16 Jan 2014

OTB++11

Modern C++

I have just recently had a look at the latest C++ ISO standard (known as C++11). I was not aware of all the fancy features that have been included in the language.

Some of the most interesting features are:

  • auto,
  • range-based for loops,
  • general constant expressions,
  • uniform initialisation,
  • type aliases,
  • decltype,
  • variadic templates,
  • type safe threading,
  • user-defined literals,
  • move semantics

There are also new containers and algorithms in the STL.

During the end of the year break, I spent some time reading and watching online presentations about the new C++. I recommend watching Bjarne Stroustrup's keynote at Going Native 2012 (link) and Scott Meyers' talk at Going Native 2013, where he even anticipates some C++14 features (link).

Stroustrup introduces his talk by saying:

I use C++11 freely. Examples include auto, general constant expressions, uniform initialisation, type aliases, type safe threading, and user-defined literals. C++11 features are only just starting to appear in production compilers, so some of my suggestions are conjecture. Developing a "modern style," however, is essential if we don't want to maintain newly-written 1970s and 1980s style code in 2020.

After listening to him, I realised that the easiest features alone could make C++ programming much more fun and simple. Since I have spent some time in the last 2 years flirting with functional and dynamically typed (or at least type-inferring) languages (Scheme, Clojure, Python, SML), I wanted to see how using C++11 affects the style of OTB code.

As usual, GCC has been the first compiler to make these new features available (closely followed by Clang). C++11 is available in recent versions of GCC using -std=c++11. So I recompiled OTB with this flag and started modifying some of the Software Guide examples to use C++11 features.

All in all, I can say that these features make OTB code shorter, clearer and maybe safer. I report some examples here.

Examples

auto

The auto keyword lets you declare variables without explicitly stating their type, putting that burden on the compiler. This is not dynamic typing, this is type inference, and nothing changes in terms of what the compiler generates. So you can write:

auto x = sqrt(2.0);

and the type of x will be deduced from the return value type of sqrt. You can also do this:

auto x=0;
auto y=0.0;

and x will be of type int and y will be of type double. If you want to force other types, you have to use the classical declarations.

How to benefit from auto when using OTB? A common pattern in OTB code is to instantiate a smart pointer using the New() class method. Sometimes, we just want to create one object of a given type and we do this:

typedef otb::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();

typedef otb::ImageFileWriter<ImageType> WriterType;
WriterType::Pointer writer = WriterType::New();

in order to create a reader and a writer. That is, since type names are long we define a type using typedef and then we use it twice, once for Pointer and once for New.

Using auto, we can reduce the boilerplate like this:

auto reader = otb::ImageFileReader<ImageType>::New();
auto writer = otb::ImageFileWriter<ImageType>::New();

The same applies for filters, for which usually we only use one instance:

typedef itk::RescaleIntensityImageFilter<ImageType,
                                         OutputImageType> RescalerType;
RescalerType::Pointer rescaler = RescalerType::New();

becomes

auto rescaler = itk::RescaleIntensityImageFilter<ImageType,
                                                 OutputImageType>::New();

which is once again much shorter.

Another example in a for loop:

for (PolygonListType::Iterator it = polygonList->Begin();
     it != polygonList->End(); ++it)
  {
  DataNodeType::Pointer newPolygon = DataNodeType::New();
  newPolygon->SetPolygonExteriorRing(it.Get());
  tree->Add(newPolygon, multiPolygon);
  }

becomes

for (auto it = polygonList->Begin();
     it != polygonList->End(); ++it)
  {
  auto newPolygon = DataNodeType::New();
  newPolygon->SetPolygonExteriorRing(it.Get());
  tree->Add(newPolygon, multiPolygon);
  }

using

The using keyword lets you define type aliases, including template aliases. Whenever we write this:

typedef otb::Image<unsigned int, 2> ImageType;
typedef otb::VectorImage<unsigned int, 2> VectorImageType;
typedef otb::Image<double, 2> DoubleImageType;
typedef otb::VectorImage<double, 2> DoubleVectorImageType;

we could do this:

using ImageType = otb::Image<unsigned int, 2>;
using VectorImageType = otb::VectorImage<unsigned int, 2>;
using DoubleImageType = otb::Image<double, 2>;
using DoubleVectorImageType = otb::VectorImage<double, 2>;

For people like me who have always found the order of the types in a typedef counterintuitive, this is much clearer. I know that the code is not shorter, but for a C++ beginner, it should be much easier to read.

One cool thing that using allows and that typedef doesn't is the partial binding of template parameters1:

template<typename T> 
using Image2D = otb::Image<T, 2>;
auto im = Image2D<int>::New();  

decltype

Sometimes, we don't want to mentally track the type names, but we know that two objects should have the same type. decltype lets us declare the type of one object by using the name of another object:

ImageType::IndexType start = {0, 0};
decltype(start) start2 = {0, 0};

In OTB, at first, I didn't see a way to use decltype on smart pointers and templates, since we usually manipulate ObjectType::Pointer, but the template parameters are simply ObjectType. Fortunately, ITK designers declared an ObjectType in the SmartPointer class which is an alias for the type of the pointed object.

I found an example where this is useful (the orthofusion example in the Tutorial chapter of the Software Guide).

The original code defines a type for a projection and creates an object:

typedef otb::GenericMapProjection<otb::TransformDirection::INVERSE>
        InverseProjectionType;

InverseProjectionType::Pointer utmMapProjection =
                               InverseProjectionType::New();

Later in the program, we can use the defined type as a template parameter for a filter:

typedef otb::OrthoRectificationFilter<ImageType, DoubleImageType,
                                      InverseProjectionType> 
             OrthoRectifFilterType;
OrthoRectifFilterType::Pointer orthoRectifPAN =
                               OrthoRectifFilterType::New();

If we created the projection without defining the type like this:

auto utmMapProjection =
     otb::GenericMapProjection<otb::TransformDirection::INVERSE>::New();

how can we then instantiate the filter with a type which has not been defined? Like this:

auto orthoRectifPAN =
      otb::OrthoRectificationFilter<ImageType,
                                    DoubleImageType,
                                    decltype(utmMapProjection)::ObjectType>::New();

I am not sure whether this is a good solution in production code which has to be maintained long time after it was written, but it is for sure an interesting solution when prototyping processing chains.

Lambdas

Lambdas, or anonymous functions, are one of the most exciting things in C++11 when one has used them in LISP-like languages or even in Python. They make fast prototyping much easier, mainly when making heavy use of the STL algorithms, because they can replace functors or plain functions.

Usually, STL algorithms can be specialised by passing them functions as parameters. If one wants to store state, a functor or function object (a class which defines the () operator) is used.

The inconvenience of functions and functors is that they are defined away from where they are used, so they are not ideal when they are only used once. Functors are worse than functions, since they need a lot of boilerplate for only one useful method.

Lambdas are the ideal solution for this, since they are defined locally. They can store state by capturing local variables either by reference or by value.
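
For instance (a generic example, not OTB code):

int threshold = 10;
// threshold is captured by value; use [&threshold] to capture by reference
auto above = [threshold](int x) { return x > threshold; };
bool b = above(12); // true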

OTB iterators are not fully compatible with the STL, so one might think that lambdas would not be very useful with OTB-specific code. But actually, there are many filters which can be implemented by specialising one of the N-ary functor filters. These filters are useful when one wants to implement a filtering operation which can't be obtained with the otb::BandMathFilter, as for instance when a local neighbourhood is needed.

One example of this is the change detection framework. A long time ago, when I wrote this example, I had to use a lot of code to achieve a simple thing like computing the mean difference around a pixel.

First of all, I needed a functor to implement the processing in the neighbourhood:

template<class TInput1, class TInput2, class TOutput>
class MyChangeDetector
{
public:
  MyChangeDetector() {}
  ~MyChangeDetector() {}
  inline TOutput operator ()(const TInput1& itA,
                             const TInput2& itB)
  {
    TOutput result = 0.0;

    for (unsigned long pos = 0; pos < itA.Size(); ++pos)
      {

      result += static_cast<TOutput>(itA.GetPixel(pos) - itB.GetPixel(pos));

      }
    return static_cast<TOutput>(result / itA.Size());
  }
};

That is, a class definition with empty constructor and destructor. Then I needed to inherit from the binary functor which performs a neighbourhood filtering and specialise it using my functor.

template <class TInputImage1, class TInputImage2, class TOutputImage>
class ITK_EXPORT MyChangeDetectorImageFilter :
  public otb::BinaryFunctorNeighborhoodImageFilter<
      TInputImage1, TInputImage2, TOutputImage,
      MyChangeDetector<
          typename itk::ConstNeighborhoodIterator<TInputImage1>,
          typename itk::ConstNeighborhoodIterator<TInputImage2>,
          typename TOutputImage::PixelType> >
{
public:
  typedef MyChangeDetectorImageFilter Self;

  typedef typename otb::BinaryFunctorNeighborhoodImageFilter<
      TInputImage1, TInputImage2, TOutputImage,
      MyChangeDetector<
          typename itk::ConstNeighborhoodIterator<TInputImage1>,
          typename itk::ConstNeighborhoodIterator<TInputImage2>,
          typename TOutputImage::PixelType>
      >  Superclass;

  typedef itk::SmartPointer<Self>       Pointer;
  typedef itk::SmartPointer<const Self> ConstPointer;

  itkNewMacro(Self);

protected:
  MyChangeDetectorImageFilter() {}
  virtual ~MyChangeDetectorImageFilter() {}

private:
  MyChangeDetectorImageFilter(const Self &);
  void operator =(const Self&);

};

So again, a class definition for nothing. I could maybe just have defined a type, but the code using it would not have been as simple as this:

typedef MyChangeDetectorImageFilter<InputImageType1, InputImageType2,
    ChangeImageType>      FilterType;
FilterType::Pointer   filter = FilterType::New();

Even if I had defined a type alias for the filter in order to avoid inheriting from the existing filter, the functor definition is overkill and it is located far from where it is used.

Using lambdas, the code becomes this:

using CPT = ChangeImageType::PixelType;
using CNIType1 = itk::ConstNeighborhoodIterator<InputImageType1>;
using CNIType2 = itk::ConstNeighborhoodIterator<InputImageType2>;

That is, 3 lines of type aliases for a simpler notation. Then the lambda:

std::function<CPT (CNIType1, CNIType2)>
  cdt = [](CNIType1 itA, CNIType2 itB){
  CPT result{0};
  for (auto pos = 0; pos < itA.Size(); ++pos)
    result += static_cast<CPT>(itA.GetPixel(pos) - itB.GetPixel(pos));
  return static_cast<CPT>(result / itA.Size());
};                                            

And finally the filter instantiation:

auto filter = otb::BinaryFunctorNeighborhoodImageFilter<InputImageType1,
                                                        InputImageType2,
                                                        ChangeImageType,
                                                        decltype(cdt)>::New();

If we had wanted a function with state, objects in the scope where the lambda is defined could have been captured by value or by reference within the [], so we are very close to what a functor may achieve, with much less code.

The main advantage here is that all the code is in the same location, thanks to the possibility of creating a function anywhere in the sources. If I understand correctly, lambdas can be inlined, which means that the code can be faster than a function call.

Conclusion

I have only started to use this kind of code, but I think that once you start using auto and lambdas it must be very difficult to go back to C++98/03.

The only drawback with using C++11 is dropping compatibility with old compilers, but this is not an issue for most of the things I do. So I can start writing modern OTB code instead of poo (plain old OTB)!

Footnotes:

1

With GCC 4.7 this only seems to work out of a scoped block.

Tags: cpp otb
17 Nov 2011

Hack OTB on the Clojure REPL

The WHY and the WHAT

Lately, I have been playing with several configurations in order to be able to fully exploit the ORFEO Toolbox library from Clojure.

As explained in previous posts, I am using OTB through the Java bindings. Recently, the folks of the OTB team have made very cool things – the application wrappers – which enrich even further the ways of using OTB from third party applications. The idea of the application wrappers is that when you code a processing chain, you may want to use it on the command line, with a GUI or even as a class within another application.

What Julien and the others have done is to build an application engine which, given a pipeline and its description (inputs, outputs and parameters), is able to generate a CLI interface and a Qt-based GUI interface. But once you are there, there is no major problem (well, if your name is Julien, at least) in generating a SWIG interface, which then allows building bindings for the different languages supported by SWIG. This may seem a kitchen sink – usine à gaz in French – and actually it is if you don't pay attention (like me).

The application wrappers use advanced techniques which require a recent version of GCC. At least recent from a Debian point of view (GCC 4.5 or later). On the other hand, the bindings for the library (the OTB-Wrapping package) use CableSwig, which in turn uses its own SWIG interfaces. So if you want to be able to combine the application wrappers with the library bindings, odds are that you may end up with a little mess in terms of compiler versions, etc.

And this is exactly what happened to me when I had to build GCC from source on Debian Squeeze (no GCC 4.5 in the backports yet). So I started making some experiments with virtual machines in order to understand what was happening. Eventually I found that the simplest configuration was the best one to sort things out. This is the HOW-TO for getting a working installation of OTB with the application wrappers and OTB-Wrapping, and access to everything from a Clojure REPL.

The HOW-TO

Installing Arch Linux on a virtual machine

I have chosen Arch Linux because with few steps you can have a minimal Linux install. Just grab the net-install ISO and create a new virtual machine on VirtualBox. In terms of packages just get the minimal set (the default selection, don't add anything else). After the install, log in as root, and perform a system update with:

    pacman -Suy

If you used the net-install image, there should be no need for this step, but in case you used an ISO with the full system, it's better to make sure that your system is up to date. Then, create a user account and set up a password if you want to:

    useradd -m jordi
    passwd jordi

Again, this is not mandatory, but things are cleaner this way, and you may want to keep on using this virtual machine. Don't forget to add your newly created user to the sudoers file:

    pacman -S sudo
    vi /etc/sudoers

You can now log out and log in with the user name you created. Oh, yes, I forgot to point out that you don't have Xorg, or Gnome or anything graphical. Even the mouse is useless, but since you are a real hacker, you don't mind.  


Install the packages you need

Now the fun starts. You are going to keep things minimal and only install the things you really need.

    sudo pacman -S mercurial make cmake gcc gdal openjdk6 \
                   mesa swig cvs bison links unzip rlwrap

Yes, I know, Mesa (OpenGL) is useless without X, but we want to check that everything builds OK and OTB uses OpenGL for displaying images.

  • Mercurial is needed to get the OTB sources, cvs is needed for getting CableSwig
  • Make, cmake and gcc are the tools for building from sources
  • Swig is needed for the binding generation, and bison is needed by CableSwig
  • OpenJDK is our Java version of choice
  • Links will be used to grab the Clojure distro, unzip to extract the corresponding jar file and rlwrap is used by the Clojure REPL.

And that's all!  


Get the source code for OTB and friends

I prefer using the development version of OTB, so I can complain when things don't work.

    hg clone http://hg.orfeo-toolbox.org/OTB
    hg clone http://hg.orfeo-toolbox.org/OTB-Wrapping

CableSwig is only available through CVS.

    cvs -d :pserver:anonymous@public.kitware.com:/cvsroot/CableSwig \
            co CableSwig


Start building everything

We will compile everything in a separate directory:

    mkdir builds
    cd builds/


OTB and OTB-Wrapping

We create a directory for the OTB build and configure with CMake

    mkdir OTB
    cd OTB
    ccmake ../../OTB

Don't forget to set the application build and the Java wrappers to ON. Then just make (literally):

    make

By the way, I have noticed that the compilation of the application engine can blow up your gcc if you don't allocate enough RAM for your virtual machine. At this point, you should be able to use the Java application wrappers. But we also want the library bindings, so we go on. We can now build CableSwig, which will be needed by OTB-Wrapping. Same procedure as before:

    cd ../
    mkdir CableSwig
    cd CableSwig/
    ccmake ../../CableSwig/
    make

And now, OTB-Wrapping. Same story:

    cd ../
    mkdir OTB-Wrapping
    cd OTB-Wrapping/
    ccmake ../../OTB-Wrapping/

In the cmake configuration, I chose to build only Java, but even in this case a Python interpreter is needed. I think that CableSwig needs it to generate XML code to represent the C++ class hierarchies. If you did not install Python explicitly in Arch, you will have a 2.7 version by default. This is OK. If you decided to install Python with pacman, you will have both Python 2.7 and 3.2, and the default Python executable will point to the latter. In this case, don't forget to set PYTHON_EXECUTABLE in CMake to /usr/bin/python2. Then, just make and cd to your home directory.

    make
    cd

And you are done. Well not really. Right now, you can do Java, but what's the point? You might as well use the C++ version, right?  


Land of Lisp

Since Lisp is the best language out there, and OTB is the best remote sensing image processing software (no reference here, but trust me), we'll do OTB in Lisp.  

PWOB

You may want to get some examples that I have gathered on Bitbucket.

    mkdir Dev
    cd Dev/
    hg clone http://bitbucket.org/inglada/pwob

PWOB stands for Playing With OTB Bindings. I have only put there three languages which run on the JVM, for reasons I stated in a previous post. You will of course avoid the plain Java ones. I have mixed feelings about Scala. I definitely love Clojure, since it is a Lisp.

 

Get Clojure

The cool thing about this Lisp implementation is that it is contained in a jar file. You can get it with the text-based web browser links:

    links http://clojure.org

Go to the download page and grab the latest release. It is a zip file which contains, among other things, the needed jar. You can unzip the file:

    mkdir src
    mv clojure-1.3.0.zip src/
    cd src/
    unzip clojure-1.3.0.zip

Move the jar file to the user's .clojure dir:

    cd clojure-1.3.0
    mkdir ~/.clojure
    mv clojure-1.3.0.jar ~/.clojure/

Make a symlink so we have a clojure.jar:

    ln -s /home/inglada/.clojure/clojure-1.3.0.jar /home/inglada/.clojure/clojure.jar

And clean up useless things:

    cd ..
    rm -rf clojure-1.3.0*

 

 

Final steps before hacking

Some final minor steps are needed before the fun starts. You may want to create a file for the REPL to store completions:

    touch ~/.clj_completions

In order for Clojure to find all the jars and shared libraries, you have to define some environment variables. You may choose to set them in your .bashrc file. Note the use of $HOME: a tilde inside double quotes would not be expanded by the shell:

    export LD_LIBRARY_PATH="$HOME/builds/OTB-Wrapping/lib/:$HOME/builds/OTB/bin/"
    export ITK_AUTOLOAD_PATH="$HOME/builds/OTB/bin/"

PWOB provides a script to run a Clojure REPL with everything set up:

    cd ~/Dev/pwob/Clojure/src/Pwob
    ./otb-repl.sh

Now you should see something like this:

    Clojure 1.3.0
    user=>

Welcome to Lisp! If you want to use the applications, you can for instance do:

    (import '(org.otb.application Registry))
    (def available-applications (Registry/GetAvailableApplications))
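If the Java wrappers mirror the C++ application API, which is my assumption here (as is the availability of a Smoothing application), creating and running an application could look like this:

    ;; create an application by name, set its parameters and run it;
    ;; the parameter keys follow the OTB application framework conventions
    (def app (Registry/CreateApplication "Smoothing"))
    (doto app
      (.SetParameterString "in" "input.tif")
      (.SetParameterString "out" "smoothed.tif")
      (.ExecuteAndWriteOutput))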

In the PWOB source tree you will find other examples.  

 

 

 

Final remarks

Using the REPL is fun, but you will soon need to store the lines of code, test things, debug, etc. In this case, the best choice is to use SLIME, the Superior Lisp Interaction Mode for Emacs. There are many tutorials on the net on how to set it up for Clojure. Search for it using DuckDuckGo. In the PWOB tree (classpath.clj) you will find some hints on how to set it up for OTB and Clojure. A simpler config for Emacs is to use the inferior lisp mode, for which I have also written a config (otb-clojure-config.el). I may write a post some day about that. Have fun!

Tags: arch-linux clojure otb programming
20 May 2011

Lambda Omega Lambda

In my current research work, I am investigating the possibility of integrating domain expert knowledge together with image processing techniques (segmentation, classification) in order to provide accurate land cover maps from remote sensing image time series. When we look at this in terms of tools (software), there is a set of requirements which have to be met in order to go from toy problems to operational processing chains. These operational chains are needed by scientists for carbon and water cycle studies, climate change analysis, etc. In the coming years, the image data needed for this kind of near-real-time mapping will be available through Earth observation missions like ESA's Sentinel program. Briefly, the constraints on the software tools for this task are the following.

  1. Interactive environment for exploratory analysis
  2. Concurrent/parallel processing
  3. Availability of state of the art, validated image processing algorithms
  4. Symbolic, semantic information processing

This long post / short article describes some of the thoughts I have and some of the conclusions I have come to after a rather long analysis of the tools available.

Need for a repl

The repl acronym stands for Read-Evaluate-Print-Loop and, I think, has its origins in the first interactive Lisp systems. Nowadays, we call this an interactive interpreter. People used to numerical and/or scientific computing will think of Matlab, IDL, Octave, Scilab and what not. Computer scientists will see here the Python/Ruby/Tcl/etc. interpreters. Many people nowadays use Python+NumPy or SciPy (now gathered in Pylab) in order to have something similar to Matlab/IDL/Scilab/Octave, but with a general purpose programming language. What a repl allows is to interactively explore the problem at hand without going through the write-compile-run cycle. The simple script that I wrote for the OTB blog using OTB's Python bindings could be typed at the interpreter and bring together OTB and Pylab.

 

Need for concurrent programming

Without going into the details of the differences between parallel and concurrent programming, it seems clear that Moore's law can only continue to hold through multi-core architectures. In terms of low-level (pixel-wise) processing, OTB provides an interesting solution (multi-threaded execution of filters by dividing images into chunks), and this approach can be generalized to GPUs. However, sometimes an algorithm needs to operate on the whole image, because splitting it would affect the results. This is typically the case for Markovian approaches to filtering or for image segmentation methods. In these cases, one way to speed things up is to process several images in parallel (if the memory footprint allows it!). One way of trying to maximize the use of all available computing cores in Python is the multiprocessing module, which manages a pool of worker processes; a minimal sketch is given below. However, this does not allow for easy inter-process communication, which is not needed in that example, but can be very useful if the different processes are working on the same image: imagine a multi-agent system where classifiers, algorithms for biophysical parameter extraction, data assimilation techniques, etc. work together to produce an accurate land cover map. They may want to communicate in order to share information. As far as I understand, Python has some limitations here due to the global interpreter lock. Some languages, as for instance Erlang, offer appropriate concurrency primitives for this. I have played a little bit with them in my exploration of 7 languages in 7 weeks. Unfortunately, there are no OTB bindings for Erlang. Scala has copied the Erlang actors, but I never really got into the Scala thing.
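The kind of minimal sketch I have in mind, where process_image is a hypothetical stand-in for a real whole-image OTB pipeline:

    # process several images in parallel with a pool of worker processes
    from multiprocessing import Pool

    def process_image(filename):
        # hypothetical stand-in for a whole-image algorithm (e.g. segmentation)
        return "processed " + filename

    if __name__ == "__main__":
        images = ["img1.tif", "img2.tif", "img3.tif", "img4.tif"]
        pool = Pool(processes=4)  # one worker per core
        results = pool.map(process_image, images)
        pool.close()
        pool.join()
        print(results)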

 

 

Need for OTB access

 

This one here shouldn't need much explanation. Efficient remote sensing image processing needs OTB. Period. I am sorry, I am rather biased on that! I like C++ and I have no problem using it, but there is no repl, and one needs several lines of typedef before being able to use anything. This is the price to pay for good static type checking before running the program. And it's damned fast!

We have Python bindings which give us a clean syntax, like in the pipeline example of the OTB tutorials. However, the lack of easy concurrency is a bad point for Python. Also, the lack of Artificial Intelligence frameworks for Python is an anti-feature. Java has them, but Java has no repl, and look at its syntax: it's worse than C++. You get all these mangled names which were clean in C++ and Python and become things like otbImageFileReaderIUS2.

Scala, thanks to its interoperability with Java (Scala runs in the JVM), can use the OTB bindings, and the resulting syntax is actually cleaner than Java's. There is still the problem of the mangled names, but with some pattern matching or case classes, this should disappear. So Scala seems a good candidate. It has:

  1. Python-like syntax (although statically typed)
  2. Concurrency primitives
  3. A repl

Unfortunately, Scala is not a Lisp. Bear with me.  

 

A Lisp would be nice

I want to build an expert system; it would be nice to have something for remote sensing similar to Wolfram Alpha. We could call it Ω Toolbox and keep the OTB acronym (or close). Why Lisp? Well, I am not able to explain that here, but you can read P. Norvig's or P. Graham's essays on the topic. If you have a look at books like PAIP or AIMA, or at systems like LISA, CLIPS or JESS, they are either written in Lisp or they offer Lisp-like DSLs. I am aware of implementations of the AIMA code in Python, and even P. Norvig himself has his reasons for having migrated from Lisp to Python, but as stated above, Python seems to be out of the game for me. The code-is-data philosophy of Lisp is, together with the repl, one of the main assets for AI programming, as far as I understand it. Another important aspect is the functional programming paradigm used in Lisp (even though other paradigms are also available there). Concurrency is the main reason for the recent resurgence of functional languages (Haskell, for instance). Even though I (still) don't see the need for pure functional programming in my applications, lambda calculus is elegant and interesting. Maybe λ Toolbox would be a more appropriate name?

 

Clojure

If we recap the needs:

  1. A repl (no C++, no Java)
  2. Concurrency (no Python)
  3. OTB bindings available (no Erlang, no Haskell, no Ruby)
  4. Lisp (none of the above)

there is one single candidate: Clojure. Clojure is a Lisp dialect which runs on the JVM and has nice concurrency features like immutability, STM and agents. And by the way, the OTB bindings work like a charm. Admittedly, the syntax is less beautiful than Python's or Scala's, but (because!) it's a Lisp. And it's a better Java than Java: you have all the Java libs available, and even Clojure-specific repositories like Clojars. A particularly interesting project is Incanter, which provides a Clojure-based, R-like platform for statistical computing and graphics. Have a look at this presentation to get an overview of what you can do with it at the Clojure repl. If we bear in mind that in Lisp code is data and that Lisp macros are mega-powerful, one could imagine writing things like:

    (make-otb-pipeline reader gradientFilter thresholdFilter writer)

Or even emulating the C++ template syntax to avoid using the mangled names of the OTB classes in the Java bindings (using macros and keywords):

    (def filter (RescaleIntensityImageFilter :itk (Image. :otb :Float 2)
                                                  (Image. :otb :UnsignedChar 2)))

instead of

    (def filter (itkRescaleIntensityImageFilterIF2IUC2.))
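Coming back to the pipeline idea: since a macro receives its arguments unevaluated, a minimal sketch of make-otb-pipeline only needs to assume that each wrapped filter exposes the ITK-style SetInput, GetOutput and Update methods:

    ;; connect each filter's output to the next filter's input,
    ;; then trigger the pipeline by updating the last one
    (defmacro make-otb-pipeline [& filters]
      `(do
         ~@(map (fn [[src dst]]
                  `(.SetInput ~dst (.GetOutput ~src)))
                (partition 2 1 filters))
         (.Update ~(last filters))))

With that, the one-liner above would expand to the chain of SetInput/GetOutput calls plus a final Update.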

I have already found a cool syntax for using the Setters and Getters of a filter using the doto macro.
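A minimal reconstruction of the idea (the mangled class names follow the same pattern as above and are only illustrative):

    ;; doto returns its first argument after threading it
    ;; through the successive setter calls
    (def reader (doto (itkImageFileReaderIF2.)
                  (.SetFileName "input.tif")))
    (def rescaler (doto (itkRescaleIntensityImageFilterIF2IUC2.)
                    (.SetOutputMinimum 0)
                    (.SetOutputMaximum 255)
                    (.SetInput (.GetOutput reader))))
    (def writer (doto (itkImageFileWriterIUC2.)
                  (.SetFileName "output.png")
                  (.SetInput (.GetOutput rescaler))))
    (.Update writer)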

Conclusion

I am going to push the investigation of Clojure further, because it seems to fit my needs:

  1. Has an interactive interpreter
  2. Access to OTB (through the Java bindings)
  3. Concurrency primitives (agents, STM, etc.)
  4. It's a Lisp, so I can easily port existing rule-based expert systems.

Given that this would be the sum of many cool features, I think I should call it Σ Toolbox, but I don't like the name. The mix of the λ calculus and the Ω Toolbox should rather be called λΩλ, which is LoL in Greek.

 

Tags: clojure lisp otb programming
20 Mar 2010

Geospatial analysis and Object-Based image analysis

I was searching the web for the use of PostGIS databases for object based image analysis (OBIA) and Google sent me to a page on the OTB Wiki that I wrote 6 months ago (first hit for "postgis object based image analysis").

It seems that this is a subject on which not much work is ongoing. Julien Michel will be presenting some results on how to put together OBIA and geospatial analysis (+ active learning and some other cool things) at a workshop in Paris next May.

Tags: otb remote-sensing research
jordiinglada.net by Jordi Inglada is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. RSS. Feedback: info /at/ jordiinglada.net