Archive for July, 2015

A first stab at low resolution image enhancement

July 27, 2015

Team Clarify-the-Heat (Gary, Pavel and yours truly) were at a hackathon sponsored by Flir at the weekend.

The FlirONE attaches to the USB port of an Android phone or iPhone and contains an infrared sensor with 160 by 120 pixels, plus an optical sensor that provides a higher resolution image that can be overlaid with the thermal one. The sensor frequency range is 8 to 14 µm, i.e., ‘real’ infrared, not the side-band sliver obtained by removing the filter from visual light sensors.

At 160 by 120 the IR sensor resolution is relatively low, compared to today’s optical sensors, and team Clarify-the-Heat decided to create an iPhone App that merged multiple images to create a higher resolution image (subpixel interpolation) in real-time. An iPhone was used because the Flir software for this platform has been around longer (Android support is on its first release) and image processing requires lots of cpu power (i.e., compiled Objective-C is likely to be a lot faster than interpreted Java). The FlirONE frame rate is 8.6 images per second.

Modern smart phones contain 3-axis gyroscopes, and data on the change in rotational orientation was used to find the pixels from two images that corresponded to the same area of the viewed scene. Phone gyroscope sensors drift over periods of a few seconds; some experimentation found that over such a period the drift on Gary’s iPhone was safely under a tenth of a degree, while one infrared pixel had a field of view of approximately 1/3 of a degree horizontally and 1/4 of a degree vertically. Some phone gyroscope sensors are known to be sensitive enough to pick up the vibrations caused by nearby conversations.
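Roughly, the alignment arithmetic works out as in the following sketch (an illustration I have put together, not our App code; the function name and the assumption that a small rotation maps linearly to a pixel offset are mine):

/* Sketch: convert the change in phone orientation between two frames
 * into a pixel offset on the IR image, using the per-pixel field of
 * view estimated above (about 1/3 degree horizontally and 1/4 degree
 * vertically). */
#include <stdio.h>

#define DEG_PER_PIXEL_X (1.0 / 3.0)
#define DEG_PER_PIXEL_Y (1.0 / 4.0)

void rotation_to_pixel_offset(double delta_yaw_deg, double delta_pitch_deg,
                              double *dx_pixels, double *dy_pixels)
{
   *dx_pixels = delta_yaw_deg / DEG_PER_PIXEL_X;
   *dy_pixels = delta_pitch_deg / DEG_PER_PIXEL_Y;
}

int main(void)
{
   double dx, dy;

   /* e.g., the gyroscope reports the phone rotated 0.2 degrees right
      and 0.1 degrees up between two frames */
   rotation_to_pixel_offset(0.2, 0.1, &dx, &dy);
   printf("offset: %.2f x %.2f pixels\n", dx, dy);
   return 0;
}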

The plan was to get an App running in real-time on the iPhone. In practice, debugging a couple of problems ate up the hours, and the system demonstrated uploaded data from the iPhone to a server, from which a laptop read and processed two images to create a higher resolution result (the intensities of overlapping pixels are averaged to double, in our case, the resolution). This approach is very primitive compared to the algorithms that do sub-pixel enhancement by detecting image features (late in the day we found out that OpenCV supports some of this kind of thing), but it requires less image processing and should be practical in real-time on a phone.
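The following sketch shows the flavor of the merging (again an illustration, not our code; it assumes a fixed half-pixel offset between the two frames and nearest-neighbour sampling, whereas the real offsets come from the gyroscope and are fractional):

/* Sketch: merge two aligned thermal frames into an image with twice the
 * resolution, by averaging the input pixels that overlap each output
 * pixel.  Frame b is assumed to be offset by half a pixel in x and y
 * relative to frame a. */

#define W 160   /* FlirONE IR sensor resolution */
#define H 120

void merge_frames(unsigned char a[H][W],
                  unsigned char b[H][W],
                  unsigned char out[2*H][2*W])
{
   for (int y = 0; y < 2*H; y++)
      for (int x = 0; x < 2*W; x++)
         {
         int ax = x / 2, ay = y / 2;              /* nearest pixel of a            */
         int bx = (x > 0) ? (x - 1) / 2 : 0,      /* nearest pixel of b, whose     */
             by = (y > 0) ? (y - 1) / 2 : 0;      /* grid is shifted half a pixel  */

         out[y][x] = (unsigned char)((a[ay][ax] + b[by][bx] + 1) / 2);
         }
}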

We discovered that the ‘raw’ data returned by the FlirONE API has been upsampled to produce a 320 by 240 image. This means that the gyroscope data, used to align image pixels, has to be twice as accurate. We did not try to use magnetic field data (while not accurate the values do not drift) to filter the gyroscope readings.

Two IR images from a FlirONE, of yours truly, and a ‘higher’ resolution image below:

Infrared images of me

The enhanced image shows more detail in places, but would obviously benefit from input from more images. The technique relies on partial overlap between pixels, which is a hit and miss affair (we were managing to extract around 5 images per second but did not get as far as merging more than two images at a time).

Team Clarify-the-Heat got an honorable mention, but then so did half the 13 teams present, and I won a FlirONE in one of the hourly draws 🙂 It looks like they will be priced around $250 when they go on sale in mid-August. I wonder how long it will take before they are integrated into phones (yet more megapixels in the visual spectrum is a bit pointless)?

I thought the best project of the event was by James Rampersad, who processed the IR video stream using Eulerian Video Magnification to show blood pulsing through his face.

Deluxe Paint: 30 year old C showing its age in places

July 23, 2015

Electronic Arts have just released the source code of a version of Deluxe Paint from 1986.

The C source does not look anything like my memories of code from that era, probably because it was written on and for the Amiga. The Amiga used a Motorola 68000 cpu, which meant a flat address space, i.e., no peppering the code with near/far/huge data/function declaration modifiers (i.e., which cpu register will Sir be using to call his functions and access his data).
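For those who never had the pleasure, the following is roughly what such declarations looked like when the 8086 memory model could not be ignored (far/near/huge are non-standard keywords accepted by the MS-DOS compilers of the period, e.g., Microsoft C and Turbo C; the snippet will not compile with a modern compiler, and the names are made up):

/* Typical 8086-era declarations, where the segmented memory model
 * leaked into the source.  Not standard C. */
char far  *video = (char far *)0xB8000000L; /* 32-bit segment:offset pointer        */
char near *scratch;                         /* 16-bit pointer, default data segment */

void far Redraw(char huge *bitmap);         /* far call; huge pointers survive 64K boundary crossings */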

Function definitions use what is known as K&R style, because the C++ way of doing things (i.e., function prototypes) was just being accepted into C by the ANSI committee (the following is from BLEND.C):

void DoBlend(ob, canv, clip)
BMOB *ob;
BoxBM *canv;
Box *clip;
{...}

The difference between the two ways of doing things (apart from the syntax of putting the type information outside/inside the brackets) is that the compiler does not do any type checking between definition/call for the K&R style. So it is possible to call DoBlend with any number of arguments having any defined type and the compiler will generate code for the call assuming that the receiving end is expecting what is being passed. Having an exception raised in the first few lines of a function used to be a sure sign of an argument mismatch, while having things mysteriously fail well into the body of the code regularly ruined a developer’s day. Companies sold static analysis tools that did little more than check calls against function definitions.
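For comparison, here is a prototype-style declaration of the same function (the typedefs are placeholders for the real Deluxe Paint types); with the prototype in scope a mismatched call is rejected at compile time, while a K&R compiler happily accepts it:

/* Placeholder typedefs standing in for the real Deluxe Paint types. */
typedef struct BMOB  BMOB;
typedef struct BoxBM BoxBM;
typedef struct Box   Box;

/* ANSI/ISO prototype: the argument types are part of the declaration,
 * so every call can be checked against it. */
void DoBlend(BMOB *ob, BoxBM *canv, Box *clip);

void caller(void)
{
   /* Uncommenting the call below gets a compile-time diagnostic when
    * the prototype is in scope; given only the K&R definition the
    * compiler silently generates code that passes two arguments to a
    * three-argument function. */
   /* DoBlend((BMOB *)0, (BoxBM *)0); */
}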

The void on the return type is unusual for 1986, as is the use of enum types. I would have been willing to believe that this code was from 1996. The clue that this code is from 1986 and not 1996 is that int is 16 bits, a characteristic that is implicit in the code (the switch to 32 bits started in the early 1990s when large amounts of memory started to become cheaply available in end-user systems).

So the source code of what was, in its day, a major App is smaller than the jpegs on many web pages (jpegs use a compressed format, and the zip file of the source is 164K). The reason that modern Apps contain orders of magnitude more code is that our expectation of the features they should contain has grown in magnitude (plus marketing likes bells and whistles and developers enjoy writing lots of new code).

When the source of one of today’s major Apps is released in 30 years time, will people be able to recognise patterns in the code that localise it to being from around 2015?

Ah yes, that’s how people wrote code for computers that worked by moving electrons around, not really a good fit for photonic computers…


Exhaustive superoptimization

July 10, 2015

Optimizing compilers include a list of common code patterns and the equivalent, faster executing, patterns they can be mapped to (traditionally the role of peephole optimization, which in many early compilers was the only optimization performed).

The problem with this human written table approach is that it is fixed in advance and depends on the person adding the entries knowing lots of very obscure stuff.

An alternative approach is to use a superoptimizer, which takes a code sequence and finds the functionally equivalent sequence that executes the fastest. However, superoptimizing interesting sequences often takes a very long time, so use tends to be restricted to helping figure out, during compiler or library development, what to do for sequences that are known to occur a lot and to be time critical.

A new paper, Optgen: A Generator for Local Optimizations, does not restrict itself to sequences deemed to be common or interesting; it finds the optimal sequence for all sequences, by iterating through all permutations of operands and operators. Obviously this will take a while, and the results in the paper cover expressions containing two unknown operands plus known constants, and expressions containing three and four operands that are not constants; the operators searched were: unary ~ and -, and binary +, -, &, | and ^.

The following 63 optimization patterns involving two variables (x and y) plus constants (c0, c1, c2) were found, with the right-hand columns listing whether LLVM (L), GCC (G) or the Intel compiler (I) perform the optimization (+) or not (-).

  optional-precondition => before  ->  after		L G I
 -~x -> x + 1						+ + -
 -(x & 0x80000000) -> x & 0x80000000			- + -
 ~-x -> x - 1						+ + -
 x +~x -> 0xFFFFFFFF					+ + -
 x + (x & 0x80000000) -> x & 0x7FFFFFFF			- - -
 (x | 0x80000000) + 0x80000000 -> x & 0x7FFFFFFF	+ - -
 (x & 0x7FFFFFFF) + (x & 0x7FFFFFFF) -> x + x		+ + -
 (x & 0x80000000) + (x & 0x80000000) -> 0		+ + -
 (x | 0x7FFFFFFF) + (x | 0x7FFFFFFF) -> 0xFFFFFFFE	+ + -
 (x | 0x80000000) + (x | 0x80000000) -> x + x		+ + -
 x & (x + 0x80000000) -> x & 0x7FFFFFFF			+ - -
 x & (x | y) -> x					+ + -
 x & (0x7FFFFFFF - x) -> x & 0x80000000			- - -
 -x & 1 -> x & 1					- + -
 (x + x)& 1 -> 0					+ + -
 is_power_of_2(c1) && c0 & (2 * c1 - 1) == c1 - 1
=> (c0 - x) & c1 -> x & c1				- - -
 x | (x + 0x80000000) -> x | 0x80000000			+ - -
 x | (x & y) -> x					+ + -
 x | (0x7FFFFFFF - x) -> x | 0x7FFFFFFF			- - -
 x | (x^y) -> x | y					+ - -
 ((c0 | -c0) &~c1) == 0 => (x + c0) | c1 -> x | c1	+ - +
 is_power_of_2(~c1) && c0 & (2 *~c1 - 1) ==~c1 - 1
=> (c0 - x) | c1 -> x | c1				- - -
 -x | 0xFFFFFFFE -> x | 0xFFFFFFFE			- - -
 (x + x) | 0xFFFFFFFE -> 0xFFFFFFFE			+ + -
 0 - (x & 0x80000000) -> x & 0x80000000			- + -
 0x7FFFFFFF - (x & 0x80000000) -> x | 0x7FFFFFFF	- - -
 0x7FFFFFFF - (x | 0x7FFFFFFF) -> x & 0x80000000	- - -
 0xFFFFFFFE - (x | 0x7FFFFFFF) -> x | 0x7FFFFFFF	- - -
 (x & 0x7FFFFFFF) - x -> x & 0x80000000			- - -
 x^(x + 0x80000000) -> 0x80000000			+ - -
 x^(0x7FFFFFFF - x) -> 0x7FFFFFFF			- - -
 (x + 0x7FFFFFFF)^0x7FFFFFFF -> -x			- - -
 (x + 0x80000000)^0x7FFFFFFF -> ~x			+ + -
 -x^0x80000000 -> 0x80000000 - x			- - -
 (0x7FFFFFFF - x)^0x80000000 -> ~x			- + -
 (0x80000000 - x)^0x80000000 -> -x			- + -
 (x + 0xFFFFFFFF)^0xFFFFFFFF -> -x			+ + -
 (x + 0x80000000)^0x80000000 -> x			+ + -
 (0x7FFFFFFF - x)^0x7FFFFFFF -> x			- - -
 x - (x & c) -> x &~c					+ + -
 x^(x & c) -> x &~c					+ + -
 ~x + c -> (c - 1) - x					+ + -
 ~(x + c) -> ~c - x					+ - -
 -(x + c) -> -c - x					+ + -
 c -~x -> x + (c + 1)					+ + -
 ~x^c -> x^~c						+ + -
 ~x - c -> ~c - x					+ + -
 -x^0x7FFFFFFF -> x + 0x7FFFFFFF			- - -
 -x^0xFFFFFFFF -> x - 1					+ + -
 x & (x^c) -> x &~c					+ + -
 -x - c -> -c - x					+ + -
 (x | c) - c -> x &~c					- - -
 (x | c)^c -> x &~c					+ + -
 ~(c - x) -> x +~c					+ - -
 ~(x^c) -> x^~c						+ + -
 ~c0 == c1 => (x & c0)^c1 -> x | c1			+ + -
 -c0 == c1 => (x | c0) + c1 -> x &~c1			- - -
 (x^c) + 0x80000000 -> x^(c + 0x80000000)		+ + -
 ((c0 | -c0) & c1) == 0 => (x^c0) & c1 -> x & c1	+ + -
 (c0 &~c1) == 0 => (x^c0) | c1 -> x | c1		+ - -
 (x^c) - 0x80000000 -> x^(c + 0x80000000)		+ + -
 0x7FFFFFFF - (x^c) -> x^(0x7FFFFFFF - c)		- - -
 0xFFFFFFFF - (x^c) -> x^(0xFFFFFFFF - c)		+ + -

The optional preconditions are hand-written rules that constants must obey for a particular expression mapping to apply, e.g., (c0 &~c1) == 0.

Hand-written rules, using a bit-vector analysis of constants (for each bit of a constant the value is one, zero or don’t-know), are used to prune the search space (otherwise the search of the two operands+constants case would still be running).
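The following is my sketch of the kind of bit-vector value involved (a reconstruction from the paper’s description, not its code): each bit of a constant is tracked as known-one, known-zero or don’t-know, and simple transfer functions propagate this information through the operators:

#include <stdint.h>

/* Each bit of a constant is one of: known-one, known-zero, don't-know.
 * Two masks represent this; a bit set in neither mask is unknown. */
typedef struct {
   uint32_t known_one;   /* bits known to be 1 */
   uint32_t known_zero;  /* bits known to be 0 */
} BitVec;

/* Transfer function for &: a result bit is 1 only if known 1 in both
 * operands, and 0 if known 0 in either. */
BitVec bitvec_and(BitVec a, BitVec b)
{
   BitVec r = { a.known_one & b.known_one,
                a.known_zero | b.known_zero };
   return r;
}

/* Transfer function for |: 1 if known 1 in either operand, 0 only if
 * known 0 in both. */
BitVec bitvec_or(BitVec a, BitVec b)
{
   BitVec r = { a.known_one | b.known_one,
                a.known_zero & b.known_zero };
   return r;
}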

LLVM covers 40 of the 63 cases, GCC 36 and Intel just 1 (wow). I suspect that many of the unsupported patterns are rare, but now that they are known it is worth implementing them, just to be able to say that this particular set of possibilities is completely covered.
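The table entries are easy to sanity check by brute force at a reduced width; the following sketch (mine, not from the paper) exhaustively verifies a few of them for 8-bit operands, with the 0x80000000 constants scaled down to the 8-bit sign bit 0x80:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Exhaustively check a few of the patterns above at 8-bit width. */
int main(void)
{
   for (unsigned i = 0; i < 256; i++)
      {
      uint8_t x = (uint8_t)i;

      assert((uint8_t)(-~x) == (uint8_t)(x + 1));          /* -~x -> x + 1 */
      assert((uint8_t)((x + 0x80u) ^ 0x80u) == x);         /* (x + 0x80000000)^0x80000000 -> x */

      for (unsigned j = 0; j < 256; j++)
         {
         uint8_t y = (uint8_t)j;

         assert((x & (x | y)) == x);                       /* x & (x | y) -> x */
         assert((x | (x & y)) == x);                       /* x | (x & y) -> x */
         }
      }
   printf("all checked patterns hold for 8-bit operands\n");
   return 0;
}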


R is now important enough to have a paid-for PR make-over

July 5, 2015

With the creation of the R consortium, R has moved up a rung on the ladder of commercial importance.

R has captured the early adopters and has picked up a fair few of the early majority (I’m following the technology adoption life-cycle model made popular by the book Crossing the Chasm), i.e., it is starting to become mainstream. Being mainstream means that jobsworths are starting to encounter the language in situations of importance to them. How are the jobsworths likely to perceive R? From my own experience I would say it will be perceived as being an academic thing, which in the commercial world is not good, not good at all.

To really become mainstream R needs to shake off its academic image, and as I see it, the R consortium has been set up to make that happen. I imagine it will try to become the go-to point for journalists wanting information or a quote about things-related-to R. Yes, they will hold conferences with grandiose sounding titles and lots of business people will spend surprising amounts of money to attend, but the real purpose is to solidify the image of R as a commercial winner (the purpose of a very high conference fee is to keep the academics out and convince those attending that it must be important because it is so expensive).

This kind of consortium gets set up when some technology having an academic image is used by large companies that need to sell this usage to potential customers (if the technology is only used internally its wider image is unimportant).

Unix used to have an academic image, one of the things that X/Open was set up to ‘solve’. The academic image is now a thing of the past.

For the first half of the 1980s it looked like Pascal would become a mainstream language; it was widely taught in universities and perceived as being academic. Pascal did not get its own consortium, and C came along and took its market (I was selling Pascal tools at the time and had lots of conversations with companies that were switching from Pascal to C, who essentially put the change down to perception; it did not help that Pascal implementations did their best to hide/ignore the 8086 memory model, something of interest when memory is scarce).

How will we know when R reaches the top rung (if it does)? Well, there are two kinds of languages: those that nobody uses and those that everybody complains about.

R will be truly mainstream once people feel socially comfortable complaining about it to any developer they are meeting for the first time.