
Archive for December, 2010

Has the seed that gets software development out of the stone-age been sown?

December 26, 2010

A big puzzle for archaeologists is why stone age culture lasted as long as it did (from approximately 2.5 million years ago until the start of the copper age around 6.3 thousand years ago). Given the range of innovation rates seen in various cultures throughout human history, a much shorter stone age is to be expected. A recent paper proposes that low population density is what maintained the stone age status quo; there was not enough contact between different hunter-gatherer groups for widespread take-up of innovations. Life was tough, and the viable lifetime of individual groups of people may not have been long enough for them to be likely to pass on innovations (either their own or ones encountered through contact with other groups).

Software development is often done by small groups that don’t communicate with other groups and regularly die out (well, there is a high turnover, with many of the more experienced people moving on to non-software roles). There are sufficient parallels between hunter-gatherers and software developers to suggest both were/are kept in a stone age for the same reason: the lack of a method that enables people to obtain information about innovations and how worthwhile these might be within a given environment.

A huge barrier to the development of better software development practices is the almost complete lack of significant quantities of reliable empirical data that can be used to judge whether a claimed innovation is really worthwhile. Companies rarely make their detailed fault databases and product development history public; who wants to risk negative publicity and lawsuits just so that academics have some data to work with?

At the start of this decade public source code repositories like SourceForge and public software fault repositories like Bugzilla started to spring up. These repositories contain a huge amount of information about the characteristics of the software development process. Questions that can be asked of this data include: what are the common patterns of development, and which ones result in fewer faults? How does software evolve, and how well do the techniques used to manage it work?

Empirical software engineering researchers are now setting up repositories, like Promise, containing the raw data from their analysis of Open Source (and some closed source) projects. By making this raw data available they are reducing the effort needed by other researchers to investigate their own alternative ideas (I have just started a book on empirical software engineering, using the R statistical language, with examples based on this raw data).

One of the side effects of Open Source development could be the creation of software development practices that have been shown to be better (including showing that some existing practices make things worse). The source of these practices would not be what the software developers themselves do or how they do it, but the footsteps they have left behind in the sand.

Christmas book for 2010

December 19, 2010

I’m rather late with my list of Christmas books for 2010. While I do have a huge stack of books waiting to be read, I don’t seem to have read many books this year (I have been reading lots of rather technical, i.e., time/thought-consuming, blogs) and there is only one book I would strongly recommend.

Anybody with even the slightest interest in code readability needs to read Reading in the Brain by Stanislas Dehaene (the guy who wrote The Number Sense, another highly recommended book). The style of the book is halfway between being populist and being an undergraduate text.

Most of the discussion centers on the hardware/software processing that takes place in what Dehaene refers to as the letterbox area of the brain (in the left occipito-temporal cortex). The hardware is the neurons of the human brain and the software is the connections between them (part genetically hardwired and part selectively learned as the brain’s owner goes through childhood; Dehaene is not a software developer and does not use this hardware/software metaphor).

As any engineer knows, knowledge of the functional characteristics of a system is essential when designing other systems to work with it. Reading this book will help people separate the plausible from the functionally implausible in discussions about code readability.

Time and again the reading process has co-opted brain functionality that appears to have been designed to perform other activities. During the evolution of writing there also seems to have been some adaptation to existing processes in the brain; there is a lesson here for people designing code visualization tools.

In my C book I tried to provide an overview of the reading process, but skipped discussing what went on in the brain, partly through ignorance on my part and partly through a belief that we were a long way from having an accurate model. Dehaene’s book clearly shows that a good model of what goes on in the brain during reading is now available.

Build an ISO Standard and the world will beat a path to your door

December 16, 2010

An email I received today, announcing the release of version 1.0 of the GNU Modula-2 compiler, reminded me of some plans I had to write something about a proposal to add some new definitions to the next version of the ISO C Standard.

In the 80s I was heavily involved in the Pascal community, and some of the leading members of this community thought that Modula-2, the successor language designed by Niklaus Wirth, ought to be the next big language. Unfortunately for them this view was not widely shared, and after much soul-searching it was decided that the lack of an ISO standard for the language was responsible for holding back widespread adoption. A Modula-2 ISO Standard was produced and, as they say, the rest is history.

The C proposal involves dividing the existing definition of undefined behavior into two subcategories: bounded undefined behavior and critical undefined behavior. The intent is to provide guidance to people involved with software assurance. My long-standing involvement with C means that I find the technical discussions interesting; I have to snap myself out of getting too involved in them with the observation that, should the proposals be included in the revised C Standard, they will probably have the same impact as the publication of the ISO Standard had on Modula-2 usage.
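
To give a flavor of the kind of distinction being proposed, here is a minimal sketch in C; the assignment of each case to a category is my guess at how such a scheme might classify it, not wording taken from the proposal:

#include <limits.h>   /* INT_MAX, referred to in the comment below */

int bounded_example(int i)
{
   /* Signed integer overflow (e.g., when i == INT_MAX) is undefined
      behavior; it is the kind of case a bounded/critical split might
      class as bounded: the result may be garbage, or the program may
      trap, but no out-of-bounds store takes place. */
   return i + 1;
}

void critical_example(void)
{
   int a[4] = {0};

   /* A store outside the bounds of an array is an obvious candidate
      for the critical category: it can overwrite unrelated objects,
      so nothing useful can be guaranteed about subsequent behavior. */
   a[4] = 42;
}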

The only way for changes to an existing language standard to have any impact on developer usage is for them to require changes to existing compiler behavior or to specify additional runtime library functionality (e.g., Extensions to the C Library Part I: Bounds-checking interfaces).
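
Those bounds-checking interfaces are a case in point: they add library functions that are told the size of the destination and report a failure rather than writing past the end. A minimal sketch (the interfaces are optional, so an implementation need not provide them, which is why the code checks for __STDC_LIB_EXT1__):

/* Request the optional bounds-checking interfaces (TR 24731-1). */
#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
#include <stdio.h>

int main(void)
{
#ifdef __STDC_LIB_EXT1__
   char dest[8];

   /* strcpy_s is passed the size of dest; if the source string will
      not fit, the runtime-constraint handler is called and a failure
      is reported, instead of the destination being overflowed. */
   if (strcpy_s(dest, sizeof(dest), "hello") == 0)
      printf("copied: %s\n", dest);
   else
      printf("copy failed\n");
#else
   printf("bounds-checking interfaces not available\n");
#endif
   return 0;
}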

C is top dog in Google AI Challenge

December 4, 2010

The Google AI Challenge has just finished and the scores for the 4,619 entries have been made available, along with the language used. Does the language choice of the entries tell us anything?

The following table lists the 11 most commonly used languages, along with the mean and standard deviation of the final scores of the entries written in each language, and the number of entries written in it:

Language   mean      sd    entries
C          2331     867       18
Haskell    2275     558       51
OCaml      2262     567       12
Ruby       2059     768       55
Java       2060     606     1634
C#         2039     612      485
C++        2027     624     1232
Lisp       1987     686       32
Python     1959     637      948
Perl       1957     693       42
PHP        1944     769       80

C has the highest mean and Lisp one of the lowest; is this a killer blow for Lisp in AI? (My empirical morality prevented me from omitting the inconveniently large standard deviations.) C does have the largest standard deviation; is this because of crashes caused by off-by-one/null-pointer errors, or lower ability on the authors’ part? There is not nearly enough data to tell.

I am guessing that this challenge would be taken up by many people in their 20s, which would explain the large number of entries written in Java and C++ (the most common languages taught in universities over the last few years).

I don’t have an explanation for the relatively large number of entries written in Python.

How to explain the 0.4% of entries written in C and its top placing? Easy: us older folk may be a bit thin on the ground, but we know what we are doing (I did not submit an entry).
