Archive

Posts Tagged ‘K&R’

The software heritage of K&R C

November 28, 2021

The mission statement of the Software Heritage is “… to collect, preserve, and share all software that is publicly available in source code form.”

What are the uses of the preserved source code that is collected? Lots of people visit preserved buildings, but very few people are interested in looking at source code.

One use-case is tracking the evolution of changes in developer usage of various programming language constructs. It is possible to use GitHub to track the adoption of language features introduced after 2008, when the company was founded, e.g., new language constructs in Java. Over longer time-scales, the Software Heritage, which has source code going back to the 1960s, is the only option.

One question that keeps cropping up when discussing the C Standard is whether K&R C continues to be used. Technically, K&R C is the language defined by the book that introduced C to the world. Over time, differences between K&R C and the C Standard have fallen away, as compilers cease supporting particular K&R ways of doing things (as an option or otherwise).

These days, saying that code uses K&R C is taken to mean that it contains functions defined using the K&R style (see sentence 1818), e.g., writing:

int f(a, b)
int a;
float b;
{
   /* declarations and statements */
}

rather than:

int f(int a, float b)
{
   /* declarations and statements */
}

As well as the syntactic differences, there are semantic differences between the two styles of function definition, but these are not relevant here.

How much longer should the C Standard continue to support the K&R style of function definition?

The WG14 committee prides itself on not breaking existing code, or at least not lots of it. How much code is out there, being actively maintained, and containing K&R function definitions?

Members of the committee agree that they rarely encounter this K&R usage, and it would be useful to have some idea of the decline in use over time (with the intent of removing support in some future revision of the standard).

One way to estimate the evolution in the use/non-use of K&R style function definitions is to analyse the C source created in each year since the late 1970s.

The question is then: How representative is the Software Heritage C source, compared to all the C source currently being actively maintained?

The Software Heritage preserves publicly available source; this public source, plus the non-public, proprietary source, forms the totality of the C currently being maintained. Do the public and non-public C sources have similar characteristics, or are there application domains which are poorly represented in the publicly available source?

Embedded systems is a very large and broad application domain that is poorly represented in the publicly available C source. Embedded source tends to be heavily tied to the hardware on which it runs, and vendors tend to be paranoid about releasing internal details about their products.

The various embedded systems domains (e.g., 8-, 16-, 32-, and 64-bit processors) tend to be worlds unto themselves, and I would not be surprised to find that there are enclaves of K&R usage (perhaps because there is no pressure to change, or because the available tools are ancient).

At the moment, the Software Heritage does not offer code search functionality. But then, the next opportunity for major changes to the C Standard is probably five years away (the deadline for new proposals for the current revision has passed); plenty of time to get to a position where usage data can be obtained 🙂

A trip down memory lane with Microsoft Word 1.1

March 31, 2014

Microsoft has donated the source code of Word for Windows 1.1a to the Computer History Museum, and I have been rummaging around in this C code from the late 1980s.

What immediately struck me is how un-Microsoft-Windows-like the code looks; in some ways it looks more like Unix code. There are:

  • very few near/far/huge pointer declarations. The Intel x86 architecture is based on segmented addressing of memory, something most developers today are blissfully unaware of because 32-bit segments are big enough to ignore; back in the day we had 16-bit segments, with near pointers that could only point within a segment, far pointers that could point into other segments (with alignment issues attached), and huge pointers, which are essentially 32-bit pointers. Many developers obsessed over saving time/space and did their utmost to use only near pointers (who is ever going to need a data structure larger than 64K?), and Windows code was often littered with different kinds of pointer usage,
  • very few #if/#endif. Why do developers use #if/#endif? To port their code to different versions of MS-DOS, to use different vendor compilers, and to use different third-party libraries. Microsoft was well known for not being backwards compatible with itself (they have improved somewhat on that score), and obviously had no interest in supporting other vendors’ compilers or third-party libraries. Unix code, of that day and today, is of course packed with #if/#endif (a short sketch of both idioms follows this list).
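
As a rough illustration (my own sketch, not code from the Word source; the near/far/huge qualifiers were Microsoft/Borland compiler extensions, and the memory-model macro name is an assumption about the conventions of the era), this is roughly what the 16-bit pointer qualifiers and the kind of portability #if/#endif that Word 1.1 largely avoids looked like:

char near *scratch;     /* 16-bit offset, must stay within the current data segment */
char far  *video_ram;   /* 32-bit segment:offset, can point into any segment */
char huge *big_buffer;  /* like far, but arithmetic is normalised across segment boundaries */

#if defined(M_I86SM)    /* small memory model: data pointers default to near */
#define TEXTPTR char near *
#else
#define TEXTPTR char far *
#endif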

For any developer who has only used C since 2000, the most obvious code ‘quirk’ is probably the use of K&R style function definitions. This used to be the only way of defining functions, until the first C Standard, in 1989, introduced function prototypes and the mass migration of the 1990s occurred. The K&R style function definition looks like no big deal:

f(a, b)
int a;
struct T b;
{
   /* blah blah */
}

instead of:

f(int a, struct T b)
{
   /* blah blah */
}

until you find out that in the first case, when f is called, there is no checking of the arguments against the parameters appearing in the definition. This lack of checking was/is a constant source of annoying faults that could/can take hours to track down; once a developer has experienced the benefits of automatic argument/parameter checking they soon convert to the prototype form.
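
A minimal sketch of the kind of fault this allows (my own example, not from the Word source, and assuming a compiler that still accepts K&R definitions, i.e., pre-C23): the mismatched call below compiles without a diagnostic against the K&R definition, because no parameter type information is available at the call site, whereas a prototype would force a conversion or an error:

double scale(val, factor)   /* K&R definition: parameter types invisible to callers */
double val;
int    factor;
{
   return val * factor;
}

int main(void)
{
   /* Arguments are (int, double) but the definition expects (double, int);
      with no prototype in scope only the default argument promotions are
      applied, so the wrong bits land in each parameter -- garbage at run
      time, silence at compile time. */
   double d = scale(10, 2.5);
   return (int)d;
}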

The makefile included with the source contains compiler options that won’t be familiar to Windows developers. This is because a special purpose C compiler was used, one that generated P-code (the P here is for Packed, not Portable). In the late 1980s, when this code was written, 256K was considered a lot of memory for your average person’s computer, and at 180K+ lines of code Word 1.1 was enormous. Word processors are, on the whole, not cpu-intensive tasks, and trading off a factor of 10 in speed (I’m guessing, probably more) for a memory footprint saving of 2.5-4 (not so much of a guess) was obviously considered worthwhile (but then Microsoft apps have never been known for being nimble).

The extensive use of bit-fields is the other clue that memory was tight. Do these ever get used outside of comms software and embedded systems these days?
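
For readers who have never met them, a hedged sketch of the idea (field names and widths invented for the example, not taken from the Word source): bit-fields let several small flags and counts share a single storage unit instead of each occupying its own int, at the cost of an implementation-defined layout:

#include <stdio.h>

struct char_props {
   unsigned int bold       : 1;   /* single-bit flags */
   unsigned int italic     : 1;
   unsigned int underline  : 1;
   unsigned int font_id    : 7;   /* up to 128 fonts */
   unsigned int point_size : 6;   /* up to 63 points */
};

int main(void)
{
   printf("packed: %zu bytes; one int per member would be %zu bytes\n",
          sizeof(struct char_props), 5 * sizeof(int));
   return 0;
}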

I was surprised at how clean and presentable this source code is. I should not have judged Microsoft developers by the horrors perpetrated by many developers targeting their platform.