
Memory capacity and commercial compiler development

When I started out in the compiler business in the 80s, many commercial compilers were originally written by one person. A very good person who dedicated himself (I have never heard of a woman doing this) to the job (i.e., minimal social life) could produce a commercially viable product for a non-huge language (e.g., Fortran, Pascal, C, etc., and not C++, Ada, etc.) in 12-18 months. Companies that decide to develop a compiler in-house tend to use a team of people and take longer, because that is how they work; they don’t want to depend on one person, and in any case such a person might not be available to them.

Commercially viable compiler development stayed within the realm of an individual effort until sometime in the early 90s. What put the single individual out of business was the growth in computer memory capacity into the hundreds of megabytes and beyond. Compilers have to be able to run within the limits of developer machines; you won’t sell many compilers that require 100M of memory if most of your customers don’t have machines with 100M+ of memory.

Code optimizers eat memory, and this prevented many optimizations that had been known about for years from appearing in commercial products. Once the machines used for software development commonly contained 100M+ of memory, compiler vendors could start to incorporate these optimizations into their products.
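
As a rough, hypothetical illustration of why (the figures below are invented for this example, not measurements from any particular compiler): a classic bit-vector dataflow analysis, the kind of whole-function optimization that was well known long before it became commercially practical, keeps several bitsets per basic block, each with one bit per variable, so its memory use grows with the product of the two.

    #include <stdio.h>

    /* Back-of-the-envelope sketch: invented figures, not measurements.
     * A bit-vector dataflow analysis (e.g., live variables) keeps a
     * handful of bitsets per basic block, each one bit per variable,
     * so memory grows with blocks * sets * variables. */
    int main(void)
    {
        long blocks = 20000;     /* basic blocks in a large translation unit */
        long vars = 10000;       /* variables plus compiler temporaries */
        long sets_per_block = 4; /* e.g., GEN, KILL, IN and OUT sets */

        long bytes = blocks * sets_per_block * (vars / 8);
        printf("approx. %ld MB just for the dataflow bitsets\n",
               bytes / (1024 * 1024));
        return 0;
    }

With these made-up figures the bitsets alone come to roughly 95M, before the intermediate representation, symbol table and register allocator get a look in; on a machine with a few megabytes of memory such analyses simply could not be applied to large functions.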

Improvements in processor speed also helped. But developers are usually willing to let the compiler take a long time to optimize the code going into a final build, provided development compiles run at a reasonable speed.

The increase in memory capacity created the opportunity for compilers to improve, and when some did improve they made it harder for others to compete. Suddenly an individual had to spend that 12-18 months, plus many more months implementing fancy optimizations; developing a commercially viable compiler outgrew the realm of individual effort.

Some people claim that the open source model was the primary driver in killing off commercial C compiler development. I disagree. My experience of licensing compiler source was that companies preferred a commercial model they were familiar with, and reacted strongly against the idea of having to make available any changes they made to the code of an open source program. GCC (and more recently llvm) succeeded because many talented people contributed fancy optimizations, and these could be incorporated because developer machines contained lots of memory. If memory capacity had not increased dramatically, gcc would probably not be the market leader it is today.

  1. Pierre Clouthier
    October 8, 2011 23:44 | #1

    I guess you’ve never heard of Cmdr. Grace Murray Hopper.

  2. Ken
    October 9, 2011 01:13 | #2

    On the other hand, it’s probably more within reach than ever for a single person to write a good compiler for a new language, because llvm is designed to be so reusable. Consider the parallel to iPhone app development. Almost all apps are written by no more than a few people. That’s possible because of the frameworks.

  3. October 9, 2011 01:15 | #3

    @Pierre Clouthier

    Yes, I have heard of Grace Hopper. Her Wikipedia article refers to her writing something called the A compiler, presumably for a language called A. Was this a research project, an internal development language or a commercial compiler? I suspect a combination of the first two (were compilers sold commercially back then?). I know of several women who have written research compilers, but none who have single-handedly written compilers that have been sold commercially.

  4. October 9, 2011 01:58 | #4

    @Ken
    It is certainly possible for one person to bolt a new language front-end onto gcc/llvm and get high quality code out. For the same amount of work that person could generate simple assembler for some processor (i.e., there is not a lot of difference in effort between targeting gcc/llvm intermediate code and generating unoptimized assembler; see the sketch at the end of this comment).

    Some languages have such extensive and complex semantics (e.g., Ada, C++) that I doubt anybody could do a complete front-end in 12-18 months.

    I suspect the reason so much iPhone/Android app development is done by individuals is that the economics don’t support larger teams. Most games were originally written by individuals, but once the money started to roll in team sizes soon mushroomed.
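
    As a sketch of the gcc/llvm route (everything below is invented for illustration; the function and file names are not from any real front-end), a toy front-end might simply write out textual LLVM IR for the code it has parsed and leave all the optimization and machine-code generation to the existing tools:

        #include <stdio.h>

        /* Toy front-end sketch: emit textual LLVM IR for a function
         * computing a*x + b; the names and output file are invented. */
        int main(void)
        {
            FILE *out = fopen("axpb.ll", "w");
            if (out == NULL)
                return 1;

            fputs("define i32 @axpb(i32 %a, i32 %x, i32 %b) {\n", out);
            fputs("entry:\n", out);
            fputs("  %mul = mul nsw i32 %a, %x\n", out);
            fputs("  %add = add nsw i32 %mul, %b\n", out);
            fputs("  ret i32 %add\n", out);
            fputs("}\n", out);

            fclose(out);
            return 0;
        }

    Feeding the generated file to llc (or clang -O2 -c axpb.ll) then produces optimized machine code; writing out unoptimized assembler for a specific processor instead would not be much more work, which is the point made above.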

  5. October 9, 2011 18:55 | #5

    At Borland around the early 90s, the linker was implemented and maintained by a very talented woman programmer. Unfortunately I’ve forgotten her name, and while this was a linker and not a compiler, it was still part of the toolchain.

    I didn’t work for Borland, but I spent a week at their office beating up the beta version of whatever the “next” version of their C++ IDE was at the time. I recall the linker being rock solid. 🙂

  6. October 10, 2011 02:18 | #6

    I have a different take. We used to review all the C compilers at “Computer Language” and we were pretty close to the action. I think the battle between Borland and Microsoft, which was primarily about the quality of the IDEs, not the compiler toolchain, dominated the industry, hurting companies such as Zortech and Watcom. Then Borland stumbled terribly switching to Windows, giving Microsoft a few years of uncontested market domination. MS delivered “good enough” tools and a low-cost IDE, starving out the few remaining C vendors. Those that survived concentrated on niche and embedded markets, where some continue to make a living, but the golden days of C vendors were over. In other words, I don’t think it was technology directly related to compilers but industry trends (GUIs, marketing, MS’ decisions vis-à-vis “co-opetition” with 3rd party vendors) and contingencies (Windows 3, the rise of OOP, recession).

  7. October 10, 2011 07:19 | #7

    @Larry O’Brien
    You are right that in the MS-DOS/Windows world it was the GUI, and to some extent brand awareness, rather than the quality of the generated code that drove many sales. Neither Borland nor Microsoft was renowned for the quality of their code generation, with Microsoft being the most hyped and having the lowest code quality. As I recall, Watcom was considered to be the optimizing leader back then (its compiler is now available as open source), a crown probably worn by Intel these days.

    I think what won it for Microsoft was longevity. At some point they made the decision that a C++ compiler was part of keeping their core developer mindshare and had to be supported. Profits, if any, on product sales were a rounding error compared to Windows and Office. Had Borland or any other company looked like gaining significant developer mindshare, Microsoft would have dropped prices to gain market share (even though other vendors bent over backwards to offer Microsoft compatibility, something Microsoft itself sometimes did not maintain between versions of its own compiler).

    The original Borland compiler was written by an individual, Bob Jervis, and I believe that Microsoft’s was a team effort.
