
Why does Coverity restrict who can see its tool output?

Coverity generate a lot of publicity from their scanning of large quantities of open source software with their static analysis tools (the work started under a contract with the US Department of Homeland Security; I don't know whether that arrangement has changed), and a while back I decided to have a look at the warning messages they produce. I was surprised to find that access to the output required signing a nondisclosure agreement; this has subsequently been changed to agreeing to a click-through license for the basic features and signing an NDA for access to the advanced features. Were Coverity limiting access because they did not want competitors learning about new suspicious constructs to flag, or because they did not want potential customers seeing what a poor job their tool did?

The claim that access "… for most projects is permitted only to members of the scanned project, partially in order to ensure that potential security issues may be resolved before the general public sees them" does not really hold much water. Anybody interested in finding security problems in code could probably find hacked versions of various commercial static analysis tools to download. The SAMATE (Software Assurance Metrics And Tool Evaluation) project runs yearly evaluations of static analysis tools and makes the results publicly available.

A recent blog post by Andy Chou, Coverity's CTO, added weight to the argument that their Scan tool is rather run of the mill. The post discussed a newly added check that flags code where the return value of memcmp, and of related standard library functions, is tested for equality (or inequality) against specific integer constants. These functions are defined to return a negative value, zero, or a positive value; while many implementations always return -1 for the negative case and 1 for the positive case, a developer should test for the property of being negative/positive rather than for specific values that happen to have that property. Standards define library functions to have a wide variety of properties, and tools that check for correct application of these properties have been available for over 15 years.
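A minimal sketch of the kind of usage being flagged (my own illustrative example, not one taken from Coverity's post) contrasts the non-portable test against -1 with the portable sign test:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char a[] = "abc";
        const char b[] = "abd";

        /* Flagged pattern: assumes memcmp returns exactly -1 when a compares
           less than b; an implementation is only required to return some
           negative value. */
        if (memcmp(a, b, 3) == -1)
            printf("a sorts before b (non-portable test)\n");

        /* Portable form: test only the negative/zero/positive property. */
        if (memcmp(a, b, 3) < 0)
            printf("a sorts before b (portable test)\n");

        return 0;
    }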

My experience of developer response, when told that some library function is required to return a negative value and some implementation might not return -1, is that they regard any implementation not returning -1 as being 'faulty', since every implementation in their experience returns -1. I imagine that library implementors are aware of this expectation and try to meet it. However, optimizing compilers often try to automatically inline calls to memcmp, memcpy, and related copy/compare functions, and will want to take advantage of the freedom to return any negative/positive value if it means not having to perform a branch test (a big performance killer on most modern processors).
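To see where a value other than -1/1 can come from, here is a sketch of a straightforward byte-wise compare of the sort a compiler might generate inline (illustrative only, not any particular compiler's output): returning the difference of the first pair of differing bytes avoids an extra branch, and that difference can be any non-zero value.

    #include <stddef.h>

    /* Hypothetical inline-style compare: behaves like memcmp for the cases
       shown, but returns the raw byte difference rather than -1/0/1. */
    static int inline_style_memcmp(const void *s1, const void *s2, size_t n)
    {
        const unsigned char *p1 = s1, *p2 = s2;

        while (n--)
        {
            if (*p1 != *p2)
                return *p1 - *p2;   /* e.g. 'b' - 'd' == -2, not -1 */
            p1++;
            p2++;
        }
        return 0;
    }

Code that compares the result against -1 or 1 works with many library implementations but silently misbehaves with a compare like the one above, even though both conform to the standard.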

  1. Dan
    July 20, 2012 19:48 | #1

    Completely agree about Coverity being run of the mill. It’s a very pricey piece of software that is buoyed by a slick, well-paid staff of sales people who are skilled in hand-waving, FUD mongering, and self-promotion.

    If it was $1200 instead of $20K-$50K, I’d say “OK, meh…” but they are unreasonable in their pricing. Plus the tool’s setup & configuration is slow & painful.

    I work with many, many shops (I’m a consultant), some of whom use Coverity, and I’d estimate 50% of them feel a little bit like they’ve been “taken for a ride”. My conclusion is that Coverity is the “VxWorks” of the static analysis world – expensive, supported by a well-fed sales team, constantly beating its chest & proclaiming its superiority, yet simultaneously suffering from an inferiority complex thanks to smaller, more nimble tools like Gimpel’s Flexelint & PC-Lint.

  2. jduck
    July 21, 2012 20:14 | #2

    Also note that Coverity has requested that the SATE/SAMATE projects not release their raw results. Derive whatever conclusions you like from that fact, but it is indeed a fact. I, for one, am disappointed by Coverity’s lack of transparency when participating in such a noble evaluation effort.
