Follow-up on Holzmann@JPL, Coverity and Tecgraf tools...


Follow-up on Holzmann@JPL, Coverity and Tecgraf tools...

sur-behoffski
G'day,

[Cross-posting again to lua-l, as I have some questions about code
coverage (e.g. Valgrind), and I believe that the IM/CD/IUP modules
are only a short distance away from achieving a significantly
higher level of "Code Cleanliness" -- but this goal needs help
from the community.]

I've seen a huge swathe of Coverity reports on IUP in the last
48 hours or so, pointing to over a dozen possible bugs in the
IUP code (although Antonio Scuri has noted that some of those are
"#ifdef"'ed out via a "this is definitely broken" marker
preprocessor macro).

A big ***thank-you*** to "Ranier, VF", for providing these Coverity
scans.  He also provided a large number of valuable patches to cases
where the Coverity report seemed to be valid.

[Side note:  Some of the code in some modules, notably IM, was cloned
from third-party libraries at some point in the past.  While the latest
versions of those libraries have been cleaned up to remove complaints
from tools such as gcc's warnings, it has been easier to "freeze" the
snapshotted code as-is, non-bug warnings and all, than to try to port
forwards.  In a few cases, it has proved easier to remove the module
(e.g. libz, the "compress" module) and simply use the OS's link
manager to pull in the system library at link time.

While it would be valuable to run Coverity on CD and IM, I would
guess that the risk of code changes would limit what would be
attempted, given that the current code works well for (almost all?)
users.]

Going back to my pointing people at the Holzmann Mars Rover 2012
Usenix presentation, I note that the slew of tools run in
"continuous integration" mode (a full build every night, shown in a
slide at about 36:35 into the video) fell into two categories:

* Defect Detection:
         gcc
         gcc strict
         coverity
         codesonar
         uno

* Coding Rule Compliance:
         P10
         JPL standard
         MSL rules

(codesonar was very valuable, but extremely CPU-expensive to run:
Holzmann notes that a single-module scan could take ten minutes or
more, and that a whole-of-codebase overnight scan could take over a
dozen hours to complete.)

The presentation shows these reports being run through a "triage"
filter; a report that passes triage is recorded as a Defect in a
database called SCRUB, with the module owner deemed responsible for
it.  The report is assumed to be valid unless the developer
explicitly responds with a posting such as:
"DISAGREE: This is not a problem because of ...".  Any new SCRUB
database entry, and any response from the developer, is then reviewed
(I'm not sure by whom) to see whether the report and/or the
developer's "Disagree with the automated tool because of..." analysis
is valid; if the developer's disagreement is upheld, the defect
report may be suppressed at the "triage" stage.
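
As I understand it, the workflow could be modeled roughly as below.
This is only my own sketch, with hypothetical names throughout
(Report, triage, review); it is not JPL's actual SCRUB tooling.

         #!/usr/bin/env python3
         # Hypothetical sketch of the SCRUB-style triage workflow
         # described above.  All names are illustrative.
         from dataclasses import dataclass

         @dataclass
         class Report:
             module: str          # module owner is responsible
             message: str
             response: str = ""   # e.g. "DISAGREE: This is not a problem because ..."
             suppressed: bool = False

         def review(report: Report) -> bool:
             """Decide whether a developer's DISAGREE response is upheld.
             Here we simply uphold any DISAGREE response; the real
             process involves a human reviewer."""
             return report.response.startswith("DISAGREE")

         def triage(report: Report) -> str:
             # A report is assumed valid unless the developer disagrees
             # and the disagreement is upheld by review.
             if report.response and review(report):
                 report.suppressed = True
                 return "suppressed"
             return "defect"

         r1 = Report("iup", "possible NULL dereference")
         r2 = Report("im", "unused value",
                     response="DISAGREE: This is not a problem because ...")

         print(triage(r1))  # defect: no response, assumed valid
         print(triage(r2))  # suppressed: disagreement upheld at triage

The key property is the default: silence from the module owner leaves
the report standing as a Defect.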

[I'm simply working off of the one presentation by Gerard Holzmann in
making up this description... if I am wrong, corrections would be very
welcome.]

--------------------

Okay, enough of trying to come up with a "TL;DR" summary of the
continuous-build process.  Looking directly at Valgrind and:

         1. Lua; and
         2. A trivial loadable module.

(We hope to try PUC/Tecgraf's "IM", "CD" and "IUP" modules with
Valgrind shortly... but this is "too hard" for this message.)

First question:  Is Lua Valgrind-clean when given a trivial script
such as:

         #!/usr/bin/env lua
         return 0x42

If I run this as an executable from the command line, the result is:

         $ valgrind ./tt.lua
         ==5908== Memcheck, a memory error detector
         ==5908== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
         ==5908== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
         ==5908== Command: ./tt.lua
         ==5908==

So, yes, Lua is Valgrind-clean under a simple test.

Another check, running Lua directly, and having it "require()" a
second (trivial) module called "fred.lua", is also successful:

fred.lua:
         #!/usr/bin/env lua
         -- @module fred
         print("Hello from fred")
         return 42

tt.lua:
         #!/usr/bin/env lua
         require("fred")
         return 0x042

Again, Lua shows that it is Valgrind-clean in this case, correctly cleaning
up the heap:

         $ valgrind --leak-check=yes lua ./tt.lua

         ==6070== Memcheck, a memory error detector
         ==6070== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
         ==6070== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
         ==6070== Command: lua ./tt.lua
         ==6070==
         Hello from fred
         ==6070==
         ==6070== HEAP SUMMARY:
         ==6070==     in use at exit: 0 bytes in 0 blocks
         ==6070==   total heap usage: 495 allocs, 495 frees, 50,231 bytes allocated
         ==6070==
         ==6070== All heap blocks were freed -- no leaks are possible
         ==6070==
         ==6070== For counts of detected and suppressed errors, rerun with: -v
         ==6070== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
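
For anyone wanting to fold such a check into a nightly build in the
Holzmann style, a hypothetical CI step might look like the fragment
below (assuming valgrind and lua are on PATH; the script name tt.lua
is just the example above):

         # Hypothetical CI gate: make Valgrind findings fail the build.
         # --error-exitcode makes valgrind exit non-zero if it reports
         # any errors (including leaks found by --leak-check=full), so
         # the CI job fails instead of merely logging.
         valgrind --leak-check=full --error-exitcode=1 lua ./tt.lua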

------------------------

I note my earlier promise not to cross-post again to the lua-l list
unless there was sufficient "vibe" for it.  Seeing the multiple
messages (more than a dozen) on the iup-l list, showing both
Coverity-generated error reports and many messages, perhaps one patch
per message, addressing those reports, has encouraged me greatly; I
felt that some of that "vibe" had been picked up and used in the
Coverity checks.  I don't have access to the proprietary Coverity
tool, so I have resorted to merely reporting some very trivial
Valgrind tests on the Lua interpreter, in the hope that the way Lua
passed these tests with flying colours might be a starting point for
deeper code-quality testing.

cheers,

sur-behoffski (Brenton Hoff)
programmer, Grouse Software



Re: Follow-up on Holzmann@JPL, Coverity and Tecgraf tools...

Thorkil Naur
Hello,

On Sun, May 26, 2019 at 08:36:27PM +0930, sur-behoffski wrote:
> ...
> [I'm simply working off of the one presentation by Gerald Holzmann in
> making up this description... if I am wrong, corrections would be very
> welcome.]

The details provided in

  Mars Code
  By Gerard J. Holzmann
  Communications of the ACM, February 2014, Vol. 57 No. 2, Pages 64-73

may be useful. Noting, by the way, that the considerably more complex
example of model checking described in that paper was incorrect: See my
letter to the editor

  Code That Missed Mars
  Communications of the ACM, April 2014, Vol. 57 No. 4, Page 9

and Holzmann's detailed analysis

  http://spinroot.com/dcas/index.html

> ...

Best regards
Thorkil Naur