This post discusses some code review basics – concepts and inspection ideas one might use when performing a code review.  A code review is a technical verification activity.  Its purpose is most often to identify coding errors against the design intent – one is verifying that the code actually accomplishes what the author intended.  We might describe the activity as “verifying correctness” of the implemented code.  The particular inspection tasks one might use are listed later in this post.

There are many methods one can use to assure the quality of a distinct unit of code – code review is just one of them.  Many automated static analysis tools can detect bugs, vulnerabilities, and code “smells,” and these tools can often be integrated into a continuous build and integration environment.  The tools range from simplistic to very advanced – and generally the price tag follows that progression as well.  However, if a tool finds and leads to the elimination of even a single bug that would otherwise have ended up in the field, it likely pays for itself.

Unit testing, of course, is yet another method one should use in verifying correctness of implemented code.  A unit of code is a small, discrete piece of functionality that the developer can fully understand, and for which the developer can define testable behaviors that are verified while executing the code.  The key here is dynamic execution: many behaviors of software are difficult to analyze statically – one must execute the code to verify them.  A powerful addition to unit testing is test “hooks” that allow modification of inputs, injection of data, and monitoring of interim states and data values.
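
To make the “hook” idea concrete, here is a minimal sketch in C.  The module, function, and variable names (read_adc, sensor_update, filtered) are hypothetical, not from any particular codebase: a function pointer serves as the test hook, letting a unit test inject input data and then observe an interim state value.  In production code the reassignment of the hook would typically be compiled in only for test builds.

    #include <assert.h>
    #include <stdint.h>

    /* Production code reads the ADC through this function pointer; the
       pointer is the test hook that lets a unit test substitute its own
       input source. */
    static uint16_t read_adc_hw(void) { return 0u; /* real register read here */ }
    static uint16_t (*read_adc)(void) = read_adc_hw;

    static uint16_t filtered;      /* interim state, observable by the test */

    void sensor_update(void)
    {
        uint16_t raw = read_adc();
        filtered = (uint16_t)((filtered * 3u + raw) / 4u);  /* simple IIR filter */
    }

    /* --- unit test --- */
    static uint16_t inject_full_scale(void) { return 1023u; }

    int main(void)
    {
        read_adc = inject_full_scale;   /* install the hook */
        for (int i = 0; i < 32; i++) {
            sensor_update();
        }
        assert(filtered > 1000u);       /* filter must converge toward the input */
        return 0;
    }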

The driving principle, of course, is that software quality verification must occur as the code is written.  Don’t wait until large amounts of code are written and integrated before verifying correctness.  Quality cannot be “tested in” – it must be designed in.  A developer can often rationalize that quality attributes will be addressed later, during a hardening iteration or increment.  Many an experienced developer will tell you that the so-called hardening never occurs – schedule pressures and other priorities prevent the remediation from ever happening.

So what are some considerations and questions one might use during a code review?

  1. Are the assumptions made in creating each routine documented in the source code header or design specification?
  2. Has the use of each variable with scope beyond function level in this component been justified?
  3. Are all output variables verified (implicitly or explicitly) before being returned?
  4. Is each variable used for one and only one purpose?
  5. Does the routine handle every possible set of input data?
  6. For data accessed by multiple routines (e.g., by an ISR and by mainline code), is it documented how the data is protected against corruption by asynchronous accesses?  (See the first sketch following this list.)
  7. Has each variable been initialized before being used?
  8. Are variables appropriately named and related to the problem domain?
  9. Have all arrays been analyzed for boundary issues?
  10. Does the code follow structured design, with single return points and no “goto” statements?
  11. Have all constants been tied to macros and commented appropriately?
  12. Have all state machines been documented with a state chart in the design specifications?
  13. Are state transitions limited to signal inputs and, at most, one attribute variable?
  14. Does the state machine handle unexpected power loss?
  15. Is the software unit under review tied to an architectural description?
  16. Is there an API layer?  Does the API layer provide domain-relevant names for public interfaces?
  17. Does the NVM design have a robust scheme for handling unexpected power loss?  (See the second sketch following this list.)
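
Regarding item 6, a common protection scheme on small embedded targets is a short critical section around any mainline access to data an ISR can also write.  The sketch below assumes a Cortex-M-style target; __disable_irq()/__enable_irq() stand in for the toolchain’s interrupt-masking intrinsics, and the handler and struct names are made up for illustration.

    #include <stdint.h>

    /* Toolchain-specific interrupt-masking intrinsics (e.g., CMSIS on
       Cortex-M); declared here only so the sketch is self-contained. */
    extern void __disable_irq(void);
    extern void __enable_irq(void);

    /* Shared between an ADC ISR and mainline code.  The two fields must
       be read as a consistent pair, so the mainline read is wrapped in a
       short critical section. */
    typedef struct {
        uint16_t value;
        uint32_t timestamp_ms;
    } sample_t;

    static volatile sample_t latest_sample;

    void ADC_IRQHandler(void)             /* interrupt context */
    {
        latest_sample.value = 0u;         /* placeholder: read ADC data register */
        latest_sample.timestamp_ms = 0u;  /* placeholder: read tick counter */
    }

    sample_t get_latest_sample(void)      /* mainline context */
    {
        sample_t snapshot;
        __disable_irq();                  /* the ISR cannot run here...      */
        snapshot = latest_sample;         /* ...so the pair stays consistent */
        __enable_irq();
        return snapshot;
    }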

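On item 17, one widely used scheme is to write each record, with a sequence number and CRC, into alternating NVM slots: a write interrupted by power loss corrupts at most the slot being written, and the previous slot remains intact and detectably valid.  The sketch below is illustrative only; crc16() and the slot read/write primitives are assumed to be supplied by the platform, and all names are placeholders.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t sequence;    /* higher value = newer record            */
        uint32_t data;        /* the payload being persisted            */
        uint16_t crc;         /* CRC over the sequence and data fields  */
    } nvm_record_t;

    extern uint16_t crc16(const void *buf, size_t len);
    extern void nvm_read_slot(int slot, nvm_record_t *rec);
    extern void nvm_write_slot(int slot, const nvm_record_t *rec);

    static int record_valid(const nvm_record_t *r)
    {
        return r->crc == crc16(r, offsetof(nvm_record_t, crc));
    }

    /* Returns the slot holding the newest valid record, or -1 if none. */
    static int newest_slot(nvm_record_t *out)
    {
        nvm_record_t rec[2];
        int best = -1;
        for (int i = 0; i < 2; i++) {
            nvm_read_slot(i, &rec[i]);
            if (record_valid(&rec[i]) &&
                (best < 0 || rec[i].sequence > rec[best].sequence)) {
                best = i;
            }
        }
        if (best >= 0) {
            *out = rec[best];
        }
        return best;
    }

    /* Writes the new value into the slot NOT holding the newest record,
       so the newest intact record is never overwritten mid-update. */
    void nvm_store(uint32_t data)
    {
        nvm_record_t cur, next;
        int newest = newest_slot(&cur);
        next.sequence = (newest >= 0) ? cur.sequence + 1u : 1u;
        next.data = data;
        next.crc = crc16(&next, offsetof(nvm_record_t, crc));
        nvm_write_slot((newest == 0) ? 1 : 0, &next);
    }
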
Drivers

  1. Are drivers specific to one hardware device only?
  2. Do the drivers include an initialization function?
  3. Do the drivers make “port safe” writes?  (See the sketch following this list.)
  4. Are signals properly filtered, and is the filtering approved by the hardware designer?
  5. Are signal shorts and signal opens detected and reported by the driver?
  6. Are signals put into safe state during reset?
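
Regarding item 3, a “port safe” write avoids the read-modify-write hazard on a shared port register: if mainline code reads the output register, modifies a bit, and writes the result back, an ISR that touches the same port in between can have its change silently undone.  The sketch below assumes a hypothetical MCU whose GPIO block provides atomic set/clear registers (many parts do); the register layout and base address are made up.

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO port.  Writing a 1 to a bit of SET
       or CLR changes only that pin in a single bus write, so no
       read-modify-write of the shared OUT register is needed. */
    typedef struct {
        volatile uint32_t OUT;   /* read-modify-write target: NOT port safe */
        volatile uint32_t SET;   /* write 1 to set a pin: port safe         */
        volatile uint32_t CLR;   /* write 1 to clear a pin: port safe       */
    } gpio_port_t;

    #define GPIOA ((gpio_port_t *)0x40010000u)   /* made-up base address */

    void led_driver_set(uint8_t pin, int on)
    {
        if (on) {
            GPIOA->SET = (1u << pin);   /* single atomic write */
        } else {
            GPIOA->CLR = (1u << pin);
        }
        /* The unsafe alternative would be:
           GPIOA->OUT |= (1u << pin);
           which an interrupting ISR writing the same port can corrupt. */
    }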

 

About the author

Partner and General Manager, Brian Pate is ISO 13485:2016 Lead Auditor certified for Medical Device Quality Management Systems (MD), and ISO 19011:2018 certified for Management Systems Auditing (AU) and Leading Management Systems Audit Teams (TL). Brian started his medical device career in anesthesia clinical research in 1985 and has since worked in both academia and industry, including many years with Johnson & Johnson, Baxter Healthcare, and GE Medical. Brian’s roles have included software engineering, systems engineering, quality assurance, and regulatory affairs. Brian has served on multiple AAMI TIR working groups, including TIR32-2008 (Application of ISO 14971 Risk Management to Software; now IEC 80002-1) and TIR45-2012 (Guidance on the use of Agile practices in the development of medical device software), and served as a reviewer for the 2nd edition of TIR45. Brian serves on the AAMI Software Committee and as an AAMI instructor for the software, design controls, and agile methods courses. Brian is also a member of the Underwriters Laboratories (UL) Standards Technical Panels for UL1998 (Software in Programmable Components) and UL5500 (Remote Software Updates).
