As cybersecurity becomes a strong FDA focus, with specific requirements around static and dynamic code analysis, engineers must automate those practices and integrate them into existing development workflows. In this article, Auriga’s Airat Sadykov and Andrey Shastin share some tricks of the trade.
At Auriga, we’ve been delivering software development projects for medical devices for almost two decades, from relatively simple blood glucose meters to more complex systems such as infusion pumps, patient monitors, and lung ventilation units. Implementing static and dynamic code analysis practices and deploying them on these projects is an integral part of our process, and here we’ll share some tips drawn from our practical experience and challenges.
Static Code Analysis
Static analysis is the practice of automatically checking compliance with well-known coding guidelines (e.g., MISRA, CERT, AUTOSAR, JSF) and detecting potential bugs such as null pointer dereferencing, division by zero, and buffer overflows. Modern static analysis tools also complement the traditional code review practice, reducing the manual effort by at least 30% in our experience.
In most cases, the first run of a static analysis tool against your current code will show thousands of errors (we’ve encountered even 20,000+ on the first run), which can of course be incredibly overwhelming, as it seems like it would take years to fix all of them. So here are some tips from our experts to cope with the problem.
Tip #1: The Compiler is your Friend
Disciplined development teams usually compile with -Wall and -Werror (in GCC), /Wall /WX (in Visual Studio), or similar options in other compilers. Fixing compiler warnings is an easy and inexpensive way to prepare for static analysis execution. We have seen that reviewing the output of your compiler in a “paranoid” mode can reduce the overall volume of static analysis violations.
While fixing all compiler warnings is a good thing, on many projects a compiler alone is neither sufficient nor an acceptable option for compliance reasons.
After draining your compiler dry, move on to static analysis tools, which are meant to dig much deeper into the code and give you far more hints.
Tip #2: Adopt static analysis early in the process
If you are currently developing medical device software, then you should be prepared to address the question of an automated static code analysis practice. Static analysis is almost guaranteed to be a subject of a discussion during an internal/external audit or even a pre-market submission.
The key is to deploy the tool in such a way that the team does not lose development velocity while focusing on improving quality, rather than dealing with tool idiosyncrasies and noise. This is a balancing act, requiring practice and expertise that Auriga engineers have gained over the years. What we have found is that while investigating the root cause of reported errors, you are likely to discover that many of them are easy to fix.
Here are just a few examples from a static analysis report that can easily be fixed with a simple script or by well-trained interns:
1) Calls to cleanup functions in destructors:
a. Destructor ‘~CTitle’ should not call function ‘clear_’ that is not in try context
b. Destructor ‘~THelper’ should not call function ‘removeModule’ that is not in try context
2) Check for NULL:
a. “pMP” may possibly be null
b. “((NPage*)this)->pSysCfg_” may possibly be null
3) Declarations possibly in the wrong section:
a. Data members ‘D_FILE_1’ is declared as ‘public’
4) Non-constant arguments:
a. String literal “MCollection” is passed to function ‘FixedBlockHeap’ as pointer to non-const object
You can set aside many violations in existing code and deal with them when you have downtime. However, it’s important NOT to introduce new violations (technical debt) as you develop code. For example, Parasoft C/C++test has features that allow engineers to filter the noise and focus on fixing the most critical recent static analysis violations.
Dynamic Code Analysis
While static analysis interprets the source code as text and draws all its conclusions from parser output without executing a single instruction, dynamic analysis provides a different perspective on the code. It examines the running code, showing code coverage, the sufficiency and quality of unit tests, memory leaks, and other potential weaknesses.
Tip #3: Be Flexible with your Runtime Environment
The term “embedded” covers a wide range of devices: from 8-bit MCUs with kilobytes of RAM and flash up to 64-bit multicore CPUs with gigabytes of RAM and high-speed SSDs. When a device’s daily operation requires minimal memory and processing power, manufacturers are likely to opt for hardware geared only toward those needs, addressing size, weight, or cost constraints. Although they usually leave some capacity for software updates and maintenance, it may still be insufficient for a dynamic analysis tool instrumenting the software, as the process requires a huge amount of RAM (compared to normal operation) and storage space to collect test and code coverage results.
Therefore, when the amount of memory on the device is too low, the target platform may simply not be suitable for collecting code coverage during unit and integration testing.
If your main target platform is not supported out of the box, search for valid alternatives. There could be a sister platform with more interfaces and memory that is supported by a dynamic analysis tool, which you can use for unit and some integration testing. Another alternative is to use hardware simulators such as Arm Fast Models or QEMU.
While running tests on the target platform is the most desirable option, teams often opt to perform unit and some application integration tests on the developer’s workstation (Linux, macOS, or Windows) to benefit from a faster development cycle and the larger number of tools available for general development platforms. In that case, you will need to port your embedded code to compile with a host compiler, which can present some challenges.
There are plenty of compilers, build tools, frameworks, and ways to run them. Dynamic analysis tools support some common build techniques, with internal presumptions about how developers might apply them. Therefore, even if the code itself is portable, the project team will most likely need additional time to align the dynamic analysis tool’s settings with how the project’s build system works.
We highly recommend estimating all of these efforts in advance to pick the best approach for long-term development and support.
Tip #4: Don’t Overly Rely on Autogenerated Test Cases
Modern dynamic analysis tools automatically generate sets of unit tests to increase code coverage. But tools are just tools: they are agnostic of the various scenarios in which the production code is intended to be used, especially when you try to increase code coverage and cross the border between pure unit tests and integration tests. You will still have to manually update generated unit tests, and even write new ones, because only you know what the code is intended to do when it comes to positive and negative test results.
In our experience, automatically generated unit tests can cover individual files in a critical area anywhere from 40% up to 100%. But on average, across a whole project, the numbers vary from 25% to 60%.
In addition, autogenerated unit tests that use only the source code as input can often run perfectly on buggy code. Autogeneration is best used as a way to start the unit testing process. Creating test cases manually improves the quality of the code because it forces an additional code review and provides a different vantage point on the design.
Tip #5: Don’t Underestimate the Effort of Tool Qualification/Validation
The FDA requires any tool used during formal development to be validated for its intended use, to ensure it performs the expected actions and produces the right results. This usually takes the form of a dedicated test protocol called Intended Use Validation (IUV).
Industry-leading vendors of static and dynamic analysis tools help produce the necessary procedures and documentation, which is extremely helpful for minimizing your effort. Parasoft’s qualification package is especially valuable because it automates a large part of the process. As with any other generic package, you should reserve some effort to review and adjust the documents to sync them with your own QMS procedures. For example, Parasoft’s Tool Qualification software provides a set of test cases to be executed in your environment to validate the static and dynamic analysis capabilities.
Key Takeaways
- Treat compiler warnings as errors. Warnings can become errors in a newer version of the same compiler, and they often mean the compiler is doing something implicitly.
- Run static analysis tools early in the project and fix issues as they appear.
- Keep code portable. Even if you don’t plan to port your product to another platform, it might become an option in the future. It also helps keep your code clean, maintainable, and readable, which is always good for review and further support.
- Do not overly rely on autogenerated test cases to improve code quality.
- Do not underestimate the need, and the effort required, to validate the tool for its intended use.
If you want help with the adoption of static and dynamic analysis, please contact us at Auriga.
Andrey Shastin, Technology & Business Partnering Executive, Auriga
Airat Sadykov, Project Manager, MedTech Development, Auriga