Embedded Software Security: Best Practices for Secure Code Q&A
September 30, 2025

Author: Mitch Souders, Senior Software Engineer at RunSafe Security
TrustInSoft spoke with Mitch Souders, Senior Software Engineer at RunSafe Security, to hear his perspective on the state of embedded software security and the most prevalent issues likely to affect the future of software security.
Bio: Mitch Souders is a Senior Software Engineer at RunSafe Security Inc., where he leads security-focused software engineering efforts, including the transition of RunSafe’s codebase from C++ to Rust to enhance memory safety and uphold Secure by Design principles. He brings over 15 years of experience in software development and holds a Master’s degree in Computer Science from Portland State University.
Read the conversation about the future of software security:
1. What are the most common coding mistakes that introduce vulnerabilities in embedded software and why do they persist despite decades of security guidance?
The most common mistakes are the ones we all know about by this point: usually some combination of off-by-one errors and poor input sanitization. Both lead to referencing invalid memory, whether through array overruns, dangling pointers, or null pointers.
These errors largely persist because, despite most evidence to the contrary, software developers are human and C/C++ makes these kinds of mistakes easy to make and easy to overlook.
There are some mitigating techniques, but they all push the problems to the developer rather than the language. The common approaches are instituting code reviews, leveraging external tooling (e.g. AddressSanitizer, static analyzers), and improving testing.
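To make the failure mode concrete, here is a minimal sketch (an illustration, not code from the interview) of the off-by-one overrun described above. It compiles without complaint under default settings, but building with AddressSanitizer (for example, clang++ -g -fsanitize=address overrun.cpp) reports a stack-buffer-overflow at runtime as soon as the loop steps past the array.

```cpp
#include <cstdio>

int main() {
    int readings[8] = {0};

    // Classic off-by-one: "<=" instead of "<" makes the final iteration
    // write one element past the end of the array.
    for (int i = 0; i <= 8; ++i) {   // bug: should be i < 8
        readings[i] = i * 10;        // i == 8 writes out of bounds
    }

    std::printf("last reading: %d\n", readings[7]);
    return 0;
}
```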
2. In your experience, what’s the most overlooked aspect of secure coding in embedded C/C++?
Believe it or not: Security. Most projects are happy at the end of the day to be shipping a product that builds, compiles, and ideally passes the available tests. Security concerns often come dead last, addressed only after all of these are met and only if additional time is available before the ship date. Trying to fix your security concerns after the fact is a huge issue; the problems are already endemic in the codebase, with layers of abstraction already piled on, obscuring hard-to-find bugs.
3. Static analysis, formal verification, fuzzing, runtime protections—there are lots of tools out there. How should teams think about layering these techniques to get the best coverage?
I am a firm believer that if you have access to a tool that can help you, you should use it. At a previous position, my boss was told that our regression testing was taking too long to run and we would need to selectively reduce the number of tests. He responded that every engineer on the team had a story of a single test that caught an otherwise disastrous bug that would have slipped through if that test didn’t exist. Every tool is just like that test: each one adds additional coverage, even if at face value it claims to cover the same thing as another.
That all being said, if you have to pick one, or if cost or integration time is a concern, fuzzing is cheap and finds bugs that no one expected. It can be easily integrated into existing test suites. You can write your own bare-bones fuzzers in most cases, though I would recommend finding a framework that provides random input generation and branch-coverage feedback for better coverage and fuzzing speed.
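As one possible illustration (a sketch, not a harness from the interview), a coverage-guided harness in the libFuzzer style can be a single entry point that feeds mutated bytes into the code under test. The parse_packet function below is a hypothetical stand-in for real input-handling code; building with clang++ -fsanitize=fuzzer,address harness.cpp combines the fuzzer with AddressSanitizer so memory errors are caught the moment a generated input triggers them.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical parser standing in for the real code under test.
// Expected input: [1-byte payload length][payload bytes].
static bool parse_packet(const uint8_t* data, size_t size) {
    if (size < 1) return false;
    const size_t payload_len = data[0];
    if (payload_len > size - 1) return false;  // reject truncated packets

    uint32_t checksum = 0;
    for (size_t i = 0; i < payload_len; ++i) {
        checksum += data[1 + i];
    }
    return checksum != 0;
}

// libFuzzer calls this entry point repeatedly with mutated inputs,
// using branch-coverage feedback to steer generation toward new paths.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
    parse_packet(data, size);
    return 0;  // non-zero return values are reserved by libFuzzer
}
```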
4. Looking beyond the immediate benefits, how does a post-deployment protection strategy inform a more resilient and forward-thinking approach to product security throughout its entire lifecycle?
As product lifecycles get longer, with deployments potentially measured in decades, we see the long tail of bugs and security issues that in the past may have gone unnoticed or been otherwise ignored. That one-off internet-connected embedded sensor may run until it literally falls apart. Looking back over the last 10 years of CVEs on existing devices and imagining what the next 10 will look like makes us very cognizant of how critical it is to consider that reality before, during, and after the next product launch. This knowledge is forcing companies to account for the long lifecycle and decide today what security they need in place.
5. What role does memory safety play in modern embedded security, and how are you seeing teams address these risks differently today than five years ago?
I think the addition of Rust brought memory safety issues to the forefront. Before it existed, most people either weren’t familiar with memory safety or considered it largely impractical to worry about, given the existing languages. If they were concerned, they largely considered it to be the realm of garbage-collected languages like Java, Python, and C#.
Rust’s memory safety guarantees (and lack of garbage collection) made it a contender in many embedded systems where only C/C++ were considered viable options. While C++ has been introducing language features since C++11 that help with some memory safety vulnerabilities, it is now scrambling to add more in future editions, clearly responding to a shifting tide of memory safety concerns.
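As a rough sketch of the kind of post-C++11 features alluded to here (my example, not one from the interview), the snippet below contrasts manual ownership and unchecked indexing with std::unique_ptr and bounds-checked access, which remove whole classes of leaks, dangling pointers, and silent overruns.

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>
#include <vector>

struct Sensor {
    int id;
};

int main() {
    // Pre-C++11 style would be "Sensor* s = new Sensor; ... delete s;",
    // which is easy to leak, double-free, or leave dangling.
    // make_unique (C++14) ties the lifetime to the enclosing scope instead.
    auto sensor = std::make_unique<Sensor>();
    sensor->id = 1;

    std::vector<int> readings = {10, 20, 30};

    // readings[3] would be a silent out-of-bounds read (undefined behavior);
    // at() turns the same mistake into a catchable exception.
    try {
        std::printf("reading: %d\n", readings.at(3));
    } catch (const std::out_of_range&) {
        std::printf("index out of range for sensor %d\n", sensor->id);
    }
    return 0;
}
```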
Teams see the risk and are considering the language they are using, choosing either to port critical code to Rust or to stick with C++ and leverage additional tools to try and validate their existing C++ code. Both approaches have pros and cons that usually come down to budget and whether the time exists to retrofit existing code to meet new memory safety requirements.
6. Software supply chain risks have become top-of-mind for embedded teams. How important is it to have visibility into all software components, and how do you view SBOMs within the embedded security picture?
Ultimately, you should always have a good understanding of everything that you intend to ship. Most people have a rough idea of what’s in their product, but that’s not always obvious when leveraging third-party tools. I view SBOMs like the “Nutrition Facts” on your favorite cereal box; it’s not often that you read the label, but that information must be dutifully collected. To extend the analogy further: when an “ingredient” is found to have an issue at the factory, it’s very clear which products are affected and they can be directly remedied.
In the embedded security picture, this is critical information. In the past, a CVE would be disclosed and it wasn’t even clear which products were impacted. SBOMs make that a thing of the past, allowing responsible maintenance of existing released products in the security domain.
7. Looking ahead, what trends or technologies do you think will reshape how we write secure embedded code over the next 3–5 years?
I’m a big fan of Rust, and I think that as the benefits of stronger type systems, memory safety, and general usability get tested by engineers who may otherwise be largely familiar with C/C++, we will find the industry wanting more. Expectations have been raised about what your language should be doing for you to help prevent common programming mistakes, build your software, and validate your code. We’ll continue to see tooling spring up around languages that can’t meet that threshold, but I think in most cases a ground-up solution is required. Some experimental languages, like Carbon, are hoping to be able to port/integrate existing C/C++, and that too may be a viable route to improving secure code, or at the very least reducing the insecure code to something more manageable.
Secure Code Now & For the Future
Software requirements are always evolving, and the right tool stack is essential for success in securing embedded systems. As Mitch Souders emphasized, it is important to have a good understanding of what lies within your software and what you are shipping. Make sure your tools and methods are up for the challenges of safe and secure software development.
Learn more about software safety, security and reliability with TrustInSoft Analyzer.