Open vs. closed source: which wins for security?
Some may assume that working on Android gives me a natural bias toward open source. Perhaps! However, I’ve spent more of my 30+ year software engineering career working on closed source software; there is more closed source than open source software powering Google’s products and services; and there is even more closed source than open source software from Google on every certified Android device. I have never been a FOSS zealot. Instead, I look at this question purely from a technical security perspective, grounded in experience in both worlds. So let’s explore it while dispelling some common misconceptions along the way!
First, a definition: what is open source software? You can read the verbose definition from the Open Source Initiative, but the TL;DR is: software whose source code is freely available for anyone to use, modify, and redistribute.
Myth #1: open source code is good for security because “many eyes” will find more bugs.
Source code being publicly available doesn’t guarantee that “many eyes” will actually look at it. Plenty of open source projects lack a robust developer community. OpenSSL, ubiquitous yet long under-resourced, was a salient example of this myth: the infamous Heartbleed bug, along with the POODLE, FREAK, and Logjam TLS vulnerabilities, sat in plain sight of the world’s “many eyes” for years.
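To make the point concrete, here is a minimal sketch (in Python, purely illustrative, not OpenSSL’s actual C code) of the bug class behind Heartbleed: the server trusted an attacker-supplied length field when echoing back a heartbeat payload, leaking whatever memory sat adjacent to it.

```python
def heartbeat_vulnerable(adjacent_memory: bytes, payload: bytes, claimed_len: int) -> bytes:
    # BUG: trusts the attacker-supplied length, so a claimed_len larger
    # than the payload leaks bytes from adjacent memory in the reply.
    return (payload + adjacent_memory)[:claimed_len]

def heartbeat_fixed(adjacent_memory: bytes, payload: bytes, claimed_len: int) -> bytes:
    # FIX: validate the claimed length against the actual payload first.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload")
    return payload[:claimed_len]
```

The fix amounts to a single bounds check before replying, yet this class of missing check went unnoticed in OpenSSL for roughly two years despite countless eyeballs.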
And just because code is closed doesn’t mean it lacks many reviewers. iOS has the benefit of thousands of employees reviewing source code and a community of external researchers scouring the binary code for vulns that may be worth a bounty.
The “many eyes” theory is sometimes referred to as Linus’s law: given enough eyeballs, all bugs are shallow. FOSS developers have made this theory so popular that it feels more like religion. Yet it’s dangerously misleading. Even the most prolifically reviewed open source projects harbor plenty of “deep” bugs (and chains of them) that require massive time and complexity to discover or exploit. It’s not uncommon to find bugs that have lurked for many years despite countless eyeballs; see, for example, the recently discovered Linux bug that lay latent for 12 years!
However, when an open source project does enjoy a robust developer ecosystem, the simpler bugs will get discovered faster. So a better law might be:
Given enough eyeballs, all shallow bugs surface.
Myth #2: open source code is bad for security because it’s easier for attackers to find bugs.
At a recent IoT security conference, someone made this exact statement. I don’t know if this person was a software developer, but I suspect not an experienced one. Per the law above, the relatively frequent discovery of shallow bugs in the early years of a software project can lead to this conclusion. Android is arguably an example: in its early years following its public launch in 2008, the pace of code change was extreme, the number of eyeballs was smaller, and the attention to security was lower than what we see today (and have seen over the past 5+ years). Thus, researchers and developers found shallow bugs at a high rate and volume. Today’s Android has a mature SDLC and vastly improved security design and attack mitigations that have made the cost of scaled critical exploits higher than iOS, according to a variety of pricing sources, including exploit brokers.
Android and iOS: a long-term study in the security impact of open vs. closed software
So not all open source projects have equivalent safety characteristics, and neither do all closed source projects. However, Android and iOS provide a good way to compare the relative strength of the open vs. closed source model over time, because the two operating systems are similar in maturity, complexity, and objective. Of course there are differences in how the systems work, but on the whole they serve similar functions and operate at similar scale. As mentioned above, in Android’s early years there was a general belief that it was riddled with vulnerabilities (and it was) and that iOS was super “locked down” in comparison. In fact, the general belief that iOS lacked significant exploits persisted for an astonishingly long time. Operating system and exploit researchers had such fertile ground with Android that there simply wasn’t a ton of good research happening on iOS, and many consumers, media, and even enterprise IT professionals simply concluded that Apple had somehow, amazingly, written tens of millions of lines of code without exploitable bugs. Of course, those of us who’ve built complex software projects and worked in security knew this was folly.
Pegasus: a turning point
The converse of Myth #2 is that closed source systems have fewer practical exploits than open source systems (call this Myth #2A). A bellwether moment for me in this study came in 2016, at a meeting with the CIO and CISO of one of the world’s largest banks. I was leading product security at BlackBerry and was called in to answer questions about the impact of the Pegasus iOS zero-day exploits on BlackBerry’s software application container, which the bank was using to protect its corporate iPhone fleet. BlackBerry’s software runs within an app in user mode, and the previously published Pegasus information made it clear Pegasus could fully compromise the iOS kernel and jailbreak the device. Yet the bank’s security leadership asked me whether Pegasus could compromise BlackBerry’s app-level controls. I was dumbfounded that the security leadership of one of the world’s largest banks didn’t seem to understand how computers work. But they were also completely stunned by the existence of these vulnerabilities in the first place. They said they didn’t think iOS was exploitable in this manner, and asked whether I thought there were more critical bugs like these lurking. Yep, wow. I answered: “There is approximately one bug per 1,000 lines of commercial-quality systems code, and iOS has many millions of lines of code, so yes, there are a boatload of bugs, a decent number of them will be exploitable or chained into exploits, and you should never assume otherwise.” Since 2016, there has been a veritable flood of iOS vulnerabilities discovered by researchers and zero-days exploited to harm Apple’s users. I sometimes wonder how naive the former CIO and CISO of that bank must feel looking back.
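The back-of-envelope math I gave the bank can be sketched as follows. The defect density, code size, and exploitable fraction below are illustrative assumptions, not measurements of iOS:

```python
# Back-of-envelope estimate of latent bugs in a large codebase.
# All three constants are assumptions for illustration: ~1 bug per
# 1,000 lines of commercial-quality systems code, ~10 million lines
# in a mature mobile OS, and a small fraction with security impact.

DEFECTS_PER_KLOC = 1          # assumed defect density (bugs per 1,000 lines)
LINES_OF_CODE = 10_000_000    # assumed size of a mature mobile OS
EXPLOITABLE_FRACTION = 0.01   # assumed share of bugs that are exploitable

latent_bugs = LINES_OF_CODE // 1000 * DEFECTS_PER_KLOC
exploitable = int(latent_bugs * EXPLOITABLE_FRACTION)

print(f"Estimated latent bugs: {latent_bugs:,}")     # 10,000
print(f"Potentially exploitable: {exploitable:,}")   # 100
```

Even with far more conservative assumptions, the conclusion is the same: a codebase of that scale is never bug-free, and some of those bugs will be exploitable.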
So what can we expect in future exploit trends?
Looking forward, the trend of AOSP, and the best Android implementations like Pixel, commanding higher exploit pricing than iPhone seems poised to continue. In fact, the trend suggests that scaled Android exploit pricing will rise faster than iOS pricing. Now that both operating systems have reached a level of maturity in their development lifecycles, there are simply far more people working to improve Android security than iOS security. Apple has a few thousand employees contributing in some way to iOS security, and Apple hasn’t exactly welcomed the research community with open arms. In addition to Google’s teams, Android has the power of the global Linux community (Linux underpins the security of Android) working to improve its security, SoC manufacturers such as Samsung SLSI, Qualcomm, and MediaTek (all with their own large security teams), a robust global academic research community that prefers to work on open source projects, and a large set of product manufacturers such as Samsung, Sony, Meta, Amazon, and Xiaomi, all with their own mature security teams improving Android as they develop their Android-based products. My guesstimate is that a couple orders of magnitude more people are working to secure Android than iOS. This doesn’t just mean finding bugs; it means innovating new security technologies such as the Arm Memory Tagging Extension (co-designed by Arm and Google with help from several SoC vendors that specialize in Android products), CHERI, the use of Rust in Linux and Android, Linux kernel hardening efforts such as the Kernel Self-Protection Project, and many more.
While Apple has built some excellent products, other device developers do not benefit from Apple’s iOS work, whose primary goal is to improve Apple’s own economics. In contrast, Android is not only the most popular consumer operating system in history; an untold number of device developers have also innovated with it, powering auto infotainment systems, boat infotainment, in-flight entertainment, TVs, set-top boxes and streamers, medical devices, smartwatches, and more. And of course Linux underlies not only these Android-based systems but a large portion of all computing devices on the planet capable of running a virtual-memory OS. This vibrant ecosystem is not just about the flexibility of open source code but the power of open platforms. #powerofopen has proven itself the superior model, year after year, in innovation as well as global market share.
Myth #3: software confidentiality is a significant attack deterrent.
By now, hopefully the tech world has woken up to the fact that the value of code secrecy is quite limited. And I’m confident even Apple doesn’t rely much on security through obscurity anymore. Rather, we must hearken to Kerckhoffs’s principle: assume the enemy knows the system, because they do. Furthermore, there is an insidious security pitfall of closed source software that hasn’t been talked about much: obscurity makes it easier for exploits and attackers to remain hidden within the walled garden. Paradoxically, obscurity presents a mere speed bump for attackers landing an exploit, yet makes life far harder for researchers trying to find the proverbial needle (a backdoor) in the haystack (millions of lines of code). Indeed, closed source can be thought of as a broad veil across the eyes of the global community of potential defenders.
Transparency: the tide that raises all boats.
Open source is a powerful enabler of product transparency, but it’s by no means the only one. For example, binary transparency and hermetic/reproducible builds are increasingly important aspects of a trust-building, externally verifiable development model; alas, those are topics for another day.
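As a taste of what binary transparency enables, here is a minimal sketch, assuming a hermetic, reproducible build: anyone can rebuild the same source at the same commit, obtain a byte-identical artifact, and check it against a published digest. The file path and the notion of a “published digest” here are hypothetical placeholders, not any vendor’s actual scheme:

```python
# Minimal sketch of verifying a release artifact against a published
# digest. Assumes reproducible builds, so an independent rebuild of
# the same source yields a byte-identical binary.

import hashlib

def artifact_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a build artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_digest: str) -> bool:
    """Compare a locally (re)built artifact against the published digest."""
    return artifact_digest(path) == published_digest
```

The design point is that trust shifts from “the vendor says so” to a check any third party can perform, which is exactly the externally verifiable model the transparency work aims at.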