Jeff Kaufman on how AI has upended the status quo of coordinated disclosure for security vulnerabilities.
A week ago the Copy Fail vulnerability came out, and Hyunwoo Kim immediately realized that the fixes were insufficient, sharing a patch the same day. In doing this he followed standard procedure for Linux, especially within networking: share the security impact with a closed list of Linux security engineers, while fixing the bug quietly and efficiently in the open. His goal was that with only the raw fix public, the knowledge that a serious vulnerability existed could be “embargoed”: the people in a position to address it know, but they’ve agreed not to say anything for a few days.
Someone else noticed the change, however, realized the security implications, and shared it publicly. Since it was now out, the embargo was deemed over, and we can now see the full details.

It’s interesting to see the tension here between two different approaches to vulnerabilities, and think about how this is likely to change with AI acceleration.
On one side you have “coordinated disclosure” culture. This is probably the most common approach in computer security. When you discover a security bug you tell the maintainers privately and give them some amount of time (often 90 days) to fix it. The goal is that a fix is out before anyone learns about the hole.
On the other side you have “bugs are bugs” culture. This is especially common in Linux, where the argument is that if the kernel is doing something it shouldn’t then someone somewhere may be able to turn it into an attack. Just fix things as quickly as possible, without drawing attention to them. With so many changes going past, people often won’t notice, and there’s still time to get machines patched.
This approach never worked perfectly, but with AI getting good at finding vulnerabilities it’s a much bigger problem. So many security fixes are coming out now that examining commits is much more attractive: the signal-to-noise ratio is higher. Additionally, having AI evaluate each commit as it passes is increasingly cheap and effective.
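The commit-scanning described above can be sketched in a few lines. This is a minimal, hypothetical illustration: a real pipeline would feed each diff to an LLM with a prompt asking whether it silently fixes a vulnerability, but here a simple keyword heuristic stands in for that call so the sketch is self-contained, and the repo path and keyword list are assumptions, not anyone's actual tooling.

```python
import subprocess

# Illustrative stand-in for an LLM classifier: flag commits whose
# message or diff uses memory-safety language. A real scanner would
# replace this with a model call evaluating the full diff.
SUSPICIOUS = ("overflow", "use-after-free", "out-of-bounds", "cve", "double free")

def looks_like_security_fix(message: str, diff: str) -> bool:
    """Return True if the commit text mentions likely security-fix language."""
    text = (message + "\n" + diff).lower()
    return any(word in text for word in SUSPICIOUS)

def scan_recent_commits(repo_path: str, n: int = 50) -> list[str]:
    """Return hashes of recent commits in repo_path that look like
    quiet security fixes, by examining each commit as it passes."""
    hashes = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{n}", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    flagged = []
    for h in hashes:
        show = subprocess.run(
            ["git", "-C", repo_path, "show", h],
            capture_output=True, text=True, check=True,
        ).stdout
        message, _, diff = show.partition("diff --git")
        if looks_like_security_fix(message, diff):
            flagged.append(h)
    return flagged
```

The point of the sketch is how cheap this loop is: iterating over every merged commit and asking "is this a security fix?" costs almost nothing per commit, which is why the signal-to-noise argument for quiet fixes no longer holds.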
Another take on this, from user tptacek on Hacker News:
This has been a very long time coming, and the crackup we’re starting to see was predicted long before anyone knew what an LLM was.
The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.
This has been playing out in slow motion ever since BinDiff: you can’t patch software without disclosing vulnerabilities. We’ve been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.
It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
The norms of coordinated disclosure are not calibrated for this environment. They really haven’t been for the last decade.
I’m weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.