What was notable in this case was that Google disclosed this issue to Intel, and Intel then took responsibility for public disclosure. Intel then proceeded to mail patches to public trees without flagging them as having any security relevance, posted a public disclosure despite the patches only being in pre-release kernels, and claimed that affected users should upgrade to 5.9 (which had just been released), even though 5.9 absolutely did not contain the fixes.
Coordinating disclosure around security issues is hard, especially with a project like Linux where you have an extremely intricate mixture of long-term support kernels, vendor trees and distributions to deal with. Companies who maintain security-critical components really need to ensure that everyone knows how to approach this correctly, otherwise we end up with situations like this where the disclosure process actually increased the associated risks.
(I was part of Google security during the timeline included in this post, but wasn't involved with or aware of this vulnerability)
To me it feels like we owe Google a lot for what they are doing to find security vulnerabilities. Did companies do stuff like this before Google's Project Zero?
We benefit a lot from Google doing this, for sure. Do we owe them a lot? For that to be true, it'd have to be altruistically motivated, and the value delivered would have to be more than Google derives from the open-source community / security community in general.
I understand the value derived vs. provided, but I disagree about it having to be altruistic. Someone donating large amounts of money to charity just to write it off on their taxes isn't doing so altruistically, but is still doing a lot of good. I'd say we still owe them gratitude - or at the very least, the people they're helping do.
Sort of... most companies that fund this type of research do it for product development (IDS/IPS-related stuff), or as vulnerability development (for sale as parts of exploit packs, or for use in engagements). This is a gross overgeneralization, but like everything else related to the disclosure and research field, there is a lot of history and drama involved.
oof.
from the guys who brought you "the spectre patch in the kernel thats disabled by default" and "ignore your doubts, hyperthreading is still safe" comes "the incredible patch built solely around shareholder confidence and breakroom communication"
EDIT: spectre, not meltdown. oops.
https://www.theregister.com/2018/11/20/linux_kernel_spectre_...
> "the meltdown patch in the kernel thats disabled by default"
I'm not sure what you're referring to here.
I was one of the people who worked on the Linux PTI implementation that mitigated Meltdown. My memory is not perfect, but I honestly don't know what you're referring to.
I'm guessing they're referring to Linus' ranting on Intel here: https://lore.kernel.org/lkml/CA+55aFwOkH8RH12Dzs=hT3e7eS3Ckz...
> The whole IBRS_ALL feature to me very clearly says "Intel is not serious about this, we'll have a ugly hack that will be so expensive that we don't want to enable it by default, because that would look bad in benchmarks".
It seems odd that they would have notified Intel rather than the security team, given how poorly Intel has handled disclosures in the past... It's good that they do note that "The Linux Kernel Security team should have been notified in order to facilitate coordination, and any future vulnerabilities of this type will also be reported to them"
Thank the lord for distros like Fedora. They deal with security and other issues, and are big enough that if Intel tried to sneak something past, almost assuredly Intel's engineers working on Red Hat would have noticed something.
I wonder what effect self selection bias has in people who end up writing hand crafted complex parsing code in C for untrusted data in ring 0. You either have to believe that it's doable to get right or that it doesn't matter much if you don't, or "it's not my worry".
// quick hack, fix later
{ .... }
Not a valid C comment.
Double slash single-line comments are valid C99 and later.
There are plenty of tools designed to help you write parsers that compile down to C. Alternatively there is the microkernel approach. Either one (or both) would satisfy GP's implication that hand-written C parsers in ring 0 are bad.
Well, Android's Bluetooth stack is being rewritten in Rust so you probably can go pretty far in memory-safe languages (though of course Rust isn't a managed language, just a safe one).
Dumb question (not a kernel or C developer): can't you call into code compiled from a memory safe language, like in a shared object file?
Yeah, and we're slowly slowly moving towards that model. One barrier is that some safe languages, like Rust, support far fewer target platforms than the Linux kernel does. There are C compilers for everything, but only a few Rust backends.
Something similar to Wuffs[0] (posted on HN very recently), which compiles down to C, might be a good compromise between portability and safe languages. (There may be some contorted way to have Rust emit C, too.)
Definitely, for example check how to make Go .so libraries.
https://medium.com/swlh/build-and-use-go-packages-as-c-libra...
Naturally the runtime also comes along, but that is another matter.
Do you know C.A.Hoare?
I advise you to read his Turing award speech from 1981.
Indeed! Hoare wrote about his self-reflection on the failures of his own software group.
There was also failure in prediction, in estimation of program size and speed, of effort required, in planning the coordination and interaction of programs, in providing an early warning that things were going wrong. There were faults in our control of program changes, documentation, liaison with other departments, with our management, and with our customers. We failed in giving clear and stable definitions of the responsibilities of individual programmers and project leaders--Oh, need I go on?
What was amazing was that a large team of highly intelligent programmers could labor so hard and so long on such an unpromising project. You know, you shouldn't trust us intelligent programmers. We can think up such good arguments for convincing ourselves and each other of the utterly absurd. Especially don't believe us when we promise to repeat an earlier success, only bigger and better next time.
Yup. C bad, Rust good.
It's not like NASA sent a rover, written almost fully in C, to Mars. It's not like billions of cars and even more billions of their ECUs are written in C. It's not like the firmware of the keyboard you're writing your comment on, or even the OS/browser you're using is written in C. C bad, Rust good.
This isn't about Rust; the same problems were well known (and long suffered and complained about) before Rust came around, and there are well-known, much older engineering techniques for parsing untrusted data. It's nice that the Rust phenomenon has brought with it some new spirit of vigor and momentum to break out of the apathy, though.
Re Rover code and ECUs - this is the difference between safety critical code and security critical code at the attack surface.
The first kind deals primarily with "don't keel over or go crazy when natural phenomena throw unexpected circumstances at you"; the second deals with inputs crafted by intelligent adversaries who can see your code and test and iterate attacks to exploit any flaws they uncover through analysis or experimentation against your implementation. (Of course, if we nitpick, an intelligent attacker is a natural phenomenon.)
> even the OS/browser you're using is written in C
Those have tons of exploitable bugs!!
NASA rovers and car ECUs have few people looking to exploit them, so I'm not overly convinced they're exploit-free either.
I'm not a Rust evangelist or even a user, but the current paradigm of "THIS TIME, we'll write safe, complex, performant C/C++ code properly" isn't the solution, nor is manually squashing bugs one by one.
The solution seems to be a combination of improving the tooling around existing C/C++ and starting new projects in safer languages when possible.
I should be safe. After the latest Ubuntu update, my Bluetooth refuses to connect.
As they say, bluetooth in linux keeps only the honest people out.
Bluetooth on Linux is very simple: it doesn't work.
That quip was originally about something else, I think.
At least it's not display drivers anymore
The write-up doesn't appear to have any author details, but the main page [1] credits Andy Nguyen.
[1] https://google.github.io/security-research/pocs/linux/bleedi...
This might be a pretty naive question, but: in a hypothetical world where the vast majority of systems programming is done in "memory safe" langs, what would most vulnerabilities look like? How much safer would networked systems be, in broad strokes?
A related post from Google Security Blog[0]:
> "A recent study[1] found that "~70% of the vulnerabilities addressed through a security update each year continue to be memory safety issues." Another analysis on security issues in the ubiquitous `curl` command line tool showed that 53 out of 95 bugs would have been completely prevented by using a memory-safe language. [...]"
[0]: https://security.googleblog.com/2021/02/mitigating-memory-sa...
[1]: https://github.com/Microsoft/MSRC-Security-Research/blob/mas...
Likely we'll have fewer 'OS-level' pwns, but to be fair these aren't really the most exploited class of vulnerabilities today anyway. I'm just as effective doing a SQL injection and stealing your clients' PII whether or not your Bluetooth stack is written in a language that makes some memory-corruption exploits infeasible, and that's the actual goal of most attacks.
You're going to get owned in the future by people obtaining creds to important stuff (say, AWS creds) and by crappy userspace applications. We can hope that OS security continues to improve, but even if it becomes bulletproof, the story is far from over while our apps are all piles of garbage.
At least, that's what I reckon.
Of course, proper escaping/parameterization can be enforced by a good-quality library as well. So hopefully we will see fewer SQL injections in the future if these safer libraries become the default.
Web development is done mostly using "memory safe" languages and we can see that it is far from being secure. The list looks like: https://owasp.org/www-project-top-ten/
Which is not to say that "memory safety" is not a significant issue in C/C++. I wonder why Wuffs [1] is rarely used in C projects to parse untrusted data, given that it can be translated to C.
Just adding slices to C would kill a very large proportion of bugs, but there are diminishing returns after a certain amount of safety, because you start to run out of dangerous code and into plain bad code (e.g. you forgot to check the password entirely). You can still catch the latter type of bug using type systems and formal verification, but it's not easy, whereas catching memory-safety bugs, even using a sanitizer on top of regular C code, is extremely well-trodden ground now.
Sometimes I wonder if driver support such as Wi-Fi on FreeBSD is terrible by design. That OS has almost no attack surface.
OpenBSD 5.6 (~6 years ago) removed the Bluetooth stack altogether, due to security/maintainability concerns, so yeah, pretty much.
FreeBSD users sit back and laugh. Bluetooth? Wifi? Hah!
FreeBSD has Bluetooth in netgraph last time I looked. It's not compatible with much.
And wifi. I have a little home server running FreeBSD, with an intel skylake processor from a few years ago. Wifi works out of the box. I haven't tried Bluetooth but for my hardware, driver support has been fantastic.
I'm curious how the "BadChoice" vulnerability did not get picked up by a static analyzer. Only initializing part of a structure should be very easy to catch.