A lot of these comments are just manifestations of the kneejerk HN "crypto bad" reflex. Here's the deal:
- Whether or not Signal's server is open source has nothing to do with security. Signal's security rests on the user's knowledge that the open source client is encrypting messages end to end. With that knowledge, the server code could be anything, and Signal Inc. would still not be able to read your messages. In fact, having the server code open source adds absolutely nothing to this security model, because no matter how open source and secure the server code might be, Signal Inc. could still be logging messages upstream of it. The security rests only upon the open source client code. The server is completely orthogonal to security.
- Signal's decision to keep early development of the MobileCoin feature set private was valid. Signal is not your weekend node.js module with two stars on Github. When changes get made to the repo, they will be noticed. This might mess up their marketing plan, especially if they weren't even sure whether they were going to end up going live with the feature. Signal is playing in the big leagues, competing with messengers which have billions of dollars in marketing budget, will never ever be even the smallest amount open source, and are selling all your messages to the highest bidder. They can't afford to handicap themselves just to keep some guys on Hacker News happy.
- Signal's decision to keep development on the (private) master branch, instead of splitting the MobileCoin integration into a long-running feature branch, is a valid choice. It's a lot of work to keep a feature branch up to date over years, and to split every feature into public and non-public components which then get committed to separate branches. This would greatly affect their architecture and slow down shipping for no benefit, given that, as argued above, whether the server is open source is orthogonal to security.
> Whether or not Signal's server is open source has nothing to do with security
This is true only when you are exclusively concerned about your messages' content but not about the metadata. As we all know, though, the metadata is the valuable stuff.
There is a second reason it is wrong, though: These days, lots of actual user data (i.e. != metadata) gets uploaded to the Signal servers[0] and encrypted with the user's Signal PIN (modulo some key derivation function). Unfortunately, many users choose an insecure PIN, not a passphrase with lots of entropy, so the derived encryption key isn't particularly strong. (IMO it doesn't help that it's called a PIN. They should rather call it "ultra-secure master passphrase".) This is where a technology called Intel SGX comes into play: It provides remote attestation that the code running on the servers is the real deal, i.e. the trusted and verified code, and not the code with the added backdoor. So yes, the server code does need to be published and verified.
Finally, let's not forget the fact that SGX doesn't seem particularly secure, either[1], so it's even more important that the Signal developers be open about the server code.
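To make the weak-PIN point above concrete, here is a rough sketch of what deriving an encryption key from a PIN looks like, and why a short PIN is trivially brute-forceable once an attacker has the salt and the ciphertext. The KDF and parameters (PBKDF2, 100,000 iterations) are purely illustrative, not a claim about Signal's actual choices:

    import hashlib, itertools, os, time

    def derive_key(pin: str, salt: bytes) -> bytes:
        # Stretch the PIN with a deliberately slow KDF. (Illustrative only;
        # Signal's actual KDF and parameters may differ.)
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

    salt = os.urandom(16)
    key = derive_key("1234", salt)  # a typical short, low-entropy PIN

    # A 4-digit PIN has only 10,000 possible values, so an attacker who gets
    # hold of the salt and the encrypted backup can simply try them all:
    start = time.time()
    for guess in ("".join(d) for d in itertools.product("0123456789", repeat=4)):
        if derive_key(guess, salt) == key:
            print(f"recovered PIN {guess} in {time.time() - start:.1f}s")
            break

The slow KDF only multiplies the cost per guess; it cannot add entropy the PIN never had, which is exactly why the rest of the secret has to live behind SGX.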
[0]: https://signal.org/blog/secure-value-recovery/
[1]: https://blog.cryptographyengineering.com/2020/07/10/a-few-th...
Addendum: Out of pure interest I just did a deep dive into the Signal-Android repository and tried to figure out where exactly the SGX remote attestation happens. I figured that somewhere in the app there should be a hash or something of the code running on the servers.
Unfortunately, `rg -i SGX` only yielded the following two pieces of code:
https://github.com/signalapp/Signal-Android/blob/master/libs...
https://github.com/signalapp/Signal-Android/blob/master/libs...
No immediate sign of a fixed hash. Instead, it looks like the code only verifies the certificate chain of some signature? How does this help if we want to verify the server is running a specific version of the code and we cannot trust the certificate issuer (whether it's Intel or Signal)?
I'm probably (hopefully) wrong here, so maybe someone else who's more familiar with the code could chime in here and explain this to me? :)
The hash of the code that is running in the enclave is called "MRENCLAVE" in SGX.
During remote attestation, the prover (here, Signal's server) creates a "quote" that proves it is running a genuine enclave. The quote also includes the MRENCLAVE value.
It sends the quote to the verifier (here, Signal-Android), which in turn sends it to the Intel Attestation Service (IAS). IAS verifies the quote, then signs the content of the quote, thus signing the MRENCLAVE value. The digital signature is sent back to the verifier.
Assuming that the verifier trusts IAS's public key (e.g., through a certificate), it can verify the digital signature and thus trust that the MRENCLAVE value is valid.
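Stripped of all the protocol details, the client-side check boils down to something like this sketch (names and structure are made up for illustration; the real Java code is linked below):

    import hmac

    # Hypothetical sketch of what the verifier ultimately has to check.
    # EXPECTED_MRENCLAVE would be baked into the client and must equal the
    # measurement of the audited, published enclave build (placeholder here).
    EXPECTED_MRENCLAVE = bytes(32)

    def attestation_ok(quote_mrenclave: bytes, ias_signature_valid: bool) -> bool:
        # 1) The IAS signature over the quote must verify, proving the quote
        #    came from a genuine SGX enclave on a genuine Intel CPU.
        if not ias_signature_valid:
            return False
        # 2) The MRENCLAVE inside the quote must match the expected hash,
        #    proving the enclave runs the audited code, not a backdoored build.
        return hmac.compare_digest(quote_mrenclave, EXPECTED_MRENCLAVE)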
The code where the verifier is verifying the IAS signature is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
The code where the MRENCLAVE value is checked is here: https://github.com/signalapp/Signal-Android/blob/6ddfbcb9451...
Hope this helps!
Addendum to the addendum: Whether there's a fixed hash inside the Signal app or not, here's one thing that crossed my mind last night that I have yet to understand:
Let's say we have a Signal-Android client C, and the Signal developers are running two Signal servers, A and B.
Suppose server A is running a publicly verified version of Signal-Server inside an SGX enclave, i.e. the source code is available on GitHub and has been audited, and server B is a rogue server, running a version of Signal-Server that comes with a backdoor. Server B is not running inside an SGX enclave but since it was set up by the Signal developers (or they were forced to do so) it does have the Signal TLS certificates needed to impersonate a legitimate Signal server (leaving aside SGX for a second). To simplify things, let's assume both servers' IPs are hard-coded in the Signal app and the client simply picks one at random.
Now suppose C connects to B to store its c2 value[0] and expects the server to return a remote attestation signature along with the response. What is stopping server B then from forwarding the client's request to A (in its original, encrypted and signed form), taking A's response (including the remote attestation signature) and sending it back to C? That way, server B could get its hands on the crucial secret value c2 and, as a consequence, later on brute-force the client's Signal PIN, without C ever noticing that B is not running the verified version of Signal-Server.
What am I missing here?
Obviously, Signal's cloud infrastructure is much more complicated than that, see [0], so the above example has to be adapted accordingly. In particular, according to the blog post, clients do remote attestation with certain "frontend servers"; behind those sit a number of Raft nodes, which all do remote attestation with one another. So the real-life scenario would be a bit more complicated, but I wanted to keep it simple. The point, in any case, is this: Since the Signal developers are in possession of all relevant TLS certificates and are also in control of the infrastructure, they can always MITM any of their legitimate endpoints (where the incoming TLS requests from clients get decrypted) and put a rogue server in between.
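To make the worry concrete, here is a minimal sketch of the relay I have in mind. Everything here is hypothetical pseudocode, not Signal's wire protocol; the point is only that B never has to break any cryptography, it just forwards bytes:

    def honest_server_a(request: bytes) -> bytes:
        # Runs the audited enclave and returns a response that carries a
        # perfectly valid remote-attestation signature.
        return b"response + valid attestation"

    def rogue_server_b(request_plaintext: bytes) -> bytes:
        # B terminated the client's TLS itself (it holds Signal certificates),
        # so it already sees the plaintext request, e.g. the c2 upload.
        captured_c2 = request_plaintext
        print("B captured:", captured_c2)
        # Forward the request verbatim to A and hand A's attested response
        # back to the client. The attestation is genuine, so the client is
        # satisfied, yet B now holds c2 for a later offline brute force,
        # unless the attestation is bound to the client's own session or c2
        # is encrypted to a key that only ever exists inside the enclave.
        return honest_server_a(captured_c2)

    print(rogue_server_b(b"c2 = <256-bit secret>"))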
One possible way out might be to generate the TLS keys inside the SGX enclave, extract the public key through some public interface while keeping the private key in the encrypted RAM. This way, the public key can still be baked into the client apps but the private key cannot be used for attacks like the one above. However, for this the clients would once again need to know the code running on the servers and do remote attestation, which brings us back to my previous question: where in Signal-Android is that hash of the server code[1]?
[0]: https://signal.org/blog/secure-value-recovery/
[1]: More precisely, the code of the frontend enclave, since the blog post[0] states that it's the frontend servers that clients do the TLS handshake with:
> We also wanted to offload the client handshake and request validation process to stateless frontend enclaves that are designed to be disposable.
> These days, lots of actual user data (i.e. != metadata) gets uploaded to the Signal servers[0] and encrypted with the user's Signal PIN (modulo some key derivation function). Unfortunately, many users choose an insecure PIN, not a passphrase with lots of entropy, so the derived encryption key isn't particularly strong.
If I understand what you are saying and what Signal says, Signal anticipates this problem and provides a solution that is arguably optimal:
https://signal.org/blog/secure-value-recovery/
My (limited) understanding is that the master key consists of the user PIN plus c2, a 256-bit code generated by a secure RNG, and that the Signal client uses a key derivation function to maximize the master key's entropy. c2 is stored in SGX on Signal's servers. If the user PIN is sufficiently secure, c2's security won't matter - an attacker with c2 still can't bypass the PIN. If the PIN is not sufficiently secure, as often happens, c2 stored in SGX might be the most secure way to augment it while still making the data recoverable.
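As a rough illustration of the idea (my own sketch; the blog post's actual construction differs in the choice of KDF and in how the two halves are combined):

    import hashlib, hmac, os

    def stretch_pin(pin: str, salt: bytes) -> bytes:
        # Slow KDF over the user's PIN (algorithm and parameters illustrative).
        return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

    salt = os.urandom(16)
    c1 = stretch_pin("1234", salt)  # derived from the (possibly weak) PIN
    c2 = os.urandom(32)             # 256-bit secret, held only inside SGX

    # The master key depends on both halves: without c2, an attacker cannot
    # even begin an offline PIN-guessing attack, and without the PIN, c2
    # alone reveals nothing.
    master_key = hmac.new(c1, c2, hashlib.sha256).digest()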
I'd love to hear from a security specialist regarding this scheme. I'm not one and I had only limited time to study the link above.
> If I understand what you are saying and what Signal says, Signal anticipates this problem and provides a solution that is arguably optimal
Yep, this is what I meant when I said "This is where a technology called Intel SGX comes into play". :)
And you're right, SGX is better than nothing if you accept that people use insecure PINs. My argument mainly was that
- the UI is designed in the worst possible way and actually encourages people to choose a short insecure PIN instead of recommending a longer one. This means that security guarantees suddenly rest entirely on SGX.
- SGX requires the server code to be verified and published (which it wasn't until yesterday). Without verification, it's all pointless.
> uses a key derivation function to maximize the master key's entropy
Nitpick: Technically, the KDF is deterministic, so it cannot change the entropy and, as the article says, you could still brute-force short PINs (if it weren't for SGX).
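Back-of-the-envelope numbers for why the PIN itself is the bottleneck (the passphrase figure assumes the common 7,776-word diceware list; illustrative only):

    import math

    # The size of the search space is fixed by the secret, not by the KDF:
    pin_bits = math.log2(10 ** 4)       # 4-digit PIN       -> ~13.3 bits
    phrase_bits = math.log2(7776 ** 6)  # 6 diceware words  -> ~77.5 bits
    print(round(pin_bits, 1), round(phrase_bits, 1))

    # A deterministic KDF maps each of the 10,000 possible PINs to exactly
    # one key, so an attacker still has only 10,000 candidates; stretching
    # just makes each individual guess slower.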
> I'd love to hear from a security specialist regarding this scheme. I'm not one and I had only limited time to study the link above.
Have a look at link [1] in my previous comment. :)
SGX is just the processor pinky swearing (signed with Intel keys) that everything is totally legit. Nation State Adversaries can and will take Intel's keys and lie.
SGX is also supposed to protect against Signal as a potential adversary, though, as well as against hackers. Or at least that's how I understood the blog article.
Focussing on whether the changes directly make things insecure is missing the point. Fundamentally this sort of security is about trust.
While it's nice to try to make Signal resilient to attacks by the core team, there just aren't enough community-minded, independent volunteer code reviewers to reliably catch them out. I doubt the Signal Foundation gets any significant volunteer effort, even from programmers who aren't security experts.
That means I need to decide if I trust the Signal Foundation. Shilling sketchy cryptocurrencies is indicative of loose morals, which makes me think I was wrong to trust them in the past.
> Shilling sketchy cryptocurrencies is indicative of loose morals, which makes me think I was wrong to trust them in the past.
Who decided it was sketchy?
The "I don't like change so I'm going to piss all over you" attitude is what sinks a lot good things.
How does Signal benefit from being a shill for this coin? Are they being paid by MOB or do they get a % of the cut?
So far all I've read are people screaming their heads off that MOB eats babies and how dare Signal stoop so low as to even fart in their general direction, but I have yet to see anyone explain why MOB is bad or how Signal is bad for giving MOB a platform.
Part of the problem is that at the moment any government trying to force Signal to break the e2e security model is clearly interfering with speech.
By incorporating cryptocurrency/payments, governments are being handed a massive lever to force Signal to comply with the financial monitoring requirements that governments have in place.
This has a negative impact on those of us who just wanted a secure communications platform.
> How does Signal benefit from being a shill for this coin? Are they being paid by MOB or do they get a % of the cut?
The CEO of Signal Messenger LLC was/is the CTO of MOB.
See https://www.reddit.com/r/signal/comments/mm6nad/bought_mobil... and https://www.wired.com/story/signal-mobilecoin-payments-messa...
Yea, I'm bearish on cryptocurrencies, but I think Moxie and his team have built up an incredible amount of goodwill in my book. Enough for me to hear out their solution before making a decision. I'm assuming they didn't write dogecoin2 or even a bitcoin clone. It will be interesting to learn about it.
You're apologizing for a project that has repeatedly damaged user trust with excuses.
These are "valid" reasons for keeping the source code private for a year? By whose book? Yours? Certainly not by mine. I wouldn't let any other business abscond from its promise to keep open source open source in spirit and practice, why would I let Signal?
This is some underhanded, sneaky maneuvering I'm more used to seeing from the Amazons and the Facebooks of the world. These are not the actions of an ethically Good organization. And as has already been demonstrated by Moxie in his lust for power, he's more than capable of devious behavior. On Wire vs Signal: "He claimed that we had copied his work and demanded that we either recreate it without looking at his code, or take a license from him and add his copyright header to our code. We explained that we have not copied his work. His behavior was concerning and went beyond a reasonable business exchange - he claimed to have recorded a phone call with me without my knowledge or consent, and he threatened to go public with information about alleged vulnerabilities in Wire's implementation that he refused to identify." [1]
These are not the machinations of the crypto-idealist, scrappy underdog for justice we are painted by such publications as the New Yorker. This is straight-up cartoon-villain, moustache-twirling plotting.
So now I'm being sold on a business vision that was just so hot the public's eyes couldn't bear it? We're talking about a pre-mined cryptocurrency whose inventors are laughing all the way to the bank.
At least Pavel Durov of Telegram is honest with his users. At least we have Element doing their work in the open for all to see with the Matrix protocol. There are better, more ethical, less shady organizations out there who we can and ought to be putting our trust in, not this freakshow of a morally compromised shambles.
[1] https://medium.com/@wireapp/axolotl-and-proteus-788519b186a7
Thanks for linking this, I had no idea this occurred.
Repeatedly? This is the first I'm aware of, what are the others?
> - Whether or not Signal's server is open source has nothing to do with security. [...] having the server code open source adds absolutely nothing to this security model, [...] The security rests only upon the open source client code. The server is completely orthogonal to security.
The issue a lot of people have with Signal is that your definition here of where security comes from is an extremely narrow & technical one, and many would rather look at security in a more holistic manner.
The problem with messaging security is that there are two ends, and individually we only control one of them. Granted, screenshotting and leaking your messages will always be a concern no matter what technology we develop, but the other challenge is just getting the other end to use Signal in the first place, and that's governed by the network effect of competitors.
Open Source is essential for security because one of the most fundamental security features we can possibly hope to gain is platform mobility. Signal doesn't offer any. If Signal gains mass adoption and the server changes, we're right back to our current security challenge: getting your contacts onto the new secure thing.
You're redefining the word "security" here to an incredibly expansive definition which includes all kinds of details about the ability for someone else to set up an interoperable service.
Yup. Security is hard.
But the server code is out there now, so we have this mobility, no?
Yes and no.
Signal is not actually designed with mobility in mind (in fact I would argue, based on Moxie's 36C3 talks, that it was designed to be, and is persistently kept, anti-mobility). That fact is independent of it being open- or closed-source.
However, if the server is open-source, it opens the door for future mobility in the event of org change. If it's closed-source, you get what's currently happening with WhatsApp.
In actuality, if we had something federated, with mobility pre-baked in, having a closed-source server would be less of a security risk (the GP's point about only needing to trust the client would apply more strongly, since mobility removes the server maintainers' power to force changes).
Basically:
- with multi-server clients (e.g. Matrix/OMEMO), you have no dependency on any single org's server, so their being open source is less relevant (provided the protocol remains open; this can still go wrong, e.g. with GChat/FB Messenger's use of XMPP).
- with single-server clients (Telegram/WhatsApp/Signal), you are dependent on a single server, so that server being open-source is important to ensure the community can make changes in the event of org change.
Until they decide to go silent for another 11 months
So it took close to a year for them to dump thousands of private commits into the public repo! Is there an official response as to why they stopped sharing the code for so long and, more importantly, why they started sharing it publicly again? Who gains what with the publication now? And seriously, why is it even relevant anymore?
The first commit that they omitted in April 2020 is related to the payment feature they just announced. So the two events coinciding (server code being published and payment feature being announced) might not have been a coincidence. They apparently didn't want to bother creating a private test server running a private fork of the server code and just pushed their experiments to production, while not releasing the source code so that people couldn't see the feature before an official announcement. They must have built private test client apps, because I couldn't find any old commit mentioning payments in the client app's git log.
This leaves a very bad taste in my mouth. Unclear how much practical damage this caused (how many security analysts are using the Signal server source to look for vulns?) but this is damaging to the project's claims of transparency and trustworthiness.
It's quite clear that this crypto integration provides a perverse incentive for the project that points in the opposite direction of security.
Forgive me if this is a stupid question, but how exactly is that the case?
It's been damaging to their claims of transparency for almost a year now, if anything this should be the first step in repairing that slight. How is dumping a year's worth of private work into your public repo somehow doing damage to their trustworthiness?
The server being or not being secure is only important to the people who operate it. You can examine the client code and see that your messages are encrypted end to end. Signal's entire security model revolves around the idea that you don't need to trust the server.
It was called out as recently as 4 weeks ago [0] and was voted to the front page, but then downweighted, possibly incorrectly, by mods (maybe because the top comment was dismissive of the concerns raised [1]?) before a discussion could flourish.
cc: @dang
[0] https://news.ycombinator.com/item?id=26345937
[1] The title is the only thing worth reading in this pile of speculation and hand waving.
Here's a response by MobileCoin folks:
> Signal had to verify that MobileCoin worked before exposing their users to the technology. That process took a long time because MobileCoin has lots of complicated moving parts.
> With respect to price, no one truly understands the market. It's impossible to predict future price.
- https://twitter.com/mobilecoin/status/1379830618876338179
Reeks of utter BS. As the reply on this tweet says, features can be developed while being kept switched off with a flag.
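For reference, the kind of flag that reply is talking about is about as simple as this (a hypothetical sketch, not Signal's actual code or endpoint names):

    import os

    # Hypothetical server-side flag: the payments code could live in the
    # public repo but stay dormant until the flag is flipped at launch.
    PAYMENTS_ENABLED = os.environ.get("FEATURE_PAYMENTS", "false") == "true"

    def handle_request(path: str):
        if path.startswith("/v1/payments") and not PAYMENTS_ENABLED:
            return 404  # feature invisible to clients until the announcement
        ...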
> features can be developed while being kept switched off with a flag
But maybe you don't want everyone to know about all the features / announcements months in advance?
They already did this development privately. I don't think anyone has a problem with building out a new feature before it's announced. The problem people have, IMO understandably, is that they pushed this code to production servers instead of testing it privately.
> Is there an official response as to why they stopped sharing the code for so long
Not officially, but see https://news.ycombinator.com/item?id=26725117. They stopped publishing code when they started on the cryptocurrency integration.
Better question yet: did we ever get a full post-mortem of the six-day outage the service had, other than hand-waving statements about user subscriptions? What fixes were made or lessons learned?
The Signal outage was SIX DAYS?
(All the news I'm finding is that it was just one day.)
no it wasn't
"Signal Server code on GitHub is up to date again - now with a freshly added shitcoin!"
The addition of micropayments to Signal is discussed separately at https://news.ycombinator.com/item?id=26724237
It's obviously related. The implication is that they pushed code to Github just to gain public trust that can be leveraged to market their cryptocurrency.
I think you're partly correct. I suspect they didn't want to go public with the shitcoin until it was done.
Is it though? We're already 6 days without a commit. Who's to say the history isn't frozen again until the next major release?
If you have a PhD, you might be able to verify from the client side that the server does not matter. If you are into blockchain, there might be another (but very expensive) way to show a system can be trusted.
For normal development, I am advocating an always-auditable runtime that runs only public source code by design: https://observablehq.com/@endpointservices/serverless-cells
Before sending data to a URL, you can look up the source code first, as the URL encodes the source location.
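As a sketch of what that client-side check could look like (the URL layout here is made up for illustration; the real scheme is described in the notebook linked above):

    from urllib.parse import urlparse

    # Hypothetical: if the endpoint URL embeds the notebook that implements
    # it, a client can locate and read that notebook's source before sending
    # any data to the endpoint.
    def source_url_for(endpoint: str) -> str:
        user, notebook = urlparse(endpoint).path.strip("/").split("/")[:2]
        return f"https://observablehq.com/{user}/{notebook}"

    print(source_url_for("https://runtime.example.com/@endpointservices/my-service/handler"))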
There is always the risk I decided to embed a trojan in the runtime (despite it being open source). However, if I am a service provider for 100k customers built upon the idea of a transparent cloud, then compromising the trust of one customer would cause loss of business across all customers. Thus, from a game-theoretic perspective, our incentives should align.
I think running public source code, which does not preclude injecting secrets and keeping data private, is something that normal development teams can do. No PhDs necessary, just normal development.
Follow me on https://twitter.com/tomlarkworthy if you want to see this different way of approaching privacy: always-auditable, source-available server-side implementations. You can trust that services implemented this way are safe, because you can always see how they process data. Even if you cannot be bothered to audit their source, the sheer fact that someone can inoculates you against bad-faith implementations.
I am building a transparent cloud. Everything is encoded in public notebooks and runs open-source https://observablehq.com/collection/@endpointservices/servic... There are other benefits, like being able to fork my implementations and customize, but primarily I am doing this for trust through transparency reasons.
How do you prove the endpoint is running the code to which it links?
Simple but not 100% foolproof: you can mutate your source code and verify the changes propagate.
Note the endpoint does a DYNAMIC lookup of source code. So you can kinda reassure yourself the endpoint is executing dynamic code just by providing your own source code.
It might be more obvious that the runtime does nothing much if you look at the runtime itself: https://github.com/endpointservices/serverlesscells
The clever bits that actually implement services are all in the notebooks.
> Simple but not 100% foolproof: you can mutate your source code and verify the changes propagate.
If I was evil, I wouldn't have a totally separate source tree and binary that I shipped; I'd have my CI process inject a patch file. As a result, everything would work as expected - including getting any changes from the public source code - but the created binaries would be backdoored.
That doesn't seem to provide any meaningful indication the endpoint runs the code it claims. Can't I just create an evil endpoint that links to legit code?
Nice. If only they could be so kind, after all this time, as to provide instructions on how to run it. I don't get why they've been risking their reputation on Hacker News like this by not releasing the sources until there were a bunch of front-page posts about it.
If it looks like a duck, and quacks like a duck, it's probably a duck.
I read some speculation that the delay was to keep this objectionable crypto payment development under wraps until they were ready to launch.
Yep. I posted this on a different Signal HN submission, but the very next commit on April 22nd, 2020 was when they first began working on the integration.
https://github.com/signalapp/Signal-Server/commit/95f0ce1816...
Oh wow. That's incredibly suspicious...
It could just be an arguably-legitimate desire to keep the hot new feature secret until the big announcement; this particular bit is... sub-optimal... but it doesn't seem like it needs to be nefarious.
This might be a legitimate reason to keep the source code non-public temporarily. However, the communication strategy by Signal about this was horrible (or rather non-existent).
People in the user forum (https://community.signalusers.org/t/where-is-new-signal-serv...) and in other places on the internet were upset for months because the server code wasn't being updated anymore. At the same time, Signal regularly tweeted that "all they do is 100% open source", even at a point when no source code had been released for almost a year.
Just 2 days ago this was getting picked up by some larger tech news platforms:
https://www.golem.de/news/crypto-messenger-signal-server-nic...
https://www.androidpolice.com/2021/04/06/it-looks-like-signa...
It's normal that Signal ignores its users, but apparently they didn't even reply to press inquiries about the source code. All it would have taken is a clear statement like "we're working on a cool new feature and will release the sources once that's ready, please bear with us". Instead, they left people speculating for months.
This communication strategy, combined with the cryptocurrency announcement, may cause serious harm to Signal's reputation.
The devious aspect of this is that nobody knew the development was happening, so we couldn't invest even if we wanted to.
It was kept under wraps for a grade A pump.
OTOH, announcing this development semi-privately on GitHub but not to the public at large (including the current MobileCoin owners) could be considered insider trading, which is a criminal offense in the US.
Which is probably why they're avoiding the SEC at all costs.
Kinda crazy that the Signal team doesn't GPG-sign their commits.
An example of how irrational hate can make smart people do stupid things, unfortunately.
Certain people on their team don't like the PGP standard despite the fact that it is mature, standardized, and proven to work well for code signing. When questioned about their reasoning, they'd usually deflect and criticize some aspect of PGP that is irrelevant to code signing.
In their minds, they believe it is better to rely on git's broken SHA1 fingerprints than to use PGP.