Hacker News
3 years ago by technion

We've got some information on the timeline (and a name) on one of the major exploits here:

https://proxylogon.com/

Some of the detail on where this is a mess -

The relevant security update is only offered for the latest (-1) Cumulative Update for Exchange. So you can open Windows Update and it will say "fully updated and secured", but you're not. Complicating matters, Cumulative Updates for Exchange 2019 have to be done from the licensing portal, with a valid logon.

So maybe you have a perfectly capable 24x7 tech team, but the guy who manages license acquisition is on leave today. This is how you may basically find yourself resorting to piracy to get this patched.

3 years ago by bombcar

Reminiscent of Cisco IOS patches being stuck behind support contracts - and inaccessible to many until they pony up.

3 years ago by oasisbob

It's been a while since I've had to deal with Cisco IOS, but IIRC they were always good about releasing security fixes to anyone upon a TAC request.

For used devices off support contract, security incidents were a great opportunity to get free updates.

3 years ago by sneak

IIRC you needed a contract login to open TAC cases.

I always just got copies of the .bins from friends who worked at places that had contracts. They didn't gate updates at that time by which model you bought; once you had access you could get firmware for anything Cisco.

3 years ago by gbil

I can confirm that for personal devices with no support contract. You contacted them asking for an update image due to published vulnerabilities and they sent it over.

3 years ago by throwawayboise

> the guy who manages license acquisition is on leave today

At that point, if you really have no other options, you pull the network plug. Or firewall it to internal-only. Email can wait for a day. And the nice thing about the protocol is that it will all get re-sent automatically.

3 years ago by jsilence

Try telling the whole company email can wait a day. Good luck!

3 years ago by hansel_der

this! i cannot fathom any executive choosing the shutdown of email services over some risk that something might happen.

3 years ago by posguy

This depends on the sender's mailserver caching the mail for a day (or a full weekend) without rejecting it. Some mailservers will kick back mail much sooner.

3 years ago by upofadown

Such servers are then not compliant with the standard, which calls for 4-5 days. See RFC 5321 sec. 4.5.4.1.

Are non-standard retry intervals actually that common?
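
For reference, the RFC's minimums sketched as a toy model in Python (interval and give-up time are the RFC's floors; real MTAs use their own schedules):

    from datetime import datetime, timedelta

    RETRY_INTERVAL = timedelta(minutes=30)  # RFC 5321: SHOULD be at least 30 minutes
    GIVE_UP_AFTER = timedelta(days=5)       # RFC 5321: give-up time generally 4-5 days

    def retry_schedule(first_attempt):
        """Yield delivery attempt times until the sender gives up and bounces."""
        deadline = first_attempt + GIVE_UP_AFTER
        t = first_attempt
        while t <= deadline:
            yield t
            t += RETRY_INTERVAL

    attempts = list(retry_schedule(datetime(2021, 3, 2, 9, 0)))
    print(len(attempts))  # 241 attempts spread over 5 days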

3 years ago by pfortuny

But Exchange is much more than e-mail, is it not?

3 years ago by mattacular

Calendar and contacts

3 years ago by indigodaddy

Shouldn't you have, or should I say don't most orgs have, a spam filter or some other gateway in front of Exchange that actually accepts the mail publicly? And then that gateway will send internally to the actual Exchange? This is what I've seen in a few orgs.

3 years ago by unethical_ban

I don't think email proxies are built to cache an entire org's mail messages for that long.

3 years ago by ocdtrekkie

Huh, this appears to be a change they made for 2019... the downloads for 2016 CUs, including the latest ones, are available publicly: https://www.microsoft.com/en-us/download/details.aspx?id=102...

3 years ago by phatfish

Yeah, this is not the case for 2013/2016, although Exchange CUs are a full installer you can run a fresh install from. Unusual for Microsoft software updates in that respect I believe, and it kind of makes sense that they would require a valid license to download.

Clearly it is less customer friendly than 2016, but then Microsoft do REALLY want that sweet recurring subscription for Office 365 (or is it Microsoft 365 now?). Can't make it too easy to host your own Exchange server these days...

3 years ago by gowld

> So maybe you have a perfectly capable 24x7 tech team,

OK

> but the guy who manages license acquisition is on leave today.

Then I wouldn't have "the" guy for anything.

3 years ago by Hendrikto

You are right. I think the people downvoting you just misunderstood the point you were making: In a "perfectly capable 24/7 tech team", you should not depend on a single individual for anything.

3 years ago by technion

Unfortunately a few comments here have homed in on one contrived example of why I think this strategy is broken. To give another contrived example: I personally had a logon to this portal, but it broke last year when they integrated logons with Azure, and it took me like three months to get it fixed.

The fact a critical security update can't just be downloaded is bad. I don't care if someone in sales thinks every licensed user should probably be able to get it. Here NCC produced a list of "valid" files to help people scan for files that aren't legit. Except they don't have Exchange 2019 CU 8 because they couldn't get it:

https://github.com/nccgroup/Cyber-Defence/tree/master/Intell...
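
Using such a list is only a few lines - a rough sketch, assuming the published hashes are SHA-256 saved as a JSON array (paths and file names here are mine, not NCC's):

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        """Hex digest of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scan(install_dir, known_good_file):
        """Flag any .aspx file whose digest isn't on the known-good list."""
        known_good = set(json.loads(Path(known_good_file).read_text()))
        for p in Path(install_dir).rglob("*.aspx"):
            if sha256_of(p) not in known_good:
                print(f"not on the list, investigate: {p}")

    scan(r"C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd",
         "known_good_hashes.json")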

Microsoft has a hard limit (5?) on the number of individual accounts you can grant access to, and in a big enough org it's still plausible they'll be scattered across the world and you'll find none of them available the exact hour you need this update.

3 years ago by zimpenfish

In real life, however, this kind of thing happens all the time. Someone forgot to write down the login when they left and no-one caught it in the offboarding. Or someone set up 2FA on a system but didn't put that info into 1Password / the wiki, etc.

3 years ago by Ensorceled

The failure mode of clever is "asshole." -- John Scalzi

I know you're trying to save the original comment, but that comment can legitimately be taken the way the downvoters are taking it ... that the commenter believes that guy should be fired for being away from his phone. Why legitimately? Because I've worked with people like that.

3 years ago by gostsamo

It is not the guy that's guilty, but you, for having such bad organization that one guy under the bus is sinking the entire ship.

3 years ago by hmottestad

I thought it was Microsoft that was guilty for hiding a security update in the licensing portal!

3 years ago by ericd

I think that's what they're saying - that the problem is that there is a single "the guy".

3 years ago by IntelMiner

"You took a vacation the same day that a zero day dropped. You're fired"

Yeah uh, I don't think I wanna work for you then

3 years ago by detaro

I think their point is that "the guy for a specific task" can't exist on a team that actually is "24x7 perfectly capable".

3 years ago by strken

Pretty sure the poster is talking about having one single point of failure for all license acquisition in the first place, not about firing the single point of failure.

3 years ago by rorykoehler

If your organisation has key person dependencies that's a problem in itself.

3 years ago by edrxty

Bigger picture, what's the endgame here? It seems a lot of institutions handling sensitive work are considering air-gapping some or all of their networks at this point. Maybe that's even what has to happen.

Is there a means of fending off these attacks on the political front? If this same level of espionage was happening in person, there would be a kinetic response, but it seems everyone is happy to just turn the other cheek.

These attacks have a very real impact. Copying others' homework is a tried and true way to get a technological edge, and in practical terms it means a lot of research and development money is effectively wasted as it doesn't generate any returns.

Mind, I don't think there should be a violent response, but it's odd that even the threat of sanctions isn't made whenever this happens.

3 years ago by heresie-dabord

> endgame

If you mean the strategy as the end nears, it should be what it should always have been: trust no single product or supplier, implement multiple layers of defence for what is important. Maintain in-house expertise.

If you mean the "Lessons (never) Learned"... Train developers better, build better software through validation and verification, train management to understand technology and risk. Humans become increasingly incompetent as complexity is scaled.

Everyone is doing espionage, no one is going to war because Microsoft has flaws.

3 years ago by dcow

I'm curious to hear more about cases of large institutions seriously considering air-gapping. This is the first I've got wind of something like that.

3 years ago by lrem

Yeah, runs contrary to my perception too. Even things that one would reasonably expect to be air-gapped are online these days.

3 years ago by bob1029

Air-gapped systems really only make sense for the occasional need to access exceptionally sensitive materials, e.g. private keys for root CAs.

For most businesses, air-gapping would mean we are back in the 20th century of business with filing cabinets and armies of people pushing paper between 2 rooms.

3 years ago by edrxty

It's not actually that bad. There's a lot of defense, security, and highly proprietary development that happens on isolated networks. You have to put significant effort into IT infrastructure, but you'll end up with all your stuff hosted internally, and most tools support custom package repo mirrors (Linux distros, programming languages/build systems, Docker). You'll also probably have a second system with internet access at your desk, if not nearby, for Stack Overflow et al.
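
For example, pointing pip at an internal mirror is a two-line config (mirror URL invented):

    # ~/.config/pip/pip.conf - resolve packages from the internal mirror only
    [global]
    index-url = https://pypi.mirror.internal/simple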

Basically the idea is defense in depth. The valuable stuff (design files, schematics, code, documentation) lives in the air gapped network while communications live inside a VPN and detailed technical discussion is often discouraged.

3 years ago by jffhn

Air-gapping is common in some industries, and there are also network diodes: https://en.wikipedia.org/wiki/Unidirectional_network

3 years ago by thw0rted

Keep in mind, there's actual air-gapping, and there's secure enclaves. This specific attack would have no teeth if your Exchange server / OWA endpoint were only accessible from corporate VPN. You don't have to be one of the top-ten biggest corporations to run a global-scale intranet with off-the-shelf VPN servers, and it still greatly reduces your attack surface.

3 years ago by AniseAbyss

Not true, countries accept that they spy on each other. They all do it, it's just that America are the "good guys" and its enemies don't do press conferences on how they got hacked. Also, we already have copyright and patents, so no, you can't copy-paste an iPhone.

3 years ago by hollerith

>Copying others' homework is a tried and true way to get a technological edge

The Soviets were better at spying than the West was, but their being better at copying the West than the West was at copying them didn't seem to help them all that much.

3 years ago by bob1029

We are seriously looking at strategies for clean room rebuild of our IT infrastructure, potentially on a recurring basis via automation.

Obviously, you can't mitigate 0-day exploits in any situation where reasonable/expected network access is possible. But our concern, despite not being directly impacted by this, is that we may have accumulated malware over the past decade+ that has never been discovered. How many exploits exist in the wild which have never been documented or even noticed? Do we think it's at least one?

The thinking we are getting into is: if we nuke-from-orbit and then reseed from trusted backups on a recurring basis, any malware that gets installed via some side channel would not be able to persist for as long as it traditionally would. Keeping backups pure via deterministic cryptographic schemes is far easier than running 100+ security suites across your IT stack in hopes you find something naughty. It is incredibly hard for malware to hide in a well-normalized SQL database without stored procedures or other programmatic features.
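
A minimal sketch of the kind of scheme I have in mind - record a digest manifest when the backup is taken, keep it somewhere the production network can't write, and refuse to reseed from anything that has drifted (function names are illustrative):

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        """Stream the file so large backups don't need to fit in RAM."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_manifest(backup_dir, manifest_path):
        """Run at backup time; store the manifest off the production network."""
        digests = {p.name: sha256_of(p) for p in sorted(Path(backup_dir).glob("*.dump"))}
        Path(manifest_path).write_text(json.dumps(digests, indent=2))

    def ok_to_reseed(backup_dir, manifest_path):
        """Run before every rebuild; any drift means the backup is not trusted."""
        expected = json.loads(Path(manifest_path).read_text())
        return all(sha256_of(Path(backup_dir) / name) == digest
                   for name, digest in expected.items())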

What if we built a new IT stack that was designed to be obliterated and reconstructed every 24 hours with the latest patch builds each time? Surely many businesses could tolerate 1-2 hours of downtime overnight. It certainly works for the stock market. There really isn't a reason you need to give an attacker a well-managed private island to hide on for 10+ years at a time.

3 years ago by mlac

I've thought a lot about this. I think from a tech standpoint and a security standpoint, my ideal approach would be to rotate out an A team and a B team. Every 2-3 years, the teams switch off. So in years 1-3, A team is running the environment while B team is completely rebuilding and re-architecting the organization's IT. The company is migrated to B team's infrastructure for 3 years.

A team gets to re-build while B team is running, and the cycle repeats. This has a few advantages: it keeps the org very current with tools and technology, everyone stays sharp on the latest tech, nothing is sacred, and teams get experience across the spectrum of design, build, implement, and run. It also has good disaster recovery properties if you idle the old environment, so that you can fall back if some critical failure occurs in the new environment.

This would be expensive, but please poke holes. I like your idea of clean rebuilds and can see a path to it with automation / terraform / cloud resources. And you don't need the downtime if you stand up the second one in parallel and just fail over. There's still persistent data that needs to carry through, so you'd need to figure out how to separate your persistent data from the elements that reset.

3 years ago by bob1029

I think the need to maintain multiple teams may not be as urgent if you constrain the timeline.

The biggest requirement I see is automation. For this to be feasible in a general sense, it has to come down to a single method invocation that completes in 1 hour what those teams are doing in 2-3 years.

The biggest challenge that will emerge from trying to meet this objective is the import/export of data to/from these now-highly-ephemeral IT systems. The ability to easily import pure business data back into a fresh instance of the system will likely constrain the vendor & product choices as well.

Very soon, you might find yourself building a 100% custom vertical to support these objectives explicitly. I think this is ultimately inevitable and desirable though. We just need to learn how to build these things quickly & reliably.

3 years ago by mlac

Yeah I see it as two items - total architecture rebuilds and redesigns for an organization's IT system vs. blowing away enterprise resources each night and restoring with a known good application each day or week.

That would be amazing but incredibly complex. Each week I guess you would run a script to re-build your architecture in AWS with the latest builds and patches. Then run a config script to re-import all your data.

It would be painful to figure out, but you could essentially store a copy of your data at another AWS location and fail over within a day or two given just your two install scripts (the architecture build-out and then the config script to read in the data). Depending on how often and on how many systems you did this, you'd basically make attackers restart every night or week. And ideally you're patching as quickly as possible, so it might block some of them out quickly.
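
Very roughly, the weekly AWS rebuild could be a couple of boto3 calls (stack names, template URL, and parameters are invented for illustration):

    import boto3

    cfn = boto3.client("cloudformation")

    # Tear last week's stack down entirely - nothing carries over.
    cfn.delete_stack(StackName="corp-it-week-09")
    cfn.get_waiter("stack_delete_complete").wait(StackName="corp-it-week-09")

    # Stand the architecture back up from the versioned template, pointing
    # the data re-import at the known-good snapshot bucket.
    cfn.create_stack(
        StackName="corp-it-week-10",
        TemplateURL="https://s3.amazonaws.com/corp-templates/architecture.yaml",
        Parameters=[{"ParameterKey": "DataSnapshotBucket",
                     "ParameterValue": "corp-known-good-backups"}],
        Capabilities=["CAPABILITY_IAM"],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName="corp-it-week-10")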

3 years ago by djrogers

Since you asked us to poke holes:

1) turnover 2) skillset

Some people are amazing at the architect/build side of things while either sucking at or hating the run side, and vice versa. Mismatched skill sets lead to higher turnover, which makes running an A/B team routine even harder.

3 years ago by mlac

Fair points. I would say if it is done well, turnover would go down. I'd think re-architecting from scratch every 4-6 years would be extremely engaging and keep the role interesting. Or it would be extremely tiring and lead to burnout. Not saying the architects would need to run the application for 3 years - during the three years of run for their cycle they could determine issues with their architecture to fix for next time, work with the other architecture team to make improvements for the next cycle, and perform research for their next design.

I think the main drawback is cost - it essentially doubles the cost of staffing for the organization's IT. I guess there is core functionality that could be shared and stay consistent.

3 years ago by oauea

Sounds like an excellent strategy for resume padding.

3 years ago by RijilV

doubles the cost of implementing anything. Say a customer wants feature-X. Unless you're magically at the point in your 2-3 year cycle where you're switching, both the A and B teams need to implement the feature. Of course, that's assuming you don't just tell the customer to stuff it and wait 2-3 years.

You're also assuming that you know ahead of time all use cases and interfaces. It's surprising what dependencies get taken. I've seen large scale systems break when an HTTP 204 was changed to an HTTP 206, or a base36 field changed to base62. Now again, maybe you're thinking the consumer can stuff it and update everything whenever you decide to switch over, or that you'll have captured everything and have tests around it. But... for any sufficiently complex system with a sufficiently large customer base, everything about your interface becomes your customer contract. Changing everything all at once is going to break a ton of things nobody ever thought about.

Doing upgrades every 2-3 years means you're pretty much never going to be good at them. Institutional knowledge seems to have a 2-3 year memory horizon. Sure, you get that one person who is a bit of an archeologist/historian but tenure at most shops is not long ("The median number of years wage and salaried employees stayed with their current employer in 2018 was 4.2 years" - first hit on Google). While you're upgrading every 3 years, each team only does so every 6 years. Nobody is gonna remember what it looked like.

There's also a meta point, which is what are you actually trying to solve? Is it so hard to go from architecture A.v0 -> A.v1 -> architecture B that you need to build A, maintain A and simultaneously build B? If moving between architectures is so hard but moving between versions of an architecture isn't - why is that the case and why can't you make the former case easier?

I'm assuming that your plan has you upgrading the A-architecture within those 2-3 years. Maybe you're saying you wouldn't touch it at all and just hope there are no security issues or features or scaling you need to do.

There's also another point which is you've coupled all changes to a particular cadence. Maybe you want to upgrade your network, servers, storage systems, OS, application services, etc on different cycles. At the very least you're sorta hoping that all of those things have similar release cycles, which realistically you're going to be picking some network switch that's been out for 2 years and marrying it to a storage product that was released last month (because the previous one is 5 years old and will be out of support before your next refresh).

And scaling... what happens when you can't get the same server you were ordering 2 years ago? Tell users they can't have nice things until the other team rolls out their massive platform shift in a year? Or would you adopt a new platform to scale on, in which case, why are you doing this A and B team thing again?

And not only do you need two teams, but you need two sets of hardware, which means you need twice as much datacenter space, etc etc. Do folks need two desk phones when you roll that out?

And ... I'm gonna stop here...

3 years ago by mlac

This is a great comment and thanks for the feedback.

I should have clarified the context and my experience. I was thinking this is a process for dealing with legacy bloat and mostly internal IT systems (IT Architecture) in mostly stable Fortune 500 size companies that are already operating at scale.

From what I've seen, big shifts are often a one-time "transformation" with lock-in to a service. In cloud it's Azure or AWS or GCP. Or companies are stuck on legacy Exchange and can't move to O365 without a major initiative. Or there is no viable path to move from Microsoft to Google.

These things only occur with great pain, and resources aren't often provided to reconsider alternatives and to stay current. I picked three years because things tend to operate at that pace at large organizations. It's probably a faster upgrade cycle than where most of those companies are today.

It would be interesting to go back to the drawing board with the business lines to develop tech internally to better support them. Lots of stuff is just operating on terribly outdated systems. There is some lock-in (e.g. we're going to use O365 for our office products for the next 3 years), but it would increase bargaining power because your org could actually migrate away.

For a lot of applications I agree with what you are saying - pick a good architecture and stick with it. And I don't think there would be a need to change the way the company works for the sake of change, but I've seen enough big shifts that it makes me think a total redesign of an organization's architecture every few years (or at least considering it) would be useful. Right now a big advantage to startups is that they can design much more efficient IT models than most legacy large corps.

I know if I could start from scratch I'd do a lot of things very differently and could show major cost, efficiency, and security improvements. So the idea would be to take a team who knows the company, break them off and say "build an architecture for the organization that will go live in 3 years" - take the best of the current environment and tool set, integrate new tech and security, and we will start moving users to the environment in 3 years. Then you get to run that for 3 years while the other team does the same thing.

You're right on the turnover point.

I think the whole goal of this would be to never go more than 3 years without seriously considering alternatives for major systems (ERP, HR, Security tools) while giving the chance to have it all be integrated and put into place as a cohesive design.

3 years ago by iforgotpassword

We use netboot for most desktop computers and servers that are mostly stateless. Any changes are temporary, ending up on a dedicated temp partition that gets wiped on boot, or in RAM.

Rebuilds are mostly automatic. Of course, netboot in itself opens new attack vectors; we're in the early stages of exploring different approaches there, even the painful secure boot crap. Honestly I think most of the security in our case right now comes from being an obscure in-house solution that you'd need to specifically target. Also, in case you do get pwned, a post mortem becomes mostly impossible, since once you reboot a machine everything is gone - except stuff on network shares, of course.

3 years ago by vsareto

>What if we built a new IT stack that was designed to be obliterated and reconstructed every 24 hours with latest patch builds each time?

Inevitably an update is going to break something. So even if you can automate all of that, how do you make sure nothing breaks? This requirement isn't just the automation and technology gathering, it's testing too. It seems to me like you'd need a lot more benefits to make this worth the time/money/effort. You'd probably be better off having 2 networks for employees: 1 for public internet and 1 for internal company stuff. I think the intelligence community has something like that?

3 years ago by jka

Could the same principles from application development apply? Given sufficient coverage, running unit and integration tests for each "infrastructure build artifact" could help to provide assurance.

(and if your infrastructure service provider(s) don't have suitable test coverage they can offer, perhaps it's time for a conversation with them about that)
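
(e.g. a couple of pytest-style smoke tests gating the cutover - hostname invented:)

    import socket
    import ssl

    HOST = "mail.internal.example"

    def test_smtp_answers():
        # The rebuilt mail host should greet with a 220 banner before cutover.
        with socket.create_connection((HOST, 25), timeout=5) as s:
            assert s.recv(128).startswith(b"220")

    def test_tls_cert_is_valid():
        # The handshake fails loudly if the rebuild shipped a bad certificate.
        ctx = ssl.create_default_context()
        with socket.create_connection((HOST, 443), timeout=5) as s:
            with ctx.wrap_socket(s, server_hostname=HOST):
                pass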

3 years ago by mleonhard

When you're deploying from backup every day, rollbacks are easy.

3 years ago by rhacker

I remember this kind of thing happening all the time in the 90s and part of the 00s... It's just 10 to 1000 times worse nowadays since EVERYTHING is online now.

3 years ago by slickrick216

Practice. All those folks are still alive and now there's more of them. They've all been practicing too.

3 years ago by panarky

Former US CISO Chris Krebs says this is a bigger deal than what's been reported so far.

This is a crazy huge hack. The numbers I've heard dwarf what's reported here & by my brother from another mother (@briankrebs).

https://twitter.com/C_C_Krebs/status/1368004401705717768

3 years ago by Godel_unicode

Chris Krebs was definitely not the US CISO. He was the director of CISA, the Cybersecurity and Infrastructure Security Agency. CISO of the US is usually a meaningless figurehead; Krebs actually did things.

3 years ago by slickrick216

Yeah, I just had an awkward conversation with a relative who works for a company that has an on-site email server running Exchange. When I asked him whether he had patched or upgraded it, he said no, Microsoft does all that. Grim.

3 years ago by yudlejoza

I wasn't aware Exchange Server was still this prevalent, and that its pwnage was still alive and kicking.

Great job M$.

3 years ago by taspeotis

I mean organisations with their own Exchange Server are just organisations that aren't on Microsoft 365 yet. Which is basically hosted Exchange.

It's turtles all the way down.

3 years ago by technion

Unfortunately "moving to Office 365" for many organisations doesn't get rid of Exchange. Microsoft's article on "how and when" is basically a list of reasons you might be stuck with it.

https://docs.microsoft.com/en-us/exchange/decommission-on-pr...

3 years ago by lc9er

Even if you move to O365/Exchange Online, you'll likely always have some Exchange footprint. The only way to get around this is to migrate your AD to Azure.

3 years ago by alfiedotwtf

"But at bottom, is Perl script"

3 years ago by toyg

Meh, this is actually great publicity for O365.

3 years ago by gscott

Just like after the Experian hack, Experian ramped up their commercials for their paid Identity Theft Protection service. I was seeing their commercials every hour.

https://www.experian.com/consumer-products/identity-theft-an...

3 years ago by Triv888

Which one is your favorite alternative?

3 years ago by CraigJPerry

Postfix.

But that's only an MTA, I hear you cry; Exchange does both MTA & MDA! Bear with me.

Postfix is software to learn from. It might be written in C, but the architecture is the epitome of beautiful modular design. It's not just the meticulous separation of concerns and the care and attention to detail; everything from string handling to memory management is pristinely handled. https://github.com/vdukhovni/postfix

Even at runtime the beauty of the architecture allows for a sysadmin to choose (via master.cf) exactly how the components should be composed to fit their needs. The defaults are crafted for minimum fuss if you just need to get it running ASAP. The software is ergonomic in addition to being artfully crafted.
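
For a flavour, here are the first few service definitions from a stock master.cf (abridged; exact values vary slightly by distro) - each row is a separate, minimally-privileged process you can tune or swap independently:

    # service type  private unpriv  chroot  wakeup  maxproc command
    smtp      inet  n       -       n       -       -       smtpd
    pickup    unix  n       -       n       60      1       pickup
    cleanup   unix  n       -       n       -       0       cleanup
    qmgr      unix  n       -       n       300     1       qmgr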

So what does all this care and attention get you? Only 9 CVEs in 22 years, only 3 of which are code exec, only 2 of which are (maybe) remote code exec, only 1 of which is unauth user RCE - but very hard in practice to exploit.

Maybe it's just not that popular? It was 1/3 of all SMTP servers on the internet according to a 2019 scan.

So it's the best MTA ever to exist, but what about MDA? Well, that was the whole point. Compose well crafted components together to build a system. You especially don't run part of your mailserver's web interface in kernel space because, well, I'm not sure why IIS/Exchange does that :-)

3 years ago by jhanschoo

How is it worse now? It looks to me that it's better now, since SaaS companies today just patch their products on their end, and even this situation beats the old days of needing physical media when a patch was too big.

3 years ago by ganzuul

He told you. Everything is online.

3 years ago by jhanschoo

I believe my argument addresses that everything being online doesn't necessarily worsen security from hacking.

3 years ago by undefined
[deleted]
3 years ago by xwolfi

How many people were impacted?

Before, the impact was low even if the fix was slow. Now the fix is fast, but it's thousands of companies per exploit.

It's not a net positive for you, the cuck with a credit card number stored everywhere.

3 years ago by _robbywashere

The United States Government should actively be trying to protect its businesses. They should create a three letter organization to do so. They should call it the National Security something or another.

3 years ago by labster

That name is already taken by the department of hodling cyberattacks. They should have a National Vulnerability Agency that handles it.

3 years ago by jjeaff

Or maybe let's revisit the charter of the NSA and make some major tweaks.

3 years ago by Godel_unicode

Why do people keep thinking that putting the military in charge of civilian cyber defense is in any way a good idea??

3 years ago by gogopuppygogo

Step 1.) Let marijuana users be employed so you can attract talent.

Step 2.) Pay above market rate for talent, even import it from Israel or other friendly nation states. We need a Wernher von Braun style approach to recruitment.

Step 3.) ???? Profit ????

3 years ago by systematical

I've been saying this for years. The government should actively be hacking corporations, state, and local governments. Then disclosing the vulnerability privately to these organizations. This levels up our offensive capabilities while securing us at the same time.

3 years ago by cutemonster

National Security Theater?

3 years ago by waynesoftware

"This is the real deal," tweeted Christopher Krebs, the former CISA director. "If your organization runs an OWA server exposed to the internet, assume compromise between 02/26-03/03."

3 years ago by imglorp

Chris acknowledged Brian as his "brother from another mother." :-) I was wondering...

3 years ago by waynesoftware

Wow. Patching (or using cloud mail providers) would have mitigated the risk for this one...and many others in the past (and the future). The cleanup from this is big for those who were hit.

Launching attacks during major news events surely also helped the attackers stay under the radar for longer.

3 years ago by brundolf

The cloud angle is interesting; on one hand, it creates an even-more-centralized single point of failure. On the other hand, given that virtually every computing system out there is a house of cards, letting the experts focus on securing (and updating!) just a single one might be the best defense.

3 years ago by mywittyname

The cloud providers can afford to hire and train elite teams to handle security. I remember seeing a post about a guy trying to break out of the docker container used by Cloud SQL on GCP, and apparently the GCP admins made it known that he was being watched pretty early on. I believe the issue was patched fairly quickly too.

It's possible that <Random F500 Co> has a great security team. But it's also possible that <Other F500 Co> doesn't.

3 years ago by brundolf

Really what we need is the ability to self-host reasonably secure systems without a team of experts working round the clock... but that doesn't appear to be the hand we've been dealt

3 years ago by LilBytes

I remember reading this. When the Dev did it a second time there was a txt file on the host (container? Can't remember) saying "Hey this is cool, we're about to patch this, thanks for letting us know".

3 years ago by theobeers

Yeah, here's the blog post you're thinking of, from August 2020:

https://offensi.com/2020/08/18/how-to-contact-google-sre-dro...

And the HN thread:

https://news.ycombinator.com/item?id=24216009

3 years ago by pmlnr

Riiiight, because cloud software can't have 0-days.

3 years ago by Veserv

That is only accurate if they can provide a meaningful defense against expected attacks; otherwise all you are doing is creating a single central target. Unfortunately, the cloud providers cannot mount even a token defense against an attacker funding an attack at the $100M level, so I see no reason to assume they can defend against credible threats to a single Fortune 500 company, given that they cannot even stop an attack with such a meager amount of resources allocated to it relative to the size of a Fortune 500 company. That is not to say that the teams in a Fortune 500 company are any better, merely that everybody is completely inadequate.

By consolidating targets when you cannot even protect a single one, you are making the situation worse, not better. For it to make any real amount of sense, they would first need to demonstrate an ability to prevent attacks at least in the correct order of magnitude, and then demonstrate that they can scale up without creating correlated risk. Only then does it make any sense to actually centralize on a single solution, let alone a single provider.

3 years ago by brundolf

I mean... they prevented this one in the cloud version.

I'm not advocating for a single provider, and I'm not necessarily advocating for cloud hosting as a solution, I'm just pointing out that in this case the cloud fared better than practically all of the self-hosted systems

3 years ago by bearbawl

That's not how to look at this.

The point is not whether the Cloud can defend against a very sophisticated attack; the point is whether they can at least do a better job than those big companies are doing.

And the answer is really easy: Fortune 500 companies are in the Stone Age of security (among a lot of other computer science topics), so of course the Cloud is doing better. It's not even the same world or the same order of magnitude.

And the abyss will become bigger and bigger because it's becoming more complex. There is no way a Fortune 500 company can keep up with the complexity of what AWS, Google or Azure is dealing with, and the new tech world we live in. And it's also quite stupid; that's not your job nor where you will be making money. Just concentrate on the app/code that is indeed your core job, on top of solid and proven Cloud services.

Also, you talk about centralisation and the issue of a single provider; well, here's the actual joke: the level of centralization and concentration is way, way bigger internally than if it was on the Cloud. Most of those Fortune 500 companies have only a few datacenters. Although they are international, some even have datacenters only in their local region of origin, with zero regional/local hub of any sort, as crazy as it may sound.

And most of those Fortune 500 companies have only one provider for each of their key components.

If they were on the Cloud (and they will be, eventually), reversibility and transferability are almost "built-in", because it is an actual feature, or because everything is way more standardized, or just because, moving into the Cloud, you will think from the start about how to move back or to a different provider. And in any case it is much, much better than the state they're in.

3 years ago by u678u

I think this was the conclusion from the Sony hack (2014- wow nearly 7 years already). People were scared of cloud security but Sony showed that on prem isn't any better.

3 years ago by koolba

Cloud providers are also more likely to have true off-site backups in place. Your vanilla SMB running an Exchange server on a PC in the closet doesn't.

3 years ago by kaliszad

The proper mitigation would be actually using much simpler, better quality software. Microsoft Exchange Server is quite famous for being an attack vector on corporate networks. At my previous job, the company was advised (by a very capable and expensive security consulting company) to keep Exchange as separate as possible from the corporate network - this of course is a bit counterintuitive when you want to use e.g. Single Sign-On, contacts, and more, typically with Active Directory (AD). Thankfully my job wasn't to administer or develop any solutions for AD or Exchange, so I just took a note.

Obviously, no engineer can have even a sufficient overview of the full Exchange Server implementation, not to speak of a full understanding. In such a situation, security, quality, and user (or admin, for that matter) experience always take a big hit. It doesn't help that Exchange Server is most likely developed using programming languages and approaches that more or less demand complecting the solution with OOP-related ceremony. Supporting two decades or more of legacy features and protocols doesn't help either. Some companies even want to connect AD and Exchange to SharePoint... which is at least as complex as Exchange.

The problem companies don't understand is that you have to work on simplifying, which is very hard - much harder than adding features. If you don't, the interactions between components will overwhelm even the largest and best skilled team on the planet. The result is that we see breaches and security issues like this every day, and realistically nobody who can decide anything in the corporate environment gives a f** anymore, because nobody pays the more or less laughable fines with their own money and nobody really goes to jail, but the user data is lost and people's lives are shattered.

3 years ago by mattmanser

I find this comment extremely unhelpful.

There's a reason why everyone uses Microsoft Exchange, despite its myriad flaws and the flaws of its major client, Outlook.

And it's because it offers so much functionality, precisely because it is so much more complicated.

It's like saying you can secure your house if you build a 20ft wall round it with no gate.

Sure you can, but it becomes pretty useless.

3 years ago by doctor_eval

I don't think that's true at all. Exchange is awful. It's slow, hard to configure and doesn't offer anything you can't do better with simpler tools.

Like the majority of awful "enterprise" products on the market, the primary reason that it's popular is because it's from a megacorp who speaks the language of the buyers, who are all aspiring megacorps. I was horrified the first time I used Exchange and couldn't wait to change providers the moment I had the chance.

So it's more like saying you can secure your house if you use a security service who sets security targets instead of sales targets.

3 years ago by rsj_hn

I think the point is that you can provide a lot of functionality by using back-end APIs to communicate to servers in different trust zones rather than having a big ball of trust - especially an internet facing big ball of trust.

And you are right, loose coupling does rule out a very small set of functionality. For example, an email sent to a user might have an smb: link; Outlook used to generate a preview of the email, automatically loading all the links, which would cause your credentials to be sent to the smb:// server just by previewing the email, thereby allowing a malicious attacker to steal password hashes by sending emails to victims (no click was needed).

So that would be an example of excessively tight integration and a design philosophy that was fast and loose with shipping both credentials and executables across the network. I think we have learned from those lessons.

In terms of why it is dominant today, it is because of fairly rational C-level decisions, not users clamoring for it as opposed to some generic email/calendaring solution. Microsoft still knows how to do support, there is a large pool of cheap IT admins certified to work on it, and it allows you to run your own server instead of buying a service like G Suite. Really, if Google could shed their disdain for human beings and learn to think of them as customers, they could take a lot of market share away from Exchange, because right now it is a trade-off of security versus support - the functionality is basically the same.

3 years ago by kaliszad

It is unhelpful to your business if you get hacked and your customers lose trust in your ability to keep confidential data safe. The daily toil of using Outlook and Exchange is also substantial.

You conflate functionality and complexity. If you think about it for a minute, complexity actually hinders functionality. There is some intrinsic minimal complexity to the useful features of a software system for it to be functional. Exchange could be way more useful if it weren't so complicated, and it could be a lot easier to keep somewhat secure.

Exchange in many circumstances feels more like a bank's vault, but with the steel door swapped for a wooden one with the cheapest padlock you can buy and a sign saying "we go here once a year to check everything is in order" - where real banks usually work a bit differently... There are many cases where an attacker gained access to the complete Active Directory through Exchange. At least so I was told by a company that did the consulting afterwards to clean up the mess.

3 years ago by discreteevent

"It doesn't help Exchange Server is most likely developed using programming languages and approaches that more or less demand complecting the solution with OOP-related ceremony."

This statement certainly doesn't help the credibility of your comment.

3 years ago by systematical

Cause you could never have exploits in functional...ever.

3 years ago by EvanAnderson

Exchange product architecture was absolutely to blame for this. Very particularly, the "/ECP" directory should never have been allowed to be Internet accessible. (I believe the upcoming version finally rectifies that in a "supported" way.) In general, though, Microsoft hasn't focused enough on making Exchange more compartmentalized. The servers' privileges in Active Directory are too high (though this is supposedly being addressed in the upcoming version too.)
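
A quick outside-in check is trivial - a sketch using Python's requests library, hostname invented; run it from somewhere that is not your corporate network:

    import requests

    # Anything other than a connection failure means /ecp answers from the internet.
    try:
        r = requests.get("https://mail.example.com/ecp/", timeout=10)
        print(f"/ecp responded with HTTP {r.status_code} - internet-exposed")
    except requests.exceptions.RequestException:
        print("/ecp not reachable from here")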

3 years ago by kaliszad

Thank you for the insight.

Certainly, "in the upcoming version" is a bit late for those affected and most of those other Exchange-related hacks in the past. The thinking around Exchange is still more or less left in the 20th century and it shows.

3 years ago by FpUser

>"It doesn't help Exchange Server is most likely developed using programming languages and approaches that more or less demand complecting the solution with OOP-related ceremony"

What on Earth does OOP have to do with the quality / security of Exchange? This reads like someone is on a crusade.

3 years ago by kaliszad

Well, I kind of am, to be frank. OOP really mostly obscures an implementation and is often taught almost religiously as "the true one way". In the end, Exchange is a stellar example of the obscured and therefore unfathomable implementation.

You should really watch "Simple Made Easy" by Rich Hickey and think really hard about it. If you don't come to the conclusion that most software development could be way more sustainable in the long run if we used simpler tools and approaches, instead of complecting everything with questionable OOP ballast, then maybe we have very different experiences.

3 years ago by EvanAnderson

The vulnerabilities being exploited were all zero-day. Up-to-date installations were still vulnerable.

3 years ago by weare138

It was a 0-day exploit. The patch wasn't released until March 2nd, but the vulnerability had been exploited since at least January.

3 years ago by mattowen_uk

I patched my Exchange servers the morning this was announced, a few days ago. The patch takes about ten minutes per server and does not require a reboot. If your server was a client-facing one (CAS), users would have seen a brief outage in Outlook connectivity.

The patches were single-file downloads, one for each version of Exchange. Yes, you needed to be on the latest Cumulative Update for Exchange, and if you weren't, you really have no right running a production mail system...

3 years ago by ocdtrekkie

The last few security patches have been available independent of the Cumulative Updates, so it was reasonable to be a few behind. But this one required the latest CU to install.

Bear in mind after updating you still need to check if you were already hacked.

3 years ago by undefined
[deleted]