Hacker News
3 days ago by matt_heimer

The people configuring WAF rules at CDNs tend to do a poor job understanding sites and services that discuss technical content. It's not just Cloudflare, Akamai has the same problem.

If your site discusses databases then turning on the default SQL injection attack prevention rules will break your site. And there is another ruleset for file inclusion where things like /etc/hosts and /etc/passwd get blocked.

I disagree with other posts here: it is partially a balance between security and usability. You never know which service was implemented with possible security exploits, and being able to throw every WAF rule on top of your service does keep it more secure. It's just that those same rulesets are super annoying when you have a securely implemented service that needs to discuss technical concepts.

Fine-tuning the rules is time consuming. You often have to just completely turn off the ruleset, because when you try to keep it on and allow the use-case, there are a ton of changes you need to get implemented (if it's even possible). Page won't load because /etc/hosts was in a query param? Okay, now that you've fixed that, all the XHR-included resources won't load because /etc/hosts is in the referrer. Now that that's fixed, things still won't work because some random JS analytics lib put the URL visited in a cookie, etc, etc... There is a temptation to just turn the rules off.
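To make the failure mode concrete, here is a minimal sketch of how these generic rulesets behave. The function name and pattern list are my own illustration, not any vendor's actual rules, but the mechanism (a context-free, case-insensitive substring scan over the raw request) is representative:

```python
# Illustrative blocklist in the spirit of LFI/injection rulesets (not real rules).
FORBIDDEN = ["/etc/hosts", "/etc/passwd", "jndi:"]

def waf_blocks(request_text: str) -> bool:
    # Case-insensitive substring scan over the raw request text,
    # with no parsing and no notion of where the bytes came from.
    lowered = request_text.lower()
    return any(pattern in lowered for pattern in FORBIDDEN)

# A blog post *about* /etc/hosts is indistinguishable from an attack:
assert waf_blocks("POST /publish body=How to edit /etc/hosts on macOS")
# ...and so is a referrer or cookie that merely echoes the page URL:
assert waf_blocks("Referer: https://example.com/editing-hosts?path=/etc/hosts")
```

That's why fixing the query param doesn't help: the same scan fires again on the referrer, the cookie, and anywhere else the string leaks into the request.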

3 days ago by mjr00

> I disagree with other posts here, it is partially a balance between security and usability.

And economics. Many people here are blaming incompetent security teams and app developers, but a lot of seemingly dumb security policies are due to insurers. If an insurer says "we're going to jack up premiums by 20% unless you force employees to change their password once every 90 days", you can argue till you're blue in the face that it's bad practice, NIST changed its policy to recommend not regularly rotating passwords over a decade ago, etc., and be totally correct... but they're still going to jack up premiums if you don't do it. So you dejectedly sigh, implement a password expiration policy, and listen to grumbling employees who call you incompetent.

It's been a while since I've been through a process like this, but given how infamous log4shell became, it wouldn't surprise me if insurers are now also making it mandatory that common "hacking strings" like /etc/hosts, /etc/passwd, jndi:, and friends must be rejected by servers.

3 days ago by swiftcoder

Not just economics, audit processes also really encourage adopting large rulesets wholesale.

We're SOC2 + HIPAA compliant, which either means convincing the auditor that our in-house security rules cover 100% of the cases they care about... or we buy an off-the-shelf WAF that has already completed the compliance process, and call it a day. The CTO is going to pick the second option every time.

3 days ago by mjr00

Yeah. SOC2 reminds me that I didn't mention sales as well, another security-as-economics feature. I've seen a lot of enterprise RFPs that mandate certain security protocols, some of which are perfectly sensible and others... not so much. Usually this is less problematic than insurance because the buyer is more flexible, but sometimes they (specifically, the buyer's company's security team, who has no interest besides covering their own ass) refuse to budge.

If your startup is on the verge of getting a 6 figure MRR deal with a company, but the company's security team mandates you put in a WAF to "protect their data"... guess you're putting in a WAF, like it or not.

a day ago by sgarland

OS-level monitoring / auditing software also never ceases to amaze me (for how awful it is). Multiple times, at multiple companies, I have seen incidents that were caused because Security installed / enabled something (AWS GuardDuty, Auditbeat, CrowdStrike…) that tanked performance. My current place has the latter two on our ProxySQL EC2 nodes. Auditbeat is consuming two logical cores on its own. I haven't yet been able to quantify the impact of CrowdStrike, but from a recent perf report, it seemed like it was using eBPF to hook into every TCP connection, which is quite a lot for a DB connection pooler.

I understand the need for security tooling, but I don’t think companies often consider the huge performance impact these tools add.

2 days ago by simonw

I wish IT teams would say "sorry about the password requirement, it's required by our insurance policy". I'd feel a lot less angry about stupid password expiration rules if they told me that.

2 days ago by cratermoon

Sometime in the past few years I saw a new wrinkle: password must be changed every 90 days unless it is above a minimum length (12 or so as best I recall) in which case you only need to change it yearly. Since the industry has realized length trumps dumb "complexity" checks, it's a welcome change to see that encoded into policy.

a day ago by thayne

Not exactly that, but I've had the security team say "sorry about the password policy, we agree it is stupid and counterproductive, but it's required for compliance with X, and we need X to sell to some big customers."

3 days ago by betaby

> but a lot of seemingly dumb security policies are due to insurers.

I keep hearing that often on HN, but I've personally never seen such demands from insurers. I would greatly appreciate it if someone could share such an insurance policy. Insurance policies are not trade secrets and are OK to be public. I can google plenty of commercial car insurance policies, for example.

2 days ago by simonw

I found an example!

https://retail.direct.zurich.ch/resources/definition/product...

Questionnaire Zurich Cyber Insurance

Question 4.2: "Do you have a technically enforced password policy that ensures use of strong passwords and that passwords are changed at least quarterly?"

Since this is an insurance questionnaire, presumably your answers to that question affect the rates you get charged?

(Found that with the help of o4-mini https://chatgpt.com/share/680bc054-77d8-8006-88a1-a6928ab99a...)

2 days ago by bigbuppo

The fun part is that they don't demand anything, they just send you a worksheet that you fill out and presumably it impacts your rates. You just assume that whatever they ask about is what they want. Some of what they suggest is reasonable, like having backups that aren't stored on storage directly coupled to your main environment.

The worst part about cyber insurance, though, is that as soon as you declare an incident, your computers and cloud accounts now belong to the insurance company until they have their chosen people rummage through everything. Your restoration process is now going to run on their schedule. In other words, the reason the recovery from a crypto-locker attack takes three weeks is because of cyber insurance. And to be fair, they should only have to pay out once for a single incident, so their designated experts get to be careful and meticulous.

2 days ago by tmpz22

This is such an important comment.

Fear of a prospective expectation, compliance requirement, etc., even when that requirement does not actually exist, is so prevalent in the personality types of software developers.

2 days ago by manwe150

You can buy insurance for just about anything, not just cars. Companies frequently buy insurance against various low-probability incidents such as loss of use, fraud, lawsuit, etc.

3 days ago by lucianbr

There should be some limits and some consequences to the insurer as well. I don't think the insurer is god and should be able to request anything no matter if it makes sense or not and have people and companies comply.

If anything, I think this attitude is part of the problem. Management, IT security, insurers, governing bodies, they all just impose rules with (sometimes, too often) zero regard for consequences to anyone else. If no pushback mechanism exists against insurer requirements, something is broken.

3 days ago by mjr00

> There should be some limits and some consequences to the insurer as well. I don't think the insurer is god and should be able to request anything no matter if it makes sense or not and have people and companies comply.

If the insurer requested something unreasonable, you'd go to a different insurer. It's a competitive market after all. But most of the complaints about incompetent security practices boil down to minor nuisances in the grand scheme of things. Forced password changes once every 90 days is dumb and slightly annoying but doesn't significantly impact business operations. Having to run some "enterprise security tool" and go through every false positive result (of which there will be many) and provide an explanation as to why it's a false positive is incredibly annoying and doesn't help your security, but it's also something you could have a $50k/year security intern do. Turning on a WAF that happens to reject the 0.0001% of Substack articles which talk about /etc/hosts isn't going to materially change Substack's revenue this year.

2 days ago by jimmaswell

This is why everyone should have a union, including highly paid professionals. Imagine what it would be like. "No, fuck you, we're going on strike until you stop inconveniencing us to death with your braindead security theater. No more code until you give us admin on our own machines, stop wasting our time with useless Checkmarx scans, and bring the firewall down about ten notches."

3 days ago by paxys

"You never know..." is the worst form of security, and makes systems less secure overall. Passwords must be changed every month, just to be safe. They must be 20 alphanumeric characters (with 5 symbols of course), just to be safe. We must pass every 3-letter compliance standard with hundreds of pages of checklists for each. The server must have WAF enabled, because one of the checklists says so.

Ask the CIO what actual threat all this is preventing, and you'll get blank stares.

As an engineer, what incentive is there to put effort into knowing where each form input goes and how to sanitize it in a way that makes sense? You are getting paid to check the box and move on, and every new hire quickly realizes that. Organizations like these aren't focused on improving security; they are focused on covering their ass after the breach happens.

3 days ago by chii

> Ask the CIO what actual threat all this is preventing

the CIO is securing his job.

3 days ago by reaperducer

the CIO is securing his job.

Every CIO I have worked for (where n=3) has gotten where they are because they're a good manager, even though they have near-zero current technical knowledge.

The fetishizing of "business," in part through MBAs, has been detrimental to actually getting things done.

A century ago, if someone asked you what you do and you replied, "I'm a businessman. I have a degree in business," you'd get a response somewhere between "Yeah, but what do you actually do?" and outright laughter.

2 days ago by undefined
[deleted]
3 days ago by ryandrake

This looks like a variation of the Scunthorpe problem[1], where a filter is applied too naively, aggressively, and in this case, to the wrong content altogether. Applying the filter to "other stuff" sent to and among the servers might make sense, but there doesn't seem to be any security benefit to filtering actual text payload that's only going to be displayed as blog content. This seems like a pretty cut and dried bug to me.

1: https://en.wikipedia.org/wiki/Scunthorpe_problem

3 days ago by rurp

This is exactly what I was thinking as well, it's a great Scunthorpe example. Nothing from the body of a user article should ever be executed in any way. If blocking a list of strings is providing any security at all you're already in trouble because attackers will find a way around that specific block list.

a day ago by chrisjj

> This looks like a variation of the Scunthorpe problem[1], where a filter is applied too naively

No.

> aggressively

No.

>, and in this case, to the wrong content altogether.

Yes - making it not a Scunthorpe problem.

2 days ago by pmarreck

Correct. And a great example of it.

3 days ago by krferriter

I don't get why you'd have SQL injection filtering of input fields at the CDN level, or any validation of input fields aside from length or maybe some simple type validation (number, date, etc). Your backend should be able to handle arbitrary byte content in input fields. Your backend shouldn't be relying on a CDN pre-filtering layer to avoid being vulnerable to SQL injection.
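The standard way a backend handles arbitrary bytes safely is parameter binding, which is what makes CDN-level filtering unnecessary for this threat. A small sketch using an in-memory SQLite database (the table and hostile string are my own illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")

# Parameter binding passes the value out-of-band; the driver never
# splices it into the SQL text, so no input can change the query shape.
hostile = "'; DROP TABLE notes; --"
conn.execute("INSERT INTO notes (body) VALUES (?)", (hostile,))

row = conn.execute("SELECT body FROM notes").fetchone()
assert row[0] == hostile  # stored verbatim, nothing executed
```

With this in place, a field containing "select", "/etc/hosts", or any other scary-looking string is just data.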

2 days ago by immibis

Because someone said "we need security" and someone else said "what is security" and someone else said "SQL injection is security" and someone looked up SQL injections and saw the words "select" and "insert".

WAFs are always a bad idea (possible exception: in allow-but-audit mode). If you knew the vulnerabilities you'd protect against them in your application. If you don't know the vulnerabilities all you get is a fuzzy feeling that Someone Else is Taking Care of it, meanwhile the vulnerabilities are still there.

Maybe that's what companies pay for? The feeling?

a day ago by patrakov

> If you knew the vulnerabilities you'd protect against them in your application.

Correction: it is not your application but someone else's Certified Stuff (TM) that you can't change, but which is still vulnerable.

2 days ago by pxc

WAFs can be a useful site of intervention during incidents or when high-severity vulns are first made public. It's not a replacement for fixing the vuln, that still has to happen, but it gives you a place to mitigate it that may be faster or simpler than deploying code changes.

2 days ago by gopher_space

If your clients will let you pass the buck on security like this it would be very tempting to work towards the least onerous insurance metric and no further.

2 days ago by ordersofmag

A simple reason would be if you're just using it as a proxy signal for bad bots and you want to reduce the load on your real servers and let them get rejected at the CDN level. Obvious SQL injection attempt = must be malicious bot = I don't want my servers wasting their time

a day ago by chrisjj

> A simple reason would be if you're just using it as a proxy signal for bad bots

Who would be that stupid?

2 days ago by benregenspan

It should be thought of as defense-in-depth only. The backend had better be immune to SQL injection, but what if someone (whether in-house or vendor) messes that up?

I do wish it were possible to write the rules in a more context-sensitive way, maybe possible with some standards around payloads (if the WAF knows that an endpoint is accepting a specific structured format, and how escapes work in that format, it could relax accordingly). But that's probably a pipe dream. Since the backend could be doing anything, paranoid rulesets have to treat even escaped data as a potential issue and it's up to users to poke holes.

2 days ago by nijave

The farther a request makes it into infrastructure, the more resources it uses.

3 days ago by netsharc

Reminds me of an anecdote about an e-commerce platform: someone coded a leaky webshop, so their workaround was to watch if the string "OutOfMemoryException" shows up in the logs, and then restart the app.

Another developer in the team decided they wanted to log what customers searched for, so if someone typed in "OutOfMemoryException" in the search bar...

2 days ago by PhilipRoman

Careless analysis of free-form text logs is an underrated way to exploit systems. It's scary how much software blindly logs data without out-of-band escaping or sanitizing.

2 days ago by ycombinatrix

Why would someone "sanitize" OutOfMemoryException out of their logs? That is a silly point to make.

2 days ago by teraflop

The point is not to sanitize known strings like "OutOfMemoryException". The point is to sanitize or (preferably) escape any untrusted data that gets logged, so that it won't be confused for something else.
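A minimal sketch of that out-of-band escaping idea (the `kind=` field layout and JSON-escaping of values are my own illustrative choices, not anything from the anecdote):

```python
import json

def log_line(kind: str, value: str) -> str:
    # JSON-escape the untrusted field so user input can never be
    # mistaken for a structural part of the log line.
    return f"{kind}={json.dumps(value)}"

# An error marker and a user search term now have distinct shapes:
assert log_line("error", "OutOfMemoryException") == 'error="OutOfMemoryException"'
assert log_line("search", "OutOfMemoryException") == 'search="OutOfMemoryException"'
```

A log watcher keyed on `error=` can then never be tripped by something a customer typed into the search bar.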

2 days ago by owebmaster

An OutOfMemoryException log should not be the same as a search log

  Error: OutOfMemoryException
And

  Search: OutOfMemoryException
Should not be related in any way.

2 days ago by MortyWaves

Absolutely incredible how dense HN can be, and that no one has explained it. Obviously that isn't what they are saying; they are saying it's profoundly stupid for the server to be controlled by a simple string search at all.

3 days ago by skipants

I've actually gone through this a few times with our WAF. A user got IP-banned because the WAF thought a note with the string "system(..." was PHP injection.

3 days ago by Y_Y

Does it block `/etc//hosts` or `/etc/./hosts`? This is a ridiculous kind of whack-a-mole that's doomed to failure. The people who wrote these should realize that hackers are smarter and more determined than they are and you should only rely on proven security, like not executing untrusted input.
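The whack-a-mole point is easy to demonstrate: a literal blocklist misses trivial path variants that still name the same file. A sketch (the blocklist and function are hypothetical; `posixpath.normpath` is purely lexical, so it never touches the filesystem):

```python
import posixpath

BLOCKLIST = {"/etc/hosts", "/etc/passwd"}

def naive_block(path: str) -> bool:
    return path in BLOCKLIST  # literal match; substring scans share the flaw

# Trivial variants slip straight past the literal match...
assert not naive_block("/etc//hosts")
assert not naive_block("/etc/./hosts")

# ...yet both normalize (lexically, no filesystem access) to the same file:
assert posixpath.normpath("/etc//hosts") == "/etc/hosts"
assert posixpath.normpath("/etc/./hosts") == "/etc/hosts"
```

And normalization only closes this one hole; URL-encoding, unicode tricks, and symlinks are whole other games of the same whack-a-mole.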

3 days ago by jrockway

Yeah, and this seems like a common Fortune 500 mandatory checkbox. Gotta have a Web Application Firewall! Doesn't matter what the rules are, as long as there are a few. Once I was told I needed one to prevent SQL injection attacks... against an application that didn't use an SQL database.

If you push back you'll always get a lecture on "defense in depth", and then they really look at you like you're crazy when you suggest that it's more effective to get up, tap your desk once, and spin around in a circle three times every Thursday morning. I don't know... I do this every Thursday and I've never been hacked. Defense in depth, right? It can't hurt...

2 days ago by hnlmorg

I’m going through exactly this joy with a client right now.

“We need SQL injection rules in the WAF”

“But we don’t have an SQL database”

“But we need to protect against the possibility of partnering with another company that needs to use the same datasets and wants to import them into a SQL database”

In fairness, these people are just trying to do their job too. They get told by NIST (et al.) and cloud service providers that a WAF is best practice. So it's no wonder they'd trust these snake oil salesmen over the developers who are asking not to do something "security" related.

2 days ago by zelphirkalt

If they want to do their job well, how about adding some thinking into the mix, for good measure? It would also help if they actually knew what they were talking about before trying to tell the engineers what to do.

3 days ago by bombcar

I love that having a web application firewall set to allow EVERYTHING passes the checkbox requirement ...

3 days ago by CoffeeOnWrite

(I’m in the anti-WAF camp) That does stand to improve your posture by giving you the ability to quickly apply duct tape to mitigate an active mild denial of service attack. It’s not utterly useless.

2 days ago by vultour

A large investment bank I worked for blocked every URL that ended in `.go`. Considering I mostly wrote Golang code it was somewhat frustrating.

3 days ago by nickdothutton

See "enumerating badness" as a losing strategy. I knew this was a bad idea about 5 minutes after starting my first job in 1995.

2 days ago by tom1337

Well, I've just created an account on Substack to test this, but it turns out they've already fixed the issue (or turned off their WAF completely).

3 days ago by augusto-moura

How would that be hard? Getting the absolute path of a string is in almost all languages' stdlibs[1]. You can just grep for any string containing slashes, try to resolve them, and voilà.

Resolving wildcards is trickier but definitely possible if you have a list of forbidden files

[1]: https://nodejs.org/api/path.html#pathresolvepaths

Edit: changed link because C's realpath has a slightly different behavior

2 days ago by TheDong

The reason it's doomed to failure is because WAFs operate before your application, and don't have any clue what the data is.

Here is a WAF matching line: https://github.com/coreruleset/coreruleset/blob/943a6216edea...

Here's where that file is loaded: https://github.com/coreruleset/coreruleset/blob/943a6216edea...

It's loaded with '"@pmFromFile lfi-os-files.data"' which means "case-insensitive match of values from a file".

So yeah, the reason it can't resolve paths properly is because WAFs are just regex and substring matching trying to paper over security issues in an application which can only be solved correctly at the application level.
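A rough Python approximation of what that operator does (this is my own sketch, not CRS's actual implementation; the wordlist stands in for lfi-os-files.data):

```python
# @pmFromFile-style matching, sketched: load a wordlist, then do a
# case-insensitive substring scan with no notion of where in the
# request the bytes came from or what they mean.
LFI_WORDS = ["/etc/hosts", "/etc/passwd", "/etc/shadow"]  # stand-in wordlist

def pm_from_file(value: str) -> bool:
    v = value.lower()
    return any(word in v for word in LFI_WORDS)

# A query param and an article draft look exactly alike to it:
assert pm_from_file("utm_source=/etc/HOSTS")
assert pm_from_file('{"draft": "Today we explain what /etc/hosts does"}')
```

Nothing in that loop can distinguish "attacker probing for local file inclusion" from "author writing a sysadmin tutorial".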

2 days ago by myflash13

It’s not hard, but I think that’s more computation than a CDN should be doing on the edge. If your CDN layer is doing path resolution on all strings with slashes, that’s already some heavy lifting for a proxy layer.

2 days ago by watusername

> How would that be hard? Getting the absolute path of a string is in almost all languages stdlibs[1]. You can just grep for any string containing slashes and try resolve them and voilĂĄ

Be very, very careful about this, because if you aren't, this can actually result in platform-dependent behavior or actual filesystem access. They are bytes containing funny slashes and dots, so process them as such.

Edit: s/text/bytes/
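The platform-dependence is easy to trip over in Python specifically: `os.path` silently picks the host OS's rules, while `posixpath` and `ntpath` apply POSIX and Windows rules explicitly (all purely lexical, no filesystem access). A sketch:

```python
import ntpath
import posixpath

p = "/etc/./hosts"

# Same input string, two different ideas of what a path is:
assert posixpath.normpath(p) == "/etc/hosts"
assert ntpath.normpath(p) == "\\etc\\hosts"  # Windows rules, applied anywhere

# Using os.path.normpath instead would pick whichever the host OS uses,
# which is exactly the platform-dependent behavior to avoid when you are
# inspecting request strings rather than real local paths.
```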

3 days ago by simonw

"How could Substack improve this situation for technical writers?"

How about this: don't run a dumb as rocks Web Application Firewall on an endpoint where people are editing articles that could be about any topic, including discussing the kind of strings that might trigger a dumb as rocks WAF.

This is like when forums about web development implement XSS filters that prevent their members from talking about XSS!

Learn to escape content properly instead.
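"Escape properly" here means output encoding, not input blocking. A minimal sketch with Python's stdlib (the sample article string is my own):

```python
import html

article = "Edit <code>/etc/hosts</code> and try <script>alert(1)</script>"
escaped = html.escape(article)

# The text survives byte-for-byte as *content*; only its interpretation
# as markup is neutralized, so no topic ever needs to be blocked.
assert "&lt;script&gt;" in escaped
assert "/etc/hosts" in escaped
```

The same principle lets a web-dev forum discuss XSS payloads freely: the payload is rendered, never executed.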

a day ago by awoimbee

I'm in the position where I have to run a WAF to pass security certifications. The only open source WAFs are modsecurity and its beta successor, coraza. These things are dumb; they just use OWASP's coreruleset, which is a big pile of unreadable garbage.

2 days ago by serial_dev

Surprisingly simple solution

2 days ago by ZeroTalent

Hire a cybersec person. I don't think they have one.

3 days ago by blenderob

> This case highlights an interesting tension in web security: the balance between protection and usability.

But it doesn't. This case highlights a bug, a stupid bug. This case highlights that people who should know better, don't!

The tension between security and usability is real, but this is not it. The tension between security and usability is usually a tradeoff: when you implement good security, it inconveniences the user. From simple things like 2FA, to locking out the user after 3 failed attempts, to rate limiting to prevent DoS. It's a tradeoff: you increase security and degrade user experience, or you decrease security and improve user experience.

This is neither. This is both bad security and bad user experience. What's the tension?

2 days ago by myflash13

I would say it's a useful security practice in general to apply a WAF as a blanket rule to all endpoints and then remove it selectively when issues like this occur. It's much, much harder to evaluate every single public-facing endpoint, especially when hosting third-party software like Wordpress with plugins.

2 days ago by crabbone

Precisely.

This also reminded me: I think in the PHP 3 era, PHP used to "sanitize" the contents of URL requests to blanket-combat SQL injections; or perhaps it was a configuration setting (magic_quotes, which auto-escaped request data) that was frequently turned on by shared hosting services. This, of course, would be discovered very quickly by the authors of any affected PHP site, and various techniques were employed to circumvent the restriction, overall giving probably even worse outcomes than if the "sanitization" hadn't been there to begin with.

12 hours ago by TRiG_Ireland

The days of addslashes() and stripslashes()!

3 days ago by SonOfLilit

After having been bitten once (was teaching a competitive programming team, half the class got a blank page when submitting solutions, after an hour of debugging I narrowed it down to a few C++ types and keywords that cause 403 if they appear in the code, all of which happen to have meaning in Javascript), and again (working for a bank, we had an API that you're supposed to submit a python file to, and most python files would result in 403 but short ones wouldn't... a few hours of debugging and I narrowed it down to a keyword that sometimes appears in the code) and then again a few months later (same thing, new cloud environment, few hours burned on debugging[1]), I had the solution to his problem in mind _immediately_ when I saw the words "network error".

[1] the second time it happened, a colleague added "if we got 403, print "HAHAHA YOU'VE BEEN WAFFED" to our deployment script, and for that I am forever thankful because I saw that error more times than I expected

3 days ago by simonw

Do you remember if that was Cloudflare or some other likely WAF?

3 days ago by SonOfLilit

First time something on-prem, maybe F5. Second time AWS.

Oh, I just remembered I had another encounter with the AWS WAF.

I had a Jenkins instance in our cloud account that I was trying to integrate with VSTS (imagine github, except developed by Microsoft and still maintained; never mind that they own github and it's undoubtedly a better product). Whenever I triggered a build it worked, but when VSTS did, it failed. Using a REST monitor service I was able to record the exact requests VSTS was making and prove that they worked with curl from my machine... after a few nights of experimenting and diffing, I noticed a difference between the request VSTS made to the REST monitor and my reproduction with curl: VSTS didn't send a "User-Agent" header, while curl supplies one by default (unless you add, I think, -H "User-Agent:"), so my curl requests never triggered the first default rule in the AWS WAF: "if your request doesn't list a user agent, you're a hacker".

HAHAHA I'VE BEEN WAFFED AGAIN.

2 days ago by netsharc

+++ATH

3 days ago by pimanrules

We faced a similar issue in our application. Our internal Red Team was publishing data with XSS and other injection attack attempts. The attacks themselves didn't work, but the presence of these entries caused our internal admin page to stop loading because our corporate firewall was blocking the network requests with those payloads in them. So an unsuccessful XSS attack became an effective DoS attack instead.

3 days ago by darkwater

This is funny and sad at the same time.

3 days ago by mrgoldenbrown

Everything old is new again :) We used to call this the Scunthorpe problem.

https://en.m.wikipedia.org/wiki/Scunthorpe_problem

2 days ago by kreddor

I remember back in the old days on the Eve Online forums when the word cockpit would always turn up as "c***pit". I was quite amused by that.

2 days ago by Too

This is actually a better solution: replacing dangerous words with placeholders instead of blocking the whole payload. That at least gives the user some indication of what is going on. Not that I'm for any such WAF filters in the first place, but if I had to choose the lesser of two evils, I'd choose the more informative one.

2 days ago by Matumio

Not so sure. Imagine you have a base64 encoded payload and it just happens to encode the forbidden word. Good luck debugging that, if the payload only gets silently modified.

I suddenly understand why it makes sense to integrity-check a payload that is already protected by all three of TLS, TCP checksum and CRC.
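The corruption scenario is easy to reproduce. The sketch below simulates a filter editing four characters of a base64-encoded payload (the payload string and the edit position are my own illustration): the tampering raises no error, yet the decoded bytes silently differ:

```python
import base64

payload = b'{"search": "how to edit /etc/hosts"}'
encoded = base64.b64encode(payload)

# Simulate a word-replacing filter editing the *encoded* text. base64
# can't survive in-band edits: b64decode (with default validate=False)
# silently discards the non-alphabet '*' bytes and decodes the rest.
tampered = encoded[:8] + b"****" + encoded[12:]
garbled = base64.b64decode(tampered)

assert base64.b64decode(encoded) == payload  # untouched data round-trips
assert garbled != payload                    # tampered data decodes, wrongly
```

No exception, no log entry, just a payload that quietly isn't what was sent: exactly the debugging nightmare described above.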

3 days ago by greghendershott

See also: Recent scrubbing US government web sites for words like "diversity", "equity", and "inclusion".

Writing about biology, finance, or geology? Shrug.

Dumb filtering is bad enough when used by smart people with good intent.

2 days ago by netsharc

Huh, quick, tell one of Musk's DOGE l33t h4ck3rs about reverse proxies, and put all government sites behind one that looks for those words and returns an error... Error 451 would be the most appropriate!

For bonus, the reverse proxy will run on a system infiltrated by Russian (why not Chinese as well) hackers.

2 days ago by odirf

It is time to add the Substack case to this Wikipedia article.

2 days ago by reverendsteveii

"I wonder why it's called Scunthorpe....?"

sits quietly for a second

"Oh nnnnnnnooooooooooooooo lol!"
