I don't want more conversational, I want more to the point. Less telling me how great my question is, less about being friendly; instead I want cold, hard, accurate, direct, factual results.
It's a machine and a tool, not a person and definitely not my friend.
It's a cash grab. More conversational AI means more folks running out of free or lower paid tier tokens faster, leading to more upsell opportunities. API users will pay more in output tokens by default.
Example: I asked Claude a high-level question about p2p systems and it started writing code in 3 languages. I ignored the code and asked a follow-up about the fundamentals; it answered and then rewrote the code 3 times. After a few minutes I hit a token limit for the first time.
It's pretty ridiculous that the response style doesn't persist for Claude. You need to click into a menu to set it to 'concise' for every single conversation. If I forget to, it's immediately apparent when it spits out an absurd amount of text for a simple question.
I've had good results saying "Do not code, focus on architecture first."
In Claude Code you should use planning mode.
Totally - if anything, persona-wise I want something more like Orac from Blake's 7: to the point and blunt. https://www.youtube.com/watch?v=H9vX-x9fVyo
One of my saved memories is to always give shorter, "chat-like", concise, to-the-point answers, and to give further detail only if prompted.
I've read from several supposed AI prompt-masters that this actually reduces output quality. I can't speak to the validity of these claims though.
Forcing shorter answers will definitely reduce their quality. Every token an LLM generates is like a little bit of extra thinking time. Sometimes it needs to work up to an answer. If you end a response too quickly, such as by demanding one-word answers, it's much more likely to produce hallucinations.
Just put your requirements as the first sentence in your prompts and it will work.
add on: You can even prime it that it should shout at you and treat you like an ass*** if you prefer that :-)
> what romanian football player won the premier league
> The only Romanian football player to have won the English Premier League (as of 2025) is Florin Andone, but wait - actually, that's incorrect; he never won the league.
> ...
> No Romanian footballer has ever won the Premier League (as of 2025).
Yes, this is what we needed, more "conversational" ChatGPT... Let alone the fact the answer is wrong.
My worry is that they're training it on Q&A from the general public now, and that this tone, and more specifically, how obsequious it can be, is exactly what the general public want.
Most of the time, I suspect, people are using it like Wikipedia, but with a shortcut to cut through to the real question they want answered; and unfortunately they don't know if the answer is right or wrong. They just want to be told how bright they were for asking, and here is the answer.
OpenAI then get caught in a revenue maximising hell-hole of garbage.
God, I hope I am wrong.
LLMs only really make sense for tasks where verifying the solution (which you have to do!) is significantly easier than solving the problem: translation where you know the target and source languages, agentic coding with automated tests, some forms of drafting or copy editing, etc.
General search is not one of those! Sure, the machine can give you its sources but it won't tell you about sources it ignored. And verifying the sources requires reading them, so you don't save any time.
I've often found it helpful in search. Specifically, when the topic is well-documented and you can provide a clear description, but you're lacking the right words or terminology. Then it can help in finding the right question to ask, if not answering it. Recall when we used to laugh at people typing literal questions into the Google search bar? Those are the exact types of queries that the LLM is equipped to answer. As for the "improvements" in GPT 5.1, seems to me like another case of pushing Clippy on people who want Anton. https://www.latent.space/p/clippy-v-anton
I agree a lot with the first part. The only time I actually feel productive with them is when I can have a short feedback cycle with 100% proof of whether it's correct or not; as soon as "manual human verification" is needed, things spiral out of control quickly.
> Sure, the machine can give you its sources but it won't tell you about sources it ignored.
You can prompt for that though. Include something like "Include all the sources you came across, and explain why you considered them irrelevant" and, unsurprisingly, it'll include those. I've also added a "verify_claim" tool which it is instructed to use for any claims before sharing a final response; it checks things inside a brand new context, one call per claim. So far it works great for me with GPT-OSS-120b as a local agent, with access to search tools.
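For illustration, here's a minimal sketch of that verify_claim idea, assuming an OpenAI-compatible local server; the endpoint URL, model id, and prompt wording are my own stand-ins, and the search-tool wiring described above is omitted:

    # Minimal sketch: check each claim in a fresh context, one request per claim.
    # The endpoint URL and model name are assumptions (any OpenAI-compatible
    # local server works); search tools are left out of this sketch.
    import requests

    API_URL = "http://localhost:8080/v1/chat/completions"  # hypothetical local server

    def verify_claim(claim: str) -> str:
        """Fact-check a single claim in a brand-new context."""
        resp = requests.post(API_URL, json={
            "model": "gpt-oss-120b",  # assumed model identifier
            "messages": [
                {"role": "system", "content":
                    "You are a fact checker. Answer SUPPORTED, REFUTED, or "
                    "UNCERTAIN, then give one sentence of reasoning."},
                {"role": "user", "content": claim},
            ],
            "temperature": 0,
        })
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # One call per claim keeps each check free of the main conversation's context.
    print(verify_claim("Costel Pantilimon won the Premier League with Manchester City."))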
One of the dangers of automated tests is that if you use an LLM to generate tests, it can easily start testing implemented rather than desired behavior. Tell it to loop until tests pass, and it will do exactly that if unsupervised.
And you can't even treat the implementation as a black box, even using different LLMs, when all the frontier models are trained to have similar biases towards confidence and obsequiousness in making assumptions about the spec!
Verifying the solution in agentic coding is not nearly as easy as it sounds.
That's a major use case, especially if the definition is broad enough to include "take my expertise, knowledge, and perhaps a written document, and transmute it into other forms": slides, illustrations, flash cards, quizzes, podcasts, scripts for an inbound call center.
But there seem to be uses where a verified solution is irrelevant. Creativity generally: an image, a poem, a description of an NPC in a roleplaying game, the visuals for a music video never have to be "true", just evocative. I suppose persuasive rhetoric doesn't have to be true either, just plausible or engaging.
As for general search, I don't know that "classic search" can be meaningfully said to tell you about the sources it ignored. I will agree that using OpenAI or Perplexity for search is kind of meh, but Google's AI Mode does a reasonable job at informing you about the links it provides, and you can easily tab over to a classic search if you want. It's almost like having a depth of expertise in search helps in building a search product that incorporates an LLM...
But, yeah, if one is really uninterested in looking at sources, just chatting with a typical LLM seems a rather dubious way to get an accurate or reasonably comprehensive answer.
I'm of two minds about this.
The ass licking is dangerous to our already too-tight information bubbles, that part is clear. But that aside, I think I prefer a conversational/buddy-like interaction to an encyclopedic tone.
Intuitively I think it is easier to make the connection that this random buddy might be wrong, rather than thinking the encyclopedia is wrong. Casualness might serve to reduce the tendency to think of the output as actual truth.
Sam Altman probably can't handle any GPT models that don't ass lick to an extreme degree so they likely get nerfed before they reach the public.
> My worry is that they're training it on Q&A from the general public now, and that this tone, and more specifically, how obsequious it can be, is exactly what the general public want.
That tracks; it's what's expected of human customer service, too. Call a large company for support and you'll get the same sort of tone.
It's very frustrating that it can't be relied upon. I was asking Gemini this morning whether Uncharted 1, 2, and 3 had a remastered version for the PS5. It said no. Then 5 minutes later, on the PSN store, there were the three remastered versions for sale.
Which model did you use? With 5.1 Thinking, I get:
"Costel Pantilimon is the Romanian footballer who won the English Premier League.
"He did it twice with Manchester City, in the 2011â12 and 2013â14 seasons, earning a winnerâs medal as a backup goalkeeper. ([Wikipedia][1])
URLs:
* [https://en.wikipedia.org/wiki/Costel_Pantilimon]
* [https://www.transfermarkt.com/costel-pantilimon/erfolge/spie...]
* [https://thefootballfaithful.com/worst-players-win-premier-le...
[1]: https://en.wikipedia.org/wiki/Costel_Pantilimon?utm_source=c... "Costel Pantilimon""
I just asked ChatGPT 5.1 auto (not instant) on a Teams account, and its first response was...
I could not find a Romanian football player who has won the Premier League title.
If you like, I can check deeper records to verify whether any Romanian has been part of a title-winning squad (even if as a non-regular player) and report back.
Then I followed up with an 'ok' and it then found the right player.
Just to rule out a random error, I asked the same question two more times in separate chats to gpt 5.1 auto, below are responses...
#2: One Romanian footballer who did not win the Premier League but played in it is Dan Petrescu.
If you meant actually won the Premier League title (as opposed to just playing), I couldn't find a Romanian player who is a verified Premier League champion.
Would you like me to check more deeply (perhaps look at medal-winners lists) to see if there is a Romanian player who earned a title medal?
#3: The Romanian football player who won the Premier League is Costel Pantilimon.
He was part of Manchester City when they won the Premier League in 2011-12 and again in 2013-14. Wikipedia +1
Same:
Yes - the Romanian player is Costel Pantilimon. He won the Premier League with Manchester City in the 2011-12 and 2013-14 seasons.
If you meant another Romanian player (perhaps one who featured more prominently rather than as a backup), I can check.
Same here, but with the default 5.1 auto and no extra settings. Every time someone posts one of these I just imagine they must have misunderstood the UI settings or cluttered their context somehow.
The beauty of nondeterminism. I get:
The Romanian football player who won the Premier League is Gheorghe Hagi. He played for Galatasaray in Turkey but had a brief spell in the Premier League with Wimbledon in the 1990s, although he didn't win the Premier League with them.
However, Marius Lăcătuș won the Premier League with Arsenal in the late 1990s, being a key member of their squad.
https://chatgpt.com/s/t_6915c8bd1c80819183a54cd144b55eb2
Damn, this is a lot of self-correcting.
This sounds like my inner monologue during a test I didn't study for.
That's complete garbage.
The emojis are the cherry on top of this steaming pile of slop.
Lmao what the hell have they made
"Ah-- that's a classic confusion about football players. Your intuition is almost right-- let me break it down"
All the examples of "warmer" generations show that OpenAI's definition of warmer is synonymous with sycophantic, which is a surprise given all the criticism against that particular aspect of ChatGPT.
I suspect this approach is a direct response to the backlash against removing 4o.
I'd have more appreciation and trust in an LLM that disagreed with me more and challenged my opinions or prior beliefs. The sycophancy drives me towards not trusting anything it says.
This is why I like Kimi K2/Thinking. IME it pushes back really, really hard on any kind of non-obvious belief or statement, and it doesn't give up after a few turns - it just keeps going, iterating and refining and restating its points if you change your mind or take on board its criticisms. It's great for having a dialectic around something you've written, although somewhat unsatisfying because it'll never agree with you. But that's fine, because it isn't a person, even if my social monkey brain feels like it is and wants it to agree with me sometimes. Someone even ran a quick and dirty analysis of which models are better or worse at pushing back on the user, and Kimi came out on top:
https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...
See also the sycophancy score of Kimi K2 on Spiral-Bench: https://eqbench.com/spiral-bench.html (expand details, sort by inverse sycophancy).
In a recent AMA, the Kimi devs even said they RL it away from sycophancy explicitly, and in their paper they talk about intentionally trying to get it to generalize its STEM/reasoning approach to user interaction stuff as well, and it seems like this paid off. This is the least sycophantic model I've ever used.
Which agent do you use it with?
Google's search now has the annoying feature that a lot of searches which used to work fine now give a patronizing reply like "Unfortunately 'Haiti revolution persons' isn't a thing", or an explanation that "This is probably shorthand for [something completely wrong]"
That latter thing, where it just plain makes up a meaning and presents it as if it's real, is completely insane (and also presumably quite wasteful).
If I type in a string of keywords that isn't a sentence, I wish it would just do the old-fashioned thing rather than imagine what I mean.
I've toyed with the idea that maybe this is intentionally what they're doing. Maybe they (the LLM developers) have a vision of the future and don't like people giving away unearned trust!
Everyone telling you to use custom instructions etc. doesn't realize that they don't carry over to voice.
Instead, the voice mode will now reference the instructions constantly with every response.
Before:
Absolutely, you're so right and a lot of people would agree! Only a perceptive and curious person such as yourself would ever consider that, etc etc
After:
Ok here's the answer! No fluff, no agreeing for the sake of agreeing. Right to the point and concise like you want it. Etc etc
And no, I don't have memories enabled.
Having this problem with the voice mode as well. It makes it far less usable than it might be if it just honored the system prompts.
> All the examples of "warmer" generations show that OpenAI's definition of warmer is synonymous with sycophantic, which is a surprise given all the criticism against that particular aspect of ChatGPT.
Have you considered that "all that criticism" may come from a relatively homogenous, narrow slice of the market that is not representative of the overall market preference?
I suspect a lot of people who come from a very similar background to those making the criticism, and who likely share it, fail to consider that, because the criticism follows their own preferences, and viewing its frequency in the media they consume as representative of the market is validating.
EDIT: I want to emphasize that I also share the preference that is expressed in the criticisms being discussed, but I also know that my preferred tone for an AI chatbot would probably be viewed as brusque, condescending, and off-putting by most of the market.
I'll be honest, I like the way Claude defaults to relentless positivity and affirmation. It is pleasant to talk to.
That said, I also don't think the sycophancy in LLMs is a positive trend. I don't push back against it because it's not pleasant; I push back against it because I think the 24/7 "You're absolutely right!" machine is deeply unhealthy.
Some people are especially susceptible and get one-shot by it, some people seem to get by just fine, but I doubt it's actually good for anyone.
The sycophancy makes LLMs useless if you want to use them to help you understand the world objectively.
Equally bad is when they push an opinion strongly (usually on a controversial topic) without being able to justify it well.
I hate nothing quite the way I hate Claude jovially and endlessly raving about the 9/10 tasks it "succeeded" at after making them up, while conveniently forgetting to mention it completely and utterly failed at the main task I asked it to do.
> Have you considered that "all that criticism" may come from a relatively homogenous, narrow slice of the market that is not representative of the overall market preference?
Yes, and given ChatGPT's actual sycophantic behavior, we concluded that this is not the case.
I agree. Some of the most socially corrosive phenomena of social media are a reflection of the revealed preferences of consumers.
It is interesting. I don't need ChatGPT to say "I got you, Jason" - but I don't think I'm the target user of this behavior.
The target users for this behavior are the ones using GPT as a replacement for social interactions; these are the people who crashed out/broke down about the GPT5 changes as though their long-term romantic partner had dumped them out of nowhere and ghosted them.
I get that those people were distraught/emotionally devastated/upset about the change, but I think that fact is reason enough not to revert that behavior. AI is not a person, and making it "warmer" and "more conversational" just reinforces those unhealthy behaviors. ChatGPT should be focused on being direct and succinct, and not on this sort of "I understand that must be very frustrating for you, let me see what I can do to resolve this" call center support agent speak.
> and not on this sort of "I understand that must be very frustrating for you, let me see what I can do to resolve this"
You're triggering me.
Another type that's incredibly grating to me is the weird, empty, therapist-like follow-up questions that don't contribute to the conversation at all.
The equivalent of like (just a contrived example), a discussion about the appropriate data structure for a problem and then it asks a follow-up question like, "what other kind of data structures do you find interesting?"
And I'm just like "...huh?"
True, neither am I, but I think what we're seeing is a transition in focus. People at OpenAI have finally clued in to the idea that AGI via transformers is a pipe dream, like Elon's self-driving cars, and so OpenAI is pivoting toward a friend/digital-partner bot. Charlatan-in-chief Sam Altman recently said they're going to open up the product to adult content generation, which they wouldn't do if they still believed a serious and useful tool (in the specified use cases) were possible. Right now an LLM has three main uses: interactive rubber ducky, entertainment, and mass surveillance. Since I've been following this saga (since the GPT-2 days), my closed bench set of various tasks has been seeing a drop in metrics, not a rise. So while open bench results are improving, real performance is getting worse, and at this point it's so much worse that problems GPT-3 could solve (yes, pre-ChatGPT) are no longer solvable by something like GPT-5.
Indeed, target users are people seeking validation + kids and teenagers + people with a less developed critical mind. Stickiness with 90% of the population is valuable for Sam.
> You're rattled, so your brain is doing that thing where it catastrophizes a tiny mishap into a character flaw. But honestly? People barely register this stuff.
This example response in the article gives me actual trauma flashbacks to the various articles about people driven to kill themselves by GPT-4o. It's the exact same sentence structure.
GPT-5.1 is going to kill more people.
Warmer = US-centric? I always think that the proliferation of J.A.R.V.I.S.-type projects in the wild is down to the writing in Iron Man, and Paul Bettany's dry delivery. We want drier, not warmer. More sarcasm, less smarm.
I wish chatgpt would stop saying things like "here's a no nonsense answer" like maybe just don't include nonsense in the answer?
It's analogous to how politicians nowadays are constantly saying "let me be clear", it drives me nuts.
Another annoyance: "In my honest opinion...". Does that mean that at other times you're sharing dishonest opinions? Why would you need to declare that this time you're honest?
It might actually help it output answers with less nonsense.
As an example, in one workflow I ask ChatGPT to figure out whether the user is referring to a specific location and to output a country in JSON, like { country }.
It has some error rate at this task. Asking it for a rationale reduces the error rate to almost none: { rationale, country }. However, reordering the keys like { country, rationale } does not. You get the wrong country, and a rationale that justifies the correct one that was not given.
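For illustration, here's a minimal sketch of that key-ordering effect. Generation is left-to-right, so putting rationale before country lets the model reason before committing to an answer; the reverse order produces a country first and a post-hoc rationale. The schema and prompt wording below are my own stand-ins, not the exact workflow described above:

    # Build an extraction prompt with a configurable key order. Because the
    # model generates left-to-right, "rationale" before "country" acts as
    # built-in chain-of-thought; the reverse order forfeits that benefit.
    import json

    def extraction_prompt(text: str, rationale_first: bool = True) -> str:
        keys = ["rationale", "country"] if rationale_first else ["country", "rationale"]
        schema = json.dumps({k: f"<{k}>" for k in keys})  # dict insertion order is preserved
        return (
            f"Identify the country the user is referring to in: {text!r}\n"
            f"Respond with JSON matching this shape, keys in this order: {schema}"
        )

    print(extraction_prompt("the city with the Eiffel Tower"))  # rationale first: reliable
    print(extraction_prompt("the city with the Eiffel Tower", rationale_first=False))  # error-prone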
Maybe you used "Don't give me nonsense" in your custom system prompt?
An LLM should never refer to the user's "style" prompt like that. It should function as the model's personality, not something the user asked it to do or be like.
The system prompt is for multi-client/agent applications, so if you wish to fix something for everyone, that's the right place to put it.
That does nothing. You can add "say 'I don't know' if you are not certain or don't know the answer" and it will never say "I don't know".
That's because "certain" and "know the answer" have wildly different definitions depending on the person; you need to be more specific about what you actually mean by that. Anything that can be ambiguous will be treated ambiguously.
Anything that you've mentioned in the past (like `no nonsense`) that still exists in context will have a higher probability of being generated than other tokens.
Recently, Microsoft Copilot's replies (it's the only one allowed within our corporate network) all have the first section prefixed with "Direct answer:"
And after the short direct answer it puts the usual five-section blog-post-style answer with emoji headings.
Seems like people here are pretty negative towards a "conversational" AI chatbot.
Chatgpt has a lot of frustrations and ethical concerns, and I hate the sycophancy as much as everyone else, but I don't consider being conversational to be a bad thing.
It's just preference I guess. I understand how someone who mostly uses it as a google replacement or programming tool would prefer something terse and efficient. I fall into the former category myself.
But it's also true that I've dreamed about a computer assistant that can respond to natural language, even real-time speech - one that can imitate a human well enough to hold a conversation - since I was a kid, and now it's here.
The questions of ethics, safety, propaganda, and training on other people's hard work are valid. It's not surprising to me that using LLMs is considered uncool right now. But having a computer imitate a human really effectively hasn't stopped being awesome to me personally.
I'm not one of those people that treats it like a friend or anything, but its ability to imitate natural human conversation is one of the reasons I like it.
> I've dreamed about a computer assistant that can respond to natural language
When we dreamed about this as kids, we were dreaming about Data from Star Trek, not some chatbot that's been focus grouped and optimized for engagement within an inch of its life. LLMs are useful for many things and I'm a user myself, even staying within OpenAI's offerings, Codex is excellent, but as things stand anthropomorphizing models is a terrible idea and amplifies the negative effects of their sycophancy.
Right. I want to be conversational with my computer, I don't want it to respond in a manner that's trying to continue the conversation.
Q: "Hey Computer, make me a cup of tea" A: "Ok. Making tea."
Not: Q: "Hey computer, make me a cup of tea" A: "Oh wow, what a fantastic idea, I love tea don't you? I'll get right on that cup of tea for you. Do you want me to tell you about all the different ways you can make and enjoy tea?"
Readers of a certain age will remember the Sirius Cybernetics Corporation products from Hitch Hiker's Guide to the Galaxy.
Every product - doors, lifts, toasters, personal massagers - was equipped with intensely annoying, positive, and sycophantic GPP (Genuine People Personality)™, and their robots were sold as Your Plastic Pal Who's Fun to be With.
Unfortunately the entire workforce were put up against a wall and shot during the revolution.
I'm generally ok with it wanting a conversation, but yes, I absolutely hate it that is seems to always finish with a question even when it makes zero sense.
I didn't grow up watching Star Trek, so I'm pretty sure that's not my dream. I pictured something more like Computer from Dexter's Lab. It talks, it appears to understand, it even occasionally cracks jokes and gives sass, it's incredibly useful, but it's not at risk of being mistaken for a human.
I would have thought the Hacker News type would be dreaming about having something like Jarvis from Iron Man, not Data.
> Chatgpt has a lot of frustrations and ethical concerns, and I hate the sycophancy as much as everyone else, but I don't consider being conversational to be a bad thing.
But is this realistic conversation?
If I say to a human I don't know "I'm feeling stressed and could use some relaxation tips" and he responds with "I've got you, Ron", I'd want to reduce my interactions with him.
If I ask someone to explain a technical concept, and he responds with "Nice, nerd stat time", it's a great tell that he's not a nerd. This is how people think nerds talk, not how nerds actually talk.
Regarding spilling coffee:
"Hey â no, they didnât. Youâre rattled, so your brain is doing that thing where it catastrophizes a tiny mishap into a character flaw."
I ... don't know where to even begin with this. I don't want to be told how my brain works. This is very patronizing. If I were to say this to a human coworker who spilled coffee, it's not going to endear me to the person.
I mean, seriously, try it out with real humans.
The thing with all of this is that everyone has their own preferences for how they'd like a conversation to go. That's why everyone has some circle of friends and excludes others. The problem with their solution to a conversational style is the same one facing anyone trying to make friends: it will either attract or repel.
Yes, it's true that I have different expectations from a conversation with a computer program than with a real human. Like I said, I don't think of it the same as a friend.
Ideally, a chatbot would be able to pick up on that. It would, based on what it knows about general human behavior and what it knows about a given user, make a very good guess as to whether the user wants concise technical know-how, a brainstorming session, or an emotional support conversation.
Unfortunately, advanced features like this are hard to train for, and work best on GPT-4.5 scale models.
I agree with what you're saying.
Personally, I also think that in some situations I do prefer to use it as the Google replacement in combination with the imitated human conversation. I mostly use it to 'search' questions while I'm cooking or to ask for clothing advice, and here I think the fact that it can respond in natural language and imitate a human well enough to hold a conversation is a benefit to me.
I've seen various older people that I'm connected with on Facebook posting screenshots of chats they've had with ChatGPT.
It's quite bizarre from that small sample how many of them take pride in "baiting" or "bantering" with ChatGPT and then post screenshots showing how they "got one over" on the AI. I guess there's maybe some explanation - feeling alienated by technology, not understanding it, and so needing to "prove" something. But it's very strange and makes me feel quite uncomfortable.
Partly because of the "normal" and quite naturalistic way they talk to ChatGPT but also because some of these conversations clearly go on for hours.
So I think normies maybe do want a more conversational ChatGPT.
> So I think normies maybe do want a more conversational ChatGPT.
The backlash from GPT-5 proved that. The normies want a very different LLM from what you or I might want, and unfortunately OpenAI seems to be moving in a more direct-to-consumer focus and catering to that.
But I'm really concerned. People don't understand this technology, at all. The way they talk to it, the suicide stories, etc. point to people in general not groking that it has no real understanding or intelligence, and the AI companies aren't doing enough to educate (because why would they, they want you believe it's superintelligence).
These overly conversational chatbots will cause real-world harm to real people. They should reinforce, over and over again to the user, that they are not human, not intelligent, and do not reason or understand.
It's not really the technology itself that's the problem, as is the case with a lot of these things, it's a people & education problem, something that regulators are supposed to solve, but we aren't, we have an administration that is very anti AI regulation all in the name of "we must beat China."
I just cannot imagine myself sitting "chatting away" with an AI. It makes me feel quite sick to even contemplate it.
Another person I was talking to recently kept referring to ChatGPT as "she". "She told me X", "and I said to her..."
Very very odd, and very worrying. As you say, a big education problem.
The interesting thing is that a lot of these people are folk who are on the edges of digital literacy - people who maybe first used computers when they were in their thirties or forties - or who never really used computers in the workplace, but who now have smartphones - who are now in their sixties.
As a counterpoint, I've been using my own PC since I was 6 and know reasonably well about the innards of LLMs and agentic AI, and absolutely love this ability to hold a conversation with an AI.
Earlier today, procrastinating from work, I spent an hour and a half talking with it about the philosophy of religion and had a great time, learning a ton. Sometimes I do just want a quick response to get things done, but I find living in a world where I'm able to just dive into a deep conversation with a machine that has read the entirety of the internet is incredible.
Why is it odd?
Some people treat their pets like they're humans. Not sure why this is particularly worse.
Is it that bad? I have a robot vacuum; I put googly eyes on it and gave it a name, and now everyone in the house uses the name and he/him pronouns to refer to it.
In the future, this majority who love the artificial pampering will vastly out-vote and out-influence us.
I hope it won't suck as bad as I predict it will for actual individuals.
Personally, I want a punching bag. It's not because I'm some kind of sociopath or need to work off some aggression. It's just that I need to work the upper body muscles in a punching manner. Sometimes the leg muscles need to move, and sometimes it's the upper body muscles.
ChatGPT is the best social punching bag. I don't want to attack people on social media. I don't want to watch drama, violent games, or anything like that. I think punching bag is a good analogy.
My family members do it all the time with AI. "That's not how you pronounce protein!" "YOUR BALD. BALD. BALDY BALL HEAD."
Like a punching bag, sometimes you need to adjust the response. You wouldn't punch a wall. Does it deflect, does it mirror, is it sycophantic? The conversational updates are new toys.
My experience with GPT-5.1 so far is definitely an improvement on 5. I asked GPT-5 a relatively basic question the other day and it said "Beautiful question - and exactly the kind of subtlety that shows you're really getting into the math of MDPs," and I threw up a little bit. 5.1, on the other hand, is really frank and straight down to business. Maybe it's better at following my system prompt (I say "don't be a sycophant" or something similar in mine), but I still quite like it.