Hacker News
an hour ago by dkersten

I don’t want more conversational, I want more to the point. Less telling me how great my question is, less about being friendly, instead I want more cold, hard, accurate, direct, and factual results.

It’s a machine and a tool, not a person and definitely not my friend.

36 minutes ago by film42

It's a cash grab. More conversational AI means more folks running out of free or lower paid tier tokens faster, leading to more upsell opportunities. API users will pay more in output tokens by default.

For example, I asked Claude a high-level question about p2p systems and it started writing code in three languages. I ignored the code and asked a follow-up about the fundamentals; it answered and then rewrote the code three times. After a few minutes I hit a token limit for the first time.

28 minutes ago by rurp

It's pretty ridiculous that the response style doesn't persist for Claude. You need to click into a menu to set it to 'concise' for every single conversation. If I forget to, it's immediately apparent when it spits out an absurd amount of text for a simple question.

28 minutes ago by majora2007

I've had good results saying "Do not code, focus on architecture first."

23 minutes ago by phito

In Claude Code you should use plan mode.

an hour ago by Zenst

Totally - if anything, persona-wise I want something more like Orac from Blake's 7: to the point and blunt. https://www.youtube.com/watch?v=H9vX-x9fVyo

an hour ago by hypercube33

One of my saved memories is to always give shorter, chat-like, concise, to-the-point answers, and to only give further detail if prompted.

32 minutes ago by glenneroo

I've read from several supposed AI prompt-masters that this actually reduces output quality. I can't speak to the validity of these claims though.

15 minutes ago by SquareWheel

Forcing shorter answers will definitely reduce their quality. Every token an LLM generates is like a little bit of extra thinking time. Sometimes it needs to work up to an answer. If you end a response too quickly, such as by demanding one-word answers, it's much more likely to produce hallucinations.

an hour ago by ta12653421

Just put your requirements as the first sentence in your prompts and it will work.

an hour ago by ta12653421

add on: You can even prime it that it should shout at you and treat you like an ass*** if you prefer that :-)

6 hours ago by pbalau

> what romanian football player won the premier league

> The only Romanian football player to have won the English Premier League (as of 2025) is Florin Andone, but wait — actually, that’s incorrect; he never won the league.

> ...

> No Romanian footballer has ever won the Premier League (as of 2025).

Yes, this is what we needed, more "conversational" ChatGPT... Let alone the fact the answer is wrong.

6 hours ago by Quarrel

My worry is that they're training it on Q&A from the general public now, and that this tone, and more specifically, how obsequious it can be, is exactly what the general public want.

Most of the time, I suspect, people are using it like wikipedia, but with a shortcut to cut through to the real question they want answered; and unfortunately they don't know if it is right or wrong, they just want to be told how bright they were for asking it, and here is the answer.

OpenAI then get caught in a revenue maximising hell-hole of garbage.

God, I hope I am wrong.

4 hours ago by xmcqdpt2

LLMs only really make sense for tasks where verifying the solution (which you have to do!) is significantly easier than solving the problem: translation where you know the target and source languages, agentic coding with automated tests, some forms of drafting or copy editing, etc.

General search is not one of those! Sure, the machine can give you its sources but it won't tell you about sources it ignored. And verifying the sources requires reading them, so you don't save any time.

an hour ago by Zr01

I've often found it helpful in search. Specifically, when the topic is well-documented and you can provide a clear description but lack the right words or terminology, it can help you find the right question to ask, if not answer it. Remember when we used to laugh at people typing literal questions into the Google search bar? Those are exactly the types of queries an LLM is equipped to answer. As for the "improvements" in GPT 5.1, it seems to me like another case of pushing Clippy on people who want Anton. https://www.latent.space/p/clippy-v-anton

4 hours ago by embedding-shape

I agree a lot with the first part. The only time I actually feel productive with them is when I can have a short feedback cycle with 100% proof of whether it's correct or not; as soon as "manual human verification" is needed, things spiral out of control quickly.

> Sure, the machine can give you its sources but it won't tell you about sources it ignored.

You can prompt for that though. Include something like "Include all the sources you came across, and explain why you thought each was irrelevant" and, unsurprisingly, it'll include those. I've also added a "verify_claim" tool which it is instructed to use for any claims before sharing a final response; it checks things inside a brand-new context, one call per claim. So far it works great for me with GPT-OSS-120b as a local agent with access to search tools.
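A rough sketch of that "verify_claim" pattern: each claim is checked in a brand-new context, one model call per claim, so earlier conversation can't bias the check. The function names, prompt wording, and the `ask_model` stub are all made up for illustration; in a real agent `ask_model` would call your LLM client (e.g. a local GPT-OSS-120b) with an empty history.

```python
def ask_model(prompt: str) -> str:
    # Placeholder so the sketch runs; a real agent would send `prompt`
    # to the model with a fresh, empty conversation history.
    return "SUPPORTED" if "Pantilimon" in prompt else "UNSUPPORTED"

def verify_claims(claims: list[str]) -> dict[str, str]:
    """Verify each claim independently, with no shared context."""
    results = {}
    for claim in claims:
        prompt = (
            "You are a fact checker with access to search tools. "
            "Answer SUPPORTED or UNSUPPORTED for this single claim:\n"
            + claim
        )
        results[claim] = ask_model(prompt)  # one fresh call per claim
    return results
```

The isolation is the point: a checker that shares context with the conversation tends to rationalize whatever the main response already said.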

an hour ago by btown

One of the dangers of automated tests is that if you use an LLM to generate tests, it can easily start testing implemented rather than desired behavior. Tell it to loop until tests pass, and it will do exactly that if unsupervised.

And you can’t even treat implementation as a black box, even using different LLMs, when all the frontier models are trained to have similar biases towards confidence and obsequiousness in making assumptions about the spec!

Verifying the solution in agentic coding is not nearly as easy as it sounds.

2 hours ago by msabalau

That's a major use case, especially if the definition is broad enough to include "take my expertise, knowledge, and perhaps a written document, and transmute it into other forms"--slides, illustrations, flash cards, quizzes, podcasts, scripts for an inbound call center.

But there seem to be uses where a verified solution is irrelevant. Creativity generally--an image, a poem, the description of an NPC in a roleplaying game, the visuals for a music video--never has to be "true", just evocative. I suppose persuasive rhetoric doesn't have to be true either, just plausible or engaging.

As for general search, I don't know that "classic search" can be meaningfully said to tell you about the sources it ignored. I will agree that using OpenAI or Perplexity for search is kind of meh, but Google's AI Mode does a reasonable job of informing you about the links it provides, and you can easily tab over to a classic search if you want. It's almost like having a depth of expertise in search helps in building a search product that incorporates an LLM...

But, yeah, if one is really uninterested in looking at sources, just chatting with a typical LLM seems a rather dubious way to get an accurate or reasonably comprehensive answer.

5 hours ago by kace91

I’m of two minds about this.

The ass licking is dangerous to our already too tight information bubbles, that part is clear. But that aside, I think I prefer a conversational/buddylike interaction to an encyclopedic tone.

Intuitively I think it is easier to make the connection that this random buddy might be wrong, rather than thinking the encyclopedia is wrong. Casualness might serve to reduce the tendency to think of the output as actual truth.

3 hours ago by gizajob

Sam Altman probably can’t handle any GPT models that don’t ass lick to an extreme degree so they likely get nerfed before they reach the public.

29 minutes ago by ceejayoz

> My worry is that they're training it on Q&A from the general public now, and that this tone, and more specifically, how obsequious it can be, is exactly what the general public want.

That tracks; it's what's expected of human customer service, too. Call a large company for support and you'll get the same sort of tone.

3 hours ago by chud37

It's very frustrating that it can't be relied upon. I was asking Gemini this morning whether Uncharted 1, 2, and 3 had remastered versions for the PS5. It said no. Then 5 minutes later, on the PSN store, there were the three remastered versions for sale.

6 hours ago by A_D_E_P_T

Which model did you use? With 5.1 Thinking, I get:

"Costel Pantilimon is the Romanian footballer who won the English Premier League.

"He did it twice with Manchester City, in the 2011–12 and 2013–14 seasons, earning a winner’s medal as a backup goalkeeper. ([Wikipedia][1])

URLs:

* [https://en.wikipedia.org/wiki/Costel_Pantilimon]

* [https://www.transfermarkt.com/costel-pantilimon/erfolge/spie...]

* [https://thefootballfaithful.com/worst-players-win-premier-le...

[1]: https://en.wikipedia.org/wiki/Costel_Pantilimon?utm_source=c... "Costel Pantilimon""

5 hours ago by marginalx

I just asked ChatGPT 5.1 auto (not instant) on a Teams account, and its first response was...

I could not find a Romanian football player who has won the Premier League title.

If you like, I can check deeper records to verify whether any Romanian has been part of a title-winning squad (even if as a non-regular player) and report back.

Then I followed up with an 'ok' and it then found the right player.

5 hours ago by marginalx

Just to rule out a random error, I asked the same question two more times in separate chats to gpt 5.1 auto, below are responses...

#2: One Romanian footballer who did not win the Premier League but played in it is Dan Petrescu.

If you meant actually won the Premier League title (as opposed to just playing), I couldn’t find a Romanian player who is a verified Premier League champion.

Would you like me to check more deeply (perhaps look at medal-winners lists) to see if there is a Romanian player who earned a title medal?

#3: The Romanian football player who won the Premier League is Costel Pantilimon.

He was part of Manchester City when they won the Premier League in 2011-12 and again in 2013-14. Wikipedia +1

5 hours ago by RobinL

Same:

Yes — the Romanian player is Costel Pantilimon. He won the Premier League with Manchester City in the 2011-12 and 2013-14 seasons.

If you meant another Romanian player (perhaps one who featured more prominently rather than as a backup), I can check.

an hour ago by sigmoid10

Same here, but with the default 5.1 auto and no extra settings. Every time someone posts one of these I just imagine they must have misunderstood the UI settings or cluttered their context somehow.

4 hours ago by Traubenfuchs

The beauty of nondeterminism. I get:

The Romanian football player who won the Premier League is Gheorghe Hagi. He played for Galatasaray in Turkey but had a brief spell in the Premier League with Wimbledon in the 1990s, although he didn't win the Premier League with them.

However, Marius Lăcătuș won the Premier League with Arsenal in the late 1990s, being a key member of their squad.

5 hours ago by 0xdeafbeef
2 hours ago by djeastm

This sounds like my inner monologue during a test I didn't study for.

4 hours ago by saaaaaam

That's complete garbage.

44 minutes ago by zingababba

The emojis are the cherry on top of this steaming pile of slop.

3 hours ago by r_lee

Lmao what the hell have they made

6 minutes ago by estimator7292

"Ah-- that's a classic confusion about football players. Your intuition is almost right-- let me break it down"

a day ago by minimaxir

All the examples of "warmer" generations show that OpenAI's definition of warmer is synonymous with sycophantic, which is a surprise given all the criticism against that particular aspect of ChatGPT.

I suspect this approach is a direct response to the backlash against removing 4o.

a day ago by captainkrtek

I'd have more appreciation and trust in an LLM that disagreed with me more and challenged my opinions or prior beliefs. The sycophancy drives me towards not trusting anything it says.

20 hours ago by logicprog

This is why I like Kimi K2/Thinking. IME it pushes back really, really hard on any kind of non-obvious belief or statement, and it doesn't give up after a few turns — it just keeps going, iterating and refining and restating its points if you change your mind or take on its criticisms. It's great for having a dialectic around something you've written, although somewhat unsatisfying because it'll never agree with you. But that's fine, because it isn't a person, even if my social monkey brain feels like it is and sometimes wants it to agree with me. Someone even ran a quick and dirty analysis of which models are better or worse at pushing back on the user, and Kimi came out on top:

https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced...

See also the sycophancy score of Kimi K2 on Spiral-Bench: https://eqbench.com/spiral-bench.html (expand details, sort by inverse sycophancy).

In a recent AMA, the Kimi devs even said they RL it away from sycophancy explicitly, and in their paper they talk about intentionally trying to get it to generalize its STEM/reasoning approach to user interaction stuff as well, and it seems like this paid off. This is the least sycophantic model I've ever used.

17 hours ago by seunosewa

Which agent do you use it with?

11 hours ago by vintermann

Google's search now has the annoying feature that a lot of searches which used to work fine now give a patronizing reply like "Unfortunately 'Haiti revolution persons' isn't a thing", or an explanation that "This is probably shorthand for [something completely wrong]"

5 hours ago by exasperaited

That latter thing — where it just plain makes up a meaning and presents it as if it's real — is completely insane (and also presumably quite wasteful).

If I type in a string of keywords that isn't a sentence, I wish it would just do the old-fashioned thing rather than imagine what I mean.

an hour ago by ahsillyme

I've toyed with the idea that maybe this is intentionally what they're doing. Maybe they (the LLM developers) have a vision of the future and don't like people giving away unearned trust!

2 hours ago by transcriptase

Everyone telling you to use custom instructions etc. doesn't realize that they don't carry over to voice.

Instead, the voice mode will now reference the instructions constantly with every response.

Before:

Absolutely, you’re so right and a lot of people would agree! Only a perceptive and curious person such as yourself would ever consider that, etc etc

After:

Ok here’s the answer! No fluff, no agreeing for the sake of agreeing. Right to the point and concise like you want it. Etc etc

And no, I don’t have memories enabled.

an hour ago by cryoshon

Having this problem with the voice mode as well. It makes it far less usable than it might be if it just honored the system prompts.

12 hours ago by dragonwriter

> All the examples of "warmer" generations show that OpenAI's definition of warmer is synonymous with sycophantic, which is a surprise given all the criticism against that particular aspect of ChatGPT.

Have you considered that “all that criticism” may come from a relatively homogenous, narrow slice of the market that is not representative of the overall market preference?

I suspect a lot of people who come from a very similar background to those making the criticism, and who likely share it, fail to consider that, because the criticism matches their own preferences, and viewing its frequency in the media they consume as representative of the market is validating.

EDIT: I want to emphasize that I also share the preference that is expressed in the criticisms being discussed, but I also know that my preferred tone for an AI chatbot would probably be viewed as brusque, condescending, and off-putting by most of the market.

9 hours ago by TOMDM

I'll be honest, I like the way Claude defaults to relentless positivity and affirmation. It is pleasant to talk to.

That said, I also don't think the sycophancy in LLMs is a positive trend. I don't push back against it because it's unpleasant; I push back against it because I think the 24/7 "You're absolutely right!" machine is deeply unhealthy.

Some people are especially susceptible and get one shot by it, some people seem to get by just fine, but I doubt it's actually good for anyone.

an hour ago by jfoster

The sycophancy makes LLMs useless if you want to use them to help you understand the world objectively.

Equally bad is when they push an opinion strongly (usually on a controversial topic) without being able to justify it well.

7 hours ago by endymi0n

I hate NOTHING quite the way Claude jovially and endlessly raves about the 9/10 tasks it "succeeded" at after making them up, while conveniently forgetting to mention that it completely and utterly failed at the main task I asked it to do.

8 hours ago by coldtea

>Have you considered that “all that criticism” may come from a relatively homogenous, narrow slice of the market that is not representative of the overall market preference?

Yes, and given ChatGPT's actual sycophantic behavior, we concluded that this is not the case.

9 hours ago by Hammershaft

I agree. Some of the most socially corrosive phenomena of social media are a reflection of the revealed preferences of consumers.

a day ago by jasonjmcghee

It is interesting. I don't need ChatGPT to say "I got you, Jason" - but I don't think I'm the target user of this behavior.

a day ago by danudey

The target users for this behavior are the ones using GPT as a replacement for social interactions; these are the people who crashed out/broke down about the GPT5 changes as though their long-term romantic partner had dumped them out of nowhere and ghosted them.

I get that those people were distraught/emotionally devastated/upset about the change, but I think that fact is reason enough not to revert that behavior. AI is not a person, and making it "warmer" and "more conversational" just reinforces those unhealthy behaviors. ChatGPT should be focused on being direct and succinct, and not on this sort of "I understand that must be very frustrating for you, let me see what I can do to resolve this" call center support agent speak.

a day ago by jasonjmcghee

> and not on this sort of "I understand that must be very frustrating for you, let me see what I can do to resolve this"

You're triggering me.

Another type that are incredibly grating to me are the weird empty / therapist like follow-up questions that don't contribute to the conversation at all.

The equivalent of like (just a contrived example), a discussion about the appropriate data structure for a problem and then it asks a follow-up question like, "what other kind of data structures do you find interesting?"

And I'm just like "...huh?"

21 hours ago by Grimblewald

True, neither here, but I think what we're seeing is a transition in focus. People at OAI have finally clued in on the idea that AGI via transformers is a pipe dream, like Elon's self-driving cars, and so OAI is pivoting toward a friend/digital-partner bot. Charlatan-in-chief Sam Altman recently did say they're going to open the product up to adult content generation, which they wouldn't do if they still believed some serious and useful tool (in the specified use cases) were possible. Right now an LLM has three main uses: interactive rubber ducky, entertainment, and mass surveillance. Since I've been following this saga (since GPT-2 days), my closed bench set of various tasks has been seeing a drop in metrics, not a rise. So while open bench results are improving, real performance is getting worse, and at this point it's so much worse that problems GPT-3 could solve (yes, pre-ChatGPT) are no longer solvable by something like GPT-5.

a day ago by nerbert

Indeed, target users are people seeking validation + kids and teenagers + people with a less developed critical mind. Stickiness with 90% of the population is valuable for Sam.

2 hours ago by 827a

> You’re rattled, so your brain is doing that thing where it catastrophizes a tiny mishap into a character flaw. But honestly? People barely register this stuff.

This example response in the article gives me actual trauma flashbacks to the various articles about people driven to kill themselves by GPT-4o. It's the exact same sentence structure.

GPT-5.1 is going to kill more people.

an hour ago by dickiedyce

Warmer = US-centric? I always think the proliferation of J.A.R.V.I.S.-type projects in the wild is down to the writing in Iron Man and Paul Bettany's dry delivery. We want drier, not warmer. More sarcasm, less smarm.

12 hours ago by mips_avatar

I wish chatgpt would stop saying things like "here's a no nonsense answer" like maybe just don't include nonsense in the answer?

5 hours ago by svantana

It's analogous to how politicians nowadays are constantly saying "let me be clear", it drives me nuts.

an hour ago by embedding-shape

Another annoyance: "In my honest opinion...". Does that mean that the other times you're sharing dishonest opinions? Why would you need to declare that this time you're being honest?

2 hours ago by gilfoy

It might actually help it output answers with less nonsense.

As an example: in one workflow I ask ChatGPT to figure out whether the user is referring to a specific location and to output a country in JSON like { country }.

It has some error rate at this task. Asking it for a rationale first improves the error rate to almost none: { rationale, country }. However, reordering the keys as { country, rationale } does not. You get the wrong country, plus a rationale that justifies the correct one it didn't give.

7 hours ago by amelius

Maybe you used "Don't give me nonsense" in your custom system prompt?

6 hours ago by Sharlin

An LLM should never refer to the user's "style" prompt like that. It should function as the model's personality, not something the user asked it to do or be like.

6 hours ago by mycall

System prompt is for multi-client/agent applications, so if you wish to fix something for everyone, that is the right place to put it.

2 hours ago by AlwaysRock

That does nothing. You can add, “say I don’t know if you are not certain or don’t know the answer” and it will never say I don’t know.

an hour ago by embedding-shape

That's because "certain" and "know the answer" have wildly different definitions depending on the person; you need to be more specific about what you actually mean by that. Anything that can be ambiguous will be treated ambiguously.

Anything that you've mentioned in the past (like `no nonsense`) that still exists in context will have a higher probability of being generated than other tokens.

5 hours ago by albert_e

Recently, Microsoft Copilot's (the only one allowed within our corporate network) replies all have the first section prefixed with "Direct answer:"

And after the short direct answer it puts the usual five-section blog-post-style answer with emoji headings.

21 hours ago by engeljohnb

Seems like people here are pretty negative towards a "conversational" AI chatbot.

Chatgpt has a lot of frustrations and ethical concerns, and I hate the sycophancy as much as everyone else, but I don't consider being conversational to be a bad thing.

It's just preference I guess. I understand how someone who mostly uses it as a google replacement or programming tool would prefer something terse and efficient. I fall into the former category myself.

But it's also true that I've dreamed about a computer assistant that can respond to natural language, even real-time speech -- and can imitate a human well enough to hold a conversation -- since I was a kid, and now it's here.

The questions of ethics, safety, propaganda, and training on other people's hard work are valid. It's not surprising to me that using LLMs is considered uncool right now. But having a computer imitate a human really effectively hasn't stopped being awesome to me personally.

I'm not one of those people that treats it like a friend or anything, but its ability to imitate natural human conversation is one of the reasons I like it.

21 hours ago by qsort

> I've dreamed about a computer assistant that can respond to natural language

When we dreamed about this as kids, we were dreaming about Data from Star Trek, not some chatbot that's been focus grouped and optimized for engagement within an inch of its life. LLMs are useful for many things and I'm a user myself, even staying within OpenAI's offerings, Codex is excellent, but as things stand anthropomorphizing models is a terrible idea and amplifies the negative effects of their sycophancy.

20 hours ago by thewebguyd

Right. I want to be conversational with my computer, I don't want it to respond in a manner that's trying to continue the conversation.

Q: "Hey Computer, make me a cup of tea"
A: "Ok. Making tea."

Not:

Q: "Hey computer, make me a cup of tea"
A: "Oh wow, what a fantastic idea, I love tea don't you? I'll get right on that cup of tea for you. Do you want me to tell you about all the different ways you can make and enjoy tea?"

6 hours ago by TheOtherHobbes

Readers of a certain age will remember the Sirius Cybernetics Corporation products from Hitch Hiker's Guide to the Galaxy.

Every product - doors, lifts, toasters, personal massagers - was equipped with intensely annoying, positive, and sycophantic GPP (Genuine People Personality)™, and their robots were sold as Your Plastic Pal Who's Fun to be With.

Unfortunately the entire workforce were put up against a wall and shot during the revolution.

17 hours ago by falcor84

I'm generally ok with it wanting a conversation, but yes, I absolutely hate that it seems to always finish with a question, even when it makes zero sense.

3 hours ago by rrnechmech

[dead]

20 hours ago by engeljohnb

I didn't grow up watching Star Trek, so I'm pretty sure that's not my dream. I pictured something more like Computer from Dexter's Lab. It talks, it appears to understand, it even occasionally cracks jokes and gives sass, it's incredibly useful, but it's not at risk of being mistaken for a human.

11 hours ago by qwertytyyuu

I would have thought the Hacker News type would be dreaming about having something like Jarvis from Iron Man, not Data.

17 minutes ago by BeetleB

> Chatgpt has a lot of frustrations and ethical concerns, and I hate the sycophancy as much as everyone else, but I don't consider being conversational to be a bad thing.

But is this realistic conversation?

If I say to a human I don't know "I'm feeling stressed and could use some relaxation tips" and he responds with "I’ve got you, Ron" I'd want to reduce my interactions with him.

If I ask someone to explain a technical concept, and he responds with "Nice, nerd stat time", it's a great tell that he's not a nerd. This is how people think nerds talk, not how nerds actually talk.

Regarding spilling coffee:

"Hey — no, they didn’t. You’re rattled, so your brain is doing that thing where it catastrophizes a tiny mishap into a character flaw."

I ... don't know where to even begin with this. I don't want to be told how my brain works. This is very patronizing. If I were to say this to a human coworker who spilled coffee, it's not going to endear me to the person.

I mean, seriously, try it out with real humans.

The thing with all of this is that everyone has his/her preferences on how they'd like a conversation. And that's why everyone has some circle of friends, and exclude others. The problem with their solution to a conversational style is the same as one trying to make friends: It will either attract or repel.

6 minutes ago by engeljohnb

Yes, it's true that I have different expectations from a conversation with a computer program than with a real human. Like I said, I don't think of it the same as a friend.

8 hours ago by ACCount37

Ideally, a chatbot would be able to pick up on that. It would, based on what it knows about general human behavior and what it knows about a given user, make a very good guess as to whether the user wants concise technical know-how, a brainstorming session, or an emotional support conversation.

Unfortunately, advanced features like this are hard to train for, and work best on GPT-4.5 scale models.

4 hours ago by vxvrs

I agree with what you're saying.

Personally, I also think that in some situations I do prefer to use it as the google replacement in combination with the imitated human conversations. I mostly use it to 'search' questions while I'm cooking or ask for clothing advice, and here I think the fact that it can respond in natural language and imitate a human to hold a conversation is benefit to me.

21 hours ago by saaaaaam

I’ve seen various older people that I’m connected with on Facebook posting screenshots of chats they’ve had with ChatGPT.

It’s quite bizarre from that small sample how many of them take pride in “baiting” or “bantering” with ChatGPT and then post screenshots showing how they “got one over” on the AI. I guess there’s maybe some explanation - feeling alienated by technology, not understanding it, and so needing to “prove” something. But it’s very strange and makes me feel quite uncomfortable.

Partly because of the “normal” and quite naturalistic way they talk to ChatGPT but also because some of these conversations clearly go on for hours.

So I think normies maybe do want a more conversational ChatGPT.

20 hours ago by thewebguyd

> So I think normies maybe do want a more conversational ChatGPT.

The backlash from GPT-5 proved that. The normies want a very different LLM from what you or I might want, and unfortunately OpenAI seems to be moving in a more direct-to-consumer focus and catering to that.

But I'm really concerned. People don't understand this technology, at all. The way they talk to it, the suicide stories, etc. point to people in general not grokking that it has no real understanding or intelligence, and the AI companies aren't doing enough to educate (because why would they, they want you to believe it's superintelligence).

These overly conversational chatbots will cause real-world harm to real people. They should reinforce, over and over again to the user, that they are not human, not intelligent, and do not reason or understand.

It's not really the technology itself that's the problem, as is the case with a lot of these things; it's a people and education problem, something regulators are supposed to solve. But we aren't solving it: we have an administration that is very anti-AI-regulation, all in the name of "we must beat China."

18 hours ago by saaaaaam

I just cannot imagine myself sitting just “chatting away” with an AI. It makes me feel quite sick to even contemplate it.

Another person I was talking to recently kept referring to ChatGPT as “she”. “She told me X”, “and I said to her…”

Very very odd, and very worrying. As you say, a big education problem.

The interesting thing is that a lot of these people are folk who are on the edges of digital literacy - people who maybe first used computers when they were in their thirties or forties - or who never really used computers in the workplace, but who now have smartphones - who are now in their sixties.

17 hours ago by falcor84

As a counterpoint, I've been using my own PC since I was 6 and know reasonably well about the innards of LLMs and agentic AI, and absolutely love this ability to hold a conversation with an AI.

Earlier today, procrastinating from work, I spent an hour and a half talking with it about the philosophy of religion and had a great time, learning a ton. Sometimes I do just want a quick response to get things done, but I find living in a world where I'm able to just dive into a deep conversation with a machine that has read the entirety of the internet is incredible.

2 hours ago by nl

Why is it odd?

Some people treat their pets like they're human. Not sure why this is particularly worse.

11 hours ago by qwertytyyuu

Is it that bad? I have a robot vacuum; I put googly eyes on it and gave it a name, and now everyone in the house uses the name and refers to it as he/him.

4 hours ago by thoroughburro

In the future, this majority who love the artificial pampering will vastly out-vote and out-influence us.

I hope it won’t suck as bad as I predict it will for actual individuals.

10 hours ago by muzani

Personally, I want a punching bag. It's not because I'm some kind of sociopath or need to work off some aggression. It's just that I need to work the upper body muscles in a punching manner. Sometimes the leg muscles need to move, and sometimes it's the upper body muscles.

ChatGPT is the best social punching bag. I don't want to attack people on social media. I don't want to watch drama, violent games, or anything like that. I think punching bag is a good analogy.

My family members do it all the time with AI. "That's not how you pronounce protein!" "YOUR BALD. BALD. BALDY BALL HEAD."

Like a punching bag, sometimes you need to adjust the response. You wouldn't punch a wall. Does it deflect, does it mirror, is it sycophantic? The conversational updates are new toys.

30 minutes ago by simpetre

My experience with GPT-5.1 so far is definitely an improvement on 5 - I asked GPT-5 a relatively basic question the other day and it said "Beautiful question — and exactly the kind of subtlety that shows you’re really getting into the math of MDPs." and I threw up a little bit - 5.1 on the other hand is really frank, and straight down to business. Maybe it's better at following my system prompt (I say don't be a sycophant or something similar in mine), but I still quite like it.
