Hacker News
2 hours ago by simonw

Tip for AI skeptics: skip the data center water usage argument. At this point I think it harms your credibility - numbers like "millions of liters of water annually" (from the linked article) sound scary when presented without context, but if you compare data centers to farmland or even golf courses they're minuscule.

Other energy usage figures, air pollution, gas turbines, CO2 emissions, etc. are fine - but if you complain about water usage I think it risks discrediting the rest of your argument.

(Aside from that I agree with most of this piece, the "AGI" thing is a huge distraction.)

UPDATE an hour after posting this: I may be making an ass of myself here in that I've been arguing in this thread about comparisons between data center usage and agricultural usage of water, but that comparison doesn't hold as data centers often use potable drinking water that wouldn't be used in agriculture or for many other industrial purposes.

I still think the way these numbers are usually presented - as scary large "gallons of water" figures with no additional context to help people understand what that means - is an anti-pattern.

2 hours ago by the__alchemist

I will go meta into what you posted here: That people are classifying themselves as "AI skeptics". Many people are treating this in terms of tribal conflict and identity politics. On HN, we can do better! IMO the move is to drop the politics and discuss things on their technical merits. If we do talk about it as a debate, we can do it with open minds and intellectual honesty.

I think much of this may be a reaction to the hype promoted by tech CEOs and media outlets. People are seeing through their lies and exaggerations, and taking positions like "AI/LLMs have no value or uses", then using every argument they hear as a reason why it is bad in a broad sense. For example: energy and water concerns. That's my best guess about the concern you're braced against.

an hour ago by magicalist

> I will go meta into what you posted here: That people are classifying themselves as "AI skeptics"

The comment you're replying to is calling other people AI skeptics.

Your advice has some fine parts to it (and simonw's comment is innocuous in its use of the term), but if we're really going meta, you seem to be engaging in the tribal conflict you're decrying by lecturing an imaginary person rather than the actual context of what you're responding to.

an hour ago by Flavius

Expecting a purely technical discussion is unrealistic because many people have significant vested interests. This includes not only those with financial stakes in AI stocks but also a large number of professionals in roles that could be transformed or replaced by this technology. For these groups, the discussion is inherently political, not just technical.

an hour ago by tracerbulletx

I don't really mind if people advocate for their value judgements, but the total disregard for good faith arguments and facts is really out of control. The number of people who care at all about finding the best position through debate and are willing to adjust their position is really shockingly small across almost every issue.

24 minutes ago by thewebguyd

> a large number of professionals in roles that could be transformed or replaced by this technology.

Right, "It is difficult get a man to understand something when his salary depends on his not understanding it."

I see this sort of irrationality around AI at my workplace, with the owners constantly droning on about "we must use AI everywhere." They are completely and irrationally paranoid that the business will fail or get outpaced by a competitor if we are not "using AI." Keep in mind this is a small, 300-employee non-tech company with no real local competitors.

When asked for clarification on what they mean by "use AI", they have no answers, just "other companies are going to use AI, and we need to use AI or we will fall behind."

There's no strategy or technical merit here, no pre-defined use case people have in mind. It's purely driven by hype. We do in fact use AI - I do, and the office workers use it daily - but the reality is it has had no outward/visible effect on profitability, so it doesn't show up on the P&L at the end of the quarter except as an expense, and so the hype and mandate continue. The only thing that matters is appearing to "use AI" until the magic box makes the line go up.

25 minutes ago by hamdingers

> IMO the move is to drop the politics and discuss things on their technical merits.

I'd love this but it's impossible to have this discussion with someone who will not touch generative AI tools with a 10 foot pole.

It's not unlike when religious people condemn a book they refuse to read. The merits of the book don't matter, it's symbolic opposition to something broader.

12 minutes ago by Capricorn2481

Okay, but a lot of people are calling environmental and content theft arguments "political" in an attempt to make it sound frivolous.

It's fine if you think every non-technical criticism of AI is overblown. I use LLMs, but it's perfectly fine to start from the question of whether it's ethical, or even a net good, to use these in the first place.

People saying "ignoring all of those arguments, let's just look at the tech" are, generously, either naive or shilling. Why would we only revisit these very important topics, which are the heart of how the tech would alter our society, after it's been fully embraced?

an hour ago by lkey

> Drop the politics

Politics is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of status or resources.

Most municipalities literally do not have enough spare power to service this 1.4 trillion dollar capital rollout as planned on paper. Even if they did, the concurrent inflation of energy costs is about as political as a topic can get.

Economic uncertainty (firings, wage depression) brought on by the promises of AI is about as political as it gets. There's no 'pure world' of 'engineering only' concerns when the primary goal of many of these billionaires is to leverage this hype, real and imagined, into reshaping the global economy in their preferred form.

The only people that get to be 'apolitical' are those that have already benefitted the most from the status quo. It's a privilege.

34 minutes ago by eucyclos

There are politics and there are Politics, and I don't think the two of you are using the same definition. 'Making decisions in groups' does not require 'oversimplifying issues for the sake of tribal cohesion or loyalty'. It is a distressingly common occurrence that complex problems are oversimplified because political effectiveness requires appealing to a broader audience.

We'd all be better off if more people withheld judgement while actually engaging with the nuances of a political topic instead of pushing for their team. The capacity to do that may be a privilege but it's a privilege worth earning and celebrating.

32 minutes ago by rtkwe

Hear hear. It's funny having seen the same issue pop up in video game forums/communities: people complaining about politics in their video games after decades of completely straight-faced US military propaganda from games like Call of Duty, which wasn't "politics" because they agreed with it. To so many people, politics begins where they start to disagree.

19 minutes ago by tomwphillips

Hey Simon, author here (and reader of your blog!).

I used to share your view, but what changed my mind was reading Hao's book. I don't have it to hand, but if my memory serves, she writes about a community in Chile opposing Google building a data centre in their city. The city already suffers from drought, and the data centre, according to Google's own assessment, would abstract ~169 litres of water a second from local supplies - about the same as the entire city's consumption.
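
For scale, a rough conversion of that figure (a back-of-envelope sketch; the 169 L/s number is my recollection from the book, as noted):

    # 169 litres/second, abstracted continuously, over one day
    litres_per_second = 169
    litres_per_day = litres_per_second * 60 * 60 * 24
    print(f"~{litres_per_day / 1e6:.1f} million litres/day")  # ~14.6 million L/day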

If I remember correctly, Hao also reported on another town where salt water was being added to municipal drinking water because the drought, exacerbated by local data centres, was so severe.

It is indeed hard to imagine these quantities of water, but for me, anything on the order of a town or city's consumption is a lot. Coupled with droughts, it's a problem, in my view.

I really recommend the book.

3 minutes ago by HL33tibCe7

The fact that certain specific data centres are being proposed or built in areas with water issues may be bad, but it does not imply that all AI data centres are water guzzling drain holes that are killing Earth, which is the point you were (semi-implicitly) making in the article.

11 minutes ago by AndrewKemendo

None of which have to do with AI or AGI.

Nestle is and has been 10000x worse for global water security than all other companies and countries combined because nobody in the value chain cares about someone else’s aquifer.

It’s a socio-economic problem of externalities being ignored, which transcends any narrow technological use case.

What you describe has been true for all exported manufacturing forever.

2 hours ago by paulryanrogers

Just because there are worse abuses elsewhere doesn't mean datacenters should get a pass.

Golf and datacenters should have to pay for their externalities. And if that means both are uneconomical in arid parts of the country then that's better than bankrupting the public and the environment.

2 hours ago by simonw

From https://www.newyorker.com/magazine/2025/11/03/inside-the-dat...

> I asked the farmer if he had noticed any environmental effects from living next to the data centers. The impact on the water supply, he told me, was negligible. "Honestly, we probably use more water than they do," he said. (Training a state-of-the-art A.I. requires less water than is used on a square mile of farmland in a year.) Power is a different story: the farmer said that the local utility was set to hike rates for the third time in three years, with the most recent proposed hike being in the double digits.

The water issue really is a distraction which harms the credibility of people who lean on it. There are plenty of credible reasons to criticize data centers - use those instead!

2 hours ago by Etherlord87

A farmer is a valuable perspective, but imagine asking a lumberjack about the ecological effects of deforestation: he might know more about it than an average Joe, but there are probably better people to ask for expertise.

> Honestly, we probably use more water than they do

This kind of proves my point. Regardless of the actual truth here, it's a terrible argument to make: water availability is becoming a huge problem in a growing number of places, and this statement implies that something which in principle doesn't need water at all uses a comparable amount of water to farming, which strictly relies on it.

an hour ago by belter

> The water issue really is a distraction which harms the credibility of people who lean on it

Is that really the case? - "Data Centers and Water Consumption" - https://www.eesi.org/articles/view/data-centers-and-water-co...

"...Large data centers can consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people..."

"I Was Wrong About Data Center Water Consumption" - https://www.construction-physics.com/p/i-was-wrong-about-dat...

"...So to wrap up, I misread the Berkeley Report and significantly underestimated US data center water consumption. If you simply take the Berkeley estimates directly, you get around 628 million gallons of water consumption per day for data centers, much higher than the 66-67 million gallons per day I originally stated..."

2 hours ago by jtr1

I think the point here is that objecting to AI data center water use and not to, say, alfalfa farming in Arizona reads as reactive rather than principled. But more importantly, there are vast, imminent social harms from AI that get crowded out by water use discourse. IMO, the environmental attack on AI is more a hangover from crypto than a thoughtful attempt to evaluate the costs and benefits of this new technology.

an hour ago by danaris

But if I say "I object to AI because <list of harms> and its water use", why would you assume that I don't also object to alfalfa farming in Arizona?

Similarly, if I say "I object to the genocide in Gaza", would you assume that I don't also object to the Uyghur genocide?

This is nothing but whataboutism.

People are allowed to talk about the bad things AI does without adding a 3-page disclaimer explaining that they understand all the other bad things happening in the world at the same time.

2 hours ago by roywiggins

I don't think there's a world where a water use tax is levied such that 1) it's enough for datacenters to notice and 2) doesn't immediately bankrupt all golf courses and beef production, because the water use of datacenters is just so much smaller.

2 hours ago by bee_rider

We definitely shouldn’t worry about bankrupting golf courses, they are not really useful in any way that wouldn’t be better served by just having a park or wilderness.

Beef, I guess, is a popular type of food. I’m under the impression that most of us would be better off eating less meat, maybe we could tax water until beef became a special occasion meal.

2 hours ago by heymijo

You're not wrong.

My perspective is that of someone who wants to understand this new AI landscape in good faith. The water issue isn't the show stopper it's presented as. It's an externality, like you discuss.

And in comparison to other water usage, data centers don't match the doomsday narrative presented. Now when I see that narrative, I mentally discount it or stop reading.

Electricity though seems to be real, at least for the area I'm in. I spent some time with ChatGPT last weekend working to model an apples-to-apples comparison, and my area has seen a +48% increase in electric prices from 2023 to 2025. I modeled a typical 1,000 kWh/month usage to see what that looked like in dollar terms, and it's an extra $30-40/month.
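
A back-of-envelope version of that model (the baseline rate is my assumption, not a quoted figure; actual 2023 rates vary a lot by region):

    # Sanity check on the +48% rate hike against a 1,000 kWh/month bill
    baseline_rate = 0.07   # $/kWh in 2023 -- assumed, varies by region
    increase = 0.48        # price rise, 2023 to 2025
    usage_kwh = 1000       # modeled monthly usage
    extra = baseline_rate * increase * usage_kwh
    print(f"~${extra:.0f}/month extra")  # ~$34, inside the $30-40 range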

Is it data centers? Partly yes, straight from the utility co's mouth: "sharply higher demand projections—driven largely by anticipated data center growth"

With FAANG money, that's immaterial. But for those without it, that's just one more thing that costs more today than it did yesterday.

Coming full circle: for someone like me who is concerned with AI's actual impact on the world, engaging with the facts and understanding them within competing narratives is helpful.

2 hours ago by amarcheschi

Not only electricity; there's air pollution around some datacenters too.

https://www.politico.com/news/2025/05/06/elon-musk-xai-memph...

9 minutes ago by cesarb

Some time ago, I read the environmental impact assessment for a proposed natural gas thermal power plant, and in it they emphasized that their water usage was very low (to the point that it fit within the unused part of the water usage allowance for an already existing natural gas thermal power plant on the same site) because they used non-evaporative cooling.

What prevents data centers from using non-evaporative cooling to keep their water usage low? The water usage argument loses a lot of its relevance in that case.

2 hours ago by IgorPartola

It is ultimately a hardware problem. To simplify it greatly, an LLM neuron is a single-input, single-output function. A human brain neuron takes in thousands of inputs and produces thousands of outputs, to the point that some inputs start being processed before they even get inside the cell, by structures on the outside of it. An LLM neuron is an approximation of this. We cannot manufacture a human-level neuron to be small, fast, and energy-efficient enough with our manufacturing capabilities today. A human brain has something like 80 or 90 billion of them, and there are other types of cells that outnumber neurons by, I think, two orders of magnitude. The entire architecture is massively parallel and has a complex feedback network instead of the LLM’s rigid, mostly forward processing. When I say massively parallel I don’t mean a billion tensor units. I mean a quintillion input superpositions.

And the final kicker: the human brain runs on something like two dozen watts. An LLM takes a year of running on a few MW to train, and several kW to run.

Given this I am not certain we will get to AGI by simulating it in a GPU or TPU. We would need a new hardware paradigm.

an hour ago by rekrsiv

On the other hand, a large part of the complexity of human hardware randomly evolved for survival and only recently started playing around in the higher-order intellect game. It could be that we don't need so many neurons just for playing intellectual games in an environment with no natural selection pressure.

Evolution is winning because it's operating at a much lower scale than we are and needs less energy to achieve anything. Coincidentally, our own progress has also been tied to the rate of shrinking of our toys.

18 minutes ago by wat10000

Evolution has won so far because it had a four billion year head start. In two hundred years, technology has gone from "this multi-ton machine can do arithmetic operations on large numbers several times faster than a person" to "this box produces a convincing facsimile of human conversation, but it only emulates a trillion neurons and they're not nearly as sophisticated as real ones."

I do think we probably need a new hardware approach to get to the human level, but it does seem like it will happen in a relative blink of an eye compared to how long the brain took.

2 hours ago by friendzis

> We would need a new hardware paradigm.

It's not even that. The architecture(s) behind LLMs are nowhere near that of a brain. The brain has multiple entry points for different signals and uses different signaling across different parts. The brain of a rodent is much more complex than LLMs are.

an hour ago by samuelknight

LLM 'neurons' are not single-input/single-output functions. Most 'neurons' are matrix-vector computations that combine the products of dozens or hundreds of prior weights.
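
For illustration, a minimal sketch of what one MLP "neuron" computes (NumPy; the layer width of 768 is just an illustrative size, not any specific model):

    import numpy as np

    def neuron(x, w, b):
        # A dot product over hundreds of inputs, then a nonlinearity:
        # many-in/one-out, not single-in/single-out.
        return max(0.0, float(np.dot(w, x) + b))  # ReLU(w.x + b)

    hidden = 768                 # illustrative layer width
    x = np.random.randn(hidden)  # activations from the previous layer
    w = np.random.randn(hidden)  # this neuron's row of the weight matrix
    print(neuron(x, w, 0.0))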

In our lane the only important question to ask is "Of what value are the tokens these models output?", not "How closely can we emulate an organic brain?"

Regarding the article, I disagree with the thesis that AGI research is a waste. AGI is the moonshot goal. It's what motivated the fairly expensive experiment that produced the GPT models, and we can look at all sorts of other harebrained goals that ended up making revolutionary changes.

2 hours ago by us-merul

This is a great summary! I've joked with a coworker that while our capabilities can sometimes pale in comparison (such as dealing with massively high-dimensional data), at least we can run on just a few sandwiches per day.

44 minutes ago by HarHarVeryFunny

Assuming you want to define the goal, "AGI", as something functionally equivalent to part (or all) of the human brain, there are two broad approaches to implement that.

1) Try to build a neuron-level brain simulator - something that is a far distant possibility, not because of compute, but because we don't have a clear enough idea of how the brain is wired, how neurons work, and what level of fidelity is needed to capture all the aspects of neuron dynamics that are functionally relevant rather than just part of a wetware realization

OR

2) Analyze what the brain is doing, to extent possible given our current incomplete knowledge, and/or reduce the definition of "AGI" to a functional level, then design a functional architecture/implementation, rather than neuron level one, to implement it

The compute demands of these two approaches are massively different. It's like the difference between an electronic circuit simulator that works at gate level vs one that works at functional level.

For time being we have no choice other than following the functional approach, since we just don't know enough to build an accurate brain simulator even if that was for some reason to be seen as the preferred approach.

The power efficiency of a brain vs a gigawatt systolic array is certainly dramatic, and it would be great for the planet to close that gap, but it seems we first need to build a working "AGI" or artificial brain (however you want to define the goal) before we optimize it. Research and iteration require a flexible platform like GPUs. Maybe when we figure it out we can use more of a dataflow, brain-like approach to reduce power usage.

OTOH, look at the difference between a single-user MoE LLM and one running in a datacenter simultaneously processing multiple inputs. In the single-user case we conceptualize the MoE as saving FLOPs/power by only having one "expert" active at a time, but in the multi-user case all experts are active all the time, handling tokens from different users. The potential of a dataflow approach to save power may be similar, with all parts of the model active at the same time when handling a datacenter load, so a custom hardware realization may not be needed/relevant for power efficiency.
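
A toy illustration of that last point (top-1 routing and all shapes here are assumptions for the sketch, not any particular model):

    import numpy as np

    def moe_layer(tokens, experts, router):
        # tokens: (n, d). Route each token to its top-1 expert.
        choice = np.argmax(tokens @ router, axis=1)
        out = np.stack([tokens[i] @ experts[c] for i, c in enumerate(choice)])
        return out, set(choice.tolist())

    d, n_experts = 16, 8
    experts = [np.random.randn(d, d) for _ in range(n_experts)]
    router = np.random.randn(d, n_experts)

    _, active_single = moe_layer(np.random.randn(1, d), experts, router)
    _, active_batch = moe_layer(np.random.randn(512, d), experts, router)
    print(len(active_single), len(active_batch))  # 1 expert vs. likely all 8

With one token only one expert's weights are touched; with a large mixed batch every expert is busy, which is the sense in which the datacenter case already behaves like a dataflow machine.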

13 minutes ago by ACCount37

Or

3) Pour enough computation into a sufficiently capable search process and have it find a solution for us

Which is what we're doing now.

The bitter lesson was proven right once again. LLMs prove that you can build incredibly advanced AIs without "understanding" how they work.

an hour ago by travisgriggs

> As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

Me too. But, I worry this “want” may not be realistic/scalable.

Yesterday, I was trying to get some Bluetooth/BLE working on a Raspberry Pi CM4. I had dabbled with this 9 months ago, and things were making progress just fine then. Suddenly, with a new trixie build and who knows what else changed, I just could not get my little client to open the HCI socket. In about 10 minutes of prompt dueling between GPT and Claude, I was able to learn all about rfkill and get to the bottom of things. I’ve worked with Linux for 20+ years, and somehow had missed learning about rfkill in the mix.

I was happy and saddened. I would not have known where to turn otherwise. SO doesn’t get near the traffic it used to and is so bifurcated and policed I don’t even try anymore. I never know whether to look for a mailing list, a forum, a Discord, a channel; the newsgroups have all long since died away. There is no solidly written chapter in a canonically accepted manual, written by tech writers, on all things Bluetooth for the Linux kernel packaged with Raspbian. And to pile on, my attention span, driven by a constant diet of engagement, makes it harder to have the patience.

It’s as if we’ve made technology so complex that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.

an hour ago by wilkommen

In the short term, it may be unrealistic (as you illustrate in your story) to try to successfully navigate the increasingly fragmented, fragile, and overly complex technological world we have created without genAI's assistance. But in the medium to long term, I have a hard time seeing how a world that's so complex that we can't navigate it without genAI can survive. Someday our cars will once again have to be simple enough that people of average intelligence can understand and fix them. I believe that a society that relies so much on expertise (even for everyday things) that even the experts can't manage without genAI is too fragile to last long. It can't withstand shocks.

2 hours ago by lordleft

The language around AGI is proof, in my mind, that religious impulses don't die with the withering of religion. A desire for a totalizing solution to all woes still endures.

an hour ago by red75prime

Does language around fusion reactors ("bringing power of the sun to Earth" and the like) cause similar associations? Those situations are close in other aspects too: we have a physical system (the sun, the brain), whose functionality we try to replicate technologically.

19 minutes ago by ACCount37

You don't even have to go as far as fusion reactors. Nuclear bombs are real, and we know they work.

But surely, anyone who's talking about atomic weapons must be invoking religious imagery and the old myths of divine retribution! They can't be talking about an actual technology capable of vaporizing cities and burning people into the walls as shadows - what a ridiculous, impossible notion would that be! "World War 3" is just a good old end-of-the-world myth, the kind of myth that exists in many religions, but given a new coat of paint.

And Hiroshima and Nagasaki? It's the story of Sodom and Gomorrah, now retold for the new age. You have to be a doomsday cultist to believe that something like this could actually happen!

24 minutes ago by hitarpetar

The way the pro-nuclear crowd talks, you might think they're a persecuted religion, actually.

2 hours ago by IAmGraydon

People always create god, even if they claim not to believe in it. The rise of belief in conspiracy theories is a form of this (imagining an all powerful entity behind every random event), as is the belief in AGI. It's not a totalizing solution to all woes. It's just a way to convince oneself that the world is not random, and is therefore predictable, which makes us feel safer. That, after all, is what we are - prediction machines.

an hour ago by danielbln

The existential dread from uncertainty is so easily exploited too, and is the root cause of many of society's woes. I wonder what the antidote is, or if there is one.

44 minutes ago by casey2

It's just a scam, plain and simple. Some scams can go on for a very long time if you let the scammers run society.

Any technically superior solution needs to have a built-in scam, otherwise most followers will ignore it and the scammers won't have incentive to proselytize, e.g. Rust's safety scam.

2 hours ago by geerlingguy

I like the conclusion. For me, Whisper has radically improved CC on my video content. I used to spend a few hours translating my scripts into CCs, and tooling was poor.

Now I run it through Whisper in a couple of minutes, give it one quick pass to correct a few small hallucinations and misspellings, and I'm done.

There are big wins in AI. But those don't pump the bubble once they're solved.

And the thing that made Whisper more approachable for me was when someone spent the time to refine a great UI for it (MacWhisper).
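
For anyone curious, the basic flow with the open-source package looks roughly like this (model size and filename are placeholders; MacWhisper wraps the same idea in a GUI):

    import whisper  # the open-source openai-whisper package

    model = whisper.load_model("medium")      # pick a size/accuracy tradeoff
    result = model.transcribe("episode.mp4")  # placeholder input file
    for seg in result["segments"]:            # timestamped caption segments
        print(f"{seg['start']:7.1f} -> {seg['end']:7.1f}  {seg['text'].strip()}")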

14 minutes ago by tomwphillips

Author here. Indeed - it would be just as fantastical to deny that there has been value from deep learning, transformers, etc.

Yesterday I heard Cory Doctorow talk about a bunch of pro bono lawyers using LLMs to mine paperwork and help exonerate innocent people. Also a big win.

There's good stuff - engineering - that can be done with the underlying tech without the hyperscaling.

2 hours ago by sota_pop

Not only Whisper; much of the computer vision area is not as in vogue. I suspect that's because the truly monumental solutions unlocked are not that accessible to the average person, e.g. industrial manufacturing and robotics at scale.

25 minutes ago by dghlsakjg

I think that LLM hype is hiding a lot of very real and impactful progress in real world/robot intelligence.

An essay writing machine is cool. A machine that can competently control any robot arm, and make it immediately useful is a world-changing prospect.

Moving and manipulating objects without explicit human coded instructions will absolutely revolutionize so much of our world.

31 minutes ago by colechristensen

I think a lot of AI wins are going to end up local and free much like whisper.

Maybe it could be a little more accurate, and it would be nice if it ran a little faster, but ultimately it's 95%-complete software that can be free forever.

My guess is very many AI tasks are going to end up this way. In 5-10 years we're all going to be walking around with laptops with 100k cores and 1TB of RAM and an LLM that we talk to and it does stuff for us more or less exactly like Star Trek.

10 minutes ago by macleginn

I think he got it backwards. Whisper, the incredible things chatbots can do with machine translation and controlled text generation, unbelievably useful code-generation capabilities (if enjoyed responsibly), new heights in general and scientific question answering, etc. AI as a set of tools is already great, and users have access to it at very low cost because these people passionately believe in weirdly improbable scenarios, and their belief is infectious enough for some other people to give them enough money for capex, and for yet other people to work 996, if not worse, to push their models forward.

To put it another way, there were many talented people and lots of compute already before the AI craze really took off in the early 2020s, and tell me, what magical things were they doing instead?

2 hours ago by Etheryte

Many big names in the industry have long advocated for the idea that LLMs are a fundamental dead end. Many have also gone on and started companies to look for a new way forward. However, if you're hip deep in stock options, along with your reputation, you'll hardly want to break the mirage. So here we are.

an hour ago by wild_egg

They're a dead end for whatever their definition of "AGI" is, but still incredibly useful in many areas and not a "dead end" economically.

a minute ago by bigbuppo

Well, except for that needing a 40 year bond for a 3 year technology cycle thing.

an hour ago by red75prime

> Many big names in the industry have long advocated for the idea that LLMs are a fundamental dead end.

There should be papers on the fundamental limitations of LLMs then. Any pointers? "A single forward LLM pass has TC0 circuit complexity" isn't exactly it; modern LLMs use CoT. Anything that uses Gödel's incompleteness theorems proves too much (we don't know whether the brain is capable of hypercomputation, and most likely it isn't).

an hour ago by hoherd

"It is difficult to get a man to understand something when his salary depends upon his not understanding it" and "never argue with a man whose job depends on not being convinced" in full effect.

2 hours ago by fallingfrog

I have some idea of what the way forward is going to look like but I don't want to accelerate the development of such a dangerous technology so I haven't told anyone about it. The people working on AI are very smart and they will solve the associated challenges soon enough. The problem of how to slow down the development of these technologies- a political problem- is much more pressing right now.

an hour ago by chriswarbo

> I have some idea of what the way forward is going to look like but I don't want to accelerate the development of such a dangerous technology so I haven't told anyone about it.

Ever since "AI" was named at Dartmouth, there have been very smart people thinking that their idea will be the thing which makes it work this time. Usually, those ideas work really well in-the-small (ELIZA, SHRDLU, Automated Mathematician, etc.), but don't scale to useful problem sizes.

So, unless you've built a full-scale implementation of your ideas, I wouldn't put too much faith in them if I were you.

7 minutes ago by ACCount37

Far more common are ideas that don't work on any scale at all.

If you have something that gives a sticky +5% at 250M scale, you might have an actual winner. Almost all new ML ideas fall well short of that.

2 hours ago by random3

Uncovering and tackling the deep problems of society starts making sense once you believe/see the possibility to unlock things. The idea that anything can be slowed down or accelerated can be faulty though. What are the more pressing political problems you consider a priority?

2 hours ago by fallingfrog

By the way, downvoting me will not hurt my feelings, and I understand why you are doing it. I don't care if you believe me or not. In your position I would certainly think the same thing you are. It's fine. The future will come soon enough without my help.

5 minutes ago by ACCount37

You're being downvoted for displaying the kind of overconfidence that people consider shameful.

Everyone in ML has seen dozens to thousands of instances of "I have a radical new idea that will result in a total AI breakthrough" already. Ever wondered why the real breakthroughs are so few and far in between?


an hour ago by mikemarsh

The idea of replicating a consciousness/intelligence in a computer seems to fall apart even under materialist/atheist assumptions: what we experience as consciousness is a product of a vast number of biological systems, not just neurons firing or words spoken/thought. Even considering something as basic as how fundamental bodily movement is to mental development, or how hormones influence mood and ultimately thought, how could anyone ever hope to replicate such things via software in a way that "clicks" to add up to consciousness?

an hour ago by kalkin

Conflating consciousness and intelligence is going to hopelessly confuse any attempt to understand if or when a machine might achieve either.

(I think there's no reasonable definition of intelligence under which LLMs don't possess some, setting aside arguments about quantity. Whether they have or in principle could have any form of consciousness is much more mysterious -- how would we tell?)

4 minutes ago by mikemarsh

Defining machine consciousness is indeed mysterious; at the end of the day it depends on how much faith one puts in science fiction rather than in an objective measure.

44 minutes ago by danielbln

I don't see a strong argument here. Are you saying there is a level of complexity involved in biological systems that can not be simulated? And if so, who says sufficient approximations and abstractions aren't enough to simulate the emergent behavior of said systems?

We can simulate weather (poorly) without modeling every hydrogen atom interaction.

9 minutes ago by mikemarsh

The argument is about causation or generation, not simulation. Of course we can simulate just about anything, I could write a program that just prints "Hello, I'm a conscious being!" instead of "Hello, World!".

The weather example is a good one: you can run a program that simulates the weather in the same way my program above (and LLMs in general) simulate consciousness, but no one would say the program is _causing_ weather in any sense.

Of course, it's entirely possible that more and more people will be convinced AI is generating consciousness, especially when tricks like voice or video chat with the models are employed, but that doesn't mean that the machine is actually conscious in the same way a human body empirically already is.

21 minutes ago by hitarpetar

I guess it depends, can you tell the difference between a weather simulation and the actual world?

2 minutes ago by ACCount37

Can you?

You have weather readouts. One set is from a weather simulation - a simulated planet with simulated climate. Another is real recordings from the same place at the same planet, taken by real weather monitoring probes. They have the same starting point, but diverge over time.

Which one is real, though? Would you be able to tell?
