It's not about persuading you from "russian bot farms." Which I think is a ridiculous and unnecessarily reductive viewpoint.
It's about hijacking all of your federal and commercial data that these companies can get their hands on and building a highly specific and detailed profile of you. DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir. Then using AI to either imitate you or to possibly predict your reactions to certain stimuli.
Then presumably the game is finding the best way to turn you into a human slave of the state. I assure you, they're not going to use twitter to manipulate your vote for the president, they have much deeper designs on your wealth and ultimately your own personhood.
It's too easy to punch down. I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.
> DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir
Do you have any actual evidence of this?
> I recommend anyone presume the best of actual people and the worst of our corporations and governments
Corporations and governments are made of actual people.
> Then presumably the game is finding the best way to turn you into a human slave of the state.
"the state" doesn't have one grand agenda for enslavement. I've met people who work for the state at various levels and the policies they support that might lead towards that end result are usually not intentionally doing so.
"Don't attribute to malice what can be explained by incompetence"
>Do you have any actual evidence of this?
Apart from the exfiltration of data, the complete absence of any savings or efficiencies, and the fact that DOGE closed as soon as the exfiltration was over?
>Corporations and governments are made of actual people.
And we know how well that goes.
>"the state" doesn't have one grand agenda for enslavement.
The government doesn't. The people who own the government clearly do. If they didn't they'd be working hard to increase economic freedom, lower debt, invest in public health, make education better and more affordable, make it easier to start and run a small business, limit the power of corporations and big money, and clamp down on extractive wealth inequality.
They are very very clearly and obviously doing the opposite of all of these things.
And they have a history of links to the old slave states, and both a commercial and personal interest in neo-slavery - such as for-profit prisons, among other examples.
All of this gets sold as "freedom", but even Orwell had that one worked out.
Those who have been paying attention to how election fixers like SCL/Cambridge Analytica work(ed) know where the bodies are buried. The whole point of these operations is to use personalised, individual data profiling to influence voting behaviour, by creating messaging that triggers individual responses that can be aggregated into a pattern of mass influence leveraged through social media.
> Apart from the exfiltration of data, the complete absence of any savings or efficiencies, and the fact that DOGE closed as soon as the exfiltration was over?
IMHO everyone kinda knew from the start that DOGE wouldn't achieve much because the cost centers where gains could realistically be made are off-limits (mainly social security and medicare/medicaid). What that leaves you with is making cuts in other small areas, and sure, you could cut a few billion here and there, but when compared against the government's budget, that's a drop in the bucket.
I think we're mistaking incompetence for malice with regard to DOGE here
> The people who own the government clearly do.
Has anyone in this thread ever met an actual person? All of the ones I know are cartoonishly bad at keeping secrets, and even worse at making long term plans.
The closest thing we have to anyone with a long term plan is silly shit like Putin's ridiculous rebuilding of the Russian Empire or religious fundamentalist horseshit like Project 2025 that will die with the elderly simpletons that run it.
These guys aren't masterminds, they're dumbasses who read books written by different dumbasses and make plans that won't survive contact with reality.
Let's face it, both Orwell and Huxley were wrong. They both assumed the ruling class would be competent. Huxley was closest, but even he had to invent the Alphas. Sadly our Alphas are really just Betas with too much self esteem.
Maybe AI will one day give us turbocharged dumbasses who are actually competent. For now I think we're safe from all but short term disruption.
> > DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir
> Do you have any actual evidence of this?
I will not comment on motives, but DOGE absolutely shredded the safeguards and firewalls that were created to protect privacy and prevent dangerous and unlawful aggregations of sensitive personal data.
They obtained accesses that would have taken months by normal protocols and would have been outright denied in most cases, and then used it with basically zero oversight or accountability.
It was a huge violation of anything resembling best practices from both a technological and bureaucratic perspective.
> I will not comment on motives, but DOGE absolutely shredded the safeguards and firewalls that were created to protect privacy and prevent dangerous and unlawful aggregations of sensitive personal data.
Do you have any actual evidence of this?
> Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30.186) that presented valid login credentials for a DOGE employee account
> "Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating," Berulis wrote. "There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers."
https://krebsonsecurity.com/2025/04/whistleblower-doge-sipho...
I'm surprised this didn't make bigger news.
> Corporations and governments are made of actual people.
Corporations and governments are made up of processes which are carried out by people. The people carrying out those processes don't decide what they are.
Also, legally, in the United States corporations are people.
Bang on.
> It's not about persuading you from "russian bot farms." Which I think is a ridiculous and unnecessarily reductive viewpoint.
Not an accidental 'viewpoint'. A deliberate framing to exclude exactly what you pointed out from the discourse. Sure there are dummies who actually believe it, but they are not serious humans.
If the supposedly evil russians or their bots are the enemy then people pay much less attention to the real problems at home.
They really do run Russian bot farms though. It isn't a secret. Some of their planning reports have leaked.
There are people whose job it is day in day out to influence Western opinion. You can see their work under any comment about Ukraine on twitter, they're pretty easy to recognize but they flood the zone.
Sure, they exist (wouldn't be credible if they didn't). But it's a red herring.
> There are people whose job it is day in day out to influence Western opinion
CNN/CIA/NBC/ABC/FBI? etc?
We can have Russian bot problems and domestic bot problems simultaneously.
We can also have bugs crawling under your skin trying to control your mind.
> Then presumably the game is finding the best way to turn you into a human slave of the state.
I'm sorry, I think you dropped your tinfoil hat. Here it is.
My hn comments are a better (and probably not particularly good) view into my personality than any data the government could conceivably have collected.
If what you say is true, why should we fear their bizarre mind control fantasy?
Not every person has bared their soul on HN.
Yeah, I haven't either. That's my point.
Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to generate propaganda more cheaply tacked on as a buzzword link.
However, exactly the same applies with, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this.
I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (or more than they should, at any rate). Partially because it feels like it comes from a place of authority, and partially because of how self-confident AI always sounds.
The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or me, having interstitial control of that kind of messaging. See, all of the grok ai tweaks over the past however long.
> The real concern for me is incredibly rich people with no empathy for you or me, having interstitial control of that kind of messaging. See, all of the grok ai tweaks over the past however long.
Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.
More than one danger from any given tech can be true at the same time. Coal plants can produce local smog as well as global warming.
There's certainly some AI risks that are the same as human risks, just as you say.
But even though LLMs have very human failures (IMO because the models anthropomorphise themselves as part of their training, thus leading to the outward behaviours of our emotions and thus emit token sequences such as "I'm sorry" or "how embarrassing!" when they (probably) didn't actually create any internal structure that can have emotions like sorrow and embarrassment), that doesn't generalise to all AI.
Any machine learning system that is given a poor quality fitness function to optimise, will optimise whatever that fitness function actually is, not what it was meant to be: "Literal minded genie" and "rules lawyering" may be well-worn tropes for good reason, likewise work-to-rule as a union tactic, but we've all seen how much more severe computers are at being literal-minded than humans.
I think people who care about superintelligent AI risk don't believe an AI that is subservient to humans is the solution to AI alignment, for exactly the same reasons as you. Stuff like Coherent Extrapolated Volition* (see the paper with this name) which focuses on what all mankind would want if they know more and they were smarter (or something like that) would be a way to go.
*But Yudkowsky ditched CEV years ago, for reasons I don't understand (but I admit I haven't put in the effort to understand).
When I was visiting home last year, I noticed my mom would throw her dog's poop in random people's bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other people's property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look it up" whether I was right.
A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".
I don't find your mother's reaction bizarre. When people are told that some behavior they've been doing for years is bad for reasons X,Y,Z, it's typical to be defensive and skeptical. The fact that your mother really did follow up and check your reasons demonstrates that she takes your point of view seriously. If she didn't, she wouldn't have bothered to verify your assertions, and she wouldn't have told you you were right all along.
As far as trusting AI, I presume your mother was asking ChatGPT, not Llama 7B or something. That the LLM backed up your reasoning rather than telling her that dog feces in bushes is harmless isn't just happenstance; it's because the big frontier commercial models really do know a lot.
That isn't to say the LLMs know everything, or that they're right all the time, but they tend to be more right than wrong. I wouldn't trust an LLM for medical advice over, say, a doctor, or for electrical advice over an electrician. But I'd absolutely trust ChatGPT or Claude for medical advice over an electrician, or for electrical advice over a medical doctor.
But to bring the point back to the article, we might currently be living in a brief period where these big corporate AIs can be reasonably trusted. Google's Gemini is absolutely going to become ad driven, and OpenAI seems on the path to following the same direction. xAI's Grok is already practicing Elon-thought. Not only will the models show ads, but they'll be trained to tell their users what they want to hear because humans love confirmation bias. Future models may well tell your mother that dog feces can safely be thrown in bushes, if that's the answer that will make her likelier to come back and see some ads next time.
On the one hand, confirming a new piece of information with a second source is good practice (even if we should trust our family implicitly on such topics). On the other, I'm not even a dog person and I understand the etiquette here. So, really, this story sounds like someone outsourcing their common sense or common courtesy to a machine, which is scary to me.
However, maybe she was just making conversation & thought you might be impressed that she knows what AI is and how to use it.
Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically.
Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you're in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don't bother picking up.
When the LLMs output supposedly convincing BS that "people" (I assume you mean on average, not e.g. HN commentariat) trust, they aren't doing anything that's difficult for humans (assuming the humans already at least minimally understand the topic they're about to BS about). They're just doing it efficiently and shamelessly.
> Partially because it feels like it comes from a place of authority, and partially because of how self confident AI always sounds.
To add to that, this research paper[1] argues that people with low AI literacy are more receptive to AI messaging because they find it magical.
The paper is now published but it's behind a paywall, so I shared the working-paper link.
[1] https://thearf-org-unified-admin.s3.amazonaws.com/MSI_Report...
But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].
People naturally conform _themselves_ to social expectations. You don't need to enforce anything. If you alter their perception of those expectations you can manipulate them into taking actions under false pretenses. It's an abstract form of lying. It's astroturfing at a "hyperscale."
The problem is this seems to work best only when the technique is used sparingly and the messages are delivered through multiple media avenues simultaneously. I think there are very weak returns, particularly when multiple actors use the techniques at the same time in opposition to each other and are limited to social media. Once people perceive a social stalemate they either avoid the issue or use their personal experiences to make their decisions.
But social networks are the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet you somehow need to convince people to read/watch it. Ads can help but it's expensive. LLMs can help with creating fake videos, but computer graphics were already used for this.
With modern algorithmic social networks you can instead game the feed, and even people who would not choose your media will start to see your posts. And even posts they want to see can be flooded with comments trying to convince them of whatever is paid for. It's cheaper than political advertising and not bound by the law.
Before AI it was done by trolls on payroll and now they can either maintain 10x more fake accounts or completely automate fake accounts using AI agents.
Social networks are not a prerequisite for sentiment shaping by AI.
Every time you interact with an AI, its responses and persuasive capabilities shape how you think.
Good point - it's not a previously nonexistent mechanism - but AI leverages it even more. A Russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroots preferences) might not be as leveraged, causing the system to be more heavily influenced by the clearly pursued goals (which are often malicious)
It's not only about efficiency. When AI is utilized, things can become more personal and even more persuasive. If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
> If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
Evolution by natural selection suggests that this might be a filter that yields future generations of humans that are more robust and resilient.
> Genuine counter-movements (e.g. grassroot preferences) might not be as leveraged
Then that doesn't seem like a (counter) movement.
There are also many "grass roots movements" that I don't like and it doesn't make them "good" just because they're "grass roots".
In this context grass roots would imply the interests of a group of common people in a democracy (as opposed to the interests of a small group of elites) which ostensibly is the point.
Making something 2x cheaper is just a difference in quantity, but 100x cheaper and easier becomes a difference in kind as well.
"Quantity has a quality of its own."
They already are?
All popular models have a team working on fine-tuning them for sensitive topics. Whatever the company's legal/marketing/governance team agrees to is what gets tuned. Then millions of people use the output uncritically.
Our previous information was coming through search engines. It seems way easier to filter search engine results than to fine tune models.
It's important to remember that being a "free thinker" often just means "being weird." It's quite celebrated to "think for yourself" and people always connect this to specific political ideas, and suggest that free thinkers will have "better" political ideas by not going along with the crowd. On one hand, this is not necessarily true; the crowd could potentially have the better idea and the free thinker could have some crazy or bad idea.
But also, there is a heavy cost to being out of sync with people; how many people can you relate to? Do the people you talk to think you're weird? You don't do the same things, know the same things, talk about the same things, etc. You're the odd man out, and potentially for not much benefit. Being a "free thinker" doesn't necessarily guarantee much of anything. Your ideas are potentially original, but not necessarily better. One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper. (getting up from a squat should not be difficult if you're even moderately healthy) Does this really buy me anything? No. I'm living to my preferences and in line with my ideas, but people just think it's weird, and would be really uncomfortable with it unless I'd already built up enough trust / goodwill to overcome this quirk.
To live freely is reward enough. We're born alone, die alone, and in between, more loneliness. No reason to pretend that your friends and family will be there for you, or that their approval will save you. Playing their social games will not garner you much.
Humans are a social species, and quality of relationships is consistently shown to correlate with mental health.
> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper
I was also of this persuasion and did this for many years and for me the main issue was drafts close to the floor.
The key reason, I believe, is that mattresses can absorb damp, so you want to keep that air gap there to lessen this effect and provide ventilation.
> getting up from a squat should not be difficult
Not much use if you're elderly or infirm.
Other cons: close to the ground so close to dirt and easy access for pests. You also don't get that extra bit of air gap insulation offered by the extra 6 inches of space and whatever you've stashed under there.
Other pros: extra bit of storage space. Easy to roll out to a seated position if you're feeling tired or unwell
It's good to talk to people about your crazy ideas and get some sun and air on that head canon LOL
Futons are designed specifically for the use case you have described, so best to use one of those rather than a mattress, which is going to absorb damp from the floor.
> The key reason, I believe, is that mattresses can absorb damp, so you want to keep that air gap there to lessen this effect and provide ventilation.
I was concerned about this as well, but it hasn't been an issue with us for years. I definitely think this must be climate-dependent.
Regardless, I appreciate you taking the argument seriously and discussing pros and cons.
> I appreciate you taking the argument seriously
Like I say, I have suffered similar delusion in the past and I never pass up the opportunity to help a brother out
I'd rather be weird than join the retarded masses.
I appreciate the sentiment of being out of sync with others, I don't even get along with family, but joining the stupid normies would make me want to cease my existence.
100% agree. Rise above the herd. Do it for yourself.
Why are you so aggressive?
Contrarianism leads to feelings of intellectual superiority, but that doesn't get you anything if everyone else doesn't also know you're intellectually superior
Because normies are a herd of sheep who will drag you down to their level. Only by fighting back can we defend ourselves from this overwhelming majority. You must be aggressive if you wish to stand alone, because to stand alone will always be perceived as such.
Iâm not, you just arenât used to honest, weird people.
ML has been used for influence for like a decade now right? my understanding was that mining data to track people, as well as influencing them for ends like their ad-engagement are things that are somewhat mature already. I'm sure LLMs would be a boost, and they've been around with wide usage for at least 3 years now.
My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.
Quite right. "Grok/Alexa, is this true?" being an authority figure makes it so much easier.
Much as everyone drags Trump for repeating the last thing he heard as fact, it's a turbocharged version of something lots of humans do, which is to glom onto the first thing they're told about a thing and get oddly emotional about it when later challenged. (Armchair neuroscience moment: perhaps Trump just has less object permanence so everything always seems new to him!)
Look at the (partly humorous, but partly not) outcry over Pluto being a planet for a big example.
I'm very much not immune to it - it feels distinctly uncomfortable to be told that something you thought to be true for a long time is, in fact, false. Especially when there's an element of "I know better than you" or "not many people know this".
As an example, I remember being told by a teacher that fluorescent lighting was highly efficient (true enough, at the time), but that turning one on used several hours' worth of lighting energy due to the starter. I carried that proudly with me for far too long and told my parents that we shouldn't turn off the garage lighting when we left it for a bit. When someone with enough buttons told me that was bollocks and to think about it, I remember being internally quite huffy until I did, and realised that a dinky plastic starter and the tube wouldn't be able to dissipate, say, 80Wh (2 hours for a 40W tube) in about a second at a power of over 250kW.¹
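For concreteness, here's that back-of-envelope check as a few lines of Python (the one-second start-up time is my own round assumption, not a measured figure):

```python
# Sanity check of the fluorescent-starter myth: if switching on a 40 W tube
# really consumed 2 hours' worth of lighting energy, the starter would have
# to dissipate all of it during the ~1 second start-up flicker.
tube_watts = 40
hours_claimed = 2
startup_seconds = 1  # rough assumption; the start-up flicker lasts about this long

energy_joules = tube_watts * hours_claimed * 3600  # 80 Wh -> joules (3600 s per hour)
startup_power_watts = energy_joules / startup_seconds

print(f"{startup_power_watts / 1000:.0f} kW")  # 288 kW, far beyond what a plastic starter survives
```

288 kW through a component the size of a film canister would vaporise it, which is the whole point: the claim fails the moment you actually run the numbers.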
It's a silly example, but I think that if you can get a fact planted in a brain early enough, especially before enough critical thinking or experience exist to question it, the time it spends lodged there makes it surprisingly hard and uncomfortable to shift later. Especially if it's something that can't be disproven by simply thinking about it.
Systems that allow that process to be automated are potentially incredibly dangerous. At least mass media manipulation requires actual people to conduct it. Fiddling some weights is almost free in comparison, and you can deliver that output to only certain people, and in private.
1: A less innocent one that actually can have policy effects: a lot of people have also internalised, and defend to the death, a similar "fact" that the embedded carbon in a wind turbine takes decades or centuries to repay, when in fact it's on the order of a year. But to change this requires either a source so trusted that it can uproot the idea entirely and replace it, or you have to get into the relative carbon costs of steel and fibreglass and copper windings and magnets and the amount of each in a wind turbine and so on and on. Thousands of times more effort than when it was first related to them as a fact.
> Look at the (partly humorous, but partly not) outcry over Pluto being a planet for a big example.
Wasn't that a change of definition of what is a planet when Eris was discovered? You could argue both should be called planets.
Pretty much. If Pluto is a planet, then there are potentially thousands of objects that could be discovered over time that would then also be planets, plus updated models over the last century of the gravitational effects of, say, Ceres and Pluto, that showed that neither were capable of "dominating" their orbits for some sense of the word. So we (or the IAU, rather) couldn't maintain "there are nine planets" as a fact either way without grandfathering Pluto into the nine arbitrarily due to some kind of planetaceous vibes.
But the point is that millions of people were suddenly told that their long-held fact "there are nine planets, Pluto is one" was now wrong (per IAU definitions at least). And the reaction for many wasn't "huh, cool, maybe thousands you say?" it was quite vocal outrage. Much of which was humourously played up for laughs and likes, I know, but some people really did seem to take it personally.
I think the problem is we'd then have to include a high number of other objects further than Pluto and Eris, so it makes more sense to change the definition in a way 'planet' is a bit more exclusive.
Time to bring up a pet peeve of mine: we should change the definition of a moon. It's not right to call a 1km-wide rock orbiting millions of miles from Jupiter a moon.
Thanks to social media and AI, the cost of inundating the mediasphere with a Big Lie (made plausible thru sheer repetition) has been made much more affordable now. This is why the administration is trumpeting lower prices!
> has been made much more affordable now
So more democratized?
Media is "loudest volume wins", so the relative affordability doesn't matter; there's a sort of Jevons paradox thing where making it cheaper just means that more money will be spent on it. Presidential election spending only goes up, for example.
No, those with more money than you can now push even more slop than they could before.
You cannot compete with that.
So if I had enough money I could get CBS news to deny the Holocaust? Of course not. These companies operate under government license and that would certainly be the end of it through public complaint. I think it suggests a much different dynamic than most of this discussion presumes.
In particular, our own CIA has shown that the "Big Lie" is actually surprisingly cheap. It's not about paying off news directors or buying companies, it's about directly implanting a handful of actors into media companies, and spiking or advancing stories according to your whims. The people with the capacity to do this can then be very selective with who does and does not get to tell the Big Lies. They're not particularly motivated by taking bribes.
> So if I had enough money I could get CBS news to deny the Holocaust? Of course not.
You absolutely could. But it wouldn't be CBS News, it would be ChatGPT or some other LLM bot that you're interacting with everywhere. And it wouldn't say outright "the holocaust didn't happen", but it would frame the responses to your queries in a way that casts doubt on it, or that leaves you thinking it probably didn't happen. We've seen this before (the "manifest destiny" of "settling" the West, the whitewashing of slavery, and so on).
For a modern example, you already have Fox News denying that there was a violent attempt to overturn the 2020 election. And look how Grokipedia treats certain topics differently than Wikipedia.
It's not only possible, it's likely.
We have no guardrails on our private surveillance society. I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
>I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
That was true only for a short fraction of human history, lasting from post-WW2 until globalisation kicked into high gear, but people miss the fact that it was only a short exception from the norm, basically a rounding error in terms of the length of human civilisation.
Now, society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression. Now the mechanisms by which that feudalist society is achieved today are different than in the past, but the underlying human framework of greed and consolidation of wealth and power is the same as it was 2000+ years ago, except now the games suck and the bread is mouldy.
The wealth inequality we have today, as bad as it is now, is the best it will ever be moving forward. It's only gonna get worse each passing day. And despite all the political talks and promises on "fixing" wealth inequality, housing, etc, there's nothing to fix here, since the financial system is working as designed; this is a feature, not a bug.
> society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth
The word "always" is carrying a lot of weight here. This has really only been true for the last 10,000 years or so, since the introduction of agriculture. We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that. Given the magnitude of difference in timespan, I think it is safe to say that that is the "default setting".
Even within the last 10,000 years, most of those systems looked nothing like the hereditary stations we associate with feudalism, and it's only within the last 4,000 years that any of those systems scaled, and then only in areas that were sufficiently urban to warrant the structures.
>We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that.
Only if you consider intra-group egalitarianism of tribal hunter gatherer societies. But tribes would constantly go to war with each other in search of expanding to better territories with more resources, and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
So you forgot that part that involved all the killing, enslavement and rape, but other than that, yes, the victorious tribes were quite egalitarian.
Back then there were so few people around and expectations for quality of life were so low that if you didn't like your neighbors you could just go to the middle of nowhere and most likely find an area which had enough resources for your meager existence. Or you'd die trying, which was probably what happened most of the time.
That entire approach to life died when agriculture appeared. Remnants of that lifestyle were nomadic peoples and the last groups to be successful were the Mongols and up until about 1600, the Cossacks.
> which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression.
This isn't a historical norm. The majority of human history occurred without these systems of domination, and getting people to play along has historically been so difficult that colonizers resort to eradicating native populations and starting over again. The technologies used to force people onto the plantation have become more sophisticated, but in most of the world that has involved enfranchisement more than oppression; most of the world is tremendously better off today than it was even 20 years ago.
Mass surveillance and automated propaganda technologies pose a threat to this dynamic, but I won't be worried until they have robotic door kickers. The bad guys are always going to be there, but it isn't obvious that they are going to triumph.
> The majority of human history occurred without these systems of domination,
you mean hunter/gatherers before the establishment of dominant "civilizations"? That history ended about 5000 years ago.
> The wealth inequality we have today, as bad as it is, is as best as it will ever be moving forward. It's only gonna get worse.
Why?
As the saying goes, the people need bread and circuses. Delve too deeply and you risk another French Revolution. And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Feudalism only works when you give back enough power and resources to the layers below you. The king depends on his vassals to provide money and military services. Try to act like a tyrant, and you end up being forced to sign the Magna Carta.
We've already seen a healthcare CEO being executed in broad daylight. If wealth inequality continues to worsen, do you really believe that'll be the last one?
> And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Which people are having their existences threatened by the elite?
> Delve too deeply and you risk another French Revolution.
What's too deeply? Given the circumstances in the USA, I don't see any revolution happening. Same goes for extremely poor countries. When will the exploiters' heads roll? I don't see anyone willing to fight the elite. A lot of them are even celebrated in countries like India.
>Why?
Have you seen the consolidation of wealth in the last 5-20 years? What trajectory does it have?
>Delve too deeply and you risk another French Revolution.
They don't risk jack shit. People fawning over the French revolution and guillotines for the elite, forget that King Louis XVI didn't have Predator Drones, NSA mass surveillance apparatus, spy satellites, a social media propaganda machine, helicopters, Air Force One, and private islands with doomsday bunkers with food growth and life support systems to shelter him from the mob.
People also forget that the French Revolution was a fight between the nobility and the monarchy, not between the peasantry and the nobility; the monarchy lost and the nobility won. Today's nobility is also winning: no matter who you vote for, the nobility keeps getting richer, because the financial system is designed that way.
>We've already seen a healthcare CEO being executed in broad daylight.
If you keep executing CEOs, what do you think is more likely to happen? That the elites will just give you their piece of the pie and say they're sorry, OR, that the government will start removing more and more of your rights to bear arms and also increase totalitarian surveillance and crack down on free speech, like what's happening in most of the world?
And that's why wealth inequality keeps increasing unimpeded: because most people are as clueless as you about how things really work, and think the elites, and the advanced government apparatus protecting them, are afraid of mobs with guillotines and hunting rifles.
As long as you have people gleefully celebrating it, or providing some sort of narrative to justify it even partially, then no.
>And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Can you elaborate on that?
I think this is true unfortunately, and the question of how we get back to a liberal and social state has many factors: how do we get the economy working again, how do we create trustworthy institutions, avoid bloat and decay in services, etc. There are no easy answers, I think it's just hard work and it might not even be possible. People suggesting magic wands are just populists and we need only look at history to study why these kinds of suggestions don't work.
>how do we get the economy working again
Just like we always have: with a world war. Then the economy works amazingly for the ones left on top of the rubble pile, who get unionized high-wage jobs and amazing early retirements for a few decades, while everyone else is left toiling away making stuff for cheap in sweatshops, in exchange for currency from the victors who control the global economy and trade routes.
The next time the monopoly board gets flipped, it will only be a variation of this, not a complete framework rewrite.
It's funny how it's completely appropriate to talk about how the elites are getting more and more power, but if you start looking deeper into it you're suddenly a conspiracy theorist and hence bad. Who came up with the term "conspiracy theorist" anyway, and the idea that we should be afraid of it?
> I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
EDUCATION:
- Global literacy: 90% today vs 30%-35% in 1925
- Primary enrollment: 90-95% today vs 40-50% in 1925
- Secondary enrollment: 75-80% today vs <10% in 1925
- Tertiary enrollment: 40-45% today vs <2% in 1925
- Gender gap: near parity today vs very high in 1925
HUNGER
Undernourished people: 735-800m people today (9-10% of population) vs 1.2 to 1.4 billion people in 1925 (55-60% of the population)
HOUSING
- quality: highest ever today vs low in 1925
- affordability: worst in 100 years in many cities
COST OF LIVING:
Improved dramatically for most of the 20th century, but much of that progress reversed in the last 20 years. The cost of goods plummeted, but housing, health, and education became unaffordable relative to incomes.
You're comparing with 100 years ago. The OP is comparing with 25 years ago, where we are seeing significant regression (as you also pointed out), and the trend forward is increasingly regressive.
We can spend $T to shove ultimately ad-based AI down everyone's throats but we can't spend $T to improve everyone's lives.
Yea we do:
Shut off gadgets unless absolutely necessary
Entropy will continue to kill off the elders
Ability to learn independently
...They have not rewritten physics. Just the news.
I recently saw this https://arxiv.org/pdf/2503.11714 on conversational networks, and it got me thinking that a lot of the problem with polarization and power struggle is the lack of dialog. We consume a lot, and while we have opinions, too much of what we consume shapes our thinking. There is no dialog. There is no questioning. There is no discussion. On networks like X it's posts and comments. Even here it's the same: comments with replies, but it's not truly a discussion. It's rebuttals. A conversation is two-way and equal. It's a mutual dialog to understand differing positions. Yes, the elite can reshape what society thinks with AI, and it's already happening. But we also have the ability to redefine our networks and tools to be two-way, not 1:N.
You mean dialogue, as in conversation and debate, not dialog, the on-screen element for interfacing with the user.
The group screaming the loudest is considered to be correct; it's pretty bad.
There needs to be an identity system in which people are filtered out when the conversation devolves into ad-hominem attacks, and only debaters with the right balance of knowledge and no hidden agendas join the conversation.
Reddit, for example, is a good implementation of something like this, but the arbiter cannot have that much power over people's words or their identities, getting them banned for example.
> Even here it's the same, it's comments with replies but it's not truly a discussion.
For technology/science/computer subjects HN is very good, but for other subjects not so good, as is the case with every other forum.
But a solution will be found eventually. I think what is missing is an identity system to hop around different ways of debating and not be tied to a specific website or service. Solving this problem is not easy, so there has to be a lot of experimentation before an adequate solution is established.
Humans can only handle dialog while under Dunbar's number; anything beyond that is pure fancy.
I recommend reading "In the Swarm" by Byung-Chul Han, and also his "The Crisis of Narration"; in those he tries to tackle exactly these issues in contemporary society.
His "Psychopolitics" talks about the manipulation of masses for political purposes using the digital environment, when written the LLM hype wasn't ongoing yet but it can definitely apply to this technology as well.