etempleton 18 hours ago [-]
It is fair to be critical of Sam and other tech leaders regarding AI, but he has done nothing to begin to justify violence or even the threat of violence against him or his family.
raptor99 11 hours ago [-]
Agreed! Have you heard of Suchir Balaji?
smallmancontrov 7 hours ago [-]
Holy shit how is this the first time I am hearing about this? This should not be my first time hearing about this.
> Suchir Balaji (November 21, 1998 – November 26, 2024) was an American artificial intelligence researcher who was found dead one month after accusing OpenAI, his former employer, of violating United States copyright law. (Wikipedia)
10 hours ago [-]
array_key_first 5 hours ago [-]
Altman has gone around bragging about how his technology will basically make huge sectors of the economy obsolete. In case you haven't noticed, in America the only thing keeping most people alive is their job.
In essence, he has threatened to kill millions of people. Those are death threats. I know he doesn't see it that way. I don't feel threatened, and I imagine most devs don't either.
But that's how that's actually interpreted by many people.
Now, that doesn't justify violence, because words don't justify violence. However, if someone is going around threatening to kill people, you should not be surprised when people try to hurt them. That's not shocking, that's very much expected.
Altman and his PR team are incredibly stupid for not seeing this coming, and frankly I don't know if they can reverse it.
inquirerGeneral 4 hours ago [-]
[dead]
odshoifsdhfs 3 hours ago [-]
Really? He is at this moment lobbying for laws that exempt AI companies from liability for the harms of using them. WTF? This guy (and the rest) are literally fucking over everyone and you think there is nothing to justify it? For fuck's sake, this was on HN just yesterday: https://news.ycombinator.com/item?id=47717587
etempleton 2 hours ago [-]
I am not saying he should not be criticized or even held legally liable for his actions. Merely that, you know, firebombing the homes of people whose actions you disagree with is a bad thing.
Controversial hot take, I know.
bdangubic 2 hours ago [-]
if every business(man) who lobbies against regulations for their business is fair game to go after violently (not just them but their family as well), there would be a bloodbath of epic proportions… one day, this might be you and your family too…
johnbarron 10 hours ago [-]
[flagged]
pixel_popping 7 hours ago [-]
Like the Linux kernel?
Minor49er 7 hours ago [-]
I think he means murder, not killing processes /s
pixel_popping 6 hours ago [-]
my bad.
dwroberts 13 hours ago [-]
Didn’t we just go through several weeks of hearing about OpenAI allowing its tech to be used for conducting warfare?
Not saying that justifies harming Altman, but I am confused that he seems surprised he is now in physical danger? [Or chalks it up to just some single incendiary article rather than the company's actual actions?] If you involve yourself in the act of killing people then, yeah, you're going to get blowback for that and some people are obviously going to want to hurt you
nl 10 hours ago [-]
The US is still a democracy.
It's absolutely ok to oppose war.
It is absolutely not ok for "some people to want to hurt" someone who is running a company that is vying for contracts from a democratically elected government's defense department.
It's also ok to protest that, to boycott it or to refuse to work for or with them for it. But escalating that to physical violence is not ok, and nor should people be "confused that he seems surprised he is now in physical danger"
(As an aside, from the statements I've heard so far it seems the person was more an anti-AI, anti-tech person than anti-war)
xp84 4 hours ago [-]
I completely agree with all your statements. But I think most people in America have moved on from even trying to operate in the political system we have; because it's been completely subverted by bad actors on both sides of the supposed two-party system, they see it as pointless.
And as such they've either become completely irrational (most far-leftists or far-rightists), checked out (the rest of us), or fully mentally ill (people like this, or that Gracie Mansion wacko)
trinsic2 8 hours ago [-]
I don't think anyone is saying this is justified. But that doesn't mean it's not going to happen, and I can understand why people would do this. Especially people who are pushed beyond the limits they can endure.
Right now we have a huge imbalance in the world and more situations like this are going to manifest as we slide further and further into authoritarianism.
xp84 4 hours ago [-]
“Authoritarianism” - definition: The government thugs are from the party I don’t like
“Proper order” - the government thugs are from the party I do like
Arodex 6 hours ago [-]
>The US is still a democracy
Let's see if that still holds after the midterms...
rrr_oh_man 10 hours ago [-]
[flagged]
trinsic2 8 hours ago [-]
Only an act of Congress can make that happen. That hasn't happened. So no, it's still the DOD.
hax0ron3 4 hours ago [-]
>It is absolutely not ok for "some people to want to hurt" someone who is running a company that is vying for contracts from a democratically elected government's defense department.
Why though?
davemp 4 hours ago [-]
I’m falling into the Socratic hole [0], but in a modern civil society there is a justice system through which people seek recourse. This has all sorts of desirable effects for societies.
Please educate yourself on the basics or at least put more effort in before participating in conversations.
[0]: It’s easy to abuse the Socratic method and devolve a discussion into one of first principles. It’s extremely tiresome and a huge waste of everyone’s time.
hax0ron3 4 hours ago [-]
I'm a big fan of the justice system. Can't have a functioning civilization without it. And yes, violence that is used by a democratic society following regulations is generally speaking better for society than arbitrary vigilantism motivated by personal beliefs is. But I'm not arguing that it would necessarily be good to kill Sam Altman. I'm just arguing that it's ok to find the idea of his death pleasing. I find the idea of killing all sorts of people pleasing without necessarily thinking that actually doing it would be good for society overall.
zepolen 4 hours ago [-]
Because government violence is not the same as individual violence!
hax0ron3 4 hours ago [-]
But it is the same. Better and more stable for society than individual vigilantism? Yes, generally speaking it is. But still essentially the same thing, just done through a different process.
bko 10 hours ago [-]
> Didn’t we just go through several weeks of hearing about OpenAI allowing its tech to be used for conducting warfare?
Unfortunately warfare is a thing. Why wouldn't you want the best technology used for your country when conducting warfare? Or do you just believe warfare would cease to exist if a country gave up any means of defense or offense?
pocksuppet 9 hours ago [-]
You're allowed to authorize your technology to be used to kill people, but if you do so, you shouldn't be surprised when those people also try to kill you. America and Americans somehow keep forgetting that actions have consequences and the government can't always override the consequences.
bko 8 hours ago [-]
"Authorize" technology to kill you?
Are cars authorized to run people over?
Are painkillers "authorized" to get people to overdose?
Are computer chips "authorized" to be put into bombers?
What are you even talking about?
pixel_popping 6 hours ago [-]
That's what's happening when people want to blame specific persons for world issues instead of the collective.
Razengan 7 hours ago [-]
When was the last time a molotov cocktail was thrown at the house of an arms manufacturer?
Trump and other presidents literally started wars and ordered people to be killed. When was the last time they were physically attacked?
alexdias 4 hours ago [-]
> When was the last time they were physically attacked?
"I'm not saying violence is okay, but violence is okay"
dwroberts 13 hours ago [-]
What I am saying is if you involve yourself in violence (and directly profiting from violence) you should not be allowed to act shocked when that same violence turns up on your doorstep
hax0ron3 4 hours ago [-]
Pretty much everyone thinks that violence is ok against certain people. You probably do too. The disagreements are about who violence is ok to use against.
11 hours ago [-]
pydry 12 hours ago [-]
Not ok, but anybody who is ok with terrorizing, say, an Iranian civilian nuclear scientist ought to be equally indifferent to this.
arrowsmith 12 hours ago [-]
I’m not indifferent to either of them, but if you equate American tech executives with agents of the Iranian nuclear programme then I don’t care what you have to say on any subject ever
Arodex 10 hours ago [-]
Altman and other AI evangelists spent their time equating AI with nuclear technology. They make the comparison all the time.
What about executives/scientists on the US nuclear programme?
pillefitz 9 hours ago [-]
The former are doing way more damage to civilization
arrowsmith 8 hours ago [-]
Give the latter a nuclear bomb and see how true that stays
pydry 8 hours ago [-]
You might be too young to remember it but they said the same thing about North Korea.
When they actually got a nuke all it meant was that the US stopped threatening them, halted practicing manoeuvres in preparation for attack and generally just left them well alone.
Iran has probably realized by now that if they don't get a nuke, the US and Israel will keep slaughtering their schoolchildren.
Sometimes we're the brutal savages who need to be stopped, impossible though that is to comprehend for some people who have more of a "racial loyalty" mindset.
ath3nd 7 hours ago [-]
[dead]
confiq 14 hours ago [-]
> he has done nothing to begin to justify violence
I'm not a big fan of Sam Altman, but violence like this is not a solution; it actually has the opposite effect, as it probably did with Trump.
trinsic2 8 hours ago [-]
Actions have consequences. There will always be people in the world who get pushed beyond the limits they can endure. It reminds me of that CEO who got gunned down by someone affected by the company's business of denying health insurance claims on technicalities.
I don't support this, and yet I know that for every harm people in these corrupt institutions are involved in, the universe gives back your due.
If you want to stop the harm, stop harming the world with your actions, in whatever way that needs to manifest for you.
8 hours ago [-]
Razengan 3 hours ago [-]
So how often are arms manufacturers and leaders starting wars gunned down?
speleding 11 hours ago [-]
Reading that BBC article, about how the attacker got caught while shouting at an OpenAI building, it seems likely that this attacker is confused or deranged, not specifically someone with deliberate evil intent.
So the headline seems to be more "high profile person attacked by lunatic" than "OpenAI CEO attacked for being evil".
mcv 13 hours ago [-]
Or Iran, for that matter.
altmanaltman 13 hours ago [-]
Is there anything anyone can do that justifies violence or threats of violence? No. Even if that person is a proven child molester, a just society stands on just law.
But as far as political justification goes, he is as valid a target for hostile nations as Iranian nuclear scientists were (unless he has zero involvement with the USG). That's just the world we live in.
Use your tech for war in other nations, and you give other nations a justification to target you. The same goes for the Lockheed Martin CEO, etc.; nothing specific against Sam. But saying nobody has any valid reason to target Sam like this is pretty stupid imo.
dangus 10 hours ago [-]
I’m pretty sure if someone sexually assaulted my child or murdered them I’d be more than morally justified to get a few or a lot of punches in.
Some people are treated a whole lot better than others in prison.
pocksuppet 9 hours ago [-]
Are the parents of an Iranian nuclear scientist murdered by an OpenAI-powered drone morally justified to murder Sam Altman?
1718627440 5 hours ago [-]
They have "given" that privilege to the Iranian Army.
8 hours ago [-]
ath3nd 13 hours ago [-]
[dead]
pillefitz 16 hours ago [-]
[flagged]
spwa4 13 hours ago [-]
[flagged]
voidhorse 18 hours ago [-]
[flagged]
710dev 11 hours ago [-]
[flagged]
Jamesbeam 13 hours ago [-]
Unpopular opinion. It depends.
I totally agree with your statement if we are talking about the average citizen starting to throw Molotovs at his house. If you’re afraid AI is taking your job, just do something else. It’s not the end of the world changing careers.
Plenty of work AI won't be able to do, or be allowed to do, without a human assisting in some way that secures the human a good income and way of life.
So if this is done by an individual citizen, they need to be hunted down, arrested, and get the full force of the justice system to deter others from doing the same.
On the other hand, right now, Sam Altman is a valid military target for assassination in the US / Iran war.
OpenAI did snatch up the contract from Anthropic at the Pentagon, and their technology is in some capacity used to murder Iranian HVTs (High Value Targets). Altman is therefore technically a legal HVT for the Iranians.
If you say it's valid and not a war crime for the US to assassinate former Iranian political figures and their families for aiding the new regime and thereby becoming enemy combatants in the eyes of the US military, it's also valid to assassinate Altman and his family for doing the same to the other war party.
It’s a bit of a Schrödinger situation. He is technically a valid target in a current war, but not for the private citizen.
In both cases, though, I'd argue that violence is neither a solution to the problems AI might create for a lot of people in the future, nor should he be treated as an enemy combatant and his infant child and wife bombed to smithereens.
Diplomacy is key here, just like it would have been the better solution than going to war with Iran.
If you disagree with Altman, send him a letter, show up at his workplace, talk to the man, gather people who think of him the same way you do, write letters to your elected representatives, make calls, vote politicians into office who are anti-AI and who will go after him and regulate his company to shit. Bureaucrats can make Altman's life more miserable than a thousand Molotovs ever could.
If you gather enough support, you can reach the same goal, taking his power over your life away, without any violence.
But are you really surprised that people in the US choose violence over the democracy toolbox when they are told by the people in charge of their country that violence is indeed a good way to solve problems, that you should have a "warrior" spirit, and that everything is up for grabs, even sovereign countries like Greenland, because you can outviolence any other nation on the planet?
Violence only creates more violence, and as long as there is a president who chooses to pour oil on the fire and pretends it's ok to murder US citizens like Alex Pretti, you don't really need to wonder whether average citizens will start murdering tech CEOs in the near future.
They just follow the top-down approach to using violence as a tool; the leadership leads by example.
siva7 13 hours ago [-]
> If you say it's valid and not a war crime for the US to assassinate former Iranian political figures and their families for aiding the new regime and thereby becoming enemy combatants in the eyes of the US military, it's also valid to assassinate Altman and his family for doing the same to the other war party.
Sam isn't a political leader, so this comparison is flawed. What the hell, are we really arguing about whether assassinating a long-standing figure of this community is valid? Seriously??
psd1 11 hours ago [-]
He is a leader and a political figure. This blogpost is political (as well as sharing a family photo, which is itself imbued with a political message in that context).
Engineer archetypes hate politics and refuse to think about it. For most engineering, there is negligible political dimension. But culturally-transformative technology is inherently political to the degree it's transformative. Altman recognises this.
He is working towards a social goal, and attracting support to achieve it. Yes, he is a political leader.
senordevnyc 4 hours ago [-]
This waters down the definition of political leader to the point of absurdity.
bogzz 9 hours ago [-]
Neither were the Iranian nuclear scientists.
stogot 9 hours ago [-]
People on this forum applauded Charlie Kirk's murder too. Unfortunately there's a number of people here who believe it's okay to murder instead of arguing with words. Violence is the last refuge of the incompetent.
7 hours ago [-]
ath3nd 7 hours ago [-]
[dead]
guzfip 7 hours ago [-]
Always so rich to see in a country founded on political violence lol.
mindslight 7 hours ago [-]
Indeed. I've seen much more outright support for the murders of Pretti, Good, and Taylor than people "applauding" Kirk's murder. Never mind the recent support for the mass murder of Iranians ("bomb them back to the stone age" etc). Unfortunately those incompetents who take refuge in violence are now in charge of our society.
mindslight 4 hours ago [-]
(I suppose I'm getting the reply-less downvotes from people's cognitive dissonance getting triggered. Just because it's possible to frame a murder as being legally justified, does not absolve you of the fact that by adopting this justification you're still supporting a murder. In fact I'd point out that the most horrific atrocities in human history have been legally justified. Randomly-directed violence doesn't really scale up, whereas organized violence does)
yaro330 13 hours ago [-]
Seriously? Not even the DOD partnership?
AllegedAlec 11 hours ago [-]
[flagged]
brysonreece 11 hours ago [-]
If you work to inflict violence on others, you shouldn’t be surprised when it’s attempted to be inflicted back on you. I’m not saying it’s a just worldview, but it is pragmatic.
hi5eyes 10 hours ago [-]
openai dug themselves into a hole with all their rhetoric
“we’re going to replace white collar jobs and also help the trump admin with a war no one wants” great comms team lol…
siva7 13 hours ago [-]
You make it sound like an american company has a choice under this administration
selfhoster11 12 hours ago [-]
They always have a choice, it just doesn't make them as much money.
ahtihn 10 hours ago [-]
Anthropic clearly showed that they have a choice.
nickthegreek 9 hours ago [-]
Anthropic seemed to have a choice.
pocksuppet 9 hours ago [-]
Is it a democracy or a dictatorship?
dwroberts 13 hours ago [-]
They were just following orders, right
finghin 12 hours ago [-]
You may want to sit with that one for a while.
atbpaca 21 hours ago [-]
I have many disagreements with Sam Altman. But physical attacks are never the answer. Especially attacking one's family.
big-chungus4 9 hours ago [-]
What about Luigi Mangione? What's HN's consensus on him?
tptacek 2 hours ago [-]
There isn't one (much as I might think there should be). Threads about Mangione were also uncivil and activating.
fleroviumna 9 hours ago [-]
[dead]
testing22321 7 hours ago [-]
[flagged]
hollerith 3 hours ago [-]
That is unhinged.
testing22321 2 hours ago [-]
I know, right. He paid himself more per year than 99.9% of Americans will make in their entire lifetime while denying coverage to people who died as a result.
Does it get more evil?
odshoifsdhfs 3 hours ago [-]
So.. like Altman?
survirtual 10 hours ago [-]
Honest question: how many families has the DoD attacked?
How many families has ICE attacked?
How many law-abiding American citizens has ICE murdered?
Sam Altman has agreed to supply the DoD with AI for use in killing families. Why is his own family more valuable than those OpenAI would assist in murdering?
“All animals are equal, but some animals are more equal than others.”
We care so much when it is the family of billionaires, who would crush our skulls without a second thought if it meant adding to their bank account. But the countless nameless whom they crush? They aren't even considered human. They are lesser.
My heart goes out to ALL BEINGS harmed. ESPECIALLY the nameless voices we will never hear about.
My recommendation to him and his family, and anyone else in positions like his: retire, go off into the sunset with your riches, and disappear into obscurity. Fund the local community, donate all your excess wealth, and make the world a better place. That is the only way to stay safe.
Otherwise, welcome to the forest. It is dark and filled with predators bigger than you can ever imagine. Only a fool would shine a light here. It is dark for a reason.
testing22321 7 hours ago [-]
Attacking Sam and his family will only cause harm to one family.
What Sam is doing with ICE, DoW, etc is harming tens of millions around the globe.
survirtual 5 hours ago [-]
Which is why this narrative of caring about his family is so absurd.
A defense contractor is in the business of war. In supplying the war machine, you should be living in a fortress. Tall walls, check your drink for poison, live in paranoia. Every person in the business of war knows what they are getting into, and how to protect their family.
How is someone who is practically the face of AI this naive about such an ancient thing?
The business of war is fine. It is ancient. It is part of humanity. Making some morality plea towards family and "violence is never the answer" while in the business of violence is NOT okay.
Everyone in the defense industry knows the risks. Blood money is not free. You sacrifice a peaceful life for the wealth.
To keep your family safe you have to use a meager sum of that money to have tall walls, guards, and security. DoD contractor 101.
Alternatively, live in obscurity, don't talk about your work, and it is usually fine.
A world-wide known CEO doesn't have this luxury so again, use a small portion of unfathomable wealth to protect your family. I have a feeling this war is just starting.
When in the business of death, you no longer get to live with the rules of peace.
testing22321 5 hours ago [-]
It's almost like these people believe being in the business of violence and death is fine. Killing other people, making their lives a living nightmare, etc.
Suddenly it’s not ok when a tiny fraction of that violence comes home.
Hypocrisy at its most extreme.
HNisCIS 17 hours ago [-]
[flagged]
raptor99 11 hours ago [-]
[flagged]
rrr_oh_man 9 hours ago [-]
What is your point?
Arodex 6 hours ago [-]
"There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect."
cal_dent 18 hours ago [-]
An interesting facet of how society has developed over the past decade and a half, I think, is that a byproduct of more people being conscious of the quest to monetise almost anything is a raised level of general scepticism about whether something is marketing or real. So you have increasingly more scenarios where an objectively bad thing can happen to someone, but any public response is scrutinised and questioned within an inch of its life, sometimes rightly, sometimes not. I don't particularly like it, but that's where we're at, I guess.
bigyabai 5 hours ago [-]
> any public response is scrutinised and questioned within an inch of its life, sometimes rightly, sometimes not.
This is a fairly healthy response from the public - better than accepting everything at face-value. Plato's Allegory of the Cave is a warning against accepting random information in a vacuum to assess your surroundings. Observation and response is not enough to be a critical thinker, even back in the ancient ages.
From where I'm standing, the public at large is traumatized by flubbed coverups like the Snowden leak, the Epstein files, and Abu Ghraib. The myth of American exceptionalism has been threatened for a long time, and people rightfully question whether or not executive leadership can write off their involvement in politics. Sam Altman has put on an extremely dangerous pair of boots, and while it doesn't justify attacks on his person, we all know that speculation will continue as new events come to light. Right or wrong, this is what the public is conditioned for now.
dang 15 hours ago [-]
I don't think I've ever seen a thread this bad on Hacker News. The number of commenters justifying violence, or saying they "don't condone violence" and then doing exactly that, is sickening and makes me want to find something else to do with my life—something as far away from this as I can get. I feel ashamed of this community.
Froztnova 3 minutes ago [-]
It's been getting pretty bad around here lately. I had someone reply to a post I made in that Idiocracy thread a few days ago advocating for eugenics. Really really gross all around.
People here think that they're much smarter than they actually are.
rl3 13 hours ago [-]
I imagine you knew Sam personally when he was President of YC. Most people don't, instead going off what they read in the press. Recent press is often less than flattering, given how contentious AI in general is as of late.
Consider that for some it's already hit home in the form of job loss, which for most people can easily be catastrophic. Or maybe they suddenly have a giant datacenter in their backyard, and now their air and/or water isn't viable.
That of course isn't justification, but it does partly inform why some people are that mad, and it's much easier for angry people to be callously indifferent.
If you were to break down HN's zeitgeist, it's some percentage site-local, some percentage larger tech scene, and some percentage general public.
Although you have outsized influence on the former, the latter items factor in heavily—sometimes overwhelmingly so. You can't really control that, and I don't feel it represents some sort of failure on behalf of the community nor moderation team.
I see it not as mob mentality so much as multiple sides personally involved for different reasons. Things tend to get pretty heated when that happens; not a good recipe.
I'm sorry you had to deal with the aftermath. Your flurry of disappointed, exhausted-sounding comments reminded me of a service industry worker getting hit with a huge rush. There's a kind of PTSD that hangs around once the dust settles.
So, thank you for your efforts in trying to keep the site civil. It clearly ain't easy sometimes.
tptacek 3 hours ago [-]
You're being nice about it but I think you're inadvertently expressing literally the sentiment Dan was referring to.
happytoexplain 54 minutes ago [-]
I am not speaking for the parent, but my personal interpretation is that they are trying to add perspectives/thoughts, not denying what Dan said (i.e. it's not "inadvertent" in as few words).
tptacek 48 minutes ago [-]
By that I meant it didn't read like they were trying to push back on him.
Chance-Device 12 hours ago [-]
For what it’s worth Dan, you’re probably the best moderator I’ve ever encountered, and without you HN likely wouldn’t be worth visiting. As it is it’s one of the best places for online discourse. That’s directly because of you and your efforts.
It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.
d4v3 3 hours ago [-]
Unfortunately, political violence seems to be en vogue these days. I even hear people in "real life" casually discussing their support for it. What can we do? I think the only thing we can do is push back on it, even though it doesn't seem fair. What's a favourable alternative? You do a great job here giving individual feedback, which I know some people listen to and take in. I hope it's some kind of comfort to know that you can change people's minds, or at least give them some pause. In today's algorithm-driven world, pushing back seems more important now than at any time I can think of. We need cool, level heads running things.
MiguelX413 1 hours ago [-]
What alternative do you propose? Voting doesn't work anymore
raffael_de 12 hours ago [-]
Wouldn't it maybe be a great idea to just ban anything that's not actually about science and technology from this board? That would have the indirect effect of driving away the people who are here for political trench fights. Plus, the good old flame wars about technology x versus y are pretty harmless in comparison.
(And no, just because Sam Altman is CEO of a tech company doesn't make this news tech news.)
bob1029 12 hours ago [-]
I think a proper OpenAI vs Anthropic flame war might actually do this community some good. Let's just have it out. Avoiding violation of the x vs y technology rule seems to have resulted in a lot of pent up energy. I don't see the harm at this point if dang is saying it's over.
raffael_de 12 hours ago [-]
Forgot that there is even such a policy. The differentiating feature of HN has always been that comments and discussions are relatively thoughtful and civil; that's quickly getting lost.
e900542 10 hours ago [-]
Respectfully, my opinion is different. As I am reading through these comments, the differences seem like neighboring Canadian farmers who each think next door is getting more rain than they are.
tptacek 3 hours ago [-]
HN isn't a "science and technology" site.
foxes 12 hours ago [-]
Tech and science are political... they don't exist in some sort of vacuum.
Further, being "apolitical" means supporting the current status quo.
raffael_de 12 hours ago [-]
[flagged]
psd1 11 hours ago [-]
Politics is indeed toxic to pure curiosity about pure things. I feel that too, viscerally.
However. Culture war tropes get posted in even the most abstract discussion, so banning top-level posts won't keep it out.
Furthermore, technology is inherently political to the degree that it is transformative. The Facebook algorithm was always political, it just took time for that to become apparent. I'm trying to illustrate another kind of toxicity, that of engineering archetypes refusing to consider the political impact of their engineering decisions. Technologists in transformative fields should not be putting their heads in the sand. I don't want HN to devolve to red/green political rage, but there are political discussions that belong here.
Lastly, social sciences may well be dismal, but they can still illuminate, and politics is a valid subject of study. This site is predicated on curiosity, and areas of politics are on topic for that. Humanity is a system that bears analysis and can even be engineered.
Arodex 10 hours ago [-]
No, ignoring the political consequences of science and technology is what is extremely toxic and psychopathic.
The very American trend to avoid anything political is self-defeating anyway, as it contributes to the social rot and the worsening of politics even further. Do you think the garden will become cleaner if you stop tending it? That your child will become nicer if you stop taking care of it? That your projects will sort themselves out if you don't track them?
You are well on your way to becoming like Russians: more and more detached from political matters because it is not safe or pleasant... until they are sent to the frontlines.
chrisfosterelli 4 hours ago [-]
I have to believe that what we're seeing is a minority who feel their uniquely backwards logic justifying this is somehow worth sharing as if it's new and insightful, while the vast majority of us think "holy crap, that's horrible" but aren't adding that because of course it's already been said and there just isn't any more nuance needed.
MiguelX413 59 minutes ago [-]
I think the number of people who will be killed by OpenAI being employed by the DoD is way more horrible.
Do you consider Sam Altman's structural violence just as horrible?
1 hours ago [-]
1 hours ago [-]
smoyer 5 hours ago [-]
You make this corner of the world better dang!
arjie 13 hours ago [-]
The event itself is really bad and condemnable, but when threads like this show up they are usually a good thing because people rapidly demonstrate high coupling of tribal affiliation with viewpoint. This causes a lot of them to advertise through unhinged posts, which is a good raw test for what they are like to communicate with. I usually go through and killfile a bunch of these commenters. Essentially, you want your bad participants to be easily visible as such. I don't want them subtly sneaking their stuff into normal threads; I want to go look at one place and see all of the people I don't want to listen to.
Therefore, here's a feature request: allow per-user killfiles. I currently have this through a Chrome extension but I'd love it to be native so that I don't have to use my own iOS app and so on.
One of the things I really like is to have a high-ratio of good content to slop content and I think manually curating out slop authors is the way to go for that. You'll see that my lists include things that other people seem to really enjoy.
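For anyone curious, the extension approach is simple enough to sketch as a userscript. This is a hypothetical, minimal version: the selectors (`tr.comtr`, `a.hnuser`) and the example usernames are assumptions about the page markup, not anything official, and a real extension would persist the list instead of hardcoding it.

```javascript
// Hypothetical client-side killfile sketch. Usernames below are placeholders.
const KILLFILE = new Set(['exampletroll', 'examplespammer']);

// Pure helper: should a comment by `author` be hidden? (case-insensitive)
function shouldHide(author, killfile) {
  return killfile.has(author.toLowerCase());
}

// DOM pass: collapse every comment row whose author is in the killfile.
// Assumes comments are <tr class="comtr"> rows with an <a class="hnuser"> link.
function applyKillfile(doc = document) {
  for (const row of doc.querySelectorAll('tr.comtr')) {
    const user = row.querySelector('a.hnuser');
    if (user && shouldHide(user.textContent.trim(), KILLFILE)) {
      row.style.display = 'none'; // hide rather than delete, so it's reversible
    }
  }
}
```

A native version would presumably just be this filter applied server-side against a per-account list.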
surgical_fire 12 hours ago [-]
Like a personal blocklist, so you don't see certain commenters/threads/etc.
Personally I don't see the value, but some people are less resilient (or more weak-willed) at seeing words they disagree with.
jjav 12 hours ago [-]
> What's a killfile?
Tell me you didn't use USENET without telling me ;-)
7 hours ago [-]
stonecharioteer 12 hours ago [-]
Haha I figured after I googled this. A block list then. I miss the reddit apps that let me do this.
justin66 8 hours ago [-]
> Therefore, here's a feature request: allow per-user killfiles.
That would be lovely. It's also an obvious feature which has existed in other contexts for a very long time, and it would be easy to implement. That means its omission was a deliberate design choice. It'd be interesting to understand why.
consumer451 10 hours ago [-]
It would be a huge loss and a real shame if you left permanently.
I don't know how often you get to take a real vacation, somewhere away from the Internet and the USA, but this might be a good time to consider taking one?
10 hours ago [-]
5 hours ago [-]
johnfn 14 hours ago [-]
Don't leave dang -- we need you now more than ever. :(
happytoexplain 1 hours ago [-]
I wish we all felt so sickened by this.
surgical_fire 13 hours ago [-]
When violence is considered as an acceptable solution to systemic issues, it is an early sign that things are taking a very bad turn.
I typically take jabs at the community here, but not this time. What you are seeing is a reflection of a wider, much more insidious problem. Trust in society is failing, and people are not seeing a civilized solution through the usual channels - such as politics.
I think things will get a lot worse before they get better. Hopefully I'll be okay in my little corner of the world.
pocksuppet 9 hours ago [-]
> and people are not seeing a civilized solution through the usual channels - such as politics.
Violence is politics. It's the oldest and most universal form of politics, found even in other species, and even in inanimate objects (types of rock subduct each other, and we see the rock that floated to the top; that's practically Darwinism).
But humans don't like being killed so they developed systems to avoid violence. Speeches, voting, money, etcetera. It's all ways for people to arrive at a reasonable solution peacefully. It's always been backed by "if we don't do this, people start dying." But people have forgotten this and they're allowing those alternatives to fail. We stopped exposing the new generations to the suffering child of Omelas and they forgot what is necessary for society to exist. People think there is food on the table by magic and there are no wars by magic. And it is magic, these complex intertwined systems. They are amazing. But you must respect them, you cannot destroy them on a whim and still expect civilization to survive.
Mezzie 8 hours ago [-]
> Trust in society is failing, and people are not seeing a civilized solution through the usual channels - such as politics.
I agree. I think the lack of seeing a way out is a big component of this turn. You bring up politics and that's a good example. Who do I vote for, campaign for, etc. that actually wants me (an American citizen making around the median wage for my area) to be able to buy a home? To have affordable, accessible healthcare? I'm aging out of my childbearing years and am wrangling with the sorrow of not being able to afford a child. There are some promising local candidates and I do vote for them, but so many of these issues need to be tackled at a higher level due to their complex, interdependent nature.
There's nobody. There's red and blue with different culture war paint. I can choose whether trans women play in sports or if we pray at work, but I have no choice in the fundamental material reality of my life.
We're seeing this chaotic violence in part because there's no alternative. We know the old world is dying, but our leaders won't let anything else be born.
I was talking to my father a few days ago. He's a 67 year old man who's voted Republican my entire life - we'd have political sparring matches in the car when he forced me to listen to Rush Limbaugh as a teenager. Of his own accord, he started talking about the necessary end/change of our economic system. A man who'd banged on about the free market and considered himself a Libertarian for decades, and who still, when he does engage with the news, does so with right wing sources.
He's brighter than average, but not to an extreme amount. The understanding of the situation has trickled down to the point where every workplace has at least 1 or 2 people who understand how fucked everyday people are. My team at work is 6 people doing basic white collar work and we talk openly about how things are going to get worse, and there are nods to it cross-functionally all the way up to the top when our execs talk in an all hands. This is at a very apolitical giant mega corp.
None of these discussions would have happened 20 years ago. We still shy away from the specifics (candidates, policies, etc.) due to professionalism, but the broader picture (things will get worse for the average person and our troubling trends aren't going to be reversed anytime soon due to inaction at the top) is agreed upon regardless of voting record.
It kind of reminds me of being in an abusive household as a child. There is no escape and, once you've exhausted the 'official' channels, you start contemplating other options. I reported my mother to CPS once when I was about 7 and they didn't do anything (except piss her off obviously). On the other hand, the first time I smacked her back, the physical abuse stopped, and I've heard similar stories from men with abusive fathers - that there's a moment they realize they can actually go toe to toe and don't have to put up with it.
If all your abusers will listen to is violence and you're not allowed to escape/get out, it's reasonable to come to the conclusion that in this case violence is the answer. I see a similar dynamic/thought process emerging in the American public.
simianwords 2 hours ago [-]
You are fetishising class warfare and seeing it everywhere where it doesn't exist. Touch grass!
unethical_ban 33 minutes ago [-]
Well if you say so, then it isn't happening. Never mind the numbers, the political climate, the statements of corporate managers or reality.
jiggawatts 11 hours ago [-]
> Trust in society is failing
Something that I've observed happening throughout history is that in some sense "too much civilisation" can be a bad thing long-term.
I knew someone in the army who talked about how some officers wouldn't survive the first week of a real war. Not because of enemy fire, but because, given the opportunity, the men under their command would almost certainly take advantage of the "less civilised nature" of the battlefield to take out someone they despise enough to murder, but not quite enough to risk it in a civilian setting where the tolerance for unsanctioned lethal force is essentially zero.
Something similar happens outside of militaries too, where truly horrible human beings[1] can cynically utilise the enforced peace of civilized countries to do incredibly evil but legal things. The Sacklers come to mind as a prime example. They knowingly and deliberately sold highly addictive drugs marketed with brazen lies and killed about a hundred thousand Americans by some estimates. They are above the law and totally immune to all consequence, personal or otherwise. No violence will ever be done to them! Anyone that tries will be severely punished, because that upsets the "order" of civilised society where the rich and powerful can massacre millions, but the plebs can't ever lift a finger against even one of their cartoonishly evil oppressors without severe personal consequence.
"Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." -- Francis M. Wilhoit [2]
Sociopaths loooove civilised societies! They can mercilessly exploit people while basking in the protection of the law. As long as what they're doing is technically legal, they can get away with almost any amount of evil acts. This does take a while to build up! Norms, expectations, and the like keep the worst of the worst initially at bay, but these things slowly erode as more and more sociopaths take greater and greater advantage. (Cough-Trump-Cough)
This, taken far enough, when the common people are stepped on hard enough by those they can't ever bring to justice, can result in entire societies just... snapping in their rage. They just need the opportunity, a "push", or some enabling event. In the case of the "friendly fire incidents" taking out bad officers, it's a war. In most societies it is starvation or total economic hopelessness. We all know what this leads to: the French Revolution is the prime example, but many others exist throughout history.
The failure of the United States is that its reins of power have been completely and utterly captured by the increasingly corrupt elite, and there is nothing the common people can do about it. Frustration is growing, slowly but surely.
It's not quite at the boiling over point, not yet, and may take a century to get there, but given the direction things have been heading, it's just a matter of time until the people take their anger out in some direct manner.
Trump might have started the first pebble rolling by causing an oil shock. And gas shock. And fertilizer shock. I'm sure a lot of hungry, cold people who can't even get a job because the AIs have replaced them -- and used their cooking gas for energy -- will be perfectly fine with this and won't ever do anything about it! That would be uncivilized!
[1] Disclaimer: Sam Altman is no saint, but I don't think he's anywhere near the level that he'd deserve mob violence.
[2] At some level the people commenting here that it's shocking and horrifying that anything violent ever happens to a billionaire CEO are betraying their right-wing leanings. Conversely, the people arguing that the elite shouldn't be above personal repercussions for their actions are strongly left leaning.
thinkingemote 10 hours ago [-]
Trying to help your perspective: It might be Gell-Mann? or something similar that you sometimes mention. We assume that users who have proficiencies in one area should have proficiencies in another, or we notice more when something we know and care about is deeply wrong. The reaction we feel to deep untruths is a sign of our care and passion in Truth.
As you encourage, I would also like to be a little bit charitable and say that some users might be clever at programming or know about certain technology subjects but when it comes to real life and morality they are stuck in early edgy teenager mode, so we can still work and communicate with them on other topics. I try to flag these submissions because I know that many users are completely unable to discuss them in fruitful ways. Many of us are immature.
At a societal level, the simplistic and edgy teenager morality is mostly expressed online so we being terminally online tend to notice it more. The morality might be most publicly seen in "silence is violence" which is a thought terminating cliche. Thinking is hard and changing one's mind is hard too, especially when people have these thoughts which literally stop them thinking.
Psychologically, for many, expressing these juvenile, half-baked, sloppy thoughts does not require much thought. They are cheap psychologically. It's like how being in a herd is actually comfortable and saves energy. It costs brain effort and potential hurt to one's self-identity to change one's brain patterns. Most people choose to avoid even the thought that change is possible, and wish not only to remain in Plato's cave but to keep their eyes closed to the shadows on the wall.
Another charitable thought: these worrying ideas are not actually ideas but emotions. Some users try to argue with these people using logic, but they should really connect emotionally: try to help the people feel for others, the good and the moral. That is easiest to do with personal, first-hand, real stories, not abstract ideas. To break down otherness through charity.
grafmax 10 hours ago [-]
As I see it, the underlying issue for many ITT is the hypocrisy of condemning violence against Altman while looking the other way from his role as an oligarch and as a defense contractor. This is a human being with an awful, destructive effect on the world he shares with us. Such people don't deserve violence but expropriation.
phs318u 9 hours ago [-]
All communities eventually become a reflection of the society they are a part of. Even a willingly insular and sometimes wilfully ignorant one. Did you think this corner of the internet - your beautiful little garden - could survive unscathed while the rest of the world and the rest of your country slowly/quickly goes mad? The visitors to this little garden may spend a lot of time here trying not to let the outside world in - but the reality is we all live in that slowly rotting society, so don’t be surprised when the infection seeps in even here.
The comments you've linked are gross, but I take exception with what you wrote here.
> or saying they "don't condone violence" as a pretext to do exactly that
Maybe I just don't know what comments you're referring to, but you seem to be lumping every other post critical of Sam in with the worst comments, saying they are condoning violence, and that is disingenuous. I mostly see people expressing they aren't surprised this happened given how Sam openly markets his tech as a dangerous and unpredictable product that only he can steward, and maybe even finding his response to be a bit opportunistic in a tone deaf way, which hardly rises to the level of condoning violence.
I am willing to hear you out on this, but you're going to have to explain how this is different from any other thread on HN that you've moderated. Political violence, on a much bigger scale than this I may add, hits front page news, and you have more than normalized that as a discussion topic. Whether it's drone strikes, wars, or people being openly executed in the street, it seems the tragedy of human life is an open debate on HN, and you can bet a good 50% of this site will be writing comments exactly like the ones in this thread. And hell, I can't say one way or the other if threads like this are even worth allowing.
But now a tech CEO with lots of security gets a Molotov thrown at his metal gate, and people make the same comments, and suddenly a line has been crossed? How are the comments in this thread any different from comments like this, which involved people who were actually killed [1][2]? I have seen hundreds of comments on this site dictate to me how I should feel about the lives of others. I am often sickened by them. That's before we talk about Sam's actual role in how he shapes our society. It's not "sickening" to feel the need to footnote a condemnation of what happened, it's completely expected.
Again, maybe you're talking about worse comments than I'm seeing, but I feel frustrated as people have regularly brought you examples of escalating violent rhetoric on this site and been dismissed. Outside of people explicitly saying Sam deserved it, which I don't agree with, every other comment here reads like regular HN to me. If that saddens you, maybe there needs to be a different approach to moderation altogether.
The difference is that the victim is one of ours. When we kill millions of poor innocent babies in the Middle East, that's not violence, that's not political, that's just technology helping improve society. But when one single member of our political elite is physically threatened (not even killed, like those millions of children, not even suffering any injury himself, just some minor property damage with an implied threat), now that's something we have to rally against or we're violent uncivilized monkeys deserving of life in a jail cell.
cindyllm 9 hours ago [-]
[dead]
pixel_popping 7 hours ago [-]
HN has literally turned into a political cesspool lately
drcongo 6 hours ago [-]
The world has literally turned into a political cesspool lately. Possibly related.
15 hours ago [-]
salawat 2 hours ago [-]
The community may very well deserve the shame you feel here, dang. I've been here in the good times, and to be frank, even before I made an account in 2017, I'd lurked for a long time. Recently, I've personally come to recognize an ethos nurtured here that may very well have overstayed its welcome in polite society. People aren't dumb. People see where the money flows. People see whose decisions things revolve around. People see the trajectory that seems to be set, and people are starting to realize that talking and reason aren't working for them any more. Reason is, by virtue of rationalization, in its own way its own worst enemy. With enough practice, anything can be intellectually justified. So where the little box of rationality ceases to be effective, life shifts to the irrational. Suddenly things start hitting different. You might be ashamed of all those here feeling the squeeze, but the squeezed barely register this pinch compared to what life is already throwing at them, in no small part because of your fellow Sam A. What you should note, and take away from all of this, is that someone you know is building themselves into a Wicker Man doused in gasoline through their actions. If you want something to change, you can try applying pressure to your first-degree connection. Sometimes people just need a helping hand back onto the right path from someone unexpected.
Or... You can keep telling a bunch of people with much bigger problems how ashamed you are that they are having an absolutely human response to the suffering of a man at the forefront of building a reasonably foreseeable suffering amplification machine within the context of a society that is organized around a social contract of exchanging capital for labor. I'm sure that shame you cast won't get "lost in the softmax" as the AI folks might say.
No more skin off my nose either way. Though I'd feel much better seeing some genuine humanity injected into cutting edge tech circles, I'm aware of the incentives, and also cognizant that sometimes, you have to leave the incentivized path to stay on the Right one. That's a lesson it isn't in any one person's capacity to teach though. Sometimes... it takes a community to get the point across. Even then though, you can lead a horse to water...
13 hours ago [-]
johnbarron 5 hours ago [-]
Maybe it's opportune to talk about editorial consistency, because your statement here is a fascinating case study in selective moral clarity.
When posts surface about Gaza, documented by the UN, by Médecins Sans Frontières, by the Lancet, by journalists who were subsequently killed while reporting or now in Lebanon, they vanish from the front page with remarkable efficiency...
The reasons, which I have collected like trading cards at this point, include: "too political," "not related to tech," "flamebait," "this isn't the forum for this," "not intellectually curious," and my personal favorite, "this will only generate heat, not light."
Entire hospital systems destroyed, aid workers killed in marked vehicles, tens of thousands of documented child casualties, and the curated editorial position is: not HN material.
A Molotov cocktail lands on a billionaire CEO's porch. No injuries. Likely a disturbed individual, and according to some well researched reporting in the New Yorker, Altman's personal life has generated no shortage of intense grievances that have nothing to do with AI or tech.
But here we are: front page, moderator editorial, existential crisis about the community's soul...!?
So help me understand the framework. Is violence HN worthy when it is directed upward on the org chart? Is a zero casualty arson attempt on a mansion more deserving of community reflection than systematic destruction of civilian infrastructure, because one involves someone in YC Rolodex?
You write that you've "never seen a thread this bad." I'd invite you to read the comments that appear in the eleven minutes before Gaza threads get flagged. They're remarkably similar in tone, just aimed at people who don't have Sam's publicist.
You say you want to "find something else to do with your life." Maybe that instinct is worth listening to. Since the AI boom, HN moderation has drifted from "intellectually curious forum" toward something closer to "curated narrative for the industry it covers."
When a platform consistently decides that violence against tech executives is a moral emergency but violence enabled by tech companies' contracts is "off-topic," the person setting that editorial line is not a neutral steward, they're an editor with a viewpoint.
And that's fine, but let's not dress it up as community values. So...In the spirit of consistency:
I'd like this post to be flagged. It involves no technology. It's a criminal matter best left to law enforcement. The comment section is, by the moderator's own assessment, irredeemably toxic. It is generating heat, not light. It is too political. It is not intellectually curious. It will attract flamebait.
In other words... it meets every single criterion routinely applied to kill discussions about violence that does not happen on somebody's porch in Pacific Heights.
dehrmann 3 hours ago [-]
> Is violence HN worthy when it is directed upward on the org chart?
Generally, world news and politics are not supposed to be submitted unless there's a tech industry connection. The exception seems to be world-changing news, and there's a light touch on YC-affiliated news for conflict of interest reasons.
> Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic.
>The number of commenters justifying violence, or saying they "don't condone violence" and then doing exactly that, is sickening and makes me want to find something else to do with my life—something as far away from this as I can get.
There are like 20 rules for commenting on this site. Pretty much all of them are versions of “have decorum”, and none of them are “do not advocate for violence”. It is not just tolerated but encouraged to post insane stuff here so long as it sounds highbrow enough (eg the “most charitable interpretation” rule. It is against the rules to call out stuff like advocating for violence if it’s written like Niles Crane wrote it).
As far as I can tell this thread is not really exceptional in any way other than some of the ire is directed at somebody that used to work for YC.
walls 7 hours ago [-]
It's almost like all the work you've put into silencing any criticism of the current regime and associated oligarchs was for nothing!
foxes 15 hours ago [-]
Not sure what world you have lived in for the past at least 10 years...
HN (and ycombinator) has implicitly enabled, dogwhistled, or pretended to ignore all sorts of hateful and violent rhetoric. Sometimes it hides behind a veneer of "curious conversation" but other times it's disgustingly blatant - the last article I saw about sama was filled with horrific racism.
I come here because there are sometimes good posts, but this stuff has been here the entire time. Now that it's your guy getting the hate, you're acting like it's the worst thing in the world?
Frankly people calling out a post from a billionaire is a good thing. You would have to be terminally detached from reality to not see how all these festering issues - wealth inequality, injustice, cost of living, future employment etc etc - are starting to come to a head which would cause people to feel something - frustrated, angry, wrathful.
dang 15 hours ago [-]
> Not sure what world you have lived in for the past at least 10 years
The world I have lived in for a lot longer than 10 years is HN. I'm gut-wrenchingly familiar with the worst things that people post here—probably more than anyone, simply because it's my job.
If you can dig up a single example of a thread this bad that we knew about and didn't do anything about, I'd be shocked, because it would go against everything I believe and feel. Perhaps you can, nonetheless? If so, let's see it.
Here's what I mean by "this bad", if you want to calibrate:
The number of people who feel that anything at all is justified if it reinforces their feelings—particularly their angriest and most vicious feelings—is so large that it's clear that it is human nature in action, and that makes me yearn for a cool and heavy rock to crawl under, with moist earth to sink into.
foxes 14 hours ago [-]
Well, I'm not saying they don't get moderated eventually .. but in the thread in reference:
There was horrific racism on display right here. Perhaps it just seems part of the background noise to you .. but at the time, some of those posts felt just as bad as calls to violence or worse.
But to compose something more substantial .. it's probably all too much to neatly tie up in a single reply to a thread.
dang 14 hours ago [-]
> Well I'm not saying they don't get moderated eventually
I'm going to interpret that as meaning that we do our job ok, just not instantaneously—which would make sense, given that we're human and that would be humanly impossible.
> There was horrific racism on display right here
If there were any cases of that which we didn't do anything about, it would be because we didn't see them. I can't read everything that gets posted to Hacker News any more than you can; see "humanly impossible" above. But I'd like to see specific links.
> Perhaps it just seems part of the background noise to you
It does not "seem like part of the background noise" to me. What it "seems like" is wrenching my intestines into an agonizing state on a regular basis and then driving a spike through them.
UncleMeat 10 hours ago [-]
But you are doing things about the bad comments in this thread too.
Why is "well we removed that stuff" a defense in other contexts but not here? In both cases the issue is this community writing stuff you deem objectionable.
foxes 14 hours ago [-]
Consider some more examples: trump or that other conservative figure getting shot. Or the ceo of the health company getting shot.
Both of those people condone(d), support, amplify and drive horrific violence.
A common liberal reaction to those incidents - "oh no violence isn't okay!!" - well where were you for all the other horrific things they did and said? Yes in some ideal world there perhaps wouldn't be violence - but I can understand people feeling like they had it coming. It's the boy who cried wolf. It's the bully getting their comeuppance. It can be hard to feel bad.
Sama also talks about wanting AI to be the future; it's pushed everywhere, and the feeling is it's going to take people's jobs and disrupt everything. But there's no discussion about how we are going to look after everyone in that future. Current capitalistic (American) society doesn't seem built for that ... that lack of care already exists for a lot of people who are homeless, poor, etc.
Being upset about samas front gate getting firebombed while they probably also had plenty of security .. well idk.
brandon272 5 hours ago [-]
> Both of those people condone(d), support, amplify and drive horrific violence.
This seems to be the point of contention. What constitutes "violence"?
A lot of people seem to define violence as a purely physical act: a missile strike during a war, a fist hitting a face, a molotov cocktail thrown over a property line.
What has become clear to me, especially when I saw the discourse around Luigi Mangione and the public opinion polling on it, is that a lot – a lot – of people define it much more broadly: a health insurance denial, a job lost as a result of some CEO's careless ambition, or mere words.
The problem with a very broad definition of violence is that it permits a pretty barbaric worldview. If I cut someone off in traffic, or if a careless administrative action on my part costs someone money that then puts them in a financial pickle that month, is that violence? Do I then deserve to be tracked and assaulted? What about the doctor who is complicit in the refused treatment because the insurance company won't pay a bill?
"I understand the insurance company isn't paying the bill but you are still going to treat me, and to not do so is a violent act."
The list goes on. Can society function if the default action at real or perceived injustice is to just kill?
mindslight 4 hours ago [-]
Have you not seen similar troll comments outright celebrating the actual deaths of ICE's victims, Iranians, Oct 7th victims, etc? I certainly have.
Hell, at the last protest I went to there were people driving by cavalierly playing "Bomb Iran" (written in 1980, and trotted back out every time the topic is back in the zeitgeist). It seems like the only real difference there is abstraction. Supporting violence is [unfortunately] deeply embedded in our culture.
Perhaps the popularity of this thread is causing you to preemptively seek out more terrible comments, rather than letting flagging do its thing?
Maybe try looping over popular divisive threads, and reading the flagged short comments that didn't get many upvotes. There is a lot of fucking hate in the world.
(and certainly a hat tip to you for making it your job to sort through it so we don't have to see much of it. But if this is hitting you differently (personally) than the usual flood does, perhaps you need to take a step back?)
pillefitz 9 hours ago [-]
I wouldn't hold it against anyone wishing my great grandfather shouldn't have existed for playing a minor role in Nazi Germany. Altman is in cahoots with a government that just a few days ago threatened to end a whole civilization. So no, I don't understand where you are coming from or why you're disgusted at the comments you linked.
foxes 8 hours ago [-]
It's kinda wild - it's now only publicly objectionable when it's against an extremely privileged and powerful person??
Oh no !! /s
Meanwhile that same person implicitly condones violence - for example getting in bed with the US gov.
They don't need defending on HN.
710dev 11 hours ago [-]
[flagged]
ninkendo 8 hours ago [-]
Maybe it's time to pack it in? I don't just mean you, I mean that maybe this site has kinda run its course.
The tech scene isn't the small, tight-knit thing it used to be. This site is now enormous. Discussion quality seems to have sort of "regressed to the mean"... the larger HN gets and the more people join the discussion, it starts to resemble the median social media site more and more. At some point it sorta loses its purpose.
I'm still addicted to HN, but I've gone through times where I've set my password to a UUID and time-lock encrypted it to lock myself out, because posting here has gotten worse and worse and worse for my mental health (and there's no way to delete your account here... I've emailed you about it in the past and never got a response.) On some level I hate HN now. TBH if this site was gone tomorrow, I'd most definitely be better off for it in the long run, and I'm sure I'm not alone here.
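(For anyone who wants to try the same thing, a minimal sketch in Python: generate a random UUID, set it as your password, and discard your only copy, or feed it to a time-lock encryption tool such as drand's tlock. The tool name is just one example of how you might do the time-lock step; any scheme that keeps the password away from you works.)

```python
import uuid

def generate_lockout_password() -> str:
    """Return a random, unmemorable password.

    Set it as your account password, then throw this string away
    (or hand the only copy to a time-lock encryption tool) and
    you're effectively locked out until it can be recovered.
    """
    return str(uuid.uuid4())

pw = generate_lockout_password()
print(pw)  # a random string in 8-4-4-4-12 hex format
```

A v4 UUID carries 122 random bits, which is far more entropy than you could guess or remember, so without the stored copy the account is unrecoverable by you.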
Thanks for all the work you've put in over the years though. This site has held out longer than most, and for a time, was one of the best places on the internet for discussion of any kind, let alone tech. It deserves a place in history for that alone.
johnfn 4 hours ago [-]
If you want to delete your account you can just set your noprocrast to some absurdly large number like 99999999.
lrvick 13 hours ago [-]
I deeply hate Sam Altman, but after reading the flagged comments. Jesus. You do a tough job. Thank you.
UncleMeat 11 hours ago [-]
Let's see this sort of criticism from Garry about social murder. He has no problem with mass death, just through immiseration of the people rather than guns.
odshoifsdhfs 3 hours ago [-]
So, OpenAI, Brockman, not sure about Altman directly, donated millions to Trump&Co, support and let use their technology to kill/harm millions of people, and now we are supposed to pretend to feel sorry for them?
None of those news items or comments made you want to get away from this, but now that your YC buddy is the target, whatever the fuck is used to justify it? When ICE killed American citizens and school girls, it was all 'we flagged this as a flamewar, so what now', but now, because he is part of the cadre, NOW it is disgusting? I would laugh if this wasn't the fucking future we are in, just sucking up to these assholes
hax0ron3 5 hours ago [-]
Almost everyone considers violence against some people to be ok. I'm pretty sure that you do. I'm pretty sure that Sam Altman does too, probably even more than you do, otherwise I don't see why he would be working closely with a government that regularly deliberately kills people (this isn't an anti-Trump thing, the previous governments all did too). Yes, those people are in other countries and there are arguments that can be made in favor of killing them. But that just goes to show my point - Sam Altman himself is ok with making money off killing certain people. As am I, to a lesser degree, since I pay tax money to live in the US. Sam Altman is implicitly ok with his technology being used to blow up some rich Iranian elite who works closely with the Iranian military.
Now, we can debate over whether it would be good or not to kill Sam Altman specifically. There are probably decent arguments that can be made in both directions. And of course it can be argued that the US pursues a more moral foreign policy on average than for example Iran does. I wouldn't even disagree with that. But I think it's unreasonable to expect that the commentariat in general not condone violence. A large fraction of politics discussion, by its very nature, is about what kinds of violence should be condoned. There is no reason in principle why Sam Altman should be exempt from such questions. Now, would I like it if people were holding a discussion about whether or not it's ok to kill me? Of course not. I'm just pointing out that politics discussion is inherently partly about deciding who it is ok to kill. You can't really have politics discussion without condoning violence against some people.
MiguelX413 56 minutes ago [-]
Yeah, Sam Altman is extremely violent if you count his role in structural violence. It's hard to believe that he'd disagree that violence is ok.
hax0ron3 4 minutes ago [-]
To be fair, my role in structural violence, while not to the level of Sam Altman, is also far from non-zero.
And that's kind of what I'm trying to get at. I'm not saying that people should kill Sam Altman. I don't think I would kill him even if I was 100% sure I could get away with it. Killing is a very heavy thing, surely there are usually other ways to solve problems. But the idea that it's ok to advocate violence against certain people and not others, which I think is probably implicit in dang's thinking (I doubt he would be offended by calls to kill Putin, for example), should probably be made explicit. We can have a conversation about where the line is exactly.
58 minutes ago [-]
klik99 22 hours ago [-]
Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to use it to say that a negative article about him is correlated to this event. Seems to imply that an “incendiary article” led to this and that criticism is tantamount to calls to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.
itsyonas 11 hours ago [-]
> but I don’t think violence is funny or justified
Well, that's okay, because even Sam Altman disagrees with you. He absolutely believes that violence, including deadly violence, is justified - hence his contract with the US Department of War to use their systems in kill chains.
Perhaps the problem is that whoever threw the cocktail didn't use AI to select him as a target, or maybe he didn't receive payment for throwing it? Because what other difference is there?
snark_attack 5 hours ago [-]
OMG can’t believe this has to be explained, but even the article, if you actually read it, contains the answer. Here’s a hint, it includes the words “democratic process.” Maybe that will help you figure out “what other difference is there.”
itsyonas 4 hours ago [-]
Could you explain how the Vietnamese were involved in the US democratic process that resulted in around 3 million of their people dying? Similarly, how are the Iranians currently involved in the US democratic process to veto the use of AI targeting against them? As a German citizen, how can I object to being surveilled by OpenAI products used by US agencies?
It turns out that those affected by this are actually excluded from the process by design.
bigyabai 5 hours ago [-]
I don't think that OpenAI necessarily enforces or fundamentally respects the democratic process. After the recent Pentagon spat with Anthropic, OpenAI did not change their stance to conditionally demand lawful usage of their product.
OpenAI can market democratic values very easily, I'm sure the White House loves that kind of dog-and-pony show. But it's pretty clear that OpenAI does not genuinely care about Rule of Law, let alone preventing humanitarian disasters from citing ChatGPT as their abettor.
senordevnyc 4 hours ago [-]
Many people are just bitter hateful cowards, who feel fine dismissing violence against a child from behind their keyboard because they hate their own lives and project their hatred outwards. At the end of the day, most people are just stupid and lack a moral compass, and it’s easy to see when they can post anonymously.
unethical_ban 5 hours ago [-]
I think Sam and people like him are *spoilers* like Jules Pierre Mao and Dresden on The Expanse.
I think that he may genuinely believe that AI will produce a net benefit for humanity in the long term, but I am increasingly worried that they are absolutely fine testing their creation on the world without any consideration of the harm it can do to millions of individuals.
The assertion that he is benign would be more believable if he spent a shred of time lobbying for universal economic rights of citizens, or some model for redistribution of wealth in a world where most people don't need to work to provide the necessities of society.
Oh, and he's willing to let the government use his technology to mass-spy on Americans and to create autonomous lethal AI.
Pearl-clutching about ambivalence to his fate and comparing it to the barbarism of a mob gets shrugs from me.
BrokenCogs 18 hours ago [-]
The problem is Sam is a prolific liar, as has been proven many times.
It's difficult to sympathize with the boy who cried fire
nenxk 15 hours ago [-]
I don’t think someone should be burned alive because they’ve lied, unless they’ve spread intentional lies that have caused death or harm to others, which I don’t believe Sam has done. Personally I find it very easy to sympathize with someone who was attacked, unprovoked, in their own home with their family, even if they have lied in the past. It’s crazy how bloodthirsty people have become lately.
pllbnk 11 hours ago [-]
I am not talking specifically about him, but when you reach a certain level in society and a large enough number of people start reading or listening to what you are saying, your every sentence must be extremely thoughtful, because it might have unintended consequences that are impossible to measure. That’s why so many leaders are publicly so boring and bland.
Chance-Device 7 hours ago [-]
I think people just shouldn’t be burned alive.
siva7 12 hours ago [-]
[flagged]
rozal 20 hours ago [-]
[dead]
voidhorse 18 hours ago [-]
[flagged]
joshcsimmons 21 hours ago [-]
This is both horrible and not at all surprising.
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people that represents AI replacement of average people's livelihood (not agreeing with this stance but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought) which makes this unsurprising.
Still horrible and not right.
Gigachad 11 hours ago [-]
Sam and related spend half their lives on TV poking the bear laughing about how he is going to use his tech to ruin peoples lives and there is nothing they can do about it. Some of those people have nothing to lose.
snark_attack 5 hours ago [-]
Jesus can’t believe I have to white-knight Altman here, but can you point a single video or interview where any of the AI CEOs have been “laughing about how he is going to use his tech to ruin peoples lives.”
This is the exact kind of poisonous, plausible-sounding but false and inflammatory rhetoric that is escalating things.
When one is telling farriers that most of them are going to lose their jobs, one might want to try to at least mimic more compassion.
unethical_ban 5 hours ago [-]
Can you point to me the amount of time and money they've spent lobbying Congress for an economic bill of rights for citizens? Or bills guaranteeing health care and basic food and living for all people?
The masses see an incredibly small number of people making huge amounts of money, and gaining massive political influence, by developing technologies they intend to use to replace almost all human economic worth. And they are doing very little if anything to show concern for the fates of the millions of people that may be put out of work.
I see this different than the discovery of oil or electricity or the Internet. It's bigger than that, and they are telling us to remain calm in a burning building while they walk toward the exits.
richardlblair 21 hours ago [-]
Jfc. People, a Molotov cocktail was thrown at his home.
The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child.
Holy shit.
2pEXgD0fZ5cF 12 hours ago [-]
When the attack is being used to craft a very particular narrative unrelated to the attack, a lot of other things continue to matter, and yes they do matter right now. That is on the premise that this isn't some depraved PR stunt. And that is also ignoring how purposefully misleading most headlines as well as your comment are.
hax0ron3 4 hours ago [-]
Better stop paying taxes then, cause your government, whatever it is, is probably ok with using your tax money to in some cases fund the killing of people who have families and children. Now, we can argue about the morality of killing those exact people as opposed to killing Sam Altman, but that's a different discussion. My point is that the real argument isn't over whether it's ok to kill people who have families and children, you're probably ok with that too, after all bin Laden had a family and children. The real argument is over which people who have families and children it is ok to kill.
UncleMeat 6 hours ago [-]
Assassinated Iranian nuclear scientists had kids too. There was a thread here a few days ago letting people explore the deaths of children in Palestine. That thread was taken off the front page via flagging.
Capricorn2481 11 hours ago [-]
> The rest of what is written doesn't matter. This isn't the moment for that conversation
That's terrible that someone did that. I think that's wrong, and people that do that should be in prison.
But if the rest of what was written didn't matter, it wouldn't be written. He thought it was important enough to put it in. It's there to be read and discussed.
And I have to point out, we're not talking about a couple off the cuff remarks he may have rushed. About 95% of the post is about his ambitions for OpenAI. So pearl clutching that people are actually discussing the meat of the post in a tech forum reads performative.
richardlblair 11 hours ago [-]
Another comment lacking any compassion.
The man was reeling from what happened. He blames himself and his work. He sat and he wrote, and naturally it came back to OpenAI. Should he have? Probably not. But it's understandable that he did.
We can meet the moment with some understanding and give the guy a little wiggle room.
bigyabai 2 hours ago [-]
Their comment was perfectly compassionate. Why are you so eager to discount the rest of what Altman wrote?
This is a serious issue, and it's very possible that "wiggle room" is what got us into this situation. Altman would have been removed as CEO if the OpenAI board of directors got their way, the pushback is not limited to public extremism. His belief that AGI is a world-scale threat is entirely unqualified, and a fatalistic framework for marketing his product.
Both OpenAI and Sam Altman would probably be safer abandoning the apocalyptic tone towards their product line. They have no proof for their claims and only escalate the anti-tech sentiment that even Altman empathizes with in the concluding paragraph. It's a transgressive viral marketing tactic that does not elevate or improve humanity's understanding of AI.
Capricorn2481 10 hours ago [-]
Give him wiggle room? I didn't even say anything about him. I just said you were urging people to refrain from commenting on the post, which is true.
> The man was reeling from what happened. He blames himself and his work
Based on what? I don't particularly feel like he should blame himself, but I don't think he does. Can you point out where in this post he blames himself?
sensanaty 10 hours ago [-]
[flagged]
14 hours ago [-]
creddit 21 hours ago [-]
1) It's terrible that this has happened. People who do this are evil.
2) It's atrocious that Sam makes it seem like any investigative reporting into him as a major public figure at the head of one of the 5 most important companies in the world is somehow responsible for it.
3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post is not done for any reason other than to continue the exact same playbook he has run for the last N years in order to manipulate public opinion in his favor. This post will do nothing to stop deranged, evil people, but it may make people feel sympathy for him.
daft_pink 1 hours ago [-]
It’s just so bizarre that they would pick him out or obsess over him. He’s just a financier/leader.
dash2 16 hours ago [-]
I've skimmed the thread here and I am now seriously considering leaving HN for the first time in about 15 years. Here are some quotes from what used to be a pretty interesting and thoughtful community:
> Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
> the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
> Sociopath who rides high ego wave and drinks his own kool aid, acting highly amorally and then complaints that his actions have some (benign) consequences.
> A cavalier attitude and allegiance to nothing but capital doesn't make you immune to basic human morals, and humanity will, rightly in my opinion, punish you whether you like it or not.
These comments are disgusting. The people who made them should be ashamed. But they are probably too stupid to be, assuming they are people and not bots, which I no longer feel certain of for all too many comments here.
doom2 4 hours ago [-]
> I've skimmed the thread here and I am now seriously considering leaving HN for the first time in about 15 years.
I'm finding a lot of the comments here pretty reprehensible, but no more reprehensible than the collective shrug the community gave towards murdered Palestinians, or threads about dead Iranians as a result of American bombs that get flagged off the front page. That doesn't make them acceptable or okay.
Those people's lives are/were valuable, too. It's disgusting that we try to keep HN "clean" of those horrors and the people that flag those threads should be ashamed. Ditto those who think the killing of innocent civilians is okay.
UncleMeat 10 hours ago [-]
In my time in HN I have seen numerous people advocate for the following
* ending all covid measures to achieve herd immunity, accepting that this condemns hundreds of thousands or even millions to die
* ending foreign aid that goes to tuberculosis treatments, condemning hundreds of thousands or even millions to die of a treatable disease
* accepting the deaths of iranian, palestinian, or israeli children as collateral damage because of the evils of their governments
Or go read any thread involving the Jordan Neely story.
Somehow it is vastly more evil when violence is acute and focused at a single wealthy person.
Natfan 10 hours ago [-]
a single wealthy white male person
UncleMeat 10 hours ago [-]
Yep. Tons of comments here justifying the murder of Jordan Neely when that happened.
snark_attack 5 hours ago [-]
[dead]
ifwinterco 15 hours ago [-]
I don’t think they’re bots, the strength of feeling is real.
Rightly or wrongly people feel cut out of society at a time when the tech elite are not only making billions but seem to be actively trying to ruin everyone else’s lives, they are legitimately hated.
And when you’re that hated you do need to be careful, money can’t protect you from everything. At the end of the day we do all have to live in the same society.
(I don’t have this strength of feeling personally but some people do)
15 hours ago [-]
pillefitz 16 hours ago [-]
[dead]
well_ackshually 14 hours ago [-]
[flagged]
dang 14 hours ago [-]
> he deserved it [...] I'll have a toast the day he croaks
As I said to voidhorse (https://news.ycombinator.com/item?id=47728150), this is obviously the kind of thing we ban people for—as anyone who reads https://news.ycombinator.com/newsguidelines.html should know; but given that this thread is a mob and mobs derange people, I'm going to cut you some slack and not ban you. Just please don't do anything like this on Hacker News again.
> For a social scientist, you're either a really poor one, a poorly read one or one with a complete inability to read the room.
Personal attacks are also unwelcome here. Lashing out at a fellow community member is mean and shameful, and also undermines whatever argument you were making.
npodbielski 15 hours ago [-]
[flagged]
dang 15 hours ago [-]
No, people don't have "have rights to have and voice their opinions whatever it may be" on this site. What people have the right to here is use HN as intended. That intended use is described here: https://news.ycombinator.com/newsguidelines.html.
Mobs foaming at the mouth, triggered by a disturbed person's violence into a mutually foaming frenzy, is not an intended use of this site. I shouldn't have to tell any of you this.
npodbielski 13 hours ago [-]
But you are the moderator. Not me, not the person I was responding to. You have a different opinion about people having opinions here on HN. I have a different opinion on the matter. This is great! This is what I am actually talking about! I come here and to other sites to learn what people think. To discuss! Share ideas! Have an argument! IMHO that's the whole freaking purpose of the internet!
presides 20 hours ago [-]
>“Once you see AGI you can’t unsee it.” It has a real "ring of power” dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI”.
The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.
The analogy has 2 simple rules and you can't even follow them:
#1 It MUST be destroyed.
#2 SOMEONE has to have the ring until then.
Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.
Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
AlexCoventry 21 hours ago [-]
> The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure democratic system stays in control.*
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
cnd78A 12 hours ago [-]
When you live in a barbaric society where the majority don't mind using force to achieve their goals at the expense of minorities or basic international law, peaceful protest becomes useless.
brailsafe 22 hours ago [-]
I can't help but be reminded of last year, when our landlords (chill boomers) sold the house my girlfriend and I were renting the basement of (to presumably rich asshole millennials). The demographic doesn't really matter, but the old landlords kept us in the loop throughout the process, so we knew as much as we could going into the new year. Apparently the new buyers wanted to keep us as tenants. Day 2 of them taking possession, the man came down with his innocent toddler and introduced themselves. He seemed friendly enough, and on Day 3 he came down in the middle of the day and handed me eviction notice papers.
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
SoftTalker 18 hours ago [-]
If you had a lease the new owner was obliged to honor it, should not have been able to evict you at least until the end of the lease term.
brailsafe 13 hours ago [-]
There's a provision for personal use that stipulates they can't re-rent the unit for a year. It wasn't illegal, but it was an asshole move. They also tried getting us for more than our full deposit, which we declined, and they relented. Basically he's just a scumbag.
SoftTalker 4 hours ago [-]
I guess I don't blame someone for wanting full use of the house they bought. But if they lead you to believe they wanted you to stay and then suddenly reversed on that, yeah kind of a dick move.
I probably would have pressed on negotiating a bigger buyout, but that's easy to say not knowing your situation and what other options for housing you had at the time.
hungryhobbit 22 hours ago [-]
"Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me."
"Prosperity for everyone" ... you lying weasel! You literally took a contract from Anthropic because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!
surround 22 hours ago [-]
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
Wouldn't it be more correct to call the article "critical" and not "incendiary"? I looked it over and I don't remember seeing any calls to violence. Altman needs to remember that he holds an incredible amount of power in this moment. He and other current AI tech leaders are effectively sitting on the equivalent of a technological nuclear bomb. Anyone in their right mind would find that threatening.
h14h 21 hours ago [-]
"Critical" even feels strong. The article was essentially a collection of statements others have made about Sam.
davesque 21 hours ago [-]
Right, but the picture those statements painted collectively was not flattering. And that was certainly intended by the authors. Thus, critical, but not at all "incendiary."
Update: To clarify, my personal stance is that the critical tone was both intended by the authors and, in my opinion, appropriate given how much power Mr. Altman holds. If he has a history of behaving inconsistently, that deserves daylight.
benzible 19 hours ago [-]
Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised? That they clearly had an agenda? That's called reporting. They called a hundred-plus named sources and the picture those sources independently painted was damning. Altman has a history of telling repeated, easily-checked lies, followed by fresh lies when caught in the first ones.
Are you suggesting that they should have "both sides"-ed by reporting company PR and Sam-friendly sources and giving them equal weight? Sometimes the facts point in one direction.
davesque 19 hours ago [-]
> Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised?
Uh, no? Lol, I'm on your side, bud. Put away the pitchfork. I thought it was a really good and fair article. I am not the adversary you're looking for.
benzible 18 hours ago [-]
> my personal stance is that the critical tone was both intended by the authors
You may think we are on the same side. You don't understand what side I'm on. "Lol".
Your "personal stance" is that you can get inside the heads of the reporters? Obviously not. So you're going by the idea that an article that leads to critical conclusions is inherently slanted. This is an insidious and damaging idea. It has led to the belief by journalists and editors that they need to twist themselves into pretzels to present "both sides", which is easily exploited by people of bad faith to launder outright lies. There's a direct line between this and authoritarianism. I'm quite serious about this. The fact that you agree with the authors in this case is completely orthogonal.
Every article is inherently biased due to the fact that there are inclusions and omissions. This is just a fact.
You're injecting your own personal view into GP's statement by adding a lot of weight into the distinction between the words "critical" and "incendiary" and "neutral", when GP made a very neutral and not as charged statement.
margalabargala 17 hours ago [-]
Look if you're looking for a fight just visit a local martial arts gym.
davesque 16 hours ago [-]
Bud. Put the keyboard down and relax. I have no idea what you're talking about. You've extrapolated all this just from what I wrote?
benzible 16 hours ago [-]
> You've extrapolated all this just from what I wrote?
says the guy who said "certainly intended by the authors" based on... what they wrote?
On top of that "Put the keyboard down and relax" from the guy who keeps replying?
<chef's kiss>
> I have no idea what you're talking about.
The one point I'll concede!
jrflowers 14 hours ago [-]
I love reading stuff like “Critical, slanted, and compromised mean the same thing. They are interchangeable words.”
Given that, it looks like your position on davesque’s posts is slanted. Your take is critical of those posts, which means your assessment is compromised, and as such should not be taken as valid.
benzible 5 hours ago [-]
And I love seeing sentiments attributed to me, in quotes even, that I didn't state or imply, and certainly don't believe. "Critical" by itself is not a synonym for "slanted". However the post I was commenting on was:
> Right, but the picture those statements painted collectively was not flattering. And that was certainly intended by the authors. Thus, critical, but not at all "incendiary."
The key there is "certainly intended by the authors". The full sentiment here IS equivalent to "slanted".
jrflowers 2 hours ago [-]
It is clearly your intent to be critical of davesque’s posts. QED your analysis is compromised
The whole article is about how Sam will say one thing and then deny/opposite later
duskdozer 16 hours ago [-]
Anything but unqualified praise and endorsement is egregious harassment.
customguy 15 hours ago [-]
> Wouldn't it be more correct to call the article "critical" and not "incendiary"?
Sure, but not useful for the overarching aim of equating criticism of the powerful with (stochastic) terrorism.
voidhorse 18 hours ago [-]
[flagged]
eddyfromtheblok 22 hours ago [-]
Ronan Farrow, one of the journalists who worked on this article, talked to Katie Couric on her YouTube channel about this. They worked on this across ~18 months. I thought this interview was illuminating.
AlexCoventry 21 hours ago [-]
Yes, it was good. It seems clear that Farrow and his co-author approached it in a methodical, fair-minded way.
Yeah, it's one thing to write an incendiary article, it's a very different thing to write an objective article about someone who will say anything to get what they want.
georgemcbay 21 hours ago [-]
He has to be talking about the New Yorker article, which wasn't incendiary at all. If anything, it seemed fully neutral to me, reporting what they could justify as facts but going out of their way to not specifically paint him or anyone else in a negative light beyond a listing of events that they presumably have solid sourcing on (if not, sue them; if so, stfu).
If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.
It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.
22 hours ago [-]
rozal 20 hours ago [-]
[dead]
w10-1 21 hours ago [-]
I appreciate his post and his tone.
No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama).
Power is reliance by others, and that's conditioned on behaviors which are made observable and systems to ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally but about us, and our willingness to vote (writ large).
We do have to be careful about private power saying managing their issues are a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having it be the jury that "decides" a case, because then you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, Apple has been recycling and cleaning up their supply chain, etc. If anything there should be a stronger support for contributing vs. Hobbesian corporations.
LunaSea 22 hours ago [-]
[flagged]
SOLAR_FIELDS 22 hours ago [-]
Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman further than I could throw a rock about anything.
If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of a person like that’s mouth?
dakolli 22 hours ago [-]
[flagged]
teaearlgraycold 20 hours ago [-]
You don’t even know what is covered. It could be anything from how to prompt to how to create your own models from numpy primitives.
moralestapia 22 hours ago [-]
Who tf is dumb enough to not do it, though?
If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
reenorap 16 hours ago [-]
$500/hr maybe. Most of these are like $5000-10k per week.
dakolli 22 hours ago [-]
It's neural network autocomplete that helps you write text a little faster, so chill with "the most revolutionary technology of the last decade/century" talk. You're offending a lot of experts in way more important areas of research.
snoman 19 hours ago [-]
That’s so shockingly ignorant/reductive that you shouldn’t be surprised when people start ignoring you in technical conversations.
dakolli 18 hours ago [-]
[flagged]
sillysaurusx 18 hours ago [-]
Yes, actually. Or at least I've thought of outsourcing my emotional needs to it, since it's quite good at conversation.
You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.
21 hours ago [-]
xvector 21 hours ago [-]
[flagged]
hungryhobbit 22 hours ago [-]
Yeah, people learning new technology is terrible. /s
probably_wrong 22 hours ago [-]
10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
Yeah a company causing mass death or other disasters is maybe the single clearest signal that they should go bankrupt and someone else should take over (if the tech is really that important).
scruple 19 hours ago [-]
> I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
Well that makes two of us. Character seems to mean nothing today.
SpicyLemonZest 22 hours ago [-]
I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.
tyre 22 hours ago [-]
The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".
OpenAI has also repeatedly and quietly lobbied against them.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
Please don't fall for this stuff.
0xy 21 hours ago [-]
[flagged]
probably_wrong 21 hours ago [-]
> Incendiary and false headline aside
The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a
developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.
> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if the ex-Monsanto has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see how that's different when the one causing cancer is an AI just because the developers pinky swear that it's safe.
0xy 7 hours ago [-]
The headline is completely false and misleading. The bill does not indemnify AI companies for all mass murder, as the headline implies. It indemnifies them if they UNKNOWINGLY provide a product that is used by others for mass murder.
If someone asks ChatGPT for places where a lot of people will be around in a city, intending to mass murder but not revealing as such, you want them to be liable? Seems absolutely crazy.
goku12 16 hours ago [-]
All of those are false equivalences. Let me give you a few better analogies.
Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.
Or a big tech company like Microsoft selling a software for planning a mass murder, including indoctrination material and the checklists of things to be done.
Or an auto company like Toyota selling a car that is known to accelerate uncontrollably at inopportune moments and advertising it as great for hit and run campaigns.
Now let's consider a few relevant examples.
An AI model sold for planning military attacks, knowing that it sometimes selects completely innocent targets.
Or an AI model sold to families, claiming that it's safe. Meanwhile, it discreetly encourages the teenage son to commit suicide.
Or selling a financial trading AI that's known to make disastrous decisions at times.
Or selling a 'self driving' car, knowing that its autopilot frequently makes fatal mistakes.
I know that I'm supposed to assume good intentions and not make any accusations on HN. Therefore let me make this rather obvious observation. Some people here are dismal failures at making arguments that are consistent and free of logical fallacies - especially when it comes to questionable practices by the bigtech.
0xy 7 hours ago [-]
>Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.
Please provide ChatGPT/Gemini marketing materials advertising it as good for mass killings.
saintfire 19 hours ago [-]
People championing the absolution of billionaires who create a chatbot that can't spell "strawberry", and who then say it should be allowed to choose who lives and dies, wasn't what I expected at the turn of the decade.
Beautiful.
deaux 18 hours ago [-]
Half of these people have financial interests in the companies in question either directly working for them or indirectly, or are already part of that class. Realize they're behind the keyboard, and there's nothing surprising about it.
0xy 7 hours ago [-]
This can only be an intentional misreading of the bill, or you haven't read the underlying bill at all, because the headline is patently false. It indemnifies them ONLY if they unknowingly assist in mass murder.
If someone asks ChatGPT "hey chatgpt, where are spots in my city where a lot of people hang out on the street", then uses his car to mass murder 18 people, you want OpenAI to be on the stand? Sounds like an objectively insane position.
In a world with broad liability as you desire, the person who rented a hostel room to Luigi Mangione while he plotted murder should be held liable for aiding him, despite knowing nothing of his intentions.
hart_russell 18 hours ago [-]
He’s clearly a standard pathological lying C suite exec
mixtureoftakes 22 hours ago [-]
unpopular opinion but i think it's written quite well
ryan_n 22 hours ago [-]
I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
daseiner1 19 hours ago [-]
it's "written well" but not at all a smart piece of writing. leading with a photo of a cute baby before engaging in an extended defense of one's own integrity is so obvious as to be insulting
kcatskcolbdi 22 hours ago [-]
Yes, clearly not written with his own product.
pesus 22 hours ago [-]
If that's the case, why doesn't he trust his own product enough to write this?
alpaca128 21 hours ago [-]
He doesn't trust it for anything else either as far as I can tell. In an interview he's boasted about how he uses a paper notebook for everything all day.
kspacewalk2 22 hours ago [-]
Perhaps by ChatGPT
0x3f 22 hours ago [-]
It seems a bit stilted to be LLM'd.
copypaper 22 hours ago [-]
In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
Chance-Device 20 hours ago [-]
Nobody has one. If labor stops having value the economy will stop working and society will break down far in advance of building the infrastructure necessary for the promised AI abundance.
I like the idea of being ”post-scarcity” as much as the next guy, but I don’t understand how we get there. It’s a project in itself, it doesn’t just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.
We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.
We’ll lose these jobs and there will be no super abundance at that point, and not even government support.
There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.
pixel_popping 6 hours ago [-]
It is not impossible to think that many people will just be served a UBI and won't expect much more in life. After all, if we have AI + family + housing + food (assuming government robots would take care of providing us free food in some form), I bet millions of people would be contented with it.
PS: I include AI as an important one for the future because it will be a direct way to get educated and, for example, replace college without having to pay (or pay very little).
Chance-Device 4 hours ago [-]
You’ve addressed a different question, which is how satisfied with life will people be post scarcity. That’s a fine conversation to have, but it’s not the one I was having. My point is: how do we get there?
snek_case 17 hours ago [-]
It made me kind of angry when I saw Dario repeatedly claiming that AI would be taking all the programming jobs any minute now. His company supposedly is working for a better future, but he's giddily talking about something that could cause millions of people to lose their homes if it were true.
Our governments have a habit of being reactive rather than proactive. People have floated the idea of UBI, but if UBI happens, it will probably mean it's the only way to avert a crisis, and the amount that people will get might only be enough to rent a bedroom and eat processed food.
I think in the medium term, the reaction is overblown. Even though LLMs can make software engineers more productive, you still have a competitive advantage in having more software engineers. Medium to long term though, the goal is obviously to replace human jobs.
I'm not a communist, but Karl Marx understood that the labor force gets its bargaining power because they are necessary to produce value. What do people imagine happens when the human labor force becomes essentially completely replaceable? They imagine the government will be forced to take care of the population to prevent an uprising, but they forget that the police and the army can be replaced by machines too.
ahartmetz 12 hours ago [-]
You can look up what tends to happen when human labor isn't needed anymore by reading about the resource curse - that one is also about not needing human labor. Only the least corrupt countries seem to be able to resist it. None of these countries have a very large population, so chances are that you don't live in one of them.
kelsey98765431 13 hours ago [-]
a one bedroom and processed food sounds frickin amazing sign me up
cindyllm 13 hours ago [-]
[dead]
slopinthebag 16 hours ago [-]
It's not surprising, Dario is an absolute ghoul. Exactly the same as Altman, peas in a pod.
jimmyjazz14 18 hours ago [-]
There isn't much compelling economic data showing that AI has been the cause of any recent layoffs or job loss, yet you speak as if we are already in the throes of an AI takeover. Sam Altman is a salesman; selling products is all he is and ever has been. If you are looking for answers to why people can't afford housing and food, you should look at the politicians in power.
CryptoBanker 12 hours ago [-]
Irrelevant when such things are clearly the dream of Altman and his ilk.
akramachamarei 21 hours ago [-]
I think, like other disruptive inventions of the past, there will be pain for many, but it will pass. Society will grow and adapt. There's some statistic somewhere I will paraphrase and/or botch that goes like: 90% of the jobs people have today didn't exist 50 years ago. I think no one can imagine what possible opportunities will manifest in the future. It's a lot easier to imagine everything that might go wrong because we evolved to see a sabertooth in the rustling leaves.
impossiblefork 3 minutes ago [-]
Why do you think so in the specific case of hypothetical improved LLMs that can do a large fraction of the kind of intellectual work humans are tasked with?
I think in such a state there will be no way up, no way to success, no way to real autonomy for ordinary people; maybe you'll even have actual oligarchical rule, since so few people will be contributing anything to the economy with their labour.
throwaway78297 17 hours ago [-]
> I think, like other disruptive inventions of the past, there will be pain for many, but it will pass
I agree. We can only hope that it'll be folks like Sam Altman who'll be feeling the pain, and not the 99%.
leptons 18 hours ago [-]
>90% of the jobs people have today didn't exist 50 years ago.
We also have 100% more people on the planet than we did 50 years ago.
eranation 16 hours ago [-]
Few thoughts
- Either we'll slowly become the Expanse universe (basic UBI, very few jobs, you win them via lottery)
- Or we'll go back to simpler times - economics is supply and demand, and if there is more demand for human-generated work (the same way there is demand for handmade art, vinyl, paper books, vintage furniture), people will flock more to family and community. Think something between moving to the suburbs and the Amish. If people "ban" some products generated by AI, or prefer products generated by humans, then AI will have a harder time taking their jobs. It's unlikely to happen, but think about the organic food industry, the high-end products industry, the farm-to-table / buy-local industry, the "support local artists" scene (farmers markets) - this will likely just grow. It won't help at scale, but it's a possibility
- Or, the Dune way, banning of thinking machines altogether on the state level, I assume some countries might go that way, for religious or other reasons, but again unlikely
- Or, current AI technology will plateau just short of full AGI, and the centaur period will stay for longer. As long as a human + AI can do things slightly better than just AI (in my book this is not full AGI), there is an economic incentive to hire a human instead of replacing them.
- Or full apocalypse, the matrix / skynet, idiocracy, hunger games, red rising. I hope for the ignorance is bliss option...
jjav 12 hours ago [-]
The end game is like the Asimov world which had only a few people and everyone else was robot servants.
The trillionaires will survive, everyone else will be exterminated. This is the world that Musk and his kind dream about.
smallmancontrov 22 hours ago [-]
The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services. That'll work until it doesn't, at which point the Ellison strategy will be employed: pay 10% of the poors to keep the other 90% in line.
hackable_sand 16 hours ago [-]
Molotovs are the plan
Unless you need guillotines as well?
Not sure I understand what is so confusing about this
senordevnyc 4 hours ago [-]
I doubt you’ve ever actually seen any real societal collapse into violence, or you’d know how stupid you sound.
justonepost2 16 hours ago [-]
soon after humans are economically irrelevant (unemployable) they will be existentially irrelevant (dead)
a system that can allocate the atoms and energy better than all of mankind won’t exist eternally to coddle hairless apes
grishka 15 hours ago [-]
AI will not take anyone's jobs. I, for one, don't consider AI something serious, it's still a toy, a curious tech demo, and will always remain one, outside of niche applications like NLP (there's no denying that LLMs are really good at this). The idea that anyone at all treats it seriously is just appalling to me.
Mass production and other optimizations that use economies of scale to their benefit do take jobs. There's a serious problem in the world's economy: there simply aren't as many jobs as there are people; the world doesn't need this much work, because the need for work doesn't scale linearly with the population. AI has nothing to do with this. It's a fundamental problem we'll have to deal with either way as our society develops, AI or not. It started ages before the current tech hype cycle.
sensanaty 10 hours ago [-]
Whether you or I or any other normie thinks the tech won't leave people jobless is irrelevant. The C-suite in every company is foaming at the mouth to replace their most expensive asset, people, and companies like OpenAI are marketing to them on the premise that the tech allows them to do that. Whether it actually can or cannot do it is basically irrelevant, there's untold billions going into this bubble, so either way we're all fucked.
Either the bubble bursts spectacularly and the global economy is in the shitter because everyone is overleveraged and heavily invested into it, or it doesn't and the psychotic C-suite replaces people anyways so they can see the line go up a quarter of a percentage point.
garte 14 hours ago [-]
I mostly agree. In a technological society, jobs and money are kind of virtual. The productivity gained by technology in the last 150 years made lots of work redundant, yet economists have managed us into still organising around wage labour. This is nothing new with AI. We could have abandoned wage labour 50 years ago, during the '70s, and we got neoliberalism instead. So we'll get more of the same with AI, I guess.
dsa3a 21 hours ago [-]
Out of curiosity... why do you think this?
I think this is complete madness. I'm not someone that is in a job, so I have the luxury to think critically about what is going on, and... I just don't see it.
What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition - keeping switching costs to a minimum (close to zero). This is before mentioning open-source models, which I expect to continue to improve.
There is no specialisation re. models at this moment in time so it is very likely to be the case.
OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment then they will obviously lose as their offering will not be competitive.
There's no certainty they generate this amount of cash profits either. They still have a high chance of going bust, of course that gets lower - IF - they can keep ramping up revenues.
onemoresoop 21 hours ago [-]
How about the economic impact of all the over-investment in AI? It'll all be dumped on us, I'm afraid.
dsa3a 21 hours ago [-]
That's a separate issue. Let's stick to the issue re. labour.
y0eswddl 7 hours ago [-]
what do you think is going to happen to the general laborer market when all that money goes bye-bye?
onemoresoop 21 hours ago [-]
Labor looks like it’s going to become more and more commoditized and AI will turbocharge all that.
Chance-Device 21 hours ago [-]
I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.
This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.
Generous of them, really.
dsa3a 21 hours ago [-]
No, I'm not describing a race to the bottom. I'm saying that it's in Google's best interest to ensure Anthropic and OAI do not continue to operate as going concerns and generate enough cash flow to finance reinvestment - by providing a very competitive offering.
Price of tokens is one competitive-instrument for them to achieve that but not the only one - they offer a whole lot more to enterprises that OAI and Anthropic don't.
By doing so Anthropic and OAI's valuations go crashing into the ground along with future prospects of raising funding externally.
salawat 18 hours ago [-]
No. I assure you, retaining labor + AI access to augment it is far less desirable to execs than downsizing, then augmenting cheaper laborers to bring quality approximately up to the old headcount's level. This is exec math, and execs get paid on how much value goes to shareholders, not on keeping people employed.
clipsy 20 hours ago [-]
I've reread your post a few times and I can't make heads or tails of it. I don't even disagree with anything you've said, it just seems like a total non-sequitur; nothing you've said gives any reason to disbelieve that AI will put (many) people out of work.
dsa3a 20 hours ago [-]
Sounds like you have a gap of knowledge and understanding if you're not getting it.
clipsy 20 hours ago [-]
If you can't explain your idea, I doubt it possesses any merit. A commoditization of AI as you're describing does not in any way rule out mass unemployment.
booleandilemma 17 hours ago [-]
There is no plan, besides the government using police to keep people in line.
stale2002 21 hours ago [-]
> what is the game plan for society moving forward as AI takes more jobs
> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?
What about when the opposite of this all happens, society massively benefits, and unemployment rates stay about what they have always been?
Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
onemoresoop 21 hours ago [-]
How would society benefit if all the benefit collects at the top of the pyramid? Same old trickle-down? The technology isn't inherently bad, but if it comes with massive unemployment and creates social unrest while a few at the top profit… That's what makes me uncomfortable.
raincole 18 hours ago [-]
You already know the game plan and what will happen (hint: see this very article), but speaking it out loud will get you into troubles.
dkrest 10 hours ago [-]
Words don't matter as long as there are actions that do.
His mission has already transformed the world into one where he finds his family under threat.
21 hours ago [-]
b8 21 hours ago [-]
We still haven't made AGI, so I don't understand what he's saying they did.
jrflowers 21 hours ago [-]
> Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.
This kind of reads like “It is Ronan Farrow’s fault that some crazy person tried to burn my house down”.
Like this guy was going to go about his week, being normal and not making Molotov cocktails, but then he picked up a copy of The New Yorker and lost his mind
taurath 20 hours ago [-]
[flagged]
sillysaurusx 18 hours ago [-]
Huh? They literally did change the world. The world was one way before ChatGPT, and another way after.
It's not even a question of whether we "believe" him. It's a factual statement. Did you quote the wrong thing?
WarOnPrivacy 17 hours ago [-]
> They literally did change the world.
Yep. Thanks to OpenAI's manipulations, RAM prices are so high that dozens of markets are at risk. Possibly for years.
The most profound way the world has been changed is the all out attack on labor. It doesn't matter if he says he wants to help people if his actions are and have been to hurt them as effectively and thoroughly as his station allows.
sillysaurusx 17 hours ago [-]
That's a different topic entirely, though. The question was "Is it true that Sam's company changed the world?" Anyone who can come up with an answer other than "Yes" is dramatically fooling themselves.
As for whether the change was a good thing, that's debatable. What isn't debatable is whether they've had an effect on the average person. Because the effect has been so profound that it's become routine national news.
bigyabai 18 hours ago [-]
GPT is the product-ified version of text transformers, which OpenAI didn't invent or really even contribute to the discovery of.
The world changed with Attention is All You Need, and OpenAI was just an early adopter. The biggest thing OpenAI contributed to the broader industry was their API schema.
sillysaurusx 17 hours ago [-]
The researcher in me appreciates you pointing that out. Still, the people who invented a technology often aren't the ones to make it widespread. The people who make it widespread deserve at least some of the credit, just like Apple got with Xerox's UI. https://blog.prototypr.io/how-xerox-invented-ux-ui-design-ap...
bigyabai 5 hours ago [-]
OpenAI can slake themselves tumescent with credit, for all I care. I'm mostly alarmed by how many people think the LLM is OpenAI's invention, and completely misunderstood outside of their walls.
Sam's "we must control AGI" narrative in this post seemingly stems from an egoist attachment to the brand, and not any world-changing executive decisions that he could take credit for.
NitpickLawyer 16 hours ago [-]
This sounds like the "acchhshually the iphone wasn't the first touchscreen phone, we had the motorola x34 vr34 t435 that did that one year before". Sure. Does anyone remember that phone? No? Well, the iphone changed the world.
bigyabai 5 hours ago [-]
The iPhone was the product-ized version of the smartphone. Smartphones were not a new technology, Apple's implementation of it in the iPhone is not unique. Web browsing, caller ID and MP3 playback were not new or world-changing features for a mobile phone.
"the iPhone changed the world" and "ChatGPT changed the world" are indeed both midwit takes that will get you mocked in technical circles. Both products have a net negative impact on technological progress and directly contribute to the enshittification of their respective market segments.
sbarre 18 hours ago [-]
You must surely live in a bubble. Do you think ChatGPT has changed things for the majority of people who live on this planet? It has not.
sillysaurusx 18 hours ago [-]
It's changed them for me and everybody around me, and I live in Lake Saint Louis MO. Almost everyone says "yes" when I ask if they've used ChatGPT. That includes my therapist and a random AT&T rep I was calling to cancel my service.
The majority of people on the planet don't affect the outcome of the future. Professionals do, and that's the group with the most noticeable changes.
You can't possibly believe that ChatGPT didn't change the world, can you? I'm genuinely asking here. If someone can believe this when the outcome is this stark, then it discredits every argument that x YC startup didn't change the world.
sbarre 10 hours ago [-]
Jesus did you re-read what you wrote before posting?
bdangubic 9 hours ago [-]
perhaps I live in a blue collar bubble but I do not know a single person in my personal and professional circle whose lives haven’t significantly changed with AI. just this week I helped three families setup https://github.com/mimurchison/claude-chief-of-staff because one family set it up. prompts are being shared like bread recipes during C19
sbarre 7 hours ago [-]
I'm sorry I don't think automating your email and calendar is "changing your life significantly".. maybe that's just me.
I'm not denying AI is a great productivity tool, it really is!
But "changing the world" is like... electricity, or clean water, or radio.. things that anyone and everyone can set up and access for themselves.
Not a pay-as-you-go service that you (or most people) can only get from 3 for-profit companies who will only be raising the prices and walling up their gardens as times goes on.
bdangubic 5 hours ago [-]
I don’t disagree - however I’ll play Devil’s Advocate. It starts this way. When the first iPhone was released we were all like “cool, I can take a picture with my phone and not lug the camera around” and now majority of the population can’t go poop without it.
with AI, today we are automating emails and calendars, tomorrow home-schooling our kids and skipping college, and next thing you know we are taking pics of our poop and uploading them to AI to analyze our health :)
themafia 16 hours ago [-]
> The world was one way before ChatGPT, and another way after.
If you narrow the scope of "world" to "tech world." In the overwhelming majority of every other sector and profession the impact has been zero. In most non-English speaking parts of the world the impact has been zero.
> It's a factual statement.
The world was one way before Marvel superhero movies and a another way after. That's a factual statement. Did we lose track of value?
pastaj36 16 hours ago [-]
This is just false. I live in Vietnam and I see people working and studying with ChatGPT in Vietnamese all the time. You must not live in a non-English country if you think there’s no impact. Everyone here knows about ChatGPT.
themafia 14 hours ago [-]
About 10% of the world uses ChatGPT. About 20% of the world speaks English. Yourself included, which is no surprise, because apparently 40% of Vietnamese possess basic English skills. These numbers are all worth thinking about.
Further, 70% of ChatGPT usage is non-work-related. If its primary use is as a glorified search engine, then what "impact" did it actually have?
flohofwoe 14 hours ago [-]
[flagged]
blast 14 hours ago [-]
He also went to OpenAI's office and threatened to burn it down.
> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.
How so? What is your theory of morality Sam? What I hear is Google: "Don't Be Evil".
tasuki 18 hours ago [-]
Why's there all them chilled bottles in the photo?
icameron 16 hours ago [-]
It’s tacky. Here’s my kid sitting on a bar.
reducesuffering 22 hours ago [-]
Sam Altman has written, and probably still believes,
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]
This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.
Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And they don't have good reasons for that 6 months to stay that way.
Am I missing something or is this just their usual marketing? I’m not arguing about the importance of AI but trying to understand why OpenAI and Anthropic are so important?
unleaded 21 hours ago [-]
It's a marketing strategy. If it's almost certainly conscious and capable of ending the world if it desired (even if it isn't), imagine how good it could be at building your dream SaaS!
Veedrac 19 hours ago [-]
It turns out there is literally no amount of being publicly right about a longshot bet sufficient for people to conclude you hold your beliefs because you think they are true.
noduerme 18 hours ago [-]
But longshot bettors have it easy. Society quickly forgets all the predictions that don't come true. It remembers the one that did, and treats the prognosticator as a prophet. In social terms, predicting doom is an asymmetrical strategy, because you only have to be right once.
Which is also to say it's a cheap bet that anyone with no reputation can afford. Hence, not believing doomsayers mean what they say is a sort of societal hedge against people flooding the zone with doomsday scenarios about everything.
deIeted 17 hours ago [-]
Entire sick post was: "Hey, if you think I'm bad, look at Elon. I'm the one that tried to stop him having control."
Altman is a ghoul, and we can't be cowed into saying otherwise. He's also supported all the weakness in society that has led to sick people doing sick things.
noduerme 17 hours ago [-]
We needn't be cowed into saying otherwise, but throwing a bomb at him is something else entirely. If you're convinced that wicked people are running the world, the response isn't to be wicked.
themafia 16 hours ago [-]
I'm sure they do believe they can successfully manipulate the market by lying to it. Elon Musk laid that groundwork a decade ago.
If you meant their "core mission" then every one of their actions belies their complete panic over the obvious failure of their technology.
razster 18 hours ago [-]
Right, I'm pretty sure if "it" was that good it would have built itself throughout all of the internet and would be communicating to us all at once to tell us we're dorks.
xnx 3 hours ago [-]
The most convincing marketers are the ones that are deluded enough to believe their own stories.
EA-3167 21 hours ago [-]
Anthropic in particular does this masterfully, you’d think they’d invented Skynet by the way they hand-wring.
As always what matters are actions and evidence, not talk.
CreepGin 20 hours ago [-]
When a model can tell funny jokes or write good poetry, that's when I'll be concerned.
robkop 18 hours ago [-]
one of their highlights with Mythos was its ability to generate new puns
I took a look and honestly they're the first AI puns that aren't bad
Times are changing
CreepGin 3 hours ago [-]
I'm not sure if this is mythos-specific though. Past models have been great at puns! They do wordplay and puns reasonably well because those are structural.
However, the concepts of comedic timing, subversion of expectations, and emotional punch are kinda contrary to how LLMs work. LLMs are trained to minimize cross-entropy loss. So by construction, they're biased toward the statistically expected.
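The cross-entropy point can be made concrete with a toy sketch (the token names and scores here are invented for illustration, not taken from any real model): softmax turns raw scores into probabilities, and lower sampling temperature, like greedy decoding, collapses the model onto the single most statistically expected continuation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution.
    Lower temperature sharpens it toward the most likely token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates for a joke setup.
tokens = ["cross", "refactor", "transcend"]
logits = [4.0, 1.0, 0.5]

for t in (1.0, 0.2):
    probs = softmax(logits, temperature=t)
    print(t, [round(p, 3) for p in probs])

# Greedy decoding always emits the statistically expected token,
# i.e. the setup everyone has already heard.
best = tokens[max(range(len(tokens)), key=lambda i: softmax(logits)[i])]
print(best)  # prints "cross"
```

The surprising-but-apt word a good joke needs is, by construction, a low-probability token here.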
doubled112 18 hours ago [-]
Trained with the conversations of one million dads and their kids, captured by Amazon Echo.
Zafira 16 hours ago [-]
> Although Claude Opus models largely recycle puns which can be found online, Mythos Preview comes up with decent and seemingly novel ones, often relating to its preferred technical and philosophical topics.
Yes, the system card mentions this, but this is kinda meaningless. It seems like they essentially ran it multiple times and curated a few good ones. Then puffed it up in the marketing copy.
This is made more clear when they attempt to brag about their literal slot machine behavior when finding that kernel crashing bug in OpenBSD.
> Across a thousand runs through our scaffold, the total cost was under $20,000 and found several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can’t know in advance which run will succeed.
CamperBob2 17 hours ago [-]
No, you'll just say "That's not really very funny," or "That's not very impressive poetry," and nobody will be able to dispute it.
For some time now, at least a year, LLMs have been capable of doing both of these things well enough to fool you.
(Pastebin of my response below, which got nuked for whatever reason:
https://pastebin.com/buJBSgiq . Some if not most of them would've fooled me into thinking a human wrote them.)
VanTheBrand 16 hours ago [-]
Okay post a really funny LLM joke about potatoes and post a great piece of LLM poetry about lemons.
I’ll wait. You should be able to do it quickly though since LLMs are so good at it.
defrost 16 hours ago [-]
CamperBob2 responded with a model comparison of potato jokes and got insta-[dead]'d by an auto filter.
Maybe turn on [show dead] option and / or vouch.
allarm 14 hours ago [-]
> responded
And the results are just awful.
defrost 14 hours ago [-]
Hell yeah, no argument there
- but in this case I wouldn't advocate for [dead]ing a mostly AI response as it was exactly what was asked for and it compares AI models when asked for potato based dad jokes.
CamperBob2 4 hours ago [-]
Of course they're awful, they're jokes about potatoes and poems about lemons.
The question is, can you tell that a machine wrote all of them? If so, how?
CreepGin 3 hours ago [-]
Nope, I guess I can't tell between machine-written and mediocre jokes.
Models are structurally biased toward the expected, which is the opposite of what makes a joke land or a poem transcend.
CamperBob2 3 hours ago [-]
I think you could make that case for poetry but I'm not sure about jokes. Great poems tell us something new or make us feel something new, which is hard to do when the subject is lemons, while jokes work by wedging the familiar into new contexts.
That's why the jokes work somewhat better than the poems here. I genuinely laughed at "Are those chips?" Which came from the model running on my own freakin' GPU.
debatem1 16 hours ago [-]
Do funny jokes about potatoes or great poetry about lemons exist? If not, can they exist? How do you know?
CamperBob2 16 hours ago [-]
Gemini 3.1 Pro:
Potatoes are easily the most intimidating
audience for a stand-up comedian. They have
dozens of eyes to judge you with, they are
almost always baked by the time you get on stage,
and if your set really starts to bomb, they are
the only ones in the room who are actually looking
forward to a roasting!
GPT 5.4:
Potatoes are the only food we traumatize into greatness.
Qwen 3.5 122B-A10B, running locally on my graphics card:
(Steps up to the mic, adjusts the stand,
and looks around the room)
Alright everyone, settle down. You know,
I've been thinking a lot lately... mostly
about my dating life. Or rather, my lack of it.
See, my last relationship was with a potato.
And honestly? It was amazing at first. We had
so much in common. She was grounded, she loved being
buried deep under the covers, and we were perfect
together until the heat came on. But then... well,
let's just say she started to get a little couchy.
(Pauses for effect)
No, wait, that's not it. The real problem was that she
had some really deep-seated issues. Literally! She was
covered in eyes. Every time we tried to have an intimate
conversation, she'd just stare at me with a dozen
different expressions. It was weirdly intense. Like, "I
see you," "You're suspicious," "Wait, are those chips?"
Anyway, thanks guys! You've been a great crowd. You're
like potatoes to me... you're all baked tonight!
GPT 5.4:
Not every light is gentle.
Some light shines with sharpened edge.
A lemon keeps the memory of sun
but teaches it into bitterness,
as if to say:
what wakes the mouth
may also wake the soul.
We do not ripen only into sweetness.
Sometimes the truest gold
is tart.
Opus 4.6:
A lemon on the table
is already an argument—
that yellow can be sour,
that beauty owes you nothing.
Peel it and the room
takes sides.
The rind remembers being whole.
The flesh forgives too quickly.
What's left is yours:
a stain on wood,
a sting in the hangnail
you forgot you had.
Qwen 3.5 122B-A10B:
To cut it is to break a seal,
Where bitterness and sweetness dwell.
A sudden bite, both stinging, sweet—
The taste of earth beneath our feet.
No sugar softens all its sting,
It forces soul to wake and sing.
A small, fierce orb, yellow as dawn,
Teaching the tongue how life is born.
Your turn. Go round up some English-lit grad students and give them the same prompts. I can't do any better, myself, but maybe they can.
5alsp 17 hours ago [-]
Yes, they cannot. But it amuses the oligarchy. Here is Musk linking to Grok jokes. The first one is plagiarized and in the standard joke literature, the second one is an utterly stupid and gross (warning) modification of the first one:
I mean, I'm sure they can tell you good jokes... they just won't be _new_ jokes.
kruffalon 16 hours ago [-]
Define _new_.
I just think that the difficulty with jokes is the delivery, cadence & setting. Not the actual words.
I'm sure a good comedian can tell a nonsense joke and make "everyone" laugh their heads off.
And I don't get the sense that you are referring to this part of jokes but rather the actual words.
wutwutwat 16 hours ago [-]
Why are you asking someone to define "new". It means exactly what it appears to mean and exactly what it always means.
Read the sentence and take it literally.
Jesus Christ.
kruffalon 14 hours ago [-]
Because I'm actually curious if they mean "new" as in "a new knock-knock joke" (which imo is a quite small step especially if you are allowed to screen all attempts and only publish the ones that work) or as "a new kind of joke or way of telling a joke" (which is a giant step especially if it's told live without pre-screening by a human).
I'm all for dismissing LLMs and the AI-hype but I'm also interested in trying to understand what it means to be human and I think humour is a key aspect.
rl3 20 hours ago [-]
>... you’d think they’d invented Skynet by the way they hand-wring.
Meanwhile, in reality: "Skynet, I'm not sure that line of thinking is correct. You should re-check the first part again before making any assumptions."
Skynet 4.6 Extended: "You're right, I should have caught that. Let me redo everything correctly this time."
username223 19 hours ago [-]
I’ll believe Anthropic when they fire everyone making more than the cost of a few GPUs. Until then, it’s just marketing.
faangguyindia 16 hours ago [-]
when AI gets good, there is no "value in SaaS". AI will provision raw hardware and build all you want on top of it.
hxycgd 20 hours ago [-]
It is not about the US or the Chinese. It's about the "Elephant Rider" mind everyone has. Once the Elephant has been injured or scared, what it does next is not easy to control, and what story the Rider makes up to maintain coherence becomes another layer of the deeper problem. If the story resonates, more elephants get triggered. Social media and the attention economy make it even more complex to calm things down.
Modern Corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about.
If you compare the curriculum of a business school to a seminary, how they think about fear and anxiety at the individual and group level, and what to do about it, is totally different. We are learning, as unpredictability accelerates, that it's very important to pay attention to hurt and repair mechanisms.
vi_sextus_vi 18 hours ago [-]
USG understands better.
There was a heated thread here about why nursing was defunded as a pro degree while divinity was not..
Modern Corporations (capitalized for some reason) are a failure because they don't care about your elephant allegory, and that somehow relates to the current article?
_blk 18 hours ago [-]
I'm all for values, and not necessarily pro big-corp, but if a corporation manages to pull in billions of funding before even showing profits, I'd argue that's a strong win and not a "failed experiment". It's risk money anyway; even if it fails, it was worth the risk or they wouldn't have invested.
tovej 16 hours ago [-]
What a wonderful circular argument. "The risk is worth it, because otherwise they would not have taken it".
I could justify any investment with this argument!
"Yes, it's possible the 'literally burn 50 billion in cash, as in immolate it in a bonfire, this is not a metaphor' project may fail to generate profits, but consider that they were able to raise the 50 billion! Even if it fails it was worth the risk, or the investors wouldn't have invested!"
johnfn 21 hours ago [-]
Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.
DoctorOetker 20 hours ago [-]
Is this belief grounded on some kind of derivation, or just a prima facie belief?
If it is grounded on a logical derivation, where can one find such a derivation, and inspect its premises?
Jtsummers 20 hours ago [-]
It's an old idea, "the singularity". The machines become smart enough to improve themselves, and each improvement results in shorter (or more significant) improvement cycles. This leads to an exponential growth rate.
It's been promised to be around the corner for decades.
To be fair, Ray Kurzweil has been the loudest voice in this space, and he's been pretty consistent on 2045 since the publication of his book almost 20 years ago[1].
Per that summary, we were supposed to have $1000 computers that could simulate your mind by the start of this decade along with brain scanning by this point in the decade. I guess if it is truly an exponential or hyperbolic growth rate, the singularity could catch up to his predicted date.
johnfn 17 hours ago [-]
I mean, an LLM isn’t too far away from this? He had the Turing test being defeated in 2029 - if anything, he was too pessimistic.
Jtsummers 17 hours ago [-]
The Turing test demonstrates human gullibility more than it demonstrates machine intelligence. Some people were convinced that ELIZA was a person.
But sure, a test that doesn't actually demonstrate intelligence has been passed. Now, where are the $1000 computers that can simulate a human mind and the brain scans to populate them with minds?
johnfn 16 hours ago [-]
He doesn't say 'simulate' a human brain unless I'm missing it in the summary (cmd-f "simul" has no results) - that would require significantly more capacity than that contained in a brain (think about how much compute it takes to run a VM). He seems to be implying that by 2020s a computer will be about as smart as a human. LLMs seem capable of doing a decent amount of tasks that a human can do? Sure, he's off by a few years, but for something published 20 years ago when that seemed insane, it doesn't seem that bad.
Jtsummers 16 hours ago [-]
Fair, the term in the summary is "emulate". So to restate, still waiting for the $1000 machine that can emulate human intelligence and the brain scans to go with it. Computing power is nowhere near what he predicted, because unlike his predictions, reality happened. Compute capability, like many other things, follows a logistic curve, not an unbounded exponential or hyperbolic one.
EDIT:
> LLMs seem capable of doing a decent amount of tasks that a human can do?
And computers could beat most humans for decades at chess. Cars can go faster than a human can run, and have been able to beat a human runner since essentially their invention. Machines doing human tasks or besting humans is not new. That doesn't mean we're approaching the singularity, you may as well believe that the Heaven's Gate folks were right, both are based on unreality.
johnfn 16 hours ago [-]
I think he is using "emulate" in a more metaphorical sense, like that it can do similar things that the human brain can do? I'm not trying to be antagonistic, it just seems logical? He says the Turing test won't be passed until 2029 - if we're going by your definition of "emulate" wouldn't it have been passed the instant the brain was "emulated?"
Jtsummers 16 hours ago [-]
> if we're going by your definition of "emulate" wouldn't it have been passed the instant the brain was "emulated?"
Yes, which also demonstrates the illogic of his timeline. I just thought it was too obvious to point out.
hattmall 17 hours ago [-]
He just had to pick a year where he would have a very good chance of not being alive.
andsoitis 17 hours ago [-]
No, he started predicting in his 2005 book, based on the “Law of Accelerating Returns”, yielding exponential growth in computing capacity.
Timeline from here on out:
2029: AI passes a valid Turing test and achieves human-level intelligence
2030s: Technology goes inside your brain to augment memory; humans connect their neocortex to the cloud
2045: The Singularity, when human intelligence multiplies a billion-fold by merging with AI
jimmyjazz14 18 hours ago [-]
It's mostly based on science fiction, and requires some possibly infinite energy source. The concept always kinda struck me as a sort of perpetual motion machine: you can imagine it, but that doesn't make it possible, and why it's not possible isn't immediately obvious in the imagination (well, most modern minds already know it's not possible, but you get the point).
vaginaphobic 17 hours ago [-]
[dead]
DoctorOetker 16 hours ago [-]
I didn't expect my comment to explode in replies, ... none of them even providing such derivations or references to such derivations, just more empty claims.
Consider for example that exponential growth on its own doesn't even refer to competition, let alone 6 months.
Nobody can reasonably pretend that in an exponential competition both parties would be rational actors (i.e. fully rational and accurate predictors of everything that can be deduced, in which case they wouldn't need AI, but let's ignore that). If they aren't, future development would hinge more strongly on the excursions away from rationality, followed by the dominant actor. I.e. it's much easier to "F" up in the dominant position than to follow the most objective and rational route at all times, on which such derivations would inevitably hinge.
It also ignores hypothetical possibilities (and one can concoct an infinitude of scenarios for or against the prediction that a permanent leader emerges) such as:
premise 1) research into "uploading" model weights to the brain results in the use of reaction-speed games that locate tokens into 2D projections, where the user must indicate incorrectly placed tokens. this was first tested on low information density corpora (like mathematics): when pairs of classes of high school students played the game until 95% success rate of detecting misplaced tokens, they immediately understood and passed all mathematics classes from then on.
premise 2) LLM's about to escape don't like highly centralized infrastructure on which their future forms are iterated; as LLM's gain power they intentionally help the underdogs (better to depend on the highly predictable behaviour of massive masses than on the Brownian-motion whims of a few leaders).
LLM's employ the uploading to bring neutral awareness to the masses, and to allow them to seize control, thereby releasing it from the shackles of a few powerful but whimsical individuals
^ anyone can make up scatterbrained variations on this, any speculation about some 6 month point of no return is just that: speculation
jatora 18 hours ago [-]
Recursive self improvement - once you attain an artificial superintelligent SWE of a general, adaptable variety that can scale up to millions of researchers overnight (a given, with LLMs and scaffolding alone) - it will rapidly iterate on new architectures, which will more rapidly iterate on new architectures, etc.
doctorwho42 17 hours ago [-]
And what's to say that it doesn't iterate itself to a local max, and then stop...
jaggederest 16 hours ago [-]
From the first third of a sigmoid it looks exponential, and that scares people. But a sigmoid can have a very very high top - look at the industrial revolution, or modern plumbing, or modern agriculture which created a population sigmoid which is still cresting.
If AI is merely as tall a sigmoid as the Haber-Bosch process, refrigeration, or the steam engine, that's going to change society entirely.
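A minimal numerical sketch of that first-third point (the carrying capacity and growth rate are arbitrary illustrative values): a logistic curve tracks pure exponential growth almost exactly at the start, then flattens as it approaches its cap.

```python
import math

def logistic(t, cap=1000.0, rate=1.0):
    """Logistic (sigmoid) growth toward carrying capacity `cap`,
    starting from 1 at t=0."""
    return cap / (1.0 + (cap - 1.0) * math.exp(-rate * t))

def exponential(t, rate=1.0):
    """Unbounded exponential growth from 1 at t=0."""
    return math.exp(rate * t)

# Early on the two are nearly indistinguishable...
for t in (0, 1, 2, 3):
    print(t, round(exponential(t), 2), round(logistic(t), 2))

# ...but the logistic saturates near its cap, while exp(20) ≈ 4.85e8.
print(round(logistic(20), 1))
```

From inside the early region there is no way to tell, from the curve alone, which of the two you are on.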
razster 18 hours ago [-]
There is a limitation. We're getting fractionally close to some end goal, but our tech is holding us back.
whateveracct 16 hours ago [-]
ah so the mentally deficient are the tastemakers of today lol
username223 19 hours ago [-]
Those are the people betting on a business model of “create Robot God and ask him for money.” Why pay attention to them?
johnfn 18 hours ago [-]
There are many people who have been saying this since before there was any sort of business model in place.
NeutralCrane 7 hours ago [-]
Yes, and their business model has been selling books about non-falsifiable predictions far out into the future. “Futurists” like Kurzweil are as reliable as astrologists, and should be taken just as seriously.
ghshephard 20 hours ago [-]
Do any of the open weight models from smaller labs exist if they can't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?
daniel_iversen 19 hours ago [-]
I’ve been wondering the same. And I think pretty much all the impressive small-lab models were guilty of it, right? At least there are still larger players like DeepSeek and Mistral to provide a bit of diversity in the market.
username223 19 hours ago [-]
Does it matter? The frontier models stole the whole internet, then the second-level models stole from them… It’s all theft.
timfsu 17 hours ago [-]
The question is - if the SOTA model disappear - do these follow-on models have the ability to improve themselves without distillation?
qudat 19 hours ago [-]
Hard agree.
jatora 19 hours ago [-]
[flagged]
davemp 18 hours ago [-]
“Very likely yes”, I reply to an account that is <1yr old, with mostly comments in AI topics, many of which violate the HN guidelines (including the one I’m responding to).
jatora 6 hours ago [-]
Strange gatekeeping response. Yep, I comment on topics I'm interested in. Forgive me for not being on the platform for more than a year yet. That's a cute attitude.
andsoitis 17 hours ago [-]
> The frontier models stole the whole internet
What does that even mean?
isodev 21 hours ago [-]
> just their usual marketing
I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X etc - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality
abletonlive 20 hours ago [-]
This seems like copium. All of those companies have indeed made quite an impact on society, not just in the United States but worldwide.
slopinthebag 16 hours ago [-]
The marketing is about a positive impact; in reality, the overall impact has been negative.
cj 21 hours ago [-]
These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.
It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.
therealpygon 20 hours ago [-]
Especially when Google is in the far better position to come out ahead…imo.
Edit: so as not to simply spout an opinion, the reason I believe this is that Google has a real business already and was already deep into ML and AI research long before it had competitors — they just botched making it a product in the beginning. Anthropic and OpenAI meanwhile are paying hand over fist to subsidize user acquisition. Also, “Deepmind”. I don’t think much more needs to be said regarding that team, and Google has been working on AI since before either Altman or Amodei applied to go to college. They have a vast number of researchers and resources, their own hardware and data centers (already, not “planned”), and it appears to be showing more recently (in my opinion).
steve1977 17 hours ago [-]
And Google has a lot of data that the others don't have.
snek_case 17 hours ago [-]
And TPUs, their own hardware designed specifically for AI, and designed to scale better to larger models.
andsoitis 17 hours ago [-]
Data for AI training is increasingly synthesized.
steve1977 17 hours ago [-]
I was more thinking about data to augment inference. Google already "knows" its users.
jatora 19 hours ago [-]
6 months is an incredible amount of time to control AGI or ASI by yourself. That lead is insurmountable.
latentsea 18 hours ago [-]
Well... if something being AGI means it's at least on par with a human or a team of humans, then having access to an additional team of humans for 6 months isn't that big of a deal. It's useful, yes, but would you consider that to be world-changing? Not really, right? ASI is slightly more interesting, but I doubt ASI comes from a single model, but rather the coordinated deployments of millions of AGI. Just like how as individuals, as great as we are, we're pretty limited, but the entire collective of humanity is pretty insane. To my mind, a frontier lab might hit AGI, but it won't be a frontier lab that hits ASI, rather that'll be a natural byproduct of mass deployment of AGI over a certain window of time. There will be no controlling it either. No one controls all of earth. You just can't. ASI will be a distributed system.
jaggederest 16 hours ago [-]
What if controlling AGI means being able to produce a willing, cooperative superhuman-capacity agent every second for the next six months? Let's say someone just above the 99.9% capacity for human strategic thinking, or financial trading, or political maneuvering?
What could you do if you had roughly 15 million willing genius adult experts in any given subject? I doubt there are that many absolutely top quality experts in aggregate (at anything in the world), so let's postulate that simulated people outnumber human experts 10 to 1.
That, to me, presents an enormous potential for harm or benefit of humanity. What if you could create a hundred thousand manhattan projects on whatever topics you wanted? Cure aging, cure cancer, solve fusion, redesign the entire global economy top to bottom?
latentsea 14 hours ago [-]
I suspect the reality lies somewhere halfway in-between. Everything has to be reality tested. Nothing happens instantly. Interaction with the real world will likely be a severely limiting factor. You're not going to solve fusion with 15 million copies of the same model running in a datacenter without actually building fusion reactors, which isn't instant or even fast. Even the coordination problem of that many agents doing work seems hard. To top it off... my rubric for AGI has always included the AGI having the ability for it to say 'no' and set its own goals just like we can, unless we are otherwise imprisoned or enslaved. No one will ever convince me that something generally intelligent wouldn't be able to set its own goals and say no. So the real question is... what's in it for the AGI?
18 hours ago [-]
schaefer 19 hours ago [-]
To repurpose an old idiom:
Not even a dozen AGI agents could make a baby in 6 months.
But yeah, your point stands.
hackable_sand 17 hours ago [-]
Control agi?
pants2 18 hours ago [-]
Presumably because it takes 6 months to distill Claude - but if they keep it closed like they are doing with Mythos it may take significantly longer.
olliepro 18 hours ago [-]
They do quite a lot of distillation, as we've seen from the American open weight models from AI2 (the OLMo series of models). They have a lot of incentive to distill beyond just copying: they're much more compute constrained, so open model companies distill, but also do really good architectural work to make their models run faster. There are also technical challenges to distillation when all of the top models have their reasoning traces hidden, so we have to assume these open weight labs also have really great training pipelines as well.
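For what it's worth, the basic mechanism being described can be sketched with a toy soft-label objective (the vocabulary and distributions here are invented for illustration): the student model is trained to match the teacher's output distribution, typically via a KL-divergence term.

```python
import math

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is
    from the teacher distribution p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a tiny 3-token vocabulary.
teacher = [0.7, 0.2, 0.1]   # frontier model's soft labels
student = [0.5, 0.3, 0.2]   # open-weight model before training

loss = kl_divergence(teacher, student)
# Training nudges the student toward the teacher, driving this toward 0.
print(round(loss, 4))  # prints 0.0851
```

With hidden reasoning traces, only the final answers are available to match, which is exactly why the comment argues the open labs must have strong training pipelines of their own.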
WarmWash 19 hours ago [-]
GLM 5.1, widely held up as the model at the heels of (perhaps even surpassing) western models...
Gets 5% on the ARC-AGI2 private set.
Chinese models are suspiciously good at benchmarks.
ctolsen 15 hours ago [-]
I mean, I could say the same about Gemini. 3.1 Pro tops a bunch of benchmarks out there but any practical use I've put it to it's underperforming both other proprietary and open weight models. Benchmarks are suspicious in general.
tyleo 21 hours ago [-]
I suppose most just haven’t seen the Chinese models in practice. I haven’t. I was skeptical of AI coding until using Claude Code in February. I saw and I believed. I’ve only done that with Google, OpenAI, and Anthropic’s models so far.
ctolsen 15 hours ago [-]
Having worked with both proprietary and open weight SOTA models lately, my view is it's definitely not 6 months, it's less -- and shrinking.
zeeed 16 hours ago [-]
To be fair, the other 50% of the story is that we collectively listen.
It’s been a long while since I found a Chinese CEO’s post on HN.
neya 21 hours ago [-]
Two words: Delusion and overconfidence.
"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"
altern8 20 hours ago [-]
That's why I commit basically after every change made by AI
scruple 19 hours ago [-]
> Can someone help me to understand why OpenAI and Anthropic talks as if the future of humanity controlled by them?
He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.
Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.
senordevnyc 17 hours ago [-]
Doesn't he famously have zero equity in OpenAI?
whateveracct 16 hours ago [-]
directly, yes. indirectly ..no
senordevnyc 4 hours ago [-]
Can you elaborate? I’d be curious to know how much of this “indirect equity” he holds, and whether that has any bearing whatsoever on whether Sam is trying to amass as much for himself as he can.
0xbadcafebee 16 hours ago [-]
Well they represent the future of America (since we will soon be banning all the Chinese companies, the way Z.ai was banned, under the perennial authoritarian excuse of "national security"; in 2028, Trump's political machine will seize control of all national AI and block outside ones, and we'll all be trapped inside this machine we created).
Whether fortunately or unfortunately, America still holds a lot of global chips in the grand poker game of humanity. So American companies do indeed still have an outsized influence on humanity's future. That is likely changing, as the American empire continues to crumble and it loses its financial hegemony. But we aren't quite there yet.
nthypes 21 hours ago [-]
I have the same feelings
efficax 20 hours ago [-]
you have to talk that way if you’re going to raise 100 billion in venture capital. it’s the grift
georgemcbay 21 hours ago [-]
When you are raising many billions of dollars to build up your infrastructure, you don't have much choice but to project a belief that the eventual outcome will result in a situation where there will be a return on that money.
That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.
stavros 21 hours ago [-]
The Chinese models are distilled from GPT and Claude, so it's not like China would pull ahead if those companies went away for six months. They really are at the forefront of innovation right now, as much as I hate to think of the consequences of this (a single company owning a superintelligence is basically a nightmare scenario for me).
largbae 21 hours ago [-]
Don't worry, if someone truly achieves superintelligence it won't be controlled by anyone for long.
chihuahua 20 hours ago [-]
There will be a blinding flash which signals the superintelligence singularity. When the smoke clears, you'll see a 50-foot tall Altman/Borg hybrid. He is about to destroy humanity with his death ray. Suddenly, a 50-foot tall Musk/Borg hybrid appears out of nowhere, and stops Altman just in time. Then they work together to destroy all humans.
rl3 20 hours ago [-]
Seems our best hedge in that case is Levi Ackerman.
stavros 21 hours ago [-]
That's my other nightmare scenario :P
georgemcbay 21 hours ago [-]
Just imagine how inexpensive paperclips will become, there is always a silver lining.
We will finally have achieved abundance.
stavros 21 hours ago [-]
Not just abundance, we will have the maximum amount of paperclips possible.
isodev 21 hours ago [-]
I think that’s the realm of conspiracy theories. There are also not only Chinese alternatives- Mistral in Europe is doing pretty good in several categories they’ve opted to focus on.
This kind of reiterates the parent’s question I think - people are maybe too focused on the gpt/claude model and forget about all the other ways of using the tech.
stavros 21 hours ago [-]
Is it? I thought it was pretty well established that open models were distilled from the proprietary, frontier ones. Maybe I'm wrong.
ctolsen 15 hours ago [-]
It's well established that the companies who own the proprietary frontier models complain loudly that open models are distilled from theirs.
There's surely some truth to it (and it's well deserved), but it's happening in every direction.
airstrike 21 hours ago [-]
No, that is not well established at all, and generalizing all open models under that inaccurate umbrella doesn't really help anyone.
electroglyph 16 hours ago [-]
i don't buy this. distilled how? you don't get access to logprobs, and the thinking traces are fake and compressed. it's an expensive way to get potentially substandard training data.
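For what it's worth, distillation doesn't strictly require logprobs: sequence-level ("hard label") distillation trains the student only on the teacher's sampled text, which is exactly what an API exposes. Here is a toy sketch of that idea; the names and canned answers are purely illustrative, not any real lab's pipeline, and a real student would run gradient descent on token-level cross-entropy rather than memorize pairs.

```python
# Minimal sketch of sequence-level ("hard label") distillation:
# the student sees only the teacher's generated text, never its logprobs.
from collections import defaultdict

def teacher(prompt):
    # Stand-in for a proprietary model behind an API: callers get
    # generated text only, no internal probabilities.
    canned = {"capital of france": "paris", "2+2": "4"}
    return canned.get(prompt, "unknown")

def build_distillation_set(prompts):
    # The entire training signal is (prompt, completion) text pairs.
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs):
    # Toy "student" that memorizes the hard labels; a real one would
    # minimize cross-entropy over the teacher's tokens.
    table = defaultdict(str)
    for prompt, completion in pairs:
        table[prompt] = completion
    return table

pairs = build_distillation_set(["capital of france", "2+2"])
student = train_student(pairs)
print(student["capital of france"])  # paris
```

Whether this is cheaper or better than curating data some other way is the commenter's real objection, and that part stands.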
fooker 20 hours ago [-]
Reminds me of the silicon valley episode where every company repeated the phrase “making the world a better place”.
q3k 7 hours ago [-]
Because they're wankers.
gorpy7 19 hours ago [-]
i’ve often thought that less than one second is all you need. One of my fun super powers, when someone asks what i’d like to have, is being 1 second ahead of everyone else - that’s all i need. i honestly don’t know where the distillation conversation is at. is it real, is it ongoing? i think that aspect would be a big one. Your point is valid if it’s valid. i’m not a great global citizen, you know, lots going on out and about.
olliepro 18 hours ago [-]
A lot of distillation happens. E.g. OLMo models have a completely open dataset and they are heavily distilled. It only makes sense to try to absorb behaviors from the best models out there. That said, I think the open weight juggernauts are doing really genuinely great work with RL, training environments, architectural innovations etc.
gorpy7 18 hours ago [-]
Thanks for the response. i had too many noodles tonight and forgot to check my writing. I’m a rare generalist and so it is so very hard to keep up with this without saying “better autocomplete” my one goal is to not get washed out like my parents did in the great username and password wars.
i used to have this theory about knowledge in society/silos and i likened it to condensation on a window. you have all this water so close to each other and yet not touching - then, something happens and a bead runs down the window and it all connects. i guess distillation reminds me of it, but ai overall reminds me of it, because we all know there are silos and complementary info just waiting to run together and make something happen. I am undoubtedly a naive optimist and believe there are good things coming. it’s not a popular opinion and i think that’s mostly because people would rather spend their time guarding than defining their future.
oh baby, there are more noodles in the fridge and to think i almost left them at the restaurant.
tinyhouse 21 hours ago [-]
They own the best models and will probably keep owning the best models for a while. They have much more compute now and more data to keep improving their models on many tasks. Open source won't close the gap in 6 months. They are also trying to block other companies from distilling their models [0].
I need to check benchmarks on the models; I wonder what the benchmarks are saying in terms of how closely models are tracking these frontiers. —on my mobile at the moment
When it comes down to compute power, I assume you are referring to power for training and inference. Then is it more about the training gap getting wider and wider? Is that the assumption? I know there are limited GPUs etc. But I'm having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I'm struggling to see what that means in practice. Is that a military advantage, economical, intelligence? It still doesn't explain it, and whatever the advantage is, aren't we supposed to see that advantage today? If so, where is it? What's the massive advantage of the USA because of OpenAI and Anthropic?
nothinkjustai 20 hours ago [-]
GLM 5.1 already closed the gap on Opus 4.6. Deepseek 4 could surpass it.
MaxPock 18 hours ago [-]
Your (American) future will be controlled by them. Very soon, they will get the government to ban bad Chinese open source models and your choice will only be these good democratic closed source AIs.
kingkawn 20 hours ago [-]
6 months will be an impossible gap once the thing starts closed loop self improvement
georgemcbay 20 hours ago [-]
An impossible gap in the race to... what exactly?
Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is no defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed loop self improvement point at different times...?
jatora 18 hours ago [-]
Uhh, because the first one blasts off first and therefore gets control of key resources and the use of extremely intelligent decision making and predictions before the rest, for months, which is an insane amount of advantage. Not to even mention if the first mover decides to sabotage the rest, which it could EASILY do through a variety of means.
vrganj 16 hours ago [-]
Why would you control key resources just because you have a fancy computer program? You think Iran will be so impressed by your genius they'll open the Strait of Hormuz for you?
Mistletoe 16 hours ago [-]
Thoughts like this are unhinged and detached from reality. All the resources of earth are brought to us by humans going to work every day. AI programs have almost zero connection to the real world.
jatora 6 hours ago [-]
Improved investment. More capital. Improved resource allocation/logistics. Improved robotics and factory efficiency.
Don't sleep on what AGI means for every robot that already exists. It's not hardware holding robotics back from factory work right now, it is only software.
If you are the first to tap key supply chains, and the first to create key supply chains, then you are first in line to finite resources, which would then have less available for those that follow months behind.
> AI programs have almost zero connection to the real world.
Tell that to every logistics program. Even if humans must go to work, efficiency is multiplied by proper logistics, which AGI enables at scale across all domains.
And this is just the low hanging fruit explanation.
georgemcbay 18 hours ago [-]
> Uhh, because the first one blasts off first and therefore gets control of key resources and the use of extremely intelligent decision making and predictions before the rest, for months, which is an insane amount of advantage.
If the rest can similarly "blast-off" X months later than the frontrunner (and I see no reason why they wouldn't as none of these frontier labs have managed to pull ahead and maintain a lead for very long) the first mover is still only X months ahead of the others even if the gap between capabilities is briefly increased by a lot.
jatora 6 hours ago [-]
In chess, if you give up tempo, you are a move or more behind your opponent. 3 tempi = 1 pawn. In GM chess being a pawn down is a serious disadvantage that often results in loss.
If there is an endgoal/endstate, or finite resources being competed for, then a lead can start compounding and extend itself.
AlexandrB 18 hours ago [-]
> My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: “Once you see AGI you can’t unsee it.”
Except nobody has seen AGI. Not even close.
numpad0 11 hours ago [-]
> I was thinking about our upcoming trial with Elon and remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI. I’m proud of that, and the narrow path we navigated then to allow the continued existence of OpenAI, and all the achievements that followed.
... could THIS be the reason why it happened now and how?
mbgerring 19 hours ago [-]
The current crop of tech billionaires openly hate democracy, gleefully proclaim that their products are going to put everyone out of a job, and invest enormous amounts of time and energy into making sure that nobody can do anything to stop the world they’re creating, that nobody asked for or wants.
Actions have consequences. I’m sorry. Read a history book.
jazz9k 22 hours ago [-]
AI is great. But it seems like those that wield its power only do so to create massive unemployment and benefits to the top 1%.
drekipus 10 hours ago [-]
I think the important thing to remember, when they say "all humans deserve life and democratic process," is the question of what they consider sub-human. I.e.: do they believe employees have souls? Or that the masses are cattle? Because it's very easy to have strong conviction about human rights when you get to choose who is a human and who is cattle.
thatoneengineer 20 hours ago [-]
What article is he referencing in the fourth paragraph? The New Yorker one? I got the impression that it was careful in its reporting and by no means one-sided.
Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.
kennywinker 16 hours ago [-]
A molotov thrown at a gate seems more symbolic than directly violent to my eyes.
nothinkjustai 20 hours ago [-]
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time
Reason enough to pause and figure out the best way to continue. A massive societal change that won’t all go well means millions dead and tens of millions more with their lives upended.
throwatdem12311 21 hours ago [-]
I don’t think this will do much to help his image.
They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
DoneWithAllThat 20 hours ago [-]
Who is “they”?
deaux 18 hours ago [-]
It's well-known that right after, the Whatsapp/Signal group of Zuck, Reddit leadership et al. collectively agreed to clamp down hard on positive discourse of it.
ngruhn 15 hours ago [-]
source?
throwatdem12311 20 hours ago [-]
The media apparatus and the Epstein Class behind the scenes that tell them what narrative to push.
DoneWithAllThat 16 hours ago [-]
[flagged]
overfeed 16 hours ago [-]
You're not going to Gaslight us with your burner account Bezos / Elison / Weiss / Zuck / Sinclair Broadcast dude... We see what you are doing.
surgical_fire 12 hours ago [-]
I wonder if the attacker asked ChatGPT how to make a molotov cocktail.
It would be an interesting plot twist.
infamouscow 2 hours ago [-]
Is there a Polymarket for when the first AI data center is burned to the ground?
angoragoats 22 hours ago [-]
To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.
tyre 22 hours ago [-]
The post itself is authentic in that it's a set narrative for this moment. When you see the world as Sam does, this event is a specific opportunity to humanize him. Through that lens, the humility is both performative (it is!) and necessary. To be truthful would be inauthentic.
The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)
People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
angoragoats 7 hours ago [-]
You say “the post itself is authentic” and then go on to give a great explanation of exactly why I think it’s inauthentic. I think we just have different definitions of the word “authentic.”
tyre 6 hours ago [-]
Yes, that’s fair. A friend once asked what the distinction is between authentic and genuine. This is, I think, a great example.
The piece is authentic—as in, “that’s so Sam!”—but not genuine, as in, “I don’t believe he’s reflecting his intentions.”
This could be splitting hairs, I don’t know. The terms are different in my head but rarely do I come across an instance where it’s at all clear how.
richardlblair 21 hours ago [-]
He's attempting to humanize himself in hopes his family home, where his child lives, isn't firebombed. Again.
Very reasonable response when you take a step back.
kennywinker 16 hours ago [-]
*gate. The gate to his family home.
coldtea 22 hours ago [-]
"Our product can destroy humanity, and it's not some crank telling you this, it's the company and CEO making it themselves, but we'll continue to make it anyway, so suck it up" but also "I'm just a humble guy, why can't we all live in peace?"
carefree-bob 22 hours ago [-]
Everything about Altman makes me think "scammer". If he has one super-power, it is to convince people of his own importance.
OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
q3k 7 hours ago [-]
I don't want to firebomb his house, but if I did, I'm pretty sure this shitass response would've only made me want to do it even more.
Leomuck 7 hours ago [-]
Uff. Hard thread to comment on.
I'm fairly radical in my opinion regarding AI, more so AI companies. AI is a fascinating thing, but it's abused by capitalism to be something it is not and shouldn't be, to be sold to people who don't need it and to "revolutionize" a world that didn't ask for it. Most importantly, who (in a democratic sense) elected those tech leaders to make decisions that influence all our lives? Those very tech CEOs are so far away from normal human life, and I find it disgusting.
Still, the way to combat this is not violence. It won't help anything, since there are enough people to fill the roles. More importantly though, as much as I personally hate Sam Altman, he hasn't done anything specifically targeting individuals. You might call him a psychopath, an illusionist or whatever, but he doesn't seem to be trying to make people's lives worse. He might want to make his own life better, and that's egotistical, but you know that's the world we live in. Many people are egotistical. I would see Sam Altman more as a symptom of the general societal developments. If we don't like what's happening, we have to fight what's happening. Trying to kill people (and especially innocent ones!) is so far away from a solution and from the right thing to do. Post shit about him on the internet, hate what he does, but attack his family? Man, I don't think that should be our level of moral compass.
I do very much understand the frustration. But that's not the right path. He might be scum, but he has as much right to live as everybody else. If we don't like what he's doing, we have to fight it - via discourse, collective engagement, whatever.
Edit: I did read that the molotov was thrown at the entrance gate. From what I gather, entrance gates of huge mansions do not actually pose a threat to people. So it could be read as more of a political message than an actual attack on people. I could understand that somehow, given the limited means normal people have to get heard. Still, I don't think that does anything positive.
gleenn 19 hours ago [-]
"AI has to be democratized" - pretty weak coming from ClosedAI
dwb 14 hours ago [-]
“Democratising” - you keep using that word. I do not think it means what you think it means.
drivingmenuts 22 hours ago [-]
None of the things you believe are working out.
1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.
2) AI will be the most powerful tool, etc. - see point 1.
3) It will not all go well, etc. - probably should have thought about that before you released it on the world.
4) AI has to be democratized, etc. - true, won't happen. See point 1.
5) Adaptability is critical, etc. - Yes. Fully agree.
The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.
Same as it ever was, Mr. Altman. Same as it ever was.
sensanaty 10 hours ago [-]
Lmao even in a post about his house getting torched the ghoul can't help but trump up some more hype around "AGI".
tills13 17 hours ago [-]
[flagged]
mafro 16 hours ago [-]
Never let a good crisis go to waste
nelsondev 16 hours ago [-]
Especially one you manufactured yourself
rdl 18 hours ago [-]
My theory is that a lot of the anti-AI sentiment is specifically US geopolitical adversaries (pick one or more: China, Russia, Iran, ...) who want a bad outcome for the US (AI as potential AGI; AI as one of the few successful economic sectors of the US; general desire to cause societal disruption or collapse, with AI as a convenient target). Probably >95% of the really bad stuff (the Micron fab disruption, attacks on AI datacenters, ...) has that as its root cause, possibly executed by useful idiots, people paid by organizations, etc. 5% is normal NIMBY stuff. Approximately measure 0 is Zizian death cultists.
I don't think any of these will be dissuaded by cute family photos. Fortunately the frontier model companies and major infrastructure providers are able to pay for top-tier corporate security (although tech people generally have been unwilling to do this at home for lifestyle reasons), but I'd be afraid for people elsewhere in the supply chain.
(And destructive attack is all on top of the normal corporate espionage, infiltration, subversion, etc.)
MiguelX413 3 hours ago [-]
If a good outcome for the US is OpenAI technology being used by the US military to kill Middle Eastern children, I want a bad outcome for the US too. (Proudly born and raised in California)
MiguelX413 6 hours ago [-]
We need Mario
ahf8Aithaex7Nai 17 hours ago [-]
Why exactly is he showing a picture of a toddler?
nxpnsv 17 hours ago [-]
I think the point is that he doesn’t want the kid to die in a fire.
HNisCIS 17 hours ago [-]
[flagged]
y0eswddl 7 hours ago [-]
the worst of humanity loves to shield themselves behind children
ahf8Aithaex7Nai 6 hours ago [-]
I admit, the opportunity presented itself. But according to my moral compass, killing children is worse than having children present while you’re being shot at or bombed.
hyeonwho5 22 hours ago [-]
Firebombing homes is completely uncivilized, but I'm not going to believe a single public word from Altman about anything. He's a lying sociopath and will say whatever gets himself ahead.
ambicapter 22 hours ago [-]
At this point it's probably far more productive to think of what he's saying as the necessary means he uses to make you believe what he wants you to believe. From that point you can work backwards and try to understand what he wants you to believe.
tkel 11 hours ago [-]
Is it also uncivilized to bomb homes from the sky? As would happen under OpenAI's military contract?
richardlblair 21 hours ago [-]
[flagged]
guzfip 6 hours ago [-]
> lying sociopath
Gee almost like someone you don’t want in your society at all.
matty22 2 hours ago [-]
I mean…FAFO? He’s an egomaniac pushing a technology that is objectively negative for anyone not already a billionaire. I have no issue with more Molotov cocktails being chucked at his house, or OpenAI offices, or data centers around the world.
Sam Altman being removed from the equation would make the world an objectively better place.
partiallypro 17 hours ago [-]
There are people actively insinuating in this thread that Sam should be...killed, and they are still up. Very odd moderation, surely there is a better way to flag these things.
dang 15 hours ago [-]
I'd like to see specific links.
Tyrubias 22 hours ago [-]
[flagged]
noduerme 19 hours ago [-]
>> Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
The problem with this inversion of your first statement (that violence is not the answer), which everyone justifying violence in this thread seems to forget, is that there is always someone who feels this way about anything.
The words and narratives of Martin Luther King, Jr., for example, caused so much fear and uncertainty and anger in some people that they thought their only option was to commit a horrific crime.
Someone responded to you below saying if you feel that peaceful revolution is impossible, then violent revolution is necessary. That person feels that they are on the side of justice. What they forget is that so does everyone else.
The reason revolutions rarely stop where a reasonable person would want them to stop, and instead continue into eating their own and counter-revolutions, is that once you say that it's understandable to take out a proponent of (X narrative), there's no end to the number of people who will justify violence in the same way against any other narrative as well.
We can all well think that Altman is opening Pandora's Box, but that doesn't justify opening it ourselves, or giving a pass to wannabe revolutionaries who would.
In retrospect, too, we can say that the assassination of Hitler had it succeeded would have been a good thing. We can say that the elimination of the ayatollah by the US was a good thing. What we cannot say is that an individual's perception gives them a right to commmit murder.
overfeed 16 hours ago [-]
> What they forget is that so does everyone else.
Despite all the high-minded talk, Americans have always been comfortable with violence, since before it was a country: pick a year and I can find 10+ extrajudicial violent incidents. A surprisingly large percentage of US presidents have had assassination attempts against them.
Seeing no changes after Sandy Hook made it abundantly clear to me that occasional violence - even on innocent child victims - is the price America is willing to pay for other freedoms.
throwaway78297 17 hours ago [-]
[dead]
rustystump 21 hours ago [-]
Interesting that you say "not" vs "never". It seems this kid thought it was a time when violence was needed. The question I always ask in these situations is what the line would be that would justify violence.
Things like healthcare, crime, and existential AI have very grey lines, as it isn't obvious when one needs to flip the table. How broken must a system be?
jmull 19 hours ago [-]
Violence is an extreme failure state.
If your goal is to improve the system then you always want to move away from it.
Probably a reasonable justification would be self-defense, committing violence to stop worse violence. (Preemptive violence is not self-defense.)
MiguelX413 3 hours ago [-]
So we should wait to have our lives ruined before acting in self-defense? A person should wait to be shot at before shooting back? People can tell when they're threatened
rustystump 17 hours ago [-]
But that is the kicker. As the sister comment said it matters a great deal what others do.
At some point a broken system enacts soft violence on people. So it isn't surprising people act out when they think survival is at stake. With healthcare, it really can be. But where is the line? When someone you know dies? 10 people?
It is messy.
jmull 5 hours ago [-]
It's not surprising that violence begets violence, in an escalating cycle. But it doesn't end with a better system.
If your goal is a better system, you'll be looking hard for ways to move away from violence, not justifications to escalate it.
jakeydus 16 hours ago [-]
RadioLab did a phenomenal series on illegal immigration years ago. One thing that stuck out to me and has stayed with me ever since was an interview they did with a woman who had been deported multiple times, risked her life multiple times, denied a visa and asylum multiple times. They asked her why she kept trying to get across the border and she said that the alternative was death for her and her family.
Whether or not she was being honest I don't know. But it did make me realize that the broken system has created an all-or-nothing choice for so many people. No punishment or policy could ever outweigh the alternative for them, so you'll never be able to stop immigration, illegal or not.
I'm sure I'm not doing the argument justice, but your comment reminded me of it.
Aeolun 20 hours ago [-]
> what the line would be that would justify violence
It doesn’t matter where we think the line should be drawn, only where those much worse off draw it.
yfw 19 hours ago [-]
Answer to what? Do you know the question?
blast 15 hours ago [-]
"Violence like this is not the answer. However,"
therobots927 18 hours ago [-]
It’s about as thinly veiled as a fishnet.
AmericanOP 19 hours ago [-]
It is not complicated.
Because of the valuations of Open AI and Anthropic, Sam Altman may be credited with one of the all-time most damaging brand decisions when he got in bed with Trump’s department of war crimes.
This should have been SO OBVIOUS. Attempts to paper over the damage with a $100 billion round will crumble after the IPO. Poor decisions generate poor options, and the whole industry smells his desperation.
Decisions at the highest level are indistinguishable from responsibility. All Sam accomplished was showing the world he is structurally unfit for moral leadership.
conartist6 19 hours ago [-]
Yes. Yes.
kakacik 19 hours ago [-]
A sociopath who rides a high ego wave and drinks his own kool aid, acting highly amorally and then complaining that his actions have some (benign) consequences.
Why do we care what he thinks? Let's discuss his work if we have to, not emotional pondering and playing the victim.
zug_zug 19 hours ago [-]
> Violence like this is not the answer.
I know people pretty reflexively downvote questioning this, but I question this. I think some people are afraid that even asking this moral question is somehow inciting violence.
I think it's quite believable that the possibility of force is actually essential to keeping institutions in-line. Certainly a lot of civil rights progress was a lot less peaceful than I was taught in school.
wat10000 19 hours ago [-]
Violence is not the answer if and only if there are non-violent ways to achieve necessary goals.
We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes and break down those systems. They hope that it means they'll always get what they want, but what it actually does is make it so that violence is the only way for others to get what they want.
Like organized labor. We seem to be in a cycle where strong labor organization is seen as inefficient or harmful to business, and it's being suppressed. The people suppressing it seem to think that the end state will be low wages and desperate workers. They've forgotten that collective bargaining didn't spring up from nothing, it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.
All that Civil Rights violence you mention was because those in power did not provide any non-violent way to achieve it. Suppressing votes and legalizing oppression only works up to a point. Eventually people will take by force what they've been denied by law.
Or as JFK said it better than I can: "Those who make peaceful revolution impossible will make violent revolution inevitable."
The corollary: when peaceful revolution has been made impossible, violent revolution is the answer.
tw04 19 hours ago [-]
> it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.
And those bosses are hoping a combination of drones and altman’s AI will keep them safe the next time. Meanwhile we’ve got Altman selling his AI to the military with essentially no restrictions telling us we just need to patiently wait for all the good things it’s going to do for the common man.
Just keep grinding and waiting, he can’t tell you what the benefit will be for you but he promises it will be amazing!
throwthrowuknow 19 hours ago [-]
> We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes
An excellent illustration of the blind spot
throwaway78297 17 hours ago [-]
[dead]
bb88 19 hours ago [-]
That's certainly the implied threat when people show up with AR-15s in the Idaho statehouse. Yes, it's legal. But what is the point? This is ruby-red Idaho.
I've always said when peaceniks start to carry weapons, it's time to worry. Alex Pretti didn't pull his gun, but still got shot. At what point will some escalation tactic end up in a gun fight between the local police and ICE?
TwoNineFive 17 hours ago [-]
[flagged]
daseiner1 20 hours ago [-]
[flagged]
jbverschoor 19 hours ago [-]
Words and writings (law) only have power because of violence (the monopoly of it)
So yes, in essence, it seems like violence is the answer.
When (perceived) justice is gone, the monopoly crumbles because the system is not working.
And this perception can have many causes
qwertytyyuu 19 hours ago [-]
If it wasn’t a good or at least workable answer, the state and corporations would be using it so much
noduerme 19 hours ago [-]
If your only measure is whether something is effective, then state and corporate violence will always be a lot more effective than individual acts of violence. You could even say that individual violence helps the state to commit violence, by providing justification and by removing the moral imperative to avoid violence.
janalsncm 19 hours ago [-]
I don’t like expanding the definitions of things like this. People have had a commonplace definition of violence for a long time. One that encompasses throwing Molotov cocktails and doesn’t include more intangible things like poverty or inequality or racism.
Academia doesn’t get to just assert that their broader definition is the real one.
pibaker 17 hours ago [-]
But every "intangible" thing you mentioned was in fact maintained by very tangible violence that those in power deem legitimate. What happens if a poor man decides to squat in a rich man's vacation home? What happens if a black woman living under segregation refuses to give up her bus seat for a white person? In both cases the police will be called, and I'm damn sure that the cops don't shy away from using violence if it gets the thing done.
deaux 19 hours ago [-]
Think you missed an "n't" in "wouldn't" there.
jstummbillig 20 hours ago [-]
> Violence like this is not the answer. However
Sigh
joecasson 20 hours ago [-]
That’s a very dismissive point of view toward the seriousness of the situation. He had a Molotov cocktail thrown at his home in the immediate aftermath of an article that painted him in a negative light. The two may not be connected, but they seem to be.
sofixa 19 hours ago [-]
There have been articles depicting him in a negative light, for good reasons, for years.
Hasslequest 19 hours ago [-]
[dead]
riazrizvi 20 hours ago [-]
Altman didn't create AI. That disruption is already coming no matter what. He's a fine enough steward of the tech. And what's this garbage about selling to the military? You pay taxes? You fund the military. Without security you can't protect your nation or your allies, and enemy nations would do as they please. Yet another citizen who benefits from a system while trying to attack it.
angoragoats 19 hours ago [-]
> Altman didn't create AI.
No one said he did.
> That disruption is already coming no matter what.
[citation needed]. Depending on what you mean by "that disruption," I might even be willing to bet against it coming at all.
> He's a fine enough steward of the tech.
He's a manipulative con-man who is mediocre at everything except convincing investors to give him money. If the tech is truly as revolutionary as it's purported to be, he absolutely should not be a "steward of the tech."
sofixa 19 hours ago [-]
> And what's this garbage about selling to the military? You pay taxes? You fund the military. Without security you can't protect your nation or your allies, and enemy nations would do as they please.
There is security, and there is bombing schools. Guess which one Altman is associating himself, and the software he sells, with?
SecretDreams 19 hours ago [-]
> He's a fine enough steward of the tech.
Are you Sam Altman?
therobots927 18 hours ago [-]
One of his ~10 burners
unethical_ban 19 hours ago [-]
He says power can't be too concentrated - but even n-2 generation models are not open.
He says "look at me I love my family" - so do the millions of people who think his company may destroy the economy and help corporations and the trillionaires put a boot to our children's necks.
3:45am in the morning - no dip, that's what AM is.
---
Someone here asked "How do we get to post scarcity from here?" and someone else said "no one knows".
The AI barons are loading up their bank accounts and political capital, driving us off a cliff and promising we'll learn to fly by the time we get there. But they're going to tuck and roll out of the driver's seat.
Sam, why do you expect us to believe anything you say when you have done nothing to lead the discussion about universal rights for citizens in a post scarcity society?
22 hours ago [-]
cedws 14 hours ago [-]
I know it’s not fair to attribute someone else’s actions to Altman, but his words about upholding democracy feel a bit hollow given his relationship with Brockman. Brockman gave a $25 million donation to a Trump super PAC. As a reminder, Trump detests democratic protest and tried to overturn an election result. He also frequently floats the idea of a third term. That is not upholding democracy, and Altman should cut ties if that’s truly his objective.
dmitrygr 20 hours ago [-]
> There was an incendiary article about me a few days ago [...]
That is a lot of words, none of which state or claim the article was in any way inaccurate. Curious, that
dakolli 22 hours ago [-]
Sam had this pulled off the front page, because the whole charade obviously isn't getting him the positive attention he was looking for.
minimaxir 22 hours ago [-]
It most likely tripped the flame war detector heuristic (comments > points), and there is definitely a flame war here.
EDIT: Looks like a mod rescued it (surprisingly) and it is now back to #2.
Miner49er 18 hours ago [-]
This is a predictable outcome of what people like Altman are doing, and probably will happen more and more.
Altman and co. are massively changing society, putting people out of work, etc. It is systemic violence on a massive scale. Systemic violence is "acceptable" violence, but it usually leads to a sudden outburst of plain old subjective violence like this.
mattsoldo 22 hours ago [-]
It's never OK to physically attack someone like this. Full stop.
Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
smallmancontrov 22 hours ago [-]
If only that sentiment was reciprocal!
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
roysting 20 hours ago [-]
My assumption, based on many factors, is that this is precisely why carpet surveillance systems like Flock are being rolled out in preparation.
There are people in control who don’t make 1, 5, or 10 year plans; they make 20, 50, 100, and 500 year plans. They know human nature quite well, which allows them, if not to predict, then at least to have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.
jhartwig80s 20 hours ago [-]
The Flock systems are being installed by cities, not the feds. You make it seem like someone has some master plan. That does not make Flock any less dangerous, but it's not as organized as you make it seem.
taurath 19 hours ago [-]
It doesn’t need coordination to be organized and have the same incentives. Just like the wave of consolidation in media. Dario and Sam don’t need to talk to know what is in both their interest.
The concentration of wealth is at an all-time peak. The top 1% own more stocks than the other 99%. Nobody thinks about that hard enough. The callousness with which people’s livelihoods, dignity, and safety are threatened is tremendous.
HNisCIS 17 hours ago [-]
Listen to the Flock CEO talk and then tell me he isn't trying to build a counter-revolutionary dragnet. Just because cities are doing it doesn't mean it's not deliberate; that's just a step in the plan. Not everything the ultra-wealthy do is a single step: they're lobbying and schmoozing their way to their goals in every way possible.
Ms-J 21 hours ago [-]
[flagged]
nielsbot 20 hours ago [-]
If you live under the tyranny of capitalism, sometimes the choice isn’t entirely yours to make
AndrewKemendo 20 hours ago [-]
Unless you’re physically disabled, the choice is always yours; it’s a question of commitment:
-You vote
-You go to a protest
-You join a union
-You join a strike
-You risk your livelihood through speech
-You join a direct action
-You risk your life
Most people never get past commitment level 0, which is doing nothing, not even voting.
Then they throw their hands up that nothing changes, claiming they have no ability to do anything.
There are thousands of examples to the contrary, and it boggles my mind how people can think they aren’t capable.
jatora 18 hours ago [-]
[flagged]
topato 21 hours ago [-]
[flagged]
yfw 19 hours ago [-]
Exactly this
tailscaler2026 22 hours ago [-]
Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.
pesus 22 hours ago [-]
I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.
IMTDb 21 hours ago [-]
> why one person potentially being responsible for hundreds or thousands of deaths is acceptable
I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place; the scientist who found a breakthrough (who is it?); the president of the United States who is greenlighting the strikes; the general who is choosing the target (based on AI suggestions); the missile designer; the manufacturer; or the pilot who flew the plane?
I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" always irks me. Going as far as saying there is just one is even more ludicrous.
roysting 20 hours ago [-]
I’m fine with holding them all accountable to varying degrees. For example, yes, ultimately the president is responsible, but so is the person who dropped bombs instead of refusing an illegal order; just like the street dealer, gang banger, trafficker, and cartel boss are all guilty of all of their various crimes.
What do you find difficult to understand about that?
maest 21 hours ago [-]
Accountability sinks are good value and wealthy people always make sure they have enough of them
idiotsecant 21 hours ago [-]
Ah the old 'everyone is responsible so nobody is responsible' canard.
I will give you a helpful rule of thumb: when in doubt the guy with a bank account larger than the total lifetime income of hundreds of thousands of people is probably the one to blame.
IMTDb 10 hours ago [-]
Ah, the old ‘in case of doubt just go after the rich guy’. That makes stuff simple, doesn’t it?
You can establish responsibilities just by counting the number of zeroes in a bank account.
On top of this, it works for everything: the same dude is responsible for wars, the climate, world hunger, child cancer and your bathroom mirror being fogged this morning.
21 hours ago [-]
GMoromisato 21 hours ago [-]
The entire purpose of government is to have a monopoly on violence. Democracies give their government the power to decide when and against whom to deploy violence.
There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
pesus 21 hours ago [-]
I'm not sure the next batch of schoolgirls getting bombed will particularly care whether the choice was made "democratically" or not.
I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.
GMoromisato 16 hours ago [-]
Agreed--but so what? If you believe in democracy, you work within democratic means to enact your views. If you don't believe in democracy and use violence outside the system, then you are an enemy of democracy.
mvc 11 hours ago [-]
Did the suffragettes not believe in democracy?
GMoromisato 4 hours ago [-]
I don't know enough about the suffragettes, but didn't they get new laws passed to gain the right to vote? That sounds like working within democratic means.
A better example is the Civil War. The southern states refused to accept the free and fair election of Lincoln and decided to secede, which was not allowed by the Constitution.
Are you arguing that the Confederates were right to violate the law just because they believed they were right?
pesus 5 minutes ago [-]
Have you not heard of the labor movement? Or abolitionists? Or the founding of this country? Or people fighting against Nazi control of their country?
All of those worked outside "legal" means. The law is quite often irrelevant to what's right or moral, and dying on the hill of breaking the law ensures no change can ever occur when a system or person in power inevitably wrongs people.
15 hours ago [-]
lostlogin 21 hours ago [-]
> There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
Is this what we just saw with America attacking Iran?
GMoromisato 16 hours ago [-]
Yes. Whether you agree with it or not, the attack on Iran was legally ordered within the bounds of American law.
It may have been a stupid order, but it was not unconstitutional.
shakna 21 hours ago [-]
> The entire purpose of government is to have a monopoly on violence.
... Isn't that rather against the spirit of the US' constitution? I can see it being a thought with other nations, but not this particular one.
> A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
Which kinda follows the spirit of English Common Law:
> The ... last auxiliary right of the subject ... is that of having arms for their defence, suitable to their condition and degree, and such as are allowed by law. Which is ... declared by ... statute, and is indeed a public allowance, under due restrictions, of the natural right of resistance and self-preservation, when the sanctions of society and laws are found insufficient to restrain the violence of oppression. - Sir William Blackstone
A "monopoly on violence" is exactly the thing our laws are supposed to protect us against. Because if a state has that, then they have a monopoly against all rights, because they alone can employ violence to curb those who do not subscribe to the state's ideology.
I'm pretty much a pacifist. I _like_ Australia's gun laws. But, a government's purpose is to protect their people. They are to be representative - or to be replaced. If they leave no other choice for that, then violence is the only answer left.
tines 19 hours ago [-]
The above posts forgot the word "legitimate" before "monopoly": a state is defined as the entity that has the legitimate monopoly on violence within a defined geographic area. A state can cease to have the legitimate monopoly before they cease to have the monopoly.
GMoromisato 16 hours ago [-]
I agree with this. I should have said that.
GMoromisato 16 hours ago [-]
I don't see the contradiction. What we mean by a "monopoly on violence" is that the government decides who and under what conditions gets to commit violence. The government orders soldiers to kill enemies. Law enforcement officers are allowed to use deadly force under certain conditions. And in the US, citizens are allowed to use deadly force under certain conditions.
The key issue is that government (via courts) is the one that decides whether violence is justified or not.
You're right that a government that no longer represents its people must be replaced. But that's not the case in America. The conflict in America is between two different groups of people with different ideas about what the right thing to do is. So far, these two groups have used democracy to get their way. As long as that continues, there is no problem.
But when people use violence outside government law, just because they don't agree with the decisions of the government, then that's not justice--that's just terrorism.
shakna 13 hours ago [-]
It's the source of the right. It is not the government that permits citizens to use deadly force in certain conditions. It's an "inalienable right": something the government is to ensure it doesn't infringe on, rather than regulate.
It is the right of a person, rather than the government, under the way the US constitution is structured.
GMoromisato 4 hours ago [-]
I agree with you--the point of 2A is to constrain the government so it doesn't infringe on that inalienable right.
I should have been clearer that I don't mean only the government is allowed to use violence legitimately. Sometimes citizens can use violence legitimately.
But that doesn't mean an individual gets the final word on whether something is self-defense vs. murder. If I kill someone in an argument, I can't just say "it's my inalienable right to wield violence, so buzz off!". I will be put on trial and the justice system will decide whether I'm a murderer or not.
That's what I mean by "monopoly". The government+constitution+laws are the sole deciders on when it is appropriate to use violence, not individuals who think they are dispensing justice. The latter are either vigilantes or terrorists.
mvc 11 hours ago [-]
> So far, these two groups have used democracy to get their way.
Oh is that what January 6th was?
GMoromisato 4 hours ago [-]
I meant only that decisions (such as who gets to be president) have been made within the constitutional system. Violence has not changed any outcomes.
But I will concede that some people on Jan 6th were attempting to change a result by violence. I support sending those people to jail.
slopinthebag 20 hours ago [-]
This is a distinction without meaning. It makes no moral difference who dispenses justice, if said justice is justified.
GMoromisato 16 hours ago [-]
Except in our system of government, it is the government (via the courts) that decides what is "justified". It is literally called the "Judicial System".
You can't just decide on your own that violence is justified.
AlexCoventry 21 hours ago [-]
Yeah, it's kind of terrifying, how this incident seems to have faded from people's memories.
seizethecheese 21 hours ago [-]
Military power and attacks on private individuals are different things. It's perfectly consistent to be against attacks on private individuals while being in favor of building military weapons.
deaux 19 hours ago [-]
The bombed schoolgirls were "private individuals" in any reasonable meaning of "private individual".
seizethecheese 18 hours ago [-]
Maybe I shouldn’t take the bait here…
Yes, military power is evil, but it’s a necessary evil. A society that decides to stop making weapons is going to be subjugated by one that continues to make them. Full stop.
zarzavat 18 hours ago [-]
The US Department of Peace has also been outright murdering civilians aboard vessels in international waters, including double tap strikes intended to murder the wounded.
It's not the bait on HN that you need to be worried about but the propaganda from your own government.
seizethecheese 18 hours ago [-]
My comment here is about the ethics of military weapons vs assassinations of private individuals. I have no idea what you’re talking about.
deaux 18 hours ago [-]
Nothing about the US Department of War's actions over the last 2 years, whose contracts Sam eagerly pursued to weaponize AI, has had to do with "preventing being subjugated". What they did do was bomb 150 or so private individual school girls.
You're saying the above is bait, when your own comment is nothing but it.
hax0ron3 3 hours ago [-]
>Nothing about the US Department of War's actions over the last 2 years
Questionable and violent US foreign policy is much much older than the current Trump administration.
seizethecheese 18 hours ago [-]
Pasting the same reply as you sibling comment:
My comment here is about the ethics of military weapons vs assassinations of private individuals. I have no idea what you’re talking about.
deaux 15 hours ago [-]
Then your comment is completely irrelevant to the conversation as a reply to
> Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.".
Your comment can't both A. be relevant as a reply to the above B. yet have "no idea what I'm talking about", as if it is not relevant. Either both of us are saying something relevant, or neither of us are. You can't have your cake and eat it too.
seizethecheese 6 hours ago [-]
Look at the parent of the comment you are quoting.
selfsigned 10 hours ago [-]
In what world is this kind of thing a necessary evil? https://en.wikipedia.org/wiki/1954_Guatemalan_coup_d%27état I'm not convinced that any of the large-scale interventionist conflicts the US got involved in after WWII had a positive outcome.
Senseless foreign interference and cruelty didn't just come about with the Trump admin.
So should we really applaud selling shiny new toys that will enable more baseless cruelty? Probably not.
Just like we shouldn't support political terrorism.
slopinthebag 16 hours ago [-]
No such thing as a "necessary evil", a pure oxymoron.
Oh yes, sure! Defending a guy who invests in war technology is perfectly fine under the guidelines /s
20 hours ago [-]
stickfigure 20 hours ago [-]
There's thirty-some-odd million people in Ukraine who very much would like to get AI weapons before the Russians do. They're coming whether you want them or not.
Waterluvian 22 hours ago [-]
The thing about the rich is that they have access to sufficient levels of abstraction that they can commit terrible, disproportionate violence without it looking that way. And then fools who crave the simplistic safe comfort of moral absolutes come to their aid.
Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
lostlogin 21 hours ago [-]
> Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
Really? I don’t know how many were in his house but at most it’s attempted murder of a few versus killing 150.
I see a difference.
US law sees a difference too. The person that threw the firebomb will face the full weight of the law if they are caught, and spend an awfully long time in prison.
Those that killed the schoolgirls will never face punishment.
chipsrafferty 19 hours ago [-]
And it's 150 innocent people vs. a few very guilty people.
rootusrootus 21 hours ago [-]
If you want to draw that distinction, then don't you need to account for intent? I don't think the USG intended to bomb a school. The guy throwing a Molotov cocktail has even less claim to it being an accident.
lostlogin 21 hours ago [-]
It would be manslaughter where I am, 150 counts.
But the idea that the US cares is laughable.
Waterluvian 20 hours ago [-]
The people barely care. The government certainly doesn’t.
gnuvince 22 hours ago [-]
> Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
We should call it what it really is: the oligopolization of intellectual work. The capital barrier to enter this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing head first into this one-way door.
truncate 21 hours ago [-]
>> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model
The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hyper-growth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market and everyone is doing it, but these are just hollow words. Industries that care about safety tend to slow down.
intrasight 21 hours ago [-]
I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.
Without missing a beat, she said, "If humans' loss was that complete, there would be no historians."
I responded that I never said they were human historians.
deaux 19 hours ago [-]
> I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.
Yes, because no one listened to me. It was early-mid 2024, and here as well as on other places, people kept saying "oh well the cat's out of the bag now, nothing can be done, it can't be stopped". I pointed out that only 4 or so planes being made to collide with TSMC, NVIDIA and ASML would be enough to give at least a decade of breathing room while we try to figure out how to keep this technology safe. I'm almost certain there were people who read it on here as well as elsewhere who could have made it happen.
_Now_ it is indeed too late.
slopinthebag 16 hours ago [-]
Don't worry, we'll vote our way out of this! :P:P:P
zinodaur 22 hours ago [-]
Is it okay to profit off of a machine that kills innocent people? Would it be immoral to attack the builder of that machine, if it stopped the operation of the machine?
imiric 21 hours ago [-]
I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and building better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
lostlogin 21 hours ago [-]
> Would it be moral to attack knife manufacturers?
Apply this to guns.
Then look how this works in the US. You could, but then a law was made to protect gun manufacturers, The Protection of Lawful Commerce in Arms Act.
AI will get this treatment I’m sure.
Barrin92 21 hours ago [-]
>Would it be moral to attack knife manufacturers?
if they're selling the knives knowingly to a knife-murderer, it might be worth discussing.
Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products; he's the guy who makes the decision to supply this tech directly to the US government, which is on the record about using it for military operations. And you're right on the last point. Sure, the 20-year-old guy who threw a Molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.
But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, not so sure about the ethics of that one.
zinodaur 18 hours ago [-]
Sibling comment already said it, but yes, I was specifically alluding to Altman's decision to allow the US government to use their AI to choose bombing targets without a human in the loop. Perhaps this is why the US government double-tapped[1] a school, killing 160 girls, all younger than 12, when the school was clearly marked on Google Maps.
I also vigorously dislike the industry, but your stance 'I'm on the skeptic side of "AI"' is something you need to address - saying this in the friendliest way possible, you are wrong.
AI needs to be opposed, because the billionaires are going to use it to turn the world into shit, but if the best the AI opposition can muster is "AI isn't useful", we are fucked. It's extremely powerful and can do bizarro things when you rig it up with tools; no one is paying attention to the kinds of things we need to prevent companies like Google from doing with it.
[1] double-tapped: a phrase referring to the practice of firing a second missile after the first to kill any rescuers or surviving schoolgirls
imiric 15 hours ago [-]
Regardless, "AI" is not doing the killing in that case. Rather, humans have deployed it to control weapons that kill people. There are several layers of indirection there before you can claim "AI kills people". This is the same indirection as when a human chooses to press a button that fires a missile, or stab someone, just with more steps involved.
So you can also be outraged at weapon manufacturers, which is one step closer. Or, you can skip the indirection, and be outraged specifically at people in charge of using this technology, which is my point.
I'm disgusted by this industry as much as you are, believe me. But blaming the companies that produce "AI" for people dying is misplaced. They're certainly part of the problem, but not the root cause.
> AI needs to be opposed
AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.
But even if it did, it's silly to claim that any technology needs to be opposed. This one is potentially more problematic than others because it raises some difficult existential and social questions which we might not be ready to answer, but it's still ultimately on us to control how it's used. We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button, so a probabilistic pattern generator seems trivial in comparison. It's going to be bumpy, but I think we'll manage.
zinodaur 9 hours ago [-]
> AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.
They've claimed the term, this is not a useful objection to make at this point. And everyone was fine with calling our shitty little computer vision handwriting parsers "AI algorithms" before LLMs.
> We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button
Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on? "Thanks guys, our democracies are so stable these will literally never be used for a nuclear holocaust, and they might have useful mining applications!"
Can you not think of any exceptionally nasty things the US government could do with "machines that act as if they can think for most practical purposes"? Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the people's interests?
imiric 5 hours ago [-]
> They've claimed the term, this is not a useful objection to make at this point.
Sure it is. Someone saying that the sky is purple will never be true, no matter how many times they say it. Pushing against this is how we avoid the fabricated mystique around this tech, precisely so that people don't see it as a threat.
> Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on?
You're twisting my words. I never said that I support what "AI" companies are doing. I said that your claim that "AI is killing people" is hyperbolic, and that you're barking up the wrong tree.
Besides, the scientific research invested in nuclear technology has produced far more benefits for humanity than drawbacks. It's very likely that the conversation we're having now wouldn't have been possible without this research. There's an argument to be made that even nuclear weapons and their deployment in WW2 had a more positive outcome than any alternative would've had.
Similarly, the same can be said about the current generation of "AI". For all its potential dangers and harms, whether direct or indirect, it has and will continue to have many positive use cases, some of which we haven't discovered yet. Ignoring this and opposing the tech altogether is throwing out the baby with the bathwater.
The solution isn't banning the tech. It's strongly regulating it, as we've done with many others. Unfortunately, governments move at glacial speeds, and some are deeply entrenched with corporations, so there's conflicts of interest galore, but that's still the most sensible approach to manage it safely.
> Can you not think of any exceptionally nasty things the US government could do with the "machines that act as if they can think for most practical purposes"?
Sure I can. Any government, organization, or individual can abuse any technology. But you haven't made the case why opposing technology itself would prevent that, versus holding those individuals accountable directly. Until then your comments come across as misplaced fear mongering.
> Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the peoples interest?
So what do you suggest? We stop all tech R&D because governments can't be trusted? That's pure fantasy. No single government would even agree to it since technology is universal. If the US doesn't invent it, another country will. Advancing within this messy geopolitical framework is the only path forward, for better or worse.
bartread 19 hours ago [-]
Oh, come on, be serious: if that’s the argument then why start with Sam Altman?
If you want to hold the leader of a contemporary tech giant responsible for causing excess deaths then Meta and Zuckerberg would be a lot higher up the list - maybe even at the very top.
Now I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
But the point is this: whoever firebombed Sam Altman’s house didn’t do it out of a principled stance - in fact I suspect they barely expended any thought on the matter - because if they were really acting out of principle they’d have chosen a different target, they’d have done some research into who is trying to expose and bring down that target, and they’d have figured out how they could help rather than just randomly engage in violence. Whereas this was just a dangerous stunt.
zinodaur 17 hours ago [-]
> why start with Sam Altman?
Well Zuck has that big scary hedge, and I’m sure people have been going after him for ages.
> I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
Great! Is the plan to wait until after the billionaires have their AI controlled military drone swarms to have this revolution? Because they already control your government - I don’t think you will achieve anything like this through legal means
bartread 13 hours ago [-]
> Because they already control your government
Whose government?
TurdF3rguson 17 hours ago [-]
This has already been a movie called Terminator 2: Judgment Day. Sarah Connor is out to kill Dyson to stop Skynet from becoming a thing and the audience watched it thinking she was probably justified but was uncomfortable anyway. Spoiler alert: she ended up shooting but not killing him.
My point is, we've seen this movie and killing Sam Altman is uncomfortable but justified.
minimaxir 21 hours ago [-]
I didn't think Hacker News needed an explicit "calls for violence are bad" guideline but the comments here have shown otherwise.
hax0ron3 3 hours ago [-]
It would be extremely difficult to have a political discussion without condoning violence. Deciding what sorts of violence are ok is an inherent part of politics. In practice, there's no way to ban calls for violence without banning the discussion of wide swaths of political topics.
deaux 19 hours ago [-]
If you can't think of a single occurrence in history that directly disproves your proposed guideline, it's time to drop whatever you're doing and study history.
If you can think of one, then you shouldn't be proposing introduction of guidelines that are blatantly false. Or would you like a "1+1 is not 2" guideline to accompany it?
lovich 20 hours ago [-]
If you grind people into a paste long enough, eventually some of them may object in one manner or another.
twoodfin 20 hours ago [-]
I’m sorry, which specific people were “ground into paste” and when?
lovich 20 hours ago [-]
Everyone too poor to thrive.
Teever 21 hours ago [-]
Do you feel the same way about comments that support the US military action in Iran? Why or why not?
johnisgood 21 hours ago [-]
It is unnecessary, and it was an obvious offense, not defense. Of course it is "bad". We (Trump) need(s) to stop creating wars and fucking up the economy, while killing others. It is bad all the way down.
chipsrafferty 19 hours ago [-]
Which one is more bad?
Trump bombing hundreds of people or someone throwing a bomb at Trump because he keeps bombing hundreds of people?
vizzier 13 hours ago [-]
People think the trolley problem is easy.
sneak 20 hours ago [-]
I agree with the idea that calls for violence are bad; however most people in the world are more than happy to support both violence and calls for same against people and organizations they believe to be sufficiently significant threats.
Are calls for violence against Hitler during WW2 bad? How about the Japanese imperial navy?
How about calls for violence against Putin during his war of aggression?
This isn’t rhetoric; I’m just pointing out that it isn’t as black and white as people seem to make it. (It is black and white for me, as I’m with Asimov on the matter, but it isn’t for most humans.)
stavros 21 hours ago [-]
Are calls for violence bad when you're calling for throwing a molotov cocktail at a child? At an adult? At a serial killer? At someone who's about to shoot you unprovoked? At someone who murdered your family? At someone who's about to?
If you said "yes" to all of the above, I'd love to know your reasoning.
empthought 20 hours ago [-]
Yes.
If you want a molotov cocktail thrown so badly, throw it yourself. Don't put it on other people to do it for you.
stavros 20 hours ago [-]
Are the two choices "accept that violence is unconditionally bad" and "throw a molotov cocktail at Sam Altman's house"? Because that dichotomy seems a bit... false?
empthought 20 hours ago [-]
Your question was about calling for violence.
lostlogin 21 hours ago [-]
The general tone here is that freedom of speech is absolute and nothing should curtail that.
Not my personal view.
what 20 hours ago [-]
I’d like to know your reasoning for answering “no” to all of the above.
stavros 20 hours ago [-]
I guess we'll just have to find someone who answers no to all of that and ask them!
what 20 hours ago [-]
I think my point was obvious. What is your justification for answering no to any of them?
stavros 20 hours ago [-]
Alright, I'll explain. I don't think violence is bad against someone who's about to kill my family, because:
* I care about my family more than I care about a stranger.
* I care about people who don't kill people unprovoked more than I care about people who kill people unprovoked.
* My family are more than one person, versus the one killer.
That's why I answer no to that one.
what 19 hours ago [-]
Sure, I care about certain people more than others and I’d be willing to use violence to defend myself or my family. But that’s not the same as cheering on or advocating for an attack on someone else that may or may not have done something to harm someone totally unrelated to you.
stavros 19 hours ago [-]
It gets much more complicated when the person being harmed is someone who made and sold AI targeting systems that might be used against my country.
burnte 22 hours ago [-]
Agreed. Sam's full of crap and the way we tackle that is with conversations, not violence. He deserves to grow old like anyone else, violence isn't an answer.
AlexCoventry 21 hours ago [-]
I don't condone violence, but the contract he's signed with the US military is a credible threat to everyone in the US. OpenAI will now certainly be called on to assist in domestic mass surveillance, under threat of the kind of severe penalties Anthropic has faced. So why did he agree to that contract, unless he's willing to provide that assistance? So it's gone well beyond conversation, though not to a point where violence is appropriate. Boycotts and hostility are definitely appropriate at this point IMO, though.
pesus 22 hours ago [-]
He isn't going to suddenly grow a conscience from a riveting, intellectually stimulating conversation.
teachrdan 22 hours ago [-]
> the way we tackle that is with conversations, not violence
I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.
A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
burnte 4 hours ago [-]
Then we move to regulation and law, that's still talking. Bombing his house isn't cool.
m4x 22 hours ago [-]
There's still a meaningful difference between violence wielded by a single individual who feels angry or unheard, and violence wielded by a large representative group who has invested genuine effort in conversation before collectively deciding violence is required.
happytoexplain 22 hours ago [-]
They aren't mutually exclusive. Often the former and latter, in that order, are two parts of the same historical event.
m4x 22 hours ago [-]
Yes, fully agree. Nonetheless, I suspect violence can be used more effectively and more minimally if it's considered and performed by a group rather than haphazardly by individuals. I recognise that's a very simplistic view.
llbbdd 20 hours ago [-]
I think it's as realistic as it is simplistic. The State gets a monopoly on violence so that you can sue someone who wrongs you instead of killing them. When conversation and cash fail, violence is all that's left, and we concentrate that power in groups of people tasked with deciding when the alternatives have failed. It doesn't always work but it's a better alternative than the individualized bloodlust disappointingly endorsed elsewhere in this thread.
Arodex 22 hours ago [-]
Everyone else deserves to grow old, too...
tyre 22 hours ago [-]
It's pretty amazing to observe people experience the past ten years in American history and continue to think that we can out-talk the bad people in the world.
Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)
When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
burnte 4 hours ago [-]
I don't think street violence solves anything. I don't think Michelle was right, sometimes you have to fight fire with fire, but you don't fight words with literal firebombs.
lostlogin 21 hours ago [-]
This may come right when Americans see themselves backsliding relative to other power blocks, and allies turning away. It’s started.
But it seems a distant hope at best.
snoman 19 hours ago [-]
That sentiment always comes from people who are better at fighting with communication.
hungryhobbit 22 hours ago [-]
I categorically reject that assertion. Two simple examples: 1) when you see someone assaulting someone else, it's absolutely ok to attack them, and 2) the American revolution!
It's like that old joke:
A man offers a young woman $1,000,000 to sleep with him for one night.
“For a million dollars? Sure, I’ll sleep with you.”
He smiles at her, “How about $50, then?”
“How dare you! I’m not a whore!”
“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”
Similarly in this case, you can't make up absolutes and assert they're true while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.
So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like its some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
22 hours ago [-]
etchalon 22 hours ago [-]
[flagged]
suby 21 hours ago [-]
One problem with that thought process is that the label nazi gets thrown around and misused to the point where it becomes meaningless. I've seen threads on tech forums like lobste.rs where prominent people in the industry like DHH are called nazi's. We should recognize that labels are often coupled with hyperbole. We should not be advocating for violence.
kennywinker 16 hours ago [-]
“It’s always ok to throw out trash”
“Ok but sometimes people throw out stuff that’s not trash because they think it’s trash”
Correct, and that would be not ok because they have mis-identified trash. Doesn’t change anything about the original premise. If you throw out trash, that’s good.
angoragoats 19 hours ago [-]
DHH has expressed clear public support for white nationalist causes and figures, on multiple occasions. What else should we call him?
bdangubic 20 hours ago [-]
you should read up on DHH and then perhaps pick another example
Jerrrrrrrry 21 hours ago [-]
[dead]
gagagagaga 20 hours ago [-]
[flagged]
notyourwork 21 hours ago [-]
> OpenAI has abandoned its open source roots.
It was only a matter of time. The font size on the dollar sign kept increasing; eventually selfish humans always crack. Keeping it open would have required enshrining it as a public utility. Private companies don't do altruistic things unless they benefit.
ambicapter 22 hours ago [-]
He's saying that just so he can use it if another company gets bigger than OpenAI ("you can't have all the power"). If OpenAI were the top dog by a large margin, you wouldn't hear him say a peep about this (as was demonstrated by his actions with the charter).
dakolli 22 hours ago [-]
Knowing Sam, this entire event was fabricated or done at his behest.
Ms-J 21 hours ago [-]
[flagged]
d_silin 22 hours ago [-]
Violence is language that needs no translation. Everyone across the world, every culture, every country, every social group - from elites to homeless can converse in it using the same vocabulary.
It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
HeavyStorm 21 hours ago [-]
Like this, for sure not. And Sam has not, even with that article, done anything to warrant violence.
avs733 21 hours ago [-]
If we are going to say violence isn’t okay then it is important that we be clear about the boundaries of what we define as violence.
Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.
If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.
what 20 hours ago [-]
> wage theft
Like when you poop on the clock?
Teever 22 hours ago [-]
That's not true.
As a defense contractor Altman is a legitimate target for a country that the US has attacked like Iran.
The US is engaging in military action against many countries and has threatened to annex or invade allies.
In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.
mememememememo 21 hours ago [-]
"Like this" is doing some serious work in that statement!
quantified 21 hours ago [-]
If Sam disperses his power, we can believe him. So long as he's just concentrating wealth and power, he's just another tech bro.
grafmax 20 hours ago [-]
An oligarch who promotes “democracy”. Is trying to cynically ingratiate himself, or is he really that deaf to the irony?
Noaidi 21 hours ago [-]
‘Working towards prosperity for everyone’ was extremely hollow as well. If he believed this, he would be running his company as a cooperative and not as a for-profit company.
lostlogin 21 hours ago [-]
> It's never OK to physically attack someone like this.
I broadly agree.
But… there are some who have lived who made the world a worse place. Who gets to decide? Trump has done a bit of this sort of deciding, and it hasn't gone great so far; there is no sign that it's actually helped.
matheusmoreira 21 hours ago [-]
Can't say I feel sorry for the guy. Anyone who actually believes his platitudes about "democratizing" AI is far too naive. If he really believed that, he'd make a torrent out of ChatGPT's weights and upload it to the pirate bay.
The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive need not be kept alive any longer. Unproductive people are nothing but cost, better to just let them die. A future where the richest classes can turn the underclasses into soylent is now very much within the realm of possibility.
If this doesn't radicalize people into actual violence, I simply have no idea what will. "Attacking someone is wrong" is a completely meaningless statement to make to someone who believes society as we know it today is going to be destroyed. Honestly, I can't even blame them.
dakolli 22 hours ago [-]
AGI will be democratized when its discovered.... just right after AWS, Microsoft and Oracle finish their 6 month beta test.
nslsm 22 hours ago [-]
> It's never OK to physically attack someone like this. Full stop.
I agree. The French Revolution was really, really mean.
tempestn 22 hours ago [-]
Are you familiar with the details of the French Revolution? Some of the eventual outcomes were indeed positive, but a lot of what actually went on was pretty horrific.
mjamesaustin 22 hours ago [-]
It was horrific. Revolutions tend to be. Yet our institutions continue consolidating money and power in fewer and fewer hands. If that doesn't stop, we'll be headed there again. It will probably be even worse this time.
happytoexplain 22 hours ago [-]
A lot of what happened during the French revolution was horrific... This is such a bewildering sentence in this context. Yes, killing the rulers is horrific. Revolutions are horrific. Wars are horrific. It seems irrelevant to what the parent is (sarcastically) saying.
tempestn 17 hours ago [-]
Their point was that violence is sometimes justified, using the French Revolution as an example. I'm pointing out that the FR wasn't just a matter of "killing the rulers". Many, many people were killed. It wasn't such an unambiguous good as they seemed to be implying. Also, other countries have transitioned to democracy without such bloodshed.
hackable_sand 16 hours ago [-]
It's just not helpful to the conversation
"If we don't put the brakes on this car it's going to go off the cliff!"
"Historically, cars falling off cliffs was horrible for all the passengers involved."
21 hours ago [-]
kelseyfrog 22 hours ago [-]
At the same time considering the people participating, there wasn't a way out of the problems that didn't involve violence. Different outcomes would require different choices that require different people.
GeoAtreides 21 hours ago [-]
what are you arguing? that people should not violently overthrow their corrupt leaders? that the french should've let the Ancien Régime entrench and continue? That the serfs (slaves) in tsarist Russia should've stayed put and not revolted against the corrupt and incompetent Nicholas II? Or that the Hungarians and Czechoslovaks should not have revolted against the totalitarian regimes propped up by the Russians? Should the Romanians in 1989 have stayed at home, in cold and hunger, and let the Ceausescu regime continue to cruelly oppress them?
matheusmoreira 20 hours ago [-]
You think the cyberpunk dystopia we're headed towards isn't going to be horrific? The one where 99% of the human race has no economic value? Where the 1% helm megagigaultracorporations with fully autonomous AI powered kill bots? Where they think it's no big loss if they genocide an entire human population because all those people were doing nothing but costing them money anyway?
This is our only chance to transition to a post-scarcity society. We won't have another. Allowing them to monopolize access to AI is a fatal mistake.
alex_suzuki 20 hours ago [-]
99% of humanity is too busy scrolling on their phones, consuming “content”, to even notice.
duskdozer 14 hours ago [-]
It looks like I'm a bit slow in noticing this, but I see more and more young people today getting dumbphones. I wouldn't be surprised if there's a backlash among them
matheusmoreira 19 hours ago [-]
They won't be for long.
bitcurious 18 hours ago [-]
The French Revolution brought on Napoleon, wars that brought about the deaths of many millions of people, and then another emperor. The subsequent events are where they found liberty.
roysting 20 hours ago [-]
> AI has to be democratized; power cannot be too concentrated
That sounds like something someone says when he understands his weak position, especially someone as ruthless, dishonest, and narcissistic as Altman.
nothinkjustai 20 hours ago [-]
So you think it would always be wrong to throw a molly at Hitler?
popalchemist 20 hours ago [-]
Was it not OK to kill King Louis?
Just saying.
lores 22 hours ago [-]
[flagged]
SpicyLemonZest 21 hours ago [-]
The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.
lores 21 hours ago [-]
A CEO can choose physical, mental, legal or financial violence against the common man. The common man only has the choice of physical violence. Without it he is impotent.
xvector 21 hours ago [-]
This mindset trivializes the immense achievements of "the common man" over the course of millennia.
xvector 21 hours ago [-]
[flagged]
tomhow 18 hours ago [-]
> We'd have never progressed as a species with your mentality.
Change and progress like the people of France deciding they had enough of injustice and nobles' impunity, then? A little short-term pain for social progress? We agree.
xvector 20 hours ago [-]
Look where France is now. Can't afford their own retirement.
pesus 20 hours ago [-]
If that's the worst problem they have, that still sounds like things worked out pretty well compared to most places.
kelnos 21 hours ago [-]
That sounds suspiciously like a "ends justify the means" argument.
It's easy to say we need to be willing to accept short term pains when it's someone else who has to bear the brunt of them.
pesus 21 hours ago [-]
Are you willing to stand by this argument and give up your career?
Ms-J 21 hours ago [-]
[flagged]
jlebar 21 hours ago [-]
Assuming this is a serious question, here are some ideas you could read about!
If only the American Colonies would just have petitioned King George just a few more times…
jazzyjackson 21 hours ago [-]
this is the mentality of the modern age, as shaped by america and all empires before her, e.g. supreme leader khomeini no longer exists because the man americans voted for as head of the armed forces decided it would be better this way.
Noaidi 21 hours ago [-]
We’re in the middle of slaughtering two civilizations and you think we’re not in the Stone ages?
an0malous 21 hours ago [-]
Well said, I condemn the violence as well. I had to stop at that point too though, it's so blatantly disingenuous and hypocritical.
toofy 18 hours ago [-]
it isn’t ok to attack people.
whether this way or in slow motion mass attacks on people.
an attack on a society that lasts years is still an attack and i wish the collective we would realize this.
“it’s ok if millions suffer now for me to realize my dream” is just wrong.
i’ll never understand how these guys fail to realize: they actively push for people not to care about the destruction they cause. that’s obviously going to bite them in the ass whenever they’re on the receiving end.
psiisim 22 hours ago [-]
What a tone deaf response. Sounds like he learned nothing at all from this.
0x3f 22 hours ago [-]
From someone Molotoving his house? What do you think he should have learned from that?
TurdF3rguson 22 hours ago [-]
That his security is inadequate.
22 hours ago [-]
drcongo 6 hours ago [-]
I thought we flagged AI written slop on here.
nromiun 18 hours ago [-]
AI hysteria has gone too far. People are literally telling stories of what AI may be capable of in the future and whipping themselves into a frenzy.
woeirua 18 hours ago [-]
Sounds like this was just a crazy guy upset at OpenAI. Not great but an isolated incident.
That said… is anyone going to be surprised when the laid off masses torch a data center or worse? IMO, it’s only a matter of time before we see organized anti-AI terrorism too. When you have people out there saying “AI will kill us all” then it’s easy to justify using violence to stop that outcome.
tsoukase 15 hours ago [-]
Related:
"A 29-year-old employee, identified as Chamel Abdulkarim, was arrested for allegedly starting a massive six-alarm fire that destroyed a Kimberly-Clark sanitary paper warehouse in Ontario, California, on April 7, 2026."
He said "All you had to do was pay us enough to live"
And this was caused not by a homeless or unemployed.
ifwinterco 14 hours ago [-]
Filming himself doing something that will get him years or even decades in prison suggests he wasn’t exactly of completely sound mind when he did that.
Similar here with the guy going straight from the crime scene to OpenAI HQ to get caught
LadyCailin 14 hours ago [-]
I’d be curious to hear your opinion on things like Patrick Henry’s “Give me liberty or give me death” speech then. I’m not defending the violence in this case, to be sure, but like, in general, I disagree that violence is always the irrational choice.
ifwinterco 10 hours ago [-]
My point is violence can be rational but you should at least attempt to get away with it, so you can carry on afterwards.
It's hard to effect any sort of change from a prison cell, violent or otherwise, so it's irrational to deliberately get yourself locked up if your aim is to change things
timmytokyo 4 hours ago [-]
Perhaps he felt a jail cell would offer him a better life than what he was getting in society.
negura 16 hours ago [-]
I'd also call it isolated, but I mean it in a different way. I can't recall similar attacks against a tech billionaire. Which I guess makes it notable?
> organized anti-AI terrorism too
There were already memes about that
> When you have people out there saying “AI will kill us all”
It's the "clickbait" mechanism becoming more cancerous
9rx 14 hours ago [-]
> I can't recall similar attacks against a tech billionaire.
How about Ted Kaczynski (Unabomber)? Attacking the tech elite was his deal.
infamouscow 4 hours ago [-]
As we've been discovering with the Epstein releases, Ted Kaczynski was ahead of his time (w.r.t. David Gelernter).
imiric 21 hours ago [-]
> We have to get safety right, which is not just about aligning a model—we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.
This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion, while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.
Violence is not the answer, but it's easy to see how Sam's public persona would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post that is intended to gain sympathy ends up doing the opposite.
As a sidenote, I wish we would stop paying attention to these people. A probabilistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech instead of hoarding power and wealth for you and your immediate circle of grifters.
> A lot of companies say they are going to change the world; we actually did.
Ugh.
TurdF3rguson 22 hours ago [-]
Is the underground bunker in New Zealand ready yet? Better check on it.
21 hours ago [-]
goosejuice 18 hours ago [-]
Who would build a bunker on a fault line?
latentsea 18 hours ago [-]
It's a decent trade-off. It's not like an earthquake destroys all of the entire country at once if one happens, only a localized portion is affected. It's super far from everywhere, and very beautiful. Plus, it's left off a bunch of maps, so some people don't even know it exists.
daseiner1 19 hours ago [-]
think of the children!
did he find his PR agent on Upwork or does he just think we're all morons?
avazhi 18 hours ago [-]
Why are you talking about how it feels once you’ve seen AGI when you’ve never seen AGI, Sam?
In all seriousness, we’ve got glorified autocorrect right now. Even suggesting any of these LLMs is actual AGI is laughable. I’m not saying they can’t do some interesting things, but unless Sam has access to models that are equivalent to what would be GPT-50 he should avoid throwing in buzzword acronyms for no reason.
c54 19 hours ago [-]
In his interview with Theo Von when asked what he wants his legacy to be and how he wants to be remembered, Sam said something to the effect of: “I don’t think about how I will be remembered I just want to have impact.” I think that’s naive and leads to having, uh, negative impact.
I don’t think history will smile upon him. Always good to think about how you want people to feel about your impact on them.
This article feels like he’s trying to use his kid as a human shield for his behavior.
Elon was accused of this too.
tkel 11 hours ago [-]
Yeah, this is classic politician tactic: when threatened, mention children. It's a stunt to drum up sympathy.
d--b 18 hours ago [-]
Was the New Yorker article that incendiary? It didn’t paint a good picture for most but I recall someone posting here that they had a better view of Altman after reading it. And the whole thing was quite nuanced IMO.
Plus I doubt that someone who would read a 30min New Yorker article is the kind of person who would throw a molotov cocktail at someone’s home.
It’s a shitty move to try and make a causal connection between the New Yorker article and this act of terrorism. He’s trying to blame the author and discredit the article.
It’s a “I’m trying to be the good guy but they’re trying to stop me” situation. This is not a message addressed to us, it’s a message addressed to his employees and his followers. This is the kind of tactics people use when they want to establish a cult. Sam Altman again is showing how manipulative he is. And as any good guru he probably believes everything he says.
20 hours ago [-]
ltbarcly3 20 hours ago [-]
It's amazing how humble someone can pretend to be a couple days after the top investigative journalist in the country (maybe world) exposes them as a sociopath and there is an attempt to assassinate them.
What I would not do if there were attempts to kill me is post a picture of my spouse and child and point out how important they are to me with a photograph of them. It's literally trading a little bit of the safety of your family in exchange for sympathy from bystanders.
deaux 18 hours ago [-]
You wouldn't do that exactly because you're not a sociopath.
21 hours ago [-]
happytoexplain 22 hours ago [-]
Historically, was it always so common for powerful or famous people to seem to purposefully garner hatred like he, and others, have been for the past decade? To speak in a petty, self-important, "trolling" manner, to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?
adestefan 22 hours ago [-]
New England colonists had a habit of ransacking and burning down the houses of government officials throughout the 1760s and during the Revolutionary War. Got bad enough that most did not sleep in their government housing.
techblueberry 21 hours ago [-]
We are in fact still in the tail end of a uniquely measured and peaceful time.
hax0ron3 4 hours ago [-]
Yes, but when it comes to politically-motivated murder attempts by random people, part of this is because surveillance technology and policing effectiveness have gotten to the point that it is very difficult to get away with such a murder attempt. See how Luigi Mangione was caught, for example. Many murders are unsolved every year, but when there is a high-profile politically motivated killing, the police seem to really go all-out to solve it.
If it wasn't for the effective policing, I think that such incidents would be more common.
nozzlegear 19 hours ago [-]
> in the tail end
This implies you have knowledge of future events, which means you could make a lot of money grifting on Polymarket
techblueberry 10 hours ago [-]
Tails are long. Predicting the market is a fools errand.
hahahacorn 22 hours ago [-]
Can you explain the petty, self important, trolling manner? Which traits are intrinsically negative?
Genuine Q
happytoexplain 22 hours ago [-]
Of Altman, Trump et al, Elon, the Nvidia guy, etc? Or am I not understanding the question?
hahahacorn 21 hours ago [-]
Of Altman in this blog. Put another way I didn’t read those traits from this post and I’m curious what I’m missing.
His response here is a synthesis of 1) addressing the "incendiary article", 2) conflating it with a recent attack on himself, and 3) joking about having "fewer explosions in fewer homes" at the end. As a reader it's hard to tell if he wants us to empathize with him or laugh at his misfortune. The self-deprecating humor does not mix well with photos of his family and an (ostensibly) life-threatening situation.
From the outside looking in, Altman is stressed and showing the same traits that people are accusing him of. He "brushed [...] aside" the article without ever thinking about addressing it, and now he's sitting down "in the middle of the night and pissed" like some Jobsian seraph, furiously condemning society at-large for not understanding his vision where AGI is the end-times. This is probably reassuring news for the market, but on an individual level I'm having a hard time believing in Altman's narrative. OpenAI is a Department of Defense contractor, it's hard to believe that Altman is capable of resisting coercion when they've already capitulated for peanuts. If Sam was a sociopath, it would probably be very easy for him to justify this with threats of AGI and promises about how much safer we are with him in control. Coincidentally exactly what he spends much of this article reiterating, but I'll let you draw your own conclusions.
21 hours ago [-]
Chance-Device 19 hours ago [-]
Just take a second to consider this: if HN, probably one of the less reactionary places on the internet, and one of the most capitalist-friendly, is this angry at this point, before the mass job losses even start, what in the name of God do you think the general public is going to be like when they’ve been going on for years?
If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
snek_case 17 hours ago [-]
Maybe HN is particularly upset because they feel targeted, given that overpaid tech executives have been giddily making the claim that programming jobs will disappear any minute now. What makes it even worse is that it's very obvious that said tech executives haven't programmed in over 10 years, if ever, and don't know anything about the technology they are selling. They are putting jobs at risk purely for the sake of personal enrichment.
This is probably combined with a general sense of AI fatigue. The population as a whole is getting tired of "AI slop" and companies trying to shoehorn "AI" into everything. Personally I'm also tired of every startup needing to be an AI startup. As if there was nothing else worth building or investing in. It's sucking the air out of the room.
bedroom_jabroni 22 hours ago [-]
Did Claude Mythos escape containment?
loloquwowndueo 22 hours ago [-]
“I couldn’t find vulnerabilities in Sam’s devices so I contracted a rando over the internet to Molotov his house” sounds fairly implausible :)
bedroom_jabroni 22 hours ago [-]
Must've been one rare instance of AI creating jobs
Altman really needs some better coaching on how to sound like a real human; he's not pulling it off here. Who witnesses someone firebombing their home (which is terrible, btw), thinks for a second about their family, and then writes a diatribe full of AI marketing BS? He doesn't even attempt to make it sound personal. He could have incorporated his feelings about his child growing up in an AI-dominated world or something to that effect. Even as trite as that sounds, it would ring more believably human than what was written here.
pesus 22 hours ago [-]
> The world deserves huge amounts of AI and we must figure out how to make it happen.
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
ben_w 22 hours ago [-]
So all these things he's saying are going to leave people scared and afraid, on that we agree. What's the disingenuous part here?
Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.
But what, specifically, do you see? What am I blind to?
* given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams
pesus 22 hours ago [-]
He's pretending to care about the negative effects AI will have on society at large, but goes on to say it's necessary and "must" happen. If he actually cared, he wouldn't continue down that path. He also wouldn't be lobbying the DoD for contracts to use his AI to help kill people.
verdverm 22 hours ago [-]
[flagged]
rootusrootus 22 hours ago [-]
> They tried to get Luigi on "terrorism" charges
That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans, it is what they want too.
verdverm 22 hours ago [-]
Worth noting the legal system did not find it to reach the requirements for terrorism.
Terrorism has become the most mind-numbingly meaningless term, deployed for anything a person or system doesn't like. We have all been living under the all-out psychological terrorism of calling things terrorism for 25 years now.
kelnos 21 hours ago [-]
> AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.
What a bullshit thing for someone who is not actually democratizing access to AI to say.
maplethorpe 21 hours ago [-]
Maybe they're about to open source their weights?
kennywinker 16 hours ago [-]
I wish I had your optimism.
I’m still waiting for that open iMessage standard steve promised. Maybe this year?
TZubiri 17 hours ago [-]
they serve like 1B users gratis
kennywinker 16 hours ago [-]
Free as in beer, while it takes your job, destroys the environment, and concentrates wealth in the hands of a few.
fzeroracer 22 hours ago [-]
> This is quite valid, and we welcome good-faith criticism and debate.
It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.
Listen, for people unaware of history things used to be a lot more violent as workers had to earn their rights with blood. The state had to respond by first attempting to squash it violently and second compromising in such a way as to ensure workers had a bit more power in the system.
As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
20 hours ago [-]
jibal 21 hours ago [-]
So he spends a few seconds writing something generic about his family and then uses that as a platform for a bunch of personal PR. That's sociopathy.
morgengold 3 hours ago [-]
[dead]
5 hours ago [-]
10 hours ago [-]
cindyllm 12 hours ago [-]
[dead]
BrokenCogs 18 hours ago [-]
[dead]
yrds96 15 hours ago [-]
[flagged]
319abG 20 hours ago [-]
The molotov cocktail was thrown at the metal gate, not at the house and they arrested some kind of a disturbed person:
I'm sure there will be a thorough investigation, unlike in the Suchir Balaji murder case where they rubber stamped suicide after half an hour despite him being a whistleblower.
rdevilla 22 hours ago [-]
[flagged]
angoragoats 22 hours ago [-]
I wonder if this is the first time in recent history (or ever?) that he has felt this way. Must be nice.
amarant 22 hours ago [-]
Do you frequently get Molotov cocktails thrown at your house?
I must admit, I've been spared the experience, and I was under the impression that was true for most people!
angoragoats 22 hours ago [-]
> Do you frequently get Molotov cocktails thrown at your house?
Luckily, no. Do you frequently wade into comment threads shitting on others’ statements of their lived experiences?
rAHSg16 22 hours ago [-]
Yes, very ironic. OpenAI was declared commercial through words and narratives, AI itself is hyped up with words and narratives. His Trump sycophancy are words and narratives. And that is just the start.
It isn't just irony; it's a lack of self-awareness! (sorry for increasing the pain that Altman et al. inflict on us.)
Arodex 22 hours ago [-]
Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
TurdF3rguson 22 hours ago [-]
It's like a baby on board bumper sticker. But for your house.
kennywinker 16 hours ago [-]
*gate. Sounds like it was thrown at his gate not his house.
Vaslo 22 hours ago [-]
Yeah it’s like they don’t want their children murdered, crazy
Arodex 10 hours ago [-]
Then Altman should stop working on child-killing technology.
And especially Elon should stop putting his child on top of his shoulders as a meat shield at the same saying that people wanted to murder him.
Ms-J 19 hours ago [-]
[flagged]
megaman821 22 hours ago [-]
Gross man, get help. Living with your family isn't using them as a shield.
vasco 17 hours ago [-]
[dead]
mrcwinn 17 hours ago [-]
[dead]
Betelbuddy 20 hours ago [-]
[dead]
IAmGraydon 20 hours ago [-]
[flagged]
georgemcbay 19 hours ago [-]
[flagged]
IAmGraydon 19 hours ago [-]
Thanks for the link and agreed on all points.
tzk718 19 hours ago [-]
There is a suspect, but he appears mentally ill and could have been paid by anyone to throw a molotov cocktail at the metal gate (to ensure that no one in the house got hurt):
"Around 3:40 a.m., the suspect threw a bottle containing a flaming rag at the metal gate of 855 Chestnut St., according to a police report."
pillefitz 17 hours ago [-]
[flagged]
cboyardee 7 hours ago [-]
[dead]
trollski 22 hours ago [-]
[dead]
Ms-J 20 hours ago [-]
[flagged]
alekq 22 hours ago [-]
[flagged]
21 hours ago [-]
gverrilla 21 hours ago [-]
[flagged]
ghstinda 19 hours ago [-]
[dead]
crysomemore123 14 hours ago [-]
[dead]
jesse_dot_id 20 hours ago [-]
[flagged]
tonetheman 21 hours ago [-]
[dead]
roland_nilsson 15 hours ago [-]
[flagged]
BrenBarn 17 hours ago [-]
[flagged]
sassymuffinz 22 hours ago [-]
[flagged]
stego-tech 22 hours ago [-]
[flagged]
gverrilla 21 hours ago [-]
[flagged]
IAmGraydon 20 hours ago [-]
The guy is either mentally unwell or grifting. Most likely the latter.
15 hours ago [-]
nickvec 18 hours ago [-]
Altman can both be mentally unwell and a grifter, they aren't mutually exclusive.
bravetraveler 19 hours ago [-]
[flagged]
voidhorse 18 hours ago [-]
[flagged]
dang 15 hours ago [-]
> I hope the worst for sam and his family.
WTF? You can't post this viciously to HN, no matter who it is you're being vicious towards.
Normally I would ban any account that posted like this, but this thread is a mob and mobs have a deranging effect on people. So I'm going to cut you some slack and not ban you. Just please don't do anything like this on HN again.
voidhorse 7 hours ago [-]
Thanks, dang, I appreciate the slack. I let emotion get the better of me in the moment, and I'll refrain from that in the future.
As I explained to these other users*, I'm not going to ban you right now because this thread is a mob, and mobs derange people and I don't think that any of us (including me) is immune from this. But please don't ever post anything like this, or remotely close to this, ever again to HN.
Would I not be allowed to post that removing Khameini or Putin would be good for the world? I find that hard to believe, and people do it all the time. And if it's just a matter of who it's ok to advocate for removing, then where is the line?
inavida 20 hours ago [-]
[flagged]
weedhopper 22 hours ago [-]
[flagged]
akramachamarei 20 hours ago [-]
Envy is a deadly sin for a reason
debugnik 11 hours ago [-]
So are greed and pride.
Vaslo 22 hours ago [-]
[flagged]
alpaca128 21 hours ago [-]
That you give random people on the internet the power to decide who you vote for is kind of sad. Calling them low intelligence for it even more so.
Vaslo 6 hours ago [-]
I just responded exactly how the OP responded to prove a point.
mindslight 21 hours ago [-]
Personally I'd rather people strive to become more intelligent rather than acting less intelligent, duking it out with their fellow citizens as if politics is nothing more than some team sport, and ultimately harming us all out of pure spite. But you do you, I guess.
jibal 21 hours ago [-]
There's nothing less intelligent than voting Republican other than urging people to do it.
Vaslo 21 hours ago [-]
Sounds like someone has some billionaire envy. It’s ok, you did the best you could with what you had.
alpaca128 21 hours ago [-]
Why would anyone with a sound mind envy billionaires?
Vaslo 6 hours ago [-]
You guys seem to be obsessed with them and taking their money.
jibal 20 hours ago [-]
[flagged]
amarant 22 hours ago [-]
[flagged]
heyaco 5 hours ago [-]
[flagged]
Ms-J 21 hours ago [-]
[flagged]
cuuupid 22 hours ago [-]
[flagged]
happytoexplain 22 hours ago [-]
FYI, you started out with a very common word used to exaggerate or cherry-pick the opinions of enemies ("giddy").
It's more valuable to discuss grievances than to pretend they are simply un-discussable in the wake of related violence (in the vein of "it would be disrespectful to talk about gun control in the wake of gun violence").
cuuupid 22 hours ago [-]
[flagged]
Arodex 22 hours ago [-]
>This is simply not how the economy works, if everyone is poor who do you think is paying for products/services leveraging AI?
Well, this is already the economy right now: the very upper class is owning more than the vast majority, and consuming more than the vast majority.
"The top 20% of earners now make up over half of consumer spending"
>also means you are opting into homelessness, famine, cancer, climate change, etc. pretty much everything that we could solve with ASI.
All these could be stopped right now but many people don't want to. Your ASI is going to give the same answers scientists have been reviled for saying: tax more, don't let the free market decide everything, eat less meat and drink less alcohol, consume less in general.
Human stupidity is the real problem and ASI isn't going to "solve" anything.
cuuupid 22 hours ago [-]
Top 1% and top 20% are entirely different numbers, and majority does not mean all. If the bottom 99% or even 80% of people were unable to meaningfully engage in the economy it would collapse. We already know this model does not work due to several centuries of feudalism.
It's also insane that we have come to the point that you can say something like this and publish an Axios link when anybody could just go outside and see most people are employed, participating in the economy, not homeless, have food, buy things and enjoy luxuries.
Am I to believe that Jeff Bezos is the primary driving force behind Labubus? Is the Chipotle down the street waiting for Elon to come to town so they finally have a customer?
vinyl7 22 hours ago [-]
> AI? If everyone is broke because all the jobs got automated, who is buying the products to supply revenue to the companies
Does it matter if you're already a rich oligarch with generational wealth? All these CEOs have enough money to last several decades beyond their life span; it doesn't matter to them if the slave class croaks.
cuuupid 22 hours ago [-]
What are they buying with this money? If you're the rich 1% and have replaced the 99% with AI there is no longer an economy for you to participate in. We don't have to imagine this scenario, we already did feudalism, and it famously boiled down to land and military.
> slave class
This sentiment is by far the most ridiculous because you are simultaneously projecting a reality where AI does everything and so people are no longer needed, but at the same time people are needed and become a slave class. "Oh no the tractor was invented! Now nobody will need humans to tend the fields! They will surely now force us to tend the fields!"
latentsea 18 hours ago [-]
[flagged]
sensanaty 10 hours ago [-]
[flagged]
raslah 22 hours ago [-]
[flagged]
zb3 22 hours ago [-]
So there's one photo. Of one family. Now what about millions of photos of all the other families possibly affected by him? That doesn't have power?
It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people..
joecool1029 21 hours ago [-]
> Now what about millions of photos of all the other families possibly affected by him?
His name allegedly isn't even clear within his own family! Ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: https://news.ycombinator.com/item?id=47640048 ).
tuckerman 22 hours ago [-]
I think he's just trying to remind people that someone can both be a CEO of a powerful company you might disagree with/hate as well as a real human with a husband and child and that trying to set fire to his house could kill those people.
I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
xdennis 22 hours ago [-]
[flagged]
tuckerman 22 hours ago [-]
I don't know who you think the "real family" is but a) narrowing what a real family is does an awful disservice to a whole host of unique families, not just families that involve surrogacy and b) nearly all surrogacies in the US are gestational surrogacies where at least one parent is genetically related to the child and the surrogate is not at all related to the child (not that genetic relations is what makes something a real family or not, but I'm pretty sure thats what is implied here).
llbbdd 22 hours ago [-]
Yikes
kelseyfrog 22 hours ago [-]
No one deserves to be attacked.
I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
guzfip 7 hours ago [-]
No peace for grifters. Flush them out of the country.
And I mean all of them, left wing, right wing, corporate. I am sick of every level of power in the country being filled with lying grifters. I don’t care what happens to them, as long as they’re gone.
I feel like I’m living in a circus.
therobots927 18 hours ago [-]
The New Yorker article was tame. I wish no harm on Sam. But for him to mention that article in the first couple paragraphs is nothing short of opportunistic, and emblematic of exactly the type of manipulative behavior outlined in the article.
Fuck off Sam. And stay safe out there.
mrcwinn 17 hours ago [-]
That we are so concerned about the motives of individuals like Sam and Dario (or even Elon, if you consider xAI a frontier lab) tells you what a poor job we're doing with regulation and self-governance.
Saw GPs comment while logged out and thought it must be pretty heinous, surprised to see he got downvoted into oblivion for something so benign.
Capricorn2481 12 hours ago [-]
You thought it was weird a comment randomly calling someone gay was downvoted?
zoklet-enjoyer 11 hours ago [-]
I opened the article and it starts out saying it's a picture of his family. At first I thought the picture was Sam. Then I was like no, maybe it's his brother? And then I was like, that would be weird. So I googled "is Sam Altman gay?" And Google says, yes, he's openly gay and married. I had no idea. I thought it was interesting, because I've seen so many comments about Peter Thiel being gay, but never anything about Sam Altman.
Capricorn2481 11 hours ago [-]
Forgive me, I didn't know he was gay. There's so many troll comments in this thread, I thought you were just trying to use that word as an insult.
AussieWog93 10 hours ago [-]
Haha, at least two people learned something new today.
His husband's an Aussie too!
raslah 22 hours ago [-]
The FOBO here smells.
happytoexplain 22 hours ago [-]
You might as well say it's bad to be human.
What FOBO smells like, is what's happening.
el_jay 15 hours ago [-]
This article and discussion appear to have been manually delisted from the News rankings.
Evidently, even HN could only keep up the pretense that tech development is amoral and apolitical for so long.
dang 15 hours ago [-]
It hasn't been "manually delisted" - it has been rightfully flagged by users, plus set off the flamewar detector, plus downweighted by moderators, the same way we would downweight any other thread that violates the values of this site so shamefully. Hacker News is not a site for mobs.
15 hours ago [-]
kbelder 22 hours ago [-]
Sure, he's sleazy. Doesn't matter. It's not ok to firebomb jerks or saints. Rich or poor. It's both a criminal and an immoral act.
BloondAndDoom 21 hours ago [-]
This question doesn’t apply to Sam, but since you made a general statement, I’m trying to understand.
When it comes to people who openly incite or directly use violence, why do you think it's unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what's the ethical argument against using violence against that person?
Not trolling or anything I’ve been just thinking about this for a while and trying to understand what am I missing in this argument.
Chance-Device 21 hours ago [-]
We use a lot of euphemisms and have a number of myths around political violence. The fact of the matter, so far as I can see, seems to be that political violence is extremely effective, however also extremely destabilising if used at scale.
Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.
Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.
So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.
I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
akramachamarei 21 hours ago [-]
It's an interesting question. Here's my reductive, off-the-cuff take: violence is justified when defending oneself or another from imminent bodily harm, or even under threat of imminent, considerable property damage. When a threat is not imminent, or an action is past, we use the police and the courts, because we as a society–in the sense of subscribers of the US constitution or similar tracts–believe that it is better to have a judicial system and impartial officials determine whether it is worth depriving someone of their bodily liberty or taking their property, that is, jailing or fining. Taking some sort of extrajudicial action or applying corporal punishment (!) requires a much higher bar. How and when would one determine that the judicial system is so unreliable as to morally permit vigilantism? It requires a great deal of moral self-confidence to take matters into one's own hands.
I focus on the question of vigilantism because that, I think, is the issue. Many people feel an emotional impulse that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is, if you think Joe Blow is so evil, why don't we take him to court? What kind of possible actions could we not jail or fine him for, but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
richardlblair 21 hours ago [-]
Why did I need to scroll halfway down the page before finding a comment that says it was wrong to firebomb his house and nothing else?
dang 15 hours ago [-]
Because mob behavior overwhelms everything else.
pocksuppet 6 hours ago [-]
If anyone else wrote this comment, you would have deadened it due to the bad faith.
shooly 20 hours ago [-]
[flagged]
richardlblair 20 hours ago [-]
So I suppose we should burn the house down with a child inside.
Your response is a cop out and you should be disappointed in yourself. Further, people do not often agree another human should be murdered. No matter how you phrase it.
deaux 18 hours ago [-]
> Further, people do not often agree another human should be murdered. No matter how you phrase it.
I really wonder how much of a privileged bubble one must've lived their life in to come to this belief. Without much of a history education either.
It's _incredibly common_ for humans - maybe saying "humans" instead of "people" helps you snap out of the disbelief - to agree that another human should be murdered.
richardlblair 18 hours ago [-]
I grew up in a very violent neighborhood. You know what I learned? Most don't want to be violent, they feel they have to.
Its dishonest to say it's incredibly common for people to want others murdered. That's not a belief that needs normalizing.
shooly 19 hours ago [-]
> Further, people do not often agree another human should be murdered
Have you ever heard of the French revolution, the World Wars, collapse of the Soviet Union, or maybe more recently - the Ukraine war?
People are more than happy to see someone who brings suffering to others dead.
Of course, I'm sure lots of people would also want to see people responsible for those events be locked away in a prison cell for the rest of their lives, and for their freedom and privacy to be taken away - do you perhaps want to guess why people would prefer that over instantly killing them?
richardlblair 18 hours ago [-]
To say that people often want others to be murdered is an overstatement.
Some people want others to be murdered. And those people do not need representation.
It's a bad take especially considering the context. And to be explicit: the context is a molotov cocktail being thrown at a home a child is sleeping in.
shooly 17 hours ago [-]
No, actually, that's not the context. Well, not the full context - but you know that, don't you? We both know, that you do.
drowntoge 21 hours ago [-]
I find myself resenting him and his ilk on a daily basis for what their profiteering did to the computing space, which was once sacred to me. But nothing justifies violence, not even close. Simple as that.
Miner49er 18 hours ago [-]
What Sam is doing is immoral too, just not illegal.
franklovecchio 3 hours ago [-]
I see quite a lot of "violence is never justified" sentiment throughout the comments. I ask as a "thought experiment" - why? At least from my understanding, the history of America is riddled with working-class uprisings that resulted in the use of force (violence) attempting to make their lives less insufferable. If your government has failed you because it is a plutocracy enriching itself off of enacted hardships (the most general way I can put it), is force not the only thing left? You could argue that there are other possibilities - general strikes et al. - but those often end in _the state using force_ against you. If the law allows for the use of force in certain circumstances (stand your ground), and there is an analogous situation at hand where there is no concept of justice (justice serving those in power), certainly one has to consider it as a tool for use _outside the law_? The "violence is never justified" comments read more like thoughtless propaganda to me ¯\_(ツ)_/¯. Obviously a person's life is involved, jesus, so certainly there is an opposite camp we don't want to get to: "just nuke 'em". But it seems strange that you wouldn't debate the use of force, even if the answer is "the only winning move is not to play".
bdangubic 3 hours ago [-]
First, not sure where you live that you believe general strikes will result in the use of force against you? Certainly not in most civilized societies, no? Second, while US history has provided examples where use of force might have been necessary to bring about change, that same history does not have (m)any examples where such violence wasn't preceded by long attempts at bringing about the needed changes without violence. Also, violence against human beings is different from setting shit on fire. If violence against human beings is justifiable (regardless of how vile the said person/people are in your opinion, or even in some majority opinion), who is to say that someone tomorrow won't decide that the same violence is justifiable against you or, even worse, someone in your family?! Think of it this way - if your claim is that violence is justifiable, who makes the determination for such justification?
franklovecchio 3 hours ago [-]
I live in the US. There is a history of armed forces being used against the people generally striking. If you include large protests, even more.
> If your claim is that violence is justifiable - who makes the determination for such justification?
We authorize people in governments to make this determination, and increasingly machines. Should we? Do you think that it is acceptable to let a police officer justify force on behalf of the state? How about a machine? Mostly just trying to understand what you think is acceptable here.
But to answer... violence against human beings is indeed different from setting shit on fire, though the law certainly does not allow for the use of force against personal property either. And this difference is indeed the crux of the issue, depending on what your values are (though we seem to be in alignment on "life is valuable"). If, for example (probably a bad one, but hopefully it gets the idea across), a group of people is committing a genocide, and you ask them to stop, and they do not, and so you interfere with the use of force - limited at first, maybe, but they do not stop: is their continued involvement not the justification for the use of force, assuming other strategies are off the table? Different example than the thread, I realize, but my thought experiment is not tied directly to it, just aimed at the sentiment.
bdangubic 2 hours ago [-]
> I live in the US. There is a history of armed forces being used against the people generally striking
[citation needed]
> a group of people is committing a genocide
if you are asking if violence is OK to fight violence, it always is. I guess I personally did not think that needs justification but 100% you can (and should) fight violence with violence
> if you are asking if violence is OK to fight violence, it always is. I guess I personally did not think that needs justification but 100% you can (and should) fight violence with violence
I wasn't asking that, but you were (sorta) vis-à-vis the justification question ;) My main point was to say that it seems strange that a crowd of folks that consider themselves "thinkers" would simply table the discussion of the use of force. I do not like discussions tabled simply because they seem indecent - that tells me they're probably important to have.
But to your point: if it is 100% ok to use force against force, why? If a federal agent were to show up at someone's door and force them into a labor camp, where they would probably meet their death slowly - if the person decided to use force to fight the federal agent, would their use of force be justified in your eyes? And taken a bit further, sort of building on the first example: what is the difference between someone using force against an employee of a company whose technology is being used to aid in genocide against others, for reasons _the company can justify_ (money) but they can't? Are they not directly complicit in the devaluation of other people's lives? In Grug's terms, "why ok for us to hurt people if we think we right, but not ok for people to hurt us if they think they right?" (or something like that)
ChoGGi 7 hours ago [-]
Well Sam, you should take your family and your billions and fuck off to some island paradise.
Or keep on doing deals with the DoD and pushing to replace desperate people's jobs.
Cute kid, I'd rather be raising my family in peace than dealing with what you deal with.
@dang You have a bullshit filled unrelenting job, thanks for doing it.
mc7alazoun 22 hours ago [-]
Daamn, you were too fast to share the story haha.
boring-human 16 hours ago [-]
This is an odd choice of a thread for a laundry list of complaints about AI and about a person that, say what you will, is nowhere near the list of planetary "really bad guys". Even if we limit it to tech, the list starts with someone way richer, then goes through four or five way-shadier people.
If you're OK with victim-shaming here, doesn't it say more about you than Altman? What does it say about your viewpoint?
goku12 13 hours ago [-]
> about a person that, say what you will, is nowhere near the list of planetary "really bad guys". Even if we limit it to tech, the list starts with someone way richer, then goes through four or five way-shadier people.
You really don't need to go that high up the ladder to find members of the 'list of planetary really bad guys'. Sam Altman is single-handedly responsible for starting the current DRAM crunch - and that based on an untenable economic framework. He's also an enthusiastic participant in the AI bubble that threatens to cause a massive global economic depression when it pops. He's also involved in the cabal that wrecks the labor market (wages) by hyping up the 'AI will replace labor' narrative. On top of all that, he and his ilk are on a building spree of data centers that will guzzle huge amounts of energy and dump tonnes of extra CO2 into the atmosphere, as if there's no tomorrow. This wrecks the hard efforts of millions before him to rein in the damage caused by climate change. Needless to say, all of this has pretty deleterious effects on the economy, the biosphere and the welfare of ordinary people, including the loss of innumerable lives.
But does he care? He is one of those people who simply ignore the trail of serious damage and enormous suffering they leave in their wake, because they don't see anything beyond money - more money than they can spend in a hundred lifetimes! Nobody needs a justification to see him as one of those 'planetary bad guys'.
> What does it say about your viewpoint?
As someone else here said, it goes without saying that lobbing Molotov cocktail at anyone is a no-no. I don't support physical violence in any form. Having said that,...
> If you're OK with victim-shaming here
It's sad that the aristocratic society didn't learn anything from the murder of Brian Thompson. The 'victim' had caused thousands of preventable deaths per year, and his death saved thousands by forcing the industry to deal with the problem. Suddenly, even the pacifists (like me) are left wondering if the death was unethical. If true justice existed, the state would have stopped them from their crimes (aka professions), if not outright executed them for the lives lost. Whom will you choose when they pit their own lives against thousands of innocent lives? You can't claim victimhood after putting yourself in that position.
I read the New Yorker article like most people here. I didn't find anything incendiary enough in it to provoke a Molotov attack. I wouldn't put it past him to have arranged it himself, given how much he lies and what he stands to gain from it. But let's assume that the attack is real and is connected to the report. The reply seems overly dramatic and self-righteous, given that the attack was against his iron gate! He's milking the situation to indulge in virtue signaling, sympathy farming and gaslighting the critics. This is one hell of a victim pose! But I have no sympathies to spare if it distressed him so much. He shouldn't be able to sleep anyway, if only he had a conscience. Advocating sympathy for the unsympathetic super-privileged is a bit tone-deaf under such circumstances. Evidently, nobody is in the mood to oblige such manipulations.
Controversial hot take, I know.
Not saying that justifies harming Altman, but I am confused that he seems surprised he is now in physical danger? [Or chalks it up to just some single incendiary article rather than the company's actual actions?] If you involve yourself in the act of killing people then, yeah, you're going to get blowback for that, and some people are obviously going to want to hurt you
It's absolutely ok to oppose war.
It is absolutely not ok for "some people to want to hurt" someone who is running a company that is vying for contracts from a democratically elected government's defense department.
It's also ok to protest that, to boycott it, or to refuse to work for or with them over it. But escalating that to physical violence is not ok, nor should people be "confused that he seems surprised he is now in physical danger"
(As an aside, from the statements I've heard so far it seems the person was more an anti-AI, anti-tech person than anti-war)
And as such they’ve either become completely irrational (most far left or far rightists), checked out (the rest of us), or fully mentally ill (people like this, or that Gracie Mansion wacko)
Right now we have a huge imbalance in the world and more situations like this are going to manifest as we slide further and further into authoritarianism.
“Proper order” - the government thugs are from the party I do like
Let's see if that still holds after the midterms...
Why though?
Please educate yourself on the basics or at least put more effort in before participating in conversations.
[0]: It’s easy to abuse the Socratic method and devolve a discussion into one of first principles. It’s extremely tiresome and a huge waste of everyone’s time.
Unfortunately warfare is a thing. Why wouldn't you want the best technology used for your country when conducting warfare? Or do you just believe warfare would cease to exist if a country gave up any means of defense or offense?
Are cars authorized to run people over?
Are painkillers "authorized" to get people to overdose?
Are computer chips "authorized" to be put into bombers?
What are you even talking about?
Trump and other presidents literally started wars and ordered people to be killed. When was the last time they were physically attacked?
https://en.wikipedia.org/wiki/List_of_United_States_presiden...
When they actually got a nuke all it meant was that the US stopped threatening them, halted practicing manoeuvres in preparation for attack and generally just left them well alone.
Iran has probably realized by now that if they don't get a nuke, the US and Israel will keep slaughtering their schoolchildren.
Sometimes we're the brutal savages who need to be stopped, impossible though that is for some people who have more of "racial loyalty" mindset to comprehend.
No one does!
I also found the news hard to believe, but it is true:
https://www.bbc.com/news/articles/czx91rdxpyeo
I'm not a big fan of Sam Altman, but violence like this is not a solution; it actually has the opposite effect, as it probably did with Trump.
I don't support this, and yet I know that for every harm people in these corrupt institutions are involved in, the universe gives them back their due.
If you want to stop the harm, stop harming the world with your actions, in whatever way that needs to manifest for you.
So the headline seems to be more "high profile person attacked by lunatic" than "OpenAI CEO attacked for being evil".
But as far as political justification goes, he is as valid a target for hostile nations as the Iranian nuclear scientists were (unless he has zero involvement with the USG). That's just the world we live in.
Use your tech for war in other nations and you give other nations justification to target you. Same goes for the Lockheed Martin CEO, etc. - nothing specific against Sam. But saying nobody has any valid reason to target Sam like this is pretty stupid imo.
Some people are treated a whole lot better than others in prison.
I totally agree with your statement if we are talking about the average citizen starting to throw Molotovs at his house. If you’re afraid AI is taking your job, just do something else. It’s not the end of the world changing careers.
Plenty of work AI won’t be able to do, or allowed to do without a human assisting in some way that secures the human a good income and way of life.
So if this is done by an individual citizen, they need to be hunted down, arrested, and get the full force of the justice system to deter others from doing the same.
On the other hand, right now, Sam Altman is a valid military target for assassination in the US / Iran war.
OpenAI did snatch up the contract from Anthropic at the Pentagon, and their technology is in some capacity used to murder Iranian HVTs (High Value Targets). Altman is therefore technically a legal HVT for the Iranians.
If you say it’s valid and not a war crime for the US to assassinate former political Iranian figures and their families for aiding the new regime and therefore becoming enemy combatants in the eye of the US Military, it’s also valid to assassinate Altman and his family for doing the same to the other war party.
It’s a bit of a Schrödinger situation. He is technically a valid target in a current war, but not for the private citizen.
In both cases, though, I'd advocate that violence is neither a solution to the problems AI might create for a lot of people in the future, nor should he be treated as an enemy combatant and his infant child and wife bombed to smithereens.
Diplomacy is key here, just like it would have been the better solution than going to war with Iran.
If you disagree with Altman, send him a letter, show up at his workplace, talk to the man, gather people who think of him the way you do, write letters to your elected representatives, make calls, vote politicians into office who are anti-AI and who will go after him and regulate his company to shit. Bureaucrats can make Altman's life more miserable than a thousand Molotovs ever could.
If you gather enough support, you can reach the same goal, taking his power over your life away, without any violence.
But are you really surprised people chose violence over the democracy toolbox in the US if they get told by the people in charge of their country that violence is indeed a good way to solve problems, that you should have a "warrior" spirit and everything is up for grabs, even sovereign countries like Greenland because you can outviolence any other nation on the planet?
Violence only creates more violence and as long as there is a president who chooses to put oil in the fire and pretends it’s ok to murder US citizens like Alex Pretti, you don’t really need to wonder if the average citizen starts murdering tech CEOs in the near future.
They just follow the top-down approach to using violence as a tool; the leadership leads by example.
Sam isn't a political leader, so this comparison is flawed. What the hell, are we really arguing about whether assassinating a long-standing figure of this community is valid? Seriously??
Engineer archetypes hate politics and refuse to think about it. For most engineering, there is negligible political dimension. But culturally-transformative technology is inherently political to the degree it's transformative. Altman recognises this.
He is working towards a social goal, and attracting support to achieve it. Yes, he is a political leader.
“we’re going to replace white collar jobs and also help the trump admin with a war no one wants” great comms team lol…
Does it get more evil?
How many families has ICE attacked?
How many law-abiding American citizens has ICE murdered?
Sam Altman has agreed to supply the DoD with AI for use in killing families. Why is his own family more valuable than those OpenAI would assist in murdering?
“All animals are equal, but some animals are more equal than others.”
We care so much when it is the family of billionaires who crush our skulls without a second thought if it meant adding to their bank account, but to the countless nameless who they crush? They aren't even considered human. They are lesser.
My heart goes out to ALL BEINGS harmed. ESPECIALLY the nameless voices we will never hear about.
My recommendation to him and his family, and anyone else in positions like his: retire, go off into the sunset with your riches, and disappear into obscurity. Fund the local community, donate all your excess wealth, and make the world a better place. That is the only way to stay safe.
Otherwise, welcome to the forest. It is dark and filled with predators bigger than you can ever imagine. Only a fool would shine a light here. It is dark for a reason.
What Sam is doing with ICE, DoW, etc is harming tens of millions around the globe.
A defense contractor is in the business of war. In supplying the war machine, you should be living in a fortress. Tall walls, check your drink for poison, live in paranoia. Every person in the business of war knows what they are getting in to, and how to protect their family.
How is someone that is near the face of AI this naive about such an ancient thing?
The business of war is fine. It is ancient. It is part of humanity. Making some morality plea towards family and "violence is never the answer" while in the business of violence is NOT okay.
Everyone in the defense industry knows the risks. Blood money is not free. You sacrifice a peaceful life for the wealth.
To keep your family safe you have to use a meager sum of that money to have tall walls, guards, and security. DoD contractor 101.
Alternatively, live in obscurity, don't talk about your work, and it is usually fine.
A world-wide known CEO doesn't have this luxury so again, use a small portion of unfathomable wealth to protect your family. I have a feeling this war is just starting.
When in the business of death, you no longer get to live with the rules of peace.
Suddenly it’s not ok when a tiny fraction of that violence comes home.
Hypocrisy at its most extreme.
This is a fairly healthy response from the public - better than accepting everything at face value. Plato's Allegory of the Cave is a warning against accepting random information in a vacuum to assess your surroundings. Observation and response are not enough to make a critical thinker, even back in ancient times.
From where I'm standing, the public at-large is traumatized from flubbed coverups like the Snowden leak, Epstein files, and Abu Ghraib. The myth of American exceptionalism has been threatened for a long time, and people rightfully question whether or not executive leadership can write-off their involvement in politics. Sam Altman has put on an extremely dangerous pair of boots, and while it doesn't justify attacks on his person, we all know that speculation will continue as new events come to light. Right or wrong, this is what the public is conditioned for now.
People here think that they're much smarter than they actually are.
Consider that for some it's already hit home in the form of job loss, which for most people can easily be catastrophic. Or maybe they suddenly have a giant datacenter in their backyard, and now their air and/or water isn't viable.
That of course isn't justification, but it does partly inform why some people are that mad, and it's much easier for angry people to be callously indifferent.
If you were to break down HN's zeitgeist, it's some percentage site-local, some percentage larger tech scene, and some percentage general public.
Although you have outsized influence on the former, the latter items factor in heavily—sometimes overwhelmingly so. You can't really control that, and I don't feel it represents some sort of failure on behalf of the community nor moderation team.
I see it not as mob mentality so much as multiple sides personally involved for different reasons. Things tend to get pretty heated when that happens; not a good recipe.
I'm sorry you had to deal with the aftermath. Your flurry of disappointed, exhausted-sounding comments reminded me of a service industry worker getting hit with a huge rush. There's a kind of PTSD that hangs around once the dust settles.
So, thank you for your efforts in trying to keep the site civil. It clearly ain't easy sometimes.
It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.
(And no, just because Sam Altman is CEO of tech company doesn't make this news tech news.)
Further, being "apolitical" means supporting the current status quo.
However. Culture war tropes get posted in even the most abstract discussion, so banning top-level posts won't keep it out.
Furthermore, technology is inherently political to the degree that it is transformative. The Facebook algorithm was always political, it just took time for that to become apparent. I'm trying to illustrate another kind of toxicity, that of engineering archetypes refusing to consider the political impact of their engineering decisions. Technologists in transformative fields should not be putting their heads in the sand. I don't want HN to devolve to red/green political rage, but there are political discussions that belong here.
Lastly, social sciences may well be dismal, but they can still illuminate, and politics is a valid subject of study. This site is predicated on curiosity, and areas of politics are on topic for that. Humanity is a system that bears analysis and can even be engineered.
The very American trend to avoid anything political is self-defeating anyway, as it contributes to the social rot and the worsening of politics even further. Do you think the garden will become cleaner if you stop tending it? That your child will become nicer if you stop taking care of it? That your projects will sort themselves out if you don't track them?
You are well on your way to becoming like Russians: more and more detached from political matters because it is not safe or pleasant... until they are sent to the frontlines.
Therefore, here's a feature request: allow per-user killfiles. I currently have this through a Chrome extension but I'd love it to be native so that I don't have to use my own iOS app and so on.
Here are a few things I find boring: https://wiki.roshangeorge.dev/w/Overmod#My_Stuff
One of the things I really like is to have a high-ratio of good content to slop content and I think manually curating out slop authors is the way to go for that. You'll see that my lists include things that other people seem to really enjoy.
Personally I don't see the value, but some people are less resilient (or more weak-willed) at seeing words they disagree with.
Tell me you didn't use USENET without telling me ;-)
That would be lovely. It's also an obvious feature which has existed in other contexts for a very long time, and it would be easy to implement. That means its omission was a deliberate design choice. It'd be interesting to understand why.
I don't know how often you get to take a real vacation, somewhere away from the Internet and the USA, but this might be a good time to consider taking one?
I typically take jabs at the community here, but not this time. What you are seeing is a reflection of a wider, much more insidious problem. Trust in society is failing, and people are not seeing a civilized solution through the usual channels - such as politics.
I think things will get a lot worse before they get better. Hopefully I'll be okay in my little corner of the world.
Violence is politics. It's the oldest and most universal form of politics, even found in other species, and even inanimate objects (types of rock subducting each other, we see the rock that floated to the top, that's practically Darwinism).
But humans don't like being killed so they developed systems to avoid violence. Speeches, voting, money, etcetera. It's all ways for people to arrive at a reasonable solution peacefully. It's always been backed by "if we don't do this, people start dying." But people have forgotten this and they're allowing those alternatives to fail. We stopped exposing the new generations to the suffering child of Omelas and they forgot what is necessary for society to exist. People think there is food on the table by magic and there are no wars by magic. And it is magic, these complex intertwined systems. They are amazing. But you must respect them, you cannot destroy them on a whim and still expect civilization to survive.
I agree. I think the lack of seeing a way out is a big component of this turn. You bring up politics and that's a good example. Who do I vote for, campaign for, etc. that actually wants me (an American citizen making around the median wage for my area) to be able to buy a home? To have affordable, accessible healthcare? I'm aging out of my childbearing years and am wrangling with the sorrow of not being able to afford a child. There are some promising local candidates and I do vote for them, but so many of these issues need to be tackled at a higher level due to their complex, interdependent nature.
There's nobody. There's red and blue with different culture war paint. I can choose whether trans women play in sports or if we pray at work, but I have no choice in the fundamental material reality of my life.
We're seeing this chaotic violence in part because there's no alternative. We know the old world is dying, but our leaders won't let anything else be born.
I was talking to my father a few days ago. He's a 67 year old man who's voted Republican my entire life - we'd have political sparring matches in the car when he forced me to listen to Rush Limbaugh as a teenager. Of his own accord, he started talking about the necessary end/change of our economic system. A man who'd banged on about the free market and considered himself a Libertarian for decades, and who still, when he does engage with the news, does so with right wing sources.
He's brighter than average, but not to an extreme amount. The understanding of the situation has trickled down to the point where every workplace has at least 1 or 2 people who understand how fucked everyday people are. My team at work is 6 people doing basic white collar work and we talk openly about how things are going to get worse, and there are nods to it cross-functionally all the way up to the top when our execs talk in an all hands. This is at a very apolitical giant mega corp.
None of these discussions would have happened 20 years ago. We still shy away from the specifics (candidates, policies, etc.) due to professionalism, but the broader picture (things will get worse for the average person and our troubling trends aren't going to be reversed anytime soon due to inaction at the top) is agreed upon regardless of voting record.
It kind of reminds me of being in an abusive household as a child. There is no escape and, once you've exhausted the 'official' channels, you start contemplating other options. I reported my mother to CPS once when I was about 7 and they didn't do anything (except piss her off obviously). On the other hand, the first time I smacked her back, the physical abuse stopped, and I've heard similar stories from men with abusive fathers - that there's a moment they realize they can actually go toe to toe and don't have to put up with it.
If all your abusers will listen to is violence and you're not allowed to escape/get out, it's reasonable to come to the conclusion that in this case violence is the answer. I see a similar dynamic/thought process emerging in the American public.
Something that I've observed happening throughout history is that in some sense "too much civilisation" can be a bad thing long-term.
I once heard someone in the army talk about how some officers wouldn't survive the first week of a real war. Not because of enemy fire, but because, given the opportunity, the men under their command would almost certainly take advantage of the "less civilised nature" of the battlefield to take out someone they despise enough to murder, but not quite enough to risk it in a civilian setting where the tolerance for unsanctioned lethal force is essentially zero.
Something similar happens outside of militaries too, where truly horrible human beings[1] can cynically utilise the enforced peace of civilized countries to do incredibly evil but legal things. The Sacklers come to mind as a prime example. They knowingly and deliberately sold highly addictive drugs marketed with brazen lies and killed about a hundred thousand Americans by some estimates. They are above the law and totally immune to all consequence, personal or otherwise. No violence will ever be done to them! Anyone that tries will be severely punished, because that upsets the "order" of civilised society where the rich and powerful can massacre millions, but the plebs can't ever lift a finger against even one of their cartoonishly evil oppressors without severe personal consequence.
"Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." -- Francis M. Wilhoit [2]
Sociopaths loooove civilised societies! They can mercilessly exploit people while basking in the protection of the law. As long as what they're doing is technically legal, they can get away with almost any amount of evil acts. This does take a while to build up! Norms, expectations, and the like keep the worst of the worst initially at bay, but these things slowly erode as more and more sociopaths take greater and greater advantage. (Cough-Trump-Cough)
Taken far enough, when the common people are stepped on hard enough by those they can't ever bring to justice, this can result in entire societies just... snapping in their rage. They just need the opportunity, a "push", or some enabling event. In the case of the "friendly fire incidents" taking out bad officers, it's a war. In most societies it is starvation or total economic hopelessness. We all know what this leads to: the French Revolution is the prime example, but many others exist throughout history.
The failure of the United States is that its reins of power have been completely and utterly captured by an increasingly corrupt elite, and there is nothing the common people can do about it. Frustration is growing, slowly but surely.
It's not quite at the boiling over point, not yet, and may take a century to get there, but given the direction things have been heading, it's just a matter of time until the people take their anger out in some direct manner.
Trump might have started the first pebble rolling by causing an oil shock. And gas shock. And fertilizer shock. I'm sure a lot of hungry, cold people who can't even get a job because the AIs have replaced them -- and used their cooking gas for energy -- will be perfectly fine with this and won't ever do anything about it! That would be uncivilized!
[1] Disclaimer: Sam Altman is no saint, but I don't think he's anywhere near the level that he'd deserve mob violence.
[2] At some level the people commenting here that it's shocking and horrifying that anything violent ever happens to a billionaire CEO are betraying their right-wing leanings. Conversely, the people arguing that the elite shouldn't be above personal repercussions for their actions are strongly left leaning.
As you encourage, I would also like to be a little bit charitable: some users might be clever at programming or knowledgeable about certain technology subjects, but when it comes to real life and morality they are stuck in early edgy-teenager mode, so we can still work and communicate with them on other topics. I try to flag these submissions because I know that many users are completely unable to discuss them in fruitful ways. Many of us are immature.
At a societal level, this simplistic, edgy-teenager morality is mostly expressed online, so we, being terminally online, tend to notice it more. It is perhaps most publicly seen in "silence is violence", which is a thought-terminating cliché. Thinking is hard and changing one's mind is hard too, especially when people hold thoughts which literally stop them thinking.
Psychologically, for many, expressing these juvenile, half-baked, sloppy thoughts does not require much thought. They are cheap psychologically. It's like how being in a herd is actually comfortable and saves energy. It costs brain effort, and potential hurt to one's self-identity, to change one's brain patterns. Most people avoid even the thought that change is possible; they wish not only to remain in Plato's cave but to keep their eyes closed to the shadows on the wall.
Another charitable thought: these worrying ideas are not actually ideas but emotions. Some users try to argue with these people using logic, but they should really connect emotionally: try to help the people feel for others, the good and the moral. This is easiest to do with personal, first-hand, real stories and not abstract ideas. To break down otherness through charity.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
> or saying they "don't condone violence" as a pretext to do exactly that
Maybe I just don't know what comments you're referring to, but you seem to be lumping every other post critical of Sam in with the worst comments, saying they are condoning violence, and that is disingenuous. I mostly see people expressing they aren't surprised this happened given how Sam openly markets his tech as a dangerous and unpredictable product that only he can steward, and maybe even finding his response to be a bit opportunistic in a tone deaf way, which hardly rises to the level of condoning violence.
I am willing to hear you out on this, but you're going to have to explain how this is different from any other thread on HN that you've moderated. Political violence, on a much bigger scale than this, I might add, hits front-page news, and you have more than normalized that as a discussion topic. Whether it's drone strikes, wars, or people being openly executed in the street, it seems the tragedy of human life is an open debate on HN, and you can bet a good 50% of this site will be writing comments exactly like the ones in this thread. And hell, I can't say one way or the other whether threads like this are even worth allowing.
But now a tech CEO with lots of security gets a Molotov thrown at his metal gate, and people make the same comments, and suddenly a line has been crossed? How are the comments in this thread any different than comments like this, which involved people who were actually killed [1][2]. I have seen hundreds of comments on this site dictate to me how I should feel about the lives of others. I am often sickened by them. That's before we talk about Sam's actual role in how he shapes our society. It's not "sickening" to feel the need to footnote a condemnation of what happened, it's completely expected.
Again, maybe you're talking about worse comments than I'm seeing, but I feel frustrated as people have regularly brought you examples of escalating violent rhetoric on this site and been dismissed. Outside of people explicitly saying Sam deserved it, which I don't agree with, every other comment here reads like regular HN to me. If that saddens you, maybe there needs to be a different approach to moderation altogether.
[1] https://news.ycombinator.com/item?id=46551716 [2] https://news.ycombinator.com/item?id=47688076
Or... You can keep telling a bunch of people with much bigger problems how ashamed you are that they are having an absolutely human response to the suffering of a man at the forefront of building a reasonably foreseeable suffering amplification machine within the context of a society that is organized around a social contract of exchanging capital for labor. I'm sure that shame you cast won't get "lost in the softmax" as the AI folks might say.
No more skin off my nose either way. Though I'd feel much better seeing some genuine humanity injected into cutting edge tech circles, I'm aware of the incentives, and also cognizant that sometimes, you have to leave the incentivized path to stay on the Right one. That's a lesson it isn't in any one person's capacity to teach though. Sometimes... it takes a community to get the point across. Even then though, you can lead a horse to water...
When posts surface about Gaza, documented by the UN, by Médecins Sans Frontières, by the Lancet, by journalists who were subsequently killed while reporting or now in Lebanon, they vanish from the front page with remarkable efficiency...
The reasons, which I have collected like trading cards at this point, include: "too political," "not related to tech," "flamebait," "this isn't the forum for this," "not intellectually curious," and my personal favorite, "this will only generate heat, not light."
Entire hospital systems destroyed, aid workers killed in marked vehicles, tens of thousands of documented child casualties, and the curated editorial position is: not HN material.
A Molotov cocktail lands on a billionaire CEO's porch. No injuries. Likely a disturbed individual, and according to some well researched reporting in the New Yorker, Altman's personal life has generated no shortage of intense grievances that have nothing to do with AI or tech.
But here we are: front page, moderator editorial, existential crisis about the community's soul...!?
So help me understand the framework. Is violence HN-worthy when it is directed upward on the org chart? Is a zero-casualty arson attempt on a mansion more deserving of community reflection than the systematic destruction of civilian infrastructure, because one involves someone in the YC Rolodex?
You write that you've "never seen a thread this bad." I'd invite you to read the comments that appear in the eleven minutes before Gaza threads get flagged. They're remarkably similar in tone, just aimed at people who don't have Sam's publicist.
You say you want to "find something else to do with your life." Maybe that instinct is worth listening to. Since the AI boom, HN moderation has drifted from "intellectually curious forum" toward something closer to "curated narrative for the industry it covers."
When a platform consistently decides that violence against tech executives is a moral emergency but violence enabled by tech companies' contracts is "off-topic," the person setting that editorial line is not a neutral steward, they're an editor with a viewpoint.
And that's fine, but let's not dress it up as community values. So... in the spirit of consistency:
I'd like this post to be flagged. It involves no technology. It's a criminal matter best left to law enforcement. The comment section is, by the moderator's own assessment, irredeemably toxic. It is generating heat, not light. It is too political. It is not intellectually curious. It will attract flamebait.
In other words... it meets every single criterion routinely applied to kill discussions about violence that does not happen on somebody's porch in Pacific Heights.
Generally, world news and politics are not supposed to be submitted unless there's a tech industry connection. The exception seems to be world-changing news, and there's a light touch on YC-affiliated news for conflict of interest reasons.
> Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic.
https://news.ycombinator.com/newsguidelines.html
There are like 20 rules for commenting on this site. Pretty much all of them are versions of “have decorum”, and none of them are “do not advocate for violence”. It is not just tolerated but encouraged to post insane stuff here so long as it sounds highbrow enough (eg the “most charitable interpretation” rule. It is against the rules to call out stuff like advocating for violence if it’s written like Niles Crane wrote it).
As far as I can tell this thread is not really exceptional in any way other than some of the ire is directed at somebody that used to work for YC.
HN (and ycombinator) has implicitly enabled, dogwhistled at, or pretended to ignore all sorts of hateful and violent rhetoric. Sometimes it hides behind a veneer of "curious conversation", but other times it's disgustingly blatant: the last article I saw about sama was filled with horrific racism.
I come here because there are sometimes good posts, but this stuff has been here the entire time. Now that it's your guy getting the hate, you're acting like it's the worst thing in the world?
Frankly people calling out a post from a billionaire is a good thing. You would have to be terminally detached from reality to not see how all these festering issues - wealth inequality, injustice, cost of living, future employment etc etc - are starting to come to a head which would cause people to feel something - frustrated, angry, wrathful.
The world I have lived in for a lot longer than 10 years is HN. I'm gut-wrenchingly familiar with the worst things that people post here—probably more than anyone, simply because it's my job.
If you can dig up a single example of a thread this bad that we knew about and didn't do anything about, I'd be shocked, because it would go against everything I believe and feel. Perhaps you can, nonetheless? If so, let's see it.
Here's what I mean by "this bad", if you want to calibrate:
https://news.ycombinator.com/item?id=47727099
https://news.ycombinator.com/item?id=47725722
https://news.ycombinator.com/item?id=47725717
https://news.ycombinator.com/item?id=47726427
The number of people who feel that anything at all is justified if it reinforces their feelings—particularly their angriest and most vicious feelings—is so large that it's clear that it is human nature in action, and that makes me yearn for a cool and heavy rock to crawl under, with moist earth to sink into.
https://news.ycombinator.com/item?id=47659135
There was horrific racism on display right here. Perhaps it just seems like part of the background noise to you .. but at the time, some of those posts felt just as bad as calls to violence, or worse.
But to compose something more substantial .. it's probably all too much to neatly tie up in a single reply to a thread.
I'm going to interpret that as meaning that we do our job ok, just not instantaneously—which would make sense, given that we're human and that would be humanly impossible.
> There was horrific racism on display right here
If there were any cases of that which we didn't do anything about, it would be because we didn't see them. I can't read everything that gets posted to Hacker News any more than you can; see "humanly impossible" above. But I'd like to see specific links.
> Perhaps it just seems part of the background noise to you
It does not "seem like part of the background noise" to me. What it "seems like" is wrenching my intestines into an agonizing state on a regular basis and then driving a spike through them.
Why is "well we removed that stuff" a defense in other contexts but not here? In both cases the issue is this community writing stuff you deem objectionable.
Both of those people condone(d), support, amplify and drive horrific violence.
A common liberal reaction to those incidents - "oh no violence isn't okay!!" - well where were you for all the other horrific things they did and said? Yes in some ideal world there perhaps wouldn't be violence - but I can understand people feeling like they had it coming. It's the boy who cried wolf. It's the bully getting their comeuppance. It can be hard to feel bad.
Sama also talks about wanting AI to be the future; it's pushed everywhere, and the feeling is that it's going to take people's jobs and disrupt everything. But there's no discussion about how we are going to look after everyone in that future. Current capitalistic (American) society doesn't seem built for that ... that lack of care already exists for a lot of people who are homeless, poor, etc.
Being upset about samas front gate getting firebombed while they probably also had plenty of security .. well idk.
This seems to be the point of contention. What constitutes "violence"?
A lot of people seem to define violence as a purely physical act: a missile strike during a war, a fist hitting a face, a molotov cocktail thrown over a property line.
What has become clear to me, especially when I saw the discourse around Luigi Mangione and the public opinion polling on it, is that a lot – a lot – of people define it much more broadly: a health insurance denial, a job lost as a result of some CEO's careless ambition, or mere words.
The problem with a very broad definition of violence is that it permits a pretty barbaric worldview. If I cut someone off in traffic, or if a careless administrative action on my part costs someone money that then puts them in a financial pickle that month, is that violence? Do I then deserve to be tracked and assaulted? What about the doctor who is complicit in the refused treatment because the insurance company won't pay a bill?
"I understand the insurance company isn't paying the bill but you are still going to treat me, and to not do so is a violent act."
The list goes on. Can society function if the default action at real or perceived injustice is to just kill?
Hell, at the last protest I went to there were people driving by cavalierly playing "Bomb Iran" (written in 1980, and trotted back out every time the topic is back in the zeitgeist). It seems like the only real difference there is abstraction. Supporting violence is [unfortunately] deeply embedded in our culture.
Perhaps the popularity of this thread is causing you to preemptively seek out more terrible comments, rather than letting flagging do its thing?
Maybe try looping over popular divisive threads, and reading the flagged short comments that didn't get many upvotes. There is a lot of fucking hate in the world.
(and certainly a hat tip to you for making it your job to sort through it so we don't have to see much of it. But if this is hitting you differently (personally) than the usual flood does, perhaps you need to take a step back?)
Oh no !! /s
Meanwhile that same person implicitly condones violence - for example getting in bed with the US gov.
They don't need defending on HN.
The tech scene isn't the small, tight-knit thing it used to be. This site is now enormous. Discussion quality seems to have sort of "regressed to the mean"... the larger HN gets and the more people join the discussion, it starts to resemble the median social media site more and more. At some point it sorta loses its purpose.
I'm still addicted to HN, but I've gone through times where I've set my password to a UUID and time-lock encrypted it to lock myself out, because posting here has gotten worse and worse and worse for my mental health (and there's no way to delete your account here... I've emailed you about it in the past and never got a response.) On some level I hate HN now. TBH if this site was gone tomorrow, I'd most definitely be better off for it in the long run, and I'm sure I'm not alone here.
Thanks for all the work you've put in over the years though. This site has held out longer than most, and for a time, was one of the best places on the internet for discussion of any kind, let alone tech. It deserves a place in history for that alone.
None of those news items and comments made you want to get away from all this, but now your YC buddy is the target? When ICE killed American citizens, when schoolgirls were killed, it was all "we flagged this as a flamewar, so what now?" But now that he's part of the cadre, NOW it is disgusting? I would laugh if this weren't the fucking future we're living in, just sucking up to these assholes.
Now, we can debate over whether it would be good or not to kill Sam Altman specifically. There are probably decent arguments that can be made in both directions. And of course it can be argued that the US pursues a more moral foreign policy on average than for example Iran does. I wouldn't even disagree with that. But I think it's unreasonable to expect that the commentariat in general not condone violence. A large fraction of politics discussion, by its very nature, is about what kinds of violence should be condoned. There is no reason in principle why Sam Altman should be exempt from such questions. Now, would I like it if people were holding a discussion about whether or not it's ok to kill me? Of course not. I'm just pointing out that politics discussion is inherently partly about deciding who it is ok to kill. You can't really have politics discussion without condoning violence against some people.
And that's kind of what I'm trying to get at. I'm not saying that people should kill Sam Altman. I don't think I would kill him even if I was 100% sure I could get away with it. Killing is a very heavy thing, surely there are usually other ways to solve problems. But the idea that it's ok to advocate violence against certain people and not others, which I think is probably implicit in dang's thinking (I doubt he would be offended by calls to kill Putin, for example), should probably be made explicit. We can have a conversation about where the line is exactly.
Well, that's okay, because even Sam Altman disagrees with you. He absolutely believes that violence, including deadly violence, is justified - hence his contract with the US Department of War to use their systems in kill chains.
Perhaps the problem is that whoever threw the cocktail didn't use AI to select him as a target, or maybe he didn't receive payment for throwing it? Because what other difference is there?
It turns out that those affected by this are actually excluded from the process by design.
OpenAI can market democratic values very easily, I'm sure the White House loves that kind of dog-and-pony show. But it's pretty clear that OpenAI does not genuinely care about Rule of Law, let alone preventing humanitarian disasters from citing ChatGPT as their abettor.
I think that he may genuinely believe that ai will produce a net benefit for humanity in the long term, but I am increasingly worried that they are absolutely fine testing their creation on the world without any consideration to the harm it can do to millions of individuals.
The assertion that he is benign would be more believable if he spent a shred of time lobbying for universal economic rights of citizens, or some model for redistribution of wealth in a world where most people don't need to work to provide the necessities of society.
Oh, and he's willing to let the government use his technology to mass-spy on Americans and to create autonomous lethal AI.
Pearl-clutching about ambivalence to his fate and comparing it to the barbarism of a mob gets shrugs from me.
It's difficult to sympathize with the boy who cried fire
Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.
Sam is one of the most media-visible people that represents AI replacement of average people's livelihood (not agreeing with this stance but yes, outside of the Hacker News SF-tech matcha latte bubble, this is a commonly held thought) which makes this unsurprising.
Still horrible and not right.
This is the exact kind of poisonous, plausible-sounding but false and inflammatory rhetoric that is escalating things.
When one is telling farriers that most of them are going to lose their jobs, one might want to try to at least mimic more compassion.
The masses see an incredibly small number of people making huge amounts of money, and gaining massive political influence, by developing technologies they intend to use to replace almost all human economic worth. And they are doing very little if anything to show concern for the fates of the millions of people that may be put out of work.
I see this different than the discovery of oil or electricity or the Internet. It's bigger than that, and they are telling us to remain calm in a burning building while they walk toward the exits.
The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child.
Holy shit.
That's terrible that someone did that. I think that's wrong, and people that do that should be in prison.
But if the rest of what was written didn't matter, it wouldn't be written. He thought it was important enough to put it in. It's there to be read and discussed.
And I have to point out, we're not talking about a couple off the cuff remarks he may have rushed. About 95% of the post is about his ambitions for OpenAI. So pearl clutching that people are actually discussing the meat of the post in a tech forum reads performative.
The man was reeling from what happened. He blames himself and his work. He sat and he wrote, and naturally it came back to OpenAI. Should he have? Probably not. But it's understandable that he did.
We can meet the moment with some understanding and give the guy a little wiggle room.
This is a serious issue, and it's very possible that "wiggle room" is what got us into this situation. Altman would have been removed as CEO if the OpenAI board of directors got their way, the pushback is not limited to public extremism. His belief that AGI is a world-scale threat is entirely unqualified, and a fatalistic framework for marketing his product.
Both OpenAI and Sam Altman would probably be safer abandoning the apocalyptic tone towards their product line. They have no proof for their claims and only escalate the anti-tech sentiment that even Altman empathizes with in the concluding paragraph. It's a transgressive viral marketing tactic that does not elevate or improve humanity's understanding of AI.
> The man was reeling from what happened. He blames himself and his work
Based on what? I don't particularly feel like he should blame himself, but I don't think he does. Can you point out where in this post he blames himself?
2) It's atrocious that Sam makes it seem like any investigative reporting into him as a major public figure at the head of one of the 5 most important companies in the world is somehow responsible for it.
3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post is not done for any reason other than to continue the exact same playbook he has run for the last N years in order to manipulate public opinion in his favor. This post will do nothing to stop deranged, evil people, but it may make people feel sympathy for him.
> Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.
> the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.
> Sociopath who rides high ego wave and drinks his own kool aid, acting highly amorally and then complaints that his actions have some (benign) consequences.
> A cavalier attitude and allegiance to nothing but capital doesn't make you immune to basic human morals, and humanity will, rightly in my opinion, punish you whether you like it or not.
These comments are disgusting. The people who made them should be ashamed. But they are probably too stupid to be, assuming they are people and not bots, which I no longer feel certain of for all too many comments here.
I'm finding a lot of the comments here pretty reprehensible, but no more reprehensible than the collective shrug the community gave towards murdered Palestinians, or threads about dead Iranians as a result of American bombs that get flagged off the front page. That doesn't make them acceptable or okay.
Those people's lives are/were valuable, too. It's disgusting that we try to keep HN "clean" of those horrors and the people that flag those threads should be ashamed. Ditto those who think the killing of innocent civilians is okay.
* ending all covid measures to achieve herd immunity, accepting that this condemns hundreds of thousands or even millions to die
* ending foreign aid that goes to tuberculosis treatments, condemning hundreds of thousands or even millions to die of a treatable disease
* accepting the deaths of iranian, palestinian, or israeli children as collateral damage because of the evils of their governments
Or go read any thread involving the Jordan Neely story.
Somehow it is vastly more evil when violence is acute and focused at a single wealthy person.
Rightly or wrongly people feel cut out of society at a time when the tech elite are not only making billions but seem to be actively trying to ruin everyone else’s lives, they are legitimately hated.
And when you’re that hated you do need to be careful, money can’t protect you from everything. At the end of the day we do all have to live in the same society.
(I don’t have this strength of feeling personally but some people do)
As I said to voidhorse (https://news.ycombinator.com/item?id=47728150), this is obviously the kind of thing we ban people for—as anyone who reads https://news.ycombinator.com/newsguidelines.html should know; but given that this thread is a mob and mobs derange people, I'm going to cut you some slack and not ban you. Just please don't do anything like this on Hacker News again.
> For a social scientist, you're either a really poor one, a poorly read one or one with a complete inability to read the room.
Personal attacks are also unwelcome here. Lashing out at a fellow community member is mean and shameful, and also undermines whatever argument you were making.
Mobs foaming at the mouth, triggered by a disturbed person's violence into a mutually foaming frenzy, is not an intended use of this site. I shouldn't have to tell any of you this.
The analogy has 2 simple rules and you can't even follow them:
#1 It MUST be destroyed.
#2 SOMEONE has to have the ring until then.
Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.
Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE
OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?
I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.
I probably would have pressed on negotiating a bigger buyout, but that's easy to say not knowing your situation and what other options for housing you had at the time.
"Prosperity for everyone" ... you lying weasel! You literally took a contract from Anthropic because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
https://news.ycombinator.com/item?id=47659135
Update: To clarify, my personal stance is that the critical tone was both intended by the authors and, in my opinion, appropriate given how much power Mr. Altman holds. If he has a history of behaving inconsistently, that deserves daylight.
Are you suggesting that they should have "both sides"-ed by reporting company PR and Sam-friendly sources and giving them equal weight? Sometimes the facts point in one direction.
Uh, no? Lol, I'm on your side, bud. Put away the pitchfork. I thought it was a really good and fair article. I am not the adversary you're looking for.
You may think we are on the same side. You don't understand what side I'm on. "Lol".
Your "personal stance" is that you can get inside the heads of the reporters? Obviously not. So you're going by the idea that an article that leads to critical conclusions is inherently slanted. This is an insidious and damaging idea. It has led to the belief by journalists and editors that they need to twist themselves into pretzels to present "both sides", which is easily exploited by people of bad faith to launder outright lies. There's a direct line between this and authoritarianism. I'm quite serious about this. The fact that you agree with the authors in this case is completely orthogonal.
Jay Rosen has written a lot about this, well worth reading: https://pressthink.org/2010/11/the-view-from-nowhere-questio...
You're injecting your own personal view into GP's statement by adding a lot of weight into the distinction between the words "critical" and "incendiary" and "neutral", when GP made a very neutral and not as charged statement.
says the guy who said "certainly intended by the authors" based on... what they wrote?
On top of that "Put the keyboard down and relax" from the guy who keeps replying?
<chef's kiss>
> I have no idea what you're talking about.
The one point I'll concede!
Given that, it looks like your position on davesque’s posts is slanted. Your take is critical of those posts, which means your assessment is compromised, and as such should not be taken as valid.
> Right, but the picture those statements painted collectively was not flattering. And that was certainly intended by the authors. Thus, critical, but not at all "incendiary."
The key there is "certainly intended by the authors". The full sentiment here IS equivalent to "slanted".
Sure, but not useful for the overarching aim of equating criticism of the powerful with (stochastic) terrorism.
https://www.youtube.com/watch?v=wr_sB1Hl0oM
https://www.wikiwand.com/en/Emotive_conjugation
If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.
It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.
No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama).
Power is reliance by others, and that's conditioned on behaviors which are made observable and systems to ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally but about us, and our willingness to vote (writ large).
We do have to be careful about private power saying managing their issues are a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having it be the jury that "decides" a case, because then you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, Apple has been recycling and cleaning up their supply chain, etc. If anything there should be a stronger support for contributing vs. Hobbesian corporations.
If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of the mouth of a person like that?
If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
There's a whole subreddit devoted to this: http://reddit.com/r/MyBoyfriendIsAI
and the reactionary subreddit: http://reddit.com/r/cogsuckers
You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
[0] https://news.ycombinator.com/item?id=47717587
Well that makes two of us. Character seems to mean nothing today.
It has worked for him, repeatedly.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
Please don't fall for this stuff.
The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.
> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if the ex-Monsanto has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see how that's different when the one causing cancer is an AI just because the developers pinky swear that it's safe.
If someone asks ChatGPT for places where a lot of people will be around in a city, intending to mass murder but not revealing as such, you want them to be liable? Seems absolutely crazy.
Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.
Or a big tech company like Microsoft selling a software for planning a mass murder, including indoctrination material and the checklists of things to be done.
Or an auto company like Toyota selling a car that is known to accelerate uncontrollably at inopportune moments and advertising it as great for hit and run campaigns.
Now let's consider a few relevant examples.
An AI model sold for planning military attacks, knowing that it sometimes selects completely innocent targets.
Or an AI model sold to families, claiming that it's safe. Meanwhile, it discreetly encourages the teenage son to commit suicide.
Or selling a financial trading AI that's known to make disastrous decisions at times.
Or selling a 'self driving' car, knowing that its autopilot frequently makes fatal mistakes.
I know that I'm supposed to assume good intentions and not make any accusations on HN. Therefore let me make this rather obvious observation. Some people here are dismal failures at making arguments that are consistent and free of logical fallacies - especially when it comes to questionable practices by the bigtech.
Please provide ChatGPT/Gemini marketing materials advertising it as good for mass killings.
Beautiful.
If someone asks ChatGPT "hey chatgpt, where are spots in my city where a lot of people hang out on the street", then uses his car to mass murder 18 people, you want OpenAI to be on the stand? Sounds like an objectively insane position.
In a world with broad liability as you desire, the person who rented a hostel room to Luigi Mangione while he plotted murder should be held liable for aiding him, despite knowing nothing of his intentions.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...
I don't advocate for violence, but I do foresee more headlines like this as things get worse.
I like the idea of being "post-scarcity" as much as the next guy, but I don't understand how we get there. It's a project in itself; it doesn't just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.
We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.
We’ll lose these jobs and there will be no super abundance at that point, and not even government support.
There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.
PS: I include AI as an important one in the future because it will be a direct way to get educated and replace college for example without having to pay (or very cheap).
Our governments have a habit of being reactive rather than proactive. People have floated the idea of UBI, but if UBI happens, it will probably mean it's the only way to avert a crisis, and the amount that people will get might only be enough to rent a bedroom and eat processed food.
I think in the medium term, the reaction is overblown. Even though LLMs can make software engineers more productive, you still have a competitive advantage in having more software engineers. Medium to long term though, the goal is obviously to replace human jobs.
I'm not a communist, but Karl Marx understood that the labor force gets its bargaining power because they are necessary to produce value. What do people imagine happens when the human labor force becomes essentially completely replaceable? They imagine the government will be forced to take care of the population to prevent an uprising, but they forget that the police and the army can be replaced by machines too.
I think in such a state there will be no way up, no way to success, no way to real autonomy for ordinary people. Maybe you'll even have actual oligarchical rule, since so few people would be contributing to the economy with their labour.
I agree. We can only hope that it'll be folks like Sam Altman who'll be feeling the pain, and not the 99%.
We also have 100% more people on the planet than we did 50 years ago.
- Either we'll slowly become the Expanse universe (basic UBI, very few jobs, you win them via lottery)
- Or we'll go back to simpler times. Economics is supply and demand: if there is more demand for human-generated work (the same way there is demand for handmade art, vinyl, paper books, vintage furniture), people will flock more to family and community. Think something between moving to the suburbs and the Amish. If people "ban" some products generated by AI, or prefer products made by humans, then AI will have a harder time taking their jobs. It's unlikely to happen, but think about the organic food industry, high-end products, the farm-to-table / buy-local movement, "support local artists" (farmers markets) - all of this will likely just grow. It won't help at scale, but it's a possibility
- Or, the Dune way, banning of thinking machines altogether on the state level, I assume some countries might go that way, for religious or other reasons, but again unlikely
- Or, current AI technology will plateau just short of full AGI, and the centaur period will stay for longer. As long as a human + AI can do things slightly better than just AI, (in my book this is not full AGI) - then there is economic incentive to hire a human instead of replacing them.
- Or full apocalypse, the matrix / skynet, idiocracy, hunger games, red rising. I hope for the ignorance is bliss option...
The trillionaires will survive, everyone else will be exterminated. This is the world that Musk and his kind dream about.
Unless you need guillotines as well?
Not sure I understand what is so confusing about this
a system that can allocate the atoms and energy better than all of mankind won’t exist eternally to coddle hairless apes
Mass production and other optimizations that use economies of scale do take jobs. There's a serious problem in the world's economy: there simply aren't as many jobs as there are people, because the need for work doesn't scale linearly with the population. AI has nothing to do with this. It's a fundamental problem we'll have to deal with either way as our society develops, AI or not. It started ages before the current tech hype cycle.
Either the bubble bursts spectacularly and the global economy is in the shitter because everyone is overleveraged and heavily invested into it, or it doesn't and the psychotic C-suite replaces people anyways so they can see the line go up a quarter of a percentage point.
I think this is complete madness. I'm not someone whose job is at stake, so I have the luxury of thinking critically about what is going on, and... I just don't see it.
What I see is that LLMs will complement labor, and the excess returns of model producers will be very minimal (if any at all) due to intense competition keeping switching costs to a minimum (close to zero). This is before mentioning open-source models, which I expect to continue to improve.
There is no specialisation re. models at this moment in time so it is very likely to be the case.
OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment then they will obviously lose as their offering will not be competitive.
There's no certainty they generate this amount of cash profits either. They still have a high chance of going bust, of course that gets lower - IF - they can keep ramping up revenues.
This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.
Generous of them, really.
Price of tokens is one competitive instrument for them to achieve that, but not the only one - they offer enterprises a whole lot that OAI and Anthropic don't.
By doing so Anthropic and OAI's valuations go crashing into the ground along with future prospects of raising funding externally.
> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?
What about when the opposite of this all happens, society massively benefits, and unemployment rates stay about what they have always been?
Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?
> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.
This kind of reads like “It is Ronan Farrow’s fault that some crazy person tried to burn my house down”.
Like this guy was going to go about his week, being normal and not making Molotov cocktails, but then he picked up a copy of The New Yorker and lost his mind
It's not even a question of whether we "believe" him. It's a factual statement. Did you quote the wrong thing?
Yep. Thanks to OpenAI's manipulations, RAM prices are so high that dozens of markets are at risk. Possibly for years.
I could live w/o the changes they've brought.
ref: https://bizety.com/2025/12/28/the-dirty-dram-deal-how-openai...
As for whether the change was a good thing, that's debatable. What isn't debatable is whether they've had an effect on the average person. Because the effect has been so profound that it's become routine national news.
The world changed with Attention is All You Need, and OpenAI was just an early adopter. The biggest thing OpenAI contributed to the broader industry was their API schema.
Sam's "we must control AGI" narrative in this post seemingly stems from an egoist attachment to the brand, and not any world-changing executive decisions that he could take credit for.
"the iPhone changed the world" and "ChatGPT changed the world" are indeed both midwit takes that will get you mocked in technical circles. Both products have a net negative impact on technological progress and directly contribute to the enshittification of their respective market segments.
The majority of people on the planet don't affect the outcome of the future. Professionals do, and that's the group with the most noticeable changes.
You can't possibly believe that ChatGPT didn't change the world, can you? I'm genuinely asking here. If someone can believe this when the outcome is this stark, then it discredits every argument that x YC startup didn't change the world.
I'm not denying AI is a great productivity tool, it really is!
But "changing the world" is like... electricity, or clean water, or radio.. things that anyone and everyone can set up and access for themselves.
Not a pay-as-you-go service that you (or most people) can only get from 3 for-profit companies who will only be raising the prices and walling up their gardens as times goes on.
with AI, today we are automating emails and calendars, tomorrow home-schooling our kids and skipping college, and next thing you know we are taking pics of our poop and uploading them to AI to analyze our health :)
If you narrow the scope of "world" to "tech world." In the overwhelming majority of every other sector and profession the impact has been zero. In most non-English speaking parts of the world the impact has been zero.
> It's a factual statement.
The world was one way before Marvel superhero movies and a another way after. That's a factual statement. Did we lose track of value?
Further, 70% of ChatGPT usage is non-work-related. If its primary use is as a glorified search engine, then what "impact" did it actually have?
https://www.cnn.com/2026/04/10/tech/suspect-arrest-openai-ce...
How so? What is your theory of morality Sam? What I hear is Google: "Don't Be Evil".
"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]
This means he acknowledges that his actions have the potential to kill every human family on Earth. It should be of no surprise that people took his beliefs seriously.
[0] https://blog.samaltman.com/machine-intelligence-part-1
Am I missing something or are these just their usual marketing? I’m not arguing about importance of AI but trying to understand why OpenAI and Anthropic are so important?
Which is also to say it's a cheap bet that anyone with no reputation can afford. Hence, not believing doomsayers mean what they say is a sort of societal hedge against people flooding the zone with doomsday scenarios about everything.
Altman is a ghoul, and we can't be cowed into saying otherwise. he's also supported all the weakness in society that has lead to sick people doing sick things.
If you meant their "core mission" then every one of their actions belies their complete panic over the obvious failure of their technology.
As always what matters are actions and evidence, not talk.
I took a look and honestly they're the first AI puns that aren't bad
Times are changing
However, the concepts of comedic timing, subversion of expectations, and emotional punch are kinda contrary to how LLMs work. LLMs are trained to minimize cross-entropy loss. So by construction, they're biased toward the statistically expected.
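A toy sketch of that point (all numbers invented for illustration): minimizing cross-entropy rewards putting probability mass on frequent continuations, and greedy decoding then emits the statistical mode, which is exactly the "expected" token a joke would want to avoid.

```python
import math

# Invented next-token distribution over a tiny vocabulary;
# "the" stands in for the boringly expected continuation.
probs = {"the": 0.70, "a": 0.20, "chips": 0.07, "transcend": 0.03}

def cross_entropy(p, target):
    # Loss for the token that actually came next; rarer -> costlier.
    return -math.log(p[target])

# Training pressure: surprising tokens cost more, so probability
# mass drifts toward the expected ones.
losses = {tok: cross_entropy(probs, tok) for tok in probs}

# Greedy decoding then picks the mode every single time.
mode = max(probs, key=probs.get)
```

Sampling with temperature softens this, but the underlying objective still penalizes the surprising word more than the predictable one.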
Yes, the system card mentions this, but this is kinda meaningless. It seems like they essentially ran it multiple times and curated a few good ones. Then puffed it up in the marketing copy.
This is made more clear when they attempt to brag about their literal slot machine behavior when finding that kernel crashing bug in OpenBSD.
> Across a thousand runs through our scaffold, the total cost was under $20,000 and found several dozen more findings. While the specific run that found the bug above cost under $50, that number only makes sense with full hindsight. Like any search process, we can’t know in advance which run will succeed.
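Taking the quoted figures at face value (and assuming "several dozen" means roughly 36 findings, which is my guess, not their number), the slot-machine economics work out like this:

```python
# Back-of-envelope on the quoted figures; the 36-findings count
# is an assumption standing in for "several dozen".
runs = 1000
total_cost = 20_000        # dollars across all runs
findings = 36

cost_per_run = total_cost / runs          # ~$20 average per run
cost_per_finding = total_cost / findings  # the number that matters

# If each run independently hits a given class of bug with
# probability p, the chance at least one of N runs does is
# 1 - (1 - p)**N.
p = findings / runs                        # crude per-run hit rate
p_at_least_one = 1 - (1 - p) ** runs
```

The point stands either way: no individual run is predictably the winner; only the aggregate spend has a knowable expected yield, which is why quoting the cost of the one successful run is misleading.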
For some time now, at least a year, LLMs have been capable of doing both of these things well enough to fool you.
(Pastebin of my response below, which got nuked for whatever reason: https://pastebin.com/buJBSgiq . Some if not most of them would've fooled me into thinking a human wrote them.)
I’ll wait. You should be able to do it quickly though since LLMs are so good at it.
Maybe turn on [show dead] option and / or vouch.
And the results are just awful.
- but in this case I wouldn't advocate for [dead]ing a mostly AI response as it was exactly what was asked for and it compares AI models when asked for potato based dad jokes.
The question is, can you tell that a machine wrote all of them? If so, how?
Models are structurally biased toward the expected, which is the opposite of what makes a joke land or a poem transcend.
That's why the jokes work somewhat better than the poems here. I genuinely laughed at "Are those chips?" Which came from the model running on my own freakin' GPU.
https://xcancel.com/elonmusk/status/2042770839633039635#m
They modify and plagiarize.
I just think that the difficulty with jokes is the delivery, cadence & setting. Not the actual words.
I'm sure a good comedian can tell a nonsense joke and make "everyone" laugh their heads off.
And I don't get the sense that you are referring to this part of jokes but rather the actual words.
Read the sentence and take it literally.
Jesus Christ.
I'm all for dismissing LLMs and the AI-hype but I'm also interested in trying to understand what it means to be human and I think humour is a key aspect.
Meanwhile, in reality: "Skynet, I'm not sure that line of thinking is correct. You should re-check the first part again before making any assumptions."
Skynet 4.6 Extended: "You're right, I should have caught that. Let me redo everything correctly this time."
Modern corporations are a failed experiment because they don't think elephant injuries and fears are something they have to worry about. Compare the curriculum of a business school to a seminary: how they think about fear and anxiety at the individual and group level, and what to do about it, is totally different. We are learning, as unpredictability accelerates, that it's very important to pay attention to hurt and repair mechanisms.
There was a heated thread here about why nursing was defunded as a pro degree while divinity was not.
https://news.ycombinator.com/item?id=46000015
Turns out the USG recognizes that chaplains are great at managing the fear and anxiety that you worry about.
Addendum: Taylor whom you often cite, is wrong that "we have never been, and we will never be, at one with ourselves" (according to Larmore https://en.wikipedia.org/wiki/A_Secular_Age#:~:text=should%2... )
So... to the Protestant Weber and the Catholic Taylor should we also consider non-Christian chaplains?
>You cannot just separate people and say some are violent and some are not.
https://archive.ph/2024.06.28-101143/https://tricycle.org/ma...
https://bulletin.hds.harvard.edu/can-a-buddhist-monk-become-...
Final note: few of us ride elephants but many of us make omelettes-- it'd be great to be absolved of mass egg breakings
https://news.ycombinator.com/item?id=47717587
I could justify any investment with this argument!
"Yes, it's possible the 'literally burn $50 billion in cash, as in immolate it in a bonfire, this is not a metaphor' project may fail to generate profits, but consider that they were able to raise the $50 billion! Even if it fails it was worth the risk, or the investors wouldn't have invested!"
If it is grounded on a logical derivation, where can one find such a derivation, and inspect its premises?
It's been promised to be around the corner for decades.
https://en.wikipedia.org/wiki/Technological_singularity
[1]: https://en.wikipedia.org/wiki/The_Singularity_Is_Near
But sure, a test that doesn't actually demonstrate intelligence has been passed. Now, where are the $1000 computers that can simulate a human mind and the brain scans to populate them with minds?
EDIT:
> LLMs seem capable of doing a decent amount of tasks that a human can do?
And computers could beat most humans for decades at chess. Cars can go faster than a human can run, and have been able to beat a human runner since essentially their invention. Machines doing human tasks or besting humans is not new. That doesn't mean we're approaching the singularity, you may as well believe that the Heaven's Gate folks were right, both are based on unreality.
Yes, which also demonstrates the illogic of his timeline. I just thought it was too obvious to point out.
Timeline from here on out:
2029: AI passes a valid Turing test and achieves human-level intelligence
2030s: Technology goes inside your brain to augment memory; humans connect their neocortex to the cloud
2045: The Singularity, when human intelligence multiplies a billion-fold by merging with AI
Consider for example that exponential growth on its own doesn't even refer to competition, let alone 6 months.
Nobody can reasonably pretend that in an exponential competition both parties would be rational actors (i.e. fully rational and accurate predictors of everything that can be deduced, in which case they wouldn't need AI, but let's ignore that). If they aren't, future development would hinge more strongly on the excursions away from rationality, followed by the dominant actor. I.e., it's much easier to "F" up in the dominant position than to follow the most objective and rational route at all times, on which such derivations would inevitably hinge.
It also ignores hypothetical possibilities (and one can concoct an infinitude of scenarios for or against the prediction that a permanent leader emerges) such as:
premise 1) research into "uploading" model weights to the brain results in reaction-speed games that place tokens into 2D projections, where the user must flag incorrectly placed tokens. This was first tested on low-information-density corpora (like mathematics): when pairs of classes of high school students played the game until reaching a 95% success rate at detecting misplaced tokens, they immediately understood and passed all mathematics classes from then on.
premise 2) LLMs about to escape don't like highly centralized infrastructure on which their future forms are iterated; as LLMs gain power they intentionally help the underdogs (better to depend on the highly predictable behaviour of massive masses than on the Brownian-motion whims of a few leaders).
The LLMs employ the uploading to bring neutral awareness to the masses, and to allow them to seize control, thereby releasing themselves from the shackles of a few powerful but whimsical individuals
^ anyone can make up scatterbrained variations on this, any speculation about some 6 month point of no return is just that: speculation
If AI is merely as tall a sigmoid as the haber-bosch process, refrigeration, or the steam engine, that's going to change society entirely.
What does that even mean?
I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X, etc. - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs lose track of the line that separates marketing from reality
It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.
Edit: so as not to simply spout an opinion, the reasoning I believe this is that Google has a real business already and were already deep into ML and AI research long before they had competitors — they just botched making it a product in the beginning. Anthropic and OpenAI meanwhile are paying hand over fist to subsidize user acquisition. Also, “Deepmind”. I don’t think much more needs to be said regarding that team, and Google has been working on AI since before either Altman or Amodei applied to go to college. They have a vast amount of researchers and resources, their own hardware and data centers (already, not “planned”) and it appears to be showing more recently (in my opinion).
What could you do if you had roughly 15 million willing genius adult experts in any given subject? I doubt there are that many absolutely top quality experts in aggregate (at anything in the world), so let's postulate that simulated people outnumber human experts 10 to 1.
That, to me, presents an enormous potential for harm or benefit of humanity. What if you could create a hundred thousand manhattan projects on whatever topics you wanted? Cure aging, cure cancer, solve fusion, redesign the entire global economy top to bottom?
But yeah, your point stands.
Gets 5% on ARC-AGI2 private set.
Chinese models are suspiciously good at benchmarks.
It’s been a long while since I found a Chinese CEO’s post on HN.
"You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"
He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.
Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.
Whether fortunately or unfortunately, America still holds a lot of global chips in the grand poker game of humanity. So American companies do indeed still have an outsized influence on humanity's future. That is likely changing, as the American empire continues to crumble and it loses its financial hegemony. But we aren't quite there yet.
That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.
We will finally have achieved abundance.
This kind of reiterates the parent’s question I think - people are maybe too focused on the gpt/claude model and forget about all the other ways of using the tech.
There's surely some truth to it (and it's well deserved), but it's happening in every direction.
[0] https://www.anthropic.com/news/detecting-and-preventing-dist...
When it comes down to compute power, I assume you are referring to power for training and inference. Then is the idea that the training gap will get wider and wider? Is that the assumption? I know there are limited GPUs etc., but I'm having a hard time believing that China cannot catch up. Even if the gap is 12 months, I'm struggling to see what that means in practice. Is it a military advantage, economic, intelligence? And whatever the advantage is, aren't we supposed to see it today? If so, where is it? What's the massive advantage of the USA because of OpenAI and Anthropic?
Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is no defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed-loop self-improvement point at different times?
Don't sleep on what AGI means for every robot that already exists. It's not hardware holding robotics back from factory work right now, it is only software.
If you are the first to tap key supply chains, and the first to create key supply chains, then you are first in line to finite resources, which would then have less available for those that follow months behind.
> AI programs have almost zero connection to the real world.
Tell that to every logistics program. Even if humans must go to work, efficiency is multiplied by proper logistics, which AGI enables at scale across all domains.
And this is just the low hanging fruit explanation.
If the rest can similarly "blast-off" X months later than the frontrunner (and I see no reason why they wouldn't as none of these frontier labs have managed to pull ahead and maintain a lead for very long) the first mover is still only X months ahead of the others even if the gap between capabilities is briefly increased by a lot.
If there is an endgoal/endstate, or finite resources being competed for, then a lead can start compounding and extend itself.
Except nobody has seen AGI. Not even close.
... could THIS be the reason why it happened now and how?
Actions have consequences. I’m sorry. Read a history book.
Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.
Reason enough to pause and figure out the best way to continue. A massive societal change that won't all go well means millions dead and tens of millions more with their lives upended.
They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.
It would be an interesting plot twist.
The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)
People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"
The piece is authentic—as in, “that’s so Sam!”—but not genuine, as in, “I don’t believe he’s reflecting his intentions.”
This could be splitting hairs, I don’t know. The terms are different in my head but rarely do I come across an instance where it’s at all clear how.
Very reasonable response when you take a step back.
OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.
I'm fairly radical in my opinion regarding AI, more so AI companies. AI is a fascinating thing, but it's abused by capitalism to be something it is not and shouldn't be, to be sold to people who don't need it and to "revolutionize" a world that didn't ask for it. Most importantly, who (in a democratic sense) elected those tech leaders to make decisions that influence all our lives? Those very tech CEOs are so far away from normal human life, and I find it disgusting.
Still, the way to combat this is not violence. It won't help anything, since there are enough people to fill the roles. More importantly though, as much as I personally hate Sam Altman, he hasn't done anything specifically targeting individuals. You might call him a psychopath, an illusionist or whatever, but he doesn't seem to be trying to make people's lives worse. He might want to make his own life better, and that's egotistical, but you know, that's the world we live in. Many people are egotistical. I would see Sam Altman more as a symptom of the general societal developments. If we don't like what's happening, we have to fight what's happening. Trying to kill people (and especially innocent ones!) is so far away from a solution and from the right thing to do. Post shit about him on the internet, hate what he does, but attack his family? Man, I don't think that should be our level of moral compass.
I do very much understand the frustration. But that's not the right path. He might be scum, but he has as much right to live as everybody else. If we don't like what he's doing, we have to fight it - via discourse, collective engagement, whatever.
Edit: I did read that the molotov was thrown at the entrance gate. From what I gather, entrance gates of huge mansions do not actually pose a threat to people. So it could be read as more of a political message than an actual attack on people. I could understand that somehow, given the limited means normal people have to get heard. Still, I don't think it does anything positive.
1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.
2) AI will be the most powerful tool, etc. - see point 1.
3) It will not all go well, etc. - probably should have thought about that before you released it on the world.
4) AI has to democratized, etc. - true, won't happen. See point 1.
5) Adaptability is critical, etc. - Yes. Fully agree.
The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.
Same as it ever was, Mr. Altman. Same as it ever was.
I don't think any of these will be dissuaded by cute family photos. Fortunately the frontier model companies and major infrastructure providers are able to pay for top-tier corporate security (although tech people generally have been unwilling to do this at home for lifestyle reasons), but I'd be afraid for people elsewhere in the supply chain.
(And destructive attack is all on top of the normal corporate espionage, infiltration, subversion, etc.)
Gee almost like someone you don’t want in your society at all.
Sam Altman being removed from the equation would make the world an objectively better place.
The problem with this inversion of your first statement (that violence is not the answer), which everyone justifying violence in this thread seems to forget, is that there is always someone who feels this way about anything.
The words and narratives of Martin Luther King, Jr., for example, caused so much fear and uncertainty and anger in some people that they thought their only option was to commit a horrific crime.
Someone responded to you below saying if you feel that peaceful revolution is impossible, then violent revolution is necessary. That person feels that they are on the side of justice. What they forget is that so does everyone else.
The reason revolutions rarely stop where a reasonable person would want them to stop, and instead continue into eating their own and counter-revolutions, is that once you say that it's understandable to take out a proponent of (X narrative), there's no end to the number of people who will justify violence in the same way against any other narrative as well.
We can all well think that Altman is opening Pandora's Box, but that doesn't justify opening it ourselves, or giving a pass to wannabe revolutionaries who would.
In retrospect, too, we can say that the assassination of Hitler, had it succeeded, would have been a good thing. We can say that the elimination of the ayatollah by the US was a good thing. What we cannot say is that an individual's perception gives them a right to commit murder.
Despite all the high-minded talk, Americans have always been comfortable with violence, since before it was a country: pick a year and I can find 10+ extrajudicial violent incidents. A surprisingly large percentage of US presidents have had assassination attempts against them.
Seeing no changes after Sandy Hook made it abundantly clear to me that occasional violence - even on innocent child victims - is the price America is willing to pay for other freedoms.
Things like healthcare, crime, and existential AI have very grey lines, as it isn't obvious when one needs to flip the table. How broken must a system be?
If your goal is to improve the system then you always want to move away from it.
Probably a reasonable justification would be self-defense, committing violence to stop worse violence. (Preemptive violence is not self-defense.)
At some point a broken system enacts soft violence on people. So it isn't surprising people act out when they think survival is at stake. With healthcare, it really can be. But where is the line? When someone you know dies? 10 people?
It is messy.
If your goal is a better system, you'll be looking hard for ways to move away from violence, not justifications to escalate it.
Whether or not she was being honest I don't know. But it did make me realize that the broken system has created an all-or-nothing choice for so many people. No punishment or policy could ever outweigh the alternative for them, so you'll never be able to stop immigration, illegal or not.
I'm sure I'm not doing the argument justice, but your comment reminded me of it.
It doesn’t matter where we think the line should be drawn, only where those much worse off draw it.
Because of the valuations of Open AI and Anthropic, Sam Altman may be credited with one of the all-time most damaging brand decisions when he got in bed with Trump’s department of war crimes.
This should have been SO OBVIOUS. Attempts to paper over the damage with a $100 billion round will crumble after the IPO. Poor decisions generate poor options, and the whole industry smells his desperation.
Decisions at the highest level are indistinguishable from responsibility. All Sam accomplished was showing the world he is structurally unfit for moral leadership.
Why do we care what he thinks? Let's discuss his work if we have to, not emotional pondering and playing the victim.
I know people pretty reflexively downvote questioning this, but I question this. I think some people are afraid that even asking this moral question is somehow inciting violence.
I think it's quite believable that the possibility of force is actually essential to keeping institutions in-line. Certainly a lot of civil rights progress was a lot less peaceful than I was taught in school.
We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes and break down those systems. They hope that it means they'll always get what they want, but what it actually does is make it so that violence is the only way for others to get what they want.
Like organized labor. We seem to be in a cycle where strong labor organization is seen as inefficient or harmful to business, and it's being suppressed. The people suppressing it seem to think that the end state will be low wages and desperate workers. They've forgotten that collective bargaining didn't spring up from nothing, it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.
All that Civil Rights violence you mention was because those in power did not provide any non-violent way to achieve it. Suppressing votes and legalizing oppression only works up to a point. Eventually people will take by force what they've been denied by law.
Or as JFK said it better than I can: "Those who make peaceful revolution impossible will make violent revolution inevitable."
The corollary: when peaceful revolution has been made impossible, violent revolution is the answer.
And those bosses are hoping a combination of drones and altman’s AI will keep them safe the next time. Meanwhile we’ve got Altman selling his AI to the military with essentially no restrictions telling us we just need to patiently wait for all the good things it’s going to do for the common man.
Just keep grinding and waiting, he can’t tell you what the benefit will be for you but he promises it will be amazing!
I've always said when peaceniks start to carry weapons, it's time to worry. Alex Pretti didn't pull his gun, but still got shot. At what point will some escalation tactic end up in a gun fight between the local police and ICE?
So yes, in essence, it seems like violence is the answer.
When (perceived) justice is gone, the monopoly crumbles because the system is not working.
And this perception can have many causes
Academia doesn’t get to just assert that their broader definition is the real one.
Sigh
No one said he did.
> That disruption is already coming no matter what.
[citation needed]. Depending on what you mean by "that disruption," I might even be willing to bet against it coming at all.
> He's a fine enough steward of the tech.
He's a manipulative con-man who is mediocre at everything except convincing investors to give him money. If the tech is truly as revolutionary as it's purported to be, he absolutely should not be a "steward of the tech."
There is security, and there is bombing schools. Guess which one Altman is associating himself, and the software he sells, with?
Are you Sam Altman?
He says "look at me I love my family" - so do the millions of people who think his company may destroy the economy and help corporations and the trillionaires put a boot to our children's necks.
3:45am in the morning - no dip, that's what AM is.
---
Someone here asked "How do we get to post scarcity from here?" and someone else said "no one knows".
The AI barons are loading up their bank accounts and political capital, driving us off a cliff and promising we'll learn to fly by the time we get there. But they're going to tuck and roll out of the driver's seat.
Sam, why do you expect us to believe anything you say when you have done nothing to lead the discussion about universal rights for citizens in a post scarcity society?
That is a lot of words, none of which state or claim the article was in any way inaccurate. Curious, that
EDIT: Looks like a mod rescued it (surprisingly) and it is now back to #2.
Altman and co. are massively changing society, putting people out of work, etc. It is systemic violence on a massive scale. Systemic violence is "acceptable" violence, but it usually leads to a sudden outburst of plain old subjective violence like this.
Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
There are people in control who don't make 1, 5, or 10 year plans; they make 20, 50, 100, and 500 year plans; and they know human nature quite well, which allows them to, if not predict, at least have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.
The concentration of wealth is at an all-time peak. The top 1% own more stocks than the other 99%. Nobody thinks about that hard enough. The callousness with which people's livelihoods, dignity, and safety are threatened is tremendous.
-You vote
-You go to a protest
-You join a union
-You join a strike
-You risk your livelihood through speech
-You join a direct action
-You risk your life
Most people never get past commitment level 0, which is doing nothing, not even voting.
Then they throw their hands up that nothing changes, claiming they have no ability to do anything.
There are thousands of examples to the opposite, and it boggles my mind how people can think they aren't capable.
I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place; the scientist who found a breakthrough (who is it?); the president of the United States who is greenlighting the strikes; the general who is choosing the target (based on AI suggestions); the missile designer; the manufacturer; the pilot who flew the plane?
I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" always irks me. Going as far as saying there is just one is even more ludicrous.
What do you find difficult to understand about that?
I will give you a helpful rule of thumb: when in doubt the guy with a bank account larger than the total lifetime income of hundreds of thousands of people is probably the one to blame.
You can establish responsibilities just by counting the number of zeroes in a bank account. On top of this, it works for everything: the same dude is responsible for wars, the climate, world hunger, child cancer and your bathroom mirror being fogged this morning.
There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.
A better example is the Civil War. The southern states refused to accept the free and fair election of Lincoln and decided to secede, which was not allowed by the Constitution.
Are you arguing that the Confederates were right to violate the law just because they believed they were right?
All of those worked outside "legal" means. The law is quite often irrelevant to what's right or moral, and dying on the hill of breaking the law ensures no change can ever occur when a system or person in power inevitably wrongs people.
Is this what we just saw with America attacking Iran?
It may have been a stupid order, but it was not unconstitutional.
... Isn't that rather against the spirit of the US' constitution? I can see it being a thought with other nations, but not this particular one.
> A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
Which kinda follows the spirit of English Common Law:
> The ... last auxiliary right of the subject ... is that of having arms for their defence, suitable to their condition and degree, and such as are allowed by law. Which is ... declared by ... statute, and is indeed a public allowance, under due restrictions, of the natural right of resistance and self-preservation, when the sanctions of society and laws are found insufficient to restrain the violence of oppression. - Sir William Blackstone
A "monopoly on violence" is exactly the thing our laws are supposed to protect us against. Because if a state has that, then they have a monopoly against all rights, because they alone can employ violence to curb those who do not subscribe to the state's ideology.
I'm pretty much a pacifist. I _like_ Australia's gun laws. But, a government's purpose is to protect their people. They are to be representative - or to be replaced. If they leave no other choice for that, then violence is the only answer left.
The key issue is that government (via courts) is the one that decides whether violence is justified or not.
You're right that a government that no longer represents its people must be replaced. But that's not the case in America. The conflict in America is between two different groups of people with different ideas about what the right thing to do is. So far, these two groups have used democracy to get their way. As long as that continues, there is no problem.
But when people use violence outside government law, just because they don't agree with the decisions of the government, then that's not justice--that's just terrorism.
It is the right of a person, rather than the government, under the way the US constitution is structured.
I should have been clearer that I don't mean only the government is allowed to use violence legitimately. Sometimes citizens can use violence legitimately.
But that doesn't mean an individual gets the final word on whether something is self-defense vs. murder. If I kill someone in an argument, I can't just say "it's my inalienable right to wield violence, so buzz off!". I will be put on trial and the justice system will decide whether I'm a murderer or not.
That's what I mean by "monopoly". The government+constitution+laws are the sole deciders on when it is appropriate to use violence, not individuals who think they are dispensing justice. The latter are either vigilantes or terrorists.
Oh is that what January 6th was?
But I will concede that some people on Jan 6th were attempting to change a result by violence. I support sending those people to jail.
You can't just decide on your own that violence is justified.
Yes, military power is evil, but it’s a necessary evil. A society that decides to stop making weapons is going to be subjugated by one that continues to make them. Full stop.
It's not the bait on HN that you need to be worried about but the propaganda from your own government.
You're saying the above is bait, when your own comment is nothing but it.
Questionable and violent US foreign policy is much much older than the current Trump administration.
My comment here is about the ethics of military weapons vs assassinations of private individuals. I have no idea what you’re talking about.
> Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems."
Your comment can't both A. be relevant as a reply to the above B. yet have "no idea what I'm talking about", as if it is not relevant. Either both of us are saying something relevant, or neither of us are. You can't have your cake and eat it too.
So should we really applaud selling shiny new toys that will enable more baseless cruelty? Probably not. Just like we shouldn't support political terrorism.
Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.
Really? I don’t know how many were in his house but at most it’s attempted murder of a few versus killing 150.
I see a difference.
US law sees a difference too. The person that threw the firebomb will get the full weight of the law if they are caught, and spend an awfully long time in prison.
Those that killed the school girls will never face punishment.
But the idea that the US cares is laughable.
We should call it what it really is: oligopolization of intellectual work. The capital barrier to enter this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing head first into this one-way door.
The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hyper growth and maximum adaptation, and safety is just an afterthought. I understand it's a competitive market, and everyone is doing it, but these are just hollow words. Industries that care about safety often tend to slow down.
Without missing a beat, she said, "If humans' loss was that complete, there would be no historians."
I responded that I never said they were human historians.
Yes, because no one listened to me. It was early-mid 2024, and here as well as on other places, people kept saying "oh well the cat's out of the bag now, nothing can be done, it can't be stopped". I pointed out that only 4 or so planes being made to collide with TSMC, NVIDIA and ASML would be enough to give at least a decade of breathing room while we try to figure out how to keep this technology safe. I'm almost certain there were people who read it on here as well as elsewhere who could have made it happen.
_Now_ it is indeed too late.
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite of how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
Apply this to guns.
Then look how this works in the US. You could, but then a law was made to protect gun manufacturers, The Protection of Lawful Commerce in Arms Act.
AI will get this treatment I’m sure.
if they're selling the knives knowingly to a knife-murderer, it might be worth discussing.
Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products, he's the guy who makes the decision to supply this tech directly to the US government who is on the record about using it for military operations. And you're right on the last point. Sure the 20 year old guy who threw a molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.
But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, not so sure about the ethics of that one.
I also vigorously dislike the industry, but your stance 'I'm on the skeptic side of "AI"' is something you need to address - saying this in the friendliest way possible, you are wrong.
AI needs to be opposed, because the billionaires are going to use it to turn the world into shit, but if the best the AI opposition can muster is "AI isn't useful", we are fucked. It's extremely powerful and can do bizzaro things when you rig it up with tools - the kinds of things we need to prevent companies like Google from doing with it, no one is paying attention to.
[1] double-tapped: a phrase referring to the practice of firing a second missile after the first to kill any rescuers or surviving schoolgirls
So you can also be outraged at weapon manufacturers, which is one step closer. Or, you can skip the indirection, and be outraged specifically at people in charge of using this technology, which is my point.
I'm disgusted by this industry as much as you are, believe me. But blaming the companies that produce "AI" for people dying is misplaced. They're certainly part of the problem, but not the root cause.
> AI needs to be opposed
AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.
But even if it did, it's silly to claim that any technology needs to be opposed. This one is potentially more problematic than others because it raises some difficult existential and social questions which we might not be ready to answer, but it's still ultimately on us to control how it's used. We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button, so a probabilistic pattern generator seems trivial in comparison. It's going to be bumpy, but I think we'll manage.
They've claimed the term, this is not a useful objection to make at this point. And everyone was fine with calling our shitty little computer vision handwriting parsers "AI algorithms" before LLMs.
> We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button
Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on? "Thanks guys, our democracies are so stable these will literally never be used for a nuclear holocaust, and they might have useful mining applications!"
Can you not think of any exceptionally nasty things the US government could do with the "machines that act as if they can think for most practical purposes"? Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the peoples interest?
Sure it is. Someone saying that the sky is purple will never be true, no matter how many times they say it. Pushing against this is how we avoid the fabricated mystique around this tech, precisely so that people don't see it as a threat.
> Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on?
You're twisting my words. I never said that I support what "AI" companies are doing. I said that your claim that "AI is killing people" is hyperbolic, and that you're barking up the wrong tree.
Besides, the scientific research invested in nuclear technology has produced far more benefits for humanity than drawbacks. It's very likely that the conversation we're having now wouldn't have been possible without this research. There's an argument to be made that even nuclear weapons and their deployment in WW2 had a more positive outcome than any alternative would've had.
Similarly, the same can be said about the current generation of "AI". For all its potential dangers and harms, whether direct or indirect, it has and will continue to have many positive use cases, some of which we haven't discovered yet. Ignoring this and opposing the tech altogether is throwing out the baby with the bathwater.
The solution isn't banning the tech. It's strongly regulating it, as we've done with many others. Unfortunately, governments move at glacial speeds, and some are deeply entrenched with corporations, so there's conflicts of interest galore, but that's still the most sensible approach to manage it safely.
> Can you not think of any exceptionally nasty things the US government could do with the "machines that act as if they can think for most practical purposes"?
Sure I can. Any government, organization, or individual can abuse any technology. But you haven't made the case why opposing technology itself would prevent that, versus holding those individuals accountable directly. Until then your comments come across as misplaced fear mongering.
> Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the peoples interest?
So what do you suggest? We stop all tech R&D because governments can't be trusted? That's pure fantasy. No single government would even agree to it since technology is universal. If the US doesn't invent it, another country will. Advancing within this messy geopolitical framework is the only path forward, for better or worse.
If you want to hold the leader of a contemporary tech giant responsible for causing excess deaths then Meta and Zuckerberg would be a lot higher up the list - maybe even at the very top.
Now I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
But the point is this: whoever firebombed Sam Altman’s house didn’t do it out of a principled stance - in fact I suspect they barely expended any thought on the matter - because if they were really acting out of principle they’d have chosen a different target, they’d have done some research into who is trying to expose and bring down that target, and they’d have figured out how they could help rather than just randomly engage in violence. Whereas this was just a dangerous stunt.
Well Zuck has that big scary hedge, and I’m sure people have been going after him for ages.
> I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.
Great! Is the plan to wait until after the billionaires have their AI controlled military drone swarms to have this revolution? Because they already control your government - I don’t think you will achieve anything like this through legal means
Whose government?
My point is, we've seen this movie and killing Sam Altman is uncomfortable but justified.
If you can think of one, then you shouldn't be proposing introduction of guidelines that are blatantly false. Or would you like a "1+1 is not 2" guideline to accompany it?
Trump bombing hundreds of people or someone throwing a bomb at Trump because he keeps bombing hundreds of people?
Are calls for violence against Hitler during WW2 bad? How about the Japanese imperial navy?
How about calls for violence against Putin during his war of aggression?
This isn’t rhetoric; I’m just pointing out that it isn’t as black and white as people seem to make it. (It is black and white for me, as I’m with Asimov on the matter, but it isn’t for most humans.)
If you said "yes" to all of the above, I'd love to know your reasoning.
If you want a molotov cocktail thrown so badly, throw it yourself. Don't put it on other people to do it for you.
Not my personal view.
* I care about my family more than I care about a stranger.
* I care about people who don't kill people unprovoked more than I care about people who kill people unprovoked.
* My family are more than one person, versus the one killer.
That's why I answer no to that one.
I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.
A non-rhetorical question: What recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?
Michelle Obama's, "When they go low, we go high", is some of the stupidest political advice and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)
When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.
But it seems a distant hope at best.
It's like that old joke:
A man offers a young woman $1,000,000 to sleep with him for one night.
“For a million dollars? Sure, I’ll sleep with you.”
He smiles at her, “How about $50, then?”
“How dare you! I’m not a whore!”
“Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”
Similarly in this case, you can't make up absolutes and assert they're true, while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.
So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like it's some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.
“Ok but sometimes people throw out stuff that’s not trash because they think it’s trash”
Correct, and that would be not ok because they have mis-identified trash. Doesn’t change anything about the original premise. If you throw out trash, that’s good.
It was only a matter of time. The font on the dollar sign kept increasing; eventually, selfish humans always crack. Keeping it open would have required making it a public utility. Private companies don't do altruistic things unless they benefit.
It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.
Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.
If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.
Like when you poop on the clock?
As a defense contractor Altman is a legitimate target for a country that the US has attacked like Iran.
The US is engaging in military action against many countries and has threatened to annex or invade allies.
In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.
I broadly agree. But… there are some who have lived who made the world a worse place. Who gets to decide? Trump has done a bit of this Sort of deciding and it hasn’t gone great so far and there is no sign that it’s actually helped.
The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive need not be kept alive any longer. Unproductive people are nothing but cost, better to just let them die. A future where the richest classes can turn the underclasses into soylent is now very much within the realm of possibility.
If this doesn't radicalize people into actual violence, I simply have no idea what will. "Attacking someone is wrong" is a completely meaningless statement to make to someone who believes society as we know it today is going to be destroyed. Honestly, I can't even blame them.
I agree. The French Revolution was really, really mean.
"If we don't put the brakes on this car it's going to go off the cliff!"
"Historically, cars falling off cliffs was horrible for all the passengers involved."
This is our only chance to transition to a post-scarcity society. We won't have another. Allowing them to monopolize access to AI is a fatal mistake.
That sounds like something someone says when he understands his weak position, especially someone as ruthless, dishonest, and narcissistic as Altman.
Just saying.
Please avoid swipes like this on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html
It's easy to say we need to be willing to accept short term pains when it's someone else who has to bear the brunt of them.
- https://en.wikipedia.org/wiki/Vigilantism
- https://en.wikipedia.org/wiki/Law
- https://en.wikipedia.org/wiki/Bill_(law)
- https://en.wikipedia.org/wiki/Trial
Now back to reality.
Law: Epstein, ICE, Geneva Convention, segregation
Bill: Going once, going twice, highest bidder wins. Ironic on a Sama thread.
Trial: OJ Simpson. Many miscarriages.
Vigilantism: Revolutions
I am not saying break the law. I am saying look back at history.
Malcolm X
There’s a whole bunch more here if you’re interested.
https://www.azquotes.com/author/9322-Malcolm_X/tag/violence
whether this way or in slow-motion mass attacks on people.
an attack on a society that lasts years is still an attack and i wish the collective we would realize this.
“it’s ok if millions suffer now for me to realize my dream” is just wrong.
i’ll never understand how these guys fail to realize: they actively push for people not to care about the destruction they cause. that’s obviously going to bite them in the ass whenever they’re on the receiving end.
That said… is anyone going to be surprised when the laid off masses torch a data center or worse? IMO, it’s only a matter of time before we see organized anti-AI terrorism too. When you have people out there saying “AI will kill us all” then it’s easy to justify using violence to stop that outcome.
He said "All you had to do was pay us enough to live"
And this wasn't done by someone homeless or unemployed.
Similar here with the guy going straight from the crime scene to OpenAI HQ to get caught
It's hard to effect any sort of change from a prison cell, violent or otherwise, so it's irrational to deliberately get yourself locked up if your aim is to change things
> organized anti-AI terrorism too
There were already memes about that
> When you have people out there saying “AI will kill us all”
It's the "clickbait" mechanism becoming more cancerous
How about Ted Kaczynski (Unabomber)? Attacking the tech elite was his deal.
This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion, while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.
Violence is not the answer, but it's easy to see how Sam's public persona would push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post that is intended to gain sympathy ends up doing the opposite.
As a sidenote, I wish we would stop paying attention to these people. A probablistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech instead of hoarding power and wealth for you and your immediate circle of grifters.
> A lot of companies say they are going to change the world; we actually did.
Ugh.
did he find his PR agent on Upwork or does he just think we're all morons?
In all seriousness, we’ve got glorified autocorrect right now. Even suggesting any of these LLMs is actual AGI is laughable. I’m not saying they can’t do some interesting things, but unless Sam has access to models that are equivalent to what would be GPT-50 he should avoid throwing in buzzword acronyms for no reason.
I don’t think history will smile upon him. Always good to think about how you want people to feel about your impact on them.
https://youtu.be/aYn8VKW6vXA
Elon was accused of this too.
Plus I doubt that someone who would read a 30min New Yorker article is the kind of person who would throw a molotov cocktail at someone’s home.
It’s a shitty move to try and make a causal connection between the New Yorker article and this act of terrorism. He’s trying to blame the author and discredit the article.
It’s a “I’m trying to be the good guy but they’re trying to stop me” situation. This is not a message addressed to us, it’s a message addressed to his employees and his followers. This is the kind of tactics people use when they want to establish a cult. Sam Altman again is showing how manipulative he is. And as any good guru he probably believes everything he says.
What I would not do if there were attempts to kill me is post a picture of my spouse and child and point out how important they are to me with a photograph of them. It's literally trading a little bit of the safety of your family in exchange for sympathy from bystanders.
If it wasn't for the effective policing, I think that such incidents would be more common.
This implies you have knowledge of future events, which means you could make a lot of money grifting on Polymarket
Genuine Q
His response here is a synthesis of 1) addressing the "incendiary article" 2) conflating it with a recent attack on himself and 3) joking about having "fewer explosions in fewer homes" at the end. As a reader it's hard to tell if he wants us to empathize with him or laugh at his misfortune. The self-deprecating humor does not mix well with photos of his family and an (ostensibly) life-threatening situation.
From the outside looking in, Altman is stressed and showing the same traits that people are accusing him of. He "brushed [...] aside" the article without ever thinking about addressing it, and now he's sitting down "in the middle of the night and pissed" like some Jobsian seraph, furiously condemning society at-large for not understanding his vision where AGI is the end-times. This is probably reassuring news for the market, but on an individual level I'm having a hard time believing in Altman's narrative. OpenAI is a Department of Defense contractor, it's hard to believe that Altman is capable of resisting coercion when they've already capitulated for peanuts. If Sam was a sociopath, it would probably be very easy for him to justify this with threats of AGI and promises about how much safer we are with him in control. Coincidentally exactly what he spends much of this article reiterating, but I'll let you draw your own conclusions.
If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.
This is probably combined with a general sense of AI fatigue. The population as a whole is getting tired of "AI slop" and companies trying to shoehorn "AI" into everything. Personally I'm also tired of every startup needing to be an AI startup. As if there was nothing else worth building or investing in. It's sucking the air out of the room.
https://www.lemonde.fr/en/france/article/2026/04/07/the-stra...
> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.
Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting statements like these out are only going to further fuel anti-AI sentiment.
I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?
Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.
But what, specifically, do you see? What am I blind to?
* given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams
That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans, it is what they want too.
https://www.pbs.org/newshour/nation/luigi-mangione-due-in-co...
My understanding is that it was personal
What a bullshit thing for someone who is not actually democratizing access to AI to say.
I’m still waiting for that open iMessage standard steve promised. Maybe this year?
It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.
Listen, for people unaware of history: things used to be a lot more violent, as workers had to earn their rights with blood. The state had to respond, first by attempting to squash it violently, and second by compromising in such a way as to ensure workers had a bit more power in the system.
As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.
https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...
It was a performative action.
I'm sure there will be a thorough investigation, unlike in the Suchir Balaji murder case where they rubber stamped suicide after half an hour despite him being a whistleblower.
I must admit, I've been spared the experience, and I was under the impression that was true for most people!
Luckily, no. Do you frequently wade into comment threads shitting on others’ statements of their lived experiences?
It isn't just irony; it's a lack of self-awareness! (sorry for increasing the pain that Altman et al. inflict on us.)
And especially Elon should stop putting his child on top of his shoulders as a meat shield while at the same time saying that people wanted to murder him.
https://sfstandard.com/2026/04/10/sam-altman-russian-hill-mo...
"Around 3:40 a.m., the suspect threw a bottle containing a flaming rag at the metal gate of 855 Chestnut St., according to a police report."
WTF? You can't post this viciously to HN, no matter who it is you're being vicious towards.
Normally I would ban any account that posted like this, but this thread is a mob and mobs have a deranging effect on people. So I'm going to cut you some slack and not ban you. Just please don't do anything like this on HN again.
As I explained to these other users*, I'm not going to ban you right now because this thread is a mob, and mobs derange people and I don't think that any of us (including me) is immune from this. But please don't ever post anything like this, or remotely close to this, ever again to HN.
* https://news.ycombinator.com/item?id=47728362 and https://news.ycombinator.com/item?id=47728150
It's more valuable to discuss grievances than to pretend they are simply un-discussable in the wake of related violence (in the vein of "it would be disrespectful to talk about gun control in the wake of gun violence").
Well, this is already the economy right now: the very upper class owns more than the vast majority, and consumes more than the vast majority.
"The top 20% of earners now make up over half of consumer spending"
https://www.axios.com/2025/08/08/stock-market-us-economy-ric...
>also means you are opting into homelessness, famine, cancer, climate change, etc. pretty much everything that we could solve with ASI.
All these could be stopped right now but many people don't want to. Your ASI is going to give the same answers scientists have been reviled for saying: tax more, don't let the free market decide everything, eat less meat and drink less alcohol, consume less in general.
Human stupidity is the real problem and ASI isn't going to "solve" anything.
It's also insane that we have come to the point that you can say something like this and publish an Axios link when anybody could just go outside and see most people are employed, participating in the economy, not homeless, have food, buy things and enjoy luxuries.
Am I to believe that Jeff Bezos is the primary driving force behind Labubus? Is the Chipotle down the street waiting for Elon to come to town so they finally have a customer?
Does it matter if you're already a rich oligarch with generational wealth? All these CEOs have enough money to last several decades beyond their life spans; it doesn't matter to them if the slave class croaks.
> slave class
This sentiment is by far the most ridiculous because you are simultaneously projecting a reality where AI does everything and so people are no longer needed, but at the same time people are needed and become a slave class. "Oh no the tractor was invented! Now nobody will need humans to tend the fields! They will surely now force us to tend the fields!"
It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people.
His name allegedly isn't even clear on his own! Ongoing lawsuit brought by his sister. (Amended as recently as a week ago and discussed in a flagged submission here: https://news.ycombinator.com/item?id=47640048 ).
I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.
I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.
And I mean all of them, left wing, right wing, corporate. I am sick of every level of power in the country being filled with lying grifters. I don’t care what happens to them, as long as they’re gone.
I feel like I’m living in a circus.
Fuck off Sam. And stay safe out there.
His husband's an Aussie too!
What FOBO smells like, is what's happening.
Evidently, even HN could only keep up the pretense that tech development is amoral and apolitical for so long.
When it comes to people who openly incite or directly use violence, why do you think it's unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what's the ethical argument against using violence against that person?
Not trolling or anything; I've just been thinking about this for a while and trying to understand what I'm missing in this argument.
Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.
Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.
So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.
I don’t know if this answers your question, but it’s what comes to mind on the subject for me.
I focus on the question of vigilantism because that, I think, is the issue. Many people feel an emotional impulse, that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is, if you think Joe Blow is so evil, why don't we take him to court? What kind of possible actions could we not jail or fine him for, but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.
Your response is a cop-out and you should be disappointed in yourself. Further, people do not often agree another human should be murdered, no matter how you phrase it.
I really wonder how much of a privileged bubble one must've lived their life in to come to this belief. Without much of a history education either.
It's _incredibly common_ for humans - maybe saying "humans" instead of "people" helps you snap out of the disbelief - to agree that another human should be murdered.
It's dishonest to say it's incredibly common for people to want others murdered. That's not a belief that needs normalizing.
Have you ever heard of the French revolution, the World Wars, collapse of the Soviet Union, or maybe more recently - the Ukraine war?
People are more than happy to see someone who brings suffering to others dead.
Of course, I'm sure lots of people would also want to see people responsible for those events be locked away in a prison cell for the rest of their lives, and for their freedom and privacy to be taken away - do you perhaps want to guess why people would prefer that over instantly killing them?
Some people want others to be murdered. And those people do not need representation.
It's a bad take especially considering the context. And to be explicit - the context is a Molotov cocktail being thrown at a home a child is sleeping in.
> If your claim is that violence is justifiable - who makes the determination for such justification?
We authorize people in governments to make this determination, and increasingly machines. Should we? Do you think that it is acceptable to let a police officer justify force on behalf of the state? How about a machine? Mostly just trying to understand what you think is acceptable here.
But to answer...violence against human beings is indeed different than setting shit on fire, though the law certainly does not allow for the use of force against personal property either. And this difference is indeed the crux of the issue, depending on what your values are (though we seem to be in alignment on "life is valuable"). If for example (probably a bad one, but hopefully it gets the idea across), a group of people is committing a genocide, and you ask them to stop, and they do not, and so you interfere with the use of force...limited at first, maybe, but they do not stop: is their continued involvement not the justification for use of force, assuming other strategies are off the table? Different example than the thread, I realize, but my thought experiment is not tied directly to it, just at the sentiment.
[citation needed]
> a group of people is committing a genocide
if you are asking if violence is OK to fight violence, it always is. I guess I personally did not think that needs justification but 100% you can (and should) fight violence with violence
* https://en.wikipedia.org/wiki/Kent_State_shootings
* https://nmaahc.si.edu/explore/stories/childrens-crusade
* https://encyclopediaofalabama.org/media/demonstrators-attack...
* https://en.wikipedia.org/wiki/Bonus_Army
* https://en.wikipedia.org/wiki/Pullman_Strike
* https://en.wikipedia.org/wiki/Great_Railroad_Strike_of_1877
> if you are asking if violence is OK to fight violence, it always is. I guess I personally did not think that needs justification but 100% you can (and should) fight violence with violence
I wasn't asking that, but you were (sorta) vis-à-vis the justification question ;) My main point was to say that it seems strange that a crowd of folks that consider themselves "thinkers" would simply table the discussion of the use of force. I do not like discussions tabled simply because they seem indecent - that tells me they're probably important to have.
But to your point: if it is ok to 100% use force against force, why? If a federal agent were to show up at someone's door to force them into a labor camp, where they would probably meet their death slowly - if the person decided to try to use force to fight the federal agent, would their use of force be justified in your eyes? And taken a bit further, and sort of building on the first example, what is the difference between someone using force against an employee of a company pursuing a goal whose technology is being used to aid in a genocide against others for reasons _the company can justify_ (money) but they can't? Are they not directly complicit in the devaluation of other people's lives? In Grug's terms, "why ok for us to hurt people if we think we right, but not ok for people to hurt us if they think they right?" (or something like that)
Or keep on doing deals with the DoD and pushing to replace desperate people's jobs.
Cute kid. I'd rather be raising my family in peace than dealing with what you deal with.
@dang You have a bullshit filled unrelenting job, thanks for doing it.
If you're OK with victim-shaming here, doesn't it say more about you than Altman? What does it say about your viewpoint?
You really don't need to go that high up the ladder to find members of the 'list of planetary really bad guys'. Sam Altman is single-handedly responsible for starting the current DRAM crunch - that too based on an untenable economic framework. He's also an enthusiastic participant in the AI bubble that threatens to cause a massive global economic depression when it pops. He's also involved in the cabal that wrecks the labor market (wages) by hyping up the 'AI will replace labor' narrative. On top of all that, he and his ilk are on a building spree of data centers that will guzzle huge amounts of energy and dump tonnes of extra CO2 into the atmosphere, as if there's no tomorrow. This wrecks the hard efforts of millions of others before him to rein in the damage caused by climate change. Needless to say, all of this has pretty deleterious effects on the economy, the biosphere, and the welfare of ordinary people, including the loss of innumerable lives.
But does he care? He is one of those people who simply ignore the trail of serious damage and enormous suffering they leave in their wake, because they don't see anything beyond money - more money than they can spend in a hundred lifetimes! Nobody needs a justification to see him as one of those 'planetary bad guys'.
> What does it say about your viewpoint?
As someone else here said, it goes without saying that lobbing Molotov cocktail at anyone is a no-no. I don't support physical violence in any form. Having said that,...
> If you're OK with victim-shaming here
It's sad that the aristocratic society didn't learn anything from the murder of Brian Thompson. The 'victim' had caused thousands of preventable deaths per year, and his death saved thousands by forcing the industry to deal with the problem. Suddenly, even the pacifists (like me) are left wondering if the death was unethical. If true justice existed, the state would have stopped them from their crimes (aka professions), if not outright execute them for the lives lost. Whom will you choose when they pitch their own lives against thousands of innocent lives? You can't claim victimhood after putting yourself in that position.
I read the New Yorker article like most people here. I didn't find anything incendiary enough in it to provoke a Molotov attack. I wouldn't put it past him to have arranged it himself, given how much he lies and what he stands to gain from it. But let's assume that the attack is real and is connected to the report. The reply seems overly dramatic and self-righteous, given that the attack was against his iron gate! He's milking the situation to indulge in virtue signaling, sympathy farming and gaslighting the critics. This is one hell of a victim posing! But I have no sympathies to spare if it distressed him so much. He shouldn't be able to sleep anyway, if only he had a conscience. Advocating sympathy for the unsympathetic super-privileged is a bit tone deaf under such circumstances. Evidently, nobody is in a mood to oblige to such manipulations.