tombert 23 hours ago [-]
The name of this CPU is bordering on securities fraud. When people see the term "AGI" now, they are assuming "Artificial General Intelligence", not "Agentic AI Infrastructure".
Of course people don't realize that, and people will buy ARM stock thinking they've cracked AGI. The people running Arm absolutely know this, so this name is what we in the industry call a "lie".
torginus 23 hours ago [-]
Considering AGI has been degraded into a generic feelgood marketing word, I can't wait to get my AGI-scented deodorant.
You can already drink AGI! Oh sorry, AG1. The resemblance must be a complete coincidence.
bogzz 21 hours ago [-]
Oh, is that what they're implementing in schools? No, wait, that was A1, probably the sauce.
parl_match 21 hours ago [-]
> The resemblance must be a complete coincidence.
I don't know why so many people are willing to descend into flippant, lazy conspiracy instead of doing a 7-second Google search before making a claim.
AG1 was started in 2010 by a police officer from New Zealand and AG stands for Athletic Greens.
There is a fair amount of controversy around the company's claims, so I suppose that is one symmetry between AG1 and AGI.
bensyverson 21 hours ago [-]
Not a conspiracy, and I know the history—just a joke. The current branding sure looks like AGI if you're not looking closely (or maybe I just read too much hn)
rsktaker 15 hours ago [-]
I laughed!
krogenx 22 hours ago [-]
Pretty sure in that case AG stands for Athletic Greens.
I think the name change also came before the AI hype.
BLKNSLVR 20 hours ago [-]
AGI: Attorney General Intelligence.
I believe Arm probably has cracked this very low bar.
can16358p 10 hours ago [-]
Buy it in combo with the good ol' Blockchain perfume!
lxgr 7 hours ago [-]
You mean iced tea, right?
SecretDreams 23 hours ago [-]
> I can't wait to get my AGI-scented deodorant.
Old spice for me, thanks!
BLKNSLVR 18 hours ago [-]
Old Spice, that's OG!
RcouF1uZ4gsC 17 hours ago [-]
Artificial Gut Incense?
imglorp 22 hours ago [-]
The marketers did this for 5G also, calling their product 5G before it was actually deployed, only because theirs came after 4G and they wanted to ride the upcoming 5G buzz.
It seems marketing /depends/ on conflating terms and misleading consumers. Shakespeare might have gotten it wrong with his quip about lawyers.
There was soooo much intentional disinformation around 5G. Everyone who wanted to sell anything intentionally confused the >1Gbps millimeter wave line-of-sight kind of 5G with the "4G but with some changes to handle more devices connected to one tower" kind of 5G. I wonder how many bought a "5G phone" expecting millimeter wave but only got the slightly improved 4G.
bee_rider 16 hours ago [-]
This is mostly the standard’s fault, right? Putting more conventional wavelengths and the mm stuff together in one standard was… a choice.
phire 11 hours ago [-]
From a standards design perspective, there is nothing wrong with it. It's the same protocol running on two very different frequency bands. They co-exist and support each other.
The problem is how marketing interacted with it.
catlifeonmars 7 hours ago [-]
Wait til you search the term “6g”.
guerrilla 11 hours ago [-]
Yes, my wireless router has "5G WiFi" but only does 4G. I didn't have a choice about using it since it comes from the provider, but still stupid.
catlifeonmars 7 hours ago [-]
What is 5G WiFi? Do you mean 5Ghz WiFi?
estimator7292 2 hours ago [-]
5G and 4G are not terms applied to WiFi. We have 802.11a/b/g/n/ac/ax and WiFi6/7
WiFi operates in the 2.4, 5, 6GHz bands, but those frequency bands are not used to differentiate WiFi standards because you can mix and match WiFi 6/7 on all three bands.
There are also more WiFi bands below 2.4 and above 6GHz, but they're not common worldwide.
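For reference, a quick lookup of how the marketing generations map onto the 802.11 amendments (from memory, so double-check the edge cases):

    # Wi-Fi marketing generation -> IEEE 802.11 amendment
    WIFI_GENERATIONS = {
        "Wi-Fi 4": "802.11n",
        "Wi-Fi 5": "802.11ac",
        "Wi-Fi 6": "802.11ax",  # Wi-Fi 6E is the same amendment, in the 6 GHz band
        "Wi-Fi 7": "802.11be",
    }
    print(WIFI_GENERATIONS["Wi-Fi 6"])  # 802.11ax

Note that the generation names say nothing about which band you're on, which is the whole point above.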
It’s been a long, long time since I’ve heard that name come up in conversation.
Thanks for the trip down memory lane.
juleiie 21 hours ago [-]
If rich people are this stupid then they deserve to be parted with their cash.
If you invest money so mindlessly that you don’t even check what you buy, then no legislation in the world will manage to protect you from your own mind.
tombert 17 hours ago [-]
It’s not just rich people though. Most people (at least in the US) have their retirements and the like in things like 401(k)s, tied to some kind of index like the S&P 500. A company doing bullshit to manipulate its stock affects pretty much anyone who uses an index fund or ETF, which is pretty much everyone in the US.
elictronic 11 hours ago [-]
You invest in index funds and ETFs so your money averages out and you don’t get impacted by a single company’s stupidity.
tombert 3 hours ago [-]
No, the impact is lessened, but there can still be an impact from an individual company's stupidity.
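Back-of-the-envelope sketch of that dilution, with made-up numbers (the weight and the move are assumptions, not real figures):

    # One company's move scaled by its cap weight in the index
    weight = 0.03         # assume the company is 3% of the index by market cap
    company_move = -0.50  # assume its stock halves when the manipulation unwinds
    index_move = weight * company_move
    print(f"index impact: {index_move:.1%}")  # -1.5%

Averaging dampens the hit, but for a mega-cap weight it's still a visible dent in everyone's 401(k).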
fidotron 5 hours ago [-]
An unappreciated aspect of Arm is they really were the Robin Saxby show. https://en.wikipedia.org/wiki/Robin_Saxby Whichever ISA had him selling it was going to win.
While AArch64 represents the technical revolution they needed, their business compass has been gone ever since he stepped down. This grimy stuff, and as others noted competing with your own customers, were no-gos in the earlier era.
kergonath 23 hours ago [-]
AGI is a poorly-defined concept anyway. It’s just vibes, nothing descriptive.
chromoblob 9 hours ago [-]
AGI is the automation of self-regulation of language
source: 100% personal certainty
rayiner 5 hours ago [-]
Can you imagine being an engineer and working hard to create something new and cool and some jackass in marketing slaps the name “AGI CPU” on it?
0x3f 23 hours ago [-]
> Of course people don't realize that, and people will buy ARM stock thinking they've cracked AGI.
Doesn't seem like a very credible assertion. Picking stocks in this way would remove you from the market pretty quickly.
PessimalDecimal 23 hours ago [-]
Didn't random companies add block chain to their names only just a few years ago and get 30+% jumps in stock price immediately?
bee_rider 15 hours ago [-]
That’s quite different, BlockChain was a buzzword label for existing tech. AGI is a label for something we famously haven’t achieved, and which would be revolutionary if we had.
This seems more like calling your spaceship company, I dunno, “Interplanetary Passengers” or something.
serf 11 hours ago [-]
AGI is a buzzword too, it's just differently applied.
In this case it's a word that means the thing we're all developing towards, apparently, but that no one actually knows how to get, or even how to measure whether or not we've already gotten it, and no one really knows what will happen when it's achieved, if it hasn't been already.
It's a bit like an even wackier more-corporate version of The Quest for the Holy Grail.
And the honest one true test for "is it a buzzword?" : Did a corporate group brand a flagship with it?
"RISC architecture is going to change everything!"
sincerely 17 hours ago [-]
> Just because the stock goes up doesn't mean anyone was tricked. People invest in sentiment, in momentum, in all kinds of second order effects.
Wouldn’t those second order effects be downstream of the first order effect of people being tricked?
elictronic 11 hours ago [-]
Run a trading bot looking for news feeds with specific terms. Buy stocks based on this. Understand your fellow humans are lazy and stupid. If someone can't read past the first word of a news article, maybe that person shouldn't be allowed to trade stocks.
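A minimal sketch of the kind of bot being described; the buzzword list is invented and there's no real feed or broker API here:

    # Fire a "buy" on any headline containing a hype term; no further reading required
    BUZZWORDS = {"AGI", "BLOCKCHAIN", "QUANTUM"}

    def headline_triggers(headline: str) -> bool:
        words = (w.strip(".,!?\"'").upper() for w in headline.split())
        return any(w in BUZZWORDS for w in words)

    if headline_triggers("Arm announces AGI CPU"):
        print("BUY")  # a real bot would route an order to a broker here

Which is exactly how a name like "AGI" can move a stock without a single human being fooled.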
wiml 23 hours ago [-]
Yes, that's how fraud works a lot of the time. It removes you from the market but not until after it's removed your money. And there's an endless supply of new people ready to make the same mistake after you've learned your lesson.
Does an iced tea company changing their name to Long Blockchain make any sense? No, not really, it's pretty stupid actually, but it managed to bump the stock by apparently 380%.
The stock market can be pretty dumb sometimes. Let's not forget the weird GME bubble.
0x3f 22 hours ago [-]
You're making claims not found in evidence. Just because the stock goes up doesn't mean anyone was tricked. People invest in sentiment, in momentum, in all kinds of second order effects.
GME was hardly a trick either. If you actually read the subreddits at the time they were all perfectly aware of the nature of the thing. They literally go around calling it degenerate behavior (i.e. risky, frothy, baseless).
Why is the assumption that you are smarter than everyone else? That you can interpret the world but everyone else needs protecting?
tombert 21 hours ago [-]
Where did I say I was smarter than everyone else? I certainly don’t think that and I didn’t mean to imply it if I did.
I do think I know more than the average person about computers. Probably most people on this forum can say that. People who know about computers are more likely to be able to smell bullshit with a name like AGI. It’s not that I am smarter, I wouldn’t be able to call bullshit with anything involving chemistry or physics.
I think, like Long Blockchain, ARM is abusing the world's collective computer illiteracy and trying to harvest investor money in the process. Clearly this has worked at least once, as was the case with Long Blockchain.
> People invest in sentiment, in momentum, in all kinds of second order effects.
Yep! And this is why it is wrong for corporations to put out incorrect or misleading statements, as it creates a sentiment that is not realistic. This can then propagate in the form of the stock price not being realistic.
0x3f 21 hours ago [-]
I just don't believe even a person with a poor understanding of the market or the underlying technology tosses out bets so casually based only on a name, in the sense that they believe 'oh wow this is actually AGI, I should buy it'.
It's different for them to toss out a bet on the basis of 'other people will think this is AGI, I should buy it in anticipation of that' or even 'other people will think other people will think this is AGI, I should buy in anticipation of that'.
People playing the Keynesian beauty contest are not, to me, naive participants in the market getting scammed by a company adding 'AGI' to a product.
The idea that the first-order person exists in any great number is just so insulting to the average person's intelligence that it's hard not to read it in a paternalistic tone.
tombert 21 hours ago [-]
> I just don't believe even a person with a poor understanding of the market or the underlying technology tosses out bets so casually based only on a name, in the sense that they believe 'oh wow this is actually AGI, I should buy it'.
The CUBA ticker shot up in value after Obama lifted sanctions on Cuba, despite the fact that the company doesn't invest in any Cuban companies. People will invest in things just based on a name. https://acrinv.com/silly-true-market-anomaly/
The average person generally doesn't know a lot about anything other than the specific niche that they do for a living. This isn't a dig at their intelligence, or at least I'm not excluding myself. I know a fair bit about computer science, but only a very lay person's understanding of basically everything else.
For example, I know nothing about electric or hydrogen-powered cars, so I wasn't able to call bullshit on the Nikola scam a few years ago. I fortunately didn't buy any Nikola stock, but that wasn't because of any insight on my end; I just didn't buy it. I am very glad that people who do know about this kind of stuff call it out when companies lie to potential investors.
0x3f 20 hours ago [-]
> People will invest in things just based on a name
Right but it doesn't follow from this that those people were tricked in some way. They can be second- or third-order bettors. Even the most sophisticated quant shop in the world, the literal sharpest players in the market, can bet 'just based on a name' if it fits into some theory about market dynamics or whatever.
> The average person generally doesn't know a lot about anything other than the specific niche that they do for a living.
But so what, it doesn't follow that because they don't know about X they are willing to trivially gamble significant amounts of money on X without even the most basic of research. "I don't know much about this so won't place a bet I'm not willing to lose" is not something that requires any great intelligence.
rootbear 22 hours ago [-]
This sort of thing really bugs me! Marketing departments appropriate an existing term and use it in some new, often deceptive way. This goes all the way back to when IBM released “The IBM Personal Computer”, at a time when “personal computer” was a category name. Then Microsoft released Windows, when “windows” was a generic term for windowing systems. Intel did it with their “core” architecture. The list goes on.
(Disclosure: I am a casual investor in ARM.)
Ucalegon 23 hours ago [-]
Marketing is marketing; nothing about it was ever about being factual when there is a total addressable market to go after and dollars to be made! This is in line with much of the other marketing that exists in the AI space right now, not to mention the use of AGI within the space as it currently stands.
I'm not saying anything is going to happen, Arm Holdings has a lot more money and lawyers than Long Blockchain did, but I'm just saying that it's not weird to think that a deceptive name could be considered false advertising.
Ucalegon 22 hours ago [-]
That would not hold up considering that they consistently use 'agentic' in their press release and make no mention of 'artificial general intelligence'. Just because two things have the same acronym does not mean that they stand for the same thing. Marketing being cheeky is not a crime.
tombert 22 hours ago [-]
It's not "being cheeky". They know that the holy grail for AI is AGI. They know that people are going to see the acronym AGI and assume Artificial General Intelligence. They know that people aren't going to read the full article.
This isn't just a crass joke or a pun, it's outright deception. I'm not a lawyer, maybe it wouldn't hold up in court, but you cannot convince me that they aren't doing this on purpose.
Ucalegon 22 hours ago [-]
Of course they did it on purpose, but that's not illegal. They are not at fault for individuals not reading what the acronym stands for and the intent they place within the press release, which is very, very clear. They are not obligated or liable for others' lack of due diligence.
imtringued 12 hours ago [-]
The AGI in "Arm AGI CPU" isn't an acronym and there is no coincidence.
boxedemp 17 hours ago [-]
It's HD and AI and 5G and all that.
bluegatty 15 hours ago [-]
People buying these kinds of chips will know. AGI is barely a popular concept. Nobody in my family knows what it means.
usrusr 20 hours ago [-]
Do you think that we should live in a world where investors who buy on a comical misinterpretation of an acronym are protected from their naivety?
Why isn't there a minority shareholder lawsuit on the news because someone bought MSFT not realizing that Copilot isn't actually certified to fly an airliner? A certain type of people would likely just buy MSFT on a massive lever and then if the bet fails to work out sue pretending that they did not understand.
tombert 20 hours ago [-]
You're being purposefully obtuse.
People have been hearing for the last three years about how a specific acronym, "AGI", is the final frontier of artificial intelligence and how it's going to change the entire economy around it. They've been hearing about this quasi-theoretical, very specific thing, and a lot of them don't even know what the "G" stands for.
People haven't been hearing for years about a mythical "copilot", and as such I think people are much more likely to think it's not anything more than a cute nickname.
Are you suggesting that this is just a coincidence? The acronym AGI doesn't even make sense for Agentic AI Infrastructure, which should be AAII; they're clearly calling it AGI to mislead people. I refuse to think that the people running Arm are so stupid that they didn't even Google the acronym before releasing the chip.
You think it's a "comical misinterpretation", but I don't think it is. When I saw the article, I thought "shit; did they manage to crack AGI?", and I clicked the article and was disappointed. I suspect a lot of people aren't even going to read the press release.
jasonvorhe 19 hours ago [-]
If people can't do the most basic due diligence, as in reading up on the stuff they invest in using Wikipedia or a search engine, best of luck to them.
nixass 7 hours ago [-]
I'm "people" and AGI means nothing to me
LeifCarrotson 23 hours ago [-]
Those in the industry don't call it a lie, they call it "marketing".
It's those out of the industry who call them lies.
tombert 23 hours ago [-]
Touché. I guess I should have said "I call it a lie".
andsoitis 6 hours ago [-]
> The name of this CPU is bordering on securities fraud.
No. For it to be securities fraud, Arm would need to make a materially false statement of fact that misleads investors. Naming the CPU in this way doesn't clear the bar because:
a) the name is clearly a product brand, similar to how macOS Lion, Microsoft Windows, Ford Mustang, or Yves Saint Laurent Black Opium don't mean literally what they say
b) Arm explicitly defines it as silicon "designed to power the next generation of AI infrastructure", with the technical specs fully disclosed
c) sophisticated investors, the relevant standard for securities fraud, can read a spec sheet
d) Arm's EVP said "We think that the CPU is going to be fundamental to ultimately achieving AGI", framing it as a contribution towards AGI, not AGI itself
croon 6 hours ago [-]
I was on board with A through C, but then with D it's either clearly a lie or stupidity. I guess it's technically not a lie if they believe it, so the latter then. But I also don't want to assume someone in their position is stupid, so then I'm back to the former.
andsoitis 6 hours ago [-]
So D undermines A - C in your mind? That doesn't make sense.
croon 5 hours ago [-]
Huge IANAL disclaimer to start, but your post started off with:
> No. For it to be securities fraud, Arm would need to make a materially false statement of fact that misleads investors. Naming the CPU in this way doesn't clear the bar because:...
The EVP statement doesn't say "our CPU does AGI", sure, but is it unfair to suggest it makes some form of AGI claim, which isn't there from the naming alone?
It's no longer your point A) "clearly product brand" if the established usage of the term "AGI" comes out of the EVP's mouth.
And yes, their (albeit very vague) claim is clearly wrong IMHO.
giancarlostoro 20 hours ago [-]
It's just going the way of "Smartphone" and "Smart Car" they'll market it as such to get people riled up about it. Consumers will eat it up. I'm sure Scam Altman is ready to show us "AGI" next too. If ARM is making AGI's meaning shift to a CPU descriptor, anyone can call their tech "AGI" by just using their chips.
monegator 23 hours ago [-]
In case you haven't noticed, this whole thing has been a grift since 2022. It's kind of amazing that nobody thought of making AGI processors before
_3u10 12 hours ago [-]
I thought they were adding support for AGI slots
alfalfasprout 23 hours ago [-]
the whole AI space is rife with much worse examples of what could be considered securities fraud tbh
dakolli 20 hours ago [-]
If this headline led you to believe that ARM has somehow cracked AGI, you deserve to lose your money.
imtringued 11 hours ago [-]
ARM has cracked Agentic AI infrastructure. What are you on about? AGI is a solved problem. The next generation models will have AGI capabilities.
dakolli 5 hours ago [-]
I really hope this is satire. If not, please see a psychiatrist
groby_b 21 hours ago [-]
Honestly: The people who buy stock because a product says "AGI" in the name deserve to lose their shirt.
And no, it's not "a lie", because only an utter idiot would consider a product name an actual fact. It's a name. The Hopper GPUs also didn't ship with a lifesize cutout of Grace Hopper.
tombert 20 hours ago [-]
No, it's actually a lie, and it's different than the Hopper GPU you mentioned.
People have been seeing every big AI company talk about how AGI is the holy grail of AI, and how they're all trying to reach it. Arm naming a chip AGI is clearly meant to make casual observers think they cracked AGI.
The Hopper GPU isn't the same, because Nvidia isn't actively trying to make people think that it includes a lifesize cutout of Grace Hopper. Not a dig on her, but most people don't know who Grace Hopper is, people haven't been hearing on the news for the last several years about how having a Grace Hopper is going to make every job irrelevant.
bhouston 23 hours ago [-]
If you showed someone 5 years ago what our computers can do with the latest LLMs now, they would probably say it sure looks a lot like AGI.
We have to keep defining AGI upwards or nitpick it to show that we haven't achieved it.
I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
We don't have clear ASI yet, but we definitely are in an AGI era.
I think we are missing ego/motivation in the AGI, and self-sufficiency independent of us, but that is just a bit of engineering that would actually make them more dangerous; it isn't really a significant scientific hurdle.
tombert 23 hours ago [-]
Ok, but it's not AGI. People five years ago would have been wrong. People who don't have all the information are often wrong about things.
ETA:
You updated your comment, which is fine but I wanted to reply to your points.
> I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
I would actually argue that they are decidedly not smarter than even dumb humans right now. They're useful but they are glorified text predictors. Yes, they have more individual facts memorized than the average person, but that's not the same thing; Wikipedia, even before LLMs, also had many more facts than the average person, but you wouldn't say that Wikipedia is "smarter" than a human because that doesn't make sense.
Intelligence isn't just about memorizing facts, it's about reasoning. The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.
> We don't have clear ASI yet, but we definitely are in a AGI-era.
Nah, not really.
bhouston 23 hours ago [-]
> They're useful but they are glorified text predictors.
There is a long history of people arguing that intelligence is actually the ability to predict accurately.
> Intelligence isn't just about memorizing facts, it's about reasoning.
Initially, LLMs were basically intuitive predictors, but with chain of thought and more recently agentic experimentation, we do have reasoning in our LLMs that is quite human like.
That said, there is definitely a bias towards training set material, but that is also the case with the large majority of humans.
For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?
I am pretty confident that we are in the AGI era. It is unsettling and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.
Aloisius 21 hours ago [-]
> There is a long history of people arguing that intelligence is actually the ability to predict accurately.
That page describes a few recent CS people in AI arguing intelligence is being able to predict accurately which is like carpenters declaring all problems can be solved with a hammer.
AI "reasoning" is human-like in the sense that it is similar to how humans communicate reasoning, but that's not how humans mentally reason.
saltcured 17 hours ago [-]
Like my father before me, I seem to have absorbed an ability to predict what comes next in movies and books. It's sometimes a fun parlor trick to annoy people who actually get genuine surprise out of these nearly deterministic plot twists. But, a bit like with LLMs, it is a superficial ability to follow the limited context that the writers' group is seemingly forced by contract to maintain.
Like my father before me, I've also gotten old enough to realize that some subset of people out there also behave like they are scripted by the same writers' group and production rules. I fear for the future where LLMs are on an equal footing because we choose to mimic them.
tombert 23 hours ago [-]
> There is a long history of people arguing that intelligence is actually the ability to predict accurately.
There sure is, and in psychological circles it appears that there's an argument that that is not the case.
> Initially, LLMs were basically intuitive predictors, but with chain of thought and more recently agentic experimentation, we do have reasoning in our LLMs that is quite human like.
If you handwave the details away, then sure it's very human-like, though the reasoning models just kind of feed the dialog back to themselves to get something more accurate. I use Claude Code like everyone else, and it will get stuck on the strangest details that humans actively wouldn't.
> For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?
Tough to say since I haven't done it, though I suspect it wouldn't help much, since there's still basically no training data for advanced programs in these languages.
> I am pretty confident that we are in the AGI era. It is unsettling and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.
Even if you're right about this being the AGI era, that doesn't mean that current models are AGI, at least not yet. It feels like you're actively trying to handwave away details.
bhouston 22 hours ago [-]
> though the reasoning models just kind of feed the dialog back to themselves to get something more accurate.
Much of our reasoning is based on stimulating our sensory organs, either via imagination (self-stimulation of our visual system) or via subvocalization (self-stimulation of our auditory system), etc.
> it will get stuck on the strangest details that humans actively wouldn't.
It isn't a human. It is AGI, not HGI.
> It feels like you're actively trying to handwave away details.
Maybe. I don't think so though.
saganus 23 hours ago [-]
What does AGI look like in your opinion?
Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.
Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.
Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.
It would've taken me probably weeks to figure those out without LLMs, instead of the 1 or 2 hours it did.
In that context, I have a hard time thinking about what a "real" AGI system would look like, if it's not the current one.
Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.
tombert 22 hours ago [-]
> What does AGI look like in your opinion?
Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning for novel things that aren't just regurgitating things from Github would be cool.
Being able to learn new things would be another. LLMs don't learn; they're a pretrained model (it's in the name of GPT), that send in inputs and get an output. RAGs are cool but they're not really "learning", they're just eating a bit more context in order to kind of give a facsimile of learning.
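A minimal sketch of that RAG pattern (retrieve() and llm are toy stand-ins, not any real library):

    # RAG never updates the model; it just prepends retrieved text to the prompt
    def retrieve(question, top_k=3):
        corpus = ["note about X", "note about Y", "note about Z"]  # toy corpus
        return corpus[:top_k]

    def answer(question, llm):
        context = "\n".join(retrieve(question))  # look up "relevant" snippets
        return llm(context + "\n\n" + question)  # same frozen weights on every call

    print(answer("What is X?", llm=lambda p: f"frozen model saw: {p!r}"))

The weights never change between calls, which is the sense in which it's a facsimile of learning rather than the real thing.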
Taking what you're saying to the extreme, `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to go through and find my errors or understand a problem.
I think that they're ultimately very neat, but ultimately pretty straightforward input-output functions.
adamsb6 22 hours ago [-]
Why should implementation matter at all? You should be able to classify a black box as AGI or not.
Well, I guess you lose artificial if there’s a human brain hidden in the box.
root_axis 22 hours ago [-]
If we had AGI we wouldn't need to keep spending more and more money to train these models; they could just solve arbitrary problems through logic and deduction like any human. Instead, the only way to make them good at something is to encode millions of examples into text or find some other technique to tune them automatically (e.g. verifiable reward modeling with computer systems).
Why is it that LLMs could ace nearly every written test known to man, but need specialized training in order to do things like reliably type commands into a terminal or competently navigate a computer? A truly intelligent system should be able to 0-shot those types of tasks, or in the absolute worst case 1-shot them.
fc417fc802 13 hours ago [-]
To add to this, previously one could argue that LLMs were on par with somewhat less intelligent humans and it was (at least I found) difficult to dispute. But now the frontier models can custom tailor explanations of technical subjects in the advanced undergraduate to graduate range. Simultaneously, I regularly catch them making what for a human of that level would be considered very odd errors in reasoning. When questioned about these inconsistencies they either display a hopeless lack of awareness or appear to attempt to deflect. They're also entirely incapable of learning from such an interaction. It feels like interacting with an empty vessel that presents an illusion of intelligence and produces genuinely useful output yet there's nothing behind the curtain so to speak.
IanCal 21 hours ago [-]
> The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.
I’m really not sure how well a typical human would do writing brainfuck. It’d take me a long time to write some pretty basic things in a bunch of those languages and I’m a SE.
tombert 19 hours ago [-]
Yes, but you also wouldn't need a corpus of hundreds of thousands of projects to crib from. If it were truly able to "reason" then conceivably it could look at a language spec and learn how to express things in terms of Brainfuck.
IanCal 8 hours ago [-]
They did for some problems. If you gave me five iterations at a problem like this in brainfuck:
> "Read a string S and produce its run-length encoding: for each maximal block of identical characters, output the character followed immediately by the length of the block as a decimal integer. Concatenate all blocks and output the resulting string.
I'd do absolutely awfully at it.
And to be clear that's not "five runs from scratch repeatedly trying it" it's five iterations so at most five attempts at writing the solution and seeing the results.
I'd also note that when they can iterate they get it right much more than "n zero shot attempts" when they have feedback from the output. That doesn't seem to correlate well with a lack of reasoning to me.
Given new frameworks or libraries, they can absolutely build things in them with some instructions or docs. So they're not just outputting previously seen things; the pattern matching is at least operating well above the level of individual words.
edit -
I play Clues by Sam, a logical reasoning puzzle. The solutions are unlikely to be available online, and in this benchmark the cutoff date for training seems to be before this puzzle launched at all.
Frankly just watching them debug something makes it hard for me to say there's no reasoning happening at all.
nananana9 22 hours ago [-]
My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.
5 years ago we thought that language was the be-all and end-all of intelligence and treated it as the most impressive thing humans do. We were wrong. We now have these models that are very good at language, but still very bad at tasks that we wrongly considered prerequisites for language.
Majromax 21 hours ago [-]
> My definition of AGI hasn't changed - it's something that can perform, or learn to perform, any intellectual task that a human can.
Wait, could you make your qualifiers specific here? Is your definition of AGI that it be able to perform/learn any intellectual task that is achievable by every human, or by any human?
Those are almost incomparably different standards. For the first, a nascent AGI would only need to perform a bit better than a "profound intellectual disability" level. For the second, AGI would need to be a real "Renaissance AGI," capable of advancing the frontiers of thought in every discipline, but at the same time every human would likely fail that bar.
svachalek 21 hours ago [-]
Your true average human is someone like your barista at Starbucks. Try giving them a good math problem, or logic puzzle, or leetcode problem if you need some reminding of the standard reasoning capabilities of our species. LLMs cannot beat the best humans at practically anything, but average humans? Average humans are a much softer target than this thread seems to think.
singpolyma3 21 hours ago [-]
Completely disagree. Inability to handle specific math or CS is a matter of training and experience, not reasoning and intelligence. The barista is quite capable of reasoning and learning feats that LLMs aren't close to.
tombert 20 hours ago [-]
Yeah, there appears to be this idea that "being smart" is the same thing as "knowing facts", which I don't think is realistic.
I know plenty of people who are considerably smarter than me, but don't know nearly as much as I do about computer science or obscure 90's video game trivia. Just because I know more facts than they do (at least in this very limited scope) doesn't mean that they're less capable of learning than I am.
As you said, a barista is very likely able to reason about and learn new things, which is not something an LLM can really do.
chromoblob 9 hours ago [-]
it's a matter of knowing the most practically important facts
jacquesm 19 hours ago [-]
I think it would be fairly easy to prove or disprove that 'AI as it is today knows more about any subject than 99% of HN'. But knowledge alone does not translate into intelligence and that's the problem: we don't have a really hard definition of what intelligence really is. There are many reasons for that (such as that it would require us to reconsider some of our past actions), but the fact remains.
So until we really once and for all nail down what intelligence is, you get this god-of-the-gaps-like problem where every time we find something that looks and feels truly intelligent by yesterday's standards, that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.
The rate of change is a factor here. Arguably the current rate of change is very high compared with two decades ago, but compared to three years ago it feels as if we're already leveling off and we're more focused on tooling and infrastructure than on intelligence itself.
Intelligence may not actually have a proper definition at all, it seems to be an emergent phenomenon rather than something that you engineer for and there may well be many pathways to intelligence and many different kinds of intelligence.
What gets me about AI so far is that it can be amazing one minute and so incredibly stupid the next that it is cringe worthy. It gives me an idiot/savant kind of vibe rather than that it feels like an actual intelligent party. If it were really intelligent I would expect it to be able to learn as much or more from the interaction and to be able to have a conversation with one party where it learns something useful to then be able to immediately apply that new bit of knowledge in all the other ones.
Humans don't need to be taught the same facts over and over again, though it may help with long term retention. We are able to reason about things based on very limited information and while we get stuff wrong - and frequently so - we usually also know quite precisely where the limits of our knowledge are, even if we don't always act like it.
To me it is one of those 'I'll know it when I see it' things, and without insulting anybody, including the baristas at Starbucks, I think it is perfectly possible to have a discussion about this and to accept that average humans all have different skills and specialties, and that some people work at Starbucks because they want to and others because they have to; it does not say anything per se about their intelligence or lack thereof. At the same time you can be IQ 140 but still dumber than a Starbucks barista on what it takes to make someone feel comfortable and how to make coffee.
fc417fc802 13 hours ago [-]
We seem to largely agree but I wanted to respond to this one bit:
> you get this god-of-the-gaps-like problem where every time we find something that looks and feels truly intelligent by yesterday's standards, that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.
It's important to distinguish between "AI" and "AGI" here. I haven't seen many objections that the frontier models of the past year or so don't qualify as AI (whatever that might or might not mean) and the ones I have seen don't seem to hold much water.
However there's a constant stream of bogus claims presenting some new feat as "AGI" upon which each time we collectively stop and revise our working definition to close the latest loophole for something that is very obviously not AGI. Thus IMO legal loophole is a more fitting description than god of the gaps.
I do think we're nearing human level in general and have already exceeded it in specific tightly constrained domains but I don't think that was ever the common understanding of AGI. Go watch 80s movies and they've got humanoid robots walking around doing freeform housework while chatting with the homeowner. Meanwhile transferring dirty laundry from a hamper to the drum remains a cutting edge research problem for us, let alone wielding kitchen knives or handling things on the stovetop.
alpaca128 17 hours ago [-]
And yet if you asked that barista if you should walk to the car wash or take your car there, they would never respond with "you should take a walk, it's healthier than driving" like almost every LLM did in a test I saw.
That is as basic as everyday reasoning gets and any human in modern society solves hundreds of problems like that every day without even thinking about it, but with LLMs it's a diceroll. Testing them with leetcode problems or logic puzzles is not going to prove much unless you first made sure none of those were in the training data to prevent pure memorization.
root_axis 22 hours ago [-]
> If you showed someone 5 years ago what our computers can do with the latest LLMs now, they would probably say it sure looks a lot like AGI.
Would they? Perhaps if you only showed them glossy demos that obscure all the ways in which LLMs fail catastrophically and are very obviously nowhere even close to AGI.
Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.
bykhun 22 hours ago [-]
> Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.
To be fair, I am pretty sure Claude Code will download and run Stockfish if you task it to play chess with you. It's not like a human who read 100 books about chess but never played would be able to play well with their eyes closed while someone whispers board positions into their ear.
root_axis 22 hours ago [-]
There are a lot of problems with this analogy, but even if you were to take a photo of the board after every move and send it to the model, it would still be unable to play competently.
singpolyma3 21 hours ago [-]
It doesn't look anything like AGI and no one who knows what that means would be confused in any era.
Is it useful? Yes. Is it as smart as a person? Not even remotely. It can't even remember things it was told 5 minutes ago. Sometimes even if they are still in the context window, uncompacted!
IanCal 21 hours ago [-]
It doesn’t need to be human level, and if I walk into a room and forget why I went in am I no longer a general intelligence?
singpolyma3 21 hours ago [-]
If it doesn't need to be human level then what are we even talking about? AGI means human level. Everything else is AI
IanCal 8 hours ago [-]
No, the big thing with AGI was that it was general. AI things we made were extremely narrow, identify things out of a set of classes or route planning or something similarly specific. We couldn't just hand the systems a new kind of task, often even extremely similar ones. We've been making superhuman level narrow AI things for many years, but for a long time even extremely basic and restricted worlds still were beyond what more general systems could do.
If LLMs are your first foray into what AI means and you were used to the term ML for everything else I could see how you'd think that, but AI for decades has referred to even very simple systems.
singpolyma3 6 hours ago [-]
If AGI doesn't mean human level then what does? As you say, every application of A* is in some way "AI", so we had this idea of "AGI" for something "actually intelligent", but maybe I'm wrong and AGI never meant that. What term does mean that?
chromacity 21 hours ago [-]
> If you showed someone 5 years ago what our computers can do with the latest LLMs now, they would probably say it sure looks a lot like AGI.
But this is a CPU! It's not a GPU / TPU. Even if you think we've achieved AGI, this is not where the matrix multiplication magic happens. It's pure marketing hype.
IanCal 21 hours ago [-]
I did AI back before it was cool and I think we have AGI. Imo the whole distinction was from extremely narrow AI to general intelligence. A classifier for engine failure can only do that - a route planner can only do that…
Now we have things I can ask a pretty arbitrary question and they can answer it. Translate, understand nuance (the multitude of ways of parsing sentences, getting sarcasm was an unsolved problem), write code, go and read and find answers elsewhere, use tools… these aren’t one trick ponies.
There are finer points to this where the level of autonomy or learning over time may be important parts to you but to me it was the generality that was the important part. And I think we’re clearly there.
AGI doesn’t have to be human level, and it doesn’t have to be equal to experts in every field all at once.
usrusr 20 hours ago [-]
An interesting perspective: general, absolutely, just nowhere near superhuman in all kinds of tasks. Not even close to human in many. But intelligent? No doubt, far beyond any expectation that wasn't entirely unrealistic.
But that seems almost like an unavoidable trade-off. Fiction about the old "AI means logic!" type of AI is full of thought experiments where the logic imposes a limitation and those fictional challenges appear to be just what the AI we have excels at.
jltsiren 20 hours ago [-]
The problem with definitions is that they are all wrong when you try to apply them outside mathematical models. Descriptive terms are more useful than normative ones when you are dealing with the real world. Their meaning naturally evolves when people understand the topic better.
General intelligence, as a description, covers many aspects of intelligence. I would say that the current AIs are almost but not quite generally intelligent. They still have severe deficiencies in learning and long-term memory. As a consequence, they tend to get worse rather than better with experience. To work around those deficiencies, people routinely discard the context and start over with a fresh instance.
dubcanada 22 hours ago [-]
A human can think logically with reason; that's not to say they are smart or smarter. But LLMs cannot. You can convince an LLM anything is correct and it will believe you. You can't convince a human of just anything.
I can't argue that LLMs don't know an absolutely insane amount of information about everything. But you can't just say LLMs are smarter than most humans. We've already decided that smartness is not about how much data you know, but about thinking about that data with logical reasoning, including the fact that it may or may not be true.
I can run an LLM through absolutely incorrect data and tell it that data is 100% true. Then ask it questions about that data and get those incorrect results as answers. That's not easy to do with humans.
hex4def6 21 hours ago [-]
That just implies LLMs are suggestible. The same is true of children. As we get older and build a more complete world model in our heads, it's harder to get us to believe things which go against that model.
Tell a 5-yr old about Santa, and they will believe it sincerely. Do the same with a 30-year old immigrant who has never heard of Santa, and I suspect you'll have a harder time.
That's not because the 5-year-old is dumber, but just because their life experience ("training data") is much more limited.
Even so, trying to convince a modern LLM of something ridiculous is getting harder. I invite you to try telling ChatGPT or Gemini that the president died a week ago and was replaced by a body-double facsimile until January 2027, so that Vance can have a full term. I suspect you'll have significant difficulty.
soperj 20 hours ago [-]
> Do the same with a 30-year old immigrant who has never heard of Santa, and I suspect you'll have a harder time.
There's a plethora of people who convert to religion at an older age, and that seems far more far fetched than Santa.
dahart 19 hours ago [-]
> There's a plethora of people who convert to religion at an older age, and that seems far more far fetched than Santa.
Being in a religion doesn’t imply belief in deities; it only implies people want social connection. This is clearly visible in global religion statistics; there are countries where the majority of people identify as belonging to a religion, and at the same time only a small minority state they believe in a “God”. Norway is a decent example that I bumped into just yesterday. https://en.wikipedia.org/wiki/Religion_in_Norway
hex4def6 20 hours ago [-]
Sure.
But I bet you'd have a significantly easier time converting a child rather than a 30/40/50-yr old to a religion.
My point is that LLMs are suggestible, perhaps more so than the average adult, but less so than a child, I suspect. I don't think suggestibility really settles the question of whether something is AGI. To me, on the contrary, it seems like to be intelligent and adaptable you need to be able to modify your world model. How easily you are fooled is a function of how mature / data-rich your existing world model is.
rootusrootus 23 hours ago [-]
> LLMs are actually smarter than the majority of humans right now
I consider myself a bit of a misanthrope but this makes me an optimist by comparison.
Even stupid people are waaaaaay smarter than any LLM.
The problem is the continued habit humans have of anthropomorphizing computers that spit out pretty words. It’s like Eliza only prettier. More useful for sure. Still just a computer.
svachalek 21 hours ago [-]
I really feel like we have not encountered the same stupid people. Most stupid people I know respond to every question with some form of will-not-attempt. What's 74 times 2? Use a calculator! Should I drive or walk to the car wash? Not my problem! How many R's in strawberry? Who cares! They'll lose to the LLM 100%.
tombert 17 hours ago [-]
The cheapest Aliexpress calculator can multiply much bigger numbers than I can in my head, and it can do it instantly. Does that mean that the calculator is “smarter” than me?
spaqin 17 hours ago [-]
That's actually proving that they indeed are smarter than LLMs - by choosing to not deal and waste time, water and energy on useless benchmarks.
bhouston 23 hours ago [-]
> Still just a computer.
I don't believe in a separation of mind and spirit. So I do think that fundamentally, outside of a reliance on quantum effects in cognition (some have theorized this, but it isn't proven), its processes can be replicated in a fashion in computers. So I think that intelligence likely can be "just a computer" in theory, and I think we are in the era where this is now true.
tombert 23 hours ago [-]
I don't believe in "spirits" from the get go. I think it's certainly theoretically possible that we could mimic human thought with a computer (quantum or otherwise) but I do not think that the LLMs we have now are doing that. I'd say that what we have right now is "just a computer".
This doesn't mean they aren't useful, I like Claude a lot, but I don't buy that it's AGI.
hermanzegerman 23 hours ago [-]
No they aren't
ChatGPT Health failed hilariously badly at just spotting emergencies.
A few weeks ago most of them failed hilariously badly at the question of whether you should drive or walk to the service station if you want to wash your car.
xp84 23 hours ago [-]
Idk about the health story, but in my use, ChatGPT has dramatically improved my understanding of my health issues and given sound and careful advice.
The second question sounds like a useless and artificial metric to judge on. The average person might miss such a “gotcha” logical quiz too, for the same reason - because they expect to be asked “is it walking distance.”
No one has ever relied on anyone else’s judgment, nor an AI, to answer “should I bring my car to the carwash.” Same for the ol’ “how many rocks shall I eat?” that people got the AI Overview tricked with.
I’m not saying anything categorically “is AGI” but by relying on jokes like this you’re lying to yourself about what’s relevant.
foobiekr 13 hours ago [-]
I have been checking organic and inorganic chemistry skills in ChatGPT Pro and it is absolutely, laughably bad. It sounds good and plausible, but it is comically wrong in so many ways.
Maybe you should think twice about whether the health issues advice it is giving you is legitimate.
hermanzegerman 22 hours ago [-]
It gave dangerous, shitty advice to patients in critical conditions.
I would accuse you of nitpicking. My experience is that LLMs are generally as smart as the average human 90+% of the time. A lack of perfection, to me, doesn't mean it isn't AGI.
phkahler 23 hours ago [-]
>> My experience is that LLMs are generally as smart as the average human 90+% of the time. A lack of perfection, to me, doesn't mean it isn't AGI.
In my experience, they contain more information than any human but they are actually quite stupid. Reasoning is not something they do well at all. But even if I skip that, they cannot learn. Inference is separate from training, so they cannot learn new things other than by trying to work with words in a context window, and even then they will only be able to mimic rather than extrapolate anything new.
It's not the lack of perfection, it's the lack of reasoning and learning.
bhouston 23 hours ago [-]
I 100% agree that learning is missing. We make up for it in SKILLS.md and README.md files and RAGs of various types. And we train the LLMs to deal with these structures.
I've seen a lot of reasoning in the latest models while engaging in agentic coding. They are often decent at debugging and experimentation, but around 30% of the time they go down wrong paths and just add unnecessary complexity via misdiagnoses.
flowardnut 23 hours ago [-]
"look, it completely lied about params that don't exist in a CLI!"
bhouston 23 hours ago [-]
AGI doesn't mean perfect. It means human like and the latest models are pretty human like in terms of their fallibility and capabilities.
jen20 21 hours ago [-]
> I would argue that LLMs are actually smarter than the majority of humans right now
This (surprisingly common) view belies a wild misunderstanding of how LLMs work.
aurareturn 24 hours ago [-]
This is just a Neoverse CPU that Arm will manufacture themselves at TSMC and then sell directly to customers.
It isn't an "AI" CPU. There is nothing AI about it. There is nothing about it that makes it more AI than Graviton, Epyc, Xeon, etc.
This was already revealed in the Qualcomm vs Arm lawsuit a few years ago. Qualcomm accused Arm of planning to sell their CPUs directly instead of just licensing. Arm's CEO at the time denied it. Qualcomm ended up being right.
This reminds me of Intel talking about faster web browsing with the new Pentium.
OJFord 18 hours ago [-]
Ha, I wasn't old or into it enough at the time to remember that, but it is consistent with just about every IC datasheet ever with their list of possible applications. (Like: logic gate; applications include Walkman, Rocket ship, Fuzzy Logic Washing Machine, mobile phone, AGI co-processor, ...)
randusername 5 hours ago [-]
A lot of this is happening.
The Dell marketing machine in particular is bludgeoning everyone that will listen about Dell AI PCs. The implication that folks will miss the boat on AI by not having a piddly NPU in their laptop is silly.
jasoneckert 23 hours ago [-]
This was exactly my first thought when I saw the title. And after reading the contents of the blog, it's pretty clear that ARM is laser-focused on getting a piece of their customers' cake by competing with them. This is likely why they are riding the AI hype train hard with their ill-suited name (AGI).
Unfortunately for them, I think hardware vendors will see past the hype. They'll only buy the platform if it is very competitively priced (i.e., much cheaper) since fortune favours long-lived platforms and organizations like Apple and Qualcomm.
lostmsu 4 hours ago [-]
It's worse, because there are actually integrated SoCs that include an NPU, which I would say are real "AI accelerators".
steve1977 24 hours ago [-]
I think the interesting bit is actually this:
> For the first time in our more than 35-year history, Arm is delivering its own silicon products
HerbManic 22 hours ago [-]
I can imagine a lot of ARM engineers, frustrated for decades at seeing their cores used in stupid ways, being glad to finally flex what they can do (outside of Apple).
bigyabai 18 hours ago [-]
I can imagine many of those ARM engineers looking at Ampere's product line and surmising that an "AGI" ARM server is like building the Hindenburg 2.
wmf 17 hours ago [-]
Meta is a guaranteed customer though.
bigyabai 16 hours ago [-]
Oracle was a guaranteed Ampere customer and ended up giving away the vCPUs for free.
joshstrange 23 hours ago [-]
Agreed, it will be _very_ interesting to see what waves this causes. It would be like TSMC deciding to make and sell their own CPUs, now ARM is directly competing with some of their clients.
jballanc 22 hours ago [-]
Eh, I'm not so sure it'll be that big a deal. The whole supply chain is so twisted and tangled all the way up and down. Shuffling out one piece doesn't seem like it will, on its own, be so major. Samsung made the chips for the iPhone, then made their own phone, then Apple designed their own chips made by TSMC, now Apple is exploring the possibility of having Samsung make those chips again.
Also, it takes a willful ignorance of history for ARM to claim this is the first time they've manufactured hardware. I mean, maaaaybe, teeeeechnically that's true, but ARM was the Acorn RISC Machine, and Acorn was in the hardware business...at least as much as Apple was for the first iPhone.
spooshspan 20 hours ago [-]
Technically right is the best kind of right … right?
I don’t think ARM Ltd have ever done a deal to deliver finished chips to a customer for production use.
They’ve made test silicon and dev. boards.
They designed arguably the first ever SoC (for Acorn) in the form of the ARM250 but Acorn bought the chips from VLSI not ARM.
Not aware of an exception to this rule until now.
steve1977 13 hours ago [-]
As I mentioned in another comment, I guess when ARM references to themselves, they mean Arm Holdings plc and not Acorn Computers. The two are of course very much related, but not the same company.
hgo 10 hours ago [-]
Can this be read as the financial incentives to join the AI silicon race finally becoming too tempting? Are the incentives to sell chips finally, definitively stronger than the cost of competing with your own licensees?
djmips 22 hours ago [-]
But really, how different is TSMC from VLSI making the ARM1? By your logic I would say that ARM has already delivered its own silicon product.
steve1977 13 hours ago [-]
Well, technically the ARM1 was an Acorn product (made by VLSI). ARM as a company was only incorporated in 1990 (as a joint venture between Acorn, VLSI and, drumroll, Apple). I guess that's where the mentioned 35 years and "first time in our history" come from.
djmips 5 hours ago [-]
The best kind of correct?
brcmthrowaway 23 hours ago [-]
Do they need to hire Design Verification engineers for this?
That's a huge cost compared to the average RTL jockey.
lizknope 22 hours ago [-]
ARM already had tons of DV engineers. No company would license the RTL or any IP unless it has already been run through millions of simulations in DV.
lenerdenator 23 hours ago [-]
What would be the real advantage of doing that?
throwa356262 1 days ago [-]
AGI = Agentic AI Infrastructure
In case you were thinking about some other abbreviation...
ux266478 1 days ago [-]
I think this is a poetic encapsulation of the AI industry at this point. A beautifully poignant vignette.
Missed opportunity to call it AAII and market it as twice as powerful as regular AI.
jayd16 23 hours ago [-]
We put AI in our AI so the AI is already baked in.
conductr 17 hours ago [-]
AI hallucinates, AAII stutters
flopsamjetsam 23 hours ago [-]
A^2I^2 or (AI)^2
kaszanka 20 hours ago [-]
Is it AGentIc ai infrastructure? Or AGentic aI infrastructure? Or AGentic ai Infrastructure?
I expected better from the people who brought us the ARM architecture, with A, R and M profiles.
RealityVoid 1 days ago [-]
It's... really something. Not good. Something.
bee_rider 24 hours ago [-]
It’s like they decided to moon all the onlookers while jumping the shark…
I don’t know if it was intentional or they were so far out over their skis that they got their bathing suit caught, but it’s impressive either way.
ww520 23 hours ago [-]
Should have called it A^3I^2 - Arm Agentic Artificial Intelligence Infrastructure.
recursivecaveat 15 hours ago [-]
I'd throw in an Inference there for the AAAIII symmetry. At a certain point it starts to just look like a scream haha.
monegator 24 hours ago [-]
what lengths are they going to, just to say we have achieved AGI... now who's moving the goalpost?
hootz 1 days ago [-]
What a terrible, terrible name.
lupajz 1 days ago [-]
I mean, they could at least use AI to figure out how to name their AI product.
embedding-shape 24 hours ago [-]
> I work at ARM, we're launching a new CPU optimized for LLM usage. We're thinking of calling it "Arm Agentic AI Infrastructure CPU", or "Arm AGI CPU" for short. Do you think this is a good idea?
> No. I would not use it as the product name. “AGI CPU” will be read as artificial general intelligence, not “agentic AI infrastructure,” so it invites confusion and sounds hypey.
Too bad these executives seemingly don't have access to ChatGPT.
_ache_ 24 hours ago [-]
They did ask AI if AGI was a great name.
It said that it was the greatest name possible. It's bold, aspirational, and ... polarizing?!
Oh god! Mistral tells me it's highly polarizing, will make a buzz, and is risky, but anyway people will know that ARM is doing CPUs again now (maybe I put in too much context).
foolproofplan 24 hours ago [-]
maybe they did, and that's why they got this slop?
esafak 24 hours ago [-]
The coast is clear to come up with your own expansion for AI!
charcircuit 1 days ago [-]
AGI stands for Artificial General Intelligence.
lock1 24 hours ago [-]
Pretty sure it stands for "Artificial abbreviation & hype GeneratIon" nowadays
monegator 14 hours ago [-]
No, it's Agenzia Giornalistica Italiana.
hagbard_c 1 days ago [-]
Are you sure it doesn't stand for Advanced Guessing Instrument? That's what the results often seem to indicate, after all.
Xunjin 18 hours ago [-]
I was thinking "Another Great Illusion".
artyom 24 hours ago [-]
Not bait at all
SilverElfin 1 days ago [-]
They pathetically don’t mention what it stands for anywhere in this press release. Deceptive marketing at worst, shameless AI-washing at best.
WhrRTheBaboons 1 days ago [-]
I would've gone for Agentic Neural Infrastructure personally
ARMANI for short /s
rafram 24 hours ago [-]
AGI (Agentic AI Infrastructure) is joining CSS (Compute Subsystems) in their lineup, apparently. Who’s naming this stuff?
LikesPwsh 24 hours ago [-]
The same people who abbreviate "generative" AI in a way that misleadingly conflates it with "general" AI.
Fraud is just the default lifestyle of marketers.
LollipopYakuza 24 hours ago [-]
So Artificial General Intelligence and Cascading Style Sheets are not joining forces?
lenerdenator 23 hours ago [-]
If there's ever a singularity as a result of AGI, it will likely look at CSS and decide that extermination is simply too good for the human race.
rafram 23 hours ago [-]
Always have been :)
mkl 1 days ago [-]
This is like naming your kid World President Smith.
> Studies 1-5 showed that people are disproportionately likely to live in places whose names resemble their own first or last names (e.g., people named Louis are disproportionately likely to live in St. Louis).
When I lived in Austin, it seemed like a third of boys born were being named Austin. I presume many of them will end up living there as adults, but not because of this particular bias; being raised there and having family there seems a more likely driver.
chrisweekly 24 hours ago [-]
"Nominative determinism" is everywhere once you look for it. My vet's last name is McStay.
krrrh 23 hours ago [-]
I just listened to an interview with Carl Trueman about his new book which criticizes transhumanism.
hn_acc1 23 hours ago [-]
Seems more likely this falls under the replication crisis umbrella. My wife's favorite numbers are my birthday (mm-dd), which is a small reason she fell in love with me. Neither of those numbers are related to her birthday. My favorite number(s) do not overlap with my birthday. Maybe my mm-dd values just aren't low enough, like 02-02?
technothrasher 23 hours ago [-]
> Studies 1-5 showed that people are disproportionately likely to live in places whose names resemble their own first or last names
There are several cities in the US that share my last name. I don't live near any of them.
> Study 6 extended this finding to birthday number preferences.
D'oh!
tombert 23 hours ago [-]
My urologist, and I swear I'm not making this up, has the last name "Wiener".
rootbear 22 hours ago [-]
My friend M. Goode’s father was a urologist named Dr. P. Goode. For real.
pixelpoet 18 hours ago [-]
Quite a coincidence, but how did you know he's Austrian?
tyushk 20 hours ago [-]
See also: Nominative determinism in hospital medicine, by orthopaedists Limb, Limb, Limb and Limb:
https://publishing.rcseng.ac.uk/doi/10.1308/147363515X141345...
RealityVoid 1 days ago [-]
Arm apparently now sells their own CPUs.
papichulo2023 1 days ago [-]
What does "Built for rack-scale agentic efficiency" even means?
throwa356262 1 days ago [-]
If you read past the marketing talk, this is basically a massively multicore system (136 cores) with significantly reduced power usage (300W).
Where does Agentic come into this? ARM's explanation is that future agentic workloads will be both CPU- and GPU-bound, thus the need for significant CPU efficiency.
ray_v 1 days ago [-]
We just say words now that sound good for marketing but have no real meaning.
girvo 22 hours ago [-]
> now
I’d argue we have always done that, and in fact it’s basically the definition of marketing!
inerte 1 days ago [-]
It's volume of tokens consumed x number of agents x rack space. Basically agentic computation density.
Lots of isolated firecracker instances for openclaw like agents.
varispeed 1 days ago [-]
It's a code sentence for let's go to the utility room to cross pollinate ideas.
r_lee 1 days ago [-]
I was gonna say it's just big-DC marketing yap, but really, wtf does that mean?
otabdeveloper4 1 days ago [-]
It's when LLM agents are inefficient that you need a whole rack of servers to get shit done.
sdwvit 1 days ago [-]
Translation: “Can you give us some money pretty please?”
JSR_FDED 22 hours ago [-]
This can’t come fast enough, I’ll finally be able to use CSS.
midnightdiesel 24 hours ago [-]
What a product name choice! I wasn’t expecting ARM to pivot to selling snake oil.
pjmlp 11 hours ago [-]
For those wanting to know more about the software stack:
> Arm is actively collaborating with leading Linux distributions from Canonical, Red Hat, and SUSE to ensure certified support for the production systems.
How fun would it be if, thanks to improved chips handling more model state, RAM needs were reduced and Sama couldn't make use of all those RAM purchases he booked?
A VC without a degree who has no grasp of hardware engineering failed up when all he had to do was noodle numbers in an Excel sheet.
He is so far behind the hardware scene he thinks it's sitting still and that RAM requirements will be a nice linear path to AGI. Not if new chips optimized for model streaming crater RAM needs.
Hilarious how last decade's software geniuses are being revealed as incompetent finance engineers whose success was all due to ZIRP offering endless runway.
gtowey 1 days ago [-]
The thing they are good at is bullshitting and selling hype. Which, as we see here, doesn't mean they are actually going to be good at running a business. Smart leaders understand they are not omnipotent and omniscient, so they surround themselves with people who know how to get things done. Weak, narcissistic leaders think they're the smartest one in the room and fail.
Unfortunately failing upwards is still somehow common, probably because the skill of parting fools from their money is still valuable.
thereitgoes456 1 days ago [-]
No, he is also good at networking. When OpenAI was mission-driven and Sam was more respected, he could convince the most talented people to work for him.
Now the talent is going to other places for a variety of reasons, not all due to Sam (one of which is little room for options to grow). However it's hard to believe his tanking reputation is not badly hurting the company. Other than Jakub and Greg, I believe there are not many top-tier people left; those in top positions are there because they are yes-men to Sam.
mhjkl 1 days ago [-]
What RAM? OpenAI booked the silicon wafers, they can print anything they want on them. I wouldn't call them "far behind" on hardware when OpenAI are actively buying Cerebras chips.
yabutlivnWoods 23 hours ago [-]
Yes exactly; he is behind in that he has to buy others' chips with little say in how they work.
Apple and Google control their own designs.
Sama is 100% an outsider, merely a customer. The chip insiders are onto his effort to pivot out of meme-stock hyping and into owning a chunk of their fiefdom. They laughed off his claims a couple years ago as insane VC gibberish (third-hand paraphrase from my social network in chip and hardware land).
No way he can pivot and print whatever. Relative to hardware industry he is one of those programmers who can say just enough to get an interview but whiffs the code challenge.
He has no idea where the bleeding edge is so he will just release dated designs. Chip IP is a moat.
Plus a bunch of RAM companies would be left hanging; no orders, no wafers. Sama risks being Jimmy Hoffa'd for imploding the asset values of other billionaires.
maxekman 8 hours ago [-]
What is “agentic AI cloud era” referring to? I honestly don't know what this buzz-speak is targeting. Running models locally on the server, for cloud workloads? Agentic is just an LLM pattern.
nananana9 8 hours ago [-]
Don't overthink it. Shut up and buy some ARM stock.
int0x29 18 hours ago [-]
This looks like an existing pre-planned product hastily rebranded as AI.
bobmcnamara 23 hours ago [-]
6GB/s/core
That's...not much right? Maybe it's a lot times N-cores? But I really hope each individual core isn't limited to that.
Edit: 17 minutes to sum RAM?
wmf 17 hours ago [-]
It's a decent amount. Cloudflare was happy to hit 3.2 GB/s/core yesterday. It is shared so cores can burst higher.
jeffbee 23 hours ago [-]
It isn't obvious to me that they intended to give this as the maximum single-core performance, or just the proportional share of 844GB/s across 136 cores. Implementations of Neoverse V2 by Nvidia and Amazon hit 20-30GB/s in single-threaded work.
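For what it's worth, the two figures are consistent with each other. A quick back-of-the-envelope check (the 844 GB/s aggregate is the number quoted in this thread, and the 6 TB memory capacity is an assumption, not something the announcement states):

    # Rough sanity check of the bandwidth figures discussed above.
    total_bw_gb_s = 844          # aggregate GB/s quoted in the thread
    cores = 136

    per_core = total_bw_gb_s / cores
    print(f"{per_core:.1f} GB/s per core")   # ~6.2 GB/s, the "6GB/s/core" figure

    # The "17 minutes to sum RAM" quip: one core streaming an assumed 6 TB.
    ram_gb = 6000                # hypothetical capacity, not from the article
    print(f"{ram_gb / 6 / 60:.1f} minutes")  # ~16.7 minutes at 6 GB/s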
moritzwarhier 23 hours ago [-]
I miss the all-capitals ARM spelling.
Seeing "Arm AGI" spelled out on a page with an "arm" logo looks slightly cheesy.
But maybe it's actually a good fit for the societal revolution driven by AGI, comparable to the one driven by the DOT.com RevoLut.Ion. (dot com).
Anyways, it sounds like an A.R.M. branded version of the AppleSilicon revolution?
But maybe that's just my shallow categorization.
als0 21 hours ago [-]
I also miss the all-capitals ARM spelling. I think they've never been the same since they changed that; around the same time their business strategy went from sensible to nonsense.
pixelpoet 18 hours ago [-]
It's an acronym (like Nasa), not an initialism (like the NSA). I think it might be a British English thing.
The TDP to memory bandwidth & capacity ratio for these blades is in a class of its own, yes?
rapatel0 20 hours ago [-]
RISC-V will start making more waves now
mghackerlady 11 hours ago [-]
Yep, smart people will jump ship since having a competitor control your product is not an amazing idea
kylehotchkiss 2 hours ago [-]
"The I is for IPO" :D
ahmedfromtunis 23 hours ago [-]
Poor TSMC (and ASML)! They were already struggling with capacity to fulfill orders from their established customers. With ARM now joining the party, I don't know how they're going to cope.
Edit: The new CPU will be built with the soon-to-be-former leading edge process of 3nm lithography.
bigyabai 23 hours ago [-]
TSMC has multiple fabs being constructed, they'll be okay. The biggest losers here are AMD, Intel and Apple who will be forced to pay AI-hype prices to mass-produce boring consumer hardware.
mattfrommars 19 hours ago [-]
Hmm, all my experience with using AI has been mostly about VRAM. I haven't experienced any bottleneck on the CPU side. What does this chip offer over Intel or Apple Silicon? Any expert here know what it is?
arrty88 19 hours ago [-]
The Arm family of chips (Apple A series, M series, and Qualcomm Snapdragon) is better on energy usage (thus battery life), performance, and design compared to many x86-style chips (Intel, AMD).
Time will tell if ARM's own CPU is on par with or better than Apple's ARM-based chips.
wang_pp8 10 hours ago [-]
If rich people are this stupid then they deserve to be parted with their cash
Interesting that Jensen Huang joined in the congratulations for this new product!
foobiekr 13 hours ago [-]
More AI bullshit and hype is good for Nvidia. Until it isn't.
I no longer believe this is like the dotcom. Now it feels like the 1983 video game crash.
zackmorris 22 hours ago [-]
It only took a quarter century, but I'm glad that somebody is finally adding a little multicore competition since Moore's law began failing in the mid-2000s.
I looked around a bit, and the going rate appears to be about $10,000 per 64 cores, or around $150 per core. Here is an Intel Xeon Platinum 8592+ 64 Core Processor with 61 billion transistors:
So that's about 6 million transistors per dollar, or 1 billion transistors for roughly $165.
It looks like Arm's 136-core Neoverse V3 has between 150 and 200 billion transistors, so at that rate it should cost around $25,000-33,000. Each blade has 2 of those chips, so should be around $50,000-66,000 for compute. It doesn't say how much memory the blades come with, but that's a secondary concern.
Note that this is way too many cores for 1 bus, since by Amdahl's law, more than about 4-8 cores per bus typically results in the remaining cores getting wasted. Real-world performance will be bandwidth-limited, so I would expect a blade to perform about the same as a 16-64 core computer. But that depends on mesh topology, so maybe I'm wrong (AI thinks I might be):
Intel Xeon Scalable: Switched from a Ring to a Mesh Architecture starting with Skylake-SP to handle higher core counts.
Arm Neoverse V3 / AGI: Uses the Arm CMN-700 (Coherent Mesh Network), which is a high-bandwidth 2D mesh designed specifically to link over 100 cores and multiple memory controllers.
I find all of this to be somewhat exhausting. We're long overdue for modular transputers. I'm envisioning small boards with 4-16 cores between 1-4 GHz and 1-16 GB of memory approaching $100 or less with economies of scale. They would be stackable horizontally and vertically, to easily create clusters with as many cores as one desires. The cluster could appear to the user as an array of separate computers, a single multicore computer running in a unified address space, or various custom configurations. Then libraries could provide APIs to run existing 3D, AI, tensor and similar SIMD code, since it's trivial to run SIMD on MIMD but very challenging to run MIMD on SIMD. This is similar to how we often see Lisp runtimes written in C/C++, but never C/C++ runtimes written in Lisp.
It would have been unthinkable to design such a thing even a year ago, but with the arrival of AI, that seems straightforward, even pedestrian. If this design ever manifests, I do wonder how hard it would be to get into a fab. It's a chicken and egg problem, because people can't imagine a world that isn't compute-bound, just like they couldn't imagine a world after the arrival of AI.
Edit: https://news.ycombinator.com/item?id=47506641 has Arm AGI specs. Looks like it has DDR5-8800 (12x DDR5 channels) so that's just under 12 cores per bus, which actually aligns well with Amdahl's law. Maybe Arm is building the transputer I always wanted. I just wish prices were an order of magnitude lower so that we could actually play around with this stuff.
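Those numbers do check out, assuming standard 64-bit (8 bytes per transfer) DDR5 channels, which the spec excerpt doesn't spell out:

    # DDR5-8800 across 12 channels, assuming 8-byte (64-bit) channels.
    transfers_per_s = 8.8e9
    bytes_per_transfer = 8
    channels = 12

    total_gb_s = transfers_per_s * bytes_per_transfer * channels / 1e9
    print(f"{total_gb_s:.0f} GB/s")                   # ~845 GB/s aggregate
    print(f"{136 / channels:.1f} cores per channel")  # ~11.3, "just under 12"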
pixelpoet 18 hours ago [-]
Amdahl's law is about the maximum speedup obtainable from parallelism, not balancing memory bandwidth with compute.
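For reference, Amdahl's law in its usual form only bounds the speedup by the serial fraction of the work; a minimal sketch:

    # Amdahl's law: speedup is capped by the serial fraction, regardless of
    # core count, and says nothing directly about memory bandwidth.
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # A 95%-parallel workload tops out at 20x no matter how many cores you add:
    for n in (8, 64, 136, 10**6):
        print(n, round(amdahl_speedup(0.95, n), 1))  # 5.9, 15.4, 17.5, 20.0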
SilverElfin 1 days ago [-]
Calling this an “AGI CPU” just feels like the most out-of-touch, terrible marketing possible. Maybe this is unfair, but it makes me think ARM as a whole is incompetent just because it is so tasteless.
> Arm has additionally partnered with Supermicro on a liquid-cooled 200kW design capable of housing 336 Arm AGI CPUs for over 45,000 cores.
Also just bad timing on trying to brag about a partnership with Supermicro, after a founder was just indicted on charges of smuggling Nvidia GPUs. Just bizarre to mention them at all.
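Setting the Supermicro news aside, the rack arithmetic in that quote does add up. A quick check (note the watts-per-core figure below divides the entire 200 kW rack budget, so it is only a rough upper bound per core):

    # Checking the quoted Supermicro rack numbers.
    cpus_per_rack = 336
    cores_per_cpu = 136  # the Arm AGI part discussed in this thread

    cores_per_rack = cpus_per_rack * cores_per_cpu
    print(cores_per_rack)                       # 45696, i.e. "over 45,000 cores"
    print(f"{200_000 / cores_per_rack:.1f} W")  # ~4.4 W of rack budget per core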
rvz 1 days ago [-]
Meta are heavily invested in building their own chips with ARM to reduce their reliance on Nvidia as everyone is going after their (Nvidia) data center revenues.
This is why Meta acquired a chip startup for this reason [0] months ago.
AGI will just become the new "Smart Phone" or "Smart Car" losing all meaning.
bhewes 21 hours ago [-]
Yeah, dumb name, but we will still use these; we have been using Ampere in our office.
vsgherzi 23 hours ago [-]
is this a cpu that's meant for AI training or is it more for serving inference? I don't quite get why I would want to buy an arm CPU over a nvidia GPU for ai applications.
Azantys 20 hours ago [-]
It is for orchestrating inference/creating firecracker instances for agents etc. It doesn't have anything to do with actual AI usage.
vsgherzi 19 hours ago [-]
Interesting thanks
23 hours ago [-]
myhf 23 hours ago [-]
finally, a CPU capable of making API calls to cloud providers
creantum 20 hours ago [-]
Agl? @gi? Heck if we can’t compete we’ll confuse!
snvzz 14 hours ago [-]
ARM must be feeling the heat from all those RISC-V AI startups.
varenc 22 hours ago [-]
"AGI" continues to lose all meaning.
torusle 23 hours ago [-]
ARM riding the "everything is AI" train.
So sad.
wewewedxfgdf 22 hours ago [-]
Seems like hubris to use this name.
oxag3n 23 hours ago [-]
Why not ASI? They aim too low.
nektro 20 hours ago [-]
arm what we want is an arm chip that can rival m-series not this
wmf 17 hours ago [-]
We have C1 Ultra at home.
nurettin 1 days ago [-]
I was wondering who convinced ARM to manufacture hardware. Turns out it was Meta.
cmrdporcupine 24 hours ago [-]
Now if only they would go back to being "Acorn RISC Machines" and make a nice desktop home computer again...
One can dream.
wmf 23 hours ago [-]
DGX Spark is pretty nice. It could be cheaper if they removed the NIC though.
cmrdporcupine 23 hours ago [-]
I have the ASUS variant. I like it well enough.
I see the NIC as a form of future proofing, but we'll see.
My Ryzen 9 mini-PC from 2 years ago outperforms this thing in raw CPU, though.
mghackerlady 11 hours ago [-]
I hate RISC OS architecturally, but if they made a new Archimedes or whatever that ran it I'd buy it
walterbell 24 hours ago [-]
Nuvia/Qualcomm lawsuit and Softbank.
redwood 1 days ago [-]
Fabless, like AMD and Nvidia. So I would think about it more as branding and packaging than manufacturing.
anvuong 1 days ago [-]
Huh, many companies use TSMC (in fact, probably all of them, including Intel), yet only a few dominate in performance. There is much more to designing chips than what you just listed.
i_am_a_peasant 23 hours ago [-]
Intel uses its own fabs for certain IP, TSMC for others, yeah. As far as I've seen, the latest and greatest Panther Lake stuff is made in Intel's Arizona fabs.
IshKebab 24 hours ago [-]
There's a big difference between just providing IP and actually doing the physical design, manufacturing and packaging. You can't just send your RTL to TSMC and magically get packaged chips back.
I haven't ever ordered an ARM SoC but I also wouldn't be surprised if there were significant parts that they left up to integrators before - PLLs, pads, SRAM etc.
22 hours ago [-]
einpoklum 23 hours ago [-]
If I try to cut through the hype, it seems the main features of this processor, or rather processor + memory controller + system architecture, are < 100 ns for accessing anything in system memory and 6 GB/s for each of a largish number of cores, so a (much?) higher overall bandwidth than what we would see in a comparable Intel x86_64 machine.
Am I right or am I misunderstanding?
wmf 17 hours ago [-]
It's the same memory bandwidth as Intel and moderately higher than AMD.
einpoklum 10 hours ago [-]
Even if you get the 136 cores or whatever?
adrian_b 2 hours ago [-]
AMD's old CPUs (to be replaced by the end of the year) have 192 cores per socket, where each core is significantly faster than Neoverse V3.
The latest Intel server CPU, Clearwater Forest, uses Darkmont cores that have approximately the same performance, cost and power consumption as Neoverse V3, but Intel provides 288 cores per socket and 576 cores per board.
Even supposing that Intel Xeons would be used in relatively big 2U servers, that still provides at least 50% more cores per rack than these new Arm AGI CPUs.
The claim of Arm that they provide better performance per rack is false. They must have compared their new CPUs with some antique Intel Granite Rapids Xeon CPUs, instead of comparing with state-of-the-art Intel and AMD CPUs, which offer much more performance per rack than the new Arm AGI.
jeffbee 24 hours ago [-]
Many of these words are unexplained. "Memory and I/O on the same die". Oh? What does this mean? All of the DRAM in the photo/render is still on sticks. Do they mean the memory controller? Or is there an embedded DRAM component?
ahoka 24 hours ago [-]
All processors have memory on the same die.
jeffbee 23 hours ago [-]
How much, what kind, and what is your source?
All mainstream server CPUs have a megabyte or two of SRAM on a core, of course.
ahoka 9 hours ago [-]
Exactly. :-)
rsynnott 21 hours ago [-]
I feel like this is one of the things that people will look back on as the peaking of the bubble.
Like, c’mon, this is ridiculous.
checker659 14 hours ago [-]
Why?
DeathArrow 23 hours ago [-]
Now every product will have the AI buzzword in its name, just like 25 years ago when product names started with the letter e, for electronic.
So we will see AI Toilet Paper launching in the coming months.
10 hours ago [-]
felixagentai 6 hours ago [-]
[dead]
skillflow_ai 19 hours ago [-]
[dead]
surcap526 8 hours ago [-]
[dead]
unit149 3 hours ago [-]
[dead]
vova_hn2 1 days ago [-]
I found this article extremely frustrating to read. Maybe I lack some required prior knowledge and I am not the target audience for this.
> built on the Arm Neoverse platform
What the heck is "Arm Neoverse"? No explanation given, link leads to website in Chinese. Using Firefox translating tool doesn't help much:
> Arm Neoverse delivers the best performance from the cloud to the edge
What? This is just a pile of buzzwords, it doesn't mean anything.
The article doesn't seem to contain any information on how much it costs or any performance benchmarks to compare it with other CPUs. It's all just marketing slop, basically.
nicoburns 1 days ago [-]
> The ARM Neoverse is a group of 64-bit ARM processor cores licensed by Arm Holdings. The cores are intended for datacenter, edge computing, and high-performance computing use. The group consists of ARM Neoverse V-Series, ARM Neoverse N-Series, and ARM Neoverse E-Series.
More precisely, this Neoverse V3 core is the server version of the Cortex-X4 core from smartphones. The actual core is pretty much identical, but the cache memories and the interfaces between cores are different.
Neoverse V3 is also used in AWS Graviton5 and in several NVIDIA products.
adrian_b 2 hours ago [-]
You should look at the benchmarks of the Cortex-X4 cores used in many smartphones from 2 years ago, because it is the same core as Neoverse V3.
AWS Graviton5 uses the same cores, but it has 192 cores per socket.
So Graviton5 has more cores per socket, but I think that it does not support dual socket boards.
This Arm AGI supports dual-socket boards, so it provides 272 cores per board, more than a Graviton5 board.
However, this is puny in comparison with Intel Clearwater Forest, which provides 576 cores per board, and the Intel Darkmont cores are almost exactly equivalent for all characteristics with Arm Neoverse V3.
snek_case 1 days ago [-]
I feel like this is most products in the AI space lately. More marketing fuzz than substance. Hard to figure out what the thing even does.
creantum 20 hours ago [-]
Well that explains it, the guy in charge is a wad.
WiFi operates in the 2.4, 5, 6GHz bands, but those frequency bands are not used to differentiate WiFi standards because you can mix and match WiFi 6/7 on all three bands.
There are also more WiFi bands below 2.4 and above 6GHz, but they're not common worldwide.
https://youtube.com/watch?v=GaD8y-CGhMw
Thanks for the trip down memory lane.
If you invest money so mindlessly that you don’t even check what you buy, then no legislation in the world will manage to protect you from your own mind.
While AArch64 represents the technical revolution they needed, their business compass has been gone ever since he stepped down. This grimy stuff, and as others noted competing with your own customers, were no-gos in the earlier era.
source: 100% personal certainty
Doesn't seem like a very credible assertion. Picking stocks in this way would remove you from the market pretty quickly.
This seems more like calling your spaceship company, I dunno, “Interplanetary Passengers” or something.
In this case it's a word that means the thing we're all developing towards, apparently, but that no one actually knows how to get or even how to measure whether or not we've already gotten it, and no one really knows what will happen when it's achieved, if it hasn't already been.
It's a bit like an even wackier more-corporate version of The Quest for the Holy Grail.
And the honest one true test for "is it a buzzword?": Did a corporate group brand a flagship with it?
"RISC architecture is going to change everything!"
Does an iced tea company changing their name to Long Blockchain make any sense? No, not really, it's pretty stupid actually, but it managed to bump the stock by apparently 380%.
The stock market can be pretty dumb sometimes. Let's not forget the weird GME bubble.
GME was hardly a trick either. If you actually read the subreddits at the time they were all perfectly aware of the nature of the thing. They literally go around calling it degenerate behavior (i.e. risky, frothy, baseless).
Why is the assumption that you are smarter than everyone else? That you can interpret the world but everyone else needs protecting?
I do think I know more than the average person about computers. Probably most people on this forum can say that. People who know about computers are more likely to be able to smell bullshit with a name like AGI. It’s not that I am smarter, I wouldn’t be able to call bullshit with anything involving chemistry or physics.
I think, like Long Blockchain, ARM is abusing that world’s collective computer illiteracy and trying to harvest investor money in the process. Clearly this has worked once, as was the case of Long Blockchain.
> People invest in sentiment, in momentum, in all kinds of second order effects.
Yep! And this is why it is wrong for corporations to put out incorrect or misleading statements, as it creates a sentiment that is not realistic. This can then propagate in the form of the stock price not being realistic.
It's different for them to toss out a bet on the basis of 'other people will think this is AGI, I should buy it in anticipation of that' or even 'other people will think other people will think this is AGI, I should buy in anticipation of that'.
People playing the Keynesian beauty contest are not, to me, naive participants in the market getting scammed by a company adding 'AGI' to a product.
The idea that the first-order person exists in any great number is just so insulting to the average person's intelligence that it's hard not to read it in a paternalistic tone.
The CUBA ticker shot up in value after Obama lifted sanctions on Cuba, despite the fact that company doesn't invest in any Cuba companies. People will invest in things just based on a name. https://acrinv.com/silly-true-market-anomaly/
The average person generally doesn't know a lot about anything other than the specific niche that they do for a living. This isn't a dig at their intelligence, or at least I'm not excluding myself. I know a fair bit about computer science, but only a very lay person's understanding of basically everything else.
For example, I know nothing about electric or hydrogen powered cars, so I wasn't able to call bullshit with the Nikola scam a few years ago. I fortunately didn't buy any Nikola stock, but that wasn't because of any insight on my end, just didn't buy it. I am very glad that people who do know about this kind of stuff call it out when companies lie to potential investors.
Right but it doesn't follow from this that those people were tricked in some way. They can be second- or third-order bettors. Even the most sophisticated quant shop in the world, the literal sharpest players in the market, can bet 'just based on a name' if it fits into some theory about market dynamics or whatever.
> The average person generally doesn't know a lot about anything other than the specific niche that they do for a living.
But so what, it doesn't follow that because they don't know about X they are willing to trivially gamble significant amounts of money on X without even the most basic of research. "I don't know much about this so won't place a bet I'm not willing to lose" is not something that requires any great intelligence.
(Disclosure: I am a casual investor in ARM.)
I'm not saying anything is going to happen, ARM holdings has a lot more money and lawyers than Long Blockchain did, but I'm just saying that it's not weird to think that a deceptive name could be considered false advertising.
This isn't just a crass joke or a pun, it's outright deception. I'm not a lawyer, maybe it wouldn't hold up in court, but you cannot convince me that they aren't doing this on purpose.
Why isn't there a minority shareholder lawsuit on the news because someone bought MSFT not realizing that Copilot isn't actually certified to fly an airliner? A certain type of people would likely just buy MSFT on a massive lever and then if the bet fails to work out sue pretending that they did not understand.
People have been hearing for the last three years about how a specific acronym, "AGI", is the final frontier of artificial intelligence and how it's going to change the entire economy around it. They've been hearing about this quasi-theoretical, very specific thing, and a lot of them don't even know what the "G" stands for.
People haven't been hearing for years about a mythical "copilot", and as such I think people are much more likely to think it's not anything more than a cute nickname.
Are you suggesting that this is just a coincidence? The acronym AGI doesn't even make sense for Agentic AI Infrastructure, which should be AAII; they're clearly calling it AGI to mislead people. I refuse to think that the people running Arm are so stupid that they didn't even Google the acronym before releasing the chip.
You think it's a "comical misinterpretation", but I don't think it is. When I saw the article, I thought "shit; did they manage to crack AGI?", and I clicked the article and was disappointed. I suspect a lot of people aren't even going to read the press release.
It's those out of the industry who call them lies.
No. For it to be securities fraud, Arm would need to make a materially false statement of fact that misleads investors. Naming the CPU in this way doesn't clear the bar because:
a) the name is clearly a product brand (similar to how macOS Lion, or Microsoft Windows, or Ford Mustang, or Yves Saint Laurent Black Opium don't mean literally what they say)
b) Arm explicitly defines it as silicon "designed to power the next generation of AI infrastructure", with the technical specs fully disclosed
c) sophisticated investors, the relevant standard for securities fraud, can read a spec sheet
d) Arms' EVP said "We think that the CPU is going to be fundamental to ultimately achieving AGI", framing it as contribution towards AGI, not AGI itself
> No. For it to be securities fraud, Arm would need to make a materially false statement of fact that misleads investors. Naming the CPU in this way doesn't clear the bar because:...
The EVP statement doesn't say "our CPU does AGI", sure, but is it unfair to suggest it makes some form of AGI claim, which isn't there from the naming alone?
It's no longer your point A) "clearly product brand" if the established usage of the term "AGI" comes out of the EVP's mouth.
And yes, their (albeit very vague) claim is clearly wrong IMHO.
And no, it's not "a lie", because only an utter idiot would consider a product name an actual fact. It's a name. The Hopper GPUs also didn't ship with a lifesize cutout of Grace Hopper.
People have been seeing every big AI company talk about how AGI is the holy grail of AI, and how they're all trying to reach it. Arm naming a chip AGI is clearly meant to make casual observers think they cracked AGI.
The Hopper GPU isn't the same, because Nvidia isn't actively trying to make people think that it includes a lifesize cutout of Grace Hopper. Not a dig on her, but most people don't know who Grace Hopper is, people haven't been hearing on the news for the last several years about how having a Grace Hopper is going to make every job irrelevant.
We have to keep defining AGI upwards or nitpick it to show that we haven't achieved it.
I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
We don't have clear ASI yet, but we definitely are in an AGI era.
I think we are missing an ego/motivations in the AGI, and them having self-sufficiency independent of us, but that is just a bit of engineering that would actually make them more dangerous; it isn't really a significant scientific hurdle.
ETA:
You updated your comment, which is fine but I wanted to reply to your points.
> I would argue that LLMs are actually smarter than the majority of humans right now. LLMs do not have quite the agency that humans have, but their intelligence is pretty decent.
I would actually argue that they are decidedly not smarter than even dumb humans right now. They're useful but they are glorified text predictors. Yes, they have more individual facts memorized than the average person but that's not the same thing; Wikipedia, even before LLMs also had many more facts than the average person but you wouldn't say that Wikipedia is "smarter" than a human because that doesn't make sense.
Intelligence isn't just about memorizing facts, it's about reasoning. The recent Esolang benchmarks indicate that these LLMs are actually pretty bad at that.
> We don't have clear ASI yet, but we definitely are in a AGI-era.
Nah, not really.
There is a long history of people arguing that intelligence is actually the ability to predict accurately.
https://www.explainablestartup.com/2017/06/why-prediction-is...
> Intelligence isn't just about memorizing facts, it's about reasoning.
Initially, LLMs were basically intuitive predictors, but with chain of thought and, more recently, agentic experimentation, we do have reasoning in our LLMs that is quite human-like.
That said, there is definitely a bias towards training-set material, but that is also the case with the large majority of humans.
For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?
I am pretty confident that we are in the AGI era. It is unsettling, and I think it gives people cognitive dissonance, so we want to deny it and nitpick it, etc.
That page describes a few recent CS people in AI arguing that intelligence is being able to predict accurately, which is like carpenters declaring all problems can be solved with a hammer.
AI "reasoning" is human-like in the sense that it is similar to how humans communicate reasoning, but that's not how humans mentally reason.
Like my father before me, I've also gotten old enough to realize that some subset of people out there also behave like they are scripted by the same writers' group and production rules. I fear for the future where LLMs are on an equal footing because we choose to mimic them.
There sure is, and in psychological circles it appears there's an argument that that is not the case.
https://gwern.net/doc/psychology/linguistics/2024-fedorenko....
> Initially, LLMs were basically intuitive predictors, but with chain of thought and more recently agentic experimentation, we do have reasoning in our LLMs that is quite human like.
If you handwave the details away, then sure it's very human like, though the reasoning models just kind of feed the dialog back to itself to get something more accurate. I use Claude code like everyone else, and it will get stuck on the strangest details that humans actively wouldn't.
> For the Esolang benchmarks, I would be curious how much adding a SKILLS.md file for each language would boost performance?
Tough to say since I haven't done it, though I suspect it wouldn't help much, since there's still basically no training data for advanced programs in these languages.
> I am pretty confidence that we are in the AGI era. It is unsettling and I think it gives people cognitive dissonance so we want to deny it and nitpick it, etc.
Even if you're right about this being the AGI era, that doesn't mean that current models are AGI, at least not yet. It feels like you're actively trying to handwave away details.
Much of our reasoning is based on stimulating our sensory organs, either via imagination (self-stimulation of our visual system) or via subvocalization (self-stimulation of our auditory system), etc.
> it will get stuck on the strangest details that humans actively wouldn't.
It isn't a human. It is AGI, not HGI.
> It feels like you're actively trying to handwave away details.
Maybe. I don't think so though.
Personally, I've used LLMs to debug hard-to-track code issues and AWS issues among other things.
Regardless of whether that was done via next-token prediction or not, it definitely looked like AGI, or at least very close to it.
Is it infallible? Not by a long shot. I always have to double-check everything, but at least it gave me solid starting points to figure out said issues.
It would've taken me probably weeks to find out without LLMs, instead of the 1 or 2 hours it did.
In that context, I have a hard time imagining what a "real" AGI system would look like that isn't the current one.
Not saying current LLMs are unequivocally AGI, but they are darn close for sure IMO.
Being able to actually reason about things without exabytes of training data would be one thing. Hell, even with exabytes of training data, doing actual reasoning for novel things that aren't just regurgitating things from Github would be cool.
Being able to learn new things would be another. LLMs don't learn; they're a pretrained model (it's in the name of GPT), that send in inputs and get an output. RAGs are cool but they're not really "learning", they're just eating a bit more context in order to kind of give a facsimile of learning.
Going to the extreme of what you're saying, then `grep` would be "darn close to AGI". If I couldn't grep through logs, it might have taken me years to go through and find my errors or understand a problem.
I think that they're ultimately very neat, but ultimately pretty straightforward input-output functions.
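To make the RAG point above concrete, here is a minimal sketch of the pattern (toy keyword retrieval standing in for a real embedding search; the documents and helper names are made up for illustration). Nothing is learned; retrieved text is just prepended to the prompt on every call:

    # Minimal RAG sketch: "memory" is just retrieved text stuffed into context.
    DOCS = [
        "The Arm AGI CPU has 136 Neoverse V3 cores.",
        "DDR5-8800 across 12 channels gives roughly 845 GB/s.",
    ]

    def retrieve(query: str, k: int = 1) -> list[str]:
        # Naive word-overlap scoring instead of embedding similarity.
        words = set(query.lower().split())
        score = lambda d: len(words & set(d.lower().split()))
        return sorted(DOCS, key=score, reverse=True)[:k]

    def build_prompt(question: str) -> str:
        context = "\n".join(retrieve(question))
        return f"Context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("How many cores does the AGI CPU have?"))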
Well, I guess you lose artificial if there’s a human brain hidden in the box.
Why is it that LLMs could ace nearly every written test known to man, but need specialized training in order to do things like reliably type commands into a terminal or competently navigate a computer? A truly intelligent system should be able to 0-shot those types of tasks, or in the absolute worst case 1-shot them.
I’m really not sure how well a typical human would do writing brainfuck. It’d take me a long time to write some pretty basic things in a bunch of those languages and I’m a SE.
> "Read a string S and produce its run-length encoding: for each maximal block of identical characters, output the character followed immediately by the length of the block as a decimal integer. Concatenate all blocks and output the resulting string.
I'd do absolutely awfully at it.
And to be clear, that's not "five runs from scratch repeatedly trying it"; it's five iterations, so at most five attempts at writing the solution and seeing the results.
I'd also note that when they can iterate they get it right much more than "n zero shot attempts" when they have feedback from the output. That doesn't seem to correlate well with a lack of reasoning to me.
Give them new frameworks or libraries and they can absolutely build things in them with some instructions or docs. So they're not just outputting previously seen things verbatim; it's at least much more pattern-based than word-based.
edit -
I play clues by sam, a logical reasoning puzzle. The solutions are unlikely to be available online, and in this benchmark the cutoff date for training seems to be before this puzzle launched at all:
https://www.nicksypteras.com/blog/cbs-benchmark.html
Frankly just watching them debug something makes it hard for me to say there's no reasoning happening at all.
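For context on the run-length-encoding task quoted a few comments up: in a mainstream language it is a few lines, which is presumably why the benchmark uses esolangs; the notation, not the problem, is what's being tested. A sketch in Python:

    from itertools import groupby

    def run_length_encode(s: str) -> str:
        # For each maximal block of identical characters, emit the character
        # followed by the block length as a decimal integer.
        return "".join(f"{ch}{len(list(block))}" for ch, block in groupby(s))

    assert run_length_encode("aaabccdddd") == "a3b1c2d4"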
5 years ago we thought that language is the be-all and end-all of intelligence and treated it as the most impressive thing humans do. We were wrong. We now have these models that are very good at language, but still very bad at tasks that we wrongly considered prerequisites for language.
Wait, could you make your qualifiers specific here? Is your definition of AGI that it be able to perform/learn any intellectual task that is achievable by every human, or by any human?
Those are almost incomparably different standards. For the first, a nascent AGI would only need to perform a bit better than a "profound intellectual disability" level. For the second, AGI would need to be a real "Renaissance AGI," capable of advancing the frontiers of thought in every discipline, but at the same time every human would likely fail that bar.
I know plenty of people who are considerably smarter than me, but don't know nearly as much as I do about computer science or obscure 90's video game trivia. Just because I know more facts than they do (at least in this very limited scope) doesn't mean that they're less capable of learning than I am.
As you said, a barista is very likely able to reason about and learn new things, which is not something an LLM can really do.
So until we really, once and for all, nail down what intelligence is, you get this god-of-the-gaps-like problem where every time we find something that looks and feels truly intelligent by yesterday's standards, that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.
The rate-of-change is a factor here. Arguably the current rate of change is very high compared to with two decades ago, but compared to three years ago it feels as if we're already leveling off and we're more focused on tooling and infrastructure than on intelligence itself.
Intelligence may not actually have a proper definition at all, it seems to be an emergent phenomenon rather than something that you engineer for and there may well be many pathways to intelligence and many different kinds of intelligence.
What gets me about AI so far is that it can be amazing one minute and so incredibly stupid the next that it is cringe worthy. It gives me an idiot/savant kind of vibe rather than that it feels like an actual intelligent party. If it were really intelligent I would expect it to be able to learn as much or more from the interaction and to be able to have a conversation with one party where it learns something useful to then be able to immediately apply that new bit of knowledge in all the other ones.
Humans don't need to be taught the same facts over and over again, though it may help with long term retention. We are able to reason about things based on very limited information and while we get stuff wrong - and frequently so - we usually also know quite precisely where the limits of our knowledge are, even if we don't always act like it.
To me it is one of those 'I'll know it when I see it' things, and without insulting anybody, including the baristas at Starbucks, I think it is perfectly possible to have a discussion about this and to accept that average humans all have different skills and specialties, and that some people work at Starbucks because they want to and others because they have to; it does not say anything per se about their intelligence or lack thereof. At the same time you can be IQ 140 but still dumber than a Starbucks barista on what it takes to make someone feel comfortable and how to make coffee.
> you get this god-of-the-gaps like problem where everytime we find something that looks and feels truly intelligent by yesterday's standards that intelligence will be crammed into a slightly smaller space excluding the thing that just became possible.
It's important to distinguish between "AI" and "AGI" here. I haven't seen many objections that the frontier models of the past year or so don't qualify as AI (whatever that might or might not mean) and the ones I have seen don't seem to hold much water.
However there's a constant stream of bogus claims presenting some new feat as "AGI" upon which each time we collectively stop and revise our working definition to close the latest loophole for something that is very obviously not AGI. Thus IMO legal loophole is a more fitting description than god of the gaps.
I do think we're nearing human level in general and have already exceeded it in specific tightly constrained domains but I don't think that was ever the common understanding of AGI. Go watch 80s movies and they've got humanoid robots walking around doing freeform housework while chatting with the homeowner. Meanwhile transferring dirty laundry from a hamper to the drum remains a cutting edge research problem for us, let alone wielding kitchen knives or handling things on the stovetop.
That is as basic as everyday reasoning gets and any human in modern society solves hundreds of problems like that every day without even thinking about it, but with LLMs it's a diceroll. Testing them with leetcode problems or logic puzzles is not going to prove much unless you first made sure none of those were in the training data to prevent pure memorization.
Would they? Perhaps if you only showed them glossy demos that obscure all the ways in which LLMs fail catastrophically and are very obviously nowhere even close to AGI.
Certainly, they wouldn't expect that an AI able to score 150 on an IQ test is unable to play a casual game of chess because it isn't coherent enough to play without making illegal moves.
To be fair, I am pretty sure Claude Code will download and run stockfish, if you task it to play chess with you. It's not like a human who read 100 books about chess, but never played, would be able to play well with their eyes closed, and someone whispering board position into their ear
Is it useful? Yes. Is it as smart as a person? Not even remotely. It can't even remember things it was already told 5 minutes ago. Sometimes even if they are still in the context window, uncompacted!
If LLMs are your first foray into what AI means and you were used to the term ML for everything else I could see how you'd think that, but AI for decades has referred to even very simple systems.
But this is a CPU! It's not a GPU / TPU. Even if you think we've achieved AGI, this is not where the matrix multiplication magic happens. It's pure marketing hype.
Now we have things I can ask a pretty arbitrary question and they can answer it. Translate, understand nuance (the multitude of ways of parsing sentences, getting sarcasm was an unsolved problem), write code, go and read and find answers elsewhere, use tools… these aren’t one trick ponies.
There are finer points to this where the level of autonomy or learning over time may be important parts to you but to me it was the generality that was the important part. And I think we’re clearly there.
AGI doesn’t have to be human-level, and it doesn’t have to be equal to experts in every field all at once.
But that seems almost like an unavoidable trade-off. Fiction about the old "AI means logic!" type of AI is full of thought experiments where the logic imposes a limitation and those fictional challenges appear to be just what the AI we have excels at.
General intelligence, as a description, covers many aspects of intelligence. I would say that the current AIs are almost but not quite generally intelligent. They still have severe deficiencies in learning and long-term memory. As a consequence, they tend to get worse rather than better with experience. To work around those deficiencies, people routinely discard the context and start over with a fresh instance.
I can't argue that LLMs do not know an absolutely insane amount of information about everything. But you can't just say LLMs are smarter than most humans. We've already decided that smartness is not about how much data you know, but about thinking about that data with logical reasoning. Including the fact it may or may not be true.
I can run a LLM through absolutely incorrect data, and tell it that data is 100% true. Then ask it questions about that data and get those incorrect results as answers. That's not easy to do with humans.
Tell a 5-yr old about Santa, and they will believe it sincerely. Do the same with a 30-year old immigrant who has never heard of Santa, and I suspect you'll have a harder time.
That's not because the 5-year old is dumber, but just because their life-experience ("training data") is much more limited.
Even so, trying to convince a modern LLM of something ridiculous is getting harder. I invite you to try telling ChatGPT or Gemini that the president died a week ago and was replaced by a body-double facsimile until January 2027, so that Vance can have a full term. I suspect you'll have significant difficulty.
There's a plethora of people who convert to religion at an older age, and that seems far more far fetched than Santa.
Being in a religion doesn’t imply belief in deities; it only implies people want social connection. This is clearly visible in global religion statistics; there are countries where the majority of people identify as belonging to a religion, and at the same time only a small minority state they believe in a “God”. Norway is a decent example that I bumped into just yesterday. https://en.wikipedia.org/wiki/Religion_in_Norway
But I bet you'd have a significantly easier time converting a child rather than a 30/40/50-yr old to a religion.
My point is that LLMs are suggestible, perhaps more so than the average adult, but less so than I child I suspect. I don't think suggestibility really solves the problem of whether something has AGI or not. To me, on the contrary, it seems like to be intelligent and adaptable you need to be able to modify your world model. How easily you are fooled is a function of how mature / data-rich your existing world model is.
I consider myself a bit of a misanthrope but this makes me an optimist by comparison.
Even stupid people are waaaaaay smarter than any LLM.
The problem is the continued habit humans have of anthropomorphizing computers that spit out pretty words. It’s like Eliza only prettier. More useful for sure. Still just a computer.
I don't believe in a separation of mind and spirit. So I do think that fundamentally, outside of a reliance on quantum effects in cognition (some have theorized this, but it isn't proven), its processes can be replicated in a fashion in computers. So I think that intelligence likely can be "just a computer" in theory, and I think we are in the era where this is now true.
This doesn't mean they aren't useful, I like Claude a lot, but I don't buy that it's AGI.
ChatGPT Health failed hilariously badly at just spotting emergencies.
A few weeks ago most of them failed hilariously badly at the question of whether you should drive or walk to the service station if you want to wash your car.
The second question sounds like a useless and artificial metric to judge on. The average person might miss such a “gotcha” logical quiz too, for the same reason - because they expect to be asked “is it walking distance.”
No one has ever relied on anyone else’s judgment, nor an AI, to answer “should I bring my car to the carwash.” Same for the ol’ “how many rocks shall I eat?” that people got the AI Overview tricked with.
I’m not saying anything categorically “is AGI” but by relying on jokes like this you’re lying to yourself about what’s relevant.
Maybe you should think twice about whether the health issues advice it is giving you is legitimate.
https://www.bmj.com/content/392/bmj.s438
In my experience, they contain more information than any human but they are actually quite stupid. Reasoning is not something they do well at all. But even if I skip that, they can not learn. Inference is separate from training, so they can not learn new things other than trying to work with words in a context window, and even then they will only be able to mimic rather than extrapolate anything new.
It's not the lack of perfect, it's the lack of reasoning and learning.
I've seen a lot of reasoning in the latest models while engaging in agentic coding. It is often decent at debugging and experimentation, but around 30% of the time it goes down wrong paths and just adds unnecessary complexity via misdiagnoses.
This (surprisingly common) view belies a wild misunderstanding of how LLMs work.
It isn't an "AI" CPU. There is nothing AI about it. There is nothing about it that makes it more AI than Graviton, Epyc, Xeon, etc.
This was already revealed in the Qualcomm vs. Arm lawsuit a few years ago. Qualcomm accused Arm of planning to sell its CPUs directly instead of just licensing them. Arm's CEO at the time denied it. Qualcomm ended up being right.
I wrote a post here on why Arm is doing this and why now: https://news.ycombinator.com/item?id=47032932
The Dell marketing machine in particular is bludgeoning everyone that will listen about Dell AI PCs. The implication that folks will miss the boat on AI by not having a piddly NPU in their laptop is silly.
Unfortunately for them, I think hardware vendors will see past the hype. They'll only buy the platform if it is very competitively priced (i.e., much cheaper) since fortune favours long-lived platforms and organizations like Apple and Qualcomm.
> For the first time in our more than 35-year history, Arm is delivering its own silicon products
Also, it takes a willful ignorance of history for ARM to claim this is the first time they've manufactured hardware. I mean, maaaaybe, teeeeechnically that's true, but ARM was the Acorn RISC Machine, and Acorn was in the hardware business...at least as much as Apple was for the first iPhone.
I don’t think ARM Ltd have ever done a deal to deliver finished chips to a customer for production use.
They’ve made test silicon and dev. boards.
They designed arguably the first ever SoC (for Acorn) in the form of the ARM250 but Acorn bought the chips from VLSI not ARM.
Not aware of an exception to this rule until now.
That's a huge cost compared to the average RTL jockey.
In case you were thinking about some other abbreviation...
I expected better from the people who brought us the ARM architecture, with A, R and M profiles.
I don’t know if it was intentional or they were so far out over their skis that they got their bathing suit caught, but it’s impressive either way.
> No. I would not use it as the product name. “AGI CPU” will be read as artificial general intelligence, not “agentic AI infrastructure,” so it invites confusion and sounds hypey.
Too bad these executives seemingly don't have access to ChatGPT.
Oh god! Mistral tells me it's highly polarizing, will generate buzz, and is risky, but that either way people will now know Arm is doing CPUs again (maybe I put in too much context).
ARMANI for short /s
Fraud is just the default lifestyle of marketers.
My realtor's last name is House
When I lived in Austin, it seemed like a third of the boys born there were being named Austin. I presume many of them will end up living there as adults, but not because of this particular bias; being raised there and having family there seems a more likely driver.
There are several cities in the US that share my last name. I don't live near any of them.
> Study 6 extended this finding to birthday number preferences.
D'oh!
https://publishing.rcseng.ac.uk/doi/10.1308/147363515X141345...
Yesterday everything was Agentic.
Everything was AI last week.
Waiting for AGI Agentic AI Crypto toilet paper to hit the supermarket shelves, next to the superseded object-oriented UML Rational Rose tuna.
Where does Agentic come into this? Arm's explanation is that future agentic workloads will be both CPU- and GPU-bound, hence the need for significant CPU efficiency.
I’d argue we have always done that, and in fact it’s basically the definition of marketing!
https://en.wikipedia.org/wiki/How_many_angels_can_dance_on_t...
> Arm is actively collaborating with leading Linux distributions from Canonical, Red Hat, and SUSE to ensure certified support for the production systems.
Taken from
https://developer.arm.com/community/arm-community-blogs/b/se...
A VC without a degree and no grasp of hardware engineering failed upward when all he had to do was noodle numbers in an Excel sheet.
He is so far behind the hardware scene that he thinks it's sitting still and that RAM requirements will be a nice linear path to AGI. Not if new chips optimized for model streaming crater RAM needs.
Hilarious how last decade's software geniuses are being revealed as incompetent finance engineers whose success was all due to ZIRP offering endless runway.
Unfortunately failing upwards is still somehow common, probably because the skill of parting fools from their money is still valuable.
Now the talent is going to other places for a variety of reasons, not all due to Sam (one of which is little room for options to grow). However it’s hard to believe his tanking reputation is not badly hurting the company. Other than Jakub and Greg, I believe there are not many top tier people left, those in top positions are there because they are yes-men to Sam.
Apple and Google control their own designs.
Sama is 100% an outsider, merely a customer. The chip insiders are onto his effort to pivot out of meme-stock hyping and into owning a chunk of their fiefdom. They laughed off his claims a couple of years ago as insane VC gibberish (third-hand paraphrase from my social network in chip and hardware land).
No way he can pivot and print whatever. Relative to hardware industry he is one of those programmers who can say just enough to get an interview but whiffs the code challenge.
He has no idea where the bleeding edge is so he will just release dated designs. Chip IP is a moat.
Plus a bunch of RAM companies would be left hanging; no orders, no wafers. Sama risks being Jimmy Hoffa'd for imploding the asset values of other billionaires.
That's... not much, right? Maybe it's that amount times N cores? But I really hope each individual core isn't limited to that.
Edit: 17 minutes to sum the RAM?
Seeing "Arm AGI" spelled out on a page with an "arm" logo looks slightly cheesy.
But maybe it's actually a good fit for the societal revolution driven by AGI, comparable to the one driven by the DOT.com RevoLut.Ion.
Anyways, it sounds like an A.R.M.-branded version of the Apple Silicon revolution?
But maybe that's just my shallow categorization.
The TDP to memory bandwidth & capacity ratio for these blades is in a class of its own, yes?
Edit: The new CPU will be built with the soon-to-be-former leading edge process of 3nm lithography.
Time will tell if Arm's own CPU is on par with or better than Apple's Arm-based chips.
I no longer believe this is like the dotcom. Now it feels like the 1983 video game crash.
I looked around a bit, and the going rate appears to be about $10,000 per 64 cores, or around $150 per core. Here is an Intel Xeon Platinum 8592+ 64 Core Processor with 61 billion transistors:
https://www.itcreations.com/product/144410
So that's about 6 million transistors per dollar, or 1 billion transistors for roughly $165.
It looks like Arm's 136-core Neoverse V3 part has between 150 and 200 billion transistors, so at that rate it should cost around $25,000-$33,000 (the $150-per-core rate gives the same order: about $20,000). Each blade has 2 of those chips, so figure $50,000-$65,000 for compute. It doesn't say how much memory the blades come with, but that's a secondary concern.
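As a quick sanity check, here's the arithmetic as a small Python sketch (all inputs are the ballpark retail figures above; the 150-200 billion transistor count for the Arm part is a guess, not a published number):

    # Back-of-the-envelope cost-per-transistor math from the retail
    # figures above (street prices, not fab costs).
    xeon_price_usd = 10_000    # Xeon Platinum 8592+, approximate street price
    xeon_transistors = 61e9    # ~61 billion transistors
    xeon_cores = 64

    transistors_per_dollar = xeon_transistors / xeon_price_usd
    print(f"{transistors_per_dollar / 1e6:.1f}M transistors per dollar")  # ~6.1M
    print(f"${xeon_price_usd / xeon_cores:.0f} per core")                 # ~$156

    # Extrapolate to the 136-core Arm part; 150-200 billion transistors
    # is a guess, not a published figure.
    for v3_transistors in (150e9, 200e9):
        print(f"{v3_transistors / 1e9:.0f}B transistors -> "
              f"~${v3_transistors / transistors_per_dollar:,.0f} per chip")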
Note that this is way too many cores for 1 bus: by Amdahl's law, more than about 4-8 cores per bus typically results in the remaining cores getting wasted. Real-world performance will be bandwidth-limited, so I would expect a blade to perform about the same as a 16-64 core computer. But that depends on mesh topology, so maybe I'm wrong (AI thinks I might be).
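Here's a minimal sketch of that plateau, treating the shared bus as the serialized fraction in Amdahl's law (the 12.5% contention number is an assumption for illustration, not a measurement of any real chip):

    # Amdahl-style speedup for cores contending on one shared memory bus.
    # 's' is the fraction of work serialized on the bus -- an assumed
    # illustrative number, not a measurement.
    def speedup(n_cores: int, s: float) -> float:
        return 1.0 / (s + (1.0 - s) / n_cores)

    s = 0.125  # assume 12.5% of each core's work contends for the bus
    for n in (1, 2, 4, 8, 16, 64, 136):
        print(f"{n:>3} cores -> {speedup(n, s):.2f}x (ceiling {1 / s:.0f}x)")

Past about 8 cores the curve flattens hard toward the 8x ceiling, which is the "remaining cores getting wasted" effect.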
I find all of this to be somewhat exhausting. We're long overdue for modular transputers. I'm envisioning small boards with 4-16 cores between 1-4 GHz and 1-16 GB of memory, approaching $100 or less with economies of scale. They would be stackable horizontally and vertically, to easily create clusters with as many cores as one desires. The cluster could appear to the user as an array of separate computers, a single multicore computer running in a unified address space, or various custom configurations.

Then libraries could provide APIs to run existing 3D, AI, tensor and similar SIMD code, since it's trivial to run SIMD on MIMD but very challenging to run MIMD on SIMD. This is similar to how we often see Lisp runtimes written in C/C++, but never C/C++ runtimes written in Lisp.

It would have been unthinkable to design such a thing even a year ago, but with the arrival of AI, that seems straightforward, even pedestrian. If this design ever manifests, I do wonder how hard it would be to get into a fab. It's a chicken-and-egg problem, because people can't imagine a world that isn't compute-bound, just like they couldn't imagine a world after the arrival of AI.
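To show what I mean by "trivial to run SIMD on MIMD", here's a toy sketch that scatters an elementwise op across independent workers (a thread pool stands in for the hypothetical transputer nodes; none of this is a real product API):

    # Toy "SIMD on MIMD": chop a data-parallel op into per-node chunks
    # and scatter it across independent workers. A thread pool stands in
    # for hypothetical transputer nodes.
    from concurrent.futures import ThreadPoolExecutor

    def simd_on_mimd(data, op, nodes=4):
        chunk = (len(data) + nodes - 1) // nodes           # ceil division
        parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        with ThreadPoolExecutor(max_workers=nodes) as pool:
            results = pool.map(lambda part: [op(x) for x in part], parts)
        return [x for part in results for x in part]       # gather

    print(simd_on_mimd(list(range(10)), lambda x: x * x))  # [0, 1, 4, ..., 81]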
Edit: https://news.ycombinator.com/item?id=47506641 has the Arm AGI specs. Looks like it has DDR5-8800 (12 DDR5 channels), so that's just under 12 cores per memory channel, which actually aligns reasonably well with Amdahl's law. Maybe Arm is building the transputer I always wanted. I just wish prices were an order of magnitude lower so that we could actually play around with this stuff.
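For reference, the aggregate bandwidth implied by that spec (assuming standard 64-bit DDR5 channels):

    # DDR5-8800 across 12 channels, assuming standard 64-bit channels.
    transfers_per_s = 8800e6
    bytes_per_transfer = 8     # 64-bit channel
    channels = 12
    cores = 136

    total = transfers_per_s * bytes_per_transfer * channels  # ~845 GB/s
    print(f"{total / 1e9:.0f} GB/s total, "
          f"{total / 1e9 / cores:.1f} GB/s per core")

Call it roughly 845 GB/s aggregate, or about 6 GB/s per core.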
> Arm has additionally partnered with Supermicro on a liquid-cooled 200kW design capable of housing 336 Arm AGI CPUs for over 45,000 cores.
Also just bad timing on trying to brag about a partnership with Supermicro, after a founder was just indicted on charges of smuggling Nvidia GPUs. Just bizarre to mention them at all.
This is why Meta acquired a chip startup [0] months ago.
[0] https://www.reuters.com/business/meta-buy-chip-startup-rivos...
So sad.
One can dream.
I see the NIC as a form of future proofing, but we'll see.
My Ryzen 9 mini-PC from 2 years ago outperforms this thing in raw CPU, though.
I haven't ever ordered an ARM SoC, but I also wouldn't be surprised if there were significant parts that they left up to integrators before: PLLs, pads, SRAMs, etc.
Am I right or am I misunderstanding?
The latest Intel server CPU, Clearwater Forest, uses Darkmont cores that have approximately the same performance, cost, and power consumption as Neoverse V3, but Intel provides 288 cores per socket and 576 cores per board.
Even supposing that the Intel Xeons would be used in relatively big 2U servers, that still provides at least 50% more cores per rack than these new Arm AGI CPUs.
Arm's claim of better performance per rack is false. They must have compared their new CPUs with some antique Intel Granite Rapids Xeons, instead of with state-of-the-art Intel and AMD CPUs, which offer much more performance per rack than the new Arm AGI.
All mainstream server CPUs have a megabyte or two of SRAM on a core, of course.
Like, c’mon, this is ridiculous.
So we will see AI Toilet Paper launching in the coming months.
> built on the Arm Neoverse platform
What the heck is "Arm Neoverse"? No explanation is given, and the link leads to a website in Chinese. Firefox's translation tool doesn't help much:
> Arm Neoverse delivers the best performance from the cloud to the edge
What? This is just a pile of buzzwords, it doesn't mean anything.
The article doesn't seem to contain any information on how much it costs or any performance benchmarks to compare it with other CPUs. It's all just marketing slop, basically.
https://en.wikipedia.org/wiki/ARM_Neoverse
Neoverse V3 is also used in AWS Graviton5 and in several NVIDIA products.
AWS Graviton5 uses the same cores, but it has 192 cores per socket.
So Graviton5 has more cores per socket, but I think that it does not support dual socket boards.
This Arm AGI supports dual-socket boards, so it provides 272 cores per board, more than a Graviton5 board.
However, this is puny in comparison with Intel Clearwater Forest, which provides 576 cores per board, and the Intel Darkmont cores are almost exactly equivalent to Arm Neoverse V3 in all characteristics.
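Pulling together the core counts quoted in this subthread:

    Chip                       Cores/socket  Sockets/board  Cores/board
    AWS Graviton5 (V3 cores)   192           1              192
    Arm AGI (Neoverse V3)      136           2              272
    Intel Clearwater Forest    288           2              576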