_doctor_love 24 hours ago [-]
This might sound like snark, but I truly don’t mean it that way.
I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development. All the people I work with who are getting the most value out of using AI to deliver software are people who are already very high-skilled engineers, and the more years of real experience they have, the better.
I know some guys who were road warriors for many years — everything from racking and cabling servers, setting up infrastructure, and getting huge cloud deployments going, all the way to embedded software, video game backends, etc. These guys were already really good at automation, seeing the whole life cycle of software, and understanding all the pressure points. For them, AI is the ultimate power tool. They’re just flying with it right now. (All of them are also aware that the AI vampire is very real.)
There’s still a lot to learn, and the tools are still very, very early on, but the value is clear.
I think for quite a few people, engaging with AI is maybe the first time ever in their entire career they are having to engage with systems thinking in a very concrete and directed way. Consequently, this is why so many software engineers are having an identity crisis: they’ve spent most of their career focusing on one very small section of the overall SDLC, meanwhile believing that was mostly all there was that they needed to know.
So I think we’re going to keep talking for quite a while, and the conversation will continue to be very unevenly distributed. Paradoxically, I’m not bored of it, because I’m learning so much listening to intelligent people share their learnings.
jakelsaunders94 24 hours ago [-]
Hey, I don't think this sounded like snark at all. Super grounded take.
> I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development.
This I agree with completely. You can see it in the difference between a prompt where you know exactly what you want and one where things are a little woolly. A tool is always better used in the hands of a well-trained craftsperson.
> So I think we’re going to keep talking for quite a while
Agreed, and to be clear I'm okay with that. This was mostly a rant at the lack of diversity in the discourse.
_doctor_love 23 hours ago [-]
Thanks friend! Appreciate it.
Agree, the diversity of the discourse is not great. There's a lot of "omg I just got started waaauw" articles out there along with "we're all gonna die!" stuff. And then a few seams of very excellent insight.
Deep research at least helps with dowsing for the knowledge...
This would be a more compelling argument if the conversations weren't so dull and derivative, with most of the articles written in LLMspeak. I see a lot of discussion and not a lot of substance; articles and discussions about AI have a much smaller chance of being compelling than any other technical subject posted on HN.
piva00 11 hours ago [-]
The signal-to-noise ratio seems worse than in many other hype cycles, but that's the general way hypes go.
It's really hard to separate the wheat from the chaff at this point, but I've been positively surprised by the relatively few articles sharing more advanced workflows, lessons learnt that help me avoid the traps, and emerging patterns that taught me something new (or at least validated approaches I'd tried on my own that worked). It gets tiresome to keep pace, so I try not to fall for FOMO, and I avoid experimenting too much so I don't get lost before I see a pattern emerging from different sources.
bengale 24 hours ago [-]
Spot on take. The people I’ve noticed saying things like “it’s not useful” are the ones who are doing so little that they can’t see the value.
This isn’t to say there’s no hype. Just that if you’re not seeing big productivity gains, you need to make sure you really are an outlier and not just surplus to requirements.
imiric 22 hours ago [-]
I rarely come across people who flat out say "it's not useful". They exist, but IME they're the minority.
Rather, I hear a lot of nuanced opinions of how the tech is useful in some scenarios, but that the net benefit is not clear. I.e. the tech has many drawbacks that make it require a lot of effort to extract actual value from. This is an opinion I personally share.
In most cases, those "big productivity gains" are vastly blown out of proportion. In the context of software development specifically, sure, you can now generate thousands of lines of code in an instant, but writing code was never the bottleneck. It was always the effort to carefully design and implement correct solutions to real-world problems. These new tools can approximate this to an extent, when given relevant context and expert guidance, but the output is always unreliable, and very difficult to verify.
So anyone who claims "big productivity gains" is likely not bothering to verify the output, which in most cases will eventually come back to haunt them and/or anyone who depends on their work. And this should concern everyone.
kaiokendev 16 hours ago [-]
> writing code was never the bottleneck
This is overly dismissive. There are many things that are possible now that weren't before, because writing the code is no longer the bottleneck: porting parts of a codebase from managed to unmanaged code for a team with limited capacity, for example. Writing code is about 1/3rd of the job. Another 1/3rd is analysis, which also benefits from AI, letting people who aren't very good at it outperform. The final 1/3rd is-
> the effort to carefully design and implement correct solutions to real-world problems.
That's problem-solving - that part doesn't get sped up, and likely never will, reliably.
hparadiz 20 hours ago [-]
"productivity" is a misnomer. Sort of. The things I'm building are all things I've had on the back burner for years. Most of which I never would have bothered to do. But AI lets me ignore that excuse and just do it.
ok_dad 19 hours ago [-]
The productivity comes from not having the startup costs. You don’t need to research the best way to do X, just verify that X works via tests and documentation. I find it still takes T hours to actually implement X with an agent, but if I didn’t know how to do X it eliminates that startup cost which might make it take 3T hours instead.
The only downside is not learning about method Y or Z that work differently than X but would also be sufficient, and you don’t learn the nuances and details of the problem space for X, Y, and Z.
lenkite 11 hours ago [-]
> just verify that X works via tests and documentation.
No, it's: verify that the X approach is semantically correct, that it makes architectural sense, and that the design is valid, and then add tests and documentation. Basically, 80% of the work.
abustamam 19 hours ago [-]
I find it useful to use a brainstorming skill to teach me X Y Z and help me understand the tradeoffs for each, and what it'd recommend.
I've learned about the outbox pattern, eventual consistency, the CAP theorem, etc. It's been fun. But if I hadn't asked the LLM to help me understand, it would have just gone with option A without me understanding why.
imiric 13 hours ago [-]
> You don’t need to research the best way to do X, just verify that X works via tests and documentation.
"Just verify" is glossing over a lot of difficult work, though. It doesn't just involve checking whether the program compiles and does what you wanted—that's the easy part. You should also verify that the program is secure, robust, reasonably performant, efficient, etc. Even if you think about these things, and ask the tool to do this for you, generate tests, etc., you will have the same verification problem in that case as well. The documentation could also be misleading, and so on. At each step of this process there will likely always be something you missed, which considering you're not experienced in X, Y, or Z, you have no ability to properly judge.
You can ignore all of this, of course, which the majority of people do, but then don't be surprised when it fails in unexpected ways.
And verification is actually relatively simple for software. In many other fields and industries verification is very impractical and resource intensive. It doesn't take a genius to deduce the consequences of all of this. Hence, the net effect of these tools is arguably not positive.
Andrei_dev 4 hours ago [-]
Exactly. "Tests pass" and "code is secure" are just different things. AI code makes that gap worse.
I run static analysis on mixed human/AI codebases. The AI parts pass tests fine but they'll have stuff any SAST tool flags on first run — hardcoded creds, wildcard CORS, string-built SQL. Works in a demo, turns into a CVE in prod.
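To make the string-built SQL case concrete, here's a minimal sketch using an in-memory SQLite table (the table, data, and input are invented purely for illustration):

```python
import sqlite3

# In-memory table invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "x' OR '1'='1"  # attacker-controlled input

# String-built SQL: exactly what a SAST tool flags. The quote in
# the input closes the literal, and the OR clause matches every row.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{name}'"
).fetchall()

# Parameterized query: the driver treats the input as plain data,
# so the injection string matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)
).fetchall()

print(len(leaked), len(safe))  # 1 0
```

Tests pass either way, which is the point: the behavioral gap only shows up with hostile input.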
And nobody's review capacity scaled with generation speed. Most teams don't even have semgrep in CI. So you get unreviewed code just sitting in production.
The "10x" is real if you count lines shipped. Nobody counts the fix cost downstream though.
SpaceNoodled 22 hours ago [-]
That's only because we're trying to not be too condescending.
abustamam 19 hours ago [-]
Really? On HN I see so many AI naysayers who say either that it's not useful or that it's a net negative on productivity. Perhaps they are a minority, but they're certainly a vocal one.
strangattractor 20 hours ago [-]
Agreed - another tool in the old tool pouch. I find it fascinating in that it provides insight into the role of language in intelligence. Certainly not AGI, but it makes ELIZA seem neolithic ;)
I am amazed at the incredible things it can do, only to turn around and fail at a simple task a child can do. Just like people.
_doctor_love 4 hours ago [-]
Your handle caught my attention - yes - and for folks studying non-linear and dynamical systems, it's fascinating how much of prompting is sensitive dependence on initial conditions.
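A minimal sketch of that sensitivity, using the textbook chaotic logistic map as a toy stand-in (not a claim about how LLMs work internally):

```python
def divergence(x0: float, eps: float, steps: int) -> float:
    """Largest gap seen between two logistic-map trajectories
    (x -> 4x(1-x), the textbook chaotic system) started eps apart."""
    x, y, gap = x0, x0 + eps, 0.0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        gap = max(gap, abs(x - y))
    return gap

# A 1e-9 difference in the starting point grows to order 1 within
# a few dozen iterations; with eps = 0 the gap stays exactly 0.
print(divergence(0.2, 1e-9, 60))
print(divergence(0.2, 0.0, 60))  # 0.0
```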
randusername 7 hours ago [-]
> Consequently, this is why so many software engineers are having an identity crisis: they’ve spent most of their career focusing on one very small section of the overall SDLC, meanwhile believing that was mostly all there was that they needed to know.
Specialists/generalists, top-down/bottom-up, BFS/DFS, pragmatists/idealists, ADHD/ASD; lots of continuums in software work and those at either extreme have biases.
Personally I think fewer programmers will be needed, and the ones who remain will have had to mellow out toward the center on all these continuums. We won't be able to rely on big teams balancing each other out.
Generalists will need to learn which details matter. Specialists will need to learn the delegation and risk tolerance usually reserved for the bemoaned management track. Hard to say which is the easier journey.
AbanoubRodolf 20 hours ago [-]
The identity crisis observation is the most accurate thing in this thread. The engineers struggling most aren't struggling because AI is replacing their skills. They're struggling because AI is revealing which of their skills were incidental to their job versus central to it.
A lot of software engineering career capital was built on knowing which obscure method to call, which Stack Overflow answer to trust, how to navigate a specific framework's quirks. That knowledge was genuinely hard to acquire and it was a real signal. Now it's table stakes. The career capital that survives is knowing why you'd make a particular architectural decision, how to tell if generated code is actually correct, what the error message is really telling you.
The road warrior framing is right. Those people internalized systems thinking across the whole stack over years. AI doesn't replace that — it makes it worth more, because now one person with that mental model can move faster than a team without it. The people who are "bored of AI" are often the people who already made that transition and stopped finding it novel. The people still anxious about it usually haven't yet.
dogleash 5 hours ago [-]
>The identity crisis observation is the most accurate thing
It's also a nasty tool used to dismiss criticism by tearing people down in work-friendly language.
Software does employ a lot of turds that it shouldn't. We been knowing this. It was impossible to ignore following the 2010s push to expand the hiring pool; newcomers didn't even pretend to try or care.
Convenient that we're suddenly calling them out now, at the same time there's a need to indiscriminately invalidate professionally informed opinions.
amelius 24 hours ago [-]
This is really not true. There are stories of people who had no background in software engineering who now write entire applications using AI. And I have personally seen this happen.
strken 21 hours ago [-]
Before AI, there were also stories of people who had no background in software engineering who wrote entire applications using their fingers. This was called "learning to be a software engineer".
I don't mean to snipe at AI, because it really does seem to have set more people on the path of learning, but I was writing VB5 apps when I was 14 by copying poorly understood bits and pieces from books. Now people are doing basically the same but with less typing and everyone thinks it's a revolution.
munksbeer 3 hours ago [-]
Learning to code and write an application was very hard for most people, because of time and other friction. I know of non-coders who now have applications running for various things they find useful.
You might not consider this productive, but they do, so what you think literally doesn't matter to them.
anon-3988 18 hours ago [-]
I have never seen people learn how to be a software engineer in a weekend tho.
pamcake 16 hours ago [-]
And neither do you today.
There is more to it than "being able to make an entire application", which a novice could also have pulled off in a weekend 10 years ago.
mikkupikku 24 hours ago [-]
Smart people can hit the ground running if they're freed from the need to first learn the intricacies of a new language. We're going to see an explosion in the number of people writing software as clever people who invested their time in something other than learning to program are now able to write software for themselves.
abustamam 19 hours ago [-]
This may be true, but define an entire application. Is it a CRUD app? Is it an app that scales to a thousand, ten thousand, a million users? Is it an app that is bug free and if not bug free, easy to fix said bugs with few to no regressions? Is it an app that is easy to maintain and add new features to without risk of breaking other stuff?
I think it is genuinely impressive to be able to build one app with AI. But I haven't seen evidence that someone could build a maintainable, scalable app with AI. In fact, anecdotally, a friend of mine who runs an agency had a client who vibe-coded their app, figured out that they couldn't maintain it, and had him rewrite it in a way that could be maintained long term.
Again, I'm not an AI detractor. I use it every day at work. But I've needed to harden my app, rules, and so on, so that the AI can't make mistakes when I or another engineer is vibing a new feature. It's a work in progress.
switchbak 23 hours ago [-]
What is not true, that "so many software engineers are having an identity crisis"?
I don't believe they said that folks new to AI can't make impressive use of it. They did however say that senior folks with lots of scrappy and holistic knowledge can do amazing things with it. Both can be true.
leptons 21 hours ago [-]
I've seen people generate a lot of vibeslop with AI, but they didn't actually "Write entire applications using AI".
They still have absolutely no clue how it works, so how could they "write entire applications"? They vibed it, but they certainly didn't write any of it, not one bit of it, and they're clueless as to how to extend it, upgrade it, and maintain it so that the AI doesn't make it a bloated monstrosity of AI patches and fixes and workarounds that they simply could never begin to understand.
They were also following a dozen youtube tutorials step by step, so even that part was someone else doing the thinking.
Yeah, these are the same guys constantly bugging me to help them figure something out.
timacles 5 hours ago [-]
Yeah, no way are people developing actual applications. They’re developing one-off apps, tools, and websites. The amount of robustness a usable application with an actual purpose requires is exactly where LLMs fall apart.
I’m not a naysayer by any means, and at this point I use LLMs all day for many purposes. But it is undeniable that the moment complexity reaches a certain threshold, LLMs need more and more guidance and specific implementation details to make worthwhile improvements. That same complexity threshold is where real-world functionality lives.
pojzon 24 hours ago [-]
It's silly to say this, but one such person is „pewdiepie”.
abustamam 19 hours ago [-]
This! I've actually learned a lot about what I don't know by using AI. It made me dig into learning proper systems design, app architecture, etc.
But at the same time the more I read about AI, the more I realize I need to learn about AI. Thus far I'm just using cursor and the Claude code extension alongside obra superpowers, and I've been quite happy with it. But on Twitter I see people with multiple instances of Claude code or open claw talking to each other and I don't even know how to begin to understand what's going on there. But I'm not letting myself get distracted — Claude code and open claw are tools. They could go away at any time. But systems thinking is something that won't go away. At least, that's my gambit.
jimbokun 18 hours ago [-]
It’s telling those people mostly talk about the complexity of the AI setup they’ve engineered to write code. Much more so than bragging about the software created by that process.
abustamam 16 hours ago [-]
That's a good point, but I've seen a lot of interesting meta-setups as well (like visualizations of agents interacting with one another).
Does it write good code? I dunno. But it looks cool, and I think interesting in its own right, even if it ends up being functionally useless.
sigbottle 23 hours ago [-]
The "AI Vampire", huh. Unironically, I've been feeling that way.
Well, there were also a lot of unrelated things that happened around last November for me, but yes, getting into vibecoding for real was one of them, and man, I feel physically drained coming back from work and going to use more AI.
Not sure what it is. I'm using AI personally to learn and bootstrap a lot of domain knowledge I never would have learned otherwise (I even got into philosophy!), but man is it exhausting keeping up with AI. I would burn through a week's worth of credits in a day, and now I haven't vibe coded in a week.
I think, I will chill. One day at a time.
_doctor_love 23 hours ago [-]
AI Vampire is from Steve Yegge, credit where it's due.
My take is that it's similar to what Amber Case described in Calm Technology - with AI you are not steering one car, you're really steering three cars at the same time. The human mind isn't really designed for that.
I am finding that really structuring my time helps in terms of fighting back. And adopting an hours restriction, even if I could rage for 4 more hours, I don't. Instead I stop and go outside.
systemsweird 20 hours ago [-]
Completely agree. It’s very telling that the majority of write-ups on effective agentic coding are essentially summaries of software engineering best practices.
QuantumGood 23 hours ago [-]
> I’m learning so much listening to intelligent people share their learnings.
Me too. A key purpose of HN, and a bright time for that.
d675 24 hours ago [-]
Absolutely. As an early/mid-level SDET/SRE, I can move so fast on prototyping good, full apps now. That style of thinking is serving me well; knowing about queues, Docker, basic infra, and good coding practices is plenty to produce decent code. Interesting time to be laid off.
AI makes a ton of bad decisions too, and it's up to you to work with it. If I had more knowledge of the dangers hidden in the things I'm developing, I'd move even faster.
I was able to make a great full web app, which I think is hardened for prod, though it had to be refactored to get there. Which it happily did.
It's really about asking the right questions, breaking down tasks, and planning now. I'm going to tackle a huge project, hoping to share it here.
username135 22 hours ago [-]
AI Vampire is so perfect. I've never thought of it that way, but it's right there.
gAI 24 hours ago [-]
Agreed, though I prefer "Fae Folk" to vampires.
Terr_ 22 hours ago [-]
If LLMs were vampires, they'd be better at counting; if they were fae, they'd be better at legalistic logic. :p
deadbabe 22 hours ago [-]
If you have to really understand software development to be a good user of AI, we’re screwed. All the best users of AI we’ll ever have already exist I think.
throwawaytea 20 hours ago [-]
That's a good point.
I'm a novice self-taught developer who somehow pushed through and made a decent PM tool for the construction industry. It works, if your users aren't malicious or too demanding.
Now I'm working on a second project, all with AI. I haven't written a single line. It works better than what a non-programmer would make, because I knew what to ask for. But I'll admit I'm not learning anything.
hparadiz 20 hours ago [-]
Can't say the same. I've been super hands on with a C project. Really getting into the details of the event bus and how to make things performant. The AI is still writing 99% of the code but I'm being super strict about what I consider acceptable.
deadbabe 10 hours ago [-]
And when you get memory leaks and don’t know how to debug?
hparadiz 6 hours ago [-]
Lol what a silly assumption
deadbabe 5 hours ago [-]
Fuck, another C developer who will just let memory leaks grow uncontrollably is born.
hparadiz 5 hours ago [-]
You might wanna learn how to set up constraints for your agents. Every memory map is accounted for with docs pointing to the exact structs, allocations, and usage patterns. It's stuff I was already doing. Now I can do it in a fraction of the time.
deadbabe 3 hours ago [-]
So you have absolutely NO doubt?
hparadiz 2 hours ago [-]
Doubts? I'm full of them. I'm building a new task manager for myself that has things I've always wanted. It's basically a profiler so it must be performant.
With all the plugins disabled, the core is currently topped out at 73.8 MB after several days of running. I've given it several audits with the AI agents, using actual memory maps and doing the math on each variable.
I haven't had time to do Milkdrop yet but it's on my todo. The issue isn't doing the work. The issue is not having enough credits in my accounts to throw some compute at it. I'll get to it eventually. But it's actually way easier now to try new ways of packing the data into binary and profiling it for issues.
The issues I've had are edge cases like a 6 hour youtube stream. At one point the BPM detector was buffering the entire track in the pipewire sink. It took one throwaway prompt to the AI to solve that one.
djeastm 23 hours ago [-]
Any thoughts on what the next generation of software devs is going to look like without as much manual experience?
eloisant 23 hours ago [-]
When C arrived, programmers wondered what software devs would look like once they didn't have assembly experience.
Then the same happened with languages that managed memory.
And with IDE that could refactor your code in a click and autocomplete API calls.
And with Stack Overflow where people copy/pasted code they didn't understand.
bGl2YW5j 22 hours ago [-]
I reckon there's a limit to how long this abstraction can go on before not understanding the underlying mechanisms seriously hamstrings you.
wild_egg 17 hours ago [-]
I think we're a long ways from that.
But with that said, those who learn the underlying mechanisms will always be able to solve more problems than the folks who don't. When you know the lower pieces, your mental model tells you when and where the higher level pieces are likely to break. Legit superpower.
bluefirebrand 17 hours ago [-]
> But with that said, those who learn the underlying mechanisms will always be able to solve more problems than the folks who don't
I would define that as being "seriously hamstrung"
sellmesoap 13 hours ago [-]
Well, how many times have we seen the S3 bucket set to public while the customer data piles up and leaks out into space?
calvinmorrison 22 hours ago [-]
And over and over, time proves that when you need it, ASM or C or general systems knowledge is handy. One example: I am not a "Windows" or "NT" guy, having mostly worked in various Unixes and Linux in my professional career. I had a client who had exhausted every resource trying to fix some horrible freeze/timeout in their application. So I rolled up my sleeves, first searched "is there dtrace on Windows", found some profiling tools, found the process was stuck in some dumb blocking-call loop waiting on an unavailable resource, and the rest was history.
So yeah, I mean - who cares how it works - but also, if you have experience in how things _do_ work, you can solve problems other people cannot.
eloisant 6 hours ago [-]
Sure, "it's handy" and once every few years you encounter a bug where ASM or C knowledge is valuable.
Yet most programmers nowadays can't write ASM or C and still manage to produce useful software.
AnimalMuppet 20 hours ago [-]
It started before that. When assemblers came out, (some) programmers worried about losing touch with the machine if they didn't have to know the instructions in octal.
_doctor_love 23 hours ago [-]
Honestly, I think it will look pretty much like this one. There’s a lot of manual experience that the current generation doesn’t have.
For example, I haven’t racked and cabled a server in over 15 years. That used to be a valuable skill.
I also used to know how to operate Cisco switches and routers (on the original IOS!). I haven't thought about CIDR and the difference between a /24 and a /30 since 2008. Class A IP addresses, how do those work? What subnet am I on? Is this thing running on a different VLAN? Irrelevant to me these days. Some people still know it! But not as many as in the past.
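For anyone who never had to think about it, the /24 vs /30 difference is easy to check with Python's standard library (the 192.0.2.0 addresses below are the RFC 5737 documentation range, picked purely for illustration):

```python
import ipaddress

net24 = ipaddress.ip_network("192.0.2.0/24")  # 8 host bits
net30 = ipaddress.ip_network("192.0.2.0/30")  # 2 host bits

# /24: 256 addresses, 254 usable hosts.
print(net24.num_addresses, len(list(net24.hosts())))  # 256 254
# /30: 4 addresses, 2 usable hosts -- the classic size for a
# point-to-point link between two router interfaces.
print(net30.num_addresses, len(list(net30.hosts())))  # 4 2
```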
The late Dr. Richard Hamming observed that once upon a time, "a good man knew how to implement square root in machine code." If you didn't know how to do that, you weren't legit. These days nobody would make such a claim.
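For the curious, the kind of routine Hamming meant can be sketched in a few lines. This bit-by-bit method uses only shifts, adds, and compares, which is what made it practical on early hardware (Python here purely for illustration):

```python
def isqrt_bitwise(n: int) -> int:
    """Bit-by-bit integer square root: only shifts, adds, and
    compares, the style once hand-coded in machine language."""
    root = 0
    bit = 1 << 30  # largest power of four that fits in 32 bits
    while bit > n:
        bit >>= 2
    while bit:
        if n >= root + bit:
            n -= root + bit
            root = (root >> 1) + bit
        else:
            root >>= 1
        bit >>= 2
    return root

print(isqrt_bitwise(144))  # 12
print(isqrt_bitwise(2))    # 1
```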
So some skills fade and others rise. And also, software has moved in predictable cycles for many decades at this point. We are still a very young field but we do have some history at this point.
So things will remain the same the more they change on that front.
a1o 19 hours ago [-]
I am pretty sure network knowledge and all those things are still necessary for people running data centers and really big computers, and I imagine we will build a lot more of those.
Also, anyone building a homelab has to know this stuff.
calvinmorrison 22 hours ago [-]
> So some skills fade and others rise. And also, software has moved in predictable cycles for many decades at this point. We are still a very young field but we do have some history at this point.
And there'll be a split too... like the giant divide between mechanics who used to work on carburetors and the new generation with microcontrollers, injection systems, etc. People who think cars are 'too complicated' aren't wrong, but as someone who grew up in the injected era, I vastly prefer debugging issues over the CAN bus to snaking my ass around a hot exhaust to check something.
SpecialistK 19 hours ago [-]
And to take the analogy even further, I'm sure there will be a subset of people who develop really strong opinions about a particular toolchain or workflow. Like how we have people who specialize in 70s diesel trucks or 90s-00s JDM sports cars, there'll likely be programmers who are SMEs at updating COBOL to Rust using Claude.
calvinmorrison 2 hours ago [-]
Yup. I do early Bosch EFI and K-Jet... that's what I know!
pjmlp 13 hours ago [-]
I am having an identity crisis that thankfully is softened by being senior and closer to retirement than to my early-career days.
Since COVID I have seen teams scaled down, and lots of custom development and devops/infra work replaced with SaaS and iPaaS cloud products, serverless/lambda, and managed containers.
This is the next step.
Great that people feel more productive; unfortunately for many of them (us), more productivity means the C-suites can do some head-count reduction yet again.
keybored 23 hours ago [-]
A post supposedly about being bored of talking about AI. But psych, it’s the same AI talking points. And psych, the top comment is the same sentiment about how the truly skilled will finally have their time to shine.
I don’t know if it’s the Universe delivering this farce or it’s the emergent LLM Singularity.
_doctor_love 23 hours ago [-]
> how the truly skilled will finally have their time to shine.
That's not what I said. I said that those who are already shining are now shining even brighter. Give a great craftsman a new tool and he will find a way to apply it. If it is valueless, he will throw it away.
For what it's worth, your comment is also an HN trope, the disaffected low-effort armchair keyboard warrior.
keybored 23 hours ago [-]
Expressing a negative sentiment is a trope now?
Rapzid 22 hours ago [-]
Keybored is a trending vibe, yeah.
keybored 9 hours ago [-]
Is anyone else bored of talking about posting with that keybored vibe? Hey, don’t get me wrong!—I’m not some heretic. I love the keybored posting vibe. Lots of sarcasm, complaining about American geopolitical decisions, recommending socialism on some venture capitalist/hacker forum, absolutely no hint of any Show HN bragging rights or any technical accomplishments whatsover, just lots of complaining in general (but I repeat myself). It’s fantastic and I could not go back to posting any other way. But why are talking about it so much? Why not just live it, live that beautiful vibe, and let it permeate our whole Internet persona—in fact just let it become water around us, like we are fish, something vital to our existence and wholly unquestioned until the dam breaks somewhere.
_doctor_love 4 hours ago [-]
u mad bro?
ludicrousdispla 14 hours ago [-]
It's essentially the same argument Agile consultants made when faced with criticism of Agile... "you're not using it right".
LogicFailsMe 23 hours ago [-]
Spot on, I am having the time of my life with AI, more fun than I've had in decades. But I was in the top 10% of engineering, and top 1% of the bits of engineering I do best, so it's easy for me to use AI to explore more ideas than I could have possibly explored by hand. And if I get replaced, cool bro, my investments are in compute, and compute's just getting started IMO.
LogicFailsMe 5 hours ago [-]
Wow, replying to myself here, but also, wow, yes I'm pretty good at something pretty in demand right now. And that didn't happen by chance, and because I was a decade early on that skill, I got all the arrows in my back along with the headstart. We are all created equal, sure, but if I put 100,000+ hours into something that most barely get past 10,000, if I don't end up world class at it, then what was I even doing? I know, I know, hubris, right? Also, didn't have kids so I had the time.
But that said, this will be the 3rd major industry transition of my career. And having survived the past two, you will adapt, or you won't have a job. And that's why, once again, you will ultimately adapt, kicking and screaming if that's what it takes, so why not start early?
AI coding agents are useful already, but they make too many mistakes and they need handholding from expert engineering talent in 2026. Ask me again in 2027. But that's why the best results are coming from the talent right now with the experience to ask the right questions and propose the right tests and fixes as the human in the loop. Otherwise, it's still hallucinatory vibe coding in a loop IMO.
The surprise and disappointment, well not the surprise, is the usual hatred of success that defines humanity. Whatever, downvotes, right?
cyanydeez 24 hours ago [-]
Isn't that scary, though? A bunch of people are going to be forced to use a tool that keeps them ignorant, and they absolutely won't know if it's doing correct things, to the point that as you retire, the next crop is going to be much less involved in knowing what's going on.
It's what happened with the internet and computer usage. As Apple made it easier to get online with zero computer knowledge, suddenly we're electing people like donald trump.
scorpioxy 23 hours ago [-]
To me, it is very scary. I know people who have sort of "outsourced" their critical thinking to ChatGPT. So to me it's extra scary when I see it outside technical circles. They'll just believe whatever that generation of LLM tells them, because it says it so confidently, and never question or check the information. Maybe I'm naive but I thought easier access to knowledge was supposed to make us more intelligent, not less.
vparseval 20 hours ago [-]
I don't remember exactly in which book's introduction Hannah Arendt mentioned this, but she pointed out that every time humanity learned a new skill that improved its efficiency in some capacity, that skill as well as adjacent skills diminished irrevocably.
AI is the thing that for the first time can think better than us (or so at least some people believe) and is seen as an efficiency booster in the world of cognition and ideas. I'd think Hannah Arendt would be worried with what we are currently seeing and where we might be headed.
heavyset_go 22 hours ago [-]
> Maybe I'm naive but I thought easier access to knowledge was supposed to make us more intelligent, not less.
Turns out Lowtax was right and ahead of his time
_doctor_love 23 hours ago [-]
Serious reply to this one: I truly don’t find it any more scary than what’s already taken place many times in human history.
We have thousands of years of history showing humans committing atrocities against each other well before the advent of computers, or even the introduction of electricity. So while the tool may become so ubiquitous that there’s no option not to engage with it, I don’t think it really fundamentally alters the dynamics of human behavior.
Some people are motivated by greed. Others are motivated by nobility. It really just comes down to which wolf they're feeding.
In terms of the tool keeping people ignorant, there’s a part I agree with and a part that I don’t. I think, in terms of information dissemination, AI is probably the autocrat’s wet dream in terms of finally being able to achieve real-time redefinition of reality. That’s pretty scary, and I’m not sure what to do about it.
On the other hand, people have always been free to not really learn their craft and to just sort of get by and make a living. That was true a thousand years ago, and it’s true today. There’s always somebody who can do a really high-quality job, but they’re very expensive, and then there's a vast population who will do a medium-to-terrible job for less money. You get what you pay for. There's a reason history is primarily written about people with power and wealth: they were the only ones with the means to do anything.
I don’t agree with the assertion about the internet and the election of someone like Donald Trump. Well before the internet existed, politicians were using communication mediums to influence things and get elected—whether it was the telegraph, the telephone, or the TV. JFK famously was the first TV president (notably, he didn't wear a hat).
These technologies simply give politicians more reach, and they may change the dynamics of how voters are persuaded. But what’s true today was true three hundred years ago: there’s the face of power that you see publicly, and then there’s what really happens behind the scenes.
bluefirebrand 23 hours ago [-]
> Serious reply to this one: I truly don’t find it any more scary than what’s already taken place many times in human history
Spoken like someone who thinks they are going to be insulated from the fallout
solenoid0937 23 hours ago [-]
Many of us are fine with the fallout because we understand the net benefit to humanity is going to be similar to the previous waves of automation.
Sure, it might hurt me personally. I'm not selfish enough to put that over what will be an incredibly empowering development for our species.
bluefirebrand 21 hours ago [-]
I don't believe for even a second that the net benefit to humanity is going to be positive
This will be good for a handful of elites and no one else
heliumtera 23 hours ago [-]
>They’re just flying with it right now.
Where are they flying, and why has software gone to shit?
Maybe these superstar programmers have to keep their reality-breaking technology secret, but everything has not only degraded, it has turned to absolute trash.
hbarka 22 hours ago [-]
> For them, AI is the ultimate power tool.
Yup
SpaceNoodled 22 hours ago [-]
When all you've got is AI, every problem looks like ... uh, whatever hole an LLM's output goes into. A garbage can, ideally.
AI seems great when you have no way of truly validating its output.
lukev 1 day ago [-]
This is bad in tech. But at least we are (relatively) well equipped to deal with it.
My partner teaches at a small college. These people are absolutely lost, with administration totally sold on the idea that "AI is the future" while lacking any kind of coherent theory about how to apply it to pedagogy.
Administrators are typically uncritically buying into the hype, professors are a mix of compliant and (understandably) completely belligerent to the idea.
Students are being told conflicting information -- in one class that "ChatGPT is cheating" and in the very next class that using AI is mandatory for a good grade.
It's an absolute disaster.
nicbou 14 hours ago [-]
My only gripe is how myopic the AI discussion on HN is. We barely talk about how it hits everyone else.
In the relocation industry, it's losing translators, relocation consultants and immigration lawyers a lot of work. Their cases are also getting tougher because people are getting false information from ChatGPT and arguing with them.
This problem is compounded by the lack of training data for that topic. I spent years surfacing that sort of information and putting it online, but with AI overviews killing the economics of running a website, it feels pointless.
I see such stories everywhere. People being replaced by something half as good but a tenth of the cost. It's putting everyone out of work and making everything worse.
simianwords 13 hours ago [-]
Half as good at a tenth of the cost is a good replacement. I constantly make those choices myself when purchasing things.
civvv 13 hours ago [-]
That entirely depends on what you are buying. If you’re in need of a lawyer to keep you out of the bottom bunk, I’d happily spend a lot more for a little better.
nicbou 13 hours ago [-]
It's fine to an extent, but it kills what happens in the other half.
You can feel it with AI-generated content and responses, in AI-generated art, customer service bots and vibe-coded software. This gradual worsening of everything won't lead to lower prices or a better experience, so it's not really a tradeoff.
dalmo3 9 hours ago [-]
I think GP's point is that you'll soon not even be able to make those choices.
Now every toilet on the market only flushes number one. But hey, they're so much cheaper.
Terr_ 22 hours ago [-]
I've been telling my curious/adrift relatives that it's a machine that takes a document and guesses what "usually" comes next, based on other documents. You're not "chatting with it" so much as helping it construct a chat document.
The closer they can map their real problems to make-document-bigger, the better their results will be.
Alas, that alignment is nearly 100% when it comes to academic cheating.
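That "document extender" framing can be sketched with a toy next-word model. Everything below (the two-line corpus, the greedy pick-the-most-common rule) is invented for illustration; real LLMs use neural networks over subword tokens, but the loop is the same: make the document bigger.

```python
# Toy illustration of "guess what usually comes next": a bigram model
# built from a couple of example "documents". Corpus and sampling rule
# are made up for the sketch; real LLMs are vastly more sophisticated.
from collections import Counter, defaultdict

corpus = [
    "the user asks a question and the assistant answers the question",
    "the user asks for code and the assistant writes the code",
]

# Count which word follows which, across all training documents.
follows = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def extend(document, n_words=6):
    """Repeatedly append the most common continuation: not 'chatting',
    just making the document bigger."""
    words = document.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # nothing in the corpus ever followed this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(extend("the user"))  # continues with whatever "usually" came next
```

A "chat" interface is then just a corpus whose documents happen to look like dialogues, so the most likely continuation of a question looks like an answer.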
chatmasta 1 day ago [-]
The wild part is they’re having this reaction while using the most rigid and limited interfaces to the LLMs. Imagine when the capabilities of coding agents surface up to these professions. It’s already starting to happen with Claude Cowork. I swear if I see another presentation with that default theme…
iugtmkbdfil834 1 days ago [-]
This. As annoying as all sorts of 'safety features' are, the sheer amount of effort that goes into further restricting things on the corporate wrapper side makes the LLM nigh unusable. How can those kids even begin to get an idea of what it can do, when it's so severely locked down?
pjc50 24 hours ago [-]
Could you provide an example of such a thing that is prevented?
iugtmkbdfil834 21 hours ago [-]
Sure. In the instance I am aware of, SQL (and XML and a few other) files are explicitly verboten, but you can upload them as text and reference them that way; references to personal information like DOB immediately stop the inference with no clear error as to why, but referencing the same info any other way allows it to go on.
It is all small things, but none of those small things are captured anywhere, so whoever is on the other end has to 'discover' them through trial and error.
metalliqaz 23 hours ago [-]
By my understanding, the administrators at small colleges are among the least capable professionals one might find anywhere in the economy.
throwawaysleep 20 hours ago [-]
A friend and I have a contract with a local university here in Canada.
They paid for custom on-prem software, and in over a year they have not fully provided the access and infrastructure needed to install it.
We have been paid already, but they bought a tool they can’t get their shit together enough to let us install.
kjkjadksj 5 hours ago [-]
This is true even at large colleges. Better cut faculty jobs to deal with budget shortfall. Never mind the football program can raise $200m with a dozen phone calls.
whattheheckheck 24 hours ago [-]
When industrialization was taking root, yes indeed, the factory jobs sucked AND they were the future. Two things can be true
ares623 20 hours ago [-]
You left out the part that the non-factory jobs sucked more (or were just non-existent).
This is the opposite.
webdood90 24 hours ago [-]
> These people are absolutely lost, with administration totally sold on the idea that "AI is the future" ...
Doesn't sound that different from my tech job
jakelsaunders94 24 hours ago [-]
This is really interesting. I've been out of education for a long time, but I was wondering how they were dealing with the advent of AI. Are exams still a thing? Do people do coursework now that you can spew out competent sounding stuff in seconds?
Al-Khwarizmi 21 hours ago [-]
I teach CS at a university in Spain. Most people here are in denial. It is obvious to me that we need to go back to grading based on in-person exams, but in our last university reform (which tried to copy the US/UK in many respects) there was so much political posturing and indoctrination about exams being evil and coursework having to take the fore that now most people just can't admit the truth before their own eyes. And those of us who do admit it have limited room to maneuver, because grading coursework is often a requirement that comes from above and we can't fundamentally change it.
So in most courses nothing has changed in the way we grade. Suddenly, coursework grades have gone up sharply. Anyone with working neurons knows why, but in the best case, nothing of consequence is done. In the worst case (fortunately uncommon), there are people trusting snake-oil detectors and probably unfairly failing some students. Oh, and I forgot: there are also some people who are increasing the difficulty of the coursework in line with LLMs. Which I guess more or less makes sense... except that students who want to learn without using them will suddenly find the assignments out of their league.
So yeah, it's a mess.
technothrasher 21 hours ago [-]
> Except that if a student wants to learn without using them
My son, a freshman at a major university in NYC, told his freshman English professor that he wanted to write his papers without using AI. He was told that this was "too advanced for a freshman English class" and that using AI was a requirement.
tdeck 8 hours ago [-]
I don't understand what they think they're teaching. Will we teach kids to "read" by taking a photo of their bedtime story and hitting a button next?
One of the reading-instruction methods is "look at the context, like pictures, and guess what the word is". One example I remember was a kid thinking "pony" was "horse" by association, without being able to sound it out.
senordevnyc 21 hours ago [-]
Now colleges will have to try and detect if you didn't use AI!
ares623 20 hours ago [-]
Meh, today I opened twenty PRs and felt great. That's worth it to me. (/s)
AI is starting to look like a net negative for humanity. I remember the early days of OpenAI. I was super excited about it. There was a new space to uncover and learn about. I was hopeful.
Now I have this love/hate relationship with it. Claude Code is amazing. I use it everyday because it makes me so much more efficient at my job. But I also know that by using it I’m contributing to making my job redundant one day.
At the same time I see how much resources we are wasting on AI. And to what end? Does anybody really buy the BS that this will all make the world a better place one day? So many people we could shelter and feed, but instead we are spending it on trying to make your computer check and answer your emails for you. At what point do we just look up and ask… what is the damn purpose of all of this? I guess money.
wrs 24 hours ago [-]
Well, on the other hand, software isn’t all about checking emails.
I know someone who worked for a nonprofit that made pregnancy health software that worked over text messaging. Its clients were women in Africa who didn’t have much, but they had a cell phone, so they could get reminders, track vitals, and so forth.
They had to find enough funding to pay several software engineers to build and maintain that system. If AI allows a single person to do it, at much lower cost, is that bad?
crabmusket 19 hours ago [-]
So, here is a case where AI was a technical fix to a social problem: that as a global society, funding is skewed away from things which benefit people. Google can find the budget for as many ad-touchers as it wants, but nonprofits have to scrounge and make do by paying for "metered intelligence" from a megacorp.
So in isolation, I think it's great that they managed to achieve this. But I mourn that the only way they achieved it was via this rapacious truth-destroying machine.
This isn't a new trend - AI didn't cause it. It's just the latest version of it.
wrs 59 minutes ago [-]
To be clear, this work was done years pre-AI. I was just giving an example of how lowering software development costs could be a public good.
bGl2YW5j 22 hours ago [-]
This is awesome. It's sad that examples like this are few and far between.
remich 20 hours ago [-]
Are they? Or do you just mean that we rarely hear about them? If it's the former, I think there's a much bigger universe of this kind of stuff than most people realize. Otoh, if you're just commenting on the lack of coverage, then yeah, I agree; I wish more attention were paid to small software like this. Maybe we need a catchy term - "organic software"? "Locally grown software"?
HaloZero 18 hours ago [-]
I talked to my friends who aren't in tech a lot about what they would want from software. A lot of the benefit of small software like this would actually be in compliance and reporting for non-profits: sifting through large amounts of data with very unstructured inputs.
The actual community building isn't really automatable unless you have very specific problems. Even in the example above, having an automated message is useful, but staffing a team to handle things when they're NOT in a good spot would probably be the real scaling cost.
jjpones 19 hours ago [-]
I feel you. This whole hoo-haa has made me so much more money-minded, and so much less optimistic about Software Development than I was 10 years ago. I just remind myself that it's bad now, but it can get much worse. Now I'm just trying to get what I can, then hopefully retire volunteering at an animal conservation sanctuary. And to the AI hopefuls, good luck, because I don't see any future where the zeitgeist of this technology won't just be about lining the pockets of already rich people.
rimbo789 20 hours ago [-]
starting? it was pretty clearly a net negative from the get go
tim333 6 hours ago [-]
>Does anybody really buy the BS that this will all make the world a better place one day?
Yeah - I think there's a lot of cool sci-fi like stuff in the future.
nmeagent 4 hours ago [-]
And probably a lot of uncool sci-fi like stuff, e.g. The Machine Stops.
squirrellous 13 hours ago [-]
The problem is scale. Beyond a certain scale it’s all a net negative: social networks, bitcoin, ads, machine learning, automated trading, big this big that, etc.
Unfortunately for fellow developers, software enables massive scale.
jesterson 21 hours ago [-]
I am having similar thoughts.
To add to the list of questions: it's undeniable that AI is making humans dumber by doing mental work previously done by humans. So why do we spend so much energy making AI smarter and fellow humans dumber?
Shouldn't we be moving in the opposite direction and investing in people, instead of in some software and the greedy psychopaths at the helm of the large companies behind it?
xvector 24 hours ago [-]
> But I also know that by using it I’m contributing to making my job redundant one day.
I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities. You can do so much more now. We are more limited by our ideas at this point than anything else.
Why is the reaction of so many people, once their menial work gets automated, "oh no, my menial work is automated." Why is it not "sweet, now I can do bigger/better/more ambitious things?"
(You can go on about corporate culture as the cause, but I've worked at regular corporations and most of FAANG. Initiative is rewarded almost everywhere.)
> Does anybody really buy the BS that this will all make the world a better place one day?
Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?
There is a reason that we aren't dying of dysentery at the ripe age of 45 on some peasant field after a hard winter day's worth of hard labor. The march of automation and technology has already "made the world a better place."
GeoAtreides 22 hours ago [-]
>Why is the reaction of so many people, once their menial work gets automated, "oh no, my menial work is automated." Why is it not "sweet, now I can do bigger/better/more ambitious things?"
because i have rent to pay? old age to prepare for?
why is it so hard to understand most people are not rich, that the cost of living is high, and that most people are VERY afraid their jobs will be automated away? why is it so hard to understand that most people haven't worked at FAANG, they don't have stocks or savings, and are squeezed harder with every new day and every new war?
what world, what reality are you guys living in?!
xvector 21 hours ago [-]
Because there is always work to do. It is true that demand will drop for those that don't take initiative and aren't sure what to do now that AI can do their repetitive tasks. However, demand will surge for those that can think critically about how to utilize AI to empower businesses.
"Software engineer" as a profession is rapidly getting automated at my company, and yet our SWEs are delivering more value than ever before. The layer of abstraction has changed, that is all.
> what world, what reality are you guys living in?!
One that has seen immense benefits from the Industrial Revolution and previous waves of automation.
GeoAtreides 21 hours ago [-]
you might want to brush up on the short- and medium-term consequences of the industrial revolution and the dark satanic mills where children were maimed or where people worked 12 hours a day in horrendous conditions.
Do you think that because 2 devs are now super productive with AI, the company will keep the other 30 average devs? no, of course not, they will fire them and pocket the difference. Same for other industries, where AI will slowly diffuse like a poisonous gas and displace jobs and people, leaving behind a crippled white-collar class. The profits will not trickle down, and the increased productivity will be a hatchet, not a plough.
xvector 18 hours ago [-]
> Do you think because 2 dev are now super productive with AI, the company will keep the other average 30 devs? no, of course not, they will fire and pocket the difference
Yes, they will keep the other devs that can figure out how to use AI well. Businesses want to grow.
scorpioxy 15 hours ago [-]
That hasn't been my experience or the experience of anyone I know or have talked to about how LLMs have affected their work. The parent comment explains what happened.
The businesses fired the staff and pocketed the difference. The result? Growth, at least on paper, as you're saying. Previously they were paying for 10 people and now they're paying for 2 so more profit yay! Of course this is a short term gain which might result in long term pain. That last part remains to be seen.
kjkjadksj 5 hours ago [-]
Businesses are competing for the same pie. They can’t all be growing. There aren’t enough clients. There isn’t enough money.
senordevnyc 20 hours ago [-]
where children were maimed or where people worked for 12h a day in horrendous conditions
Such things were super uncommon before the industrial revolution, I'm sure.
zeroonetwothree 19 hours ago [-]
Working conditions did decline as a result of industrialization. It wasn't until around the 20th century that we could say working conditions were better for most people than pre-industrial society.
xvector 18 hours ago [-]
More workers died working the fields prior to the Industrial Revolution, than in factories.
catlifeonmars 10 hours ago [-]
Source?
xvector 10 hours ago [-]
> The rapid urbanisation that accompanied the Industrial Revolution in Britain is often argued to have been accompanied by a dramatic worsening of urban conditions [...] However, demographic evidence suggests that death rates were much higher in towns in the seventeenth and eighteenth centuries than in the nineteenth century, and that the Industrial Revolution was accompanied by profound improvements in the survival of urban residents, especially infants and rural migrants.
> early industrialisation coincided with significant improvements in survival, especially in towns (Buer, 2013; Davenport, 2020a; Landers, 1993; Wrigley et al., 1997)
> population growth rates in excess of 1% per year would have resulted in falling real wages and hunger in any previous period [...] the fact that wages kept pace at all with increasing population should be viewed as a major achievement (Crafts and Mills, 2020; Wrigley, 2011).
Davenport, Romola J. (2021). "Mortality, migration and epidemiological change in English cities, 1600–1870." International Journal of Paleopathology, 34, 37–49. PMC7611108.
GeoAtreides 9 hours ago [-]
Nobody was arguing about how many people died during the Industrial Revolution or before; quality of life, on the other hand...
That being said.
You cite a study implying (you, not the study) that the Industrial Revolution was what led to lower death rates, so it's all good.
But that's not what the study says:
> These patterns are better explained by changes in breastfeeding practices and the prevalence or virulence of particular pathogens than by changes in sanitary conditions or poverty. Mortality patterns amongst young adult migrants were affected by a shift from acute to chronic infectious diseases over the period.
"than by changes in sanitary conditions or poverty" [my emphasis]
But wait! there's more! from the same study:
> The available evidence indicates a decline in urban mortality in the period c.1750-1820, especially amongst infants and (probably) rural-urban migrants.
"especially amongst infants and (probably) rural-urban migrants" ...where is the industrial revolution here?
And if that was not enough:
>Mortality at ages 1-4 years demonstrated a more complex pattern, falling between 1750 and 1830 before rising abruptly in the mid-nineteenth century.
"rising abruptly in the mid-nineteenth century"
turns out the industrial revolution did in fact raise mortality and death rates
tdeck 8 hours ago [-]
Don't all the references to "in towns" imply that these people weren't working the fields?
RivieraKid 22 hours ago [-]
> I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities.
I really don't understand this way of thinking. Don't you think that AI could replace senior engineers? Sure, companies will be able to do bigger / better / more ambitious stuff - but without any software engineers.
> Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?
I definitely think that AI will be a net benefit for society but it could easily end up being be bad for me.
ej88 22 hours ago [-]
there doesn't seem to be a limit on what companies can do with software; it's probably the most elastic demand of any industry ever
the swe role is going to change, but problem-solving systems thinkers with initiative won't go away
RivieraKid 9 hours ago [-]
That's a possible outcome. Another possibility is that AI will handle all of the thinking and problem solving part. So the market value of thinking will drop. The bottleneck will still be humans, but their input will be (1) doing physical, real-world stuff (2) providing data that the AI doesn't have, e.g. information about a specific problem domain or how does a user interface feel.
ej88 5 hours ago [-]
assuming no ASI, the market value of thinking without accountability trends to zero; the bottleneck will be thinking + accountability, at least for knowledge work
if AI truly solves novel thinking then nothing is a barrier. the physical world is downstream from robotics, which is downstream from software. it'll be able to persuade nation states to collect data for itself etc etc (insert sci-fi ending)
szatkus 21 hours ago [-]
So far AI doesn't seem even close to replacing senior engineers. Hell, it can't even entirely replace junior engineers.
I use AI agents every day at work and I'm happy with that, but it took over two years and billions of dollars in investment to deliver anything useful (Claude Code et al). The current models are amazing, but they still randomly make mistakes that even a junior wouldn't make.
There's another paradigm shift to be made, certainly, because currently it feels like we scaled up a big brain to spit out code. That works great for some problems, but it's not what software developers usually do at work.
delbronski 23 hours ago [-]
And I’m shocked that anyone into tech can be so blind to the adverse effects the current tech industry is having on our world and our society.
We owe it to the world, as the experts, to be critical. The march of automation and technology has made the world a better place in some ways. I sure love modern medicine, but those drones flying over Ukraine and Russia sure don’t seem like they are making the world a better place. Nuclear bombs are not making the world a better place. Misinformation on social media is not making the world a better place.
Any belief you drink blindly will eventually find a way to harm you.
xvector 23 hours ago [-]
[flagged]
delbronski 23 hours ago [-]
Oh yeah, no you are right. Sorry for focusing on that little part of space and time where I and everyone I know and love is alive and being affected by our decisions. How dumb of me!
solenoid0937 23 hours ago [-]
It actually is genuinely wrong to prioritize your little bit of space and time over the needs of the species as a whole and the benefit of untold future billions.
If everyone thought like you we'd be stuck in the pre-Industrial phase. How miserable that would be!
miltonlost 24 hours ago [-]
Keep marching that automation and technology to an acidified ocean. But hey, at least now we can code faster than we can review!
solenoid0937 23 hours ago [-]
AI won't be what acidifies our ocean, but AGI might save us from it.
Strangely enough, I don't see you calling for an end to the consumption of meat, which has a far larger environmental impact while not slowing global progress at all.
palata 23 hours ago [-]
> AI won't be what acidifies our ocean
Tech is what got us where we are. AI allows us to use more energy to produce more of what is currently measurably killing us.
> but AGI might save us from it.
This is just faith. Some believe that prayers may save us.
solenoid0937 23 hours ago [-]
"AI energy usage" is a convenient scapegoat not backed by data.
Many things are orders of magnitude bigger contributors to the energy problem than AI, while bringing comparably less value.
palata 5 hours ago [-]
> "AI energy usage" is a convenient scapegoat not backed by data.
Except it's not what I said.
What I said is that with AI, we do more with more (energy). "Doing more" has repercussions that go further than just the energy used to vibe code.
The reason we are measurably living through a mass extinction (one happening orders of magnitude faster than the one that wiped out the dinosaurs) is also the reason the climate is measurably warming (to the point where it will probably kill many of us): we are really good at producing more by using more energy.
It's not one thing (like airplanes, or meat, or whatever you want): it's everywhere. It's the whole race for producing more and more. AI is exactly part of that.
Looking at the direct energy consumption of a technology (here AI) while conveniently ignoring all its indirect impacts and concluding that "I can't understand why people think that tech is part of the problem" shows a big lack of understanding of... well, what will probably kill your kids, most likely theirs.
remich 20 hours ago [-]
I'm starting to get to the point where I'll only listen to AI energy use critiques if the commentator tells me up front they abstain from all forms of social media, especially video-based social media, first.
palata 5 hours ago [-]
Lucky me: I don't use social media at all.
Note that I did not criticise the AI energy. I criticised tech as a whole. Tech is part of the problem (the problem here being "we are killing our only planet").
sillyfluke 9 hours ago [-]
If the current admin wasn't waging a war on the renewables they have no personal investments in, propping up the energy needs of their own AI investments with revitalized fossil-fuel barons, and getting in on the pie-in-the-sky "future" energy sources the tech oligarchs point to (nuclear fusion startups) so they at least get rich if an alternative fuel source they actually invested in pans out, I could perhaps reconsider the notion that this comment isn't worth the pixels it's colored on.
palata 23 hours ago [-]
> There is a reason that we aren't dying of dysentery at the ripe age of 45 on some peasant field after a hard winter day's worth of hard labor.
Tell that to the people who will die before 45 because of global instability and global warming, I guess?
chatmasta 1 day ago [-]
What I miss is people showing off their hand-crafted libraries or frameworks. That’s become way less common now that everyone is building a layer up the stack. I fear we’ll be stuck in a permanent state of using Tailwind and React and all the LLM-favored libraries as they were frozen in time at the beginning of 2025. Then again, that’ll be the agent’s problem, not mine…
All that said, it’s extremely exciting. I’ve been in tech, in one way or another, for 25 years. This is the most energizing (and simultaneously exhausting) atmosphere I’ve ever felt. The 2006-2011 years of early Facebook, Uber, etc. were exciting but nothing like this. The future is developing faster than we can process it.
tdeck 8 hours ago [-]
> I fear we’ll be stuck in a permanent state of using Tailwind and React and all the LLM-favored libraries as they were frozen in time at the beginning of 2025.
Would it be such a bad thing if the "right way" to build a JavaScript frontend didn't change so much every year?
naikrovek 6 hours ago [-]
Let me take the comment you replied to and follow through a bit.
Would it be such a bad thing if we moved away from JavaScript entirely?
If we commit to AI, that seems exceedingly unlikely to happen.
I mean, do we really think that JavaScript is the best way to do this? I don't. I've been in IT and software development for 30 years. I thought I would see things progress, but I have not. Same operating systems, same browsers more or less still running JavaScript, same network stack, same everything. An immense amount of work to slowly evolve things that weren't designed to evolve, for 30 years.
Thirty Years.
We all know that things around us are flawed and that there are better ways, but we do nothing about it. How many people are looking at new paradigms, new ways to do something? Three? Four? I bet it's within that order of magnitude. Come on.
I'm disappointed in everyone in this industry, including myself.
Look at Plan 9. It was different. It was flawed, but at least it tried to fix things. It tried to handle some of Unix's sharp corners differently, and for the time I think it was good. At least they made an attempt. Linux took a few lessons from it, but I don't think anyone else did. Not really.
I'm mad. We have let ourselves down, we have let ourselves stagnate and simply spin wheels because using what's here is easier than designing something new and sharing it. Influential people don't look at new things often enough. People new to the field and young people don't understand what my complaint is about really, because this is all brand new, to them. They didn't witness the stagnation. I did. I am disappointed and I don't really know what to do about it.
zeroonetwothree 19 hours ago [-]
Maybe it's like how we're all still using languages from pre-2010? Python, JS, Java, C++, Go, Rust (most of those are 20th century). Once you move a layer up the stack it takes some extraordinary situation to upend the status quo.
kode-targz 8 hours ago [-]
because there isn't really that much to change that high up the stack. It's all the same. True innovation happens at the low levels of programming and hardware.
kehvyn 24 hours ago [-]
If it helps, I've mostly been using AI to implement things in the craziest languages I can justify.
I write Typescript and SQL by day, my last two personal projects were Rust and Perl.
I do worry that I'm not learning them as deeply, but I am learning them and without AI as an accelerant I probably wouldn't be trying them at all.
mattgreenrocks 24 hours ago [-]
Perhaps we're in an AI summer and a tech winter. Winter is always the time when people hole up, dream, and work on whatever big thing is next.
We're about due for some new computing abstractions to shake things up I think. Those won't be conceived by LLMs, though they may aid in implementing them.
zer00eyz 24 hours ago [-]
We have 2 decades of abstraction.
The stacks of turtles that we use to run everything are starting to show their bloat.
The other day someone was lamenting dealing with an onslaught of bot traffic and having to block it. Maybe we need to get back to good old-fashioned engineering and optimization. There was a thread on here the other day about PC Gamer recommending RSS readers and having a 36gb webpage ( https://news.ycombinator.com/item?id=47480507 )
tom_ 19 hours ago [-]
~36 MB.
(though it sounds like if you left it for long enough, you'd get 36 GB of ads downloaded eventually)
jakelsaunders94 24 hours ago [-]
> What I miss is people showing off their hand-crafted libraries or frameworks.
Same. I wonder if the use of AI will lead to less invention and adoption of new ideas in favour of ideas with lots of training data.
nicbou 14 hours ago [-]
That's interesting because the atmosphere feels the opposite of that. I was so excited by tech back then, so optimistic. Now I see it as a threat. People around me are worried about their future more than they are excited about the technology.
I think we've seen what the enthusiasm leads to, once these companies establish dominance. We even coined a word for it: enshittification.
jvanderbot 1 days ago [-]
How do I answer this without spamming: Yes, very much.
Everyone is in their own place adapting (or not) to AI. The disconnect between even folks on the same team is just crazy. At least it's gotten more concrete (here's what works for me, what do you do) vs catastrophizing about the jobpocalypse or "teh singularity", at least in day-to-day conversations.
doom2 9 hours ago [-]
What's boring to me is how abstract many of the "AI success stories" tend to be, even on here. A whole blog post about some new way to use LLMs, or a best practice, or whatever, and no link to the code or dotfiles. I understand that how you prompt is a big part of things but all the major providers have a lot of configuration options. There are whole ecosystems of plugins.
It's just not very interesting or useful to me to read about how you got AI to output better quality code or how you can program from your phone now without going into detail. And so many of the conversations are showing off the wins without talking about the tools, configurations, or other parts of the setup that made it possible.
peruvian 23 hours ago [-]
Yeah maybe some workplaces are starting to get more organized but in general there's teams with anti-LLM engineers still and some that have Claude Code running all day.
scorpioxy 23 hours ago [-]
Yes, extremes, which seem to fit the general sentiment of the world right now.
For a while, it felt like I was in a minority for saying that it can be a useful tool for certain things, but not the magic the sales guys claim it is. Instead, all the hype and the "get rid of your programmers" messaging made it into this provocative issue.
HN was not immune to this phenomenon with certain HN accounts playing an active part in this. LLMs are/were supposed to be an iteration of machine learning/AI tools in general, instead they became a religion.
xvector 18 hours ago [-]
IDK, it's been pretty magical at our big tech.
zer00eyz 24 hours ago [-]
I'm sure as hell bored of the current conversations people are having about ai.
> here's what works for me, what do you do
This is at least progress... but many want to remain in denial, and can't even contemplate this portion of the conversation.
We're also ignoring the light AI shines on our industry, and how (badly) we have been practicing our craft. As an example, there is a lot of gnashing of teeth right now about the VOLUME of code generated and how to deal with it... how were you dealing with code reviews before? How were you reviewing the dependencies in your package manager? (Another supply chain attack today, so someone is looking, but maybe not you.) Do you look at your DB or OS? Do two decades of leetcode, brain-teaser, FAANG-style interviews qualify candidates who are skilled at reading code? What is good code? Because after close to 30 years working in the industry, let me tell you the sins of the LLM have nothing on what I have seen people do...
JSR_FDED 1 days ago [-]
I’m sad that it’s crowded out all the interesting stuff I used to love learning about on HN.
jmkni 2 hours ago [-]
HN needs an "AI" tab, and the ability to flag things as AI
Some of it is very interesting, but maybe it shouldn't be on the home page unless it reaches a certain critical mass (similar to Show HN)?
JoshTriplett 24 hours ago [-]
I'm sad that it's crowding some of those things out of existence, not just out of being talked about.
4k93n2 21 hours ago [-]
you can at least block some of it out with ublock origin?
That doesn't help if nobody votes interesting stuff up out of /new because the rest of HN only cares about AI posts.
mtndew4brkfst 24 hours ago [-]
Not limited to here, of course. Net-new publications to ArXiv for some (most?) CS subcategories are >=90% about models, transformers, training, quantization, or some other directly related field, or how to apply these towards a different specialty.
Kye 23 hours ago [-]
This is completely normal when a new thing is in the third section of the technology adoption curve.[0] AI will either go away (unlikely) or become a footnote in posts about what people are doing with it in the next stage.
Among non-programmers, you always hear about some fool that fell in love with an AI girlfriend or whatever, but you never hear about the people who open chatgpt up once, tried some things with it, said to themselves "huh, that's kind of neat" and then lost interest a day or two later, having conceived of no further items to which AI could provide assistance.
slfnflctd 23 hours ago [-]
> having conceived of no further items to which AI could provide assistance
For me, the issue isn't that I can't conceive of work AI could help with. It's that most of the work I currently need to be doing involves things AI is useless for.
I look forward to using it when I have an appropriate task. However, I don't actually have a lot of those, especially in my personal life. I suspect this is a fairly common experience.
zeroonetwothree 19 hours ago [-]
I mainly just use it instead of Google to do research or ask simple questions. Comes up like 2-3x per day
olivia-banks 1 days ago [-]
I actually hear about this fairly often. In quite a few of my college classes, there's a large focus on AI (even outside the computer science department). I find it surprising the amount of non-technical people who don't even think to use it, or otherwise haven't interacted with it except when required.
nmeagent 4 hours ago [-]
I find it surprising how many non-technical friends and family constantly anthropomorphize LLMs, regularly bringing up instances where they "asked AI" about this or that and it "told them" whatever. I'm tired of trying to explain that they are merely statistical sequence generators, don't have a mind, are occasionally completely out to lunch, and ultimately cannot be trusted. This is usually a losing battle. The sheer bullshit that "AI tells them" is often astonishing or ridiculous, but a lot of the time it's given undue weight and trusted anyway. The future is bleak.
sph 14 hours ago [-]
I have used ChatGPT one afternoon in 2022, said “neat”, said “this is gonna destroy the world”, and haven’t used any other LLM since. Do I qualify?
kjkjadksj 5 hours ago [-]
There's a reputation going around, in the real-life conversations I hear, that it just doesn't work: it gives incorrect info and gets in the way. Multiple people say they're forced to use it for work and wish they weren't, or, even worse, that coworkers blindly follow it when it's wrong and then have to be told they're misinformed and the LLM is wrong. I think Google's AI Overviews really poisoned the well; people cite that one specifically, often.
mindcrime 22 hours ago [-]
OK, if you take "talking about AI" to mean just talking about "three different people's (almost identical) Claude Code workflows and yet another post about how you got OpenClaw to stroke your cat and play video games", then sure, that would be pretty boring.
But I don't see it that way. I've been fascinated by AI since I was a little kid (watching Max Headroom, Knight Rider, Whiz Kids, Wargames, Tron, Short Circuit, etc. in the 80's), up through college in the 1990's when I first read about the 1956 Dartmouth AI workshop that kicked the field off, and up to today, when we have the most powerful AI systems we've ever had. Every single bit of this stuff is wildly fascinating to me, but that's at least in part because I recognize (or "believe", if you will) that there's a lot more to "AI" than just LLMs or generative AI.
I still believe there are plenty of neural network architectures that haven't been explored yet, plenty more meat on the bone of metaheuristics, all sorts of angles on neuro-symbolic AI to work on, etc. And even "Agents" are pretty exciting when you go back and read the 90's-era literature on agents and realize that the things passing for "Agents" right now are a pretty thin reflection of what agents can be. Really understanding multi-agent systems (MAS) involves economics, game theory, computer science, maybe even a hint of sociology.
As such, I still find AI fascinating and love talking about it... at least in the right context and with the right people. :-)
And besides... as they[1] say: "Swarm mode is sick fun".
Software developers are not going to stop talking about "AI" as long as it has such a huge potential for putting them out of work.
pvorb 21 hours ago [-]
I really like this paragraph about management caring about AI:
> What makes this worse, is our bosses have bought into it this time too. My managers never cared much about database technologies, IDE’s or javascript frameworks; they just wanted the feature so they could sell it. Management seems to have stepped firmly and somewhat haphazardly into the implementation detail now. I reckon most of us have got some sort of company initiative to ‘use more AI’ in our objectives this year.
tapoxi 1 days ago [-]
It's a black box that thinks for me, sometimes it's good, sometimes it's bad, sometimes it times out.
I am extremely skeptical of AI products anyone builds. It's just using one black box to build scaffolding around another black box and then typically want to charge money for it. I don't see any value there.
cdaringe 2 hours ago [-]
A horse without gear is a wild animal. Slap on a saddle, some reins, and training, and it's suddenly a transport vehicle.
AI products can and do help make the raw models applicable to targeted domains. Think of them as a black box, sure, but that doesn't mean they don't add value.
d675 24 hours ago [-]
Depends on whether they're selling you an AI wrapper or they built something useful.
Also, it depends on who the target user is.
AI can be used to build deterministic software.
rolandnsharp 15 hours ago [-]
[dead]
overgard 1 days ago [-]
I'm like 99% convinced that most of the AI conversation upvotes at this point is astroturfing. I just don't see the correlation with the sentiment I get from talking to people in the real world (mostly negative AI sentiment) vs what I see here
pesus 23 hours ago [-]
I'm convinced that the majority of people hyping up AI don't actually interact with many people in real life, let alone people that aren't software engineers.
solenoid0937 22 hours ago [-]
To those of us on the cutting edge, the opinion of the average person when it comes to these things is totally irrelevant. I see the benefit and possibilities with my own eyes, I don't need the confirmation or denial of the average person.
All that said, I've already set up a few of my non tech close friends with Cowork and they are huge fans of it now. It's somewhat shocking how much menial repetitive work the average white collar job entails.
stemlord 18 hours ago [-]
The average person dislikes AI not because it isn't useful. Anyway, we're more or less all on the cutting edge together
solenoid0937 23 hours ago [-]
I think some companies are just behind the curve, so this sentiment seems bizarre to some.
At my big tech, AI is every conversation with everyone, every day. Becoming AI native is a huge deal for us. Literally everyone is making AI usage a core part of their job and it's been a big productivity accelerator.
Perhaps it's different where you work, so you don't see the sentiment.
ab71e5 20 hours ago [-]
> AI is every conversation with everyone
Wow that sounds horrible.
zeroonetwothree 19 hours ago [-]
Yes it is.
overgard 6 hours ago [-]
"AI native". I don't know if you intend it, but you sound like a LinkedIn lunatic; nobody talks like that.
eudamoniac 12 hours ago [-]
As someone working somewhere very much like this, the "everyone" mentioned is actually a few people who are under the mistaken impression that the rest, keeping their head down, are equally interested and on board.
Your post was written almost verbatim by my coworker last week, who has no idea that I and half the team are not doing any of this stuff.
trigvi 23 hours ago [-]
Not necessarily. Personally, I'm both in love with AI (likely to upvote a convo) and scared about the short/medium term societal changes its job displacement will bring.
SyneRyder 23 hours ago [-]
Honestly, I think there's a big divide, and those of us who are using AI intensively might just be increasingly "going dark" & distancing ourselves from those "real world" people. It's becoming detrimental being around people who are so actively negative about AI. It feels like being around people who still insist the sun orbits the earth. Those people are actually happier believing what they believe, so why spend any more time trying to convince them they're wrong?
I spent 2024 on Mastodon and I absorbed their groupthink that AI was useless... I wish I could get that year of my life back. I wish I had that extra year headstart on AI compared to where I am now. So much of my coding frustrations that year that might have been solved from using AI. I am reluctantly back on X - I hate what has been done to Twitter, but that's where so much of the useful information on using AI is being shared.
Well, back to it. Claude has been building another local MCP server tool for me in the background.
solenoid0937 23 hours ago [-]
> It feels like being around people who still insist the sun orbits the earth.
100% feeling this divide as well.
People that deny the benefit of AI in 2026... I can't even engage with them anymore. I just move on with my life. These people are simply not living in reality, it will catch up to them eventually (unfortunately.)
kody 5 hours ago [-]
at which point they will learn to write markdown files too.
miltonlost 24 hours ago [-]
There's definitely some people working overtime to overhype AI on here. Like 50% of the comments on this are from simianwords, who only posts when people say negative AI sentiments.
MrLeap 1 days ago [-]
I'm old enough to remember being fatigued with so many people talking about making "apps". Programs that run on a phone. Before that everyone was excited about blogging. Web 2.0 ugh.
Before that we were excited about the wheel and the creation of fire. All capital drained into those ephemeral fancies.
The cycles cycle on.
delecti 22 hours ago [-]
Yep, the hype will die, and some of the substance will remain. I mean, we're currently commenting on Web2.0 about a blog post. Both stopped being the next big thing, and are now just some things we use. Relevant anecdote: I most recently worked on the apps (ding) for a car company (double-ding, fire and wheels).
matsemann 24 hours ago [-]
Yeah. I don't mind AI, but I'm waiting for it to stabilize and for a good workflow to be replicable for non-toy problems that should survive and evolve for a long time. I don't think I lose out much by not having 10 agents doing my work for me right now. In 6 months or a few years or whatever, I can just learn the new way of doing it. It's just exhausting how much it changes month to month. Do I use it? Yes. Probably suboptimally. I'll learn later, though.
Like the new frontend frameworks that came out every week from 2010 or so. I didn't jump on every single one; I waited until React was declared the winner and learned that, which worked well. Sure, someone who used it from day 1 had more experience, but one quickly catches up.
mhitza 1 days ago [-]
Big Data, The Cloud, Quantum Computing, Web 3.0, and maybe a few I've forgotten about.
Only thing that stuck thus far is the cloud. Though not for infinite scalability and resiliency, cause that just dumps big invoices in your lap.
solenoid0937 23 hours ago [-]
Big Data absolutely became a thing
The Cloud happened as well, as you've pointed out
AI adoption is well past Quantum and Web 3. Comparing it to those two is nonsensical.
mhitza 23 hours ago [-]
It's only nonsensical if you invent your own comparison dimension ("adoption") to construct your argument, and then use it to call what I said nonsensical!
All those listed and more, are part of the cycles that the parent comment mentioned and which I've continued.
Same thing with Agile. Mostly sprint-based waterfall, iterative development is not something I've ever seen in practice. Or people over processes, remember those ideas?
BigData was another hype cycle where even smaller companies wanted a "piece of the action". I worked at the time at a sub-50-developer company, and the higher-ups were all about big data. When in fact our system was struggling with GBs of data due to frugality in hardware.
For a moment in time you couldn't spit in any direction without hitting a Domain-Driven Design talk. And now we disable safeguards and LLMs write a mix of garbled ideas from across all the laundered open source training data.
Too early to tell where AI will land, and if it will bring down the economy with it, but spending rate doesn't deliver equal results for all, and we will have to see after the dust settles.
throw4847285 22 hours ago [-]
This is so whiggish that it made my whig fly off my head when I read it. I spend a lot of time on HN, so I'm gonna need to secure my whig somehow, because this happens a lot.
s_u_d_o 24 hours ago [-]
Gosh, how I miss the old HN days… where one would actually code, read docs, develop stuff, and feel happy about it. Not write a prompt and watch a chatbot do all the work in a matter of seconds. It's like we're losing the meaning of building something… don't know how to explain it more. But yeah, it's tech! Nothing stays the same.
sph 14 hours ago [-]
I miss the Javascript framework of the week.
amelius 21 hours ago [-]
The old HN was full of people feeling smug about their intellectual capabilities.
The new HN is full of people filled with anxiety about being replaced by an advanced calculator.
To an outsider, it could almost be funny if it wasn't so sad.
doublerabbit 10 hours ago [-]
And when that advanced calculator divides by zero, that's when it's time to run.
Garlef 1 days ago [-]
“Everything has already been said, but not yet by everyone.”
— Karl Valentin
---
Personally, I'm still very interested in the topic.
But since the tech is moving very fast, the discussion is just very very unevenly distributed: There's lots of interesting things to say. But a lot of takes that were relevant 6 months ago are still being digested by most.
sodapopcan 1 days ago [-]
> “Everything has already been said, but not yet by everyone.” — Karl Valentin
Never heard this and I like it very much. This is just an off-topic comment to say thanks!
jakelsaunders94 24 hours ago [-]
This is a great saying, thank you for sharing it. Out of curiosity, do you have any links to interesting AI articles you've read recently? Maybe I'll change my mind.
I don't like the hype language applied by the channel host one bit - and so this is not something where I expect someone tired of the hype to be swayed - but I think his perspective is sometimes interesting (if you filter through the BS): He seems to get that the real challenge is not LLM quality but organisational integration: Tooling, harnesses, data access, etc, etc. And so in this department there's sometimes good input.
jakelsaunders94 22 hours ago [-]
Thank you for the rec and review, I’ll take a look!
holtkam2 19 hours ago [-]
To answer your question directly: yes I am bored of talking about AI. I think it’s funny how the folks who are screaming most loudly about their AI expertise typically have not built anything of value with it. They are so focused on the tool they have forgotten that the output is what mattered all along
mememememememo 1 days ago [-]
Yes. Go to Mastodon. I accidentally stumbled on Mastodon last night (I knew about it, of course, but had largely ignored it). Of the 100 or so posts, they were all cool stuff. Only one was AI-related, and it was more a researchy, geeky thing than the brainrot "I fired all my staff an hour ago. They were not happy. CRLF. CRLF. I have an agentic circus and I am the ringmaster of 666 agents. CRLF....." crap you get on LinkedIn.
Thanks. Edited. I have a mental block for some reason spelling it!
jakelsaunders94 24 hours ago [-]
I've been meaning to try Mastodon for a long time (I was never really a Twitter user). As others have said elsewhere, though, I'm not sure where to start. Did you just download the app and join mastodon.social?
neobrain 9 hours ago [-]
If you create an account, it may be worth looking into "starter packs", which are lists of accounts around specific topics to follow. That's an easy solution if you run into the "I don't know who to follow and there's no algorithm that'll tell me" problem.
mememememememo 23 hours ago [-]
I did much less. Just went to mastodon.social in my browser and read what is there. I think you can create an account from there. You can also choose another instance to read and create the account from.
rogix 7 hours ago [-]
Is it a requirement to never write an article about AI without explicitly saying how “amazing”, “extraordinary” or “breathtaking” it is before saying anything else? It is like asking for forgiveness before the blasphemy. Forgive me, God, I know how incredible you are, but why do you need us talking about you all the time? How long until we start talking about AI for what it is, a normal technology, without the risk of being condemned for heresy?
jimmyjazz14 1 days ago [-]
I think the advancements around models and such are still somewhat interesting, but it's all the hype around peripheral things like OpenClaw, agentic workflows, and other hyped-up AI-adjacent news that is getting pretty old.
Aerroon 1 days ago [-]
I think the workflows can be really interesting to read about. The other week I read a reddit post about how someone got Qwen3.5 35B-A3B to go from 22.2% on the 45 hard problems of swebench-verified to 37.8% (Opus 4.6 gets 40%).
All they essentially did was tell the LLM to test and verify whether the answer is correct with a prompt like the following:
>"You just edited X. Before moving on, verify the change is correct: write a short inline python -c or a /tmp test script that exercises the changed code path, run it with bash, and confirm the output is as expected."
Now whether this is true, I don't know, but I think talking about this kind of stuff is cool!
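For illustration, a minimal sketch of the kind of inline self-check that prompt pushes the model to emit. The `slugify` helper and its expected value are hypothetical stand-ins for whatever code the agent just edited, not something from the post:

```python
# Suppose the agent just edited this helper. Before moving on, the prompt
# tells it to write and run a tiny script (e.g. via `python -c` or a /tmp
# file executed with bash) that exercises the changed code path directly.
def slugify(title: str) -> str:
    # lowercase the title and join its words with hyphens
    return "-".join(title.lower().split())

# The self-verification step: assert observed behavior matches intent,
# so a regression is caught immediately rather than several edits later.
assert slugify("Hello World") == "hello-world"
assert slugify("  Mixed   Case Title ") == "mixed-case-title"
print("verified")
```

The payoff claimed in the post is that this forces the model to confront concrete evidence of failure right after each edit, instead of compounding an unverified mistake across subsequent steps.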
marcus_holmes 21 hours ago [-]
It's a conversational black hole. Every meeting with tech folks converges on what they're doing with LLMs these days.
Our local tech meetup is implementing an "LLM swear jar" where the first person to mention an LLM in a conversation has to put a dollar in the jar. At least it makes the inevitable gravitational pull of the subject somewhat more interesting.
mrbonner 1 days ago [-]
Yes, my wife asks me to shut up when I mention AI. Hah
bichiliad 18 hours ago [-]
This kind of feels like the same thing in which people talk more about the cameras they have instead of the photos they make, or the modular synth they built instead of the album they're working on with it. It's nice when it's confined to your hobby, but it's so weird when the whole world is talking about it. Imagine everyone going to a concert just to see the gear on the stage.
MintPaw 18 hours ago [-]
It might be that this is common with new tech; I'm too young to have grown up in the early concert scene. But with video games the curve was similar. People would show up just to see the tech and how much stuff people could put on the screen. It's only pretty recently that that part has started getting boring.
I'd imagine similarly there were points in time when people would go to concerts just to see the electric guitars and lighting setups.
defrost 18 hours ago [-]
There'd be a surprising number of people who'd go to a Greatest Hits and Homages to the Fairlight CMI concert.
Arguably, Jean-Michel Jarre concerts were 100% gear-porn shows.
My_Name 3 hours ago [-]
Judging from the number of comments, it seems that this follows the universal rule of 'yes/no question in the headline' where the answer is always 'no'.
mr_bob_sacamano 1 days ago [-]
I wish there were a filter on Hacker News to hide all AI related posts.
erikerikson 1 days ago [-]
This is Hacker News. Somebody made that and uses it, so they don't see this post and can't tell you about it, but it exists.
I think what's crazy is the desire to replicate current-day corporate structures. Look at this multi-agent Jira-story-reading bot that builds stuff because we let it churn overnight. Like, the whole idea is that you don't need that nonsense to build something amazing.
jonhuber 1 days ago [-]
And the desire to not want to understand things.
bigstrat2003 22 hours ago [-]
It's wild to me how many people in the industry turn out to hate not only programming, but actually trying to understand the stuff they are working on. Bro you're in the wrong field, go do another job if you don't like doing that stuff.
Pretty funny boasting about accelerated results, when his public contributions are only in two repositories (gstack itself and a rails bundle with 14 commits).
Endlessly grooming the Agent reminds me of Gastown.
Curious to see what he'll present, if at all, from his 700+ contributions in private repositories.
marginalia_nu 1 days ago [-]
I think it's kind of a double whammy: on the one hand, working with AI leaves a lot of 5-15 minute breaks, perfect for squeezing in a comment on an HN thread, while on the other it supplants the sort of work that would typically lead to interesting ideas or projects, substituting work that isn't that interesting to talk about (or at least hasn't been thought about for long enough to have interesting things to say).
buzarchitect 1 days ago [-]
This resonates. I build products on top of LLMs, and the most interesting work I do has nothing to do with AI; it's designing structured methodologies, figuring out what data to feed in before a conversation starts, deciding what to do when the model gives a weak answer. The AI is plumbing.
But nobody wants to hear about prompt calibration or pipeline architecture. They want to hear "I replaced my whole team with agents." The boring, useful work is invisible, and the flashy stuff gets all the oxygen
selimthegrim 23 hours ago [-]
Now do it with knowledge or causal graphs
buzarchitect 23 hours ago [-]
Causal graphs are interesting, but in my experience, the bottleneck isn't the representation; it's getting the model to actually follow through on weak signals instead of moving on to the next topic. A graph won't help if the system doesn't know what to do when it hits a node that doesn't resolve cleanly.
What's your experience been with them?
steve-atx-7600 8 hours ago [-]
“But they were all using basically the same hammer in the same way, so they were just screaming the same shit at each other at the top of their voices.” I don’t see it this way. I think we’re watching the nature of software engineering change permanently and profoundly in the span of months. Claude Code in 2026 and later means most engineers will no longer write most of their code. The nature of the work still includes design, but now requires dedication to figuring out how to break work down onto Claude instances, and to do so in an easily verifiable way.
elorant 23 hours ago [-]
I’m bored of using the AI for anything other than my work. Because with my work I can give very detailed and structured prompts and get the best results, while also being able to evaluate the answer. For everything else I’m kinda worn out by second guessing all the time or having to enter a long thread until I get a decent response.
WorldPeas 1 days ago [-]
I'm largely bored of wrappers; what still interests me are the new modalities being released and iterated on, like small local VLMs, voice-to-voice, and TTS.
flowerthoughts 16 hours ago [-]
I'm starting to believe that AGI really will cause a singularity, even before the technology exists: it's the only thing that will be talked about in my circles. People will forget to eat, families will be divided and split apart, houses will be foreclosed on due to neglect to pay the bills. It will be technology's last and eternal heroin high. The one you never experience coming down from.
tomrod 21 hours ago [-]
Not at all -- I am building more and more. But I've been doing AI/ML since 2005 -- and there is always more to learn.
The new GenAI architectures and tooling supported by them just give more fun things to do and fun ways to do it.
bilsbie 1 days ago [-]
I’m confused why the hype and the investment got so high, and why everyone treats it like a race. Why can’t we develop it gradually, like DNA sequencing?
olivia-banks 1 days ago [-]
To be fair, DNA sequencing was very hyped up (although not nearly as much as AI). The HGP finished two years ahead of schedule, which is sort of unheard of for something in its domain, and that was mainly a result of massive public interest in personalized medicine and the like. I will admit that a ton of foundational DNA sequencing work evolved over decades, but the massive leap forward in the early 2000s is comparable to the LLM hype now.
dylan604 1 days ago [-]
I assumed it was obvious. Being first is all that matters. Investors don't want to invest in second place. Obviously, first means achieving AGI and not some GPT bot. That's why so many people keep saying AGI is _____ weeks away, with some even preposterously stating that AGI might have already happened. They need to keep attracting investors. Same as Musk constantly saying FSD is ____ weeks away.
matusp 1 days ago [-]
Very much so. I wouldn't mind some interesting projects or results. But it's very basic opinions or parables all over again.
deathanatos 17 hours ago [-]
I mean, yes, I am bored. The emperor has no clothes.
I "tried" Claude the other day. It gave me 3 options for choosing, effectively, an API to call an AI. The first were sort of off limits, b/c my company… while I think we have a Claude Pro Max Ultra+ XL Premium S account, it's Conway's Law'd. But, oh, I can give it a vertex API key! "I can probably probably get one of those" — I thought to myself. The CLI even links you to docs, and … oh, the docs page is a 404. But the 404's prose misrepresents it as a 500.
Maybe Claude could take a bit of its own medicine before trying to peddle it on me?
We're on like our 8th? 9th? GitHub review bot. Absolutely none of them (Claude included) seems capable of writing an actual suggestion. Instead of "your code is trash, here's a patch" I get a long-form prose explanation of what I need to change, which I must then translate into code. That's if it is correct. The most recent comment was attached to the wrong line number: "f-string that does not need to be an f-string on line 130" — this comment, mind you, the AI put on line 50. Line 130? "else:" — no f-strings in sight.
"Phd level intelligence."
abcde666777 22 hours ago [-]
The topic of AI triggers people in various ways - anxiety and uncertainty about the future, frustration with excessive hype, and the debate between people on each side of the fence.
It will calm down once the dust starts to settle and there's some kind of consensus on how the chips have fallen.
Also there is an irony that talking about being sick of talking about AI is still talking about AI.
qsera 17 hours ago [-]
>The topic of AI triggers people in various ways
The only thing that triggers me about it is people's inability to understand how a scam works, after falling for such scams for the n-th time.
Hyperloop, uBeam, blockchain, Elon Musk taking us all to Mars...
In this line of scams, LLMs are a wet dream...
keithnz 1 days ago [-]
No, well, I still enjoy the articles. The thing that always surprises me is the negativity in comment threads. I'm genuinely quite excited about AI based development. Yesterday I was playing around with developing a marketing plan for a market gap where we could leverage our product and finding what features in our product would need changing/adding to improve our offering. Quite interesting results!
jakelsaunders94 24 hours ago [-]
I think in most places on the internet the negative comments are the ones that will win out. Same for AI I suppose. I tried not to bemoan the whole concept here, just the amount of 'airtime' it gets. Sort of like when something happens in the news (lately it's been the Epstein files for me), and you wish you could see a more balanced picture of world events.
fragmede 21 hours ago [-]
Surround yourself with positive people. Reddit's take for an event I was at made it sound like it went terribly, but I was there and had fun.
1a527dd5 1 days ago [-]
I don't think I'm quite bored. I'm exhausted/fatigued with the pace.
AStrangeMorrow 24 hours ago [-]
Yes it feels like a full time job just to try to keep up. And I’ve been in AI for close to 10 years so I feel like I have to keep up at least a minimum.
Another thing for me is that it has gotten a lot harder for small teams with few resources, let alone one person, to release anything that can really compete with what the big players put out.
Quite a few years back I was working on word2vec models/embeddings. With enough time and limited resources I was able, through careful data collection and preparation, to produce models that outperformed existing embeddings for our fairly generic data retrieval tasks. You could download models from Facebook (fastText) or other models available through gensim and similar tools, and they were often larger embeddings (e.g. 1000 dimensions vs 300 for mine), but they would really underperform. And when evaluating on the general benchmarks that existed back then, we were basically equivalent to the best models in English and French, if not a little better at times. Similarly, some colleagues later built a new architecture inspired by BERT, after it came out, that again outperformed any existing models we could find.
But these days I feel like there is not much I can do in NLP. Even to fine-tune or distill the larger models, you need a very beefy setup.
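For readers unfamiliar with the retrieval setup described above, here is a minimal, self-contained sketch of embedding-based ranking with cosine similarity; the toy 3-d vectors and document names are made up for illustration (real embeddings would be 300-1000 dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec, doc_vecs):
    # Return document ids sorted by similarity to the query, best first.
    scores = {doc_id: cosine(query_vec, v) for doc_id, v in doc_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical 3-d "embeddings" standing in for real word2vec/fastText vectors.
docs = {
    "server_setup": [0.9, 0.1, 0.0],
    "cooking_tips": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # an embedded query about servers
print(rank(query, docs))  # server_setup should rank first
```

Comparing a 300-d and a 1000-d model on the same task is then just a matter of swapping the vectors in `docs` and checking which model ranks the relevant documents higher.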
cmrdporcupine 1 days ago [-]
This.
I don't know how I can be burnt out from making this thing do the work for me. But I am.
Yeah, I don't think humans were meant to create things faster than they can understand them. But here we are.
vrganj 1 days ago [-]
AI is fine. The hype is annoying. What's even worse though are the incredible amounts of money and energy that are being thrown at it, with no regard for the consequences, in times of record inequality and looming climate apocalypse.
AI is the red herring that'll waste all our attention until it's too late.
lpcvoid 1 days ago [-]
AI is one of the reasons climate change is accelerating, which is another in a long list of reasons to hate it.
tonmoy 1 days ago [-]
I'm not sure I follow. AI barely consumes energy compared to other industries, and instead of focusing on the heavy hitters first, wasting time on the climate impact of AI doesn't seem useful.
elbasti 1 days ago [-]
This is wrong. AI uses ~4% of the US grid, and projections are that it will grow to 10%+ in the next 6 years.
And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
thethirdone 24 hours ago [-]
Compare that to ~30% of all energy use for transportation. So approximately 40% * 4% = 1.6% vs 30%. I find your correction to be more wrong than the initial statement.
> And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
Emissions in 2018 were ~5250M metric tons and in 2024 ~4750M. That is a reduction of 10% of total emissions. Without going into calculations of green electricity and such, it's still safe to say AI using 10% of the grid would not completely wipe out the reduction.
> Compare that to ~30% of all energy use for transportation
Transportation, especially ALL transportation, does a LOT. You're looking for ROI, not the absolute values. I think it's undeniable that the positive economic effect of every car, truck, train, and plane is unfathomably huge. That's trains moving minerals, planes moving people, trucks transporting goods, and hundreds of combinations thereof, all interconnected. Literally no economic activity would happen without transportation, including the transition to green energy sources, which would itself improve the emissions from transportation.
I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.
Also, I'm not sure about your math. 4% would be 4% of the whole like in a pie chart, not 4% of the remainder after removing one slice. 4% AI, 30% transportation, 66% other. I don't know where that 40% is from.
thethirdone 22 hours ago [-]
> Also, I'm not sure about your math. 4% would be 4% of the whole like in a pie chart, not 4% of the remainder after removing one slice. 4% AI, 30% transportation, 66% other. I don't know where that 40% is from.
AI is not currently 4% of the energy market of the US. Only the grid. I should have been more clear about the ALL ENERGY vs GRID distinction.
> I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.
I really made no statement on the value of doing things. Transportation is obviously very valuable. I just wanted a more fact based conversation.
elbasti 23 hours ago [-]
> Compare that to ~30% of all energy use for transportation. So approximately 40%*4% = 1.6% vs 30%. I find your correction to be more wrong that the initial statement.
I don't follow. The comparison is 30% of energy use for transportation vs 4% for AI, and soon 30% for transportation vs 10% for AI.
thethirdone 22 hours ago [-]
The grid is not all energy use. To get the numbers on an even footing you need to account for the fact that only ~40% of energy goes through the grid.
And that leaves roughly a 6:1 ratio, assuming the projections run true. It may well be possible to get efficiency wins from the transportation sector that outweigh growth in AI.
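A back-of-envelope sketch of the arithmetic above, using only the rough figures cited in this thread (none of these are authoritative statistics):

```python
# Rough figures from this thread; approximations, not authoritative data.
GRID_SHARE_OF_TOTAL_ENERGY = 0.40   # ~40% of US energy use goes through the grid
AI_SHARE_OF_GRID_NOW = 0.04         # ~4% of grid load today (data centers)
AI_SHARE_OF_GRID_PROJECTED = 0.10   # ~10% of grid load projected within ~6 years
TRANSPORT_SHARE_OF_TOTAL = 0.30     # ~30% of all US energy use

# Convert AI's grid share into a share of total energy use.
ai_now = GRID_SHARE_OF_TOTAL_ENERGY * AI_SHARE_OF_GRID_NOW              # 1.6%
ai_projected = GRID_SHARE_OF_TOTAL_ENERGY * AI_SHARE_OF_GRID_PROJECTED  # 4.0%

print(f"AI today:     {ai_now:.1%} of total energy")
print(f"AI projected: {ai_projected:.1%} of total energy")
print(f"transport/AI ratio now:       {TRANSPORT_SHARE_OF_TOTAL / ai_now:.1f}x")
print(f"transport/AI ratio projected: {TRANSPORT_SHARE_OF_TOTAL / ai_projected:.1f}x")
```

Under these inputs the projected ratio comes out to 7.5:1; the ~6:1 figure above presumably also assumes the grid's share of total energy use grows somewhat.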
kai_mac 12 hours ago [-]
The 4% figure is for data centres generally afaik, and it's pre-AI
Of course nearly all of that growth is going to be AI
Insanity 1 days ago [-]
Pretty large amounts of energy go towards training large language models. Running them is also a non-negligible energy cost at scale.
But yeah, there's way worse industries out there when it comes to climate change impact.
datsci_est_2015 1 days ago [-]
? Am I misunderstanding the push for nuclear energy and record energy prices in locales with new “data centers”?
hirako2000 1 days ago [-]
Before large models, things were starting to move toward micro-VMs, lean hardware, and Firecracker-style cloud platforms running thin containers.
Then came the AI buzz, and now we are building gigafactories: 'giga' as in gigawatts of usage, no smaller a target.
surgical_fire 24 hours ago [-]
Which is why talk about AI datacenters typically involve energy supply constraints, and possibly the need to build power plants along with it.
It is, of course, because it barely uses any energy.
amelius 24 hours ago [-]
> AI is one of the causes that climate change is accelerating, which is another in a long list of reasons to hate it.
If you want to point at causes of climate change, look no further than adtech. It's the driving force behind our overconsumption.
And it has perhaps an even longer list of reasons to hate it.
bluefirebrand 22 hours ago [-]
AI and Adtech are the same damn industry
proc0 1 days ago [-]
People sure don't care about it anymore, and that coincided with the rise of AI. There's barely any mention of climate change compared to 5+ years ago. I really think this is all about keeping the capitalist system from imploding under so much debt (the next big thing needs to happen to keep the growth going).
sharemywin 1 days ago [-]
climate change was an important issue when they were trying to peddle EVs and solar.
lpcvoid 24 hours ago [-]
They == the lizard people, I assume?
xvector 1 days ago [-]
Seeing this kind of populist misinformation/bikeshedding on HN is particularly disappointing.
lpcvoid 24 hours ago [-]
So then explain to me where I wrote misinformation?
mostertoaster 24 hours ago [-]
The EPA repealed its 2009 conclusion that greenhouse gases warm the Earth and endanger human health and well-being.
So this is not a good reason to oppose AI. Now the sheer energy it requires does mean we might want to go nuclear though.
Natural gas is nice though because it does pollute the air far less than coal.
You might argue the EPA only repealed that because of political agendas, but the same argument could be made for why it was passed.
A lot of people got very rich off the fear mongering from climate alarmists.
computerdork 23 hours ago [-]
Hmm, it seems pretty clear that the climate is getting hotter, so it seems natural for some people to be worried about what will happen to the planet in a few decades (me, for one).
And, you may be right, it may not be that big a deal and that we're being alarmists, but it seems like we currently have the tools to slow it down greatly. Why not be on the safe side and use them?
... but to be honest, guessing my opinion won't sway you in any way, still thought I'd try. thanks!
mostertoaster 16 hours ago [-]
It’s really about the cost/benefit analysis.
The value of plowing ahead and using more energy is worth far more than making sure Florida doesn’t lose some coastline.
The presumptions I see that annoy me with the alarmists, is that they completely negate human agency and ingenuity, and they ignore the economic cost of many of the proposed plans.
Natural gas is far better than coal and should be encouraged rather than condemned. Nuclear power is best of all, is the cleanest and safest energy, and yet is hardly ever the first choice of the alarmists.
I’d rather spend double the energy unlocking breakthroughs in science with the help of AI, and address the problems when they come. I don’t go out of my way to lower my “carbon footprint”, but I also don’t just do things that are wasteful and deliberately harmful to the environment.
AI making us forget how to think for ourselves is a far bigger risk to mankind than climate change. Thanks.
computerdork 6 hours ago [-]
Agree that you need to balance costs with benefits, but nowadays, solar and wind are often the cheapest options (southern states or states with lots of wind). And nuclear is an option that even some staunch environmentalists support these days.
Yeah, I don't think most people who support battling climate change are extremists. We just believe it's a big problem, and, to put it in monetary terms, having to deal with major changes in climate could cost the world tens of trillions of dollars by some scientists' predictions. It's like any problem: doing relatively small fixes now could save enormous amounts of time and money later down the line. That seems like a good use of our efforts.
mostertoaster 5 hours ago [-]
Yeah I’m all for doing what makes sense.
I probably just overreact and judge certain statements too quickly, from my experiences with people who act like I'm destroying the earth because I have more than 3 kids.
I appreciate reasonable people though, and I should not assume everyone is a crazy alarmist just because they have some concern, so I apologize.
Sohcahtoa82 1 days ago [-]
> AI is fine. The hype is annoying.
I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"
In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones that think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since they now rely on searching for answers rather than what was in its training data.
We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it to me in a digestible format than having to search by hand and combing through a dozen irrelevant results to find the one bit of information I need.
[0] For the purpose of this comment, this subset is meant to be detraction based on the quality of the product, not the other criticisms like copyright/content theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.
onemoresoop 24 hours ago [-]
Look closely at what the detractors say. Most of them are using AI themselves and are just pushing back on the hype and other ludicrous claims, and that's a good thing. Is the current crop of gen AI anything near AGI? Is it worth the current valuation? Can a company fire most staff and run on gen AI? We may see the economy completely crash, and not because AI takes over but because of bad investments, hype, and greed.
xvector 18 hours ago [-]
The same detractors I know today that use AI, said that LLMs were useless slop generators that would never amount to anything just a year or two ago.
Detractors, doomers, and techno-pessimists have got to be the most consistently wrong group in history. https://pessimistsarchive.org/
Sohcahtoa82 14 hours ago [-]
I think the techno-pessimists were right about NFTs.
xvector 10 hours ago [-]
Everyone and their grandma weren't using NFTs to get real work done.
eudamoniac 12 hours ago [-]
I've made tens of thousands of dollars so far by day trading puts on NVDA. As a detractor et al, at least I'm putting my money where my mouth is eh
beej71 1 days ago [-]
I don't think it's worthless. It can greatly speed up coding. And learning foreign languages. And many other things.
But I do think humanity is worse off because of it. So I'm a detractor in that way. :)
ben_w 22 hours ago [-]
> Hallucinations are exceptionally rare now, since they now rely on searching for answers rather than what was in its training data.
Well, I wouldn't go that far, but the hallucinations have moved up to being about more complicated things than they used to be.
Also, I've seen a few recent ones that "think" (for lack of a better word) that they know enough about politics to "know" they don't need to search for current events to, for example, answer a question about the consequences of the White House threatening military action to take Greenland. (The AI replied with something like "It is completely inconceivable that the US would ever do this").
heavyset_go 24 hours ago [-]
> certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?)
I mean, you can get mad at people you made up in your head, that's a thing people do, but this caricature falls in the same comforting bucket as "anyone who doesn't like <thing I like> is just ignorant/stupid" and "if you don't like me you're just jealous".
Maybe non-straw people have criticisms that aren't all butterflies and rainbows for good reasons, but you won't get to engage with them honestly and critically if you're telling yourself they're just ignorant from the start.
For example, I will bet that non-straw people will take issue with this, and for good reasons:
> Hallucinations are exceptionally rare now
arcxi 1 days ago [-]
This very comment is measurably more harmful than any AI criticism that annoys you - someone will read this and assume it's appropriate to accept whatever bullshit Claude generates at face value, with terrible consequences.
In contrast, what harm do those detractors cause? They don't generate as much code per hour?
xvector 24 hours ago [-]
By that logic we should all live in air-filtered bubbles. Anyone denying this is causing harm. After all, people might die if you let them out of their air-filtered bubble!
The "harm" (if you can call it that) is clear, detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced our amazing QoL today compared to any historical lens.)
dijit 24 hours ago [-]
that’s a stretch and taking a measured approach to change is valid
arcxi 24 hours ago [-]
> detractors slow the pace of progress
Considering our climate, political and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.
slopinthebag 24 hours ago [-]
That's a pretty poor straw man - the issue is the amount of harm caused, not that there is a potential for some minuscule amount.
Also we need detractors because if we race into any technological advance too quickly we may cause unnecessary harm. Not all progress is without harms, and we need to be responsible about implementing it as risk-free as possible.
jarjoura 24 hours ago [-]
You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.
This is such a perfect example of the mania behind this rollout.
There's no way you can make the financials work here compared to Atlassian spending the same millions spent on AI infrastructure on building better search in Confluence instead. Confluence search SUCKS, but that's just a lack of focus (or resources) on building a more complex, more robust solution. It's a wiki.
Either way, making a more robust search is a one-time cost that benefits everyone. Instead, you're running a piece of software whose cost goes directly to Anthropic's bank account, to the data centers, and to the hyperscalers. Every single query must be re-run from scratch, costing your company a fortune that, if not managed properly, will come out of money that could be spent elsewhere.
lukevp 24 hours ago [-]
And what is using Confluence in the first place? Your MacBook Pro is faster than a supercomputer from 20 years ago. As we make compute cheaper, we find ways to use it that are less efficient in an absolute sense but more efficient for the end user. A graphical docs portal like Confluence is a hell of a lot easier to use than Emacs and SSH to edit plain text files on an 80-character terminal. But it uses thousands of times more compute.
It seems ridiculous right now because we don’t have hardware to accelerate the LLMs, but in 5 years this will be trivial to run.
jarjoura 21 hours ago [-]
I'm confused by your analogy. A wiki server is extremely efficient to run and can be hosted on a tiny little Raspberry Pi. A search engine can be optimized to provide results in near O(1) time. You can even pull up and read results on a very old computer. All of the concerns around cost and resource efficiency can be addressed; all of this is a solved problem.
Even with an LLM agent getting cheaper to run in the future, it's still fundamentally non-deterministic so the ongoing cost for a single exploration query run can never get anywhere near as cheap as running a wiki with a proper search engine.
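To illustrate why deterministic wiki search is so cheap: a toy inverted index, sketched below with made-up page titles and text, costs roughly one hash lookup per query term after a one-time build, and the same query always returns the same result.

```python
from collections import defaultdict

def build_index(pages):
    # One-time cost: map each word to the set of page titles containing it.
    index = defaultdict(set)
    for title, text in pages.items():
        for word in text.lower().split():
            index[word].add(title)
    return index

def search(index, query):
    # Per-query cost: one dict lookup per term, then set intersections.
    # Fully deterministic: identical queries always give identical results.
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

# Hypothetical wiki content for illustration.
pages = {
    "Deploy guide": "how to deploy the api service",
    "Onboarding": "welcome to the team wiki",
}
index = build_index(pages)
print(search(index, "deploy api"))  # {'Deploy guide'}
```

A real engine adds ranking, stemming, and phrase queries on top, but the cost model stays the same: the expensive work happens once at index time, not per query, which is the contrast with re-running an LLM from scratch on every exploration.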
xboxnolifes 24 hours ago [-]
> You do realize though that using Claude Desktop to "search" through confluence is like paying a world class architect on the hour just to give you some tips on how to layout your small loft to maximize sunlight.
If I could pay a world class architect $1.50 to give me tips on how to maximize sunlight in my loft I would.
Would it be nice if confluence just had a robust search that had a one time cost and then benefited everyone thereafter? Sure, but that's not the current reality, and I do not have control over their actions. I can only control mine.
doug_durham 24 hours ago [-]
On Reddit there are two subreddits that are mirror images of each other, /accelerate and /betteroffline. The people in those subs go there for dopamine hits: one for how AI is going to transform their lives and lead to a work-free future, the other for how AI is worthless and how everyone (except them) is being fooled. They are the same people with opposite views, and the people in either sub don't recognize this.
jackie293746 1 days ago [-]
Claude Opus 4.6 regularly makes up shit and hallucinates. I'm not a detractor by any means but "exceptionally rare" is fantasyland.
thrawa8387336 1 days ago [-]
Can vouch for this, plus, when it does work, stuff can take forever. Then, if I let it unsupervised, higher risk of doing the wrong thing. If I supervise it, then I become agent nanny.
surgical_fire 24 hours ago [-]
I have been experiencing it too.
I honestly am finding Codex considerably better, as much as I despise OpenAI.
lovasoa 24 hours ago [-]
I use the latest codex with gpt5.4 and Claude opus every day. they hallucinate every day. If you think they don't, you are probably being gaslighted by the models.
bigstrat2003 24 hours ago [-]
> a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then.
On the contrary. I update my opinion all the time, but every time I try the latest LLM it still sucks just as much. That is why it sounds like my opinion hasn't changed.
Forgeties79 24 hours ago [-]
This is going to sound flippant, but truly, I imagine most people find the group that disagrees with their take annoying as well.
SpicyLemonZest 23 hours ago [-]
I personally believe that LLMs have advanced immeasurably since ChatGPT came out, which was itself a world-historical event. I use AI daily in ways that enhance my productivity.
I say all of that to establish that I'm not a reflexive critic when I tell you, hallucinations are absolutely not exceptionally rare now. On multiple occasions this week (and it's only Tuesday!) I've had to disprove a LLM hallucination at work. They're just not as fun to talk about anymore, both because they're no longer new and because straightforward guardrails are effective at blocking the funny ones.
surgical_fire 1 days ago [-]
The detractors are a lot less numerous and certainly a lot less preachy than the ones on the hype train.
AI is alright. It's moderately useful, in certain contexts it speeds me up a lot, in other contexts not so much.
I also think that the economics of it make no sense and that it is, generally, a destructive technology. But it's not up to me to fix anything, I just try to keep on top of best practices while I need to pay bills.
The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.
heavyset_go 23 hours ago [-]
> The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.
We're in "too big to fail" territory, if we handled the recession we were heading towards/in years ago, instead of letting AI hype distract and redirect massive amounts of investment, attention and labor from elsewhere, we might have been in a better position.
jarjoura 24 hours ago [-]
On the flip side, if all this slop is floating around, and AI services do become untenable, think of all the immediate jobs that will open up to fix and maintain all the slop that's being thrown around right now. The millions of dollars of contracts spent to use these LLMs will be redirected back to hiring.
Though, my cynical take is that the investor class seemed dead-set on forcing us all to weave LLMs deep into our corporate infrastructures in a way that I'm not too sure it will ever "disappear" now. It'll cost just as much to detangle it as it was to adopt it.
teaearlgraycold 24 hours ago [-]
> Hallucinations are exceptionally rare now
The way we talk about "hallucinations" is extremely unproductive. Everything an LLM outputs is a hallucination. Just like how human perception is hallucination. These days I pretty much only hear this word come up among people that are ignorant of how LLMs work or what they're used for.
I've been asked why LLMs hallucinate. As if omniscient computer programs are some achievable goal and we just need to hammer out a few kinks to make our current crop of english-speaking computers perfect.
01100011 1 days ago [-]
It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues. If not, well we've accelerated a lot of our worst problems(global warming, big tech, wealth inequality, surveillance state, post-truth culture, etc).
gastonf 24 hours ago [-]
> If we get computers to think for us, we can solve a lot of our most pressing issues
If AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?
JoshTriplett 1 days ago [-]
> It's a hail mary dash towards AGI. If we get computers to think for us, we can solve a lot of our most pressing issues.
All but one of them simultaneously, in fact. The one being left out: wanting to keep existing.
xvector 24 hours ago [-]
What are you talking about? AGI is practically a prerequisite for transhumanism, and, well, not dying.
If you want to "keep existing" AGI happening is probably your only hope.
JoshTriplett 23 hours ago [-]
Aligned AGI, yes. Unaligned AGI is a fast way to die.
If you want to keep existing, slow down, make sure AGI is aligned first, and go into cryo if necessary.
If you don't want to keep existing, that doesn't mean you get to risk the rest of us.
slopinthebag 24 hours ago [-]
I highly doubt OP was talking about immortality
yladiz 24 hours ago [-]
This sounds just like the idea that quantum computing will solve a lot of computational issues, which we know isn’t true. Why would AGI be any different?
idle_zealot 24 hours ago [-]
> If we get computers to think for us, we can solve a lot of our most pressing issues
How, exactly, does more and better tech help with the fundamentally sociological issues of power distribution, wealth inequality, surveillance, etc? Are you operating on the assumption that a machine superintelligence will ignore the selfish orders of whoever makes it and immediately work to establish post-scarcity luxury space communism?
dvt 1 days ago [-]
> incredible amounts of ... energy
So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure[1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.
Those links are about air pollution, not carbon emissions. You're engaging in some political posturing of your own.
dvt 23 hours ago [-]
Why are you lying? From literally the first paragraph of the CFR article:
> China is the world’s largest source of carbon emissions, and the air quality of many of its major cities fails to meet international health standards.
triceratops 21 hours ago [-]
But the focus of the article is on China's air quality.
And even though China emits more carbon annually than the US today, the US and Europe are still ahead in cumulative emissions: https://ourworldindata.org/grapher/cumulative-co2-emissions-.... Cumulative emissions are the carbon that's already in our atmosphere and causing heating today. If you want to apportion "blame" for climate change, then the US is 25% responsible, Europe is 30% responsible, and China is 14% responsible as of 2023. And India is only 3.6% responsible.
China's high emissions today power a manufacturing industry that has made cheap decarbonization via solar and batteries a realistic prospect. That's a much better use of their current emissions compared to what the developed countries do with theirs.
defrost 21 hours ago [-]
China has a large population and does the dirty work of manufacturing for much of the rest of the entire world.
China has done more for renewable energy solutions than any other country, and its per capita consumption patterns are lower than those of many G20 countries.
In a fair representation of the data, China's high carbon dioxide output should be assigned to its source: the people across the globe with high personal consumption who have offshored their industry to China.
runarberg 1 days ago [-]
Interesting that you accuse your parent of political posturing at the end of your post, which itself contains plenty of political posturing.
runarberg 1 days ago [-]
1% of worldwide energy expenditure is massive, incredible amounts of energy in fact.
wrqvrwvq 22 hours ago [-]
climate change is a hoax, but it's also disingenuous to pretend like ai delivers even an infinitesimal amount of the value of either agriculture or mining. Global population approaches zero without either of those things and if you deleted ai, no one would ever notice.
nisten 24 hours ago [-]
In 2-3 decades, 30% of the world population will be over 60 years old (~3 BILLION seniors). We don't have an economic model for that, nor does Gen Z want to all be personal support workers while paying rent.
Nvidia only makes 6 million data center GPUs a year. Huawei makes 900k. We need 10 to 100x more to automate enough just to hold civilization together. Amazon built data centers with near-zero water use, but they used 35% more electricity overall. So the problem can be solved; however, we need to move out of the whole scarcity mentality if we're going to actually make the planet nice.
sarchertech 23 hours ago [-]
> 30%
That's not accurate. The estimate is about 2 billion in 25 years.
We also have models for how that works at a country level because we have countries that have far exceeded that.
And the vast majority of 60 year olds are still self sufficient and economically productive.
Average global retirement age is around 65 and in most countries it’s creeping towards 70. And percent of world population over 70 looks much more manageable over the time span we can realistically model.
oulipo2 1 days ago [-]
[flagged]
emp17344 1 days ago [-]
No, it’s… fine. Useful in a limited capacity. Not the machine god, but not machine Satan either. The reality is kind of boring.
vablings 1 days ago [-]
This summarizes mostly how I feel about it. It's a tool, like every other tool we've advanced since the beginning of human civilization.
Machine tools replaced blacksmiths
CNC machines replaced manual machines.
Robots replaced CNC machine tenders
CAD replaced draftsman (and also pushed that job onto engineers (grr))
P&P robots replaced human production lines.
The steam train replaced the horse and cart
This is a tale as old as time itself
datsci_est_2015 24 hours ago [-]
What do LLMs replace, pray tell? It's more like moving from a screwdriver to a drill than replacing the carpenter altogether.
Also note that there are inventions that may “replace” some part of a process, but actually induce a greater demand for labor in that process. Take the cotton gin, for example, which exploded the number of slaves required to pick cotton.
cindyllm 23 hours ago [-]
[dead]
kerblang 1 days ago [-]
Those were deterministic rather than stochastic
1 days ago [-]
bigstrat2003 22 hours ago [-]
Exactly. People love to compare LLMs to power tools for carpenters and smiths. But if my miter saw had a 20% chance to produce cuts at a 45 degree angle when I have it set for 90, I would throw it out so fast I would leave Looney Tunes style tracks. A tool which only sometimes does its job is worse than no tool at all.
vablings 5 hours ago [-]
To be rather pedantic, your miter saw probably doesn't cut exactly 90 degrees, especially if you reset it. LLMs are low-accuracy for sure, but so are humans. I'm not saying AI is going to replace us all entirely; my broader point is that these tools will be another tool that changes the market share of jobs.
kerblang 3 hours ago [-]
The low accuracy of human guesstimation is why we use deterministic tools, not so-called tools that imitate our ability (or worse).
Tools are not replacements for people! Tools are enhancements.
AI is an attempt to replace people with something unhuman.
bitwize 24 hours ago [-]
This isn't even our first AI hype cycle. That happened in the late 70s and 80s. Every lab and agency needed Lisp machines to teach computers how to identify Russian missiles, or targets. The "GOFAI" techniques did not live up to expectations, but they settled into niches where they were tremendously useful, and life went on. The same will happen with today's matmul-as-a-service AI.
steve_adams_86 1 days ago [-]
I don't see the threat from AI as capitalist at all, but more so feudalist. I mean, if things go in the direction of the worst-case scenario. It seems like the power potential transcends the problems of capitalism entirely.
But for now it's strictly hypothetical. Nothing I'm doing with AI matters enough to really make any statements about a broader scale in my field, let alone in entire economies.
heavyset_go 1 days ago [-]
Capitalism is just feudalism that works for the merchant class
plagiarist 24 hours ago [-]
Capitalism is feudalism but with raw generational wealth instead of generational wealth with divine right characteristics.
steve_adams_86 23 hours ago [-]
I see some overlap, but I think it's more complex than that. If we conflate the two so easily they lose meaning. Certainly, some people have that experience under capitalism. I think there are systemic failures which lead to life experiences that are probably not all that different from some peoples' experiences in feudal society, both at the top and bottom of the hierarchy.
The more I think about it though, I'm not sure feudalism is the right analogy. Serfs had a purpose and were depended upon. In a society where AGI is in the hands of a few, it seems reasonable to believe that there wouldn't be a need for serfs at all. Labour would become utterly irrelevant. You'd have no lord to be bound to. You'd be unnecessary.
I imagine the transition there would be some brutal form of capitalism, but the destination would not be feudalism. I don't think we have a historical analog for that hypothetical destination.
runarberg 5 hours ago [-]
I see your point; in fact, I am against the term neo-colonialism for this exact reason. Neo-colonialism is bad, but next to the horrors of actual colonialism, it is a walk in the park. And naming economic policies which artificially increase the dependency of a foreign country on your economy after a policy of mass extraction, neglect, violence, and even genocide really removes the horrors from the latter.
However it has been over 500 years since feudalism. People today are still very much living with the consequences of colonialism, some people are in fact still living under colonial rule (notably in Western Sahara and Palestine). The consequences of feudalism have long passed. I think it is fine actually to conflate the horrors of capitalism with the horrors of feudalism. 500 years ought to be long enough.
vrganj 1 days ago [-]
If we wanna go full-on Marxist analysis it is an attempt of the capitalist class to finally rid themselves of their dependence on labor and their pesky demands like sick leave and fair wages.
Through that analysis, one can also explain why the managerial caste is so obsessed with it - it is nothing less than an ideological device. One can also see this in the actual deification happening in some VC cycles and their belief in AGI as some sort of capitalist savior figure.
I see the point and don't disagree with it, but I find that framing is not the most compelling to the audience here...
mattgreenrocks 24 hours ago [-]
Yeah. Oftentimes get crickets here when I talk along those lines. Can't tell if apathy, learned helplessness, or obliviousness. Regardless, devs seem like an extremely docile labor group based on how they react to this and other economic pressures.
plagiarist 24 hours ago [-]
We will all be shocked at the rug pull after it has finished training on all our high-quality feedback for code it has written.
Human-Cabbage 24 hours ago [-]
This is correct at the firm level and breaks down at the aggregate level, which is where it gets interesting.
At the firm level, automating away labor costs is obviously rational. But capital in aggregate can't actually rid itself of labor, since labor is where surplus value comes from. A fully automated economy would be insanely productive and generate basically no profit. So the capitalist class pursuing this logic collectively is, without knowing it, pursuing the dissolution of the system that makes them the capitalist class.
You don't have to buy any of that to notice the more immediate mechanism though: AI doesn't need to actually replace workers to discipline them. The credible threat of replacement is enough to suppress wages, justify restructuring, and extract more from whoever's left. That's already happening and requires no AGI.
brookst 1 days ago [-]
AI is more likely to destroy capitalism than it is to increase inequality.
Ten years ago, what would it have cost you to build a Jira clone / competitor? Today one person can do it in a week, at least for the core tech.
In a year, only the very largest companies will pay for that kind of infrastructure tooling.
We’ve just started seeing the democratization of software and the capitalists are terrified.
plagiarist 23 hours ago [-]
I just don't know how to explain that you won't be destroying capitalism with AI. You have a subscription.
brookst 51 minutes ago [-]
I don’t know how to explain that the difference between barter systems, stored value, and truly communal property has nothing to do with capitalism.
People pay for things in all economic models. It’s bizarre to think that means everything is capitalism.
paulsutter 1 days ago [-]
How did HN become this kind of website?
JoshTriplett 24 hours ago [-]
Because AI is attacking, plagiarizing, competing with, and destroying the most common industry of people here on HN, so suddenly it mattered more to people who were previously unaffected.
Some people have been concerned with this kind of politics all along. Some people are realizing they should be now, because of AI. And that's okay; both groups can still work together.
emp17344 24 hours ago [-]
The parent comment is a pretty measured take. What’s your problem with it?
iwontberude 1 days ago [-]
I went to a conference and people were suggesting nationalizing AI companies so it's basically everywhere.
tartoran 23 hours ago [-]
Same way we turned internet into a public utility? Wait, did we do that?
asd198 1 days ago [-]
I'm kind of bored by AI promotion posts that pretend to be about something else.
PixelForg 19 hours ago [-]
I'm so conflicted about it. My main worry is that I go all in, and once I'm used to it, the prices increase drastically and I can't work without it.
I'm also learning art, and I'll never use AI there, so I thought that since I have less time for hobby programming I could just use AI for that, but then I come back to the concern I mentioned above. Plus, I can't proudly share anything I made with it either, because I wouldn't have done much of the work at all.
I'm also feeling burnt out about web dev in general and doing the same thing during my free time just feels like more work to my brain. I wish I could find something interesting to do, and if I don't I'll quit programming in free time for good.
cowboylowrez 11 hours ago [-]
Yeah, the models are great and all, but will they continue to be great? Anthropic has some real miserable-looking finances, and as a counterexample, Google's Gemini is starting to get stingy with its tokens or whatever (I get this from complaints in Reddit's Antigravity threads). I suspect Google's the one who's homing in on the actual costs, while Anthropic and others are still in startup launch mode with all the big investments.
arkt8 20 hours ago [-]
Most of the boredom is about people thinking AI is really intelligent. Thinking that it is magic. With magic come ghosts instead of bugs. Whoever was lazy will become even lazier. Engineers will keep building bridges... and also software.
As shown in "Normal Accidents", the strength is as high as the weaknesses, and in any complex system this is even more of a problem. A catastrophic event is still to happen with AI, as it has happened in basically every complex system. Those occurred with trained people who didn't believe in magic or indulge in laziness... so the scenario is even worse for AI.
Yes, I'm bored of people who believe in magic, and of the ghosts that are emerging and are yet to be seen.
axi0m 24 hours ago [-]
The worst part in all that noise: ask your customers what they need; they will tell you "AI features". No matter what it is, or how it compares to more traditional approaches when it comes to solving their pains. These two letters have gone beyond obsession.
sbinnee 21 hours ago [-]
I like the analogy to woodwork and the hammer. It fits perfectly with what happens when people don't pay enough attention to the end result. They are not showing the actual product because it is not as shiny as their new agentic hammer.
brandensilva 19 hours ago [-]
I think people are having a hard time figuring out use cases so yeah the AI is the most exciting part.
MisterTea 1 days ago [-]
It is getting stuffy in the tech sector lately with all these AI postings but it's still a very new and very disruptive technology.
I also have to say that I don't use AI in my personal or professional life. And that is simply because I haven't felt any need to use it.
obsidianbases1 1 days ago [-]
No, but I'm definitely tired of the "influencer" takes. You would think this AI thing had been all but figured out, when really, even with the biggest, openest claws, we are still barely scratching the surface of a new era of human-computer interaction.
jakelsaunders94 24 hours ago [-]
Agreed, LinkedIn is a cesspit of this. But then it always has been so nothing new there.
dazzaji 14 hours ago [-]
Nope, not bored at all. I'm clicking on most HN AI threads out of sheer curiosity, eager to learn new techniques and see what folks are thinking or building. Given its transformative ripple effects, AI feels like the single biggest shift reshaping the economy and society right now. Kind of the opposite of boring.
mlhpdx 1 days ago [-]
Yes — talking and hearing/reading about it. I don't fault folks for being excited when first getting into it, but it's rare to hear anything new said. And what is new is increasingly niche and unlikely to have any application to what I do.
paxys 1 days ago [-]
So then...don't talk about it? Do your job. Go home. Spend time with family. Find some non-tech hobbies. The solution isn't to change the world but to break your social media addiction (and yes, HN/Linkedin/X are included).
tomjakubowski 24 hours ago [-]
I'm becoming more bullish on AI, but it's still frustrating how much of the metaphorical oxygen it's taking. I feel like I'm hearing less about developments in software tech outside of AI fields.
ternera 1 days ago [-]
It's just a buzzword that draws more attention and more clicks. I also use AI for some projects, but it can be annoying when companies try to incorporate it in places it doesn't belong.
Insanity 23 hours ago [-]
Yes, and I'm increasingly bored of talking about 'the internet'. AI definitely plays a part in that; I don't want to deal with randomly generated articles or content. But even regular content, like YouTube Shorts, is unhealthy for the human brain.
I'm currently reading Non-things by Byung-Chul Han, which is an interesting exploration of the internet's impact on humanity/humans. Haven't finished it yet, but enjoying it so far.
thisoneisreal 21 hours ago [-]
Hadn't heard of this book. Picked up a sample based on your comment and I'm really enjoying it, thanks.
ppqqrr 1 days ago [-]
yes. it's like a giant finger pointing at the moon, and everyone's talking about the finger.
dirk94018 22 hours ago [-]
Management is cargo culting the tooling without grasping what AI is actually good at. Because they don't look at it. Meanwhile smart blue collar guys are only limited by their willingness to ask questions. Because they do. It's the difference between the performance of work and work. The most fascinating aspect about AI may just be what it tells us about people, work, and society.
kelnos 22 hours ago [-]
God yes. I'm of the same mindset: I use it often, think it's great and revolutionary, but... why do we need to talk about it so much?
My technical interests are varied, and it's so boring to come to HN and see that a third (or more) of the front page is about AI.
Enough already. Let's talk about other things! And yes, I know, I should be a part of the solution and submit more articles.
rarisma 23 hours ago [-]
I love AI, think it's super useful, use Claude daily, and follow the industry closely, but I would love to go a day without hearing about it.
agentictrustkit 20 hours ago [-]
Initially I really had a bad taste in my mouth. It had forced me to close a business (video editing). Recently it's gone a different direction, so I would say the "interest" part got a resurgence for me. I'm seeing all of these tools, people, and systems promise "can do this..." and "can do that...", but because I have a background in trust law and trust creation, I've looked at things differently.
I think the "can do" part gets boring, but now I'm paralleling this to trust relationships and fiduciary responsibilities. What I mean is that we can not only instruct but then put a framework around an agent, much like we do a trustee, where they are compelled to act in the best interests of the beneficiaries (the human that created them, in this case).
Anyway it's got me thinking in a different way.
remich 20 hours ago [-]
Fiduciary duty but for AI, interesting. I think there's some potential there, though of course you'll end up confronting the classic sci-fi trope of "what if the system judges what's best for the user in a way that is unexpected / harmful"? But, solve that with strong guardrails and/or scoping and you might have something.
leontrolski 23 hours ago [-]
Yes, AI or no AI, tell me about something actually interesting that you're working on.
Currently it feels a bit like everyone is talking about what new editor they're using. I don't care about that type of developer tooling very much. AI isn't coming up with some exciting new database, type system, etc etc.
"Look at how I'm able to web dev x% faster because of LLMs" is boring.
cdrnsf 1 days ago [-]
It's ruined the sparkle emoji for everyone.
nitwit005 23 hours ago [-]
Many of AI articles that I see on Hacker News just don't seem particularly interesting, and the comments end up heavily talking about AI in general, rather than the article.
> seems to have devolved into three different people’s (almost identical) Claude code workflow
I do feel like I've seen a number of those articles.
1 days ago [-]
tsoukase 13 hours ago [-]
At first I read "Is anybody else bored of talking TO AI?" And now I realize I am bored of both.
magic_hamster 20 hours ago [-]
I see AI as a new, unreliable resource that I can try and tame with good software practices. It's an incredibly fun challenge and there's a lot to learn.
thrill 1 days ago [-]
only of people constantly complaining about it like they have some special insight
Fr0styMatt88 19 hours ago [-]
I’m bored of people on YouTube talking about it, that’s for sure.
Everything devolves from "cool, that was a nice single video" to "here's my schtick... AGAIN".
EmilySmith23 18 hours ago [-]
Yes, very much; it's kind of crazy too. Everything right now revolves around AI, and I'm really thinking about taking a break.
ogou 1 days ago [-]
Everything is fandom now. I grew up around people obsessed with Nascar and NFL. So much of the discourse sounds exactly the same. It beats listening to people talk about their dogs though.
monknomo 1 days ago [-]
I deeply wish to hear about other tech trends; I get enough of use more ai, do more with less, and ship faster at work. I'd rather hear about new tools and techniques here
bmau5 1 days ago [-]
The debate around "AGI" is the thing that gets me. People just moving goalposts and arbitrarily applying their own standards makes for a lot of wheel spinning
emp17344 1 days ago [-]
AI enthusiasts love to misuse and abuse the goalpost metaphor. It’s practically always an attempt to silence opponents.
bmau5 1 days ago [-]
It's easily abused by both sides of the debate because there's no strict widely accepted definition. I find it tiring because it's a largely inconsequential benchmark anyways (outside of Microsoft-OpenAI contract disputes).
emp17344 22 hours ago [-]
No, it’s abused almost solely by AI boosters. This isn’t a “both sides” situation.
nicbou 1 days ago [-]
I'm not.
At least I'm not tired of talking about how it's killing websites and filling everything with spam. I have spent most of a decade building a useful resource, and Google AI overviews has killed my traffic. It killed everyone's traffic. This thing gave me purpose, and I'm watching AI slowly strangle it.
I mourn the death of the independent web, and it frightens me that this is still the happy stage. We haven't yet felt the effect of stiffing content creators, and the LLM tools haven't yet begun to enshittify.
I am tired of discussions about agentic coding, but I would feel a lot better if we acknowledged all the harm being caused. Big tech went all in on this, stealing everything, putting everyone out of work, using up all resources with no regards for consequences, and they threaten to kill the economy if we don't let them have their way.
I feel like we are heading for a much worse place as a society, and all we can talk about is how to 10x our bullshit jobs, because we're afraid of falling behind.
tartoran 23 hours ago [-]
AI isn't doing any of that, it's the companies that train their AI's or push their slop that are doing all the harm. We should always keep in mind who the villain is.
nicbou 14 hours ago [-]
This is the "guns don't kill people" argument, isn't it? Semantics are not that important when people say "we have a gun problem".
One way or the other, tech companies have created a weapon and they are using it against us. Instead of stopping them, we're all trying to point it at someone else.
1 days ago [-]
1 days ago [-]
ekropotin 1 days ago [-]
No, because what most people miss about AI is that…
I’m just kidding. LinkedIn feed became so unbearable, that I had to install an extension to turn it off.
scuff3d 1 days ago [-]
I've been sick of it since 2022.
brightball 22 hours ago [-]
A little, yes. It’s starting to feel like “blockchain” from a topic exhaustion standpoint, just more useful.
jwilliams 1 days ago [-]
It's the most transformative technology I've clocked in my lifetime (and that includes home computers and the Internet).
Large organizations are making major decisions on the basis of it. Startups new and old will live and die by the shift that it's creating (is SaaS dead? Well investors will make it so). Mass engineering layoffs could be inevitable.
Sure. I vibe coded a thing is getting pretty tired. The rest? If anything we're not talking about it enough.
anonzzzies 19 hours ago [-]
I can see why people would be, but it is amazing tech when used right. As others here said, if you are an experienced developer, you can get a lot of mileage out of it. If you are not a dev, or a bad dev, you might struggle because of overestimating what it can do for you.
The most frustrating thing about AI, I find, is that it is trying to replace the things I like: writing, art, programming, while leaving me with the things I absolutely hate, like testing and chores. I do like reading code, so that's not a big issue; I spent most of the day doing that before AI. However, being a full-time QA was always my nightmare, yet here we are. AI sucks at it for anything more than a trivial frontend, and for backend it's not that good either (I should write the tests, or rather tell the AI in detail what to test for, otherwise in many cases it will just positively test what it wrote, not what the spec says it had to write).
But no, I like talking about AI, just not so much about slop, trivial usage (Show HN: here is a SaaS to turn you into a rabbit!), or hyperbole (our jobs!). (Although I do believe it is the end of code; like I said, I read and review code all day but have not written much for the past six months while working on complex, non-trivial SaaS projects. I am with antirez: it is automatic coding, not vibe coding, for me.)
kkrish83 17 hours ago [-]
It's not just tech; I think a lot of the internet is just about one topic. It's a very fascinating topic, but it's taken over the zeitgeist, and the world is becoming a pretty boring place.
jFriedensreich 22 hours ago [-]
Honestly, I am really surprised not to be more bored of it. Yet it seems to be existential enough, and diversifying and developing fast enough, to stay interesting. If the tooling gets boring, there will be some drama; if the corporate side gets boring, there will be some local-inference maxxer; if all of that is boring, there is some new alignment topic. What is even more surprising is that nobody around me seems to be tired of talking about it either. If I start talking about it, especially to non-devs, I often pause to check if they can't hear it anymore, sometimes even ask explicitly, but so far no one has seemed to mind; some even embrace it.
htx80nerd 1 days ago [-]
I'm bored of the everyday Claude spam. I've used Claude extensively and it was very sub par.
d675 23 hours ago [-]
what did you try to make? how?
mondrian 1 days ago [-]
Let's get back to filling the front page with Web3, DeFi, NFTs. Oh the good ol' days.
altairprime 1 days ago [-]
It’s been almost two days since someone posted /e/OS, so those good ol’ days aren’t entirely gone :)
TheOrange 1 days ago [-]
Yes!
Sounds very much like this blog I read too… he laughs at AI in his workplace a lot
Www.sometimesworking.com
cmollis 1 days ago [-]
yes, so bored. yada yada.. i've been 'obsolete' for 36 years and counting.
kabir_forest 18 hours ago [-]
Using AI for writing: now I first try to draft things out of my own mind and then ask it to find mistakes and red flags.
Asking it to draft was weakening my own skills.
cesarvarela 20 hours ago [-]
As someone who has been in tech for 20+ years, I can say this is the most exciting it has ever been.
I've been spamming some auto research loops, and it is so addictive. Think about how many of humanity's problems will be solved because of this. Of course, it will also disappoint, like, we are still waiting for flying cars, but man, this is a unique moment in history.
zombot 6 hours ago [-]
I sure am bored of talking about AI. Always the same points pro and con being rehashed, it's so tiring. But what do you know, all this comment thread does is to continue talking about AI. We who are bored of it seem to be a tiny minority.
rbanffy 1 days ago [-]
I am. To keep talking about it I might just deploy a chatbot to do that for me.
IshKebab 1 days ago [-]
I wish there was an option to hide AI stories on HN, and AI-related repos on Github's trending page.
You could use AI to do it! Fight fire with fire.
I'm neutral on AI - so far it seems useful but flawed. But I don't want to hear about it constantly.
mervz 1 days ago [-]
100% this; GitHub is littered with a bunch of AI shit projects that pollute the trending page.
dsign 15 hours ago [-]
Yes. The problem is the level of the conversations: "AI is good for this. It's not good. It gaslights." It doesn't get beyond that. If we were talking about the actual layers in the models and how they interconnect, at least there would be quite a bit of variety in the conversations. Also, conversations about how it's all about to end tend to be fun if the interlocutors are creative enough.
But the sooner we get to the part of history with the chromy-killer-robots and people-sabotaging-datacenters-and-foundries, the sooner we will get some meaningful excitement.
germandiago 7 hours ago [-]
Exhausted.
Lerc 24 hours ago [-]
Never tired of talking about AI. There are so many fascinating aspects to explore and papers delivering new ideas. It's a bit tiring keeping up with the new stuff but talking about what we've found is one of the things that makes it easier to keep up.
I'm somewhat tired of seeing the same rehashed claims of future ability, non-ability, profit, loss.
I actually like talking about the implications, future risks and challenges of AI. I have made submissions on ways AI should be regulated to benefit society. The problem is the assumption of what is happening and what will happen.
Too many people seem to enter the conversation feeling that the absence of doubt is the same thing as being informed.
And especially people making claims based on premises that they seem to believe will become true if they build big enough towers on them.
The number one thing that bothers me in all this, is people assuming the contents of the minds of others.
I find the pathologising of Sam Altman to be the most egregious form of this. It is one thing to disagree with someone's decisions, another thing to disagree with their stated opinions, but to decide upon a person's character based upon what you believe they are thinking in their private thoughts is simply projection.
I know this is an opinion of little worth to many, but my impression of Sam Altman is just a person who has different perspectives to me. The capitalist tech world he lives in would inevitably shape different values to me. What I have seen of him is consistent with a sincere expression of values. I can accept that a person might do something different to what I would, even the opposite of what I want while believing that they can be doing so for reasons that seem to be morally the right thing to do.
This also happened with cryptocurrency. Crypto advocates believe that it is a good thing for the world. Too many consider those who believe crypto could benefit society to be evil. There is a difference between being wrong and being evil. No matter how certain you are, you can still be wrong; in fact, beyond a point I would say increased certainty indicates a higher likelihood of being wrong.
So I'm happy to talk about AI. I have plenty to learn. I wonder if others went in with the goal to learn whether they would find it less tiring.
gverrilla 21 hours ago [-]
How does such a shallow post get so much attention? Is it a circle meeting of engineers having a crisis? If so, I'm happy to accelerate it by posting here and making the numbers go up.
vincentabolarin 1 days ago [-]
Management spins up something on Lovable and believes that building any software is as easy as typing a few prompts.
It's worse when there's a colleague of yours encouraging that by using AI blindly, piling up technical debt just to move at the pace that Management expects after signing you all up on some AI tool.
At the end of the day, everyone is talking about AI. For AI or against AI, it doesn't really matter.
beej71 1 days ago [-]
To answer the OP's question, apparently not! :)
amelius 24 hours ago [-]
Of course talking about AI is boring.
The analogy is someone from the 19th century talking about their slaves all day which is of course nonsense because they had other things to talk about.
lwansbrough 1 days ago [-]
It seems some people in this thread are not :)
zeroonetwothree 19 hours ago [-]
It's all we can talk about at work and I'm really tired of it. Yes, I use Claude every day to write 98% of my code, but it's fine we can talk about other things. We don't have to talk about the tools we are using constantly. And yes no one knows exactly what will happen, how much better it will get, will we lose our jobs, and other concerns. All of that is something you could worry about, but there's nothing to be done in continuously discussing it.
CrzyLngPwd 24 hours ago [-]
Yup.
Bored of hearing about it, bored of reading about it.
I love using these LLM tools, but honestly, it feels like every man and his dog has something to say about it, and is angling to make a quick buck or two from it.
And the slop, oh my goodness, it's never-ending on every site and service.
SunshineTheCat 1 day ago [-]
I definitely get the comment about HN and seeing a billion posts about OpenClaw or Claude, or yet another post on an industry being disrupted by AI.
Tack on the increasing amount of political stuff on here as well, and it just makes it a less and less interesting place to visit.
Don't agree with the angry mob on the political stuff especially and you get downvoted/flagged into oblivion.
It's becoming just another echo chamber looking to have viewpoints confirmed, in yet another one of the disappearing places online that foster any level of intellectual curiosity.
jimjimjim 1 day ago [-]
It feels like during the previous hype cycle of bitcoins, blockchains and NFTs. People are trying to find uses for new technologies but it seems like a lot of the conversations come from people (at this point I guess it's still people?) trying to increase the hype. Maybe they are trying to be thought leaders or maybe they are trying to boost some stock valuations.
mtndew4brkfst 24 hours ago [-]
The amount of questions I fielded about web3, coins, ledgers, etc as an IC speaking with customers or internal leadership was around an order of magnitude lower, and well-known brands weren't trying to sell me any of them. It was much rarer for it to get shoved into a product it wasn't helpful for, too.
Never thought I'd feel nostalgic about that era...
naikrovek 6 hours ago [-]
i've been tired of hearing about AI for about 10 years now. and throughout that time, it's only gotten worse and worse and worse.
gojomo 24 hours ago [-]
You may be bored of AI, but because AI is not yet bored of us, turning away may be dangerous.
tartoran 23 hours ago [-]
AI is neither bored nor engaged with us. It's just a technology that we can use or abuse. I doubt it'll become conscious anytime soon, though the desire to invent God or to deceive others will push us to invent many contraptions to make it appear conscious.
HoldOnAMinute 1 day ago [-]
Anything to distract from the real war, billionaires vs everyone else.
sunaookami 13 hours ago [-]
No.
rednafi 22 hours ago [-]
I'm not bored of the technology per se, but of the people around it. The yappers, doomers, and shills are insufferable.
peterlk 1 day ago [-]
Modern AI is a miracle. The math that makes it work is beautiful and really impressive. For example, if you wanted to map all knowledge on earth, how would you do it? AI answers that question by building a high-dimensional vector space of embeddings, and traversing that space moves you through a topology of basically every concept that humans have.
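A toy sketch of that traversal idea, with invented three-dimensional vectors (real embedding spaces have hundreds or thousands of learned dimensions; these numbers are made up purely for illustration):

```python
import math

# Hypothetical 3-D "embeddings" -- hand-picked for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: how close two directions in the space are."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "Traversing the topology": start at "king", step away from "man",
# step toward "woman", then find the nearest concept to where we land.
target = [k - m + w for k, m, w in zip(embeddings["king"],
                                       embeddings["man"],
                                       embeddings["woman"])]
nearest = max(embeddings, key=lambda word: cosine(embeddings[word], target))
print(nearest)  # -> queen
```

The well-known king - man + woman ≈ queen demo from word2vec is exactly this arithmetic, just run over learned vectors instead of hand-picked ones.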
Or another thought: why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%, but it’s still much better than what you might expect from a Markov model of n-grams.
Openclaw is only sort of interesting. How to vibe code your first product is uninteresting. Claims about productivity increase from model usage are speculative and uninteresting. Endless think pieces on the effects of AI slop are uninteresting. There’s a lot of hype and grift and bullshit that is downstream of this very interesting technology, and basically none of that is interesting. The cool parts are when you actually open the models up and try to figure out what’s going on.
So no, I’m not bored of talking about AI. I’m not sure I ever will be. My suspicion is that those who are bored of it aren’t digging deep enough. With that said, that will likely only be interesting to people who think math is fun and cool. On the whole, AI is unlikely to affect our lives in proportion to the ink spilled by influencers.
jakelsaunders94 24 hours ago [-]
This is a really interesting take, and maybe shows that I haven't been thorough enough with my reading. My guess is that the deep technical articles are few and far between, and the higher-level 'hot takes' are what fill the room. Do you have any recommendations for interesting places to start?
peterlk 23 hours ago [-]
My favorites are the micrograd series by Andrej Karpathy on youtube [0], and “Why Deep Learning Works Unreasonably Well” [1]
The greats on youtube are also worth watching: 3B1B, numberphile, etc.
> Why is it that a stochastic parrot can solve logic puzzles consistently and accurately?
peterlk 23 hours ago [-]
Attention is all you need…?
The short answer, as far as I’m aware, is that no one really knows. The longer answer is that we have a lot of partial answers that, in my mind, basically boil down to: model architectures draw a walk through the high dimensional vector space of concepts, and we’ve tuned them to land on the right answer. The fact that they do so consistently says something about how we encode logic in language and the effectiveness of these embedding/latent spaces.
bigstrat2003 22 hours ago [-]
> Or another thought; why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%...
It can't. As you say in the very next sentence. If it isn't solving any given puzzle with a 100% success rate, but randomly failing, then it isn't consistent.
doug_durham 24 hours ago [-]
Nope. It remains the most dynamic and impactful area in software today. I'm sure it will fade into common practice over the next few years and become less talked about. I find it infinitely more interesting than yet another article talking about the wonders/horrors of the Rust borrow checker.
YmiYugy 1 day ago [-]
I'm bloody sick of it, but more exhausted than bored.
My workflow, which was pretty stable for years, keeps changing massively on an almost monthly basis, and that means I'm already skipping the fads of the week.
What's more annoying is that it feels actually worth it and thus keeps me churning.
leephillips 18 hours ago [-]
I’m certainly tired of hearing about it. HN is inundated by repetitive, boring AI articles. This helps a bit I think: https://hn-ai.org/.
mirekrusin 1 day ago [-]
Talking about AI being boring is boring as hell for sure.
KPGv2 19 hours ago [-]
Yes. My Android feed's GitHub repos are always AI these days. HN is 50+% AI posts. And I just built a NAS that is about 50% more expensive than it would have been without AI.
I'm so exhausted by this and ready for the economic crash.
bdangubic 18 hours ago [-]
economic crash will make it even worse unfortunately
QubridAI 1 day ago [-]
Honestly, a bit but only because the hype cycle is louder than the genuinely interesting work.
dgemm 1 day ago [-]
Kinda tired of being inundated with low quality AI slop absolutely everywhere.
pjmlp 13 hours ago [-]
I can't wait for the bubble to pop; everything is getting tied to AI KPIs.
tonymet 21 hours ago [-]
Like many endeavors, the most vocal in favor or against aren't really doing much. The people actually succeeding with AI probably aren't interested in giving away their secrets.
AI is especially sensitive to this. Unlike coding, where giving away the secret sauce also makes you look smart, divulging AI secrets only demystifies you, revealing the shriveling man behind the Wizard's curtain.
So anyone boasting about AI is likely not doing anything useful with it.
Similar to finance tips, btw.
somelamer567 22 hours ago [-]
I remember how annoying Ray 'Nerd Rapture' Kurzweil was twenty years ago with his "Singularity" stuff, and quietly thankful that he isn't anywhere near as in-my-face today as he was back in the day.
As bad as the AI hype wave is now, I can't help but wonder if it could have been even worse.
nickphx 23 hours ago [-]
i am tired of the desperation of the hype machine.
If it's becoming just another thing in the toolbox, that means it's winning. Boring and useful tech stays around.
johnea 24 hours ago [-]
Yes!
There are other interesting things in the world today, and HN is overwhelmed with pretend intelligence.
Hype, detractors, ALL OF IT!
Maybe a separate web page or RSS feed could be created that is dedicated to the subject...
kgwxd 1 day ago [-]
Oh great, we're at the stage of constantly talking about it, AND talking about how we're sick of talking about it. Now every article will be as long as before + a prefix paragraph explaining how they know we're all sick of talking about it, but...
somewhereoutth 1 day ago [-]
My only hope is that it is such a disaster that it is effectively an extinction level event for this current technoscene (along the lines of the Permian–Triassic extinction event and others).
Then we can get back to the unglamorous, boring, thankless task of delivering business value to paying clients, and the public discourse will no longer be polluted by our inane witterings.
themafia 1 day ago [-]
You can call an engineer a "product manager" but that does not make them one.
korse 1 day ago [-]
Yepp.
luxuryballs 1 day ago [-]
“it’s all starting to feel a bit… routine” and that’s how I know it’s going to replace me if I stay an employee
jimmyjazz14 1 day ago [-]
I don't follow your logic, it was always either going to disappear or become routine much like any other tool you would use at work.
sciencejerk 24 hours ago [-]
I think they mean that "routine" work like AI agent prompting and config is repetitive, predictable, and somewhat thoughtless work. Human employees who perform repetitive, predictable, thoughtless work are easy to replace with AI.
Siah 1 day ago [-]
Not me
guywithahat 1 day ago [-]
I'm not sure if this is a joke, but the field is advancing so dramatically it's hard to stop talking about it. Every week at work I have to show a new AI feature to an executive, about how we can now write thousands of lines of code in minutes at a higher quality than the greatest engineers. This necessitates new tools and new purchases, as well as team and org shifts.
If you're reading this and your life hasn't been thrown into disarray, you're likely just behind the times. There are a lot of people who are deep in tech who still don't understand what agents and LLMs can do.
JohnFen 1 day ago [-]
> If you're reading this and your life hasn't been thrown into disarray you're likely just behind the times.
I'd love for discussions of the tech to stop with the genAI version of the cryptobro cry "have fun being poor". It's mildly insulting and adds literally nothing to the conversations.
(Not meaning to single you out, just using it as an example. This is a very common rhetorical problem with most of the evangelism.)
solenoid0937 22 hours ago [-]
The difference between "have fun being poor" and the AI craze is that if you have a shred of initiative you can actually do incredible things with AI right now.
The detractors are so bizarre to me. I think it's because I work at a big tech that has so thoroughly wired AI into everything we do, and the benefits are so undeniable and totally perspective changing, that it's like arguing with someone that thinks the sun revolves around the earth.
So if you aren't doing something cool with AI, it's probably because you aren't empowered to at your company, or because you simply aren't taking the initiative. Seems like a pretty even split on HN.
JohnFen 9 hours ago [-]
I wasn't commenting on whether or not the tech is useful. I was commenting on how such a rhetorical approach is counterproductive and doesn't offer any sort on insight.
guywithahat 6 hours ago [-]
I guess I don't see how your comment is useful. If LLMs/AI are not generating code for your team, you need to update your processes. Telling someone they can't run faster than a car isn't evangelism of cars, and it wouldn't be counterproductive to tell someone who works at a shipping company that they should be using vehicles to ship packages.
jimjimjim 1 day ago [-]
"higher quality than the greatest engineers". right...
And why do so many articles or comments take the general approach of 'It's great, and if you don't think it is, it's because you don't understand it'?
guywithahat 6 hours ago [-]
Because it's a tool which must be used properly. I've encountered senior engineers who, while great on their own, complain that AI isn't good at code gen. When I talk to them about it, they're using terrible free models and not putting in effort to understand how LLMs work. Agents can now write thousands of lines of code across large codebases, following specs closely, on levels that are simply impossible for teams of humans.
Humans simply cannot code as well as an LLM/agent in most cases. It's like fighting a bear: if you think you can beat an adult brown bear, you're probably wrong.
smartmic 1 day ago [-]
AI has become a commodity, for better or worse. And yes, we should treat it as such: especially, no more big ideas from (C-level) managers, please.
mlsu 24 hours ago [-]
Over the last couple of years I've realized how shitty and tiring it is to do anything at all on the computer. Reading something like Reddit was tiring before, because of spam, submarine advertising, etc. But it was still worth it because the signal to noise ratio was still there. Now? No way. Easily 50% of comments are AI generated.
I used to have this idea that if I built something cool it would be valuable to donate it to the world for free. But now increasingly I'd be just making a donation to the training data, and on top of this I'm in competition with AI slop. Most people won't tell the difference and won't care. The noise floor for doing absolutely anything collaboratively on the computer is now 10x higher than it was before, and I'm basically checked out at this point. Even HN is becoming tiring to read since I think around 10-15% of comments that I read are AI generated. When that number reaches 30% I'm done forever, gone. My life is too short to waste time on this shit.
neerajk 1 day ago [-]
I'm just getting started ;)
kkrish83 17 hours ago [-]
Not just tech; AI conversations seem to be dominating all conversations online. It's either people talking about it or posting slop made by it. It's fascinating tech but it's making the world a boring place.
tcdent 1 day ago [-]
The replies lol.
"Yes" Proceeds to talk about AI.
PowerElectronix 1 day ago [-]
AI is an ok tool.
josefritzishere 1 day ago [-]
Bored is a nice way to say it. Never has a technology so odious been so ubiquitous.
eudamoniac 12 hours ago [-]
I have always been an AI quality sceptic. I don't believe quality software is coming out of these models, but I figured it would probably speed up the development of poor-quality software. What's surprising is that it seems to not even be doing that. We've had three years of claimed "3x-10x productivity", which means thousands of people have had 9-30 developer-years to do something. Where is that output? I haven't seen a single AI-developed thing reach Show HN or anywhere else that was worth a damn.
So at this point I have to just assume this shit doesn't work very well for some reason, because no one is outputting anything with it that resembles good, useful software.
acedTrex 22 hours ago [-]
I am so over it. It's not interesting, and none of the people participating in it really have any noteworthy skills. I spend more time lurking on lobste.rs these days than here.
keybored 1 day ago [-]
And this one will be different?
> At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.
Umm.
> I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.
It’s all positives. So what’s the problem?
There isn’t a problem with AI. Of course. It’s just the discourse around it is “boring”. And the managers are lame about it.
And what has been the AI discourse for the last few years. The same formula.
- AI is either good
- ... or it is the best thing to have happened to Me
- But I have feelings[1] or concerns about everything around AI, like the discourse, or people having two-hundred concurrent AI agents mania
It’s all just grease for the AI Inevitabilism bytemill.
> … And yes, I’m painfully aware of the irony of a post about moaning about posts about AI. Sorry.
OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe”
Very self-aware.
And now 117 points and 53 comments in 23 minutes.
jakelsaunders94 24 hours ago [-]
Hey :)
> And this one will be different?
I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.
> Umm.
??
> It’s all positives. So what’s the problem?
The article is trying to say that these things are great, but the level of conversation leads to a lack of novelty.
> It’s just the discourse around it is “boring”. And the managers are lame about it.
Exactly.
> OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe”
> Very self-aware.
Is this sarcasm?
keybored 23 hours ago [-]
> I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.
Is anybody else bored of talking about AI? I’m beyond bored.
internet2000 1 day ago [-]
Not really? It's kind of a big deal.
schaefer 1 day ago [-]
> Not really? It's kind of a big deal.
Why on earth is the parent comment downvoted?
The title of TFA asks a question. This statement directly answers that question. Seems very on-topic.
zeroonetwothree 19 hours ago [-]
I didn't downvote it but it would be more interesting if it had some content, like _what_ about it they find interesting.
geldedus 23 hours ago [-]
I am bored of Luddite people yelling at AI
bena 1 day ago [-]
Talking about how you are bored about talking about the thing is still talking about the thing.
I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development. All the people I work with who are getting the most value out of using AI to deliver software are people who are already very high-skilled engineers, and the more years of real experience they have, the better.
I know some guys who were road warriors for many years: everything from racking and cabling servers, setting up infrastructure, and getting huge cloud deployments going, all the way to embedded software, video game backends, etc. These guys were already really good at automation, seeing the whole life cycle of software, and understanding all the pressure points. For them, AI is the ultimate power tool. They’re just flying with it right now. (All of them are also aware that the AI vampire is very real.)
There’s still a lot to learn, and the tools are still very, very early on, but the value is clear.
I think for quite a few people, engaging with AI is maybe the first time ever in their entire career they are having to engage with systems thinking in a very concrete and directed way. Consequently, this is why so many software engineers are having an identity crisis: they’ve spent most of their career focusing on one very small section of the overall SDLC, meanwhile believing that was mostly all there was that they needed to know.
So I think we’re going to keep talking for quite a while, and the conversation will continue to be very unevenly distributed. Paradoxically, I’m not bored of it, because I’m learning so much listening to intelligent people share their learnings.
> I think what’s interesting about AI, and why there’s so much conversation, is that in order to be a good user of AI, you have to really understand software development.
This I agree with completely. You can see it in the difference between a prompt where you know exactly what you want and one where things are a little woolly. A tool in the hands of a well-trained craftsperson is always better used.
> So I think we’re going to keep talking for quite a while

Me neither, and to be clear I'm okay with that. This was mostly a rant at the lack of diversity of discourse.
Agree, the diversity of the discourse is not great. There's a lot of "omg I just got started waaauw" articles out there along with "we're all gonna die!" stuff. And then a few seams of very excellent insight.
Deep research at least helps with dowsing for the knowledge...
{heart}
https://www.youtube.com/watch?v=k6Rl8TpGIP4
https://www.youtube.com/watch?v=Q0iuMByFjCQ&list=PLEouLkiLHd...
;)
It's really hard to separate the wheat from the chaff at this point, but I've been positively surprised by the relatively few articles sharing a more advanced workflow: lessons learnt that help me avoid the traps, and emerging patterns that taught me something new (or at least validated approaches I'd tried on my own that worked). It gets tiresome to keep pace, so I try not to fall for FOMO, and avoid experimenting too much so I don't get lost before I see a pattern emerging from different sources.
This isn’t to say there’s not hype. Just that if you’re not seeing big productivity gains you need to make sure you really are an outlier and not just surplus to requirements.
Rather, I hear a lot of nuanced opinions of how the tech is useful in some scenarios, but that the net benefit is not clear. I.e. the tech has many drawbacks that make it require a lot of effort to extract actual value from. This is an opinion I personally share.
In most cases, those "big productivity gains" are vastly blown out of proportion. In the context of software development specifically, sure, you can now generate thousands of lines of code in an instant, but writing code was never the bottleneck. It was always the effort to carefully design and implement correct solutions to real-world problems. These new tools can approximate this to an extent, when given relevant context and expert guidance, but the output is always unreliable, and very difficult to verify.
So anyone who claims "big productivity gains" is likely not bothering to verify the output, which in most cases will eventually come back to haunt them and/or anyone who depends on their work. And this should concern everyone.
This is overly dismissive, there are many things that are possible now that weren't before because writing the code is no longer the bottleneck, like porting parts of the codebase from managed to unmanaged for teams with limited capacity. Writing code is about 1/3rd of the job. Another 1/3rd is analysis, which also benefits from AI allowing people who aren't very good at it to outperform. The final 1/3rd is-
> the effort to carefully design and implement correct solutions to real-world problems.
That's problem-solving - that part doesn't get sped up, and likely never will, reliably.
The only downside is not learning about methods Y or Z, which work differently than X but would also be sufficient, and you don’t learn the nuances and details of the problem space for X, Y, and Z.
No, it's: verify that approach X is semantically correct, architecturally makes sense, and has a valid design, and then add tests and documentation. Basically, 80% of the work.
I've learned about the outbox pattern, eventual consistency, the CAP theorem, etc. It's been fun. But if I hadn't asked the LLM to help me understand, it would have just gone with option A without me understanding why.
"Just verify" is glossing over a lot of difficult work, though. It doesn't just involve checking whether the program compiles and does what you wanted—that's the easy part. You should also verify that the program is secure, robust, reasonably performant, efficient, etc. Even if you think about these things, and ask the tool to do this for you, generate tests, etc., you will have the same verification problem in that case as well. The documentation could also be misleading, and so on. At each step of this process there will likely always be something you missed, which considering you're not experienced in X, Y, or Z, you have no ability to properly judge.
You can ignore all of this, of course, which the majority of people do, but then don't be surprised when it fails in unexpected ways.
And verification is actually relatively simple for software. In many other fields and industries verification is very impractical and resource intensive. It doesn't take a genius to deduce the consequences of all of this. Hence, the net effect of these tools is arguably not positive.
I run static analysis on mixed human/AI codebases. The AI parts pass tests fine but they'll have stuff any SAST tool flags on first run — hardcoded creds, wildcard CORS, string-built SQL. Works in a demo, turns into a CVE in prod.
And nobody's review capacity scaled with generation speed. Most teams don't even have semgrep in CI. So you get unreviewed code just sitting in production.
The "10x" is real if you count lines shipped. Nobody counts the fix cost downstream though.
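For concreteness, here is the kind of string-built SQL finding described above, sketched with Python's sqlite3 module (the table, user, and payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "nobody' OR '1'='1"  # classic injection input

# String-built SQL: the pattern SAST tools flag on first run.
# The payload rewrites the WHERE clause so it matches every row.
injected = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % payload
).fetchall()

# Parameterized query: the driver treats the payload as data, not SQL,
# so it matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(injected, safe)  # injected returns the admin row, safe returns []
```

Most SAST rules match the string formatting on the query purely syntactically, which is why this gets caught on the first run even though it "works in a demo".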
I am amazed at the incredible things it can do, only to turn around and not be able to do a simple task a child can do. Just like people.
Specialists/generalists, top-down/bottom-up, BFS/DFS, pragmatists/idealists, ADHD/ASD; lots of continuums in software work and those at either extreme have biases.
Personally I think there will be fewer programmers needed, and the ones that remain will have had to mellow out towards the center on all these continuums. We won't be able to rely on big teams balancing each other out.
Generalists will need to learn which details matter. Specialists will need to learn the delegation and risk tolerance usually reserved for the bemoaned management track. Hard to say which is the easier journey.
A lot of software engineering career capital was built on knowing which obscure method to call, which Stack Overflow answer to trust, how to navigate a specific framework's quirks. That knowledge was genuinely hard to acquire and it was a real signal. Now it's table stakes. The career capital that survives is knowing why you'd make a particular architectural decision, how to tell if generated code is actually correct, what the error message is really telling you.
The road warrior framing is right. Those people internalized systems thinking across the whole stack over years. AI doesn't replace that — it makes it worth more, because now one person with that mental model can move faster than a team without it. The people who are "bored of AI" are often the people who already made that transition and stopped finding it novel. The people still anxious about it usually haven't yet.
It's also a nasty tool used to dismiss criticism by tearing people down in work-friendly language.
Software does employ a lot of turds that it shouldn't. We been knowing this. It's been impossible to ignore following the 2010s push to expand the hiring pool. Newcomers didn't even pretend to try or care.
Convenient that we're suddenly calling them out now. At the same time there's a need to indiscriminately invalidate professional-informed opinions.
I don't mean to snipe at AI, because it really does seem to have set more people on the path of learning, but I was writing VB5 apps when I was 14 by copying poorly understood bits and pieces from books. Now people are doing basically the same but with less typing and everyone thinks it's a revolution.
You might not consider this productive, but they do, so what you think literally doesn't matter to them.
There is more to it than "being able to make an entire application", which a novice could also have pulled off in a weekend 10 years ago.
I think it is genuinely impressive to be able to build an app with AI. But I haven't seen evidence that someone could build a maintainable, scalable app with AI. In fact, anecdotally, a friend of mine who runs an agency had a client who vibe coded their app, figured out that they couldn't maintain it, and had him rewrite the app in a way that could be maintained long term.
Again, I'm not an AI detractor. I use it every day at work. But I've needed to harden my app and rules and such, so that the AI cannot make mistakes when I or another engineer is vibing a new feature. It's a work in progress.
I don't believe they said that folks new to AI can't make impressive use of it. They did however say that senior folks with lots of scrappy and holistic knowledge can do amazing things with it. Both can be true.
They still have absolutely no clue how it works, so how could they "write entire applications"? They vibed it, but they certainly didn't write any of it, not one bit of it, and they're clueless as to how to extend it, upgrade it, and maintain it so that the AI doesn't make it a bloated monstrosity of AI patches and fixes and workarounds that they simply could never begin to understand.
They were also following a dozen youtube tutorials step by step, so even that part was someone else doing the thinking.
Yeah, these are the same guys constantly bugging me to help them figure something out.
I’m not a naysayer by any means, and at this point I use LLMs all day for many purposes. But it is undeniable that the exact moment complexity reaches a certain threshold, LLMs need more and more guidance and specific implementation details to make worthwhile improvements. This same complexity threshold is where real-world functionality lives.
But at the same time, the more I read about AI, the more I realize I need to learn about AI. Thus far I'm just using Cursor and the Claude Code extension alongside obra's superpowers, and I've been quite happy with it. But on Twitter I see people with multiple instances of Claude Code or OpenClaw talking to each other, and I don't even know how to begin to understand what's going on there. But I'm not letting myself get distracted: Claude Code and OpenClaw are tools. They could go away at any time. But systems thinking is something that won't go away. At least, that's my gambit.
Does it write good code? I dunno. But it looks cool, and I think interesting in its own right, even if it ends up being functionally useless.
Well, there were also a lot of unrelated things that happened around last November for me, but yes, getting into vibecoding for real was one of them, and man, I feel physically drained coming back from work and going to use more AI.
Not sure what it is. I'm using AI personally to learn and bootstrap a lot of domain knowledge I never would have learned otherwise (I even got into philosophy!), but man is it exhausting keeping up with AI. I would burn through a week's worth of credits in a day, and now I haven't vibe coded in a week.
I think I will chill. One day at a time.
My take is that it's similar to what Amber Case described in Calm Technology - with AI you are not steering one car, you're really steering three cars at the same time. The human mind isn't really designed for that.
I am finding that really structuring my time helps in terms of fighting back. And adopting an hours restriction: even if I could rage for 4 more hours, I don't. Instead I stop and go outside.
Me too. A key purpose of HN, and a bright time for that.
AI makes a ton of bad decisions too and it's up to you to work with it. If I had the knowledge of the dangers hidden in things I'm developing, I'd move even faster
I was able to make a great full web app, which I think is hardened for prod, but it had to be refactored to get there. Which it happily did.
It's really about asking the right questions, breaking down tasks, and planning now. I'm going to tackle a huge project, hoping to share it here.
Now I'm working on a second project, all with AI. I haven't written a single line. It works better than a non programmer would make because I knew what to ask for. But I'll admit I'm not learning anything.
https://github.com/hparadiz/evemon
The core if you disable all the plugins is currently topped out at 73.8 MB after several days of running it. I've given it several audits with the AI agents using actual memory maps and doing the math on each variable.
I haven't had time to do Milkdrop yet but it's on my todo. The issue isn't doing the work. The issue is not having enough credits in my accounts to throw some compute at it. I'll get to it eventually. But it's actually way easier now to try new ways of packing the data into binary and profiling it for issues.
The issues I've had are edge cases like a 6 hour youtube stream. At one point the BPM detector was buffering the entire track in the pipewire sink. It took one throwaway prompt to the AI to solve that one.
Then the same happened with languages that manage memory for you.
And with IDEs that could refactor your code in a click and autocomplete API calls.
And with Stack Overflow where people copy/pasted code they didn't understand.
But with that said, those who learn the underlying mechanisms will always be able to solve more problems than the folks who don't. When you know the lower pieces, your mental model tells you when and where the higher level pieces are likely to break. Legit superpower.
I would define that as being "seriously hamstrung"
So yeah, I mean: who cares how it works. But also, if you have experience in how things _do_ work, you can solve problems other people cannot.
Yet most programmers nowadays can't write ASM or C and still manage to produce useful software.
For example, I haven’t racked and cabled a server in over 15 years. That used to be a valuable skill.
I also used to know how to operate Cisco switches and routers (on the original IOS!). I haven't thought about CIDR and the difference between a /24 and a /30 since the year 2008. Class A IP addresses, how do those work? What subnet am I on? Is this thing running on a different VLAN? Irrelevant to me these days. Some people still know it! But not as many as in the past.
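For anyone whose CIDR memory is as rusty, the /24 vs /30 difference is easy to demonstrate with Python's standard-library ipaddress module (just a sketch to jog the memory, with example addresses from the documentation range):

```python
import ipaddress

# A /24 leaves 8 host bits; a /30 leaves only 2.
net24 = ipaddress.ip_network("192.0.2.0/24")
net30 = ipaddress.ip_network("192.0.2.0/30")

print(net24.num_addresses)  # 256 addresses (254 usable hosts)
print(net30.num_addresses)  # 4 addresses (2 usable hosts)

# hosts() excludes the network and broadcast addresses.
print(list(net30.hosts()))  # [IPv4Address('192.0.2.1'), IPv4Address('192.0.2.2')]
```

A /30 is the classic point-to-point link size: two usable addresses, one for each end.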
The late Dr. Richard Hamming observed that once upon a time, "a good man knew how to implement square root in machine code." If you didn't know how to do that, you weren't legit. These days nobody would make such a claim.
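For the curious, one classic way that trick was done is Newton's method on integers, which needs nothing fancier than adds, shifts, and divides. A sketch in Python rather than machine code:

```python
def isqrt(n: int) -> int:
    """Integer square root: the largest x with x*x <= n.

    Newton's iteration x_{k+1} = (x_k + n // x_k) // 2,
    which converges monotonically once it starts descending.
    """
    if n < 2:
        return n
    x = n
    y = (x + n // x) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

print(isqrt(10))   # 3
print(isqrt(144))  # 12
```

Modern Python ships this as math.isqrt, which is part of why nobody has to prove themselves this way anymore.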
So some skills fade and others rise. And also, software has moved in predictable cycles for many decades at this point. We are still a very young field but we do have some history at this point.
So things will remain the same the more they change on that front.
Also, anyone building a homelab has to know this stuff.
And there'll be a split too... like there's a giant divide between the mechanics who used to work on carburetors and the new generation with microcontrollers, injection systems, etc. People who think cars are 'too complicated' aren't wrong, but as someone who grew up in the injected era, I vastly prefer debugging issues over the CAN bus to snaking my ass around a hot exhaust to check something.
Since COVID I have seen teams scaled down, and lots of custom development and devops/infra work got replaced with SaaS and iPaaS cloud products, serverless/lambda, and managed containers.
This is the next step.
Great that people feel more productive; unfortunately for many of them (us), more productivity means the C-suites can do some headcount reduction yet again.
I don’t know if it’s the Universe delivering this farce or it’s the emergent LLM Singularity.
That's not what I said. I said that those who are already shining, are now shining even brighter. Give a great craftsman a new tool and he will find a way to apply it. If it is valueless, he will throw it away.
For what it's worth, your comment is also an HN trope, the disaffected low-effort armchair keyboard warrior.
But that said, this will be the 3rd major industry transition of my career. And having survived the past two, you will adapt, or you won't have a job. And that's why, once again, you will ultimately adapt, kicking and screaming if that's what it takes, so why not start early?
AI coding agents are useful already, but they make too many mistakes and they need handholding from expert engineering talent in 2026. Ask me again in 2027. But that's why the best results are coming from the talent right now with the experience to ask the right questions and propose the right tests and fixes as the human in the loop. Otherwise, it's still hallucinatory vibe coding in a loop IMO.
The surprise and disappointment, well not the surprise, is the usual hatred of success that defines humanity. Whatever, downvotes, right?
It's what happened with the internet and computer usage. As Apple made it easier to get online with zero computer knowledge, suddenly we're electing people like Donald Trump.
AI is the thing that for the first time can think better than us (or so at least some people believe) and is seen as an efficiency booster in the world of cognition and ideas. I'd think Hannah Arendt would be worried with what we are currently seeing and where we might be headed.
Turns out Lowtax was right and ahead of his time
We have hundreds and thousands of years of history showing humans committing atrocities against each other well before the advent of computers, or even the introduction of electricity. So while the tool may become so ubiquitous that there’s no option not to engage with it, I don’t think it really fundamentally alters the dynamics of human behavior.
Some people are motivated by greed. Others are motivated by nobility. It really just comes down to which wolf they're feeding.
In terms of the tool keeping people ignorant, there’s a part I agree with and a part that I don’t. I think, in terms of information dissemination, AI is probably the autocrat’s wet dream in terms of finally being able to achieve real-time redefinition of reality. That’s pretty scary, and I’m not sure what to do about it.
On the other hand, people have always been free to not really learn their craft and to just sort of get by and make a living. That was true a thousand years ago, and it's true today. There's always somebody who can do a really high-quality job, but they're very expensive, and then there's a vast population who will do a medium-to-terrible job for less money. You get what you pay for. There's a reason history is primarily written about people with power and wealth: they were the only ones with the means to do anything.
I don’t agree with the assertion about the internet and the election of someone like Donald Trump. Well before the internet existed, politicians were using communication mediums to influence things and get elected—whether it was the telegraph, the telephone, or the TV. JFK famously was the first TV president (notably, he didn't wear a hat).
These technologies simply give politicians more reach, and they may change the dynamics of how voters are persuaded. But what’s true today was true three hundred years ago: there’s the face of power that you see publicly, and then there’s what really happens behind the scenes.
Spoken like someone who thinks they are going to be insulated from the fallout
Sure, it might hurt me personally. I'm not selfish enough to put that over what will be an incredibly empowering development for our species.
This will be good for a handful of elites and no one else
Where are they flying, and why has software gone to shit?
Maybe these superstar programmers have to keep their reality-breaking technology secret, but everything has not only degraded, it has turned to absolute trash.
Yup
AI seems great when you have no way of truly validating its output.
My partner teaches at a small college. These people are absolutely lost, with administration totally sold on the idea that "AI is the future" while lacking any kind of coherent theory about how to apply it to pedagogy.
Administrators are typically uncritically buying into the hype, professors are a mix of compliant and (understandably) completely belligerent to the idea.
Students are being told conflicting information -- in one class that "ChatGPT is cheating" and in the very next class that using AI is mandatory for a good grade.
It's an absolute disaster.
In the relocation industry, it's losing translators, relocation consultants and immigration lawyers a lot of work. Their cases are also getting tougher because people are getting false information from ChatGPT and arguing with them.
This problem is compounded by the lack of training data for that topic. I spent years surfacing that sort of information and putting it online, but with AI overviews killing the economics of running a website, it feels pointless.
I see such stories everywhere. People being replaced by something half as good but a tenth of the cost. It's putting everyone out of work and making everything worse.
You can feel it with AI-generated content and responses, in AI-generated art, customer service bots and vibe-coded software. This gradual worsening of everything won't lead to lower prices or a better experience, so it's not really a tradeoff.
Now every toilet on the market only flushes number one. But hey, they're so much cheaper.
The closer they can map their real problems to make-document-bigger, the better their results will be.
Alas, that alignment is nearly 100% when it comes to academic cheating.
It is all small things, but none of those small things are captured anywhere so whoever is on the other end has to 'discover' through trial and error.
They paid for custom on-prem software, and in over a year they have not fully provided both the access and the infrastructure to install it.
We have been paid already, but they paid for a tool they can’t get their shit together enough to let us install.
This is the opposite.
Doesn't sound that different from my tech job
So in most courses nothing has changed in the way we grade. Suddenly coursework grades have gone up sharply. Anyone with working neurons knows why, but in the best case, nothing of consequence is done. In the worst case (fortunately uncommon), there are people trusting snake-oil detectors and probably unfairly failing some students. Oh, and I forgot: there are also some people who are increasing the difficulty of the coursework in line with LLMs. Which I guess more or less makes sense... except that if a student wants to learn without using them, they will suddenly find the assignments to be out of their league.
So yeah, it's a mess.
My son, a freshman at a major university in NYC, told his freshman English professor that he wanted to write his papers without using AI, and was told that this was "too advanced for a freshman English class" and that using AI was a requirement.
One of the teaching methods is "look at the context, like pictures, and guess what the word is". One example I remember was thinking "pony" is "horse" due to association without being able to sound it out.
https://twentyprsaday.github.io/
Now I have this love/hate relationship with it. Claude Code is amazing. I use it everyday because it makes me so much more efficient at my job. But I also know that by using it I’m contributing to making my job redundant one day.
At the same time I see how much resources we are wasting on AI. And to what end? Does anybody really buy the BS that this will all make the world a better place one day? So many people we could shelter and feed, but instead we are spending it on trying to make your computer check and answer your emails for you. At what point do we just look up and ask… what is the damn purpose of all of this? I guess money.
I know someone who worked for a nonprofit that made pregnancy health software that worked over text messaging. Its clients were women in Africa who didn’t have much, but they had a cell phone, so they could get reminders, track vitals, and so forth.
They had to find enough funding to pay several software engineers to build and maintain that system. If AI allows a single person to do it, at much lower cost, is that bad?
So in isolation, I think it's great that they managed to achieve this. But I mourn that the only way they achieved it was via this rapacious truth-destroying machine.
This isn't a new trend - AI didn't cause it. It's just the latest version of it.
The actual community building is not nearly as automated unless you have very specific problems. Even in the example above, having an automated message is useful, but staffing the team to handle things when they are NOT in a good spot would probably be the real scaling cost.
Yeah - I think there's a lot of cool sci-fi like stuff in the future.
Unfortunately for fellow developers, software enables massive scale.
To add to the list of questions: it's undeniable that AI is making humans dumber by doing mental work previously done by humans. So why do we spend so much energy making AI smarter and our fellow humans dumber?
Shouldn't we be moving in the opposite direction and investing in people, instead of in some software and the greedy psychopaths at the helm of the large companies behind it?
I don't see how this is the case if you're anything more than a junior engineer... it unlocks so many possibilities. You can do so much more now. We are more limited by our ideas at this point than anything else.
Why is the reaction of so many people, once their menial work gets automated, "oh no, my menial work is automated." Why is it not "sweet, now I can do bigger/better/more ambitious things?"
(You can go on about corporate culture as the cause, but I've worked at regular corporations and most of FAANG. Initiative is rewarded almost everywhere.)
> Does anybody really buy the BS that this will all make the world a better place one day?
Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?
There is a reason that we aren't dying of dysentery at the ripe age of 45 on some peasant field after a hard winter day's worth of hard labor. The march of automation and technology has already "made the world a better place."
because i have rent to pay? old age to prepare for?
why is it so hard to understand most people are not rich, that the cost of living is high, and that most people are VERY afraid their jobs will be automated away? why is so hard to understand that most people haven't worked at FAANG, they don't have stocks or savings, and are squeezed harder with every new day and every new war?
what world, what reality are you guys living in?!
"Software engineer" as a profession is rapidly getting automated at my company, and yet our SWEs are delivering more value than ever before. The layer of abstraction has changed, that is all.
> what world, what reality are you guys living in?!
One that has seen immense benefits from the Industrial Revolution and previous waves of automation.
Do you think that because 2 devs are now super productive with AI, the company will keep the other 30 average devs? No, of course not; they will fire them and pocket the difference. Same for other industries, where AI will slowly diffuse like a poisonous gas and displace jobs and people, leaving behind a crippled white-collar class. The profits will not trickle down, and the increased productivity will be a hatchet, not a plough.
Yes, they will keep the other devs that can figure out how to use AI well. Businesses want to grow.
The businesses fired the staff and pocketed the difference. The result? Growth, at least on paper, as you're saying. Previously they were paying for 10 people and now they're paying for 2 so more profit yay! Of course this is a short term gain which might result in long term pain. That last part remains to be seen.
Such things were super uncommon before the industrial revolution, I'm sure.
> early industrialisation coincided with significant improvements in survival, especially in towns (Buer, 2013; Davenport, 2020a; Landers, 1993; Wrigley et al., 1997)
> population growth rates in excess of 1% per year would have resulted in falling real wages and hunger in any previous period [...] the fact that wages kept pace at all with increasing population should be viewed as a major achievement (Crafts and Mills, 2020; Wrigley, 2011).
Davenport, Romola J. (2021). "Mortality, migration and epidemiological change in English cities, 1600–1870." International Journal of Paleopathology, 34, 37–49. PMC7611108.
That being said.
You cite a study implying (you, not the study) that the Industrial Revolution was what led to lower death rates, so it's all good.
But that's not what the study says:
> These patterns are better explained by changes in breastfeeding practices and the prevalence or virulence of particular pathogens than by changes in sanitary conditions or poverty. Mortality patterns amongst young adult migrants were affected by a shift from acute to chronic infectious diseases over the period.
"than by changes in sanitary conditions or poverty" [my emphasis]
But wait! there's more! from the same study:
> The available evidence indicates a decline in urban mortality in the period c.1750-1820, especially amongst infants and (probably) rural-urban migrants.
"especially amongst infants and (probably) rural-urban migrants" ...where is the industrial revolution here?
And if that was not enough:
>Mortality at ages 1-4 years demonstrated a more complex pattern, falling between 1750 and 1830 before rising abruptly in the mid-nineteenth century.
"rising abruptly in the mid-nineteenth century"
Turns out the Industrial Revolution did in fact raise mortality and death rates.
I really don't understand this way of thinking. Don't you think that AI could replace senior engineers? Sure, companies will be able to do bigger / better / more ambitious stuff - but without any software engineers.
> Why is it BS? I'm shocked that anyone with a love and passion for technology can feel this way. Have you not seen the long history of automation and what it has brought humanity?
I definitely think that AI will be a net benefit for society but it could easily end up being be bad for me.
the swe role is going to change but problem solving systems thinkers with initiative won't go away
If AI truly solves novel thinking, then nothing is a barrier. The physical world is downstream from robotics, which is downstream from software. It'll be able to persuade nation states to collect data for itself, etc. etc. (insert sci-fi ending)
I use AI agents every day at work and I'm happy with that, but it took over two years and billions of dollars in investment to deliver anything useful (Claude Code et al). The current models are amazing, but they still randomly make mistakes that even a junior wouldn't make.
There's another paradigm shift to be made certainly, because currently it feels like we scaled up a bug brain to spit out code. It works great for some problems, but it's not what software developers usually do at work.
We owe it to the world, as the experts, to be critical. The march of automation and technology has made the world a better place in some ways. I sure love modern medicine, but those drones flying over Ukraine and Russia sure don't seem like they are making the world a better place. Nuclear bombs are not making the world a better place. Misinformation on social media is not making the world a better place.
Any belief you drink blindly will eventually find a way to harm you.
If everyone thought like you we'd be stuck in the pre-Industrial phase. How miserable that would be!
Strangely enough, I don't see you calling to end the consumption of meat which would have a far larger environmental impact while not slowing global progress at all.
Tech is what got us where we are. AI allows us to use more energy to produce more of what is currently measurably killing us.
> but AGI might save us from it.
This is just faith. Some believe that prayers may save us.
Many pieces of the energy-usage problem are orders of magnitude bigger than AI while bringing comparably less value.
Except it's not what I said.
What I said is that with AI, we do more with more (energy). "Doing more" has repercussions that go further than just the energy used to vibe code.
The reason we are measurably living in a mass extinction (that is happening orders of magnitudes faster than the one that made the dinosaurs disappear) is also the reason the climate is measurably warming (to the point where it will probably kill many of us): we are really good at producing more by using more energy.
It's not one thing (like airplanes, or meat, or whatever you want): it's everywhere. It's the whole race for producing more and more. AI is exactly part of that.
Looking at the direct energy consumption of a technology (here AI) while conveniently ignoring all its indirect impacts and concluding that "I can't understand why people think that tech is part of the problem" shows a big lack of understanding of... well, what will probably kill your kids, most likely theirs.
Note that I did not criticise the AI energy. I criticised tech as a whole. Tech is part of the problem (the problem here being "we are killing our only planet").
Tell that to the people who will die before 45 because of global instability and global warming, I guess?
All that said, it’s extremely exciting. I’ve been in tech, in one way or another, for 25 years. This is the most energizing (and simultaneously exhausting) atmosphere I’ve ever felt. The 2006-2011 years of early Facebook, Uber, etc. were exciting but nothing like this. The future is developing faster than we can process it.
Would it be such a bad thing if the "right way" to build a JavaScript frontend didn't change so much every year?
would it be such a bad thing if we moved away from JavaScript entirely?
If we commit to AI, that seems exceedingly unlikely to happen.
I mean, do we really think that JavaScript is the best way to do this? I don't. I've been in IT and software development for 30 years. I thought I would see things progress, but I have not. Same operating systems, same browsers more or less still running JavaScript, same network stack, same everything. An immense amount of work to slowly evolve things that weren't designed to evolve, for 30 years.
Thirty Years.
We all know that things around us are flawed and that there are better ways, but we do nothing about it. How many people are looking at new paradigms, new ways to do something? Three? Four? I bet it's within that order of magnitude. Come on.
I'm disappointed in everyone in this industry, including myself.
Look at Plan 9. It was different. It was flawed, but at least it tried to fix things. It tried to handle some of Unix's sharp corners differently, and for the time I think it was good. At least they made an attempt. Linux took a few lessons from it, but I don't think anyone else did. Not really.
I'm mad. We have let ourselves down, we have let ourselves stagnate and simply spin wheels because using what's here is easier than designing something new and sharing it. Influential people don't look at new things often enough. People new to the field and young people don't understand what my complaint is about really, because this is all brand new, to them. They didn't witness the stagnation. I did. I am disappointed and I don't really know what to do about it.
I write Typescript and SQL by day, my last two personal projects were Rust and Perl.
I do worry that I'm not learning them as deeply, but I am learning them and without AI as an accelerant I probably wouldn't be trying them at all.
We're about due for some new computing abstractions to shake things up I think. Those won't be conceived by LLMs, though they may aid in implementing them.
The stacks of turtles that we use to run everything are starting to show their bloat.
The other day someone was lamenting dealing with an onslaught of bot traffic, and having to deal with blocking it. Maybe we need to get back to good old-fashioned engineering and optimization. There was a thread on here the other day about PC Gamer recommending RSS readers and having a 36 GB webpage ( https://news.ycombinator.com/item?id=47480507 )
(though it sounds like if you left it for long enough, you'd get 36 GB of ads downloaded eventually)
Same. I wonder if the use of AI will lead to less invention and adoption of new ideas in favour of ideas with lots of training data.
I think we've seen what the enthusiasm leads to, once these companies establish dominance. We even coined a word for it: enshittification.
Everyone is in their own place adapting (or not) to AI. The disconnect between even folks on the same team is just crazy. At least it's gotten more concrete (here's what works for me, what do you do) vs catastrophizing about the jobpocalypse or "teh singularity", at least in day-to-day conversations.
It's just not very interesting or useful to me to read about how you got AI to output better quality code or how you can program from your phone now without going into detail. And so many of the conversations are showing off the wins without talking about the tools, configurations, or other parts of the setup that made it possible.
For a while, it felt like I'm in a minority when I was saying that it can be a useful tool for certain things but it's not the magic that the sales guys are saying it is. Instead, all the hype and the "get rid of your programmers" messaging made it into this provocative issue.
HN was not immune to this phenomenon with certain HN accounts playing an active part in this. LLMs are/were supposed to be an iteration of machine learning/AI tools in general, instead they became a religion.
> here's what works for me, what do you do
This is at least progress... but many want to remain in denial, and can't even contemplate this portion of the conversation.
We're also ignoring the light AI shines on our industry, and how (badly) we have been practicing our craft. As an example, there is a lot of gnashing of teeth right now about the VOLUME of code generated and how to deal with it... how were you dealing with code reviews? How were you reviewing the dependencies in your package manager? (Another supply chain attack today, so someone is looking, but maybe not you.) Do you look at your DB or OS? Do two decades of LeetCode, brain-teaser, FAANG-style interviews qualify candidates who are skilled at reading code? What is good code? Because after close to 30 years working in the industry, let me tell you, the sins of the LLM have nothing on what I have seen people do...
Some of it very interesting, but maybe it shouldn't be on the home page unless it's a certain critical mass (similar to show HN)?
[0] https://en.wikipedia.org/wiki/Technology_adoption_life_cycle
Alternately: the trough of disillusionment.
https://en.wikipedia.org/wiki/Gartner_hype_cycle
For me, the issue isn't that I can't conceive of work AI could help with. It's that most of the work I currently need to be doing involves things AI is useless for.
I look forward to using it when I have an appropriate task. However, I don't actually have a lot of those, especially in my personal life. I suspect this is a fairly common experience.
But I don't see it that way. I've been fascinated by AI since I was a little kid (watching Max Headroom, Knight Rider, Whiz Kids, Wargames, Tron, Short Circuit, etc in the 80's) up through college in the 1990's when I first read about the 1956 Dartmouth AI workshop that kicked the field off, and up to today where we have the most powerful AI systems we've had. Every single bit of this stuff is wildly fascinating to me, but that's at least in part because I recognize (or "believe" if you will) that there's a lot more to "AI" than just "LLM's" or "Generative AI".
I still believe there are plenty of neural network architectures that haven't been explored yet, plenty more meat on the bone of metaheuristics, all sorts of angles on neuro-symbolic AI to work on, etc. And even "Agents" are pretty exciting when you go back and read the 90's era literature on Agents and realize that the things passing for "Agents" right now are a pretty thin reflection of what Agents can be. Really understanding multi-agent systems involves economics, game theory, computer science, maybe even a hint of sociology.
As such, I still find AI fascinating and love talking about it... at least in the right context and with the right people. :-)
And besides... as they[1] say: "Swarm mode is sick fun".
[1]: https://static0.srcdn.com/wordpress/wp-content/uploads/2022/...
> What makes this worse, is our bosses have bought into it this time too. My managers never cared much about database technologies, IDE’s or javascript frameworks; they just wanted the feature so they could sell it. Management seems to have stepped firmly and somewhat haphazardly into the implementation detail now. I reckon most of us have got some sort of company initiative to ‘use more AI’ in our objectives this year.
I am extremely skeptical of AI products anyone builds. It's just using one black box to build scaffolding around another black box and then typically want to charge money for it. I don't see any value there.
AI products can and do help make the raw models applicable to targeted domains. Think of them as a black box, sure, but that doesn't mean they don't add value.
Also, it depends on who the target user is.
AI can be used to build deterministic software
All that said, I've already set up a few of my non tech close friends with Cowork and they are huge fans of it now. It's somewhat shocking how much menial repetitive work the average white collar job entails.
At my big tech, AI is every conversation with everyone, every day. Becoming AI native is a huge deal for us. Literally everyone is making AI usage a core part of their job and it's been a big productivity accelerator.
Perhaps it's different where you work, so you don't see the sentiment.
Wow that sounds horrible.
Your post was written almost verbatim by my coworker last week, who has no idea that I and half the team are not doing any of this stuff.
I spent 2024 on Mastodon and I absorbed their groupthink that AI was useless... I wish I could get that year of my life back. I wish I had that extra year headstart on AI compared to where I am now. So much of my coding frustrations that year that might have been solved from using AI. I am reluctantly back on X - I hate what has been done to Twitter, but that's where so much of the useful information on using AI is being shared.
Well, back to it. Claude has been building another local MCP server tool for me in the background.
100% feeling this divide as well.
People that deny the benefit of AI in 2026... I can't even engage with them anymore. I just move on with my life. These people are simply not living in reality, it will catch up to them eventually (unfortunately.)
Before that we were excited about the wheel and the creation of fire. All capital drained into those ephemeral fancies.
The cycles cycle on.
Like the new frontend frameworks that started coming out every week sometime after 2010. Not jumping on every single one, and waiting until React was declared the winner before learning it, worked well. Sure, someone who used it from day 1 had more experience, but one could quickly catch up.
The only thing that has stuck thus far is the cloud. Though not for infinite scalability and resiliency, because that just dumps big invoices in your lap.
The Cloud happened as well, as you've pointed out
AI adoption is well past Quantum and Web 3. Comparing it to those two is nonsensical.
All those listed and more, are part of the cycles that the parent comment mentioned and which I've continued.
Same thing with Agile. Mostly sprint-based waterfall, iterative development is not something I've ever seen in practice. Or people over processes, remember those ideas?
Big Data was another hype cycle where even smaller companies wanted a "piece of the action". I worked at the time at a sub-50-developer company, and the higher-ups were all about big data, when in fact our system was struggling with GBs of data due to frugality in hardware.
For a moment in time you couldn't spit in any direction without hitting a Domain Driven Design talk. And now we disable safeguards and LLMs write a mix of garbled ideas from across all the laundered open source training data.
Too early to tell where AI will land, and whether it will bring down the economy with it, but the current spending rate doesn't deliver equal results for all, and we will have to see after the dust settles.
The new HN is full of people filled with anxiety about being replaced by an advanced calculator.
To an outsider, it could almost be funny if it wasn't so sad.
---
Personally, I'm still very interested in the topic.
But since the tech is moving very fast, the discussion is just very very unevenly distributed: There's lots of interesting things to say. But a lot of takes that were relevant 6 months ago are still being digested by most.
Never heard this and I like it very much. This is just an off-topic comment to say thanks!
I don't like the hype language applied by the channel host one bit - and so this is not something where I expect someone tired of the hype to be swayed - but I think his perspective is sometimes interesting (if you filter through the BS): He seems to get that the real challenge is not LLM quality but organisational integration: Tooling, harnesses, data access, etc, etc. And so in this department there's sometimes good input.
[1]: https://en.wikipedia.org/wiki/Mastodon_(social_network)
All they essentially did was tell the LLM to test and verify whether the answer is correct with a prompt like the following:
>"You just edited X. Before moving on, verify the change is correct: write a short inline python -c or a /tmp test script that exercises the changed code path, run it with bash, and confirm the output is as expected."
Now whether this is true, I don't know, but I think talking about this kind of stuff is cool!
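For concreteness, here's a sketch of the sort of throwaway check that prompt would have the agent write and run. The `slugify` helper is a made-up stand-in for whatever code was just edited, not anything from the comment above:

```python
import re

def slugify(text: str) -> str:
    """Stand-in for the freshly edited code under test."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

# The inline verification the prompt asks for: exercise the changed
# code path and confirm the output before moving on.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  lots   of   spaces  ") == "lots-of-spaces"
print("verified")
```

The point of the technique is just that the model has to produce and run concrete evidence of correctness, rather than asserting it in prose.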
Our local tech meetup is implementing an "LLM swear jar" where the first person to mention an LLM in a conversation has to put a dollar in the jar. At least it makes the inevitable gravitational pull of the subject somewhat more interesting.
I'd imagine similarly there were points in time when people went to concerts just to see the electric guitars and lighting setups.
Arguably Jean-Michel Jarre concerts were 100% gear-porn shows.
Endlessly grooming the Agent reminds me of Gastown.
Curious to see what he'll present, if anything, from his 700+ contributions in private repositories.
But nobody wants to hear about prompt calibration or pipeline architecture. They want to hear "I replaced my whole team with agents." The boring, useful work is invisible, and the flashy stuff gets all the oxygen.
The new GenAI architectures and tooling supported by them just give more fun things to do and fun ways to do it.
I "tried" Claude the other day. It gave me 3 options for choosing, effectively, an API to call an AI. The first were sort of off limits, b/c my company… while I think we have a Claude Pro Max Ultra+ XL Premium S account, it's Conway's Law'd. But, oh, I can give it a vertex API key! "I can probably get one of those" — I thought to myself. The CLI even links you to docs, and … oh, the docs page is a 404. But the 404's prose misrepresents it as a 500.
Maybe Claude could take a bit of its own medicine before trying to peddle it on me?
We're on like our 8th? 9th? Github review bot. Absolutely none of them (Claude included) is seemingly capable of writing an actual suggestion. Instead of "your code is trash, here's a patch" I get a long-form prose explanation of what I need to change, which I must then translate into code. That's if it is correct. The most recent comment was attached to the wrong line number: "f-string that does not need to be an f-string on line 130" — this comment, mind you, the AI has put on line 50. Line 130? "else:" — no f-strings in sight.
"Phd level intelligence."
It will calm down once the dust starts to settle and there's some kind of consensus on how the chips have fallen.
Also there is an irony that talking about being sick of talking about AI is still talking about AI.
The only thing that triggers me about it is people's inability to understand how a scam works, even after falling for such scams for the n-th time.
Hyperloop, ubeam, blockchain, Elon musk taking all to mars....
In this line of scams, LLMs are a wet dream...
Another thing for me is that it has gotten a lot harder for small teams with few resources, let alone one person, to release anything that can really compete with what the big players put out.
Quite a few years back I was working on word2vec models / embeddings. With enough time and limited resources I was able to, through careful data collection and preparation, produce models that outperformed existing embeddings for our fairly generic data retrieval tasks. You could download models from Facebook (fastText) or other models available through gensim and other tools, and they were often larger embeddings (eg 1000 dimensions vs 300 for mine), but they would really underperform. And when evaluating on general benchmarks, for what existed back then, we were basically equivalent to the best models in English and French, if not a little better at times. Similarly, later, some colleagues built a new architecture inspired by BERT after it came out, which again outperformed any existing models we could find.
But these days I feel like there is nothing much I can do in NLP. Even to fine-tune or distill the larger models, you need a very beefy setup.
I don't know how I'm burnt out from making this thing do work for me. But I am.
AI is the red herring that'll waste all our attention until it's too late.
And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
> And most of that new capacity will be natural gas. That increase would basically wipe out the reduction in CO2 emissions the USA has had since 2018.
Emissions in 2018 were ~5250M metric tons and in 2024 they were ~4750M. That is a reduction of ~10% of total emissions. Without going into calculations of green electricity and such, it's still safe to say AI using 10% of the grid would not completely wipe out the reduction.
[0]: https://www.statista.com/statistics/183943/us-carbon-dioxide...
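A quick back-of-envelope check of those figures, taking the comment's rough numbers at face value:

```python
# Figures from the comment above, in million metric tons of CO2.
# Rough numbers, not authoritative data.
emissions_2018 = 5250
emissions_2024 = 4750

reduction = emissions_2018 - emissions_2024          # absolute drop, Mt
reduction_pct = reduction / emissions_2018 * 100     # relative drop

print(f"reduction: {reduction} Mt (~{reduction_pct:.0f}%)")
```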
Transportation, especially ALL transportation, does a LOT. You're looking for ROI not the absolute values. I think it's undeniable that the positive economic effect of every car, truck, train, and plane is unfathomably huge. That's trains moving minerals, planes moving people, trucks transporting goods, and hundreds of combinations thereof, all interconnected. Literally no economic activity would happen without transportation, including the transition to green energy sources, of which would improve the emissions from transportation.
I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.
Also, I'm not sure about your math. 4% would be 4% of the whole like in a pie chart, not 4% of the remainder after removing one slice. 4% AI, 30% transportation, 66% other. I don't know where that 40% is from.
40% is for energy use in the US in the form of electricity. It was a rough number that I pulled from my memory. It is roughly right though. Check https://www.eia.gov/energyexplained/us-energy-facts/
AI is not currently 4% of the energy market of the US. Only the grid. I should have been more clear about the ALL ENERGY vs GRID distinction.
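The ALL ENERGY vs GRID distinction is easy to conflate; with the rough numbers from the comments above:

```python
# Rough shares from the thread: electricity is ~40% of US energy use,
# and AI is taken to be ~4% of the grid. Both are approximate.
grid_share_of_energy = 0.40
ai_share_of_grid = 0.04

# 4% of the grid is therefore only ~1.6% of ALL US energy use.
ai_share_of_all_energy = grid_share_of_energy * ai_share_of_grid
print(f"AI as a share of all US energy: {ai_share_of_all_energy:.1%}")
```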
> I think it might be more emissions-efficient at generating value than AI by a factor exceeding the 7.5x energy use. Moving rocks from (place with rocks) to (place that needs rocks) continues to be just an insanely good thing for humanity.
I really made no statement on the value of doing things. Transportation is obviously very valuable. I just wanted a more fact based conversation.
I don't follow. The comparison is 30% of energy use for transportation vs 4% for AI, and soon 30% for transportation vs 10% for AI.
And that leaves a 6:1 ratio assuming projections run true. It very well might be possible to get efficiency wins from the transportation sector that outweigh growth in AI.
Of course nearly all of that growth is going to be AI
But yeah, there's way worse industries out there when it comes to climate change impact.
AI buzz, and now we are building gigafactories. The name stands for gigawatt usage; no lesser target.
It is, of course, because it barely uses any energy.
If you want to point at causes of climate change, look no further than adtech. It's the driving force behind our overconsumption.
And it has perhaps an even longer list of reasons to hate it.
So this is not a good reason to oppose AI. Now the sheer energy it requires does mean we might want to go nuclear though.
Natural gas is nice though because it does pollute the air far less than coal.
You might argue the EPA only repealed that because of political agendas, but the same argument could be made for why it was passed.
A lot of people got very rich off the fear mongering from climate alarmists.
And, you may be right, it may not be that big a deal and that we're being alarmists, but it seems like we currently have the tools to slow it down greatly. Why not be on the safe side and use them?
... but to be honest, guessing my opinion won't sway you in any way, still thought I'd try. thanks!
The value of plowing ahead and using more energy is worth far more than making sure Florida doesn’t lose some coastline.
The presumptions that annoy me with the alarmists are that they completely negate human agency and ingenuity, and that they ignore the economic cost of many of the proposed plans.
Natural gas is far better than coal and should be encouraged rather than condemned. Nuclear power is best of all, is the cleanest and safest energy, and yet is hardly ever the first choice of the alarmists.
I’d rather spend double the energy unlocking breakthroughs in science with the help of AI, and address the problems when they come. I don’t go out of my way to lower my “carbon footprint”, but I also don’t just do things that are wasteful and deliberately harmful to the environment.
AI making us forget how to think for ourselves is a far bigger risk to mankind than climate change. Thanks.
Yeah, I don't think most people who support battling climate change are extremists. We just believe it's a big problem, and, to put it in monetary terms, having to deal with major changes in climate could cost the world tens of trillions of dollars by some scientists' predictions. It's like any problem: doing relatively small fixes now could save enormous amounts of time and money later down the line. Seems like it would probably be a good use of our efforts.
I probably just overreact and judge certain statements too quickly, from my experiences with people who act like I'm destroying the earth because I have more than 3 kids.
I appreciate reasonable people though, and I should not assume everyone is a crazy alarmist because they have any concern, so I apologize.
I'm finding the detractors worse than the hype, because it seems like a certain subset of detractors [0] formed their opinion on AI in late 2022/early 2023 when ChatGPT came out (REALLY!? Over 3 years ago!?) and then never updated their opinions since then. They'll say things like "why would I want to consume X amount of energy and Y amount of water just to get a wrong answer?"
In other words, the people who think generative AI is an absolutely worthless and useless product are more annoying than the ones that think it's going to solve all the world's problems. They have no idea how much AI has improved since it reached center stage 3 years ago. Hallucinations are exceptionally rare now, since they now rely on searching for answers rather than what was in its training data.
We got Claude Desktop at work and it's been a godsend. It works so much better to find information from Confluence and present it to me in a digestible format than having to search by hand and combing through a dozen irrelevant results to find the one bit of information I need.
[0] For the purpose of this comment, this subset is meant to be detraction based on the quality of the product, not the other criticisms like copyright/content theft concerns, water/energy usage, whether or not Sam Altman is a good person, etc.
Detractors, doomers, and techno-pessimists have got to be the most consistently wrong group in history. https://pessimistsarchive.org/
But I do think humanity is worse off because of it. So I'm a detractor in that way. :)
Well, I wouldn't go that far, but the hallucinations have moved up to being about more complicated things than they used to be.
Also, I've seen a few recent ones that "think" (for lack of a better word) that they know enough about politics to "know" they don't need to search for current events to, for example, answer a question about the consequences of the White House threatening military action to take Greenland. (The AI replied with something like "It is completely inconceivable that the US would ever do this").
I mean, you can get mad at people you made up in your head, that's a thing people do, but this caricature falls in the same comforting bucket as "anyone who doesn't like <thing I like> is just ignorant/stupid" and "if you don't like me you're just jealous".
Maybe non-straw people have criticisms that aren't all butterflies and rainbows for good reasons, but you won't get to engage with them honestly and critically if you're telling yourself they're just ignorant from the start.
For example, I will bet that non-straw people will take issue with this, and for good reasons:
> Hallucinations are exceptionally rare now
In contrast, what harm do those detractors cause? They don't generate as much code per hour?
The "harm" (if you can call it that) is clear: detractors slow the pace of progress with meaningless and incorrect hand-wringing. A lack of progress harms everyone (as evidenced by our amazing QoL today compared to any historical lens).
Considering our climate, political and economic situation, I'd say not only is slowing the pace of progress not harmful, it's actually imperative for our long-term survival.
Also we need detractors because if we race into any technological advance too quickly we may cause unnecessary harm. Not all progress is without harms, and we need to be responsible about implementing it as risk-free as possible.
This is such a perfect example of the mania behind this rollout.
There's no way you can make the financials work here compared to Atlassian spending the same millions spent on AI infrastructure and instead building better search in Confluence. Confluence search SUCKS, but that's just a lack of focus (or resources) on building a more complex, more robust solution. It's a wiki.
Either way, making a more robust search is a one time cost that benefits everyone. Instead, you're running a piece of software that goes directly to Anthropic's bank account, and to the data centers and to hyper scalers. Every single query must be re-run from scratch, costing your company a fortune, that if not managed properly will come out of spending that money elsewhere.
It seems ridiculous right now because we don’t have hardware to accelerate the LLMs, but in 5 years this will be trivial to run.
Even with an LLM agent getting cheaper to run in the future, it's still fundamentally non-deterministic so the ongoing cost for a single exploration query run can never get anywhere near as cheap as running a wiki with a proper search engine.
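To put rough numbers on that argument, here's an illustrative per-query comparison. Every figure below is an assumption for the sketch, not a measured or quoted price:

```python
# Illustrative cost sketch: an agentic "explore the wiki" run vs. a
# conventional search-index lookup. All numbers are assumptions.
llm_tokens_per_query = 50_000       # assumed: multi-step agent reading pages
llm_cost_per_mtok = 3.00            # assumed $ per 1M input tokens
llm_query_cost = llm_tokens_per_query / 1_000_000 * llm_cost_per_mtok

search_server_monthly = 50.0        # assumed: one small search node, $/month
queries_per_month = 100_000         # assumed query volume
search_query_cost = search_server_monthly / queries_per_month

print(f"agent run:    ${llm_query_cost:.4f} per query")
print(f"index lookup: ${search_query_cost:.5f} per query")
```

Even if inference costs fall by an order of magnitude, the gap in this sketch stays large, which is the commenter's point: an index amortizes a mostly one-time cost, while every agent run is paid for from scratch.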
If I could pay a world class architect $1.50 to give me tips on how to maximize sunlight in my loft I would.
Would it be nice if confluence just had a robust search that had a one time cost and then benefited everyone thereafter? Sure, but that's not the current reality, and I do not have control over their actions. I can only control mine.
I honestly am finding Codex considerably better, as much as I despise OpenAI.
On the contrary. I update my opinion all the time, but every time I try the latest LLM it still sucks just as much. That is why it sounds like my opinion hasn't changed.
I say all of that to establish that I'm not a reflexive critic when I tell you, hallucinations are absolutely not exceptionally rare now. On multiple occasions this week (and it's only Tuesday!) I've had to disprove a LLM hallucination at work. They're just not as fun to talk about anymore, both because they're no longer new and because straightforward guardrails are effective at blocking the funny ones.
AI is alright. It's moderately useful, in certain contexts it speeds me up a lot, in other contexts not so much.
I also think that the economics of it make no sense and that it is, generally, a destructive technology. But it's not up to me to fix anything, I just try to keep on top of best practices while I need to pay bills.
The economics bit is not my problem though. If all AI companies go bust and AI services disappear I can 100% manage without it.
We're in "too big to fail" territory. If we had handled the recession we were heading towards (or already in) years ago, instead of letting AI hype distract and redirect massive amounts of investment, attention and labor from elsewhere, we might have been in a better position.
Though, my cynical take is that the investor class seemed dead-set on forcing us all to weave LLMs deep into our corporate infrastructures in a way that I'm not too sure it will ever "disappear" now. It'll cost just as much to detangle it as it was to adopt it.
The way we talk about "hallucinations" is extremely unproductive. Everything an LLM outputs is a hallucination. Just like how human perception is hallucination. These days I pretty much only hear this word come up among people that are ignorant of how LLMs work or what they're used for.
I've been asked why LLMs hallucinate. As if omniscient computer programs are some achievable goal and we just need to hammer out a few kinks to make our current crop of english-speaking computers perfect.
If AGI is born from these efforts, it will likely be controlled by people who stand to lose the most from solving those issues. If an OpenAI-built AGI told Sam Altman that reducing wealth inequality requires taxing his own wealth, would he actually accept that? Would systems like that get even close to being in charge?
All but one of them simultaneously, in fact. The one being left out: wanting to keep existing.
If you want to "keep existing" AGI happening is probably your only hope.
If you want to keep existing, slow down, make sure AGI is aligned first, and go into cryo if necessary.
If you don't want to keep existing, that doesn't mean you get to risk the rest of us.
How, exactly, does more and better tech help with the fundamentally sociological issues of power distribution, wealth inequality, surveillance, etc? Are you operating on the assumption that a machine superintelligence will ignore the selfish orders of whoever makes it and immediately work to establish post-scarcity luxury space communism?
So tired of seeing this trope. Data center energy expenditure is like less than 1% of worldwide energy expenditure[1]. Have you heard of mining? Or agriculture? Or cars/airplanes/ships? It's just factually wrong and alarmist to spread the fake news that AI has any measurable effect on climate change.
[1] https://www.iea.org/reports/energy-and-ai/energy-supply-for-...
https://www.selc.org/news/resistance-against-elon-musks-xai-...
> China is the world’s largest source of carbon emissions, and the air quality of many of its major cities fails to meet international health standards.
As for carbon emissions: https://news.ycombinator.com/item?id=45108292
And even though China emits more carbon annually than the US today, the US and Europe are still ahead in cumulative emissions: https://ourworldindata.org/grapher/cumulative-co2-emissions-.... Cumulative emissions are the carbon that's already in our atmosphere and causing heating today. If you want to apportion "blame" for climate change, then the US is 25% responsible, Europe is 30% responsible, and China is 14% responsible as of 2023. And India is only 3.6% responsible.
China's high emissions today power a manufacturing industry that has made cheap decarbonization via solar and batteries a realistic prospect. That's a much better use of their current emissions compared to what the developed countries do with theirs.
China has done more for renewable energy solutions than any other country, and their per capita personal consumption patterns are lower than many G20 countries'.
In a fair representation of the data, the total carbon dioxide output from China should be assigned to its source: the people across the globe with high personal consumption who have offshored their industry to China.
That's not accurate. The estimate is about 2 billion in 25 years.
https://www.who.int/news-room/fact-sheets/detail/ageing-and-...
We also have models for how that works at a country level because we have countries that have far exceeded that.
And the vast majority of 60 year olds are still self sufficient and economically productive.
Average global retirement age is around 65 and in most countries it’s creeping towards 70. And percent of world population over 70 looks much more manageable over the time span we can realistically model.
Machine tools replaced blacksmiths.
CNC machines replaced manual machines.
Robots replaced CNC machine tenders.
CAD replaced draftsmen (and also pushed that job onto engineers (grr)).
P&P robots replaced human production lines.
The steam train replaced the horse and cart.
This is a tale as old as time itself.
Also note that there are inventions that may “replace” some part of a process, but actually induce a greater demand for labor in that process. Take the cotton gin, for example, which exploded the number of slaves required to pick cotton.
Tools are not replacements for people! Tools are enhancements.
AI is an attempt to replace people with something unhuman.
But for now it's strictly hypothetical. Nothing I'm doing with AI matters enough to really make any statements about a broader scale in my field, let alone in entire economies.
The more I think about it though, I'm not sure feudalism is the right analogy. Serfs had a purpose and were depended upon. In a society where AGI is in the hands of a few, it seems reasonable to believe that there wouldn't be a need for serfs at all. Labour would become utterly irrelevant. You'd have no lord to be bound to. You'd be unnecessary.
I imagine the transition there would be some brutal form of capitalism, but the destination would not be feudalism. I don't think we have a historical analog for that hypothetical destination.
However it has been over 500 years since feudalism. People today are still very much living with the consequences of colonialism, some people are in fact still living under colonial rule (notably in Western Sahara and Palestine). The consequences of feudalism have long passed. I think it is fine actually to conflate the horrors of capitalism with the horrors of feudalism. 500 years ought to be long enough.
Through that analysis, one can also explain why the managerial caste is so obsessed with it - it is nothing less than an ideological device. One can also see this in the actual deification happening in some VC cycles and their belief in AGI as some sort of capitalist savior figure.
I see the point and don't disagree with it, but I find that framing is not the most compelling to the audience here...
At the firm level, automating away labor costs is obviously rational. But capital in aggregate can't actually rid itself of labor, since labor is where surplus value comes from. A fully automated economy would be insanely productive and generate basically no profit. So the capitalist class pursuing this logic collectively is, without knowing it, pursuing the dissolution of the system that makes them the capitalist class.
You don't have to buy any of that to notice the more immediate mechanism though: AI doesn't need to actually replace workers to discipline them. The credible threat of replacement is enough to suppress wages, justify restructuring, and extract more from whoever's left. That's already happening and requires no AGI.
Ten years ago, what would it have cost you to build a Jira clone / competitor? Today one person can do it in a week, at least for the core tech.
In a year, only the very largest companies will pay for that kind of infrastructure tooling.
We’ve just started seeing the democratization of software and the capitalists are terrified.
People pay for things in all economic models. It’s bizarre to think that means everything is capitalism.
Some people have been concerned with this kind of politics all along. Some people are realizing they should be now, because of AI. And that's okay; both groups can still work together.
I'm also learning art and I'll never use AI there, so I thought, since I have less time for hobby programming, I could just use AI for that, but then I come back to the concern I mentioned above. Plus I can't proudly share anything I made with it either, because I wouldn't have done much of the work at all.
I'm also feeling burnt out about web dev in general and doing the same thing during my free time just feels like more work to my brain. I wish I could find something interesting to do, and if I don't I'll quit programming in free time for good.
As shown in "Normal Accidents", the strengths are as high as the weaknesses, and in any complex system this is even more of a problem. A catastrophic event is still to happen with AI, as it happened in basically every complex system. Those occurred with trained people who weren't believing in magic or laziness... so the scenario is even worse for AI.
Yes, I'm bored of people who believe in magic and the ghosts that are emerging and are yet to be seen.
I also have to say that I don't use AI in my personal or professional life. And that is simply because I haven't felt any need to use it.
I'm currently reading Non-things by Byung-Chul Han, which is an interesting exploration of the internet's impact on humanity/humans. Haven't finished it yet, but enjoying it so far.
My technical interests are varied, and it's so boring to come to HN and see that a third (or more) of the front page is about AI.
Enough already. Let's talk about other things! And yes, I know, I should be a part of the solution and submit more articles.
I think the "can do" part gets boring, but now I'm paralleling this to trust relationships and fiduciary responsibilities. What I mean is that we can not only instruct an agent but then put a framework around it, much like we do a trustee, where they are compelled to act in the best interests of the beneficiaries (the human that created them, in this case).
Anyway it's got me thinking in a different way.
Currently it feels a bit like everyone is talking about what new editor they're using. I don't care about that type of developer tooling very much. AI isn't coming up with some exciting new database, type system, etc etc.
"Look at how I'm able to web dev x% faster" because of LLMs is boring.
> seems to have devolved into three different people’s (almost identical) Claude code workflow
I do feel like I've seen a number of those articles.
Everything devolves from “cool that was a nice single video” to “here’s my schtick…. AGAIN”.
At least I'm not tired of talking about how it's killing websites and filling everything with spam. I have spent most of a decade building a useful resource, and Google AI overviews has killed my traffic. It killed everyone's traffic. This thing gave me purpose, and I'm watching AI slowly strangle it.
I mourn the death of the independent web, and it frightens me that this is still the happy stage. We haven't yet felt the effect of stiffing content creators, and the LLM tools haven't yet begun to enshittify.
I am tired of discussions about agentic coding, but I would feel a lot better if we acknowledged all the harm being caused. Big tech went all in on this, stealing everything, putting everyone out of work, using up all resources with no regards for consequences, and they threaten to kill the economy if we don't let them have their way.
I feel like we are heading for a much worse place as a society, and all we can talk about is how to 10x our bullshit jobs, because we're afraid of falling behind.
One way or the other, tech companies have created a weapon and they are using it against us. Instead of stopping them, we're all trying to point it at someone else.
I’m just kidding. LinkedIn feed became so unbearable, that I had to install an extension to turn it off.
Large organizations are making major decisions on the basis of it. Startups new and old will live and die by the shift that it's creating (is SaaS dead? Well investors will make it so). Mass engineering layoffs could be inevitable.
Sure. I vibe coded a thing is getting pretty tired. The rest? If anything we're not talking about it enough.
The most frustrating thing about AI, I find, is that it is (trying to) replace the things that I like — writing, art, programming — while leaving me with the things I absolutely hate, like testing, chores etc. I do like reading code, so that's not a big issue; I spent most of the day doing that before AI. However, being a fulltime QA was always my nightmare, but here we are: AI sucks at it for a more than trivial frontend, and for backend it's not that good either (I should write the tests, or rather tell the AI in detail what to test for, otherwise it will just positively test what it indeed wrote, not what the spec says it had to write, in many cases).
But no, I like talking about AI, just not so much about slop, trivial usage (Show HN: here is a SaaS to turn you into a rabbit!) or hyperbole (our jubs!). (Although I do believe it is the end of code; like I said, I read and review code all day but have not written much for the past 6 months while working on complex, non-trivial SaaS projects; I am with antirez: it is automatic coding, not vibecoding, for me.)
Sounds very much like this blog I read too… he laughs at AI in his workplace a lot: www.sometimesworking.com
Asking it to draft was weakening my own skills.
I've been spamming some auto research loops, and it is so addictive. Think about how many of humanity's problems will be solved because of this. Of course, it will also disappoint, like, we are still waiting for flying cars, but man, this is a unique moment in history.
You could use AI to do it! Fight fire with fire.
I'm neutral on AI - so far it seems useful but flawed. But I don't want to hear about it constantly.
But the sooner we get to the part of history with the chromy-killer-robots and people-sabotaging-datacenters-and-foundries, the sooner we will get some meaningful excitement.
I'm somewhat tired of seeing the same rehashed claims of future ability, non-ability, profit, loss.
I actually like talking about the implications, future risks and challenges of AI. I have made submissions on ways AI should be regulated to benefit society. The problem is the assumption of what is happening and what will happen.
Too many people seem to enter the conversation feeling that the absence of doubt is the same thing as being informed.
And especially people making claims based on premises that they seem to believe will become true if they build big enough towers on them.
The number one thing that bothers me in all this, is people assuming the contents of the minds of others.
I find the pathologising of Sam Altman to be the most egregious form of this. It is one thing to disagree with someone's decisions, another thing to disagree with their stated opinions, but to decide upon a person's character based upon what you believe they are thinking in their private thoughts is simply projection.
I know this is an opinion of little worth to many, but my impression of Sam Altman is just of a person who has different perspectives from mine. The capitalist tech world he lives in would inevitably shape values different from mine. What I have seen of him is consistent with a sincere expression of values. I can accept that a person might do something different from what I would, even the opposite of what I want, while believing that they are doing so for reasons that seem to them morally right.
This also happened with cryptocurrency. Crypto advocates believe that it is a good thing for the world. Too many consider those who believe that crypto could benefit society to be evil. There is a difference between being wrong and being evil. No matter how certain you are, you can still be wrong; in fact, beyond a point I would say increased certainty indicates a higher likelihood of being wrong.
So I'm happy to talk about AI. I have plenty to learn. I wonder if others went in with the goal to learn whether they would find it less tiring.
It's worse when there's a colleague of yours encouraging that by using AI blindly, piling up technical debt just to move at the pace that Management expects after signing you all up on some AI tool.
At the end of the day, everyone is talking about AI. For AI or against AI, it doesn't really matter.
The analogy is someone from the 19th century talking about their slaves all day, which is of course nonsense, because they had other things to talk about.
Bored of hearing about it, bored of reading about it.
I love using these LLM tools, but honestly, it feels like every man and his dog has something to say about it, and is angling to make a quick buck or two from it.
And the slop, oh my goodness, it's never-ending on every site and service.
Tack onto that the increasing amount of political stuff on here as well, and it just makes it a less and less interesting place to visit.
Don't agree with the angry mob on the political stuff especially and you get downvoted/flagged into oblivion.
Just another echo chamber where people look to have their viewpoints confirmed, in yet another of the disappearing places online that foster any level of intellectual curiosity.
Never thought I'd feel nostalgic about that era...
Or another thought: why is it that a stochastic parrot can solve logic puzzles consistently and accurately? It might not be 100%, but it's still much better than what you might expect from a Markov model over n-grams.
Openclaw is only sort of interesting. How to vibe code your first product is uninteresting. Claims about productivity increase from model usage are speculative and uninteresting. Endless think pieces on the effects of AI slop are uninteresting. There’s a lot of hype and grift and bullshit that is downstream of this very interesting technology, and basically none of that is interesting. The cool parts are when you actually open the models up and try to figure out what’s going on.
So no, I’m not bored of talking about AI. I’m not sure I ever will be. My suspicion is that those who are bored of it aren’t digging deep enough. With that said, that will likely only be interesting to people who think math is fun and cool. On the whole, AI is unlikely to affect our lives in proportion to the ink spilled by influencers.
The greats on youtube are also worth watching: 3B1B, numberphile, etc.
[0] https://youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9Gv... [1] https://youtu.be/qx7hirqgfuU?si=8zmrbazuvnz379gk
The short answer, as far as I’m aware, is that no one really knows. The longer answer is that we have a lot of partial answers that, in my mind, basically boil down to: model architectures draw a walk through the high dimensional vector space of concepts, and we’ve tuned them to land on the right answer. The fact that they do so consistently says something about how we encode logic in language and the effectiveness of these embedding/latent spaces.
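A toy way to picture that "walk through a high dimensional vector space of concepts" is cosine similarity between embedding vectors. The vectors below are made up purely for illustration, they are not taken from any real model, but they show the basic mechanism by which "nearby" concepts land close together:

```python
import math

# Hand-made 3-d "concept vectors" -- hypothetical values for illustration,
# not real embeddings from any trained model.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["king"], vectors["queen"]))  # high: nearby concepts
print(cosine(vectors["king"], vectors["apple"]))  # low: distant concepts
```

Real models do this with thousands of dimensions learned from data, and the "walk" is many transformations through such spaces rather than a single similarity lookup, but the geometric intuition is the same.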
It can't. As you say in the very next sentence. If it isn't solving any given puzzle with a 100% success rate, but randomly failing, then it isn't consistent.
I'm so exhausted by this and ready for the economic crash.
AI is especially sensitive to this. Unlike coding, where giving away the secret sauce also makes you look smart, divulging AI secrets only demystifies you -- revealing the shriveled man behind the Wizard's curtain.
So anyone boasting about AI is likely not doing anything useful with it.
Similar to finance tips, btw.
As bad as the AI hype wave is now, I can't help but wonder if it could have been even worse.
There are other interesting things in the world today, and HN is overwhelmed with pretend intelligence.
Hype, detractors, ALL OF IT!
Maybe a separate web page or RSS feed could be created that is dedicated to the subject...
Then we can get back to the unglamorous, boring, thankless task of delivering business value to paying clients, and the public discourse will no longer be polluted by our inane witterings.
If you're reading this and your life hasn't been thrown into disarray, you're likely just behind the times. There are a lot of people who are deep in tech who still don't understand what agents and LLMs can do.
I'd love for discussions of the tech to stop with the genAI version of the cryptobro cry "have fun being poor". It's mildly insulting and adds literally nothing to the conversations.
(Not meaning to single you out, just using it as an example. This is a very common rhetorical problem with most of the evangelism.)
The detractors are so bizarre to me. I think it's because I work at a big tech that has so thoroughly wired AI into everything we do, and the benefits are so undeniable and totally perspective changing, that it's like arguing with someone that thinks the sun revolves around the earth.
So if you aren't doing something cool with AI, it's probably because you aren't empowered to at your company, or because you simply aren't taking the initiative. Seems like a pretty even split on HN.
And why do so many articles or comments take the general approach of "It's great, and if you don't think it is, it's because you don't understand it"?
Humans simply cannot code as well as an LLM/Agent in most cases. It's like fighting a bear, and if you think you can beat an adult brown bear you're probably wrong.
I used to have this idea that if I built something cool it would be valuable to donate it to the world for free. But now increasingly I'd be just making a donation to the training data, and on top of this I'm in competition with AI slop. Most people won't tell the difference and won't care. The noise floor for doing absolutely anything collaboratively on the computer is now 10x higher than it was before, and I'm basically checked out at this point. Even HN is becoming tiring to read since I think around 10-15% of comments that I read are AI generated. When that number reaches 30% I'm done forever, gone. My life is too short to waste time on this shit.
"Yes" Proceeds to talk about AI.
So at this point I have to just assume this shit doesn't work very well for some reason, because no one is outputting anything with it that resembles good, useful software.
> At serious risk of sounding like a heretic here, but I’m kinda bored of talking about AI.
Umm.
> I get it, AI is incredible. I use it every day, it’s completely changed my workflow. I recently started a new role in a tricky domain working at web scale (hey, remember web scale?) and it’s allowed me to go from 0-1 in terms of productivity in a matter of weeks.
It’s all positives. So what’s the problem?
There isn’t a problem with AI. Of course. It’s just the discourse around it is “boring”. And the managers are lame about it.
And what has the AI discourse been for the last few years? The same formula.
- AI is either good
- ... or it is the best thing to have happened to Me
- But I have feelings[1] or concerns about everything around AI, like the discourse, or people having two-hundred concurrent AI agents mania
It’s all just grease for the AI Inevitabilism bytemill.
[1] https://news.ycombinator.com/item?id=47487774
> … And yes, I’m painfully aware of the irony of a post about moaning about posts about AI. Sorry.
OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe”
Very self-aware.
And now 117 points and 53 comments in 23 minutes.
> And this one will be different?

I think you're talking about my blog post here, in which case no, I'm afraid not. Hence the admission at the bottom.
> Umm.

??
> It’s all positives. So what’s the problem?

The article is trying to say that these things are great, but the level of conversation leads to a lack of novelty.
> It’s just the discourse around it is “boring”. And the managers are lame about it.

Exactly.
> OP can’t even resign himself to being a Type. Sigh. “I know what I just did hehe” Very self-aware.
Is this sarcasm?
Is anybody else bored of talking about AI? I’m beyond bored.
Why on earth is the parent comment downvoted? The title of TFA asks a question. This statement directly answers that question. Seems very on-topic.