The AI thread

I haven’t made up my mind whether all the A.I. hype is just a giant Ponzi scheme or dotcom bubble V2.0.

By 2030, AI companies will need $2 trillion in combined annual revenue to fund the computing power needed to meet projected demand

And where is that money going to come from… Oh that’s right, economics 101 is based on the theory of an ever expanding pie in a closed system :shushing_face:

https://www.bloomberg.com/news/articles/2025-09-23/an-800-billion-revenue-shortfall-threatens-ai-future-bain-says?

1 Like

Many programmers say they “experiment” with AI assistance, but most just trust their intuition and few actually gather data in an honest comparison to see what effect AI has.

There’s a by now famous study that did a rigorous comparison, and Mikelovesrobots was inspired by that study to do a similar experiment himself:

Not sure if this has been shared here before, but @Poneybirds’ remark about AI assistance and developer productivity reminded me of it.

3 Likes

AI seems to be useful for small tasks, but if you let it control the narrative and lead the development, you will accumulate bloat and unnecessary features very fast. And that is very, very dangerous for the project.

In my book, concise code is one of the most important qualities for long-term success. So, with the current state of LLMs and how they are applied, I see problems ahead.

3 Likes

Reading this thread while listening to this:

We’re doomed. But before we die at the hands of our creation, I look forward to an AI economic crash on the order of the credit crisis of ’08.

4 Likes

Very interesting, thanks for that. I’ll pass this on and discuss it over lunch. It’s definitely a polarizing topic.

The progress of documentation and testing is something worth observing, too. Software is not only about code…

2 Likes

I am sure that there will be pockets of human survivors in the wasteland. Shouldn’t be too bad. Just as long as you are not the ‘guy’ on the back of the bike :crazy_face:

5 Likes

Choo lookin at :cowboy_hat_face:

3 Likes

More and more I’m starting to feel vindicated, tempted almost to say “I told you so” — when in fact the reason I never boarded this hype train isn’t that I had hard data; it’s just that I don’t like the experience.

I like coding by hand and understanding how stuff works more than I like getting stuff done. At this point in time I’m merely lucky that the evidence supports my way of doing things.

2 Likes

I liken AI code development to “offshore” development. The promise of cheap, efficient code dev is dangled in front of management. But in reality it takes a team of devs to untangle and debug the mess that was delivered.

6 Likes

Short term gains for long term loss. The human condition in a nutshell.

7 Likes

I’m currently using ChatGPT to modify the flight model of the upcoming DCS CH-46D mod. It’s definitely a different experience from using it as a coding assistant. With coding I feel I have much more leverage over what to accept and what to refuse from the AI. With flight model tuning, those fields often tell me very little.

However, I found some of its advice to be quite solid and some to be blatantly wrong. For example, I stated that I was tuning the FM of an AI helicopter, and the bot insisted that the ‘centering’ field was for joystick centering rather than for setting the helo’s center of gravity. Then today it surprised me with the info that ‘centering’ is for the center of gravity after all …
I feel it’s helpful in a rather broad sense. Some fields I would probably never have looked at if it weren’t for the AI’s recommendation. However, following its recommendations to the letter is, first, not very fun and, second, doesn’t yield the results.

3 Likes

That was a great episode. This is the third Yudkowsky interview that I’ve listened to. His presentation makes a great Dr. Doom. Not once did he say, “In the book, I describe…”. Ezra Klein gets props for having the chops to keep the questions compelling, yet comprehensible.

2 Likes

Yep. Ezra’s interview skills, backed by many hours of preparation, are off the charts. He’s become essential listening for me.

Here’s a funny fake of the book on Amazon. (Not a link)

I didn’t know until yesterday that this is actually a thing: take an existing title that is well known enough to be mimicked, publish an ebook with a confusingly similar title and similar cover art, and hope that enough people make the mistake to turn a profit. Amazon marketed this fake to me right next to the original. That’s because they don’t actually care. Money comes in with either purchase.

5 Likes

Jeez. The world needs proof of authenticity more than ever. The digital world is becoming a backyard pond. No maintenance, no nice things.

3 Likes

Same. I thought that it was a satirical meme with the grammatical error in the title, as if the author used AI to proofread his work.

1 Like

I was afraid to click to investigate further for fear that I would buy by accident.

1 Like

Motherlovin’ cesspool more like it, and always has been.

1 Like

A good friend, who has had a hugely successful career in M&A, has been trying to get me to listen to the Acquired podcast. I’ve had it in my feed but resisted listening, mostly because it didn’t have military, aviation, or music content, the shallow-minded bloke that I am. That and their podcasts are often 4 hours long.

But when the title of this one caught my eye, I decided to listen to the first hour. I’m so glad that I did. Did you know that pretty much all of the prominent AI scientists that work for the largest AI providers got their start at Google? All of them. And that LLMs have their origin in Google Translate? Here is that story.

2 Likes

They work for AI providers now, perhaps, but save for Geoffrey Hinton, I can’t think of a single person from the first wave of deep learning researchers who ever worked at Google.

Sepp Hochreiter, who invented long short-term memory (published in 1997, building on his thesis work), never worked at Google; he has worked exclusively in academia his whole life. Without that contribution, we would still be using hidden Markov models for speech recognition.

Yann LeCun, who made enormous contributions to deep learning, and to image recognition in particular, to my knowledge also never worked for Google. He was an academic scholar who later joined Bell Labs; most recently he works for Meta.

Yoshua Bengio, who at the end of the 2010s was the scientist with the highest h-index in all of computer science, never worked for Google. MIT, Bell Labs, and now a professor in Montreal.

After deep learning really took off, the big IT companies started carrying the torch, because the computational resources required to stay competitive in the field quickly outgrew what universities were able to provide to their researchers. But the foundational work came from universities (and Bell Labs).

5 Likes

Exactly, it was mainly the Canadian universities that started modern AI progress (as I understand it).

But it is also true that Hinton and probably many others joined Google at some stage.

1 Like