Many programmers say they “experiment” with AI assistance, but most just trust their intuition; few actually gather data in an honest comparison to see what effect AI has.
There’s a by-now-famous study that did a rigorous comparison, and Mikelovesrobots was inspired by that study to run a similar experiment himself:
Not sure if this has been shared here before but @Poneybirds ’ remark about AI assistance and developer productivity reminded me of it.
AI seems to be useful for small tasks, but if you let it control the narrative and lead the development, you will accumulate bloat and unnecessary features very fast. And that is very, very dangerous for the project.
In my book, concise code is one of the most important qualities for long-term success. So, given the current state of LLMs and how they are applied, I see problems ahead.
I am sure that there will be pockets of human survivors in the wasteland. Shouldn’t be too bad. Just as long as you are not the ‘guy’ on the back of the bike
More and more I’m starting to feel vindicated, tempted to say “I told you so” almost, when in fact the reason I was not on board with this hype train isn’t that I have hard data; it’s simply that I don’t like the experience.
I like coding by hand and understanding how stuff works more than I like getting stuff done. At this point in time I’m merely lucky to have evidence supporting my way of doing things.
I liken AI code development to “offshore” development. The promise of cheap, efficient code dev is dangled in front of management. But in reality it takes a team of devs to untangle and debug the mess that was delivered.
I’m currently using ChatGPT to modify the flight model of the upcoming DCS CH-46D mod. It’s definitely a different experience from using it as a coding assistant. With coding I feel I have much more leverage over what to accept and what to refuse from the AI. With flight model tuning, those fields often tell me very little.
However, I found some of its advice to be quite solid, and other advice to be blatantly wrong. For example, I told it I was tuning the FM of an AI helicopter, and the bot insisted that the ‘centering’ field is for joystick centering rather than for setting the helicopter’s center of gravity. Today it surprised me with the info that centering is for the Center of Gravity …
I feel it’s helpful in a rather broad sense. Some fields I would probably never have looked at if it were not for the AI’s recommendation. However, following its recommendations to the letter is, first, not very fun and, second, doesn’t yield the results.
That was a great episode. This is the third Yudkowsky interview that I’ve listened to. His presentation makes a great Dr. Doom. Not once did he say, “In the book, I describe…”. Ezra Klein gets props for having the chops to keep the questions compelling, yet comprehensible.
I didn’t know until yesterday that this is actually a thing: take an existing, well-known title and confuse people with a similar one. Make the cover art similar as well. Publish the ebook and hope that enough people make the mistake to turn a profit. Amazon marketed this fake to me right next to the original. That’s because they don’t actually care; money comes in with either purchase.
A good friend, who has had a hugely successful career in M&A, has been trying to get me to listen to the Acquired podcast. I’ve had it in my feed but resisted listening, mostly because it didn’t have military, aviation, or music content, shallow-minded bloke that I am. That, and their episodes are often four hours long.
But when the title of this one caught my eye, I decided to listen to the first hour. I’m so glad that I did. Did you know that pretty much all of the prominent AI scientists that work for the largest AI providers got their start at Google? All of them. And that LLMs have their origin in Google Translate? Here is that story.
They may work for AI providers now, but save for Geoffrey Hinton, I can’t think of a single person from the first wave of deep-learning researchers who ever worked at Google.
Sepp Hochreiter, who invented long short-term memory (published in 1997), never worked at Google; he has worked exclusively in academia his whole career. Without that contribution, we would still be using hidden Markov models for speech recognition.
Yann LeCun, who made enormous contributions to deep learning, and to image recognition in particular, to my knowledge also never worked for Google. He did research at Bell Labs, later became a professor, and most recently works for Meta.
Yoshua Bengio, by the end of the 2010s the scientist with the highest h-index in all of computer science, never worked for Google: MIT, Bell Labs, and now a professorship in Montreal.
After deep learning really took off, the big IT companies started carrying the torch, because the computational resources required to stay competitive in the field quickly outgrew what universities could provide to their researchers. But the foundational work came from universities (and Bell Labs).