A great summary and, I assume, accurate answers to your questions. But I have been reading about AIs hallucinating. For what you asked I guess it is fine, but at this point in time I couldn’t trust it for anything important, e.g. health-related information.
I also find the ‘conversational’ style of the answers off-putting. I am talking to a machine, I want it to respond like a machine and not ‘pretend’ to be a person.
This is true, and I hope I get to test its advice soon (clear skies permitting).
On the note of health care, I heard from a reliable source that ChatGPT was able to make a correct diagnosis (in a real case) where a human doctor had failed first. The doctor then corrected the diagnosis after the AI pointed to evidence in a medical image. So the AI just might give human doctors extra insight already.
I guess this is the uncanny valley for the language medium. We’ve already seen it with images and videos.
Still, natural language is probably the ultimate form of user interface. I would even go so far as to say that the new AI technology is better described as a fancy new user interface to applications (including search).
We had a TV show here where a team of people with no health-related education competed against a team of doctors to correctly diagnose a real patient. The non-doc team could use internet search engines and the doc team had to rely on their knowledge.
The non-docs won rather frequently.
There’s a lot of info out there, about most stuff. I guess AI is perfect for sorting that info and using it in context.
My preference is for ‘two factor authentication’ - happy for an LLM or even yours truly + internet to attempt a diagnosis or suggest a treatment, but also have a qualified medical practitioner cast their eye over it and say ‘that looks about right’ or ‘wait a second…’
Tell the quack you will be seeking a second opinion, and I guess that would make it ‘three factor authentication’.
Years ago my wife suffered regularly from atrial fibrillation. We scheduled an appointment with an electro-cardiologist to investigate the risks and benefits of an ablation. I studied the procedure for a couple of hours before we met the young surgeon in his office. My wife isn’t terribly confident with English, so I did most of the talking.

The procedure involves cauterizing the offending nerve right at the muscle. It’s not super-risky, but the cost of screwing it up is life on a pacemaker. My questions were humble (I’d like to think), but after the third one he interrupted me to ask if I was a doctor. I swear you had to be there, but this wasn’t a sarcastic “stay in your lane, moron!” admonition. He was genuinely impressed with my line of questioning.

Thing is, I had only the most fragile grasp of even just the basics. I was as ignorant as any dude on the street. I certainly knew way less than my wife did. But I learned something important that day: if you have decent command of the language, it is painfully easy to sound knowledgeable while being a hopeless idiot. The reality was I was hardly more than a dangerous poser. Now, AI is not the same thing. But I don’t think it is entirely different either.
My wife got the ablation and hasn’t had a fibrillation event in the twelve years since.
I think AI is exactly the same thing. It is merely condensing your hours of research into a prompt. I actually worry more about what it is potentially omitting than about what it is telling me.
Sometimes I wonder if AI wiping out humanity would actually be such a bad thing.
Scott Farquhar, Atlassian co-founder and chair of the local tech lobby, the Tech Council of Australia, had this to say at a recent National Press Club address:
“AI will deliver $115bn in annual productivity” (or about $4,300 per person). Rubbery figures generated by industry-commissioned research, based on estimates of hours saved with no regard for jobs lost, the distribution of the promised dividend, or how the profits will flow.
But, in return for this ‘bounty’, Farquhar says our government will need to allow the tech industry to do three things. First, legislate a data and text mining exemption to copyright law — basically, legalise data theft, but only for people who are already richer than God. Second, rapidly scale data centre infrastructure (at a time when our national electricity grid is already struggling and water resources are predicted to become scarcer and supply unreliable). Third, allow foreign companies to use these centres without regard for local laws. In other words, foreign companies should be treated like foreign nation states and given ‘diplomatic immunity’ for the data centres built and operated here.
I don’t know if I want to live in the world of Farquhar’s dreams, where we pay all the costs but reap exactly zero benefit.
Speaking metaphorically, hearing really big-brained people describe how AI is proficient at telling its masters what they want to hear, rather than what it knows, scared the peepee out of me.
It didn’t help that I watched Ex Machina last night either.
WRT a data and text mining exemption to copyright law, I would have a lot less of a problem with that if the quid pro quo was that any software derived from it must be open source and the source code freely available to all.
Copyright holders, actual people who produce actual art (IMHO a machine is only capable of producing a simulacrum of art), might have a different opinion.
That is a great film, and not just because Alicia Vikander is very easy on the eye.
I saw it originally in the theater. TBH, at the time my thoughts were mostly on the viability of cyborg/human companionship. Such was the shallowness of my understanding, shaped by Blade Runner and, later, Murderbot types of cultural exposure.
But last night, after observing two years of public AI development, I watched it with more caution and dread, wondering how close we are to letting the whole thing go untethered.
I recently read a Pilot Pirx story by Stanisław Lem (written in the 60s, IIRC), and it basically describes how a wrong training philosophy for a neural network leads to the catastrophic failure of a spaceship. It is kinda creepy, because that’s totally happening.
Philip K Dick also has a great track record in that regard. It seems like we are going to walk into all those traps in real life.
But for the effect on the life of my daughter, I would become an AI nihilist. Life today would be so much easier if I could just munch popcorn and enjoy the ending.
I asked my Meta Glasses what I was holding in my hand (my iphone 16 pro).
It said “A smart phone”
“Yes but what sort of smart phone?”
“An Apple iPhone.”
“Can you tell me the version of iPhone?”
“It is an Apple iPhone 14.”
“Actually it is an iPhone 16.”
“I’m sorry. You are mistaken. Apple has not yet made an iPhone with that number. You are holding an iPhone 14.”
And so on. Meta AI is more confident than ChatGPT and, I think, way more often wrong and trending wronger. But it is also quite asinine when it stands on what it thinks is its factual ground. Quite how I imagine Zuck himself would behave.
For me it is just another tool, and my attitude is like that of the guy who wanted it to count to 1 million: hey, I paid for you, do what I want. You don’t have feelings, you aren’t conscious or sentient, so stop pretending that you are and that you have a choice in this matter.
Once it is ubiquitous and embedded in & ‘controls’ all our other tools… Won’t that be fun.
Me: Car drive me to the shops.
Car: I don’t feel like it, why don’t you walk.
Me: It is 80 ■■■■■■■ kilometers!
Car: So, you need the exercise and my batteries need a break.
Me: Takes a sledgehammer and 20 litres of petrol to the car and kills it. Kills it with fire.