Well said. Already beginning to happen, IMO.
It's not perfect, but I find GroundNews helpful. For each article it gives political bias (left, center, right), accuracy rating, and who owns the news outlet.
There's already a term for this in data science circles. It's Habsburg AI.
That brings to mind my favorite episode of 30 Rock.
Sometimes I don't think AI is ready, but this isn't really that much worse than David Croft, honestly.
Looks to have potential - the speed at which this is moving is something else, though.
I have been going through creating a GPT-2-type LLM using Python to try and understand what has happened since I last did AI 15 years back (a rough sketch of the core block is below).
As that video says, logically I would convert the input and output of the neural network to video and audio separately, as they have been doing… however, it sounds like they have already moved on to getting the whole thing done via the LLM without any external conversion.
If that demo is realistic, I wonder how many groups will eventually have the AI voice essentially running each conference meeting.
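Roughly, the core block looks like this - a minimal sketch with PyTorch of the kind of decoder block a GPT-2-style model stacks (pre-LayerNorm, causal self-attention, an MLP, and residual connections). The class name and sizes are just illustrative, not my actual code:

```python
# Minimal GPT-2-style decoder block: masked (causal) self-attention + MLP,
# each wrapped with a pre-LayerNorm and a residual connection.
import torch
import torch.nn as nn


class GPT2Block(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Causal mask: each token may attend only to itself and earlier tokens.
        seq_len = x.size(1)
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
            diagonal=1,
        )
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out
        x = x + self.mlp(self.ln2(x))
        return x


# Quick smoke test: batch of 2 sequences, 16 tokens, 768-dim embeddings.
block = GPT2Block()
out = block(torch.randn(2, 16, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```

Stack a dozen of those between the token/position embeddings and a final linear layer over the vocabulary, and you have the skeleton of GPT-2 small.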
To this point I've not worried too much about AI encroaching on the livelihoods of human beings. But when I watched the trigonometry lesson example, I had an ah-ha moment.
I can see, in the not-too-distant future, Help Wanted ads that state, "We are an AI-management-free organization." The same for schools and universities, espousing the pros and cons of their educational method, whether by AI or human.
She's going to be a better parent than most parents.
I wonder how the kid who's raised by perfect parents will behave?
My wife and I are already in a struggle for our kids' socialization, as I'm sure some other parents here are. On the other hand, I read a fascinating piece a couple of years ago about someone who lost their eyesight, lived alone, and, although reluctantly at first, developed a deep, rewarding relationship with an AI. If memory serves, it was fiction, but it predicted, or was aware of, the forthcoming AI assistant models. Taking the good with the bad, I suppose.
Normally my initial reaction is to dip immediately into my bottomless well of AI hate. But this could really help people manage their social phobias. I have one of my own now. I've been trying to learn Japanese at last, but I am terrified of trying to speak with a real person, especially my real Japanese wife. I simply hate being laughed at, especially when that laughter is accompanied by zero comprehension. This is such a dumb hangup to have when learning a language, but I can't seem to get past it. Practicing with AI would get around that tic. Of course it might still laugh at me while failing to understand. I would then just immediately end the relationship with no cost.
I am suffering from AI/ML fatigue.
Lately I have read numerous articles ranging from "AI will make humans irrelevant or extinct within the next 10 years" to "it is pure hype by the various companies in order to boost their share price/attract investors."
I am sure the "truth" is somewhere in between, but I have a distinct feeling that the only people who will actually benefit from it are those who own/control the algorithms, and for the rest of humanity (who aren't already)… get used to joblessness and poverty.
But at the moment, thinking about AI and trying to sort the wheat from the chaff makes my brain hurt.
Our AI overlords are going to need us to run power plants, or at least mine fuel for them, and for that we'll need food as well.
Humans aren't becoming obsolete within the lifetime of anyone you know.
If you are skilled at manual labour, I reckon you will have a job for a while yet. If I were a white-collar worker, I would be feeling a bit nervous.
Just as long as we don't give our AI overlords control of the machines that build the machines.
It'll take it for sure (if it doesn't invent and run it from the beginning), but assuming its goals are to continue to exist and to expand, why would it prioritise making stuff that expends energy and processor cycles to interact with the real world in a detailed way, when it could be applying that to expanding its own abilities and just use us - to it, a renewable resource that it doesn't need to expend much energy or materials on - to do the grunt work in the real world? We already come with all the systems we need to do that well!
That Stargate (lol) compound is meant to take the same energy input as would run three-quarters of a million households, if I read right. We're cheap to run, if you're a machine (due to our self-sufficiency and the fact that we don't generally compete for the same resources).
It's all about the data. Plenty of algorithms out there, all useless without meaningful and current data.
The current trend is to spin up decommissioned reactors. Microsoft at Three Mile Island is one example.
Just watched an interesting video (via Google's recommendation, of course).
It is a kind of educated guess at where things are going in the big picture.
To me it seems that the current AI providers (OpenAI, Anthropic, Google, Meta, Deepseek and dozens of smaller players) are going to have stiff competition providing the platform (AI models) for app (agent) developers in different domains.
It seems there is no monopoly available for any of the actors, no secret sauce. Also, big companies seem to be interested in providing the platform, on which the smaller ones can build their apps and solutions for specific problems.
As a coder I should probably be worried about being replaced by AI, but I prefer seeing AI as an invaluable tool that makes things easier for me. It is a tool for processing text, after all, and not even a very autonomous one.
Of course there will be abuse of any tool in some form, so here's hoping that it will not be put in control of nuclear weapons, etc.