Well boo hoo, cry me a river. If these heads get their way, then guess what?
I will treat the IP of Anthropic, Google, Meta, OpenAI, etc exactly as they have treated everyone else. Yarr Matey
Oh you wouldn't BELIEVE how many young'uns I absolutely ENRAGED when I pointed out that our AI "industry" has all the markings of a massive bubble that will have catastrophic consequences when it bursts.
And don't even get me started on the young military folks who are all thinking they'll be raking in the huge bucks working in datacenters when they get out in 3-4 years.
Khan Academy is an amazing resource for teaching math, with tips for being faster and more accurate as well.
Oh sweet summer children…
Wait. People work in them? I thought they were big, noisy, power-hungry, water-thirsty but unoccupied behemoths. Oh, and I learned today that NY governor Hochul has offered tax incentives to tech companies that will give them access to power at 1/3 the cost per unit of other users. How grateful the citizen living next to one of those damn things must be when he looks at his power bill, knowing he's helping Zuck get richer.
It seems the law of diminishing returns is biting back.
Here's a sobering assessment of current AI expectations. They have been greatly lowered by many knowledgeable people. It seems that this technology (transformer LLMs) is plateauing, and we cannot really brute-force our way out of it.
One big reason for the sentiment change is that all the new frontier models (the latest being ChatGPT 5) have been a bit of a letdown; LLMs are not progressing like they used to.
After a while, investors should pick up on the message too. Maybe we can still avoid the over-investment.
Very few people work in them, and it's mostly a traveling technician job. Which is something I'd started tracking probably five years ago now, and is what started me trying to sound the alarm on these at DOE back at the time.
Anyway, the jobs piece was something I was able to help allies in Tucson address in this case, because I know the folks who have already begun aggressively recruiting for the technicians who would potentially travel to this site once it was up and running:
Hopefully this is the first of many wins.
For y'all in North Carolina, I don't know if you've seen this one. Teresa Earnhardt is trying to once again prove how much of a villain she chooses to be.
https://www.charlotteobserver.com/sports/nascar-auto-racing/thatsracin/article311518497.html
Oh wow, Teresa Earnhardt is an absolute witch once again. I hate that woman. I was a huge Dale fan back in the 90s, god that woman makes me angry.
Anyways, I don't think the future of AI will be in massive data centers, with the paltry few techs that work on them. The simple fact of the matter is that it requires such an immense investment for such meager returns, if you get ANY returns at all. When I look at AI investments and at the ROIs, or rather the lack thereof, I'm left wondering: where's the incentive? They're asking for oceans of seed money on the vague promise of major returns, but I've yet to hear of or see proof of any outside of a few select investors.
Mostly the dudes and dudettes who got into nVidia years ago and then got out before they had that absolute galaxy of value wiped out.
I promise you that whatever tech giant or startup manages to create a smaller-scale AI, a focused AI that functions locally as opposed to on a server, will become the iconic firm. In the same way Google is synonymous with search engines and Xerox with photocopies, whatever company creates that will absolutely become the name. By eliminating the need for gigantic datacenters to process the desires of desperate nerds who make hentai their total identity, you'd be eliminating so much overhead. Generative AI is a joke, but having further assistance could be a real boon to a lot of users.
Instead of wanting to create the disgusting anime-girl abominations that Hayao Miyazaki so aptly described, or using it as a shortcut for programming that absolutely won't work, since programming is very much garbage in, garbage out, we should be looking at AI for more practical applications, and in that regard, I do not believe you need armies of servers for it.
Yep, the folks not sure they want them… at least in North Carolina.
Is that possible? Isn't so-called AI just a very sophisticated algorithm for parsing and assembling an "answer" from a huge repository of data?
Isn't that the crux of the copyright infringement lawsuits? That "training" an AI is actually the amassing of a huge trove of (somebody else's) data?
Genuine questions. The more I know about AI, the less inclined I might be to see it and its developers as the evil I currently consider them to be.
Yes, of course it is possible. You can download LLM models and run them on your home computer. They may not be as big or fast or capable as the big "frontier" models, but nevertheless it is impressive what you can achieve with a gaming computer's resources.
There are hundreds of smaller models available for download, for example at Huggingface. Their licensing varies, but many if not most are completely free to use, no strings attached. Also, some of them list the data used for training, and some even try to make sure their training data does not contain any infringements.
The problem is that we only hear about the "frontier" models created by big businesses. They smelled the potential for money to be made, so we are seeing these predatory practices and massive spin to attract investor money.
The ground research has been around for decades, and it has been done with modest resources by modest people. For example Andrej Karpathy, who worked for OpenAI in the beginning but has since left to do teaching and other things. I recall he had a vision of a small, modular AI, where you would have a smaller central software component and then add specializations as plugins when needed. The idea is not to try to encompass all the information in the world in a single model.
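That modular idea can be sketched as a toy in Python: a small core that dispatches to specialist plugins loaded only when needed. All names here are hypothetical illustrations for this thread, not Karpathy's actual design or any real project's API.

```python
# Toy sketch of a "small core + specialist plugins" architecture.
# Hypothetical illustration only -- not any real project's design.
from typing import Callable, Dict


class ModularAssistant:
    """A small central component that routes requests to plugins."""

    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[[str], str]] = {}

    def register(self, topic: str, handler: Callable[[str], str]) -> None:
        # Specializations are added only when needed, instead of
        # baking all of the world's knowledge into one giant model.
        self._plugins[topic] = handler

    def ask(self, topic: str, query: str) -> str:
        handler = self._plugins.get(topic)
        if handler is None:
            return f"no specialist loaded for '{topic}'"
        return handler(query)


# Stand-ins for small, specialized models.
def math_plugin(query: str) -> str:
    return f"math answer for: {query}"


def travel_plugin(query: str) -> str:
    return f"travel answer for: {query}"


assistant = ModularAssistant()
assistant.register("math", math_plugin)
assistant.register("travel", travel_plugin)

print(assistant.ask("math", "what is 2+2?"))
print(assistant.ask("cooking", "how long to boil pasta?"))
```

The point of the sketch is the shape, not the contents: the core stays small, and capability comes from whichever specialists you choose to load.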
Also, LLMs are now seen as a strategic asset in the competition between nations. There are big dreams built on top of the productivity-increase potential, or even superintelligence, that governments don't want to miss out on. Of course this dream is still very much unproven.
This really is like the atomic race all over again. After Japan, further disaster was averted through MAD. What will stop disaster with super intelligence?
What will stop disaster with super intelligence?
The super intelligence will know.
From Ed Zitron of "Better Offline":
In the last week, we've had no less than three different pieces asking whether the massive proliferation of data centers is a massive bubble, and though they, at times, seem to take the default position of AI's inevitable value, they've begun to sour...
If anybody is interested, I believe I've linked his podcast higher up in this thread; I HIGHLY recommend it, as he's thoroughly done his homework on the people behind the current push.
With SoftBank and their Mierdas touch (LOL), the end of OpenAI can't be that far off.
Thanks, this was a good read. I skimmed the long article quickly, and on a high level I agree with it. I'll need to look into the details later.
I am letting my phone read the article while I make my marinara. So fun to listen to the phone f-bomb AI CEOs for giving their employees the shaft. A recursive moment of AI ####ing on AI.
You can download LLM models and run them on your home computer.
Thanks for that, I thought that what was downloaded and run on a PC was just an interface to the LLM. However, it still has to interrogate a database and retrieve the requested information from somewhere, which implies a datacentre of some description?
Not exactly⊠my understanding is the output comes out of the neural network (or whatever structure modern LLMs have), not a flat file version of a database with every word written out?
The database is used for training the model. Anything you run at home is way past that point, and I don't think it "phones home" to check things.
Edit: now of course some models can take text or images you give them and generate "content" or responses off that; obviously that content is stored locally, but usually in the interface application rather than the LLM model itself. As I understand it, anyway…
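To make the "weights, not a database" point concrete, here's a deliberately crude toy: a bigram "language model" that counts which word follows which during training, then generates text using only that learned table, with no lookup of stored documents at generation time. Real LLMs use neural network weights rather than counts, and the corpus below is made up for illustration, but the principle is the same: after training, the data is gone and only the statistics remain.

```python
# Toy illustration: generation comes from learned statistics,
# not from retrieving stored documents at answer time.
import random
from collections import defaultdict

# Tiny made-up "training corpus".
corpus = "the cat sat on the mat . the dog sat on the rug ."
tokens = corpus.split()

# "Training": record which word follows which (our stand-in for weights).
follows = defaultdict(list)
for a, b in zip(tokens, tokens[1:]):
    follows[a].append(b)


def generate(start: str, length: int, seed: int = 0) -> str:
    """Generate text using only the learned bigram table."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        choices = follows.get(word)
        if not choices:
            break  # no known continuation for this word
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)


print(generate("the", 6))
```

Once `follows` is built, you could delete `corpus` entirely and generation would still work, which is the sense in which a locally run model doesn't need to "interrogate" a datacentre.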