The AI thread

Well boo hoo, cry me a river. If these :poop: heads get their way, then guess what?

I will treat the IP of Anthropic, Google, Meta, OpenAI, etc exactly as they have treated everyone else. Yarr Matey :pirate_flag:

7 Likes

Oh you wouldn’t BELIEVE how many young’uns I absolutely ENRAGED when I pointed out that our AI ‘industry’ has all the markings of a massive bubble that will have catastrophic consequences when it bursts.

And don’t even get me started on the young military folks who are all thinking they’ll be raking in the huge bucks working in datacenters when they get out in 3-4 years.

3 Likes

Khan Academy is an amazing resource for learning math, with tips for being faster and more accurate as well.

2 Likes

Oh sweet summer children


Wait. People work in them? I thought they were big, noisy, power-hungry, water-thirsty but unoccupied behemoths. Oh, and I learned today that NY Governor Hochul has offered tax incentives to tech companies that will give them access to power at 1/3 the cost per unit of other users. How grateful the citizen living next to one of those damn things must be when he looks at his power bill, knowing he's helping Zuck get richer.

1 Like

It seems the law of diminishing returns is biting back.

Here’s a sobering assessment of current AI expectations. They have been greatly lowered by many knowledgeable people. It seems that this technology (transformer LLMs) is plateauing and we cannot really brute-force our way out of it.

One big reason for the sentiment change is that all the new frontier models (the latest being GPT-5) have been a bit of a letdown; LLMs are not progressing like they used to.

After a while, investors should pick up on the message too. Maybe we can still avoid the overinvestment.

2 Likes

Very few people work in them, and it’s mostly a traveling technician job. Which is something I’d started tracking probably five years ago now, and is what started me trying to sound the alarm on these at DOE back at the time.

Anyway, the jobs piece was something I was able to help allies in Tucson address in this case, because I know the folks who have already begun aggressively recruiting for the technicians who would potentially travel to this site once it was up and running:

Hopefully this is the first of many wins.

For y’all in North Carolina, I don’t know if you’ve seen this one. Teresa Earnhardt is trying to once again prove how much of a villain she chooses to be.

https://www.charlotteobserver.com/sports/nascar-auto-racing/thatsracin/article311518497.html

4 Likes

Oh wow, Teresa Earnhardt is an absolute witch once again. I hate that woman. I was a huge Dale fan back in the 90s, god that woman makes me angry.

Anyways, I don’t think the future of AI will be in massive data centers, with the paltry few techs who work on them. The simple fact of the matter is that they require such an immense investment for such meager returns, if you get ANY returns at all. When I look at AI investments and their ROIs, or rather the lack thereof, I’m left wondering: where’s the incentive? They’re asking for oceans of seed money on the vague promise of major returns, but I’ve yet to hear of or see proof of any outside of a few select investors.

Mostly the dudes and dudettes who got into NVIDIA years ago and then got out before they had that absolute galaxy of value wiped out.

I promise you that whatever tech giant or startup manages to create a smaller-scale, focused AI that functions locally as opposed to on a server will become the iconic firm. In the same way Google is synonymous with search engines and Xerox with photocopies, whatever company creates that will absolutely become the name. By eliminating the need for gigantic datacenters to process the desires of desperate nerds who make hentai their total identity, you’d be eliminating so much overhead. Generative AI is a joke, but having further assistance could be a real boon to a lot of users.

Instead of wanting to create the disgusting anime-girl abominations that Hayao Miyazaki so aptly described, or using it as a shortcut for programming that absolutely won’t work (since programming is very much garbage in, garbage out), we should be looking at AI for more practical applications, and in that regard I do not believe you need armies of servers for it.

1 Like

Yep, the folks aren’t sure they want them, at least in North Carolina.

Is that possible? Isn’t so called AI just a very sophisticated algorithm for parsing and assembling an ‘answer’ from a huge repository of data?

Isn’t that the crux of the copyright infringement lawsuits? That ‘training’ an AI is actually the amassing of a huge trove of (somebody else’s) data?

Genuine questions. The more I know about AI, the less inclined I might be to see it and the developers for the evil that I currently consider them to be.

1 Like

Very nice win in Tucson @Navynuke99!

1 Like

Yes, of course it is possible. You can download LLM models and run them on your home computer. They may not be as big or fast or capable as the big ‘frontier’ models, but nevertheless it is impressive what you can achieve with a gaming computer’s resources.

There are hundreds of smaller models available for download, for example at Hugging Face. Their licensing varies, but many if not most are completely free to use, no strings attached. Also, some of them list the data that was used for training, and some even try to make sure that their training data does not contain any infringing material.
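To put "a gaming computer can run these" in numbers, here's a quick back-of-envelope sketch (my own illustration, not from any particular model card) of roughly how much memory a model's weights alone need at common quantization levels. Real usage adds overhead for the KV cache and runtime:

```python
# Rough memory needed just for the weights of a local model, by quantization.
# Back-of-envelope only; actual usage is higher (KV cache, activations, etc.).

def model_memory_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits, label in [(16, "fp16"), (8, "int8"), (4, "4-bit")]:
    print(f"7B model at {label}: ~{model_memory_gb(7, bits):.1f} GB")
# → 14.0 GB, 7.0 GB, and 3.5 GB respectively
```

Which is why a 4-bit 7B model fits comfortably on a consumer GPU with 8 GB of VRAM, while the full-precision version doesn't.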

The problem is that we mostly hear about the ‘frontier’ models created by big businesses. They smelled the potential for money to be made, so we are seeing these predatory practices and massive spin to attract investor money.

The ground research has been around for decades and was done with modest resources by modest people. Take Andrej Karpathy, who worked for OpenAI in the beginning but has since left to do teaching and other things. I recall he had a vision of a small, modular AI where you would have a smaller core software component and could then add specializations as plugins when needed. The idea is not to try to encompass all the information in the world in a single model.
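I don't know the details of that vision, but the general "small core plus specialist plugins" idea can be sketched like this (purely illustrative Python; every name here is made up, nothing from any real system):

```python
# Toy sketch of a "small core + specialist plugins" architecture:
# a lightweight router decides which specialized handler should answer,
# instead of one giant model trying to know everything.

from typing import Callable

specialists: dict[str, Callable[[str], str]] = {}

def register(topic: str):
    """Decorator that registers a specialist plugin under a topic keyword."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        specialists[topic] = fn
        return fn
    return wrap

@register("math")
def math_plugin(query: str) -> str:
    return "math specialist answering: " + query

@register("code")
def code_plugin(query: str) -> str:
    return "code specialist answering: " + query

def route(query: str) -> str:
    """The small 'core' only has to decide who should answer."""
    for topic, fn in specialists.items():
        if topic in query.lower():
            return fn(query)
    return "core model answering directly: " + query

print(route("Can you check this math?"))
```

The appeal is that each specialist can stay small enough to run locally, and you only load the ones you actually need.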

LLMs are now also seen as a strategic asset in the competition between nations. There are big dreams built on top of the potential productivity increase, or even superintelligence, that governments don’t want to miss out on. Of course this dream is still very much unproven.

1 Like

This really is like the atomic race all over again. After Japan, further disaster was averted through MAD. What will stop disaster with superintelligence?

The super intelligence will know. :clown_face:

2 Likes

From Ed Zitron of “Better Offline”:

If anybody is interested, I believe I’ve linked his podcast higher up in this thread; I HIGHLY recommend it, as he’s thoroughly done his homework on the people behind the current push.

3 Likes

With SoftBank and their Mierdas touch (LOL), the end of OpenAI can’t be that far off.

3 Likes

Thanks, this was a good read. I skimmed the long article quickly, and at a high level I agree with it. I need to look into the details later.

I am letting my phone read the article while I make my marinara. So fun to listen to the phone f-bomb AI CEOs for giving their employees the shaft. A recursive moment of AI ####ing on AI.

2 Likes

Thanks for that. I thought that what was downloaded and run on a PC was just an interface to the LLM. But doesn’t it still have to interrogate a database and retrieve the requested information from somewhere, which implies a datacentre of some description?

3 Likes

Not exactly; my understanding is the output comes out of the neural network (or whatever structure modern LLMs have), not a flat-file version of a database with every word written out.

The database is used for training the model. Anything you run at home is way past that point, and I don’t think it “phones home” to check things.

Edit: of course some models can take text or images you give them and generate “content” or responses from that. Obviously that content is stored locally, but usually in the interface application rather than in the LLM model itself. As I understand it, anyway.
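A cartoon version of that point, just to make it concrete: once training is done, generation is arithmetic over the stored weights, and the training corpus itself is never consulted. This toy "model" is nothing like a real transformer (which is a neural network, not a table), but the shape of inference is the same: weights in, probabilities out, no data retrieval.

```python
# Toy demonstration that generation reads nothing from a training database:
# the "model" is a handful of learned next-token weights, and inference
# is pure arithmetic over them. Purely illustrative, not a real LLM.

import math

# Pretend training produced these weights; the training text is long gone.
weights = {
    "the": {"cat": 2.0, "dog": 1.0},
    "cat": {"sat": 3.0, "ran": 0.5},
}

def next_token(token: str) -> str:
    """Greedy decode: softmax over the learned weights, pick the most likely."""
    logits = weights[token]
    total = sum(math.exp(v) for v in logits.values())
    probs = {t: math.exp(v) / total for t, v in logits.items()}
    return max(probs, key=probs.get)

print(next_token("the"))  # → "cat", computed from weights, nothing retrieved
```

That's also why the copyright fight is about the *training* step: the data is ingested then, compressed into the weights, rather than stored and queried later.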


2 Likes