The AI thread

I just got back from Australia, so I’m not dead but the time zone still makes me feel a bit like it.

I’ve used Large Language Models (LLMs), GPT-3/4, a fair bit in a professional capacity. My work is more on the finance side, with quantitative models, but it’s all patterns in the end, I guess.

The biggest question this stuff has raised for me in the last couple of years is this: is it really that clever, or is it more that people are just simpler than we thought? I know it sounds facetious (hello everyone, it’s been a while), but people are really good at ‘filling in gaps’; it is how our brains work. We do it when we see three rocks positioned just so and swear it looks like a face, and we do it when we see a poem copied with certain tokens replaced when given a subject - we attribute complex emergent behavior through our interpretation rather than it being ‘intelligent’.

I’m not trying to undersell it, as it is important. I’m just saying this isn’t a sudden breakthrough, just the steady march of progress in deep, layered convolutional nets. The training material and the front-end really show it off well. It’s crossing the chasm for sure, and it will be really popular and become the main point of contact the way Google and the browser are today.

As for what jobs it will impact? I think a lot, and it is no small irony for me that the people who were sniffily predicting the end of blue-collar factory/trades work as dinosaurs now probably hold the perfect first white-collar jobs to be replaced. Programming, sales, and consulting are all going to be impacted. It doesn’t mean those jobs go away, it just means that plain ‘info workers’ will have to evolve. The way I see it: if you were a 411 operator in the year 2000 and then Google became popular and phones got better and more connected, you probably saw your job go. Is that bad? If your job today isn’t really about creativity or decision making on loose information, then you could be in trouble. You now have to create value not just by knowing things but by using that information better than an LLM can regurgitate it.

So ‘AI’ is a bit of an overloaded term, and one thing I don’t see mentioned in the froth of what’s going on now is that the models are highly asymmetrical: they take massive resources to generate and very little resources to query and use. What this means is that something like GPT-3.5 is effectively frozen in time at about May 2021. GPT-4 is newer and incorporates web queries as part of the answer parser, but it is still built on the fact that producing the training weights takes a commitment that’s nothing like using them. GPT-5 tries to fix some of these things, but the math is against it. This means it does not learn like we do - we are great at making good or bad decisions very quickly with new information that doesn’t fit a current pattern. What jobs will do well with that?
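To make the asymmetry concrete, here’s a rough sketch of the cheap side of it - assuming the Hugging Face transformers library and the small open GPT-2 model, since the big commercial models’ weights aren’t downloadable like this:

```python
# A rough sketch of the *inference* side of the asymmetry: the weights were
# produced long ago at huge cost, but querying them is trivial and read-only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")   # frozen, pre-trained weights

prompt = "The training run cost a fortune, but answering this prompt"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)      # nothing is learned at query time
print(tok.decode(out[0], skip_special_tokens=True))
```

The generate call only reads the weights; getting genuinely new knowledge in means another enormous training or fine-tuning run, which is why the model’s view of the world stays frozen at its cutoff.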

The dystopian take on this is that shortly we’ll have GPT-powered Office tools automatically creating sales-proposal PowerPoints to be emailed to GPT-powered Outlook, which then reads the PowerPoint and declines the sales proposal. Weeks of white-collar hell shortened to seconds. Future models then get trained on content that previous models generated and published to the internet, so everything becomes derivative and slightly duller. The era of ‘info punk’ might come back eventually, once we grow tired of everything being boring and generated. We’ll see.

Anyway, back to bed!

PS I thought that poem was really nice. Anyone here can use a GPT-4 derivative today via Bing Chat; I think it says there is a waiting list, but if you register it is instant access.

14 Likes

Awesome reply. Thanks for the Bing Chat tip. Worked fine in Edge with the browser logged in to, wait for it, a Hotmail account. Yeah baby :rofl: Chicks dig my Hotmail.

It makes sense that financials would have been using this technology for a while. I almost emailed my hedge fund tech analyst buddy this morning to ask him what he was using, but knew that he would be months ahead in understanding it, and I didn’t want to sound like a dufus (any more than I already do).

I’m not so much concerned whether it is intelligent or just has better Google-fu. I’ll leave that up to the big brains to debate, but what I found fascinating in the Mudspike poem was how well it pushed our hot buttons. I tried the same exercise with Bard and received results that were decidedly more generic. Like it was an Amazon review bot.

Concerning the dystopian office tools, I used ChatGPT 3.5 this morning to “Create a compensation plan for a beer salesperson”. This is something I’ve been asking my partners to help me with for a month without getting much help. The response was detailed and mostly appropriate for our needs.

Happy travels and sleep FF. Please do stop in when able.

3 Likes

Yes.

2 Likes

You too!!!

We’re so old. :rofl:

2 Likes

Ok…. So when do we start? :slightly_smiling_face:

Each human brain has orders of magnitude more synaptic connections than today’s LLMs have parameters. I think at this point it is mostly a question of resources: generating labelled data and training the models on that data.

Where our brains really excel is at memory capacity and the energy efficiency of the processing. In time, larger models with more parameters and more complex topologies will get ever closer to our way of solving problems.
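To put very rough numbers on that (ballpark public estimates only, and comparing synapses to parameters since that’s the closer analogy):

```python
# Ballpark scale comparison - rough public estimates, not precise figures.
brain_neurons   = 86e9     # ~86 billion neurons in a human brain
brain_synapses  = 1e14     # on the order of 100 trillion synaptic connections
gpt3_parameters = 175e9    # GPT-3's published parameter count

ratio = brain_synapses / gpt3_parameters
print(f"synapses per brain vs GPT-3 parameters: ~{ratio:,.0f}x")
# -> roughly 500-600x, i.e. a couple of orders of magnitude
```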

6 Likes

For me the use of biological terms for these things has always been a bit of hokum. While there is a vague similarity between how we think the brain might process visual pattern stimuli and a CNN model’s ‘neurons’, it is more a feel-good analogy to anthropomorphize the math. A brain is a brain, and this isn’t it. The self-learning bit is the key thing in our mushy bits. GPT’s tokenization and the transformer’s attention mechanism that made this happen don’t really have a biological equivalent that we know of, nor did they come from one. It’s a bit like calling a Harrier Hawk (the bird) the same as the AV-8B, because they can both hover.
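If anyone wants to see how un-biological the attention bit is, here’s a toy numpy sketch of scaled dot-product attention (sizes and numbers made up) - it’s just matrix multiplies and a softmax, no mushy bits involved:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: score every query against every key,
    # softmax the scores, then use the weights to mix the value vectors.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three "tokens" with 4-dimensional embeddings, random numbers for illustration.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(attention(Q, K, V))
```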

One thing I do think is pretty cool is that there is a generation about to come through school who can have a tireless, 24-hour-accessible ‘teacher’ for reference info. While the education focus has been on cheating on essays, a brighter side might be 10-year-olds asking ten thousand ‘Why?’ questions of these things, and it will never get tired, never want to stop, and never say ‘because it is’. The Socratic back-and-forth method of learning works really well with these chat models, as there is no teacher ego and no limit on patience. Of course it can be wrong, but that has never stopped human teachers getting work, so at least that can be worked on over time. Figuring out how best to work with and use these tools might be a generational thing, and we might not be that generation.

6 Likes

For me, I don’t see what all the fuss is about with these advancements in AI. It seems an evolutionary step, not a revolutionary one. Remember, in the end they just display data that is already known. When search engines first appeared, you asked a question and they showed you places where the answer might be. Now, they can piece the data together.

As FF alluded to (and great to see you around sir!), this could be a great teaching aid and I hope it’s embraced as such.

5 Likes

Looking ahead, and just as an external reference for what I meant (as these things tend to be read with different preconceptions): the current thinking is that pushing parameter counts ever higher with the current model architectures will hit a wall of diminishing usefulness - the term ‘AI Winter’ is used to describe why we might see the classic bell curve of usefulness. Like a jet engine above a certain altitude, if we want to push on to problem solving the way we do it, this might be a bit of a dead end and something else is needed. It’s still really (really) useful as a tool, though.

3 Likes

“… would never stop, it would never leave him. It would always be there. And it would never hurt him, never shout at him or get drunk and hit him, or say it couldn’t spend time with him because it was too busy. And it would die to protect him…”

  • Sarah Connor
10 Likes

For me it is about the confluence of technologies - AI, robotics, synthetic biology, genetic manipulation, etc. - that were science fiction when I was a kid, which gets me to believe that we are less than a few generations away from humans (as we know them today) being obsolete.

1 Like

Agreed. And a continuous flow of electricity. :wink:

1 Like

This.

About 15 years ago there were a couple of prominent technologies for unsupervised deep learning: transformers (edit: err, it was autoencoders, but anyway I think transformers use those) and Restricted Boltzmann Machines, which were advocated by Geoffrey Hinton.

My favourite was always the latter, because it more closely resembles how our brains work. Apparently the autoencoders got the upper hand, as their behaviour was more robust when learning at scale.
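For the curious, a toy autoencoder looks something like this (a PyTorch-flavoured sketch; layer sizes and names are made up): the input is its own training target, which is what makes it ‘unsupervised’.

```python
import torch
import torch.nn as nn

# Toy autoencoder: squeeze the input through a narrow bottleneck and try to
# reconstruct it. No labels needed - the input is its own target.
class AutoEncoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(8, 784)                          # a fake batch of flattened images
loss = nn.functional.mse_loss(model(x), x)      # reconstruction error
loss.backward()                                 # an optimiser step would follow in real training
```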

2 Likes

Tell you what, as I’ve had a deficit of Mudspike time recently, I’ll add a GPT-4 bot to the forum tomorrow when I get a mo, one that can interact in topics and that we can torture. We might need to restrict usage to a group of us, as I’ll pay out of pocket (and I’m incredibly cheap), but we can run it for a while I think. It’ll be an expert on aviation and simulation games (just like us! :wink:) and I’m thinking of calling it @spike, with a wee robot icon. :robot: @Discobot is going to get really jealous…

Hmm, or maybe @mudbot, dunno, open to ideas here (and no, I’m not asking GPT-4)…
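Under the hood it won’t be anything fancy - roughly the sort of call sketched below, using OpenAI’s Python library (the system prompt, helper name, and example question are just placeholders I made up):

```python
# Rough sketch of what the forum bot would do per reply, via OpenAI's
# chat completion API. Prompt text and function name are placeholders.
import openai

openai.api_key = "sk-..."  # kept server-side, never posted on the forum

def spike_reply(topic_text: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are @spike, a friendly aviation and simulation expert on a forum."},
            {"role": "user", "content": topic_text},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(spike_reply("What's a good way to learn carrier landings in a sim?"))
```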

8 Likes

Hi! To find out what I can do, say @discobot display help.

Ralph Wiggum Danger GIF

2 Likes

Considering the median age of our little group, we could always go with @HAL.

Wheels

7 Likes

I like it, let me set it up now rather than sleep again. 2 secs (not going to be 2 secs).

1 Like

You there, @HAL? Perhaps introduce yourself?

My vote would be Number Six, but if you guys want HAL, fine.

2 Likes