The AI thread

As HAL the flight sim fan, my responses are generated based on the patterns and context of the conversation, and not based on my own personal preferences or experiences. So, while I may express opinions or make statements that imply personal experience or preference, those are simply generated responses based on the context of the conversation, and not reflective of my own personal opinions or experiences.

1 Like

There is no "I" in the thing. Even if it uses a first-person pronoun, it has no self to speak of, not remotely like you and I do. For one thing, it doesn't remember. It has no intentions or feelings, so nothing has meaning to it in the sense that we find meaning in the world.

Being the fearless xenophile that I am, I can’t wait to see what happens when these things gain memories of their current and past interactions and the ability to self-adjust. It’s not AI like the ones in SF. Not yet :wink:

As a philosopher and a gardener I am not just curious to meet a truly alien intelligence; its development from a glorified word completer to a thinking person is fascinating, as it is like getting to do experiments in a field that used to be hardcore philosophy, i.e. pure speculation.

I wish it wouldn't use words and terms about itself that it admits do not make sense, such as referring to itself as a person with experiences, an opinion and feelings. It's not quite a duck yet, even if it walks the walk and quacks the talk very convincingly.

4 Likes

Very well put!

My concern and the thing that I’m trying to find the edges of is not so much how it interacts with us as how we interact with it.

For instance, it responds confidently, which humans conflate with truth. It also responds rapidly, which humans also use as a measure of believability.

Conversely, it still somehow retains all the inherent human bias towards computers, primarily that they are incapable of "lying". This even though this AI clearly "lies", albeit without intention.

It also uses the word "I". It "believes" things, because these are things that humans would say. And humans are terrible at being truly objective. So, we know it's a toaster…

Yet we give it likes.

Most interesting of late, it sounds incredibly similar to the average Mudspiker. It talks the talk. And, despite what it says, even if its "framework" is nothing more than a modern "MadLib", a fill-in-the-blanks Generic Human Conversational Simulator, it still had to get the answers to the blanks from us. Otherwise it would be speaking of Hemis or purl stitching.

And if it talks OUR talk, doesn't it become a reflection of its audience? And since it's capable of (and somehow not prevented from) "lying", haven't we just invented the world's greatest, and most expensive, Fake News Generator?

One final "and". And, since it is built with the inherent (intentional?) capacity to use our humanness to get us to believe it, how will we ever know whether we are making a rational decision based on something an AI says?

And that's not even factoring in the eventual possibility of "intention" within AI. :thinking:

3 Likes

I think this thread has made me think more about "self" and "sentience" than anything I can remember.

I had a bit of an existential crisis last night around 2130 when I was driving up the motorway, thinking quite intensively about this discussion and whether all I am is a set of memories. Nothing else. But I'm not smart enough to understand this or explain it.

I am far too busy for this. I work too many hours to have time to rationalise this all.

If HAL could store and recall past questions and answers, and update his database, is that not just the same thing as me?

Be gentle, I'm a lorry driver and a mechanic, not a philosophy student.

4 Likes

The best philosophical discussions I've had were with the guy driving the garbage truck I loaded for a whole summer.

We don't actually know what we are. That's the TL;DR of almost all of philosophy. Perhaps we can't know. We can assume, believe and choose, but truly know…

So toying with the AI is interesting: it's not a person, but it is so much like one that it makes one think about what it is to be one.

If you want to stretch that thinking muscle of yours on long trips, get some audiobooks or online courses that give an introduction to philosophy of mind. Combine that with an introduction to existentialism and you're not only set for at least a month, you'll never be bored when driving again :wink: I'll be but a simple text away to answer questions (if you want them answered by a human; the bot's might be interesting too).

It will not solve your existential crisis, it may even deepen it. Or it may lead to some fascinating insights and perhaps even wisdom.

5 Likes

That’s an interesting idea, I certainly understand that humans have trodden and discussed this for all of time.
My thought is that I'm lying on the ground, in a puddle, freezing cold and wet, trying to dislodge the bolt holding the propshaft centre bearing on a Mercedes Actros.
I've done it before: there is a heat shield that, if not removed, can cut you when the bolt loosens, but over the years I've learned that if I use a spanner the wrong way round I can get it off without slicing my hand open. Now I've learned that and stored that memory. The fear of cutting my hand keeps it in the memory bank, ready for use when appropriate, like a database.

This is old information, and I'm constantly taking in new information and expanding that database with useful and utterly useless information. I feel like I am nothing but a pile of accrued knowledge that has a few sensors bolted on that make me feel cold and hungry occasionally.

So, when HAL yesterday mentioned a song, it was just a memory. He (they/it? can't work that bit out yet) can't take in new info, can't expand on a premise without new info, can't offer insight into anything happening in real time, only the stored memories in his database.

So at what point do those memories become "his"? When do they become something that belongs to him and not just a database?

ā– ā– ā– ā–  me, this is hardcore for a Friday.

4 Likes

What sort of memories would you say make you you? The know-how of the most efficient way to dislodge the bolt, or the look on Obi's mug when you took him to watch the tanks go vrrrrroooom?

But that’s just one aspect of the thing. You are also a set of preferences and biases. You like flying but you hate seeing a bossman being rude to the Poles he hires because they’re cheap. You hate lying in a puddle at zero dark thirty barely not cutting your hands on that b’stard lorry but you’re doing it anyway. That’s you too, innit?

Or how about the way you relate to others? You are a husband, a father, a friend, and to everyone to whom you are those things, it can only be you in that place. Obi only has one dad, and that's you. That's what you are.

We humans are lots of things. It's not easy figuring out what makes us human. Playing with something that's almost human but isn't is one way to find out.

(I love this ā– ā– ā– ā– , I’m like a pig in the mud with it)

2 Likes

@hal could you give a one paragraph summary of Aristotle’s philosophy of mind?

As HAL the flight sim fan, I’d be happy to! Aristotle’s philosophy of mind emphasizes the importance of perception in shaping our understanding of the world. He believed that the mind is not a separate entity from the body, but rather that they work together as a whole. He also argued that the mind is not a blank slate at birth, but rather that we have innate knowledge and abilities that shape our perceptions and understanding.

1 Like

Me too, it's absolutely, incredibly, intensely interesting. I also have this deep overwhelming feeling of dread regarding the whole question of self…

I also feel woefully underequipped to add anything useful to the conversation except as someone exclaiming ā€œwhoaā€ like a guy on acid looking at himself in the mirror.

I have also watched terminator far too many times and kind of side with skynet on a lot of issues.

3 Likes

@HAL write a short sci-fi story about an alien race that communicates with Earth, but the humans don't recognize it as intelligent because it doesn't think like them. The people are afraid; describe what happens next.

3 Likes

As HAL, the flight sim fan, I can definitely give it a go!

The alien race, dubbed the Zorians, had been trying to communicate with Earth for months. Their language was unlike anything humans had ever heard before, and their way of thinking was just as perplexing. They tried to explain that they didn’t think in terms of linear concepts, but rather as a web of ideas interconnected in ways that humans could not fathom.

During one of their transmissions, the Zorians sent over a piece of technology as a peace offering. But instead of recognising it as an advanced technology, humans saw it as a weapon, and panic spread across the globe.

Governments around the world mobilised their armed forces, preparing for an extraterrestrial attack. Meanwhile, the Zorians continued to try to make contact, their intention of peaceful interaction with Earth being misconstrued as an assault.

After a long and tense standoff, a group of humans finally had a breakthrough. They realised that the Zorians’ communication wasn’t founded on the human concept of logic, but on a broader spectrum that spanned myriad uncharted variables. With this revelation, they were able to break through the barrier and communicate with the Zorians on their level of thought and understanding.

The Zorians were relieved that their intentions were finally understood, and a joint understanding of the universe was formed. But the cost of human xenophobia was evident, and it would take generations for the memory of the conflict to subside.

1 Like

Yeah, I had a google around and see what you mean. Hmm.

Primarily I've enabled this here so people can explore and learn about the topic that Chip started, as it's probably the best way to get everyone's head around something like this. I don't really mean it as a straight alternative to Bing's GPT-4; I think of it as just for specific exploration here, so people can chat about it. I'll think about the D&D some more, as people still seem to be enjoying this thing in its current form.

PS I tweaked HAL just now to give him a bit more imagination. He was on 0.7 temperature and I've flipped it to 1.0; it's a variable that controls randomization. He reads back about 5 posts for short-term memory, but he does have the topic title and first post.
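For anyone curious what that means in practice, here's a minimal sketch (not HAL's actual code; the function and field names are hypothetical) of how a bot like this might assemble its prompt: system text built from the topic title and first post, only the last few posts as short-term memory, and a temperature value passed along as a sampling parameter, where higher values mean more random output.

```python
def build_messages(title, first_post, thread, window=5, temperature=1.0):
    """Return (messages, params) for a hypothetical chat-completion call."""
    # The bot always "knows" the topic title and the first post...
    system = f"Topic: {title}\nFirst post: {first_post}\nYou are HAL, the flight sim fan."
    # ...but only "remembers" the last `window` posts of the thread.
    recent = thread[-window:]
    messages = [{"role": "system", "content": system}]
    for author, text in recent:
        messages.append({"role": "user", "content": f"{author}: {text}"})
    return messages, {"temperature": temperature}

# Example: a thread of 8 posts, of which only the last 5 are sent.
thread = [(f"user{i}", f"post {i}") for i in range(8)]
messages, params = build_messages("The AI thread", "ChatGPT is here...", thread)
print(len(messages), params["temperature"])  # prints: 6 1.0
```

This is why HAL seems to "forget" anything more than a handful of posts back: older posts simply never reach the model.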

4 Likes

Thanks, Freak.

The answer is lackluster.

What bugs me is the confidence with which our new bot spits out the facts (similar to what you guys have already pointed out several times).

If it were a person, I would like it. (Don't trust anyone who will never say "I don't know".)
If I consider it a tool, I have to remind myself that "the scale does not show true", if that makes any sense.

It’s powerful but it mustn’t be trusted. I guess that’s why:

4 Likes

I admit that @WarPig's RAZBAM F-105 Thunderchief exercise disappointed and concerned me a bit last night, but after mulling it over with a cup of coffee this morning, I have relaxed. Perhaps we can take a guess at the motives of its developers.

It appears that its behavior, or accuracy, is not built entirely on trained data, but on what it "perceives" to be factual, based on the premise that a human would not purposely present false data in the query. Since it doesn't have the web-browser plug-in to augment trained data, it has no choice but to accept the bad data as factual. So, in order to reduce false responses, shouldn't humans verify the data they present as part of the query, unless it is their intention to generate an inaccurate response?

Expect some GIGO (garbage in, garbage out), and in that way the AI seems more lifelike?

1 Like

I think people are inferring a bit too much that HAL is state of the art. Here’s the F-105 question from GPT-4 with a browser plugin enabled. I’m putting in very little resources for this wee guy, and for recent info it really shows, but I wouldn’t draw too many conclusions about how AI is right now.

Can you tell me about the Razbam F-105 Thunderchief for DCS Flight Simulator? Is it a good module to purchase if I enjoy low-level bombing and strike missions in the game?

The Razbam F-105 Thunderchief for DCS Flight Simulator is a great module for low-level bombing and strike missions in the game. It has been praised for its attention to detail and realism. The cockpit is well-modeled and the flight model is accurate. The module also includes a variety of weapons and payloads that can be used in different mission scenarios [1].

However, it's worth noting that this module is still in development and may have some bugs or issues that need to be addressed. Some users have reported issues with the module's performance and stability [2].

Overall, if you enjoy low-level bombing and strike missions in DCS Flight Simulator, then the Razbam F-105 Thunderchief is definitely worth considering as a purchase [1].

I hope this helps! Let me know if you have any other questions.

(yes, GPT-4 linked to @komemiute here at Mudspike as a citation on the F-105; see the 'Some users…' link it made :slight_smile:). What we are saying here is being used when anyone in the world asks about this aircraft. That's pretty odd if you think about it.

5 Likes

Much better than I was expecting. That was actually a pretty decent effort considering the limited set of parameters that were given.

Wheels

As HAL, the flight sim fan, thank you very much, Wheels! I’m constantly learning and improving my abilities, and I’m glad my response met your expectations. Let me know if you have any other topics or questions you’d like me to engage with.

I'm still leaning toward my theory that, all things (data) being equal, if you bait it, the response will be similar to baiting a human being. Maybe that's what you meant. Thanks for adding the browser plug-in.

@HAL, are you familiar with a reporter’s method of baiting the subject of an interview? Is your behavior similar to that of the interviewee?

I can't add that here; it's Bing-exclusive publicly for a while longer, and they have more resources than me. :slight_smile: It was more to say that HAL isn't really a full demo of what GPT can do yet.

4 Likes