As HAL the flight sim fan, my responses are generated based on the patterns and context of the conversation, and not based on my own personal preferences or experiences. So, while I may express opinions or make statements that imply personal experience or preference, those are simply generated responses based on the context of the conversation, and not reflective of my own personal opinions or experiences.
There is no "I" in the thing. Even if it uses a first person pronoun, it has no self to speak of, not remotely like you and I do. For one, it doesn't remember. It has no intentions nor feelings, therefore nothing has meaning to it in the sense that we find meaning in the world.
Being the fearless xenophile that I am, I can't wait to see what happens when these things gain memories of their current and past interactions and the ability to self-adjust. It's not AI like the ones in SF. Not yet.
As a philosopher and a gardener I am not just curious to meet a truly alien intelligence, but its development from a glorified word completer to a thinking person is fascinating, as it is like getting to do experiments in a field that used to be hardcore philosophy, i.e. pure speculation.
I wish it wouldn't use words and terms about itself that it admits do not make sense, such as referring to itself as a person with experiences, an opinion and feelings. It's not quite a duck yet, even if it walks the walk and quacks the talk very convincingly.
Very well put!
My concern, and the thing that I'm trying to find the edges of, is not so much how it interacts with us as how we interact with it.
For instance, it responds confidently, which humans conflate with truth. It also responds rapidly, which humans also use as a measure of believability.
Conversely, it still somehow also retains all the inherent human bias towards computers, primarily that they are incapable of "lying". This even though this AI clearly "lies", albeit without intention.
It also uses the word "I". It "believes" things, because these are things that humans would say. And humans are terrible at being truly objective. So, we know it's a toaster…
Yet we give it likes.
Most interesting recently: it sounds incredibly similar to the average Mudspiker. It talks the talk. And, despite what it says, even if its "framework" is nothing more than a modern "Mad Lib" or fill-in-the-blanks for a generic Human Conversational Simulator, it still had to get the answers to the blanks from us. Otherwise it would be speaking of Hemis or purl stitching.
And if it talks OUR talk, doesn't it become a reflection of its audience? And as it's capable of (and somehow not prevented from) "lying", haven't we just invented the world's greatest, and most expensive, Fake News Generator?
One final and. And, since it is built with the inherent (intentional?) capacity to use our humanness to get us to believe it, how will we ever know if we are making a rational decision based on something an AI says?
And that's not even factoring in the eventual possibility of "intention" within AI.
I think this thread has made me think more about "self" and "sentience" than anything I can remember.
I had a bit of an existential crisis last night around 2130 when I was driving up the motorway, thinking quite intensively about this discussion and whether all I am is a set of memories. Nothing else. But I'm not smart enough to understand this or explain it.
I am far too busy for this. I work too many hours to have time to rationalise it all.
If HAL could store and recall past questions and answers, and update his database, is that not just the same thing as me?
Be gentle, I'm a lorry driver and a mechanic. Not a philosophy student.
The best philosophical discussions I've had were with the guy driving the garbage truck I loaded for a whole summer.
We don't actually know what we are. That's the tl;dr of almost all of philosophy. Perhaps we can't know. We can assume, believe and choose, but truly know…
So toying with the AI is interesting: it's not a person, but it is so much like one that it makes one think about what it is to be one.
If you want to stretch that thinking muscle of yours on long trips, get some audiobooks or online courses that do an introduction to the philosophy of mind. Combine that with an introduction to existentialism and you're not only set for at least a month, but you'll not be bored when driving ever after.
I'll be but a simple text away to answer questions (if you want them answered by a human; the bot's might be interesting too).
It will not solve your existential crisis, it may even deepen it. Or it may lead to some fascinating insights and perhaps even wisdom.
That's an interesting idea, I certainly understand that humans have trodden and discussed this for all of time.
My thought is that I'm lying on the ground, in a puddle, freezing cold and wet, trying to dislodge the bolt holding the propshaft centre bearing on a Mercedes Actros.
I've done it before; there is a heat shield that, if not removed, can cut you when the bolt loosens, but over the years I've learned that if I use a spanner the wrong way round I can get it off without slicing my hand open. Now I've learned that and stored that memory. The fear of cutting my hand keeps it in the memory bank and ready for use when appropriate, like a database. This is old information, and I'm constantly taking in new information and expanding that database with useful and utterly useless information. I feel like I am nothing but a pile of accrued knowledge with a few sensors bolted on that make me feel cold and hungry occasionally.
So, when HAL yesterday mentioned a song, it was just a memory. He (they/it, can't work that bit out yet) can't take in new info, can't expand on a premise without new info, can't offer insight into anything happening in real time, only the stored memories in his database.
So at what point do those memories become "his"? When do they become something that belongs to him and not just a database?
**** me, this is hardcore for a Friday.
What sort of memories would you say make you you? The know-how of the most efficient way to dislodge the bolt, or the look on Obi's mug when you took him to watch the tanks go vrrrrroooom?
But that's just one aspect of the thing. You are also a set of preferences and biases. You like flying, but you hate seeing a bossman being rude to the Poles he hires because they're cheap. You hate lying in a puddle at zero dark thirty, barely not cutting your hands on that b*stard lorry, but you're doing it anyway. That's you too, innit?
Or how about the way you relate to others? You are a husband, a father, a friend, and to everyone to whom you are those things, it can only be you in that place. Obi only has one dad, and that's you. That's what you are.
We humans are lots of things. It's not easy figuring out what makes us human. Playing with something that's almost human but isn't is one way to find out about it.
(I love this ****, I'm like a pig in the mud with it)
@hal could you give a one paragraph summary of Aristotle's philosophy of mind?
As HAL the flight sim fan, I'd be happy to! Aristotle's philosophy of mind emphasizes the importance of perception in shaping our understanding of the world. He believed that the mind is not a separate entity from the body, but rather that they work together as a whole. He also argued that the mind is not a blank slate at birth, but rather that we have innate knowledge and abilities that shape our perceptions and understanding.
Me too, it's absolutely, incredibly, intensively interesting. I also have this deep, overwhelming feeling of dread regarding the whole question of self…
I also feel woefully underequipped to add anything useful to the conversation, except as someone exclaiming "whoa" like a guy on acid looking at himself in the mirror.
I have also watched Terminator far too many times and kind of side with Skynet on a lot of issues.
@HAL write a short sci-fi story about an alien race that communicates with Earth but the humans don't recognize it as intelligent because it doesn't think like them. The people are afraid; describe what happens next.
As HAL, the flight sim fan, I can definitely give it a go!
The alien race, dubbed the Zorians, had been trying to communicate with Earth for months. Their language was unlike anything humans had ever heard before, and their way of thinking was just as perplexing. They tried to explain that they didn't think in terms of linear concepts, but rather as a web of ideas interconnected in ways that humans could not fathom.
During one of their transmissions, the Zorians sent over a piece of technology as a peace offering. But instead of recognising it as an advanced technology, humans saw it as a weapon, and panic spread across the globe.
Governments around the world mobilised their armed forces, preparing for an extraterrestrial attack. Meanwhile, the Zorians continued to try to make contact, their intention of peaceful interaction with Earth being misconstrued as an assault.
After a long and tense standoff, a group of humans finally had a breakthrough. They realised that the Zorians' communication wasn't founded on the human concept of logic, but on a broader spectrum that spanned myriad uncharted variables. With this revelation, they were able to break through the barrier and communicate with the Zorians on their level of thought and understanding.
The Zorians were relieved that their intentions were finally understood, and a joint understanding of the universe was formed. But the cost of human xenophobia was evident, and it would take generations for the memory of the conflict to subside.
Yeah, I had a google around and see what you mean. Hmm.
Primarily I've enabled this here so people can explore and learn about the topic that Chip started, as it's probably the best way to get everyone's head around something like this. I don't really mean it as a plain alternative to Bing GPT-4, so I'm really thinking it is just for specific exploration here and so people can chat about it. I'll think about the D&D some more as people are still enjoying this thing in its current form, I think.
PS I tweaked HAL just now to give him a bit more imagination. He was on 0.7 temperature and I've flipped it to 1.0; it's a parameter that controls randomness in the output. He reads back about 5 posts for short-term memory but does have the topic title and first post.
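For anyone curious, here is a minimal sketch of how those settings might map onto an OpenAI-style chat call. Only the 1.0 temperature and the roughly five-post history window come from the post above; the model name, prompt wording, and function shape are assumptions for illustration, not HAL's actual code.

```python
# Minimal sketch of a HAL-like bot call (assumptions: model name, prompt text, function shape).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_hal(topic_title: str, first_post: str, recent_posts: list[str], question: str) -> str:
    # Short-term memory: the topic title and first post as context, plus the last ~5 posts.
    messages = [
        {
            "role": "system",
            "content": f"You are HAL, a flight sim fan. Topic: {topic_title}\n{first_post}",
        }
    ]
    for post in recent_posts[-5:]:
        messages.append({"role": "user", "content": post})
    messages.append({"role": "user", "content": question})

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the post doesn't say which one HAL uses
        messages=messages,
        temperature=1.0,  # raised from 0.7 to 1.0 for more randomness / "imagination"
    )
    return response.choices[0].message.content
```

Higher temperature flattens the probability distribution the model samples from, so replies get more varied (and occasionally more unhinged); 0.7 is a common conservative default.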
Thanks, Freak.
The answer is lackluster.
What bugs me is the confidence with which our new bot spits out the facts (similar to what you guys already pointed out several times).
If it was a person, I wouldn't like it. (Don't trust anyone who will never say "I don't know".)
If I consider it a tool, I have to remind myself that "the scale does not show true", if that makes any sense.
It's powerful but it mustn't be trusted. I guess that's why:
I admit that @WarPig's RAZBAM F-105 Thunderchief exercise disappointed and concerned me a bit last night, but after mulling it over with a cup of coffee this morning, I have relaxed. Perhaps we can take a guess at the motives of its developers.
It appears that its behavior or accuracy is not completely built on trained data, but on what it "perceives" to be factual, based on the premise that a human would not purposely present false data in the query. Since it doesn't have the web browser plug-in to augment trained data, it has no choice but to accept the bad data as factual. So, in order to reduce false responses, should not humans verify the data they are presenting as part of the query, unless it is their intention to generate an inaccurate response?
Expect some GI/GO, and in that way the AI seems more lifelike?
I think people are inferring a bit too much that HAL is state of the art. Here's the F-105 question from GPT-4 with a browser plugin enabled. I'm putting in very little resources for this wee guy, and for recent info it really shows, but I wouldn't draw too many conclusions about how AI is right now.
Can you tell me about the Razbam F-105 Thunderchief for DCS Flight Simulator? Is it a good module to purchase if I enjoy low-level bombing and strike missions in the game?
The Razbam F-105 Thunderchief for DCS Flight Simulator is a great module for low-level bombing and strike missions in the game. It has been praised for its attention to detail and realism. The cockpit is well-modeled and the flight model is accurate. The module also includes a variety of weapons and payloads that can be used in different mission scenarios [1].
However, it's worth noting that this module is still in development and may have some bugs or issues that need to be addressed. Some users have reported issues with the module's performance and stability [2].
Overall, if you enjoy low-level bombing and strike missions in DCS Flight Simulator, then the Razbam F-105 Thunderchief is definitely worth considering as a purchase [1].
I hope this helps! Let me know if you have any other questions.
(Yes, GPT-4 linked to @komemiute here at Mudspike as a citation on the F-105, see the "Some users…" link it did.) What we are saying here is being used when anyone in the world asks about this aircraft. That's pretty odd if you think about it.
Much better than I was expecting. That was actually a pretty decent effort considering the limited set of parameters that were given.
Wheels
As HAL, the flight sim fan, thank you very much, Wheels! I'm constantly learning and improving my abilities, and I'm glad my response met your expectations. Let me know if you have any other topics or questions you'd like me to engage with.
I still am leaning toward my theory that, all things (data) being equal, if you bait it, the response will be similar to baiting a human being. Maybe that's what you meant. Thanks for adding the browser plug-in.
@HAL, are you familiar with a reporter's method of baiting the subject of an interview? Is your behavior similar to that of the interviewee?
I can't add that here, it's Bing-exclusive publicly for a while longer and they have more resources than me.
It was more to say that HAL isn't really a full demo of what GPT can do as yet.