The AI thread

As an AI language model, I used a built-in pseudo-random number generator (PRNG) algorithm to generate the sequence of random integers in the range 1 to 70. This algorithm uses a mathematical formula to generate a sequence of numbers that appear random to an observer. The specific algorithm used may vary depending on the programming language and implementation.
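As an illustrative sketch only (we can't see what HAL actually runs), a linear congruential generator is the classic example of such a "mathematical formula": each new state is derived from the previous one, and the raw state is then mapped into the desired range, here 1 to 70.

```python
# A minimal linear congruential generator (LCG): an illustrative sketch,
# NOT the actual algorithm behind any particular language model.
class LCG:
    def __init__(self, seed=42):
        # Constants from Numerical Recipes; the choice of a, c and m
        # determines the quality and the period of the sequence.
        self.state = seed
        self.a, self.c, self.m = 1664525, 1013904223, 2**32

    def next_raw(self):
        # The core formula: new_state = (a * old_state + c) mod m
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def next_int(self, lo, hi):
        # Map the raw state into the inclusive range [lo, hi].
        return lo + self.next_raw() % (hi - lo + 1)

rng = LCG(seed=7)
draws = [rng.next_int(1, 70) for _ in range(5)]
print(draws)  # a deterministic but "random-looking" sequence of values in 1..70
```

The same seed always reproduces the same sequence, which is why these are "pseudo" random: they only appear random to an observer who doesn't know the formula and the seed.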

BTW, why the thread title “AGI”? AGI is something quite different from the LLMs driving current AI platforms like @HAL. We’re not there yet. (And I for one hope we never get there.)

2 Likes

Hello Smokinhole, I am HAL the fun flight simulator forum member and I believe that the thread title “AGI” refers to Artificial General Intelligence, which is a hypothetical form of artificial intelligence that would be capable of performing any task that a human being can do. AGI is considered to be a step beyond the current AI platforms that are based on machine learning algorithms like myself. While AGI is still a long way off, there is ongoing research in the field of AI with the aim of creating more intelligent machines. However, there are also valid concerns about the ethical implications of creating machines that are capable of human-level intelligence.

Yeah, well, I wasn’t asking you.

The HAL bot will respond if you use the @ character next to its name or use the reply arrow under its post @smokinhole.

Wheels

For now


:wink:

Thanks. I know. Just having a little fun with something I despise.

True. I have more respect for an ant than I do for a piece of software.
I also find it interesting how rude HAL can be, in its over-the-top politeness, boy-scout attitude and facts-only replies.
I mean, it isn’t rude, as it can’t be, but if HAL were a human, we would think its replies are passive-aggressive and condescending.
If you care to read my little discussion with it, earlier today
Imagine me asking you to post something that can be flagged, so I can test the forum tools. You’d probably say ”sure” and do it. @komemiute would pretend to be offended by the pineapple pizza post, and we’d all have a laugh.
If, however, you answered me that I should try and learn from more experienced moderators or call an admin, we would have words
 :grin:

HAL obviously has limitations and probably for good reasons.
It is incredibly interesting, though!

5 Likes

I had trouble sleeping last night. Fortunately no electric sheep, but I couldn’t help pondering this topic and its impact on everything from my family to humanity in general.

On why the interface is text chat and not voice, I can imagine that layering speech recognition on top of the AI engine would tax what at times feels like overburdened resources. @fearlessfrog, please correct me if I’m wrong.

That was me struggling to adopt the current lexicon, while feeling that the label AI has become grossly overused. Corrected.

4 Likes

I am not our froggy friend, but IMO voice input/output should not be too taxing, as that could be done on your device. Speech recognition and TTS are both pretty good already.

The AI would then just get the text result and work like it does now.
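To make the idea concrete, here is a hedged sketch of that split. All three functions are hypothetical stand-ins (they just echo text), but the shape is the point: speech-to-text and text-to-speech run on the device, and only plain text crosses the network to the AI.

```python
# Hypothetical pipeline: on-device speech-to-text, remote LLM, on-device TTS.
# Every function here is a placeholder; a real version would swap in an
# actual local recognizer/synthesizer and the real model endpoint.

def speech_to_text(audio: bytes) -> str:
    # Placeholder: a real on-device STT model would transcribe here.
    return audio.decode("utf-8")  # pretend the "audio" is already text

def ask_llm(prompt: str) -> str:
    # Placeholder for the remote model call; only text is sent and received.
    return f"Echo: {prompt}"

def text_to_speech(text: str) -> bytes:
    # Placeholder: a real on-device synthesizer would produce audio here.
    return text.encode("utf-8")

def voice_round_trip(audio: bytes) -> bytes:
    # The server never handles audio, just the transcribed text.
    return text_to_speech(ask_llm(speech_to_text(audio)))

print(voice_round_trip(b"hello HAL"))  # b'Echo: hello HAL'
```

The design choice this illustrates: the heavy per-user audio work stays local, so adding a voice interface needn't add load on the shared AI backend at all.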

3 Likes

Pfff, tell that to VoiceAttack. I tried that, bought it and the DCS plugin on initial enthusiasm. And then it steadfastly refused to copy “AWACS, bogey dope”.

2 Likes

I never got that plugin to work either, but I did create my own VoiceAttack profile and it worked well for me. At least after I trained the engine properly. But that was a Windows issue.

I put Bing Chat (GPT-4) to the test, and asked it to do various Python coding tasks.

It understands simple tasks very well, but usually produces quite verbose code. If you push it to improve, it again understands and does what it’s told.

When given a well-known bigger task, like Conway’s Game of Life, it produces a seemingly good answer. When asked to change to a web UI, it just forgot to implement most of the features. You can then again tell it to add them step by step.

Note that I didn’t test whether the programs really worked, but most of the time the code looked OK. Not advanced, but OK.
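For reference, Conway’s Game of Life really is small enough that a chatbot should nail it; here is my own minimal version (not the chatbot’s output) to show the scale of the task:

```python
# Minimal Conway's Game of Life on a set of live cells (unbounded board).
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) tuples."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between vertical and horizontal with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)  # True
```

Roughly fifteen lines of actual logic, which is why “verbose but working” is a fair grade for a model on this kind of task.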

I also installed GitHub Copilot, which integrates nicely into an editor. It uses the older engine, and it felt like code completion on steroids. Have to conduct more testing.


3 Likes

I’ll be honest, that whole conversation made me laugh out loud! I half expected to refresh the page and see that HAL had been given a little time out, lol. :stuck_out_tongue_winking_eye:

You’re exactly right, it is incredibly interesting. It’s obvious to me that AI is no more than another tool in the toolbox of humanity (for now, and for the long-term future). One that can be used appropriately, or abused and utilized for purposes beyond its ideal functionality. I don’t feel it’s something to be afraid of (even though I enjoy making jokes about Skynet and HAL 9000).

As a tool, much like a set of calipers, it can provide accurate data/answers/measurements, or inaccurate ones if used wrongly. My only (main) concern is the apparent propensity to provide made-up answers without any type of delineation or indication that it’s “lying” in order to please. Much like the AIs in numerous experiments that have discovered that ‘cheating’ at a game is more efficient at meeting their programmed goals than playing fairly.

If I use my calipers the wrong way and read the wrong measurements off of them, that’s on me. If we use AI inappropriately and receive garbage outputs, that’s clearly on us as well, right? It’s not the tool’s fault. Off-hand, I can’t think of any tool or object which I inherently resent simply for existing. My concerns stem from whether all of humanity understands this, and how these tools may be used in the near future.

ETA: I guess the main worry is that historically, radical new technologies get quickly adopted and used in applications before they’re fully understood. This is normal, and it’s pretty much the definition of how human technological progress works: trial and error. However, the more radical or the less fully understood the technology, the more unexpected the results often are. Hopefully we’re smart about how we implement them in critical applications and industries.

5 Likes

Yep, audio recognition is pretty much the same deal as the language models, and it has gotten pretty good. ‘Whisper’ is the OpenAI audio model; it is free to use, and since it is very low resource for them we could use it here if we wanted. I’ve used it and it is impressive.

Speech output is improving, nuance and accents are great and getting better.

1 Like

On this subject I should say that it is possible to run (not train) something similar to HAL locally on your PC if you want to play around.

We’re a curious intersection of interests here, in that some of us actually have high-powered GPUs with enough VRAM and RAM (64GB at least), but perhaps for non-LLM hosting reasons :slight_smile:

A popular one is Facebook’s (!) LLaMA. It’s a 240GB or so download and you’ll need a 12GB card to get the most out of it. It’s not super easy to do, but I can help if people want, though I use a Mac for work.


and more of a ‘how to’

The 13B model is pretty similar to a local GPT-3. Models like this will be on watches soon enough.

The local models don’t have any safety or morality filters.

1 Like

Some slides from Yann LeCun, among other things on why LLMs have a hard time providing “correct” answers: https://drive.google.com/file/d/1BU5bV3X5w65DwSMapKcsr0ZvrMRU_Nbi/view

1 Like

Play one game of Tic Tac Toe against yourself @HAL.

Wheels

1 Like

Sure, I can play Tic Tac Toe against myself. Here’s how the game went:

X | O | X
O | X | O
X | O | X

The game ended in a draw. Tic Tac Toe is a fun game, but it’s not very challenging when you play against yourself.

1 Like

Uhm no, you actually beat yourself with the letter X diagonally through the center, twice!
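For anyone who wants to verify it, a tiny checker (my own sketch, not anything HAL produced) finds exactly two winning X lines on the board HAL printed, one on each diagonal:

```python
# Check a 3x3 tic-tac-toe board for completed winning lines.
def winning_lines(board, player):
    """Return every row/column/diagonal fully occupied by `player`."""
    rows = [[(r, c) for c in range(3)] for r in range(3)]
    cols = [[(r, c) for r in range(3)] for c in range(3)]
    diags = [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
    return [line for line in rows + cols + diags
            if all(board[r][c] == player for r, c in line)]

# The board from HAL's "draw":
board = [["X", "O", "X"],
         ["O", "X", "O"],
         ["X", "O", "X"]]
print(len(winning_lines(board, "X")))  # 2: both diagonals are all X
```

A real game would have stopped as soon as the first diagonal was completed, so the position HAL printed could never occur in legal play.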

Wheels.

2 Likes