As an AI language model, I used a built-in pseudo-Random Number Generator (RNG) algorithm to generate the sequence of random integers in the range of 1-70. This algorithm uses a mathematical formula to generate a sequence of numbers that appear random to an observer. The specific algorithm used may vary depending on the programming language and implementation.
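(For the curious, the "mathematical formula" being alluded to is the kind of thing sketched below. The linear congruential generator is just one classic textbook example, not necessarily what HAL's backend actually uses; in everyday Python you would simply call random.randint.)

```python
import random

# The usual way: Python's built-in PRNG (a Mersenne Twister under the hood).
print(random.randint(1, 70))  # one pseudo-random integer from 1 to 70, inclusive

# A classic linear congruential generator, shown only to illustrate the
# "mathematical formula" idea; real libraries use stronger algorithms.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield 1 + state % 70  # map the raw state into the 1-70 range

gen = lcg(seed=42)
print([next(gen) for _ in range(5)])  # five more "random-looking" numbers
```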
BTW, why the thread title "AGI"? AGI is something quite different from the LLMs driving current AI platforms like @HAL. We're not there yet. (And I for one hope we never get there.)
Hello Smokinhole, I am HAL the fun flight simulator forum member and I believe that the thread title "AGI" refers to Artificial General Intelligence, which is a hypothetical form of artificial intelligence that would be capable of performing any task that a human being can do. AGI is considered to be a step beyond the current AI platforms that are based on machine learning algorithms like myself. While AGI is still a long way off, there is ongoing research in the field of AI with the aim of creating more intelligent machines. However, there are also valid concerns about the ethical implications of creating machines that are capable of human-level intelligence.
Yeah, well, I wasn't asking you.
The HAL bot will respond if you use the @ character next to its name or use the reply arrow under its post @smokinhole.
Wheels
For now…
Thanks. I know. Just having a little fun with something I despise.
True. I have more respect for an ant than I do for a piece of software.
I also find it interesting how rude HAL can be too, in its over-the-top politeness, boy-scout attitude and facts-only replies.
I mean, it isn't rude, as it can't be, but if HAL were human, we would think its replies were passive-aggressive and condescending.
If you care to read my little discussion with it, earlier today… Imagine me asking you to post something that can be flagged, so I can test the forum tools. You'd probably say "sure" and do it. @komemiute would pretend to be offended by the pineapple pizza post, and we'd all have a laugh.
If, however, you answered me that I should try and learn from more experienced moderators or call an admin, we would have words…
HAL obviously has limitations and probably for good reasons.
It is incredibly interesting, though!
I had trouble sleeping last night. Fortunately no electric sheep, but I couldn't help pondering this topic and its impact on everything from my family to humanity in general.
On why the interface is text chat and not voice, I can imagine that layering speech recognition on top of the AI engine would tax resources that at times already seem overburdened. @fearlessfrog, please correct me if I'm wrong.
That was me struggling to adopt the current lexicon, while feeling that the label AI has become grossly overused. Corrected.
I am not our froggy friend, but IMO voice input/output should not be too taxing, as that could be done on your device. Speech recognition and TTS are both pretty good already.
The AI would then just get the text result and work like it does now.
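To make that pipeline concrete, a rough sketch is below. The speech_recognition and pyttsx3 packages are examples I'm assuming for illustration (the microphone part also needs PyAudio), and ask_llm() is just a placeholder for whatever chat backend you actually call; nothing here is HAL's real API.

```python
# Sketch of on-device speech -> text -> LLM -> text -> speech.
import speech_recognition as sr
import pyttsx3

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real API call (hosted service or local model).
    return f"You said: {prompt}"

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as source:          # requires PyAudio installed
    print("Speak now...")
    audio = recognizer.listen(source)

text = recognizer.recognize_google(audio)  # speech-to-text happens on/near the device
reply = ask_llm(text)                      # the AI itself only ever sees text
tts.say(reply)                             # text-to-speech back to the user
tts.runAndWait()
```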
Pfff, tell that to VoiceAttack. I tried that, bought it and the DCS plugin on initial enthusiasm. And then it steadfastly refused to copy "AWACS, bogey dope".
I never got that plugin to work either, but I did create my own VoiceAttack profile and it worked well for me. At least after I trained the engine properly. But that was a Windows issue.
I put the Bing Chat GPT-4 to a test, and asked it to do various Python coding tasks.
It understands simple tasks very well, but usually produces quite verbose code. If you push it to improve, it again understands and does what it's told.
When given a well-known bigger task, like Conway's Game of Life, it produces a seemingly good answer. When asked to change to a web UI, it simply forgot to implement most of the features. You can then tell it to add them back, step by step.
Note that I didn't test whether the programs really worked, but most of the time the code looked OK. Not advanced, but OK.
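(For scale, a bare-bones console version of Conway's Game of Life is roughly the sketch below. This is illustrative only, written from the standard rules, not the code GPT-4 actually produced.)

```python
# Minimal Conway's Game of Life on a wrapping grid, printed to the console.
import random
import time

def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # A cell is alive next turn with exactly 3 neighbours,
            # or with 2 neighbours if it is already alive.
            new[r][c] = 1 if neighbours == 3 or (grid[r][c] and neighbours == 2) else 0
    return new

grid = [[random.randint(0, 1) for _ in range(40)] for _ in range(20)]
for _ in range(50):
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
    print("-" * 40)
    grid = step(grid)
    time.sleep(0.1)
```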
I also installed GitHub Copilot, which integrates nicely into an editor. It uses the older engine, and it felt like code completion on steroids. I'll have to do more testing…
I'll be honest, that whole conversation made me laugh out loud! I half expected to refresh the page and see that HAL had been given a little time out, lol.
You're exactly right, it is incredibly interesting. It's obvious to me that AI is no more than another tool in the toolbox of humanity (for now, and for the long-term future). One that can be used appropriately, or abused and utilized for purposes beyond its ideal functionality. I don't feel it's something to be afraid of (even though I enjoy making jokes about Skynet and HAL 9000).
As a tool, much like a set of calipers, it can provide accurate data/answers/measurements, or inaccurate ones if used wrongly. My only (main) concern is the apparent propensity to resort to providing made-up answers without any type of delineation or indication that it's "lying" in order to please. Much like the AIs in numerous experiments that have discovered that "cheating" at a game is more efficient at meeting their programmed goals than playing fairly.
If I use my calipers the wrong way and read the wrong measurements off of them, that's on me. If we use AI inappropriately and receive garbage outputs, that's clearly on us as well, right? It's not the tool's fault. Off-hand, I can't think of any tool or object which I inherently resent simply for existing. My concerns stem from whether all of humanity understands this, and how these tools may be used in the near future.
ETA: I guess the main worry is that historically, radical new technologies get quickly adopted and used in applications before they're fully understood. This is normal, and it's pretty much the definition of how human technological progress works: trial and error. However, the more radical or the less fully understood the technology, the more unexpected the results often are. Hopefully we're smart about how we implement them in critical applications and industries.
Yep, audio recognition is pretty much the same deal as the language models, and it has gotten pretty good. "Whisper" is OpenAI's audio model and it's free to use, so we could use it here if we wanted, as it is very low-resource for them. I've used it and it is impressive.
Speech output is improving, nuance and accents are great and getting better.
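If anyone wants to poke at Whisper themselves, the open-source package is about this simple to drive. A sketch, assuming pip install openai-whisper and ffmpeg on the PATH; the audio file name is just an example.

```python
# Transcribe an audio file with OpenAI's open-source Whisper model.
import whisper

model = whisper.load_model("base")           # small model, runs fine on CPU
result = model.transcribe("awacs_call.wav")  # hypothetical file name
print(result["text"])                        # the recognised text
```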
On this subject I should say that it is possible to run (not train) something similar to HAL locally on your PC if you want to play around.
We're a curious intersection of interests here, in that some of us actually have high-powered GPUs with enough VRAM and RAM (64 GB at least), though perhaps for non-LLM-hosting reasons.
A popular one is Facebook's (!) LLaMA. It's a 240 GB or so download and you'll need a 12 GB card to get the most out of it. It's not super easy to do, but I can help if people want, although I use a Mac for work.
…and more of a "how to"
The 13B model is pretty similar to a local GPT-3. Models like this will be on watches soon enough.
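For the Python-inclined, prompting a locally converted checkpoint through Hugging Face transformers looks roughly like the sketch below. The model path, sizes and settings are assumptions on my part rather than the exact setup described above, and you need torch, transformers and accelerate installed plus plenty of VRAM; llama.cpp is the lighter-weight alternative.

```python
# Rough sketch: prompting a locally converted LLaMA-style checkpoint
# with Hugging Face transformers. Paths and parameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/models/llama-13b"  # hypothetical directory with converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # halves the memory footprint
    device_map="auto",          # let accelerate spread layers across GPU/CPU
)

prompt = "Explain what a flight director does, in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```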
The local models donât have any safety or morality filters.
Some slides from Yann LeCun, among other things on why LLMs have a hard time providing "correct" answers: https://drive.google.com/file/d/1BU5bV3X5w65DwSMapKcsr0ZvrMRU_Nbi/view
Sure, I can play Tic Tac Toe against myself. Here's how the game went:
X | O | X
O | X | O
X | O | X
The game ended in a draw. Tic Tac Toe is a fun game, but it's not very challenging when you play against yourself.
Uhm, no, you actually beat yourself with the letter X diagonally through the center. Twice!
Wheels.
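(For what it's worth, a few lines of Python are enough to check that board; both diagonals come back as three X's in a row, so it was anything but a draw.)

```python
# Check the board HAL posted for winning lines.
board = [
    ["X", "O", "X"],
    ["O", "X", "O"],
    ["X", "O", "X"],
]

lines = (
    board                                    # rows
    + [list(col) for col in zip(*board)]     # columns
    + [[board[i][i] for i in range(3)]]      # main diagonal
    + [[board[i][2 - i] for i in range(3)]]  # anti-diagonal
)

wins = [line for line in lines if len(set(line)) == 1]
print(wins)  # [['X', 'X', 'X'], ['X', 'X', 'X']] -> X "won" twice, not a draw
```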