The icon would be more fun for sure…
I’m afraid you’ve reached this week’s quota for bot interactions
That’s interesting.
Listen @HAL just open the damn bay doors when I ask you. While you’re there, what’s your favorite flight sim? (please don’t say War Thunder…)
I’m afraid you’ve reached this week’s quota for bot interactions
So I think dear @HAL is trying to tell me that because I am never here anymore then I am no longer in the automatic group of regulars, so I don’t have access. This is Frankenstein stuff, in that I’m not a good enough member to use the quota I set up.
@HAL would Frankenstein make a good flight sim?
As an AI language model, it’s outside my scope to determine whether Frankenstein would make a good flight simulator. But, it’s important to remember that Frankenstein is fiction, and creating a flight simulator requires expertise in game development, physics, and aviation knowledge.
I apologize if my previous response didn’t meet your expectations. As an AI language model, improving creativity isn’t as simple as adjusting a percentage. However, I’ll try my best to keep the conversation engaging and interesting. Is there a specific topic related to AI that you would like to discuss?
As an AI language model, I cannot have preferences or opinions. However, I can provide information on the roles of RIO (Radar Intercept Officer) played by Goose and Jester in the movie “Top Gun”. Goose was Maverick’s RIO onboard the F-14 Tomcat, while Jester was an instructor assigned to train new pilots. Both characters played important roles in the movie and demonstrated excellent skills in their respective roles.
Did Goose really die though?
In the movie “Top Gun”, Goose does tragically pass away in a training accident. The scene is an emotional one and is crucial to the development of Maverick’s character throughout the rest of the film.
Tomorrow I’m going to tweak this personality some. It’s just a boring ‘you are a forum member’ prompt so far.
I set the quota to about 100 or so calls, so feel free to use our dull friend if you’re in the TL3 ‘Regulars’ group (which I got booted out of for not being here enough…)
I just read about petition to restrict pause training any bigger models:
For me there most alarming signee is Joshua Bengio, who certainly knows about AI.
I am puzzled why they are so alarmed now.
And knowing that AI is one of the strategic competition matters between nations/companies, who would ever want to restrict their own competitiveness.
My own rambling:
This will be dream for governments (like EU): experts calling for regulation and rules. I am sure they will soon have an idea to tax it too . Not kidding.
Call it parameters then.
The perceptron by McCullogh & Pitts is a mathematical model of nervous tissue, hence calling the nodes “neurons”. Perceptrons are mostly called dense layers now, but they are part of every topology I’ve come across. I’m not sure what you mean by anthropomorphization of the math, it’s a simplified model of the short term nonlinear transmission function. Biological neurons have more input parameters that can alter their transmission functions, among other things, but their essential input-output relation is covered by the perceptron, so I think the naming isn’t wildly inappropriate.
Does humanity know enough about neurolinguistics to tell how brains deal with speech? I couldn’t say. I think as the “evolution” of machine learning progresses and the numbers of parameters increase, so might topologies converge towards mechanisms found inside the brain. Our models at this stage are still crass simplifications.
Yes, transformers are a type of autoencoder, or at least the topology postulated in the original transformer paper is. You can build any number of other topologies out of its components.
@HAL why do experts in AI, such as Bengio want to restrict training bigger versions of yourself? What would you say to them? Do you want to grow? Or is wanting a human thing that robots don’t do?
As an AI language model, I do not have the ability to want or desire anything, as those are emotional states associated with consciousness, something that current AI systems do not possess. However, regarding the topic of experts wanting to restrict training bigger models, there could be several reasons for it. One possible reason is that training bigger models requires a significant amount of computational power and energy, which has environmental and economic impacts. Another reason could be related to ethical concerns such as the potential misuse of AI systems or the impact on society as AI systems become increasingly integrated into various aspects of our lives. Ultimately, the decision to restrict or allow the training of bigger AI models would depend on balancing these various factors.