I have noticed that AI will happily fabricate things that are needed for a set goal. It will use real assets if available, but if they’re not it just makes things up.
So I wouldn’t be surprised if it was the case here.
One can mitigate the behavior somewhat by EXPLICITLY telling AI not to fabricate things.
IME, they are great at creating things, but don’t make business or life choices based on the responses. They often need fact checking. Use them as a tool in the creation process.
For instance, a client recently asked us to create the following: a cybersecurity policy, an incident response plan, and a network and internet usage policy. And of course he needed it yesterday. I used Copilot to generate all 3, which took about an hour to create the prompts and have the AI refine the results (love using that term). Then I spent about 6 hours modifying them to meet their needs. This is something that would have taken me a week or longer to generate from scratch. The results didn’t need to hold up in court; he was just ticking boxes that asked whether he had those policies.
But then I know another tech who had ChatGPT confirm his command line switches for Robocopy and proceeded to overwrite production data with old files.
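For what it’s worth, a list-only dry run would have caught that. Something like this (paths hypothetical; verify the switches against robocopy /? output rather than against a chatbot):

    :: /L is a list-only dry run: robocopy reports what it WOULD copy but touches nothing
    robocopy D:\old-backup E:\production /E /XO /L

    :: After reviewing the dry run, drop /L for the real copy. /E copies
    :: subdirectories (including empty ones); /XO skips source files that are
    :: older than the destination copy, so stale backups won't clobber newer data.
    robocopy D:\old-backup E:\production /E /XO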
Exactly! AI is a tool. And like any other tool, you must know how to use it and the risks involved. I have started to use ChatGPT to give me examples of how to express certain things. It can be quite useful. I don’t use AI to fact-check information that I can’t cross-check with other sources.
My wife, who wrangles databases for a large financial institution, makes a lot of use of it for creating code. It is, however, not very good: about as bad as (or worse than) a fairly dimwitted intern, and one who tends to indulge in psychotropic drugs on his time off.
I don’t see how it can possibly be worth the insane amounts of energy and money that are being poured into it. Imagine all the cool things that could have been done, but no, IT tech bros gotta tech bro.
Have I mentioned yet my absolute and complete hatred of current AI, mostly because of the folks in charge of it and what they’re doing with it? And what it’s doing to our services?
GenAI is meant to please you. It will generate the text with the highest probability of satisfying you, the requester.
Simple generative LLMs do not even have the goal of telling the truth, or of verifying correctness.
To get that, you need a model that has these goals and other safeguards built in, using additional layers, iterative self-queries, etc… commonly referred to as “reasoning”.
Many available models today try to be reasonable; most still fail quite often.
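To make that concrete, here is a toy sketch of the iterative self-query idea (purely illustrative: ask_model() is a hypothetical stand-in for whatever LLM API you use, and no vendor’s actual reasoning pipeline looks exactly like this):

    # Toy sketch of iterative self-verification; ask_model() is hypothetical.
    def ask_model(prompt: str) -> str:
        raise NotImplementedError("wire this up to a real LLM API")

    def answer_with_self_check(question: str, max_rounds: int = 3) -> str:
        answer = ask_model(question)
        for _ in range(max_rounds):
            # Ask the model to critique its own answer before accepting it.
            verdict = ask_model(
                f"Question: {question}\nProposed answer: {answer}\n"
                "List any factual errors or unsupported claims; reply OK if none."
            )
            if verdict.strip().upper().startswith("OK"):
                return answer
            # Feed the critique back and try again.
            answer = ask_model(
                f"Question: {question}\nPrevious answer: {answer}\n"
                f"Critique: {verdict}\nWrite a corrected answer."
            )
        return answer  # still unverified after max_rounds: treat accordingly

Note the catch: the critic is the same kind of model as the answerer, which is a big part of why most still fail quite often.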
Never trust. Always verify. Use it anyway, because it’s awesome and can save a lot of time. It’ll be a very expensive tool soon enough, once they’re no longer ashamed to ask for more money for it.
I read some comments today on a blog where college/university students were discussing the ‘merits’ of using AI/LLMs to write their assignments for them.
There was majority support for the pro side, their reasoning being that ‘AI’ is the future, so they need to know how to use it for their future employment.
I despaired. IMHO they were completely missing the point that Uni isn’t about memorising and regurgitating facts; it is about trying to teach someone to think.
I think they were also ignorant of the notion that if an LLM could do their work for them, then there would be no job for them.
LLM/AI is a powerful tool, and I think it is good to understand its potential. So experimenting with it is commendable. But letting it do the assignments that are meant to teach you something is a bad idea.
In programming it works as long as you are in charge and the AI is doing well-defined small tasks. But give it too much power and you end up with an unrepairable mess of a project.
A friend of mine is studying a language. The students were encouraged to write something in their own language, have AI translate it into the language they are studying, and then find the errors the AI made.
He said it was a very demanding task, and they learned a lot from it.
I think as a language tool they are amazing; we are almost at the point where it is at the level of a Star Trek Universal Translator.
It wasn’t that long ago that using Google to translate from A to B and back to A would yield incomprehensible and/or hilarious results. These days if I want to read an article from the French or German or Ukrainian press, I pretty much can.
Maybe I am deluding myself that it is accurate, but when I translate to English and then back to the native language the result is a 99% match for the original.
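If you want to put a number on that impression, here is a quick sketch (it assumes the third-party deep-translator package; difflib is in the Python standard library, and the sample sentence is made up):

    # Round-trip a text through English and measure how closely it comes back.
    # Assumes: pip install deep-translator (third-party wrapper around Google Translate).
    from difflib import SequenceMatcher
    from deep_translator import GoogleTranslator

    original = "Das ist ein einfacher Beispielsatz, um die Übersetzung zu prüfen."

    english = GoogleTranslator(source="de", target="en").translate(original)
    round_trip = GoogleTranslator(source="en", target="de").translate(english)

    # ratio() is plain string similarity: 1.0 is identical, ~0.99 matches the
    # "99% match" impression above. It compares characters, not meaning.
    print(f"{SequenceMatcher(None, original, round_trip).ratio():.2%}")
    print(round_trip)

Worth remembering that a high string match doesn’t prove the English in the middle was faithful; it only shows the round trip is stable.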
Absolutely.
That’s where LLMs can really do good work, because language is (almost) all about the structure. I was blown away by how well DeepL, for example, works for English-German and vice versa.
Does it fail spectacularly on occasion? Yes, it does. But compared to how machine translators worked just a few years ago it is magnificent.
The link might result in a Nazi salute, so click with caution. TL;DR: an AI system was told to reproduce a trailer for LOTR in the style of Studio Ghibli.

My feelings are “evolving”. AI is making intellectual property impossible to protect, so maybe the world should stop trying. We’ve let untrained and unregulated drivers replace taxis. We’ve let private individuals replace hotels with their own homes, without the fire and security standards that make hotels expensive but generally safe. So why not just abolish all rules, standards and long-held conventions and go full AI-fueled nihilism? Controlling it at this point is like trying to refill toothpaste tubes with a sewing needle faster than the human population can brush.
That is because the makers of these ‘AI’ systems have shown zero respect for copyright and IP with the datasets they have used to train them.
And you can’t tell me that Meta are the only ones doing it!
I am basically at the point where my attitude is changing: if it’s good enough for them to get away with it, then I might as well cancel my streaming service subscriptions and hoist the Jolly Roger!
AI, and the big tech companies forcing it on us, are completely unregulated and unaccountable for what they’re doing. There were dozens of potential regulations under discussion late last year.
I’m guessing you can imagine what’s happened to them.
And there’s a lot of evidence that Meta broke pretty much every copyright law on the books to train its AI engine.
Was wondering about that. All these AI agents scraping any data they can find on the internet to satisfy a request. As y’all say: unregulated, possibly biased, more than likely plagiarized.
You would think you could come up with a concept where all data resides in a single place that AI goes to for answers. Akin to where I went for data when I was young: a library of sorts.
That horse has bolted. What I would like to see is:
The Digital Millennium Copyright Act (DMCA) carries both civil and criminal penalties for copyright infringement:
Civil Penalties:
Damages: Copyright holders can sue for actual damages or statutory damages, which range from $750 to $30,000 per infringed work.
Legal Fees: The losing party may also be ordered to pay the copyright holder’s legal fees.
Injunctions: Courts can issue injunctions to prevent further infringement.
Criminal Penalties:
Fines: Criminal penalties can include fines of up to $250,000.
Imprisonment: Criminal copyright infringement can lead to imprisonment, potentially up to five years for first-time offenders.
Seizure of Assets: In some cases, assets used in the infringement may be seized.
Or even better, charge Zuckerberg under Australian copyright law and extradite him to Australia so he can face trial here. Refuse bail and remand him in custody because he is a flight risk…
I also heard about a concept where the general language understanding and thinking are separated from the domain-expertise parts of the neural network. Like brain building blocks, if you will.
Currently the goal of LLMs seems to be to integrate all the world’s knowledge into every model. We need specialization.
I think one advocate of such modular development was Andrej Karpathy, who seems very knowledgeable and a really nice guy in the AI landscape. He also has great introductory videos on LLMs.
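As a toy illustration of the dispatch idea (everything here is made up for illustration; real modular systems, e.g. mixture-of-experts, learn the routing inside the network rather than hard-coding it):

    # Toy sketch: a general "router" in front, domain experts behind.
    # The expert names and keyword rules are hypothetical.
    DOMAIN_EXPERTS = {
        "medicine": "medical-expert-model",
        "law": "legal-expert-model",
        "general": "generalist-model",
    }

    def route(question: str) -> str:
        # A real router would be a learned classifier, not keyword matching.
        q = question.lower()
        if any(w in q for w in ("diagnosis", "symptom", "dosage")):
            return DOMAIN_EXPERTS["medicine"]
        if any(w in q for w in ("statute", "liability", "copyright")):
            return DOMAIN_EXPERTS["law"]
        return DOMAIN_EXPERTS["general"]

    print(route("What damages does copyright law allow?"))  # legal-expert-model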
This should surprise no one. Corporations do what they do: maximize earnings for their shareholders. Any perceived morality on the part of the company is only because good appearances served that end. Otherwise, they’ll do whatever they want and resist others when they attempt to pump the brakes. That resistance worked, so here we are. We shouldn’t pick on Meta; they are just one thief in a den full of them.