For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?
AI currently doesn’t “understand” or “know” anything. It’s trained on a collection of text, and then it predicts and extends whatever text prompt you give it. It’s very good at doing this. If someone “creates something new”, the trained AI will have no concept of it unless you train a new AI model on text that includes that thing.
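Here’s a minimal sketch of that predict-and-extend loop, assuming the Hugging Face transformers library and the small gpt2 checkpoint (the prompt and token count are just illustrative):

```python
# pip install transformers torch
from transformers import pipeline

# Load a small pretrained text-generation model.
# GPT-2's training data was collected years ago, so anything
# "created" since then simply isn't in its weights.
generator = pipeline("text-generation", model="gpt2")

# The model doesn't answer the prompt; it extends it with a
# statistically plausible continuation of the text.
result = generator("Once upon a", max_new_tokens=20, num_return_sequences=1)
print(result[0]["generated_text"])
```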
Oh wow, it is really interesting that new things will be unknown! So basically AI still isn’t intelligent, because it can’t really make choices on its own, only ones based on what it has learned.
This is a really good talk that outlines some possible criteria for intelligence and shows how close ChatGPT does or doesn’t come on each of them.
AI doesn’t really exist yet. The press hailed Tesla’s radio-controlled boat as a kind of artificial intelligence back in 1898, and the label came around again in 1970 when John Conway invented the Game of Life. But even now, nothing we’ve made can do genuine decision making. ChatGPT, the smartest thing out there, is really just a versatile prediction engine.
Imagine I said, “once upon a” and asked you to come up with the next word. You’d say “time”, because you’ve heard that phrase hundreds of times. If I then asked you for the next word, and the next, you might start telling me about a princess locked in a tall tower guarded by a dragon. These are all stereotypical elements of a “once upon a time” story. Nothing creative, just typical. ChatGPT has simply read far more than you or I ever could, so it knows more stereotypical stories and is very good at mixing them together. There is no “what is best for humanity”, only “once upon a time…”, made-up stories.
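As a toy illustration of that idea, here’s a bigram model: count which word follows which in some training text, then always predict the most common follower. (The tiny “corpus” here is a made-up stand-in for real training data; real models are vastly bigger, but the principle is the same.)

```python
from collections import Counter, defaultdict

# Made-up stand-in for training text.
corpus = (
    "once upon a time there was a princess . "
    "once upon a time there was a dragon . "
    "the princess was locked in a tall tower ."
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def predict(word):
    """Return the most common word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

# Extend a prompt one word at a time, exactly like autocomplete.
story = ["once"]
for _ in range(5):
    story.append(predict(story[-1]))
print(" ".join(story))  # -> once upon a time there was
```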
What you’re saying doesn’t exist is Artificial General Intelligence, something approaching the conscious human mind. You’re right, that doesn’t exist.
AI doesn’t just mean that though.
What we’re dealing with right now is the computer equivalent of growing mouse brain cells in a petri dish, plugging them into inputs and outputs, and getting them to do useful things for us.
The way you describe ChatGPT not being creative is also, arguably, how our own brains work in the creative process. If you study story structure and mythology, you’ll find that ALL successful stories boil down to a very minimal set of archetypes and types of conflict.
What we’re dealing with is randomly choosing options from a weighted distribution. The only thing intelligent about that is what you’ve chosen as the data set to generate that distribution.
And that intelligence lies outside of the machine.
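For concreteness, that “weighted distribution” step is no deeper than this sketch (the candidate words and weights are invented for illustration; real models score tens of thousands of tokens):

```python
import random

# Hypothetical next-word candidates with weights learned from training data.
candidates = ["time", "princess", "dragon", "mattress"]
weights = [0.80, 0.12, 0.07, 0.01]

# All the "intelligence" is baked into the weights; the machine
# itself just rolls weighted dice.
next_word = random.choices(candidates, weights=weights, k=1)[0]
print("once upon a", next_word)
```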
There’s really no need to buy into tech bros’ delusions of grandeur about this stuff.