For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn't have other horrible things to relate it to?
What we're dealing with is randomly choosing options from a weighted distribution. The only intelligent thing about that is the choice of data set used to generate the distribution.
And that intelligence lies outside of the machine.
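To make the point concrete, here's a minimal sketch of "randomly choosing options from a weighted distribution" in Python. The token names and weights are invented purely for illustration, not drawn from any real model:

```python
import random

# The "model" here is nothing more than a weighted distribution over
# possible next tokens. Where did the weights come from? From whatever
# data set someone chose to build them from. (Values are made up.)
next_token_weights = {
    "good": 0.55,
    "bad": 0.30,
    "neutral": 0.15,
}

def sample_next_token(weights):
    """Pick a token at random, with probability proportional to its weight."""
    tokens = list(weights.keys())
    return random.choices(tokens, weights=list(weights.values()), k=1)[0]

print(sample_next_token(next_token_weights))
```

The sampling step itself contains no judgment at all; anything resembling judgment was baked into the weights before the machine ever ran.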
There's really no need to buy into tech bros' delusions of grandeur about this stuff.