You would just have to let a superintelligent (aligned) AI robot loose and prompt it to produce enough food for everyone. It wouldn’t even require any maintenance effort once the robot had been created. If benefiting everyone else has no negative consequences for the creators, and there are any empathetic people on the board of creators, I don’t see why it wouldn’t be programmed to benefit everyone.
As long as it doesn’t generate any negative externalities, sure. That’s a huge alignment problem though.
True, and I have my doubts about the alignment problem ever being solved. But that’s a technical problem, a separate conversation from whether even attempting it is worthwhile in the first place.