onekq posted an update 4 days ago

I'm skeptical about the paper's big conclusions, especially those about human society.

AI agent experience is fundamentally different from human experience. Yes, you can give AI multimodal inputs, and you can let it roam around and explore freely. But AI agents aren't bound by biology and physiology the way humans are:

  • They don't get thirsty or hungry.
  • They don't feel emotions or pain.
  • They don't grow old, get sick, or die.
  • They don't reproduce and evolve as a species.

And this gem from the paper:

Perhaps even more importantly, the agent could recognise when its behaviour is triggering human concern, dissatisfaction, or distress, and adaptively modify its behaviour to avoid these negative consequences.

So the AI agent must somehow learn empathy from reward signals alone, despite having no human values or experiences of its own.


This requires philosophical minds. I am quite sure the authors themselves, as technologists, didn't think these questions through when they wrote it.
