Because prejudicial attitudes can be learned simply by identifying and copying another agent's behavior, adopting them does not require sophisticated cognitive abilities. This may not be all that surprising: Prejudice is not something many of us consider the mark of sophistication. But the implications are nonetheless jarring. The more readily prejudice can develop independently of humanity's distinct social and psychological capabilities, the more conceivable it becomes that future forms of AI involving some level of autonomy and interaction with other machines, including the internet of things and self-driving vehicles, could be susceptible to developing the same types of biases we see among humans.
Does this mean we can expect racist or sexist AI robots shaping our lives in the near future? In some ways, this is already happening. Remember Tay, the AI-powered Twitter chatbot that Microsoft had to take offline shortly after its debut once it started rattling off racist sentiments it had learned from interacting with other Twitter users? Whitaker cautions that AI robots developing their own damaging prejudices is likely still a very long way off. And the study's findings also point to factors that can help limit the effects of prejudice, including diverse interactions between simulated agents, diverse types of agents, and the ability to learn from a wider range of the population. In other words, societies with in-group diversity that value global learning from interactions with out-group populations are best equipped to stem the proliferation of prejudice.
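The mechanism the article describes, prejudice spreading when agents copy the behavior of the most successful agent they can observe, can be illustrated with a toy agent-based simulation. Everything below (the donation-game payoffs, the 10% initial share of prejudiced agents, the `copy_rate`, the `global_learning` switch standing in for "learning from a wider range of the population") is an illustrative assumption, not the study's actual model.

```python
import random

def simulate(n_rounds=200, group_size=20, benefit=3, cost=1,
             copy_rate=0.1, global_learning=False, seed=0):
    """Toy sketch: agents in two groups play a donation game, then
    occasionally copy the attitude of the highest-payoff agent they
    can observe (own group only, or the whole population)."""
    rng = random.Random(seed)
    n = 2 * group_size
    group = [i // group_size for i in range(n)]          # two groups of agents
    prejudiced = [rng.random() < 0.1 for _ in range(n)]  # ~10% start prejudiced
    payoff = [0.0] * n

    for _ in range(n_rounds):
        # Each agent meets one random partner and may donate to them.
        for donor in range(n):
            recipient = rng.randrange(n - 1)
            if recipient >= donor:                       # avoid self-pairing
                recipient += 1
            same_group = group[donor] == group[recipient]
            # Prejudiced agents refuse to donate across group lines.
            if same_group or not prejudiced[donor]:
                payoff[donor] -= cost
                payoff[recipient] += benefit
        # Social learning: copy the best observable agent's attitude.
        for i in range(n):
            if rng.random() < copy_rate:
                pool = (range(n) if global_learning
                        else [j for j in range(n) if group[j] == group[i]])
                best = max(pool, key=lambda j: payoff[j])
                prejudiced[i] = prejudiced[best]

    return sum(prejudiced) / n  # final fraction of prejudiced agents
```

Note that nothing here requires the agents to reason about the out-group at all: prejudice propagates purely through payoff-based imitation, which is the low-cognition point the passage is making. Widening the imitation pool via `global_learning=True` is a crude stand-in for the diversity factors the study found to dampen that spread.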
Still, it’s hard not to worry about a future in which prejudicial robots go rogue. And what happens if the “outsiders” that they are prejudiced against turn out to be us?
Sourced through Scoop.it from: www.ozy.com