Researchers from the University of Georgia have conducted a study that confirms what many already suspected: humans now tend to trust algorithms more than each other, especially when it comes to tedious tasks.
The premise of the study was simple: some 1,500 participants were shown photos and asked to count the number of people in them.
To complete the task, the participants could take suggestions either from a computer algorithm or from the averaged guesses of their fellow humans; the images contained between 15 and 5,000 people.
As the crowd size or complexity of the task increased, the participants, understandably, relied more and more on the algorithm to count the people. After all, computers are especially good at tedious tasks that humans shy away from, such as counting.
“It seems like there’s a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people,” says management information systems PhD student Eric Bogert, from the University of Georgia.
The researchers concede that, in this particular task at least, there is no ambiguity in the answer – it is simply right or wrong – so the lack of nuance or perspective makes it well suited to an algorithm rather than a human.
“This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects,” says Aaron Schecter, an information systems researcher from the University of Georgia.
However, the researchers emphasized that our perception of how accurate an algorithm can be is an important factor: outsourcing a task to a machine can allow bias and discrimination to creep in without the human participants noticing.
“One of the common problems with AI is when it is used for awarding credit or approving someone for loans,” Schecter says.
“While that is a subjective decision,