Robots as a Solution to Equality in the Job Interview Process
In an effort to understand the human mind, philosophers and scientists have long turned to the complex technology of their day to explain psychological phenomena. In medieval times, philosophers compared the brain to a hydraulic pump, largely influenced by the prevalence of hydraulic systems as a new innovation. In the mid-19th century, models of the brain resembled the telegraph, nicknamed the "Victorian Internet," as neuronal activation traveling along nerves was compared to information traveling along telegraph wires. Today, many view computers and robots as potential models of the brain, as evidenced by the popularization of the computational model of the mind and by advances in artificial intelligence. While analogies provide a simple basis of comparison for the brain's many mysteries, they can also render complex technology and, by proxy, the brain as magical and inaccessible (Anderson). As a result, our society glorifies technology as infallible and unbiased, and we have therefore created more roles for technology, especially robots, to become even more involved in our lives.

One human role that robots are beginning to show promise in taking over is that of the job interviewer. In recent years, Australia's La Trobe University has partnered with Japan's NEC Corporation and Kyoto University to create communication robots with emotional intelligence to help conduct hiring interviews for businesses. These robots have the ability to perceive facial expressions, speech, and body language to determine whether potential employees are "emotionally fit and culturally compatible" ("Matilda the Robot"). The first robots were named Matilda and Jack, but they have since been joined by the similar robots Sophie, Charles, Betty, and two other unnamed robots (Nickless). Dr Rajiv Khosla, director of the Centre for Research in Computers, Communication and Social Innovation at La Trobe, says that "[information technology] is such a ubiquitous part of our lives, we believe that if you introduce devices like Sophie into an organization, it can improve the emotional well-being of individuals." Computers and robots are often limited to analyzing quantitative data, but communication robots like Matilda are capable of analyzing people and their qualitative, emotional properties. These emotionally intelligent robots show promising potential to eliminate inequities and biases in the employee selection process, but they will only be able to do so within specific parameters.

Emotionally intelligent robots could help reduce employment inequality because they do not hold the implicit biases that humans do. Unfortunately, our biases often prevent us from making fair and equitable decisions, which is especially evident during the job interview process. In an interview, National Public Radio science correspondent Shankar Vedantam described research on the effect of bias in the interview process: in one study, researchers found that the time of day an interview takes place has a profound impact on whether or not a candidate is chosen for a job (Inskeep). This means that something as seemingly unimportant as circadian rhythm, one of our most primitive instincts, can sway our better judgment. Professional occupation constitutes a main means of income and an indicator of status.
Given the importance of this role, we should strive to create a fair system for all job applicants, but complete fairness may not be possible if human bias cannot be controlled. Beyond basic physiological factors, these biases extend to race. In 2013, John Nunley, Adam Pugh, Nicholas Romero, and Richard Seals conducted research to understand the job market for college graduates across racial lines. They submitted 9,400 online applications on behalf of fictitious college graduates, varying college major, work experience, gender, and race. To indicate race, half of the applicants were given typically white-sounding names, such as "Cody Baker," while the other half were given typically black-sounding names, such as "DeShawn Jefferson." Despite equal qualifications among the fictitious applicants, black applicants were 16% less likely to be called back for an interview (Arends). Racial bias, even when unintentional and unconscious, can therefore create injustice in the job interview process.

In light of these implicit biases that affect the employee selection process, robots are a viable option for conducting objective and fair job interviews. Although robots are often viewed as machines for human comfort, they have the potential to level the playing field, particularly in situations in which humans think and behave irrationally. Robots operate according to purely logical algorithms, which keep them from being influenced by irrational prejudices and hold them strictly to precise criteria. Since a candidate's qualifications cannot always be measured quantitatively, and are therefore subject to qualitative bias, it may be fairer for candidates to be assessed by an objective machine.

However, using robots to eliminate bias is not a panacea, and the approach should be taken with caution. Even though robots act logically, they do so only within the parameters of their programmed algorithms. If a program is coded to be inherently biased, it follows that the machine running it will perpetuate that bias. Last year, Amazon was accused of using a "racist algorithm" that excluded minority neighborhoods in big cities from its free same-day delivery service while systematically offering the service to predominantly white neighborhoods. The algorithm's data linking maximum profit to predominantly white neighborhoods was a direct result of decades of systemic racism, which segregated high-income, white neighborhoods from low-income, minority neighborhoods. Ironically, the low-income neighborhoods excluded from the service would benefit most from free additional services, while the high-income neighborhoods that received it are more likely to already have easy access to quality, low-cost goods. While Amazon claimed to rely only on the facts, namely that it would make no profit in the neighborhoods it excluded (Gralla), it was ultimately using an algorithm built on biased socioeconomic data to perpetuate racist patterns; the brief sketch below illustrates the mechanism. Another similar example, and perhaps.
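To make that mechanism concrete, the following is a minimal, hypothetical Python sketch, not Amazon's actual system: the neighborhood names, profit figures, and threshold are all invented for illustration. It shows how a facially neutral rule that consults only historical profit data, data shaped by decades of unequal investment, reproduces the very exclusions embedded in it.

```python
# Hypothetical illustration only: not Amazon's real algorithm or data.
# Historical profit per neighborhood, shaped by decades of unequal
# investment; the rule never sees race, only past profit.
historical_profit = {
    "Neighborhood A": 1.8,  # historically favored, heavily invested
    "Neighborhood B": 1.6,
    "Neighborhood C": 0.7,  # historically redlined, under-invested
    "Neighborhood D": 0.5,
}

PROFIT_THRESHOLD = 1.0  # serve only where past profit is "proven" (invented cutoff)

def eligible_for_same_day(neighborhood: str) -> bool:
    """A facially neutral rule: offer same-day delivery only where past
    profit clears the threshold. Because past profit encodes past
    discrimination, the rule silently excludes the same communities again."""
    return historical_profit[neighborhood] >= PROFIT_THRESHOLD

for name in historical_profit:
    status = "served" if eligible_for_same_day(name) else "excluded"
    print(f"{name}: {status}")
```

The point of the sketch is that nothing in the code mentions race; the bias enters entirely through the historical data that the "purely logical" rule consumes.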