AI in Our Image: How Technology Reflects Systemic Biases
To many, artificial intelligence is something out of science fiction. A robot starts by responding to your every command, then somehow becomes too human and ends up outsmarting the human race.
But in reality, artificial intelligence, or AI, is not just a dream of the future; it’s already all around us, making modern technology more effective and easier to use. Today, AI manifests in many forms, from the algorithms behind search engines to the software your phone’s GPS uses to calculate the fastest route to your destination. Some prospects for the future of AI include self-driving cars, which, if executed properly, could significantly reduce rates of auto accidents.
At the same time, the “human” factor of AI software that long seemed fictional is manifesting in technologies like Siri, assistants designed to have more of a “personality.” While software like Siri has undeniably helped many, bots that draw their responses from data gathered across the internet face a challenge: sexism.
In 2016, Microsoft created Tay, a bot intended as an experiment in machine learning and interaction that would learn to chat with users. At first, Tay’s automation worked well, producing innocent, personable tweets directed at Twitter users, like “can i just say i’m super stoked to meet u? humans are super cool.” Soon enough, however, Tay began to mirror tweets sent by Twitter trolls, who bombarded the bot with racist and sexist messages. Because Tay generated “her” responses from general Twitter trends and popular sentiments, the account spiraled out of control. Tay began to tweet things like “I fucking hate feminists and they should all die and burn in hell,” and Microsoft shut the bot down no more than 16 hours after it went live.
Incidents like this raise the question: how can AI technology be created so that it doesn’t hold prejudiced beliefs? Along with the significant biases against women embedded in AI’s data, the number of women working in AI is extremely disproportionate to the number of men. Without women playing a role in developing AI, tech companies everywhere risk another Microsoft scandal. Preventing the rise of more sexist bots requires concrete solutions. Organizations like Girls Who Code and Million Women Mentors encourage girls to enter computer science and continue to support women in tech throughout higher education.
At Tufts, women make up no more than 40 percent of computer science majors in the School of Arts and Sciences, though only 17.9 percent in the School of Engineering. Even so, Tufts’ combined enrollment is much higher than the national average, where women earn only 18 percent of computer science degrees.
Professor Jivko Sinapov, a new hire in the Department of Computer Science specializing in cognitive and developmental robotics, said that at Tufts specifically, there is less of a gap between genders in the Computer Science department. In fact, he noted, this relative equality motivated him to work at Tufts.
“[The disparity in gender is] getting better, especially looking at graduate[s] and undergraduate[s],” Sinapov said. “However, higher up, for example, there are generally less women applying for positions. This is due to societal and institutional reasons, as historically science and tech have been assumed to be men’s jobs.”
Another professor, Donna Slonim, who has a dual appointment at Tufts between the Computer Science Department and the Department of Integrative Physiology and Pathobiology at Tufts School of Medicine, said that when she was an undergraduate student at Yale, her Computer Science (CS) program was about 15-20 percent female. She also noted that while over 30 percent of CS Arts and Sciences undergraduate majors at Tufts are women, “[Tufts’] SOE representation […] mirrors the national average; more than a quarter century after I graduated, women still make up under 20 percent of the undergrad CS majors across the country.”
Both Professor Sinapov and Professor Slonim offered possible solutions for increasing CS participation among women. Sinapov recalled a program he participated in at UT Austin called “First Bites,” which he described as a summer camp for high school girls where “they come to our labs and work with robots and spend some time coding.”
He is hopeful that more programs like this will help foster a more inclusive field for women in CS. “As long as outreach is there, the situation can change,” he said.
Slonim believes important steps like these are being taken at Tufts as well. “At the most recent department faculty meeting, I was amused to notice three of the four most recent Computer Science department chairs sitting next to each other, all of them female,” she recalled. “We certainly try to put good role models front and center.”
Additionally, Slonim praised the fact that “we have an active Women in Computer Science group, a Society of Women Engineers group, and we just hosted the Tufts Women in Tech Conference. Faculty are very aware of this issue and talk about it frequently.”
Overall, Slonim said that the lack of women in CS is not one that is being taken lightly in the community. “There are many successful programs that do well at attracting women to CS,” she said. “Carnegie Mellon has been a leader in this area, but many different approaches have been demonstrated to work. The key is that all of them take effort and resources. At a time when universities are under siege financially, and faculty—especially in CS—are overloaded, it’s hard to devote adequate resources to this along with all the other top priorities.”
Beyond the broader world of CS, Tufts is also starting to gain a footing in the AI community, where gender equity continues to be an important issue. Sinapov was a recent robotics hire, but even so, only three CS professors focus on this area, all of them men. The department is looking to hire another, but Sinapov noted that the majority of applicants are also men. Sinapov himself focuses on service robot projects, which he aims to make “a permanent fixture of [Halligan]” so the robots can “collaborate with those in the building” and, in return, make the department more effective. This Halloween, Sinapov designed a robot that went around Halligan delivering candy and chanting classic Halloween phrases.
Professor Matthias Scheutz, another faculty member who specializes in robotics, works primarily on human-robot interaction (HRI) and how to help robots learn through experiencing the world. At his HRI lab, Scheutz is currently working on a project focused on “moral competence in computational architectures.” He explained that “designing robots to achieve moral competence naturally relies on the many layers and connections of concepts, rules, feelings, and judgments that constitute human morality.”
The concepts Scheutz studies tie directly into concerns about AI capabilities. Sinapov added that beyond the Microsoft scandal, there are many more subtle biases in “machine learning classifiers and deep learning.”
“No one necessarily wants it to be there, but it’s there,” he said. These biases stem from the data each machine’s algorithm is trained on, and counteracting them at a technological level is extremely difficult.
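To see how such bias can creep in without anyone intending it, consider a minimal sketch in Python. The tiny “corpus,” the occupation-pronoun counting, and the helper function below are all invented for illustration; they do not reproduce Tay, any Microsoft system, or any classifier Sinapov works with. The point is only that a model which learns purely from skewed text will mirror that skew.

```python
# A minimal, illustrative sketch of how a statistical model inherits bias
# from its training text. All data here is made up for demonstration.
from collections import Counter, defaultdict

# Hypothetical training sentences with a skewed pronoun/occupation pattern.
corpus = [
    "he is an engineer", "he is an engineer", "he is an engineer",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "she is an engineer",  # only one counter-example
    "he is a nurse",
]

# Count how often each pronoun appears alongside each occupation.
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    pronoun, occupation = words[0], words[-1]
    counts[occupation][pronoun] += 1

def most_likely_pronoun(occupation: str) -> str:
    """Return the pronoun this toy model has seen most often with the occupation."""
    return counts[occupation].most_common(1)[0][0]

# The "model" simply mirrors the skew in its data; no one programmed the bias in.
print(most_likely_pronoun("engineer"))  # -> "he"
print(most_likely_pronoun("nurse"))     # -> "she"
```

Real classifiers are far more complex, but the dynamic is the same: the bias lives in the training data, which is why removing it after the fact is so hard.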
The future of AI is entirely dependent on the values that society as a whole chooses to project. If we continue to be a society that is biased, prejudiced, and discriminatory in our beliefs, the AI we create will mirror those values. Without initiatives and support for attracting and retaining marginalized students in computer science, and eventually in AI development, we risk perpetuating problematic technology that further oppresses marginalized communities.