Every time a new application of AI is announced, I feel a short-lived rush of excitement — followed soon after by a knot in my stomach. This is because I know the technology, more often than not, hasn't been designed with equity in mind.
One system, ChatGPT, has reached 100 million unique users just two months after its launch. The text-based tool engages users in interactive, friendly, AI-generated exchanges with a chatbot that has been developed to speak authoritatively on any subject it's prompted to address.
In an interview with Michael Barbaro on The Daily podcast from the New York Times, tech reporter Kevin Roose described how an app similar to ChatGPT, Bing's AI chatbot, which is also built on OpenAI's language-model technology, responded to his request for a suggestion on a side dish to accompany French onion soup for Valentine's Day dinner with his wife. Not only did Bing answer the question with a salad recommendation, it also told him where to find the ingredients in the supermarket and the quantities needed to make the recipe for two, and it ended the exchange with a note wishing him and his wife a wonderful Valentine's Day — even adding a heart emoji.
The precision, specificity, and even charm of this exchange speak to the apparent accuracy and depth of knowledge driving the technology. Who would not believe a bot like this?
Bing delivered this information by processing the words in Roose's prompt — especially "French onion soup" and "side" — and then generating, one word at a time, the response its model rated most likely to follow. Those predictions come from large language models developed by engineers working for OpenAI.
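That generation loop can be illustrated with a toy sketch. A real model scores every possible next word using a neural network with billions of parameters, but the core procedure is the same: predict the likeliest continuation, append it, and repeat. The vocabulary and probabilities below are invented purely for illustration — nothing here is OpenAI's actual model.

```python
# Toy illustration of next-word prediction. The "model" is just a lookup
# table from the last two words to a probability distribution over the
# next word; all entries are made up for this example.
TOY_MODEL = {
    ("a", "side"): {"salad": 0.6, "bread": 0.3, "soup": 0.1},
    ("side", "salad"): {"recipe": 0.7, "bowl": 0.3},
}

def next_token(context, model):
    """Return the most probable next word given the last two words."""
    choices = model.get(tuple(context[-2:]), {})
    if not choices:
        return None  # the toy model has no continuation for this context
    return max(choices, key=choices.get)

def generate(prompt, model, max_tokens=5):
    """Repeatedly append the likeliest next word — no fact-checking step."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        token = next_token(tokens, model)
        if token is None:
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate("suggest a side", TOY_MODEL))
# → "suggest a side salad recipe"
```

Note what the loop lacks: at no point does it verify anything against the world. It only emits whichever continuation scores highest, which is why fluent output can still be wrong.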
In 2020, members of the OpenAI team published an academic paper that states their language model is the largest ever created, with 175 billion parameters behind its functionality. Having such a large language model should mean ChatGPT can talk about anything, right?
Unfortunately, that's not true. A model this size needs inputs from people across the globe, but it will inherently reflect the biases of those who wrote them. This means the contributions of women, children, and other people marginalized throughout the course of human history will be underrepresented, and this bias will be reflected in ChatGPT's output.
Earlier this year I was a guest on the Karen Hunter Show, and she noted that, at the time, ChatGPT could not answer her specific question — whether artist Bessie Smith influenced gospel singer Mahalia Jackson — without additional prompting that introduced new information.
While the bot could provide biographical information on each woman, it could not reliably discuss the relationship between the two. This is a travesty, because Bessie Smith is one of the most important blues singers in American history: she not only influenced Jackson but is credited by musicologists with laying the foundation for popular music in the United States. She is said to have influenced hundreds of artists, including the likes of Elvis Presley, Billie Holiday, and Janis Joplin. Yet ChatGPT still could not provide this context for Smith's influence.
This is because one of the ways racism and sexism manifest in American society is through the erasure of the contributions Black women have made. For musicologists to write widely about Smith's influence, they would have to acknowledge that she had the power to shape the behavior of white people and culture at large. This challenges what author and social activist bell hooks called the "white supremacist, capitalist, patriarchal" values that have shaped the United States.
Therefore Smith's contributions are minimized. As a result, when engineers at OpenAI trained the ChatGPT model, they appear to have had limited access to information on Smith's influence on contemporary American music. That gap became clear in ChatGPT's inability to give Hunter an adequate response, a failure that reinforces the minimization of Black women's contributions as a music industry norm.
In a more contemporary example exploring the potential influence of bias, consider the fact that, despite being the most celebrated Grammy winner in history, Beyoncé has never won for Record of the Year. Why?
One Grammy voter, identified by Variety as a "music business veteran in his 70s," said he did not vote for Beyoncé's Renaissance as Record of the Year because the fanfare surrounding its release was "too portentous." This opinion, unrelated to the quality of the album itself, contributed to the artist continuing to go without Record of the Year recognition.
Looking to the future from a technical perspective, imagine engineers developing a training dataset for the most successful music artists of the early 21st century. If status as a Record of the Year Grammy award winner is weighted as an important factor, Beyoncé might not appear in this dataset, which is ludicrous.
Oversights of this nature infuriate me because new technological developments are purportedly advancing our society — they are, if you are a middle class, cisgender, heterosexual white man. However, if you are a Black woman, these applications reinforce Malcolm X's assertion that Black women are the most disrespected people in America.
This devaluation of the contributions Black women make to wider society impacts how I am perceived in the tech industry. For context, I am widely considered an expert on the racial impacts of advanced technical systems, regularly asked to join advisory boards and support product teams across the tech industry. In each of these venues I have been in meetings during which people are surprised at my expertise.
This is despite the fact that I lead a team that endorsed and recommended the Algorithmic Accountability Act to the U.S. House of Representatives in 2019 and again in 2022, and the language it includes around impact assessment has been adopted by the 2022 American Data Privacy and Protection Act. Despite the fact that I lead a nonprofit organization that has been asked to help shape the United Nations' thinking on algorithmic bias. And despite the fact that I have held fellowships at Harvard, Stanford, and the University of Notre Dame, where I considered these issues.
Despite this wealth of experience, my presence is met with surprise, because Black women are still seen as diversity hires and unqualified for leadership roles.
ChatGPT's inability to recognize the impact of racialized sexism may not be a concern for some. However it becomes a matter of concern for us all when we consider Microsoft's plans to integrate ChatGPT into our online search experience through Bing. Many rely on search engines to deliver accurate, objective, unbiased information, but that is impossible — not just because of bias in the training data, but also because the algorithms that drive ChatGPT are designed to predict rather than fact-check information.
This has already led to some notable mistakes.
It all raises the question, why use ChatGPT?
The stakes in mistakes like these are low, but consider the fact that a judge in Colombia has already used ChatGPT in a ruling — a major area of concern for Black people.
We have already seen how the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm in use in the United States has predicted Black defendants would reoffend at higher rates than their white counterparts. Imagine a ruling written by ChatGPT using arrest data from New York City's "Stop and Frisk" era, when 90 percent of the Black and brown men stopped by law enforcement were innocent.
If we acknowledge the existence and significance of these issues, remedying the omission of voices of Black women and other marginalized groups is within reach.
For example, developers can identify and address training data deficiencies by contracting third-party validators, or independent experts, to conduct impact assessments on how the technology will be used by people from historically marginalized groups.
Releasing new technologies in beta to trusted users, as OpenAI has done, also could improve representation — if the pool of "trusted users" is inclusive, that is.
In addition, the passage of legislation like the Algorithmic Accountability Act, which was reintroduced to Congress in 2022, would establish federal guidelines protecting the rights of U.S. citizens, including requirements for impact assessments and transparency about when and how the technologies are used, among other safeguards.
My most sincere wish is for technological innovations to usher in new ways of thinking about society. With the rapid adoption of new resources like ChatGPT, we could quickly enter a new era of AI-supported access to knowledge. But using biased training data will project the legacy of oppression into the future.
Mashable Voices columns and analyses reflect the opinions of the writers.