It’s the storyline of some of the best-known works of science fiction, from Mary Shelley’s “Frankenstein” to the “Terminator” series: Man-made creation runs amok, rapidly outpacing its creators’ abilities to control it. The result? Chaos.

While we’re nowhere near capable of creating time-traveling robots or resurrecting dead flesh with electricity, our technology has come disturbingly close to producing a decidedly dystopian future in less flashy but more insidious ways. To wit: misinformation spread on social media, bias in datasets that exacerbates inequality and discrimination, and a lack of transparency that makes it difficult to predict and get ahead of problems, let alone solve them.

“A lot of people forget just how much technology they use — all the different types of software and how they work,” said UC Santa Barbara art professor Sarah Rosalena Brady, who specializes in computational craft and haptic media. “Especially now that we’ve become so dependent on it, we see it as an extension of our bodies, yet we forget all the different layers that we’re navigating. And so it becomes complex with the ethics involved because there isn’t a lot of transparency.”

Representation, Misinformation

Indeed, while technology takes advantage of computers and their marvelous capacity for executing multitudes of calculations, the act of programming, to Brady, is also an act of forgetting. As data is encoded and processed, the knowledge of how it was gathered, and from whose perspective it comes, grows less obvious, and bias is created or perpetuated, whether intentionally or not.

For example, a widely used healthcare risk-prediction algorithm was found in 2019 to favor non-Black patients over Black patients because it used previous healthcare spending as a metric for assessing need. Research shows that Black patients generally spend less on healthcare due to disparities in income and access to quality care, which led the algorithm to conclude, wrongly, that Black patients needed less care.
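
The mechanics of that failure are simple enough to sketch. The short Python below is a toy illustration with made-up numbers, not the audited algorithm: both groups get an identical distribution of true medical need, one group spends less for the same need, and a program then enrolls the top decile of patients ranked by spending.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical cohort: both groups have identical distributions of true
# medical need, but "group A" spends less for the same need because of
# income and access barriers (the pattern the 2019 study describes).
need = rng.normal(50, 10, n)                   # true medical need (unobserved)
group_a = rng.random(n) < 0.5                  # True = group facing barriers
spending = need * np.where(group_a, 0.7, 1.0) + rng.normal(0, 2, n)

# The flawed design: treat spending as the risk score and enroll the
# top decile in a high-need care program.
enrolled = spending >= np.quantile(spending, 0.9)

print("share of group A overall :", group_a.mean())            # ~0.50
print("share of group A enrolled:", group_a[enrolled].mean())  # far below 0.50
# The few group A patients who do clear the bar are much sicker:
print("mean need of enrolled, group A:", need[enrolled & group_a].mean())
print("mean need of enrolled, others :", need[enrolled & ~group_a].mean())
```

Note that group membership is never a model input; the disparity rides in entirely through the proxy label, which is what makes this kind of bias so easy to miss.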

Herein lies the rub: Some of the most powerful and complex computational systems, such as artificial intelligence (AI), are black boxes. Trained to recognize patterns in large datasets, machine learning models make predictions based on connections between data points and outcomes in a highly iterative process. Each iteration adds a layer of complexity that refines those predictions while also making the model’s reasoning more difficult to trace.
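
That layering is easy to see in a toy example. The sketch below is a minimal gradient-boosting loop over one-variable threshold rules on synthetic data, with all names and numbers invented for illustration: each pass fits a new rule to whatever the model still gets wrong, so the fit improves while the pile of stacked rules grows beyond easy inspection.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 500)
y = np.sin(x) + rng.normal(0, 0.1, 500)   # toy target to learn

def fit_stump(x, residual):
    """Find the single threshold rule that best fits the residual."""
    best = None
    for t in np.linspace(-3, 3, 61):
        mask = x <= t
        left = residual[mask].mean() if mask.any() else 0.0
        right = residual[~mask].mean() if (~mask).any() else 0.0
        err = ((residual - np.where(mask, left, right)) ** 2).mean()
        if best is None or err < best[0]:
            best = (err, t, left, right)
    return best[1:]

# Boosting: every iteration adds one more rule that corrects what the
# ensemble so far gets wrong. Accuracy improves; interpretability fades.
ensemble, pred, lr = [], np.zeros_like(y), 0.1
for _ in range(200):
    t, left, right = fit_stump(x, y - pred)
    ensemble.append((t, left, right))
    pred += lr * np.where(x <= t, left, right)

print("rules stacked in the model :", len(ensemble))   # 200 if/else rules
print("mean squared training error:", ((y - pred) ** 2).mean())
```

A single threshold rule can be read aloud; two hundred of them stacked and weighted cannot, even though each step was simple.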

A version of this problem exists in the realm of geographic information systems (GIS), another field that handles massive amounts of current and historical data. And it has some serious social repercussions.

“Some specific problems at the top of my personal list are redlining, gerrymandering, environmental racism and the lack of COVID testing and vaccination sites within communities of color,” said geographer Dawn Wright, a UC Santa Barbara alumna and chief scientist of Esri, the world’s leading supplier of GIS software.

Maps can perpetuate these problems, researchers say. The information presented in maps often inherits longstanding biases, such as colonial and racist place names that obscure or diminish the presence of Indigenous peoples, or the stigmatization of people through their association with certain locations, as in the redlining of predominantly Black neighborhoods in the 1930s.

“Maps are often interpreted as social constructions that represent the political, commercial and other agendas of their makers,” Wright and UCSB geographers Trisalyn Nelson and Michael Goodchild said in a study published in the Proceedings of the National Academy of Sciences.

As if bias weren’t enough, we also have to contend with active forms of misrepresentation, such as the spread of misinformation.

“If you look at Twitter or any other social media, you’ll find that a large percentage of the actual posts you see are not written by humans; they are actually written by bots,” said William Wang, a professor of computer science. In this case, people with dishonorable intentions take advantage of social media’s wildfire-like spread of context-free information to inject lies that become part of the discourse. A 2018 study by the Knight Foundation found that in the month before the 2016 elections, more than 6.6 million tweets linked to fake news and conspiracy news publishers, the majority of them the result of automated posts.
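
How do researchers spot such automation? Approaches vary, and the crude sketch below is only a hypothetical illustration, not the Knight Foundation’s method: it scores each account on posting tempo and on how often it repeats the same text, two behavioral signals commonly cited in bot-detection work, with cutoffs invented for the toy data.

```python
from collections import defaultdict

def automation_signals(posts):
    """Score accounts on two crude automation signals.

    `posts` is a list of (account, text, unix_timestamp) tuples.
    Returns {account: (posts_per_hour, duplicate_text_ratio)}.
    """
    by_account = defaultdict(list)
    for account, text, ts in posts:
        by_account[account].append((text, ts))

    signals = {}
    for account, items in by_account.items():
        texts = [text for text, _ in items]
        times = sorted(ts for _, ts in items)
        span_hours = max((times[-1] - times[0]) / 3600, 1e-9)
        rate = len(items) / span_hours                 # posting tempo
        dup = 1 - len(set(texts)) / len(texts)         # share of repeats
        signals[account] = (rate, dup)
    return signals

# Toy data: a human-paced account vs. one blasting an identical link.
posts = [("human", f"original thought #{i}", i * 3600) for i in range(5)]
posts += [("bot", "Read this! http://hoax.example", i * 30) for i in range(40)]

for acct, (rate, dup) in automation_signals(posts).items():
    flagged = rate > 20 or dup > 0.8                   # illustrative cutoffs
    print(f"{acct}: {rate:.1f} posts/hr, {dup:.0%} duplicate -> flagged={flagged}")
```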

Building Trust, Transparency and Diversity

For scholars and researchers alike, the development of more ethical technology requires an examination of our assumptions: Where and how do we get our information and what is the context?

This is one of the big questions Brady addresses in her art, fusing weaving techniques with AI to make tangible the invisible complexities and stealth operations that underlie some of our most powerful technologies, and to present a critique of their origins.

Her textiles series “Above Below,” for instance, is based on satellite images generated from the Mars Reconnaissance Orbiter. While the textiles are stunning in their futuristic depictions of the red planet, the materials and techniques are decidedly Indigenous, offering a look at Mars exploration through the perspective of people who have historically borne the brunt of such expeditions in the name of progress. The Earth’s blues seep into Mars’ reds, signaling the planet’s gradual takeover by the logic of colonization: the extraction and commodification of resources, the treatment of new terrain as property to be divided among, and accessible only to, the rich. It’s a cautionary tale.

Back on Earth, Wang, who leads UCSB’s Center for Responsible Machine Learning, tackles the fake fruits of technology with a shift toward transparency.

“My students and I have been working on how to decompose the concept of transparency into some workable definitions,” he said.

Among the measures: trust scores, algorithms that rank web content based on the trustworthiness and reliability of the source, and content moderation, in which actual people evaluate questionable content on social media sites.
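
As a rough illustration of the first idea, the sketch below blends a per-source reliability prior with a couple of content-level signals. Every source name, weight and threshold here is an assumption made up for the example; none of it reflects Wang’s group’s definitions or any deployed system.

```python
# Hypothetical per-source reliability priors in [0, 1]; a real system
# would learn these from fact-checking data, not hard-code them.
SOURCE_TRUST = {
    "established-news.example": 0.9,
    "personal-blog.example": 0.5,
    "known-hoax-site.example": 0.1,
}

def trust_score(source: str, corroborating_sources: int,
                flagged_by_moderators: bool) -> float:
    """Blend source reliability with simple content-level signals.

    All weights below are illustrative assumptions, not a published
    or deployed scoring formula.
    """
    base = SOURCE_TRUST.get(source, 0.3)     # unknown sources start low
    corroboration = min(corroborating_sources, 5) / 5 * 0.3
    penalty = 0.4 if flagged_by_moderators else 0.0
    return max(0.0, min(1.0, 0.7 * base + corroboration - penalty))

items = [
    ("established-news.example", 4, False),
    ("personal-blog.example", 1, False),
    ("known-hoax-site.example", 0, True),
]
for src, corrob, flagged in sorted(items, key=lambda it: -trust_score(*it)):
    print(f"{trust_score(src, corrob, flagged):.2f}  {src}")
```

The design choice worth noting is that the source prior dominates: corroboration can lift a score only so far, and a moderator flag can sink it, which mirrors the pairing of algorithmic scoring with human review.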

While we increasingly trust data and computers to do the heavy lifting of modern life — from choosing what to watch next, to driving, to making life-altering decisions — in the end, we are the heroes we’re looking for.

“What I see in the future is not just a human receiving information from AI,” Wang said. “It’s a virtuous circle in which the human is trying to adapt to the technology, but also giving feedback about how to improve the technology.”

Similarly, according to Wright, humans — the more diverse, the better — are key to ensuring that our technology reflects the best of us, as we grapple with current issues and with ones yet to emerge. With that in mind, she said, Esri is leveraging the power of maps, apps and geographic information systems to promote social and racial equity. Its Racial Equity and Social Justice Unified Team is “building apps that communities can actually use to potentially reimagine what public safety looks like, neighborhood by neighborhood.”

“Regardless of the dataset or the huge problem that affects people on a day-to-day basis,” Wright said, “it’s the diversity of the team involved that increases the likelihood that the solution is in the room.”
