Losing Control Project Documentation

Titilayo Funso
5 min read · Apr 28, 2020


[Losing Control project thumbnail image]

A message from the writer.

My name is Titilayo. I’m a first-generation Nigerian-American living in Atlanta, GA, and I study Human-Computer Interaction at Georgia Tech. While taking the course ‘Computing and the Anthropocene’, I learned about the Anthropocene, agency, slow violence, and A.I. ethics. Unfortunately, the more I learned about these topics, the less control I felt I had over my own future. I realized that many significant decisions about technology adoption that will directly impact my life will be made almost entirely without my input. I started to think about how this lack of control might manifest in my life, much as a lack of agency manifests today in the lives of many vulnerable people living in the ‘global south’. I saw myself in their shoes, as the future’s vulnerable person.

For my studio project, I thought about what I could design to shift perspectives on how much control vulnerable people have over the implementation of artificial intelligence (A.I.) technology. The activity led me to use existing stories of slow violence in marginalized communities as a template for what slow violence may look like for vulnerable populations in the future, as A.I.-powered technologies become more ubiquitous. I designed a publication set in the year 2024 to inspire more conversations about A.I. ethics, preparedness, humanity, the Anthropocene, and inclusive dialogue.

Questions to consider as you read: If you were the article’s audience in 2024, what aid would you hope for? Is the help you’d hope for the same aid you’d expect a future reader of the article to receive?

Losing Control Medium article:
https://medium.com/@tjfunso_98576/a-timeline-of-the-a-i-drone-attacks-e28f61b60909

Process Documentation Link:

https://docs.google.com/presentation/d/1utT8XAViYGpp6S1hGDyNTE6R7h0Jpc3fG-BoJFbn-80/edit?usp=sharing

3–5 supplementary project images:

35–50-word summary:
This project presents an alternative future in which a population of Black people is made vulnerable by an autonomous weapon system. It speculates on the environmental impact of an artificial intelligence disaster and on governments’ responsibility to plan sustainable responses when they fail to protect their citizens from A.I. harm.

200–300-word description:
An article about the COVID-19 outbreak is used as an analogy for what the spread of an A.I. disaster might look like in the year 2024. A piece of journalism shows a timeline of how the outbreak originates and of the devastation it brings, both to humanity and to the environment. In this future, vulnerable populations, made refugees by A.I.-powered autonomous weapon systems, are relocated in a manner that preserves each refugee’s dignity and humanity. Unfortunately, this responsible relocation does not happen until countless lives have been lost to mishandling of the disaster response by the governments initially involved.

The body of the article is written in West African Pidgin English because the people most directly affected by the killer drones are Nigerians. The language choice is intentional, inviting a nontraditional audience into a conversation about the intersection of the Anthropocene and computing. Nigeria is also intentionally selected as the initial setting to build on existing narratives of slow violence against the Nigerian people, which is often a byproduct of bad government deals made to secure large contracts with powerful international players. English headlines are woven into the piece to reach a more mixed audience, using a series of sensationalized headlines to build the disastrous future-scape in a somewhat nostalgic way. This story highlights the grim ‘reality’ of vulnerable people caught in the crossfire between logical intentions and less-than-optimal outcomes. The article underscores how both ordinary civilians and the environment depend on government systems for protection.

In the face of expanding applications of artificial intelligence, it’s important to explore the possibility of system failure. In this story, because the autonomous weapon system is powered by algorithms trained on historical data that fails to properly represent people of color, an intrinsic bias leads to an attack by A.I. drones.
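To make that mechanism concrete, here is a minimal, synthetic sketch (every group, feature, and number below is hypothetical, and this is not code from the project) of how a classifier trained on data that under-represents one group of civilians, whose features happen to overlap the ‘target’ class, ends up flagging that group at a far higher false-positive rate:

```python
# Purely illustrative, synthetic example -- not part of the project.
# All group names, feature values, and sample sizes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Label 0 = civilian (should never be targeted); label 1 = "intended target".
# Group A civilians: well represented, features far from the target class.
X_a = rng.normal(0.0, 1.0, size=(1000, 2))
# Group B civilians: badly under-represented, features closer to targets.
X_b = rng.normal(2.0, 1.0, size=(50, 2))
# The "targets" the system was trained to flag.
X_t = rng.normal(3.0, 1.0, size=(1000, 2))

X = np.vstack([X_a, X_b, X_t])
y = np.concatenate([np.zeros(1050), np.ones(1000)])

clf = LogisticRegression().fit(X, y)

# False-positive rate = fraction of civilians flagged as targets, per group.
print(f"group A false-positive rate: {clf.predict(X_a).mean():.1%}")
print(f"group B false-positive rate: {clf.predict(X_b).mean():.1%}")
# Group B's rate is far higher: the model saw too few group B civilians
# to learn that they are civilians, so it flags many of them as targets.
```

The bias here requires no malice: under-representation alone is enough for the model to write off an entire group as indistinguishable from its targets, which is exactly the kind of quiet, systemic failure the story dramatizes.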

In the piece, the drones are originally tasked with asset surveillance at a large Chinese petroleum extraction site. Due to a programming error, thousands of Nigerians are shot by the drones when the deep-learning algorithm incorrectly marks them as the intended targets. After weeks of bloodshed, the United States assumes responsibility for sheltering both international and domestic A.I. refugees from the killer drones rather than leaving the people to fend for themselves. Officials roll out a progressive response to the A.I. refugee crisis, designed to contrast with existing failures in refugee management.

In a way, the storyline attempts to humanize vulnerable people who are often overlooked by making the perpetrator a type of antagonist that people living in the developed world can also fear. This time the killer isn’t an opposing ethnic group thousands of miles away, or an ailment that is typically a byproduct of extreme poverty. This time the bad guy is a highly sophisticated global terrorist that happens to be both intelligent and non-living. The reaction to this disaster is a statement about how the developed world needs to reckon with the new reality of shared violence.
