The Future of Design is Nigh(ther) Good nor Bad

Faith Dismuke
Mar 19, 2021

Design and technology have the potential to eliminate or automate away complex problems like inequality. I had the pleasure of attending the Looking at the Future of Design event hosted by Patricia Reiners, an innovation designer who focuses on XR technologies. In her talk, Reiners discussed five trends and technologies that could benefit UX in the future. Around the same time, I also had the pleasure of joining Black Austin UX to watch the award-winning film Coded Bias, a documentary created by Joy Buolamwini, a computer scientist and digital activist at the MIT Media Lab, which reveals how modern technology has been used to enforce biases. While the future of technology can be exciting, I do not want to forget that technology is often a reflection of culture, and flaws in culture can and will show up in technology.

In her event, Reiners first discussed the different ages of technology:

1200s — Marine Age (invention of steam)

1760s — Industrial Age

1986 — Information Age (invention of the microchip, which enabled other inventions like the personal calculator, the internet, the cloud, and the smartphone)

2020 — Conceptual Age (based on several groundbreaking technologies including AI, blockchain, robotics, cloud computing, 3D printing, IoT, quantum computing, brain interfaces, etc.)

The Conceptual Age is a great time for creators and empathizers, and Reiners shared some technologies that UX designers should look into. One technology is AI.

Reiners shared a quote: “If data is the new oil, AI is the new electricity.” Ultimately, AI is simply math built on one of the two ways to program a computer. The first method is akin to a recipe, where a computer is told explicitly what to do; the second involves giving a computer a lot of data and letting it infer the rules. As discussed in Coded Bias, AI was founded at Dartmouth in 1956. At that time, a small group of people in the Dartmouth math department got to decide what the field was going to be. This meant that a tiny fraction of the general population decided how intelligence would be defined, and at the time it was defined as the ability to win a chess game. It also meant that the data given to computers was going to be skewed, which eventually led to skewed results. Ignoring other forms of intelligence and leaving future technology for the many in the hands of the few made AI’s origin story a setup for conscious and/or unconscious bias.
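To make the contrast between the two methods concrete, here is a minimal sketch in Python. The function names, dollar figures, and the tiny "learning" rule are all hypothetical, invented purely for illustration; real machine learning is far more elaborate. The point it demonstrates is the one above: an explicit rule says what its author wrote, while a learned rule says whatever its training history says.

```python
# Two ways to make a computer decide something (here: approve or deny a loan).
# All names and numbers are hypothetical, for illustration only.

# Method 1: a recipe -- the programmer states the rule explicitly.
def rule_based(income, debt):
    return "approve" if income - debt > 20_000 else "deny"

# Method 2: learn the rule from data -- the cutoff is inferred from
# past decisions. If those past decisions were biased, the learned
# rule silently inherits that bias.
def learn_threshold(examples):
    """examples: list of (income - debt, past_decision) pairs."""
    approved = [margin for margin, d in examples if d == "approve"]
    denied = [margin for margin, d in examples if d == "deny"]
    # midpoint between the two groups becomes the learned cutoff
    return (min(approved) + max(denied)) / 2

history = [(30_000, "approve"), (45_000, "approve"),
           (5_000, "deny"), (12_000, "deny")]
cutoff = learn_threshold(history)  # 21_000.0 for this history

print(rule_based(60_000, 10_000))               # explicit rule decides
print("approve" if 25_000 > cutoff else "deny")  # learned rule decides
```

Feed `learn_threshold` a different (say, historically discriminatory) set of decisions and the cutoff moves with it; nobody ever wrote the bias down, yet the program reproduces it.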

Since 1956, with the increased availability of AI technology and more data to feed machine learning, AI has become integrated into nearly every tool and platform. As a designer, Reiners shared benefits of using AI in scenarios like workflow automation. For instance, the technology allows design decisions to be adjusted and adapted while reducing repetitive tasks. The implementation of AI can be exciting. One use is pattern recognition, where interfaces adapt and personalize based on the user’s needs; an example is the NaturalAI app, which has different functions but no menu because it only surfaces what the user asks for. However, other uses of AI, as shown in Coded Bias, indicate that AI can be globally problematic in certain instances.

Coded Bias discusses the use of AI on a global scale. In the UK, biometric photos are saved to police databases much as fingerprints are. While this may be well-intentioned as a way to track criminal activity, the technology is often rolled out without proper research and with little framework or oversight. As stated in the film, this is the equivalent of the police picking up a new toy and seeing what happens. However, as quoted in the film, “You can’t experiment with people’s rights.”

Other geographical locations were also mentioned in the insightful film. In Hong Kong, facial recognition technology was used to track dissidents during protests. In China, the use of AI for tracking is known to the general population and provides a sort of social ranking and a means of enforcing social order. While these methods may feel invasive and controlling to a United States resident looking out at other places, the reality is that AI used for invasive and often unethical ends is prevalent here too, just not conveyed to the public as explicitly as in China.

“We punish poor people and elevate rich people, and technology has automated that.”

“Racism is being mechanized.”

“The future is already here. It’s just not evenly distributed.”

“The past dwells within our algorithms.”

These poignant quotes from Coded Bias best describe how AI has been used in the United States. AI may not be inherently ugly, but our history is ugly in many ways toward many people. A prominent example of how our ugly history makes for ugly technology is Tay.ai, Microsoft’s experimental Twitter account that used AI to formulate tweets. It unfortunately did not take long for Tay.ai to learn to create harmful tweets based on comments from other users.

For many of us, our most prominent experience with AI is likely commercial: ads in our social media feeds or face recognition in iPhone photos (one of my close friends is constantly mislabeled as me in my photos with my partner). However, AI has also been used to shape people’s lives, from mortgage approval to job offers to college admission.

For instance, AI has been used as an evaluation model for teachers, where algorithmic scores can determine a teacher’s tenure. In some US locations, like Houston, TX, these scores have also been used to fire teachers. While this may seem like a well-intentioned attempt to standardize teacher ranking and performance, the algorithm has been shown to be flawed: teachers who have won teaching awards were scored as bad teachers, and teachers were fired without any disclosure about the algorithm that cost them their jobs. This has led to lawsuits over the violation of due process rights.

In another instance, Amazon came under fire for AI used in resume filtering, where resumes with female names or indications of gender, like membership in women’s sports or women’s organizations, were automatically filtered out. This is not Amazon’s only problem with biased technology. Amazon was also set to release its Rekognition video software even though it had been shown to have gender and racial biases. In yet another instance, risk assessment software used to determine an individual’s crime risk was shown to be racially biased against Black people.

I always believed that inequality around technological advancement came primarily, if not solely, from unequal access to technology. After watching Coded Bias, I realized I was wrong. While access to technology is still imbalanced in many ways, there is also the concern that the most invasive technology gets deployed in poorer communities first. The film highlighted a case where invasive facial recognition was used in a New York City apartment building. Recently, the NYC police used an AI-powered robotic dog to investigate a crime scene, a move criticized by Congresswoman AOC.

NYC DigiDog

“Can machines ever know my grandmothers, as I knew them?” — Joy Buolamwini

I cannot answer Buolamwini’s question. What I do know is that, as a designer, it is my responsibility to use empathy and research to design interfaces and products for our grandmothers, our children, and our national and international neighbors, not against them. Reiners shared other technology trends that must also be examined through a UX lens. Two in particular, extended realities and automation/IoT, could help address biases in our technology and our day-to-day interactions.

Technologies like virtual reality (full immersion), augmented reality (digital content layered over the real world), and mixed reality (the real world with interactive virtual objects) give people a way to access information beyond the screen. According to a CNN article, VR technology can be used to let people “walk in other people’s shoes,” a use that has led to programs promoting empathy through VR goggles. While Reiners discussed the positives of automation in the context of automating flows to connect devices, and the sometimes unnecessary “Let’s make an app” craze, it must be noted that automation has helped eliminate biases in day-to-day experiences. For instance, the Amazon Go store uses sensors instead of cashiers to check out items, which has indirectly eliminated the anxiety that often comes with shopping while Black and interacting with “overly helpful” store employees.

There’s hope, but we are far from where we think we are. Technology can and has done great things in protecting communities, but the numbers are not in everyone’s favor. One solution could be to create a culture where information identifying gender or race is simply not needed; for example, we could eliminate names from resumes. However, as idealistic as that sounds, I realized after talking with my partner that it is too late for that. It is too late to simply even the playing field when some groups have had generations of a head start.

Sometimes as a designer I feel limited in how I can fight the ever-watching Big Brother or the deep-rooted biases sown into our culture and programmed into our technology. How can a button change or a revised content strategy make a big difference? Then I think about Facebook’s 2010 experiment. Facebook ran a simple A/B test in which the design of a post was altered: version one included images of a user’s friends who had voted, and version two did not. The results showed an increase in voter turnout large enough to influence a presidential election. What’s scary is that the general public would never have known about this experiment had Facebook not decided to share the information.
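The mechanics of such a test are simple; the power comes from scale. Here is a minimal sketch in Python of how a tiny per-user lift compounds across a huge user base. All the numbers are made up for illustration; they are not Facebook's actual figures from the 2010 study.

```python
# A/B test sketch with made-up numbers -- NOT Facebook's actual data.
# Group A saw the post with friends-who-voted faces; group B saw the
# plain version. Each group is a sample of 10,000 users.
def lift(conversions_a, n_a, conversions_b, n_b):
    """Absolute difference in conversion rate between the two variants."""
    return conversions_a / n_a - conversions_b / n_b

rate_lift = lift(2_040, 10_000, 2_000, 10_000)  # 0.4 percentage points

# Tiny per user, but projected across a whole user base it becomes
# real-world votes.
extra_votes = rate_lift * 61_000_000
print(round(extra_votes))
```

With these hypothetical numbers, a lift of 0.4 percentage points projected across 61 million users works out to roughly 244,000 extra votes, which is the scale at which a "small" design change stops being small.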

Facebook 2010 A/B testing posts

There may be details of Conceptual Age technology that still seem like a mysterious black box, and UX may not be directly involved in all of the data and programming behind machine learning and other technologies. However, our role in technology can hurt or help communities. We can either eradicate conscious and unconscious biases or perpetuate them. Letting people like Reiners share their knowledge of modern technological trends, and exposing ourselves to the work of people like Buolamwini, who boldly reveals the often ugly face behind our technology, are small but significant steps in deciding where we want to place ourselves in this Conceptual Age.


Faith Dismuke

I am a lot of things (screenwriter, UX Designer, former pro athlete, author, and fries enthusiast) and I want to share my experiences with others.