
“Under Sampled Majority”

Hey all!

We’re almost halfway through the semester (I think) and we’ve been deliberately taking our time. Please DM me if you need any support or feel lost or just want to say hello. I’m here for you and want this asynchronous space to still feel human.

I want to jump back into thinking critically about the fields inside engineering. This goes for everyone, but especially for Computer Science majors — have you considered the ways in which your field carries bias? The ways your field has a profound impact on how society is shaped?

I’m not sure if these questions are being raised in your Grove courses (I hope they are! Tell me if they are!), and since we’re considering both rhetoric and composition, these questions must be taken into account.

For this week, I would like you to watch this 13-minute talk by Dr. Joy Buolamwini about facial recognition and what happens when the sample set skews white and male.

For the module comment, I would like you to consider the following:

Take note of 2-3 rhetorical issues Dr. Buolamwini raises that speak to you. For me, it was her reframing of the “under-sampled majority” as a way to think about who is represented in most technological spaces and who is erased. So often we say “minority” when speaking about people of the global majority who are not white, and that default standard creates an intentional bias with real implications (think policing, think community funding, think incarceration rates).

Have you ever considered algorithmic bias when using your devices?

What are some ways we can shift the dominant data set?

If you have an experience of algorithmic bias that you want to share, I welcome it in this space, but it is not required. Or if you want to connect your experience to Dr. Buolamwini’s examples, I think that would be fantastic.
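For the CS majors curious how an audit like Dr. Buolamwini’s intersectional accuracy study is actually computed, here is a minimal sketch in Python. The records and the numbers it prints are entirely hypothetical, not her data; the point is just the mechanic of grouping results by skin tone and gender instead of reporting one aggregate accuracy:

```python
# Minimal sketch of an intersectional accuracy audit.
# Each record is (skin_tone, gender, was_prediction_correct).
# The data below is made up purely for illustration.
from collections import defaultdict

records = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

totals = defaultdict(int)   # how many faces in each subgroup
correct = defaultdict(int)  # how many the system got right

for tone, gender, ok in records:
    key = (tone, gender)
    totals[key] += 1
    if ok:
        correct[key] += 1

# A single overall accuracy would hide the gap between subgroups;
# breaking it out per (tone, gender) pair is what exposes the bias.
for key in sorted(totals):
    acc = correct[key] / totals[key]
    print(f"{key[0]:7s} {key[1]:6s} accuracy = {acc:.0%}")
```

Run on the toy data above, the overall accuracy looks middling, but the per-subgroup breakdown shows the system is perfect for one group and fails entirely for another, which is exactly the pattern the audits revealed.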

Thanks everyone for staying engaged and enjoy the rest of your week!


  1. This week’s module brought up a topic that I had not been exposed to before. Many of these artificial intelligence systems use the white population as their test samples and push other skin tones to the bottom of the list of importance. We can also bring up the fact that most computer science graduates are white, which would explain why facial recognition AI designed largely by white people works better for white people. The lack of representation from other communities in the computer science field is alarming as well; we should be working to bring CS to these communities so the field is not so heavily dominated by the white population. If we have people of color working on these AIs, we are far more likely to get better results for people of color. Another aspect of this week’s module that caught my eye was how some people were being mismatched with criminal suspects because of how the AI was designed. It seems extremely unjust for the “minority” to be falsely identified as criminals due to their skin tone and gender.

  2. I had no idea about the ongoing bias in our daily technical world. Most of the time, when a device fails to process or scan our records (SSN, resumes, transcripts, Face ID, touchpad ID, etc.), we tend to think of it as a technical glitch. It may be a technical issue, but one can also argue that these types of glitches occur because the system was not provided with a “sufficient” amount of data, and only one type of data set was prioritized. In simple words, the system was designed for, or against, certain ethnic or gender groups. This type of bias in the AI world does not provide equal opportunities for everyone in our society. Dr. Buolamwini raised some interesting issues here. She explained how the “laws of physics” had not changed in a year, but feeding the AI system proper and complete data for each and every social group brought out improved results.
    Login.gov is a government platform that verifies an individual’s identity based on the personal information (SSN, phone number, or address) entered by that person. After living in the United States for over a decade, I found that Login.gov could not verify my identity. This is something I experienced only recently.

  3. There are two things that I found interesting and concerning at the same time. First is the fact that one in two adults in the USA, over 130 million people, have their faces registered in a flawed face recognition system. This means there is a good chance that many false matches could come up on a random criminal records search, which could lead to unnecessary searches and hassle, and which is technically a violation of the right to privacy (Fourth Amendment) to an extent. The other is the comparison between the two “intersectional accuracy” audits of Microsoft, IBM, and Face++. The first audit showed higher accuracy in detecting lighter-skinned people over darker-skinned people, and males over females. In the second audit, the accuracy for the darker and female classes improved, but it shows how ignorant these companies had been about the problem, and that could be catastrophic on many levels, as it raises questions about their AI integrations.
    I think one way to shift the bias and class-based dominance would be to widen the range of races, ethnicities, and nationalities that a company samples. Most American companies will prefer to gain as much data as possible from Americans, but a company like Microsoft is a global company that needs to appeal to diversity, which would help diversify its AI and make for a better implementation.

  4. After viewing Dr. Buolamwini’s video, it has become clear that there is algorithmic bias in the technical world. In Dr. Buolamwini’s words, “coded gaze is the reflection of the priorities, preferences, and at times prejudices of those who have the power to shape technology,” and this issue has real-world implications. For example, being detected by technology that can be used for mass surveillance can lead to false accusations of crimes. False-positive match rates are over 90%; countless innocent people are being linked to criminals. Companies can claim to be aiding in criminal cases, but they may be hurting more people through their false claims. Another issue that stood out to me was the concept of the “under-sampled majority.” The world of AI, like many other things, seems to be tailored to fit the white male. Facial recognition accuracy for white males is near perfect, while women and men of color get far less accurate results. This group of people is often referred to as the “minority,” even though they make up most of the world’s population. Like many things, AI needs significant improvement to dissolve its algorithmic bias.

  5. In the video, Dr. Buolamwini asked the question, “who has access to freedom?” If I had heard this question before learning about Dr. Buolamwini’s research, I would have said that everyone has access to it. But after learning that people may be misidentified by AI technology and labeled as criminals, an act that may put them in jail for who knows how long, my answer changed. I realized that not everyone has access to freedom. In the case of criminal cases and facial identification, for example, a person’s freedom is in the hands of the companies who create the facial recognition technologies. Whether or not they end up in jail due to being labeled a criminal depends on how well the facial recognition software is able to identify them.
    Dr. Buolamwini’s research indicated that the companies she studied were able to improve their accuracy within the span of one year. The numbers imply that the companies were able to improve their work and have the potential to improve it further. However, in order to get these companies to create change in the first place, it is up to us to notice which areas contain bias. Dr. Buolamwini was able to get companies to make updates because she was able to identify the bias in their facial recognition software. A bias that, having gone unidentified for a long period of time, has now become a threat, since “AI is now serving as the gatekeeper for current jobs.” Her research is not only meant to inform us about an area that she found to be important, but also to inform us that we need to be paying attention to the notion of bias. When we consider Computer Science, we typically picture a field of study that consists largely of facts and little to no bias. And so, to see bias present in facial identification comes as a shock to us. However, as mentioned in a previous module, science has bias and isn’t objective. Identifying the bias helps identify problems that we may not have known about before. And identifying these problems helps lead to a search for solutions.

  6. After watching Dr. Buolamwini’s video, I recalled seeing an article saying that facial recognition technology can be a tool for racism. The article reported that three innocent citizens were arrested because of an error in a facial recognition program during the protests following the George Floyd incident. At the time, I only thought, ‘facial recognition technology is not perfect yet,’ but through this video, I first learned that this technology has fewer errors only for white men. So when I read the article again, I noticed that all three people arrested were Black men. This really was racism, and I think it’s a very serious problem for innocent people to suffer. In the video, she said companies reduced errors a lot in a year. However, I think they should abandon prejudice, collect more data, and raise accuracy for every race and gender toward 100%, because another innocent person could be harmed.
    In addition, we have entered the era of the contactless society due to COVID-19, and facial recognition technology is being used in various ways. I believe that all technologies are developed with good intentions, but someone can always abuse them. I recently watched a documentary about deep-fake phishing, and while watching Dr. Buolamwini’s video, I couldn’t shake the fact that many people are suffering from deep-fake phishing that uses facial recognition technology. I hope this technology becomes accurate enough that it no longer harms innocent people, that we become able to catch deep-fakes with that same accuracy, and that it safely enters our daily lives without causing privacy violations.

  7. This week’s video was very eye-opening. It truly shows the importance of advocacy and inclusion. I really appreciate that this class opens a perspective that is not talked about much in any of my other classes. In connection with algorithmic bias, another term comes to my mind: “techno-racism.” Briefly, it’s racism encoded into technology, and algorithmic bias falls under that category. But we could also bring up a recent example connected to COVID-19 that is under the umbrella of biomedical engineering: oximeters. The little devices you can purchase and clip on your finger to measure the oxygen saturation of your blood. They are less accurate on people of color, so much so that the FDA has issued a warning about it (https://www.mobihealthnews.com/news/fda-warns-pulse-oximeters-less-accurate-people-darker-skin). The problem is not with physics, but that engineering is not inclusive enough, and unwittingly (or wittingly), engineers code racism into their products. It’s a new aspect that our generation should watch out for, to make sure we are inclusive with our designs and products.

  8. The technology Dr. Buolamwini talked about is Artificial Intelligence. The issue she mentioned in the video is facial recognition, which runs on AI, and the base of this technology is programming. This technology was initially built with the question of how it would recognize a fairer-skinned male in mind. Maybe later they added more code so that it could work efficiently on people of other genders and skin tones, but it depends on how much importance the company gives those people while improving the technology. Facial recognition still has many drawbacks when used as a security lock for a device. The technology unlocks the device by recognizing your face, so the question arises: what happens if someone has an identical twin and both of them look the same? Another issue happened to me: my Pixel phone could be unlocked even when my eyes were closed, which means someone could unlock my phone while I was in a deep sleep. That was when Google first introduced the facial recognition system on their phones; however, they fixed it in a later security update.
