As artificial intelligence (AI) and automation become increasingly integrated into our everyday lives, conversations about tech ethics are moving from academic journals to news feeds. But despite the rising impact of AI on hiring, healthcare, education, and criminal justice, many people still do not fully engage with the ethical questions behind the code. Not everyone needs to be an expert, but everyone who uses technology should understand at least three key tech ethics issues and why they matter.

Algorithms are built on data, and data reflects the real world and its real biases. This means that even the most advanced AI models can reproduce or amplify biases related to race, gender, or socioeconomic status. For example, a 2018 study by Joy Buolamwini and Timnit Gebru found that commercial facial recognition systems were significantly less accurate for women with darker skin tones. Other algorithms, like those used for predictive policing or credit scoring, have also shown patterns of disparate impact, where outcomes unintentionally favor certain groups over others. AI has become the go-to tool for high-stakes decisions such as loan approvals, job screening, and law enforcement, and even small biases can harm real people when these systems are working against them.
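To make “disparate impact” more concrete, here is a minimal sketch in Python, using made-up numbers rather than data from any real system: compare the rate at which two groups receive a favorable outcome and take the ratio, a simplified version of the “four-fifths rule” used in US employment-discrimination guidance.

```python
# Hypothetical screening outcomes: 1 = approved, 0 = rejected.
# These numbers are invented for illustration only.
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% approved
}

def selection_rate(decisions):
    """Share of people in a group who received the favorable outcome."""
    return sum(decisions) / len(decisions)

rate_a = selection_rate(outcomes["group_a"])
rate_b = selection_rate(outcomes["group_b"])

# Disparate impact ratio: the less-favored group's rate relative to
# the more-favored group's rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Group A approval rate: {rate_a:.0%}")
print(f"Group B approval rate: {rate_b:.0%}")
print(f"Disparate impact ratio: {ratio:.2f}")

# A common (and much-debated) rule of thumb flags ratios below 0.8.
if ratio < 0.8:
    print("Potential disparate impact - this model would warrant a closer look.")
```

In this toy example the ratio comes out to 0.50, well below the 0.8 threshold. Real audits are far more involved, but the intuition is the same: the bias shows up in the outcomes, even if no one wrote it into the code on purpose.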

Most people interact with AI systems daily as they browse streaming platforms, see personalized ads, and run search queries. In exchange, they give up vast amounts of personal data. Yet most users have limited understanding of what’s being collected, how it’s used, and who has access to it. Professor Shoshana Zuboff of Harvard Business School calls this dynamic “surveillance capitalism”: personal data is harvested not just to improve services, but to predict and influence human behavior for profit. At the same time, many AI systems function as “black boxes,” offering no way to explain the reasoning behind their decisions. Without transparency and informed consent, users are subject to decisions made by systems they do not fully understand, built on personal data they may never have knowingly agreed to share.
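The “black box” problem is easier to see with a toy contrast. This is a sketch with invented rules and weights, not any vendor’s actual system: a transparent model can show which factors drove its decision, while a black box only hands back a number.

```python
# A transparent, rule-based scorer: every factor and weight is visible,
# so a rejected applicant can be told exactly why. Weights are made up.
WEIGHTS = {"income": 0.5, "credit_history_years": 0.3, "missed_payments": -0.8}

def transparent_score(applicant):
    """Return the score plus each factor's contribution to it."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return sum(contributions.values()), contributions

# A "black box": callers only ever see the final score, not the reasoning.
def black_box_score(applicant):
    score, _ = transparent_score(applicant)  # same math, hidden from the caller
    return score

applicant = {"income": 4.0, "credit_history_years": 2.0, "missed_payments": 3.0}

score, contributions = transparent_score(applicant)
print(f"Transparent model score: {score:.1f}")
for factor, value in contributions.items():
    print(f"  {factor}: {value:+.1f}")

print(f"Black box score: {black_box_score(applicant):.1f}  (no explanation available)")
```

Real production models are vastly more complex than three weights, which is exactly why the explanation gap matters: the harder the reasoning is to surface, the harder it is for anyone affected to contest it.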

When a human makes a harmful or discriminatory decision, responsibility can be traced. With AI, accountability is diffuse: is the developer responsible for the model’s performance, the company that deploys it, or the algorithm itself? This ambiguity raises serious concerns for public policy. Calls for algorithmic accountability and explainable AI are growing, particularly in sensitive areas like healthcare, criminal justice, and government services. Without clear lines of responsibility, it is difficult to audit these systems, correct their failures, or protect the people harmed by unfair models.
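One concrete piece of what “algorithmic accountability” asks for is an audit trail. The sketch below uses hypothetical field names, not any regulatory standard: it logs each automated decision with its inputs and the model version, so a human can later reconstruct what was decided, about whom, and by which system.

```python
import json
from datetime import datetime, timezone

# Append-only decision log: a minimal form of algorithmic accountability.
# The filename and fields are illustrative, not a real standard.
AUDIT_LOG = "decision_audit_log.jsonl"

def log_decision(model_version, inputs, decision, score):
    """Record one automated decision so it can be audited later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,      # what the system saw
        "decision": decision,  # what it decided
        "score": score,        # the score behind the decision
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: a hypothetical benefits-eligibility model denies an application.
log_decision(
    model_version="eligibility-model-v3.2",
    inputs={"household_size": 4, "reported_income": 31000},
    decision="denied",
    score=0.37,
)
```

With a trail like this, an auditor can ask which model version made a contested decision and replay the inputs. Without it, “the algorithm decided” is the end of the conversation.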

As AI embeds itself ever deeper into everyone’s lives, the questions surrounding its ethics need answers. If it is becoming a worldwide tool, there needs to be worldwide education. It is unfair to expect users to shoulder the possible harms of this growing technology when they neither fully understand the risks nor are given an explanation.

Sources

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. https://proceedings.mlr.press/v81/buolamwini18a.html

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844110

Auxier, B., & Rainie, L. (2019). Americans and privacy: Concerned, confused and feeling lack of control over their personal information. Pew Research Center. https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/

