The "Neutrality" of Technology



By Liam Sadlier

In my engineering ethics class, we often discuss the role of technology in our lives and the ethical implications of the technology we use. Recently, our class had a discussion on the neutrality of technology. The prompt that sparked our conversation was, “Are technological artifacts morally neutral?” While this might not immediately seem like an important conversation, it is worth remembering how influential technology is in our everyday lives. Ensuring that we properly analyze the development of technology could prevent dangerous mistakes as technology continues to become more powerful. To dive deeper into this discussion, let’s first define some terms. Technological artifacts are “material objects made by human agents as means to achieve practical ends” (oreilly.com, 2024). Technology is “the practical application of knowledge especially in a particular area” (merriam-webster.com, 2024). For example, a scalpel or an X-ray machine is a technological artifact within the broader field of medical technology.


With those definitions in mind, what does assigning moral value to a technological artifact mean? To continue with the medical example, assigning moral value to a scalpel would be to say that a scalpel is either good or bad. This is a complicated assignment to make because a scalpel could be used to perform a life-saving operation, or it could be used to seriously harm someone. Two views exist on the extreme ends of this debate. First is the common sense view (a rather fortunate name for holders of this belief, I know). The common sense view states that technological artifacts are neutral. Its proponents would argue that a scalpel is neither good nor bad. It is just a scalpel. People can choose to do good or bad things with it, but the morality lies with the person, not the artifact. The opposing argument is the strong view. The strong view, to varying degrees, assigns morality to technological artifacts. Generally, its proponents believe that a technological artifact holds moral weight, whether from the intentions of its creator or from the capacity for harm it possesses. They would argue that a scalpel carries moral weight stemming from the intention behind its design and its capacity to do harm. While many ethicists and philosophers argue for views that lie between the common sense and strong viewpoints, we will examine the two most basic and extreme perspectives in this post. Let’s look at some examples of technology and consider how neutral they can be.


It can be intuitive to point to a scalpel and call it neutral. It was designed to be used in surgeries, and its primary purpose is to help people. However, as technology gets more complex, the line between neutrality and moral weight gets blurrier. One example is nuclear weapons. Nukes can be, and have been, used to unleash unimaginable devastation, and they were designed to do so. Today's nuclear weapons are far more powerful than those used on Japanese civilians, and there are many more of them. People who subscribe to the common sense viewpoint would argue that nukes are morally neutral, and that moral value can only be ascribed to those who use them, depending on how they use them. The strong view, however, would argue that the intention behind the design and creation of nukes is to destroy with overwhelming force, and that this intention cannot be separated from the technology itself.


Another example we can examine is artificial intelligence, or AI. Some see AI as the ultimate neutral technology: an arbiter of logic and reasoning. It is a computer making the decisions, after all, right? This view is unfortunately misguided, and AI is a technology that is nearly impossible to call neutral (in my opinion). AI models are created using training data, which means that whatever data is used (or is not used) to train a model introduces biases into it. The developers' biases, conscious or unconscious, can make their way into the model through that data. When a model operates on biased data, it is hard to call it a truly neutral technology. This becomes even more concerning when the systemic biases that exist in our society are considered. If far-reaching AI technologies are not designed with these systemic biases in mind, they could become another powerful tool of oppression in an already oppressive environment for marginalized communities.
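To make the "biased data in, biased model out" point concrete, here is a deliberately tiny sketch in Python. The group names and the numbers are entirely invented for illustration; the "model" is nothing more than a majority-label lookup, far simpler than any real AI system, but the mechanism is the same: a skew in the historical decisions becomes a skew in the predictions, with no malicious code anywhere.

```python
# Toy illustration (hypothetical groups and numbers): a trivial
# "model" that predicts the most common label seen for each group
# in its training data. The skew in the data becomes the skew in
# the model's output.
from collections import Counter, defaultdict

# Invented historical decisions, skewed against group "B".
training_data = (
    [("A", "hire")] * 80 + [("A", "reject")] * 20 +
    [("B", "hire")] * 20 + [("B", "reject")] * 80
)

def train(data):
    counts = defaultdict(Counter)
    for group, label in data:
        counts[group][label] += 1
    # The "model" is just the majority label per group.
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(training_data)
print(model)  # {'A': 'hire', 'B': 'reject'}
```

Nobody wrote a rule saying "reject group B"; the model simply reproduced the pattern it was given. Real machine-learning models are vastly more complex, but they inherit patterns from their training data in the same way.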


The purpose of this post is not to declare which view on technological neutrality is objectively correct. While I fall into the strong view camp, I can also understand that the common sense view can be an easier shorthand for simple technologies. I believe, however, that as a technology becomes more complex and more integrated into everyday life, the strong view is a much better lens through which to analyze its impact. When technology affects the lives of millions (or billions) of people, it is worth investigating the morality of that technology.


References

https://www.oreilly.com/library/view/a-companion-to/9781118394236/OEBPS/c28-s2.htm

https://www.merriam-webster.com/dictionary/technology

