Philosophy and Ethics in Technology
How cross-domain knowledge transfer is key, how questions are the answer, how philosophy can help us ask the right questions about ethics in technology, and what we can learn from the ‘Black Lives Matter’ movement.
I was appalled to read that many universities (here, here, and here) are putting their Philosophy departments on the chopping block, canceling Philosophy majors and minors altogether. Through petitions and other means, some have succeeded in preventing this. But what these universities are signaling is: we don’t think Philosophy is important or needed, and the ROI doesn’t justify running the departments.
Most innovations come from first-principles thinking, understanding deeper principles and mental models across several subjects, and knowing how to transfer one’s learning across seemingly disparate domains, subjects, technologies, and departments. Archimedes was a physicist, engineer, mathematician, and astronomer. Einstein was a physicist but always loved philosophy. Francis Bacon was a philosopher and statesman, yet he is credited with developing the scientific method. Aristotle was a philosopher who wrote on physics, biology, zoology, logic, ethics, poetry, theater, music, psychology, economics, politics, and more. The list goes on. Yet people fail to understand the importance of “learning transfer” and focus on specialization instead.
The madness of specialization is getting worse. We started with data scientists. Now we have AI/ML engineers, then deep-learning and NLP specialists, and so on. Please don’t get me wrong: specialization is great and helps one gain deep knowledge in a subject, which is absolutely critical to making progress in that domain. But without understanding what’s happening elsewhere, your ideas for innovation are limited to the narrow scope of the subject you know.
The best examples of how cross-domain intelligence fuels innovation are the technologies inspired by nature. SONAR was inspired by the echolocation of bats; the robotic arm was inspired by the workings of an elephant’s trunk; advanced swimsuits mimic shark skin to reduce drag; and swarm technology takes its inspiration from bees. The best solutions are always simple: what fascinated me recently is how one company tried to combat the deepfake phenomenon by training people to identify deepfakes, helping them create “antibodies” and “immunize” themselves against deepfakes. Cross-domain knowledge gives you an edge over your peers and other specialists, makes your career future-proof, and maybe even busts the “Jack of all trades, master of none” myth.
So one may ask: OK, we can learn a lot from nature, but what about philosophy?
I am no expert in philosophy. I love the word ‘enthusiast’: if I like a subject but want to give a disclaimer that I am not an expert, I can always say ‘enthusiast.’ So, I am a philosophy enthusiast who has learned from philosophers like Marcus Aurelius, Nietzsche, Chomsky, and more recent ones like Peter-Paul Verbeek. As an engineer obsessed with continuous improvement, and with more experience in STEM than in philosophy, I would start by asking the following investigative questions.
1. What is the “as is” state of technology today?
2. What are the consequences of the “as is” state of technology?
3. Are these consequences OK?
4. Where are we headed with how we are developing technology?
5. Is that OK?
6. What should the ideal future state be?
Philosophy can assist in answering questions 3 and 5 and, hopefully, in arriving at an answer to question 6. We are on the brink of being controlled by technology rather than controlling it ourselves. Are we looking at what is at stake when a technology is developed, instead of just rushing to develop it before someone else does? Are we looking at how society is impacted, whether we are encouraging the good or the bad side of human nature, or solving certain social issues while creating new ones?
In the mad rush to develop the next best technology, ethics is considered only after the fact.
Once a technology is developed and people identify some unintended negative consequence, the debate is always about whether we should use it or ban it. We talk about privacy and security as if there is no way we can have both. Such subjects provoke intense, polarizing discussions with no middle ground. But these are post-deployment discussions.
Fast is not always best. As Daniel Kahneman tells us in “Thinking, Fast and Slow,” fast, instinctive thinking works well in simpler situations, but deliberate, slow, thoughtful effort serves us better in more complex scenarios. With our focus on minimum viable products and getting to market ASAP, innovators can easily overlook this.
We need to think about the ethics of technology before and during the development phases. Testing should not only cover the intended consequences but also try to identify potential unintended social impacts. Sure, it’s not easy to predict the future, and sometimes we will see unpredictable results, but keeping in mind the ethics of technology and the impact of what you are creating is absolutely necessary. That is why I prescribe that every single innovator learn philosophy, irrespective of which piece of the puzzle they are solving.
Philosophy can help us with the ethical and responsible design of technology.
We need to ensure we are not just developing the next cool app or feature but also developing technology that will help us become better as a species. If we think that’s a lofty goal, then at the very least, we should try not to develop tech that will make us worse.
Some may argue that we would lose freedom and autonomy if we control certain aspects of technology or of our use of technology (“You shouldn’t control my screen time”). But a little nudge toward doing the right thing isn’t so bad, after all. It’s not new, either. We have speed bumps on roads to slow us down. My airport cab driver gets a loud notification when he exceeds the speed limit (not that he changes his behavior, but it definitely wakes me up from my little nap before the red-eye flight so I can gently ask him to slow down).
Also, if we really love our autonomy and freedom, why are we letting our devices and technology control us? If we really love our freedom, let us identify ways not to let ourselves be controlled or carried away.
At the end of the day, autonomy is a myth. We are all bound by something: religion, family, tradition, country, or even a thought, belief, or myth. If people say they are bound by nothing, they are bound by their belief that they are bound by nothing. So, I say, let’s choose to be bound by better values instead.
Here’s an example of how philosophy can make a difference. I really liked this piece of research that classifies the influence technology can have on people and how we can better design for socially responsible behavior.
This paper describes two dimensions along which a product can influence social behavior: strong/weak influence and hidden (implicit)/apparent (explicit) influence. Based on this classification, a product can:
- Coerce somebody — Use of a speed camera to discourage fast driving. People are aware they are being forced to do it, and the changed behavior is externally motivated.
- Persuade somebody — A campaign for veganism, for example. People are aware they are being persuaded but are not really forced to do it.
- Seduce somebody — Social media that makes one look for social validation. People aren’t aware of the product’s influence, and the influence is weak, although social media can quickly turn from a weak influence into a stronger addiction.
- Decide for somebody — A building without elevators that forces people to use the stairs.
The research paper further lists different ways to influence socially responsible behavior, which fall under each of the four buckets; a minimal sketch of this two-dimensional classification follows below.
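To make the classification concrete, here is a minimal sketch in Python (my own illustration, not code from the paper) of how the two dimensions, strength and visibility, combine into the four influence types listed above. The names Strength, Visibility, and classify are hypothetical; the examples in the mapping are taken from the list.

```python
from enum import Enum


class Strength(Enum):
    STRONG = "strong"
    WEAK = "weak"


class Visibility(Enum):
    HIDDEN = "hidden (implicit)"
    APPARENT = "apparent (explicit)"


# Assumed mapping of the paper's two dimensions onto the four influence
# types described in the list above.
INFLUENCE_TYPES = {
    (Strength.STRONG, Visibility.APPARENT): "coerce (e.g., a speed camera)",
    (Strength.WEAK, Visibility.APPARENT): "persuade (e.g., a veganism campaign)",
    (Strength.WEAK, Visibility.HIDDEN): "seduce (e.g., social-validation loops)",
    (Strength.STRONG, Visibility.HIDDEN): "decide for (e.g., a building without elevators)",
}


def classify(strength: Strength, visibility: Visibility) -> str:
    """Return the influence type a product exerts, given its strength and visibility."""
    return INFLUENCE_TYPES[(strength, visibility)]


if __name__ == "__main__":
    # A strong, hidden influence is the "decide for somebody" case.
    print(classify(Strength.STRONG, Visibility.HIDDEN))
```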
Meanwhile, the issue of climate change is urgent; we should have made changes the day before yesterday. Technology is changing fast: AI/ML, Blockchain, and RPA will change the world in ways we cannot even imagine. We need to act fast and act now. So for this article, I will focus on the two critical methods that fall in the “decide for somebody” bucket: decisive methods that are strong (because we need change now) and implicit in nature (because no one likes to be forced).
The first is to trigger the human tendencies that automatically elicit the needed behavioral response. The second is to make the desired behavior the only possible behavior to perform. These are high-level descriptions and can drive ethical or unethical behavior depending on how they are used, which is why we need some regulation. When we have organizations like the FDA to regulate the food and drugs we consume, why do we not have something similar for the technology we consume?
So what does this have to do with the ‘Black Lives Matter’ movement?
Observing the ‘Black Lives Matter’ movement and especially the push against taking down the statues of racist white people made me think. Why did these white people commit these atrocities? It is because very few people told them it was wrong. In their time, everybody had slaves; everybody felt the white race was superior. Anyone who said otherwise was an outlier and an outcast. Many older white people still have the same beliefs and probably don’t say anything for fear of retaliation from the younger generation.
I then begin to wonder: will our grandkids hate us for the things we did? Will they hate us for being irresponsible with our technology? We don’t have the luxury of staying quiet and hoping they’ll think it wasn’t us; everything we said and did is on the Internet for generations to see. If we really don’t want them to hate us for the world we will give them, we need to be responsible and ethical in what we do.
Quality over quantity and accuracy over speed — let’s spend a little more time thinking about what we are doing and its impact.