When people think about "AI Ethics", the image that first comes to mind is usually the "Trolley Problem". In short, the trolley problem poses an imaginary scenario where a decision maker must choose between doing nothing, letting five people die, or taking action, saving those five but causing another person to die.
The trolley problem asks people to imagine a situation where two goods must be weighed against each other (the life of one person versus the lives of five). Consequently, the question that is normally asked is how an "Ethical AI", i.e., an ethical automated system, should behave in this situation. The conversation usually becomes some variation of "Should an automated car place its driver in danger to save a pedestrian?".
If you know me, you probably know that I think this entire discussion is a red herring.
An investigation in 2022 indicated that Tesla's cars would often turn off their autopilot one second before a collision. This behavior could be used to manipulate statistics about how often the autopilot was involved in car crashes.
This is an engineering decision, made by people at a corporation, that places the image of its AI product above people's lives. When we discuss "AI Ethics" today, we should, first and foremost, discuss the ethics of developing and deploying automated systems. The discussion about whether a self-driving car should endanger its driver to save a pedestrian is a red herring compared with the discussion about makers of self-driving cars using tricks to hide the actual safety statistics of their products.
Of course, this is not limited to self-driving cars; it applies to all AI systems. Before we discuss whether LLMs are sentient or can become superintelligent, we should discuss how much damage their development is doing to our ecosystem. Before we discuss whether AI will directly affect elections across the globe, we should discuss how to regulate social media corporations that financially benefit from fake news. We should talk about the problems of launching large numbers of short-lived satellites into orbit.
I remember that when I was an undergraduate student, there was a discussion about requiring of computer engineers the same sort of ethics training required of other professions, such as civil engineering. Now that computing systems are truly ubiquitous, maybe it is time to have that conversation again.