How Ethical is Artificial Intelligence?
There are multiple ways to define the word “ethics.” At its core, ethics is a set of rules, guidelines, or principles that determine what is deemed right or wrong. It can be applied to many aspects of life, such as education, religion, and careers. According to the Merriam-Webster dictionary, one of the many definitions of the term is “a set of moral principles: a theory or system of moral values.” When applied to the field of AI, ethics is all about how the technology behaves. What happens when artificial intelligence harms humans? What should be done when a machine goes against what it was programmed or told to do? Is there a limit to what information should be collected?
In Isaac Asimov’s collection of short stories I, Robot, specifically in the story “Runaround,” rules about ethics in robotics are introduced. They are referred to as the “three fundamental Rules of Robotics,” or laws: “We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm…Two, continued Powell, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law…and Three, a robot must protect its existence as long as such protection does not conflict with the First or Second Laws (Asimov, 37).”
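The defining feature of these laws is their strict precedence: each law yields to the ones above it. As a purely illustrative sketch (the `Action` fields and yes/no harm flags are hypothetical simplifications, not anything from Asimov's text), the ordering can be expressed as a priority check:

```python
# Illustrative sketch: Asimov's Three Laws as a priority-ordered check.
# The Action fields below are hypothetical simplifications for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    inaction_harms: bool = False    # would *not* acting allow a human to come to harm?
    ordered_by_human: bool = False  # was the action commanded by a human?
    endangers_robot: bool = False   # does it put the robot itself at risk?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms:
        return True  # the robot must act to prevent the harm
    # Second Law: obey human orders (the First Law was already checked above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the higher laws are silent.
    return not action.endangers_robot
```

Note how an order from a human permits a self-endangering action (Second Law outranks Third), while nothing permits harming a human.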
A lot has changed since the book's initial publication in 1950. Although these laws appear in a work of fiction, the conversation around sentience, humans versus machines, and ethical practice in general is just as prevalent today, and interest in the subject seems to spike as technology advances. In Our Final Invention, James Barrat discusses the future of AI and some of its dangers, including the way this technology can be used to collect our information online. He mentions Google specifically.
Keep in mind that this book was published in 2013, and many advancements have been made since then, but the topic remains at the forefront of conversation today; if anything, it is more prevalent now. Data collection is widely used in manufacturing, especially to improve product quality and the overall efficiency of the process. When the data becomes personal, however, and anyone can gain access to your private information, it becomes a concern for many people.
In manufacturing (and elsewhere), “machine learning” allows a machine to complete the tasks asked of it. Machines are programmed to do jobs such as cooking a food product or cutting materials on the production line. Once the system has worked out what the programmer or machine operator demands, it has “learned” the task. After that, it can repeat the task many times in the future and adapt to different machine settings.
Research by Eduardo Vyhmeister, Gabriel Gonzalez-Castane, and P.-O. Ostbergy discusses Europe’s ethical standards in artificial intelligence. They are very similar, if not identical, to the ones mentioned in Asimov's collection. “The European ethical principles for AI are based on ethical imperatives presented by the AI4People Group in 2018. These imperatives define the approaches on which AI components should rely. These imperatives include (1) non-maleficence, that state that AI should not harm people, (2) Beneficence, that state a worthwhile end goal for peoples, (3) Autonomy, which state the respect for people's goals and wishes, (4) Justice, that state that AI should act in a just and unbiased way and, (5) Explicability, that states explanation on how an AI system arrives at a conclusion or result (Vyhmeister, Gonzalez-Castane, Ostbergy, 2022).”
Humans and animals can think, store information in memory, come up with new ideas, and make decisions. Whether through a computer, a phone, or a factory machine, technology can now demonstrate or mimic some of these abilities. It is because of this that the rules discussed above were created.
These rules touch on sentience, which is “feeling or sensation as distinguished from perception and thought (Merriam-webster.com).” It is a very human-like ability. Alan Turing devised a test to see whether it was possible to differentiate a living person from a machine. The Turing test has someone conducting the test and two other participants: a machine and a human. The machine passes if the one leading the conversation cannot tell the difference; if the interrogator can tell they are speaking to another person, the human passes. Turing’s experiment aims to see how intelligent machines can be (Barrat, 65). It is important to note that the machine is not thinking; it is mimicking that essential human function. It is programmed to act as if it were human. As noted earlier, people worldwide have questioned whether AI will be more beneficial or harmful.
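The structure of the test can be sketched as a blind trial. Everything here is a hypothetical stand-in (the `judge` and the two reply functions are placeholders, not real AI); what the sketch captures is only the protocol: the judge sees two anonymous answers and must guess which came from the machine.

```python
# Sketch of the Turing test protocol described above. The judge and the two
# responders are hypothetical placeholders; only the blind setup is the point.
import random

def turing_trial(judge, human_reply, machine_reply, question):
    """One blind round: judge sees two anonymous answers, guesses the machine."""
    answers = [("human", human_reply(question)),
               ("machine", machine_reply(question))]
    random.shuffle(answers)                      # hide which answer came from whom
    guess = judge(answers[0][1], answers[1][1])  # judge returns index 0 or 1
    truth = 0 if answers[0][0] == "machine" else 1
    return guess == truth  # True: machine identified; False: the machine "passed"
```

If the machine's replies are indistinguishable from the human's, no judge can do better than chance over many rounds, which is precisely the sense in which a perfect mimic "passes" the test.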
As discussed earlier, another point where AI meets ethics is data sharing. AI is used to collect data and work out where the users of an online game, e-commerce store, search engine, or website are operating geographically. Information such as gender, age, and search history may also be collected. This information does not have to be a cause for concern, but when it is shared without consent and it is unknown where it is being transferred, you may start to question your safety.
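One common safeguard is to share only the fields a user has explicitly opted into. The sketch below is a hypothetical illustration of that idea (the field names and the `filter_by_consent` helper are invented for this example, not drawn from any real system):

```python
# Hypothetical sketch of consent-aware data sharing: only fields the user has
# explicitly opted into survive before the record leaves the system.

def filter_by_consent(record, consented_fields):
    """Drop any personal field the user did not consent to share."""
    return {k: v for k, v in record.items() if k in consented_fields}

# Invented example profile with the kinds of fields mentioned above.
profile = {"region": "EU", "age": 29, "gender": "F", "search_history": ["shoes"]}
shared = filter_by_consent(profile, consented_fields={"region"})
# Only the consented field remains; age, gender, and history are withheld.
```

The ethical concern in the paragraph above arises exactly when no such gate exists: data leaves the system with no record of what the user agreed to or where it is going.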
In conclusion, as with anything in life, there are pros and cons. This technology has the ability to evoke fear in some people about what it is capable of, while at the same time we have seen the many benefits and remarkable accomplishments achieved using AI.
Asimov, I. (2008). I, Robot (Reprint). Del Rey.
Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. St. Martin’s Griffin.
Merriam-Webster. (n.d.). Sentience definition & meaning. Merriam-Webster. Retrieved December 8, 2022, from https://www.merriam-webster.com/dictionary/sentience
Merriam-Webster. (n.d.). Ethics definition & meaning. Merriam-Webster. Retrieved December 8, 2022, from https://www.merriam-webster.com/dictionary/ethics
Vyhmeister, E., Gonzalez-Castane, G., & Östbergy, P.-O. (2022, May 11). Risk as a driver for AI framework development on manufacturing - AI and Ethics. SpringerLink. Retrieved December 8, 2022, from https://link.springer.com/article/10.1007/s43681-022-00159-3