Ethics of AI - A Course From Finland
Written by Nikos Vaggalis   
Friday, 15 October 2021

A free online text-based course by the University of Helsinki for anyone who is interested in the ethical aspects of AI.

The University of Helsinki already has a track record for its Elements of AI course, which launched in 2018 as we reported in Free AI Course from Finland. Initially it offered Introduction to AI as a free online MOOC, provided in English and intended to be accessible to a very wide audience. Later a second course, Building AI, was added and, according to its website, 730,000 people have enrolled.

Since many aspects of our society involve or are going to involve AI in the near future, the subject of AI Ethics is more relevant than ever and this new course is therefore a welcome counterpart.

As we examined in "How Will AI Transform Life By 2030? Initial Report" there are several societal sectors which are going to be affected by AI:

Transportation
This sector will be heavily affected by automation through self-driving vehicles, thanks to innovations brought forward by major stakeholders such as Google and Tesla.

Home/Service Robots
Coincident advances in mechanical and AI technologies promise to increase the safe and reliable use and utility of home robots. Special purpose robots will deliver packages, clean offices, and enhance security.

Healthcare
Healthcare is one sector that already enjoys the fruits of AI's application, as computers become more accurate than doctors at assessing many types of cancer and at finding new, more effective drugs.

Education
Though quality education will always require active engagement by human teachers, AI promises to enhance education at all levels, especially by providing personalization at scale. New forms of education powered by AI are also expected to play a crucial role in re-training workers displaced by machines.

Other sectors that are going to be affected include low-resource communities, public safety and security, employment and the workplace, and entertainment. Read the report for more.

So where do Ethics fit into this picture? According to the course:

Ethics concern the questions of how developers, manufacturers, authorities and operators should behave in order to minimize the ethical risks that can arise from AI in society, either from design, inappropriate application, or intentional misuse of the technology.

Going one step further, it explains that there are three subfields of ethics:

1) Meta-ethics studies the meaning of ethical concepts, the existence of ethical entities (ontology) and the possibility of ethical knowledge (epistemology).

2) Normative ethics concerns the practical means of determining a moral (or ethically correct) course of action.

3) Applied ethics concerns what a moral agent (defined as someone who can judge what is right and wrong and be held accountable) is obligated or permitted to do in a specific situation or a particular domain of action.

Since AI ethics is a subfield of applied ethics, this is the focus of the course.

AI ethics is a sensitive issue and we have already witnessed misuse: of privacy, through the data collection and its subsequent malevolent use by ClearView AI and Cambridge Analytica, and of fairness, as examined in the article "How AI Discriminates", which shows in practice how machine learning algorithms can make biased hiring decisions. As AI is integrated more broadly and deeply into industrial processes and consumer products, best practices need to be spread and regulatory regimes adapted.
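
To get a concrete sense of how such bias surfaces, one simple check is to compare selection rates between groups in a model's hiring decisions. The following sketch (in Python) is purely illustrative and not taken from the course material: the candidate records and the scoring rule are made up. It computes the demographic parity difference, a commonly used fairness metric.

# Illustrative only: hypothetical candidate records and a naive scoring rule,
# showing how a seemingly neutral rule can yield unequal selection rates.

candidates = [
    # (group, years_experience, attended_elite_school)
    ("A", 5, True), ("A", 3, True), ("A", 2, False), ("A", 6, True),
    ("B", 5, False), ("B", 3, False), ("B", 7, False), ("B", 6, True),
]

def hire(years, elite):
    # A proxy feature (elite school) that correlates with group membership
    # smuggles bias into an apparently merit-based rule.
    return years >= 4 and elite

decisions = [(group, hire(years, elite)) for group, years, elite in candidates]

def selection_rate(group):
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
print(f"Selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"Demographic parity difference: {rate_a - rate_b:.2f}")

Here the elite-school feature acts as a proxy for group membership, so an apparently merit-based rule ends up selecting group A at twice the rate of group B, which is exactly the kind of hidden discrimination the principles below are meant to guard against.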

To deal with those issues, the course examines the establishment of Ethical frameworks, which:

are attempts to build consensus around values and norms that can be adopted by a community – whether that’s a group of individuals, citizens, governments, businesses within the data sector or other stakeholders. These frameworks have converged on a set of five principles: 

    • non-maleficence
    • responsibility or accountability
    • transparency and explainability
    • justice and fairness
    • respect for various human rights, such as privacy and security 

These five principles aim to answer a variety of questions and set safeguards in place:

  • Should we use AI for good and not for causing harm? (the principle of beneficence/ non-maleficence)
  • Who should be blamed when AI causes harm? (the principle of accountability)
  • Should we understand what AI does, and why? (the principle of transparency)
  • Should AI be fair or non-discriminative? (the principle of fairness)
  • Should AI respect and promote human rights? (the principle of respecting basic human rights)
  • Who is responsible when a self-driven car crashes or an intelligent medical device fails?
  • How can AI applications be prevented from promulgating racial discrimination or financial cheating?
  • Who should reap the gains of efficiencies enabled by AI technologies and what protections should be afforded to people whose skills are rendered obsolete?

And that's what the course is about: focusing on these principles, analyzing what they imply, and interpreting them not just in the fashion of traditional philosophy but also in practice, discussing their problems and asking the tough questions, all through a well-rounded syllabus:

Chapter 1: What is AI ethics?
What does AI ethics mean and what role do values and norms play? We’ll also look at the principles of AI ethics that we will follow in this course. 

  • A guide to AI ethics
  • What is AI ethics?
  • Values and norms
  • A framework for AI ethics 

Chapter 2: What should we do?
What do the principles of beneficence (do good) and non-maleficence (do no harm) mean for AI, and how do they relate to the concept of the “common good”? 

  • What should we do?
  • The common good – calculating consequences
  • Common good and well-being 

Chapter 3: Who should be blamed?
What does accountability actually mean, and how does it apply to AI ethics? We’ll also discuss what moral agency and responsibility mean and the difficulty of assigning blame. 

  • Algorithms and accountability
  • What is accountability?
  • Who should be blamed – and for what?
  • The problem of individuating responsibilities 

Chapter 4: Should we know how AI works?
Why is transparency in AI important and what major issues are affected by transparency – and what are some of the risks associated with transparency in AI systems? 

  • Transparency in AI
  • What is transparency?
  • Transparency and the risks of openness 

Chapter 5: Should AI respect and promote rights?
What are human rights, and how do they tie into the current ethical guidelines and principles of AI? We’ll also look more closely at three rights of particular importance to AI: the right to privacy, security, and inclusion. 

  • Introduction
  • What are human rights?
  • Examples of human rights: privacy, security, and inclusion
  • AI rights for children 

Chapter 6: Should AI be fair and non-discriminative?
What does fairness mean in relation to AI, how does discrimination manifest through AI – and what can we do to make these systems less biased? A toy illustration of one debiasing technique follows the topic list below. 

  • What is fairness?
  • The varieties of fairness
  • Discrimination and biases
  • Removing bias 
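
The “Removing bias” topic covers mitigation approaches at a conceptual level. As a purely illustrative sketch, not taken from the course, here is a toy version of reweighing (Kamiran and Calders), a pre-processing technique that weights training examples so that group membership and outcome become statistically independent before a model is trained:

from collections import Counter

# Illustrative only: (protected_group, hired) pairs from a hypothetical
# historical dataset in which group A was hired more often than group B.
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

# Weight each (group, label) combination by expected / observed frequency,
# so the reweighted data shows no association between group and outcome.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}

for (g, y), w in sorted(weights.items()):
    print(f"group={g} hired={y} weight={w:.2f}")

In practice you would reach for a maintained toolkit such as AI Fairness 360, listed in the related articles below, rather than hand-rolling these weights.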

Chapter 7: AI ethics in practice
What are some of the current challenges for AI ethics, what role do AI guidelines play in shaping the discussion, and how might things develop in the future? 

  • From principles to doing
  • Ethics as doing
  • Moving forward with ethics 

In the end, Ethics of AI makes for an easy-to-read and thoughtful guide which manages to put under the spotlight some very important issues that will concern our society as it moves forward. As such it's useful not just for engineers, policy makers and educators, but also for the general public, which needs to be informed about the aspects of technology that will affect their lives one way or another. Ethics of AI tries to ensure that those ways are positive.


More Information

Ethics of AI

Related Articles

Free AI Course from Finland

How Will AI Transform Life By 2030? Initial Report

How AI Discriminates

Ethics Guidelines For Trustworthy AI

SAP's Creating Trustworthy and Ethical Artificial Intelligence

Updated AI Fairness 360 Toolkit Supports R and Scikit-learn

 


