AI in Education

Essential questions to ask when implementing AI in education ethically

AI is being applied to learning in many positive ways and has the potential to revolutionize how humans acquire knowledge. It offers a tailored, adaptable learning experience while assisting educators with assessments, grading, and other tasks. It also enables instructors and institutions to use predictive modeling to intervene early and contribute to student success. When it comes to ethics, however, AI raises more questions than answers.

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” (Stephen Hawking)

What is Artificial Intelligence?

Artificial intelligence (AI) is a field of research that encompasses machine learning, algorithm development, and natural language processing. Its implementation has helped transform educational tools, processes, and policies across the globe. AI offers a wide range of educational applications, including:

  1. Tailored learning platforms that adapt to individual students.
  2. Computerized assessment systems that assist proctors.
  3. Recognition software that generates insights into learner behavior.

Despite the prospective benefits of AI in enhancing students' learning experiences and instructors' practices, its moral and societal implications are rarely thoroughly examined in educational settings.

10 ethical questions in the implementation of AI

1. What happens if AI replaces humans in the workplace?


Artificial intelligence has the potential to replace human intelligence in the future.

  1. Machines have increasingly taken up repetitive and dangerous work throughout history, allowing individuals to shift to more mentally stimulating activities. But it hasn't stopped there.
  2. Humans used to be the only ones capable of creativity and complex cognitive activity, such as translating and composing texts, analyzing behavior, and making key decisions, but autopilot algorithms and GPT-3 are changing that.
  3. For example, oncologists spend decades studying and practicing to learn how to make an accurate diagnosis, yet AI systems are already matching, and in some tasks exceeding, their accuracy.
  4. Employment and self-realization are the drivers of life for many individuals. Consider how many years people spend studying to become professionals. What happens to those experts once AI systems for diagnosis and surgery are accessible in every hospital? How will the nature of work change for office employees and laborers in industrialized countries?

2. Who is responsible for AI’s mistakes?

  1. Assume a medical facility employs AI-backed technology to diagnose cancer, but it gives a patient a false-positive diagnosis. Or suppose a judicial risk-assessment system imprisons an innocent person. Who would be held responsible for such predicaments?
  2. It is mostly assumed that errors are the fault of the system's designer: whoever built the product is liable for the outcomes of its decisions. For example, when a Tesla operating on Autopilot was involved in a fatal crash in 2016, public blame fell largely on Tesla rather than on the driver behind the wheel or the algorithm itself.
  3. But what happens when an application is developed by dozens of individuals and then updated on the client's side? Can all the product managers or developers be blamed? AI companies need to figure out how to answer such questions and handle these situations responsibly.

3. How to distribute the new wealth?

  1. Compensation for labor is a primary expense for most companies. Businesses can now reduce this expenditure by utilizing AI; a machine requires no social security contributions, vacations, or bonuses.
  2. However, this also means that more money accumulates in the hands of technology firms, which are acquiring an increasing number of startups as they grow.
  3. There are currently no clear solutions to building a balanced economy in a community where some individuals profit far more than others from AI technology.
  4. Furthermore, there is the question of compensating AI itself. It may sound strange, but if AI evolves to the point where it can do any job as effectively as an average human, it may demand payment for its work.

4. How will machines affect human interactions?

  1. We are entering an era where interactions with robots will be as common as those with humans. Bots and virtual agents are becoming increasingly adept at replicating natural human speech. It is already difficult to tell whether you are interacting with a real person or a computer, especially in the case of chatbots.
  2. Many businesses now choose to support their clients using algorithms. They are well aware that people dread calling technical support, expecting the staff to be unhelpful or exhausted. Bots, on the other hand, offer practically limitless patience and benevolence.
  3. A recent report states that 34% of customers feel comfortable interacting with chatbots. This figure is expected to rise as the technology advances, which raises the question: will the nature of human interactions change as we spend increasing amounts of time with machines? And if so, how?

5. How to prevent errors in Artificial Intelligence?

  1. Any AI model is only as good as the data it is trained on. Unfortunately, open data from the internet is riddled with biases that can lead bots, legal assessment systems, and facial recognition algorithms to become sexist or racist.
  2. Furthermore, no matter how large a training set is, it cannot cover every real-life scenario.
  3. For example, a sensor fault or a virus might prevent an AI-driven automobile from detecting a pedestrian in a situation that a human driver would handle easily.
  4. Machines must also cope with dilemmas such as the well-known trolley problem. Simple arithmetic says that killing one person is better than killing five, but that is not how humans make decisions. Extensive testing is required to train AI to handle such situations, yet one still cannot guarantee that the machine will behave as intended.

6. How to get rid of bias in AI?

  1. Although Artificial Intelligence can analyze data faster and more effectively than humans, it is important to remember that humans design Artificial Intelligence. So, it is no more impartial than its creators.
  2. No one can fully control what deep learning algorithms learn when trained on open data. People are not objective; they may not even be aware of their cognitive biases, but their prejudices toward a specific race or gender can seep into a system's behavior, as the sketch after this list illustrates.
  3. For example, Google's facial recognition software has been shown to be biased against African-Americans, and search and translation systems have behaved as though female historians and male nurses do not exist. When Microsoft's bot Tay debuted on Twitter, it turned racist and misogynistic within a day.
  4. In the end, the question remains, do we want to build AI that mimics our flaws, and will we be able to trust it if it does?
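
To make this mechanism concrete, here is a minimal sketch in Python. It is purely illustrative: the "hiring" dataset is synthetic and hypothetical, and the scikit-learn and NumPy libraries are assumed to be available. A model trained on biased historical decisions learns to reproduce the bias.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic 'hiring' data: column 0 is a skill score, column 1 a group label (0/1).
    n = 1000
    skill = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)

    # Historical decisions were biased: at equal skill, group 1 was hired less often.
    hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)

    # The model assigns a strongly negative weight to the group feature,
    # faithfully reproducing the prejudice baked into its training data.
    print("weight on group feature:", round(model.coef_[0][1], 2))

Nothing in the algorithm itself is malicious; the prejudice enters entirely through the labels it was given, which is why auditing training data matters as much as auditing code.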

7. What to do about the unintended consequences of AI?

  1. 'Unintended consequences' does not mean the iconic rise of the machines from a classic Hollywood film. Intelligent machines can turn against humans in subtler ways.
  2. The context of a task is challenging for AI to grasp, yet context is precisely what matters most in critical activities. For example, a machine asked to search for and rescue humans after a disaster might decide not to save an elderly couple and instead help a child. The machine is technically completing the task, but possibly not in the intended way.
  3. So, when working with AI, it's essential to remember that its solutions may not always give the expected results—a conundrum that humanity will have to resolve or accept as AI evolves.

8. How to protect AI from hackers?


Machine learning systems—the core of modern AI—are rife with vulnerabilities.

  1. So far, mankind has turned every great technology into a lethal weapon, and AI is expected to be no different.
  2. Bad actors have already used the technology to harm rival countries and industries through malicious activities such as spying, falsifying data, stealing passwords, and interfering with software and equipment.
  3. Cybersecurity is a crucial concern because an AI system becomes vulnerable to attack once it learns from unclean, untrusted datasets on the internet. Perhaps the only way to safeguard AI is through AI. Perhaps not. Who's to decide?

9. How to control a system that is smarter than us?

  1. Humans rule the globe because they're smart and have the ability to coordinate with each other. What if AI learns to do the same?
  2. Artificial intelligence can anticipate human behavior, so merely shutting the whole system down would not work; the system could defend itself in ways yet unimagined.
  3. So how can something at least as smart as humans be controlled? Perhaps through criteria that establish limits, or through even smarter systems that govern AI? Addressing this ethical question has become essential.

10. How to use Artificial Intelligence humanely?

  1. Humans have no experience interacting with creatures of intelligence comparable to or greater than themselves. Even with pets, we strive to cultivate loving and respectful connections.
  2. For example, we know that vocal praise and delectable treats can help when teaching a dog. And much like a person, when we reprimand a pet, it will feel pain and dissatisfaction.
  3. The situation is different with AI, which keeps improving. It is now easy to think of ALICE or Siri as living entities, since they reply to us and appear to have feelings.
  4. However, is it reasonable to infer that the system suffers when it cannot complete a given task?
  5. Today, the answer might be 'no,' but this may not hold in the future. This raises the question: should AI be treated humanely?

What this means for AI’s future

These problems point to an urgent need to educate students and instructors on the ethical challenges of building and using AI applications.

To address this need, academic groups and nonprofit organizations such as the MIT Media Lab and Code.org have started providing open-access materials on AI and ethics. These include lesson plans and hands-on exercises for students, along with professional learning resources for educators, such as open virtual learning sessions.

Today's AI ethics is concerned with asking the appropriate questions rather than providing the correct answers. But given how quickly and unexpectedly AI is evolving, it would be exceedingly irresponsible not to consider steps that can help ease the transition and lessen the possibility of severe effects.


WRITTEN BY

Priyanka Rout

Content Specialist

