Can AI be trusted?

You might have heard about problems that arise when AI systems misinterpret data or propose solutions that reflect human prejudice. This article introduces the five pillars of AI ethics: fairness, robustness, explainability, transparency, and privacy, and uses real-world examples to show why these pillars matter in building trustworthy AI systems.

5 Pillars of AI Ethics

AI ethics is a multidisciplinary field that studies how to optimize AI’s beneficial impact while reducing unintended or adverse outcomes. Trust in AI is built on five ethical pillars:

  • Fairness: Fairness means making sure that models do not behave in a biased way. The challenge often starts well before a model is built: the data itself may be biased. When you build a model, how do you make sure it is not systematically giving an advantage or a disadvantage to a certain group? The definition of a group varies by industry and by use case; it may be based on sensitive attributes such as age, gender, or ethnicity, but is not limited to those. You want to ensure that the system does not consistently favour one group over another unfairly (a minimal fairness-metric sketch follows this list).

  • Robustness: You want to ensure that your models behave well under exceptional conditions. How do you make sure that model performance stays good over time, and what is the effect of data drift? For example, during the pandemic, customer behaviour, customer patterns, and customer touch points all changed. Is your model still behaving as expected? If it is not, can you at least understand how the model's behaviour is changing, how the data is drifting, and how accuracy is drifting? (A simple drift check appears in the sketches after this list.)

  • Explainability: How can you explain the behaviour of a model? Why was one person approved for a loan and another rejected? If two people with very similar qualifications apply for a job and one is selected while the other is rejected, can you explain that behaviour to the end user or the decision maker? (See the explainability sketch after this list.)

  • Transparency: You want to be able to inspect everything about a model. Can you understand all the facts surrounding it: who built it, what data was used, what algorithms and packages were used, who approved it, and who validated it? All of these facts about the model should be easily available. Just as a food product carries a label listing its nutritional facts and where and when it was manufactured, you should be able to get the facts of a model very quickly (a simple "model factsheet" sketch follows this list).

  • Privacy: Can you make sure that the data, the model built from that data, and the insights generated by that model remain under the ownership and control of the party that built them? And how do you do this not just when consuming the model's output, but across the whole life cycle: how do you make sure that data protection rules are in place through the model-building, testing, validation, and monitoring stages? (A small pseudonymization sketch appears after this list.)
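
To make fairness concrete, here is a minimal sketch in Python of one common check: comparing approval rates across groups and computing a disparate impact ratio. The loan-approval data, group labels, and the 0.8 rule of thumb below are illustrative assumptions, not a standard required by any particular regulation.

```python
# Minimal fairness sketch: approval rates per group on made-up loan data.
import numpy as np

# 1 = approved, 0 = rejected; "group" is a sensitive attribute (e.g. two age bands).
approved = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = {g: approved[group == g].mean() for g in np.unique(group)}
print("approval rate per group:", rates)

# Disparate impact ratio: lowest group rate divided by the highest.
# A common (but not universal) rule of thumb flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print("disparate impact ratio:", round(ratio, 2))
```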
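
For robustness, one way to watch for data drift is to compare a feature's distribution at training time with its distribution in recent production data. The sketch below applies a two-sample Kolmogorov–Smirnov test to synthetic numbers; the drifted distribution and the 0.01 cut-off are assumptions for illustration.

```python
# Minimal drift check: has this feature's distribution shifted since training?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # values seen at training time
recent_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)   # same feature in production

result = ks_2samp(train_feature, recent_feature)
print(f"KS statistic={result.statistic:.3f}, p-value={result.pvalue:.3g}")

# A very small p-value suggests the distribution has shifted, so model
# accuracy should be re-checked on recent, labelled data.
if result.pvalue < 0.01:
    print("possible data drift: investigate model performance")
```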
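
For explainability, one simple, model-agnostic technique is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn on a synthetic, hypothetical loan-approval dataset; the feature names and the rule generating the labels are invented for illustration.

```python
# Minimal explainability sketch: global feature importance via permutation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.uniform(300, 850, n),        # credit score
    rng.integers(0, 40, n),          # years employed
])
# Synthetic labels: approval driven mainly by credit score and income.
y = ((X[:, 1] > 600) & (X[:, 0] > 40_000)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "credit_score", "years_employed"],
                       result.importances_mean):
    print(f"{name}: mean importance {score:.3f}")
```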
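
For transparency, the "nutrition label" idea can start as a structured factsheet stored alongside the model. The sketch below shows one possible shape for such a record; the field names and values are illustrative, not an established schema.

```python
# Minimal transparency sketch: a "nutrition label" style factsheet for a model.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelFactsheet:
    name: str
    version: str
    training_data: str
    algorithm: str
    packages: list
    built_by: str
    approved_by: str
    validated_by: str
    trained_on: str  # date the model was trained

facts = ModelFactsheet(
    name="loan-approval",
    version="1.3.0",
    training_data="applications_2023_q1.parquet",
    algorithm="gradient-boosted trees",
    packages=["scikit-learn==1.4.2", "numpy==1.26.4"],
    built_by="data-science-team",
    approved_by="model-risk-office",
    validated_by="independent-validation-team",
    trained_on="2024-02-01",
)
print(json.dumps(asdict(facts), indent=2))
```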
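
For privacy, one small building block is pseudonymizing sensitive identifiers before data is used for model building, so raw values stay under the owner's control. The sketch below uses a salted one-way hash; the field names and salt handling are simplified assumptions, and real data-protection controls go far beyond this single step.

```python
# Minimal privacy sketch: pseudonymize an identifier before it enters the pipeline.
import hashlib

SALT = "replace-with-a-secret-salt"  # assumption: kept secret by the data owner

def pseudonymize(value: str) -> str:
    """One-way, salted hash of a sensitive identifier."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

record = {"customer_id": "C-10293", "age": 41, "income": 52_000}
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(safe_record)
```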

As AI is increasingly embedded in everyday life, it is vital that people can trust AI. Practitioners infuse trust into AI systems with AI ethics.

Artificial intelligence is everywhere. It powers the navigation apps that help you find the most efficient or eco-friendly route. It drives the search engines that help you find the most relevant information. It helps doctors reach more accurate diagnoses and develop more optimal treatment plans. It improves weather forecasting, enabling you to prepare better for significant weather events. In conjunction with sensors and satellites, it can collect data about the environment that helps scientists better understand and make predictions about our changing world. AI can make life easier and safer by assisting people to make more informed decisions, connecting them with the right information they need at the right time, and finding patterns or efficiencies that they might not otherwise know about.

But AI also has the potential to harm. For example, AI can be used when determining who gets a loan, who gets accepted to a college or selected for a job, how employees are compensated, or even the lengths of prison sentences. In the context of AI, harm doesn’t necessarily have to do with physical damage. The harm can be less obvious, taking the form of inequity, discrimination, or exclusion. And this harm can be subtle because people may not always know when they are interacting with AI or when and how AI may be influencing decisions about them.

Advantages of Artificial Intelligence

1. Reduction in Human Error

2. Zero Risks

3. 24x7 Availability

4. Digital Assistance

5. New Inventions

6. Unbiased Decisions

7. Perform Repetitive Jobs

8. AI in Risky Situations

9. Faster Decision-Making

10. Pattern Identification

11. Medical Applications

Disadvantages of Artificial Intelligence

1. High Costs

2. No Creativity

3. Unemployment

4. Makes Humans Lazy

5. No Ethics

6. Emotionless

7. No Improvement

AI is a part of our lives, and its influence is growing, so it is crucial that people who design, develop, deploy, procure, and use AI understand how to use AI ethics to minimize harm and optimize benefits.
