Building Trustworthy and Ethical AI is everyone’s responsibility

Whether you realize it or not, Artificial Intelligence (AI) has quickly become part of our daily lives. With traditional industries and businesses such as fintech, media, healthcare, pharmaceuticals, and manufacturing adopting AI rapidly in recent years, concerns related to Ethics and Trustworthiness have been mounting.

Today, AI ‘assists’ many critical decisions that influence people’s lives and well-being, for example, creditworthiness, mortgage approvals, disease diagnoses, employment suitability, and so on. It has been observed that, even with human oversight, complex AI systems can end up doing more societal harm than good.

Building Trustworthy and Ethical AI is a collective responsibility. We must apply its fundamentals throughout the AI lifecycle: product definition, data collection, preprocessing, model tuning, post-processing, production deployment, and decommissioning. Governments and regulators undoubtedly have a role to play in monitoring and ensuring a level playing field for everyone, but so do the people building, deploying, and using AI systems. This includes executive leadership, product managers, developers, MLOps engineers, data scientists, test engineers, HR/Training teams, and users.

Bias and unfairness

While Trustworthy and Ethical AI is a broad topic, it is tightly coupled with the prevention of Bias and Unfairness. As the National Security Commission on Artificial Intelligence (NSCAI) observed in a recent report: “Left unchecked, seemingly neutral artificial intelligence (AI) tools can and will perpetuate inequalities and, in effect, automate discrimination.”

AI learns from observations made on past data. It learns the features of the data and simplifies its representation in order to find patterns. During this process, data gets mapped to a lower-dimensional (or latent) space in which “similar” data points sit closer together. As a result, even if we drop an undesired feature like ‘race’ from the training data, the algorithm can still learn it indirectly through correlated features such as zip code. Simply dropping ‘race’ is therefore not enough to prevent the AI from learning biases present in the data. This also highlights the fact that ‘bias’ and ‘unfairness’ in data reflect the realities of the society we live in.
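The zip-code proxy effect described above can be demonstrated with a minimal sketch. Everything here is synthetic and illustrative (the group labels, zip codes, and approval rates are invented for the example): a toy “model” is trained without ever seeing race, yet an audit of its predictions by race shows the historical disparity reappearing through the correlated zip code.

```python
import random
from collections import defaultdict

random.seed(42)

# Hypothetical synthetic data: residential segregation makes 'zip_code'
# a strong proxy for 'race', and historical approvals were biased by zip.
applicants = []
for _ in range(2000):
    race = random.choice(["A", "B"])
    if race == "A":
        zip_code = "90001" if random.random() < 0.9 else "90002"
    else:
        zip_code = "90002" if random.random() < 0.9 else "90001"
    # Biased historical decisions: zip 90001 was approved far more often.
    approved = random.random() < (0.8 if zip_code == "90001" else 0.3)
    applicants.append({"race": race, "zip": zip_code, "approved": approved})

# A trivial "model" that never sees race: approve whenever the historical
# approval rate for the applicant's zip code exceeds 50%.
stats = defaultdict(lambda: [0, 0])  # zip -> [approvals, total]
for a in applicants:
    stats[a["zip"]][0] += a["approved"]
    stats[a["zip"]][1] += 1
approve_zip = {z: (k / n) > 0.5 for z, (k, n) in stats.items()}

# Audit the model's predictions by race: the disparity reappears even
# though 'race' was dropped from the features.
by_race = defaultdict(lambda: [0, 0])  # race -> [predicted approvals, total]
for a in applicants:
    by_race[a["race"]][0] += approve_zip[a["zip"]]
    by_race[a["race"]][1] += 1
rates = {r: k / n for r, (k, n) in by_race.items()}
print(rates)
```

Running the sketch shows group A approved far more often than group B under the model’s predictions, which is exactly why auditing outcomes by the protected attribute matters even when that attribute is excluded from training.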

When underrepresented sections of society contribute too few data points, there is a high chance they will be negatively impacted by AI decision-making. Moreover, an AI system’s skewed decisions generate new data that is then used to train it further, creating a feedback loop that deepens the disparity over time.
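One common way to quantify the kind of disparity discussed above, not covered in this article but widely used in practice, is the disparate impact ratio: the selection rate of the least-favored group divided by that of the most-favored group. US employment guidance (the EEOC’s “four-fifths rule”) treats ratios below 0.8 as a red flag. A minimal sketch, with invented approval counts:

```python
def disparate_impact_ratio(outcomes):
    """outcomes: dict mapping group -> (positive_decisions, total_decisions)."""
    rates = {g: k / n for g, (k, n) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical audit counts: 180/200 approvals for group A, 90/200 for group B.
audit = {"group_A": (180, 200), "group_B": (90, 200)}
ratio = disparate_impact_ratio(audit)

# The EEOC "four-fifths rule" heuristic flags ratios below 0.8.
flagged = ratio < 0.8
print(f"disparate impact ratio = {ratio:.2f}, flagged = {flagged}")
```

Monitoring a metric like this over successive retraining cycles is one concrete way to detect the feedback loop before it compounds.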

Trustworthy and Ethical AI is important

By definition, trustworthiness means “the ability to be relied on as honest or truthful.” Organizations must ensure their AI systems are trustworthy; in the absence of trust, undesired consequences may follow, including, but not limited to, loss of business, reputation, and goodwill, as well as lawsuits and class actions that can threaten a company’s very survival. Governments and society, in turn, must ensure that AI systems follow ethical principles for the greater good of ordinary citizens; one notable example is the UNESCO Recommendation on the Ethics of Artificial Intelligence.

As per the European Commission Ethics Guidelines for Trustworthy AI, Trustworthy AI must be Lawful, Ethical, and Robust.

Respect for human autonomy, fairness, explicability, and prevention of harm are the four founding principles of Trustworthy AI. It is critical that AI works for human well-being, ensures safety, always remains under human control, and never harms any human being.

Who is driving Ethical AI?

Realization of Trustworthy AI is envisioned through the following actions:


Leading tech companies have already announced Ethical AI initiatives and governance programs of one kind or another. But with no common ground in terms of benchmark principles, guidelines, and frameworks, it is difficult to assess whether the intent is genuine or merely optics. Because AI will have a profound impact on society and the well-being of ordinary citizens, ‘self-certification’ alone will not be enough.

Governments should (and some already have started to) define principles, policies, and guidelines, and establish effective oversight and regulatory mechanisms. This will help ensure that ordinary citizens are protected from the intended or unintended negative fallout of AI. As AI evolves, these frameworks and regulations should evolve with it.

Recently, the US President signed the Executive Order On Advancing Racial Equity and Support for Underserved Communities; however, more needs to be done.

The EU, UN, and US DoD have already taken the lead on this topic: the European Commission Ethics Guidelines for Trustworthy AI, the UNESCO Elaboration of a Recommendation on the ethics of artificial intelligence, and the US Department of Defense Ethical Principles for Artificial Intelligence should be considered baseline work toward a practical and mature set of guidelines for Trustworthy and Ethical AI.

Plan of action

Here we attempt to identify suggested actions for the actors involved. This is in no way an all-inclusive list; it should be taken only as a baseline and adapted to each particular case.

Conclusion

We all have a part to play in building Trustworthy and Ethical AI for the greater good of society (and humanity). With coordinated and persistent effort, it is definitely possible.
