Taking a Stand on AI Ethics

Aleksandra Hadzic
6 min read · Dec 7, 2021

As you read this, AI systems are being embedded and scaled faster than existing governance frameworks (i.e., the rules of the road) are evolving.

While it is clear that AI systems offer opportunities across many areas of life, and that many of these systems have a positive impact on the world, they also pose significant risks.

When we fail to recognize those risks, we lose sight of the fundamentals that are key to the whole concept of development itself.

Even though the discussion of this topic remains more philosophical than practical, it is difficult to avoid the growing presence of AI in our lives, particularly as we hear more and more about it and see it increasingly embedded in our social media, jobs, and homes.

There is no question that AI has the potential to improve our lives in countless ways.

However, ethical considerations cannot be ignored, and we cannot afford to get caught up in a false sense of excitement or awe. As with any powerful tool or technology, a risk/benefit analysis and thoughtful discussion about the ethics of AI need to occur.

Does AI Pose A Moral Dilemma?

By this point, most of us are aware that AI can automate many tasks, increase productivity, and make our lives easier, while also enabling us to solve problems that are simply too complicated for humans to solve alone. However, AI is not without its challenges. As AI becomes increasingly pervasive in our society and affects every aspect of our lives, it also creates a host of new ethical and moral questions that have never been faced before.

Some of the most pressing issues we currently face regarding AI include:

  • The current lack of consumer protection surrounding AI algorithms
  • The potential for AI to intensify existing inequalities
  • The potential for AI to create new systemic issues at a rapid pace

Currently, there aren’t many comprehensive regulations addressing the ethical use of AI. This means there’s no real way for consumers to protect themselves from having their data misused by companies or being discriminated against due to the biases programmed into machine learning systems. It also means that companies have little incentive to use their customers’ data responsibly.

Moreover, we lack any legal framework to protect us from malicious or reckless uses of AI, such as autonomous weapons systems or algorithmic trading bots gone rogue.

Ethics of AI and the Inability To Think Beyond Today

The past few years have witnessed a proliferation of initiatives on ethics and AI.

Whether formal or informal, led by companies, governments, international and non-profit organizations, these initiatives have developed many principles and guidance to support the responsible use of AI systems and algorithmic technologies.

Despite these efforts, few have managed to make any real impact in mitigating the effects of AI.

What are the reasons for this failure?

There are at least three crucial factors. First, although there is an unprecedented level of hype around AI-driven technologies, the technologies themselves are still maturing.

The hype has created unrealistic expectations among people and organizations about the capabilities and applications of these technologies.

Secondly, many of these initiatives focus on symbolic guidance such as principles or best practices instead of concrete actions that can be easily implemented. Finally, these initiatives are fragmented across numerous organizations that do not coordinate or collaborate.

The Ethics of AI in a Connected World

This trade in personal data has raised new concerns over the ethics of the invisible digital infrastructure that underpins many of society’s most essential systems, influencing the way we think, communicate, and act. Yet when the imperative should be to address these imbalances, many businesses have been accused of turning a blind eye to them and failing to take responsibility for their role in creating an ethical AI framework.

The question of who effectively controls AI systems is especially fraught, because such control is difficult even to define.

The reach of AI systems into every aspect of daily life has shifted more social and economic activity into the digital world. Leading technology companies now control many public services and digital infrastructures through procurement or outsourcing schemes.

The public is largely unaware that it is giving up control over its own data to private companies, which can be compelled by state actors to exploit it. Those companies, in turn, are becoming increasingly dependent on public institutions for their basic functioning, while also using state-of-the-art surveillance tools to track and monitor users and extract value from them.

When AI Challenges Face Principles in Practice

Today we’re at a critical moment in the use of AI. In some cases, this technology has been shown to have extraordinary potential to advance society. Still, it can also be used to undermine the core values at the heart of society.

If significant advances in AI are not accompanied by a greater understanding of these issues, it is our duty as professionals and academics to ensure that this becomes a top priority for our colleagues and government institutions.

We need to work together on this issue. As scientists, engineers, and designers, we have an essential role in educating people about AI’s risks and opportunities. It would be negligent of us not to address this issue head-on.

As I mentioned earlier, I believe the key is change — specifically organizational change within both industry and academia — but how exactly do we make this happen? And what should this change look like?

There are three main issues at play:

The first challenge is that many attempts to model and govern AI systems fall short of fully appreciating the nuances and complexities of these systems. This is particularly an issue when considering the “ethics” dimension of AI systems.

The second major issue is that to date, all the talk about ethics is simply that: talk. While there has been much discussion on AI’s risks, challenges, and benefits, we have yet to see any real, impactful action. This is despite researchers already producing frameworks, concepts, and recommendations for governing AI systems.

The third major issue is that discussions on AI and ethics are still primarily confined to the ivory tower. For real progress to be made, it is critical for governments, businesses, civil society organizations, and citizens to get involved in this debate.

AI and Ethics: The Conversation We Must Have Now

The time to begin this conversation is now. This is not a conversation about whether AI should be built, but about how it should be designed and developed. The good news is that AI can deliver great benefits, provided we bring its dark sides to light.

The list of issues that need addressing is lengthy, and some are more complex than others. However, they all share a common root: they relate to the ethics of AI, to what it means to “do no harm” and to maximize societal well-being. These ethical conundrums are not new; we have been wrestling with them for millennia.

But unlike with other transformative technologies such as nuclear power, we do not have the luxury of time for these discussions. We must start now, because once these systems are deployed it may be too late to alter course.

Education. Innovation. Solutions-Based Science

While it might be easy to feel helpless in the face of daunting, global problems like climate change or nuclear warfare, many individuals and grassroots networks have begun to integrate a new way to respond: proactive solutions-based science.

AI is one of the most academically challenging research areas with which any scientist can engage, but it also has huge potential to benefit society and drive innovation. This is why I have made it my mission and passion to build this conversation. I want to help ensure that we consider all the ethical and social considerations within our existing networks of expertise before moving ahead, so that AI can indeed be a force for good in society.

Written by Aleksandra Hadzic
Researching AI. Merging Data Science and Digital Marketing.