The shift in the decision-making strategy — Bias in AI

Aleksandra Hadzic
Apr 12, 2021 · 6 min read


Have you ever wondered — Do we make decisions logically or intuitively?

Having a better idea of how to use data ethically; Source: Unsplash

When it comes to reasoning and feelings, I’ve always gone with my gut instinct instead of analyzing things more objectively and critically. But there’s one issue with that. It’s known as bias.

People are mostly biased because they don’t notice when their heuristic intuition conflicts with the rational considerations of the task, which usually only come into play later in the reasoning process. That conflict drives us to the point where we make impulsive, or maybe even irrational, decisions. And we don’t know why.

“Black box” or explainable AI?

Bias is a term from human psychology, but it has an equivalent in AI. The controversial questions raised about bias in AI mostly relate to racism and gender inequality. The switch that flipped in my head and made me dig into biases in general, and in AI specifically, was watching Coded Bias (2020) on Netflix. It is a provocative film about how various forms of biometric surveillance, artificial intelligence, and data science technology can be biased in both implementation and use.

Coded Bias (2020); Source: Netflix

The information that currently drives the world, and the beliefs and attitudes on which today’s cultures and societies are based, should not simply be stored away in artificial intelligence systems.

We do not need a recreation of the world as it is now. Even less do we need a mathematical model or algorithm that annuls the years, decades, and centuries of fighting against racism, gender discrimination, and the many other vital issues that are at stake.

Hence, it’s so important to become aware of the fact that bias doesn’t come from the machine learning model itself.

The source of bias is the data used to train the model: the training dataset that we provide.

In some cases, the system performs admirably on certain types of data while failing miserably on others.
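To make that concrete, here is a minimal sketch with synthetic data (the group split, the shift value, and the logistic regression are all illustrative assumptions, not a real pipeline): one and the same model, trained on a sample that underrepresents one group, scores well on the majority group and noticeably worse on the minority.

```python
# Minimal illustrative sketch: same model, skewed training data,
# very different accuracy per group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the true decision boundary differs per group.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Training set: 95% group A, 5% group B. The bias lives in the data.
Xa, ya = make_group(1900, shift=0.0)   # group A (majority)
Xb, yb = make_group(100, shift=2.0)    # group B (minority)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate each group separately: accuracy diverges, yet it is one model.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("group A accuracy:", accuracy_score(ya_test, model.predict(Xa_test)))
print("group B accuracy:", accuracy_score(yb_test, model.predict(Xb_test)))
```

The model is identical in both evaluations; only the training data carried the imbalance.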

We are constantly immersed in virtual environments filled with various stimuli, and not just advertising stimuli; all of them oversaturate our space and time. Given its global accessibility, it is no longer an environment so much as an ecosystem. In such conditions, the will is invisibly worn down, layer by layer, until a state of hypersensitivity sets in. That is the perfect atmosphere for bias to affect both our own and the systems’ now impulsive decision-making. The most astonishing thing is that we wouldn’t even notice.

Beyond reality; Source: Unsplash

Combating privacy and security attacks

Neither human beings nor AI should bear the blame alone, and the information we currently possess should not be treated as some higher intelligence. We need to find a way to build mutual trust. Let’s look at some examples of how we can rebuild it.

Take just one pattern occurring in the digital marketing industry; it replicates from the microsystem to the bigger picture. We wake up in the morning, and instead of having a moment for mindfulness and being on our own, we reach for the phone and, most probably, social networks. After just a few scrolls, an ad pops up. And there is another one. And another one.

The data that was in front of your eyes for just a couple of seconds is now imprinted in your mind. More importantly, if you engaged in any way and did what that brand or company wanted you to do, you’re now part of their funnel.

You didn’t want to, but you are.


It just so happens that recommendation algorithms optimize their decisions for particular data-driven marketing strategy goals: raising brand awareness, driving more conversions, or maximizing profits.

Profit against humanity; Source: Unsplash

However, one crucial thing to remember is that these decisions may have significant consequences: recommending everyday items repeatedly to increase interactions and clicks can help the company achieve its goals, but it can also hurt individuals or the community.
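As a toy illustration of one possible countermeasure (the item names, scores, and penalty weight below are made up), a recommender can re-rank its candidates so that raw engagement score isn’t the only criterion and repeated exposure of the same item is penalized:

```python
# Toy sketch with hypothetical scores: demote items the user has
# already been shown many times, instead of chasing clicks alone.
def rerank(candidates, times_shown, penalty=0.15):
    """candidates: {item: engagement_score}; times_shown: {item: count}."""
    adjusted = {
        item: score - penalty * times_shown.get(item, 0)
        for item, score in candidates.items()
    }
    return sorted(adjusted, key=adjusted.get, reverse=True)

scores = {"blender": 0.91, "novel": 0.74, "headphones": 0.70}
shown = {"blender": 6}          # already recommended six times recently
print(rerank(scores, shown))    # ['novel', 'headphones', 'blender']
```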

AI-driven Audience Optimization

On the one hand, when AI and machine learning systems are trained using biased data, and the bias is not recognized and addressed, there can be severe consequences. Audience optimization is one example of how these issues manifest in practice when it comes to algorithmic bias.

A typical example is uploading a customer base to Facebook to find a lookalike audience without first manually cleaning out the customers with poor purchase behavior. Companies or brands that skip this step can miss out on a potentially profitable consumer niche, causing their market share to stagnate.

Creating a lookalike audience with clean datasets; Source: Unsplash

Once we start feeding algorithms preprocessed, “clean” datasets, exactly those niches and the customers with the highest product engagement can make a data-driven strategy skyrocket. Therefore, it is critical to be able to assess the situation and take appropriate action.
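As a rough sketch of that cleaning step (the column names `total_orders`, `return_rate`, and `refund_disputes` are hypothetical; a real CRM export will differ), one could filter the seed list with pandas before uploading it:

```python
# Hypothetical sketch: keep only engaged, low-return customers
# in the lookalike seed, rather than uploading everyone.
import pandas as pd

customers = pd.read_csv("customers.csv")

seed = customers[
    (customers["total_orders"] >= 2)          # repeat buyers only
    & (customers["return_rate"] < 0.3)        # drop chronic returners
    & (customers["refund_disputes"] == 0)     # drop problematic accounts
]

# Lookalike seeds are typically uploaded as a simple customer list.
seed[["email"]].to_csv("lookalike_seed.csv", index=False)
```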

Speaking of the customers’ more human side, we see this situation occurring in online advertising. If we cannot escape it, we can at least fight for its ethical side.

The fact is that when a machine learning model starts operating, its initial results will be influenced by the data with which it was trained.

That’s why it is important to train those models with improved data, not data that perpetuates racism or gender inequality, for example. However, once the model is in place, the system can continue collecting data and learning independently.
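For instance, a minimal sketch of that continued learning (the batch source and features below are made up), using scikit-learn’s incremental `partial_fit` API, might look like this:

```python
# Hedged sketch: once deployed, a model keeps updating itself from
# fresh batches of data via incremental fitting.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])

def on_new_batch(X_batch, y_batch):
    # Each new batch of (audited!) production data refines the model.
    model.partial_fit(X_batch, y_batch, classes=classes)

rng = np.random.default_rng(0)
for _ in range(3):  # simulate three incoming batches
    X = rng.normal(size=(64, 4))
    y = (X[:, 0] > 0).astype(int)
    on_new_batch(X, y)
```

Precisely because the model keeps updating itself, each incoming batch deserves the same bias audit as the original training set.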

Information is beautiful

Besides data preprocessing, further refining the data is critical, and it can be very costly without additional insight. As a result, it is essential to estimate how the model values specific features and feature combinations. You can improve the model by using, surprise, surprise: human knowledge. Interpretation is and will be in our blood.

Interpretation is and will be in our blood; Source: Unsplash
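One concrete way to do that estimation (not the only one, and shown here on a synthetic dataset as an illustrative assumption) is permutation importance from scikit-learn: shuffle one feature at a time and watch how much the validation score drops.

```python
# Illustrative sketch: permutation importance on a held-out set
# reveals which features the model leans on most heavily.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                random_state=0)

# A large drop in score when a feature is shuffled means the model
# depends on it heavily; that is where human review should start.
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```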

It’s essential to keep reviving the fact that artificial intelligence is artificial. Its value is determined by the contribution of human observation and critique. Bias is the most serious of these issues if AI is not handled correctly. The key for any marketing team to minimize the negative impact of data bias is to keep humans involved as much as possible. Algorithms can’t imagine a future that isn’t the same as the present.

Human supervision is needed as long as these intelligent systems don’t meet ethical terms and conditions. Refining the model or the data collection shouldn’t be based on return on investment alone.

Amplifying human decision-making

Everything can be optimized, as long as humans preserve their true nature and present information as beautifully as it appears in the eye of the beholder.

Data doesn’t need to serve the algorithm that initiates digital marketing success or success in any other area.

Data is designed to break down internal biases, designed to tell a new story, one that goes against what we already thought.

What if we could have both: fighting bias and building trust in AI?

Then, and only then, will this shift in the decision-making process finally make sense for the better times on the humanity timeline that we are witnessing.
