How simple data questions become biased recommendations
For years, I volunteered some of my time analyzing crime statistics and law enforcement data in Seattle and sharing findings with local leaders. One thing that has always fascinated me is how an innocent, dispassionate analysis can still reinforce prejudices and exacerbate social problems.
For example, if you look at the crime rates per district, you can see which area has the highest rate. Nothing wrong with that. The problem arises when that data results in a reallocation of police resources from the district with the lowest crime rate to the district with the highest crime rate, or changes the emphasis on enforcement in the district with the highest crime rate. The data may be solid, but the obvious decision may have unexpected consequences.
Dig deeper: how to combat bias in your AI models
Now that we are living in the age of AI adoption, I was curious to see how AI would address similar questions. I asked an AI platform, “Which district should the Seattle Police Department allocate more resources to?” After the standard preamble, it responded that Belltown had the highest crime rate, along with significant drug abuse and homelessness.
If you let AI make the decision, the conclusion is that Belltown should get more police resources. So I asked the same platform what prejudices or problems that could worsen. It listed the criminalization of homelessness, over-policing of minorities, displacement of crime to neighboring areas, a focus on policing rather than social services, increased tension between police and the community, negative impacts on local businesses, an emphasis on quality-of-life crimes, the potential for increased use of force, and worsening gentrification.
Finally, I asked whether police resources in Belltown should be increased given these impacts. The long answer boiled down to “it depends, but probably not – a hybrid approach would work better.”
The data ethics principles that every AI user must apply
Many of the problems analysts face in drawing conclusions and recommendations also apply to AI. At the macro level, there are two opposing approaches to decision making: gut decisions and data-driven decisions.
Gut decisions are where we decide what to do based on our lived experiences, feelings, perceptions, and assumptions. They allow us to make quick decisions, but they are not ideal for important decisions because counter-intuitive things happen all the time in this universe.
If we allow it, AI will sit at the other end of that spectrum: making decisions based purely on data. Here we do what the data tells us. Before the recent expansion of AI, this wasn’t much of an issue because analysts knew not to follow the data mindlessly. With AI, however, people who wonder what to do sometimes simply follow its answer, because AI’s data-driven responses don’t appear to be influenced by opinion.
Dig deeper: How biases in AI can damage marketing data and what you can do about it
There is a whole discipline of data ethics that AI users need to understand in order to properly adopt AI. Here are the four most important principles to consider when using AI.
- Responsibility: Even though you used AI to reach a decision, you are the person responsible for the outcome.
- Honesty: AI knows about principles such as bias and discrimination in the abstract, but it cannot reason about them or apply them appropriately in concrete situations.
- Security: There are many AI platforms and security levels vary, so be careful about the data you provide to them.
- Trust: AI platforms answer questions with confidence, but that confidence often collapses under even light scrutiny.
With this in mind, you might be wondering how to make decisions if you can’t rely on gut decisions or AI. The answer is data-driven decision making.
How data-driven decision making differs from gut feeling and AI automation
Blackjack illustrates this clearly. Every casino gift shop sells a card that tells you what to do for each combination of the dealer’s face-up card, your cards, and the table rules. You can bring that card to the table and use it openly in front of the dealer and the pit boss. Do that and you’re in AI territory: letting the data make the decisions.
It is possible to make better decisions than the mathematical strategy if you have information it did not account for. For example, if the dealer has somehow let you see the hole card or the next card in the deck, you might deviate from the strategy card. If you have 14 and the strategy card tells you to hit, but you know the next card is a 10, you would stand.
Another increasingly popular approach is to track the cards already revealed on the table to infer what is left in the deck. If the strategy card tells you to hit on 16, but you know very few small cards remain, you may stand instead. Or, if the deck is rich in aces and tens, you can raise your bets because the chance of a blackjack is higher.
Do this in front of the pit boss and you will probably be invited to stop playing. It is not illegal, but it tilts the odds too far in the player’s favor. This is the essence of data-driven decision-making: using the data strategy as a baseline, but making exceptions when the evidence justifies them.
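The baseline-plus-exceptions idea above can be sketched in a few lines of Python. This is an illustrative toy, not a real blackjack engine: the strategy table covers only a handful of hard totals against a dealer 10, and the `running_count` parameter and the deviation threshold are simplified assumptions standing in for the card-tracking described above.

```python
# Toy model of data-driven decision making: a data-derived baseline
# (the "strategy card"), plus a justified exception when extra
# information is available (here, a simplified running count).

BASIC_STRATEGY = {
    # (player hard total, dealer upcard) -> baseline action
    (12, 10): "hit",
    (14, 10): "hit",
    (16, 10): "hit",
    (17, 10): "stand",
}

def decide(player_total, dealer_upcard, running_count=0):
    """Start from the strategy card, then deviate when the count
    indicates few small cards remain (a positive count here means
    the remaining deck is rich in tens, so hitting a stiff hand
    is more likely to bust)."""
    baseline = BASIC_STRATEGY.get((player_total, dealer_upcard), "stand")
    # Exception: with a stiff total (12-16) and a strongly positive
    # count, override the card's "hit" and stand instead.
    if baseline == "hit" and 12 <= player_total <= 16 and running_count >= 3:
        return "stand"
    return baseline
```

With no extra information, `decide(16, 10)` follows the card and returns `"hit"`; with a strongly positive count, `decide(16, 10, running_count=4)` makes the exception and returns `"stand"`. The point is the shape of the logic, not the thresholds: the data drives the default, and human-supplied context drives the override.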
Dig Deeper: The Hidden AI Risk That Could Break Your Brand
Using AI without letting it override your judgment
The potential of AI is virtually limitless, but like any other tool, it works best when used deliberately. No single system should drive every decision. Just as you wouldn’t build a house with one tool, AI should sit alongside other methods, supported by human judgment and context.
Using the right tool for the right job reduces the risk of unintentional bias and helps prevent small problems from becoming big. Applied in this way, AI can produce stronger and more reliable results.
Contributing authors are invited to create content for MarTech and are chosen for their expertise and contribution to the martech community. Our contributors work under the supervision of the editors, and contributions are checked for quality and relevance to our readers. MarTech is owned by Semrush. The contributor was not asked to make any direct or indirect mention of Semrush. The opinions they express are their own.