Algorithms’ biases can be destructive (Startup Twins: Episode 1)

There are things we can do to mitigate the threat.


Whenever the term “algorithm” pops up, some people naturally associate it with objectivity (and perhaps accuracy). After all, algorithms are purely driven by data and are run by emotionless machines, so whatever output they produce must be fair and unbiased, right? Unfortunately, that’s not always the case.

The truth is, predictive algorithms and machine learning (ML) models can only be as objective as their human designers make them. That’s not to say that developers program their models to be evil. In fact, the models are often deployed with good intentions: to remove human subjectivity from complex decision-making, or to reduce or even eliminate human error in critical data processing.

Regardless of intentions, if whoever controls a predictive model is not careful, it can end up wreaking more havoc than it does good.

Destructive ramifications may arise

Let’s say you are applying for a mortgage. You may have a high-paying job in a promising field. Your credit score checks pass with flying colors. But your application still gets denied. What happened?

It could be that a secret algorithm had been used to predict your ability to repay. The algorithm could be so poorly designed that it flagged your occupation as unstable — say, because your company’s competitors have recently gone through rounds of layoffs. How much should this really affect your individual application, if at all? What if you have been with the same company for the past decade, and are about to get a promotion? Wouldn’t you say that you have been treated with unfair judgment in this case?
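To make this concrete, here is a deliberately simplified sketch of how a coarse, industry-wide signal can outweigh an individual’s strong record in a scoring model. The feature names, weights, and threshold are all invented for illustration; no real lender’s model is this simple.

```python
# Hypothetical sketch: a naive repayment-risk score that mixes an
# industry-level "layoff signal" into an individual's application.
# All features, weights, and the decision threshold are made up.

def repayment_risk(applicant: dict) -> float:
    """Lower is better. The industry-wide proxy says nothing about
    this particular person, yet it dominates the score."""
    score = 0.0
    score -= 0.3 * applicant["credit_score_percentile"]  # strong personal credit
    score -= 0.2 * applicant["years_at_employer"] / 10   # a decade of tenure
    score += 0.7 * applicant["industry_layoff_rate"]     # coarse, industry-wide proxy
    return score

applicant = {
    "credit_score_percentile": 0.95,  # excellent credit
    "years_at_employer": 10,          # stable job history
    "industry_layoff_rate": 0.8,      # competitors recently had layoffs
}

risk = repayment_risk(applicant)
print(f"risk score: {risk:.2f}")
if risk > 0.0:  # arbitrary cutoff for this toy example
    print("application denied")  # the proxy term alone tips the decision
```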

Granted, it could very well have been something else that “informed” the lender’s decision. But what if that something else is equally irrelevant? When all is said and done, you don’t know for sure why you were rejected. You have no grounds to appeal.

Like a black box

What’s even scarier is that these predictive algorithms have been woven deeper and deeper into our everyday lives, often without our knowledge. When you board a flight, your airline might have already ranked you against other passengers by how valuable a customer you are, and it wouldn’t tell you anything about it. Something similar might happen when you browse an online shop or walk into a retail store. Gone are the days when you could freely go about your business without being profiled by organizations for their own gain.

Even in situations where we know algorithms might be at play, they are like a black box to us. We simply don’t know how they work, or whether they are even working as they should.

Social media’s “mom problem”

Blogger Chris Aldrich frequently shares the articles he writes to social networks like Facebook and Twitter. As most of us already know, though, he has next to zero control over exactly whom those posts will reach.

(As an experienced internet nobody, I can attest to the frustration. Take one of the Facebook Pages I run, for example. On an arbitrary day, a photo I posted could reach 100K+ people out of the blue. But the next photo I put up, a similar snap with an identical caption, would somehow reach only a single-digit number of people. It doesn’t help that I long ago decided against paying to boost my own posts.)

Aldrich’s problem didn’t end there. He had a “mom problem.” To be fair, Mama Aldrich did nothing wrong; out of unconditional love, she was simply always the first to “like” whatever her son posted. Facebook’s algorithm would then immediately categorize those posts as family-related content, covertly narrowing the audience they reached.

In hopes of breaking out of this self-fulfilling prophecy, Aldrich worked out a hack: for every new post, he would first set its privacy to “friends except mom (and maybe some other relatives).” Once the algorithm had reached the intended audience for his musings, he would reconfigure the post to be “public” again. Mama Aldrich’s “like” would, of course, show up shortly thereafter. Everyone’s happy.
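For the curious, here is a toy simulation of that feedback loop. Nothing here reflects Facebook’s actual system; the category names, audience sizes, and the “first liker” rule are all invented to show how one early signal can shrink a post’s reach.

```python
# Toy model of the "mom problem": if the first "like" comes from a
# family member, the post gets bucketed as family content and shown
# to far fewer people. Everything below is made up for illustration.

def categorize_by_early_engagement(first_likers):
    """Stand-in for an engagement-based classifier."""
    if first_likers and first_likers[0] == "mom":
        return "family"
    return "general"

AUDIENCE_SIZE = {"family": 40, "general": 5000}  # invented reach per category

def simulate_post(first_likers):
    category = categorize_by_early_engagement(first_likers)
    return AUDIENCE_SIZE[category]

# Same article, two different early-engagement histories.
print(simulate_post(["mom", "friend_a"]))       # 40   -> narrow, family-only reach
print(simulate_post(["friend_a", "friend_b"]))  # 5000 -> the intended, broader reach
```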

Lucky for Chris Aldrich, he was able to come up with a practical workaround. But behind the “mom problem,” social networks’ black-box algorithms can have social implications that are far more insidious. Imagine how one-sided political agendas could end up constantly reaching certain demographics, even without anyone explicitly telling the algorithm to do so.

What can we do to tackle algorithms’ biases?

First and foremost, we can’t fix what we don’t understand. Remember the Millennium bug? Back in the late 1990s, engineers realized that a global computer issue was about to occur: up until that point, most computer systems had been programmed to use only two digits for the YY in MM-DD-YY. So at midnight on January 1, 2000, all of those systems would be confused about what year it really was.
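If you have never seen the bug in action, here is a minimal illustration. The parsing rule below is a stand-in for what many legacy systems effectively did when only two digits of the year were stored.

```python
# Minimal illustration of the two-digit-year problem: with only "YY"
# stored, "00" is indistinguishable from 1900, so comparisons break
# at the rollover.

from datetime import date

def parse_yy_naive(mm, dd, yy):
    """How many legacy systems effectively interpreted a two-digit year."""
    return date(1900 + yy, mm, dd)

before = parse_yy_naive(12, 31, 99)  # December 31, 1999
after = parse_yy_naive(1, 1, 0)      # intended as January 1, 2000...
print(after)                          # ...but comes out as 1900-01-01
print(after > before)                 # False: "tomorrow" sorts 100 years in the past
```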

Because the issue was identified early enough, most of the affected computers were promptly fixed via a code patch or a system upgrade. Back then, however, people were not as tech-literate as we are today, and the “Millennium bug” wound up being used by a host of fringe groups to spread conspiracy theories.

Don’t be one of those people who choose to believe misconceptions out of fear. And don’t spread unnecessary fear either. What’s happening today is not that machines have gone rogue. Algorithms are capable of yielding biased results because they reflect their designers’ and developers’ human biases.

It’s important to understand the current situation. Become part of the conversation. Help shape the future together.

Beyond that, it’s ultimately researchers’ and developers’ responsibility to craft algorithms with the utmost care, paying extra attention to underrepresented demographics that could be negatively impacted. They should also re-evaluate their models regularly to account for changes in the social landscape.
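One concrete habit is to routinely compare outcomes across groups. Below is a minimal sketch of such a check, using made-up decision data and the common “four-fifths” heuristic as an example; real audits require far more rigor than this.

```python
# Minimal sketch of a routine fairness check: compare approval
# (selection) rates across demographic groups. Data and group labels
# are invented for illustration.

from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# "Four-fifths rule": flag a potential disparate impact when a group's
# selection rate falls below 80% of the highest group's rate.
highest = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * highest}
print(flagged)  # groups warranting a closer look
```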

For those who are not currently in tech, you can help drive the narrative too. Push for transparency as much as possible. We can’t fix what we don’t understand.

What does this have to do with my startups?

I have a habit of asking myself questions and then searching for answers. I know many of you do, too.

For me, a great way to gain an in-depth understanding of a concept is to build a model or a prototype of it. And if our team’s ongoing experience of building things can benefit you in some way as well, even better. This is what the first startup, an experimental lab, is about.

If you are not a huge fan of experiments, reading and familiarizing yourself with what’s going on in our world should suffice too. That is what the second startup, The Silicon Valley Times, is here for.

My goal is for the community to gain a better understanding of topics that intrigue you, whether through experiments or through reading, and thereby better inform your decisions about what actions to take.

Knowledge is cumulative. Better yet, there is no limit to how much one can learn. The two startups will be a supportive vehicle for multiplying our collective knowledge without bound.

What’s your story?

Have you been unfairly treated by predictive systems? What solutions do you have for our society? Are you part of an initiative that aims to tackle the social issue outlined here? What inspired you to take the plunge? Join the conversation on our Facebook group. Inspire others.

What’s next?

In the next episode, we will explore another social issue — are we defenseless against automation taking our jobs?