How Human Bias Affects Machine Learning
Let's play a game: close your eyes and picture a shoe.
Okay, did anyone picture this? This? How about this?
We may not even know why, but each of us is biased toward one shoe over the others.
Now imagine that you're trying to teach a computer to recognize a shoe.
You may end up exposing it to your own bias.
That's how bias happens in machine learning.
But first, what is machine learning?
Well, it's used in a lot of technology we use today.
Machine learning helps us get from place to place,
gives us suggestions, translates stuff, even understands what you say to it.
How does it work?
With traditional programming, people hand-code the solution to a problem, step by step.
With machine learning, computers learn the solution by finding patterns in data.
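To make that contrast concrete, here is a minimal sketch in Python (my own illustration, not from the video; the shoe features, toy data, and function name are invented):

```python
# A toy "is this a shoe?" task, solved both ways.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: a person hand-codes the rule, step by step.
def is_shoe_hand_coded(has_sole: bool, has_laces: bool) -> bool:
    # The programmer's assumptions are baked in: "shoes have laces"
    # quietly excludes heels, sandals, and slip-ons.
    return has_sole and has_laces

# Machine learning: the computer learns the rule from examples.
# Each row is [has_sole, has_laces]; labels say whether it is a shoe.
examples = [[1, 1], [1, 0], [1, 0], [0, 0]]
labels = [1, 1, 1, 0]  # sandals and heels (laces=0) are still shoes

model = DecisionTreeClassifier().fit(examples, labels)

print(is_shoe_hand_coded(True, False))  # False: the hand-coded rule misses sandals
print(model.predict([[1, 0]]))          # [1]: the pattern was found in the data
```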
So it's easy to think there's no human bias in that.
But just because something is based on data doesn't automatically make it neutral.
Even with good intentions, it's impossible to separate ourselves from our own human biases.
So, our human biases become part of the technology we create in many different ways.
There's interaction bias, like this recent game where people were asked to draw shoes for the computer.
Most people drew ones like this,
so as more people interacted with the game, the computer didn't even recognize these.
Latent bias; for example, if you were training a computer on what a physicist looks like,
and you're using pictures of past physicists,
your algorithm will end up with a latent bias skewing towards men.
And selection bias; say you're training a model to recognize faces.
Whether you grab images from the internet or your own photo library,
are you making sure to select photos that represent everyone?
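As a hedged illustration of what that check could look like, here is a short sketch (the tags and counts are hypothetical, not a real dataset) that tallies who actually appears in a photo collection before training:

```python
# One simple way to surface selection bias: count who is in the data.
from collections import Counter

# Imagine each photo carries a self-reported demographic tag.
photo_tags = ["group_a", "group_a", "group_a", "group_b", "group_a"]

counts = Counter(photo_tags)
total = len(photo_tags)

for group, n in counts.items():
    print(f"{group}: {n}/{total} ({n / total:.0%})")
# Heavily skewed counts are a warning sign: the model will see far
# more of one group than the others, and learn accordingly.
```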
Since some of our most advanced products use machine learning,
we've been working to prevent that technology from perpetuating negative human bias.
From preventing offensive or clearly misleading information from appearing at the top of your search results page, to adding a feedback tool on the search bar
so people can flag hateful or inappropriate autocomplete suggestions.
It's a complex issue and there's no magic bullet,
but it starts with all of us being aware of it so we can all be part of the conversation.
Because technology should work for everyone.