Haven’t watched, but can you even keep human bias out of AI given that your programmers are humans?
Biased to be not biased?
Given that we’re heading into an AI/ML driven age, it would only make sense that at some point the machine will program itself.
We’re already seeing that with https://www.futurity.org/artificial-intelligence-bayou-coding-1740702/ and in a way earlier with Polymorphic code and Smalltalk’s network of objects - https://hackernoon.com/back-to-the-future-with-smalltalk-57c68fab583a.
Listened to the talk again.
I hear what she’s saying, and it makes sense to varying degrees. There are also counterarguments to some of her points.
It also begs the question: will AI eventually disregard the looks-good-on-paper college degrees, PhDs, etc., and instead gravitate toward skills- and experience-based credentials when making decisions?
What happens then?
I believe that would depend on the models used. If one weights keywords associated with skills and experience over white papers and degrees, then one should get the scenario you’re asking about.
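To make that concrete, here’s a toy sketch of that kind of weighting. All the keywords and weights are made up for illustration; a real system would learn them from data rather than hard-code them:

```python
# Toy résumé scorer: skill/experience keywords carry more weight than
# credential keywords. Keywords and weights are invented for this example.
WEIGHTS = {
    # skills / experience
    "python": 3.0, "kubernetes": 3.0, "shipped": 2.5, "led team": 2.5,
    # credentials
    "phd": 0.5, "bachelor": 0.5, "white paper": 0.5,
}

def score(resume_text: str) -> float:
    """Sum the weights of every keyword found in the résumé text."""
    text = resume_text.lower()
    return sum(w for kw, w in WEIGHTS.items() if kw in text)

# Under these weights, a hands-on candidate outscores a credential-heavy one.
hands_on = score("Shipped a Python service on Kubernetes, led team of 4")
credentialed = score("PhD, Bachelor of Science, authored a white paper")
```

Of course, whoever picks the keywords and weights is reintroducing their own bias, which loops back to the original question.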
I know Indeed and LinkedIn do this, even though most recruiters and hiring managers can’t tell the difference between x and y skills, so they just work from a spreadsheet of search filters one can find on Google and mass-spam everything that comes up.
Bias can’t really be programmed out then, at least not easily in the near future, since gut feel also comes into play, I would think.
Does bias even exist then, if one hires based purely on job function?