1 = true = rational = right = male
0 = false = emotional = left = female

The false dichotomies built into this asseveration rattle the brain. Indeed, the entire essay has this flavor, and it is not even false.
But since the piece has gotten a fair bit of attention, I feel the need to respond to its key claim. The entire argument rests on the dual assertion that computers' use of binary numbers (1s and 0s) as the basis of their operation is (a) rooted in Aristotle's (elitist, sexist) philosophy, and (b) the fundamental reason algorithmic systems are biased. Hence, new computer systems not based on Aristotelian "binary" logic could be universal, unbiased, pure goodness.
Well.
First off, the "computers are binary and essentially invented by Aristotle" claim is a load of argle-bargle and pure applesauce. (Clickbait headlines in the Atlantic notwithstanding.) When electronic computers were first being developed in the 1940s and '50s, engineers experimented with different systems, including ternary (three-valued) logic, but binary proved the most practical for a simple reason. With binary logic, you can represent a 0 by "voltage close to 0" and a 1 by "voltage close to maximum". When you introduce more possible values, the levels sit closer together, so the system becomes more sensitive to noise, and hence less reliable. (There are other technical reasons for binary computing, and some reasons to prefer ternary systems, but this is enough for my purposes.)
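To make the noise point concrete, here is a toy simulation of my own (an illustration, not real circuit engineering): symbols are encoded as evenly spaced voltage levels between 0 and a maximum, Gaussian noise is added on the wire, and the receiver decodes to the nearest level. With the same voltage range and the same noise, ternary packs its levels closer together and misreads symbols far more often.

```python
# Toy illustration, not real circuit engineering: encode symbols as evenly
# spaced voltages in [0, 1], add Gaussian noise, decode to the nearest level.
import random

def error_rate(num_levels: int, noise_sd: float, trials: int = 100_000) -> float:
    """Fraction of symbols decoded incorrectly at a given noise level."""
    levels = [i / (num_levels - 1) for i in range(num_levels)]
    errors = 0
    for _ in range(trials):
        sent = random.randrange(num_levels)
        received = levels[sent] + random.gauss(0.0, noise_sd)
        decoded = min(range(num_levels), key=lambda i: abs(levels[i] - received))
        if decoded != sent:
            errors += 1
    return errors / trials

# Same voltage range, same noise: more levels means less room between them.
print("binary :", error_rate(2, noise_sd=0.2))   # roughly 0.006
print("ternary:", error_rate(3, noise_sd=0.2))   # roughly 0.14
```

The exact error rates depend on the noise level you pick, but at a fixed voltage range the gap always favors fewer levels.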
Now, to bias. Binary numbers have nothing whatsoever to do with algorithmic bias. The binary number system does not limit you to representing only two values (if it did, you could not, say, give Google Maps a street address). Indeed, you can represent as many different values as you like by stringing bits together, and so have as many categories of whatever kind you like. Any computer scientist would recognize this aspect of the claim as laughable.
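Since this is the crux of the technical claim, here is the whole trick in a few lines of Python (the category names are arbitrary examples of mine): n bits strung together give 2**n distinct codes, enough for any finite set of categories.

```python
# n bits strung together give 2**n codes, so any finite set of
# categories can be represented. Category names are arbitrary examples.
categories = ["north", "south", "east", "west", "up", "down"]

bits_needed = (len(categories) - 1).bit_length()   # 3 bits cover up to 8 values
encoding = {name: format(i, f"0{bits_needed}b") for i, name in enumerate(categories)}

for name, code in encoding.items():
    print(f"{name:>5} -> {code}")   # e.g.  north -> 000 ... down -> 101
# 32 bits already distinguish over 4 billion values.
```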
Algorithmic bias is due to the simple fact that all decision systems have biases. (Indeed, it is impossible to learn anything from experience without some sort of bias; machine-learning theorists call this an inductive bias.) No real system has perfect information, and any decision made on the basis of imperfect information is biased in some way. The question is not "Can we create unbiased algorithms?" but "Do we know what our algorithm's biases are?" and "Can we mitigate the ones we do not like?"
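Here is a stylized sketch of how that plays out (a hypothetical lending scenario with invented numbers, my own construction, not anything from the essay): a decision rule that never mentions group membership still wrongs one group more than another, simply because its information about that group is noisier.

```python
# Stylized sketch with invented numbers: approve when a noisy "score"
# exceeds a threshold. The score measures true quality less accurately
# for group B, so B's qualified applicants are denied more often, even
# though the rule never looks at group membership.
import random

def wrongly_denied_rate(noise_sd: float, n: int = 50_000) -> float:
    """Fraction of truly qualified applicants the rule denies."""
    denied, qualified = 0, 0
    for _ in range(n):
        true_quality = random.gauss(0.0, 1.0)
        score = true_quality + random.gauss(0.0, noise_sd)   # imperfect information
        if true_quality > 0:                                 # truly qualified
            qualified += 1
            if score <= 0:                                   # ...but denied
                denied += 1
    return denied / qualified

print("group A (cleaner score):", wrongly_denied_rate(noise_sd=0.3))  # roughly 0.09
print("group B (noisier score):", wrongly_denied_rate(noise_sd=1.0))  # roughly 0.25
```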
Utopian visions like Ms. Liu's, in which the right philosophy would let us build universal, unbiased computer systems, pure purveyors of algorithmic goodness, are false and actually dangerous. They promote the technocratic idea that unbiased algorithms are out there if only we could find them, and so keep our focus narrowly on algorithmic development.
However, bias is inevitable. The way to combat pernicious bias is continuous monitoring to discover instances of problematic bias, plus the exercise of good judgment in adjusting systems (the algorithms, the training data, how the systems are used, etc.) to mitigate the bad effects while maintaining the good ones. The proper way to combat algorithmic bias (which some are working on) is to develop better ways of detecting and characterizing such bias, together with the societal institutions and incentives that enable us to deal with such deleterious biases. (And this leads into questions of value systems and politics, which cannot be avoided in this arena. There is no royal road.)
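For a flavor of what "detecting and characterizing" can look like, here is a minimal monitoring sketch (the record format and the toy data are my own assumptions; the per-group approval rate and false-negative rate are standard fairness metrics):

```python
# Minimal monitoring sketch. Record format and toy data are assumptions;
# per-group approval rate and false-negative rate are standard metrics.
from collections import defaultdict

def disparity_report(records):
    """records: iterable of (group, approved: bool, good_outcome: bool)."""
    total = defaultdict(int)
    approved = defaultdict(int)
    good = defaultdict(int)          # applicants who turned out to be good
    fn = defaultdict(int)            # good applicants who were denied
    for group, ok, good_outcome in records:
        total[group] += 1
        approved[group] += ok
        if good_outcome:
            good[group] += 1
            fn[group] += not ok
    for g in sorted(total):
        rate = approved[g] / total[g]
        fnr = fn[g] / good[g] if good[g] else float("nan")
        print(f"group {g}: approval rate {rate:.2f}, false-negative rate {fnr:.2f}")

# Toy log in which group B's qualified applicants are denied more often.
log = ([("A", True, True)] * 80 + [("A", False, False)] * 20
       + [("B", True, True)] * 55 + [("B", False, True)] * 25
       + [("B", False, False)] * 20)
disparity_report(log)
# group A: approval rate 0.80, false-negative rate 0.00
# group B: approval rate 0.55, false-negative rate 0.31
```

Real monitoring involves far more than two numbers, of course, but even this much makes a disparity visible so that judgment can be exercised.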
Visions of simple solutions derived from proper thinking are seductive. But the necessary condition for developing and maintaining diversity-enhancing technologies will be, I'm afraid, eternal vigilance.