Famed editorial cartoonist Herbert Lawrence Block's classic criticism of the economic strategies of the Reagan administration, published on Feb. 2, 1984.
The fact that such a stark cartoon remains poignant over 30 years later isn’t surprising in the least. But the immediate relevance of the scenario it depicts to current affairs is.
The most ironic thing about Labor Day is that the wealthiest people with the easiest jobs get the day off while actual working class people still have to go to work and serve all the richies on vacation.
If we actually appreciated labor, every fast food restaurant in the country would be closed tomorrow, but instead it’s one of the busiest, most dreaded days of the year for service employees.

It ain’t even actual labor day, that happened months ago

Trickle down never works
no amount of budgeting will make up for the fact that we simply do not make enough money
no amount of therapy will make up for the fact that we simply do not make enough money
CODED BIAS (2020) dir. Shalini Kantayya

Not just “not progress”: AI algorithms will, by design, make things worse. These algorithms are really good at picking out patterns and reinforcing them. Keyword: reinforcing. The algorithm will notice that only 14% are women, and instead of continuing to pick 14% women it will pick 0% women.
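If you want to watch that 14% → 0% slide actually happen, here’s a toy simulation in Python. Everything is made up, tuned only so that about 14% of past hires come out as women like in the example above; it’s a sketch of the mechanism, not anyone’s real system:

```python
# Toy simulation (made-up data): train a model on biased hiring
# history, then see who it shortlists from a 50/50 applicant pool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
is_woman = rng.random(n) < 0.5          # applicant pool is 50/50
skill = rng.normal(0, 1, n)             # a genuine merit signal

# Biased history: women were hired less often at the same skill
# level, so only ~14% of past hires are women.
hired = (skill + rng.normal(0, 1, n) - 1.5 * is_woman) > 1.0

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

# Shortlist the top 10% of scores from the same 50/50 pool.
scores = model.predict_proba(X)[:, 1]
shortlisted = scores >= np.quantile(scores, 0.90)

print("women among past hires:       ", round(is_woman[hired].mean(), 3))
print("women among model's shortlist:", round(is_woman[shortlisted].mean(), 3))
# Typical output: ~0.14 among past hires, but only ~0.02 on the
# shortlist -- the bias gets amplified, not just copied.
```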
There are things AI algorithms do really well, which are useful and cannot be achieved (or at least not as well) with traditional algorithms: filtering background noise from a recording, finding cancerous cells in a biopsy sample, predicting how you will move next to ensure you won’t lose your cell phone coverage. But they should never, never be asked to make “value” judgments which directly or indirectly have the potential to discriminate against people.

How come it won’t continue with 14%?

Somewhat simplified: when the algorithm is trained, it looks at the training data it’s fed (here that would likely be CVs of people Amazon ended up employing) and calculates which factors in the data are the strongest predictors of “success”. Then it comes up with a way to combine those factors into a score (a formula, typically not visible or understandable even to the person who created the algorithm), which is then translated into “call for interview” or “don’t call for interview”.
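To make that concrete, here’s roughly the shape of such a training loop in Python with scikit-learn. The CVs, the labels, and the 0.5 threshold are all invented for illustration; Amazon’s actual pipeline isn’t public:

```python
# Sketch of the training loop described above (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cvs = [
    "captain of men's rugby team, 5 years java",
    "women's chess club president, 5 years java",
    # ...in reality, thousands of CVs of past applicants
]
was_hired = [1, 0]  # the historical outcome is the definition of "success"

# The vectorizer turns each CV into thousands of word-level factors;
# the regression learns a weight for each one. The combined scoring
# formula is exactly the opaque part described above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cvs, was_hired)

def decision(cv_text, threshold=0.5):
    score = model.predict_proba([cv_text])[0, 1]
    return "call for interview" if score >= threshold else "don't call"
```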
However, algorithms are stupid. They are really good at finding “cheat” factors that boost their predictions, factors which a human would (hopefully) disregard. So in this case, the algorithm will learn that being male (whether stated explicitly or implicitly in the CV) meant a six times larger chance of being hired in the training data.
This means the algorithm will notice that sex has a major impact on what it has been told “success” looks like, so it ends up weighting sex heavily in the final scoring formula. A CV from a woman then only gets a high score if the other factors the algorithm has decided are important outweigh the penalty in the sex category, and since it’s unlikely a woman will be enough better in every other category to make up for that penalty, she will likely not be called to an interview.
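You can actually watch both steps in the toy model from the sketch above: peek at the weights the formula learned, then score two CVs that are identical except for one gendered word. (The CVs and feature names are still hypothetical, of course.)

```python
# Same toy model as the earlier sketch, repeated so this runs alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cvs = ["captain of men's rugby team, 5 years java",
       "women's chess club president, 5 years java"]
model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(cvs, [1, 0])

# Peek at the learned weights: the gendered words soak up the signal,
# because they were the strongest "predictor" in the biased history.
vec = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
for word, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                           key=lambda pair: pair[1]):
    print(f"{word:10s} {weight:+.3f}")

# Two CVs identical except for one gendered word get different scores:
for cv in ("men's coding club, 3 years python",
           "women's coding club, 3 years python"):
    print(cv, "->", model.predict_proba([cv])[0, 1])
```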
This is always going to be a massive problem whenever there is bias in the training data. There is no reliable, proven-to-work way to strip that bias out so the algorithm disregards those factors, which means the algorithm will reproduce and reinforce whatever biases its training data contains. Machine learning and artificial intelligence should therefore only be applied in situations where either the risk of adverse outcomes is very low (spell-checking your text messages) or the training data is reasonably likely to be unbiased (minimising the number of dropped calls in a cell network).
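And the obvious fix, “just delete the sex column”, is exactly what doesn’t reliably work: anything in the data that correlates with sex smuggles the bias right back in. Toy data one more time, with a made-up proxy feature standing in for something like a gendered club or college on the CV:

```python
# Toy illustration: train WITHOUT the sex column and watch the bias
# re-enter through a correlated proxy feature (all data made up).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
is_woman = rng.random(n) < 0.5
# Harmless-looking feature that happens to correlate with sex
# (think "women's chess club" on the CV), not with ability.
proxy = (rng.random(n) < np.where(is_woman, 0.9, 0.1)).astype(float)
skill = rng.normal(0, 1, n)
hired = (skill + rng.normal(0, 1, n) - 1.5 * is_woman) > 1.0  # biased history

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("weight on the proxy:", model.coef_[0][1])
# Strongly negative: the model never saw "sex", but it rebuilt the
# discrimination out of whatever correlates with it.
```

Which is why the safe move is the one stated above: keep these systems away from value judgments about people entirely, rather than trying to launder the bias out of them.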




