cross-posted to:
- latestagecapitalism@lemmygrad.ml
Wow. If a black box analysis of arbitrary facial characteristics is more meritocratic than the status quo, that speaks volumes about the nightmare hellscape shitshow of policy, procedure and discretion that resides behind the current set of ‘metrics’ being used.
The gamification of hiring is largely a result of businesses de-institutionalizing Human Resources. If you were hired on at a company like Exxon or IBM in the 1980s, there was an enormous professionalized team dedicated to sourcing prospective hires, vetting them, and negotiating their employment.
Now we’ve automated so much of the process and gutted so much of the actual professionalized vetting and onboarding that it’s a total crapshoot as to who you’re getting. Applicants aren’t trying to impress a recruiter; they’re just aiming to win the keyword-search lottery. Businesses aren’t looking to cultivate talent long term, just to fill contract positions at below-contractor rates.
So we get an influx of pseudo-science to substitute for what had been a real sociological science of hiring. People promising quick and easy answers to complex and difficult questions, on the premise that they can accelerate the churn of staff without driving up the cost of doing business.
Gotcha. This is replacing one nonsense black box with a different one, then. That makes a depressing kind of sense. No evidence needed, either!
All of that being typed, I’m aware that the ‘If’ in my initial response is doing the same amount of heavy lifting as the ‘Some might argue’ in the article. Barring the revelation of some truly extraordinary evidence, I don’t accept the premise.
Spoken like somebody with the sloping brow of a common criminal.
A primary application of “AI” is providing black boxes that enable the extremely privileged to wield arbitrary control with impunity.
Because HR is already using “phrenology”.
"Imagine appearing for a job interview and, without saying a single word, being told that you are not getting the role because your face didn’t fit. You would assume discrimination, and might even contemplate litigation. But what if bias was not the reason?
Uh… guys…
Discrimination: the act, practice, or an instance of unfairly treating a person or group differently from other people or groups on a class or categorical basis
Prejudice: an adverse opinion or leaning formed without just grounds or before sufficient knowledge
Bias: to give a settled and often prejudiced outlook to
Judging someone’s ability without knowing them, based solely on their appearance, is, like, kinda the definition of bias, discrimination, and prejudice. I think their stupid angle is “it’s not unfair because what if this time it really worked though!” 😅
I know this is the point, but there’s no way this could possibly end up with anything other than a lazily written, comically clichéd sci-fi future where there’s an underclass of, like, “class gammas” who have gamma face, and then the betas that blah blah. Whereas the alphas are the most perfect ughhhhh. It’s not even a huge leap; it’s fucking inevitable. That’s the outcome of this.
I should watch Gattaca again…
Like every corporate entity, they’re trying to redefine what those words mean. See, it’s not “insufficient knowledge” if they’re using an AI-powered facial recognition program to get an objective prediction, right? Right?
People see me in cargo pants, polo shirt, a smartphone in my shirt pocket, and sometimes tech stuff in my (cargo) pants pockets and they assume that I am good at computers. I have an IT background and have been on the Internet since March of 1993 so they are correct. I call it the tech support uniform. However, people could dress similarly to try to fool people.
People will find ways, maybe makeup and prosthetics or AI modifications, to try to fool this system. Maybe they will learn to fake emotions. This system is a tool, not a solution.
Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure”
TL;DR: as soon as you have a system like this, people will game it…
I think their stupid angle is “it’s not unfair because what if this time it really worked though!”
I think their angle is “it’s not unfair because the computer says so!” Automated bias. Offloading liability to an AI.
Racial profiling keeps getting reinvented.
Fuck that.
They then used data on these individuals’ labour-market outcomes to see whether the Photo Big Five had any predictive power. The answer, they conclude, is yes: facial analysis has useful things to say about a person’s post-MBA earnings and propensity to move jobs, among other things.
Correlation vs. causation. More attractive people start from better negotiating positions. People from richer backgrounds will probably look healthier. People from high-stress environments will show signs of stress through skin wrinkles and resting muscle tension.
This is going to do nothing but reinforce systemic biases, in a Kafkaesque, Gattaca-style way.
And then of course you have the garden of forking paths.
These models have essentially no constraints on their features, so we have an extremely large feature space, and we train the model to pick whichever features predict the outcome. Even the process of training, evaluating, and then selecting the best model at this scale ends up being essentially p-hacking (see the sketch below).
I can’t imagine a model trained like this /not/ ending up encoding a bunch of features that correlate with race. It will find the white people, then reward itself as that group does statistically better.
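To make the p-hacking point concrete, here’s a toy numpy sketch (every number and name is made up for illustration): both the “facial features” and the “job performance” are pure random noise, yet screening a large enough feature space still produces a “predictor” that looks real.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_features = 200, 5000  # small sample, enormous feature space

X = rng.normal(size=(n_people, n_features))  # "facial features": pure noise
y = rng.normal(size=n_people)                # "job performance": pure noise

# Screen every feature for correlation with the outcome and keep the winner,
# which is roughly what unconstrained feature/model selection does at scale.
corrs = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
best = int(np.argmax(corrs))
print(f"best 'predictor': feature {best}, |r| = {corrs[best]:.2f}")
# With 5000 tries on 200 people, |r| around 0.25-0.30 shows up by chance
# alone: an apparently useful facial "signal" conjured from pure noise.
```

Proper held-out validation catches this particular ghost, but if you then pick the best of thousands of trained models using the same evaluation data, the multiple-comparisons problem just reappears one level up.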
Even a genuinely perfect model would immediately skew toward bias; the moment some statistical fluke gets incorporated into the training data, it becomes self-reinforcing, and the model will create and then reinforce that bias in a feedback loop.
Usually these models are trained on past data, and then applied going forward. So whatever bias was in the past data will be used as a predictive variable. There are plenty of facial feature characteristics that correlate with race, and when the model picks those because the past data is racially biased (because of over-policing, lack of opportunity, poverty, etc), they will be in the model. Guaranteed. These models absolutely do not care that correlation != causation. They are correlation machines.
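Here’s a toy numpy simulation of that guarantee (all numbers invented for illustration): two groups with identical ability, historically biased outcomes, and a facial feature that merely correlates with group membership. The model never sees the group label, but look at who it hires.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)              # stands in for a protected class
feature = group + rng.normal(0, 0.5, n)    # facial feature correlated w/ group
ability = rng.normal(size=n)               # true ability: identical by group

# Historical outcomes reflect bias (opportunity, pay gaps), not ability
past_outcome = ability + 1.0 * group       # group 1 was favored historically

# One-variable least-squares "model": predict past outcome from the feature
slope = np.polyfit(feature, past_outcome, 1)[0]
score = slope * feature

hired = score >= np.quantile(score, 0.75)  # hire the top quartile by score
print(f"group-1 share of population: {group.mean():.2f}")        # ~0.50
print(f"group-1 share of hires:      {group[hired].mean():.2f}")  # ~0.95+
# Ability is identically distributed, yet the hires skew overwhelmingly to
# group 1: the proxy feature smuggled the historical bias into the model.
```

“We don’t input race” is no defense here; the correlated feature reconstructs it on its own, which is exactly the point being made above.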
Exactly. It’s like saying that since every president has been over 6’ tall we should only allow tall people to run for president.
Cool. Literal Nazi shit, but now with AI 😵‍💫
Basically the slogan for the 2020s
Cool. Literal Nazi shit, still powered by IBM.

Not April Fools’ or The Onion? What the fuck?
The Economist has a tendency to put out articles seemingly designed to make conservatives bust nuts through their trousers at mach 4
Is Lucifer’s Poison Ivy destroying the fabric of civilization as we know it?
I remember when stuff like this was used to show how dystopian China is.
Haven’t you heard? Palantir CEO Says a Surveillance State Is Preferable to China Winning the AI Race.
Trump’s current Science Advisor (who was selected by Peter Thiel) gave an interview back in 2019 where he kept insisting the U.S. was at a disadvantage to China in the AI race because we didn’t have access to the level of surveillance data China had (which, it turns out, China has thanks to a surveillance system we fucking created and sold to them). He also used this point to argue against any regulations on facial recognition tech in the U.S. because, again, they would put us at a disadvantage.
But don’t worry, because the goal is to have an authoritarian surveillance state with “baked in American values,” so we won’t have to worry about ending up like China did with the surveillance tools we fucking sold them.
I’m not sure what values he’s claiming will be somehow baked into it (because again, we created it and sold it to China). My mind conjures up a scenario of automatic weapons and a speaker playing a screeching bald eagle, but maybe we’ll get some star spangled banner thrown in there too.
I haven’t heard of academics and/or media from China advocating for applications of phrenology/physiognomy or other related racist pseudosciences. Have you?
This is just phrenology with extra steps
They barely even added extra steps.
But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance?
I hate this so much, because spouting statistics is the number-one go-to of idiot racists and other bigots trying to justify their prejudices. The whole fucking point is that judging someone’s value based on physical attributes outside their control is fucking evil, and increasing the accuracy of your algorithm only makes it all the more insidious.
The Economist has never been shy to post some questionable kneejerk shit in the past, but this is approaching a low even for them. Not only do they give the concept credibility, but they’re even going out of their way to dishonestly paint it as some sort of progressive boon for the poor.
But what if bias was not the reason? What if ~~your face gave genuinely useful clues about your probable performance~~ we just agreed to redefine “bias” as something else, despite this fitting the definition of the word perfectly, just so I can claim this isn’t biased?
This is so absurd it almost feels like it isn’t real. But indeed, the article appears when I look it up.
Wow, we skipped right from diversity hiring to phrenology hiring without missing a single beat. Boy, has the modern world become efficient.
At least high variance means the possibility of an opposite swing (this is cope)
Actually, what if slavery wasn’t such a bad idea after all? Lmao they never stop trying to resurrect class warfare and gatekeeping.
It’s completely normal for fascists to promote pseudo-science. Always has been.
Indeed their publication is named after one of the worst pseudo-sciences.
Race theory 2.0 AI edition just dropped.
I thought phrenology was still a science at the time of the German Reich, only made defunct later. Now I have my doubts.
Social Darwinism was disproven in the 1900s and supply-side economics died in the 19th century, so it’s not as if pseudoscience doesn’t spring up like weeds whenever rich people want to sponsor it.
That’s the thing with science communication. It barely exists.
There is a bogus theory. Nobody tries replicating it for decades because there’s no fame in replication. Then someone finally does and disproves the theory. If the author is lucky, it gets published in the back pages of some low-tier journal, because there’s even less fame in a failed replication. But the general public doesn’t read journals. They don’t even read science journalism. They might read a short note in a daily newspaper, twisted beyond recognition by an underpaid, overworked journalist who didn’t understand a word of the article they read in some pop-science magazine.
Science doesn’t reach the general public, and if it does against all odds, it’s so twisted and corrupted that it frequently says the opposite of what the original paper said.
People do their general education in school, and once they leave they stop learning general topics.