When you are poor, and especially if you are a poor person of color, an enormous amount of your life is out of your control. Almost everything is controlled by those of us who are white and middle class.
Will you have enough to eat? Depends on whether we let you keep your minimum-wage job after you took a day off to care for your sick child. Will you have a decent place to live? Depends on whether we’ll accept your “Section 8” voucher – if you even have a Section 8 voucher. Will your illness be treated? Depends on whether we decide you’re eligible for Medicaid. Will you make it home without being stopped and frisked – or worse – for no reason? That depends on our police and how we trained them.
And because practitioners of journalism remain overwhelmingly white and almost exclusively middle class, we even get to decide how your story is told.
Editors should, therefore, be extra careful when people like us are telling the stories of people we often think of as “them.” The #MeToo revelations have driven home the point when it comes to gender. We’ve learned how who controlled the narrative affected even our perceptions of the 2016 presidential campaign. As Rebecca Traister wrote:
“We see that the men who have had the power to abuse women’s bodies and psyches throughout their careers are in many cases also the ones in charge of our political and cultural stories.”
But I haven’t seen the same understanding when it comes to controlling narratives about race and class.
There are commendable exceptions. The New York Times showed such understanding last year, when it took an extraordinary look at how child welfare really works in its story about foster care as the new “Jane Crow.”
But so far this year things aren’t going as well at the Times.
This month the New York Times Magazine published a story about whether it’s a good idea to use “predictive analytics” – computer algorithms – to decide which families should be, at a minimum, investigated as alleged child abusers. The families are overwhelmingly poor and disproportionately African American and Native American.
There is no shortage of good freelance writers out there. So when the Times Magazine decided to assign a story on this topic – or accept a proposal submitted to them – why in the world did they think that the best person to do it would be a science journalist who also is a white, middle-class foster parent?
Yes, there are some white, middle-class foster parents who get how the system really works and fight to change it. They understand how often poverty is confused with neglect, for example. One such foster parent helped change an entire child welfare system. Other foster parents make heroic efforts to reunite families.
Often, however, the attitudes of foster parents range from genteel condescension – “We really want to help these birth parents because we understand they’re sick” – to barely-disguised, or undisguised, hatred.
The extent to which many foster parents can’t seem to understand people who are not like them can be seen when they complain – often with justification – about how badly they are treated by child welfare agencies. But I have yet to read about a foster parent who took the next logical step and thought: “They really need us. If this is how they treat us, how are they treating the birth parents? And since they seem to think we’re awful, I wonder if all those things they told us about the birth parents are true?”
No, this does not mean that foster parents shouldn’t be allowed to write about their own experiences and about the child welfare system. But there is a difference between writing, say, an op-ed column or other commentary and being chosen by America’s de facto newspaper of record to write a news story that makes you, in effect, the arbiter of the debate over a key child welfare policy.
So at a bare minimum, if you’re going to entrust a story about impoverished people of color who are suspected child abusers to a white, middle-class foster parent – such as Dan Hurley, who wrote the Times Magazine story – editors should be extra vigilant about bias. They should do an extra level of fact-checking, not only of what’s in the story but of what is left out.
The Times did none of that. The result is a whitewash – in every sense of the term. The well-documented failures of predictive analytics across the country, in criminal justice and in child welfare, are minimized, and the one experiment that allegedly avoids these pitfalls is glorified.
Dismissing racial bias
Hurley’s take on racial bias in child welfare in general is rife with contradictions. For starters, he shows deep sympathy for the “denial caucus” – that bizarre group within the child welfare field whose members believe they are so much better than everyone else that their field is magically exempt from the racial bias that permeates every other aspect of American life. So he cites only research that purports to find that bias is not a factor in the disproportionate rate at which families of color are investigated and their children are removed. He ignores the huge body of research showing that racial bias is indeed a crucial factor.
Hurley does quote people who acknowledge that yes, the underlying data in predictive analytics algorithms are biased. But then he goes on to claim that the way the algorithms are used actually reduces bias in child welfare decision-making.
In other words, the Times Magazine story tells us that there is no bias in child welfare decision-making now – but predictive analytics will reduce the bias that, the story says, already doesn’t exist.
Minimizing analytics failures
Hurley mentions in passing that
when it comes to criminal justice, where analytics are now entrenched as a tool for judges and parole boards, even larger complaints have arisen about the secrecy surrounding the workings of the algorithms themselves — most of which are developed, marketed and closely guarded by private firms.
But that’s all he says. He ignores the much bigger problem: Racism.
An exhaustive investigation by ProPublica found that, in the case of a secret algorithm used in Broward County, Florida:
The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.
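The disparity ProPublica describes comes down to two error rates computed separately for each group. Here is a minimal sketch of that calculation – the counts are hypothetical, chosen only to illustrate the metric, not ProPublica’s actual data:

```python
# A minimal sketch, not ProPublica's analysis: hypothetical counts only.
# It computes the two error rates the quote describes, per group:
# wrongly flagging people as high risk (false positives) and
# wrongly labeling people as low risk (false negatives).

# group -> (falsely flagged, truly low-risk total, missed, truly high-risk total)
counts = {
    "black defendants": (450, 1000, 280, 1000),  # hypothetical numbers
    "white defendants": (230, 1000, 480, 1000),  # hypothetical numbers
}

for group, (flagged_wrongly, low_risk, missed, high_risk) in counts.items():
    false_positive_rate = flagged_wrongly / low_risk   # share of truly low-risk who were flagged
    false_negative_rate = missed / high_risk           # share of truly high-risk who were missed
    print(f"{group}: falsely flagged {false_positive_rate:.0%}, "
          f"mislabeled low risk {false_negative_rate:.0%}")
```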
But you’d never know that from Hurley’s story.
The analytics experiments that crashed and burned
Very few places have actually reached the testing and implementation stage. But two of them have already proven to be failures. In Los Angeles, the first experiment was abandoned during the testing phase after the algorithm produced a “false positive” rate of 95 percent. That is, 95 percent of the time, when the algorithm predicted something terrible would happen to a child – it didn’t.
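To make that figure concrete, here is the arithmetic behind such a finding – the numbers below are hypothetical, picked only to reproduce a 95 percent rate:

```python
# Hypothetical numbers only, chosen to reproduce the 95 percent figure:
# of all the cases the algorithm flagged as high risk during testing,
# what share of its predictions of harm never came true?
flagged_as_high_risk = 2000   # hypothetical flagged cases
harm_actually_followed = 100  # hypothetical flagged cases where harm occurred

false_positive_share = (flagged_as_high_risk - harm_actually_followed) / flagged_as_high_risk
print(f"Prediction of harm was wrong {false_positive_share:.0%} of the time")
# -> Prediction of harm was wrong 95% of the time
```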
And in Illinois a program marketed by one of the most popular companies in the child welfare predictive analytics field failed spectacularly.
Hurley minimizes the Illinois failure and doesn’t mention Los Angeles at all – even though these places took the approach Hurley acknowledges is the norm, hiring a private company with a secret algorithm.
Hurley gushes that the experts brought in by the community on which he chooses to focus instead, including Prof. Rhema Vaithianathan, “share an enthusiasm for the prospect of using public databases for the public good.” I’m sure that’s true. But Hurley neglects to tell us about some profound questions raised about Vaithianathan’s methodology.
Selection bias
While minimizing Illinois and ignoring Los Angeles, Hurley builds his whole story around Pittsburgh, the one place in the country where predictive analytics is least likely to fail – for the moment.
The system in Pittsburgh, and surrounding Allegheny County, has been run for 22 years by Marc Cherna. I happen to know Cherna. While I was teaching journalism in the Pittsburgh area I served on the screening committee that unanimously recommended him to run the county child welfare system. He now runs the entire human services agency.
We were right. He turned around a failing system and significantly reduced needless foster care through innovations such as placing housing counselors in every child welfare office so children weren’t taken away for lack of a decent place to live. When removal from the home really is necessary, Cherna pioneered the use of kinship care, placing children with extended family instead of strangers. Most recently, while most of the rest of Pennsylvania has gone through a foster-care panic as a result of the state’s response to the Jerry Sandusky scandal, Pittsburgh has not.
Cherna has tried to avoid the usual pitfalls of predictive analytics. Instead of hiring a for-profit company with a secret algorithm – which, again, Hurley admits is the norm – Allegheny County opted to develop its own algorithm in the open. Everybody knows what goes into it, and community leaders were consulted. And I believe Cherna when he talks about all the checks and balances built into the algorithm and how it’s used, such as limiting its use to the initial decision whether or not to investigate an allegation.
But there’s a bigger problem. Even Marc Cherna won’t be running a child welfare system forever. What happens after he leaves? In particular, what will follow the first time this happens: A caseworker leaves a child in his own home and the child dies. The caseworker says, “Oh, if only I’d known what the algorithm predicted – but only screeners get to see the data.” That’s when the abuses, and the massive needless removal of children, will start.
Story bias
The selection bias is compounded by the one and only case example Hurley uses – a case in which a human screener would have said “Don’t pursue this report,” the algorithm disagreed and, lo and behold, the case really was high risk! At a minimum, there could have been two stories – that one and a case in which the algorithm said the case was high risk, but it wasn’t, and a family was traumatized for nothing.
But that wouldn’t fit the “master narrative” – to use former St. Louis Post-Dispatch editor William Woo’s great term – of the white middle-class foster parent who wrote the story – or that of his editors at the Times.
Instead, what we get is the mirror image of most stories about birth parents who lose children to the system. Those stories almost always focus on the worst of the worst – the tiny fraction of parents who torture and kill their children – instead of the norm. But when the time comes to tout something the white middle class likes, the selection works in reverse: We get what is said to be the best of the best. That is equally unrepresentative.
The one question every editor should ask
At a minimum the editors at the Times should have asked one crucial question. It’s the kind of question I was taught to ask by my father when I was in seventh grade.
It was “Father’s Day” at my school, a day when fathers could sit in on their children’s classes. (No mothers allowed, but that’s another story.) My father was, himself, a history teacher. He was unimpressed by my social studies class, in particular with a mimeographed handout about Africa. I don’t remember a word of the handout. But I remember my father’s reaction. He was appalled by its condescension. He asked me one question: “What do you think an African would have written?”
I wish that, at some point, as they went over Dan Hurley’s story, just one New York Times Magazine editor had thought to ask: “What would a birth parent who lost children to foster care have written?”
Richard Wexler is executive director of the National Coalition for Child Protection Reform, www.nccpr.org. This post also appears on the NCCPR Child Welfare Blog, www.nccprblog.org.