Former Pixar Research Scientist Describes The Racist Assumptions That Continue To Shape Digital Characters
As society has reckoned with its structural racism over the past year, discourse in the animation world has focused on areas where discrimination is most obvious, like employment and onscreen representation. But racism in digital graphics takes more insidious forms too, as Theodore Kim explained in a lecture earlier this month.
Kim is an associate professor in computer science at Yale University, and was previously a senior research scientist at Pixar (Coco, Incredibles 2, Toy Story 4). At SIGGRAPH’s virtual conference in early August, he spoke about the unconscious biases that shape how we render digital characters: a contemporary problem with old roots. His talk is not about cg animation per se, but it has big implications for how characters are depicted and stories told in the medium.
Kim breaks his lecture into three parts, each framed as the answer to a question; we’ve summarized them below. The lecture is worth watching in full, so we’ve embedded it at the bottom of this article.
What racial bias?
Kim convincingly shows that a bias in favor of white hair and skin exists in the field, as demonstrated by academic papers, Google search results, and documentation for renderers used by cg studios. The people depicted in the papers and documents overwhelmingly have pale skin and straight hair (and these are generally described merely as “skin” and “hair,” reinforcing the notion that white features are the norm). Type 4 hair — kinky hair common in Black people — is especially neglected.
One result of this bias in computer graphics: the industry’s emphasis on using subsurface scattering in the depiction of skin. This is a mode of light transport in which light penetrates a surface, scatters around inside the material, and exits at a different point, creating a soft glowing effect. Because melanin absorbs light before it can travel far beneath the surface, the effect is far more prominent in white skin than in Black skin — so this emphasis leads to misrepresentation of Black skin.
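The physics behind this point can be sketched with classical diffusion theory, which many subsurface-scattering renderers build on: the characteristic distance light diffuses inside a material shrinks as absorption grows, so higher melanin absorption suppresses the soft glow. A minimal sketch — the function name and the coefficient values here are illustrative assumptions, not measured skin data:

```python
import math

def diffusion_length_mm(sigma_a, sigma_s_prime):
    """Diffusion-theory length scale 1 / sigma_tr, where
    sigma_tr = sqrt(3 * sigma_a * (sigma_a + sigma_s')).
    Coefficients are in 1/mm; larger sigma_a means more absorption."""
    sigma_tr = math.sqrt(3.0 * sigma_a * (sigma_a + sigma_s_prime))
    return 1.0 / sigma_tr

# Illustrative coefficients only: more melanin -> higher absorption sigma_a,
# with the reduced scattering coefficient held fixed for comparison.
lighter = diffusion_length_mm(sigma_a=0.03, sigma_s_prime=1.2)
darker  = diffusion_length_mm(sigma_a=0.30, sigma_s_prime=1.2)
```

With these placeholder numbers the diffusion length for the low-absorption case comes out several times longer than for the high-absorption case, which is why a shader tuned to showcase subsurface glow flatters light skin by default.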
Where did this bias come from?
The skew in computer graphics is rooted in similar biases in analog film and photography, says Kim. He mentions “leader ladies” — the models used as benchmarks for colors in film processing — and their counterparts in photography, the women who appear on “Shirley cards.” Again, these models were almost all white (although Kodak’s Shirley cards started diversifying in the 1990s).
Similarly, standard Hollywood lighting was developed around white skin. So much so that backlighting has come to be known as “rim lighting” because of the rim of light it produces around the subject’s outline — an effect that’s especially conspicuous with blonde hair. Kim cites a digital lighting manual to show the continuity of lighting conventions from the analog to the digital age.
With the rise of digital, Kim argues, the biases of the analog days were unconsciously reproduced in the algorithms. He juxtaposes galleries of leader ladies and of people shown in renderer documentation, which look remarkably similar.
What do we do now?
After stressing that there are many ways to tackle these issues, Kim proposes one: the creation of diverse benchmarks for computer graphics. Kodak’s progress on Shirley cards sets a precedent.
Kim addresses the case of MetaHuman Creator, an Unreal Engine tool developed by Epic Games to create realistic virtual humans of all ethnicities. He argues that its results still leave something to be desired; for instance, the Black metahumans still betray an overemphasis on subsurface scattering.
In closing, Kim notes that more research is required to understand exactly what measurements would be needed to create useful, diversified benchmarks. He stresses that any such research will likely prompt pushback, as does anything that challenges the status quo on race. It is crucial, he says, that the industry at large stands firm against this kind of pushback.
Image at top: metahuman (left) and Delroy Lindo in “Clockers” (right)