Shapeshifting
In 2016, Joy Buolamwini, a researcher with the Civic Media group at the MIT Media Lab and founder of Code4Rights, developed the Aspire Mirror. Buolamwini describes the Mirror on its website as a device that allows one to “see a reflection of [their] face based on what inspires [them] or what [they] hope to empathize with.”1 The project draws inspiration from The Thing From The Future, an imagination card game that asks players to collaboratively and competitively describe objects from a range of alternative futures. The Mirror draws additional influence from futuristic machines and speculative imaginaries found in popular science fiction and folklore (for instance, the empathy box and mood organ in Philip K. Dick’s Do Androids Dream of Electric Sheep?, tales of shapeshifting in the Ghanaian stories of the spider Anansi, and films like Transformers). Buolamwini says she developed the Mirror to induce empathies that can help facilitate the spread of compassion in humanity. Another important goal of the Mirror is to catalyze individual reflection based on a set of cultural values like humility, dedication, oneness with nature, harmony, faith, and self-actualization. Ultimately for Buolamwini, these transformative futures are a “hall of possibilities” where individuals can explore self-determinant futures, “if only for a small period of time.”
Aspire Mirror relies on facial detection and tracking software to capture and interpret image data before transforming them into futuristic scenes or “paintings.” During testing, Buolamwini encountered a problem: the Mirror could not detect her face due to her dark skin tones and facial features. In order to validate the device and generate an alternative reality, Buolamwini first had to alter her appearance to make herself visible and thereby gain access to aspirational futures. She accomplished this by wearing a white facial mask whose features were more easily detected. For Buolamwini, this was no surprise; she had encountered the same limitation before, while developing an earlier computer vision system.
A brief note on computer vision and machine perception
Computer or machine vision is a broad term used to describe methods for processing and analyzing high-dimensional data acquired from the “real” world in order to produce symbolic or numerical outputs. The processed data can then be used to inform decisions. Computer vision is powered by machine learning and artificial intelligence algorithms and is commonly used for facial and image detection, modelling, and aesthetic judgement. It draws on various bodies of research, including optics, theories of light, surface modelling, and auditory analysis. It is used to identify structures using internally represented models of objects previously known to the computer system. These models, usually geometric, find image features that match the model features in shape and position. An advantage of this technique is that the model encodes the object’s shape, allowing predictions of image data and lessening the chance of coincidental features being falsely recognized. The primary aim of computer vision is to interpret the visual array in order to understand the objects detected in the environment.
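To make this pipeline concrete, the following is a minimal sketch of the detection step using OpenCV’s pre-trained Haar cascade, an open-source stand-in chosen purely for illustration (Aspire Mirror itself relies on a different, proprietary SDK, discussed below; the input filename is hypothetical):

```python
import cv2

# Load a pre-trained frontal-face model shipped with OpenCV. This is the
# "internally represented model of objects previously known to the system"
# described above.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("portrait.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Slide the model over the image at multiple scales, keeping only regions
# whose features match the model's expected shape and position.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    # A false negative here is the failure Buolamwini encountered.
    print("No face detected.")
else:
    for (x, y, w, h) in faces:
        print(f"Face at x={x}, y={y}, size {w}x{h}")
```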
As Brian Potetz and Tai Sing Lee write, computer vision models that infer 3D structures from 2D images are highly complex and underconstrained, requiring many assumptions to infer 3D shape from image points or environmental training data. Little is known about the merits of these assumptions in real scenarios, and it is understood that they must be simplified using probabilistic priors based on real scenes. In fact, exact inference is possible only for a small subclass of potential problems; in all other cases approximations must be used. Furthermore, Potetz and Lee note that statistical studies of human 3D surface perception (most often using Bayesian techniques) may uncover entirely new sources of information not immediately obvious from real physical models. They write: “Real scenes are affected by many regularities in the environment, such as the natural geometry of objects, the arrangements of objects in space, natural distributions of light and regularities in the position of the observer.”2
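To make the role of these priors concrete: in a generic Bayesian formulation of the kind Potetz and Lee describe, the plausibility of a candidate 3D surface S given an image I is scored as P(S | I) ∝ P(I | S) · P(S), where the prior P(S) encodes precisely the environmental regularities listed above, learned from the statistics of previously observed scenes. The model’s “assumptions,” in other words, live inside P(S).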
While Potetz and Lee see these trends as exploitable in terms of algorithm design, few studies have considered the implications for the model’s own computational reasoning. Here, the model, despite its potential usefulness, maintains an operational perception of irregular and unobservable data to calculate the disparity between probabilistic priors and real scenarios. What would otherwise be incomputable and unexplained is instead assigned a function and meaning in terms of the model’s own logics of calculation. In other words, the algorithm assigns a logical potentiality within its perception of distance from the real scenario that is not predetermined, but generative at each iteration. This potentiality exists even though exactly how it will be realized, or how it will become actual, is unknown.
According to Christopher Tyler, these objects physically consist of aggregates of particles that cohere to form the object under investigation. Tyler describes the process as such:
Although the objects may be stable and invariant in the scene before us, the cues that convey the presence of the objects to the eyes are much less stable. They may change in luminance or color, they may be disrupted by reflections or highlights or occlusion by intervening objects. The various cues carrying the information about the physical object structure, such as edge structure, binocular disparity, color, shading, texture, and motion vector fields, typically carry information that is inconsistent.3
These algorithmically organized particle structures emphasize an overriding problematic in computer vision research. In order to accomplish what is thought to be visual coherence, it is functionally necessary to reduce any inherent levels of inconsistency or instability, as a human eye would. Here, coherence depends heavily on the mitigation of “occlusions,” or blockages that may disrupt a clear view of the world around us. In other words, to perceive the world as such, the algorithm must functionally simulate the complexity of, say, a human’s optical system by means of reduction and simplification.
The human eye, for instance, inputs data on a large and complex scale through a parallel set of sensory receptors in the retina. Structuring perception through machine architectures is problematic when considering the racialized object. To bring forth a visually coherent future in computer vision, or an incoherent and speculative one for that matter (using Buolamwini’s example), one must first reduce the hall of possibilities to a set of pre-existing conditions. This is inherent in the function of machine perception. While machine perception has been defined in a number of ways (for instance, logics, self-awareness, reasoning, planning, and problem solving), attempts at goal achievement are considered distinct from relatively indeterminate human experience. In terms of the former, machine perception functions by stabilizing and simulating environmental phenomena in order to grasp the fundamental basis of human and other sentient-based actions, mimicking behavior and managing tasks thought to exceed human capabilities. This is seen most readily in robotics inspired by the living, created both aesthetically and functionally to solve complex problems thought to be out of reach of most humans. Nonetheless, the basis of simulation here characterizes the living as an emanation of pre-existing conditions, reducing the operation of individuation, and primarily the differences amongst the living, to no more than an assemblage of contradictions that are negated and subsumed into a higher, more homogenous, unity of existence.
Beyond reality and the problem of locating known objects
Many coders, like Buolamwini, use pre-written code to perform common computer vision tasks. In the case of Aspire Mirror, this code includes libraries such as Beyond Reality Face NXT for face detection and tracking, Vibrant.js for color extraction, jQuery for animation, and Chrome Remote Desktop as a panel control. Beyond Reality Face NXT, for instance, is a cross-platform, real-time face tracking software development kit (SDK) that marks faces in images and webcam streams. It is a proprietary, downloadable SDK composed of a collection of resources, such as data, pre-written code, subroutines, classes, values, and type specifications. The NXT analyzes faces using a morphable shape with sixty-eight individual feature points that indicate the eyes, nose, mouth, face, and other details found on the human head. Once the feature points are detected, the NXT estimates the 3D position, rotation, and scale of the head, even while the head is in motion. It then uses the tracking data to overlay virtual objects onto images of the detected face. While the complete list of algorithms used in the SDK is proprietary to Tastenkunst, the company that develops the NXT, it is known that the SDK relies on an optimized FPS (framerate) algorithm and an Active Shape Model (ASM) algorithm.
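The NXT’s internals are proprietary, but its sixty-eight-point layout is a convention also found in open-source tools, so the landmarking step can be sketched with dlib’s detector and shape predictor (the predictor’s model file is distributed separately by dlib under the standard filename shown; the input filename is hypothetical):

```python
import dlib

# Open-source analogue of the face tracking step: detect a face, then fit
# sixty-eight feature points to it.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("portrait.jpg")  # hypothetical input image

for rect in detector(img):
    shape = predictor(img, rect)  # 68 (x, y) feature points
    # In this convention, points 0-16 trace the jawline, 36-47 the eyes,
    # and 48-67 the mouth.
    for i in range(shape.num_parts):
        point = shape.part(i)
        print(i, point.x, point.y)
```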
ASMs are algorithms that generate statistical models of the shapes of objects and then conform a suggested shape to the resulting point distribution model. According to Cootes, Taylor, et al., the developers of Active Shape Models, ASMs are a type of model-based vision algorithm established as an approach to recognizing and locating known objects in the presence of noise, clutter, and occlusion. The developers argue that while traditional model-based vision algorithms sacrifice model specificity in order to accommodate variability, the ASM instead fits the data in ways that are consistent with the training set, allowing developers to locate partially occluded objects in noisy, cluttered images within specific contexts determined by the training data.
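A minimal sketch of the point distribution model at the heart of an ASM may clarify how the training set becomes a hard boundary: aligned landmark shapes are averaged, principal modes of variation are extracted, and any candidate shape is clamped to within a few standard deviations of the training distribution (the data below are random stand-ins for real annotated faces):

```python
import numpy as np

# Training set: n aligned shapes, each a flattened vector of 68 (x, y)
# landmarks. Random data stands in for real annotated faces.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(200, 68 * 2))

mean_shape = shapes.mean(axis=0)

# Principal modes of shape variation, via PCA on the covariance matrix.
cov = np.cov(shapes - mean_shape, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
P, lam = eigvecs[:, :10], eigvals[:10]  # keep the 10 strongest modes

def fit_to_model(candidate):
    """Project a candidate shape into the model and clamp its parameters
    to the classic +/- 3*sqrt(lambda) ASM limits."""
    b = P.T @ (candidate - mean_shape)
    limit = 3.0 * np.sqrt(lam)
    return mean_shape + P @ np.clip(b, -limit, limit)
```

The last line is the critical one: whatever the image contains, the fitted shape is pulled back inside the statistics of the training set; shapes outside that distribution simply cannot be expressed.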
Regardless, common facial detection libraries are often trained on normalized spectrums of data and are prone to false negatives under poor lighting conditions. In other words, they are trained on image data that include primarily white subjects. The white phenotype then becomes the pre-existing condition and the prototypical assemblage against which all future human characteristics are measured. As a result, the darker a person’s skin tones, or the more their phenotypical features vary from those of the average white subject, the less likely the algorithm is to recognize that person’s presence.
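This skew is empirically measurable: running one and the same detector over two labeled image sets and comparing miss rates makes the differential visible (a hedged sketch; the file lists are hypothetical, and each image is assumed to contain exactly one face):

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def miss_rate(paths):
    """Fraction of images in which the detector finds no face at all."""
    misses = 0
    for path in paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if len(cascade.detectMultiScale(gray, 1.1, 5)) == 0:
            misses += 1
    return misses / len(paths)

# Hypothetical labeled subsets, one face per image.
lighter = [f"lighter_{i:03d}.jpg" for i in range(100)]
darker = [f"darker_{i:03d}.jpg" for i in range(100)]

print("false-negative rate, lighter-skinned subset:", miss_rate(lighter))
print("false-negative rate, darker-skinned subset: ", miss_rate(darker))
```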
Towards the black technical object
While Buolamwini notes that her failed Mirror is telling of the functional limitations of computer vision and machine perception, she argues that the problematic is largely one of representation. For Buolamwini, a failure to recognize black faces as coherent human objects demonstrates “a lack of diversity in the [data] training set [that] leads to an inability to easily characterize faces that do not fit the normal face derived from the training set.”4 At the same time she reminds us that “whoever codes the system embeds her views… limited views create limited systems.”5 This “coded gaze,” as she calls it, is a potential catalyst for political action and technical intervention. “Let’s code with a universal gaze,” she writes.
Here it becomes important to foreground Buolamwini’s perhaps unwitting link between black pathology and the technical object, what—in its convergence—we might call the “black technical object.” The link I draw here finds precedent in Fanon’s thesis on the operations of race and their impact on the psyche of the racialized. As a practicing psychiatrist, Fanon put forward a line of propositions that extends beyond the corporeal implications of racism into the realm of the psychic and collective individual. For Fanon, the colonial project is an organic system demarcated by a series of relations between individuals and the violent forces of racism. It is essential to note that within these relations, the individual’s psychic fragmentation is made apparent to the individual at the moment of their encounter with racializing phenomena. Returning to Buolamwini’s case, we can think here in terms of the recognition of incompatibility between her own sense of self as a coder and the machine’s functional perception of her as an undetectable, and therefore non-existent, object. Within Fanon’s schema, Buolamwini’s undetectability exaggerates the dissonance between her own self-determination (she just wants to be seen) and the external forces (the Aspire Mirror) that restrict this desire based on a pre-determined set of rules. The possibility of self-determination is opened only under the condition that the individual alters the self, which in this case is a transformation of the flesh.
Fanon’s schema departs from the point of non-existence to discuss the objectification of the individual through this process, which includes their own psychic fragmentation. Consequently, Fanon explains, the individual embodies a phobic image of the self that is infused with paranoia, delirium, and self-doubt. As Fanon describes, the phobic image of the racialized (or what he calls the “photogenic object”) is embedded in the psychic orientation of the West.6 Within these imaginary systems, self-doubt becomes the guiding principle by which the racialized person views themselves as well as the world around them. These affective relations are felt and acted on, effectively replacing what is actually seen with a fictive belief about how one should perceive the photogenic object. While the white psychic structure experiences the same phobia, according to Fanon, it is instead articulated as a social incompatibility or threat, where the racialized become visible as individual beings only in as much as their possibility for existence aligns with pre-existing concepts of racial hierarchy. Fanon writes: “For the object, naturally, need not be there, it is enough that somewhere it exist: It is a possibility.”7
If we are to consider the photogenic object in contemporary spaces of algorithmic culture, it is apparent that the black technical object is always-already pre-conditioned by an affective prelogic of race that functions on the psychic level of experience. The possibility of an affirmative engagement between the black technical object and the algorithm, as a technical object, is then limited by the necessity to reconcile the psychic potential of the racialized individual with that of a pre-determined technical structure. Although the immediacy of computation’s lack of diversity—in terms of institutional value and algorithmic function—cannot be overstated, a call to make black technical objects compatible with computer vision algorithms risks the further reduction of the lived potentiality of black individuals. While the achievement of Buolamwini’s aims might widen the scope of machine perception, not to mention the participation of excluded bodies in techno-social ecologies, the solution, as proposed, reinforces the presupposition that coherence and detectability are necessary components of human-techno relations. The drive towards uniform coherence works to circumvent any substantial consideration of the algorithm’s reliance on historical category—namely, what features represent the categories of human, gender, race, sexuality, and so on. Coders like Buolamwini speak directly to this problem of erasure, yet fold seamlessly into the desire for representation. “Challenged to rethink, insurgent black intellectuals and/or artists are looking at new ways to write and talk about race and representation, working to transform the image,” as bell hooks writes.8 hooks reminds us, however, that:
There is a direct and abiding connection between the maintenance of white supremacist patriarchy in this society and the institutionalization via … specific images, representations of race, of blackness that support and maintain the oppression, exploitation, and overall domination of all black people… For it is only as one imagines “woman” in the abstract, when woman becomes fiction or fantasy, can race not be seen as significant.9
Still, hooks is quick to remind us that the institutionalization and drive towards representation are devoid of the dynamisms of black life:
For some time now the critical challenge for black folks has been to expand the discussion of race and representation beyond debates about good and bad imagery. Often what is thought to be good is merely a reaction against representations created by white people that were blatantly stereotypical. Currently, however, we are bombarded by black folks creating and marketing similar stereotypical images. It is not an issue of “us” and “them.” The issue is really one of standpoint.10
In terms of computation, following hooks, we are immediately drawn to the problematic of black representation and algorithmic calculation. By regressing complex environmental data into a generalized pattern, the lived and multivalent specificities of black lives are represented “as if” the visual matrix maintains no connections to historical category, stereotype, or a moral imaginary. This further highlights a commitment within research to organize certain forms of life in accordance with the rules of occlusion, which already necessitates a presupposition of object relation (in terms of the black face, that which is brought into view only in as much as it can make sense of the world in relation to the white body).
The consequences for the black technical object are immense. An undetectable black technical object is, in this instance, equivalent to what I call a “machinic non-existence,” or the formation of an empirical reality that—as Sylvia Wynter articulates in her critique of Western humanism—is a projection of a racialized substance that is metaphysically sustained by the Aristotelian embodiment of normalcy.11 Put another way, research in computation is an adaptation of the fictive and compulsive ordering of human attributes into a single coherent image of species, or what Wynter has described as “being secularly human.”12 Wynter draws on Judith Butler’s critique of gender, in which Butler argues that Otherness emerges when we enact the nouns “man” and “woman” as abiding substances. Butler states that these substances are produced by the fictive and compulsive ordering of attributes into a coherent gender sequence.13 If these coherences are nothing more than “contingently created through the regulation of attributes,” Butler posits, then any ontology of substance is itself an “artificial effect.”14 Wynter’s adoption of this point of reference extends the artificiality of regulated attributes into the substances of class, sexual orientation, and race.
Her claim is prompted by the creation of what she describes as eugenic/dysgenic selection.15 The conditions for racial sorting and priority were already set forth in the establishment of data analytics and statistical correlation as viable tools for social inquiry. In many ways, the limits of calculation cannot be understood outside of the connection between racial sorting, social welfare, and quantification. We can think of this relation as illustrative of what Ian Hacking describes as “enduring ways of thinking” that have impacted philosophical understandings of mathematics.16 For instance, Hacking uses A.C. Crombie’s text Philosophical Presuppositions and Shifting Interpretations of Galileo as an example of the reliance on “(a) the simple postulation and deduction in the mathematical sciences, (b) experimental exploration, (c) hypothetical construction of models of analogy, (d) ordering of variety by comparison and taxonomy, (e) statistical analysis of regularities of populations, and (f) historical derivation of genetic development.”17 He highlights these paradigms to note an important shift in social frameworks that increasingly rely upon reasoning to establish scientific method. This matrix of knowledge production, as Hacking argues, has given rise to the false idea that information on population phenomena can be accurately evaluated through symbolic sampling, as opposed to the exhaustive census work that had been attempted previously.
These techniques were amplified by the work of Lambert Adolphe Jacques Quetelet and André-Michel Guerry. Quetelet and Guerry hypothesized that independent human phenomena behaved with the certainty of astronomical phenomena—a process akin to the rotation of the Earth around the Sun.18 As a result, human behavior was bound to astronomical causality, as universal constants. As with the stars, any deviations from derived human constants of behavior were deduced as perturbations of naturalized events that, once settled back into equilibrium, could be returned to previously normalized patterns. In other words, the hypothesis aligned contingent and chaotic phenomena with a statistical order. The data, moreover, were actionable in their capacity to infer future social behavior based on historical and present observation.19 From these technologies, prototypical behavior emerged as a mode of population control, mediated through what Quetelet termed the homme type, or the “average man.”20
Quetelet’s primary emphasis was on the codification of phenotypical phenomena, particularly race, in as much as race extends beyond the biological and into moral substance. Hacking writes:
Where before one thought of a people in terms of its culture or its geography or its language or its rulers or its religion, Quetelet introduced a new objective measurable conception of a people. A race would be characterized by its measurements of physical and moral qualities, summed up in the average man of that race. This is half of the beginnings of eugenics, the other half being the reflection that one can introduce social policies that will either preserve or alter the average qualities of a race. In short, the average man led to both a new kind of information about populations and a new conception of how to control them.21
Here, Quetelet used basic statistical probabilities to construct a matrix of phenotypical characteristics. Interestingly, he had little interest in deriving numerical averages of phenotypical factors such as height and weight for their own sake (this would be the equivalent of preempting an individual’s rate of childbirth at 2.2 children, or predicting that an individual might marry 1.6 times). The primary function of the matrix was to identify causal relationships between an array of unique, yet arbitrary, properties, relating, for example, human chest size to vegetable harvests, poetry, and moral attributes.
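Quetelet’s matrix can be restated in modern terms as a pairwise correlation table over arbitrary measured attributes, with any strong coefficient read off as a causal, characterological fact. The sketch below uses random stand-in data, which is rather the point: given enough arbitrary pairings, apparent correlations surface on their own:

```python
import numpy as np

# Arbitrary "attributes" of a population, random stand-ins for Quetelet's
# measurements (chest size, harvests, poetry, moral qualities).
rng = np.random.default_rng(1)
attributes = {
    "chest_size": rng.normal(100, 5, size=500),
    "harvest_yield": rng.normal(50, 10, size=500),
    "poetic_output": rng.normal(3, 1, size=500),
}

names = list(attributes)
corr = np.corrcoef(np.vstack([attributes[n] for n in names]))

# Read off every pairwise Pearson coefficient, as if it were character.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: r = {corr[i, j]:+.3f}")
```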
Using the logics of astronomical discovery, Quetelet hypothesized that any departures from normalized patterns of hereditary development were disruptions in the natural trajectory of evolution. Moreover, he believed these differences would (or could) be naturally de-selected. Conversely, any conformity to normalized attributes would be naturally selected for both survival and superiority within the species. As Hacking writes:
Quetelet had made mean stature, eye-color, artistic faculty and disease into real quantities. Once he had done that … deviation from the means was just natural deviation, deviation made by nature, and that could not be conceived of as errors.22
In terms of computer vision, the site of exclusion in Aspire Mirror is also the precise moment where the black technical object is interpellated, fragmented, and organized around the white visual metaphor. The coherence of racialized attributes, what I would furthermore call the “fictive substance of race,” links the dynamic instrumentalization of coherence found in computation to the “discursive negation of co-humanness.”23 The negation immediately enacts a pre-existent distance between the perception of oneself as a self-determinant being and a more formative system of abstract value. In other words, black life as such, or black being-as-being-self-constituted, is at a continued distance from the illusionary prototypical image. What is one to do?
To merely include a representational object in a computational milieu that has already positioned the white object as the prototypical characteristic catalyzes disruption only on the level of superficiality. From this view, the white object remains whole, while the object of difference is seen as alienated, fragmented, and lacking in comparison. The position of the black technical object (as Fanon and Wynter have thoroughly articulated) is thus lodged in a recurrent dialectic, where it attempts to valorize or recapture black life from within the confines of normalized logics while simultaneously desiring to disrupt their hold. Can the black technical object be conceptualized as outside of the dialectic between human and machine? Is there such a thing, borrowing from Fred Moten, as an aspirational black life that can gain a right of refusal to representation? As such, would a universal computational gaze limit the self-determination of those that have little or no desire for inclusion in machine perception? Alternatively, as facial recognition becomes an increasingly important lens through which we understand the world, how can black technical objects generate new possibilities outside of phenotypical calculation, prototypical correlation, and the generalization of category? How might we create a more affirmative view of the relation between the black technical object and technology?
Perceptive unities
It is apparent that the immediacy of these concerns mirrors the intensity with which computer vision has proliferated in the public domain. Recent debates surrounding data, artificial intelligence, and machine learning as a techno-human practice, particularly in terms of the discriminatory powers of classification and further reductions in the life chances of marginalized populations, have raised significant questions concerning the logics of machine perception and its impact on the self-actualization of black potential. Most significant is a disparity between the act of existing/existence—particularly as it relates to differential human states of being (categories of race, gender, sexuality, and so on)—and the paradigms of epistemological operation. The result is no less real than the metaphysical gestures algorithms seek to appropriate, such as conflict resolution, discovery, compatibility, and invention. The complexity of the relation between black individuals and algorithms is only exacerbated by an over-dependence on mathematics as truth, and on that which already exists in computation as fact—particularly in circumstances such as police surveillance or managerial oversight, where face detection reaches beyond the exploration of aspirations toward the operative reduction of life chances.
Recent trials conducted by the London Metropolitan Police Service (The Met) enact similar logics of perception. In 2018, The Met invited Christmas shoppers in Central London to take part in a trial of a facial detection system used to identify suspects wanted by the police. To trial the system, The Met mapped existing police image data onto the software. Once trained, the algorithm interacts with cameras at a specific location. The cameras scan faces in the crowd, feeding the captured image data back into the algorithm, which then compares it against image representations (suspects) flagged by the police. Data representations of the Christmas crowd are kept in the police database for weeks.24 The Met’s festive trial is part of a larger investigation into the use of facial detection software to identify criminal suspects. However, these attempts have been met with scrutiny in terms of privacy (officers are instructed to wear plain clothes) as well as the robustness of the system. An investigation by the campaign group Big Brother Watch indicates an alarming number of false positives. For example, in The Met’s trials at London’s Notting Hill Carnival in 2016 and 2017, the system incorrectly flagged 102 people as potential suspects; none of these flags led to an arrest. This is in addition to data acquired from the South Wales Police that show 2,451 false positives out of 2,685 so-called “matches.”25
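Stated as a rate, the South Wales figures are stark (a trivial calculation from the numbers reported above):

```python
false_positives = 2451
total_matches = 2685
# Roughly 91.3% of the system's "matches" were false positives.
print(f"{false_positives / total_matches:.1%} of matches were false positives")
```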
Facial detection operations are not limited to blind trials. They are employed in service of a range of economic, social, and political objectives, which rely on a greater coherence of individuals and the environment for operational efficiency. For instance, an online retailer may desire to increase sales by identifying which of its present or future customers are likely to purchase certain goods, or when, if ever, they are likely to make a purchase. A bank may want to gain a deeper understanding of its client base to determine which customers are most likely to default on a loan or credit card agreement. Once correlated, these customers can then be classified by risk and given either higher or lower interest rates (a minimal sketch of such risk scoring follows the passage below). A judicial committee concerned with recidivism may want to make parole decisions based on the probability that a convicted person will become a repeat offender. Or, a police agency may attempt to optimize the distribution of police cruisers based on the probability that a neighborhood will yield lower crime rates given police presence. Here, the notion of objectivity is once again challenged, bringing forth what Ezekiel Dixon-Román terms “algo-ritmo,” or the disciplining of the flesh. Dixon-Román writes:
Regardless of the degree of human subjectivity behind the code for this algorithm, it is the case that the performative act of this algorithm can be a powerful force in shaping and disciplining the flesh. In this example, algo-ritmo quite explicitly disciplines the flesh and designates humanity into full humans and nonhumans… As an immanent act beyond human intervention, algo-ritmo is a performative force that may do more than simply reify “difference.” With the ubiquity of algorithms in society, algo-ritmo has the capacity to reconfigure the boundaries of “difference” as well as further magnify the sedimentation of “difference.”26
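The mundane mechanics of this disciplining are worth seeing plainly. Below is a minimal sketch of the bank scenario described earlier, with every feature name and weight hypothetical: a logistic score over customer attributes, thresholded into interest-rate tiers.

```python
import math

def default_risk(income, missed_payments, credit_age_years):
    """Hypothetical logistic risk score; real weights would be learned
    from historical repayment data, with all the history that implies."""
    z = -2.0 - 0.00003 * income + 0.8 * missed_payments - 0.05 * credit_age_years
    return 1.0 / (1.0 + math.exp(-z))  # probability of default

def interest_tier(p):
    # The "performative act": a continuous score becomes a categorical fate.
    if p > 0.5:
        return "high rate"
    if p > 0.2:
        return "standard rate"
    return "low rate"

p = default_risk(income=32000, missed_payments=2, credit_age_years=3)
print(f"predicted default risk: {p:.2f} -> {interest_tier(p)}")
```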
Dixon-Román points to the act of classification as not only the reification of difference, but also the re-configuration of the ontologies of the human in relation to technology. In these instances, the process of computer vision begins with the presupposition by the human perceiver that the machine has already attained a state of objectification independent of the variability of human experience and knowledge. These problematics are compounded when technological research collides with public trust/distrust in data and automated systems that organize the complex realities of human relations. While these technologies illuminate sentient and computational concerns, they also unearth human-machine phenomena that might otherwise remain hidden or obscured from vision. In this way, machine learners are as much relational as they are operative in the shaping of perception “as if” they are direct simulations of sentient intelligences—as illustrated in desires for more “natural” or sentient-like behaviors in speech generation/recognition, robotics, and artificial intelligence.
In this sense, computer vision becomes an act of thought that privileges the view that individuals are created on the basis of coherence and categorical division. In this way, the relation between human reason and artificiality emerges as an immediate contradiction of perceptual domains. While one—the human—is based on a multivalent array of lived realities, the other is founded in a nominalism devoid of dynamism. This astonishing circumvention of indeterminacy naturalizes and objectifies the variant ways in which human beings live their lives to a degree that any mode of coexistence becomes no more than a transcendental presence brought forth by a single epistemological point of view. The operation of individuation is furthermore relegated to a series of representations amongst a falsely unified species. The consequence is the “freezing” of dynamic life into a homogenous milieu, and the immediate suppression of the lived conditions that catalyze certain individual transformations and subsequently inform future iterations of being.
We are, in Merleau-Ponty’s view, relational beings: life “has a social atmosphere just as much as it has a flavor of mortality.”27 Yet, by bringing the fictive substance of race into the realm of the artificial, machine perception functions as a series of “rudimental navigations,” to use Johanna Seibt’s description, that operate under the logics of rule-based procedures, “as if” they are human.28 If machine perception is to mimic anything other than pre-existing hylemorphic association, then we must take into account that the machine is not human. The artificial cannot comprehend the full scope of life and human existence. It is through the rule-based procedure, the substance, that the machine maintains both its dependence on the limits of metaphysics and the objectification of life itself. These demands are in addition to what Seibt argues is a “gradual increase of the regulatory dependence up to normativity.”29 Here, multivalent human perception is not enlisted to widen our understanding of ourselves or the world around us, but is instead of value to research only in as much as it can assist in the demonstration of what machines can or cannot do.30 The current ontological position, she argues, can be described more precisely as a tendency to place value on the knowing-how of knowledge production as a process dependent upon the formulation of description in conversation with normative modes of performance. I quote Seibt at length:
From a philosophical viewpoint it is a category mistake to assume that we can interact with anything—whether robot or human—as if it were a person. “Person” is not a descriptive predicate designating a feature, but a declarative-ascriptive predicate designating a response-dependent condition like “red” or “C#,” with the distinctive difference that the response to the item in question is not perceptual but normative—it is undertaking a commitment to certain actions and omissions, in accordance with rights and obligations… When we call an entity a person we thereby, in the performance of that utterance, take on certain absolute normative commitments—in fact, to call something a person is to do nothing else but to make these commitments… You cannot say “It is as if I hereby promise you…” nor “I promise you somewhat…” Similarly, if we treat some x as a person we are committed to taking x as a person, which means that we interact with x as a person.31
The role of normativity, then, in its increased tendency to drive the production of normality, takes on new significance in ethical debates surrounding the applications of artificial intelligence systems. At risk is the continued (re)production of the categorical other and the foreclosure of the lived conditions through which specific humans might determine if a machine is friend, foe, colleague, or neighbor. Luciana Parisi has shown that the “actualities that select, evaluate, transform, and produce data” expose the internal inconsistencies of rational-based systems, particularly functional mechanisms that advocate for the naturalization of reason.32 Her argument calls into question the fundamental logic of the rule-based procedure and offers new opportunities to re-assess the question of what actualities count as processes of artificial reason.
Without a wider scope, debates on these matters remain incomplete in their characterization of algorithmic prejudices and social discriminations. Attempts at reconciling this arguably unsettled debate rely on a commitment to sufficiently characterize the constitution of a more affirmative process of machinic existence that can gain a totality in relation to artificial modes of perception. The proposal asks us to consider what is overlooked in machine perception, and in doing so dislodge both the ontological and functional process of machine perception from its roots in substantialist metaphysics. Machine perception here demands a new reflexive position that can generate alternative levels of operation. As Robert Brandom reminds us: “Description [becomes] classification with consequences, either immediately practical (‘to be discarded/examined/kept’) or for further classification.”33
Affirmative psychic genesis
A revision of this field of perception demands a return to metaphysics and a reading of the genesis of being from the perspective of multivalent modes of reality. Although artificial systems are based on classification and description, with tendencies of revision and prediction, there are other modes of logic that ontology has under-articulated. Whereas Seibt proposes a technological solution that can account for languages that do not contain terms for “human mental states, agentive goals, or social relations,” Stefano Harney and Fred Moten suggest that we imagine an ontological relation that prioritizes the psychic generation of the black technical object. Within this framework, we are asked to give thought to the black technical object that does not “want to be correct” or “corrected”:
Consider the following statement: “There’s nothing wrong with blackness”: What if this were the primitive axiom of a new black studies underived from the psycho-politico-pathology of populations and its corollary theorization of the state or of state racism; an axiom derived, as all such axioms are, from the “runaway tongues” and eloquent vulgarities encrypted in works and days that turn out to be of the native or the slave only insofar as the fugitive is misrecognized, and in bare lives that turn out to be bare only insofar as no attention is paid to them, only insofar as such lives persist under the sign and weight of a closed question?34
Here, we can imagine an object that develops an indifference to description or any other form of artificial representation. It would maintain its lived experience as such, as an internal transformation, both within and in excess of artificial perception. The act of psychological transformation, here, challenges the state of homogeneity by engaging in a transformative politics of affirmative self-belonging.35 Harney and Moten develop their understanding of the transformative subject by suggesting that the individual viewed as not belonging is represented as a type of cultural entropy within dominant systems of power. However, this “hidden” individual is only excess insofar as it is viewed from the standpoint of lack or decay. Harney and Moten argue that within the experience of social contact, what bell hooks might call a “communion,” the entropic individual exceeds the barriers of social relations to enter an alternative space of becoming—made possible by a reimagining of the self. In other words, making allowance for the unusable, uncommon, and thus incomputable individual potentializes the social space toward new ways of relating.
If, as Moten argues in “The Case of Blackness,” “the cultural and political discourse on black pathology has been so pervasive that it could be said to constitute the background against which all representations of blacks, blackness, or (the color) black take place,” then how might the pathological dulling of black life inform new readings of computer vision? What would it take to resist notions of universal correlation, and instead value the dissonance that emerges in relation with analytics?36
By prioritizing cohesion, algorithmic processes erode the potential for human difference and self-actualization. Coherence gives rise to what Deleuze and Guattari call the “demands of singularity,” or an insistence on the union of diverse entities into a single group, form, body, or relation.37 In “Nomadology: The War Machine,” Deleuze and Guattari remind us that “the model in question is one of becoming and heterogeneity, as opposed to the stable, the eternal, the identical, the constant.”38 The black technical object converges with the artificial in an assemblage of mutable and multivalent experiences. Here, both the black technical object and the technical object inform each iteration of themselves in a self-governing system of feedback.
While entities within the assemblage might be perceived as incompatible, new conditions for self-actualization emerge at each moment of contact. In other words, the black sense of self is formed, informed, and reformed at the moment of dissonance between self-perception and any externally constructed view of black life. These tensions are the conditions from which new iterations of the self are generated, iterations that exceed the reductions of representation and visibility. They form a recurrent system of feedback that enacts what Foucault has called technologies of the self, which “permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection or immortality.”39 By prioritizing difference, entropy (or the loss of computational coherence as such) can be conceptualized as the condition through which transformation is made possible. In this way, entropy is revealed to be part of a system inclusive of contingency, instability, multivalent modes of perception, indeterminacy, and iterations of self-actualization.
If we accept the present matrix of computer vision as a normalizing logics, then perhaps we should turn away from our dependencies on the artificial and activate the internal halls of possibility that pre-exist in human potentiality. The concerns that arise from within our parasitic relation with the technical object are no less immediate, as artificial intelligences such as computer vision articulate a wider logic of reductionism and black exclusion. What we experience today as algorithmic prejudice is the materialization of an overriding logic of correlation and hierarchy hidden under the illusion of objectivity. Meanwhile, the fictive substances of race and racialization work to disrupt self-actualization by re-enforcing the false assumption of coherence. In the drive towards coherence, computer vision is set in place “as if” it is human and the guardian of judgement. In operation, it is assigned the role of interpellator, assigning value (in terms of visibility) to the individual only in as much as he/she/they can be measured against a universalizing concept of being. In the collision between blackness and the artificial, this operation can materialize, even unwittingly, as an incoherence, which places undue weight on the perception of oneself and one’s environment. While this perception speaks to the immediacy of racism and racialization, an opportunity emerges to shift the pathological perspective from one of entropy and lack to a more affirmative process of psychic generation. Here, the development of a machinic existence would, at its origin, substitute the view of computational duress for one of black totality, always already in the process of transformation. The resultant incompatibility could then be seen as an act that, while bringing forth pre-existing substances of racialization, can make use of this duress to catalyze future affirmative iterations of the self. Ultimately, what is prioritized within this psychic encounter is a compassion for the self as already coherent at the moment of artificial misrecognition—a self that is continually taking shape, as blackness has always done, in its exploration of infinite halls of possibility.
Joy Buolamwini, “Aspire Mirror,” ➝.
Brian Potetz and Tai Sing Lee, “Scene Statistics and 3D Surface Perception,” in Computer Vision: From Surfaces to 3D Objects, ed. Christopher W. Tyler (Boca Raton, FL: Chapman and Hall/CRC, 2011), 1–24.
Christopher W. Tyler, ed., Computer Vision: From Surfaces to 3D Objects (Boca Raton, FL: Chapman and Hall/CRC, 2011), vii.
Buolamwini, “Aspire Mirror,” ➝.
Ibid.
Frantz Fanon, Black Skin, White Masks (London: Pluto Press, 2017), 151.
Fanon, Black Skin, White Masks, 155.
bell hooks, Black Looks: Race and Representation (Boston: South End Press, 1992), 2.
Ibid.
hooks, Black Looks, 4.
Sylvia Wynter, “Human Being as Noun? Or Being Human as Praxis? Towards the Autopoetic Turn/Overturn: A Manifesto” (2007), ➝.
Ibid.
Ibid.
Wynter, “Human Being as Noun? Or Being Human as Praxis?,” ➝.
Ibid.
Ian Hacking, The Taming of Chance: Ideas in Context (New York: Cambridge University Press, 1990), 6.
Ibid.
Hacking, The Taming of Chance.
Ibid.
Ibid.
Hacking, The Taming of Chance, 107.
Hacking, The Taming of Chance, 113.
Wynter, “Human Being as Noun? Or Being Human as Praxis?,” 4, ➝.
“Central London in facial recognition trial,” BBC (December 16, 2018), ➝.
“Face recognition police tools ‘staggeringly inaccurate’,” BBC (May 15, 2018), ➝.
Ezekiel J. Dixon-Román, “Algo-Ritmo: More-Than-Human Performative Acts and the Racializing Assemblages of Algorithmic Architectures,” Cultural Studies ↔ Critical Methodologies vol. 16, no. 5 (June 26, 2016): 1–9.
Maurice Merleau-Ponty, Phenomenology of Perception (New York and London: Routledge Classics, 2003), 425.
Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy, eds. Johanna Seibt et al. (Amsterdam: IOS Press, 2014), 97–105.
Johanna Seibt, “How to Naturalize Sensory Consciousness and Intentionality within a Process Monism with Normative Gradient: A Reading of Sellars,” in Sellars and His Legacy, ed. James R. O’Shea (Oxford: Oxford University Press, 2016), 188.
Sociable Robots and the Future of Social Relations, 97–105.
Ibid.
Luciana Parisi, Contagious Architecture: Computation, Aesthetics, and Space (Cambridge, MA: MIT Press, 2013), ix.
Robert Brandom, “How Analytic Philosophy Has Failed Cognitive Science,” Critique and Humanism vol. 31, no. 1 (2010): 151–174.
Stefano Harney and Fred Moten, The Undercommons: Fugitive Planning & Black Study (Wivenhoe: Minor Compositions, 2013), 47–48.
Ibid.
Fred Moten, “The Case of Blackness,” Criticism vol. 50, no. 2 (Spring 2008): 177.
Gilles Deleuze and Felix Guattari, “Nomadology: The War Machine,” Atlas of Places (1986/February 2018), ➝.
Ibid.
Michel Foucault, Technologies of the Self: A Seminar with Michel Foucault, eds. Luther H. Martin, Huck Gutman, and Patrick H. Hutton (Amherst: University of Massachusetts Press, 1988).
Becoming Digital is a collaboration between e-flux Architecture and Ellie Abrons, McLain Clutter, and Adam Fure of the Taubman College of Architecture and Urban Planning.