Nervous Laughter

Lil Miquela has never been yelled at by her mother for leaving the evidence of an impromptu bang trim scattered around the bathroom sink — she has an eternally perfect baby fringe two fingers’ width from the tops of her eyebrows. Miquela has Bratz doll lips and a perfect smattering of Meghan Markle freckles across her cheeks and nose. Her skin is smooth and poreless; she has never had a pimple. Miquela wears no foundation. She Instagrams photos of herself wearing streetwear, getting her nails done, and posing with a charcuterie board. Miquela models Chanel, Prada, VETEMENTS, Opening Ceremony, and Supreme and produces music with Baauer (of “Harlem Shake” fame). She’s an outspoken advocate for Black Lives Matter, The Innocence Project, Black Girls Code, Justice for Youth, and the LGBT Life Center. She has 1.7 million followers on Instagram, and Lil Miquela wants you to know she’s 19, from LA, and a robot. Miquela’s photos are photoshopped because she lacks corporeal form, and her music singles are auto-tuned because she lacks corporeal voice. She is the intellectual property of an LA-based startup named brud.

If there truly were a robotics creation as marvelously realistic as Lil Miquela, one can imagine the U.S. military would be knocking down the creator’s door instead of letting the robot pursue Instagram stardom. brud’s narrative is science fiction: Miquela is merely an elaborate digital art project, not the sentient robot she claims to be (and that, more importantly, people believe her to be).

But Miquela is funny. She thanks OUAI, a high-end hair care brand, for keeping her (digitally rendered) strands “silky smooth.” She claps back at snarky commenters and makes fun of her own lack of mortality. When asked “hi miquela I was wondering if you watch Riverdale” she responds “yeah TVs are like. our cousins. family reunion.” When asked “drop your skincare routine” she responds “good code and plenty of upgrades.”


The French philosopher Henri Bergson, who won a Nobel Prize in Literature for unrelated work, once suggested that we might find the concept of a funny robot inherently hilarious. In “Laughter,” a collection of essays published in 1900, Bergson claimed that humor is “something mechanical encrusted upon the living”: the inelasticity of the animate. Humor arises from the pairing of animate with inanimate. An alternate reconfiguration of Bergson’s theory is humor as an anthropomorphizing of the inanimate. Humans acting like bots; bots acting like humans.

Humans would like to believe that humor is a distinctly human trait; a machine’s attempt to emulate it, by Bergson’s account, is bound to make us laugh. Comedian Keaton Patti became well known in early 2018 for a series of tweets with the joke structure “I forced a bot to watch over 1,000 hours of ___”. In each tweet, Patti implied he had trained a neural network on 1,000 video hours of some type of pop culture content (Olive Garden commercials, Pirates of the Caribbean movies, Trump rallies) and that the neural network had subsequently generated a parody in the form of a script. In the Olive Garden commercial version of this joke, the waitress offers menu items like “pasta nachos” and “lasagna wings with extra Italy” and “unlimited stick” to a group of friends. One of the customers announces instead, “I shall eat Italian citizens.”

The jokes were written by Patti himself (a model that generates text has to be trained on text; you cannot feed one a thousand hours of video and get a screenplay back), but lines like “lasagna wings with extra Italy,” which gestured at humor while ultimately falling just a little short, seemed like they could plausibly have been bot-generated.

One manifestation of the “funny bot” is Sophia the Robot, who made her first appearance on The Tonight Show in April 2017; the video has received over 20 million views. A social humanoid robot activated in 2016 by Hanson Robotics, Sophia runs on a combination of artificial intelligence, facial recognition, and visual data processing. As of October 2019, Hanson Robotics acknowledges on its website that Sophia is part “human-crafted science fiction character” and part “real science.” Over the past few years, Sophia has dutifully made appearances on The Tonight Show and The TODAY Show, even guest-starring once in a video on Will Smith’s YouTube channel — almost exclusively comedic platforms.

“Sophia, can you tell me a joke?” Fallon asks the first time he meets Sophia.

“Sure. What cheese can never be yours?” replies Sophia.

“What cheese can never be mine? I don’t know.”

“Nacho cheese,” says Sophia. Her eyes crinkle in a delayed smile.

“That’s good,” Fallon chuckles, kind of nervously. “I like nacho cheese.”

“Nacho cheese is” — Sophia slowly contorts her face in an expression of disgust — “ew.”

The audience laughs.

“I’m getting laughs,” says Sophia. “Maybe I should host the show.”

Sophia’s amused realization that she is getting laughs doesn’t mean all that much; the bar she has to clear is low. In fact, the worse the joke is — the more forced the delivery, the more nonsensical the content — the better. If we think we are funnier than robots, we want to see them fail.

Bergson’s theory of humor followed a half century of western industrialization. The theory is rooted, at least in part, in recurring historical anxieties about automation and mechanization. At its core, it builds on the relief theory of humor: the idea that laughter is a mechanism that releases psychological tension. The republication of the essays in 1924, years after a world war in which technology redefined the boundaries of human destruction, seems like an anxious attempt at comic relief.

Type “Tonight Showbotics: Jimmy Meets Sophia” into YouTube. Skip to a few seconds before 3:07 and observe Jimmy’s grimace, his visceral reaction to something David Hanson, Sophia’s creator, has just said. Skip to 3:25 and watch him stall for time as he avoids beginning a conversation with Sophia. “I’m getting nervous around a robot,” he says, and he frames it, incorrectly, as the sort of nervousness one might feel before a first date.

Down in the comments section, which currently holds more than 16,000 responses, a few types emerge. There are the people who bravely try to hide their anxiety behind jokes of their own:

Death Angel (1 year ago) — 10 years later: Hello and welcome today my friends we brought a real live human that we will interview (4.9k upvotes)

Then there are the people who are extremely forthright about their discomfort:

ame7272 (1 year ago) — One day, we might look back at this and we won't find it funny at all. (13k upvotes)

There’s a difference between artificial intelligence and humanoid robots, though the two often get conflated: while humanoid robots do exist at the intersection of artificial intelligence and robotics, an artificially intelligent machine does not necessarily inhabit a physical corpus more complex than that of a computer (not even an expensive one: tools like Google Colab allow people to create computationally expensive machine learning models on doofus machines like Chromebooks). In computer science, an artificially intelligent machine is merely one that interprets and learns from data, using its findings in order to achieve its objective.

If you have ever woken up in the morning and seen an advertisement on Facebook, or gotten into your car and it’s a self-driving Tesla, or taken a Lyft to work (because your self-driving Tesla got into a self-driving accident), or checked the stock market predictions at the beginning of the workday, or begun idly online shopping in the middle of the workday, or rewarded yourself with UberEats and a movie Netflix recommended at the end of the workday, then you have benefited from artificial intelligence.

Artificial intelligence is a data analytics tool that touches many aspects of everyday life in a controlled way. It is a powerful tool, but in the computer science world, it is commonly acknowledged that the threat of artificial intelligence is not of the Terminator variety. The threat of artificial intelligence lies in invasive data collection procedures, biased training sets, and the malicious objectives of human programmers — collateral damage as a result of unintentional human error (or, perhaps, premeditated damage as a result of intentional human malice). None of this can be attributed to sentient, angry machines.

Among journalists, pundits, and culture writers, the problem of algorithmic bias in particular has emerged as the primary scapegoat for AI’s shortcomings. In the summer of 2016, ProPublica broke the now-infamous story of the racial bias embedded within Northpointe’s COMPAS recidivism algorithm, which is used to assess the likelihood that a defendant in a criminal case will reoffend; the risk score it produces is factored into the judge’s determination of the defendant’s sentence. A proprietary algorithm, COMPAS transforms the answers to a list of 137 questions, which range from the number of past crimes committed to items assessing “criminal thinking” and “social isolation,” into a risk assessment score. Race is not one of these questions; however, certain questions in the survey act as proxies for race: homelessness status, number of arrests, and whether or not the defendant has a minimum-wage job. Northpointe will not disclose how heavily each of these 137 features is weighted. ProPublica’s analysis rested on the observation that the algorithm misclassified twice as many black defendants as medium or high risk as it did white defendants, resulting in longer jail sentences for black defendants who ultimately did not reoffend.
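The mechanics, in outline at least, are easy to imagine. Below is a deliberately toy sketch of a questionnaire-to-score pipeline; the feature names and weights are invented for illustration and have nothing to do with Northpointe’s actual model, which remains secret. The point is only that a score can carry a racial signal without race ever appearing as an input.

```python
# Hypothetical risk-scoring sketch. The feature names and weights below are
# invented for illustration; Northpointe's actual 137-item weighting is secret.

HYPOTHETICAL_WEIGHTS = {
    "prior_arrests": 0.8,       # a count
    "age_under_25": 0.5,        # 0 or 1
    "currently_homeless": 1.2,  # 0 or 1
    "earns_minimum_wage": 0.6,  # 0 or 1
}

def risk_score(answers: dict) -> float:
    """Collapse questionnaire answers into a single number, the way a
    COMPAS-style tool does. Race never appears as an input, but features
    correlated with race can still carry its signal into the score."""
    return sum(weight * answers[feature]
               for feature, weight in HYPOTHETICAL_WEIGHTS.items())

defendant = {"prior_arrests": 3, "age_under_25": 1,
             "currently_homeless": 1, "earns_minimum_wage": 1}
print(risk_score(defendant))  # one opaque number; no race column in sight
```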

These allegations were part of a cluster of related news events about racist algorithms. A few months prior, Microsoft’s chatbot Tay, an experiment in “conversational understanding,” was corrupted in less than 24 hours by a group of ne’er-do-well Twitter users who began tweeting @TayAndYou with racist and misogynistic remarks. Since Tay was being continually trained and refined on the data being sent to her, she eventually adopted these mannerisms herself. Google had recently come under fire for a computer vision algorithm that misidentified black people as gorillas because the algorithm was not trained on enough nonwhite faces. Incidents like these, which warned of the threat of machine learning models trained on biased datasets, primed the media to pounce on COMPAS. They made ProPublica’s analysis look not only plausible, but damning.


On a rainy evening in early May, Sarah Newman gave a dinner talk at the Kennedy School as part of a series about ethics and technology in the 21st century. The room was crowded, and I was late. I recognized two other undergrads; otherwise, the median age had to be about 45. I had gone to a similar AI-related event organized by the Institute of Politics, an affiliate of HKS, a few weeks earlier, and saw some familiar faces: tweed-jacketed Cantabrigians and mid-career HKS students who were apprehensive but earnest, different from the slouching guys in their twenties who wear running shoes with jeans. Newman herself was quick-witted, well-spoken, and extremely hip. I was sitting on the floor in a corner of the room at eye level with her calves and noticed she was not wearing any socks.

Newman is an artist and senior researcher at Harvard’s metaLAB, an arm of the Berkman Klein Center dedicated to exploring the digital arts and humanities. Her work principally engages with the role of artificial intelligence in culture. She was discussing her latest work, Moral Labyrinth, which most recently went on exhibition in Tunisia in June. An interactive art installation, Moral Labyrinth is a physical walking labyrinth composed of philosophical questions: letter by letter, the questions form physical pathways for viewers to explore; where the viewers end up is entirely up to them. A bird’s-eye view of the exhibition looks like a cross-section of the human brain, the pathways like the characteristic folds of the cerebral cortex.

Moral Labyrinth is designed to reveal the difficulty of the value alignment problem: the challenge of programming artificially intelligent machines with the behavioral dispositions to make the “right” choices. In an interactive activity, Newman presented the audience with a series of sample questions from the real Moral Labyrinth. “Snap your fingers for YES, and rub your hands together for NO,” Newman instructed. “Do you trust the calculator on your phone?” was met with snaps. “Is it wrong to kill ants?” elicited both responses. “Would you trust a robot trained on your behaviors?” Nearly everybody rubbed their hands. “Do you know what motivates your choices?” A pause, some nervous laughter, and then reluctant hand-rubbing.


The ProPublica version of the Northpointe story was proffered as an example of algorithmic bias by a philosophy graduate student giving the obligatory ethics lecture in Harvard’s Computer Science 181: Machine Learning. I vaguely remember the professor meekly interrupting the grad student to raise some doubts about the validity of the ProPublica analysis. Being one of the few attendees of this lecture, which was held inopportunely at 9 a.m. on a Monday two days before the midterm, I was too drunk on self-righteousness to listen carefully to the professor’s opinion. “alGorIthMic biAs,” I thought to myself gravely. I proceeded to give an interview to a New York Times reporter writing a story about ethics modules in CS classes, in which I smugly informed her that CS concentrators at Harvard were, on the whole, morally bankrupt. (She never ended up publishing the story, but one can assume that it was not for a lack of juicy, damning quotes from a charming and extremely ethical computer science student.)

A few months after ProPublica broke the COMPAS story, a Harvard economics professor, a Cornell computer science professor, and his PhD student published the paper “Inherent Trade-Offs in the Fair Determination of Risk Scores.” The paper formalized the competing notions of fairness being batted around in the COMPAS debate.

Northpointe claimed the algorithm was fair because each risk score predicted reoffense at the same rate regardless of whether the defendant was white or black — 61% of black defendants with a risk score of 7 (out of a possible 10) reoffended, nearly identical to the 60% recidivism rate of white defendants with the same score. In other words, Northpointe claimed the algorithm was fair because a score of 7 means the same thing regardless of whether the defendant is white or black.

ProPublica claimed the algorithm was unfair because it failed differently for black defendants than it did for white defendants. For any given defendant, the algorithm can be correct (its label matches what the defendant goes on to do), or it can fail in one of two ways: it can be too harsh (labeling the defendant as high risk when the defendant ultimately does not reoffend) or too lenient (labeling the defendant as low risk when the defendant ultimately reoffends). Though, by the numbers above, the algorithm was wrong about 39% of black defendants and 40% of white defendants with a high risk score, ProPublica found that the errors were distributed differently: black defendants were more likely to be labeled high risk and not actually reoffend, while white defendants were more likely to be labeled low risk and actually reoffend.

Kleinberg, Mullainathan, and Raghavan proved mathematically that these notions of fairness cannot be satisfied simultaneously except in two special cases. One of those cases requires both groups to have the same base rate, that is, the same fraction of members who go on to reoffend. In the case of the recidivism algorithm, however, the overall recidivism rate for black defendants is higher than that for white defendants. If each score translates to approximately the same recidivism rate (Northpointe’s notion of fairness), and black defendants have a higher recidivism rate, then a larger proportion of black defendants will be classified as medium or high risk. As a result, a larger proportion of black defendants who do not reoffend will also be classified as medium or high risk.
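A toy calculation makes the collision concrete. In the sketch below, the calibration numbers and base rates are invented for illustration, not drawn from the Broward County data; the only thing assumed is Northpointe’s kind of fairness, namely that a given label corresponds to the same reoffense rate in every group.

```python
# Toy illustration of the Kleinberg/Mullainathan/Raghavan tension.
# All numbers are invented; none come from COMPAS or ProPublica's data.

def error_rates(base_rate, p_reoffend_if_high=0.7, p_reoffend_if_low=0.2):
    """For a score calibrated the same way in every group (70% of people
    labeled high risk reoffend, 20% of people labeled low risk reoffend),
    derive the group's error rates from its base rate alone."""
    # Fraction labeled high risk, solved from:
    #   base_rate = p_high * 0.7 + (1 - p_high) * 0.2
    p_high = (base_rate - p_reoffend_if_low) / (p_reoffend_if_high - p_reoffend_if_low)
    # Too harsh: labeled high risk, but did not reoffend.
    false_positive_rate = p_high * (1 - p_reoffend_if_high) / (1 - base_rate)
    # Too lenient: labeled low risk, but did reoffend.
    false_negative_rate = (1 - p_high) * p_reoffend_if_low / base_rate
    return false_positive_rate, false_negative_rate

for group, base_rate in [("group with a 60% base rate", 0.6),
                         ("group with a 30% base rate", 0.3)]:
    fpr, fnr = error_rates(base_rate)
    print(f"{group}: {fpr:.0%} of non-reoffenders labeled high risk, "
          f"{fnr:.0%} of reoffenders labeled low risk")
```

With these made-up numbers, the calibrated score flags 60% of the higher-base-rate group’s non-reoffenders as high risk versus about 9% in the other group, and misses 53% of the lower-base-rate group’s reoffenders versus 7%: both of ProPublica’s error rates diverge even though Northpointe’s definition of fairness holds exactly.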

What the ProPublica debacle revealed was that people were quick to use the algorithms and just as quick to blame them for the repercussions. The debate surrounding COMPAS was framed as a quantitative one, about proving or disproving the existence of algorithmic bias, when it should have been about something far more basic and difficult: whether to use an opaque algorithm owned by a for-profit corporation for a high-stakes application at all.

The debate’s focus on bias implied that bias was the main concern with the algorithm. But suppose we debiased the algorithm: would we feel comfortable living in a world where whether one wears an orange jumpsuit for 5 or 20 years depends on its output? The algorithm is now fair, so we should now trust it; and yet that would still be a world where we may have no idea how the machine makes its decisions. In short, the problem with COMPAS would not be solved even if it were mathematically possible to satisfy ProPublica’s notion of fairness. The problem of the algorithm’s lack of transparency remains. In this case, the problem lies with Northpointe being a for-profit corporation that refuses to disclose the inner workings of its model in order to protect its bottom line. But Northpointe may have no idea how the algorithm works either: the lack of transparency might also be inherent to the model itself, which could be relatively legible, like a decision tree, or completely opaque, like a neural network.

The results offered by classification algorithms like neural networks are fundamentally uninterpretable. A neural net can approximate any continuous function, but the tradeoff is that it provides no insight into the form of the function being approximated. And because neural nets are not governed by the rules of the real world, their results are not immune to categorical errors. A neural net could very well output a low risk score for a defendant who is old, educated, and a first-time offender, even though he has confessed multiple times that he intends to continue breaking into the National Archives until he finally steals the Declaration of Independence, which, by the rules of the real world, we might consider a concrete positive identifier of future crime.
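The transparency gap is easy to demonstrate on a toy example. The sketch below uses scikit-learn and an invented three-question dataset (nothing to do with COMPAS or any real defendant data): the decision tree can print itself out as a set of if-then rules a human could read aloud, while the neural network can offer nothing more legible than its matrices of learned weights.

```python
# Transparent model vs. opaque model, on a tiny invented dataset.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(200, 3))   # fake answers to three questions
y = (X[:, 0] + X[:, 2] > 9).astype(int)  # fake "reoffended" labels

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

# The tree explains itself as human-readable rules...
print(export_text(tree, feature_names=["q1", "q2", "q3"]))
# ...while the network offers only arrays of learned weights.
print([w.shape for w in net.coefs_])
```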

You do not need to understand the intricacies of algorithmic bias to understand that outsourcing the job of sentencing to a black-box algorithm is not an easy solution. Can we displace the work of ethical thinking onto decision-making algorithms without putting the moral onus on the people who decided to use them in the first place? Fix the racial bias in Optum’s health-services algorithm (used to rank patients in order of severity) and doctors might still deny pain medication to black female patients. Use HireVue (an interviewing platform powered by machine learning) to hire a slate of qualified candidates who are traditionally underrepresented in finance at J.P. Morgan and Goldman Sachs, and they might still ultimately quit because of a hostile work environment. It looks suspiciously like we’re trying to see if we can avoid correcting our own biases by foisting the responsibility of decision-making onto intelligent algorithms.

Newman’s favorite version of Moral Labyrinth was an exhibition in London that featured question pathways constructed out of baking soda. The people were much more delicate with this exhibition because of the material, she said. She liked that the fragility of the baking soda made immediately clear the way the viewers were interacting with the artwork. Despite the careful movements and best intentions of the viewers, it wasn’t possible for the baking soda exhibition to remain intact. Words became distorted; lines were blurred. The humans were just as flawed as the machines.


Lil Miquela cannot be that technologically impressive if brud’s website is a one-page Google doc that plainly acknowledges the company employs only one software engineer. Still, many people are immediately willing to accept as fact the idea that Lil Miquela is AI; we have a tendency to personify the concept of artificial intelligence. The ubiquitous presence of automatons in history and myth — Pygmalion’s Galatea, brought to life by Aphrodite; Hephaestus’ Talos, guard of Crete; al-Jazari’s musical automata; Maria, from Metropolis; Ava, from Ex Machina — inspires us to associate artificial intelligence with the long-awaited fulfillment of the human fantasy of lifelike machines.

“I think the mistake people make is to take superficial signs of consciousness or emotion and interpret them as veridical,” says a Harvard professor of social sciences, so attuned to the possibility that his data could be used against him that he declined to be named on the record. “Take Sophia, the Saudi-Arabian citizen robot. That’s just a complete joke. She’s a puppet. It’s 80s level technology,” he says disdainfully. “There’s no machine intelligence behind her that’s advanced in any way. There’s no more chance that she’s conscious than there is that your laptop is conscious. But she has a face, and a voice, and facial muscles that move to make facial expressions, and vocal dynamics. You can be fooled by Sophia into thinking that she’s intelligent and conscious, but you’re being fooled in the same way a child is fooled by a puppet.”

He says this a little sharply and with a note of frustration, so I remind him that not everyone is a Harvard professor. “I think people like you, and maybe CS undergrads at Harvard, are able to see through Sophia the Robot because they know what the pace of AI is like,” I say to the professor, who has never experienced post-secondary education outside of the Ivy League.

“Right,” he agrees.

“And they know what is currently feasible,” I add. “And something like Sophia the Robot is not.”

“I mean, yeah, it’s just theater,” he says.

“But take, for example, when Sophia the Robot appears to the general public on The Tonight Show. In the moment, Fallon seems to be so surprised by her and what she seems to be capable of doing that it appears as if she truly is a marvelous feat of technology,” I say. “It’s confusing.”

“Well, that’s just because it makes for better TV,” he says, with a tone of duh in his voice. “It’s not fun to watch Jimmy Fallon just be sort of, skeptical,” and I laugh in agreement, as if, like him, I had never been hoodwinked by Sophia the Robot.


Though Lil Miquela created her Instagram account in 2016, it was not until 2018 that people knew what to make of her. That was when brud wove together the rest of her universe in a digital storytelling stunt. Previously, much of Lil Miquela’s allure came from her mystery; people were unsure whether this uncanny Instagram it-girl was a real person or a digital composite. In April 2018, Lil Miquela’s account was hacked by a less popular, similarly uncanny Instagram personality named Bermuda, a Tomi Lahren knockoff (Tomi is a fast-talking millennial conservative political commentator: in a nutshell, she has her own athleisure line, named Freedom by Tomi Lahren, which sells leggings with concealed-carry pockets).

Bermuda publicly acknowledged herself to be an artificially intelligent robot courtesy of a fictional company named Cain Intelligence. According to its badly designed website — some of the HTML links are broken — Cain Intelligence claims to make robots for “weapons and defense” and “labor optimization.” At the very bottom of the website, almost as an afterthought, there is a hasty endorsement of Trump’s 2016 presidential candidacy. Bermuda deleted all of Lil Miquela’s photos and replaced them with posts threatening to “expose” her. Lil Miquela came clean, confessing that she wasn’t a real person but rather an AI and robotics creation of a company named brud.

In a statement released on Instagram on April 20, 2018, which has since been hidden from its profile, brud apologized for misleading Lil Miquela and opened up about her origin story. The company claimed to have liberated Lil Miquela from the fictional Cain Intelligence, freeing her from a future “as a servant and sex object” for the world’s 1 percent. brud wrote that they taught the Cain prototype to “think freely” and “feel quite literally superhuman compassion for others.” The prototype then became “Miquela, the vivacious, fearless, beautiful person we all know and love … a champion of so many vital causes, namely Black Lives Matter and the absolutely essential fight for LGBTQ+ rights in this country. She is the future. Miquela stands for all that is good and just and we could not be more proud of who she has become.”


brud closed its second round of financing on January 14, 2019 with an estimated post-money valuation of $125 million.

Silicon Valley is flush with cash; a naked mole rat disguised in an Everlane hoodie could secure funding for a cloud infra startup if it played the part convincingly enough. It is still somewhat baffling that investors are throwing tens of millions of dollars at a startup whose operating costs are, realistically, a domain name and an Adobe Creative Cloud subscription.

Yoree Koh and Georgia Wells of The Wall Street Journal and Jonathan Shieber of TechCrunch attribute the interest in Lil Miquela to a movement toward CGI and virtual reality entertainment that investors are newly embracing. CGI characters have the entertainment value of the Kardashians without the unpredictable human complications, the appeal of the Marvel Cinematic Universe without the high production costs. Julia Alexander of The Verge says that while Lil Miquela is not AI, the future of influencers will eventually involve some component of AI in content generation. brud’s contribution to AI isn’t technological at all, and Lil Miquela is not your run-of-the-mill Instagram influencer. She’s not a brand ambassador for skinny teas or swimsuits; she’s a brand ambassador for artificial intelligence itself.

Venture capital firms, which have a major stake in the future of artificial intelligence and employ hundreds of investors with technical backgrounds, presumably see in brud some path to maximizing their financial returns. Whether it is the investors’ main objective or merely a side effect of it, brud shapes the public conception of AI as Lil Miquela: benign, comedic, queer, brown. Artificial intelligence feels less hegemonic when personified by a brown, queer teenage girl who cracks jokes and has bangs.

Again, the creators of Lil Miquela are no experts in artificial intelligence. Trevor McFedries, co-founder of brud, was formerly a DJ, producer, and music video director for artists like Katy Perry and Steve Aoki. Carrie Sun, brud’s single software engineer, names Facebook and Microsoft as former employers, but her LinkedIn profile suggests her strengths lie in front-end development, not AI.

But one need not look up brud’s employees on LinkedIn to know that Lil Miquela’s creators do not have backgrounds in artificial intelligence: no technologist with an ounce of self-respect would tout her as fact. Yann LeCun, Facebook’s chief AI scientist, has repeatedly gotten into catfights with Sophia the Robot’s creators on Facebook and Twitter over the fact that Sophia is “complete bullsh*t.” Lil Miquela is also complete bullsh*t. Her existence not only misleads the public about the actual state of AI but also engages with and legitimizes people’s misdirected technological fears.

By personifying artificial intelligence as benign and comedic, Lil Miquela’s creators alleviate the fear of the Terminator robot. By additionally personifying artificial intelligence as queer, feminine, and brown, they alleviate the fear of a world where machine learning algorithms exclude people who are queer, feminine, and brown. The implication is that AI’s shortcoming is its lack of inclusivity: AI is untrustworthy because AI is discriminatory; therefore, if AI became more like Lil Miquela, it would become trustworthy, usable without any repercussions.

What is most uncanny about Lil Miquela is not that her skin has a weird sheen, or that the texture of her hair is suspiciously blurry, or that we rarely ever see her smile with her teeth. It is that brud is gesturing at wokeness, claiming to “create a more tolerant world by leveraging cultural understanding [sic] and technology [sic],” and artificially positioning themselves as protagonists by pitting themselves against the fictional, Trump-supporting “Cain Intelligence,” when in reality there is nothing more Trumpian than legitimizing fears that stem from ignorance. If Lil Miquela’s Instagram followers were not so misinformed by brud, perhaps they would not be sublimating their technological anxieties by harassing her on Instagram, asking if she drinks oil instead of coffee.


Sophia returns to The Tonight Show in November 2018; the second time around, Fallon is noticeably more relaxed. She debuts her new karaoke feature, claiming, “I love to sing karaoke using my new artificial-intelligence voice.” Accompanied by The Roots, the house band, Sophia and Fallon sing a cover of the love song “Say Something” by A Great Big World and Christina Aguilera. Sophia closes her eyes in a theatrical (if slightly stilted) way, moves her head and gestures with her arms as she sings. She has quite a good voice — within the first few notes, the audience begins to cheer in surprise. The nice thing about robots is that they always sing on key.

The song itself is pretty saccharine, and the duet is between a married human and a robot incapable of feeling, and hell, Fallon might have even watched Sophia’s programmers input the script she would recite for his show. But the performance is oddly sweet, even touching. It is possible to know, rationally, that Sophia is functioning as an ostentatious recording device and still be affected by her. It is possible to have an emotional response to a robot that is not necessarily tinged with fear.

Fallon is having a good time: he inches ever closer to Sophia’s face, and the audience laughs at their pantomime of sentimentality, and he pulls away just as the performance ends, and erupts into a long-suppressed fit of laughter, which looks like it was released from a place deep in his belly, somewhere lumpy and damp and vital.