misc

Things I like seeing

@nisipisa

things i like seeing

♬ Clair de Lune – Ave Maria

Working on another Supernormal post while I keep chunking through TikToks. Juggling two or more projects is something I’ve never been good at, but always known I need to practice, and right now it feels really good! I have a clear vision and it takes the stress off of being barely halfway through my liked feed (!).

I’ve been running into some good ones, though! Like the above, by nisipisa! It’s weird, I’m starting to draw a lot of connections between TikToks, and I’m remembering one where a woman is talking about how misanthropic it is to hate on things that most people do. Or like, we want to mythologize certain things because few people are capable of them, rather than cherishing everyday activities. This is a good counterpoint to that: the things nisipisa describes are very human.

In honor of that, here are things I like seeing:

  1. Anyone dining on a restaurant patio. Double if it’s a nice restaurant and they have a nice bottle of wine or something. You know they’re having a moment.
  2. Older men dancing. Too many men are made to feel unattractive or uncomfortable in their bodies as they age, but nothing in the rulebook says you can’t still do TikTok dances!
  3. Women building anything, especially those videos where they open with “I wanted a greenhouse but I have zero carpentry experience” before building a whole greenhouse. Scientists were all like “hOw DiD rOmAnS bUiLd ThE pAnThEoN,” dude just give any woman a long enough Spotify playlist and they’ll figure it out.
  4. Couples on electric scooters. I knooow they’re like a blight on urban transit or whatever. But I never don’t smile when I see people zoom by on those goofy lil scooters — I was a scooter kid growing up, so it feels really whimsical to me.
  5. Related, but anyone just getting from A to B on a skateboard or roller blades, or any non-car non-bike vehicle. Hell yeah, zoom!
  6. People when they notice a photographer is around. Like you just keep doing you dude, but everyone suddenly becomes an actor portraying themselves and I think that’s really charming.

I want to like AI, stop making me hate it

It’s a tremendously rainy day, I’m still hours of TikToks away from completing my next Supernormal post — programming note, until I’m confident I have the time to juggle multiple projects simultaneously, posts will continue to be done when they’re done — and I have the day off work. Still, despite the threat of boredom, I initially refused to mess around with GPT.

My dislike of the technology was instilled by tech futurists and mercenary bloggers who, lacking loyalty to creative work on some fundamental level, focused the conversation on how we can replace pesky artists.

Pesky artist here: not cool. Also, good luck.

Nothing I’ve seen has inspired an iota of confidence that AI can replace creativity. In fact, the more I’ve learned the less confident I’ve become. Replacing artists is a whole other matter, an outright contradiction of what art means.

Still, I have a crypto-borne hunch that enough wealthy people believing art is mechanizable, optimizable, could make it so. The Tinkerbell effect: the creative power of collective belief.

Gather the real-world manifestations of this hustledrone corpo-shitpost mania — the blessedly short hiatus of Clarkesworld resulting from an influx of AI submissions, from which other, smaller literary publications have not been spared; the disgusting “future of animation” era courtesy of Corridor Crew — and, well, I’m not too sorry for my suspicion around generative AI.

BUT, I also don’t want to dismiss people’s real excitement about the real capabilities of this technology. It’s cool! It should be cool! I don’t want wealthy people to ruin that.

The key is that AI is productivity software, not creative software. Excellent writeup on this subject by Ryan Broderick. He also points out that, despite focus placed on creative industries by — I’m quoting here — “lightly bearded men who pay for Twitter,” the thing this new technology is best at is coding, not creative writing.

Broderick also writes about using ChatGPT to code without experience, but honestly the first thing that’s gotten me excited about this stuff maybe ever is a video by (checks notes) Wyatt Cheng… oh dear. Ahem, a video by Wyatt Cheng in which he recreates Flappy Bird with entirely AI written code.

Cheng’s ignominious position as an Activision Blizzard director aside, I think this video demonstrates something truly cool about generative AI: it lays lots of groundwork, but requires someone with actual skills and ideas to make anything approaching elegant or useful. Cheng regularly identifies problems in the code, things that would improve gamefeel, or just points where his vision didn’t line up with what GPT produced. He could tweak the program in real time because of his technical creative experience.

No architect ever found creative fulfillment in the pouring of concrete. It’s a necessary prerequisite for their creative work.

There’s a nuance here I worry I’m not capturing, because so far what I’ve written sounds like a billion think pieces already written. What I’m saying is, I think the common refrain that AI can be a device for inspiration is a little chickenshit. At the risk of sounding elitist, I don’t support the idea that creating a plot outline, or generating dummy paragraphs, or automatically generating character names is writing busywork. Every step of the writing process should be personal, from inspiration to blank page to revision.

Writers might use ChatGPT to organize their drafts into folders or, I don’t know, set a schedule with word count goals. Painters could identify which blends of paint will create a particular color — although maybe I’m showing my ignorance, and even the process of mixing paints is a source of inspiration. No one should be asking GPT for what to write about or what to paint.

Point is, art isn’t code. I’d rather encourage creative people to take the leap into self-reliance than assuage the thorny parts of creative work with soulless, VC-funded robotherapy.

Hm. So far this post goes “I don’t like AI, but, I don’t like AI.” Let me share what it is I got up to today, my first time actually noodling with ChatGPT and having a good time.

Tabletop game design is my hobby. It doesn’t live anywhere online right now, but I like messing around with it. I use a website called Homebrewery to make my stuff look like official, publishable design. It occurred to me that I could use GPT to translate my work — in this case a character class — from Google Docs, my native design environment, to Markdown, the markup language used by Homebrewery.

It worked okay for that. Seeing my work externalized was cool. Moreover, though, I started asking the machine for roll tables and additional class features. It produced templates in a format I was familiar with, and design language I was familiar with.

What excited me most was how useless everything was. GPT misunderstood my vision. Its range increments were all over the place — cantrips that incapacitated each creature in a 50-foot radius. The flavor text was sometimes neat, but wholly uninspired.

You know what it felt like? It felt like when you drag out a few boxes in Excel to autocomplete the spreadsheet. Nothing created by the machine felt like mine, nothing felt finished. Just a very organized blank space for me to apply my own ideas.

And, though I am very loath to admit it, a few ideas made me go “oo!” but I mean, hell, artists can be inspired by a walk in the park. Maybe it’s naive of me to ignore the notion that a word-association box like GPT could spark something.

ChatGPT thrived as a tidy, intelligent design environment. Like a smartphone-esque upgrade to Docs or Excel. I get to bring the ideas, I get to bring anything that makes the system playable or fun or beautiful, because I have spent a long time developing my own ideas about what’s playable and fun and beautiful.

I’m still waaaaays away from ever using this stuff in my writing. I suspect I never will. But in a high-overhead creative project like game design, GPT isn’t the villain it appears to be.

Personal mythology

Most of why I hate audiobooks — despite enjoying them moderately often, since they fill a really particular function in my driving life — is that I can’t then go reference anything in them. My comprehension is poor. I flip through Nick Bostrom’s Superintelligence often because it’s full of really great insights, but it’s written with the transparency of a brick wall. By the time I understand anything being said, my brain lacks the macronutrients to encode it. That’s why keeping a reading journal or Google doc matters to me.

So like, there’s this section of Colson Whitehead’s The Noble Hustle, which I was reading in January and which resulted in a blog post I’m still very happy with, but also I haven’t finished it because it’s a damn audiobook and I can’t leave it out on my desk to remind me I’m reading it. Anyway. There’s this section. Whitehead describes his tendency to wait out a few bad hands, to “bide” as he calls it, rather than get jumpy and bet big on nothing. He also describes himself describing that tendency to his poker tutor. “The biding thing” becomes an important part of his “personal mythology.”

I think a lot now about personal mythology. It’s an identity tool, it’s a navigation tool. It helps us make sense of why we’re good at some things and bad at other things.

I excelled at competitive trivia in high school because, if I read a poem, I could remember most of the lines. Not off the top of my head. But if someone said “jocund company,” I’d be like “oh, that’s the daffodil one.” Once the very first word of a question was “felicity,” and I buzzed because I remembered something I had read about Jeremy Bentham’s theory of felicific calculus. This made some Catholic schoolkids from the opposing team very mad at me.

Anyway, that memory thing became core to my personal mythology.

On the other hand, I’ve always been sorrowfully dismal at remembering personal details. I can get names alright with a little bit of effort. But if you told me your job, or your plans this weekend, or god forbid your birthday, I really don’t know what to tell you. I probably don’t remember.

So there’s a really nice storyline: my verbal memory is very good, but my personal memory is awful. I bet if I hit the books I could learn some tricks to improve that deficiency. It’s easier, though, to just accept it into the legend I tell myself about myself. Just like I accepted dark undereye circles.

Where it gets messy is when, for example, I accept sleeplessness into my mythology and then cease to be sleepless. I’ve suffered from insomnia on and off for a long time. In the off periods, I get actually stressed that a part of my identity — even an unhealthy one — is gone. Or something like clumsiness, which I think is true about me but which I refuse to accept.

All in all though, I think personal mythologies are necessary and, maybe more important, inevitable. If only to decide how we get to present ourselves to the world.

Let me pet the dog

One of the great victories of the 21st century: storytellers have developed a language and moral code around animals, particularly dogs, and that code strengthens trust with the audience.

This occurred to me when I saw The Banshees of Inisherin. (Spoilers ahead, but I think they’re worthwhile spoilers.) In that movie, a character goes off to commit violence, and twice dialogue assures us that the dog in proximity to that violent scene will be OK.

Since Can You Pet the Dog? got big, most games I’ve played that feature dogs let you pet them, from Midnight Suns to Pentiment. It’s less big (fewer search results, anyway), but Does the Dog Die should become an equally indispensable resource for moviegoers.

I don’t have data in front of me. Subjectively, when an animal is killed in a movie it feels like that scene from Parks and Recreation where Leslie says she’s gonna cut Ben’s head off.

Or maybe more kino of me, There Will Be Blood when Daniel Plainview says he’s gonna cut that guy’s throat. A pet’s death has an extreme, outsized emotional impact on the plot.

Some writers do use this for some story effect — fair warning, an animal does actually die in The Banshees of Inisherin, and more specifically the death of John Wick’s dog incites the whole franchise. Wick kills 299 humans in revenge for one dog. Why?

This is the article I wish I wrote on the subject, by Ben Lindbergh, and it explains the breach of conduct in psychological terms. I’ll add just one thought, in more writerly terms: because of the psychological elements — to summarize, we bond to dogs with the same strength we bond to children, and either’s suffering brings us equal discomfort — hurting a dog is the nuclear option of getting a reaction from your audience. Like, say you’re watching a romcom, and in the inevitable fight scene one of the leads goes on some operatic rant about the evil of their costar, about tyranny and fascism, about the cruel winter of fate and the baleful silence of whatever god is said to shepherd the good and punish the wicked. It’s emotionally dissonant.

Killing a fictional animal is the equivalent. It says, “I, the filmmaker, am setting up an emotional payoff of the greatest imaginable magnitude.”

Such a payoff is very hard to come by. So what it actually reads as, in most cases, is, “I, the filmmaker, couldn’t think of anything better to make you feel something.”

Every audience starts out trusting their narrator. Your job is to keep that trust, to not betray it. Treating animals well is the best olive branch you can offer.

Stupid games

A person with a bag over their head. A smiley face is drawn on the bag.

C. Thi Nguyen on what he calls “stupid games”:

Stupid games have the following characteristics: first, they are only fun if you try to win; and second, the most fun part is when you fail.

His examples are Twister, Telephone, Bag on the Head (?), and “most drinking games.”

I love the idea of stupid games just in general, and specifically as they relate to the argument C. Thi Nguyen makes in Games: Agency as Art: that games are an art form in the medium of doing, voluntary struggle in pursuit of an otherwise unnecessary goal.

Am I only now realizing that, under this argument, Edward Fortyhands is a work of art? I counter with — why wouldn’t it be? William McGonagall “gained notoriety as an extremely bad poet” according to Wikipedia. Are his poems not art, just because they’re bad?

His bio continues, “who exhibited no recognition of, or concern for, his peers’ opinions of his work.” Is that not the truest, most artistic spirit one can have in approaching their own work?

This revelation is actually, physically affecting me. I don’t know what to do with all of this energy. Let my legacy be Mona Lisa Fortyhands. Let it be Sailing to Disaster.

It’s late so I’m writing about the Muppets

Kermit looking at a second Kermit across the desert at night, from The Muppet Movie.

My scattered thoughts about Muppets:

  1. What is it about the medium of colorful puppets that allows messages of hope to stowaway into the brain? I didn’t watch the Muppets growing up, but I’ve seen a suspicious number of very touching Kermit the Frog monologues. Maybe we can all take heart in the idea that having a silly voice and literally being a frog with a hand for a face isn’t enough to superficially deter us from good writing.
  2. Chaos Muppet Theory confuses me. Not because I disagree with it — I mean, the writer is very tongue-in-cheek and I don’t think they’re intending to guide public thought or anything. Just that, despite all this “everyone’s unique” jazz, it actually ends up really easy to separate everyone into two categories. We do it all the time. Logically (“likes ketchup” and “doesn’t like ketchup”), perceptually (“Chaos Muppet” and “Order Muppet”), dishonestly (“teachers deserve more credit” and “teachers deserve less credit” — no one actually believes teachers deserve less credit, that’s not what’s happening here).
  3. Huh. I actually don’t have any more thoughts on Muppets. This is pretty funny.

Dayten’s action-packed guide to Aktionsarten

A cartoon of Isaac Newton watching an apple fall from a tree.
Newton ponders after noticing an apple falling from a tree before it bounces on the ground.

I tried writing this once and it got wiped — which is an interesting writing exercise: Write something once, then write it again but frustrated. The result will condense. Purple prose fades to a kind of beige.

Lexical aspect is the boring word for Aktionsart, which describes verbs with respect to time. There are four or five classes, which you can fill out in sort of a truth table. It goes like this.

  • Achievements are those people you hate. They do everything they say they’ll do, right when they say they’re going to do it. They answer emails first thing in the morning. They drink matcha while they do it. Release, remember, notice, arrive: achievements are instantaneous, and as soon as they start, they finish.
  • Accomplishments do what they say they will, but you have to trust them. It might take a few business days, and their work might be patient and invisible. Whether out of respect for modern ideas about self-care and work-life balance, or due to the lack of ambition that plagues young people these days, accomplishments take time. Cook, drown, say, fold: accomplishments fulfill some end goal over time.
  • Semelfactives just sort of badger you. They blink in and out instantly (incessantly), but don’t do anything. Knock, click, sneeze: one giveaway for semelfactives is that you can repeat them over and over. (Note that we’re talking about deep brain cognitive abracadabra here, so even if a sneeze isn’t technically instantaneous, we still talk about it like it is. You wouldn’t ever say, “Oh, what were you doing while you were sneezing?”)
  • Activities mosey on, hands deep in their pockets, woolgathering over shapes in the clouds or plucking blades of grass to roll between their slow fingers. They’re going nowhere, and they aren’t getting there any time soon. Walk, read, sleep, talk: activities could go on forever and ever.
  • States are the “sometimes Y” of Aktionsarten. They don’t describe a goal or a lack of goal. They just are, and they stay are. Know, be, love, prefer: states reflect the unmoving world.
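
The “sort of a truth table” mentioned up top can be sketched literally. Here’s a minimal Python version, using the three features linguists conventionally assign these classes (dynamic, durative, telic) — the particular feature assignments follow the usual Vendler/Smith-style classification, so treat this as an illustration rather than a definitive taxonomy:

```python
# A sketch of the Aktionsart "truth table": three binary features
# (dynamic, durative, telic) pick out each of the five classes.
FEATURES = {
    # (dynamic, durative, telic): class
    (False, True,  False): "state",           # know, be, love, prefer
    (True,  True,  False): "activity",        # walk, read, sleep, talk
    (True,  True,  True):  "accomplishment",  # cook, drown, say, fold
    (True,  False, True):  "achievement",     # release, remember, notice
    (True,  False, False): "semelfactive",    # knock, click, sneeze
}

def classify(dynamic: bool, durative: bool, telic: bool) -> str:
    """Look up the lexical aspect class for a feature combination."""
    return FEATURES[(dynamic, durative, telic)]

# A sneeze: dynamic, (treated as) instantaneous, with no end goal.
print(classify(dynamic=True, durative=False, telic=False))  # semelfactive
```

Note that the table is deliberately incomplete — some feature combinations simply don’t name a class, which fits the permeable-barriers point below it.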

Do some of these seem to overlap? Can you think of verbs that fit in multiple categories? That would track. Linguists trade in permeable barriers, not rigid taxonomies. Evidenced, maybe, by the use of “achievement,” “accomplishment,” and “activity” as distinct terms of art.

My takeaway — if, indeed, there is any practical application of Aktionsarten: Try to be kind when it comes to achievements. Forgive people for what they don’t “notice”; judge them by what they don’t “look for.” Forgive people for what they don’t “remember”; judge them by what they don’t “consider.”

Is VR 4D? (or: technorati alphabet soup)

An animation of a 2D platformer, projected as a plane from slices of a 3D Minecraft world. It's very complicated, I'm sorry.
Read on to know what you’re looking at. (Image credit: Mashpoe)

By way of figuring out whether I’ve got the bones of an actual essay, I want to talk through a weird turn of phrase I’ve seen a lot around discussions of VR — particularly layman discussions of VR. It’s the idea that the virtual future we’re about to enter into is “4D.” Journalists often describe (I will use this word exactly once, and here it is) the metaverse as fourth dimensional.

Disclaimer that I am also a layman when it comes to VR. Disclaimer that I actively do not want to criticize layman descriptions of things. But where does this language of dimensionality come from? Is it just a sci-fi synonym of “futuristic,” or do these writers have something else in mind?

To start, let’s talk about dimensionality. “Dimensions” come to us from topology, a field of mathematics that studies geometric objects — specifically, objects as they transform without breaking, puncturing, gluing, or sewing. Topology treats objects as pieces of dough or clay that you can stretch and mold however you want, but can never tear up. A donut is different from a flat sheet, but not different from a drinking straw. Topologically. To be honest I’ve never found an easy description of topology.

Dimensionality appears in all kinds of fields, though. You’ve probably heard that the fourth dimension is time. What that means is that, if you were mapping the world on a coordinate grid, you could cover every point with a unique three-number code — but you would need a fourth number, a fourth dimension, to also indicate every point of the world at any point in time. I live at x latitude, y longitude, z elevation, and w time. This is a really classic use of dimensions.

As with literally every concept in math, dimensions are abstract. They don’t have to be used for anything. That’s what all the letters are for: they’re understudies for whatever ends up being useful. Fourth-dimensionality describes any conceptual space that can be defined along four axes. Your opinions on burgers, hot dogs, tacos, and french fries can be plotted in four-space (that is, in 4D). The Rotten Tomatoes audience scores for the Twilight tetralogy are (72, 61, 60, 65) if you average Breaking Dawn parts 1 and 2. The scores for Diary of a Wimpy Kid are (49, 62, 63, 30). You can plot the quality of a tetralogy in 4D.
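
If it helps to see the “dimensions are just axes” idea in action, here’s a minimal Python sketch. The score vectors are the Rotten Tomatoes numbers quoted above; `distance_4d` is a name I’m making up, and it’s just the Pythagorean theorem stretched to four axes — the straight-line distance between two tetralogies in four-space:

```python
import math

# Each tetralogy is a point in 4-space: one axis per film's
# Rotten Tomatoes audience score (numbers from the paragraph above).
twilight = (72, 61, 60, 65)   # Breaking Dawn parts 1 and 2 averaged
wimpy_kid = (49, 62, 63, 30)

def distance_4d(p, q):
    """Euclidean distance: the 2D/3D formula, extended to four axes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance_4d(twilight, wimpy_kid))  # → 42.0
```

Nothing about the formula cares that these axes are movie scores rather than latitude, longitude, elevation, and time — which is the whole point.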

Why do people mostly focus on dimensions of spacetime? Because it’s more interesting to know how we should describe the universe than it is to know the quality of Diary of a Wimpy Kid movies. (Barely.) Milo Beckman writes in Math Without Numbers that researchers have a legendarily difficult time finding all of the unique shapes in four-space, despite the fact that it’s probably the best way to describe the real cosmos.

It’s not just that we don’t know the shape of the universe — until we finish classifying the four-manifolds, the universe might just be a shape we haven’t thought of yet.

None of this gets us any closer to knowing why VR apparently feels like a fourth dimension, on some instinctive level. Disregard time, and virtual environments are still modeling three-dimensional space by projecting it onto two-dimensional screens. You could assign a binary value in the fourth dimension — “blink once if you’re in the real world, blink twice if you’re in the virtual world.”

Here’s my best guess: Every virtual world simulates a 3D world. It simulates our world. Even the Matrix (which we are a far cry from) simulates the real world. While in one of these virtual worlds, you’re never very far from other virtual worlds. I imagine it like the Construct. You can enter it from anywhere, and load in anywhere. In that way, it overlaps the 3D simulation. Mashpoe’s 4D Minecraft is a more niche, but much better comparison.

Virtual reality isn’t just a world, but its own solar system of worlds that you can hop between at will. I’m generally skeptical about VR — I don’t think it will ever get to the fidelity of actual reality — but one thing it does well is collapse distance. Which is exactly what higher dimensions are about.

Uh, thanks for coming along on this journey with me. I hope you found it a little interesting. I definitely didn’t have an answer to my own question when I hit “new post.” At the very least, I hope you find topology as mind-bendy as I do. Now you can enjoy a scholarly chuckle to some jokes by mathematicians.

Santa’s panopticon

A sepia-toned drawing of santa at a feast table, surrounded by people in royal or religious headgear. He's holding his fingers over his lips, like he's asking you to keep a secret.

I’ve been really digging into the work of Rayne Fisher-Quann lately, particularly some of her conversations about “coolness” online. I’m going to cook on these ideas a little longer and hopefully come back with something more essay-y to say about how creativity appears online as an aesthetic quality rather than as a series of habits.

Meanwhile, her West Elm Caleb essay has me digging back through my college Google Drive — which, I think I locked myself out of trying to access. So long, Toxic Anime Husband PowerPoint presentation. Why I’m digging through the Drive though, is that Rayne’s coinage of the “feminist panopticon” reminded me of this study I read in my sociolinguistics days: “Encounter with reality: Children’s reactions on discovering the Santa Claus myth” by Anderson & Prentice. A few weeks too late to be holiday-relevant, almost a year late to be meme-relevant. This is where I plant my flag.

(Oh, and another thing: I just realized I can use exclamation points! Because this is my own damn website!! I promise I won’t use this power for ill.)

Anderson & Prentice interviewed children who no longer believed in Santa Claus. They asked the subjects, among other things, to recall the events that led up to them uncovering the conspiracy, and how they felt afterwards. The researchers also interviewed the parents of the children who no longer believed in Santa Claus, and asked what they did to keep up the ruse, and why.

So as to delay the reveal of the study’s results, maybe it’s worth sharing my own nearness to this question. When I was in first grade, I broke the hard truth of Santa to — no lie — my entire class. Authorities found me sitting smugly at a long cafeteria table full of wailing six year olds. There were phone calls made. I hope that the universe truly is indifferent, because otherwise my judgment will be broadcast cosmos-wide.

Unless a nasty first-grader spoiled it for them, the children of Anderson & Prentice’s study actually reported strongly positive memories of their discovery of the Santa myth. Children are already very good at holding many conflicting beliefs at once, both committing to a fantasy fully while also recognizing its absurdity. Couple that with a kind of escape room that everyone from caretakers, to malls, to trillion-dollar corporations actively help build. You’d enjoy unraveling that mystery, too.

Parents, on the other hand, expressed strongly negative feelings when their children discovered the Santa myth. It’s a loss of innocence, it’s the end of the very first season of your life.

Two details from this study will (I hope) tie it back to Rayne Fisher-Quann’s panopticon.

  1. Parents were way more invested in enforcing the reality of Santa Claus than children were in accepting that reality.
  2. Parents enforced the Santa myth for the magic of it all, not to make the children behave, because cautionary tales really don’t make children behave.

Cautionary tales don’t make anyone behave. We delude ourselves to say accountability is the benefit of universal self-surveillance. Seeing West Elm Caleb — or any publicly bad person — dragged through the public square (or the public hypercube) will never prevent another similarly bad person from being similarly bad. I seriously doubt it will stop the person being dragged from being bad. If you let someone back off and mend themselves in a private space, they might. If you force someone to double down, they will. This is the universal fate of publicly shamed people, from Twitch chatters to world leaders, as far as I know.

All of this was true prior to the age of data brokerage. Now, desensitization to surveillance is basically a requirement for using the internet.

In the same way that caretakers shape the reality of their children to preserve the magic of Santa — and god, I am only now realizing how gymnastic it is to directly equate Santa and surveillance culture, but I think the metaphor is a sound one — in the same way caretakers propagate a version of reality that aligns with their own self-interest, so do all power systems curate the experiences of those they lord over, better aligning that experience with their own goals. In the case of tech billionaires, those goals are likely not to enforce good behavior. You have to wonder what “holiday magic” they’re trying to capture.

Seeing the river

Part of the Taking Stock with Teens infographic
Olive Garden. Olive Garden. | Photo via Piper Sandler

Twice each year — once in the spring and once in the fall — the international banking firm Piper Sandler publishes its “Taking Stock with Teens” survey. The questionnaire is administered to 10,000 teenagers (average age 15.8), with the objective of finding out how young people are spending money.

Should we be giving investors the skeleton key for marketing to children? I don’t know that I can say. But the results are kind of incredible. It reads almost like a good Agatha Christie story. There are quiet “oh, I see” moments (teens still use mostly cash — of course they do, they can’t open bank accounts), as well as twists that, frankly, send me reeling. Favorite celebrity? Adam Sandler.

For context, three of Adam Sandler’s most recent movies are Hubie Halloween, Uncut Gems, and Hotel Transylvania 3: Summer Vacation.

For context, remember that the name of the firm is “Piper Sandler,” which compels me to imagine that this result is a dizzying fluke of word association. But anyway.

I don’t know why this survey captures my attention so much. It would’ve been last month that The Elder Scrolls V: Skyrim celebrated its tenth anniversary, which I mention because that game was as transformative for me as church, or Septimus Heap, or those days when the ice cream truck would come to my middle school. I knew I loved video games and played them often, but I tracked the release of Skyrim like a stormchaser. I watched press interviews and gameplay demos. Before, I had a handful of games I enjoyed, but found them mostly by scrolling through the Wii Shop until I found a title I’d heard of on YouTube. Skyrim wasn’t just the first video game release I’d followed. It was the first time I had felt like an expert, maybe ever. I remember facing the camera towards a river, and quoting to my older brother something I’d heard the developer say about the graphics of the water.

Still looks better than the metaverse. | Photo via TechCrunch

Skyrim turning 10 didn’t make me feel old, not like the Piper Sandler survey does. Or, given that I’m only 22, it didn’t make me feel like I was aging. With respect to movies and video games and TV shows, something that came out five years ago might as well have come out 15, or 25 years ago. They’re not “new,” so they’re “in the past.” History is very flat to me.

But I didn’t know who Emma Chamberlain was, and how am I supposed to reckon with the fact that the number one snack among teenagers is Goldfish. Goldfish. As a young person, it’s easy for me to view my cohort as, somehow, fundamentally in opposition to the generation before us. The Piper Sandler survey is a clue that, when my hairline recedes and my nose is cratered burgundy, the challenge will not be learning to appreciate young people. It will be learning to understand them at all.