Latest Technology News: VOX
Do we really need an app for everything?
Not every business needs an app. And yet.
On a flight about a year ago, I found myself in a predicament: I could not pay for my customary glass of plane wine to help calm the nerves. The problem wasn’t that I didn’t have cash or a credit card on me but instead that I didn’t have the airline’s app, which was necessary to complete the transaction. I was motivated to get the plane wine, but not that motivated — I gave up somewhere between downloading the app on the shoddy in-flight wifi and uploading my credit card to it. So now it sits idle on my phone, as do countless other apps I’ve had to get for one reason or another over the years, the vast majority of which I do not want or use.
It really does feel like there’s an app for everything these days — often for things where they’re not really needed. We all managed to do business with each other for years and years without having to pull out our phones at every corner.
Admittedly, the cultural peak of “there’s an app for that” mania was years ago, at a moment when, in many cases, said apps actually proposed making our lives better. But it’s been forever since many people have felt enthusiastic about downloading an application; instead of serving customers, apps now serve companies and have transformed into a sort of necessary evil to receive some product or service. The hotel has an app, the dentist has an app, the restaurant down the street has an app.
Apps are a way for companies to get customers into their ecosystems, to try to entice them with promotions and discounts, and, importantly, to get their data to track them or send that data to others. Consumers are sometimes sold on the convenience ploy — once you’re set up on that McDonald’s app, it does make your next order easier. But is the bother worth it to download the app in the first place? And once you do, what about that data trade-off? In an age of endless data breaches, is ordering that Big Mac 30 seconds faster worth the risk of a stolen credit card number?
“The proliferation of apps has many benefits for people,” said Karen Gullo, an analyst and senior media relations specialist at the Electronic Frontier Foundation (EFF), in an email. “Unfortunately, most businesses use apps to harvest and monetize our personal data. People can use their settings to block some data collecting and tracking, but app makers often find ways to get around that.”
So we’re all stuck with a bunch of apps floating around on our phones that really, seriously, were not necessary, many of which are tracking us in a way that is also, really, seriously not necessary.
You get an app, a company (and its friends) get your information
With the rise of mobile phones came the rise of apps, which, to a certain extent, makes sense. If we’re going to be carrying devices around with us all the time, we might as well make use of them.
Apps offer a promise of convenience for users and, for companies, dollar signs. Apps let businesses learn more about their customers, make them offers, and nudge them in ways that they hope will lead to more profits. In 2017 in Japan, McDonald’s found that customers using its app spent 35 percent more, on average. McDonald’s said the app made ordering more seamless, so people used it more often. It also noted that people took the app’s suggestions for add-ons and then stored those orders to be repeated later, which translated to higher spending. “Learning those habits and turning that back into marketing is one of the big draws,” said Dominic Sellitto, clinical assistant professor of management science and systems at the University at Buffalo School of Management.
The more app makers know about you, the better able they are to market and sell to you. They often also sell that information to third parties that want to reach you, too. And there aren’t a ton of legal barricades around how much data apps can collect and what they can do with it.
“There’s really no limit to data collection, so this data can be collected about you and shared and sold between different data brokers or analytics companies to build really granular consumer profiles, which can then be used for targeted advertising and sold for other purposes,” said Suzanne Bernstein, a law fellow at the Electronic Privacy Information Center (EPIC). Sure, maybe there’s a lengthy privacy disclosure, but nobody reads those, even if they do get into the details. “This whole system is sustained by this imbalance of power and control, this asymmetry, where we’re kind of in the dark as consumers as to what is happening with our data.”
And when consumers do get to understand what’s happening, what they find can be a little bothersome. Earlier this year, a court in Canada approved a settlement with customers of the coffee chain Tim Hortons over app users having their geolocation data collected without notice and consent. (The remedy was that those customers affected would get a free hot beverage and baked good.) McDonald’s and Chick-fil-A are both introducing features that let them track customers’ locations on mobile app orders, supposedly so their food will be fresh and crisp when they arrive to pick it up.
Some of the tracking and data stuff can be quite disturbing. In 2021, the FTC reached a settlement with the period tracker app Flo after finding it was sharing personal health information with marketing and analytics companies like Facebook and Google.
“Not every single company is doing this, but I think, unfortunately, it is one of the reasons why you see a proliferation of apps,” said Jennifer King, privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
The pandemic made our app-palooza situation, in many arenas, even more pronounced, as more people flocked to apps for shopping and entertainment and education, and companies were eager to oblige. The move to social distancing meant many businesses turned to phones as a way to make what were once in-person interactions virtual. Even as life has gotten back to normal, the insistence on apps has persisted.
Now, your hotel key is an app, as is your train ticket. Your dentist or doctor has maybe rolled out an app to book appointments even though the old way, honestly, worked just fine. At a restaurant, it’s not uncommon to wind up having to scan a QR code that eventually results in you downloading some app just so you can place an order or pay. “That’s going to be a relationship where they’re providing a service to the restaurant, but they’re taking your data without a doubt,” King said.
The app thing isn’t even that great for a lot of businesses, even though a lot of them do it
“It’s like having a TikTok page, I think it’s just that people feel like they need to have one because they hear that people use apps,” said Sucharita Kodali, vice president and principal analyst at Forrester. “Not everybody needs an app.”
That hasn’t stopped everybody — or, at least, a lot of businesses — from having one.
There are a bunch of appealing statistics about mobile apps that get most businesses “worked up into a lather,” said Jason Goldberg, commerce strategy officer at advertising firm Publicis. According to the Business of Apps, there are about 1.8 million apps available in the Apple App Store and 2.3 million in the Google Play Store. For both, business is among the most popular categories. People spend hours each day on their phones, largely on apps, and spend billions of dollars on apps. Goldberg said companies’ most valuable customers are often on their apps.
All of this convinces a lot of business leaders they need an app, too, that it will be the key to juicing their business. “The only problem with that is it’s basically wrong,” Goldberg said. “While there are a huge amount of people who download apps, you know what there are not? A huge amount of people who use that app more than once after they download it. There are astronomical abandonment rates.”
It’s kind of like a store credit card from an individual retailer: you sign up once to get a discount and then never use it again. You download an app to make one purchase or to navigate one leg of a vacation, or on a day when you’re feeling inspired to kick-start your diet. And then you ... forget. The company may still be getting some data on you, but not as much as if you were a power user, and there’s no incentive for you to become one.
“That’s the fundamental question brands need to ask themselves — what kind of relationship do you have with your shoppers, and are you one that has a lot of frequent interactions, or do you have moments, like in travel, when somebody is going to have 50 different questions?” Kodali, from Forrester, said. For a lot of companies, those frequent interactions just aren’t there.
Kodali said that a lot of businesses have rolled out apps basically on account of FOMO — fear of missing out — because they see everyone else doing it. Goldberg echoed the sentiment, adding that many businesses have had plans for an app in place for a while and have simply proceeded without really asking whether it makes sense. “Only the very biggest and best companies can win the mobile app game,” he said. “Often, I see the mid-tier companies and the long-tail companies that shouldn’t be trying to compete with the goliaths ... but they make the mistake of trying to emulate what Amazon and Walmart are doing.”
Yes, the data is valuable to companies, but it’s not as exclusive to apps as it used to be. “You can get just as much data from good mobile webpages as you can from a mobile app,” Goldberg said.
There’s a bit of a silver lining here, which is that app makers have a harder time tracking you than they used to, by design — specifically, by Apple’s design. In 2021, the iPhone maker updated its operating system so that when people open an app, they can ask it not to track them across other companies’ apps and websites. It works ... okay-ish, but it’s not perfect.
The FTC is currently working on rulemaking around the “commercial surveillance economy” and just how much businesses collect, analyze, and profit from people’s data. In the US, some states, such as California, are making headway on privacy laws or have them in place. Privacy advocates say that what the US really needs is a sweeping federal privacy law, which isn’t exactly on the horizon.
So in the meantime, we’re swimming in a sea of apps, many of which we don’t want or need. Companies are making money off the data that accompanies them, though not as much as many would probably like. And they’re not getting much better at protecting that data.
“There’s two pathways this can go. One pathway is people get more and more protective of their privacy, and that spurs legislation or some sort of movement that changes the way this works, or, on the flip side, we all just get desensitized to it and say, ‘Bummer, my credit card got stolen again,’” Sellitto said.
Perhaps, at the very least, we acknowledge we truly do not want or need this many apps — the convenience case really loses its oomph when you’re going to your phone to execute every little detail of your life. Or, in my case, when you’re in the air, attempting to secure a single glass of Pinot Grigio.
We live in a world that’s constantly trying to sucker us and trick us, where we’re always surrounded by scams big and small. It can feel impossible to navigate. Every two weeks, join Emily Stewart to look at all the little ways our economic systems control and manipulate the average person. Welcome to The Big Squeeze.
The Kia Challenge, explained
How a carmaker’s mistake created the ultimate internet challenge.
It’s safe to assume that 17-year-old Markell Hughes wasn’t too worried about getting caught for stealing cars last year. After all, he lives in Milwaukee, where just 11 percent of reported car thefts resulted in an arrest in 2021 and only 5 percent were prosecuted. But Hughes appeared in a documentary about the so-called “Kia Boys,” who take advantage of an exploit that makes certain Kia and Hyundai models easy to steal. The Kia Boys often joyride around in the stolen cars, usually driving dangerously and usually filming themselves doing it. The documentary was a hit on YouTube, and shortly after it was posted, someone called a police tip line and gave them Hughes’s name.
Among the evidence against Hughes was a call he placed from jail, where he seemed to brag about how many people saw him driving the stolen car.
“I heard my video went viral too,” he said. “I heard my shit hit 50K in one day.”
Teens’ desire to go viral is just one of the factors that has led to an exponential increase in Kia and Hyundai thefts across the country. Starting with model year 2011, Hyundai Motors, which makes Kias and Hyundais, decided not to install a theft prevention mechanism called an immobilizer in certain makes and models. For cars without immobilizers, all thieves have to do is rip off the steering column cover, remove the ignition cylinder, and turn the rectangular nub behind it to start the engine. As it happens, USB plugs fit pretty well over that rectangle. The immobilizer-free Kias and Hyundais could be stolen in a matter of seconds with just a screwdriver and a charging cord.
In 2021, Milwaukee, Wisconsin, reported a significant increase in car thefts, the majority of the cars stolen being Kias and Hyundais, and a lot of the suspected thieves being too young to drive. Videos began to surface on social media of young people joyriding in these cars, speeding and swerving, sometimes hanging out of windows. These were not sophisticated thieves stealing cars to strip and sell for parts. They were doing it for views and clout. They became known as “Kia Boys.”
By the next year, Kia and Hyundai thefts had spiked all over the country as videos showing how to steal the cars circulated. A “Kia Challenge” to steal cars and post the results on platforms like TikTok and Instagram spread, too.
Some social media challenges are fun and harmless. Others are mean and dangerous. While Hyundai Motors scrambles to fix the problem with software upgrades and steering wheel lock giveaways, the Kia Challenge is causing financial and physical harm on a mass scale: The numbers of stolen Kias and Hyundais have increased by triple- and even quadruple-digit percentages in some areas. Reckless driving of the stolen cars has resulted in injuries and deaths, and the cars have also been used to commit other crimes. Hyundai has already agreed to pay up to $200 million to settle one class action lawsuit, but it still faces lawsuits from insurers and cities, with possibly more to come. And thousands of people have had to deal with the enormous inconvenience and expense that comes with having a car stolen. In some cases, their cars were recovered, only to be stolen again. And again.
The US has certain safety regulations that every automobile sold and operated here must follow (with a few exceptions). Those regulations include theft prevention measures, but immobilizers aren’t among them — a departure from many other countries, including Canada, where they are mandated. While immobilizers aren’t perfect, they have been shown to cut down on the number of vehicle thefts. (They also make it much more expensive to replace your car key if you lose it.)
But even without the requirement, almost every automaker has an immobilizer as standard equipment on their cars. If you don’t count Kias and Hyundais, 96 percent of new cars in the US in model year 2015 had immobilizers, according to the Insurance Institute for Highway Safety. But only 26 percent of Kias and Hyundais did. Hyundai Motors chose not to include immobilizers on some 9 million of its cheaper models sold over the last decade, even as it did install them in cars where it was legally required to do so.
Spokespeople for both brands said their cars are compliant with federal regulations. The National Highway Traffic Safety Administration (NHTSA) didn’t respond to Vox’s question as to why it doesn’t require immobilizers.
For years, this wasn’t an issue. Kias and Hyundais were not commonly stolen cars. Then the Kia Challenge began.
Social media fuels performance crimes
Social media challenges aren’t new — remember planking? That means that tech companies, like Meta and Google, have had years to figure out how to deal with the potentially harmful trends. And while Kia Challenge videos are showing up on Instagram and YouTube, TikTok is getting most of the blame for spreading it. Even the NHTSA called out TikTok, saying in a February press release that “a TikTok social media challenge has spread nationwide and has resulted in at least 14 reported crashes and eight fatalities.” Kia and Hyundai, which are no doubt happy to have someone else to blame for their design flaws, have also mentioned TikTok by name in statements about the matter.
There are a few reasons for this. Videos showing how to steal the cars are believed to have started or at least become popularized on TikTok. The app is used by a lot of young people who are particularly susceptible to the lure of challenges, and it’s very good at amplifying them.
Trends that kids can try and share have always been one of TikTok’s selling points. That’s fine when the challenge is a fun dance move. This one isn’t one of those. If you watch videos of people riding around in their stolen or likely stolen cars, you’ll see they often have their phones out, filming away as they speed, day or night, on vacant streets or crowded ones, seemingly with impunity.
TikTok knows it’s become notorious for deadly challenges. CEO Shou Chew was hammered at a recent congressional hearing with stories of various TikTok challenges kids died trying to complete. The company says it’s tried to remove dangerous challenge content that clearly violates its rules, and videos of people committing crimes like breaking into and stealing cars fall under that category. But the horse is out of the barn at that point.
TikTok also denies that the Kia Challenge is an issue on its platform.
“This isn’t and has not been a TikTok trend, however people are sharing widespread news reports and warnings issued by the companies themselves,” TikTok spokesperson Ben Rathe told Vox. “We do not allow content that promotes theft, and will be removed if found on our platform.”
But there’s also a gray area of problematic content that doesn’t exactly break TikTok’s rules. Videos of people driving Kias dangerously, for example, might get slapped with a “don’t try this at home” warning, but it’s not showing the act of stealing the car, nor is it definitive that the car has been stolen in the first place. Or maybe it’s a video someone took of a Kia being driven dangerously, but there’s no evidence that the person who took and posted the video was involved in its theft (or, again, that it was stolen at all). There is an argument to be made, however, that these videos are glorifying the Kia Boys and the Kia Challenge, and that the hope of going viral is one of the reasons people are doing the challenge at all.
Banning certain hashtags associated with the thefts could prevent some of the bad videos from getting on or staying on the platform, but it would also block a lot of videos that are fine or could even do some good. At this point, there are a lot of videos from victims or warnings to people to protect their Kias and Hyundais, including plenty of media reports. Those may well prevent car thefts.
And, again, this problem is not only on TikTok. Putting evidence of crimes on social media (or committing them for the purpose of putting them on social media) is known as “performance crime,” and the phenomenon predates TikTok. On Instagram, searching another commonly associated hashtag revealed in its top results multiple Reels showing people ripping off steering wheel covers or ignition cylinders and starting cars with pliers and USB cables; the videos had been up for over a month with more than 30,000 views each. Meta did not respond to a request for comment. And let’s not forget about YouTube, to which some anti-crime groups wrote last January, asking the platform to do a better job of finding and removing videos that show how to steal Kias and Hyundais.
“YouTube’s harmful and dangerous policies prohibit videos that encourage dangerous or illegal activities that risk serious physical harm or death. We also don’t allow videos that show instructional theft,” said Elena Hernandez, a spokesperson for YouTube, adding that the platform has removed some Kia Challenge videos.
The fallout: Death, destruction, and lawsuits
The Hyundai and Kia thefts are so easy to pull off and have become so prevalent that some of the stories and statistics are absurd. People’s cars are stolen twice in one day. Some wait months for repairs to their stolen and recovered cars because there’s a back order of parts, with so many stolen cars needing them at the same time. Sixty-one percent of vehicles stolen in St. Louis in the last year were Kias and Hyundais, as were 88 percent of attempted thefts. Kia and Hyundai thefts increased by 767 percent in a year in the Chicago area, and they’re up almost 2,400 percent in Rochester, New York. Seven of the top 10 cars stolen in Wisconsin, where the trend began, in 2021 and 2022 were Kias and Hyundais; in 2020, only the Hyundai Sonata made the state’s top 10. Insurers are refusing to cover certain Kia and Hyundai models, or jacking up rates.
The damage isn’t just to cars, however. Several teens have died or been seriously hurt by crashing stolen Kias and Hyundais, which officials have attributed to the challenge. There are also crimes committed by people driving the stolen cars. There have been injuries (and in at least one case, possibly a death) to people who were hit by stolen cars. And there’s property damage, like houses that the stolen cars crash into.
Hyundai’s initial response to this was pretty gross, too. It offered Hyundai customers a security kit that cost $170 plus installation. Affected Kia models got steering wheel locks for free. Immobilizers now come standard in all Kias and Hyundais made since November 2021.
In February 2023, Hyundai rolled out a free software update that requires a key to be in the ignition for the car to turn on and lengthens the time the car alarm goes off from 30 seconds to one minute. It now provides a sticker that people can put in their windows to let would-be thieves know they’re wasting their time breaking in, though this assumes thieves are looking out for stickers before they decide to smash windows. Hyundai Motors is also working with police departments across the country to supply free steering wheel locks to affected cars. But you have to know this is going on to take advantage of that, and not everyone reads the news or receives the notices Hyundai has been sending out. And some reports say the update isn’t always effective. Nor is it yet available for all Kia models.
Several cities have now sued Hyundai Motors, most recently Baltimore. Attorneys general from several states have sent angry letters to Hyundai demanding that it do more to stop the thefts, and sent a letter to the NHTSA urging it to issue a recall on the affected cars. Lawmakers are writing letters, too. Nearly 70 insurers filed a class action lawsuit against Hyundai estimating that they’ll pay out about $600 million over the stolen cars. And Hyundai recently settled one class action suit brought by customers for $200 million.
It’s likely that when all is said and done, this will cost Hyundai more money than it would’ve spent if it had put the immobilizers in the cars in the first place. But it seems the biggest price is, as always, being paid by the victims, and the only lesson for them to learn is to buy from a different automaker (if they can afford it) and hope this one isn’t the subject of the next social media challenge.
Social media platforms, especially TikTok, don’t seem able to do much to nip burgeoning dangerous challenges in the bud; doing so would go against everything their platforms are designed to do. The problem isn’t unique to TikTok, but it has no easy solution. This challenge has caused potentially billions of dollars in damage, not to mention a human cost that can never be reimbursed. What will the next one cost?
Markell Hughes, the driver in the YouTube documentary, is now 18. He pleaded guilty to one charge of operating a vehicle without the owner’s consent for the scene in that video. He faces up to three and a half years for that and another six years for a separate stolen vehicle case. He’s due to be sentenced this month.
If you have a Hyundai or Kia, check the automaker’s website to see how to get a software update and/or free steering wheel lock to help protect your car.
Should we know where our friends are at all times?
“I love you, now let me watch your location 24/7.”
A few months ago, sick with a cold on a Saturday morning, feeling miserable about the fact that I was stuck at home alone for the weekend, I checked Find My Friends. It’s an app that comes standard on iPhones these days, the same one that tracks your phone or laptop or AirPods in case you lose them. But instead of devices, it shows the location of your friends, or rather, their phones.
I tend to check Find My Friends when I want to see if anyone’s nearby and might be up for a spontaneous hang, or else just to see their little bubbles in some kind of bizarre exercise in virtual closeness. But that morning I noticed one of my friends’ bubbles wasn’t where it usually was. It was miles away, in a neighborhood where neither of us knew anyone. She was, I assumed, at a random person’s apartment where, we can go ahead and assume, she’d slept over the night before.
It probably says a lot about me that my first thought was, “Good for her!” and not, “Is she okay?” considering the fact that for many women, the point of location sharing with their friends is to ensure safety — that if they share their location before heading out on a solo trip or a first date, they’ll know that at least one person will know where to find them if the worst happens. But I’m more interested in the knotty social questions that mass location sharing forces us all to ask of each other and ourselves: How do we decide who to share — and not share — our location with? When does looking at your friends’ bubbles go from cute to creepy? Or, in my case, how weird is it to ask my friend for juicy details on her one-night stand?
Friends sharing their real-time locations with each other is a pretty recent facet of modern life. Though apps like Foursquare have been around since the dawn of the smartphone age, mass location sharing was only introduced around 2017, when Google rolled out location sharing on its Maps function and Snapchat launched Snap Map, allowing users to see where their contacts were at any moment. By the time Apple merged the Find My iPhone and Find My Friends apps into a single app called “Find My” in 2019, location sharing had become just another type of social networking, despite the fact that for many people, it still feels a little icky.
“As a non-location sharer, I see it as the natural conclusion of the digital-age expectation that we’re always online, always available, and have no reasonable expectation of a private, offline life,” explains Scott Nover, a tech reporter at Quartz. “We’re no longer just putting up an ‘away message,’ or occasionally ‘checking in’ somewhere on social media, but broadcasting our whereabouts at all times.”
This is pretty much the standard response I get when I ask most people above the age of, say, 30, about why they would or wouldn’t share their location with friends. Many people who remember a time before social media find it distressing that someone could be watching their little bubble on an app, judging the fact that they’re out late at night or, conversely, that they rarely leave their homes.
Young people, meanwhile, have grown up in an era where parents tracking their kids using tools like Life360 is the norm. The (arguably invasive) app is the subject of plenty of debate on Reddit, where kids lament the ability of their helicopter parents to know where they are at all times. It makes sense, then, that sharing location with friends feels mundane in comparison; many describe it as simply the next step in digital intimacy after following someone on Instagram. “It’s so, so common among basically everyone I know, just for safety reasons but also for fun,” explained 22-year-old Nicki Camberg, who shares her location with 10 friends. “Especially in a college setting, you can see when your friends are in a particular dining hall or library and go find them, in a non-creepy way.”
Louise Barkhuus, currently a visiting professor of computer science at Columbia University who has studied college students who share their location with one another, says that young people are extremely relaxed about digital privacy issues. The bigger issue they have is the social awkwardness of turning off your location after agreeing to share it with someone. “I do see people who are worried about turning off location with people, even though they don’t want to share [location] with them anymore,” she says. “They’re like, ‘No, they’re gonna get a notification, they’re going to be confronting me.’”
Hurt feelings and FOMO, the subject of the Wall Street Journal’s recent piece about location sharing among teen girls, seem to be less of an issue with adults. Instead, we fret over how to broach the subject. “It can be very awkward to bring up the topic of, ‘Hey, start sharing your location with me,’ even if you know the other person will say yes,” Camberg says. The typical conversation will likely go something like this: You’ll have plans to meet up with someone at a crowded place — a concert, a park, a beach, etc. — and have an obvious immediate need to know where they are. The tricky part, though, is gauging how long to extend the access: On Apple’s Find My Friends, you can choose to share your current location for one hour, until the end of the day, or indefinitely. “If the person only shares their location with you for [an hour], it’s definitely, like, a signal. You’re like, ‘Oh, are we not good enough friends for you to permanently share your location with me?’” Camberg says she gets around the awkwardness by sharing her location permanently with people to avoid that conversation, “and then maybe turn it off a few days later when we aren’t together.”
That awkwardness can still come back around eventually. Nadia (not her real name), a 24-year-old in Los Angeles, realized one of her friends had stopped sharing her location after she noticed on Instagram that the friend, who lived in New York, was in Southern California. When she went to check how close she was, she found she no longer had access to her location. Worse, she learned said friend had asked a mutual acquaintance to grab dinner in LA, just the two of them. “I was like, ‘This is so weird. I would find it so much less rude if you texted me up front to say, “Hey, I’m in LA, but it’s a super short trip so I don’t have time to catch up.” Like, I’m busy too, I have my own life.’”
When sharing your location with a friend, it’s important to remember you’re sharing your location with a human being, and human beings don’t always have good intentions or end up being the people you thought they were. Particularly in intimate relationships, location sharing comes with its own set of norms and anxieties, for obvious reasons. Author Ella Dawson recalls a toxic relationship in which her then-boyfriend used location sharing as a way to monitor her whereabouts. “It took me years to find out that he actually used location sharing to avoid me overlapping with his live-in girlfriend, whom he told me he’d broken up with months prior to us meeting. He insisted she use location sharing for the same reason,” she says.
Katina Michael, a professor at Arizona State University who has studied location-based technologies in the private and academic spheres for more than 25 years, describes the shift to mass location sharing as one of the central tenets of uberveillance, the academic term for the widespread electronic surveillance of people by other human beings, companies, and governments. “It’s the most powerful thing, knowing where someone is,” she says. “It’s sacred knowledge. It’s God knowledge, when you think about it.” (Perhaps this is part of the enjoyment of staring at our friends’ bubbles: We get to play God to our virtual Sim friends.)
Michael finds the casualness with which people share locations with their friends worrisome. She cites the Tempe, Arizona, man who, in 2019, was arrested for posing as a teenage girl on Snapchat to find the locations of underage girls and then watching them in their homes. (Another man was arrested for the same thing in Florida in 2022.) In France, one man tracked his girlfriend on Snap Map and ended up stabbing the man she was with. Of course, location services have also helped solve innumerable crimes, which is why so many families and friends rely on them for peace of mind. In 2019, one woman credited location sharing with saving her life after she went into anaphylactic shock and her roommate was able to find her and call an ambulance.
If location sharing is the new normal, Michael hopes to bring about changes to the ways location tracking apps breach our privacy. First and foremost, she says that people should have the right to view their own location data and delete it if they wish. She was also the working group chair of a new code of age-appropriate standards for young people, which includes guidelines for terms of service agreements written in plain, simple language. “If even lawyers can’t figure out what terms and conditions are talking about, what’s the average adult to do?” she says, never mind the teens and kids who use these features.
To be fair, the government and the companies that have direct access to your personal data (and the many more that can buy it) already know where you are. To treat sharing your location with a friend as some kind of unprecedented breach of trust is to be willfully blind to the fact that these services are already being used — by people you don’t know and will never meet — whether you want them to be or not. “For the vast majority of people and the vast majority of circumstances, the benefits they get from sharing their whereabouts way exceed the risks that might be out there,” one computer security strategist told the New York Times in 2017. Barkhuus, for her part, believes that distrusting location sharing will come to look like refusing to buy a cellphone in the ’90s and early 2000s. “I don’t think it’s going to go away. I think it’s only going to be more common,” she explains. As for the cultural changes mass location sharing might bring, she says, “We’re going to have to be a little more honest with each other.”
As for my friend’s location on that Saturday morning, I never brought it up. I figured it would be best to keep at least the illusion of privacy between us — even if it didn’t actually exist.
This column was first published in the Vox Culture newsletter. Sign up here so you don’t miss the next one, plus get newsletter exclusives.
Is Apple’s weird headset the future?
Apple’s new goggles aren’t for normals. Not yet, anyway. So why does Apple want to show them off?
Every big, new Apple Product Launch follows a template, one the company pioneered and perfected with the iPhone and then the iPad.
First, long-running rumors and speculation about a mystery device — a version of existing products made by competitors but presumably much better because Apple is making it — percolate among the Apple-obsessed tech set. Then a somewhat clearer picture emerges, courtesy of reporting by mainstream media outlets. The hype crests as Apple unveils The Product at A Big Deal launch event, and then customers flock to buy The Product by the millions.
And that’s kind of what’s happening with the new “mixed-reality” headset the tech world expects Apple to unveil at its developer conference on June 5, in what would arguably be its most ambitious launch since the iPad in 2010. There has been reporting for years about Apple’s efforts to make the devices, and now outlets like the New York Times and Bloomberg have given us a pretty good idea of what to expect.
But this one feels different. The coming headset reveal seems deflated and muddled, without anything like the anticipation that accompanied earlier products. There are also real questions about whether anyone will want to buy what Apple is reportedly selling: an ungainly piece of equipment that will cost around $3,000, make the wearer look extremely uncool, and offer a utility that is so far completely theoretical.
It’s a weird place for Apple to be: It has put billions of dollars into this tech (its competitors are doing the same) in the hope that this will be a platform on the level of the next smartphone and that Apple’s headset will be the equivalent of the iPhone. But even headset boosters don’t think the device Apple will likely show off in June will be anything like the iPhone former CEO Steve Jobs unveiled in 2007.
In the best-case scenario, it’s an early version of tech that hints at the promise to come, when we get a better, cheaper, lighter version ... someday down the road.
So on the one hand, Apple is set to unveil a device that could say a lot about its future and the future of consumer tech. But it’s also a bit of a daydream, which will make it very hard to determine whether it’s a hit or a dud. And in the meantime, Apple will very much remain the company that sells iPhones, which is a very good business to be in.
Okay. So what, exactly, should we expect from Apple’s headset? And, more importantly, what does Apple expect us to do once the company announces it?
What is Apple’s new headset actually going to do?
In private meetings this spring, Apple has been showing off the headset. The device looks familiar, since it resembles and functions the way earlier headsets created by rivals like Meta and Microsoft do. It’s also novel because it will do things other headsets don’t, for better and worse.
While Apple CEO Tim Cook has previously hinted about creating a computerized version of glasses — lightweight and unobtrusive things that look like real-world objects many people already wear — the new Apple headset is not it. It’s a relatively bulky thing that straps onto your face and requires so much power that users will have to wear a battery pack on their waist or in their pocket.
The headset is supposed to have two different functions. One is a virtual reality mode, where users see a complete digital landscape — similar to the VR Oculus devices Meta has been making for years. The other is a so-called mixed reality mode — although there’s speculation Apple may use the term “extended reality” when it talks about this — where users can see the real world through the headset, but also see and even interact with digital objects projected onto the real world. That’s an idea that headset startup Magic Leap promised when it showed off a video of a whale rising out of a school gym nearly a decade ago, but never really delivered.
Apple is likely to add two tweaks it thinks will distinguish its headsets from the pack.
The first is a “copresence” feature, which I’ve heard described in a couple different ways. In one, someone wearing a headset can share video of the thing they’re looking at with another person wearing a headset, and they can both experience the same thing at the same time. Say, you’re walking on the beach, and you want someone who’s across the country to virtually join you while you walk. The other version is closer to something we’ve seen before: You put on a headset and talk to a computer-generated avatar of another person appearing in your field of view.
And perhaps most confusingly, Apple is supposedly going to place exterior screens on the front of headsets, so people who aren’t wearing the headset can see a video display of the eyes of the person wearing the headset. Does that sound like a straight-up nightmare to you? Me too. But people who’ve heard Apple’s pitch say the company thinks it will make the device more social and less dystopian than the zombie-with-computer-on-face image that Mark Zuckerberg proudly showed off in 2016 as part of a marketing push for his Oculus headsets.
What will you actually do with these things once they’re on your face? Good question. Mark Gurman’s reporting for Bloomberg has suggested that Apple intends to port lots of its existing iOS apps to the new device — but the likes of a calculator app certainly won’t convince anyone to use it, let alone buy it. Last year, the New York Times reported that Jon Favreau, the director behind Elf and Iron Man, as well as the creator of the Star Wars series The Mandalorian on Disney+, was going to make content for the device. I’ve also heard, but haven’t confirmed, that Disney itself will be making stuff for the headset; a Disney rep declined to comment.
All of which suggests that Apple is being quite literal about announcing the new device at its developers conference: It’s hoping that once it shows this thing off to the world, other people will think up fun or at least useful things to do with it, and build up apps to make that happen. That will make the headsets more popular, which will then encourage more developers to build cool apps, which will make them more popular. Repeat.
But ... why?
This is where things get very strained if you’re trying to imagine Apple creating another dent-in-the-universe product like the iPhone — or even just a thing that many people buy, like the iPad, and later on the Apple Watch (more than 100 million sold) and AirPods (hundreds of millions sold).
That’s because, despite the collective efforts of Google, Meta, Microsoft, and other tech companies, no one has been able to convince very large numbers of people that virtual reality headsets or augmented reality headsets or any kind of headsets are things they want to use. That’s different, by the way, from selling headsets; companies have been able to do that over the years. Tech research shop CCS Insight predicts that consumers will buy 11 million headsets in 2023 alone — a tally it describes as a “slow year.”
But they have never really taken off beyond a gaming novelty or an industrial tool some workers are obliged to use. You can see the disappointment in the behavior of the companies that launched them. Google, which kicked off Big Tech’s augmented and virtual reality phase with its Glass device in 2012, eventually conceded that they were too weird for normals to wear and tried turning them into devices for industrial use; it is formally pulling the plug on the gadgets this fall. Microsoft launched its HoloLens AR headset in 2016 but never broke through; the company has recently been reduced to issuing blog posts insisting that it still cares about the device.
Meta, meanwhile, has poured billions into goggles of all kinds and insists it will do so for years to come. But earlier this year, a Meta executive conceded that consumers don’t love the devices Meta is selling them: “We need to be better at growth and retention and resurrection,” Mark Rabkin, the company’s vice president for VR, said in February.
So how is Tim Cook going to convince consumers that this time is different? It’s going to be tough. For starters, while Cook has been a highly successful CEO — under his tenure, Apple’s stock has soared, and the company is once again approaching $3 trillion in total value — he is not a charismatic salesman in the Steve Jobs mode.
And even Steve Jobs would struggle to sell the benefits of AR or VR goggles. That’s because, by their very nature, you can only see what they do when you wear them yourself. And if you stand on a stage telling people how great they are, you’ll just look like someone onstage with a computer strapped to your face.
“I call it the ‘TV on the radio’ problem,” says Magic Leap founder Rony Abovitz: It’s hard to describe a “television” to an audience that has never seen one and is listening to you talk about it on the radio. Abovitz thinks Apple will solve this by sending devices out for hands-on demonstrations at its hundreds of retail stores.
Meanwhile, Cook, who initially dismissed the idea of goggles in favor of glasses, now says he was wrong, and that, theoretically, goggles could be awesome. Here’s his test run for his pitch, which he floated to GQ magazine earlier this year:
It could empower people to achieve things they couldn’t achieve before. We might be able to collaborate on something much easier if we were sitting here brainstorming about it and all of a sudden we could pull up something digitally and both see it and begin to collaborate on it and create with it. And so it’s the idea that there is this environment that may be even better than just the real world—to overlay the virtual world on top of it might be an even better world. And so this is exciting. If it could accelerate creativity, if it could just help you do things that you do all day long and you didn’t really think about doing them in a different way.
I mean, maybe? I’m in favor of collaboration. But I’ve spent three years being forced to collaborate with people using technology, and my strong preference right now is to collaborate with them in person whenever I can. And when I do have to call or Zoom or Google Meet with people, I already have tech that lets me do it. Like a phone, or a computer.
Asking me to wear a device to do it better — and asking someone else to do the same — means that it has to be way, way better than what we have now.
And maybe Apple’s headsets will be way, way better at this. (At various points, people who worked at Meta have tried to tell me that the company’s devices are actually pretty good for collaboration, just like their boss says they are. But their hearts never seemed to be in it.)
Reports suggest that even Apple executives aren’t fully on board for this launch — not something we’ve heard before as Apple prepared one of these things. (To be fair, that could also indicate that Apple in 2023 is a different company than it used to be — one where very little of its inner workings, to say nothing of dissent, ever showed up in print.)
The most logical argument is that Apple doesn’t really think it will sell tens of millions of these things, in this form, at $3,000 a pop. Rather, it thinks the initial buyers will be developers, hobbyists, and Apple super fans. And Apple believes it will learn a lot about the device’s potential once they’re out in the wild, with real people testing them and providing feedback. And that years down the road, when costs come down and the tech improves and there are multiple killer apps for this stuff, Apple’s headset will take off.
Industry experts say Apple may have no choice but to put out AR/VR tech that’s not completely refined because it needs to see how the things perform in the real world and to see how developers and consumers react to them.
“There’s nothing to replace being in the field, being in the dirt, just grinding it out,” says Abovitz. “I think they’ve been on the sidelines way too long. At some point, you have to go into the wild.”
That is decidedly not how Apple has done things in the past: Normally, Apple announces a product, then tells you you can buy it very soon, and sales go up and to the right.
Apple boosters will note that the original iPhone wasn’t a gangbusters hit from the get-go: It took a price cut and the eventual introduction of the App Store to really get things going. The Apple Watch also took a while to find its footing: Apple initially positioned it as a fashion item, but most people ended up using it as a high-tech pedometer, so now Apple markets it as a “fitness” product.
But the phone, the watch, the earbuds — and going very far back, the iPod — these were all things that had real-world analogs and real-world use cases, and didn’t require people to make word salad to pitch them. Maybe the goggles simply require a different timetable before they’re really, truly ready, and Apple is starting now because it has to eventually. But I’d feel more confident about the prospects for this tech at show time if I thought the show was ready.
Mark Zuckerberg says the hardest part of Meta’s “year of efficiency” is over
After laying off 10,600 people in recent months, the CEO said in a leaked Q&A that “this has been a particularly thrashy period.”
Meta laid off 5,100 more employees on Wednesday in its third round of mass cuts in the past three months, according to executive remarks in a company meeting on Thursday. That brings the company’s total layoffs to 10,600 in the first half of 2023, as part of Mark Zuckerberg’s planned “year of efficiency” to cut costs, shake up company culture, and narrow focus in response to slower growth in the tech industry. The company has also closed 5,300 open roles, Meta’s head of people, Lori Goler, told employees.
The company’s executives announced details about the layoffs on Thursday morning in a Q&A with employees that Vox obtained a recording of.
“We’re saying goodbye to a lot of really talented people who’ve been part of this company,” said Zuckerberg in his opening remarks, before telling remaining staff that “the bottom line is that you’re here today.”
“I know that this has been a particularly thrashy period,” he said. “My hope is to provide as much stability moving forward as possible.”
A spokesperson for Meta declined to comment.
Zuckerberg has his work cut out for him to return Meta’s company culture to a state of normalcy after months of layoffs that have tanked employee morale, leaving many uncertain about their futures and some reportedly unsure of what to focus on. The CEO is hoping the worst is behind his company. While he wouldn’t rule out future layoffs, especially smaller ones, he said this was the last planned mass wave for now. Zuckerberg also said that employees would receive an update about return-to-office plans in the coming weeks giving more “consistent expectations and guidelines” around when and how often employees need to be in the office in person.
“We want to get more of a critical mass of people in person together in the offices a few days a week,” Zuckerberg said.
Meta’s continued downsizing is one of the starkest examples of how many major tech companies are tightening their belts after nearly two decades of uninterrupted growth. Silicon Valley as a whole has been going through an economic downturn that has caused major tech firms like Meta to drastically cut back on employee staffing and benefits. While Wall Street has responded positively to Meta’s cuts, the layoffs have taken a toll on Meta’s workforce.
“Restructuring is obviously a very difficult thing,” Zuckerberg said in response to a question about how Meta can rebuild its culture. “So it’s not like you can bounce back immediately. And in some ways, my goal has been to change our culture.”
He added that Meta could more easily shift to a “scrappier” work culture with a smaller staff.
“We were this big company, and I think we were getting a bit more bureaucratic. Part of the point of some of this restructuring is to break up the mode of it,” said Zuckerberg. “So yeah, I mean, some teams are maybe a little smaller now than [they] would be comfortable [with], and that causes issues in some ways for sure. But in other ways, I think it just forces us to find ways to be scrappy, or get things done more efficiently, and that means that there are going to be fewer environments or projects where there are too many cooks in the kitchen.”
From 2019 to 2022, Meta nearly doubled its headcount. But that’s when the company’s profits and user engagement were soaring. That started changing last February, when Facebook, for the first time, reported a decline in total users, and the advertising industry as a whole — the company’s main line of business — started to slow down. In recent months, some employees have openly questioned Zuckerberg and company leadership about whether they should be held accountable for decisions that led to the mass cuts.
Zuckerberg first announced in March that the company planned to eliminate 10,000 positions by the end of May, after previously cutting 11,000 in November. Last month, Meta cut around 4,000 of those planned 10,000 positions, leaving about 6,000 positions potentially on the chopping block this round. At the end of 2022, Meta, which is the parent company of Facebook, Instagram, and WhatsApp, had around 86,000 employees.
Last year was arguably the most brutal in Meta’s nearly 20-year history. Meta has been making somewhat of a comeback financially, however. It had stronger-than-expected earnings last quarter, and its stocks have recovered from historic lows. But now, Zuckerberg needs to make a comeback with his employee culture, too.
Update, May 18, 6:30 pm ET: This story, originally published on May 18, has been updated to include additional details about the layoffs after they happened.
Elon Musk tried a big presidential broadcast event and Twitter broke
The glitchy Ron DeSantis interview is the latest milestone in Twitter’s shift to the right.
Elon Musk’s high-profile Twitter event to launch the presidential bid of Florida Gov. Ron DeSantis on Wednesday got off to a rough start, crashing several times in the first 30 minutes before Musk restarted the whole thing.
“That was insane, sorry,” said Musk after launching a new audio broadcast on Twitter Spaces about half an hour after the scheduled start time. “We’re actually doing this from David Sacks’s Twitter account because doing it from mine basically broke the system.”
Musk was referring to tech investor and entrepreneur David Sacks, who moderated the interview. Sacks claimed that this was the largest group that has “ever met online.” In reality, the number of people was far smaller than at previous online events on or off Twitter. For example, more than 12 million people attended a virtual Travis Scott concert in Fortnite, and according to Twitter, more than 3 million people listened to Musk’s recent Twitter Spaces with the BBC.
During the interview, DeSantis said he chose to announce his presidential bid on Twitter in part because he aligns with Musk’s self-proclaimed free speech values. He also repeatedly criticized the so-called “woke mob” and “woke mind virus,” which is also a grievance of Musk’s.
“I think free speech in this country was on its way out the door,” said DeSantis. “That did not happen during Covid. Truth was censored repeatedly, and now that Twitter is in the hands of a free-speech advocate, that would not be able to happen again on this Twitter platform. So I think what was done to Twitter is really significant in the future of our country.”
While DeSantis claims he’s all about free speech, critics say that the Florida governor, like Musk, is actually serving a right-wing agenda that isn’t so free for everyone.
Musk has long claimed he wants Twitter to be a digital town square open to debate from all aspects of the political spectrum.
“For Twitter to deserve public trust it must be politically neutral, which effectively means upsetting the far right and the far left equally,” the billionaire tweeted in April last year, shortly after he made his bid to buy the company.
But lately, Musk has been upsetting one side a lot more than the other. He has been courting some of the most powerful figures in conservative politics to make Twitter their platform of choice, while angering liberals by engaging with conspiracy theories and culture-war-baiting rhetoric.
That approach was clear on Wednesday, when Musk hosted DeSantis on Twitter. It was the first time a candidate had announced a presidential bid on a social media network. It’s also notable that Musk, the company’s owner, is throwing his star power and massive following behind the effort. The DeSantis event happened the same day the Daily Wire, a conservative media outlet that hosts shows by popular right-wing pundits like Ben Shapiro and Matt Walsh, said it would be streaming its shows for free on Twitter. And just two weeks prior, recently fired Fox News host Tucker Carlson said he’s producing a new show that will run on Twitter — another major right-wing media coup for the platform.
While Musk has been busy promoting right-wing powerhouses on Twitter, he hasn’t made any similar public partnerships with liberal politicians, left-leaning or even neutral media outlets. His cozying up to the right seems to be alienating some liberal users. A recent Pew study shows that Twitter users who identify as Democrats were almost 10 percent more likely to say they would stop using the platform in a year (the partisan gap was even greater with Democratic women than men). And in the weeks after Musk took over Twitter, high-profile Republican Twitter accounts gained tens of thousands of followers while their Democratic counterparts experienced a decline, according to a Washington Post analysis.
It’s Musk’s prerogative to encourage whoever he wants on Twitter. While he says he will soon be stepping down as CEO and handing the position to former NBCUniversal ad executive Linda Yaccarino — who has her own conservative credentials as a well-known Trump supporter — Musk still controls and runs the company.
If we go back to Musk’s original stated reason for acquiring Twitter, which he reiterated shortly after he first took control of the company, he said it’s “important to the future of civilization to have a common digital town square, where a wide range of beliefs can be debated in a healthy manner, without resorting to violence,” and that “there is currently great danger that social media will splinter into far right wing and far left wing echo chambers that generate more hate and divide our society.”
Musk went on to critique mainstream media, “in relentless pursuit of clicks, much of traditional media has fueled and catered to those polarized extremes, as they believe that is what brings in money, but in doing so, the opportunity for dialogue is lost.”
But now, Musk is shifting Twitter toward the polarized, echo-chamber model of media he criticized. It’s not just that he is welcoming more right-wing voices onto Twitter. He’s also enabling them to seem more authoritative, boosting their voices, and allowing more controversial speech.
Musk has allowed neo-Nazis, white supremacists, and other hateful accounts back on his platform under his “freedom of speech but not freedom of reach” approach to content moderation. While Musk says that Twitter doesn’t endorse hateful content and won’t amplify it in people’s feeds, the fact that these accounts are allowed on the platform has turned off some users. But conservative accounts have embraced Musk’s new hate speech policies, and paid their way into being verified under Musk’s new check mark system, which allows their content to show up higher in replies and comments. At the same time, Musk has stripped check marks from many media organizations, reporters, and politicians who refuse to pay for verification, making Twitter’s new check-marked class look markedly more right-wing. And Musk has repeatedly broken his promises to allow speech he disagrees with onto the platform by temporarily banning journalists, comedians, and others who draw his ire.
DeSantis, like Musk, has been accused of contradicting his free-speech values: The governor has endorsed controversial new legal reforms in his state restricting how schools can talk to students about gender and race in the classroom.
Musk could embrace the new reality that Twitter is now a place where conservative voices are often welcomed by the company’s owner, and liberal ones are attacked. But somehow, he continues to hold on to the digital town square dream despite his failure to bring that vision to fruition.
In a Wall Street Journal talk on Tuesday, Musk said he “absolutely” wants to also interview Democrats and politicians across the spectrum. It’s unclear, though, whether Democratic politicians would want to go on Twitter to interview with Musk, who — like it or not — is now seen as being publicly aligned with conservatives.
“I am interested in X/Twitter being somewhat of a public town square where more and more organizations post content and make announcements on Twitter,” Musk said in the WSJ interview.
But what kind of public town square is Musk building if one side is welcomed with open arms and the other is attacked? The Atlantic’s Charlie Warzel has argued that Twitter has evolved into a far-right-wing platform. Sara Fischer and Mike Allen at Axios wrote that “the center of media gravity” is moving from Fox News to Twitter. One could also argue that it’s not quite there yet because of all the left-leaning or apolitical holdouts who, despite all the drama, just can’t seem to quit Twitter. But if it continues to alienate a wide swath of users, Twitter will turn into something more similar to Trump’s Truth Social or the now-defunct Parler: echo chambers of conservative voices.
Musk has the power to drive attention to the politicians he favors by controlling the fire hose of information on Twitter. He can boost DeSantis, Carlson, and other conservatives he’s partnering with so that they’re plastered all over people’s timelines, and he has already given these figures free promotion and a launching pad at pivotal stages in their careers. Let’s also not forget that he tweets to his 140 million Twitter followers on a daily basis.
A number of outlets, including this one, have recently reported that Twitter as we know it is dying. But a more updated conclusion is that it’s being reborn as something else entirely, something certainly more right-leaning, and possibly even more of a polarized hellscape than the Twitter before it. In any case, it’s clear now that the Twitter Musk is building is not the all-inclusive digital square he promised, but it is the one he wants.
Update, May 24, 7:50 pm ET: This story has been updated with details from the DeSantis Twitter broadcast event.
Can you safely build something that may kill you?
How OpenAI’s Sam Altman is keeping up the AI safety balancing act.
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies,” OpenAI CEO Sam Altman once said. He was joking. Probably. Mostly. It’s a little hard to tell.
Altman’s company, OpenAI, is fundraising unfathomable amounts of money in order to build powerful groundbreaking AI systems. “The risks could be extraordinary,” he wrote in a February blog post. “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” His overall conclusion, nonetheless: OpenAI should press forward.
There’s a fundamental oddity on display whenever Altman talks about existential risks from AI, and it was particularly notable in his most recent blog post, “Governance of superintelligence,” which also lists OpenAI president Greg Brockman and chief scientist Ilya Sutskever as co-authors.
It’s kind of weird to think that what you do might kill everyone, but still do it
The oddity is this: Altman isn’t wholly persuaded of the case that AI may destroy life on Earth, but he does take it very seriously. Much of his writing and thinking is in conversation with AI safety concerns. His blog posts link to respected AI safety thinkers like Holden Karnofsky, and often dive into fairly in-depth disagreements with safety researchers over questions like how the cost of hardware at the point where powerful systems are first developed will affect “takeoff speed” — the rate at which improvements to powerful AI systems drive development of more powerful AI systems.
At the very least, it is hard to accuse him of ignorance.
But many people, if they thought their work had significant potential to destroy the world, would probably stop doing it. Geoffrey Hinton left his role at Google when he became convinced that dangers from AI were real and potentially imminent. Leading figures in AI have called for a slowdown while we figure out how to evaluate systems for safety and govern their development.
Altman has said OpenAI will slow down or change course if it comes to realize that it’s driving toward catastrophe. But right now he thinks that, even though everyone might die of advanced AI, the best course is full steam ahead, because developing AI sooner makes it safer and because other, worse actors might develop it otherwise.
Altman appears to me to be walking an odd tightrope. Some of the people around him think that AI safety is fundamentally unserious and won’t be a problem. Others think that safety is the highest-stakes problem humanity has ever faced. OpenAI would like to alienate neither of them. (It would also like to make unfathomable sums of money and not destroy the world.) It’s not an easy balancing act.
“Some people in the AI field think the risks of AGI (and successor systems) are fictitious,” the February blog post says. “We would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.”
And as momentum has grown toward some kind of regulation of AI, fears have grown — especially in techno-optimist, futurist Silicon Valley — that a vague threat of doom will lead to valuable, important technologies that could vastly improve the human condition being nipped in the bud.
There are some genuine trade-offs between ensuring AI is developed safely and building it as fast as possible. Regulatory policy adequate to notice if AI systems are extremely dangerous will probably add to the costs of building powerful AI systems, and will mean we move slower as our systems get more dangerous. I don’t think there’s a way out of this trade-off entirely. But it’s also obviously possible for regulation to be wildly more inefficient than necessary, to crush lots of value with minimal effects on safety.
Trying to keep everyone happy when it comes to regulation
The latest OpenAI blog post reads to me as an effort by Altman and the rest of OpenAI’s leadership to once again walk a tightrope: to call for regulation which they think will be adequate to prevent the literal end of life on Earth (and other catastrophes), and to ward off regulation that they think will be blunt, costly, and bad for the world.
That’s why the so-called governance road map for superintelligence contains paragraphs warning: “Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other Internet technologies and society’s likely approaches seem appropriate.
“By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.”
Cynically, this just reads as “regulate us at some unspecified future point, not today!” Slightly less cynically, I think that both of the sentiments Altman is trying to convey here are deeply felt in Silicon Valley right now. People are scared both that AI is something powerful, dangerous, and world-changing, worth approaching differently than your typical consumer software startup — and that many possible regulatory proposals would strangle human prosperity in its cradle.
But the problem with “regulate the dangerous, powerful future AI systems, not the present-day safe ones” is that, because AI systems that were developed with our current training techniques are poorly understood, it’s not actually clear that it’ll be obvious when the “dangerous, powerful” ones show up — and there’ll always be commercial incentive to say that a system is safe when it’s not.
I’m excited about specific proposals to tie regulation to specific capabilities: to have higher standards for systems that can do large-scale independent actions, systems that are highly manipulative and persuasive, systems that can give instructions for acts of terror, and so on. But to get anywhere, the conversation does have to get specific. What makes a system powerful enough to be important to regulate? How do we know the risks of today’s systems, and how do we know when those risks get too high to tolerate? That’s what a “governance of superintelligence” plan has to answer.
The Wild West of streaming TV is here and it’s free
Welcome to FAST: The free, ad-supported, streaming television bargain bin.
I was looking for Night Court, for research purposes. Not the new version; the original, which went off the air in 1992. Much to my surprise, I found all nine seasons on a streaming app that I’d never heard of before, and that I didn’t have to pay for, called Freevee. The catch? I just had to watch a few ads.
A free streaming service? In this subscription economy? What is this magic?
I dove into my TV’s app listings and discovered a cornucopia of similar offerings, with strange names like Tubi, Pluto, and Xumo. If they don’t sound familiar, you’ll recognize their owners: Fox, Paramount, and Comcast, respectively. Freevee is owned by Amazon. Even my TV has its own free streaming app, Samsung TV Plus. Content can vary, but the format is pretty standard: They offer hundreds of linear live channels and on-demand libraries of thousands of hours of TV shows and movies. The content ranges from old and obscure to recent reruns and castoffs. You might see a few original shows in there, too. And maybe a few your friends recommended.
These are free, ad-supported, streaming TV — also known as FAST — services. They’re kind of having a moment. Viewers are finding them as they look for alternatives to costly cable and premium streaming subscriptions. Studios, cable companies, and streaming device manufacturers are turning to them as they look for ways to grab new eyeballs and ad dollars, wring more money out of their archives, and promote their other paid services and products. If you’ve only known a world of paying for subscriptions (or using your parents’ password) to watch streaming movies and TV shows, FAST might seem like a novel idea. If you’re older, it probably looks like an updated digital version of an old friend called basic cable.
FAST is a throwback to television before Netflix. It may also be a big part of television’s future, according to Alan Wolk, media analyst and co-founder of TVREV.
“Once Netflix and Disney and all of them get their ad product up and running,” Wolk said, “the big advertisers who’ve been hesitant to spend money on streaming because it’s still mostly reruns are going to go, ‘Oh, I get it. Netflix, that’s the new primetime, FAST is the new cable. This is how I’ll spend the money.’”
The rise and mechanics of FAST
Wolk knows the world of FAST pretty well because he’s the one who came up with the term around the end of 2018. It was a way to distinguish completely free streaming services from paid streamers that had a cheaper ad tier. This is also around the time when FAST started to take off, with major media companies and device manufacturers buying them up or starting their own. They often rely on third parties to fill up their libraries and channels, which resemble what you’ll find on traditional television. Some have their own original or exclusive content.
You’ll also, of course, find those unskippable ads plopped in the middle of it. These companies aren’t providing FAST services and content out of the goodness of their hearts. For something like Paramount, which bought Pluto for $340 million in January 2019, FAST is a way to reach whoever isn’t watching Paramount’s broadcast and cable channels and doesn’t want to pay for its premium streaming service, Paramount+. It’s also a way to monetize its voluminous archives of television and movies, and give free users a taste of what they can get on Paramount+.
“Our ecosystems complement each other,” Scott Reich, senior vice president of programming at Pluto, said. While Paramount+ has the new season of RuPaul’s Drag Race All Stars, Pluto’s got the previous season and the first episode of the new one.
“We’re helping upsell over to Paramount+ to see the new season,” Reich said. “You can use Pluto TV as a way of catching up and previewing. And then you go behind the paywall on Paramount+ to continue.”
Or maybe you’re Fox, which doesn’t have a paid streaming service (aside from Fox Nation, which is a niche product) to lose billions of dollars a year on. So it can put a little money into Tubi instead, which it does. Those investments have helped Tubi amass the largest library of all the FAST services, and they’ve helped it make big gains in viewership and ad revenue. Fox bought Tubi for $440 million in March 2020. It reportedly turned down offers of up to $2 billion three years later, and Fox Corporation CEO Lachlan Murdoch recently said its performance has been “nothing short of stellar.”
“We make money when people consume content, so deep engagement is really the key,” said Adam Lewinson, Tubi’s chief content officer. “In this world we live in these days, where everyone is down their own rabbit hole, if I made a judgment call that everyone is going to watch this one piece of content, it’s a very risky bet. As opposed to saying well, across 50,000 titles, we truly have something for everyone. And, frankly, a lot of it.”
For a device manufacturer like Roku, FAST is a way to monetize the exclusive access it has to users. It can put its own Roku Channel front and center on users’ menus, and it can use the data it collects on what they watch to target ads to them. Yes, your TV is spying on you, unless you’ve opted out of being watched while you’re watching. That’s why Roku is happy to offer the Roku Channel to non-Roku users too. Like most of these services, Roku Channel is available as a standalone app and on the internet. You don’t have to own a Roku or even a television.
And if you’re a third-party provider, FAST offers another way to distribute and make money off of your content. Some companies try to get their shows on as many platforms as possible, which is why you can find 24/7 channels of Forensic Files reruns on seemingly every FAST service. Or they may do exclusive deals or partnerships with FAST services, like Samsung TV Plus’s new Conan O’Brien TV, Freevee’s Washington Post Television, or Roku’s Mythical 24/7. If you can make money off of mostly repackaged content or episodes of Topper, a show that peaked at 24 in the Nielsen ratings in 1954, well, why not?
FAST services typically have revenue-sharing deals with content distributors, in which both parties get paid based on how many ads are served. Sometimes that’s a good deal, sometimes it’s not. But the FAST service isn’t taking a risk either way.
“They make money on them together, and only if people are watching,” David Offenberg, a finance professor at Loyola Marymount University said. “If nobody watches the show, no ads get served, then nobody makes any money. And it doesn’t cost anybody. ... The economics are vastly different than all the subscription services.”
As for where all that content comes from: If a service is owned by a company with its own archives, like Pluto and its Paramount library, then that’s an easy enough source. Or it may have licensing agreements, where it just pays a set fee for access to the content. The platforms won’t tell you exactly how their various deals are structured, but Tubi’s Lewinson told Vox that the platform has more than 450 content partners, from major studios on down to tiny independent distributors, and that the industry in general has “all different kinds of business models.”
“It’s very Wild West still,” Wolk said.
Speaking of the West, you may have noticed that Westworld isn’t on Max (formerly known as HBO Max) anymore. It’s on FAST. In an effort to cut costs, Warner Bros. Discovery decided it was time to get rid of old content that wasn’t bringing in enough viewers to justify the cost of hosting it on the platform, like paying out residuals. Some of those shows got a second home on Tubi and Roku. Expect to see this kind of deal happen a lot more as premium streaming services that spent like crazy to win the streaming wars realize that they have to have a sustainable business model, too. Disney just cut dozens of shows, including the ’80s fantasy reboot Willow from Disney+ and Hulu. You’ll probably see some of them pop back up on FAST. Some shows are still pretty fresh too; Willow’s last episode came out just a few months ago.
This all gets back to a major reason why FAST is coming into its own as an alternative revenue stream and distribution model. The streaming wars saw several major media companies launch their own premium streaming services, complete with big libraries and exclusive content. It cost them billions, but they needed to win over the viewers and their pocketbooks as their traditional television audiences kept shrinking.
Now, it’s clear even to the most anti-ad streamer — Netflix — that ad-free streaming alone isn’t enough. The company is adding ad tiers, cutting back on content, and looking for ways to monetize the stuff that’s not bringing in subscribers. They’re also hoping to reach the audiences that can’t or won’t pay for streaming or cable. FAST is a way to do all of those things.
“They’re serving two different markets, really. The subscription service is serving the higher-end consumer, and the FAST service is serving the lower-end consumer,” Offenberg said.
It seems to be working. People are tuning in, and advertisers are responding accordingly. Pluto and Tubi recently met the viewership minutes threshold to break into Nielsen’s The Gauge ratings report, which measures total minutes viewed on television screens, becoming the first FAST services to do so. Pluto, Tubi, and Roku combined had just 2.7 percent of all television viewing in March 2023, Nielsen told Vox. Netflix alone had 7.3 percent that month. But that 2.7 percent was a 53 percent increase from a year earlier. Pluto had 12 million monthly active users in 2019; it now has 80 million. Tubi currently has 64 million monthly active users, up from 25 million three years ago.
“Cable TV numbers are just falling through the floor,” Offenberg said. “As we speak, probably another million people have dropped. FAST is an easy, free alternative.”
Why you’ll be watching a FAST service soon (if you aren’t already)
A FAST service probably isn’t going to replace your current television habits, be they subscription streaming or traditional television. But it could make for a nice supplement, especially if you’re looking for ways to cut costs.
Frankly, cable and subscription streaming companies have given us plenty of reasons to leave them. Cable bills kept going up, so we cut the cord and subscribed to a streaming service instead. Much less money and plenty of stuff to watch, including the big shows everyone was talking about and maybe some of the network and cable shows you were missing, shown shortly after they aired. Then another streamer came along, so you subscribed to that, too. Then a third. A fourth. And then they all raised their prices. They took some of your favorite shows off the platform entirely. You’re paying more and getting less. If you’re going to have to watch ads, well, you might as well not be paying money to do it.
Or maybe you’ll discover FAST because you followed a show you liked that got kicked off a paid platform and put on a free one. Maybe you heard about an original show on Freevee, or you’re into Tubi’s no-budget horror movies, or you really want to watch Star Trek: Voyager, but not enough to pay for Paramount+.
Maybe you just want to turn the television on, channel surf, and then let Pluto’s exclusive all-Blue Bloods channel or the platform-agnostic Bob Ross channel feed you content instead of spending several minutes sifting through an on-demand library picking what you want to watch next. There’s something to be said for that tried-and-tested TV-watching formula. According to a recent survey from TiVo, people are about 50-50 on whether they prefer FAST channels or the on-demand libraries.
As to which FAST service you should watch, you don’t have to make any kind of commitment or, in most cases, even make an account to start watching. Here are a few guides to FAST platforms that’ll get you started on that journey. They offer a lot of the same third-party channels, so the differences lie in their interface (you might find some easier to use or search than others) and whatever they have that no one else does. Pluto prides itself on its human curators who program its exclusive channels, and it has all that Paramount content. Tubi’s recent Super Bowl ad (being owned by the network that’s airing the game has its benefits) showed large rabbits throwing people down holes of content, because Tubi’s strategy is to have the largest possible library with something for everyone, including niche and underserved audiences. Freevee and Roku have had a few breakthrough original shows. And now that they’re getting more viewers and ad money, the quality of the content is improving. Wolk, the guy who coined the term, says the evolution of FAST is reminiscent of the early days of cable.
“They’re getting better shows, more recent shows, shows from premium networks,” he said. “In the old days, cable was how you reached a larger audience. You would hit your audience on primetime. And then all the people you missed on primetime, that’s why you advertised on cable, to reach them. We’re going to see a similar thing.”
There is one thing missing from that, Wolk said, and that’s money. Subscription streamers spend and lose a lot of it, for the most part. TV and film writers are currently on strike because streamers pay them so much less than traditional television did. Well, FAST pays even less than that. For decades, broadcast and cable channels had a major source of revenue in billions of dollars worth of fees that cable companies (and their customers) paid to carry them, known as retransmission and carriage fees. Those don’t exist in the world of FAST. Or premium streaming, for that matter.
That said, you get what you pay for. Unless you came in because it had a show that you specifically wanted, like my Night Court journey, FAST services may not have exactly what you’re looking for. But they’ll probably have something you’d like.
Perhaps the best way I can illustrate this is through Betty White. Across the FAST landscape, you can find a lot of shows from White’s prodigious career, from her early ’50s sitcom Life with Elizabeth to Betty White’s Off Their Rockers, which ended in 2017. You can even find her animal talk show from the ’70s if you want to see Vincent Price’s pug, Puffalina Pansy Price. But you won’t find The Golden Girls, which is probably what you wanted to see. That’s on Hulu. For now, anyway.
A version of this story was also published in the Vox technology newsletter.
Montana’s TikTok ban — and the legal challenge of it — explained
TikTok is now suing the state over its new policy.
Last week, Montana became the first state in the United States to ban TikTok, amid concerns lawmakers have raised over the Chinese government’s potential ability to access the app’s data.
The move — which comes as the federal government and other states have vocalized national security worries about the app — goes much further than existing policies to restrict access to the social media platform. The ban has also faced questions regarding enforcement, and has been legally challenged by TikTok on the grounds that it violates users’ and the company’s First Amendment rights.
The law, which is slated to go into effect on January 1, 2024, focuses on penalizing TikTok as well as app stores that allow users to download the product. If TikTok continues to operate in Montana, it will be fined $10,000 for a user’s initial attempt to access the app, and $10,000 a day for every day it continues to allow that user access. The same goes for Google and Apple: If they allow users to download TikTok in Montana via their app stores, they will have to pay similar penalties. Individual users are not penalized for accessing the app under this law.
The Montana law is the latest indication of US lawmakers’ growing hostility toward the app, which is owned by the China-based company, ByteDance. The new policy follows federal and state bans on the use of the app on government phones due to national security concerns and fears that the Chinese government is using the app to surveil users or distribute misinformation. Montana Gov. Greg Gianforte has said he signed the law to “protect Montanans’ personal and private data from the Chinese Communist Party.” Some lawmakers have also pointed to a 2017 Chinese law that requires the country’s companies to respond to government demands for data related to national security as a reason to limit Americans’ access to the app.
TikTok has pushed back against these critiques, claiming that the Chinese government has not asked the company to hand over data and that it wouldn’t comply even if that did happen. CNN reported that “there is so far no evidence that the Chinese government has ever accessed personal information of US-based TikTok users.” And a 2023 Georgia Institute of Technology report found that TikTok’s data collection practices were similar to those of other social media platforms, like Facebook.
The company’s statements and independent reports, however, haven’t reassured lawmakers. And fueling much of this push is the fact that anti-China sentiment has grown recently in the US due to rising geopolitical tensions and intensifying economic competition.
At this point, it’s not clear the ban will have the effect that lawmakers intended. There are ways for users to get around it, and it’s not certain how much tech companies can do to guarantee that residents of one state aren’t able to access or download the app. Additionally, TikTok, the ACLU, and other civil rights groups have argued that the ban infringes on users’ free speech; there are an estimated 200,000 TikTok users in Montana and 150 million TikTok users in the US.
“With this ban, [lawmakers] have trampled on the free speech of hundreds of thousands of Montanans who use the app to express themselves, gather information, and run their small business in the name of anti-Chinese sentiment,” Keegan Medrano, a policy director at the ACLU of Montana, said in a statement.
The actual impact of the ban is uncertain
There are outstanding enforcement and legal questions about whether the ban can be implemented effectively or whether it can exist at all.
As written, the law puts the onus on TikTok, as well as on app stores run by Apple and Google, to make sure that Montana users don’t download or access the app. Cybersecurity experts told the Associated Press that it could be tough for app stores to restrict users in a specific state from downloading the app. They noted that there would also be many ways for users to evade these restrictions — including by using a virtual private network, or VPN, that would allow them to mask their IP address and therefore their location.
Additionally, the ban is now being challenged in court. A number of groups and legal experts have echoed the ACLU’s claim that a ban could be viewed as a restriction on people’s ability to exercise free speech via the app. There’s some precedent for that argument preserving access to a foreign-owned app: In 2020, a federal court stopped the Trump administration’s ban of the messaging app WeChat, which is owned by Chinese multimedia company Tencent, after users said it would violate their free speech rights. That same year, a federal court blocked the Trump administration’s attempt to ban TikTok by arguing that it overstepped the scope of presidential powers.
“Montanans are indisputably exercising their First Amendment rights when they post and consume content on TikTok,” Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University, told CBS News. “Because Montana can’t establish that the ban is necessary or tailored to any legitimate interest, the law is almost certain to be struck down as unconstitutional.” Since the ban won’t go into effect until January, it’s possible a judge could block it from ever coming to fruition.
TikTok’s lawsuit against Montana cites this argument and also suggests that the state proposal is preempted by federal law that governs issues like national security. Additionally, it states that the ban violates the Constitution’s commerce clause because it would place a burden on “interstate commerce” and interfere with the app’s availability across states.
“We are challenging Montana’s unconstitutional TikTok ban to protect our business and the hundreds of thousands of TikTok users in Montana,” TikTok spokesperson Brooke Oberwetter said in a statement. “We believe our legal challenge will prevail based on an exceedingly strong set of precedents and facts.”
Ultimately, Montana’s ban may prove to be a test case. The way the law is implemented — and considered by the courts — could determine how other states, and even the federal government, approach additional limitations on TikTok moving forward.
Update, May 23, 4 pm ET: This story was originally published on May 18 and has been updated to include the lawsuit TikTok has filed against the Montana ban.
President Biden’s new executive action is all about children and the internet
The Biden administration is fighting back against social media platforms that it says are harming children.
As more evidence emerges that internet platforms can harm children — and that tech companies either can’t or won’t do anything to protect their users — the government has understandably felt the need to step in. President Biden is doing just that with an executive action issued on May 23 that declares an “unprecedented youth mental health crisis” in the country, which he blames at least partially on the internet. The action was accompanied by an advisory from Surgeon General Vivek Murthy about the risks that social media may pose to children.
They join attempts by lawmakers to regulate the internet for kids. States have proposed and even passed laws that restrict what children can access online, up to banning certain services entirely. On the federal level, several recently introduced bipartisan bills run the gamut from giving children more privacy protections to forbidding them from using social media at all. Some efforts also try to control the content that children can be exposed to.
Critics of such legislation point to privacy issues with age verification mechanisms and fears that forced content moderation will inevitably lead to censorship, preventing kids from seeing material that’s helpful along with what’s considered harmful.
We’re already getting a glimpse of what various factions in this country think the internet should look like. We might be getting a much better look soon.
Biden’s latest salvo in his fight against the internet
The president’s feelings about children online are well known at this point. He’s mentioned in two State of the Union addresses that he believes social media harms kids and violates their privacy. His May 23 executive action to “protect youth mental health, safety, and privacy online” attempts to marshal various parts of the administration to address it.
These include a new Task Force on Kids Online Health and Safety, to be headed up by the Department of Health and Human Services and the Department of Commerce. The Department of Education is being asked to improve children’s privacy in educational tools and issue policies on best practices for using internet devices in schools. The Department of Commerce will promote support services for child victims of online bullying and abuse. And the Department of Homeland Security and the Department of Justice will work with the National Center for Missing and Exploited Children on a database of child sexual abuse material, which can help online services detect when those images are uploaded onto their platforms.
At the same time, Surgeon General Murthy issued an advisory that outlined social media’s perceived risks and benefits to children, saying that “we do not have enough evidence to conclude that [social media] is sufficiently safe for them.” Lawmakers, the advisory said, can mitigate possible harm with policies such as age minimums, increased data privacy for children, and age-appropriate health and safety standards for platforms to implement.
“Our children have become unknowing participants in a decades-long experiment,” the advisory says, echoing Biden’s statements in the last two SOTU addresses.
A new federal push to protect kids online — that states would help enforce
Protecting children from online evils, real or imagined, is a tale almost as old as the modern internet. Some of those fears, we’re increasingly learning, are not unfounded. Recent studies say that kids’ mental health is at crisis levels, and social media is often pointed to as a major contributor to that. Facebook whistleblower Frances Haugen’s 2021 revelations that the company hid research that said its services hurt teens’ mental health — claims that the social media giant says are inaccurate — are also cited as a major motivating factor for the legislative action we’re seeing now. Some researchers say that a link between social media usage and harm to children’s mental health hasn’t yet been established. The surgeon general’s advisory says there’s an “urgent need” for more research that would fill knowledge gaps, and calls on tech companies to provide their data to researchers to facilitate that.
That action most recently took the form of the Kids Online Safety Act, or KOSA, which was reintroduced on May 2. Cosponsored by Sens. Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT), this legislation would require platforms to implement several safeguards for users under 18. The bill is controversial for a few reasons, one of which is a so-called “duty of care” provision. This would mean that covered platforms have to prevent kids from being exposed to content that promotes or could contribute to mental health disorders, physical violence, bullying, harassment, sexual exploitation, abuse, and drugs, among other things.
On their face, these seem like good things for children to avoid. And KOSA’s proponents aren’t wrong that potentially harmful content finds its way to kids: social media platforms may not deliberately push it at children, but because these companies rely on keeping users’ attention however possible to power their business model, harmful content ends up in front of kids anyway. Free speech and civil rights advocates, however, are wary of any legislation that tries to control content, no matter how well-meaning. So several such groups, including Fight for the Future, the Electronic Frontier Foundation (EFF), and the American Civil Liberties Union, have all come out against KOSA. At the same time, the bill already has the support of at least 30 senators from both sides of the aisle.
“We generally don’t like it when the government is trying to tell parents the correct way to parent their children,” said India McKinney, director of federal affairs at the EFF. “Yes, there is harmful stuff that happens online. That is absolutely true. But how do you define that in legislation, to make it clear what you mean and what you don’t mean, and in a way that platforms can [moderate]?”
The bill’s authors believe they’ve made it plenty clear in this latest version of KOSA, where definitions of harmful content are narrower and less open to interpretation than the previous congressional session’s version. For instance, “grooming” — which some on the right wing have adopted as their preferred term for pretty much any LGBTQ+ content — is no longer listed as an example of sexual exploitation. Along with about a third of the Senate, KOSA has the support of many children’s health and safety advocacy groups. Also, Lizzo.
KOSA’s opponents aren’t just wary of its provisions about content. They also don’t like the power it gives to state attorneys general to enforce it. Some see this as an opening for state leaders fighting a culture war to go after online platforms that host speech about transgender rights or abortion care, or that simply acknowledge that gay couples exist. Or, really, any other content that’s become politically advantageous to censor and can be interpreted to fall under KOSA’s definitions, narrow as they are.
While it may have seemed like a stretch just a few years ago, this highly politicized version of kids’ online safety has become a reality to reckon with in the midst of the latest moral panic that some Republicans have made the center of their campaign strategies. Some of these attorneys general and the states they represent have pushed laws that ban books or public school curricula if they contain sexual, LGBTQ+, or race-related content. The laws are vaguely worded enough that libraries and schools are banning books preemptively just in case someone finds something objectionable in them. Some states have passed, or are trying to pass, anti-trans laws that ban or restrict gender-affirming care for kids and even adults. They’ve even tried to ban drag shows.
Those states could conceivably do something similar to the digital world if given the chance. It’s not lost on some of KOSA’s opponents that Sen. Blackburn represents Tennessee, the state that tried to ban drag shows from being performed for or near children, or that she’s made several anti-gay and anti-trans comments and votes. We also know that platforms tend to over-moderate to ensure they can’t get in trouble, as we’ve seen some of them do to sex and sex-work-related content in the wake of FOSTA-SESTA. The end result is censorship, be it forced or voluntary.
A cautionary tale from state laws
Some recently enacted children’s online safety legislation shows us what state leaders want the internet to look like. These state laws pertain to children, but they impact adults, too.
A Utah law requires social media platforms to verify the ages of their users, which means people of all ages will likely have to submit some kind of verification to log into their social media accounts. The state passed another law that requires porn sites to verify visitors’ ages, which has prompted several porn sites to block Utah IP addresses entirely, saying it wasn’t possible for them to verify ages as the new law required.
Louisiana also bans children from visiting porn sites and requires those sites to verify visitors’ ages by making them prove their identities. While Pornhub implemented an age verification system to comply with the law, it noted that Louisiana-based traffic decreased by 80 percent after it went into effect. And sure, it’s possible that 80 percent was all children who could no longer access the site. It’s more likely that it was adults who could view that content legally but didn’t want to upload their IDs to be able to do so.
Meanwhile, Arkansas passed a law that requires users under 18 to get parental consent to use certain social media platforms (it’s so far unclear how ages will be verified). California has the Age-Appropriate Design Code, which requires online services to implement certain design features for younger users and limit the data that can be collected on them. Montana passed a law that would ban TikTok entirely, which isn’t exactly a child safety law but does very much affect children, with whom the platform is very popular. The list of other states considering children’s online safety bills goes on and on.
Federal legislation for kids’ online safety is much less likely to be passed than the state versions, as Congress is more divided and moves more slowly than many state legislatures. But there are bipartisan bills that have some potential — and, critics say, problems.
Along with KOSA, there’s EARN IT, which passed out of committee on May 4, setting it up for a vote in the Senate (the last two incarnations of EARN IT similarly passed out of committee, but never got a floor vote). Supporters say it will help law enforcement better fight child sexual abuse material. Opponents fear that EARN IT will be used to weaken or ban encryption for everyone. The Protecting Kids on Social Media Act, introduced last month, bans children under 13 from using social media and requires parental consent for children 13 and over. That would shield children from social media’s harms, but it would also keep them away from online resources that do some good.
And then there’s the sequel to the Children’s Online Privacy Protection Act, a 1998 law that gave children under 13 certain privacy rights and remains the only federal consumer online privacy law we have, even decades later. The Children and Teens’ Online Privacy Protection Act, or COPPA 2.0, was introduced on May 3 by Sens. Bill Cassidy (R-LA) and Ed Markey (D-MA). Markey was also behind the original COPPA. As a privacy bill, COPPA 2.0 doesn’t have the same content moderation issues that other bills do, but Markey has had a hard time getting it passed in previous sessions. And it stops short of giving privacy protections to adults, something privacy advocates, understandably, very much want.
Any law that covers people regardless of age, critics of these kinds of bills often point out, would take away the need to verify users’ ages — which can be a privacy violation in and of itself. Many of Louisiana’s porn-enjoying adults can probably attest to that. It could also solve or ameliorate some of the children’s safety issues without the need for problematic child-specific safety laws. But Congress so far hasn’t come close to passing that kind of privacy law after years of trying, so it seems unlikely that it will anytime soon.
Children’s online safety measures have been proposed and debated for decades, but they rarely went much further than that. Now, the threat that these ideas become law is very real, in part because the dangers online platforms present to kids are very real. But so is the possibility that kids’ online safety laws could be weaponized to censor content according to subjective and politicized views of what’s harmful. We’ve already seen what those views can do to school libraries. We may soon see what they’ll do to the internet.
Update, May 23, 3:45 pm ET: This article, originally published May 5, has been updated to add the surgeon general’s advisory and Biden’s executive action.
A version of this story was first published in the Vox technology newsletter.