Truth Social: App stores as a new front in the platform governance of Donald Trump
The launch of “Truth Social” is the latest episode in the sorry saga of Donald Trump and social media, but one which implicates new sites and actors in platform governance, including app stores. Will it go the way of Parler?
The following piece anticipates and incorporates forthcoming work with Jessica Morley and Luciano Floridi; the ideas presented here have benefited from recent conversations with Corinne Cath-Speth and Emmie Hine, and I am, as usual, indebted to Nayana Prakash for the space to hash these and other issues out on our regular podcast.
Donald Trump and social media have had a fractious friendship. Trump owes his initial ascendance to the presidency to social media, having exploited Facebook and especially Twitter to circumvent a skeptical mainstream press and Republican party gatekeepers in the 2016 nominating process. Defying the old adage that politicians must “campaign in poetry and govern in prose”, as president Trump maintained the ranting-and-raving approach to social media that he had adopted as a candidate (albeit with added “covfefe”). While he was in office, and especially while he seemed likely to win re-election (before the Covid pandemic struck), social media companies accommodated Trump’s unprecedentedly unpresidential use of their platforms, despite multiple fairly clear breaches of their terms of service. As the tumultuous year of 2020 wore on, however, the calculus for such kowtowing changed. Not only did the Covid pandemic and Trump’s bizarre responses to it (anyone for bleach?) see his poll numbers slide, but other events that year, especially the protests following the murder of George Floyd, brought Trump’s inflammatory use of social media into such sharp focus that platforms had little choice but to act. In May 2020, Twitter for the first time labelled two of Trump’s tweets about mail-in voting “potentially misleading”. Days later, Twitter restricted (but did not remove) a Trump tweet threatening that “when the looting starts, the shooting starts”, citing its glorification of violence.
And so it was that in early January 2021, Trump’s despicable last hurrah in office was to foment violent insurrection at the United States Capitol, where lawmakers had gathered to conduct the formality of certifying Joe Biden’s election win. Five people died in the mayhem, which saw lawmakers evacuated with seconds to spare as protesters stormed the congressional dais. Trump’s role in instigating the violence was the last straw for the major social media platforms: Twitter banned him outright, while Facebook provisionally suspended him.
Trump’s removal from mainstream social media marked the end of a long and arguably symbiotic relationship. Being deprived of the oxygen of online attention following the removal of his virtual platform by these companies (as well as the withdrawal of his presidential platform by voters) is obviously what underlies the launch of his own platform today. But the question of why Trump is launching his own social media platform is both easier and less interesting to answer than the questions of what impact the new platform will have, and how it will be received. Symbolically if not functionally, Trump’s transition from mainstream platform user to startup platform operator marks a watershed moment in the politics of platform governance. The launch of “Truth Social” brings to light a frequently underrated aspect of platform governance: how platform operators govern not only their own users, but also other platforms. To see what this might mean for the prospects of “Truth Social”, we must return to the events of January 2021.
As large platforms like Facebook and Twitter were announcing their suspensions of President Trump, other large technology providers, including Google¹, Apple and Amazon, took action against several of the smaller fringe platforms that had allegedly played a role in facilitating the violent insurrection at the Capitol. Foremost among these was Parler. Founded in 2018, Parler is a social network similar to Twitter, which pitches itself as “the internet’s town square”, a place where “every individual has the right to speak and be heard”. Beneath the veneer of ideological “impartiality” and unfettered “free speech”, however, Parler’s user base was largely made up of American right-wingers who had migrated from places like Twitter and Facebook, upset at the supposedly censorious approach these mainstream platforms were increasingly taking to Donald Trump and other conservative figures. This created a curious paradox: while Parler’s CEO John Matze took to the airwaves throughout 2020 to portray Parler as a post-ideological nirvana of free speech and free thinking, his platform was steadily filling up with a hard core of far-right users, despite Parler even offering money to try to lure liberal users to join. And in a further irony, because Parler eschewed its rivals’ algorithmic approach to content curation – instead using a simple reverse-chronological feed a la the first version of Twitter and Facebook’s original News Feed – new users were faced with the choice of either a firehose of far-right (and very often objectionable) content, or no content at all.
The decision taken by Apple and Google to remove Parler from their respective app stores in the aftermath of the Capitol riots, alongside Amazon’s decision to withdraw cloud hosting services from the platform, was not made because of the ideological stripe of its users per se, but because of the risk to public safety that its continued operation was perceived to pose. Whatever the merits (and timing) of their decisions, the removal of Parler from app stores by Apple and Google (it has since been restored to Apple’s App Store) illuminated the hitherto quieter, but no less significant, governance role played by app store operators. Platform governance has tended to be seen in two senses: platforms are governed by states, and platforms govern their users. But as the Parler case shows, several large platform providers do not merely govern their own users as individuals, but also other platforms, and by extension those platforms’ users. Thus while much attention has rightly been paid to how platform operators moderate content on their own platforms, including with the use of algorithms and novel high-level governance mechanisms such as Facebook’s Oversight Board, the launch of Truth Social as a standalone platform puts the onus on other governing actors such as app store operators. This is because no platform does, in fact, “stand alone”; each relies instead on a range of technological infrastructure. José van Dijck and colleagues’ notion of deplatformization and Joan Donovan’s concept of content moderation across the tech stack are both useful here, identifying the range of powerful actors behind and beyond social media companies, including not only app store operators but also domain registrars and payment processing services, that together decide who gets to say and do what on the internet.
As a result, while Trump’s new platform may, like Parler before it, pitch itself as a place free of the censorious shackles of left-wing Silicon Valley elites like those operating Facebook and Twitter, who get to “decide what you can and can’t say”, it will still rely on, and effectively be governed by, several other technology companies. Foremost among these are Apple and Google. Trump’s startup platform could in principle have operated outside of app stores, by relying on Android users to “sideload” the app (as Parler now does) and on iPhone users to access the service through their browser. In reality, however, modern apps depend on the stack of operating system functionality available only to installed apps, such as system-level notifications. This means being subject to Google and Apple’s respective app store guidelines – guidelines which Apple and Google cited in their decision to remove Parler.
Given their duopolistic dominance of the smartphone software market, it is Apple and Google, therefore, who will be in the position to decide whether Truth Social meets their standards for inclusion and retention in their app stores. Let us focus here on Apple, whose more locked-down smartphone operating system prevents users from “sideloading” apps, and thus stops apps from side-stepping its app store governance altogether. Unfortunately, even by the enigmatic standards of social media content moderation, Apple’s guidelines for app developers are at points almost self-parodically opaque. For instance, in the introductory preamble to its App Store Review Guidelines, Apple notes that:
“We strongly support all points of view being represented on the App Store, as long as the apps are respectful to users with differing opinions and the quality of the app experience is great. We will reject apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, “I’ll know it when I see it”. And we think that you will also know it when you cross it.”
Through its tongue-in-cheek reference to Justice Potter Stewart’s definition of obscenity as “I know it when I see it” (ironic given Apple’s longstanding stance against apps containing pornography), this statement inadvertently encapsulates the present limitations of Apple’s approach to governing its App Store. By reserving the right to reject “any content or behavior that we believe is over the line”, Apple asserts its status as judge, jury and executioner with respect to the apps it allows on its smartphone platform. Though its fine print is somewhat more specific with respect to how it defines “objectionable content”, another point of ambiguity arises with respect to “platform apps” – not only Facebook and Twitter but also Parler and Truth Social. Apple acknowledges that “apps with user-generated content [ie, social media and other content-sharing platforms] present particular challenges”, and demands that such apps include mechanisms for “filtering objectionable material from being posted”, for the “report[ing of] offensive content”, and for the “ability to block abusive users from the service”. In this context, Apple accords itself the role of “meta-moderator”, overseeing the enforcement of the standards it sets for platform apps’ own content moderation, and in theory intervening when they fall short.
Besides underscoring the rather arbitrary nature of app store governance (as also evidenced by real-world examples of apps rejected for obscure reasons), these guidelines also surface a more specific challenge for the governance of platform apps like Truth Social. The guidelines presume a good-faith effort on the part of app developers and operators to develop apps that treat users fairly and equally. For “big tent” platforms like Facebook and Twitter, the logic of ensuring rapid growth of their user base and maximising “engagement” means there is little incentive to segregate or sanction users for ideological or demographic reasons, other than age². Indeed, the several outrages over problematic content, like hate speech or disinformation, that has been allowed to proliferate on these platforms reflect operators’ unwillingness to act on content firmly or quickly enough. Although Apple does specify in its developer guidelines that “apps should not include content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy”, which includes “discriminatory … commentary about religion, race, sexual orientation, gender, national/ethnic origin, or other targeted groups”, it is not clear how these prohibitions translate to apps that primarily contain “user-generated content”. To what extent, for example, are platform apps permitted to develop and enforce their own terms of service and mechanisms for moderation that diverge from Apple’s own? To take the question of hate speech: at what point does a hate-filled platform (that is, a platform frequented by hate-spewing individuals) come to be seen (and governed) as a hateful platform tout court?³ Parler is, again, an illustrative edge case here.
There is no doubt that long before the Capitol riots, Parler was beset with objectionable content, and ill-equipped or unwilling to clean it up – rather graphically exemplified by its CEO’s plea to users not to “threaten to kill anyone in the comment section”. Up until January 2021, however, Parler remained on app stores (it even topped app store charts in the wake of Trump’s defeat), suggesting that the app store operators were content with (or inexplicably unaware of) the level of objectionable content on the platform. It was only when this spilled over into the “real world”, and specifically onto the floor of Congress, that Apple and Google – no doubt fearing a PR backlash – felt obliged to remove it.
Truth Social has only just launched, but we may nonetheless be permitted several assumptions about what form it will take. Presumably, Trump’s platform will attract a particular kind of user – those who support him politically. Additionally, at least some of these users will indulge in a range of hateful, conspiratorial, or outright insurrectionist communication on the platform. And finally, the platform will be insufficiently prepared or unwilling (or both) to tackle this objectionable content effectively. If these conditions hold, it will now be Apple and Google, rather than Twitter or Facebook, who will be charged with deciding how to enforce their guidelines in view of Trump’s or his followers’ latest outrage. And as we have seen, these guidelines are rather ambiguous with respect to when a threshold for objectionable content is met.
None of these are easy questions to answer, nor straightforward governance decisions to take, and the arrival of what we may as well call “Trump Social” places Apple and Google in an unenviable position. But whether they like it or not, given their dominance over the smartphone app ecosystem – not to mention the high revenue cut that they take for app sales and subscriptions through their app stores, which they claim is necessary for user safety and wellbeing – Apple and Google are now in the platform governance hot seat. It will be interesting, to say the least, to see whether Apple and Google’s governance of Donald Trump as platform operator imitates or diverges from Facebook, Twitter, and other social media companies’ earlier governance of him as (impetuous) platform user.
1 Google has also had to face decisions in relation to governance of its own platforms, especially YouTube, in light of various outrages concerning the prevalence and spread of problematic content such as hate speech on that platform.
2 Of course, users are segregated on these platforms in a very granular way, through the deployment of algorithmic ordering and advertising systems based on data profiling. And platform policies are far from neutral with respect to their impact on different groups of users; consider the othering effects of Facebook’s “real name” policy on indigenous groups and trans people, for example.
3 This mirrors ongoing debates in Germany over what threshold of conspiratorial activity in relation to Covid restrictions would justify the banning of the private messaging app Telegram in that country. And more generally, the power of states to ban platform apps should not be underrated: consider the ban on TikTok in India or the suspension of Twitter, now reversed, in Nigeria.