
Instagram Features to Limit Political Content Cause an Uproar, but Giving Users Greater Control on Social Media Presents an Exciting Opportunity

David Inserra

Last month, Instagram users took notice of two features that change how much political content they see on the platform. The rollout wasn’t exactly smooth, however, with users of various political stripes accusing Meta of trying to suppress content. But rather than attack this change, users and policymakers should recognize it as a significant move toward giving users greater control over their news feeds and an opportunity for a more user-first social media experience.

First, let’s understand what changed on Instagram. Within Instagram’s user controls:

- Users are now able to click a button to either limit or not limit “political content” regarding “governments, elections, or social topics.” The current default option is to limit political content.

- Users are also able to determine how fact-checked content is treated. They can choose to heavily reduce how much of it they see, somewhat reduce it, or not reduce it at all, with the default being to somewhat reduce fact-checked content.

The discovery of these new features, however, was met with accusations from various parties that Meta was hiding the change to suppress certain viewpoints. But Meta has been announcing these changes for some time now, so they are hardly a secret.

A better question might be what gets classified as political and social content. While some types of content are very clearly political, others largely depend on one’s viewpoint. Is it political to merely express horror over the October 7 attack on Israeli civilians? Or to express sadness over the casualties in Gaza? Is support for certain types of energy like solar power or nuclear power a political issue? Are various diversity initiatives covered? What about content supporting or rejecting environmental, social, and governance investing strategies? It would be nice to know more about what is and isn’t considered a political issue.

But perhaps even better than mere transparency, Meta could give users more specific control over various types of content. Some users may want to see even more political content than Instagram’s filters currently allow—others may want to see even less. Some may want to see a lot of specific types of political or social content but not content about other issues. There is a lot of room here to further empower users.

For example, Bluesky, a social media company that became open to the public this year, would allow users to quickly and easily subscribe to different independent moderation services. These services would let users truly customize their social media experience, ranging from how potentially hateful content is handled to less serious decisions like whether to block pictures of spiders.

Alternatively, platforms like Reddit give subcommunities, known as subreddits, the ability to adopt their own rules to fit those communities. Reddit also lets users upvote or downvote other users’ comments, which the independent moderators of subreddits can use to limit posts from users whose comments are regularly downvoted.

The other tool users can make use of on Instagram determines how fact-checked content is treated. Through its fact-checkers, Meta has long suppressed content that it deemed to be misinformation. The problem, of course, is the ancient question: Who watches the watchers? The fact-checkers have significant authority to declare content false, partly false, or missing context, and to have that content suppressed. But nearly every piece of content is missing some context; after all, good writing requires that only the most essential context is provided. What facts count as the most important context differs from person to person or organization to organization. And how various facts are interpreted or assessed is subject to all sorts of potential bias.

This is not to say that there are no truths or facts—just that much of our discussions around major social issues are not as black‐​and‐​white as many fact‐​checkers would assert. Add to this the fact that it is difficult to meaningfully appeal a fact-checker’s decision, and it is easy to see how some users might be frustrated with such a system. 

Of course, this is Meta’s system, and other platforms take different approaches to fact-checking. X has invested in Community Notes, which empowers users of X to write notes that add context to a post. When enough people from a variety of perspectives rate a given note as helpful, it can be shown to all users. This crowdsourcing model relies not on the views of a fact-checker but on providing only widely agreed-upon facts.

The reality is that users can have substantively different views on what is political, what is misinformation, and how much of any such content they want to see on a given app. Allowing users to choose may be a new way to alleviate the moderator’s dilemma, in which moderators tend to err on the side of caution for many types of speech. For example, while I strongly believe that content moderation policies should favor more expression, I recognize that other people vigorously disagree and prefer a feed with more limited views.

Because of this, these new tools that give more control to users may be the easiest way out of the moderation paradox. Some users can choose to heavily suppress fact-checked content in their feeds, while others can choose to see more of it. I can choose to use my Instagram for foodie, travel, gaming, and family content and keep politics on X, Facebook, or LinkedIn. And by offering even more options for control, platforms would let users and advertisers across the political spectrum and around the world truly customize their experience based on what they are comfortable with, rather than settling for a one-size-fits-all approach.

Platforms could even allow civil society organizations to create various one‐​click moderation filters. Want to see social media that is minimally moderated in favor of expression? Use a filter by a free speech group. Want your social media experience to completely avoid certain sensitive topics that trigger or disturb you? Use a filter by an online safety organization. Of course, these would all be optional. If you like the current experience offered by Instagram or any other platform, you can keep it. And none of this requires government mandates or regulations that infringe on freedom of expression.

While a lot of ink has been spilled over whether Instagram’s changes suppress one group or another, they offer a glimpse of what is possible when social media companies give users transparent control over what they see online.
