
Architecture of harm

By Asad Baig
April 10, 2026
High school student poses with her mobile showing her social media applications in Melbourne. — Reuters/File

For years, the debate around social media harms remained trapped in abstractions. Platforms insisted they were neutral conduits. Policymakers circled reform but rarely acted decisively.

Meanwhile, the lived experiences of users, especially children and women, accumulated quietly into something harder to ignore. What has changed now is that this accumulation has finally been translated into legal liability, not by regulators, but by the courts.

In the past few weeks, two major court decisions in the US have, for the first time, directly held social media companies responsible for harm caused to users, especially children. In a landmark trial in Los Angeles, a jury found Meta and Google liable for designing platforms like Instagram and YouTube in ways that contributed to addiction and serious mental health issues, awarding damages to a young woman who had used these products since childhood.

At almost the same time, in a separate case in New Mexico, another jury ordered Meta to pay $375 million for failing to protect children from exploitation and harmful interactions on its platforms.

In addition to the scale of penalties, these developments are significant because the courts have effectively acknowledged that harm on social media is not only about what users post, but about how these platforms are deliberately designed, and that this design itself can make companies legally accountable.

Crucially, this design is tied directly to profit. The same systems that amplify outrage, hate and emotionally charged content tend to keep users engaged for longer, generating more data and higher advertising revenue. In that sense, harm is not just a byproduct of these platforms but is often embedded within the very mechanics that make them commercially successful.

For over two decades, platforms have operated under Section 230 of the US Communications Decency Act, which effectively shields them from liability for user-generated content. This created a legal architecture that allowed platforms to host, amplify and profit from content without being treated as publishers. Most attempts to challenge this model failed because they focused on content moderation. The argument was always about whether platforms should be responsible for what users post.

What has now changed this argument is that, instead of focusing on content, these cases focused on product design. Plaintiffs argued that social media platforms are not passive intermediaries but engineered systems deliberately designed to maximise engagement, often at the expense of user well-being. The harm, therefore, is not incidental but structural.

This shift is significant because it bypasses the long-standing immunity frameworks. If a platform is treated as a product, rather than a publisher, then it can be held liable in the same way as any other product that causes harm. The comparison that has begun to emerge, and not without reason, is with the tobacco industry. Not because social media is identical in effect, but because the legal strategy is similar: establish that companies knowingly designed systems that could harm users, especially vulnerable ones, and failed to take adequate steps to mitigate that harm.

The implications of this are profound. For children, the evidence has been mounting for years. Internal research from these companies has repeatedly shown that prolonged exposure to algorithmically curated feeds can exacerbate anxiety, depression and body image issues. Features such as endless scrolling, social validation metrics and targeted content recommendations are particularly potent in shaping adolescent behaviour. What the courts are now acknowledging is that these are not neutral features. They are design choices with foreseeable consequences.

In legal terms, foreseeability is critical. It means companies cannot claim ignorance. If harm can be reasonably anticipated, there is a duty of care.

For women, the harms are different but just as structural. Social media platforms have become spaces where harassment, stalking, image-based abuse and coordinated trolling are not only widespread but often amplified. Systems built to maximise engagement tend to privilege outrage and conflict, allowing abusive content to travel further and faster than efforts to counter it.

Women, especially journalists, activists and public figures, are disproportionately targeted. The impact goes beyond individual harm. It shapes participation itself. When online spaces become hostile, they push voices out. This is a consequence of how these platforms are designed. These cases connect that reality to legal accountability.

For years, the business model of social media has been simple: maximise attention. More time on the platform means more data and more advertising revenue. In that model, features that increase engagement are rewarded, while safety is treated as a cost.

If courts begin to impose liability for design-linked harms, that equation shifts. Features that drive engagement but also cause harm become legal risks. Companies will have to weigh profitability against liability.

This could force real changes. Design choices may be reconsidered. Safeguards for younger users may be strengthened. Greater transparency around algorithms, long resisted, may become unavoidable. Some features may need to be fundamentally redesigned.

But change will not come easily. These companies will appeal. They will argue that causation is unclear and that user choice matters. There will be efforts to limit liability.

The challenge becomes sharper in places like Pakistan, where harms are similar but protections are weaker. Women face harassment that can spill into real-world violence. Children navigate digital spaces with minimal safeguards. Access to justice is limited, and the imbalance of power between platforms and users is even greater.

The question is whether these legal shifts will travel. Platforms may adopt higher standards globally, or protections may remain uneven across regions. But impunity is eroding. The significance lies not just in the damages awarded, but in the precedent set. The law is beginning to catch up with the reality that these platforms are not passive intermediaries, but powerful systems shaping behaviour and well-being.

What comes next will determine how far this shift goes. Whether it leads to meaningful redesign or is absorbed as a cost of doing business remains uncertain. But the direction is clear. The debate has moved from whether harm exists to who is responsible for it. And that, in itself, changes everything.


The writer is the founder and executive director of Media Matters for Democracy. He tweets/posts @asadbeyg