Platform Liability under the Digital Services Act

What platforms are actually responsible for — and how liability is established in practice.

Platform liability is no longer passive

The Digital Services Act (DSA) fundamentally changes how platform responsibility is defined.

Liability is no longer limited to whether illegal content exists on a platform. It is determined by how platforms detect, act on, and prevent illegal content at scale.

This shifts responsibility from passive hosting to active, operational control.

In practice

What platform liability means in practice

Under the DSA, liability is not based on policies or intent. It is based on observable behaviour.

Platforms are expected to:

detect illegal content

act on valid signals

prevent repeated exposure

document decisions consistently

Key principle

Liability is defined through action, timing, and consistency — not intent.

Notice and action

Notice and action defines your obligations

The DSA requires platforms to provide mechanisms for reporting illegal content.

Once a platform receives a sufficiently precise and substantiated notice, it obtains actual knowledge and must act expeditiously.

Signals can include:

user reports

legal notices

reports from trusted flaggers

Failure to act after valid notice creates direct liability exposure.
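As a rough sketch of what this looks like operationally, the record below (field names are illustrative assumptions, not terms prescribed by the DSA) captures the two timestamps that matter for demonstrating expeditious action: when a valid notice was received and when the platform acted on it.

```python
# Illustrative sketch of a notice-and-action record. Field names are
# assumptions for this example, not terms prescribed by the DSA.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Notice:
    notice_id: str
    content_id: str                  # the item the notice refers to
    source: str                      # "user_report", "legal_notice", "trusted_flagger"
    grounds: str                     # stated reasons why the content is illegal
    substantiated: bool              # sufficiently precise and substantiated?
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    acted_at: Optional[datetime] = None
    action: Optional[str] = None     # e.g. "removed", "disabled", "rejected"

    def record_action(self, action: str) -> None:
        # The interval between received_at and acted_at is what a reviewer
        # would examine when assessing whether the platform acted expeditiously.
        self.acted_at = datetime.now(timezone.utc)
        self.action = action
```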

Actual knowledge

When awareness becomes liability

"Actual knowledge" is a critical legal threshold.

A platform can no longer remain neutral once it:

receives a valid report

identifies illegal content internally

processes a trusted signal

At this point, inaction is no longer passive — it becomes liability.

Why removal is not enough

Removing content once does not resolve the problem.

Illegal content frequently reappears through:

Re-uploads

Minor modifications

Re-encoding or compression

Distribution across platforms

Core issue

Without persistent content identity, platforms cannot recognise previously removed content. Each re-upload is treated as a new case — making consistent enforcement impossible.
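A minimal sketch of persistent content identity, assuming a perceptual fingerprint rather than a cryptographic hash (a cryptographic hash changes completely after re-encoding, which is exactly why it fails here): re-uploads are matched by how few bits differ between fingerprints.

```python
# Minimal sketch of re-upload detection via perceptual fingerprints.
# fingerprint() stands in for whatever perceptual hash the platform uses
# (e.g. pHash for images); it is assumed here, not implemented.

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

def matches_removed_content(candidate: int,
                            removed_fingerprints: set,
                            threshold: int = 8) -> bool:
    """A near match (few differing bits) indicates a modified re-upload."""
    return any(hamming_distance(candidate, fp) <= threshold
               for fp in removed_fingerprints)

# Usage at upload time: compare the new file's fingerprint against the
# index of previously removed content instead of treating it as a new case.
# if matches_removed_content(fingerprint(upload), removed_index): block it.
```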

Trusted flaggers

Trusted flaggers and prioritised handling

The DSA introduces trusted flaggers, whose reports must be prioritised and handled without undue delay.

This creates additional operational requirements:

faster response times

prioritised signal processing

consistent handling across cases

stronger documentation expectations

Platforms must be able to process and prioritise signals correctly — not just receive them.
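A minimal sketch of what prioritised handling can mean in queueing terms, assuming a simple two-tier scheme in which trusted-flagger notices are dequeued before all others and ties are broken by arrival order:

```python
# Sketch of a two-tier notice queue: trusted-flagger reports first,
# then everything else, oldest first within each tier.
import heapq
import itertools

class NoticeQueue:
    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()  # preserves submission order on ties

    def submit(self, notice_id: str, trusted_flagger: bool) -> None:
        priority = 0 if trusted_flagger else 1   # lower value is handled first
        heapq.heappush(self._heap, (priority, next(self._arrival), notice_id))

    def next_notice(self) -> str:
        _, _, notice_id = heapq.heappop(self._heap)
        return notice_id

# Usage
queue = NoticeQueue()
queue.submit("N-1001", trusted_flagger=False)
queue.submit("N-1002", trusted_flagger=True)
assert queue.next_notice() == "N-1002"   # the trusted-flagger report is handled first
```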

Where most platforms fail

Most platforms do not fail because they lack policies. They fail because they cannot enforce them consistently.

Common gaps:

Reactive moderation workflows

Inconsistent decisions across similar cases

Inability to prevent re-uploads

No persistent memory of content

Fragmented or incomplete documentation

Result

Enforcement becomes inconsistent, and compliance becomes difficult to defend.

Why moderation alone does not work

Moderation is inherently reactive:

content is handled after exposure

decisions are isolated

enforcement lacks continuity

Scaling moderation increases cost — but does not create control.

You cannot moderate your way into compliance.

What's required

What the DSA actually requires

To operate defensibly under the DSA, platforms must move beyond moderation.

They need systems that enable:

01

Persistent content recognition

Previously identified content must be detectable across uploads, formats, and transformations.

02

Consistent enforcement logic

Decisions must be applied uniformly across similar cases and over time.

03

Fast and prioritised response

Signals — especially from trusted flaggers — must be processed without delay.

04

Audit-ready documentation

All actions must be traceable, time-stamped, and verifiable under regulatory review.

This represents a shift from handling content to controlling it at a system level.
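As one sketch of what "audit-ready" can mean technically, the log below time-stamps every enforcement action and chains each entry to the hash of the previous one, so the record is traceable in order and tampering is detectable. The structure and field names are assumptions for illustration.

```python
# Sketch of a tamper-evident, time-stamped enforcement log.
# Each entry embeds the hash of the previous entry, so reordering or
# editing past entries breaks the chain and is detectable on review.
import hashlib
import json
from datetime import datetime, timezone

class EnforcementLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64           # genesis value for the first entry

    def record(self, content_id: str, action: str, basis: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "content_id": content_id,
            "action": action,                # e.g. "removed", "blocked_at_upload"
            "basis": basis,                  # e.g. "trusted-flagger notice N-1002"
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry
```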

Broader framework

Part of a broader regulatory framework

The DSA does not operate in isolation.

Platform liability intersects with adjacent regulatory frameworks.

Compliance must therefore be managed across multiple regulatory layers simultaneously.

Where SASHA fits

Without persistent content identity, enforcement breaks down.

SASHA enables platforms to embed identity and governance directly into digital content.

This allows platforms to:

recognise previously identified content across uploads

apply enforcement decisions automatically at upload

prevent reappearance of illegal material — even when modified

generate structured, time-stamped audit logs

When illegal content is identified — for example via a trusted flagger — the decision is attached to the content itself.

Any future attempts to distribute that content can be detected and blocked automatically.

This enables platforms to demonstrate that they act expeditiously, consistently, and objectively — as required under the DSA.
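As a purely illustrative sketch (these names are hypothetical and are not SASHA's interface), an upload-time check against stored enforcement decisions can look like this: if the new file's fingerprint is a near match for content that already carries a decision, that decision is applied and the block can be logged.

```python
# Hypothetical illustration only; not SASHA's actual API.
# enforcement_index maps known content fingerprints to stored decisions.
from typing import Optional

def check_upload(upload_fingerprint: int,
                 enforcement_index: dict,
                 max_distance: int = 8) -> Optional[str]:
    """Return the stored decision if the upload matches known content, else None."""
    for known_fp, decision in enforcement_index.items():
        if bin(upload_fingerprint ^ known_fp).count("1") <= max_distance:
            return decision            # e.g. "block: trusted-flagger notice N-1002"
    return None

# A non-None result means the enforcement decision travels with the content:
# the re-upload is blocked before exposure and the action can be logged.
```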

Build defensible platform compliance

The DSA requires more than policies and manual moderation.

It requires systems that can detect, act, prevent, and document — consistently and at scale.

Book a meeting with our team