What platforms are actually responsible for — and how liability is established in practice.
The Digital Services Act (DSA) fundamentally changes how platform responsibility is defined.
Liability is no longer limited to whether illegal content exists on a platform. It is determined by how platforms detect, act on, and prevent illegal content at scale.
This shifts responsibility from passive hosting to active, operational control.
In practice
Under the DSA, liability is not based on policies or intent. It is based on observable behaviour.
Platforms are expected to:
detect illegal content
act on valid signals
prevent repeated exposure
document decisions consistently
Key principle
Liability is defined through action, timing, and consistency — not intent.
Notice and action
The DSA requires platforms to provide mechanisms for reporting illegal content.
Once a platform receives a sufficiently precise and substantiated notice, it obtains actual knowledge and must act expeditiously.
Signals can include:
user reports
legal notices
trusted flaggers
Failure to act after valid notice creates direct liability exposure.
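To make the notice-and-action flow concrete, the sketch below models a minimal intake step in Python. It is illustrative only: the names (Notice, record_notice, ACTION_QUEUE) and the validity check are assumptions, not structures prescribed by the DSA. The key point is that accepting a sufficiently precise and substantiated notice establishes actual knowledge and starts the clock on expeditious action.

```python
# Minimal sketch of a notice-and-action intake flow (illustrative only).
# All names (Notice, record_notice, ACTION_QUEUE) are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Notice:
    source: str                  # "user_report", "legal_notice", "trusted_flagger"
    content_id: str              # identifier of the reported item
    reason: str                  # alleged illegality, as described by the notifier
    evidence_url: Optional[str]  # reference substantiating the claim
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def is_sufficiently_precise(notice: Notice) -> bool:
    """A valid notice identifies the content and substantiates the claim."""
    return bool(notice.content_id and notice.reason and notice.evidence_url)

ACTION_QUEUE: list[Notice] = []

def record_notice(notice: Notice) -> bool:
    """Accepting a valid notice establishes actual knowledge;
    from this point the platform must act expeditiously."""
    if not is_sufficiently_precise(notice):
        return False                # invalid notice: no actual knowledge yet
    ACTION_QUEUE.append(notice)     # queue for expeditious review and action
    return True
```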
Actual knowledge
"Actual knowledge" is a critical legal threshold.
A platform can no longer remain neutral once it:
receives a valid report
identifies illegal content internally
processes a trusted signal
At this point, inaction is no longer passive — it becomes liability.
Removing content once does not resolve the problem.
Illegal content frequently reappears through:
re-uploads of the same file
modified copies (cropped, re-encoded, or otherwise transformed)
distribution in different formats
Core issue
Without persistent content identity, platforms cannot recognise previously removed content. Each re-upload is treated as a new case — making consistent enforcement impossible.
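A common way to give content a persistent identity is to store a perceptual fingerprint of removed material and compare new uploads against it, so that modified copies still match the original decision. The sketch below is a minimal, hypothetical illustration of that idea; the fingerprinting step itself (for example a perceptual hash) is abstracted away, and all names are placeholders.

```python
# Illustrative sketch: recognising re-uploads via perceptual fingerprints.
# How the fingerprint is computed is out of scope here; in practice a
# perceptual hash of the media would be used. Names are hypothetical.

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return (a ^ b).bit_count()

class RemovedContentIndex:
    """Fingerprints of content already removed under a prior decision."""
    def __init__(self, max_distance: int = 8):
        self.max_distance = max_distance
        self._fingerprints: dict[int, str] = {}   # fingerprint -> decision id

    def register_removal(self, fingerprint: int, decision_id: str) -> None:
        self._fingerprints[fingerprint] = decision_id

    def match(self, fingerprint: int) -> str | None:
        """Return the prior decision if the upload is a (possibly modified) re-upload."""
        for known, decision_id in self._fingerprints.items():
            if hamming_distance(fingerprint, known) <= self.max_distance:
                return decision_id
        return None
```

A small distance threshold lets near-duplicates (re-encoded or lightly edited copies) match the original decision, whereas exact-match hashing would treat every modification as new content.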
Trusted flaggers
The DSA introduces trusted flaggers, whose reports must be prioritised and handled without undue delay.
This creates additional operational requirements:
faster response times
prioritised signal processing
consistent handling across cases
stronger documentation expectations
Platforms must be able to process and prioritise signals correctly — not just receive them.
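One way to meet the prioritisation requirement operationally is to order incoming signals by source before review. The sketch below uses a simple priority queue; the priority values and the signal structure are assumptions for illustration, not values defined by the DSA.

```python
# Illustrative sketch: handling trusted-flagger notices ahead of other signals.
import heapq
from itertools import count

PRIORITY = {"trusted_flagger": 0, "legal_notice": 1, "user_report": 2}

_queue: list[tuple[int, int, dict]] = []
_order = count()  # tie-breaker preserving arrival order within a priority level

def enqueue(signal: dict) -> None:
    priority = PRIORITY.get(signal["source"], 3)
    heapq.heappush(_queue, (priority, next(_order), signal))

def next_signal() -> dict | None:
    """Trusted-flagger notices come first, then other signals in arrival order."""
    return heapq.heappop(_queue)[2] if _queue else None
```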
Most platforms do not fail because they lack policies. They fail because they cannot enforce them consistently.
Common gaps:
no persistent way to recognise previously removed content
inconsistent decisions across similar cases
delays in handling prioritised signals
incomplete or unverifiable documentation
Result
Enforcement becomes inconsistent, and compliance becomes difficult to defend.
Moderation is inherently reactive:
content is handled after exposure
decisions are isolated
enforcement lacks continuity
Scaling moderation increases cost — but does not create control.
You cannot moderate your way into compliance.
What's required
To operate defensibly under the DSA, platforms must move beyond moderation.
They need systems that enable:
01 Previously identified content must be detectable across uploads, formats, and transformations.
02 Decisions must be applied uniformly across similar cases and over time.
03 Signals — especially from trusted flaggers — must be processed without delay.
04 All actions must be traceable, time-stamped, and verifiable under regulatory review (see the sketch below).
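For the documentation requirement, the sketch below shows one possible shape of a time-stamped, structured audit record. The field names are hypothetical; what matters is that every action is recorded with when it happened, what was done, and on what basis.

```python
# Illustrative sketch: an append-only, time-stamped audit record per action.
import json
from datetime import datetime, timezone

def audit_entry(content_id: str, action: str, basis: str, signal_source: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content_id": content_id,
        "action": action,            # e.g. "removed", "blocked_at_upload"
        "legal_basis": basis,        # why the action was taken
        "signal_source": signal_source,
    }
    return json.dumps(record, sort_keys=True)  # written to an immutable log store
```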
This represents a shift from handling content to controlling it at a system level.
Broader framework
The DSA does not operate in isolation.
Platform liability intersects with other regulatory obligations that apply to digital content.
Compliance must therefore be managed across multiple regulatory layers simultaneously.
SASHA enables platforms to embed identity and governance directly into digital content.
This allows platforms to:
recognise previously identified content across uploads
apply enforcement decisions automatically at upload
prevent reappearance of illegal material — even when modified
generate structured, time-stamped audit logs
When illegal content is identified — for example via a trusted flagger — the decision is attached to the content itself.
Any future attempts to distribute that content can be detected and blocked automatically.
This enables platforms to demonstrate that they act expeditiously, consistently, and objectively — as required under the DSA.
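As a rough illustration of this pattern (not SASHA's actual implementation or API), the sketch below reuses the hypothetical RemovedContentIndex and audit_entry helpers from the earlier sketches: at upload time, a fingerprint match against a prior decision triggers an automatic block and a corresponding audit record.

```python
# Illustrative upload-time check combining the earlier hypothetical sketches.
AUDIT_LOG: list[str] = []   # stand-in for the platform's audit store

def handle_upload(fingerprint: int, content_id: str, index: RemovedContentIndex) -> bool:
    """Block re-uploads of content already covered by an enforcement decision."""
    decision_id = index.match(fingerprint)
    if decision_id is None:
        return True                                   # no prior decision: allow the upload
    AUDIT_LOG.append(audit_entry(content_id, "blocked_at_upload",
                                 basis=f"prior decision {decision_id}",
                                 signal_source="automated_match"))
    return False                                      # distribution blocked automatically
```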
The DSA requires more than policies and manual moderation.
It requires systems that can detect, act, prevent, and document — consistently and at scale.