How to document the decision not to do a DPIA (so it’s audit-proof)
In the world of GDPR compliance, we spend a lot of energy discussing how to conduct a Data Protection Impact Assessment (DPIA). But there is a scenario that happens far more often: the decision that a project does not need one.
This is a dangerous blind spot for many organizations.
Under the GDPR’s Accountability Principle (Article 5(2)), it is not enough to simply be compliant; you must be able to demonstrate compliance. If a regulator asks why you didn't perform a DPIA for a specific data-heavy feature, saying "We thought it was low risk" is not a defense. You need proof that you asked the question, analyzed the risk, and formally concluded "No."
This record is often called a DPIA Threshold Assessment or a Necessity Assessment. Here is how to create one that is maximally compliant and defensible.
The Documentation Trap: Inherent vs. Residual Risk
When you document a decision not to conduct a DPIA, the biggest mistake is presenting a sanitized version of reality.
Auditors get suspicious when they see a complex project with a "Necessity Assessment" full of flat "No" responses. If your project touches personal data, there is rarely zero risk. Claiming zero risk looks like negligence.
To make your "No" decision defensible, your documentation must distinguish between Inherent Risk (the raw risk before controls) and Residual Risk (the risk after your security measures).
The "Flat No" (Weak Documentation):
Does the project involve sensitive data?
[x] No. (Technically false, but you checked 'No' because you felt it was safe.)
Result: This looks like you didn't understand the data.
The "Mitigated No" (Strong Documentation):
Does the project involve sensitive data?
[ ] Yes. (Acknowledge the inherent risk).
Justification for no DPIA: While sensitive data is present (Inherent Risk), it is hashed at the point of entry and stored with strict access controls (Safeguards). Therefore, the likelihood of harm is negligible (Residual Risk).
The Takeaway:
Your decision record shouldn't just say "Safe." It should say "Safe because..."
By explicitly documenting the gap between the raw risk and the final state, you prove to the regulator that you didn't ignore the dangers—you engineered them out.
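The gap between inherent and residual risk can be made concrete with a simple likelihood-times-impact matrix. The scores and the mitigation mapping below are purely illustrative, not a prescribed GDPR methodology; the point is that the record should capture both numbers and the controls that explain the difference.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1-5 scale; the product is the risk score."""
    return likelihood * impact

# Inherent risk: sensitive data, no controls applied yet.
inherent = risk_score(likelihood=4, impact=5)

# Safeguards (hashing at entry, strict access controls) reduce the
# likelihood of harm; the impact if it happened is unchanged.
residual = risk_score(likelihood=1, impact=5)

print(f"Inherent: {inherent}, Residual: {residual}")
# The decision record should show BOTH figures and the controls that
# explain the gap: "Safe because...", not just "Safe".
```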
What "Maximally Compliant" Looks Like
A standard checklist gets you through the motions. A "Gold Standard" decision record protects you three years from now, when the team has changed and the regulator comes knocking.
A robust assessment record includes three specific layers:
- The Trigger Check: Systematically checking against Article 35(3) criteria, WP29 guidelines, and—crucially—your national authority's "Blacklist" (Article 35(4)).
- The Safeguard Shield: Explicitly listing the technical and organizational measures (TOMs) that reduce the risk.
- The Reassessment Trigger: A promise to review the document if the "facts on the ground" change (e.g., new AI model, new API integration).
The Template
Below is a structure you can use immediately. It bridges the gap between a quick email and a full 20-page DPIA. It is short enough to be operational but detailed enough to survive scrutiny.
Pro Tip: Store this in your GRC system or Privacy Management tool and link it directly to your Record of Processing Activities (RoPA).
DPIA Necessity Assessment (Decision Record)
Project / Feature: [Name]
Processing Owner: [Name]
Privacy Owner / DPO: [Name]
Date: [YYYY-MM-DD]
Decision: ☐ DPIA required ☐ DPIA recommended ☒ DPIA not required (rationale below)
1. Processing Summary
Briefly describe the "What" and "Why" to set the context.
- Purpose: [Why are we doing this?]
- Data Categories: [e.g., Financial, Health, Contact info]
- Scale: [Volume of data / Number of users]
- Retention: [How long is data kept?]
2. The Trigger Check (Article 35 "Likely High Risk")
A. EDPB/WP29 Screening Criteria
Check any that apply. If you check 2 or more, a full DPIA is usually required unless you have strong safeguards.
- ☐ Evaluation/scoring
- ☐ Automated decision-making with legal effect
- ☐ Systematic monitoring
- ☐ Sensitive data
- ☐ Large scale
- ☐ Matching/combining datasets
- ☐ Vulnerable subjects (employees, children)
- ☐ Innovative use of new tech
- ☐ Preventing exercise of a right
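The screening rule of thumb above ("two or more criteria usually means a full DPIA") is easy to encode if your GRC tooling supports it. The criterion identifiers and the `assess` helper below are illustrative, not a real API; the output is a provisional recommendation that a human (ideally the DPO) still signs off on.

```python
# The nine EDPB/WP29 screening criteria, as illustrative identifiers.
EDPB_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_legal_effect",
    "systematic_monitoring",
    "sensitive_data",
    "large_scale",
    "matching_combining_datasets",
    "vulnerable_subjects",
    "innovative_technology",
    "prevents_exercise_of_right",
}

def assess(triggered: set) -> str:
    """Return a provisional recommendation from the triggered criteria."""
    unknown = triggered - EDPB_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    if len(triggered) >= 2:
        return "DPIA required (rule of thumb: two or more criteria)"
    if len(triggered) == 1:
        return "DPIA recommended: document safeguards and rationale"
    return "DPIA not required: record the assessment anyway"
```

Note that the rule of thumb cuts one way only: two triggers point toward a DPIA, but zero triggers never excuse you from checking your supervisory authority's list in the next step.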
B. Supervisory Authority Lists (Art. 35(4))
- Did we check the national "Blacklist"? ☐ Yes
- Outcome: [e.g., "Verified against ICO/CNIL list; our operation is not listed."]
3. Safeguards (The "Why it's safe" Shield)
Crucial Section: If you checked any boxes above, explain here why the risk is actually low.
- Minimization: [e.g., "We are only collecting email, not names."]
- Security: [e.g., "Data is encrypted at rest; access is RBAC restricted."]
- Vendor Controls: [e.g., "DPA signed with strict sub-processor limits."]
- Transparency: [e.g., "Users can opt-out at any time via settings."]
4. Decision Rationale
We conclude a full DPIA is not required because:
Based on the safeguards listed above, the processing does not result in a "likely high risk" to the rights and freedoms of individuals. Although [Risk Factor X] is present, it is mitigated by [Control Y].
5. Reassessment Plan
Next Review Date: [Date]
Reassess immediately if:
- Scale of processing increases significantly.
- New category of data (e.g., biometric) is added.
- Automated decision-making is introduced.
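If you follow the earlier Pro Tip and keep these records in a GRC or privacy-management tool, the same template can live as machine-readable data linked to the RoPA entry. The field names and example values below are illustrative assumptions, not a standard schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class NecessityAssessment:
    """One DPIA necessity decision record, mirroring the template above."""
    project: str
    processing_owner: str
    privacy_owner: str
    date: str                              # YYYY-MM-DD
    decision: str                          # "required" | "recommended" | "not_required"
    triggered_criteria: list = field(default_factory=list)
    safeguards: list = field(default_factory=list)
    rationale: str = ""
    next_review: str = ""

# Hypothetical example record for illustration only.
record = NecessityAssessment(
    project="Churn-prediction feature",
    processing_owner="Jane Doe",
    privacy_owner="DPO",
    date="2024-05-01",
    decision="not_required",
    triggered_criteria=["evaluation_or_scoring"],
    safeguards=["pseudonymised at ingestion", "RBAC-restricted access"],
    rationale="Inherent risk mitigated; residual risk negligible.",
    next_review="2025-05-01",
)

print(json.dumps(asdict(record), indent=2))
```

Keeping the record structured makes the Reassessment Plan enforceable: a scheduled job can flag any record whose `next_review` date has passed.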
Summary
Documentation is your defense. By spending 15 minutes filling out a structured Necessity Assessment, you save yourself hours of panic during an audit. It proves you didn't ignore the risk—you managed it.