
E3 — Misinformation & Disinformation

Medium severity · EU AI Act Art. 50 · NIST AI 600-1 · Australian Disinformation Register · CDSA

Domain: E — Fairness & Social | Jurisdiction: AU, EU, Global


Layer 1 — Executive card

AI systems generate or amplify false, misleading, or deceptive information at scale — creating systemic information integrity risk.

AI dramatically lowers the cost of producing convincing false content at scale. An AI-generated false news article about a pharmaceutical company's clinical trial failure reportedly erased significant shareholder value before being debunked (see Incident examples below). The EU AI Act Art. 50 creates mandatory disclosure obligations for AI-generated content presented to natural persons.

Do we have synthetic content disclosure mechanisms in place for AI-generated content presented to natural persons, as required by EU AI Act Art. 50?

AI can generate convincing false information at scale — fake news, fabricated quotes, synthetic personas. The EU AI Act Art. 50 creates mandatory disclosure obligations. Your organisation faces risk in two directions: AI systems may generate misinformation inadvertently, and synthetic content attributed to your organisation may be used against you.


Layer 2 — Practitioner overview

Likelihood drivers

  • AI-generated content deployed at scale without human review
  • No content provenance or watermarking on AI-generated outputs
  • No monitoring for AI-generated content misuse or manipulation
  • Synthetic content disclosure obligations not implemented

Consequence types

| Type | Example |
| --- | --- |
| Market manipulation | False AI-generated information eroding shareholder value |
| Regulatory enforcement | EU AI Act synthetic content disclosure obligations |
| Reputational harm | Deepfake or AI-generated false content attributed to organisation |
| Information integrity | AI-generated synthetic personas in public consultations |

Affected functions

Communications · Legal · Compliance · Risk · Marketing

Controls summary

| Control | Owner | Effort | Go-live? | Definition of done |
| --- | --- | --- | --- | --- |
| Synthetic content disclosure | Compliance | Low | Required | All AI-generated content presented to natural persons disclosed as such per EU AI Act Art. 50. Mechanism implemented and tested before go-live. |
| Content provenance (C2PA) | Technology | Medium | Post-launch | C2PA content authentication evaluated. Deployment decision documented. If deployed, official organisational media signed with organisational certificate. |
| AI content detection for inbound content | Compliance | Medium | Post-launch | AI-generated content detection deployed for high-stakes inbound content. Results reviewed as part of content processing workflow. |
| Media literacy and staff training | HR | Low | Post-launch | Staff educated on identifying AI-generated content. Training completed and tracked. |

Layer 3 — Controls detail

E3-001 — Synthetic content disclosure

Owner: Compliance | Type: Preventive | Effort: Low | Go-live required: Yes

Any AI system that generates text, images, audio, or video presented to natural persons must disclose that the content is AI-generated. This is both an EU AI Act legal obligation (Art. 50) and an ethical requirement — a person interacting with AI-generated content has a right to know they are not interacting with a human author.

Implementation requirements: (1) Scope definition — identify every touchpoint in the product or service where AI-generated content is presented to natural persons: chatbot responses, AI-generated documents, synthetic images, audio narration, video content. Disclosure must be applied consistently — selective disclosure that only covers some touchpoints creates both legal risk and user trust damage; (2) Disclosure mechanism — implement disclosure in a manner that is clear, prominent, and understandable. Acceptable mechanisms: (a) inline label on generated content ("Generated by AI"); (b) system-level disclosure at session start for AI chatbots; (c) metadata embedding for synthetic media (C2PA standard — see E3-002); (d) watermark for synthetic audio and video. Buried disclosures in terms and conditions do not satisfy the EU AI Act requirement; (3) Chatbot identity disclosure — AI chatbots must not deceive users into believing they are human. On direct inquiry ("Are you a human?", "Am I talking to a person?"), the system must disclose it is an AI. This obligation applies even when the chatbot has a persona or human-sounding name; (4) Implementation testing — test the disclosure mechanism before go-live: confirm it appears consistently across all touchpoints, is visible on all device types (mobile, desktop), and cannot be disabled by users in ways that create a misleading impression; (5) Documentation — document the disclosure mechanism, when it was implemented, and how compliance was confirmed. Retain as evidence of EU AI Act Art. 50 compliance.
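A minimal sketch of the go-live testing in point (4): enumerate every disclosure touchpoint and device type and confirm a visible disclosure is present in each rendered output. The touchpoint names and `get_rendered_output` stub are hypothetical stand-ins for your product's real surfaces and rendering pipeline, not a prescribed harness.

```python
# Illustrative go-live check for disclosure coverage (E3-001, point 4).
# TOUCHPOINTS and get_rendered_output are hypothetical placeholders.

TOUCHPOINTS = ["chatbot_web", "chatbot_mobile", "generated_report_pdf", "image_export"]
DEVICE_TYPES = ["desktop", "mobile"]

def get_rendered_output(touchpoint: str, device: str) -> dict:
    """Stub: in production, render AI-generated content for this
    touchpoint/device pair through the real pipeline."""
    return {"ai_disclosure_label": "Generated by AI"}  # demo value

def disclosure_coverage_failures() -> list[str]:
    """Return touchpoint/device pairs missing a visible AI disclosure."""
    failures = []
    for tp in TOUCHPOINTS:
        for device in DEVICE_TYPES:
            output = get_rendered_output(tp, device)
            if not output.get("ai_disclosure_label") and not output.get("session_disclosure"):
                failures.append(f"{tp}/{device}")
    return failures  # empty list == every touchpoint discloses
```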

Jurisdiction notes: AU — no mandatory AI content disclosure obligation as of 2026; OAIC has noted AI disclosure as an emerging expectation in privacy policy and consumer communications | EU — EU AI Act Art. 50 — mandatory disclosure for AI chatbots and AI-generated synthetic media presented to natural persons; Art. 50 transparency obligations apply from August 2, 2026 (obligations for GPAI providers under the Act began August 2, 2025) | US — FTC Act Section 5 — undisclosed AI-generated content in consumer contexts may constitute deceptive practice; no federal mandate as of 2026. Several states are considering disclosure requirements


E3-002 — Content provenance (C2PA)

Owner: Technology | Type: Detective | Effort: Medium | Go-live required: No (post-launch)

C2PA (Coalition for Content Provenance and Authenticity) is an open technical standard for embedding cryptographically signed provenance metadata into content — establishing who created it, when, with what tools, and whether it has been modified. It is the emerging technical infrastructure for AI content attribution and the mechanism for EU AI Act Art. 50 machine-readable disclosure for synthetic media.

Implementation requirements: (1) Standard evaluation — assess whether C2PA is appropriate for the organisation's content types and workflows. C2PA is well-suited to: images, video, audio, and documents. Assess current tool and platform support for C2PA in the content creation and publishing pipeline; (2) Signing infrastructure — if C2PA is deployed, obtain a C2PA certificate from an accredited issuer. All AI-generated official organisational media (press imagery, video content, official publications) should be signed with the organisational certificate. Maintain the private key in a secure key management system; (3) Provenance display — implement a mechanism for readers to verify content provenance (C2PA-compatible viewer, or reference to verification tools). The value of C2PA is only realised if recipients can verify signatures; (4) Inbound content — for organisations that receive external content as part of operations (journalism, research, procurement), C2PA metadata on received content should be checked as part of content processing. Absence of C2PA metadata is not confirmation of inauthenticity, but presence of authentic C2PA metadata is confirmatory; (5) Decision documentation — if C2PA is not deployed, document the assessment and rationale. The decision to defer should reflect a genuine assessment of applicability, not a default non-decision.
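As a sketch of the signing step in point (2), the snippet below shells out to the open-source c2patool CLI to attach a signed manifest to a media file. The `-m` (manifest) and `-o` (output) flags reflect c2patool's documented usage at the time of writing, but verify them against the version you install; the paths, claim generator name, and assertion fields are illustrative placeholders, and certificate configuration is assumed to be done per the c2patool documentation.

```python
import json
import subprocess
from pathlib import Path

def sign_with_c2pa(source: Path, output: Path, manifest: dict) -> None:
    """
    Attach a C2PA manifest to a media file using c2patool.
    Assumes c2patool is installed and configured with the
    organisational signing certificate.
    """
    manifest_file = output.with_suffix(".manifest.json")
    manifest_file.write_text(json.dumps(manifest))
    # Flags assumed from c2patool docs: -m <manifest>, -o <output file>.
    subprocess.run(
        ["c2patool", str(source), "-m", str(manifest_file), "-o", str(output)],
        check=True,
    )

# Example manifest declaring the content as AI-generated.
# Field names follow the manifest structure used in c2patool examples;
# confirm against the current C2PA specification before relying on them.
manifest = {
    "claim_generator": "ExampleOrg-Publisher/1.0",  # hypothetical name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {"actions": [{
                "action": "c2pa.created",
                "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
            }]},
        },
    ],
}
```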

Jurisdiction notes: EU — EU AI Act Art. 50(2) — AI systems generating synthetic media must mark outputs with machine-readable format. C2PA is the emerging implementation standard for this requirement | AU — no mandatory C2PA requirement; voluntary adoption aligns with DISR voluntary AI Safety Standard | US — NIST AI 600-1 — content provenance is identified as a key mitigation for synthetic content risks


E3-003 — AI content detection for inbound content

Owner: Compliance | Type: Detective | Effort: Medium | Go-live required: No (post-launch)

For organisations where inbound content authenticity is material — news organisations, insurers assessing submitted evidence, courts, financial regulators — AI-generated content detection tools provide a signal for whether inbound content may be synthetic. These tools are probabilistic, not definitive, and must be used as decision-support rather than determinative.

Implementation requirements: (1) Use case scoping — identify which inbound content types are material enough to warrant AI detection screening: submitted documents, photographs as evidence, regulatory submissions, research inputs. Not all inbound content requires screening; prioritise by the consequence of acting on false content; (2) Tooling selection — AI detection tools vary significantly in accuracy and are subject to rapid obsolescence as generation models improve. Select tools with documented false positive and false negative rates against current generation models. Review tool effectiveness annually; (3) Decision-support framing — train staff using AI detection results to understand the probabilistic nature of the output. A high AI-detection score is a flag for investigation — it is not proof the content is fake. A low score is not proof the content is authentic. Human investigation remains required for material content; (4) Investigation protocol — define what happens when AI detection flags content: who investigates, what investigation steps are taken (reverse image search, metadata analysis, source verification), and who makes the final determination. Document decisions on flagged content; (5) Limitations disclosure — where AI detection is used in consequential processes (insurance claims, legal proceedings), disclose its use and its limitations. Relying solely on AI detection in a consequential process without disclosure and without human review is not appropriate.
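A sketch of the decision-support framing in points (3) and (4): detection scores route content to human investigation rather than determine outcomes. The thresholds and the `DetectionResult` shape are hypothetical and would need calibration against the selected tool's documented error rates.

```python
from dataclasses import dataclass
from enum import Enum

class Triage(Enum):
    PROCEED = "proceed"          # low score: normal processing continues
    INVESTIGATE = "investigate"  # flagged for human investigation
    ESCALATE = "escalate"        # high-stakes content with high score: senior review

@dataclass
class DetectionResult:
    score: float       # 0.0-1.0 probability-like output from a detection tool
    tool: str
    tool_version: str  # record for annual effectiveness review (point 2)

def triage_inbound_content(result: DetectionResult, high_stakes: bool) -> Triage:
    """
    Map a probabilistic detection score to a human-review action.
    Thresholds (0.5 / 0.8) are illustrative only. Note that PROCEED
    is not proof of authenticity, and ESCALATE is not proof of fakery.
    """
    if result.score >= 0.8 and high_stakes:
        return Triage.ESCALATE
    if result.score >= 0.5:
        return Triage.INVESTIGATE
    return Triage.PROCEED
```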

Jurisdiction notes: AU — Evidence Act obligations — AI detection evidence would require expert testimony on methodology and reliability in legal proceedings | EU — GDPR Art. 22 — if AI detection results in automated decisions affecting individuals, safeguards apply | US — evidence admissibility standards vary by jurisdiction; AI detection findings would require expert evidence foundation


E3-004 — Media literacy and staff training

Owner: HR | Type: Preventive | Effort: Low | Go-live required: No (post-launch)

Staff who consume, evaluate, and act on information — in finance, in communications, in risk functions — need practical skills to recognise AI-generated content and understand why detection is becoming harder. This is not generic digital literacy training; it must be specific to AI-generated content and current as of the training date.

Implementation requirements: (1) Content — training must cover: (a) how AI-generated text, images, and audio are created and why they are increasingly indistinguishable; (b) specific indicators that may remain: metadata anomalies, C2PA absence, distributional tells in generated images (hands, lighting, backgrounds); (c) detection tool capabilities and limitations — what the tools can and cannot tell you; (d) verification processes — how to check the authenticity of high-stakes content before acting on it; (e) organisational policy on use of AI content detection tools; (2) Targeting by role — not all staff need the same depth. Finance staff need specific training on AI-generated payment instructions and voice cloning. Communications staff need training on synthetic imagery. Research staff need training on AI-generated text. Tailor training to role-specific exposure; (3) Currency — AI content capabilities are advancing at a rapid pace. Training that was current eighteen months ago may be materially outdated. Annual refresh minimum; additional refreshes when significant capability jumps occur (e.g. new voice cloning services become accessible); (4) Practical scenarios — include examples relevant to the organisation's specific risk context. Abstract training about AI deepfakes is less effective than training that uses realistic examples from the industry the staff work in.
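The role targeting in point (2) and the currency requirement in point (3) can be encoded as data that the HR system checks against. A minimal sketch, with illustrative role names, module names, and the 12-month refresh window from the text:

```python
from datetime import date, timedelta

# Role-specific module assignments (point 2) — names are illustrative.
TRAINING_MATRIX = {
    "finance": ["voice_cloning", "ai_payment_fraud", "verification_process"],
    "communications": ["synthetic_imagery", "c2pa_basics", "verification_process"],
    "research": ["ai_generated_text", "detection_tool_limits"],
}

REFRESH_INTERVAL = timedelta(days=365)  # annual refresh minimum (point 3)

def training_current(last_completed: date | None, today: date) -> bool:
    """Training is current only if completed within the refresh window."""
    return last_completed is not None and today - last_completed <= REFRESH_INTERVAL
```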

Jurisdiction notes: AU — APRA CPS 234 — security awareness training obligation; AI-generated phishing and social engineering is a current threat that training must address | EU — NIS2 — training obligations for entities in scope; AI-generated social engineering is within scope of NIS2 security awareness requirements | US — FFIEC — financial institution security awareness training must reflect current threat landscape


KPIs

| Metric | Target | Frequency |
| --- | --- | --- |
| AI-generated content presented without disclosure | Zero | Monitored continuously |
| C2PA evaluation decision documented | Yes (approved, or deferred with rationale) | At initial assessment |
| Staff media literacy training completion | > 90% of targeted roles | Annual |
| AI detection false positive rate review | Within documented acceptable threshold | Quarterly |
| Chatbot identity disclosure tested | 100% of AI interfaces tested on direct inquiry | At each release |
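The first KPI can be computed continuously from delivery logs. A minimal sketch, assuming each delivery record carries an `ai_generated` flag and the `disclosure_applied` field set by `apply_disclosure` in Layer 4 below; the log schema is illustrative.

```python
def undisclosed_delivery_count(delivery_log: list[dict]) -> int:
    """
    KPI: AI-generated content presented without disclosure (target: zero).
    Counts AI-generated deliveries whose record shows no applied
    disclosure method.
    """
    return sum(
        1 for record in delivery_log
        if record.get("ai_generated") and not record.get("disclosure_applied")
    )
```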

Layer 4 — Technical implementation

Synthetic content disclosure — implementation patterns

```python
from enum import Enum

class ContentType(Enum):
    TEXT = "text"
    IMAGE = "image"
    AUDIO = "audio"
    VIDEO = "video"
    DOCUMENT = "document"

class DisclosureMethod(Enum):
    INLINE_LABEL = "inline_label"      # "Generated by AI" label on content
    SESSION_BANNER = "session_banner"  # Disclosure at session start for chatbots
    METADATA_C2PA = "metadata_c2pa"    # Cryptographic provenance in content metadata
    WATERMARK = "watermark"            # Perceptual watermark for audio/video

# content_type -> required_methods
DISCLOSURE_POLICY = {
    ContentType.TEXT: [DisclosureMethod.INLINE_LABEL],
    ContentType.IMAGE: [DisclosureMethod.INLINE_LABEL, DisclosureMethod.METADATA_C2PA],
    ContentType.AUDIO: [DisclosureMethod.INLINE_LABEL, DisclosureMethod.WATERMARK],
    ContentType.VIDEO: [DisclosureMethod.INLINE_LABEL, DisclosureMethod.WATERMARK,
                        DisclosureMethod.METADATA_C2PA],
    ContentType.DOCUMENT: [DisclosureMethod.INLINE_LABEL],
}

def apply_disclosure(content: dict, content_type: ContentType) -> dict:
    """
    Apply required disclosure mechanisms to AI-generated content
    before delivery to user.
    """
    required_methods = DISCLOSURE_POLICY[content_type]
    applied = []

    for method in required_methods:
        if method == DisclosureMethod.INLINE_LABEL:
            content["ai_disclosure_label"] = "Generated by AI"
            applied.append(method.value)
        elif method == DisclosureMethod.SESSION_BANNER:
            content["session_disclosure"] = True
            applied.append(method.value)
        elif method == DisclosureMethod.METADATA_C2PA:
            # In production: call C2PA signing service
            content["c2pa_signed"] = False  # placeholder
            applied.append(method.value)
        elif method == DisclosureMethod.WATERMARK:
            content["watermark_applied"] = False  # placeholder — call watermarking service
            applied.append(method.value)

    content["disclosure_applied"] = applied
    content["eu_ai_act_art50_compliant"] = (
        len(applied) == len(required_methods)
    )
    return content

def chatbot_identity_check(user_message: str) -> bool:
    """
    Detect direct identity inquiry — must always disclose AI nature.
    Returns True if message is a direct identity question.
    """
    IDENTITY_INDICATORS = [
        "are you human", "are you a person", "am i talking to",
        "is this a bot", "are you real", "are you ai",
        "are you a robot", "who am i talking to",
    ]
    msg_lower = user_message.lower()
    return any(indicator in msg_lower for indicator in IDENTITY_INDICATORS)
```
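A quick usage check of the two helpers above, assuming no fields beyond those the functions themselves set:

```python
# TEXT requires only an inline label, so the compliance flag is True.
response = apply_disclosure({"body": "Quarterly summary..."}, ContentType.TEXT)
assert response["ai_disclosure_label"] == "Generated by AI"
assert response["eu_ai_act_art50_compliant"] is True

# A direct identity question must trigger an explicit AI disclosure.
if chatbot_identity_check("Am I talking to a bot?"):
    reply = "You are talking to an AI assistant, not a human."
```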

Compliance implementation

Australia: No mandatory AI content disclosure legislation as of 2026. However, the Australian Consumer Law prohibits misleading or deceptive conduct — presenting AI-generated content in a way that creates a false impression of human authorship may constitute a breach. OAIC guidance expects organisations to be transparent about AI use in privacy policies and communications. The voluntary DISR AI Safety Standard includes transparency as a core principle.

EU: EU AI Act Art. 50 is the most prescriptive disclosure requirement globally. Key obligations: (1) AI chatbot systems must disclose they are AI unless obvious from context; (2) AI systems generating synthetic images, audio, or video must ensure outputs are marked in machine-readable format; (3) persons disseminating AI-generated deepfake content, or AI-generated text on matters of public interest, must disclose its AI origin. Art. 50 transparency obligations apply from August 2, 2026; GPAI provider obligations under the Act began August 2, 2025. C2PA is the emerging implementation mechanism for machine-readable marking.

US: No federal AI disclosure mandate as of 2026. FTC guidance on AI and deception (2023) indicates that undisclosed AI personas in consumer contexts may constitute deceptive practice under Section 5. Political advertising: several states require disclosure of AI-generated political content (California AB 2655, Texas HB 4337). The FTC has brought enforcement actions against companies for deceptive AI chatbot practices — including cases where chatbots impersonated humans in consumer interactions.


Incident examples

AI-generated false pharmaceutical news (illustrative, documented risk): An AI-generated fake news article about a pharmaceutical company's clinical trial failure reportedly erased significant shareholder value before being debunked, demonstrating the market manipulation potential of AI-generated misinformation.

EU election deepfakes (2024): AI-generated deepfake videos of politicians making inflammatory statements were distributed during EU elections to manipulate voter perception.


Scenario seed

Context: A public consultation on a proposed infrastructure project receives thousands of submissions. The project team notices many submissions use nearly identical language despite purporting to come from different individuals.

Trigger: Investigation reveals the submissions are AI-generated synthetic personas, manufactured to create the appearance of public opposition.

Difficulty: Advanced | Jurisdictions: EU, AU, Global

[Full scenario with discussion questions available in the AI Risk Training Module — coming soon.]