Once limited to parody videos, deepfake technology has matured to the point where entire developments, businesses, and leadership messages can be convincingly generated and delivered. For investors, this presents a serious challenge: how do you know whether the project you're funding actually exists?
- 25% of businesses report experiencing a deepfake or AI-manipulated incident in the past year.
- 20% of businesses say they’ve received deepfake threats and 12% fell victim.
- 36% of Australians have been targeted by deepfake scams in the past 12 months.
- 22% of those targeted lost money, while 48% didn’t report the fraud.
- AU$2.03 billion was lost to scams in 2024, with deepfakes driving a growing share.
These figures show that deepfakes are not a future concern; they are a present reality, eroding trust and creating material risks for investors and organisations alike.
The Investor Trap: Projects That Don’t Exist
A growing concern in markets such as property, mining, and infrastructure is the use of deepfakes to showcase glossy "progress updates" of developments that haven't even broken ground. Fraudsters fabricate:
- Drone flyovers of non-existent construction sites.
- Interviews with fake project managers generated by AI avatars.
- Investor updates with fabricated financial dashboards.
- Staged ground-breaking ceremonies.
For investors, especially those funding projects remotely, these fabrications can be difficult to spot, leading to significant financial loss and a collapse of trust once the fraud inevitably unravels.
Deepfakes Beyond Property Investment
The risks aren’t confined to real estate or infrastructure. Across industries, deepfakes are being weaponised in ways that pose serious risks to organisations and markets:
- Executive impersonation: Synthetic audio or video of CEOs making false statements, potentially manipulating share prices or reputational standing.
- False media coverage: Fabricated news clips showing “endorsements” or “partnerships” that were never made.
- Synthetic identity fraud: AI-generated personas used to front start-ups, pitch to investors, or pass KYC checks with fake but convincing “proof of life.”
- Political and regulatory manipulation: Fake commentary attributed to regulators or policymakers to sway investor sentiment or political agendas, as experienced by former Premier Annastacia Palaszczuk in Queensland.
Why This Matters for Investors and Organisations
Deepfakes erode the traditional markers of trust: face-to-face meetings, live presentations, and "seeing is believing." Without robust verification, investors may unknowingly fund phantom ventures. Organisations that fall victim to deepfake-driven fraud because guardrails were lacking face reputational damage, litigation, and loss of market confidence and consumer trust.
Guardrails for Resilience
At ADAICO, we view deepfakes as part of the broader AI risk landscape that requires proactive governance. Practical steps include:
- Verification frameworks: Independent validation of project progress before capital release.
- Technology countermeasures: Deploying AI-driven detection tools to flag manipulated media.
- Governance protocols: Establishing policies for how video, audio, and project updates are authenticated and reported to boards and investors.
- Awareness and training: Equipping investors and executives to spot red flags and question “too good to be true” updates.
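To make "authenticating video, audio, and project updates" concrete, one simple building block is checksum verification: the project team publishes a SHA-256 digest of each video or report through a separate trusted channel (for example, a signed email or an investor portal), and recipients verify the file against that digest before relying on it. The sketch below is illustrative only, and the function names are our own; it proves a file hasn't been altered in transit, but it is not a deepfake detector and does not replace full provenance standards such as C2PA.

```python
import hashlib
import hmac


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large video files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_update(path: str, published_digest: str) -> bool:
    """Check a received update file against the digest published
    via a separate, trusted channel. Uses a constant-time comparison."""
    return hmac.compare_digest(sha256_of_file(path), published_digest)
```

The crucial design point is that the digest must travel over a channel the fraudster does not control; a checksum delivered alongside the file it is meant to authenticate proves nothing.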
Final Word
Deepfakes are an emerging frontier of fraud that can distort markets, mislead investors, and undermine organisational resilience. As regulators race to catch up, it falls to boards, executives, and investors to put safeguards in place.
At ADAICO, we help organisations prevent, plan for, and respond to AI-driven risks like deepfakes, building resilience in an era where digital trust is under attack.
