Legal ethics in the age of deepfakes
By Gonzalo Soto,
Founder, WSC Legal
Artificial intelligence (AI) technology is evolving faster than the law can adapt.
Unlike previous technological advances, which were held back until their safety could be established, AI is already a very real presence in society, and its impact is yet to be fully understood.
One disturbing feature of AI is its ability to create fake videos and images of real people.
These deepfakes open up a plethora of legal and ethical concerns that need to be understood, given that the technology does not seem to be going anywhere.
What is the concern with deepfakes?
Where AI-produced content was once a distorted nightmare version of reality, with objects lacking permanence and people moving unnaturally, deepfakes are becoming increasingly realistic.
While there are some provisions to curtail their impact, such as embedding watermarks onto AI-produced videos, most safeguards have easy workarounds.
Filters can be tricked and watermarks removed, allowing content that should never be producible to be made with ease.
The result is material that may be illegal in its own right, as well as material with the potential to pervert the course of justice.
Where photographic and video evidence was once compelling in court, it can no longer be taken at face value, as there is a risk that it may be a deepfake.
It would be deeply problematic to prosecute or exonerate a person based on fictitious evidence.
How can legal ethics adapt to deepfakes?
The most obvious priority is ensuring that deepfakes do not materially harm a person.
This means, for example, condemning the creation of non-consensual sexual content and bringing those who create it to justice.
Early education, both in schools and at home, plays a crucial role in discouraging the malicious use of artificial intelligence.
In a narrower sense, legal professionals will need to become more sceptical about the validity of photos and videos put forward as evidence.
AI detection tools can be useful in this endeavour, though they are not without error.
As such, legal professionals can continue the push to establish clear standards and robust authentication tools.
Around the world, jurisdictions are struggling to keep pace with technology when it comes to establishing laws that protect users and others from the effects of AI.
Where possible, legal professionals should work to be part of these conversations to help shape policy and restore a sense of ethics to legal practice.
Cross-border collaboration can be a vital component of this, given that AI does not adhere to borders and content generated in one country can be spread globally with ease.
Through collaboration within the Lexlink Network, we can help make the world a safer place by managing the impact of AI deepfakes.
Get in touch with our team today to find out more about Lexlink’s work on managing the impact of deepfakes.
