Artificial intelligence has advanced rapidly and driven remarkable innovations, but it has also given rise to technologies such as deepfakes. This AI-generated media can look nearly indistinguishable from authentic content and raises complex ethical, legal, and social issues. Governments are now working to address deepfakes in order to protect people's reputations and preserve trust in digital media.
This article explains what deepfakes are, why they need to be regulated, how laws around the world address them, the challenges of regulating them, and the ethical and legal issues the technology has raised.
Deepfakes are a type of synthetic media in which artificial intelligence (AI) is used to superimpose one person's likeness or voice onto another's. The term combines "deep learning", the AI technique that powers these hyper-realistic forgeries, with "fake".
Deepfakes can imitate a person's appearance, voice, or mannerisms and are often indistinguishable from genuine material. While the technology has legitimate uses in entertainment, media, and elsewhere, it is frequently misused: deepfakes are typically created without the subject's consent, or in violation of it.
Their capacity to deceive the public, defame individuals, and undermine confidence in media has placed deepfakes at the center of ethical and legal debate. As the technology grows more sophisticated, countering it has become a priority for governments and technology companies worldwide.
Deepfakes began as a showcase of AI's potential but have become a serious social problem because of the misuse they enable. Governments and organizations increasingly recognize the need to regulate deepfakes due to the damage the technology inflicts across many areas of society.
Here’s a deeper look at the reasons driving the urgency to implement such regulations:
Victims of deepfake pornography, fabricated media, and similar abuses have suffered serious psychological and emotional harm.
Deepfakes pose a major risk to the integrity and credibility of democratic processes, particularly through their power to manipulate public opinion.
Deepfakes have also emerged as a sophisticated tool for criminals, contributing to significant financial losses.
Finally, the rise of deepfakes challenges the very foundation of truth and authenticity in the digital age.
Legal restrictions on deepfakes vary from country to country, reflecting differences in legal traditions, culture, and technological maturity. Some countries have enacted dedicated legislation, while others rely on existing laws to respond to the issues raised by this relatively young technology. Because the technology is advancing quickly and its misuse will only grow, the need for international cooperation and unified regulatory standards is becoming apparent.
At present, the United States has no federal legislation dedicated entirely to deepfakes, though efforts to fill this gap are emerging. One proposal, the No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act, would make it an offense to create a digital replica of another person, living or deceased, without their consent. The proposal covers both likeness and voice, reflecting the increasing sophistication of AI-generated content.
Other federal proposals include the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which protects performers' voices and likenesses, and the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which seeks to protect individuals from non-consensual explicit images. For now, however, the United States still lacks a single, comprehensive federal law regulating deepfakes.
Several U.S. states have passed laws targeting specific deepfake-related harms, such as election interference, non-consensual adult material, and identity theft. California has enacted notable statutes: Assembly Bill 730 outlaws deceptive deepfakes in political campaigns, and Assembly Bill 602 allows victims of non-consensual deepfake pornography to hold its creators accountable.
Texas has passed Senate Bill 751, which prohibits the creation and distribution of deepfake videos intended to influence elections. The state also bans sexually explicit deepfakes that impersonate a person without their permission. Other states, including Florida, New York, Illinois, and Virginia, have adopted regulations of their own. Because their definitions and scope are not uniform, however, these laws offer only piecemeal, state-by-state protection.
The UK's Online Safety Act 2023 criminalizes the sharing of fake sexually explicit images where the sender intends to cause distress or is reckless as to whether distress results. This law marks a significant step in the fight against the harms of deepfakes.
Those targeted by other forms of harmful deepfake content must rely on existing laws covering defamation, harassment, or data privacy, which can be a tall order. This highlights the need for further regulation to deal with the newer, more complex issues deepfakes raise.
The European Union's AI Act is a pioneering attempt to build a legal framework for AI systems, including deepfakes. Under its transparency provisions (Article 52(3)), creators of deepfakes must disclose that the content was synthetically generated. The Act does not prohibit deepfakes outright; rather, it is an evolving measure that tries to balance the potential benefits of the technology against the harm it can do to people's rights and freedoms.
The Act's text was negotiated in December 2023 and approved in 2024. Once it comes into force, it will place the EU at the vanguard of AI regulation and set the course for the rest of the world.
China has moved quickly against deepfake misuse, establishing a set of strict rules. Since the start of 2023, all AI-generated content must be clearly labeled so that end users are not misled, and violations carry sanctions, reflecting the government's determination to retain control over content produced and shared online.
China's regulations also address the practical problems of deepfakes used in fraud and disinformation, showing an intent to cover the full range of potential harms from this technology.
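To make such labeling obligations concrete, the following is a minimal sketch, assuming Python with the Pillow imaging library, of how a generation tool might attach a machine-readable disclosure tag to an image it produces. The file names, metadata keys, and model identifier are illustrative assumptions only; real compliance regimes, such as China's labeling rules or the EU AI Act's transparency provisions, define their own specific requirements and formats.

```python
# Minimal sketch: embedding an "AI-generated" disclosure label in PNG metadata.
# Assumes the Pillow library; file names and metadata keys are illustrative,
# not mandated by any specific regulation.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save an image with text metadata disclosing that it is AI-generated."""
    image = Image.open(src_path)

    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")    # disclosure flag
    metadata.add_text("generator", generator)    # which tool produced the image
    metadata.add_text("disclosure", "This image was synthetically generated.")

    image.save(dst_path, pnginfo=metadata)

# Hypothetical usage:
# label_as_synthetic("output.png", "output_labeled.png", "example-model-v1")
```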
Australia has only recently begun developing dedicated legislation on deepfakes, with the current emphasis on online safety and harm minimisation. Existing provisions on defamation, harassment, and misuse of data cover some deepfake cases, but such laws do not adequately address the harms posed by AI-generated content.
The Australian government's approach is to engage stakeholders in dialogue to establish a robust policy that both fosters innovation and protects consumers.
France has put anti-deepfake measures in place to prevent identity fraud and curb the spread of fake news. These laws penalize producers and distributors of manipulated content that causes harm. As in many other countries, however, French regulation targets specific concerns and does not yet address the broader scope of deepfakes.
Other countries, including Japan, Singapore, and India, are beginning to take deepfake regulation seriously. Legislation in these jurisdictions is still under development, but it aims to improve transparency and protect people from harm. Japan, for instance, is establishing ethical principles for the use of artificial intelligence, while Singapore is considering laws against online harms and fake news.
Regulating deepfakes is a complex task. The proliferation of the technology presents legal and ethical challenges that demand careful consideration:
Much remains uncertain about how to reconcile the right to free speech with the fight against deepfake harms. Excessive regulation threatens legitimate, innovative applications of AI, while a lack of regulation leaves victims and institutions exposed.
Deepfakes, often created for profit, use a person's image, voice, or identity without consent, violating their rights to privacy and self-determination. Non-consensual deepfake pornography and identity theft are among the worst forms of such abuse, and the harassment they enable can ruin personal and professional lives.
Apportioning blame for the damage deepfakes cause is not easy. Who should be held legally responsible: the creator of the deepfake, the platform hosting it, or the party distributing it? Enforcement and accountability become even harder when the perpetrators are anonymous actors on social media.
Legal systems must also guard against over-policing particular creators or groups, which could reinforce existing injustices. A fair approach requires that regulations do not marginalize certain categories of people, which demands careful, deliberate policy-making.
A major risk posed by deepfakes is the erosion of trust in media and institutions, which undermines shared notions of truth and authenticity. When convincing simulations become commonplace, the ethical concern is a shrinking space for truth in society.
If societies address these legal, ethical, and regulatory issues, they can harness the benefits of deepfake technology while containing its threats. Doing so requires an integrated, forward-looking strategy. As deepfake technology evolves, the regulatory landscape must adapt to meet its challenges through multi-faceted solutions.
Deepfakes can open an innovative new chapter in communication media while simultaneously posing severe threats. Their use has grown rapidly, creating a need for comprehensive legislation and standards to regulate the technology. Without such measures, the misuse of deepfakes could seriously endanger people, organizations, and financial systems and erode confidence in digital environments. The central challenge is to enable innovation without increasing the risk of harm from malicious actors on digital platforms.