Sir,
Legislators around the world are worried about the risks associated with generative AI, and efforts are being made, sometimes at the inter-parliamentary level, to work out best practices for this rapidly developing field. Singapore actively participates in such fora and regularly shares our experience with counterparts internationally. I support these moves to work together across borders to take a firm stance against malicious and downright criminal uses of this technology.
My cut will focus on how generative AI, through increasingly sophisticated deepfake technology, can supercharge the effects of online violence against individuals. I am mindful that some in this House have been victims of such deepfake online violence and have reported these matters to the relevant authorities, and I wish to acknowledge the real harm that such actions inflict on ALL victims. Today, I will focus on areas where I feel special attention is warranted.
Children
First, children. The Center for Democracy and Technology (CDT) in the US reported in 2024 that 40% of students and 29% of teachers were aware of deepfakes depicting individuals associated with their school being shared.
In Singapore, reports emerged in November 2024 that students from one of our schools were investigated for creating deepfake nude photos of female classmates and sharing them in WhatsApp groups, illustrating that we too are experiencing this problem.
There is increasing acknowledgement that, because of their young age and still-developing brains, children are more vulnerable to the long-lasting psychological damage caused by harmful deepfakes. Victims of such crimes suffer significant distress, anxiety and depression, and some even develop post-traumatic stress disorder. The effects also often have a long tail that stretches beyond social and emotional damage; some child victims end up unable to attend school because they are suffering so much.
Minding the gender gap
Next, minding the gender gap. While both males and females have been victims, women remain the overwhelming target of deepfakes, particularly those involving sexually explicit images. A 2019 industry report found that 100% of the examined content on ‘deepfake pornography websites’ targeted women. Some commentators also worry about the weaponisation of AI against women, particularly when facial search engines can scoop up these deepfaked images and link them to a person’s online identity for years to come.
There is thus concern about a chilling effect on women’s career progression. A 2020 study by the Economist Intelligence Unit notes that 7% of women surveyed lost or had to change jobs due to online violence, with 35% reporting mental health issues. Even more alarmingly, 9 in 10 women restrict their online activity in an attempt to protect themselves. This widens the digital gender divide and limits women’s access to employment, education, healthcare and community through digital spaces, correlating directly with lost career opportunities.
This also has implications in our efforts to get more women into politics. A 2024 Oxford study notes that women may be discouraged from running for public office when female politicians are targeted. A fellow delegate at a CPA conference on AI and misinformation last year shared that when both female and male politicians in his country fell victim to sexually explicit deepfakes, hardly anyone clicked on the links for the male politician, while the deepfaked content for the female politician went viral.
There is thus clearly a gendered dimension to the harms caused by AI, and our measures to deal with the problem have to address it.
Tackling the problem: steps taken so far
MHA announced last month that there will be amendments to the Penal Code to make clear that our offences apply to sexually explicit deepfakes produced through AI. I would like to seek clarification from the Minister on when we can expect these amendments to be tabled.
PM Wong also announced in October last year a new agency to tackle online harms, to be set up as a joint MDDI and MinLaw effort. I support this, as no victim should have to submit individual takedown requests. Equally welcome are the laws that MDDI announced will be introduced to allow victims to file civil claims against their perpetrators, and I look forward to hearing more details about these upcoming changes.
What more can we do?
I asked the MHA Minister in August 2024 to consider a model akin to the Anti-Scam Centre, to allow a centralised response to deepfake-related crimes. I hope the new online harms agency can be the base from which we work, and that its remit will include both psychological support and education.
Our officers handling such cases should be given regular, updated training so that they can support victims of such crimes through a victim-centric approach, as some victims may be hesitant to report such crimes out of shame, or out of fear of inadvertently triggering the Streisand effect, where attempts to remove content only draw more attention to it. It should also be easy for victims to be channelled to trained mental health professionals who can support them through the entire process.
For education, it is crucial that agencies also work across departments to ensure that both children and adults are aware of the real harms that such deepfakes can cause.
Better data to understand trends
Finally, I hope that the Ministry can start collecting and publishing granular data that track the issue over time, paying particular attention to vulnerable groups of victims such as children and women, so that we can all play our part in fighting the scourge of these crimes.