Minister warns of consequences for X if child sex abuse images are still generated

There will be “serious consequences” for X if the EU finds it continues to facilitate the sharing of non-consensual intimate images and child sexual abuse material (CSAM), the artificial intelligence (AI) minister has said.

The European Commission has launched an investigation into the social media company after reports that CSAM and non-consensual sexualised images of adults were generated through the Grok AI tool and disseminated on the platform. It is examining whether the company is meeting its obligations under the Digital Services Act (DSA).

X is obliged to assess and mitigate any potential systemic risks related to its services in the EU, including the spread of illegal content and potential threats to fundamental rights, including those of minors, posed by its platform and features. The maximum fine for companies under the DSA is 6% of worldwide turnover.

Media regulator Coimisiún na Meán (CnaM) said it was investigating X in collaboration with the Commission’s investigation under the DSA, but was not pursuing any action under the Online Safety Code.

On January 20, X gave written assurances to State representatives, including the Minister of State with responsibility for AI and the Data Protection Commission (DPC), that it had implemented safeguards “globally” around the ability to generate “images of real people in revealing attire” on X and the standalone Grok app. However, Minister Niamh Smyth was told that the capability was still available on the service this week.

Ms Smyth has indicated there will be “serious consequences for X” should the investigation find that it continued to facilitate the sharing of non-consensual intimate imagery and CSAM. A spokeswoman for Ms Smyth said CnaM is using “proper channels to determine the validity of X’s claims” and that this will form part of the European Commission’s investigation.
She plans to meet X about the matter again, and the DPC has also written to the company to seek clarification on whether the capability has been restricted.

People Before Profit TD Paul Murphy told her during an Oireachtas committee that the capability to create sexualised images was still available in other jurisdictions, and to users in Ireland connecting through other European countries via a VPN. Ms Smyth later told the committee she did not trust X and pledged to “take action”.

The opening of formal proceedings by the European Commission relieves CnaM of the power to enforce the DSA over the suspected infringements.

The Competition and Consumer Protection Commission is also a competent authority under the DSA, but said its regulatory role relates solely to specific obligations for online platforms where consumers purchase goods or services. It said CnaM is responsible for the Online Safety Framework, which is aimed at reducing the risk of people being exposed to illegal or harmful content online. The framework says platforms are legally obliged to have rules on content and to include them in their terms and conditions.

When it comes to compelling people to produce information for its inquiries, CnaM’s executive chairman, Jeremy Godfrey, has acknowledged the regulator has “more powers” under the Online Safety Code compared with “not quite such good powers” under the DSA. The regulator told the Press Association that it had not opened any investigation under the Online Safety Code, saying the issue would be most effectively addressed as “a systemic risk” under the DSA’s obligations around illegal content.

Ms Smyth has repeatedly criticised X’s rhetoric in response to the issue, saying its blaming of “bad actors” and “user manipulation” of the tool did not “sit well with” her. She said the company has huge financial resources and should have designed safeguards into the technology.
She has previously said large social media companies should be treated as publishers, and that it would be her “intention” to shut down access to them if they were found to violate Irish law on sexual abuse material.

Ms Smyth has said her focus is on X and Grok itself rather than on users, who she said would be dealt with by gardai.

Gardai have said they are aware of a “proliferation” of such AI-generated material. On January 14, Detective Chief Superintendent Barry Walsh of the Garda National Cyber Crime Bureau said there were 200 investigations into specific referrals about potential CSAM on X. The figure has potentially grown in the intervening two weeks.

Gardai and Government figures have said existing legislation is sufficient to deal with AI-generated non-consensual sexualised images of real people and children. However, Sinn Féin TD Ruairí Ó Murchú said this week there was a need for legislation to tackle services that generate sexualised images of adults.

It came after Angela Willis, the Garda assistant commissioner for organised and serious crime, told the Children’s Committee that an identifiable complainant is currently required before intimate abuse imagery of adults can be investigated, and that such material needs to be shared to constitute an offence.

Ms Willis told an Oireachtas committee that the production, circulation and generation of CSAM is prohibited, whether it is generated by AI or not. She also said legislation provides for the liability of directors and officers of corporate bodies where illegal content is shared.

Elsewhere this week, Tanaiste Simon Harris said there needed to be greater enforcement of age verification in Ireland.
Mr Harris, who believes Ireland needs to prohibit outright the use of social media by those under 16, said there had to be a “baring of teeth” and enforcement of the age of digital consent, set out in the Data Protection Act 2018. This means online service providers such as social media platforms, which rely on consent as the legal basis for processing personal data, must obtain the consent of the child’s parents.

“We have to start actually enforcing the age of digital consent,” Mr Harris said, adding that some social media companies have signed up for the “rollout of age verification” in March. “But quite frankly, I believe we need to get to a point where if you’re under the age of 16, you can’t be on social media.”

Comment was sought from X and its parent company xAI, which also operates Grok.