BERLIN: Google blocked or removed more than 5.5 billion adverts last year that violated the company’s guidelines or were intended to defraud users.
A Google report on ad security, which was published on Tuesday, says the blocked content included adverts for dangerous products or sexual content, but also adverts that promised a miracle cure or quick riches in a dubious manner.
Google’s vice president, Duncan Lennox, said the company’s aim was to recognise fraudulent ads and block the accounts responsible before they reach Google’s platforms, or to remove them immediately once they are discovered.
The most important trend in 2023, Lennox said, was the impact of generative artificial intelligence (AI), as seen in chatbots such as Google Gemini. He said there was “no question” that the introduction of readily available AI video tools has exacerbated the proliferation of fraudulent adverts with deepfakes.
Deepfakes are photos, videos or audio files that are deliberately altered using AI, resulting in people appearing to do or say things that they have never actually done or said.
The Google manager emphasised that AI also helps to identify fraudulent content and enforce Google’s guidelines. AI is significantly involved in 90% of blocking decisions, he said.
If advertisers or publishers believe that the AI has made a mistake, Google teams will review the decision, and any errors found are used to further improve the systems, he added.
Lennox said that Google had also blocked or removed 12.7 million advertiser accounts, which is almost double the number from the previous year. Industry experts estimate that Google’s servers deliver an average of around 30 billion adverts a day. – dpa