The European Union has warned Microsoft that it could be fined up to 1% of its global annual turnover under the bloc’s online governance regime, the Digital Services Act (DSA), after the company failed to respond to a legally binding request for information (RFI) that focused on its generative AI tools.
In March, the EU asked Microsoft and a number of other tech giants for information about systemic risks posed by generative AI tools. On Friday, the Commission said Microsoft had failed to provide some of the documents it requested.
The Commission has given the company until May 27 to supply the requested data or risk enforcement. Fines under the DSA can scale up to 6% of global annual turnover, but supplying incorrect, incomplete or misleading information in response to a formal RFI can draw a standalone fine of up to 1%. That could mean a penalty of around $2 billion in Microsoft's case: the company reported revenue of $211.92 billion in the fiscal year ended June 30, 2023.
Larger platforms' systemic risk obligations under the DSA are overseen by the Commission itself, and this warning is only the first reach into a toolbox of enforcement powers that could prove far costlier for Microsoft than any reputational ding it might take for failing to produce data on request.
The Commission said it is missing information related to risks stemming from search engine Bing’s generative AI features — notably, the regulator highlighted AI assistant “Copilot in Bing” and image generation tool “Image Creator by Designer.”
The EU said it is particularly concerned about any risks the tools may pose to civic discourse and electoral processes.
If Microsoft does not provide the missing information by the May 27 deadline, the Commission can impose the 1% fine and may also levy periodic penalty payments of up to 5% of the company's average daily income or worldwide annual turnover.
Bing was designated as a so-called “very large online search engine” (VLOSE) under the DSA back in April 2023, meaning it is subject to an extra layer of obligations related to mitigating systemic risks like disinformation.
The DSA's obligation on larger platforms to mitigate disinformation puts generative AI technologies squarely in the frame. Tech giants have been at the forefront of embedding GenAI into their mainstream platforms despite glaring flaws, such as the tendency of large language models (LLMs) to fabricate information and present it as fact.
AI-powered image generation tools have also been shown to produce racially biased or potentially harmful output, such as misleading deepfakes. The EU, meanwhile, has an election coming up next month, which is concentrating minds in Brussels on AI-fuelled political disinformation.
“The request for information is based on the suspicion that Bing may have breached the DSA for risks linked to generative AI, such as so-called ‘hallucinations,’ the viral dissemination of deepfakes, as well as the automated manipulation of services that can mislead voters,” the Commission wrote in a press release.
“Under the DSA, designated services, including Bing, must carry out adequate risk assessment and adopt respective risk mitigation measures (Art 34 and 35 of the DSA). Generative AI is one of the risks identified by the Commission in its guidelines on the integrity of electoral processes, in particular for the upcoming elections to the European Parliament in June.”
Microsoft did not immediately respond to a request for comment.