French President Emmanuel Macron urged the acceleration of artificial intelligence (AI) development in Europe while advocating for smart regulations that allow tech companies to grow. Speaking at Vivatech, Europe’s largest startup and tech event, Macron acknowledged that France and the wider EU are falling behind the United Kingdom, the United States, and China in both innovation and regulation.
His remarks coincided with the approval of the world’s first comprehensive set of regulations for AI by European lawmakers. However, the rules are not expected to take full effect for several years, as negotiations between EU member states, the Parliament, and the European Commission are still ongoing.
While Macron welcomed the ongoing EU discussions as a healthy debate, he warned that by the time the regulations are finalized, they may already rest on outdated assumptions and knowledge, and he stressed the need to avoid overly rigid rules.
The rapid advancements in chatbot technology, exemplified by ChatGPT, have demonstrated both the advantages and potential dangers associated with this emerging field.
In addition to calling for a boost to AI development in Europe, Macron emphasized the need for broader discussions involving the United Kingdom and the United States. He proposed bringing in influential organizations such as UNESCO and the OECD, drawing on their expertise in cultural affairs and economic cooperation, respectively.
Macron also revealed plans to meet with billionaire Elon Musk, owner of Twitter, Tesla, and SpaceX, to discuss regulatory requirements for artificial intelligence and social media. The primary focus of the meeting will be promoting France’s and Europe’s attractiveness in these sectors.
Elon Musk is scheduled to deliver a speech at Vivatech on Friday, further contributing to the dialogue surrounding AI.
The proposed EU regulations, initially introduced in 2021, seek to govern any product or service utilizing artificial intelligence systems. The regulations aim to categorize AI systems into four levels of risk, ranging from minimal to unacceptable. Higher-risk applications, especially those targeted at children, will be subject to stricter requirements, including transparency and the use of accurate data.