Abstract: Google CEO Sundar Pichai argued that existing laws could be repurposed to regulate AI sector by sector, rather than creating new ones.
Google CEO Sundar Pichai told The Financial Times that governments should be wary of “rushing” into broad AI regulation.
The Google chief instead argued existing laws could be repurposed to regulate AI sector by sector and cautioned that hastily drawn-up laws could hinder “innovation and research.”
There's a good reason Pichai probably wants regulators to stay away from wide-reaching curbs on AI. The technology is a key plank of his company's future growth, from its core advertising business to its fast-growing cloud division and its research arm DeepMind.
Google CEO Sundar Pichai struck a cautionary tone in an interview with The Financial Times about the future regulation of AI.
Speaking in Helsinki, Pichai warned against broad regulation of AI, arguing instead that existing laws could be repurposed sector by sector “rather than assuming that everything you have to do is new.”
“It is such a broad cross-cutting technology, so it's important to look at [regulation] more in certain vertical situations,” said Pichai.
Read more: A former Google engineer warned that robot weapons could cause accidental mass killings
“Rather than rushing into it in a way that prevents innovation and research, you actually need to solve some of the difficult problems,” he added, citing known issues with the technology such as algorithmic bias and accountability.
Pichai's comments are set against the backdrop of Google's own rocky relationship with artificial intelligence technology, and backlash over some of its projects.
The company dropped an autonomous drone contract with the Department of Defense, codenamed Project Maven, after fierce employee backlash, with thousands signing a petition and multiple engineers resigning. Staff felt the project was an unethical use of AI, and Google allowed the contract to lapse in March of this year.
Artificial intelligence remains a core future growth plank for Google more generally.
Pichai said in a first-quarter earnings call this year that almost three-quarters of Google's advertising clients are using automated ad technologies which make use of machine learning and AI. The firm makes the bulk of its revenue from advertising, reporting $33 billion in revenue for the second quarter. Google's fast-growing cloud business also uses AI. And its new health division is part of the firm's efforts to commercialise research produced by its AI lab, DeepMind.
Laura Nolan, a former Google engineer who was recruited to Project Maven and subsequently resigned over ethical concerns, told Business Insider via email that while Pichai may be speaking out of self-interest, she agrees that blanket AI regulation may be too broad an approach.
“Laws shouldn't be about specific technological implementations, and AI is a really fuzzy term anyway,” she said.
Professor Sandra Wachter of the Oxford Internet Institute also told Business Insider that the contexts in which AI is used change how it should be governed.
“I think it makes a lot of sense to look at sectorial regulation rather than just an overarching AI framework. Risks and benefits of AI will differ depending on the sector, the application and the context. For example, AI used in criminal justice will pose different challenges than AI used in the health sector. It is crucial to assess if all the laws are fit for purpose,” said Wachter. She said that in some cases existing laws could be tweaked, such as data protection and non-discrimination laws.
However, both Nolan and Wachter cited specific areas which may require new laws to be written — such as facial recognition.
“Algorithmic decision-making is definitely an area that I believe the US needs to regulate,” Nolan said. “Another major area seems to be automated biometric-based identification (i.e. facial recognition but also other biometrics like gait recognition and so on). These are areas where we see clear problems emerging due to new technological capabilities, and not to regulate would be to knowingly allow harms to occur.”
“The use of AI in warfare absolutely needs to be regulated — it is brittle and unpredictable, and it would be very dangerous to build weapons that use AI to perform the critical function of target selection,” she added.