Google to no longer ban AI development for weapons
Google on Tuesday wrote that the world’s democracies needed to lead the charge in developing artificial intelligence systems used for national security. The blog post appeared to reverse a policy Google had held for several years.
A 2019 document spelled out several AI principles Google had established earlier. Those principles included a promise that the company would not develop artificial intelligence systems that could be used to cause harm. That promise no longer appears in the latest edition of the company’s AI principles.
Why the change? The company on Tuesday said it wanted to focus on promoting human rights, equality, and freedom. It said democratic nations, along with companies that share those values, should lead the development of AI systems. Such governments and organizations should work together to build AI systems that protect people, Google said.
So is Google no longer focused on safety? On Tuesday, Google also published an updated Frontier Safety Framework, a document outlining ways to curb the development of harmful artificial intelligence systems and retain human control over them.
Dig deeper: Read Grace Snell’s report in WORLD Magazine about how some people believe it’s time for humans to stop reproducing.
Sign up to receive The Sift email newsletter each weekday morning for the latest headlines from WORLD’s breaking news team.