Google Establishes Seven Ethical Principles for the Use of Artificial Intelligence

Google got into a major controversy that could well have been avoided, one largely of communication: the company failed to explain Project Maven, the controversial initiative for the United States Department of Defense that sought to develop artificial intelligence for military drones. The news triggered a strong backlash, above all inside Google itself, where more than 4,000 employees signed a petition asking the company to abandon the project.

Google finally gave in to the pressure from its employees and the public, as its image was severely damaged and the episode triggered a general identity crisis inside the company. A few days ago the company confirmed its decision not to continue with the project. Given this, Google now seeks to clean up its image by establishing seven principles that will govern its use of artificial intelligence in future projects and developments.
Damage control after the Project Maven controversy.


Everything seems to indicate that these seven ethical principles were drafted to calm concerns about Google's work on Project Maven. Basic rules emerge from them, but some elements stand out:

“We want to make it clear that, while we are not developing AI for use in weapons, we will continue to work with governments and the military in many other areas, including cybersecurity, training, military recruitment, medical care for veterans, as well as search and rescue.”

What does this mean? That Google will continue to pursue military and government contracts, since these sectors are currently investing billions of dollars in cloud services, and Google obviously does not want to be left out of such a lucrative business.

The seven commandments of Google AI

In the letter signed by Sundar Pichai, CEO of Google, it is mentioned that “specific standards” have been established, that is, not theoretical or flexible concepts, which “will actively govern our research and product development and will influence our business decisions.”

From now on, Google says it will evaluate artificial intelligence projects based on the following objectives:

  1. Be socially beneficial. Google says it will take into account a wide range of social and economic factors and will proceed where it believes the likely benefits substantially exceed the risks, in areas such as medical care, security, energy, transportation, manufacturing, entertainment, and more.

  2. Avoid creating or reinforcing unfair bias. As we saw with ‘Norman’, here the company says it will work to avoid “unfair impacts” in its algorithms, taking care not to inject racial, sexual, or political bias into automated decision-making.

  3. Be built and tested for safety. Here Google mentions that it will continue to implement strict safety and security practices to avoid harmful outcomes, ensuring that it tests AI technologies in controlled environments and monitors their operation after deployment.

  4. Be accountable to people. “Our AI technologies will be subject to appropriate human direction and control.” That is, an artificial intelligence that will always remain overseen and under the control of a human being.
  5. Incorporate privacy design principles. Following the Facebook scandal, Google seeks to shield itself by ensuring that its AI will respect the privacy of the data it obtains. This is undoubtedly an important point, given that Google uses AI in almost all of its services, from Gmail to Photos and much more.

  6. Uphold high standards of scientific excellence. Google knows that artificial intelligence has the potential to open up new fields of science and research, so it is committed to supporting open research, intellectual rigor, integrity, and collaboration.

  7. Be made available for uses that accord with these principles. This point encompasses all the previous ones and makes the lesson of Project Maven clear: if a project goes against these principles, Google simply will not take it on. Here Google specifies that potentially harmful or abusive scenarios will be evaluated according to four factors: primary purpose and use, nature and uniqueness, scale, and the nature of Google’s involvement.

In addition to these seven principles, Google guarantees that it will not design or deploy artificial intelligence in the following areas:

  • “Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only when we consider that the benefits substantially outweigh the risks, and we will incorporate appropriate safety constraints.”
  • “Weapons or other technologies whose main purpose or implementation is to directly cause or facilitate injury to persons.”
  • “Technologies that collect or use information for surveillance that violates internationally accepted standards.”
  • “Technologies whose purpose violates widely accepted principles of international law and human rights.”

This letter of ethical principles arrives at a decisive moment, not only for Google but for the many companies investing large sums of money in developments based on artificial intelligence.


Source: Google

