This Google Cloud video introduces the concept of responsible AI and explains why Google has implemented AI principles. The video emphasizes the importance of responsible AI practices at all stages of a project and across organizations of every kind, highlighting the impact of human decisions in AI development and deployment.
According to the transcript, Google will not design or deploy AI in these four application areas:

- Technologies that cause or are likely to cause overall harm.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance that violates internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
Google's seven AI principles, announced in June 2018, state that AI should:

- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Google incorporates responsibility into its AI products and organization by using its AI principles as a framework for responsible decision-making. Responsibility is built in by design, not only in products but, more importantly, in the organization itself. Assessments and reviews ensure that projects align with the AI principles, establishing rigor and consistency across product areas and geographies. Google also builds responsibility into AI deployment to create better models and earn customer trust, acknowledging that a robust process builds trust even when not everyone agrees with every decision.
A common misconception is that machines play the central decision-making role in artificial intelligence. In reality, people design and build the machines and decide how they are used. Human decisions are threaded throughout AI development, from data collection and model training to deployment and application, and each of those decisions introduces the decision-maker's own values into the AI system.