
Responsible AI: A Global Policy Framework

It is no exaggeration to say that the world has changed beyond recognition since the publication of the first edition of Responsible AI. We have been gripped by a global pandemic, and in a fast-moving world, Artificial Intelligence moves at light speed. It is in this environment that ITechLaw brings you the 2021 update to Responsible AI.

Edited by John Buyers of Osborne Clarke LLP, UK and Susan Barty of CMS LLP, and written together with a team of 38 specialists from 17 countries, the 156-page 2021 Update includes not only a substantive update to each of the eight principal chapters of Responsible AI and a comprehensive update to the original Global Policy Framework, but also a practical “Responsible AI Impact Assessment” template that we hope will be of significant value to AI experts and industry leaders.


Responsible AI 2021 Update

ABOUT THE AUTHORS

A multi-disciplinary group of 54 technology law experts, researchers and industry representatives from 16 countries developed this detailed, actionable framework and is seeking public comments to inform an updated version. The book covers these principles in detail:


SUGGESTED ACTIONS

  • Grounding the responsible AI framework in the human-centric principle of “accountability”, which holds organizations that develop, deploy, or use AI systems accountable for harm caused by AI;
  • Promoting a context-sensitive framework for transparency and explainability;
  • Elaborating the notion of “elegant failure”, as well as revisiting the “tragic choices” dilemma;
  • Supporting open-data practices that encourage AI innovation while ensuring reasonable and fair privacy, consent, and competitiveness; and
  • Encouraging responsible AI by design.