arXiv:2407.01557

AI Governance and Accountability: An Analysis of Anthropic's Claude

Published on May 2
Abstract

As AI systems become increasingly prevalent and impactful, the need for effective AI governance and accountability measures is paramount. This paper examines the AI governance landscape, focusing on Anthropic's Claude, a foundational AI model. We analyze Claude through the lens of the NIST AI Risk Management Framework and the EU AI Act, identifying potential threats and proposing mitigation strategies. The paper highlights the importance of transparency, rigorous benchmarking, and comprehensive data handling processes in ensuring the responsible development and deployment of AI systems. We conclude by discussing the social impact of AI governance and the ethical considerations surrounding AI accountability.
