To what extent are we responsible for our content, and how can we create safer Spaces?

Community Article · Published August 30, 2024

This is a brief blog post outlining some thoughts on the question: to what extent are we responsible for our content, and how can we create safer Spaces? Certainly relevant for Telegram CEO Pavel Durov, but no less important for people like you and me.

πŸ˜… My own "oops"-moment. I created a space with a Flux model and it resulted in some inappropriate content generation. So, I had a small discussion about creating safe AI with some colleagues over at Hugging Face. Here’s what you can do!πŸ‘‡

🔦 The ethics team maintains a nice collection of tools and ideas to help owners secure their code and prevent misuse, covering provenance, watermarking, and deepfake detection (a sketch of one technique follows). https://huggingface.co./collections/society-ethics/provenance-watermarking-and-deepfake-detection-65c6792b0831983147bb7578
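As one concrete example, here is a minimal, hedged sketch of invisible watermarking with the invisible-watermark package, one of the provenance techniques the collection covers. The file names and payload are placeholders, not part of the original post.

```python
# Hedged sketch: embed an invisible watermark in a generated image so its
# provenance can later be verified. Assumes:
#   pip install invisible-watermark opencv-python
import cv2
from imwatermark import WatermarkEncoder

image = cv2.imread("generated.png")            # BGR image from your pipeline (placeholder path)
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"my-space")    # payload identifying the source (placeholder)
watermarked = encoder.encode(image, "dwtDct")  # frequency-domain embedding, robust to light edits
cv2.imwrite("generated_wm.png", watermarked)
```

A matching WatermarkDecoder can later recover the payload from a suspect image, which is what makes this useful for tracing content back to your Space.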

📷 Use AI classifiers to filter out harmful or inappropriate content. It’s a simple but effective way to stop misuse in its tracks. For Stable Diffusion, diffusers ships a baseline safety checker that screens generated images for known NSFW concepts and blanks out anything it flags (see the sketch below). https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py
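A minimal sketch of using that checker, assuming a standard Stable Diffusion checkpoint: it is enabled by default, and the pipeline output reports its per-image verdict.

```python
# Minimal sketch: Stable Diffusion with the built-in safety checker enabled
# (the default). Flagged images are replaced with black images.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # any checkpoint that ships a safety checker
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("a photo of an astronaut riding a horse")
result.images[0].save("out.png")
# Per-image verdict from the safety checker; True means the image was blanked.
print(result.nsfw_content_detected)
```

Disabling the checker (e.g. with safety_checker=None) shifts all of that responsibility onto you, which is exactly the trade-off this post is about.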

📊 Track Usage: Consider monitoring user activity in some way, for example by logging IP addresses. There are privacy concerns and GDPR-related caveats, but it helps you detect and prevent abuse (see the sketch below).
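In a Gradio Space, for instance, you can read the caller’s address from the request object. Here is a hedged sketch; the generate function is a placeholder for your real model call, and whether you may store IPs at all depends on your privacy policy and GDPR obligations.

```python
# Hedged sketch: logging the caller's IP inside a Gradio Space.
# Gradio injects the request when a parameter is annotated as gr.Request.
import logging
import gradio as gr

logging.basicConfig(filename="usage.log", level=logging.INFO)

def generate(prompt: str, request: gr.Request):
    # request.client.host is the caller's IP (or the proxy's, if behind one)
    logging.info("ip=%s prompt=%r", request.client.host, prompt)
    return f"(generated output for: {prompt})"  # placeholder for a real model call

demo = gr.Interface(fn=generate, inputs="text", outputs="text")
demo.launch()
```

Even a simple log like this lets you spot repeat abusers and respond promptly, which matters for the safe-harbour point below.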

⚓ Most content platforms fall under safe harbour principles, which shield them from liability for illegal content they don’t know about (for privacy reasons, you simply can’t inspect everything), provided they act promptly once they do know. https://en.wikipedia.org/wiki/International_Safe_Harbor_Privacy_Principles

πŸ“œ Clear Guidelines: Set transparent usage policies. Make sure users understand what’s acceptable and what the consequences are for breaking the rules. We have some at Hugging Face too. https://huggingface.co./content-guidelines

βš–οΈ Open Source Legal clauses for products using LLMs: This morning I saw this post from Gideon Mendels from Comet ML that shared public legal clauses that should cover common risky scenarios around the usage of LLMs in production. https://gist.github.com/gidim/18e1685f6a47b235e393e57bad89d454

Thanks for the discussion πŸ€“ Noemie Chirokoff, Margaret Mitchell, Omar Sanseviero, Bruna Sellin Trevelin