> We further urge the machine learning community to act proactively by establishing robust design guidelines, collaborating with public health experts, and supporting targeted policy measures to ensure responsible and ethical deployment
We’ve seen this play out before, when social media first came to prominence. I’m too old and cynical to believe anything will happen. But I really don’t know what to do about it at a personal level. Even if I refuse to engage with this content, and am able to identify it, and keep my family away from it…it feels like a critical mass of people in my community/city/country are going to be engaging with it. It feels hopeless.
I tend to think it leads to censorship, and then censorship at a broader level in the name of protecting our kids. See social networks, where you now have to hand over your ID card to "protect kids."
The best approach in that case is to educate kids / people, automatically flag potentially harmful / disgusting content, and let the owner of the device set up the level of filtering they want.
Like with LLMs: they should be somewhat neutral in default mode, but they should never refuse a request if the user asks.
Otherwise the line between technology provider and content moderator is too blurry, and tomorrow SV people are going to abuse that power (or be coerced by money or politics).
At a personal / parental level, time limits (like you can set with a web filtering device for TikTok) and content policies would help, along with spending as much time with the kids as possible and talking to them so they don’t become dumber and dumber due to short videos.
But I’m totally opposed to doing this at the public policy level: “now you have the right to watch pornography, but only after you give your ID to prove you are an adult” (this is already the case in France, for example).
It can quickly become: “now, to watch / generate controversial content, you have to show ID.”
That doesn't work when the Chinese produce uncensored open-weight models, or ones that can easily be adapted to create uncensored content.
Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or that might have been trained on illegal data.
> Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or that might have been trained on illegal data.
Censorship doesn't even work for stuff that is already illegal. See pirated movies.