![]() "I worry about other effects of synthetic images of illegal content - that it will exacerbate the illegal behaviors that are portrayed," Dotan told TechCrunch via email. That bodes poorly for the future of these AI systems, according to Ravit Dotan, VP of responsible AI at Mission Control. A study carried out in 2019 revealed that, of the 90% to 95% of deepfakes that are non-consensual, about 90% are of women. Women, unfortunately, are most likely by far to be the victims of this. Those two capabilities could be risky when combined, allowing bad actors to create pornographic "deepfakes" that - worst-case scenario - might perpetuate abuse or implicate someone in a crime they didn't commit. (The license for the open source Stable Diffusion prohibits certain applications, like exploiting minors, but the model itself isn't fettered on the technical level.) Moreover, many don't have the ability to create art of public figures, unlike Stable Diffusion. Other AI art-generating systems, like OpenAI's DALL-E 2, have implemented strict filters for pornographic material. Stable Diffusion is very much new territory. However, Safety Classifier - while on by default - can be disabled. One of these mechanisms is an adjustable AI tool, Safety Classifier, included in the overall Stable Diffusion software package that attempts to detect and block offensive or undesirable images. ![]() Emad Mostaque, the CEO of Stability AI, called it "unfortunate" that the model leaked on 4chan and stressed that the company was working with "leading ethicists and technologies" on safety and other mechanisms around responsible release.