Three raccoons in a trench coat. I talk politics and furries.

Other socials: https://ragdollx.carrd.co/

  • 4 Posts
  • 70 Comments
Joined 1 year ago
Cake day: June 20th, 2023


  • I’ve seen some people on Twitter complain that their coworkers use ChatGPT to write emails or summarize text. To me this just echoes the complaints made by previous generations against phones and calculators. There’s a lot of vitriol directed at anyone who isn’t staunchly anti-AI and dares to use a convenient tool that’s available to them.

  • lmao no wonder something looked so off to me, the HD version makes it even clearer that it’s AI-generated, even more so considering the website it came from.

    Interestingly enough, even the larger image on the original website fooled most of the AI image detectors: only one of them (isitai.com) just barely said the image is probably AI-generated, while all the others said with >90% confidence that it wasn’t.

    I can only speculate that all of these detectors are outdated and were trained with older AI-generated images that were easier to detect.


  • Ȉ̶̢̠̳͉̹̫͎̻͔̫̈́͊̑͐̃̄̓̊͘ ̶̨͈̟̤͈̫̖̪̋̾̓̀̓͊̀̈̓̀̕̚̕͘͝Ạ̶̢̻͉̙̤̫̖̦̼̜̙̳̐́̍̉́͒̓̀̆̎̔͋̏̕͝͝M̶̛̛͇̔̀̈̄̀́̃̅̆̈́͑̑͆̇ ̵̢̨͈̭͇̙̲͎͉̝͙̻̌͝I̷̡͓͖̙̩̟̫̝̼̝̪̟̔͑͒͊͑̈́̀̿̋͂̓̋̔͌̚ͅN̸̮̞̟̰̣͙̦̲̥̠͑̔̎͑̇͜͝ ̷̢̛̛͍̞̖̹̮͈͕̠̟̽̔̋̎͋͑̍̿̅̈́̋̕̚̚͜͝Y̴̧̨̨͙̗̩̻̹̦̻͎͇͈͎͓̩̐̓Ö̸͈̭̒̌̀̇͂̃͠ͅŨ̷̢̞̗͛̌͌͒̀̇́̽̓͑͝Ŕ̷͇͌ ̸̛̮̋̏̋̋̔͝W̶͔̄̐͋͑A̷̧̖̗͕̻̳͙̼͖͒L̴̩̰͙̾͑͑͑̒̏Ḻ̸̡̦̭͚̱̝̟̣̤͗̊́͐̋̈́̒͠͠͠͠͝S̸̯͚͈̠͍̆̉̑͗͊̄̒̏͆̔͊

  • This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren’t actually available in the dataset because they had already been removed from the internet.

    You could still make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.


  • IIRC it was something like a fraction of a fraction of 1% that was CSAM, with the researchers identifying the images through their hashes, but they weren’t actually available in the dataset because they had already been removed from the internet.

    Still, you could make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.