• 3 Posts
  • 24 Comments
Joined 1 year ago
Cake day: July 8th, 2023


  • jocanib@lemmy.world (OP) to Technology@lemmy.world · Tesla’s Dieselgate
    11 months ago

    many years now

    This appears to be an escalating fraud, affecting newer models more than older ones. So I’d guess that ^^ is the answer.

    It’s not just a Reuters investigation; they’ve been fined by several jurisdictions, and they absolutely have the means to pay lawyers to contest those charges if they were false.



  • They don’t seem to list the instances they trawled (just the top 25 on a random day, with a link to the site they got the ranking from, but no list of the actual instances that I can see).

    We performed a two day time-boxed ingest of the local public timelines of the top 25 accessible Mastodon instances as determined by total user count reported by the Fediverse Observer…

    That said, most of this seems to come from the Japanese instances, which most other instances defederate from precisely because of CSAM? From the report:

    Since the release of Stable Diffusion 1.5, there has been a steady increase in the prevalence of Computer-Generated CSAM (CG-CSAM) in online forums, with increasing levels of realism. [17] This content is highly prevalent on the Fediverse, primarily on servers within Japanese jurisdiction. [18] While CSAM is illegal in Japan, its laws exclude computer-generated content as well as manga and anime. The difference in laws and server policies between Japan and much of the rest of the world means that communities dedicated to CG-CSAM—along with other illustrations of child sexual abuse—flourish on some Japanese servers, fostering an environment that also brings with it other forms of harm to children. These same primarily Japanese servers were the source of most detected known instances of non-computer-generated CSAM. We found that on one of the largest Mastodon instances in the Fediverse (based in Japan), 11 of the top 20 most commonly used hashtags were related to pedophilia (both in English and Japanese).

    Some history for those who don’t already know: Mastodon is big in Japan. The reason why is… uncomfortable

    I haven’t read the report in full yet, but it seems to be a perfectly reasonable set of recommendations for improving moderators’ ability to prevent this stuff being posted (beyond defederating from dodgy instances, which most if not all non-dodgy instances already do).

    It doesn’t seem to address the issue of some instances existing largely so that this sort of stuff can be posted.


  • It will almost always be detectable if you just read what is written, especially in academic work. It doesn’t know what a citation is, only what one looks like and where citations appear. It can’t summarise a paper accurately. It’s easy to force laughably bad output just by asking the right sort of question.

    The simplest approach for setting homework is to give students the LLM output and have them check it for errors and omissions. LLMs can’t critique their own work, and students probably learn more from chasing down errors than from filling a blank sheet of paper for the sake of it.