IMO smaller populations lead to a stronger echo chamber effect. I’ve definitely noticed that the echoness of Threadiverse communities is generally a lot higher than corresponding subreddits and I suspect the small size plays a major role.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily technology, politics, and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
From the Bluesky TOS:
Bluesky Social is available as a desktop application at bsky.app and bsky.social (each a “Site”) and a mobile application (“Bluesky App” or “the App”).
…
These terms only apply to social networking that happens on Bluesky Social services, including the Sites and Bluesky App. If you’re using another social networking application on the AT Protocol that isn’t Bluesky Social (we call this a “Developer Application”), the developers of the other service will provide the terms and conditions that govern your experience.
So it looks like the Bluesky TOS simply doesn't apply. Create a developer application and give it whatever training-friendly TOS you want.
I don’t know why anyone would be surprised about this. Bluesky is a distributed system using an open protocol. The whole point of it is that there’s no central control.
Same goes for the Fediverse, of course. Everybody should be prepared for the “surprise” that all our posts and comments here are also being used for AI training purposes.
And is there any risk of people turning these kinds of models around and using them to generate images?
There isn’t really much fundamental difference between an image detector and an image generator. The way image generators like Stable Diffusion work is essentially by generating a starting image that’s nothing but random static and telling the generator, “find the cat that’s hidden in this noise.”
It’ll probably take a bit of work to rig this child porn detector up to generate images, but I could definitely imagine it happening. It’s going to make an already complicated philosophical debate even more complicated.
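The “find the cat in the noise” idea can be sketched in a few lines. This is a deliberately toy illustration, not how Stable Diffusion is actually implemented: real diffusion models learn the denoising step with a trained neural network, while here the “model” is just a hard-coded target pattern it nudges the noise toward.

```python
import random

# Toy sketch of iterative denoising: start from pure static and repeatedly
# nudge each pixel toward whatever pattern the "model" claims is hidden in
# it. TARGET stands in for the learned model's idea of "the cat".
TARGET = [0.2, 0.8, 0.5, 0.1]  # hypothetical 4-pixel "image"

def denoise_step(image, strength=0.1):
    # Move each pixel a small step toward what the model "sees" in the noise.
    return [p + strength * (t - p) for p, t in zip(image, TARGET)]

random.seed(0)
image = [random.random() for _ in TARGET]  # pure random static
for _ in range(50):
    image = denoise_step(image)

print([round(p, 2) for p in image])  # → [0.2, 0.8, 0.5, 0.1]
```

The point of the sketch is that the generator’s only real ingredient is something that can say “this looks more cat-like than that,” which is exactly what a detector provides.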
The Darvaza gas crater is a hole in Turkmenistan that’s leaking natural gas and is on fire. I’m quite sure they don’t have a “poet laureate”; it’s literally just a hole in the ground.
But even if it was some metropolis, yeah, he’d be just some guy.
You can get whatever result you want if you’re able to define what “better” means.
Why publish books of it, then?
The whole point of poetry is that it’s an original expression of another human.
Who are you to decide what the “point” of poetry is?
Maybe the point of poetry is to make the reader feel something. If AI-generated poetry can do that just as well as human-generated poetry, then it’s just as good when judged in that manner.
I do get the sense sometimes that the more extreme anti-AI screeds I’ve come across have the feel of narcissistic rage about them. The recognition of AI art threatens things that we’ve told ourselves are “special” about us.
Indeed, there are whole categories of art, such as “found art” or the abstract stuff that involves throwing splats of paint at things, that can’t really convey the intent of the artist, because the artist wasn’t involved in specifying how it looked in the first place. The artist is more like the “first viewer” of those particular pieces: they do or find a thing and then decide “that means something” after the fact.
It’s entirely possible to do that with something AI generated. Algorithmic art goes way back. Lots of people find graphs of the Mandelbrot Set to be beautiful.
He was just doing exactly that.
That’s not how synthetic data generation generally works. It uses AI to process existing data sources, turning material that isn’t directly useful into well-formed training data, rather than generating everything from the model’s own imagination.
The comments assuming otherwise are ironic, because that misconception is itself misinformation that people keep repeating to each other.
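The distinction can be made concrete with a minimal sketch. Everything here is hypothetical: `summarize` is a stub standing in for a real model call, and the documents are invented. The point is the shape of the pipeline, which starts from existing raw sources and reshapes them into clean (input, target) pairs.

```python
def summarize(text: str) -> str:
    # Stub standing in for an actual model call; a real pipeline would
    # invoke an LLM here to produce the well-formed target text.
    return text.split(".")[0] + "."

# Existing raw sources; the model processes these rather than inventing data.
raw_documents = [
    "Diffusion models denoise images step by step. They are widely used.",
    "Federated networks have no central operator. Each instance sets rules.",
]

# Each raw document becomes a well-formed (input, target) training pair.
training_pairs = [(doc, summarize(doc)) for doc in raw_documents]
```

The training data is “synthetic” in the sense that the targets are model-produced, but it remains grounded in the original source documents.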
The “how will we know if it’s real” question has the same answer as it always has. Check if the source is reputable and find multiple reputable sources to see if they agree.
“Is there a photo of the thing” has never been a particularly great way of judging whether something is accurately described in the news. This is just people finding out something they should have already known.
If the concern is over the verifiability of the photos themselves, there are technical solutions that can be used for that problem.
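One hedged sketch of such a technical solution: a camera or publisher holding a secret key attaches an authentication tag at capture time, so anyone holding the key can later check that the bytes are unmodified. Real provenance schemes (e.g. C2PA metadata) use public-key signatures rather than a shared secret, and the key name and image bytes below are invented for illustration, but the verification idea is the same.

```python
import hashlib
import hmac

CAMERA_KEY = b"hypothetical-device-key"  # illustration only, not a real scheme

def sign_photo(photo_bytes: bytes) -> str:
    # Compute an HMAC-SHA256 tag over the raw image bytes.
    return hmac.new(CAMERA_KEY, photo_bytes, hashlib.sha256).hexdigest()

def verify_photo(photo_bytes: bytes, tag: str) -> bool:
    # Constant-time comparison of the recomputed tag against the stored one.
    return hmac.compare_digest(sign_photo(photo_bytes), tag)

original = b"\x89PNG raw image bytes"
tag = sign_photo(original)
print(verify_photo(original, tag))          # True: bytes untouched
print(verify_photo(original + b"x", tag))   # False: edited after signing
```

This doesn’t prove a photo depicts something true, only that it hasn’t been altered since it was signed, which is exactly the narrower verifiability problem the comment is pointing at.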
Eh, not necessarily. Hollywood hates piracy and Trump hates Hollywood, it might actually be as simple as that.
Canada’s closed, sorry.
Not necessarily. If they’re low on cash then cutting unnecessary costs is not unreasonable. What is Mozilla’s core goal? Perhaps the “advocacy” and “global programs” divisions weren’t all that relevant to it, and so their funding is better put elsewhere.
Entertainment.
If you think it’s supposed to be predictive, you’re perhaps confusing it with futurology, which is a more scientific field.
Fearing AI because of what you saw in “The Terminator” is like fearing sleeping pills because of what you saw in “Nightmare on Elm Street.”
There are people who want AI, crypto, and IoT things. If there weren’t, then there’d be no money to be made in selling them.
Is it really a civil war when 99% of the people are on one side?