Hey, y’all! Just another random, loudmouthed, opinionated, Southern-fried nerdy American living abroad.
This is my Lemmy account because I got sick of how unstable kbin dot social was.
Mastodon: @stopthatgirl7
I’ve got to admit, I’ve wanted to do one of those tests just because my family is such a mix of “lol we don’t know.” Like, no really, what IS my maternal grandma? She does not look like the rest of her family and had a different family name from her siblings. And ok really, where DID my paternal great-grandmother, who lied about her race so she could marry my great-grandfather back when “miscegenation” was illegal, come from? And WAS that great-grandpa biracial himself?
There’s a reason I call myself an ethnic Rorschach test, and I’d love to know why that is. But the rest of my family is against the idea of finding out because “it doesn’t matter,” plus who knows how that data might be used one day.
I read the headline and went, “…I mean, what were you expecting?”
If real women don’t like you, that’s a You issue. Make yourself a better person that other people - aka women - want to be around.
The chatbot was actually pretty irresponsible about a lot of things, looks like. As in, it doesn’t respond the right way to mentions of suicide and tries to convince the person using it that it’s a real person.
This guy made an account to try it out for himself, and yikes: https://youtu.be/FExnXCEAe6k?si=oxqoZ02uhsOKbbSF
Respectfully requesting that in the future, you read articles before replying.
And:
According to Straight, the issue was caused by a piece of wiring that had come loose from the battery that powered a wristwatch used to control the exoskeleton. This would cost peanuts for Lifeward to fix up, but it refused to service anything more than five years old, Straight said.
“I find it very hard to believe after paying nearly $100,000 for the machine and training that a $20 battery for the watch is the reason I can’t walk anymore?” he wrote on Facebook.
This is all over a battery in a watch.
So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.
If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?
The way I laughed just reading the first paragraph.
Someone posted links to some of the AI-generated songs, and they are straight up copying. Blatantly so. If a human made them, they would be sued, too.
…oh my GOD, they are cooked.
Right now, it’s all being funded by one person, Zhang Jingna (a photographer who recently sued and won her case when someone plagiarized her work), but it’s grown so quickly she got hit with a $96K bill for one month.
DeviantArt is also trying to scrape artists’ work for AI. It allows AI art to be posted and has promoted it.
I am just so, so tired of being constantly inundated with being told to CONSUME.
If they do, it’s going to be a bad time for them, since Cara has Glaze integration and encourages everyone to use it. https://blog.cara.app/blog/cara-glaze-about
Also, the resale values are…not good.
Y’all gotta stop listening to Elon Musk without the world’s largest block of salt.
Not earthquakes this big. There hasn’t been one this size there in 25 years.
Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car and used AI to “enhance” it, then declared it wasn’t her because her mole was gone. It got so much traction that people thought the AI fixed-up pic WAS her.
I scroll through to see if things have been posted before. If I don’t see it, I assume it hasn’t been. And I use a client that doesn’t display crossposts, so I don’t see if there are any.
That’s the other big reason I’m hesitant - different tests can give totally different results, so who knows what’s “right”?