This account is being kept for posterity, but it won’t see further activity past February.

If you want to contact me, I’m at /u/lvxferre@mander.xyz

  • 0 Posts
  • 68 Comments
Joined 3 years ago
Cake day: April 9th, 2021


  • They’re still providing the code to people who buy the compiled software, and they’re not restricting their ability to redistribute that code. So it still complies with the letter of the GPL. However, if you redistribute it, they’ll refuse to provide you with further versions of the software.

    It’s clearly a loophole, because they can argue “ackshyually, we didn’t restrict you, we just don’t want to do further business with you, see ya sucker”.


  • Lvxferre@lemmy.ml to Open Source@lemmy.ml · Thoughts on Post-Open Source? · 8 months ago

    I think that the RHEL example is out of place, since IBM (“Red Hat”) is clearly exploiting a loophole in the GNU General Public License. Similar loopholes have later been addressed by e.g. the AGPL and the GPLv3*, so I expect this one to be addressed too.

    So perhaps, if the GPL is “not enough”, the solution might be more GPL.

    *note that the license used by the kernel is GPLv2. Cue Android (for all intents and purposes non-free software) using the kernel, but not the rest.


  • Lvxferre@lemmy.ml to Technology@lemmy.ml · Transparent Aluminium · 8 months ago

    Misleading name, on the same level as calling water “non-explosive hydrogen”. That said, the material looks promising as a glass replacement for some applications (the text mentions a few of them, like armoured windows).

    (It is not a metal; it’s a ceramic, mostly oxygen with bits and bobs of aluminium and nitrogen. Interesting nonetheless, even if I’m picking on the name.)




  • I don’t know (…or care, really) about the USA, so I’ll speak on more general grounds.

    There’s a lot of stuff in social media that makes it a great soapbox for social manipulation:

    • low cost, wide reach: it’s easy to be heard
    • decontextualisation: it gives more room for assumers¹ to do their shit and make up an incorrect context out of nowhere.
    • virality: it’s easy to start a witch hunt. Cue the pitchfork emporium / Twitter MC of the day.
    • upvote/like-based systems: people don’t upvote your content (increasing its visibility) because you’re right, they do it because you say it confidently.
    • on the Internet, nobody knows that you’re a dog: concern trolling made easy.

    Now look at what @startle@toast.ooo said: “Dunno man, seems like it might be the fascists.” IMO that user is spot on: those five things make social media especially easy for fascists to manipulate². And they’re mostly the ones creating this dichotomisation of society³, because that’s how they’re able to congregate the nutjobs into a political discourse. Suddenly the village idiot doesn’t simply say “they’re hiding aliens from us” (stupid, but morally OK); the discourse becomes “the Jews are hiding aliens from us” (stupid and antisemitic).

    1. By “assumers” I mean individuals who are quick to draw conclusions based on little to no reasoning, evidence, or thought. This plague has existed since the dawn of time; it’s just that decontextualisation gives them more room to assume shit out of nowhere.
    2. Fascists often babble about “virtue signalling” without realising that they themselves are prone to signalling adherence to their own stupid beliefs. They don’t want to be on the receiving end of their own witch hunts.
    3. By “society” I mean at the very least Western Europe plus the Americas; probably more. It is not exclusive to the USA.

  • Lvxferre@lemmy.ml to Technology@lemmy.ml · *Permanently Deleted* · 9 months ago

    Given that it’s pointing straight to “no”, should I interpret “AI” as “additional irony”?

    …seriously, model-based generation is in its infancy. Currently it outputs mostly trash; you need to spend quite a bit of time to sort something useful out of it. If anyone here actually believes that it’s smart, I have a bridge to sell you.


  • Not even parrots - the birds are actually smart.

    I’m not a lawyer, but I can see a good way for lawyers to use ChatGPT: tell it to list laws that are potentially related to the case, then manually check those laws to see if they apply. This would work nicely in countries with Roman law, and perhaps in countries with common law too (the article is from the USA), as long as the model is fed older cases for precedent.

    And… really, that’s the best use for those bots IMO: asking them to sort, filter, and search information from messy and large systems. Letting them write things for you, like those two lawyers did, is worse than laziness: it stinks of stupidity.
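
    A minimal sketch of that “list candidates, then verify by hand” workflow, assuming the OpenAI Python client; the model name, prompt, and helper function are hypothetical placeholders, not anything the lawyers in the article actually used:

    ```python
    # Hypothetical sketch: ask a chat model to *list* potentially relevant laws,
    # leaving the actual legal judgement to a human reviewer.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def list_candidate_laws(case_summary: str) -> list[str]:
        """Return statutes/precedents the model thinks *might* apply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "List laws or precedents potentially relevant to the case, "
                            "one per line. Do not argue the case."},
                {"role": "user", "content": case_summary},
            ],
        )
        # One candidate per line; every item still needs manual verification
        # against an official source, since the model may hallucinate entries.
        return [line.strip("-• ").strip()
                for line in response.choices[0].message.content.splitlines()
                if line.strip()]
    ```

    The model only narrows the search space; a human still reads each law and decides whether it actually applies.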

    It’s also immoral. The lawyer is a human being, thus someone who can be held responsible for their actions; ChatGPT is not and, as such, it should not be in charge of decisions that affect human lives.



  • Lvxferre@lemmy.ml to Linux Gaming@lemmy.world · Gamedev and linux · 10 months ago

    That doesn’t surprise me.

    Linux users are biased towards higher technical expertise, and they have a different mindset - most of the software that we use is the result of collaborative projects, and we’re often encouraged to help the devs out. And while the collaborative situation might not be true for game development, the mindset leaks out.


  • For the people discussing here: remember that the morality of an act depends on the act itself, the context where it happens, and the moral premises. It does not depend on how you phrase or label the act.

    With that in mind: since I define arseholery as “actions or behaviour that cause more harm to someone else than they benefit the agent”, and there’s practically no harm being caused by OP’s actions, I do not think that OP is being an arsehole.





  • The idea has some merit, but it’s harder to implement than it looks. Model-based image generation is heavily biased towards typical values, so you’d need a lot of poison to pull it off. And that poison would need to be consistent: it doesn’t work if you tell the model now that cats are dogs and then that ferrets are dogs; you need to pick one.
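
    To illustrate why the poison needs to be both plentiful and consistent, here’s a toy sketch in Python; it treats “training” as merely estimating a typical value from labelled samples, which is a gross simplification of an image model, and every number in it is made up:

    ```python
    # Toy analogy, not an actual image model: the "model" merely estimates the
    # typical value (mean) of a feature from its training samples.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    clean = rng.normal(loc=0.0, scale=1.0, size=n)  # honest samples centred on 0

    def poisoned_estimate(poison_fraction: float, consistent: bool) -> float:
        k = int(n * poison_fraction)
        if consistent:
            poison = np.full(k, 5.0)  # every poisoner pushes the same wrong value
        else:
            poison = rng.normal(loc=0.0, scale=5.0, size=k)  # each pushes a different one
        return float(np.mean(np.concatenate([clean[: n - k], poison])))

    for frac in (0.01, 0.10, 0.30):
        print(frac,
              round(poisoned_estimate(frac, consistent=True), 3),
              round(poisoned_estimate(frac, consistent=False), 3))
    # Consistent poison shifts the estimate roughly in proportion to its share of
    # the data; inconsistent poison largely cancels itself out.
    ```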

    I’m rather entertained by the amount of fallacies and assumptions ITT, though. I get that you guys are excited about model-based image gen; frankly, I’m the same when it comes to text gen. But those two things won’t help here; learn the difference between “X is true” and “I want X to be true”.