• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I don’t disagree, but don’t pretend you haven’t effectively set up the equal and opposite thing here. No mods will ban anyone, but other than that every comment section is an implicit competition for the best pro-Palestinian talking point, even when decency demands otherwise. We don’t talk about Oct 7, and if we do it was friendly fire, and if it wasn’t it was a natural consequence of Israeli policy in Gaza, and that is the real issue. Yeah, fine, we admit the attack was not a hundred percent morally sound if you insist so much, but we don’t assign a moral weight to it or linger on it, because hey, when you make innocents suffer, you sow the wind and eventually reap the whirlwind. Oh sure, Hamas’ response was ugly, but what can you do, you know, be a bastard and it comes around. Now it is our moral duty to call loud and clear for a ceasefire – the cycle of violence must stop.

    I know what you’re thinking: that’s not fair! That’s not my opinion! Yeah, the circlejerk doesn’t care about your private opinion. You know better than to contradict any of the above around here in writing, and that’s enough. I’m sure a lot of people privately think “oh… tbh that last IDF strike was unconscionable” before posting on /r/worldnews the part of their opinion they know the crowd will like better.




  • The prime problem is that every social space eventually becomes a circlejerk. Bots and astroturfing exacerbate the problem, but it exists perfectly fine on its own – in the early 2000s I had the misfortune of running across plenty of gigantic, years-long circlejerks where definitely no bots or nefarious foreign manipulators were involved (I’m talking console wars, Harry Potter ship wars, stupid shit like that). People form circlejerks in the same way that salts form crystals. It’s just in their nature.

    The thing with circlejerks isn’t that there’s overwhelming agreement on some subject. You’ll get dunked on in most any social media space for claiming that the Earth is flat or that Putin is a swell guy; that in itself is obviously not a problem. What makes a circlejerk is that takes get cheered for and upvoted not in proportion to how much they are anchored in reality, but in proportion to how useful they are in galvanizing allies and disrupting enemies. Whoever shouts “glory to the cause” in the most compelling way gets all the oxygen. At that point the amount of brain rot is only going to increase. No matter how righteous the cause, inevitably there comes the point where you can go on the Righteous Cause Forum and post “2+2=5, therefore all glory to the cause” and get 400 upvotes.

    Everyone talks a big game about how much they like truth, reason and moral consistency, but in the end when it’s just them and the upvote button and “do I stop and honestly examine this argument that gives me warm fuzzy feelings”, “is it really fair to dunk on Hated Group X by applying a standard I would never apply to anyone else” – the true colors show. It’s depressing and it makes most of social media into information silos where totalizing ideologies go to get validated, and if you feel alienated by this then clearly that space isn’t for you.


  • I do exactly this kind of thing for my day job. In short: reading a syntactic description of an algorithm written in assembly language is not the equivalent of understanding what you’ve just read, which in turn is not the equivalent of having a concise and comprehensible logical representation of what you’ve just read, which in turn is not the equivalent of understanding the principles according to which the logical system thus described will behave when given various kinds of input.
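
    To make the distinction concrete, here’s a toy sketch (in Python rather than assembly for brevity; the function and values are invented for illustration, not taken from any real disassembly). A purely syntactic reading tells you “it loops, takes remainders and swaps”; the concise logical representation is “this computes gcd(a, b)”; and knowing how it behaves on various kinds of input is yet another level.

    ```python
    # Hypothetical example: three "levels" of reading the same routine.
    def mystery(a: int, b: int) -> int:
        # Syntactic reading: loop while b is nonzero, replace (a, b) with (b, a % b).
        while b:
            a, b = b, a % b
        # Logical summary: this is Euclid's algorithm, i.e. it returns gcd(a, b).
        return a

    # Behavioral understanding: how the routine acts on different inputs.
    assert mystery(48, 36) == 12
    assert mystery(7, 0) == 7   # gcd(n, 0) == n; terminates for non-negative ints
    ```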


  • This is an issue that has plagued the machine learning field since long before this latest generative AI craze. Decision trees you can understand, SVMs and Naive Bayes too, but the moment you get into automatic feature extraction and RBF kernels and stuff like that, it becomes difficult to understand how the verdicts issued by the model relate to the real world. Having said that, I’m pretty sure GPTs are even more inscrutable and have made the problem worse.
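
    As a minimal sketch of that gap, using scikit-learn (assuming it’s available; the dataset and parameters are just for demonstration): the decision tree can be dumped as human-readable if/else rules, while the RBF-kernel SVM exposes only support vectors and dual coefficients, with no comparably direct story about how its verdicts relate to the features.

    ```python
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    X, y = data.data, data.target

    # The tree's decision logic can be printed as plain if/else rules.
    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(tree, feature_names=list(data.feature_names)))

    # The RBF-kernel SVM offers no such view: just support vectors and weights.
    svm = SVC(kernel="rbf").fit(X, y)
    print(svm.support_vectors_.shape, svm.dual_coef_.shape)
    ```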





  • Let X equal the number of cans of spinach in the known universe. Let Y equal the number of times the Hulk can possibly get “angrier” without succumbing to an aneurysm. Since the Hulk gets stronger when he gets angrier (designated [Zi < Zf]) and Popeye’s spinach ability allows him to attain a level above his opponent’s strength ([Zi+1 = Zf]), the only way one of these combatants is going to lose is if the source of their power gives out. Thus if X is greater than Y, Popeye wins. If Y is greater than X, the Hulk wins. This is relatively untenable until one realizes that Olive Oyl is present in the room. Since that is the case, and theorists have speculated that Olive is in fact Female, there will be twice as many X chromosomes in the Room as Y chromosomes. Since X > Y, Popeye wins.

    (World Wide Web Fights Grudge Match, ‘Popeye vs Hulk’)


  • Yes, definitely. It instigated a lot of turmoil and a gamut of spicy takes regarding the fundamental question of whether password managers as a model “work”. On the one hand, some people laughed at the idea of putting your passwords in the cloud and touted post-it notes as a more secure alternative. On the other hand, people extolled the virtues of the cryptographic model at the base of password managers, claiming that even if the entire LastPass executive org went rogue tomorrow, your passwords would still be safe.

    As far as I understand, the truth is more nuanced. Consider that this breach took place 9 months ago, but you’re only reading about cracked passwords now. It seems like the model did what it was supposed to do, and the people behind the breach had to patiently brute-force victims’ master passwords. This means they got to the least secure passwords first: if you picked “19 deranged geese obliterating a succulent dutch honey jar at high noon” or whatever, you’re probably safe. But it doesn’t strike me as too wise to get complacent on account of this, either. Suppose next time the attackers get enough access to “tweak” the LastPass Chrome extension to exfiltrate passwords. Now what?
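
    For context, a rough sketch of the kind of key derivation a vault model like this rests on (PBKDF2-SHA256 here; the salt, iteration count and passwords below are invented for illustration). An attacker holding stolen encrypted vaults has to grind through candidate master passwords offline like this, which is why weak master passwords fall first and long passphrases hold up.

    ```python
    import hashlib

    def derive_vault_key(master_password: str, salt: bytes, iterations: int = 100_000) -> bytes:
        # PBKDF2 is deliberately slow, so each offline guess costs the attacker real work.
        return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

    salt = b"illustrative-per-user-salt"
    vault_key = derive_vault_key("19 deranged geese obliterating a succulent dutch honey jar at high noon", salt)

    # With a stolen vault blob, cracking is just guessing master passwords offline;
    # common/short passwords are exhausted long before strong passphrases.
    for guess in ["password123", "letmein", "summer2023"]:
        if derive_vault_key(guess, salt) == vault_key:
            print("cracked:", guess)
    ```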

    The thing is, we’re stuck between a rock and a hard place with passwords. We already know it’s impractical to ask users to remember 50 different secure passwords. So assuming we solve this using a password vault, there’s no optimal place to keep it. In the cloud you get incidents like this. Outside of the cloud, one day you’re going to lose your thumb drive, your machine, your whatever. “So keep a backup” – but who out of your normie relatives is honestly going to do this, and do you really trust a backup you haven’t used in 5 years to work in the moment of truth? I don’t know if there is any proper solution in the immediately visible solution space, and if there is, I don’t know if anyone has the financial incentive to implement it, sell it, buy it. People say the future is in passwordless authentication, FIDO2 and the like, but try to Google actually using one of these for your 5 most-used accounts; you’re not going to come out of the experience very thrilled.



  • Reading this comment section is so strange. Skepticism about generative AI seems to have become some kind of professional sport on the internet.

    Consensus in our group is that generative AI is a great tool. Maybe not perfect, but the comparison to the metaverse is absurd: no one asked for the metaverse or needed it for anything, as opposed to several cases where GPT has literally bailed us out of a difficult situation. For example, some proof of concept needed to be written in a programming language that no one in the group had enough experience with. Without GPT, this could have easily cost someone a week. With GPT assistance, the proof of concept was ready in less than a day.

    Generative AI does suffer from a host of problems – hallucinations, jailbreaks, injections, reality 101 failures; believe me, I’ve encountered all of these intimately, as I’ve had to use GPT for some of my day-job tasks, often against its own better judgment and despite its own woefully lacking capacity to deal with the task. What I think is interesting is a candid discussion: why do these issues persist? What have we tried? What techniques can we try next? Are these issues intractable in some profound sense, and do they constitute a hard ceiling for where generative AI can go? Is there an “impossibility theorem for putting AI on autopilot”? Or are these limitations just artifacts we can engineer away and route around?

    It seems like instead of having this discussion, it’s become in vogue to wave around the issues triumphantly and implicitly declare the field successfully dunked on, and the discussion over. That’s, to be blunt, reductive. Smartphones had issues; the early internet had issues. Sure, “they also laughed at Bozo the clown” and all that, but without a serious discussion of the landscape right now, of how far away we are from mitigating these issues and why, a lot of this “ha ha suck it AI” discourse strikes me as deeply performative. Like, suppose a year from now OpenAI solves hallucinations. The issue is just gone. Do all the cool kids who sneered at the invented legal precedents, crafted their image as knowing better than the OpenAI dweebs, elegantly implied that hallucinations are a cornerstone of why the entire field is a stupid useless dead end – do they lose any face? I think they don’t. I think this is why this sneering has become such a lucrative online professional sport.



  • An old anecdote from my alma mater – in an introductory course on discrete math, the professor was teaching combinatorics and began: “Suppose you have an urn with three balls inside, colored red, green and blue…” At this point one of the students interjected: “Half the class are electrical engineering majors, how is any of this relevant to our studies?” There was a beat, and the professor corrected himself: “Suppose you have an urn with three resistors inside, colored red, green and blue…”



  • Well, fine, and I can’t fault new published material having a “no AI” clause in its terms of service. But that doesn’t mean we get to dream this clause into being retroactively for all the works ChatGPT was trained on. Even the most reasonable law in the world can’t be enforced against someone for something they did 6 months before it was legislated.

    Fortunately the “horse out of the barn” effect here is maybe not so bad. Imagine the FOMO and user frustration when ToS and legislation catch up and ChatGPT suddenly has no access to the latest books, music, news, research, anything – just stuff from before authors knew to include the “hands off” clause, basically like the knowledge cutoff, but forever. It’s untenable; OpenAI will be forced to cave and pay up.



  • bh11235@infosec.pub to Programming@programming.dev · The Fall of Stack Overflow

    You lack vision, but I see a place where people get blocked and their questions opened then immediately closed as duplicates. Opened and closed, opened and closed all day, all night. Soon, where the internet once stood will be a string of condescending experts, admonitions that “you shouldn’t do that, do Y instead”, pleas for information closed as off-topic. Passive aggression, spiteful ego contests and wonderful, wonderful karma meters reaching as far as the eye can see. My God, it’ll be beautiful.