• 0 Posts
  • 28 Comments
Joined 6 months ago
Cake day: March 3rd, 2024

  • language is intrinsically tied to culture, history, and group identity, so any concept that is expressed through a certain linguistic system is inseparable from its cultural roots

    i feel like this is a big part of it. it reminds me of the Sapir-Whorf hypothesis. search results and neural networks are susceptible to bias just like a human is; “garbage in, garbage out”, as they say.
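    as a toy illustration of “garbage in, garbage out” (all synthetic data and made-up numbers, nothing from the article), a model trained on labels skewed by an irrelevant “group” flag just learns the skew:

    ```python
    # toy demo: bias baked into the training labels ends up in the model
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    signal = rng.normal(size=n)                  # genuinely predictive feature
    group = (rng.random(n) < 0.5).astype(float)  # irrelevant "group" flag
    # biased labels: the group flag deliberately leaks into the label
    label = ((signal + 1.5 * group + rng.normal(scale=0.5, size=n)) > 0.75).astype(float)
    X = np.column_stack([signal, group])

    # plain logistic regression trained by gradient descent
    w, b = np.zeros(2), 0.0
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - label)) / n
        b -= 0.5 * np.mean(p - label)

    # the weight on the "group" flag comes out large: the model
    # faithfully encodes whatever bias was in its training data
    print("learned weights:", w)
    ```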

    the quote directly after mentions that newer or more precise searches produce more coherent results across languages. that reminds me of the time i got curious and looked up Marxism on Conservapedia. as you might expect, the high-level descriptions of Marxism are highly critical and carry a lot of bias, but interestingly, once you dig down to concepts like historical materialism it gets harder to spin, since popular media narratives largely ignore those details and any “spin” would likely be blatant falsehood.

    the author of the article seems to really want there to be a malicious, conspiratorial effort to suppress information, and, while that may be true in some cases, it just doesn’t seem feasible at scale. this is good to call out, but i don’t think the people who dedicate their lives to researching and advancing language technology are sleeping on the fact that bias exists.


  • it’s super weird that people think LLMs are so fundamentally different from neural networks, the underlying technology. neural network architectures are constantly improving, and LLMs are just the product of a ton of research and what emerged after the discovery of the transformer architecture. what LLMs have shown us is that we’re definitely on the right track using neural networks to solve a wide range of problems classified as “AI”.
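
    for the curious, here’s a rough sketch of scaled dot-product self-attention, the core trick the transformer introduced (toy shapes and pure numpy, not any production model):

    ```python
    # minimal self-attention: every token mixes information from every other token
    import numpy as np

    def self_attention(x, Wq, Wk, Wv):
        """x: (seq_len, d_model) token embeddings."""
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
        return weights @ v                               # weighted mix of values

    rng = np.random.default_rng(0)
    d = 8
    x = rng.normal(size=(4, d))                 # 4 tokens, 8-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    print(self_attention(x, Wq, Wk, Wv).shape)  # (4, 8)
    ```

    stack a pile of these layers and train on enough text, and you get the LLMs everyone is arguing about.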







  • a lot of things are unknown.

    i’d be very surprised if it doesn’t have an opt-out.

    a point i was trying to make is that a lot of this info already exists on their servers, and your trust in the privacy of that is what it is. if you don’t trust their claims that it runs on per-user virtualized compute, that it’s e2e encrypted, or that they’re using local models, i don’t know what to tell you. the model isn’t hoovering up your messages and sending them back to Apple unencrypted; it doesn’t need to for these features.

    all that said, this is just what they’ve told us, and not many people know exactly what the implementation details are.

    the privacy issue with Recall, as i said, is that it collects a ton of data passively, without explicit consent. if i open my KeePass database on a Recall-enabled machine, i have little assurance that the bot doesn’t know my Gmail password. Apple’s bot, by contrast, uses existing data in controlled systems; that’s the difference. sure, maybe people see Apple as more trustworthy, but maybe sociology has something to do with your reaction to it as well.


  • people probably hate the iOS integration just because it’s another AI product, but the two are fundamentally different. the problem with Recall isn’t the AI; it’s the trove of extra data it collects that you normally wouldn’t save to disk, whereas the iOS features only access existing data that you give them access to.

    from my perspective this is a pretty good use case for “AI”, and about as good as you can do privacy-wise, if their claims pan out. most features use existing, user-controlled data and local models, and the system is pretty explicit about when it’s reaching out to the cloud.

    this data is already accessible to services on your phone or exists in iCloud. if you don’t trust that infrastructure already, then of course you don’t want this feature. you know how you can search for pictures of people in Photos? that’s the terrifying cLoUD Ai looking through your pictures and classifying them. this feature actually moves a lot of that semantic search on-device, which is inherently more private.
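
    as a purely hypothetical sketch of what “semantic search on device” means (the embed() below is a crude stand-in for a real local model, and none of this is Apple’s actual API): embed everything once, then rank by similarity locally.

    ```python
    # on-device semantic search: build an embedding index, rank by cosine similarity
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """placeholder embedding: hash words into a fixed-size unit vector."""
        v = np.zeros(64)
        for word in text.lower().split():
            v[hash(word) % 64] += 1.0
        return v / (np.linalg.norm(v) or 1.0)

    photos = ["dog on the beach", "birthday cake with candles", "dog in the snow"]
    index = np.stack([embed(p) for p in photos])  # built once, stored on device

    query = embed("my dog at the beach")
    scores = index @ query                        # cosine similarity (unit vectors)
    print(photos[int(np.argmax(scores))])         # best local match, no cloud call
    ```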

    of course it does make access to that data easier, so if someone could unlock your device they could potentially reach sensitive data with simple prompts like “nudes plz”. but you should have layers of security on more sensitive stuff, like bank or social accounts, that would keep Siri from reading it. Siri likely won’t be able to access app data unless it’s exposed via their API.



  • tbh this research has been ongoing for a while. this guy has been working on the problem in his homelab for years. it’s also known that this could be a step toward better efficiency.

    this definitely doesn’t spell the end of digital electronics. at the end of the day, we’re still going to want light switches, and it’s not practical to have a butter-spreading robot that can experience an existential crisis. neural networks, both organic and artificial, perform more or less the same function: given some input, predict an output and attempt to learn from the outcome. the neat part is that when you pile on a trillion of them, you get a being that can efficiently adapt to scenarios it’s not familiar with.
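
    that predict-then-learn loop in its smallest form: one artificial neuron on a toy task (all numbers made up):

    ```python
    # a single neuron learning logical OR by nudging its weights after each prediction
    import numpy as np

    rng = np.random.default_rng(0)
    w, b, lr = rng.normal(size=2), 0.0, 0.5

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

    for _ in range(500):
        for x, target in data:
            x = np.asarray(x, dtype=float)
            pred = 1 / (1 + np.exp(-(w @ x + b)))  # given some input, predict an output
            err = pred - target                    # compare prediction to outcome
            w -= lr * err * x                      # learn: nudge the weights
            b -= lr * err

    for x, target in data:
        p = 1 / (1 + np.exp(-(w @ np.asarray(x, dtype=float) + b)))
        print(x, target, round(float(p), 2))       # predictions converge on the targets
    ```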

    you’ll notice they’re not advertising any experimental results on prediction benchmarks. that’s because 1) this isn’t large-scale enough to compete with state-of-the-art ANNs, 2) the relatively low resolution (16-bit) means inputs and outputs will be simple, and 3) this is more of a SaaS product than an introduction to organic computing as a concept.

    it looks like a neat API if you want to start messing with these concepts without having to build a lab.







  • IBM then. or, i don’t know, the British Royal Family?

    the reality of talking about extremist economics is that no one knows how it would work out in the long term. regardless, if it happened tomorrow, we’d already have a Microsoft to deal with.

    “taxation is theft” “wage labour is exploitation”

    sometimes things are subtle and complicated and can’t be practically boiled down to absolutes.


  • “we don’t know how” != “it’s not possible”

    i think OpenAI, more than anyone, knows the challenges of scaling data and training. anyone working on AI knows the line: “a baby can learn to recognize elephants from a single instance”. reducing training data and time is fundamental to advancement. don’t get me wrong, it’s great to put numbers to these things; i just don’t think this paper is super groundbreaking or profound. a bit clickbaity and sensational for Computerphile.
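
    the “single instance” idea, sketched with a made-up feature space (weight in kg, neck and trunk length in m; one labeled example per class is the entire training set):

    ```python
    # one-shot classification: nearest prototype in a decent feature space
    import numpy as np

    prototypes = {
        "elephant": np.array([5000.0, 0.5, 2.0]),  # kg, neck m, trunk m
        "giraffe":  np.array([1200.0, 2.4, 0.0]),
    }
    scale = np.array([1000.0, 1.0, 1.0])           # crude per-feature normalization

    def classify(x: np.ndarray) -> str:
        return min(prototypes,
                   key=lambda k: np.linalg.norm((prototypes[k] - x) / scale))

    print(classify(np.array([4500.0, 0.6, 1.8])))  # -> elephant, from one example each
    ```

    the hard part, of course, is learning a feature space that good in the first place, and that’s where all the training data currently goes.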