Former RIF user from reddit, new to lemmy.

  • 0 Posts
  • 5 Comments
Joined 1 year ago
Cake day: August 2nd, 2023

  • I’d assume that’s either due to bias in the training set or poor design choices. The former is already a big problem in facial recognition, and it can’t really be fixed without updating the datasets. The latter could be something like relying only on visible light for classification, where the contrast between target and background won’t necessarily be the same across skin tones and times of day. Cars aren’t limited by DNA to only grow a specific type of eye, and you can still create training data from things like infrared or LIDAR. Either way, it goes to show how important it is to test datasets for bias and deal with it before actually deploying anything; a rough sketch of the kind of check I mean is below.
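    A minimal sketch of that kind of check, with made-up numbers and group labels: split the evaluation set by an annotated skin-tone attribute and compare per-group miss rates, so any bias shows up as a gap between groups instead of hiding inside one aggregate score.

    ```python
    from collections import defaultdict

    # Hypothetical evaluation records: (skin_tone_group, ground_truth_pedestrian, model_detected)
    eval_records = [
        ("lighter", True, True),
        ("lighter", True, True),
        ("lighter", True, False),
        ("darker",  True, True),
        ("darker",  True, False),
        ("darker",  True, False),
    ]

    hits = defaultdict(int)    # correct detections per group
    totals = defaultdict(int)  # actual pedestrians per group

    for group, is_pedestrian, detected in eval_records:
        if is_pedestrian:  # only actual pedestrians count towards recall
            totals[group] += 1
            hits[group] += int(detected)

    # A large gap in miss rate between groups is the red flag to fix before deployment.
    for group, total in totals.items():
        recall = hits[group] / total
        print(f"{group}: recall = {recall:.2f}, miss rate = {1 - recall:.2f}")
    ```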



  • It does, but it’s worth noting that the theoretical basis for much of the rapid progress we’re seeing now (e.g. machine learning) has actually existed for quite a long time. Training very large models wasn’t computationally feasible back when they were first theorised, but the underlying ideas were already there.

    When it comes to brains, we don’t even have a good understanding of how multisensory integration works yet, let alone how we could, even in theory, implant multisensory impressions like ads. It’s much easier with things like movement disorders or paralysis, because our understanding of those phenomena is far more advanced. Plus, we’re only really dealing with one modality there: movement.

    Deep brain stimulation for psychiatric conditions does exist, but it’s poorly understood, to the point where there isn’t even a real consensus on where the stimulating electrodes should be placed for the best effect. At least, that’s how a colleague who worked on DBS described it a while ago, and I doubt that’s changed dramatically in a year.