• 0 Posts
  • 150 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • An exceptionally well-trained AI customer service agent has the potential to be amazing.

    I only call or try to chat/email with customer service when something has gone badly wrong - something outside what typical customer service can help with.

    If an AI can recognize that my problem is human-worthy and escalate it faster, that would save me time in the chat queue talking with someone who barely speaks my native language.

    Alas, AIs will be poorly trained, so the bad-English CS reps will still be right behind the AI interface, waiting for me.

  • When I was young, I remember that banks often had large drive-thrus with pneumatic tube systems at each car stall.

    There would only be one teller but they could serve quite a few lanes.

    If you wanted a cash withdrawal, you might put your ID and your withdrawal slip in the tube, and a few minutes later it would come back with cash in it.

    It was pretty rad. But ATMs seem like a better bet overall.

  • Well-thought-out and well-articulated opinion, thanks for sharing.

    If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.

    When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label the result as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don’t intend to suggest models trained on CSAM either; I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.

    Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.

    To a high degree, I think we can still label an individual who consumes this type of AI content as a pedophile, and although being a pedophile is not in and of itself illegal, the label comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.


  • Well stated and explained. I’m not an AI researcher, but I develop with LLMs quite a lot right now.

    Hallucination is a huge problem we face when we’re trying to use LLMs for non-fiction. It’s a little like having a friend who can lie straight-faced and convincingly: you cannot tell whether they’re telling the truth or lying until you act on the output.

    I think one of the nearest-term solutions may be the addition of extra layers, or observer engines, that are very deterministic and trained only on extremely reputable sources - peer-reviewed trade journals, for example, or other sources we deem trustworthy. Unfortunately, this could only improve our confidence in the facts, not remove hallucination entirely.

    It’s even feasible that we could have multiple observers with different domains of expertise (i.e., different training sources) that vote to fact-check and subjectively rate the trustworthiness of the LLM’s output, as in the sketch below.
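
    As a minimal, hypothetical sketch of that voting idea (every observer name and score here is an invented stand-in; a real observer would retrieve from curated sources rather than return a constant):

    ```python
    from dataclasses import dataclass
    from typing import Callable, List

    # Each observer is a domain-specific fact-checker that scores a claim
    # between 0.0 (likely hallucinated) and 1.0 (well supported).
    @dataclass
    class Observer:
        domain: str
        score_claim: Callable[[str], float]  # support for the claim, in [0, 1]

    def trustworthiness(claim: str, observers: List[Observer]) -> float:
        """Pool the observers' votes into one confidence score."""
        votes = [obs.score_claim(claim) for obs in observers]
        return sum(votes) / len(votes)

    # Hypothetical panel; real observers might query corpora of
    # peer-reviewed journals before scoring.
    panel = [
        Observer("medicine", lambda claim: 0.9),
        Observer("law", lambda claim: 0.4),
        Observer("general", lambda claim: 0.7),
    ]

    llm_claim = "Some factual statement produced by the LLM."
    print(f"trust score: {trustworthiness(llm_claim, panel):.2f}")
    # e.g. route anything below 0.5 to a human for review
    ```

    Plain averaging is just the simplest pooling rule; weighted or majority voting would slot in the same way.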

    But in the short term, all this will accomplish is to roll the dice in our favor a bit more often.

    The results perceived by end users, however, may improve significantly. Consider some human examples: sometimes people disagree with their doctor, so they go see another doctor, and another, until they get the answer they want. Sometimes two very experienced lawyers look at the same facts and disagree.

    The system that prevents me from knowingly stating something as true, despite having no ability to back up my claims, is my reputation and my personal values and ethics. LLMs can only pretend to have those traits when we tell them to.