
  • It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be.

    That is a gross oversimplification. LLMs operate on much more than bare statistical probabilities. It’s true that they predict the next word based on probabilities learned from their training data, but they also pass the prompt through stacked transformer layers whose attention mechanism teases out meaningful relationships between words and phrases in the context.

    For example: imagine you give an LLM the prompt, “Dumbledore went to the store to get ice cream and passed his friend Sam along the way. At the store, he got chocolate ice cream.” Now, if you ask the model, “Who got chocolate ice cream from the store?” it doesn’t just blindly rely on statistical likelihood. There’s no way you could argue that “Dumbledore” is a statistically likely word to follow the text “Who got chocolate ice cream from the store?” Instead, the model uses the specific context to resolve “he” back to “Dumbledore” and answer that Dumbledore is the one who got chocolate ice cream.

    So, it’s not just statistical probabilities; the models have an ability to comprehend the context of a prompt and generate meaningful responses based on it. You can see this with any off-the-shelf question-answering model, as in the sketch below.
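
    To make that concrete, here’s a minimal sketch using the Hugging Face transformers question-answering pipeline (the model choice and the expected output are my assumptions for illustration, not something anyone in this thread tested):

        from transformers import pipeline

        # Load an off-the-shelf extractive QA model (model choice is illustrative).
        qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

        context = ("Dumbledore went to the store to get ice cream and passed his "
                   "friend Sam along the way. At the store, he got chocolate ice cream.")

        result = qa(question="Who got chocolate ice cream from the store?", context=context)
        print(result["answer"])  # expected (not guaranteed): "Dumbledore"

    The interesting part is that answering correctly requires linking “he” back to “Dumbledore” across sentences, which is exactly the kind of contextual relationship that raw next-word frequency can’t capture on its own.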

  • Exactly, because there were no civilians maimed or killed ;) Heck, tanks literally CAN’T maim or kill people; that’s not what they’re made for. Militaries exclusively bring in tanks to help spread love and peace and good vibes.

    Actually, if you’ve seen the pictures of the military armed with rifles marching around the city that day, did you know that those were just the kind with the flag that pops out and says ‘BANG!’? Haha they were just trying to give everybody a good laugh to ease the tension. And it WORKED and NO ONE GOT HURT!