• skisnow@lemmy.ca · 1 month ago

      What’s the associated system instruction set to? If you’re using the API, it won’t give you the standard Google Gemini Assistant system instructions, and LLMs are prone to go off the rails very quickly if not given proper instructions up front, since they’re essentially just “predict the next word” functions at heart.
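
      For example, with the Gemini API you pass the system instruction explicitly on each request; here’s a minimal sketch assuming the google-genai Python SDK (the model name, key, and instruction text are just placeholders):

      ```python
      from google import genai
      from google.genai import types

      client = genai.Client(api_key="YOUR_API_KEY")

      # Without system_instruction set here, the model gets no persona or
      # guardrails beyond its base training - there is no hidden
      # "Gemini Assistant" prompt added for you.
      response = client.models.generate_content(
          model="gemini-2.0-flash",  # placeholder model name
          contents="What's a good chef's knife?",
          config=types.GenerateContentConfig(
              system_instruction="You are a concise, helpful assistant."
          ),
      )
      print(response.text)
      ```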

        • skisnow@lemmy.ca · 1 month ago

          Interesting, I don’t see any huge red flags there.

          I gather frequency penalties have fallen out of favour, since their harmful side effects turned out to be worse than the very occasional loop trap they prevent.
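
          For anyone curious what that penalty actually does: in the common linear form (the one OpenAI-style APIs expose) it’s just a per-token logit subtraction applied before sampling. A minimal sketch; the function name and the 0.5 default are illustrative:

          ```python
          from collections import Counter

          def apply_frequency_penalty(logits: dict[str, float],
                                      generated: list[str],
                                      penalty: float = 0.5) -> dict[str, float]:
              """Lower each token's logit in proportion to how often it has
              already been generated, making heavy repetition less likely."""
              counts = Counter(generated)
              return {tok: score - penalty * counts[tok]
                      for tok, score in logits.items()}
          ```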

    • BootLoop@sh.itjust.works · 1 month ago

      It can happen on most LLMs, and they’re usually configured to heavily disincentivize repeating text.

      I believe what happens is that when the LLM is choosing its next word, it looks back at the sentence, sees that it talked about knives, and so wants to keep talking about knives; that’s how it gets itself into a loop.
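
      A contrived sketch of that intuition: under pure greedy decoding (“always pick the single most likely next word”), any cycle in the model’s preferences repeats forever. The table below is made up, and real models work over probability distributions rather than a lookup table, but the trap is the same:

      ```python
      # Contrived "most likely next word" table containing a cycle:
      # knife -> is -> sharp -> knife -> ...
      NEXT = {"the": "knife", "knife": "is", "is": "sharp", "sharp": "knife"}

      def greedy_decode(start: str, steps: int = 10) -> list[str]:
          """Always pick the single most likely next word; once the walk
          enters the knife/is/sharp cycle it can never leave."""
          out = [start]
          for _ in range(steps):
              out.append(NEXT[out[-1]])
          return out

      print(" ".join(greedy_decode("the")))
      # the knife is sharp knife is sharp knife is sharp knife
      ```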