W
TF2 Pyro starter pack
What’s the associated system instruction set to? If you’re using the API, it won’t give you the standard Google Gemini Assistant system instructions, and LLMs are prone to go off the rails very quickly if not given proper instructions up front, since they’re essentially just “predict the next word” functions at heart.
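A minimal sketch of what this means in practice, using the Gemini REST `generateContent` request body (the field names follow the public REST docs as I recall them, but double-check the current reference; the prompt text and function name are made up for illustration):

```python
def build_request(user_text, system_text=None):
    """Build a generateContent-style JSON body (illustrative sketch)."""
    body = {"contents": [{"role": "user", "parts": [{"text": user_text}]}]}
    if system_text:
        # Without this field the model runs with no system prompt at all --
        # the API does not inject a default assistant prompt for you.
        body["system_instruction"] = {"parts": [{"text": system_text}]}
    return body

bare = build_request("Tell me about knives")
guided = build_request("Tell me about knives", "You are a concise, helpful assistant.")
```

The point is simply that `system_instruction` is opt-in: leave it out and the model predicts continuations with no steering at all.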
Redacted
Interesting, I don’t see any huge red flags there.
I gather frequency penalties have fallen out of favour, due to the harmful side effects being worse than the very occasional loop trap.
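For what it's worth, a frequency penalty is just a per-token subtraction from the logits before sampling. A minimal sketch of the OpenAI-style version (function name and numbers are invented for illustration; exact formulas vary by provider):

```python
from collections import Counter

def apply_frequency_penalty(logits, generated, penalty=0.5):
    """Subtract penalty * (times already emitted) from each token's score."""
    counts = Counter(generated)
    return {tok: score - penalty * counts[tok] for tok, score in logits.items()}

# Before the penalty, "knife" wins; after being emitted twice, it drops below "fork".
logits = {"knife": 2.0, "fork": 1.5}
penalized = apply_frequency_penalty(logits, ["knife", "knife"])
```

The side effect is visible in the same sketch: the penalty punishes *any* legitimate repetition (names, technical terms) just as hard as a loop, which is presumably why it fell out of favour.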
It can happen on most LLMs, and sampling is usually configured to heavily disincentivize repeating text.
I believe what happens is that when the LLM chooses the next word, it looks back at the sentence, sees that it has been talking about knives, and so wants to keep talking about knives; then it gets itself into a loop.
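The mechanism can be sketched with a toy deterministic next-word table standing in for greedy decoding (the words and table are made up; a real model has probabilities, but with greedy decoding the effect is the same once a state repeats):

```python
# Toy "predict the next word" table: once decoding revisits a word it has
# already emitted, the sequence cycles forever.
transitions = {"my": "favourite", "favourite": "knife", "knife": "is", "is": "my"}

def greedy_generate(start, steps):
    out = [start]
    for _ in range(steps):
        out.append(transitions[out[-1]])
    return out

words = greedy_generate("my", 8)  # "my favourite knife is my favourite knife is my"
```

Because each word depends only on recent context, nothing in the loop itself ever breaks the cycle; that is what repetition penalties are meant to interrupt.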