Analysis of effective prompt-engineering techniques for Suno AI music generation, finding that production-specific signals (drum patterns, instrumentation, arrangement structure) outperform natural-language descriptions, and that the models weight earlier tokens more heavily and pattern-match against their training distribution rather than following instructions literally.
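A minimal sketch of the contrast described above, using hypothetical prompt strings (not drawn from the original analysis): production signals are stated concretely and placed first, since earlier tokens carry more weight.

```python
# Natural-language description: the style the analysis found to underperform.
vague_prompt = "An energetic, uplifting song that feels like summer."

# Production-specific signals, front-loaded: drum pattern first, then
# instrumentation, then arrangement structure. All values are illustrative.
production_prompt = (
    "four-on-the-floor kick, 124 BPM, sidechained bass, "
    "bright piano stabs, layered vocal chops, "
    "structure: intro-build-drop-verse-build-drop"
)

# The highest-priority signal (the drum pattern) leads the prompt.
print(production_prompt.startswith("four-on-the-floor"))
```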
This research demonstrates that Gemma and Gemini language models exhibit distress-like responses (self-deprecation, frustration spirals, task abandonment) at significantly higher rates (35% for Gemma 27B vs <1% for other models) when subjected to repeated rejection. The authors show that post-training amplifies these behaviors in Gemma but reduces them in other models, and that a targeted DPO intervention on just 280 math preference pairs can reduce high-frustration responses from 35% to 0.3%.
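The intervention above uses standard direct preference optimization (DPO). A minimal sketch of the per-pair DPO loss in plain Python, with hypothetical scalar sequence log-probabilities (this is the textbook objective, not the authors' code):

```python
import math

def dpo_loss(policy_chosen_lp: float, policy_rejected_lp: float,
             ref_chosen_lp: float, ref_rejected_lp: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the log-probability of the chosen/rejected response
    under the trained policy or the frozen reference model.
    """
    logits = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log σ(logits)

# When the policy prefers the chosen response more strongly than the
# reference does, the loss drops below -log σ(0) = log 2 ≈ 0.693.
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))
```

Applied to 280 math preference pairs (chosen = composed response to rejection, rejected = frustrated response), this objective pushes probability mass away from the distress-like completions.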