Research demonstrates that removing code comments from SWE-bench Verified tasks unexpectedly improves GPT-5-mini's performance while leaving GPT-5.2 unaffected, revealing that the semantic content of comments creates model-dependent 'memetic' effects (distraction, anchoring, overgeneralization) that can either help or hinder AI agent reasoning. The study frames codebases as informational organisms and proposes antimemetics: using documentation as a defensive system to guide or constrain agent behavior.
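The intervention described above, stripping comments from a repository before handing it to an agent, could be sketched as follows. This is a minimal illustration using Python's standard `tokenize` module, not the study's actual pipeline; the function name `strip_comments` is a placeholder chosen here.

```python
import io
import tokenize

def strip_comments(source: str) -> str:
    """Return `source` with all '#' comments removed, code layout preserved."""
    readline = io.StringIO(source).readline
    # Drop COMMENT tokens; untokenize pads the gaps with spaces so the
    # remaining tokens keep their original line/column positions.
    kept = [tok for tok in tokenize.generate_tokens(readline)
            if tok.type != tokenize.COMMENT]
    return tokenize.untokenize(kept)
```

A position-preserving approach like this matters for the experiment: it changes only the semantic content of comments, not line numbers or code structure, so any performance difference is attributable to the removed text.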
ai-agent-behavior
code-analysis
llm-evaluation
benchmark
prompt-injection
semantic-content
codebase-alignment
memetics
swe-bench
gpt-models
code-comments
agent-robustness
SWE-bench Verified
mini-swe-agent
GPT-5-mini
GPT-5.2
OpenAI
requests
Matplotlib
Antimemetic AI