Hi, this is true. Although we do opt out of AI scrapers via our robots.txt, this is still the Internet, and relying on the platform for such protection is never 100% safe, whether that platform is GitHub or ours. We recently had to block IP addresses that were relentlessly scraping our platform, but only _after_ they had caused performance problems. ~n
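For context, the robots.txt opt-out mentioned above typically looks something like the sketch below. The user-agent names here are examples of commonly published AI crawler agents, not necessarily the ones in our actual robots.txt:

```
# Minimal sketch of an AI-scraper opt-out in robots.txt
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

As the WIRED piece below illustrates, robots.txt is purely advisory: well-behaved crawlers honor it, but nothing technically prevents a scraper from ignoring it, which is why IP-level blocking was still needed.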
P.S. WIRED recently published a piece on AI "content aggregators" ignoring robots.txt: https://www.wired.com/story/perplexity-is-a-bullshit-machine/