The ClaudeBot web crawler that Anthropic uses to scrape training data for AI models like Claude has hammered iFixit’s website almost a million times in a 24-hour period, seemingly violating the repair company’s Terms of Use in the process.
“If any of those requests accessed our terms of service, they would have told you that use of our content [is] expressly forbidden. But don’t ask me, ask Claude!” said iFixit CEO Kyle Wiens on X, posting images that show Anthropic’s chatbot acknowledging that iFixit’s content was off limits. “You’re not only taking our content without paying, you’re tying up our devops resources. If you want to have a conversation about licensing our content for commercial use, we’re right here.”
Hey @AnthropicAI: I get you're hungry for data. Claude is really smart! But do you really need to hit our servers a million times in 24 hours?
You're not only taking our content without paying, you're tying up our devops resources. Not cool.
— Kyle Wiens (@kwiens) July 24, 2024
iFixit’s Terms of Use state that “reproducing, copying or distributing” any content from the website is “strictly prohibited without the express prior written permission” from the company, and explicitly include “training a machine learning or AI model.” When 404 Media questioned Anthropic about this, however, the AI company pointed to an FAQ page saying that its crawler can only be blocked via a robots.txt file.
Wiens says iFixit has since added a crawl-delay directive to its robots.txt file. We have asked Wiens and Anthropic for comment and will update this story if we hear back.
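For a sense of what that directive does, here is a minimal sketch, using Python’s standard urllib.robotparser, of how a compliant crawler would read it; the rules and the ten-second delay below are made up for illustration and are not iFixit’s actual robots.txt:

    from urllib import robotparser

    # Hypothetical robots.txt rules asking Anthropic's crawler to slow down;
    # not iFixit's real file.
    robots_lines = [
        "User-agent: ClaudeBot",
        "Crawl-delay: 10",
    ]

    parser = robotparser.RobotFileParser()
    parser.parse(robots_lines)

    # A compliant crawler reads the delay and waits that long between requests.
    delay = parser.crawl_delay("ClaudeBot")
    print(f"ClaudeBot is asked to wait {delay} seconds between requests")  # 10

Crawl-delay only throttles a crawler that chooses to honor it; it doesn’t stop scraping on its own.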
iFixit doesn’t seem to be alone, with Read the Docs co-founder Eric Holscher and Freelancer.com CEO Matt Barrie saying in Wiens’ thread that their sites had also been aggressively scraped by Anthropic’s crawler. Nor does this seem to be new behavior for ClaudeBot, with several months-old Reddit threads reporting a dramatic increase in Anthropic’s web scraping. In April this year, the Linux Mint web forum attributed a site outage to strain caused by ClaudeBot’s scraping activity.
Disallowing crawlers via robots.txt is also the opt-out method of choice for other AI companies like OpenAI, but it gives website owners little flexibility to specify which kinds of scraping are and aren’t permitted, and another AI company, Perplexity, has been known to ignore robots.txt exclusions entirely. Still, it is one of the few options available for companies to keep their data out of AI training material, an approach Reddit has taken in its recent crackdown on web crawlers.
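That coarseness is easy to see in another short sketch, again using urllib.robotparser with a placeholder example.com domain: a site can only tell a named crawler which paths it may fetch, not what the fetched content may be used for.

    from urllib import robotparser

    # Hypothetical rules blocking ClaudeBot from an entire site while leaving
    # every other crawler unrestricted (example.com is a placeholder domain).
    robots_lines = [
        "User-agent: ClaudeBot",
        "Disallow: /",
    ]

    parser = robotparser.RobotFileParser()
    parser.parse(robots_lines)

    print(parser.can_fetch("ClaudeBot", "https://example.com/repair-guide"))     # False
    print(parser.can_fetch("SomeOtherBot", "https://example.com/repair-guide"))  # True

There is no directive that says “crawl for search indexing but not for AI training,” which is the distinction publishers like iFixit are asking for.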