Just like when mastodon.social condemned Meta for its horrible moderation decisions and inability to act in its users' interest, and said the instance would be cutting ties and not federating with Threads, they kept on federating like nothing happened.
I don’t believe anything coming out of mastodon.social unless I can see action being taken with my own two eyes.
Also, blocking scrapers is very easy, and it has nothing to do with robots.txt (which they ignore anyway).
This instance sees 500+ IPs with differing user agents all connecting at once, each one staying within rate limits because the load is spread across the bots.
The only way I know it’s a scraper is when they do something dumb, like using “google.com” as the referrer for every request, or when I eyeball the logs and notice multiple entries from the same /12.
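That /12 eyeballing can be automated. A minimal sketch, assuming you have already pulled the client IPs out of the access log; the function name `flag_scraper_subnets` and the threshold of 50 distinct addresses are illustrative choices, not anything from a real blocklist tool:

```python
from collections import defaultdict
from ipaddress import ip_network

def flag_scraper_subnets(ips, threshold=50):
    """Group client IPs by their /12 supernet and return the supernets
    that contain `threshold` or more distinct addresses.

    A distributed scraper shows up this way even when every individual
    IP stays politely within rate limits: the addresses cluster in a
    handful of provider ranges.
    """
    seen = defaultdict(set)
    for ip in ips:
        # strict=False lets us treat the host address as a member of
        # its containing /12 network instead of raising ValueError.
        supernet = ip_network(f"{ip}/12", strict=False)
        seen[supernet].add(ip)
    return {net: len(addrs) for net, addrs in seen.items()
            if len(addrs) >= threshold}
```

With a low threshold for demonstration, `flag_scraper_subnets(["34.0.0.1", "34.1.2.3", "8.8.8.8"], threshold=2)` flags only the `34.0.0.0/12` range, since the two `34.x` addresses share a supernet while `8.8.8.8` sits alone in its own.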
> Just like when mastodon.social condemned Meta for its horrible moderation decisions and inability to act in its users' interest, and said the instance would be cutting ties and not federating with Threads, they kept on federating like nothing happened.
> I don’t believe anything coming out of mastodon.social unless I can see action being taken with my own two eyes.
> Also, blocking scrapers is very easy, and it has nothing to do with robots.txt (which they ignore anyway).
Can you please show exactly where this was said?
The entirety of the internet disagrees.
How is blocking scrapers easy?
> This instance sees 500+ IPs with differing user agents all connecting at once, each one staying within rate limits because the load is spread across the bots.
> The only way I know it’s a scraper is when they do something dumb, like using “google.com” as the referrer for every request, or when I eyeball the logs and notice multiple entries from the same /12.
Exactly this: you can only stop scrapers that play by the rules.
Each one of those books powering GPT had, like, protection on them already.