I’m a blade runner. 😁
Just here for good conversation with good people.
I don’t think it’s a bad idea, but it’s largely dependent on the crawler. I can’t speak for AI-based crawlers, but typical scraping either targets specific elements on a page or grabs the whole page and parses it for what you’re looking for. In both cases, your content has already been scraped and added to the pile. Overall, I have to wonder how long “poisoning the well” is going to work. Take me with a grain of salt, though; I work on detecting bots for a living.
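For anyone curious, here’s a rough sketch of the two scraping styles I mean, using only Python’s stdlib parser. The page and the element choices are made up for illustration, not from any real crawler:

```python
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    """Style 1: target one specific element (here, <title>)."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

class TextScraper(HTMLParser):
    """Style 2: grab all the text, then search the pile afterward."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

# Toy page standing in for whatever the crawler fetched.
page = "<html><head><title>Example</title></head><body><p>Hello world</p></body></html>"

t = TitleScraper()
t.feed(page)

a = TextScraper()
a.feed(page)

print(t.title)                               # targeted element -> "Example"
print("Hello world" in " ".join(a.chunks))   # searching the pile -> True
```

Either way, by the time any “poisoned” markup matters, the content is already sitting in the scraper’s pile.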
Mom’s Eggo waffles.
… enhances the tightening of the butterfly and the meat
Excuse me, what?
Late reply here too. I’m sorry about that.
You can read the comment I made here regarding what I think could be causing your problems.
I do take this seriously and will try to find some time to put together a configuration like this for testing, so thanks for sharing.
Apologies for the late response!
I’ll echo similar thoughts to what I said in another comment. LibreWolf, Mullvad, and other privacy-focused browsers are going to be a double-edged sword. Take me with a grain of salt, but these browsers actually make you stand out in terms of fingerprinting. They have their own unique signatures, and the more you tweak, the more you stand out. Does it protect your privacy? It’s really hard to tell; I’m not aware of any data suggesting one way or the other. But these changes are going to make you more likely to be challenged by captchas and blocked by sites in general.
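A toy way to picture the double-edged sword: a fingerprint is just a combination of attributes, and the fewer people who share your exact combination, the more identifying it is. The population numbers below are completely made up for illustration:

```python
from collections import Counter

# Made-up population of (browser, OS, language) fingerprints.
population = (
    [("Chrome", "Windows", "en-US")] * 900      # the crowd
    + [("Firefox", "Linux", "en-US")] * 95
    + [("LibreWolf", "Linux", "en-US")] * 5     # hardened browser stands out
)

counts = Counter(population)

def anonymity_set(fp):
    """How many users share this exact fingerprint?"""
    return counts[fp]

print(anonymity_set(("Chrome", "Windows", "en-US")))   # -> 900, blends in
print(anonymity_set(("LibreWolf", "Linux", "en-US")))  # -> 5, easy to track
```

Real fingerprints have far more dimensions (canvas, fonts, WebGL, and so on), which only makes unusual combinations rarer.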
I wish we didn’t have to try and solve this type of problem. Privacy should be a right.
Thanks for sharing that!
Truthfully, Firefox is fairly easy to detect. Several facets of its API make for quick identification. For example, Firefox will report its build ID. It also won’t report specifics about the WebGL renderer you’re using, like the vendor and architecture.
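As a toy sketch of how a server-side rule might score those two tells: the report field names below are my own invention, not any real collector’s schema, and the heuristic is deliberately simplistic.

```python
# Hypothetical fingerprint report, e.g. collected client-side and sent as JSON.
def looks_like_firefox(report: dict) -> bool:
    """Crude heuristic combining two Firefox tells described above."""
    # navigator.buildID only exists in Firefox (frozen to a fixed value).
    has_build_id = bool(report.get("buildID"))
    # Firefox masks the real WebGL vendor instead of exposing the GPU details.
    masked_webgl = report.get("webglVendor") in (None, "Mozilla")
    return has_build_id and masked_webgl

print(looks_like_firefox({"buildID": "20181001000000", "webglVendor": "Mozilla"}))  # True
print(looks_like_firefox({"webglVendor": "Google Inc. (NVIDIA)"}))                  # False
```

Real detection uses many more signals than two, but the idea is the same: each API quirk narrows down what you can be.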
The link you shared is great and really highlights something I was thinking about today regarding this subject. The more you harden and change things, the more you stand out. You’re also more likely to trigger bot detection when you alter specifics about your browser, like the major version you’re on. I’ve seen some extensions change the user agent to much older major versions like Firefox 60. That’s a big red flag.
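A hypothetical version of that red-flag rule might look like this. The “current” version number and the lag threshold are illustrative, not real release data or any vendor’s actual rule:

```python
import re

# Assumption for illustration only; a real rule would track actual releases.
CURRENT_FIREFOX_MAJOR = 140

def ua_version_flag(user_agent: str, max_lag: int = 20) -> bool:
    """Flag user agents claiming an implausibly old Firefox major version."""
    m = re.search(r"Firefox/(\d+)", user_agent)
    if not m:
        return False  # not claiming to be Firefox at all
    major = int(m.group(1))
    return CURRENT_FIREFOX_MAJOR - major > max_lag

# A spoofed ancient version stands out; a recent one doesn't.
print(ua_version_flag("Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"))   # True
print(ua_version_flag("Mozilla/5.0 (X11; Linux x86_64; rv:139.0) Gecko/20100101 Firefox/139.0")) # False
```

And since the claimed version can also be cross-checked against API behavior that only newer builds have, lying in the user agent tends to make things worse, not better.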
The user agent thing was bizarre, especially since it was also on Minecraft.net! I swapped to a generic Chrome-on-Windows agent and it instantly started working, letting me use the site as normal again.
Yes, that is bizarre 😂 It’s not clear to me whether Microsoft is using their own anti-bot solution or a third-party one, but it doesn’t sound very successful, given the way it’s reacting.
Overall, I can’t help but think the best route is to use the same setup as everyone else but roll your own VPN and change MAC addresses. Ideally, we would have some laws against all of this, but I don’t foresee that anytime soon.
I wish I could do more to help. I’m happy to answer questions you might have, though.
I for one want to offer a heartfelt apology. As someone who works in this space, bots are becoming more and more sophisticated. I can’t speak for Cloudflare, but we’re definitely not interested in your personal information. As someone who also prefers their privacy on the web, the fact that bot signatures overlap with privacy-centric signatures sucks. I’ve experienced it myself on my mobile device with Ghostery. It’s frustrating, I know.
Would you mind sharing the guide you used for hardening your Firefox? I’m curious to see what could potentially be triggering the issue.
Also, I just want to say, I think it’s hilarious that a site blocked you but then allows you to continue browsing after changing your user agent. That right there is bot behavior.
To circle back around to the actual block, I bet changing your skin executes JavaScript that gets flagged by the anti-bot software.
Task failed successfully.
Nah Albuquerque, fam.
Thanks for sharing this graph. Please forgive my pessimism on the subject. I know a lot of progress is being made in renewables, and sometimes it still feels dire. Hopefully we can hasten that downward trend for coal.
NYT also uses a third party bot identification and mitigation service.
There isn’t. This article is laughable because there is an astronomical amount of bot traffic that masquerades as legitimate human traffic. Tools like puppeteer-extra’s stealth plugin and residential proxies have made it easier to hide a bot’s presence on the web. Also, the tracking they allude to via fingerprinting would be much the same whether a human solves the captcha or your browser solves one in a seamless process.
Ah yes, from Twitter to Shitter.
I agree and I wish I was actually that cool. I just look at data all day and write rules. 🫠