- cross-posted to:
- technology@lemmy.world
Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they’re a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.
ETH Zurich PhD student Andreas Plesner and his colleagues’ new research, available as a pre-print paper, focuses on Google’s reCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an “invisible” reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.
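To make the attack concrete, here is a minimal sketch of how a locally run bot could decide which grid tiles to click using an off-the-shelf object detector. The pretrained YOLOv8 model, confidence threshold, target label, and file names are all illustrative assumptions; the pre-print's actual pipeline and fine-tuning setup may differ.

```python
# Minimal sketch: classify reCAPTCHA v2 grid tiles with a pretrained object
# detector. Model choice, threshold, and file names are illustrative.
from ultralytics import YOLO  # assumed detector; not necessarily the paper's exact model

# A pretrained COCO model already recognizes several classes reCAPTCHA asks
# about (bicycle, traffic light, bus, fire hydrant, motorcycle, ...).
model = YOLO("yolov8n.pt")

def tile_contains(tile_path: str, target: str, conf: float = 0.25) -> bool:
    """Return True if the detector finds the target class in one grid tile."""
    for result in model(tile_path, conf=conf, verbose=False):
        for cls_id in result.boxes.cls.tolist():
            if result.names[int(cls_id)] == target:
                return True
    return False

# For a "select all squares with bicycles" challenge over a 3x3 grid:
clicks = [i for i in range(9) if tile_contains(f"tile_{i}.png", "bicycle")]
print("Tiles to click:", clicks)
```

Categories like crosswalks and stairs aren't in the standard COCO label set, which is presumably why the researchers trained specialized models rather than relying on stock weights alone.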
Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low “human” confidence rating.
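For context on how that fallback works, here is a minimal server-side sketch, assuming a Python backend. Google's siteverify endpoint really does return a score from 0.0 to 1.0 for v3 tokens, but the 0.5 cutoff and the function names below are illustrative assumptions, not Google's recommendation.

```python
# Minimal sketch of the v3-score fallback described above. The endpoint is
# Google's real verification API; the threshold and helpers are illustrative.
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def passes_v3(secret: str, token: str, threshold: float = 0.5) -> bool:
    """Verify a reCAPTCHA v3 token and check its 'human' confidence score."""
    result = requests.post(
        VERIFY_URL, data={"secret": secret, "response": token}
    ).json()
    # v3 responses include a score between 0.0 (likely bot) and 1.0 (likely human).
    return result.get("success", False) and result.get("score", 0.0) >= threshold

def handle_submission(secret: str, token: str) -> str:
    if passes_v3(secret, token):
        return "accept"
    # Low confidence: fall back to an explicit reCAPTCHA v2 image challenge.
    return "challenge_with_v2"
```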
I can see a future where the Internet is completely run by bots and AI to the point where no human actually uses the Internet anymore.
It’s like an island that gets overrun with rats: there are just too many to deal with, so you leave.
Some believe this happened years ago. Check out Dead Internet Theory.
Yeah, I predict that in the future you won’t be able to assume that content on the internet was written by humans. If you go online, it probably won’t be to connect with other humans; maybe you want to know something a bot can tell you, or you have some administrative task to handle, like filing a form.
Basically Cyberpunk: people only interact with the Night City intranet because the global internet has been taken over by AIs.