This is what AI should be used for. There’s another great article on the same site about using AI to find qanats too, which was fascinating.
https://gizmodo.com/cold-war-spy-photos-reveal-ancient-subterranean-aqueducts-2000500005
That’s fantastic news, but one thing worth noting… The “Nazca Lines” are not a monolith. There are at least TWO known cultures that contributed to this World Heritage Site:
The Nazca (obviously), but also their ancestors, the Paracas. Possibly the Topará as well.
That cat is kinda derpy, but of course you gotta start somewhere. Fascinating figures.
That’s neat, but how do we know they aren’t an AI hallucination backed up by the human brain’s tendency to see patterns? For example, would the same AI search have found the face on Mars?
Hallucinations are an issue for generative AI. This is a classification problem, not gen AI, and this kind of AI use predates gen AI by many years. What you describe is called a false positive, not a hallucination.
For this type of problem you use AI to narrow a large set down to a manageable size. For example, you have tens of thousands of images and the AI identifies a few dozen that are likely what you’re looking for. Humans would have taken forever to manually review all of those images; instead, humans verify just the reduced set and confirm the findings through further investigation.
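To make that triage step concrete, here’s a rough sketch of what it might look like, assuming you already have a binary classifier trained to flag likely geoglyphs. The model file, tile folder, and threshold below are all made-up placeholders, not anything from the actual study:

```python
# Minimal sketch of the "narrow down the set" step, assuming a trained
# binary classifier saved with Keras. MODEL_PATH, TILE_DIR and THRESHOLD
# are hypothetical placeholders.
import os
import numpy as np
from tensorflow import keras

MODEL_PATH = "geoglyph_classifier.h5"   # hypothetical trained model
TILE_DIR = "aerial_tiles"               # hypothetical folder of image tiles
THRESHOLD = 0.9                         # only confident hits go to humans

model = keras.models.load_model(MODEL_PATH)

def load_tile(path):
    """Load one tile and scale pixels to [0, 1] for the classifier."""
    img = keras.utils.load_img(path, target_size=(224, 224))
    return keras.utils.img_to_array(img) / 255.0

candidates = []
for name in sorted(os.listdir(TILE_DIR)):
    tile = load_tile(os.path.join(TILE_DIR, name))
    # Assumes the model outputs a single sigmoid probability per image.
    score = float(model.predict(tile[np.newaxis, ...], verbose=0)[0][0])
    if score >= THRESHOLD:
        candidates.append((name, score))

# Tens of thousands of tiles in, a few dozen out for manual review.
for name, score in sorted(candidates, key=lambda x: -x[1]):
    print(f"{name}\t{score:.3f}")
```

The point is that the classifier never gets the final say: it only produces a shortlist, and anything it flags (including false positives like a “face on Mars”) still has to survive human review and follow-up investigation.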