Abstract: We present a new CAPTCHA based on identifying an image’s upright orientation. This task requires analysis of the often complex contents of an image, a task which humans usually perform well and machines generally do not. Given a large repository of images, such as those from a web search result, we use a suite of automated orientation detectors to prune those images that can easily be set upright automatically. We then apply a social feedback mechanism to verify that the remaining images have a human-recognizable upright orientation. The main advantages of our CAPTCHA technique over traditional text-recognition techniques are that it is language-independent, does not require text entry (e.g., on a mobile device), and employs a domain for CAPTCHA generation beyond character obfuscation. This CAPTCHA lends itself to rapid implementation and has an almost limitless supply of images. We conducted extensive experiments to measure the viability of this technique.
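The acceptance logic behind such an orientation CAPTCHA can be sketched in a few lines: rotate an upright image by a random angle, ask the user to rotate it back, and accept if the result lands within some tolerance of upright. The function names and the tolerance value below are illustrative assumptions, not details from the paper.

```python
import random

# Assumed acceptance window in degrees; the paper's exact threshold may differ.
TOLERANCE_DEG = 16.0

def new_challenge(rng=random.random):
    """Return a random rotation (in degrees) to apply to an upright image."""
    return rng() * 360.0

def verify(applied_rotation_deg, user_correction_deg, tolerance=TOLERANCE_DEG):
    """Accept if the user's correction brings the image back to within
    `tolerance` degrees of upright. Angles are compared modulo 360."""
    residual = (applied_rotation_deg + user_correction_deg) % 360.0
    # Circular distance from upright (0 degrees).
    error = min(residual, 360.0 - residual)
    return error <= tolerance
```

In practice the server would also run its automated orientation detectors over the candidate pool first, discarding any image a machine can already set upright, so that passing this check remains evidence of human judgment.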
Image-retrieval performance is an important subject because it depends so heavily on how images are classified and keyed (e.g., by human text descriptions or by automated feature extraction). The heavy-duty word is ontology. A colleague at HP Labs and I created a benchmark, called BIRDS-I, to measure the performance of content-based image retrieval (CBIR); see HP Labs Technical Report HPL-2000-162.
And, at the intersection of security and AI:
"Software that can solve any text-based CAPTCHA will be as much a milestone for artificial intelligence as it will be a problem for online security."