Iris Recognition: Techniques, Stages and Databases
General Overview
Iris recognition is a biometric modality and automatic verification technique that identifies a person by the unique parameters of their iris, such as its patterns and coloration. The first iris recognition algorithm was developed by John Daugman in the early 1990s, which led to the debut of the first commercial iris scanners in 1999.
Iris recognition appears quite promising due to a number of benefits:
- Contactless method. A person does not need to physically touch a scanner to get verified.
- Operation speed. Verification takes two seconds on average.
- High accuracy. The iris offers 250 verification key points for one eye and 500 for two, resulting in a false acceptance probability of about 1 in 1.4 million. Fingerprints are typically identified with only 16 key points.
- High uniqueness. It is reported that ‘chaotic morphogenesis’ greatly contributes to unique iris morphology that is never shared even by identical twins.
Due to these advantages, iris recognition is implemented in various security systems and governmental programs, including India's Aadhaar program. At the same time, it faces numerous barriers, such as spoofing attacks and visual noise, which challenge the accuracy of the method. A group of techniques has been developed to address these issues.
Standard Structure of an Iris Recognition System
Even though different iris recognition systems can use distinctive tools and techniques, they tend to share the same basic structure:
- Image acquisition. The system strives to obtain a high-quality iris image, which in most cases is under Near-Infrared (NIR) illumination.
- Preprocessing. It includes two stages: a) Segmentation, which isolates the iris from the eye image (in some cases, periocular regions can also be isolated); b) Normalization, which compensates for variations in pupil size, such as shrinking caused by changes in illumination.
- Feature encoding. Iris parameters are extracted with the help of feature analysis in the form of binary codes (digital biometric template).
- Matching. Finally, the system compares parameters of the scanned iris with the biometric templates stored in its memory.
Apart from these basic stages, some systems may have extra steps to complement this four-step algorithm.
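As a rough illustration, the four stages could be wired together as in the following Python sketch; the function names, the NumPy/OpenCV calls, and the 0.32 decision threshold are illustrative assumptions rather than parts of any specific published system:

```python
import numpy as np

def acquire(path):
    """Stage 1: load an eye image, ideally captured under NIR illumination."""
    import cv2
    return cv2.imread(path, cv2.IMREAD_GRAYSCALE)

def preprocess(eye):
    """Stage 2: segment the iris and unwrap it into a fixed-size block."""
    raise NotImplementedError  # see the segmentation and normalization sketches below

def encode(normalized_iris):
    """Stage 3: produce a binary template, e.g. by quantizing filter phase."""
    raise NotImplementedError  # see the feature extraction sketches below

def match(probe_code, enrolled_codes, threshold=0.32):
    """Stage 4: compare fractional Hamming distances against every enrolled template."""
    distances = [float(np.mean(probe_code != code)) for code in enrolled_codes]
    best = int(np.argmin(distances))
    return best if distances[best] <= threshold else None  # index of match, or None
```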
Iris Databases
The first publicly available iris dataset CASIA v.1 was released in 2003 by the Chinese Academy of Sciences. The first iris recognition challenge dubbed ICE occurred in 2005 featuring its own collection of iris image samples. Since then, a large assortment of iris datasets has been accumulated for training and testing purposes.
Iris databases primarily focus on two essential aspects:
- Intrinsic properties. These are parameters that belong to a certain technology or process: type of the sensor, illumination conditions, iris size, and so forth.
- Extrinsic properties. They imply conditions/parameters of an individual usage case: spoofing attacks, images obtained in an uncontrolled environment, aging or degradation of the human iris, etc.
CASIA datasets are highly popular due to the wealth of samples adjusted to emulate different scenarios and their availability (see here for how to get CASIA databases). Other popular instances include UBIRIS v.2, MICHE DB, CISP, VISOB 2.0, ND-CrossSensor-2013, and others. All in all, there are about 158 iris datasets, both private and publicly available.
Iris Recognition Stages
Technically, iris recognition includes the following stages, some of which were already mentioned in the iris recognition system structure:
Acquisition of iris image
Iris image acquisition is usually performed under Near-Infrared illumination (in the 700–900 nm band), an approach first proposed by John Daugman. Visible light is seen as an unfavorable condition: eye melanin, which is responsible for eye color, tends to absorb it, obscuring a subject's eyes and making it burdensome or even impossible to localize pupils and irises. Visible light can, however, be used in smartphone iris recognition scenarios.
Preprocessing
Preprocessing includes Segmentation and Normalization.
Segmentation
Segmentation means localizing the iris (and its patterns) within the entire eye image. There are various methods to achieve that. Many solutions employ Daugman's classic method, which uses an integro-differential operator capable of detecting the pupil and iris boundaries as well as the eyelid arcs.
The Circular Hough Transform and hysteresis thresholding are also used for segmentation, as they can detect the edges of the iris. Another method utilizes a Gaussian smoothing function together with histogram equalization to increase the iris image contrast; segmentation is then achieved by applying the Canny edge detector and a probabilistic Circular Hough Transform, as sketched below.
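A minimal OpenCV sketch of this Canny-plus-Hough segmentation route might look as follows; the blur kernel, thresholds, and radius bounds are illustrative guesses, not tuned values from any cited work:

```python
import cv2
import numpy as np

def segment_iris(eye_gray):
    """Coarse iris localization: smoothing, contrast boost, edge map, circle fit."""
    smoothed = cv2.GaussianBlur(eye_gray, (5, 5), 1.5)   # Gaussian smoothing
    equalized = cv2.equalizeHist(smoothed)               # histogram equalization boosts contrast
    edges = cv2.Canny(equalized, 50, 150)                # Canny edge map (kept for inspection)

    # Circular Hough Transform; HoughCircles runs its own internal Canny stage,
    # with param1 acting as the upper hysteresis threshold.
    circles = cv2.HoughCircles(equalized, cv2.HOUGH_GRADIENT, 1, eye_gray.shape[0] // 2,
                               param1=150, param2=40, minRadius=30, maxRadius=150)
    if circles is None:
        return None, edges
    x, y, r = np.round(circles[0, 0]).astype(int)        # strongest circle ~ outer iris boundary
    return (x, y, r), edges
```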
Normalization
A widely used normalization method is Daugman's rubber sheet model: it unwraps the circular iris region into a rectangular block of fixed dimensions.
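A simplified NumPy sketch of such an unwrapping is shown below; it assumes concentric pupil and iris circles (Daugman's full model also handles non-concentric boundaries), and the 64×512 output size is just an illustrative choice:

```python
import numpy as np

def rubber_sheet(eye_gray, center, r_pupil, r_iris, radial_res=64, angular_res=512):
    """Unwrap the annular iris region into a fixed-size rectangular block.

    Simplifying assumption: pupil and iris boundaries are concentric circles.
    """
    cx, cy = center
    # r runs from 0 (pupil boundary) to 1 (iris boundary); theta covers a full turn.
    r = np.linspace(0.0, 1.0, radial_res)[:, None]                    # shape (radial_res, 1)
    theta = np.linspace(0.0, 2 * np.pi, angular_res, endpoint=False)  # shape (angular_res,)
    radius = r_pupil + r * (r_iris - r_pupil)                         # broadcasts to the grid
    x = np.clip((cx + radius * np.cos(theta)).astype(int), 0, eye_gray.shape[1] - 1)
    y = np.clip((cy + radius * np.sin(theta)).astype(int), 0, eye_gray.shape[0] - 1)
    return eye_gray[y, x]                                             # (radial_res, angular_res)
```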
Feature extraction and selection
A variety of methods have been proposed for feature extraction: 2-D Gabor wavelets combined with the Hamming distance, mother wavelets (fast-decaying oscillating waveforms) such as Daubechies, Haar, Coiflet, Biorthogonal and Symlet, the tandem of 1-D Log-Gabor filters and Haar wavelets, and other techniques.
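As one hedged example of the wavelet-based route, the following sketch uses the PyWavelets library to decompose a normalized iris block with the Haar wavelet and binarize the detail coefficients by sign; this exact encoding is an assumption for illustration, not a specific published scheme:

```python
import numpy as np
import pywt  # PyWavelets

def haar_features(normalized_iris, levels=3):
    """Multi-level 2-D Haar decomposition of the normalized iris block.

    Returns a binary feature vector obtained by thresholding the detail
    coefficients at zero (illustrative sign-based encoding).
    """
    coeffs = pywt.wavedec2(normalized_iris.astype(float), 'haar', level=levels)
    details = []
    for (cH, cV, cD) in coeffs[1:]:               # skip the coarse approximation band
        details.extend([cH.ravel(), cV.ravel(), cD.ravel()])
    detail_vector = np.concatenate(details)
    return (detail_vector > 0).astype(np.uint8)   # binary iris code
```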
Classification
For classification, an optimized support vector machine (SVM) with high performance accuracy is suggested. Other methods propose a cascaded classifier technique with Haar wavelets (HWs) used for training an SVM, the Hamming and Euclidean distances for matching features obtained from the feature extraction phase, the k-nearest neighbor classifier chosen for its cost-efficiency, and others.
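Matching binary templates with the fractional Hamming distance can be sketched in a few lines; the masking convention and the roughly 0.32 threshold mentioned in the comment are common choices in the literature rather than fixed parts of any one system:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes (uint8 arrays of 0/1).

    Optional masks flag bits corrupted by eyelids, eyelashes or reflections;
    when both masks are given, only bits valid in both templates are compared.
    """
    disagreement = code_a != code_b
    if mask_a is not None and mask_b is not None:
        valid = (mask_a & mask_b).astype(bool)
        return float(disagreement[valid].mean()) if valid.any() else 1.0
    return float(disagreement.mean())

# Usage note: distances well below roughly 0.32 are commonly treated as a match,
# although the exact decision threshold depends on the system and its tuning.
```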
Main Methods & Algorithms of Iris Recognition
As mentioned above, a large number of methods is used in iris recognition. Discussed here are two more commonly used methods:
Using 2-D Gabor wavelets
John Daugman proposed a quadrature 2-D Gabor wavelet used for a patch-wise phase quantization of the iris pattern:
[math]\displaystyle{ h_{\{Re,Im\}} = \operatorname{sgn}_{\{Re,Im\}} \int_{\rho}\int_{\phi} I(\rho,\phi)\, e^{-i\omega(\theta_0-\phi)}\, e^{-(r_0-\rho)^2/\alpha^2}\, e^{-(\theta_0-\phi)^2/\beta^2}\, \rho \, d\rho \, d\phi }[/math]
Daugman’s original interpretation:
- [math]\displaystyle{ \boldsymbol{{h}_{\{Re,Im\}}} }[/math] — complex-valued bit whose real and imaginary parts can be either 1 or 0 depending on the sign of the 2-D integral,
- [math]\displaystyle{ \boldsymbol{I(\rho,\phi)} }[/math] — raw iris image in a dimensionless polar coordinate system that is size- and translation-invariant,
- [math]\displaystyle{ \boldsymbol{\alpha} }[/math] and [math]\displaystyle{ \boldsymbol{\beta} }[/math] — multiscale 2-D wavelet size parameters, spanning an eight-fold range from 0.15 to 1.2 mm on the iris,
- [math]\displaystyle{ \boldsymbol{\omega} }[/math] — wavelet frequency, spanning three octaves in inverse proportion to β,
- [math]\displaystyle{ \boldsymbol{r_0} }[/math] and [math]\displaystyle{ \boldsymbol{\theta_0} }[/math] — polar coordinates of each region of iris for which the phasor coordinates [math]\displaystyle{ \boldsymbol{{h}_{\{Re,Im\}}} }[/math] are computed.
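A discretized sketch of this phase quantization, written against a normalized iris image I(ρ, φ) sampled on a polar grid, could look as follows; the ω, α, β values are placeholders rather than Daugman's calibrated multiscale parameters:

```python
import numpy as np

def gabor_phase_bits(iris_polar, r0, theta0, omega=16.0, alpha=0.1, beta=0.2):
    """Two-bit phase quantization of one iris region, discretizing the integral above.

    iris_polar : normalized iris I(rho, phi) on a (radial, angular) grid over [0, 1] x [0, 2*pi).
    omega, alpha, beta : illustrative wavelet parameters, not Daugman's calibrated values.
    """
    radial_res, angular_res = iris_polar.shape
    rho = np.linspace(0.0, 1.0, radial_res)[:, None]                   # (radial_res, 1)
    phi = np.linspace(0.0, 2 * np.pi, angular_res, endpoint=False)     # (angular_res,)
    kernel = (np.exp(-1j * omega * (theta0 - phi))                     # complex carrier
              * np.exp(-((r0 - rho) ** 2) / alpha ** 2)                # radial Gaussian envelope
              * np.exp(-((theta0 - phi) ** 2) / beta ** 2))            # angular Gaussian envelope
    integral = np.sum(iris_polar * kernel * rho)                       # discrete double integral
    return int(integral.real > 0), int(integral.imag > 0)              # sgn of Re and Im parts
```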
Convolutional neural networks in iris recognition
Convolutional Neural Networks (CNNs) have proven their superiority at image recognition through stacks of convolutional layers built from repeated neural blocks. A number of CNNs have been proposed for iris recognition, such as DeepIris, DeepIrisNet, etc. The main issue with deep neural networks in iris recognition is that existing iris datasets fail to provide ample training material. As a solution, the transfer learning technique was proposed.
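A hedged PyTorch/torchvision sketch of the transfer-learning idea reuses an ImageNet-pretrained backbone and retrains only the classification head; this is a generic recipe, not the architecture of DeepIris or DeepIrisNet:

```python
import torch.nn as nn
from torchvision import models

def build_iris_classifier(num_identities):
    """Transfer learning sketch: reuse ImageNet features, retrain only the head.

    Assumes torchvision >= 0.13 for the weights API; intended for small iris datasets.
    """
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in backbone.parameters():          # freeze the pretrained convolutional blocks
        param.requires_grad = False
    backbone.fc = nn.Linear(backbone.fc.in_features, num_identities)  # new trainable head
    return backbone
```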
Iris Recognition in Noncooperative Environments
For imperfect conditions caused by noise, occlusions such as eyelids, or poor illumination, a combination of Canny Edge Detection, the Circular Hough Transform and K-Means Clustering has been proposed to reduce error rates. Hierarchical convolutional neural networks (HCNNs) have also been suggested as a segmentation method in problematic environments.
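The K-Means part of such a pipeline can be illustrated with scikit-learn by clustering pixel intensities into a few coarse groups (for example pupil, iris, and sclera/skin) before edge detection; the three-cluster choice and the darkest-cluster-equals-pupil heuristic are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def coarse_intensity_clusters(eye_gray, n_clusters=3):
    """Group pixel intensities with K-Means as a coarse pre-segmentation step.

    Under NIR illumination the darkest cluster usually corresponds to the pupil;
    the result only guides later Canny/Hough steps, it is not a final segmentation.
    """
    pixels = eye_gray.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(eye_gray.shape)
    darkest = int(np.argmin(km.cluster_centers_.ravel()))
    return labels, labels == darkest          # full label map and coarse pupil mask
```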
Recognition at a Distance
Usually, iris recognition demands user cooperation and close-distance image acquisition, which increases user friction and slows throughput. As a solution, Iris Recognition at a Distance (IAAD) architectures have been proposed.
The hardware components for an IAAD system include harmless illumination, preferably at the 700–900 nm wavelengths, a camera capable of capturing a 200-pixel iris image at a distance, a lens (possibly telescopic), and control units responsible for synchronization of the equipment. On the software side, anti-spoofing, feature extraction, segmentation, normalization, and the other typical components of iris recognition are required.
Postmortem Iris Recognition
Postmortem iris recognition is a promising method of identifying deceased individuals, as it can do so more accurately than other traditional forensic methods. Because death leaves pupils mid-dilated (the so-called cadaver position), iris patterns remain visible until natural decomposition begins; the latter can be delayed for up to three weeks with appropriate cadaver storage. The main issues with postmortem iris identification are that it is expensive compared to live identification and lacks training datasets.
FAQ
Is it possible to spoof iris?
Yes, at least two attack types are known in iris anti-spoofing.
Iris recognition is susceptible to two main Presentation Attack (PA) methods. The first tactic involves printing a colored copy of a target’s iris, or replaying it from a smartphone screen, and presenting it to the system’s camera.
The second technique is more challenging as it requires creating a fake prosthetic eye imitating iris patterns of a victim. However, this is possible since making an ocular prosthesis can take merely a few hours.
Spoofing with video injection is also possible: the video stream of a system is intercepted and modified to trick it at an internal level. A cadaver-eye scenario is also conceivable.
What are the main advantages of iris identification?
Iris recognition offers a number of advantages that other modalities do not provide.
The iris recognition modality retains some significant benefits. First, it is more accurate than fingerprint recognition, as it provides 250 verification key points for one iris and 500 for two, while a fingerprint typically offers only 16.
Second, it is contactless and unintrusive, which is favorable in the context of a pandemic or high customer throughput. The uniqueness of the iris is further supported by its morphology, which never repeats a pattern twice in the human population. Finally, the procedure requires about 2 seconds to complete a verification. At the same time, the iris modality remains vulnerable to spoofing attacks.
What are the main iris liveness detection methods?
Iris anti-spoofing can be hardware or software-based.
Iris liveness detection relies on hardware and software approaches. The hardware approach involves:
- Multispectral imaging. Three eye layers are spectrographically analyzed.
- Electrooculography. The corneo-retinal standing potential is measured. However, this procedure is invasive.
- 3D imaging. Iris shadows, caused by uneven illumination, are registered and measured to make sure the eye is three-dimensional.
Software-based liveness detection relies on machine learning, as well as on such methods as Fourier image decomposition, the Laplacian transform, wavelet analysis, etc. These help expose the imperfections of a printed iris picture and similar artifacts, as illustrated by the sketch below.
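As a toy illustration of the Fourier-based idea, the following sketch measures how much spectral energy sits outside a low-frequency disc, since printed or replayed irises often add periodic high-frequency artifacts (printer dithering, screen pixels); the cutoff and any decision threshold would have to be tuned on real data:

```python
import numpy as np

def high_frequency_energy_ratio(iris_gray, cutoff=0.25):
    """Crude Fourier-based liveness cue for a grayscale iris image.

    Returns the fraction of spectral energy outside a low-frequency disc;
    unusually high values may indicate a printed or replayed iris.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(iris_gray.astype(float)))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)   # normalized radial frequency
    low = spectrum[radius <= cutoff].sum()                        # energy inside the disc
    total = spectrum.sum()
    return float((total - low) / total)
```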
What are the main iris datasets?
A considerable group of human iris datasets exists.
CASIA datasets play a leading role in iris recognition. CASIA v.1, designed by the Chinese Academy of Sciences (CAS), was the first iris dataset available to the broad public. It was followed by a series of other CASIA datasets: CASIA-Iris-Lamp, CASIA-Iris-Twins, and others.
Apart from the CAS datasets, there are further analogues used by the research community, including the MMU iris dataset, ND-CrossSensor-Iris 2012, ND-IrisTemplate-Aging 2008–2010, etc.
Iris datasets focus on a) intrinsic properties (sensors, illumination, iris anatomy) and b) extrinsic properties (iris aging, spoofing scenarios, and so on).
When was the first iris recognition performed?
Iris recognition became commercially usable in 1999.
Iris recognition was envisioned by such researchers as J.H. Doggart and F.H. Adler in the first half of the 20th century. Other sources additionally mention ophthalmologist F. Burch who also studied the uniqueness of the human iris patterns. In turn, the mentioned studies could be influenced by A. Bertillon’s work on forensic biometrics.
John Daugman published a study on innovative iris recognition methodology in 1994. It relied on high entropy for accurate pattern recognition. Subsequently, the technology was commercialized and the first iris scanner — LG IrisAccess 2200 — was released in 1999.
Is it possible to recognize iris at a distance?
A few studies prove that iris recognition can be remote.
One of the early studies, published in 2009, proposed a novel solution for recognizing iris patterns at a distance of 3 meters (9.8 feet). It applies a self-adaptive camera system consisting of a pan-tilt unit (PTU) and a wide-angle USB camera.
A similar line of research, called Iris Recognition at a Distance (IAAD), suggests a hardware architecture that includes harmless illumination (700–900 nm range), a camera capable of capturing a 200-pixel iris image, and synchronization units.
On the software side, the system should perform the typical operations: image acquisition, preprocessing, feature extraction, anti-spoofing, and so on. Another solution is reported to recognize irises at 12 meters (40 feet).
What are the main stages of iris recognition?
Iris recognition typically includes 6 stages.
Iris recognition consists of 6 main stages executed in the following order:
- Acquisition. Iris image should be acquired under the near-infrared illumination.
- Preprocessing. Segmentation and normalization are required at this step.
- Segmentation. Irises are localized in the image with an integro-differential operator or other methods.
- Normalization. The circular iris region is transformed into a rectangular block of fixed dimensions with Daugman’s rubber sheet model.
- Feature extraction. Iris features are extracted with the 2-D Gabor wavelet or other methods.
- Classification. Extracted features are classified with the Support Vector Machine (SVM) or other methods.
After that, liveness detection can be applied.
References
- Biometrics Researcher Asks: Is That Eyeball Dead or Alive? Iris scanners provide excellent biometric identification, but they can be spoofed
- Iris Recognition: Fast and Easy to Use
- Why iris sees growth potential down the road
- Review of the Sixteen Points Fingerprint Standard in England and Wales
- Chaos, complexity and morphogenesis: Optical-pattern formation and recognition
- The other 'fingerprints' you don't know about
- Iris Image Recognition Based on Independent Component Analysis and Support Vector Machine
- Aadhaar program
- Iris Recognition Development Techniques: A Comprehensive Review
- CASIA Iris Image Database Version 1.0 (partial)
- National Laboratory of Pattern Recognition (NLPR) | Institute of Automation, Chinese Academy of Sciences(CASIA)
- UBIRIS v.2
- MICHE DB
- CISP
- VISOB 2.0
- A survey of iris datasets
- How Iris Recognition Works
- Is eye color determined by genetics?
- A Review On Iris Recognition Systems
- Canny edge detector by Wikipedia
- DeepIris: Iris Recognition Using A Deep Learning Approach
- DeepIrisNet: Deep iris representation with applications in iris recognition and cross-sensor iris recognition
- Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective
- A Review on Iris Recognition in Non-Cooperative Environment
- Long range iris recognition: A survey
- Post-Mortem Iris Recognition—A Survey and Assessment of the State of the Art
- Post-mortem Human Iris Recognition