Iris Recognition: Techniques, Stages and Databases

From Antispoofing Wiki

General Overview

Iris recognition is a biometric modality and automatic verification technique that identifies a person by the unique parameters — patterns, coloration — of their iris. The first iris recognition algorithm was proposed in the early 1990s by John Daugman, which led to the debut of the first iris scanners in 1999.

Iris recognition appears quite promising due to a number of benefits:

  • Contactless method. A person does not need to physically touch a scanner to get verified.
  • Operation speed. Verification takes two seconds on average.
  • High accuracy. The iris offers 250 verification key points for one eye and 500 key points for two, resulting in a false acceptance chance of 1 in 1.4 million. Fingerprints are typically identified with only 16 key points.
  • High uniqueness. It is reported that ‘chaotic morphogenesis’ greatly contributes to unique iris morphology that is never shared even by identical twins.

Due to its advantages, iris recognition is implemented in various security systems and governmental programs, including India’s Aadhaar program. At the same time, it faces numerous barriers — spoofing attacks or visual noise — which challenge the accuracy of the method. A group of techniques has been developed to address these issues.

Standard Structure of an Iris Recognition System

Even though different iris recognition systems can use distinctive tools and techniques, they tend to share the same basic structure:

  1. Image acquisition. The system strives to obtain a high-quality iris image, in most cases captured under Near-Infrared (NIR) illumination.
  2. Preprocessing. It includes two stages: a) Segmentation isolates the iris from the eye image; in some cases periocular regions are also isolated. b) Normalization compensates for variations in pupil size, such as shrinking caused by changes in illumination.
  3. Feature encoding. Iris parameters are extracted with the help of feature analysis in the form of binary codes (digital biometric template).
  4. Matching. Finally, the system compares parameters of the scanned iris with the biometric templates stored in its memory.

Apart from these basic stages, some systems may have extra steps to complement this four-step algorithm.
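The four-stage pipeline above can be sketched end to end. The snippet below is a minimal illustration, not a production system: the acquisition, preprocessing, and encoding stages are stand-ins operating on a synthetic image, and the 0.32 decision threshold is an assumed value in the range commonly reported for fractional Hamming distance criteria.

```python
import numpy as np

def acquire():
    # Stand-in for an NIR camera capture (seeded for reproducibility)
    rng = np.random.default_rng(42)
    return rng.random((64, 64))

def preprocess(image):
    # Stand-in for segmentation + normalization: pretend this horizontal
    # band is the unwrapped iris region
    return image[16:48, :]

def encode(block):
    # Stand-in feature encoder: threshold at the median to get a binary code
    return (block > np.median(block)).ravel()

def match(code, templates, threshold=0.32):
    """Return the enrolled identity with the lowest fractional Hamming distance."""
    best_id, best_d = None, 1.0
    for identity, tmpl in templates.items():
        d = np.count_nonzero(code != tmpl) / code.size
        if d < best_d:
            best_id, best_d = identity, d
    return best_id if best_d <= threshold else None

templates = {"alice": encode(preprocess(acquire()))}          # enrollment
result = match(encode(preprocess(acquire())), templates)      # same capture → "alice"
```

A code from an unrelated random source disagrees with the template on roughly half its bits, so it falls above the threshold and is rejected.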

Iris Databases

The first publicly available iris dataset CASIA v.1 was released in 2003 by the Chinese Academy of Sciences. The first iris recognition challenge dubbed ICE occurred in 2005 featuring its own collection of iris image samples. Since then, a large assortment of iris datasets has been accumulated for training and testing purposes.

Iris databases primarily focus on two essential aspects:

  • Intrinsic properties. These are parameters that belong to a certain technology or process: type of the sensor, illumination conditions, iris size, and so forth.
  • Extrinsic properties. They imply conditions/parameters of an individual usage case: spoofing attacks, images obtained in an uncontrolled environment, aging or degradation of the human iris, etc.

CASIA datasets are highly popular due to their availability and the wealth of samples adjusted to emulate different scenarios. Other popular instances include UBIRIS v.2, MICHE DB, CISP, VISOB 2.0, ND-CrossSensor-2013, and others. All in all, there are about 158 iris datasets, both private and publicly available.

Iris Recognition Stages

Technically, iris recognition includes the following stages, some of which were already mentioned in the iris recognition system structure:

Acquisition of iris image

Iris image acquisition is usually performed under Near-Infrared (NIR) illumination in the 700–900 nm band, an approach originally proposed by John Daugman. Visible light is considered unfavorable: melanin, the pigment responsible for eye color, naturally absorbs it, which obscures a subject’s eyes and makes it difficult or even impossible to localize the pupil and iris. Visible light can, however, be used in smartphone iris recognition scenarios.


Preprocessing

Preprocessing includes two stages: segmentation and normalization.


Segmentation means localizing the iris (and its patterns) within the entire eye image. There are various methods to achieve that. Many solutions employ Daugman’s classic method, which uses an integro-differential operator to detect the circular pupil and iris boundaries as well as the eyelid arcs.

The Circular Hough Transform and hysteresis thresholding are also used for segmentation, as they can detect the edges of the iris. Another method applies a Gaussian smoothing function together with histogram equalization to increase the contrast of the iris image; segmentation is then achieved by applying a Canny edge detector and a probabilistic Circular Hough Transform.
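As a rough illustration of the Circular Hough Transform step, the sketch below votes for circle centres and radii from a set of edge points and returns the best-supported circle. The edge map here is synthetic (an idealized, noise-free circle); a real system would first obtain edge points from a Canny detector.

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Accumulate votes for circle centres over a range of candidate radii."""
    radii = list(radii)
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            # Each edge point votes for every centre at distance r from it
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            # np.add.at counts repeated (cy, cx) bins correctly
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    ri, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return radii[ri], (cy, cx)

# Synthetic edge map: 120 points on a circle of radius 20 centred at (50, 50)
angles = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)
edges = [(50 + 20 * np.sin(a), 50 + 20 * np.cos(a)) for a in angles]
radius, centre = circular_hough(edges, (100, 100), range(15, 26))
```

The accumulator peak recovers the generating circle, since only the correct radius makes all edge points vote for the same centre.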


A widely used normalization method is Daugman’s rubber sheet model: it unwraps the circular iris region into a rectangular block of fixed dimensions.
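The rubber sheet model can be sketched in a few lines of NumPy. This simplified version assumes concentric pupil and iris circles (real systems allow the two centres to differ) and uses nearest-neighbour sampling:

```python
import numpy as np

def rubber_sheet(image, centre, pupil_r, iris_r, out_h=32, out_w=128):
    """Unwrap the annular iris region into a fixed-size rectangular block.

    The radial coordinate rho in [0, 1] runs from the pupil boundary to the
    iris boundary; the angular coordinate phi covers the full circle.
    """
    cy, cx = centre
    rho = np.linspace(0.0, 1.0, out_h)[:, None]                  # radial samples
    phi = np.linspace(0.0, 2 * np.pi, out_w, endpoint=False)[None, :]
    r = pupil_r + rho * (iris_r - pupil_r)                       # interpolate radius
    y = np.clip(np.round(cy + r * np.sin(phi)).astype(int), 0, image.shape[0] - 1)
    x = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, image.shape[1] - 1)
    return image[y, x]

# Demo on a synthetic image whose intensity equals the distance from (50, 50):
yy, xx = np.mgrid[0:100, 0:100]
dist_img = np.sqrt((yy - 50.0) ** 2 + (xx - 50.0) ** 2)
strip = rubber_sheet(dist_img, (50, 50), pupil_r=15, iris_r=45)
# strip has shape (32, 128); each row holds one radius, so row values grow
# smoothly from ~15 (pupil boundary) to ~45 (iris boundary)
```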

Feature extraction and selection

A variety of methods have been proposed for feature extraction: 2-D Gabor wavelets with Hamming distance matching; mother wavelets (fast-decaying oscillating waveforms) such as Daubechies, Haar, Coiflet, Biorthogonal and Symlet; the tandem of 1-D Log-Gabor filters and Haar wavelets; and other techniques.
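As a hedged illustration of one such technique, the sketch below applies a 1-D Log-Gabor filter to each row of a normalized iris strip and quantizes the phase of the complex response into two bits per pixel. The filter parameters (centre frequency 0.1, bandwidth ratio 0.5) are illustrative assumptions, not values taken from the cited works.

```python
import numpy as np

def log_gabor_1d(n, f0=0.1, sigma_ratio=0.5):
    """Frequency response of a 1-D Log-Gabor filter (zero at DC by construction)."""
    f = np.fft.fftfreq(n)
    g = np.zeros(n)
    pos = f > 0
    g[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    return g

def encode(norm_iris, f0=0.1):
    """Filter each row, then quantize the phase of the response into 2 bits."""
    G = log_gabor_1d(norm_iris.shape[1], f0)
    resp = np.fft.ifft(np.fft.fft(norm_iris, axis=1) * G, axis=1)
    # One bit for the sign of the real part, one for the imaginary part
    return np.concatenate([(resp.real > 0).ravel(), (resp.imag > 0).ravel()])

# Encode a synthetic normalized iris strip (32 radial x 128 angular samples)
norm_iris = np.random.default_rng(3).random((32, 128))
iris_code = encode(norm_iris)     # 2 * 32 * 128 = 8192-bit binary code
```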


Classification

For classification, an optimized support vector machine (SVM) with high performance accuracy is suggested. Other proposed methods include a cascaded classifier technique in which Haar wavelets (HWs) are used to train an SVM; Hamming distance and Euclidean distance for comparing features obtained in the feature extraction phase; the k-nearest neighbors algorithm, chosen as a classifier for its cost-efficiency; and others.
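The fractional Hamming distance used for matching is simple to state precisely: the proportion of disagreeing bits among those marked valid in both codes. A minimal sketch, with optional noise masks for eyelid or eyelash occlusions:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fraction of disagreeing bits, counted only over bits valid in both codes."""
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= mask_a
    if mask_b is not None:
        valid &= mask_b
    return np.count_nonzero(code_a[valid] != code_b[valid]) / np.count_nonzero(valid)

rng = np.random.default_rng(0)
a = rng.random(2048) > 0.5
b = rng.random(2048) > 0.5
same = hamming_distance(a, a)   # 0.0 for identical codes
diff = hamming_distance(a, b)   # ~0.5 for statistically independent codes
```

The gap between ~0.0 for same-eye comparisons and ~0.5 for different eyes is what makes a simple distance threshold an effective decision rule.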

Main Methods & Algorithms of Iris Recognition

As mentioned above, a large number of methods are used in iris recognition. Two of the more commonly used methods are discussed here:

Using 2-D Gabor wavelets

John Daugman proposed quadrature 2-D Gabor wavelets for patch-wise phase quantization of the iris pattern:

[math]\displaystyle{ h_{\{Re,Im\}}=\operatorname{sgn}_{\{Re,Im\}}\int_{\rho}\int_{\phi}I(\rho,\phi)\,e^{-i\omega(\theta_0-\phi)}\,e^{-(r_0-\rho)^2/\alpha^2}\,e^{-(\theta_0-\phi)^2/\beta^2}\,\rho\, d\rho\, d\phi }[/math]

Daugman’s original interpretation:

  • [math]\displaystyle{ \boldsymbol{{h}_{\{Re,Im\}}} }[/math] — complex-valued bit whose real and imaginary parts can be either 1 or 0 depending on the sign of the 2-D integral,
  • [math]\displaystyle{ \boldsymbol{I(\rho,\phi)} }[/math] — raw iris image in a dimensionless polar coordinate system that is size- and translation-invariant,
  • [math]\displaystyle{ \boldsymbol{\alpha} }[/math] and [math]\displaystyle{ \boldsymbol{\beta} }[/math] — multiscale 2-D wavelet size parameters, spanning an eight-fold range from 0.15 to 1.2 mm on the iris,
  • [math]\displaystyle{ \boldsymbol{\omega} }[/math] — wavelet frequency, spanning three octaves in inverse proportion to β,
  • [math]\displaystyle{ \boldsymbol{r_0} }[/math] and [math]\displaystyle{ \boldsymbol{\theta_0} }[/math] — polar coordinates of each region of the iris for which the phasor coordinates [math]\displaystyle{ \boldsymbol{{h}_{\{Re,Im\}}} }[/math] are computed.
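A discretized reading of the formula can be sketched numerically: sample I(ρ, φ) on a polar grid, weight it by the complex Gabor kernel, and keep only the signs of the real and imaginary parts of the sum. The grid, parameter values, and random stand-in "iris" below are illustrative assumptions; constant integration factors are dropped since they do not affect the signs.

```python
import numpy as np

# Stand-in iris image I(rho, phi) sampled on a polar grid
rng = np.random.default_rng(0)
rho = np.linspace(0.2, 1.0, 40)                       # normalized radius
phi = np.linspace(0.0, 2 * np.pi, 120, endpoint=False)
P, R = np.meshgrid(phi, rho)                          # (40, 120) grids
I = rng.random(R.shape)

def gabor_bits(I, R, P, r0, theta0, alpha, beta, omega):
    """Signs of the real and imaginary parts of the discretized 2-D integral.

    Constant factors (d_rho, d_phi) are omitted: they do not change the signs.
    """
    kernel = (np.exp(-1j * omega * (theta0 - P))
              * np.exp(-((r0 - R) ** 2) / alpha ** 2)
              * np.exp(-((theta0 - P) ** 2) / beta ** 2))
    h = np.sum(I * kernel * R)          # the rho factor from "rho d rho d phi"
    return int(h.real > 0), int(h.imag > 0)

# Two bits of iris code for one (r0, theta0) patch at one wavelet scale
bits = gabor_bits(I, R, P, r0=0.6, theta0=np.pi, alpha=0.2, beta=0.5, omega=6.0)
```

Sweeping (r0, θ0) over the whole polar grid at several wavelet scales yields the full binary iris code.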

Convolutional neural networks in iris recognition

Convolutional Neural Networks (CNNs) have proven their superiority at image recognition through the use of stacked convolutional layers. A number of CNNs have been proposed for iris recognition, such as DeepIris, DeepIrisNet, and others. The main issue with deep neural networks in iris recognition is that existing iris datasets fail to provide enough training material. As a solution, the transfer learning technique was proposed.

Iris Recognition in Noncooperative Environments

For imperfect conditions caused by noise, occlusions such as eyelids and eyelashes, or poor illumination, a combination of Canny Edge Detection, Circular Hough Transform and K-Means Clustering has been proposed to reduce error rates. Hierarchical convolutional neural networks (HCNNs) have also been suggested as a segmentation method for problematic environments.
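As one hedged illustration of the K-Means Clustering component, the sketch below clusters pixel intensities into three groups (dark pupil, mid-grey iris, bright sclera). The intensity values are synthetic assumptions, and a real segmenter would combine such clusters with the edge and shape cues mentioned above.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Lloyd's k-means on 1-D pixel intensities, quantile-initialised."""
    # Odd quantiles (1/6, 3/6, 5/6 for k=3) give one seed per intensity mode
    centers = np.quantile(values, np.linspace(0, 1, 2 * k + 1)[1::2])
    for _ in range(iters):
        # Assign every pixel to its nearest centre, then re-estimate centres
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return np.sort(centers)

# Synthetic eye-region intensities: dark pupil, mid-grey iris, bright sclera
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(20, 5, 500),     # pupil
                         rng.normal(100, 10, 500),   # iris
                         rng.normal(200, 8, 500)])   # sclera
centers = kmeans_1d(pixels, k=3)
```

Labelling each pixel with its nearest centre then yields a coarse pupil/iris/sclera partition even when edges are noisy.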

Recognition at a Distance

Usually, iris recognition demands user cooperation and close-distance image acquisition, which increases user friction and slows throughput. As a solution, Iris Recognition at a Distance (IAAD) architectures have been proposed.

The hardware of an IAAD system requires harmless illumination, preferably at 700–900 nm wavelengths; a camera capable of capturing a 200-pixel iris image at a distance; a lens (possibly telescopic); and control units responsible for synchronizing the equipment. On the software side, anti-spoofing, segmentation, normalization, feature extraction, and the other typical components of iris recognition are required.

Postmortem Iris Recognition

Postmortem iris recognition is a promising method of identifying deceased individuals, as it can be more accurate than other traditional forensic methods. Since death causes the pupils to settle in a mid-dilated state (the so-called cadaveric position), iris patterns remain visible until natural decomposition begins, which can be delayed for up to three weeks with appropriate cadaver storage. The main issues with postmortem iris identification are its cost compared to live identification and the lack of training datasets.

