Problem Definition on Face Recognition : A Review
Akshaya Kumar Sharma1* and Amit Shrivastava2
1 Research Scholar, Department of Electronics and Communication Engineering, VNS Group of Institutions, Bhopal (Madhya Pradesh), INDIA
2 Assistant Professor, Department of Electronics and Communication Engineering, VNS Group of Institutions, Bhopal (Madhya Pradesh), INDIA
* Correspondence: E-mail: firstname.lastname@example.org
(Received 21 Dec, 2018; Accepted 05 Feb, 2019; Published 10 Feb, 2019)
ABSTRACT: Face recognition faces many challenges due to illumination variations, large dimensionality, uncontrolled environments, aging and pose variations. In recent years, face recognition has achieved remarkable improvements in accuracy against these challenges, but matching in a heterogeneous environment remains very difficult: matching face images captured in the near infrared spectrum (NIR) to face images in the visible spectrum (VIS) is a particularly challenging task. Recent research can be categorized into three aspects: face synthesis analysis, subspace methods, and local feature-based approaches. In this paper we survey earlier research to identify the challenges in the cross-spectral face recognition model.
Keywords: Cross-spectral face recognition; human identification; near infrared spectrum (NIR); common discriminant feature extraction (CDFE); visible spectrum (VIS).
Face Recognition & Applications of Biometric Systems: The applications of biometrics fall into the following three main groups.
Commercial applications such as computer network login, electronic data security, e-commerce, Internet access, ATM and credit card use, physical access control, cellular phones, PDAs, medical records management, and distance learning.
Government applications such as national ID cards, correctional facilities, driver's licenses, social security, welfare disbursement, border control, and passport control.
Forensic applications such as corpse identification, criminal investigation, terrorist identification, parenthood determination, and finding missing children.
Traditionally, commercial applications have used knowledge-based systems (e.g., PINs and passwords), government applications have used token-based systems (e.g., ID cards and badges), and forensic applications have relied on human experts to match biometric features. Biometric systems are being increasingly deployed in large-scale civilian applications. The Schiphol Privium scheme at Amsterdam airport, for instance, employs iris-scan cards to speed up passport and visa control procedures. Passengers enrolled in this scheme insert their card at the gate and look into a camera; the camera acquires an image of the traveler's eye and processes it to locate the iris and compute the iris code; the computed iris code is compared with the data residing in the card to complete user verification. A similar scheme is also used to verify the identity of Schiphol airport employees working in high-security areas. Thus, biometric systems are used to enhance user convenience while improving security.
Face recognition, which is very popular nowadays, has many useful applications such as forensics, person identification, bank card identification, access control and surveillance [5, 6]. The face recognition process is shown in Figure 1. Face images are acquired by a camera, then features are extracted and stored in the database as the biometric template. To recognize a user, the same process is repeated up to feature extraction; the extracted features are then matched against the stored features and a decision is made to accept or reject.
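The enrolment-and-verification flow just described can be sketched as follows. This is an illustrative toy only: the flattened-and-normalized "feature extractor", the cosine-similarity threshold of 0.9 and the synthetic random "images" are all stand-ins, not part of any system cited in this review.

```python
import numpy as np

def extract_features(image):
    """Toy feature extractor (stand-in for a real descriptor such as LBP or HOG):
    flatten the image and L2-normalize, so matching reduces to cosine similarity."""
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def enroll(image):
    """Enrolment: the extracted feature vector is stored as the biometric template."""
    return extract_features(image)

def verify(probe_image, template, threshold=0.9):
    """Verification: accept when the cosine similarity between the probe's
    features and the stored template exceeds the threshold."""
    score = float(extract_features(probe_image) @ template)
    return score >= threshold

rng = np.random.default_rng(0)
enrolled_face = rng.random((32, 32))        # stand-in for an acquired face image
template = enroll(enrolled_face)

# A slightly perturbed re-capture of the enrolled "face" should be accepted,
# while an unrelated image should be rejected.
accepted_genuine = verify(enrolled_face + 0.01 * rng.random((32, 32)), template)
accepted_impostor = verify(rng.random((32, 32)), template)
```

In a deployed system the threshold would be chosen from genuine/impostor score distributions on a validation set, trading off false accepts against false rejects.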
Face recognition has many challenges due to illumination variations, large dimensionality, uncontrolled environments, aging and pose variations. In recent years, face recognition has achieved remarkable improvements in accuracy against these challenges, but matching in a heterogeneous environment, such as matching face images captured in the near infrared spectrum (NIR) to face images in the visible spectrum (VIS), remains a very challenging task.
Figure 1: Face recognition workflow.
LITERATURE REVIEW: Face matching across heterogeneous spectra is a very challenging task, and cross-spectral matching of VIS and NIR images across different modalities is especially difficult. Researchers have proposed many solutions and algorithms to match visible and NIR images. Recent research can be categorized into three aspects: face synthesis analysis, subspace methods, and local feature-based approaches.
Traditional research on heterogeneous face recognition has mainly focused on three ways of alleviating the cross-modal gap: designing invariant features for the different modalities, transforming one face modality into the other, and projecting both image modalities onto a common subspace. Modality-invariant features such as SIFT or LBP are extracted in [5, 6]. Synthesis-based approaches are employed in [7, 8]: Tang et al. propose an eigen-transformation method, whereas Liu et al. reconstruct image patches based on LLE. Other approaches project cross-domain images into a common subspace using LDA and TCA (transfer component analysis), respectively.
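The modality-invariant LBP features mentioned above can be illustrated with a basic 8-neighbour Local Binary Pattern in a few lines of NumPy. This is a generic sketch of the LBP idea, not the exact variant (e.g. uniform or multi-block LBP) used in the cited works.

```python
import numpy as np

def lbp_codes(image):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel becomes an
    8-bit code whose bits record whether each neighbour is >= the centre pixel."""
    img = image.astype(np.float64)
    centre = img[1:-1, 1:-1]
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(image):
    """256-bin normalized LBP histogram used as the face descriptor."""
    hist = np.bincount(lbp_codes(image).ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

# LBP codes depend only on the ordering of neighbouring intensities, so any
# monotonic intensity change (a crude stand-in for an illumination or
# modality shift) leaves the descriptor unchanged.
rng = np.random.default_rng(1)
face = rng.random((16, 16))
shifted = 0.5 * face + 0.2      # strictly increasing transform of intensities
```

This invariance to monotonic grey-level changes is precisely why LBP is a popular choice when the VIS and NIR modalities differ mainly in how they render intensities.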
Recently, Juefei-Xu et al. proposed a joint-dictionary based approach to reconstruct face images on the basis of images in the other domain, which achieves the best verification rate to date (85.80%) on the CASIA NIR-VIS 2.0 Face Database. With the advent of deep learning techniques, many vision-related problems have entered a new era, and some attempts have been made at heterogeneous face recognition. Ngiam et al. propose a bimodal deep autoencoder method based on the denoising autoencoder. To exploit the potential of all layers, Srivastava et al. suggest a multi-modal DBM approach. The state-of-the-art rank-1 accuracy (86.16%) is achieved by an approach that resorts to an RBM combined with removed-PCA features. Although these unsupervised approaches typically perform well on small-scale NIR-VIS datasets, NIR-VIS matching accuracy is still well below that of VIS face recognition methods.
The common discriminant feature extraction (CDFE) method represents the different domains in a common subspace, in which NIR and VIS images have similar representations and both intra-modality and inter-modality local smoothing are performed. Jun-Yong Zhu et al. proposed transductive heterogeneous face matching (THFM), which learns VIS-NIR matching from VIS-NIR image pairs; they also proposed a feature representation based on Log-DoG filtering, local encoding, and uniform feature normalization. Yi et al. introduced canonical correlation analysis (CCA) to learn the correlation between NIR and VIS faces from NIR-VIS face pairs. Recently, Lei and Li suggested solving the problem via coupled spectral regression (CSR): in their model, a low-dimensional representation for each face is first computed using a discriminative graph embedding method, and then two associated projections are learned to project the heterogeneous data into a discriminative common subspace for final classification. THFM also learns such a subspace, but its objective is to model domain adaptation for VIS-NIR matching in a transductive way, whereas these related works are non-transductive. Invariant feature extraction can be global or local feature based; the objective of these methods is to extract features that are invariant to lighting conditions.
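The common-subspace idea behind the CCA-based matching above can be illustrated with a minimal NumPy implementation. The data here are synthetic: a shared latent "identity" factor observed through two different linear maps stands in for paired VIS/NIR features, and the whitening-then-SVD formulation is the classical textbook construction, not the exact algorithm of any cited paper.

```python
import numpy as np

def cca(X, Y, dim=2, reg=1e-6):
    """Minimal canonical correlation analysis: learn projections Wx, Wy mapping
    the two modalities into a common subspace where paired samples correlate.
    X, Y: (n_samples, n_features) arrays with rows paired across modalities."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whiten each modality; an SVD of the whitened cross-covariance then
    # yields the canonical directions and the canonical correlations.
    Lx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Ly = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Lx @ Cxy @ Ly.T)
    Wx = Lx.T @ U[:, :dim]
    Wy = Ly.T @ Vt[:dim].T
    return Wx, Wy, s[:dim]          # s[:dim] are the canonical correlations

# Synthetic paired data: a 2-D latent factor seen through two "modalities".
rng = np.random.default_rng(2)
z = rng.standard_normal((200, 2))
X = z @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((200, 5))
Y = z @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((200, 6))
Wx, Wy, corr = cca(X, Y)
```

After projection, `X @ Wx` and `Y @ Wy` live in the same subspace, so cross-modal matching reduces to ordinary nearest-neighbour search there; in the cited works the inputs would be face descriptors extracted from VIS and NIR images rather than synthetic features.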
Tan and Triggs reduce the difference between NIR and VIS images by preprocessing based on gamma correction and Difference-of-Gaussians (DoG) filtering, while Klare et al. combine histogram of oriented gradients (HOG) features with LBP to describe the face images. Goswami et al. introduced an efficient preprocessing chain to reduce the difference between VIS and NIR facial images based on gamma correction, Difference-of-Gaussians (DoG) filtering and contrast equalization. Liao et al. suggested encoding both VIS and NIR face images using Multi-block LBP (MBLBP) after DoG filtering; AdaBoost and R-LDA were then applied for further feature selection. Following this work, the binary Laplacian-of-Gaussian (LoG) was also investigated. Recently, Liu et al. proposed Light Source Invariant Features (LSIFs) to fill the gap between VIS and NIR face images. In that work, multi-scale DoG is first performed to generate an over-complete face representation; three local descriptors, namely HOG, GLOH and SIFT, are then applied to construct the candidate feature pool; and finally AdaBoost is used to select the best features. However, AdaBoost is time consuming and needs abundant samples to achieve robust performance, which limits its applicability; alternatively, domain-invariant features can be exploited more economically through a learning procedure.
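The preprocessing chain of gamma correction, DoG filtering and contrast equalization can be sketched as below, in the spirit of Tan and Triggs. The parameter values (gamma, the two sigmas, alpha, tau) are illustrative defaults, not necessarily the exact settings of the cited work.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using 1-D convolutions (NumPy only)."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def preprocess(img, gamma=0.2, sigma0=1.0, sigma1=2.0, alpha=0.1, tau=10.0):
    """Illumination-normalizing chain in the spirit of Tan and Triggs:
    gamma correction -> DoG band-pass filtering -> two-stage contrast
    equalization -> compression of extreme values."""
    img = np.power(np.clip(img.astype(np.float64), 0, None) + 1e-6, gamma)  # gamma correction
    img = gaussian_blur(img, sigma0) - gaussian_blur(img, sigma1)           # DoG band-pass
    img = img / np.mean(np.abs(img) ** alpha) ** (1 / alpha)                # equalization, stage 1
    img = img / np.mean(np.minimum(np.abs(img), tau) ** alpha) ** (1 / alpha)  # stage 2, clipped
    return tau * np.tanh(img / tau)                                         # compress extremes

# The chain largely cancels a global brightness rescaling, one source of the
# appearance gap between differently lit (or differently sensed) face images.
rng = np.random.default_rng(3)
face = 0.2 + 0.6 * rng.random((24, 24))     # synthetic image, values in [0.2, 0.8)
out_dim, out_bright = preprocess(face), preprocess(4.0 * face)
```

Because the DoG step is linear and the equalization stages divide out scale, the output for the 4x-brighter copy is (up to a tiny epsilon in the gamma step) identical to the original's, which is the point of using such a chain before cross-spectral matching.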
More significantly, existing feature descriptors are designed by trial and error and lack theoretical support; the underlying principles of several popular descriptors for VIS-NIR matching therefore deserve examination, together with an analysis of their illumination-invariance properties. Yi et al. used canonical correlation analysis based learning in a linear discriminant analysis (LDA) subspace for matching. A random-subspace ensemble of classifiers has been used together with nearest neighbor (NN) and sparse-representation based matching. Similarly, Maeng et al. used HOG features for cross-spectrum and cross-distance face matching. Most of these algorithms are evaluated on small-scale datasets, such as the heterogeneous face biometrics (HFB) dataset and CARL, which comprise a limited number of subjects and/or vague experimental protocols. Therefore, claims about the generalizability of their performance cannot be made with confidence, and benchmarking is difficult.
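The HOG features used by several of the works above can be illustrated with a simplified descriptor: per-cell histograms of gradient orientation weighted by gradient magnitude. Real HOG adds block normalization and orientation interpolation; this sketch omits both, and the cell size and bin count are illustrative.

```python
import numpy as np

def hog_like(image, cell=8, bins=9):
    """Simplified HOG-style descriptor: for each cell, a histogram of unsigned
    gradient orientations weighted by gradient magnitude (no block norm)."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                        # image gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            idx = np.minimum((a / np.pi * bins).astype(int), bins - 1)
            hist = np.bincount(idx, weights=m, minlength=bins)
            feats.append(hist / (np.linalg.norm(hist) + 1e-12))
    return np.concatenate(feats)

# A purely horizontal intensity ramp has all gradient energy in one
# orientation, so every cell histogram concentrates in a single bin.
img = np.tile(np.arange(16.0), (16, 1))
desc = hog_like(img)
```

Because the histograms describe gradient *direction* rather than raw intensity, such descriptors are comparatively stable across the VIS/NIR gap, which is why HOG appears repeatedly in the cross-spectral literature.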
PROBLEM DEFINITION: Face recognition has many challenges due to illumination variations, large dimensionality, uncontrolled environments, pose variations and aging. In recent years, face recognition has achieved remarkable improvements in accuracy against these challenges, but illumination change is still challenging. Li et al. proposed an NIR imaging system that gives satisfactory results for face recognition under illumination variation, but it does not perform well when matching NIR images to visible images. Unfortunately, the face images in existing databases are stored in the visible spectrum.
There are several challenges in the cross-spectral face recognition model, as follows:
· The visible and near infrared spectra occupy different wavelength ranges: the visible spectrum spans roughly 0.4 µm to 0.7 µm, while the NIR spectrum spans 0.7 µm to 1.4 µm.
· Face images of the same person in the NIR and visible spectra look very different, so it is difficult even for a human to recognize them as the same person.
· Illumination changes and differences in facial expressions.
· Practical challenges arise from the orientation and misalignment of the face in different images.
· Illumination variance, facial expressions, the surrounding environment and lighting conditions also affect matching.
REFERENCES:
1. Schiphol Backs Eye Scan Security. CNN World News. [Online]. Available: http://www.cnn.com/2002/WORLD/europe/03/27/schiphol.security/
2. J. Daugman (1999) Recognizing persons by their iris patterns, in Biometrics: Personal Identification in a Networked Society, A. K. Jain, R. Bolle, and S. Pankanti, Eds. Norwell, MA: Kluwer, pp. 103-121.
3. C. Sanderson (2008) Biometric Person Recognition: Face, Speech and Fusion. VDM Publishing.
4. S. Ouyang, T. Hospedales, Y. Song, and X. Li. A survey on heterogeneous face recognition: Sketch, infra-red, 3d and low-resolution. In arXiv preprint arXiv:1409.5114, 2014.
5. G. H. K. and S. T. (2012) Inter-modality face sketch recognition. In Multimedia and Expo, IEEE International Conference, 224-229.
6. B. Klare and A. K. Jain (2010) Sketch-to-photo matching: a feature-based approach. In Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 1, 7667.
7. Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma (2005) A nonlinear approach for face sketch synthesis and recognition. In Computer Vision and Pattern Recognition, IEEE Computer Society Conference, 1, 1005-1010.
8. X. Tang and X. Wang (2003) Face sketch synthesis and recognition. In Computer Vision, IEEE International Conference, 1, 687-694.
9. R. Wang, J. Yang, D. Yi, and S. Z. Li (2009) An analysis-by-synthesis method for heterogeneous face biometrics. In Biometrics, International Conference, 319-326.
10. S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang (2011) Domain adaptation via transfer component analysis. Neural Networks, IEEE Transactions, 22(2), 199-210.
11. F. Juefei-Xu, D. K. Pal, and M. Savvides (2015) NIR-VIS heterogeneous face recognition via cross-spectral joint dictionary learning and reconstruction. In Computer Vision and Pattern Recognition Workshops, IEEE International Conference, 14115.
12. S. Z. Li, D. Yi, Z. Lei, and S. Liao (2013) The CASIA NIR-VIS 2.0 face database. In Computer Vision and Pattern Recognition Workshops, IEEE International Conference, 348-353.
13. J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng (2011) Multimodal deep learning. In Machine Learning, International Conference, 689-696.
14. N. Srivastava and R. R. Salakhutdinov (2012) Multimodal learning with deep Boltzmann machines. In Neural Information Processing Systems (NIPS), 2222-2230.
15. R. Wang, J. Yang, D. Yi, and S. Li (2009) An analysis-by-synthesis method for heterogeneous face biometrics, in Proc. 3rd Int. Conf. ICB, 319-326.
16. J. Chen, D. Yi, J. Yang, G. Zhao, S. Z. Li, and M. Pietikainen (2009) Learning mappings for face synthesis from near infrared to visible light images, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 156-163.
17. Z. Lei and S. Z. Li (2009) Coupled spectral regression for matching heterogeneous faces, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1123-1128.
18. J. Y. Zhu, W. S. Zheng, and J. Lai (2012) Transductive VIS-NIR face matching, in Proceedings of IEEE International Conference on Image Processing, 1437-1440.
19. J.-Y. Zhu, W.-S. Zheng, and J.-H. Lai (2014) Matching NIR face to VIS face using transduction, IEEE Trans. on Information Forensics and Security, 9(3).
20. D. Yi, R. Liu, R. Chu, Z. Lei, and S. Li (2007) Face matching between near infrared and visible light images, in Proc. Int. Conf. ICB, 523-530.
21. Z. Lei and S. Li (2009) Coupled spectral regression for matching heterogeneous faces, in Proc. IEEE Conf. CVPR, 1123-1128.
22. L. Hong and A. K. Jain (1998) Integrating faces and fingerprints for personal identification, IEEE Trans. Pattern Anal. Machine Intell., 20, 1295-1307.
23. X. Tan and B. Triggs (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions, IEEE Trans. Image Process., 19(6), 1635-1650.
24. B. Klare and A. Jain (2010) Heterogeneous face recognition: Matching NIR to visible light images, in Proc. 20th ICPR, 1513-1516.
25. S. Liu, D. Yi, Z. Lei, and S. Li (2012) Heterogeneous face image matching using multi-scale features, in Proc. 5th IAPR ICB, 79-84.
26. D. Goswami, C. H. Chan, D. Windridge and J. Kittler (2011) Evaluation of face recognition system in heterogeneous environments (visible vs NIR), in Proc. IEEE ICCVW, 2160-2167.
27. S. Liao, D. Yi, Z. Lei, R. Qin, and S. Li (2009) Heterogeneous face recognition from local structures of normalized appearance, in Proc. 3rd Int. Conf. ICB, 209-218.
28. D. Yi, S. Liao, Z. Lei, J. Sang, and S. Li (2009) Partial face matching between near infrared and visible images in MBGC portal challenge, in Proc. 3rd Int. Conf. ICB, 733-742.
29. T. Ojala, M. Pietikainen, and T. Maenpaa (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell., 24(7), 971-987.
30. D. Yi, R. Liu, R. Chu, Z. Lei, and S. Z. Li (2007) Face matching between near infrared and visible light images, in Advances in Biometrics, 523530.
31. H. Maeng, S. Liao, D. Kang, S. W. Lee, and A. K. Jain (2013) Nighttime face recognition at long distance: cross-distance and cross-spectral matching, in Proceedings of Asian Conference on Computer Vision, 708-721.
32. S. Z. Li, Z. Lei, and M. Ao (2009) The HFB face database for heterogeneous face biometrics research, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, 1-8.
33. V. Espinosa-Duró, M. Faundez-Zanuy, and J. Mekyska (2013) A new face database simultaneously acquired in visible, near-infrared and thermal spectrums, Cognitive Computation, 5(1), 119-135.
34. M. Zahid Alam, Ravi Shankar Mishra and A.S. Zadgaonkar, (2015) Image Denoising using Common Vector Elimination by PCA and Wavelet Transform, International Journal on Emerging Technologies, 6(2): 157-164.
35. Nandan Kumar and Prof. Sneha Jain, (2018) Digital Water Marking Techniques and Uses intellectual property rights, International Journal of Electrical, Electronics, and Computer Engineering 7(2): 42-50.
36. Nandan Kumar and Prof. Sneha Jain (2018) A Review of Digital Water Marking in protect information copyright info privacy Techniques, International Journal on Emerging Technologies, 9(2): 54-61.
37. Vivek Patil, (2015) Error-Free correlation in Encrypted Attack Traffic by Watermarking flow through Stepping Stones, International Journal on Emerging Technologies, 6(2): 235-239.