Tuesday, 11 February 2020

Polynomiography

Polynomiography: A Computational and Mathematical Perspective

By
Rana Sohail
MSCS (Networking), MIT


Abstract— Polynomiography is the art and science of visualization applied to approximating the zeros of complex polynomials. The study revolves around the pictorial view of polynomials, where images are created and then used in the educational and scientific worlds. The applications of polynomiography are very useful in the fields of art, education and science. This paper endeavours to establish the relationship between its computational and mathematical aspects.

Keywords— Polynomiography, polynomials, art, science, mathematics


I. INTRODUCTION

Before describing polynomiography, let us discuss the polynomial. For our purposes a polynomial can be viewed as a finite collection of points in the Euclidean plane, namely its roots or zeros. These roots divide the Euclidean plane into independent territories, which are determined by an iteration function. Polynomiography can then be described as the painting of these points in an ordered fashion, making it an artwork. So polynomiography is the art and science of visualization used to make estimates of the zeros of complex polynomials, and the study encompasses the graphic view of polynomials, in which the created images are used in the worlds of education and science. The word 'polynomiography' is made up of 'polynomial' and '-graphy', and the meaning is exactly that: polynomials are presented with the help of graphics. Specific algorithms are required to do the job. The polynomial is a basic and fundamental class of mathematical object, and the multiplication of algebraic expressions is much faster through modern algorithms than before. The images involved may contain from hundreds to millions of pixels, and today's computers make it very easy to obtain the results, manipulating millions of pixels on screen, as in [1].
Polynomiography is the result of an infinite family of iteration functions established for the purpose of approximating a polynomial's roots. Here an iteration function means a mapping of the plane into itself.
The paper is organized in sections. In section II, the basis of polynomiography and its fields of application are defined. In section III, the relationship between mathematical concepts and polynomiography is discussed. In section IV, the discussion is concluded.

II.      POLYNOMIOGRAPHY – THE BASE AND ITS FIELDS

A.    The Base
Consider the polynomial
p(z) = a_n z^n + a_{n-1} z^{n-1} + ... + a_1 z + a_0,
where n ≥ 2 and the coefficients a_i are complex numbers. The problem is to approximate the roots of p(z). Here an iteration function belonging to the "Basic Family" is operative, as in [2]. Among others, the Basic Family has two famous members, namely Newton's iteration function
N(z) = z − p(z)/p'(z),
and Halley's iteration function
H(z) = z − 2p(z)p'(z) / (2p'(z)^2 − p(z)p''(z)).
In both cases upper and lower bounds U_m and L_m are declared, where m ≥ 2 is a natural number. These numbers are computable in terms of the coefficients of p, i.e. each root θ satisfies L_m ≤ |θ| ≤ U_m.
Polynomiography gives visualizations of such properties of the Basic Family.
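As a concrete illustration of the two famous family members, the sketch below applies one step of each iteration function using numpy; the helper names (newton_step, halley_step) and the test polynomial z^3 − 1 are illustrative choices, not taken from the paper.

```python
import numpy as np

def newton_step(p, z):
    """One step of Newton's iteration: N(z) = z - p(z)/p'(z)."""
    return z - np.polyval(p, z) / np.polyval(np.polyder(p), z)

def halley_step(p, z):
    """One step of Halley's iteration:
    H(z) = z - 2 p(z) p'(z) / (2 p'(z)^2 - p(z) p''(z))."""
    pv = np.polyval(p, z)
    dv = np.polyval(np.polyder(p), z)
    d2v = np.polyval(np.polyder(p, 2), z)
    return z - 2 * pv * dv / (2 * dv**2 - pv * d2v)

# Example: p(z) = z^3 - 1, iterating from a point near the real root z = 1.
p = [1, 0, 0, -1]
z = 1.5 + 0.1j
for _ in range(20):
    z = newton_step(p, z)
print(abs(z - 1))  # essentially 0: the iteration converged to the root
```

Both functions map the plane into itself, which is exactly the sense of "iteration function" used above; colouring each starting point by the root it reaches is what produces a polynomiograph.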

B.    Fields
Polynomiography is useful in three important fields, the details of which are as under:-
(1)       Visual Art:             Polynomiography software is very useful; it works much like a camera or a musical instrument. With it one can draw simple as well as complex patterns and designs that are comparable with any classic work of human art.
Polynomiography is very useful for teaching basic and professional artistic and mathematical concepts in the classroom at any level. No prior training in such software is required; even a beginner can handle it comfortably. Much as with a camera, an image can easily be created digitally with it, all made possible by the iteration functions the software uses.
Polynomiography also makes it possible to solve the issue of "reverse root-finding". The roots of any polynomial can be found through an iteration function, and combining them with a desired colour scheme can create wonderful designs. Likewise, the user can create images by entering the coefficients of the polynomial, or the locations of its zeros, into the software. In the field of art, polynomiography can create images in the following ways, as in [3]:-
•  A polynomiographer can use one polynomial and produce a variety of images, thanks to the variety of iteration functions available.
•  A polynomiographer can turn an ordinary image into a very attractive and beautiful one by using colours and imaginative creativity.
•  A polynomiographer can combine art and mathematics by using either polynomials or iteration functions.
•  A polynomiographer can take already created polynomiographs and mix two or more of them.
Figures 1 and 2 are examples of images created with polynomiography software.
(2)  Education:  Polynomiography can be very useful in teaching. It is used to present and solve difficult theorems that deal with polynomials, and it is also helpful for understanding algebra and geometry. Figure 3 presents the Fundamental Theorem of Algebra visually, as in [1]; it shows the polynomiograph of a nine-digit number.
Once the level is raised to higher education, polynomiography software still covers calculus, numerical analysis, notions of convergence, limits, iteration functions, fractals, root-finding algorithms, etc.
(3)  Science: Polynomiography has importance in science because almost all scientific theories involve polynomials, and if we know the roots of a polynomial then we may say that we know the polynomial as well. Special polynomials used in science, like Legendre polynomials as in [4], Chebyshev polynomials as in [5], and orthogonal polynomials as in [6], were difficult to understand, but polynomiography has made them very simple to grasp. Figure 4, as in [1], gives some polynomiographs for a polynomial arising in physics.


III. RELATIONSHIP OF MATHEMATICAL CONCEPTS AND POLYNOMIOGRAPHY
A. Numeric Polynomiography
The use of polynomiography software on numerical data is very interesting: numbers can be encoded with it. ID or credit card numbers, for example, can be turned into two-dimensional images that resemble fingerprints, so that different numbers present different fingerprints. A number a_8 a_7 ... a_0 can be identified with the polynomial P(z) = a_8 z^8 + ... + a_1 z + a_0, as in [1].
Finding square roots of numbers is another interesting and simple task handled through polynomiography; here the polynomial equations whose roots are the square roots are approximated and rendered as images, as in [7]. Figure 5 shows the graphical view of computing the square root of two.
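The square-root computation just described can be sketched in a few lines: Newton's iteration applied to p(x) = x^2 − a converges to √a. The function name sqrt_newton, the starting point, and the step count are illustrative assumptions.

```python
def sqrt_newton(a, x0=1.0, steps=30):
    """Approximate sqrt(a) by Newton's iteration on p(x) = x^2 - a."""
    x = x0
    for _ in range(steps):
        x = x - (x * x - a) / (2 * x)  # N(x) = x - p(x)/p'(x) = (x + a/x)/2
    return x

print(sqrt_newton(2.0))  # 1.4142135623730951
```

Colouring each complex starting point by how it converges under this same iteration is what produces an image like Figure 5.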
Likewise, irrational numbers, complex numbers and iterative methods can be made more vivid and understandable with the help of polynomiography than ever before, as explained in [7].
B. Basins of Attractions and Voronoi Region of Polynomial Roots
The basins of attraction of the roots, in relation to an iteration function, are regions in the complex plane, as shown in figure 6.
A basin of attraction is the set of initial conditions whose long-time behaviour approaches the attractor. The quality of the long-time motion of a system can therefore differ: depending on the initial conditions, the attractors may correspond to periodic, quasiperiodic or chaotic behaviours of different types. The region of the state plane presenting the basin of attraction varies from system to system. These basins can be drawn by hand, but not with ease, and a minute mistake at any stage of the drawing wastes the whole process. Polynomiography makes it easy and simple: the software draws all of it once the initial conditions are set in the prescribed regions.
The Julia set, as defined in [8], of the polynomial roots has a fractal nature. Images of the basins of attraction of Newton's method are very similar to those of some special polynomials, and the mathematical analysis of complex iterations can be approached through polynomiography, as in [9].
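A minimal sketch of how such a basin image can be computed, assuming Newton's method on p(z) = z^3 − 1; the grid extent, resolution and iteration count are arbitrary choices, not values from the paper.

```python
import numpy as np

# the three cube roots of unity, i.e. the roots of z^3 - 1
roots = np.array([1.0, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)])
xs = np.linspace(-2, 2, 200)
Z = xs[None, :] + 1j * xs[:, None]     # 200 x 200 grid of starting points
for _ in range(40):
    Z = Z - (Z**3 - 1) / (3 * Z**2)    # Newton's iteration, applied pixel-wise
# label every pixel with the index of the nearest root it converged to
basin = np.argmin(np.abs(Z[..., None] - roots), axis=-1)
print(basin.shape)  # (200, 200); e.g. matplotlib's imshow renders the image
```

The fractal boundaries between the three labelled regions are exactly the Julia-set structure mentioned above.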
The Voronoi diagram of the polynomial roots divides the plane into a number of regions, one per root, such that the points in each region lie closer to that root than to any other. These regions are also known as Voronoi cells. Such diagrams are very helpful in the fields of science, technology and artwork.
There are a number of algorithms that compute Voronoi regions, such as divide and conquer, brute force, and Fortune's line sweep. Computing the diagram via intersection of half-planes takes O(n^2 log n) time, while proving lower bounds gives Ω(n log n), and so on.
Figure 7 shows the plane divided into twenty regions by twenty points (cell sites), each region consisting of the locations closest to its own point. This sort of display is very easy to produce with polynomiography software.
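A brute-force version of this computation can be sketched as below: each pixel of a grid is labelled with the nearest of twenty random sites, giving the Voronoi cells directly. The grid resolution and random seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sites = rng.random((20, 2))              # twenty random sites in the unit square
xs = np.linspace(0.0, 1.0, 100)
grid = np.stack(np.meshgrid(xs, xs), axis=-1)   # (100, 100, 2) pixel centres
# distance from every pixel to every site, then nearest-site label per pixel
d = np.linalg.norm(grid[:, :, None, :] - sites[None, None, :, :], axis=-1)
cell = np.argmin(d, axis=-1)             # Voronoi cell index for each pixel
print(cell.shape)  # (100, 100)
```

This is the O(pixels × sites) brute-force approach; Fortune's line sweep computes the same partition in O(n log n) over the sites themselves.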
C. Root Sensitivity
The n roots of a polynomial of degree n depend continuously on its coefficients. A root of a polynomial is a zero of the corresponding polynomial function, and any non-zero polynomial has exactly as many roots (counted with multiplicity) as its degree. For example, the polynomial f of degree two,
f(x) = x^2 − 5x + 6,
has the two roots 2 and 3, since
f(2) = 2^2 − 5·2 + 6 = 0 and
f(3) = 3^2 − 5·3 + 6 = 0.
If the function maps real numbers to real numbers, its zeros are the x-coordinates of the points where its graph meets the x-axis, as explained in [10].
Polynomial roots are very sensitive to even small changes in their coefficients. Take the example of
p(z) = (z−1)(z−2)...(z−n).
For n = 7,
p(z) = z^7 − 28z^6 + 322z^5 − 1960z^4 + 6769z^3 − 13132z^2 + 13068z − 5040.

If we change the coefficient of z^6 from −28 to −28.002, this small change causes a large change in the roots: some real roots become complex. This aspect cannot be foreseen in advance while solving the equation, but polynomiography makes it possible to have advance information on this important aspect, as in [11]. Figure 8 shows the changes in the roots as the coefficient of z^6 is decreased.
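The perturbation described above can be reproduced numerically. The sketch below builds the degree-7 polynomial, nudges the z^6 coefficient, and counts the roots driven off the real axis; numpy.roots stands in here for the polynomiography software.

```python
import numpy as np

coeffs = np.poly(np.arange(1, 8))   # coefficients of (z-1)(z-2)...(z-7)
# coeffs == [1, -28, 322, -1960, 6769, -13132, 13068, -5040]
perturbed = coeffs.copy()
perturbed[1] = -28.002              # the tiny change in the z^6 coefficient
r = np.roots(perturbed)
complex_roots = np.sum(np.abs(r.imag) > 1e-6)
print(complex_roots)                # several roots now have a nonzero imaginary part
```

A change of two parts in a hundred thousand in one coefficient is enough to collide neighbouring real roots into complex-conjugate pairs, which is what Figure 8 visualizes.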

D. Complex Multiplication
Multiplication of complex numbers, as in [12], is an important subject which polynomiography has made very easy to understand. We can take an example.
Suppose two complex numbers z_1 = a+bi and z_2 = c+di. Their sum is (a+c) + (b+d)i, and their product is (ac−bd) + (ad+bc)i.
We take another example to explain further. Once the complex number 3+4i is multiplied by i, the result is (3+4i) × i = 3i + 4i^2, and since i^2 = −1, this equals −4 + 3i; figure 9 shows the result in the complex plane.
One important thing can be observed: the rotation is by a right angle (90° or π/2), and the same happens each time the multiplication is repeated.
(−4 + 3i) × i = −4i + 3i^2 = −3 − 4i,
(−3 − 4i) × i = −3i − 4i^2 = 4 − 3i and
(4 − 3i) × i = 4i − 3i^2 = 3 + 4i.
So the results make clear that the rotation has completed a circle, finishing where it started. Figure 10 shows the right angles and the rotation very clearly. Such complex-plane calculations can confuse readers, but polynomiography makes them so simple that anyone with a little knowledge can easily understand them.
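The four multiplications above can be checked numerically in a few lines, using Python's built-in complex numbers:

```python
z = 3 + 4j
seq = [z]
for _ in range(4):
    seq.append(seq[-1] * 1j)   # each multiplication by i rotates by 90 degrees
print(seq)  # [(3+4j), (-4+3j), (-3-4j), (4-3j), (3+4j)]
```

After four quarter-turns the sequence returns to its starting point, which is the full circle described above.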



IV. CONCLUSION

In this paper polynomiography has been discussed in detail. The study of polynomials in the fields of art, science and education has been highlighted, and the underlying mathematical concepts have been discussed in particular. Mathematical concepts that are ambiguous and very difficult for students to grasp are given a new dimension where conceptual thinking has a platform of graphical view. Calculations of polynomial roots and coefficients carried out with a computer or calculator are accurate, but there are many chances that their execution on graph paper would be a disaster over a minute mistake. These topics were very difficult to understand, and polynomiography has made them very simple and easy. Polynomials, whether simple or complex, can both be dealt with very intelligently by polynomiography. This paper intends to clarify the basic concepts for beginners in the field of polynomiography in a befitting manner.


REFERENCES

[1]  Bahman Kalantari, “Polynomiography - A New Intersection between Mathematics and Art”, Department of Computer Science, Rutgers University, USA, 2000, pp. 1.

[2]  Bahman Kalantari, “The Fundamental Theorem of Algebra and Iteration Functions”, Department of Computer Science, Rutgers University, USA, 2003, sec. III and sec. VII.
[3]  Bahman Kalantari, “Polynomiography and Applications in Art, Education, and Science”, Department of Computer Science, Rutgers University, USA, 2003, para 3.
[4]  J.C. Mason and D.C. Handscomb, “Chebyshev Polynomials”, New York: Washington D.C, CRC Press LLC, 2003, Ch. 1. 
[5]  Harry Bateman, “Higher Transcendental Functions”, vol. II, New York, Toronto, London: McGraw-Hill Book Company Inc., 1953, Ch. 10, pp. 178-182.
[6]  H.L. Krall and Orrin Frink, “A New Class of Orthogonal Polynomials: The Bessel Polynomials”,  Transactions of the American Mathematical Society, 1949.
[7]  Bahman Kalantari, “A New Visual Art Medium: Polynomiography” Rutgers University, Computer Graphics, Vol. 38 No. 3 Aug. 2004, ACM SIGGRAPH, Los Angeles, California, USA, Art. 21, pp. 21-23
[8]  Wikipedia, The Free Encyclopaedia website, “Julia Set”. [Online]. Available: http://en.wikipedia.org/wiki/Julia_set
[9]  Bahman Kalantari, “The Art in Polynomiography of Special Polynomials”, Department of Computer Science, Rutgers University, USA, 2003.
[10] Wikipedia, The Free Encyclopaedia website, “Zero of a function”. [Online]. Available: http://en.wikipedia.org/wiki/Polynomial_roots
[11] Bahman Kalantari et al., “Animation of Mathematical Concepts using Polynomiography”, Department of Computer Science, Rutgers University, USA, 2004.
[12] Math is Fun Advanced website, “Complex Number Multiplication”. [Online].  Available:  http://www.mathsisfun.com/algebra/complex-number-multiply.html

Face Recognition Algorithms

Face Recognition Algorithms – An In-depth Study
By
Rana Sohail
MSCS (Networking), MIT


Abstract— Humans have long used faces to recognize people, and the same procedure has now been adopted by machines. After many experiments in this field, it is established that machines can be more accurate than the human eye. At the early stage of machine performance, algorithms were based on simple geometry, but at present it is not that simple: highly sophisticated scientific techniques involving advanced mathematical calculations are in use. The past few years have seen the development of many algorithms and their modified forms. This paper reviews and compares the available face recognition algorithms and identifies the most efficient among them. Furthermore, it considers whether a better algorithm could be developed.
Keywords— Face recognition, algorithms, comparison, machines
I. Introduction
The present era is an age of multimedia, where people frequently interact with machines in terms of proving their identity. Machines, specifically computers, are concerned with the proofs people provide of "what they are" and not "who they are". The proof may be an ID card, password, PIN code, secret question, etc., all of which could easily be used by someone other than the owner. A few years back the technology of biometrics revolutionised this whole scenario. Biometrics deals with the physical traits of a living person, like DNA, fingerprints, facial expressions, physical appearance, handwriting, etc. Identification through biometrics is almost flawless.
Face recognition, one of the biometric methods, is based on facial-feature-analysis algorithms that carry out different kinds of face analysis. The analysis may consist of a single task or a combination: face alignment, modelling, relighting, verification, authentication, expression and gender recognition, etc. Multimedia information is based on digital images and videos, and computers can detect the human faces in them through such algorithms, as in [1]. Once a random image or video is analysed by a face detection algorithm, it determines the faces in that image and gives the location of each. Face recognition technology is very useful and popular in the fields of mugshot identification as in [2], surveillance of people, generation or reconstruction of faces, and access control as in [3].
A number of face recognition algorithms have been developed over the past few years: developed, experimented with, modified, improved and launched. This paper endeavours to explore these algorithms and carry out a fair comparison among them, thereby identifying the most efficient algorithm. In addition, it ponders whether a better face recognition algorithm could be developed.
The paper is organized in sections. In section II, the history and basic framework of face recognition algorithms are discussed. In section III, the available face recognition algorithms are reviewed. In section IV, a comparison is drawn among these algorithms to highlight the most efficient one. In section V, the discussion is concluded.
II. Basic Framework of Algorithm
The basic task of the algorithm is to locate the features of the face, like the eyes, nose, ears, mouth, chin, cheeks, forehead and hair colour, on the image, and then to calculate the distances among them along with ratios to certain reference points. Before discussing the algorithmic framework, its history needs a brief exploration for better understanding.
A. Historical Background
The roots can be traced deep into the previous century: the concept of face recognition was introduced in the 1960s. In the 1970s, the locations of face features were determined and calculated manually, as explained in [4]. By 1988, the involvement of linear algebra in face recognition was considered a milestone, requiring fewer than a hundred values to complete the task, as mentioned in [5]. Eigenface techniques were introduced in 1991, enabling automation of face recognition in real-time scenarios, as in [6]. After such a discovery there was no looking back, and a number of software systems have now been developed that help the public sector as well as governments and intelligence agencies, in multidimensional fields like law enforcement, identification of missing children, and fraud cases involving passports, ID cards, etc.
B. Basic Framework
There are mainly three components that form the functional modules of a face recognition algorithm, as explained in [1], appended below:-
     (1) Face Detector: It performs the task of locating human faces in an image. The image may be clean or cluttered, with a simple or complicated background. Identifying the exact location of a face may not be possible, so an approximation is expected. It has two major components:-
i. Feature Extractor: A component of the face detector, it has the task of transforming the pixels of the detected face image from the main photo into a vector representation.
ii. Pattern Recognizer: Another component of the face detector, it searches the database to find out whether a match is present or not, and categorizes the feature vector as a "face" or "non-face" image.
(2) Face Recognizer: It establishes which individual the detected human face belongs to, using its data bank of stored photographs and mugshots. It has the same two major components as the face detector:-
i. Feature Extractor: As in the face detector, it transforms the pixels of the detected face image from the main photo into a vector representation.
ii. Pattern Recognizer: As in the face detector, it searches the database to find out whether a match of the face is present or not, and classifies the feature vector as a particular individual's face, for example by name.
(3) Eye Localizer: Since the face detector gives only an approximated location of the face, which could also be wrong, the eye localizer helps in its own way. It determines the location of both eyes on the face to confirm a human face, thereby giving the exact location of the face in the image. The above framework is depicted in figure 1, as in [1], below:-
Fig. 1 – Basic Framework – Face Recognition Algorithm 

III. Algorithms Used for Face Recognition


There are a number of algorithms that provide services in this field. Face recognition is image based and has two categories: appearance based and model based. Appearance-based algorithms are further categorized as linear and non-linear, whereas model-based algorithms are computed either 2-dimensionally or 3-dimensionally. These algorithms are shown in figure 2, as in [3], and elaborated as under:-
Fig. 2 – Basic Framework – Face Recognition Algorithm

A. Eigenfaces Technique
It works on the basis of principal component analysis (PCA) in the form of linear combinations. It has two functions: initialization of the system and the face recognition process, as explained in [7]. The steps involved in the initialization operation are as under:-
1) Acquisition of Training Set: The training set, an initial set of face images, is acquired from the concerned authorities.
2) Defining Face Space: Eigenfaces are calculated from the training set. Only the M images with the highest eigenvalues are kept; these M images define the face space. With the addition of new images, the eigenfaces are recalculated and updated.
3) Calculation: This step calculates, for each individual, the distribution of that individual's face images in the M-dimensional weight space by projecting them onto the face space.
The above steps are used for system initialization; after that, the following steps are adopted for recognizing new face images:-
1) Given the input image and the M eigenfaces, a set of weights is calculated by projecting the input image onto each eigenface one by one.
2) It is determined whether the image contains a face at all, and if so whether it is of a known person, by comparing the image with the face space.
3) Once it is declared a face, it is classified as a known or unknown person.
4) (Optional) The weight patterns are updated.
5) (Optional) If an unknown face is encountered many times, its weight patterns are recorded and it is added to the known faces.
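The eigenface computation at the heart of the initialization (training set, face space, weights) can be sketched as follows, with random data standing in for a real training set and M = 4 eigenfaces kept; both are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.random((10, 64))            # 10 training images, 8x8 pixels, flattened
mean_face = faces.mean(axis=0)
A = faces - mean_face                   # centre the training set
# eigenfaces are the top-M right singular vectors of the centred data
U, s, Vt = np.linalg.svd(A, full_matrices=False)
M = 4
eigenfaces = Vt[:M]                     # each row is one eigenface
weights = A @ eigenfaces.T              # weight vector of each face in face space
print(weights.shape)  # (10, 4): M weights per training image
```

Recognition then reduces to projecting a new image the same way and comparing its weight vector against the stored ones.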
B. Artificial Neural Networks (ANNs)
The algorithm's computation is based on statistical learning, inspired by biological neural networks, or the nervous system, where neurons are connected to each other. These neurons have the capability of computing values from the inputs and can perform pattern recognition, as in [8]. The ANN algorithm performs the task in two stages, as in [9], explained as under:-
1) Stage I – ANN Filter: The filter receives a 20 x 20 pixel region of an image and outputs 1 (face) or −1 (no face). With this filter, every location of the image is searched for a face. The algorithm has two steps, pre-processing and neural networking, which evaluate the intensity values and the presence of a face respectively.
2) Stage II – Merging Overlapping Detections and Arbitration: Here the detector's reliability is improved in two ways: by merging the overlapping detections obtained through a single network, and by the same process over multiple networks. Both ways eliminate false detections and identify the correct face detections.
C. Principal Component Analysis (PCA)
This is the same as the eigenfaces algorithm, which has already been explained above in detail. The 2-dimensional image of the face is transformed into 1-dimensional vectors, which leads to the difficult situation of evaluating the covariance matrix correctly. The accuracy is questionable once the matrix is large but the samples are few in number. Another issue is the consumption of a lot of time in the process, which can lead to deadlock at times, as explained in [10].
(1) 2-Dimensional PCA (2DPCA): As explained earlier, the original PCA algorithm transforms the 2-dimensional image into 1-dimensional vectors, which produces results with many difficulties and issues; the method of 2DPCA is therefore more helpful. This algorithm takes the image rows to build the covariance matrix, without any transformation from image to vectors; computation is then done and eigenvectors are produced. The accuracy is more reliable, and because the matrix has the same size as the image width, the results are achieved in a much smaller timeframe. Moreover, it is very efficient compared with the PCA algorithm.
(2) Diagonal PCA (DiaPCA): As 2DPCA only reads information between rows, it cannot cover diagonal relationships. This issue is resolved by transforming the original images into corresponding diagonal face images; the algorithm then reads the information between the rows and columns simultaneously.
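A minimal sketch of the 2DPCA image covariance described above, on random stand-in images; the image size, sample count and number of eigenvectors kept are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
images = rng.random((30, 32, 32))       # 30 training images of 32 x 32 pixels
mean_img = images.mean(axis=0)
G = np.zeros((32, 32))
for img in images:
    D = img - mean_img
    G += D.T @ D                        # covariance built directly from image rows
G /= len(images)                        # G is only width x width: no flattening
vals, vecs = np.linalg.eigh(G)
features = images[0] @ vecs[:, -5:]     # project one image onto top 5 eigenvectors
print(features.shape)  # (32, 5)
```

Because G is 32 x 32 rather than 1024 x 1024, the eigendecomposition is far cheaper than in vectorized PCA, which is the efficiency gain claimed above.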
D. Independent Component Analysis (ICA)
ICA is basically an extension of PCA. It deals with data analysis and compression, and can be explained as a statistical way of transforming a multidimensional vector into components that are statistically independent from each other, as explained in [11].
E. Linear Discriminant Analysis (LDA)
This algorithm works by reducing the dimensions and extracting features. The LDA algorithm performs data classification by transforming the data sets. There are two approaches by which the test vectors are classified, as in [12]:-
(1) Class-Dependent Transformation: The ratio of between-class variance to within-class variance is maximized for each class separately, so that adequate and sufficient class separability is attained; transforming the data sets independently involves two optimizing criteria.
(2) Class-Independent Transformation: The overall ratio of between-class variance to within-class variance is maximized, attaining class separability; transforming the data sets involves one optimizing criterion.
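The ratio-maximizing idea behind both transformations can be sketched for the simplest two-class case; the toy Gaussian data below is an assumption, not a face dataset.

```python
import numpy as np

rng = np.random.default_rng(3)
c1 = rng.normal([0.0, 0.0], 0.5, (50, 2))   # class 1 samples
c2 = rng.normal([3.0, 1.0], 0.5, (50, 2))   # class 2 samples
m1, m2 = c1.mean(axis=0), c2.mean(axis=0)
# within-class scatter matrix
Sw = (c1 - m1).T @ (c1 - m1) + (c2 - m2).T @ (c2 - m2)
# Fisher direction: maximizes between-class over within-class variance
w = np.linalg.solve(Sw, m2 - m1)
w /= np.linalg.norm(w)
print(w)  # unit vector along which the two classes separate best
```

Projecting the samples onto w gives one-dimensional features in which the class means are pushed apart relative to the spread within each class.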
F. Kernel PCA (KPCA)
It is a non-linear extension of PCA in which the input space is mapped into a feature space; the principal components are then computed within that feature space. In other words, a non-linear mapping of the input data into feature space followed by linear PCA computation in that space is known as KPCA, as in [13].
G. ISOMAP/ Locally Linear Embedding (LLE)
This is a non-linear method of dimensionality reduction. The algorithm determines the neighbours of each point, then constructs a graph from these neighbours (nodes). On the basis of the neighbourhood graph, the shortest paths among the nodes are computed and a lower-dimensional embedding is obtained, as in [14].
H. Elastic Bunch Graph Matching (EBGM)
In this algorithm, labelled graphs are used: the edges represent distance information, and the nodes hold wavelet responses, which are placed in jets. New images are matched against already stored graphs to formulate the image graph, and these model graphs are gathered to form a gallery. Since one person cannot be recognized from only one or two graphs, a number of images from different directions are gathered in the gallery for a single person, forming a bunch; so for every pose of a person, bunch graphs are used.
There are two stages involved in the creation of bunch graphs: building the graph's structure, which includes the edges and nodes, and assigning the labels, which include the jets and distances, as elaborated in [15].
I. Active Appearance Model (AAM)
During the training phase, the AAM algorithm computes a statistical model of the new image's appearance. It is generally used in the medical field for face tracking. It works in an environment where the optimization process is driven by the difference between the current estimate of the image appearance and the target appearance, as described in [16].
J. 3D Morphable Model (3DMM)
Here the 3-dimensional shape of the face is used, which makes it easy to find and estimate the pose along with the illumination strength of the image. Moreover, 3D also supports enhancement of the lighting and resolution of the image, as in [17].
IV. Comparison – Face Recognition Algorithms
In this section a comparison is drawn among the available appearance-based and model-based algorithms to highlight the most efficient one in terms of different aspects: better performance, lower error rates, less time consumption and better success rates. These are elaborated as under:-
A. Appearance – Based Algorithms – Comparison
The performance can be gauged by putting the algorithms through an experimental system. Appearance-based algorithms follow either a linear or a non-linear standard. PCA, LDA and ICA work on linear schemes and are well compared by experiments carried out with the MATLAB tool, as explained in [18].
In the first experiment the number of training images is gradually increased, which results in a decrease in ICA performance while PCA and LDA remain constant. The second experiment increases the illumination of the training images: a 20% increase in illumination does not have much effect on the success rate of these algorithms, but a 50% increase reduces the success rate of all. The results show that PCA is less affected than LDA, and LDA less affected than ICA. The third experiment is based on partial occlusion of the face; the results show that PCA is less sensitive than ICA, and ICA less sensitive than LDA. The success rate is therefore much better for PCA compared with the others, as in [18].
Non-linear algorithms like KPCA and KLDA give better results in terms of much lower error rates and higher success rates. The experiment shows the performance comparison between the kernel (non-linear) algorithms and the linear PCA, LDA and ICA algorithms, as in [19].
B. Model-Based Algorithms – Comparison
In model-based algorithms there are two main categories: 2-dimensional (2D) and 3-dimensional (3D) based algorithms. The performance can be gauged by comparing both types in an experimental system, as explained in [20].
The experiment is a test comparing the performance of both the 2- and 3-dimensional classifications. A 2D algorithm may be 90% successful under controlled lighting conditions, but it is not good enough when dealing with increased illumination, varied image pose or varied facial expression. A 3D algorithm is more successful because it takes the pose, the head shape and the face from three sides into consideration dimensionally. Moreover, a 3D algorithm memorizes the geometry of the face and builds a polygonal mesh consisting of vertices and edges; connecting the vertices through the edges gives a three-dimensional image, as in [20].
C. Most Efficient Face Recognition Algorithm
The above comparison gives out Kernel PCA algorithm and 3D model algorithm as the best ones among the appearance-based and model-based algorithms respectively. If appearance and model based algorithms are combined then more efficient algorithm can be achieved which could give out better results.
An experiment is carried out in which 2D- and 3D-based algorithms are tested with a PCA-based algorithm. The results show that the 3D algorithm built on PCA performs better than the 2D one, as in [21].
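One simple way such a combination can work is feature-level fusion: PCA-reduce the 2D texture image and the 3D depth (range) image separately, then concatenate the coefficient vectors before matching. The sketch below is a hypothetical illustration of that idea on random data; the dimensions and the fusion scheme are assumptions, not the exact method of [21].

```python
import numpy as np

rng = np.random.default_rng(1)
tex = rng.normal(size=(10, 100))    # stand-in 2D texture images (flattened)
depth = rng.normal(size=(10, 100))  # stand-in 3D range/depth images

def pca_project(X, k):
    """Project rows of X onto their top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Fuse the two modalities: each face becomes one concatenated vector,
# so a single nearest-neighbour matcher can use both cues at once.
fused = np.hstack([pca_project(tex, 5), pca_project(depth, 5)])
print(fused.shape)
```

Matching on the fused vectors lets the texture channel compensate where geometry is ambiguous and vice versa, which is the intuition behind the 2D+3D improvement reported above.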
Developments in the field show that the above-mentioned algorithms are the most efficient face recognition algorithms so far, but there is no end in sight: avenues remain open for exploration.
V. Conclusion
This paper has discussed the available face recognition and detection algorithms. The discussion shows that this art is about half a century old and has seen steady progress, yet its evolution has not reached its culmination. Among the algorithms considered, each new one improves on the previous: principal component analysis gives good results on its own, and even better results are achieved when it is combined with a 3-dimensional algorithm.

In the present era, when security is on high alert, the need for biometric systems has never been felt so strongly. Recognition by face is one of the best ways to distinguish friend from foe.
References
[1]	S.-H. Lin, "An Introduction to Face Recognition Technology", Informing Science Special Issue on Multimedia Informing Technologies, Part 2, vol. 3, 2000, pp. 1-7.
[2]	National Institute of Standards and Technology (NIST), NIST Special Database 18: Mugshot Identification Database (MID). [Online]. Available: http://www.nist.gov/srd/nistsd18.htm
[3]	X. Lu, "Image Analysis for Face Recognition", Dept. of Computer Science & Engineering, Michigan State University, USA, 2003, p. 7.
[4]	A. J. Goldstein et al., "Identification of Human Faces", Proc. IEEE, vol. 59, no. 5, May 1971, pp. 748-760.
[5]	L. Sirovich and M. Kirby, "A Low-Dimensional Procedure for the Characterization of Human Faces", J. Optical Soc. Am. A, vol. 4, no. 3, 1987, pp. 519-524.
[6]	M. A. Turk and A. P. Pentland, "Face Recognition Using Eigenfaces", Proc. IEEE CVPR, 1991, pp. 586-591.
[7]	M. Turk and A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, 1991, pp. 71-86.
[8]	"Artificial neural network", Wikipedia. [Online]. Available: http://en.wikipedia.org/wiki/Artificial_neural_network
[9]	H. A. Rowley et al., "Human Face Detection in Visual Scenes", Carnegie Mellon University, USA, Jul. 1995, pp. 1-6.
[10]	D. Zhang et al., "Diagonal Principal Component Analysis for Face Recognition", Nanjing University, China, 2006, pp. 1-2.
[11]	P. Comon, "Independent Component Analysis, a New Concept?", Signal Processing, vol. 36, 1994, pp. 287-314.
[12]	S. Balakrishnama and A. Ganapathiraju, "Linear Discriminant Analysis - A Brief Tutorial", Mississippi State University, USA, 1998, p. 2.
[13]	K. I. Kim et al., "Face Recognition Using Kernel Principal Component Analysis", IEEE Signal Processing Letters, vol. 9, no. 2, Feb. 2002, pp. 1-3.
[14]	H. Zha and Z. Zhang, "Isometric Embedding and Continuum ISOMAP", Proc. Intl. Conf. on Machine Learning, USA, 2003, pp. 864-871.
[15]	L. Wiskott et al., "Face Recognition by Elastic Bunch Graph Matching", Ruhr University Bochum, Germany, 1996, pp. 1-12.
[16]	T. F. Cootes et al., "Active Appearance Models", Proc. European Conf. on Computer Vision, vol. 2, UK, 1998, pp. 484-498.
[17]	X. Zhu et al., "Robust 3D Morphable Model Fitting by Sparse SIFT Flow", Chinese Academy of Sciences, China, 2014, pp. 1-6.
[18]	Ö. Toygar and A. Acan, "Face Recognition Using PCA, LDA and ICA Approaches on Colored Images", Istanbul University Journal of Electrical & Electronics Engineering, vol. 3, no. 1, 2003, pp. 735-743.
[19]	M.-H. Yang, "Kernel Eigenfaces vs. Kernel Fisherfaces: Face Recognition Using Kernel Methods", Proc. 5th Intl. Conf. on Automatic Face and Gesture Recognition, 2002, pp. 215-220.
[20]	A. F. Abate et al., "2D and 3D Face Recognition: A Survey", Pattern Recognition Letters, 2007, pp. 1897-1904.
[21]	K. I. Chang et al., "Face Recognition Using 2D and 3D Facial Data", University of Notre Dame, USA, 2003, pp. 1-7.
