Thus, accurate diagnosis, especially in the early stages of Alzheimer's disease, is very important. The Alzheimer's Disease Neuroimaging Initiative (ADNI) was launched in 2003 with the primary goal of analysing whether a combination of serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be used to measure the progression of mild cognitive impairment and early Alzheimer's disease. Deep learning is a branch of artificial intelligence that employs a variety of probabilistic calculations and optimisation techniques, allowing a system to learn from vast and complex data. A convolutional neural network (CNN) is a class of deep, feed-forward artificial neural networks most commonly used in analysing visual imagery. This project aims at developing a deep learning algorithm using a convolutional neural network for better detection of early Alzheimer's disease, using the dataset provided by ADNI for classification. Millions of people around the world are living with Alzheimer's disease or some other form of dementia. Early detection of Alzheimer's disease has many benefits that can help patients and their loved ones make the right decisions. Early diagnosis allows people with AD and their families to receive timely practical information, advice and support, giving them access to available drug and non-drug therapies that may improve their cognition and enhance their quality of life. Undetected Alzheimer's disease places older adults at risk of delirium, motor vehicle accidents, medication errors, and financial difficulties, to name a few. Hence, this project aims at improving the accuracy of early detection of Alzheimer's disease in order to help people take the right decision at the right time.

Existing system

Various classification methods have been proposed for automatically distinguishing Alzheimer's disease patients from normal controls.
The results have achieved accuracies of up to 87% sensitivity and 95% specificity. For example, in "Efficient mining of association rules for the early diagnosis of Alzheimer's disease" by R. Chaves et al. (2011), a pathologically unproven dataset from ADNI of 97 participants was used, of which 41 were labelled as healthy controls and 56 were labelled as AD patients by expert physicians. Comparisons were made with other techniques such as PCA-SVM and GMM-SVM; the output revealed a classification accuracy of 94.87% with 91.07% sensitivity and 100% specificity.

Limitations

The most common problems among the existing models are the input size, attributes and validation. It is easier to achieve higher accuracies with smaller datasets, but such methods cannot represent a larger population. It has been noted that a small sample size is prone to overtraining, while a large data size can have several effects on robustness, accuracy and reproducibility. It is impressive that 96.6% accuracy is attained, but the unproven data used as input and the given size of the data cast some doubt on the robustness of the model.

Proposed Model

The proposed model consists of a three-fold approach:
For effective classification of the Alzheimer's data, the first step is pre-processing. Here, the pathologically proven dataset is processed to avoid class imbalance and then converted to a readable data type. Deep learning algorithms work best when the number of instances of one class is almost equal to the number of instances of the other class; class imbalance can severely damage the classification result. The second step is attribute selection, which involves searching through all possible combinations of attributes in the data to find which subset works best for prediction and classification. For any classification task, it can lead to an increase in accuracy or a reduction in computational cost. The third step is classification using a convolutional neural network with minimum support and minimum confidence. The classification is validated using 10-fold cross-validation, i.e., the data is divided into 10 parts; one part is used as the test set and the remaining 9 as training data, and the process is repeated 10 times to validate the results.
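The class-balancing step of the pre-processing stage can be sketched as follows. This is a minimal illustration using random undersampling of the majority class; the toy arrays and the 56/41 split (mirroring the AD/control counts of the ADNI subset mentioned above) are stand-ins, not the actual ADNI data pipeline.

```python
import numpy as np

def balance_by_undersampling(X, y, seed=0):
    """Randomly undersample the majority class so that both classes
    (e.g. AD vs. healthy control) contribute equally many samples."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    # For each class, keep a random subset of size n_min.
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]

# Hypothetical toy data: 56 "AD" (label 1) and 41 "control" (label 0).
X = np.arange(97 * 4, dtype=float).reshape(97, 4)
y = np.array([1] * 56 + [0] * 41)
Xb, yb = balance_by_undersampling(X, y)
# After balancing, each class contributes 41 samples (82 in total).
```

Oversampling the minority class (or class-weighted loss) is an equally valid choice when discarding scans is too costly; undersampling is shown here only because it is the simplest to verify.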
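The attribute-selection step can likewise be illustrated with a simple filter method. Exhaustively searching all attribute subsets is exponential in the number of attributes, so a variance-based ranking is shown below as a cheap stand-in; the function name and the three-column toy matrix are illustrative assumptions, not part of the original model.

```python
import numpy as np

def select_top_variance(X, k):
    """Keep the k attributes with the highest variance across subjects;
    a filter-style approximation of attribute-subset search."""
    variances = X.var(axis=0)
    top = np.argsort(variances)[::-1][:k]
    return np.sort(top)

# Toy matrix: a constant attribute carries no information,
# while the two varying attributes should be retained.
X = np.column_stack([
    np.zeros(10),           # column 0: constant, variance 0
    np.arange(10.0),        # column 1: variance 8.25
    np.arange(10.0) * 3.0,  # column 2: variance 74.25
])
idx = select_top_variance(X, 2)
# → selects columns 1 and 2
```

Wrapper methods (evaluating subsets with the classifier itself) generally find better subsets than a variance filter, at a much higher computational cost, which is the accuracy-versus-cost trade-off noted above.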
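The 10-fold cross-validation scheme described above can be sketched as a plain index-partitioning loop. This is a minimal sketch of the validation procedure only (the CNN training call is omitted); `n_samples=100` is an arbitrary illustrative size.

```python
import numpy as np

def ten_fold_splits(n_samples, seed=0):
    """Shuffle the sample indices and divide them into 10 roughly
    equal folds. Each fold serves once as the test set while the
    remaining nine folds form the training set."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, 10)
    for k in range(10):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
        yield train_idx, test_idx

# Every sample appears in exactly one test fold across the 10 rounds,
# so each prediction is made on data unseen during that round's training.
seen = []
for train_idx, test_idx in ten_fold_splits(100):
    assert len(train_idx) + len(test_idx) == 100
    seen.extend(test_idx.tolist())
```

In practice a stratified variant (preserving the AD/control ratio within each fold) is preferable for medical data, since it keeps every test fold representative of both classes.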