Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2023.0140478
Article Published in International Journal of Advanced Computer Science and Applications (IJACSA), Volume 14, Issue 4, 2023.
Abstract: In many facial expression recognition models, overfitting must be prevented by ensuring that no units (neurons) become overly dependent on one another. Dropout regularization addresses this by randomly ignoring a few nodes while the remaining neurons are processed. Dropout therefore helps control overfitting and improves prediction accuracy, and it can be applied at different layers of the neural network, such as the visible, hidden, and convolutional layers. Neural networks contain layers such as dense (fully connected), convolutional, and recurrent (LSTM, long short-term memory) layers, and a dropout layer can be embedded alongside any of them. The model drops units at random from the network, meaning it temporarily removes their connections to other units. Many researchers consider dropout regularization one of the most powerful techniques in machine learning and deep learning. Randomly dropping a few units and processing the remaining ones can be viewed in two phases: the forward and backward passes. Once the model drops a few units at random and processes the remaining ones, the weights of the active units change during training, while the weights of the dropped units are not updated. Dropping some units while others step in is an effective process, because the units that step in represent the network for that pass. The stepped-in units are therefore less likely to co-adapt, and the model gives better results with higher accuracy.
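The abstract does not specify a framework, so the following is a minimal illustrative sketch (not taken from the paper) using Keras/TensorFlow that embeds dropout after convolutional, recurrent (LSTM), and dense layers. The layer sizes, dropout rates, 48x48 grayscale input shape, and seven-class output are assumptions made only for illustration.

```python
# Illustrative sketch: dropout placed after different layer types.
# All hyperparameters here are assumed, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolutional block with dropout on its feature maps
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),          # during training, zeroes ~25% of activations at random

    # Recurrent (LSTM) block: flatten the feature maps into a sequence first
    layers.Reshape((23 * 23, 32)),
    layers.LSTM(64, dropout=0.2),  # dropout applied to the LSTM inputs

    # Fully connected (dense) block with heavier dropout before the output layer
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),  # e.g. 7 facial-expression classes (assumed)
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In Keras, dropout is active only during training; at inference the full network is used, so units that were dropped (and whose weights were not updated on those passes) still participate in prediction.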
B. H. Pansambal and A. B. Nandgaokar, "Integrating Dropout Regularization Technique at Different Layers to Improve the Performance of Neural Networks," International Journal of Advanced Computer Science and Applications (IJACSA), 14(4), 2023. http://dx.doi.org/10.14569/IJACSA.2023.0140478
@article{Pansambal2023,
title = {Integrating Dropout Regularization Technique at Different Layers to Improve the Performance of Neural Networks},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2023.0140478},
url = {http://dx.doi.org/10.14569/IJACSA.2023.0140478},
year = {2023},
publisher = {The Science and Information Organization},
volume = {14},
number = {4},
author = {B. H. Pansambal and A. B. Nandgaokar}
}