I have been learning neural networks for a while now, predominantly for natural language processing, and I have been using Kaggle notebooks since I am a beginner. Recently I was working on a Tamil News Classification dataset I found on Kaggle. The reference model uses an LSTM (RNN) network to classify news articles into the appropriate news groups, and the notebook reports an accuracy of around 90%+ (notebook for reference: https://www.kaggle.com/sagorsemantics/tamil-nlp-lstm).

When I tried to build my own LSTM model, my accuracy was around 34%, despite using the same layers, activation functions, optimizer, hyperparameters, etc., which I found strange. After asking around, I was advised to use hyperparameter tuning to achieve higher accuracy, and I did so (my code here: https://github.com/Vijeeguna/Tamil-News-Article-Classification/blob/main/tamil_news_classification_LSTM_RNN_CNN.py). But my accuracy is still stuck at 34%. I have played around with the layers, dropout, etc., but the accuracy won't budge. A rough sketch of the kind of model I am building is shown below.
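For reference, this is roughly the shape of the model I mean (not my exact notebook code; the vocabulary size, sequence length, and number of classes are placeholder values):

```python
# Minimal sketch of the setup described above, NOT the exact notebook code.
# VOCAB_SIZE, MAX_LEN, and NUM_CLASSES are placeholder/assumed values.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout

VOCAB_SIZE = 20000   # placeholder: tokenizer vocabulary size
MAX_LEN = 200        # placeholder: padded sequence length
NUM_CLASSES = 6      # placeholder: number of news groups

model = Sequential([
    Embedding(input_dim=VOCAB_SIZE, output_dim=128, input_length=MAX_LEN),
    LSTM(128, dropout=0.2),                    # single LSTM layer over the token embeddings
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(NUM_CLASSES, activation='softmax'),  # one probability per news group
])
model.compile(loss='sparse_categorical_crossentropy',  # integer class labels
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
```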
I am at a loss. I don't understand how or why this is happening. Any nudge in the right direction would be most welcome.
Code on Colab with the accuracy I got: https://colab.research.google.com/drive/1P7H6J98GGizrGpMXl8QtTAzWsdgIvGAw?usp=sharing
[Also, I am a true novice. I have been learning almost exclusively through Kaggle notebooks. Please be patient and dumb things down for me.]
Question from: https://stackoverflow.com/questions/65647874/natural-language-processing-lstm-neural-network-accuracy-too-low