Time series modelling and forecasting for predicting Covid19 Case Load using LSTM


Sellam V
Mohit Gorakhpuriya
Avani Mishra
Prince Kevadiya

Abstract

The outbreak of the novel coronavirus has affected the entire globe and caused a large number of deaths. It remains a severe warning to public health and will be marked as one of the most extensive pandemics in history. To validate and analyse the approach, data was taken from a COVID-19 dataset containing daily tallies of confirmed, recovered, and death cases; it also includes additional information on patients tested in various states, with outcomes separated into confirmed and negative cases. Timely access to this data allows infected persons to receive proper treatment and prompt quarantine. The proposed paper utilizes Long Short-Term Memory (LSTM) networks for sequential prediction of the data; such networks are effective tools for short-term time-series forecasting of confirmed COVID-19 cases. The LSTM is a gated memory unit designed to mitigate the vanishing-gradient problem that limits the effectiveness of a basic Recurrent Neural Network (RNN). The results demonstrate that, among the activation functions evaluated, the LSTM network using the exponential linear unit (ELU) gave the best performance for forecasting the total number of COVID-19 cases. With timely observations, the state of the coronavirus can be effectively monitored and proper treatment assigned to those infected.
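The abstract's core idea — an LSTM cell whose cell activation is swapped from the usual tanh to an exponential linear unit (ELU), fed a sliding window of daily case counts — can be sketched in plain NumPy. This is a minimal illustrative forward pass, not the authors' implementation; the weight shapes, toy series, and scaling choices below are assumptions for demonstration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elu(x, alpha=1.0):
    # Exponential linear unit: identity for x > 0, alpha*(exp(x) - 1) otherwise.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step with ELU in place of the usual tanh.

    W (4n, d), U (4n, n), b (4n,) hold the stacked parameters for the
    input, forget, candidate, and output pre-activations.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # stacked pre-activations, shape (4n,)
    i = sigmoid(z[:n])                # input gate
    f = sigmoid(z[n:2 * n])           # forget gate
    g = elu(z[2 * n:3 * n])           # candidate cell state (ELU, per the paper)
    o = sigmoid(z[3 * n:])            # output gate
    c = f * c_prev + i * g            # new cell state
    h = o * elu(c)                    # new hidden state
    return h, c

# Sliding-window pass over a hypothetical daily-case series (illustrative values).
rng = np.random.default_rng(0)
n_in, n_hid = 1, 8
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

series = np.array([10.0, 14.0, 21.0, 30.0, 44.0, 60.0])
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in series / series.max():     # scale counts into [0, 1] before feeding
    h, c = lstm_step(np.array([x_t]), h, c, W, U, b)

# h now summarises the window; a dense output layer would map it to the
# next day's predicted case count.
```

In a full model, this cell would be trained with backpropagation through time (e.g. via Keras's `LSTM` layer with `activation="elu"`), and the final hidden state passed through a dense layer to produce the one-step-ahead forecast.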

Article Details

How to Cite
[1]
S. V, M. Gorakhpuriya, A. Mishra, and P. Kevadiya, “Time series modelling and forecasting for predicting Covid19 Case Load using LSTM”, Int. J. Data. Science., vol. 2, no. 1, pp. 56-62, Sep. 2021.
Section
Articles
