The Performance of Artificial Neural Network Using Heterogeneous Transfer Functions


Tayo P. Ogundunmade
Adedayo A. Adepoju

Abstract

Neural networks are important models across computer vision, natural language processing, speech and image recognition, aircraft safety and many other domains. They come in a variety of architectures, chief among them the Multi-Layer Perceptron (MLP), the most commonly used type of Artificial Neural Network. The MLP has been found to achieve good model precision with Homogeneous Transfer/Activation Functions (HTFs), especially on large data sets. Based on preliminary investigations ranking transfer functions by error variance (Udomboso, 2014), three HTFs are considered to predict better than the others: the Hyperbolic Tangent Transfer Function (TANH), the Hyperbolic Tangent Sigmoid Transfer Function (TANSIG) and the Symmetric Saturating Linear Transfer Function (SSLTF). In this work, the performance of two Heterogeneous Transfer Functions (HETFs), obtained by convolving these three best HTFs, was compared with that of the three HTFs listed above. The numbers of hidden neurons used were 2, 5 and 10, and the sample sizes were 50, 100, 200, 500 and 1000. The data were divided into training sets of 90%, 80% and 70% respectively.
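The three HTFs named above, together with a heterogeneous combination of two of them, can be sketched in Python. Note the HETF below is only an illustrative stand-in: the paper derives its HETFs by convolution of the underlying HTFs, whereas this sketch uses a simple convex combination; all function names are hypothetical.

```python
import numpy as np

def tanh_tf(x):
    # Hyperbolic Tangent Transfer Function (TANH)
    return np.tanh(x)

def tansig(x):
    # Hyperbolic Tangent Sigmoid Transfer Function (TANSIG),
    # in the familiar 2/(1+exp(-2x)) - 1 formulation
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def satlins(x):
    # Symmetric Saturating Linear Transfer Function (SSLTF):
    # linear on [-1, 1], saturating at -1 and +1 outside it
    return np.clip(x, -1.0, 1.0)

def hetf(x, f, g, w=0.5):
    # Illustrative heterogeneous transfer function: a convex
    # combination of two HTFs (a stand-in for the convolution
    # construction used in the paper).
    return w * f(x) + (1.0 - w) * g(x)
```

A hidden layer of an MLP would then apply one of these functions elementwise to its pre-activations, e.g. `satlins(W @ x + b)`.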


The results showed that the HETFs performed better in forecasting, with Mean Square Error (MSE), Mean Absolute Error (MAE) and Test Error as the prediction criteria.


How to Cite
T. P. Ogundunmade and A. A. Adepoju, “The Performance of Artificial Neural Network Using Heterogeneous Transfer Functions”, Int. J. Data. Science., vol. 2, no. 2, pp. 92-103, Dec. 2021.

References

Aitchison, L. (2020) A statistical theory of cold posteriors in deep neural networks. arXiv preprint arXiv:2008.05912.

Aitchison, L., Yang, A. X., and Ober, S. W. (2020) Deep kernel processes. arXiv preprint arXiv:2010.01590.

Bayes, T. (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:370–418. By the late Rev. Mr. Bayes, FRS; communicated by Mr. Price, in a letter to John Canton, AMFRS.

Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. (2015) Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.

Udomboso, C. G. (2013) On some properties of a heterogeneous transfer function involving symmetric saturated linear (SATLINS) with hyperbolic tangent (TANH) transfer functions. Journal of Modern Applied Statistical Methods, 12(2), Article 26.

Udomboso, C. G. (2014) On the level of precision of a heterogeneous statistical neural network model. PhD thesis, Department of Statistics, University of Ibadan, Nigeria.

Heek, J. and Kalchbrenner, N. (2019) Bayesian inference for large scale image classification. arXiv preprint arXiv:1908.03491.

Gauss, C. F. (1809) Theoria motvs corporvm coelestivm in sectionibvs conicis solem ambientivm. Sumtibus F. Perthes et I. H. Besser.

Garriga-Alonso, A. and Fortuin, V. (2021) Exact Langevin dynamics with stochastic gradients. arXiv preprint arXiv:2102.01691.

Garriga-Alonso, A. and van der Wilk, M. (2021) Correlated weights in infinite limits of deep convolutional neural networks. arXiv preprint arXiv:2101.04097.

Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013) Bayesian data analysis. CRC Press.

Nalisnick, E. T. (2018) On priors for Bayesian neural networks. PhD thesis, UC Irvine.

Neal, R. M. (1996) Bayesian learning for neural networks, volume 118. Springer.

Student (1908) The probable error of a mean. Biometrika, pp. 1–25.

Wenzel, F., Roth, K., Veeling, B. S., Świątkowski, J., Tran, L., Mandt, S., Snoek, J., Salimans, T., Jenatton, R., and Nowozin, S. (2020) How good is the Bayes posterior in deep neural networks really? In International Conference on Machine Learning.