Influence of Training Algorithms of ANNs on the Prediction of Reservoir Water Level of a Hydroelectric Station

Authors

  • Edison Chafla Ministerio del Interior Ecuador
  • Gabriel Asqui Santillán Transelectric CELEC E.P.
  • Jorge Paucar Escuela Superior Politécnica de Chimborazo
  • Diana E. Olmedo Vizueta Escuela Superior Politécnica de Chimborazo

DOI:

https://doi.org/10.47187/perspectivas.vol1iss1.pp16-22.2019

Keywords:

Artificial Intelligence, Artificial Neural Networks, Predictor, Keras, TensorFlow

Abstract

This paper reports the results of an analysis of the influence of training algorithms for artificial neural networks (ANNs) on the prediction error of the reservoir water level of a hydroelectric station. The algorithms studied are those provided by the Keras library running on the TensorFlow backend. The data for this study are the historical records (2005-2016) of reservoir level, streamflow, and active power from an Ecuadorian hydroelectric plant, divided into training, validation, and test sets. The hardware platform was an Nvidia 1050 Ti graphics processing unit (GPU), which made it possible to exploit TensorFlow's highly parallel computing capability. Seven algorithms were evaluated. The Tukey test revealed that the Nadam algorithm showed the lowest significant difference with respect to its counterparts, making it the most efficient. The resulting ANN plant model achieved effective prediction horizons of up to 48 hours. These results allow the hydroelectric station's energy production planning to be optimized through accurate prediction of the water resources available for the desired production quotas.
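The seven optimizers are not named in the abstract, but judging from the cited references they are presumably the ones bundled with Keras at the time: SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, and Nadam. The sketch below shows, in broad strokes, how such a comparison can be set up with the Keras API on the TensorFlow backend. It is not the authors' code: the synthetic series, the 24-sample input window, the network size, and the training hyperparameters are illustrative assumptions, whereas the study itself used the plant's historical level, streamflow, and power records split into training, validation, and test sets.

# Minimal sketch (not the authors' code): comparing Keras training algorithms
# on a next-step prediction task. The synthetic series and all hyperparameters
# below are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in for the historical reservoir-level series (the real study also
# used streamflow and active power as inputs).
rng = np.random.default_rng(0)
t = np.arange(4000)
series = np.sin(2 * np.pi * t / 365.0) + 0.1 * rng.standard_normal(t.size)

def make_windows(x, lag=24):
    # Build (samples, lag) input windows and the next-step target.
    X = np.stack([x[i:i + lag] for i in range(x.size - lag)])
    return X.astype("float32"), x[lag:].astype("float32")

X, y = make_windows(series)
n_tr, n_va = int(0.7 * len(X)), int(0.15 * len(X))
X_tr, y_tr = X[:n_tr], y[:n_tr]
X_va, y_va = X[n_tr:n_tr + n_va], y[n_tr:n_tr + n_va]
X_te, y_te = X[n_tr + n_va:], y[n_tr + n_va:]

# The seven optimizers shipped with Keras, referenced by their string names.
optimizers = ["sgd", "rmsprop", "adagrad", "adadelta", "adam", "adamax", "nadam"]
results = {}
for name in optimizers:
    model = keras.Sequential([
        keras.Input(shape=(X.shape[1],)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),
    ])
    model.compile(optimizer=name, loss="mse")  # same loss for every optimizer
    model.fit(X_tr, y_tr, validation_data=(X_va, y_va),
              epochs=20, batch_size=64, verbose=0)
    results[name] = model.evaluate(X_te, y_te, verbose=0)  # test MSE

for name, mse in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:>8s}  test MSE = {mse:.4f}")

Once a sample of errors per optimizer is available (for example from repeated training runs), the Tukey comparison reported in the abstract can be reproduced with a standard implementation such as statsmodels' pairwise_tukeyhsd, which tests all pairwise differences of mean error between optimizers.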


References

BANCO MUNDIAL, “Energía,” www.bancomundial.org, 2017.

L. Galarza and Tecpetrol, “Estadística Anual y Multianual del Sector Eléctrico Ecuatoriano,” p. 208, 2017.

Instituto Nacional de Estadística y Censos, “Proyecciones Poblacionales,” www.ecuadorencifras.gob.ec, 2017.

REN21, “Renewables 2017: Global Status Report,” Paris, 2017.

C. A. Villacís Laínez, Y. F. Suarez Nuñez, and X. M. Güillín Llanos, “Análisis de la Responsabilidad Social en el Ecuador,” Rev. Publicando, vol. 3, no. 8, pp. 452–466, 2016.

Consejo Nacional de Electricidad, “Plan Maestro de Electrificación 2013-2022,” Plan Maest. Electrif. 2013-2022, vol. 1, p. 116, 2013.

J. Hernández-Ambato, G. Asqui-Santillán, A. Arellano, and C. Cunalata, “Multistep-ahead Streamflow and Reservoir Level Prediction Using ANNs for Production Planning in Hydroelectric Stations,” in 2017 16th IEEE International Conference on Machine Learning and Applications, 2017, pp. 479–484.

G. E. Asqui Santillán, “Predicción del nivel de agua del embalse, basado en redes neuronales, para la mejora de la planificación de producción de energía en la Central Hidroeléctrica Agoyán.,” 2017.

J. R. Azagra, “Control robusto cuantitativo de sistemas con múltiples entradas de actuación y una salida objeto de control,” 2017.

P. Isasi Viñuela and I. M. Galván León, Redes de Neuronas Artificiales: Un Enfoque Práctico. Pearson Educación, 2004.

G. Asqui Santillán, D. Olmedo Vizueta, and J. Hernández Ambato, “Modelamiento basado en Redes Neuronales Artificiales para la Predicción de Recursos Hídricos en una Central Hidroeléctrica,” in V Congreso Internacional de la Ciencia, Tecnología, Emprendimiento E Innovación, 2018, pp. 532–545.

J. M. Zaldívar, E. Gutiérrez, I. M. Galván, F. Strozzi, and A. Tomasin, “Forecasting high waters at Venice Lagoon using chaotic time series analysis and nonlinear neural networks,” J. Hydroinformatics, vol. 2, no. 1, pp. 61–84, 2000.

J. L. Correa-Figueroa, E. Morales-Sánchez, J. A. Huerta-Ruelas, J. J. González-Barbosa, and C. R. Cárdenas-Pérez, “Sistema de adquisición de señales SEMG para la detección de fatiga muscular,” Rev. Mex. Ing. Biomed., vol. 37, no. 1, pp. 17–27, 2016.

F. Chollet, “Optimizers - Keras Documentation,” 2015.

F. Chollet, Deep Learning with Python. Manning Publications, 2017.

T. Dozat, “Incorporating Nesterov Momentum into Adam,” in ICLR Workshop, 2016.

A. T. Hadgu, A. Nigam, and E. Diaz-Aviles, “Large-scale learning with AdaGrad on Spark,” in Big Data (Big Data), 2015 IEEE International Conference on, 2015, pp. 2828–2830.

S. Ruder, “An overview of gradient descent optimization algorithms,” pp. 1–14, 2016.

M. D. Zeiler, “ADADELTA: An Adaptive Learning Rate Method,” 2012.

M. C. Mukkamala and M. Hein, “Variants of RMSProp and Adagrad with Logarithmic Regret Bounds,” CoRR, vol. abs/1706.05507, 2017.

D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” CoRR, vol. abs/1412.6980, 2014.

X. Zeng, Z. Zhang, and D. Wang, “AdaMax Online Training for Speech Recognition,” 2016.

S. J. Reddi, S. Kale, and S. Kumar, “On the Convergence of Adam and Beyond,” in ICLR 2018, 2018, pp. 1–23.

E. Morales and J. González, “Aprendizaje Computacional,” Inst. Nac. Astrofísica, Óptica y Electrónica, pp. 1–232, 2011.

B. Cecila, “Facultad de Ciencias Forestales,” 2002.

J. A. Villalpando-García, A. Castillo-Morales, M. E. Ramírez-Guzmán, G. Rendón-Sánchez, and M. U. Larqué-Saavedra, “Comparación de los procedimientos de Tukey, Duncan, Dunnett, Hsu y Bechhofer para selección de medias,” Agrociencia, vol. 35, no. 1, pp. 79–86, 2001.

Published

2019-01-08

How to Cite

[1]
E. Chafla, G. Asqui Santillán, J. Paucar, and D. E. Olmedo Vizueta, “Influence of Training Algorithms of ANNs on the Prediction of Reservoir Water Level of a Hydroelectric Station”, Perspectivas, vol. 1, no. 1, pp. 16–22, Jan. 2019.

Issue

Section

Peer-reviewed articles