R.N. Yadav, P.K. Kalra, and J. John
[1] W.S. McCulloch & W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics, 5, 1943, 115–133. doi:10.1007/BF02478259
[2] C.L. Giles & T. Maxwell, Learning, invariance and generalization in higher-order neural networks, Applied Optics, 26, 1987, 4972–4978.
[3] C. Koch & T. Poggio, Single neuron computation (San Diego: Academic Press, 1992).
[4] M. Schmitt, On the complexity of computing and learning with multiplicative neural networks, Neural Computation, 14, 2001, 241–301. doi:10.1162/08997660252741121
[5] T. Poggio, On optimal nonlinear associative recall, Biological Cybernetics, 19, 1975, 201–209.
[6] T.A. Plate, Randomly connected Sigma-Pi neurons can form associator networks, Network: Computation in Neural Systems, 11, 2000, 321–332. doi:10.1088/0954-898X/11/4/305
[7] M. Guler & E. Sahin, A new higher order binary-input neural unit: Learning and generalizing effectively via using minimal number of monomials, Proc. 3rd Turkish Symp. on Artificial Intelligence and Neural Networks, Ankara, Turkey, 1994, 51–60.
[8] M. Sinha, K. Kumar, & P.K. Kalra, Some new neural network architectures with improved learning schemes, Soft Computing, 4 (4), 2000, 214–223. doi:10.1007/s005000000057
[9] R.N. Yadav, V. Singh, & P.K. Kalra, Classification using single neuron, Proc. 1st IEEE Int. Conf. on Industrial Informatics (INDIN'03), Banff, Canada, August 21–24, 2003, 124–129. doi:10.1109/INDIN.2003.1300258
[10] D.K. Chaturvedi, M. Mohan, R.K. Singh, & P.K. Kalra, Improved generalized neuron model for short-term load forecasting, Soft Computing, 8 (5), 2004, 370–379.
[11] E.M. Iyoda, H. Nobuhara, & K. Hirota, Translated multiplicative neuron: An extended multiplicative neuron that can translate decision surfaces, Journal of Advanced Computational Intelligence and Intelligent Informatics, 8 (5), 2004, 460–468.
[12] E.M. Iyoda, H. Nobuhara, & K. Hirota, A solution for N-bit parity problem using a single multiplicative neuron, Neural Processing Letters, 18, 2003, 213–218. doi:10.1023/B:NEPL.0000011147.74207.8c
[13] M. Anthony & P.L. Bartlett, Neural network learning: Theoretical foundations (Cambridge: Cambridge University Press, 2001).
[14] V. Vapnik, Statistical learning theory (New York: John Wiley and Sons, 1998).
[15] N. Kumar, R.N. Yadav, P.K. Kalra, & D.H. Ballard, Constructing learning machines using a multiplicative single neuron, submitted to Neural Computation, 2005.
[16] S. Haykin, Neural networks: A comprehensive foundation (Singapore: Pearson, 2003).
[17] K. Hornik, M. Stinchcombe, & H. White, Multilayer feedforward networks are universal approximators, Neural Networks, 2, 1989, 359–366. doi:10.1016/0893-6080(89)90020-8
[18] C. Cervellera & M. Muselli, Deterministic design for neural network learning: An approach based on discrepancy, IEEE Trans. on Neural Networks, 15 (3), 2004, 533–544. doi:10.1109/TNN.2004.824413
[19] M. Mackey & L. Glass, Oscillation and chaos in physiological control systems, Science, 197, 1977, 287–289. doi:10.1126/science.267326
[20] G.E.P. Box, G.M. Jenkins, & G.C. Reinsel, Time series analysis: Forecasting and control (Englewood Cliffs, NJ: Prentice-Hall, 1994).