Muhammad Aurangzeb, Frank L. Lewis, and Manfred Huber
[1] H. Abelson and A.A. diSessa, Turtle geometry (Cambridge, MA: MIT Press, 1980).
[2] E.P. Silva Jr., M.A.P. Idiart, M. Trevisan, and P.M. Engel, Autonomous learning architecture for environmental mapping, Journal of Intelligent and Robotic Systems, 39, 2004, 243–263.
[3] N. Roy and G. Dudek, Collaborative robot exploration and rendezvous: algorithms, performance bounds and observations, Autonomous Robots, 11, 2001, 117–136.
[4] L. Moreno, J.M. Armingol, S. Garrido, A. de la Escalera, and M.A. Salichs, A genetic algorithm for mobile robot localization using ultrasonic sensors, Journal of Intelligent and Robotic Systems, 34, 2002, 135–154.
[5] M. Jansen, M. Oelinger, K. Hoeksema, and U. Hoppe, An interactive maze scenario with physical robots and other smart devices, Proc. 2nd IEEE International Workshop on Wireless and Mobile Technologies in Education, Jung-Li, Taiwan, 2004.
[6] Z. Cai and Z. Peng, Cooperative co-evolutionary adaptive genetic algorithm in path planning of cooperative multi-mobile robot systems, Journal of Intelligent and Robotic Systems, 33, 2002, 61–71.
[7] Y. Kobayashi, Y. Wada, and T. Kiguchi, Knowledge representation and utilization for optimal route search, IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(3), 1986, 454–462.
[8] S.X. Yang and M. Meng, Neural network approaches to dynamic collision-free trajectory generation, IEEE Transactions on Systems, Man, and Cybernetics – Part B, Cybernetics, 31(3), 2001, 302–318.
[9] B.F. Goldiez, A.M. Ahmad, and P.A. Hancock, Effects of augmented reality display settings on human wayfinding performance, IEEE Transactions on Systems, Man, and Cybernetics – Part C, Applications and Reviews, 37(5), 2007, 839–845.
[10] M.A. Wiering and H. van Hasselt, Ensemble algorithms in reinforcement learning, IEEE Transactions on Systems, Man, and Cybernetics – Part B, Cybernetics, 38(4), 2008, 930–936.
[11] A.M. Whitbrook, U. Aickelin, and J.M. Garibaldi, Idiotypic immune networks in mobile-robot control, IEEE Transactions on Systems, Man, and Cybernetics – Part B, Cybernetics, 37(6), 2007, 1581–1598.
[12] J. Suzuki and Y. Yamamoto, Building an artificial immune network for decentralized policy negotiation in a communication end system, Proc. 4th World Conf. on SCI, Orlando, FL, 2000.
[13] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm intelligence: from natural to artificial systems (New York, NY: Oxford University Press, 1999).
[14] D. Karaboga and B. Akay, A survey: algorithms simulating bee swarm intelligence, Artificial Intelligence Review, 31(1–4), 2009, 61–85, DOI: 10.1007/s10462-009-9127-4.
[15] M. Dorigo, Optimization, learning and natural algorithms, Ph.D. Thesis, Politecnico di Milano, Italy, 1992.
[16] M. Dorigo, V. Maniezzo, and A. Colorni, The ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics – Part B, Cybernetics, 26, 1996, 29–42.
[17] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[18] D. Dasgupta, Z. Ji, and F. Gonzalez, Artificial immune system (AIS) research in the last five years, Proc. Congress on Evolutionary Computation, CEC ’03, Canberra, Australia, 2003.
[19] A. Kaveh and S. Talatahari, A novel heuristic optimization method: charged system search, Acta Mechanica, 2010, 267–289, DOI: 10.1007/s00707-009-0270-4.
[20] X.-S. Yang and S. Deb, Cuckoo search via Lévy flights, Proc. World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), IEEE Publications, Coimbatore, India, 2009, 210–214.
[21] X.-S. Yang and S. Deb, Engineering optimization by cuckoo search, International Journal of Mathematical Modelling and Numerical Optimisation, 1(4), 2010, 330–343.
[22] X.-S. Yang, Firefly algorithms for multimodal optimization, Stochastic Algorithms: Foundations and Applications, SAGA 2009, Lecture Notes in Computer Science, 5792, 2009, 169–178.
[23] S.M. Farahani, A.A. Abshouri, B. Nasiri, and M.R. Meybodi, Some hybrid models to improve firefly algorithm performance, International Journal of Artificial Intelligence, 8(S12), 2012, 97–117.
[24] E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, A gravitational search algorithm, Information Sciences, 179(13), 2009, 2232–2248.
[25] H.S. Hosseini, The intelligent water drops algorithm: a nature-inspired swarm-based optimization algorithm, International Journal of Bio-Inspired Computation, 1(1/2), 2009, 71–79.
[26] J. Kennedy and R. Eberhart, Particle swarm optimization, Proc. IEEE International Conf. on Neural Networks, IV, 1995, 1942–1948.
[27] C. Li and S. Yang, Fast multi-swarm optimization for dynamic optimization problems, Proc. Fourth International Conf. on Natural Computation, ICNC ’08, 2008.
[28] P. Rabanal, I. Rodríguez, and F. Rubio, Using river formation dynamics to design heuristic algorithms, Lecture Notes in Computer Science, 4618, 2007, 163–177.
[29] J.M. Bishop, Stochastic searching networks, Proc. 1st IEE Conf. on Artificial Neural Networks, London, 1989, 329–331.
[30] P.D. Beattie and J.M. Bishop, Self-localization in the “Senario” autonomous wheelchair, Journal of Intelligent and Robotic Systems, 22, 1998, 255–267.
[31] D. Jones, D.A. Harrison, and A.J. Davies, Experience outweighs intelligence: an investigation into the use of ant colony systems for maze solving, Proc. ACIS Fourth International Conf. on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD’03), 2003.
[32] Maze solving algorithm, http://en.wikipedia.org/wiki/Maze_solving_algorithm (accessed Feb. 6, 2013).
[33] N.S.V. Rao, S. Kareti, and W. Shi, Robot navigation in unknown terrains: introductory survey of non-heuristic algorithms, Report prepared by Oak Ridge National Laboratory, July 1993.
[34] W.D. Pullen, Maze classification, http://www.astrolog.org/labyrnth/algrithm.htm#solve, January 24, 2011 (accessed Feb. 6, 2013).
[35] R. Diestel, Graph theory, 4th ed. (Heidelberg, Dordrecht, London, New York, NY: Springer, 2010).
[36] R.S. Sutton and A.G. Barto, Reinforcement learning: an introduction (Cambridge, MA: MIT Press, 1998).
[37] L.P. Kaelbling, M.L. Littman, and A.W. Moore, Reinforcement learning: a survey, Journal of Artificial Intelligence Research, 4, 1996, 237–285.
[38] S.D. Poisson, Research on the probability of judgments in criminal and civil matters, Elibron Classics, 1838.
[39] IEEE, Guidelines for 64-bit global identifier, March 1997.
[40] S. Goss, S. Aron, J.-L. Deneubourg, and J.-M. Pasteels, Self-organized shortcuts in the Argentine ant, Naturwissenschaften, 76, 1989, 579–581.
[41] J.-L. Deneubourg, S. Aron, S. Goss, and J.-M. Pasteels, The self-organizing exploratory pattern of the Argentine ant, Journal of Insect Behavior, 3, 1990, 159.
[42] T. Stützle and H. Hoos, Improvements on the ant system: introducing the max-min ant system, in R.F. Albrecht, G.D. Smith, and N.C. Steele (eds.), Artificial neural networks and genetic algorithms (Wien, New York: Springer-Verlag, 1998), 245–249.
[43] M. Aurangzeb, Internal structure and dynamic decisions for coalitions on graphs, Doctoral Thesis, Department of Electrical Engineering, University of Texas at Arlington.
[44] J. Kubica, Toolbox to create a maze in MATLAB, http://www.ri.cmu.edu/, 2003 (accessed Nov. 2012).