The demand for computing power continues to grow year by year, while sequential processing techniques are becoming insufficient for many complex problems. As a result, large-scale computing frameworks, such as cloud computing and grid computing, have become necessary to fulfill these computing needs. A large-scale computing framework usually comprises distributed and heterogeneous computing resources.
The distributed and heterogeneous properties complicate system management and may impose additional rules and restrictions on the use of the environment. These rules and restrictions are likely to confuse application developers and make application optimization difficult to achieve. Most systems in the literature, such as Condor, Globus Toolkit, and IOS, aim to optimize the throughput of the entire system. Such systems largely ignore individual application performance, and thus may not be fair to all users. Furthermore, they cannot handle environmental changes seamlessly.
This research proposes an alternative computing model, inspired by cyber organisms, that aims to make applications smarter and more adaptive to environmental changes. In this model, the computing environment is not responsible for application optimization and reconfiguration. Instead, it provides the necessary information to each individual application through a standard interface. Each application consists of several processes that cooperate on given tasks. These sub-components can communicate with each other, sense changes in the environment, and react accordingly. Based on the proposed model, we implement a prototype using the Message Passing Interface (MPI).
The major contribution of the proposed system is that its applications are self-manageable and self-provisioning. In other words, when the environment changes, an application can automatically add new processes on suitable computing hosts, delete old processes from unsuitable hosts, and move data to balance the workload among hosts. Therefore, different applications can be optimized for different purposes. The experimental results confirm that our system can effectively adapt to environmental changes and automatically improve application performance.
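To make the self-provisioning cycle concrete, the following is a minimal conceptual sketch of the sense-decide-reconfigure loop described above. All names here (`EnvironmentMonitor`, `Application`, `MIN_CAPACITY`) are illustrative assumptions, not the paper's actual API; the real prototype runs as MPI processes on distributed hosts rather than as this single-process simulation.

```python
class EnvironmentMonitor:
    """Stands in for the standard interface through which the
    environment publishes information about available hosts.
    (Hypothetical name; not the paper's actual interface.)"""
    def __init__(self, hosts):
        self.hosts = dict(hosts)  # host name -> available capacity

    def snapshot(self):
        return dict(self.hosts)


class Application:
    """A self-provisioning application: it keeps one worker per
    sufficiently capable host and rebalances when hosts change."""
    MIN_CAPACITY = 1.0  # assumed threshold for a "suitable" host

    def __init__(self, monitor):
        self.monitor = monitor
        self.workers = {}  # host -> assigned workload share

    def adapt(self):
        env = self.monitor.snapshot()
        usable = {h: c for h, c in env.items() if c >= self.MIN_CAPACITY}
        # Delete workers on hosts that vanished or became too weak.
        for host in list(self.workers):
            if host not in usable:
                del self.workers[host]
        # Add workers on newly available, sufficiently capable hosts.
        for host in usable:
            self.workers.setdefault(host, 0.0)
        # Move work: rebalance shares proportionally to host capacity.
        total = sum(usable[h] for h in self.workers)
        for host in self.workers:
            self.workers[host] = usable[host] / total


monitor = EnvironmentMonitor({"hostA": 2.0, "hostB": 2.0})
app = Application(monitor)
app.adapt()
print(sorted(app.workers))             # ['hostA', 'hostB']

monitor.hosts["hostB"] = 0.5           # hostB degrades below threshold
monitor.hosts["hostC"] = 4.0           # a new, faster host joins
app.adapt()
print(sorted(app.workers))             # ['hostA', 'hostC']
print(round(app.workers["hostC"], 2))  # 0.67
```

The second `adapt()` call illustrates the three reactions the abstract lists: the process on the degraded host is removed, a process is added on the new host, and the workload shares are rebalanced by capacity.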
M. Weiser, “Some computer science issues in ubiquitous computing,” Communications of the ACM, vol. 36, 1993, pp. 75–84.
 u-Taiwan - http://www.utaiwan.nat.gov.tw/index.php.
 I. Foster, “The anatomy of the grid: Enabling scalable virtual organizations,” Euro-Par 2001 Parallel Processing, pp. 1–4.
 M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R.H. Katz, A. Konwinski, G. Lee, D.A. Patterson, A. Rabkin, I. Stoica, and others, “Above the clouds: A berkeley view of cloud computing,” EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2009-28, 2009.
 G. Agha, “Computing in pervasive cyberspace,” Communications of the ACM, vol. 51, 2008, pp. 68–70.
 G.A. Agha, “Actors: a model of concurrent computation in distributed systems,” AITR-844, 1985.
K. Schmidt-Nielsen, “Animal Physiology: Adaptation and Environment,” Cambridge: Cambridge Univ. Press, 1997.
A.S. Tanenbaum and M. van Steen, “Distributed Systems: Principles and Paradigms,” Prentice Hall, 2002.
 W.J. Wang, K. El Maghraoui, J. Cummings, J. Napolitano, B.K. Szymanski, and C.A. Varela, “A middleware framework for maximum likelihood evaluation over dynamic grids,” Second IEEE International Conference on e-Science and Grid Computing, 2006. e-Science'06, 2006, pp. 105–105.
T. Desell, N. Cole, M. Magdon-Ismail, H. Newberg, B. Szymanski, and C. Varela, “Distributed and generic maximum likelihood evaluation,” 3rd IEEE International Conference on e-Science and Grid Computing (eScience 2007), Bangalore, India, 2007, 8 pp.
 C. Wu, “A Dynamic Load-Balancing Maximum Likelihood Evaluation Framework,” 2009.
 Condor Project Homepage - http://www.cs.wisc.edu/condor/.
 D. Thain, T. Tannenbaum, and M. Livny, “Distributed computing in practice: The Condor experience,” Concurrency and Computation Practice and Experience, vol. 17, 2005, pp. 323–356.
 The Globus Alliance - http://www.globus.org/.
 C. Varela and G. Agha, “Programming dynamically reconfigurable open systems with SALSA,” ACM SIGPLAN Notices, vol. 36, 2001, pp. 34.
 SALSA Programming Language - http://wcl.cs.rpi.edu/salsa/.
 Welcome to Apache Hadoop! - http://hadoop.apache.org/.
J. Dean and S. Ghemawat, “MapReduce: Simplified data processing on large clusters,” Communications of the ACM, vol. 51, 2008, pp. 107–114.
 D. Borthakur, “The hadoop distributed file system: Architecture and design,” Hadoop Project Website, 2007.
 S. Ghemawat, H. Gobioff, and S.T. Leung, “The Google file system,” ACM SIGOPS Operating Systems Review, vol. 37, 2003, pp. 43.
A. Mahanti and D.L. Eager, “Adaptive data parallel computing on workstation clusters,” Journal of Parallel and Distributed Computing, vol. 64, 2004, pp. 1241–1255.
 M. Kaddoura, S. Ranka, and A. Wang, “Array decompositions for nonuniform computational environments,” Journal of Parallel and Distributed Computing, vol. 36, 1996, pp. 91–105.
 K.E. Maghraoui, T.J. Desell, B.K. Szymanski, and C.A. Varela, “The internet operating system: Middleware for adaptive distributed computing,” International Journal of High Performance Computing Applications, vol. 20, 2006, pp. 467.
 S. Shen, “Runtime Reconfiguration Using I/O and CPU Profiler over Dynamic P2P Systems,” 2009.
“MPI: A Message-Passing Interface Standard Version 2.1,” Message Passing Interface Forum, Sep. 2008.
 MPICH2 : High-performance and Widely Portable MPI - http://www.mcs.anl.gov/research/projects/mpich2/.
 Open MPI: Open Source High Performance Computing - http://www.open-mpi.org/.
 LAM/MPI Parallel Computing - http://www.lam-mpi.org/.
R. Bündgen, M. Göbel, and W. Küchlin, “A master-slave approach to parallel term rewriting on a hierarchical multiprocessor,” Design and Implementation of Symbolic Computation Systems, 1996, pp. 183–194.