dc.description.abstract | Cache servers have been widely deployed to save bandwidth for popular data and to speed up data retrieval. The placement of cache servers influences both the hit rate and the response time. Highly duplicated content among cache servers, and the efficiency degradation caused by improper user configurations, have become important issues.
Recently, the concept of the active network has been proposed, in which each node can execute simple code, according to the content of packets, to process the packets passing through it. The load on client-side caches is thus shared, and network traffic is reduced. Generally, the cache area of the cache server is located in the main memory of an active node. The cache area is divided into a code cache, which stores code temporarily, and a data cache, which stores the passed-through data. Each active node can therefore be treated as a cache server. Since the capacity of main memory is finite, the cache area has to be arranged appropriately: if data are cached too frequently, the cache area may be utilized inefficiently; if too seldom, the response time may increase. Therefore, how to appropriately organize these distributed cache servers in active networks, and how to decide whether or not to cache data, are the problems addressed in this thesis.
This thesis proposes a caching algorithm that decides whether or not to cache data according to the acceptable response time of users and changes in the network environment. Based on the acceptable response time, the proposed algorithm determines the caching frequency for various network environments through the self-management of cache servers. Data are stored fairly among cache servers through self-organization of the cache servers, so the probability that users obtain the data within the acceptable response time is increased. Since data are processed transparently along the traversal path, the efficiency degradation due to improper user configuration is largely prevented. Compared with a method that caches data once within a fixed distance, the proposed algorithm increases the number of data copies by 12% but increases the hit rate by 1.2%, raises the probability that users obtain the data within the acceptable response time by 7%, and decreases the response time by 19%. | en_US |
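The abstract does not give the algorithm's details, but its core decision, caching data only when the user's acceptable response time would otherwise be missed and the node's finite data cache can hold the object, can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual algorithm; the function name, parameters, and thresholds are all assumptions.

```python
# Hypothetical sketch of the cache-decision idea described in the abstract:
# an active node caches passed-through data only when the observed response
# time exceeds the user's acceptable response time AND the object fits in
# the node's free data-cache space. All names and values are illustrative.

def should_cache(observed_rtt_ms: float,
                 acceptable_rtt_ms: float,
                 free_cache_bytes: int,
                 data_size_bytes: int) -> bool:
    """Return True iff the data is too slow to fetch and fits in the cache."""
    too_slow = observed_rtt_ms > acceptable_rtt_ms
    fits = data_size_bytes <= free_cache_bytes
    return too_slow and fits

# Example: a 2 MB object fetched in 180 ms against a 100 ms target is cached;
# the same object fetched in 60 ms is not.
print(should_cache(180.0, 100.0, 8_000_000, 2_000_000))  # True
print(should_cache(60.0, 100.0, 8_000_000, 2_000_000))   # False
```

In the thesis's setting this check would run per node along the packet's path, so that copies are placed only where the acceptable response time is actually violated, rather than at a fixed distance from the client.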