##### Author

Shukla, Samta

##### Other Contributors

Abouzeid, Alhussein A.; Anshelevich, Elliot; Kar, Koushik; Zhang, Tong

##### Date Issued

2018-05

##### Subject

Computer and systems engineering

##### Degree

PhD

##### Terms of Use

This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.

##### Abstract

We consider the problem of storage-aware data caching in a heterogeneous wireless edge network consisting of mobile users connected to a server and associated with one or more edge caches. Content requests are assumed to follow independent and identically distributed Poisson arrival processes with Zipf-distributed content popularity. The broad theme of this dissertation is to design caching policies that minimize the sum of content storage costs and server access costs over several design variables, as outlined in each chapter. Traditional caching policies considered only network-layer metrics when making caching decisions; the novelty of our work lies in including storage costs, motivated by recent technological advancements in network edge caching.

Our caching model captures multiple novel aspects, such as the duration for which a content is cached (referred to as its retention time), the storage costs incurred by keeping a content in the cache, reactive and proactive modes of content caching at edge caches, and fine-grained capabilities of modern wireless technologies, such as server multicast/unicast transmissions, device multipath routing, and cache access constraints. Numerical evaluations as well as real data-driven simulations demonstrate that our retention-aware caching policies outperform other state-of-the-art caching policies for a wide range of parameters of interest.

In Chapter 3, we first consider a reactive caching setup in which content requests are handled by a single cache node connected to a server. Our goal is to minimize the content storage cost at the cache subject to a constraint on the content download cost (or delay) from the server. We assume that the retention times of contents are exponentially distributed random variables with a given but unknown mean parameter for each content. Assuming first a large, uncapacitated cache, we derive the optimal mean parameters for all contents by expressing the problem of minimizing storage cost subject to the delay constraint as a convex program. For a finite-capacity cache, we then propose a Markov Decision Process (MDP) based rule for optimally evicting a content upon a cache miss, building on the optimal retention times calculated for large caches. We refer to the policy described in Chapter 3 as DARE (Damage Aware REtention Caching).

In Chapter 4, we study the problem of proactive (i.e., predictive) content caching in a hierarchical cache network, where content requests are served by $N$ cache nodes connected to a single server. We assume that the cache coverage areas do not overlap (i.e., a user is served by exactly one cache) and that the retention times are deterministic unknowns. Our goal is to find a caching policy that minimizes the sum of content storage and content download costs over all retention times when (1) the storage cost is a linear function of the retention times, and (2) the storage cost is a convex function of the retention times. We first prove that the problem is NP-Hard in general. We then derive efficient polynomial-time solutions for the specific case of linear storage cost under the assumption that all caches are uncapacitated, and for a practically motivated convex storage cost function we prove an approximation bound on the performance. For the general case of finite cache capacity, we propose a heuristic that caches contents based on their relative popularity. We refer to the caching policy described in Chapter 4 as Proactive Retention Aware Caching (PRAC).

In Chapter 5, we consider the problem of proactive retention-aware caching in a hierarchical cache network as in Chapter 4, except that the cache coverage areas can now overlap, so a content request from a user can be partially served by more than one cache. Our goal is to design a caching policy that minimizes the sum of content storage costs and server access costs over two design variables: the retention time of each cached content (assumed deterministic) and the probability with which a user routes content requests to each of the caches it is connected to. We formulate this problem as a non-convex, non-linear mixed-integer program. We prove that it is NP-Hard under both multicast and unicast modes of server transmission, even when the caches have a large capacity and storage costs are linear, and we develop greedy algorithms with provable performance bounds for the case of uncapacitated caches; to bound the performance of our greedy algorithms, we devise a solution concept called "tight solutions". Finally, we propose heuristics with low computational complexity for the capacitated cache case as well as for the case of convex storage costs. We refer to the caching policy described in Chapter 5 as Proactive Retention Routing Optimization (PRRO).

In Chapter 6, we take a step back and investigate whether randomized or deterministic retention times lead to optimal storage costs. We adhere to the same single-cache model as in Chapter 3 and aim to find the optimal retention distribution over all possible randomized and deterministic distributions, relaxing the content arrival process from the Poisson arrivals of the preceding chapters to renewal arrivals. We prove that deterministic retention times yield a lower expected cost than any randomized retention policy with the same mean. Further, we propose a formulation for computing the optimal deterministic retention times and give a brief sketch of the extension to general networks.
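The advantage of deterministic retention times over randomized ones of the same mean can be illustrated with a small Monte Carlo sketch. The cost model here is a deliberately simplified toy (a storage charge proportional to the retention time of each caching cycle, plus a miss charge when the next request arrives after eviction, with Poisson request arrivals); the constants `c_store`, `c_miss`, and the arrival rate are illustrative assumptions, not the dissertation's exact formulation.

```python
import random

def avg_cost(retention_sampler, rate=1.0, c_store=0.1, c_miss=1.0,
             n=200_000, seed=7):
    """Average per-request cost in a toy single-cache model: after each
    request the content is retained for time T (storage cost c_store * T);
    if the next request arrives after eviction (gap > T), it is a cache
    miss and incurs a server download cost c_miss."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        T = retention_sampler(rng)       # retention time for this cycle
        gap = rng.expovariate(rate)      # exponential (Poisson) inter-arrival
        total += c_store * T + (c_miss if gap > T else 0.0)
    return total / n

mean_T = 1.0
det = avg_cost(lambda rng: mean_T)                       # deterministic retention
rnd = avg_cost(lambda rng: rng.expovariate(1 / mean_T))  # exponential, same mean

print(f"deterministic: {det:.3f}  randomized: {rnd:.3f}")
```

In this toy model the deterministic policy's expected cost is c_store * m + c_miss * e^{-lambda*m} (about 0.468 with the values above), while the exponential policy's is c_store * m + c_miss / (1 + lambda*m) = 0.6; since e^{-x} < 1/(1 + x) for x > 0, the deterministic policy is strictly cheaper here, consistent with the general result claimed for Chapter 6.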

##### Description

May 2018; School of Engineering

##### Department

Dept. of Electrical, Computer, and Systems Engineering

##### Publisher

Rensselaer Polytechnic Institute, Troy, NY

##### Relationships

Rensselaer Theses and Dissertations Online Collection

##### Access

Restricted to current Rensselaer faculty, staff and students. Access inquiries may be directed to the Rensselaer Libraries.