Caching is done using a pull model. Akamai does not know about objects for which we haven't seen requests, nor do we push content out to our servers in order to pre-warm our caches. One slight exception to this is the Prefetching feature (https://developer.akamai.com/stuff/Optimization/Pre-fetching.html), where the servers examine the web page being downloaded (...) and proactively make requests ahead of the End User, ensuring that we have an object in cache before the user gets around to requesting it.
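The pull model above can be sketched as a simple read-through cache: nothing is stored until a client actually asks for it. This is an illustrative sketch only (the function and variable names are hypothetical, not Akamai internals):

```python
# Minimal sketch of a pull-model (read-through) cache: an object enters
# the cache only after the first client request for it.
cache = {}

def fetch_from_origin(url):
    # Stand-in for a real HTTP request to the origin server.
    return f"body-of-{url}"

def serve(url):
    if url not in cache:                     # first request: cache miss
        cache[url] = fetch_from_origin(url)  # pull from origin, then store
    return cache[url]                        # later requests: cache hit
```

The key property is that `fetch_from_origin` is only ever called on a miss; content is never pushed into `cache` ahead of demand.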
As the cache on a server fills up, our software looks for the least recently used objects in its cache and evicts them to make room for new objects. (...) Content is never evicted unless a request for a new object arrives and we need to make room for it in the cache.
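The eviction policy described above is classic LRU: eviction happens only when inserting a new object into a full cache, and the victim is the least recently used entry. A minimal sketch (not Akamai's actual implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used object only when room is needed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # oldest entries first

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            # Evict only because a new object needs the room.
            self.store.popitem(last=False)
        self.store[key] = value
```

Note that `get` alone never evicts anything; only `put` on a full cache does, matching the behavior described above.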
Asynchronous Refresh, or Prefresh, is currently enabled by default with a setting of 90% (this can be modified). This means that if a client request is received during the last 10% of an object's time-to-live, the Akamai server serves the object from its cache and then sends an If-Modified-Since (IMS) request to the origin, refreshing the content asynchronously.
To go deeper, you can read this documentation about TTL: https://control.akamai.com/dl/customers/other/EDGESERV/About_TTL.pdf (Prefresh is explained on p. 9). In addition, here is the Akamai EdgeServer Configuration Guide (https://control.akamai.com/dl/customers/other/EDGESERV/ESConfigGuide-Customer.pdf), which includes all the information about the Akamai Intelligent Platform options and how to configure them; take a look at p. 75.