A technique that, through continual computation, harnesses available computer resources during periods of low processing activity and low network activity, such as idle time, to prefetch, e.g., web pages, or pre-selected portions thereof, into a local cache of a client computer. The technique utilizes a probabilistic user model to specify, at any one time, those pages or portions of pages that are likely to be prefetched, given, e.g., a web page currently being rendered to a user, and that promise to provide the largest benefit (expected utility) to the user. Specifically, once a user at a client computer enters an address of a desired web page, a set of web addresses of pages, each of which, based on the user model, the user is likely to access next, is determined; the corresponding files are then prefetched, in order of their expected utility to the user, by the client computer during intervals of low processing activity and low network activity. The expected utility of a page or portion is assessed as the product of the rate of refinement in the utility of that page or portion to the user and its transition probability. Once prefetched, these pages or portions are stored in the local cache at the client computer for ready access should the user next select any such page or portion.
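The prioritization rule described above, ranking prefetch candidates by the product of refinement rate and transition probability, can be sketched as follows. This is a minimal illustration, not the patented implementation; the class name, field names, and example URLs and values are all hypothetical assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical representation of one prefetch candidate.
    url: str
    transition_prob: float   # probability the user accesses this page next
    refinement_rate: float   # rate of refinement in utility while prefetching

def prefetch_order(candidates):
    """Rank candidates by expected utility, i.e. the product of the
    rate of refinement in utility and the transition probability,
    highest expected utility first."""
    return sorted(candidates,
                  key=lambda c: c.refinement_rate * c.transition_prob,
                  reverse=True)

# Illustrative (made-up) candidates given some currently rendered page:
candidates = [
    Candidate("http://example.com/a", 0.6, 1.0),  # EU = 0.6
    Candidate("http://example.com/b", 0.3, 4.0),  # EU = 1.2
    Candidate("http://example.com/c", 0.1, 2.0),  # EU = 0.2
]
ordered = prefetch_order(candidates)
# → pages are fetched in the order b, a, c
```

Note that the most probable next page is not necessarily fetched first: a less likely page whose utility refines quickly during prefetching can have a higher expected utility, which is the point of weighting probability by refinement rate.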