Dean A Mulla, Sorin Iacobovici: Pending access queue for providing data to a target register during an intermediate pipeline phase after a computer cache miss. Hewlett Packard Company, Augustus W Winfield, February 6, 2001: US06185660 (56 worldwide citation)

An apparatus in a computer, called a pending access queue, for providing data for register load instructions after a cache miss. After a cache miss, when data is available for a register load instruction, the data is first directed to the pending access queue and is provided to an execution pipeline ...
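The idea in this abstract can be modeled in a few lines: data returning after a miss is staged in a queue tagged with its target register, rather than written straight to the register file. This is an illustrative sketch only (class and method names are assumed, not from the patent):

```python
from collections import deque

class PendingAccessQueue:
    """Toy model of a pending access queue: after a cache miss, returned
    data is staged here, tagged with its target register, and the
    execution pipeline later provides it to the register during an
    intermediate phase. Behavioral sketch, not the patented hardware."""

    def __init__(self):
        self.entries = deque()

    def fill(self, target_reg, data):
        # Data arriving from memory after a miss is queued, not bypassed.
        self.entries.append((target_reg, data))

    def drain_one(self, register_file):
        # The pipeline delivers one queued result to its target register.
        if self.entries:
            reg, data = self.entries.popleft()
            register_file[reg] = data
            return reg
        return None

regs = {}
paq = PendingAccessQueue()
paq.fill("r4", 0xDEAD)
paq.fill("r7", 0xBEEF)
assert paq.drain_one(regs) == "r4"   # oldest fill is drained first
assert regs == {"r4": 0xDEAD}
```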

James S Finnell, Dean A Mulla: Method for decreasing penalty resulting from a cache miss in multi-level cache system. November 19, 1996: US05577227 (38 worldwide citation)

A computing system includes a processor, a main memory, a first level cache and a second level cache. The second level cache contains data lines. The first level cache contains data line fragments of data lines within the second level cache. In response to a processor attempt to access a data word, ...
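The fragment-based lookup described here can be sketched as a small behavioral model: the second level holds full lines, the first level holds only fragments of them, so an L1 miss that hits in L2 moves just a fragment. The sizes and function name below are assumptions for illustration:

```python
def access(addr, l1_frags, l2_lines, frag=16, line=64):
    """Return where a word was found, filling caches on the way.
    l1_frags: set of fragment base addresses (first-level cache);
    l2_lines: set of line base addresses (second-level cache).
    frag/line sizes are illustrative, not from the patent."""
    frag_base = addr - addr % frag
    line_base = addr - addr % line
    if frag_base in l1_frags:
        return "L1 hit"
    if line_base in l2_lines:
        l1_frags.add(frag_base)       # bring only the fragment into L1
        return "L2 hit"
    l2_lines.add(line_base)           # fetch the full line from memory
    l1_frags.add(frag_base)
    return "miss"

l1, l2 = set(), set()
assert access(0x1004, l1, l2) == "miss"
assert access(0x1004, l1, l2) == "L1 hit"
assert access(0x1010, l1, l2) == "L2 hit"   # same line, new fragment
```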

Terry L Lyon, Eric R DeLano, Dean A Mulla: Method and system for early tag accesses for lower-level caches in parallel with first-level cache. Hewlett Packard Company, July 30, 2002: US06427188 (29 worldwide citation)

A system and method are disclosed which determine in parallel for multiple levels of a multi-level cache whether any one of such multiple levels is capable of satisfying a memory access request. Tags for multiple levels of a multi-level cache are accessed in parallel to determine whether the address ...
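The parallel tag probe described in this abstract can be modeled in software as a single pass over all levels' tag arrays, returning the lowest level able to satisfy the request. A minimal sketch, with assumed parameter names and a 64-byte line offset:

```python
def first_satisfying_level(addr, level_tags, offset_bits=6):
    """Probe the tag arrays of every cache level 'in parallel' (here, one
    list comprehension) and return the first level whose tags match.
    level_tags: list of sets of line tags, index 0 = first-level cache.
    Illustrative model, not the patented circuit."""
    tag = addr >> offset_bits
    hits = [tag in tags for tags in level_tags]   # all compares at once
    for level, hit in enumerate(hits, start=1):
        if hit:
            return level
    return None    # no level can satisfy; go to main memory

l1_tags = {0x40}
l2_tags = {0x40, 0x41}
assert first_satisfying_level(0x1000, [l1_tags, l2_tags]) == 1
assert first_satisfying_level(0x1040, [l1_tags, l2_tags]) == 2
assert first_satisfying_level(0x2000, [l1_tags, l2_tags]) is None
```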

Dean A Mulla, Reid James Riedlinger, Thomas Grutkowski: Cache address conflict mechanism without store buffers. Hewlett Packard Company, March 25, 2003: US06539457 (25 worldwide citation)

The inventive cache manages address conflicts and maintains program order without using a store buffer. The cache utilizes an issue algorithm to ensure that accesses issued in the same clock are actually issued in an order that is consistent with program order. This is enabled by performing address ...

John Wai Cheong Fu, Dean A Mulla, Gregory S Mathews, Stuart E Sailer: Dual-ported, pipelined, two level cache system. Intel Corporation, Schwegman Lundberg Woessner & Kluth P A, August 7, 2001: US06272597 (20 worldwide citation)

A novel on-chip cache memory and method of operation are provided which increase microprocessor performance. The on-chip cache memory has two levels. The first level is optimized for low latency and the second level is optimized for capacity. Both levels of cache are pipelined and can support simultaneous ...

Gregory S Mathews, Dean A Mulla: Method and apparatus for managing a memory array. Intel Corporation, David J Kaplan, August 15, 2000: US06105115 (19 worldwide citation)

An NRU algorithm is used to track lines in each region of a memory array such that the corresponding NRU bits are reset on a region-by-region basis. That is, the NRU bits of one region are reset when all of the bits in that region indicate that their corresponding lines have recently been used. Similarly, ...
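The region-by-region reset in this abstract can be sketched as follows: each line carries a not-recently-used bit that is cleared on access, and when every bit in a region is cleared, only that region's bits are reset. An illustrative model with assumed names, not the patented logic:

```python
class RegionNRU:
    """Sketch of NRU tracking with per-region reset. A True bit means
    'not recently used'; touching a line clears its bit. When every line
    in a region has been used (all bits False), that region's bits are
    reset to True, leaving other regions untouched."""

    def __init__(self, regions, lines_per_region):
        self.bits = [[True] * lines_per_region for _ in range(regions)]

    def touch(self, region, line):
        self.bits[region][line] = False        # mark recently used
        if not any(self.bits[region]):         # whole region used up?
            self.bits[region] = [True] * len(self.bits[region])

    def victim(self, region):
        # A replacement candidate is any line whose NRU bit is still set.
        return self.bits[region].index(True)

nru = RegionNRU(regions=2, lines_per_region=2)
nru.touch(0, 0)
assert nru.victim(0) == 1           # line 1 is still not recently used
nru.touch(0, 1)                     # region 0 exhausted -> bits reset
assert nru.bits[0] == [True, True]
assert nru.bits[1] == [True, True]  # other regions unaffected
```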

Daming Jin, Dean A Mulla, Douglas J Cutter, Thomas Grutkowski: Distributed MUX scheme for bi-endian rotator circuit. Hewlett Packard Development Company, Intel Corporation, February 3, 2004: US06687262 (16 worldwide citation)

The inventive control logic provides the selection signals for a bi-endian rotator MUX. The logic determines the starting point for the data transfer by determining which input register byte is going to Byte ...
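What a bi-endian rotator computes can be modeled behaviorally: align a byte vector so a chosen input byte lands at byte 0 of the output, rotating in opposite directions for the two endiannesses. This is a sketch of the datapath's effect only (the patent covers the distributed-MUX selection logic, not this model, and the direction convention here is an assumption):

```python
def rotate_bytes(data, start, little_endian=True):
    """Behavioral sketch of a bi-endian byte rotator: input byte 'start'
    is routed to output byte 0, with the remaining bytes following in
    ascending (little-endian) or descending (big-endian) input order."""
    n = len(data)
    if little_endian:
        return bytes(data[(start + i) % n] for i in range(n))
    return bytes(data[(start - i) % n] for i in range(n))

d = bytes(range(8))                  # 00 01 02 ... 07
assert rotate_bytes(d, 3) == bytes([3, 4, 5, 6, 7, 0, 1, 2])
assert rotate_bytes(d, 3, little_endian=False) == bytes([3, 2, 1, 0, 7, 6, 5, 4])
```

In hardware, each output byte would be one MUX selecting among the input bytes, with the control logic above the rotator generating the per-MUX selection signals from `start` and the endianness mode.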

Dean A Mulla, Terry L Lyon, Reid James Riedlinger, Thomas Grutkowski: Cache chain structure to implement high bandwidth low latency cache memory subsystem. Hewlett Packard Development Company, Intel Corporation, April 29, 2003: US06557078 (14 worldwide citation)

The inventive cache uses a queuing structure which provides out-of-order cache memory access support for multiple accesses, as well as support for managing bank conflicts and address conflicts. The inventive cache can support four data accesses that are hits per clock, support one access that misses ...

Dean A Mulla, Terry L Lyon, Reid James Riedlinger, Tom Grutkowski: L1 cache memory. Hewlett Packard Company, Intel Corporation, January 14, 2003: US06507892 (13 worldwide citation)

The inventive cache processes multiple access requests simultaneously by using separate queuing structures for data and instructions. The inventive cache uses ordering mechanisms that guarantee program order when there are address conflicts and architectural ordering requirements. The queuing structures ...