Improved Cache Performance with Reduced Energy

Highlights

Cache design improves performance by ~10%
Win-win situation: a design that improves performance also reduces power consumption
Consists of a main cache structure preceded by a small, highly associative filter
Blocks inserted into the cache are far more likely to be frequently reused, resulting in more efficient utilization of cache space
The random sampling cache has better overall performance than larger, more expensive caches

Our Innovation

Random sampling of references is used to identify locality and select the memory blocks to be inserted into the cache
Selection is completely stateless: no need to maintain any data about previous memory usage
A set look-aside buffer (SLB) augments the classic CAM tag-store / SRAM data-store design to offer a fast, low-power lookup

The Opportunity

Attractive to chip designers seeking improved performance and reduced power consumption
Addresses an ever-expanding industry with a state-of-the-art solution

Development Milestones

Explore the effectiveness of probabilistic filtering for L2 caches
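The stateless, sampling-based insertion policy described above can be sketched in a few lines of Python. This is a minimal illustrative model, not the patented design: the class name, direct-mapped organization, and sampling probability are assumptions made for the sketch (the actual design places a small, highly associative filter in front of the main cache).

```python
import random

class SamplingFilterCache:
    """Simplified model of probability-based cache insertion (illustrative only)."""

    def __init__(self, num_sets, prob=0.05):
        self.sets = [None] * num_sets   # direct-mapped for simplicity
        self.prob = prob                # assumed sampling probability
        self.hits = 0
        self.accesses = 0

    def access(self, block_addr):
        """Look up a block; on a miss, insert it only with probability prob."""
        self.accesses += 1
        idx = block_addr % len(self.sets)
        if self.sets[idx] == block_addr:
            self.hits += 1
            return True
        # Stateless selection: no history of past references is kept.
        # A frequently reused block gets many independent chances to be
        # sampled, so it is far more likely to end up cached than a
        # streaming, use-once block.
        if random.random() < self.prob:
            self.sets[idx] = block_addr
        return False
```

For example, a hot block referenced hundreds of times is almost certain to be sampled and inserted, after which every further reference hits; a stream of use-once addresses is mostly filtered out, so it cannot evict the hot data. This is the sense in which blocks that do get inserted are far more likely to be frequently reused.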
Additional Information

Yoav Etsion and Dror G. Feitelson, "L1 Cache Filtering Through Random Selection of Memory References." In 16th Intl. Conf. Parallel Architectures and Compilation Techniques, Sep 2007. (http://www.cs.huji.ac.il/~etsman/papers/CorePact.pdf)

Type of Offer: Licensing


