Green Cache

If you have a question about this talk, please contact Timothy M Jones.

Abstract

Traditional caches tightly couple data with their metadata in the form of address tags. The location of a requested datum is determined by hierarchical searches, in which many address tags (≈the cache associativity) at each cache level are compared to the requested address to answer the question "Is the datum here?" The address tags of data being evicted also need to be searched to cover coherence corner cases. These hierarchical searches add both energy and latency overhead to memory accesses.
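
As a rough illustration (not taken from the talk), the per-access tag search of a conventional set-associative cache can be sketched as follows; the associativity, set count and line size are assumed values:

/* Sketch of the tag search a traditional set-associative cache performs on
   every access: all ways in the indexed set are compared against the
   requested address's tag. Parameters are assumptions. */

#include <stdbool.h>
#include <stdint.h>

#define WAYS        8        /* assumed associativity */
#define SETS        64       /* assumed number of sets */
#define OFFSET_BITS 6        /* 64-byte cache lines */
#define INDEX_BITS  6        /* log2(SETS) */

typedef struct {
    bool     valid;
    uint64_t tag;            /* ~30 address bits stored alongside the data */
    uint8_t  data[64];
} line_t;

static line_t cache[SETS][WAYS];

/* Returns the matching way or -1 on a miss. Note the WAYS tag comparisons
   (energy and latency) spent answering "Is the datum here?"; on a miss the
   same search is repeated at the next cache level. */
int tag_lookup(uint64_t addr)
{
    uint64_t set = (addr >> OFFSET_BITS) & (SETS - 1);
    uint64_t tag = addr >> (OFFSET_BITS + INDEX_BITS);

    for (int way = 0; way < WAYS; way++)
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return way;
    return -1;
}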

Green Cache is a new, efficient and simple coherent cache architecture that relies on a new kind of metadata, cacheline pointers (CPs), which answer the question "Where is the datum?" CPs encode the location of the requested data (in which cache? in which associative way?) in fewer bits than address tags (≈6 bits vs. ≈30 bits). CPs are not coupled to their corresponding data; instead, they are stored in a small, separate metadata hierarchy, which makes it likely that a core will find the CP for a requested datum in its small, private metadata cache (e.g., the requested datum is in associative way seven of the LLC slice adjacent to core three).
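
One plausible way to picture a CP, based only on the bit counts quoted above, is a small bit-field recording the level, slice and way holding the line; the exact field split below is an assumption, not the talk's actual encoding:

/* Hypothetical cacheline-pointer (CP) layout, sized to match the abstract's
   "≈6 bits vs. ≈30 bits" comparison; the field split is an assumption. */

#include <stdint.h>

typedef struct {
    unsigned level : 1;   /* private cache vs. shared LLC (assumed) */
    unsigned slice : 2;   /* which LLC slice / which core's private cache */
    unsigned way   : 3;   /* which associative way inside that array */
    unsigned valid : 1;   /* is a CP present in the metadata hierarchy? */
} cp_t;                   /* ~6 location bits plus a valid bit, vs. a ~30-bit tag */

/* The abstract's example: "associative way seven of the LLC slice adjacent
   to core three". */
static const cp_t example_cp = { .level = 1, .slice = 3, .way = 7, .valid = 1 };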

By introducing delayed acknowledges, the coherence protocol of Green Cache makes the CP information deterministic, guaranteeing that the requested datum remains in its identified location when the request for it reaches that location. This removes many traditional coherence corner cases and allows the data caches to be implemented as plain SRAM arrays, with no address tags or comparators: just a simple SRAM read. The coherence protocol also automatically classifies private data, thereby removing 90% of the traditional directory traffic, and offers flexible data placement, enabling cheap cache bypassing and non-uniform cache architecture (NUCA) topologies. Green Cache reduces the traffic in the memory hierarchy of a mobile processor by an average of 70%, reduces its dynamic energy (in EDP) by 50% and reduces its latency for L1 cache misses by 30% across a wide selection of benchmarks.
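
A hedged sketch of what an access might look like once the CP is deterministic: the private metadata cache supplies the CP, and the data array is read as plain SRAM with no tag comparison. All names, sizes and structures here are illustrative assumptions, not the talk's actual design:

/* Illustrative access path in a CP-based hierarchy. The data arrays hold no
   address tags and need no comparators; correctness rests on the coherence
   protocol keeping the CP valid until the request arrives. */

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    unsigned level : 1, slice : 2, way : 3, valid : 1;
} cp_t;                                       /* same assumed layout as above */

static uint8_t llc_data[4][8][64];            /* [slice][way][64-byte line]: plain SRAM */

/* Stand-in for the small private metadata cache; a real design would walk the
   separate metadata hierarchy on a CP miss (not shown here). */
static bool metadata_lookup(uint64_t addr, cp_t *cp)
{
    (void)addr;
    *cp = (cp_t){ .level = 1, .slice = 3, .way = 7, .valid = 1 };
    return true;
}

/* With delayed acknowledges guaranteeing the line stays put, the read is a
   single SRAM access at the location the CP encodes. */
static bool read_line(uint64_t addr, uint8_t out[64])
{
    cp_t cp;
    if (!metadata_lookup(addr, &cp) || !cp.valid)
        return false;                         /* fall back to directory/memory */
    memcpy(out, llc_data[cp.slice][cp.way], 64);
    return true;
}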

Bio

Erik Hagersten has held a professor chair in computer architecture at Uppsala University in Sweden since 1999. Prior to this, he was the chief architect for Sun Microsystems' high-end server engineering division in the US from 1993 to 1999. In 2006 he founded Acumem AB, developing new modeling technology for multicore software optimisations; Acumem was acquired by Rogue Wave Software Inc. in 2010. Since 2014 he has been the CEO of his second startup, Green Cache AB.

At Uppsala, Erik has built up the Uppsala Architecture Research Team (UART), one of the largest architecture research groups in Europe. UART performs research in fast performance modeling technology and compiler technology, as well as more traditional computer architecture topics, with an emphasis on energy efficiency.

He has been a member of the Royal Swedish Academy of Engineering Sciences (IVA) since 2002, and in 2013 he received Uppsala University's most prestigious research award (the Björkénska award).

This talk is part of the Computer Laboratory Computer Architecture Group Meeting series.
