Uppsala University Publications
Leveraging Existing Microarchitectural Structures to Improve First-Level Caching Efficiency
Alves, Ricardo (Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Computer Systems). ORCID iD: 0000-0002-6259-7821
2019 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Low-latency data access is essential for performance. To achieve this, processors use fast first-level caches combined with out-of-order execution: the caches decrease memory access latency, while out-of-order execution hides it. These approaches are effective for performance, but they cost significant energy, which has led to many techniques that require designers to trade off performance against efficiency.

Way-prediction and filter caches are two of the most common strategies for improving first-level cache energy efficiency while still minimizing latency. Both involve compromises: way-prediction trades some latency for better energy efficiency, while filter caches trade some energy efficiency for lower latency. However, these strategies are not mutually exclusive. By borrowing elements from both, and taking into account SRAM memory layout limitations, we propose a novel MRU-L0 cache that mitigates many of their shortcomings while preserving their benefits. Moreover, while first-level caches are tightly integrated into the CPU pipeline, existing work on these techniques largely ignores their impact on instruction scheduling. We show that the variable hit latency introduced by way-mispredictions causes instruction replays of load-dependent instruction chains, which hurts performance and efficiency. We study this effect and propose a variable-latency cache-hit instruction scheduler that identifies potential mis-schedulings, reduces instruction replays, reduces the negative performance impact, and further improves cache energy efficiency.
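To make the scheduling interaction concrete, here is a minimal illustrative sketch (the 2- and 4-cycle latencies and all names are assumptions, not the thesis's pipeline model) of why a way-misprediction turns a fixed-latency L1 hit into a variable-latency one, forcing dependents that were speculatively woken up for the fast latency to replay:

// Illustrative sketch only: variable hit latency under way-misprediction.
#include <cstdio>

constexpr int FAST_HIT_CYCLES = 2;  // hit in the predicted way (assumed)
constexpr int SLOW_HIT_CYCLES = 4;  // re-access after a way-misprediction (assumed)

struct LoadResult {
    int latency;         // actual load-to-use latency
    bool replay_needed;  // dependents issued assuming FAST_HIT_CYCLES must re-issue
};

// 'prediction_correct' would come from comparing the predicted way's tag.
LoadResult issue_load(bool prediction_correct) {
    if (prediction_correct)
        return {FAST_HIT_CYCLES, false};
    // Wrong way: the load completes later than the scheduler assumed, so any
    // dependent instruction that was woken up early has no data and replays.
    return {SLOW_HIT_CYCLES, true};
}

int main() {
    for (bool correct : {true, false}) {
        LoadResult r = issue_load(correct);
        std::printf("prediction %s: latency = %d cycles, replay dependents = %s\n",
                    correct ? "correct" : "wrong", r.latency,
                    r.replay_needed ? "yes" : "no");
    }
    return 0;
}

A scheduler aware of this variability could, for example, delay waking up the dependents of loads whose way-prediction is not trusted, which is the spirit of the variable-latency cache-hit scheduler described above.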

Modern pipelines also employ sophisticated execution strategies to hide memory latency and improve performance. While their primary purpose is performance and correctness, they require intermediate storage that can also serve as a cache. In this work we demonstrate how the store-buffer, paired with the memory dependence predictor, can be used to efficiently cache dirty data, and how the physical register file, paired with a value predictor, can be used to efficiently cache clean data. These strategies improve both performance and energy efficiency with no additional storage and minimal additional complexity, since they recycle existing CPU structures to detect reuse, memory ordering violations, and misspeculations.
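As a rough, hypothetical illustration of the clean-data half of this idea (all names and the address-indexed table below are invented for the sketch; the thesis pairs the register file with a value predictor, whose details are abstracted away here), a small table can remember which physical register still holds data previously loaded from an address, so a predicted hit skips the L1/TLB probe and is verified with the pipeline's existing misspeculation recovery. A companion sketch of the dirty-data side appears after the store-buffer paper's abstract below.

// Minimal sketch: the physical register file doubling as a cache for clean data.
#include <cstdint>
#include <optional>
#include <unordered_map>

struct RegFileCache {
    // Address -> physical register believed to still hold that data.
    std::unordered_map<uint64_t, int> addr_to_preg;

    // Record that 'preg' now holds the value loaded from 'addr'.
    void note_load(uint64_t addr, int preg) { addr_to_preg[addr] = preg; }

    // A later load to 'addr' may be satisfied straight from the register file;
    // returning a register here means "skip the L1/TLB probe, verify later".
    std::optional<int> predict_forward(uint64_t addr) const {
        auto it = addr_to_preg.find(addr);
        return it == addr_to_preg.end() ? std::nullopt
                                        : std::optional<int>(it->second);
    }

    // On a store to 'addr', or when the register is reclaimed, drop the entry
    // so a stale register is never forwarded.
    void invalidate(uint64_t addr) { addr_to_preg.erase(addr); }
};

int main() {
    RegFileCache rfc;
    rfc.note_load(0x1000, /*preg=*/42);
    bool forwarded = rfc.predict_forward(0x1000).has_value();  // no L1/TLB probe
    rfc.invalidate(0x1000);                                    // e.g. a later store
    return forwarded && !rfc.predict_forward(0x1000) ? 0 : 1;
}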

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2019, p. 42
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 1821
Keywords [en]
Energy Efficient Caching, Memory Architecture, Single Thread Performance, First-Level Caching, Out-of-Order Pipelines, Instruction Scheduling, Filter-Cache, Way-Prediction, Value-Prediction, Register-Sharing.
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:uu:diva-383811
ISBN: 978-91-513-0681-0 (print)
OAI: oai:DiVA.org:uu-383811
DiVA, id: diva2:1317355
Public defence
2019-08-26, Sal VIII, Universitetshuset, Biskopsgatan 3, Uppsala, 09:00 (English)
Available from: 2019-06-11 Created: 2019-05-22 Last updated: 2019-07-01
List of papers
1. Addressing energy challenges in filter caches
2017 (English). In: Proc. 29th International Symposium on Computer Architecture and High Performance Computing, IEEE Computer Society, 2017, p. 49–56. Conference paper, Published paper (Refereed)
Abstract [en]

Filter caches and way-predictors are common approaches to improve the efficiency and/or performance of first-level caches. Filter caches use a small L0 to provide more efficient and faster access to a small subset of the data, and work well for programs with high locality. Way-predictors improve efficiency by accessing only the predicted way, which alleviates the need to read all ways in parallel without increasing latency, but hurts performance due to mispredictions.

In this work we examine how SRAM layout constraints (h-trees and data mapping inside the cache) affect way-predictors and filter caches. We show that accessing the smaller L0 array can be significantly more energy efficient than attempting to read fewer ways from a larger L1 cache, and that the main source of energy inefficiency in filter caches comes from L0 and L1 misses. We propose a filter cache optimization that shares the tag array between the L0 and the L1, which incurs the overhead of reading the larger tag array on every access, but in return allows us to directly access the correct L1 way on each L0 miss. This optimization does not add any extra latency and, counter-intuitively, improves the filter cache's overall energy efficiency beyond that of the way-predictor.

By combining the low-power benefits of a physically smaller L0 with the reduction in miss energy from reading the L1 tags up front in parallel with the L0 data, we show that the optimized filter cache reduces dynamic cache energy by 26% compared to a traditional filter cache while providing the same performance advantage. Compared to a way-predictor, the optimized cache improves performance by 6% and energy by 2%.
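A minimal sketch of the shared-tag access flow described above (the way count, types, and interfaces are assumptions for illustration, not the evaluated design): the L1 tag array is read on every access, in parallel with the small L0 data array, so an L0 miss is steered directly to the single matching L1 way.

// Minimal sketch of a shared-tag filter-cache lookup.
#include <cstdint>
#include <optional>

constexpr int L1_WAYS = 8;

struct AccessResult {
    bool hit_l0;                // small L0 data array satisfied the load
    std::optional<int> l1_way;  // single L1 data way to read on an L0 miss
};

AccessResult access(bool l0_hit, const uint64_t (&l1_tags)[L1_WAYS], uint64_t tag) {
    if (l0_hit)
        return {true, std::nullopt};
    for (int way = 0; way < L1_WAYS; ++way)
        if (l1_tags[way] == tag)
            return {false, way};   // L0 miss, L1 hit: one data way, no extra latency
    return {false, std::nullopt};  // genuine L1 miss: go to the next level
}

int main() {
    uint64_t tags[L1_WAYS] = {11, 12, 13, 14, 15, 16, 17, 18};
    AccessResult r = access(/*l0_hit=*/false, tags, /*tag=*/15);
    return (!r.hit_l0 && r.l1_way == 4) ? 0 : 1;  // tag 15 found in way 4
}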

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-334221 (URN)
10.1109/SBAC-PAD.2017.14 (DOI)
000426895600007 ()
978-1-5090-1233-6 (ISBN)
Conference
29th International Symposium on Computer Architecture and High Performance Computing SBAC-PAD, 2017, October 17–20, Campinas, Brazil.
Available from: 2017-11-09 Created: 2017-11-21 Last updated: 2019-05-22. Bibliographically approved.
2. Dynamically Disabling Way-prediction to Reduce Instruction Replay
2018 (English). In: 2018 IEEE 36th International Conference on Computer Design (ICCD), IEEE, 2018, p. 140–143. Conference paper, Published paper (Refereed)
Abstract [en]

Way-predictors have long been used to reduce dynamic cache energy without the performance loss of serial caches. However, they produce variable-latency hits, as incorrect predictions increase load-to-use latency. While the performance impact of these extra cycles has been well studied, the need to replay subsequent instructions in the pipeline due to the increased load latency has been ignored. In this work we show that way-predictors pay a significant performance penalty, beyond previously studied effects, due to instruction replays caused by mispredictions. To address this, we propose a solution that learns the confidence of the way prediction and dynamically disables it when it is likely to mispredict and cause replays. This allows us to reduce cache latency (when we can trust the way-prediction) while still avoiding the need to replay instructions in the pipeline (by avoiding way-mispredictions). Standard way-predictors degrade IPC by 6.9% vs. a parallel cache, due to 10% of the instructions being replayed (worst case 42.3%). While our solution decreases way-prediction accuracy by turning off the way-predictor in some cases where it would have been correct, it delivers higher performance than a standard way-predictor. Our confidence-based way-predictor degrades IPC by only 4.4% by replaying just 5.6% of the instructions (worst case 16.3%). This reduces the way-predictor's cache energy overhead, compared to a serial-access cache, from 8.5% to 3.7% on average and from 33.8% to 9.5% in the worst case.
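A minimal sketch of the confidence mechanism (the counter width and threshold are assumptions, not the published configuration): a saturating counter per predictor entry decides whether to trust the way prediction or fall back to a replay-free parallel read of all ways.

// Minimal sketch of confidence-gated way-prediction.
#include <algorithm>
#include <cstdint>

struct WayPredictionConfidence {
    static constexpr uint8_t kMax = 3;            // 2-bit saturating counter (assumed)
    static constexpr uint8_t kTrustThreshold = 2; // assumed threshold
    uint8_t counter = kMax;

    // High confidence: access only the predicted way (fast, low energy).
    // Low confidence: read all ways in parallel, so a misprediction cannot
    // lengthen the hit latency and force dependent instructions to replay.
    bool use_prediction() const { return counter >= kTrustThreshold; }

    void update(bool prediction_was_correct) {
        if (prediction_was_correct)
            counter = std::min<uint8_t>(kMax, counter + 1);
        else
            counter = 0;  // a single misprediction drops confidence sharply
    }
};

int main() {
    WayPredictionConfidence conf;
    conf.update(false);                 // one misprediction...
    bool fast = conf.use_prediction();  // ...disables the predicted-way access
    conf.update(true);
    conf.update(true);                  // correct predictions rebuild trust
    return (!fast && conf.use_prediction()) ? 0 : 1;
}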

Place, publisher, year, edition, pages
IEEE, 2018
Series
Proceedings IEEE International Conference on Computer Design, ISSN 1063-6404, E-ISSN 2576-6996
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-361215 (URN)
10.1109/ICCD.2018.00029 (DOI)
000458293200018 ()
978-1-5386-8477-1 (ISBN)
Conference
IEEE 36th International Conference on Computer Design (ICCD), October 7–10, 2018, Orlando, FL, USA
Available from: 2018-09-21 Created: 2018-09-21 Last updated: 2019-05-22. Bibliographically approved.
3. Minimizing Replay under Way-Prediction
2019 (English). Report (Other academic)
Abstract [en]

Way-predictors are effective at reducing dynamic cache energy by reducing the number of ways accessed, but they introduce additional latency for incorrect way-predictions. While previous work has studied the impact of the increased latency for incorrect way-predictions, we show that the latency variability has a far greater effect, as it forces replay of in-flight instructions on an incorrect way-prediction. To address the problem, we propose a solution that learns the confidence of the way-prediction and dynamically disables it when it is likely to mispredict. We further improve this approach by biasing the confidence to reduce latency variability further, at the cost of making fewer way-predictions. Our results show that instruction replay in a way-predictor reduces IPC by 6.9%, due to 10% of the instructions being replayed. Our confidence-based way-predictor degrades IPC by only 2.9% by replaying just 3.4% of the instructions, reducing the way-predictor's cache energy overhead (compared to a serial-access cache) from 8.5% to 1.9%.
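As a rough illustration of the biasing mentioned above (the counter width, bias value, and update policy are illustrative assumptions, not the report's parameters): a larger bias means fewer way-predicted accesses, but also fewer variable-latency hits and therefore fewer instruction replays.

// Sketch of a biased confidence update for way-prediction.
#include <cstdint>

constexpr uint8_t kMax = 7;
constexpr uint8_t kBias = 3;  // extra confidence required before trusting again

uint8_t update_biased(uint8_t counter, bool correct) {
    if (correct)
        return counter < kMax ? static_cast<uint8_t>(counter + 1) : kMax;
    return 0;  // reset on a misprediction
}

bool trust(uint8_t counter) { return counter > kBias; }

int main() {
    uint8_t c = 0;
    for (int i = 0; i < 5; ++i) c = update_biased(c, true);  // five correct in a row
    return trust(c) ? 0 : 1;  // confidence 5 > bias 3, so predict the way again
}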

Series
Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2019-003
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-383596 (URN)
Available from: 2019-05-17 Created: 2019-05-17 Last updated: 2019-07-03. Bibliographically approved.
4. Filter caching for free: The untapped potential of the store-buffer
2019 (English). In: Proc. 46th International Symposium on Computer Architecture, New York: ACM Press, 2019, p. 436–448. Conference paper, Published paper (Refereed)
Abstract [en]

Modern processors contain store-buffers to allow stores to retire under a miss, thus hiding store-miss latency. The store-buffer needs to be large (for performance) and searched on every load (for correctness), thereby making it a costly structure in both area and energy. Yet on every load, the store-buffer is probed in parallel with the L1 and TLB, with no concern for the store-buffer's intrinsic hit rate or whether a store-buffer hit can be predicted to save energy by disabling the L1 and TLB probes.

In this work we cache data that have been written back to memory in a unified store-queue/buffer/cache, and predict hits to avoid L1/TLB probes and save energy. By dynamically adjusting the allocation of entries between the store-queue/buffer/cache, we can achieve nearly optimal reuse, without causing stalls. We are able to do this efficiently and cheaply by recognizing key properties of stores: free caching (since they must be written into the store-buffer for correctness we need no additional data movement), cheap coherence (since we only need to track state changes of the local, dirty data in the store-buffer), and free and accurate hit prediction (since the memory dependence predictor already does this for scheduling).

As a result, we are able to increase the store-buffer hit rate and reduce store-buffer/TLB/L1 dynamic energy by 11.8% (up to 26.4%) on SPEC2006 without hurting performance (average IPC improvement of 1.5%, up to 4.7%). The cost of these improvements is a 0.2% increase in L1 cache capacity (1 bit per line) and one additional tail pointer in the store-buffer.
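A simplified sketch of the load path this implies (names and interfaces are assumptions; coherence and the dynamic queue/buffer/cache partitioning are omitted): the memory dependence predictor's existing hit prediction gates the L1/TLB probes, while retired stores linger in the unified structure as a small dirty-data cache.

// Simplified sketch of a memory-dependence-predictor-gated load path.
#include <cstdint>
#include <deque>
#include <optional>

struct StoreEntry { uint64_t addr; uint64_t data; };

struct UnifiedStoreBuffer {
    std::deque<StoreEntry> entries;  // store-queue/buffer/cache share this storage

    std::optional<uint64_t> lookup(uint64_t addr) const {
        // Youngest-first search so a load observes the most recent store.
        for (auto it = entries.rbegin(); it != entries.rend(); ++it)
            if (it->addr == addr) return it->data;
        return std::nullopt;
    }
};

// 'mdp_predicts_hit' stands in for the memory dependence predictor's verdict,
// which the pipeline already computes for scheduling. When it predicts a
// store-buffer hit, the L1 and TLB probes are skipped; a wrong prediction
// simply falls back to the normal access path.
uint64_t load(const UnifiedStoreBuffer& sb, uint64_t addr, bool mdp_predicts_hit,
              uint64_t (*probe_l1_and_tlb)(uint64_t)) {
    if (mdp_predicts_hit) {
        if (auto hit = sb.lookup(addr)) return *hit;  // dirty data cached for free
    }
    return probe_l1_and_tlb(addr);
}

static uint64_t fake_l1(uint64_t) { return 0; }  // stand-in for the L1/TLB path

int main() {
    UnifiedStoreBuffer sb;
    sb.entries.push_back({0x2000, 7});
    return load(sb, 0x2000, /*mdp_predicts_hit=*/true, fake_l1) == 7 ? 0 : 1;
}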

Place, publisher, year, edition, pages
New York: ACM Press, 2019
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-383473 (URN)
10.1145/3307650.3322269 (DOI)
978-1-4503-6669-4 (ISBN)
Conference
ISCA 2019, June 22–26, Phoenix, AZ
Funder
Knut and Alice Wallenberg Foundation; EU, Horizon 2020, 715283; EU, Horizon 2020, 801051; Swedish Foundation for Strategic Research, SM17-0064
Available from: 2019-06-22 Created: 2019-05-16 Last updated: 2019-07-03. Bibliographically approved.
5. Efficient temporal and spatial load to load forwarding
2019 (English). In: Proc. 52nd International Symposium on Microarchitecture, 2019. Conference paper, Published paper (Refereed)
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-383477 (URN)
Conference
MICRO 2019, October 12–16, Columbus, OH
Available from: 2019-05-16 Created: 2019-05-16 Last updated: 2019-07-03

Open Access in DiVA

fulltext (FULLTEXT01.pdf, 691 kB, application/pdf)
