Uppsala University Publications
Eklöv, David
Publications (10 of 17)
Eklöv, D., Nikoleris, N. & Hagersten, E. (2014). A software based profiling method for obtaining speedup stacks on commodity multi-cores. In: 2014 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS): ISPASS 2014. Paper presented at ISPASS 2014, March 23-25, Monterey, CA (pp. 148-157). IEEE Computer Society
A software based profiling method for obtaining speedup stacks on commodity multi-cores
2014 (English). In: 2014 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS): ISPASS 2014, IEEE Computer Society, 2014, p. 148-157. Conference paper, Published paper (Refereed)
Abstract [en]

A key quality metric for multi-threaded programs is how their execution times scale as the number of threads increases. However, several bottlenecks can limit a multi-threaded program's scalability, e.g., contention for shared cache capacity, contention for off-chip memory bandwidth, and synchronization overheads. To improve the scalability of a multi-threaded program, it is vital to be able to quantify how the program is impacted by these scalability bottlenecks. We present a software profiling method for obtaining speedup stacks. A speedup stack reports how much each scalability bottleneck limits the scalability of a multi-threaded program, thereby quantifying how much scalability can be improved by eliminating a given bottleneck. A software developer can use this information to determine which optimizations are most likely to improve scalability, while a computer architect can use it to analyze the resource demands of emerging workloads. The proposed method profiles the program on real commodity multi-cores (i.e., no simulation required) using existing performance counters. Consequently, the obtained speedup stacks accurately account for all idiosyncrasies of the machine on which the program is profiled. While the main contribution of this paper is the profiling method for obtaining speedup stacks, we present several examples of how speedup stacks can be used to analyze the resource requirements of multi-threaded programs. Furthermore, we discuss how their scalability can be improved by both software developers and computer architects.

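The abstract describes speedup stacks only at a high level. As a hedged illustration (not the authors' actual method), the sketch below splits the speedup lost relative to ideal linear scaling across named bottlenecks, in proportion to hypothetical counter-measured overhead cycles; all names and numbers are invented for the example.

```python
# Hypothetical sketch: attribute the speedup lost relative to ideal
# (linear) scaling to named bottlenecks, in proportion to the extra
# cycles that performance counters charge to each bottleneck.

def speedup_stack(threads, measured_speedup, bottleneck_cycles):
    """Return each bottleneck's share of the lost speedup.

    bottleneck_cycles maps a bottleneck name to the extra cycles
    attributed to it (e.g., read from hardware performance counters).
    """
    lost = threads - measured_speedup          # ideal speedup is `threads`
    total = sum(bottleneck_cycles.values())
    return {name: lost * cycles / total
            for name, cycles in bottleneck_cycles.items()}

# Illustrative numbers: 4 threads reach only a 2.5x speedup.
stack = speedup_stack(4, 2.5, {
    "cache_contention": 3.0e9,
    "memory_bandwidth": 4.5e9,
    "synchronization": 1.5e9,
})
```

Stacking these components on top of the measured speedup sums to the ideal speedup of 4, which is the visual form a speedup stack takes.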
Place, publisher, year, edition, pages
IEEE Computer Society, 2014
Series
IEEE International Symposium on Performance Analysis of Systems and Software-ISPASS
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-224230 (URN), 10.1109/ISPASS.2014.6844479 (DOI), 000364102000025 (), 978-1-4799-3604-5 (ISBN)
Conference
ISPASS 2014, March 23-25, Monterey, CA
Projects
UPMARC
Available from: 2014-05-06. Created: 2014-05-06. Last updated: 2018-12-14. Bibliographically approved
Nikoleris, N., Eklöv, D. & Hagersten, E. (2014). Extending statistical cache models to support detailed pipeline simulators. In: 2014 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). Paper presented at ISPASS 2014, March 23-25, Monterey, CA (pp. 86-95). IEEE Computer Society
Extending statistical cache models to support detailed pipeline simulators
2014 (English). In: 2014 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), IEEE Computer Society, 2014, p. 86-95. Conference paper, Published paper (Refereed)
Abstract [en]

Simulators are widely used in computer architecture research. While detailed cycle-accurate simulations provide useful insights, studies using modern workloads typically require days or weeks. Evaluating many design points only exacerbates the simulation overhead. Recent works propose methods with good accuracy that reduce the simulation overhead either by sampling the execution (e.g., SMARTS and SimPoint) or by using fast analytical models of the simulated designs (e.g., Interval Simulation). While these techniques significantly reduce the simulation overhead, modeling processor components with large state, such as the last-level cache, requires costly simulation to warm them up. Statistical simulation methods, such as SMARTS, report that warm-up accounts for 99% of the simulation overhead, while only 1% of the time is spent simulating the target design. This paper proposes WarmSim, a method that eliminates the need to warm up the cache. WarmSim builds on top of a statistical cache modeling technique and extends it to accurately model not only the miss ratio but also the outcome of every cache request. WarmSim takes as input an application's memory reuse information, which is hardware independent; different cache configurations can therefore be simulated using the same input data. We demonstrate that this approach can be used to estimate the CPI of the SPEC CPU2006 benchmarks with an average error of 1.77%, reducing the overhead compared to a simulation with a 10M-instruction warm-up by a factor of 50.

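As a hedged sketch of the statistical idea underlying such models (not WarmSim itself): if each memory access is annotated with its hardware-independent LRU stack distance, its hit/miss outcome for any fully associative LRU cache size follows directly, with no warm-up simulation. The trace below is invented for illustration.

```python
# Minimal stack-distance model: an access hits in a fully associative
# LRU cache of `cache_lines` lines iff its stack distance (distinct
# lines touched since the previous access to the same line) is smaller
# than the cache size. None marks a cold (first) access.

def classify(stack_distances, cache_lines):
    return ["hit" if d is not None and d < cache_lines else "miss"
            for d in stack_distances]

def miss_ratio(stack_distances, cache_lines):
    outcomes = classify(stack_distances, cache_lines)
    return outcomes.count("miss") / len(outcomes)

# One hardware-independent reuse profile serves every cache size:
trace = [None, None, 0, 1, None, 2, 1, 0]
```

The same `trace` can be re-evaluated for a 2-line and a 4-line cache without rerunning the application, which is the property that lets different cache configurations share the same input data.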
Place, publisher, year, edition, pages
IEEE Computer Society, 2014
Series
IEEE International Symposium on Performance Analysis of Systems and Software-ISPASS
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-224221 (URN), 10.1109/ISPASS.2014.6844464 (DOI), 000364102000010 (), 978-1-4799-3604-5 (ISBN)
Conference
ISPASS 2014, March 23-25, Monterey, CA
Projects
UPMARC
Available from: 2014-05-06. Created: 2014-05-06. Last updated: 2018-12-14. Bibliographically approved
Eklöv, D., Nikoleris, N., Black-Schaffer, D. & Hagersten, E. (2013). Bandwidth Bandit: Quantitative Characterization of Memory Contention. In: Proc. 11th International Symposium on Code Generation and Optimization: CGO 2013. Paper presented at CGO 2013, 23-27 February, Shenzhen, China (pp. 99-108). IEEE Computer Society
Bandwidth Bandit: Quantitative Characterization of Memory Contention
2013 (English). In: Proc. 11th International Symposium on Code Generation and Optimization: CGO 2013, IEEE Computer Society, 2013, p. 99-108. Conference paper, Published paper (Refereed)
Abstract [en]

On multicore processors, co-executing applications compete for shared resources, such as cache capacity and memory bandwidth. This leads to suboptimal resource allocation and can cause substantial performance loss, which makes it important to effectively manage these shared resources. This, however, requires insights into how the applications are impacted by such resource sharing. While there are several methods to analyze the performance impact of cache contention, less attention has been paid to general, quantitative methods for analyzing the impact of contention for memory bandwidth. To this end we introduce the Bandwidth Bandit, a general, quantitative profiling method for analyzing the performance impact of contention for memory bandwidth on multicore machines. The profiling data captured by the Bandwidth Bandit is presented in a bandwidth graph. This graph accurately captures the measured application's performance as a function of its available memory bandwidth, and enables us to determine how much the application suffers when its available bandwidth is reduced. To demonstrate the value of this data, we present a case study in which we use the bandwidth graph to analyze the performance impact of memory contention when co-running multiple instances of a single-threaded application.

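The bandwidth graph plots measured performance against available memory bandwidth. A hedged sketch of how such a graph might be consumed follows; the sample points and the linear interpolation scheme are assumptions for illustration, not the paper's data or method.

```python
# Interpolate a bandwidth graph: (available GB/s, performance) samples,
# e.g., gathered while a bandit co-runner steals increasing amounts of
# memory bandwidth from the measured application.

def performance_at(graph, bandwidth):
    """Linearly interpolate performance (e.g., IPC) at `bandwidth` GB/s."""
    graph = sorted(graph)
    for (b0, p0), (b1, p1) in zip(graph, graph[1:]):
        if b0 <= bandwidth <= b1:
            t = (bandwidth - b0) / (b1 - b0)
            return p0 + t * (p1 - p0)
    raise ValueError("bandwidth outside the measured range")

# Hypothetical samples: performance saturates once bandwidth is ample.
graph = [(2.0, 0.60), (4.0, 0.90), (8.0, 1.00)]
```

With such a graph, `performance_at(graph, 3.0)` estimates how much the application would suffer if contention left it only 3 GB/s of bandwidth.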
Place, publisher, year, edition, pages
IEEE Computer Society, 2013
Keywords
bandwidth, memory, caches
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:uu:diva-194101 (URN), 10.1109/CGO.2013.6494987 (DOI), 000318700200010 (), 978-1-4673-5524-7 (ISBN)
Conference
CGO 2013, 23-27 February, Shenzhen, China
Projects
UPMARC
Funder
Swedish Research Council
Available from: 2013-04-18. Created: 2013-02-08. Last updated: 2018-12-14. Bibliographically approved
Eklöv, D. (2012). A Profiling Method for Analyzing Scalability Bottlenecks on Multicores.
A Profiling Method for Analyzing Scalability Bottlenecks on Multicores
2012 (English). Report (Other academic)
Publisher
p. 12
National Category
Computer Systems
Identifiers
urn:nbn:se:uu:diva-182453 (URN)
Available from: 2012-10-10. Created: 2012-10-10. Last updated: 2018-06-28. Bibliographically approved
Eklöv, D., Nikoleris, N., Black-Schaffer, D. & Hagersten, E. (2012). Bandwidth bandit: Quantitative characterization of memory contention. In: Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT. Paper presented at 21st International Conference on Parallel Architectures and Compilation Techniques, PACT 2012, 19 September 2012 through 23 September 2012, Minneapolis, MN, USA (pp. 457-458).
Bandwidth bandit: Quantitative characterization of memory contention
2012 (English). In: Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT, 2012, p. 457-458. Conference paper, Published paper (Refereed)
Abstract [en]

Applications that are co-scheduled on a multi-core compete for shared resources, such as cache capacity and memory bandwidth. The performance degradation resulting from this contention can be substantial, which makes it important to effectively manage these shared resources. This, however, requires quantitative insight into how applications are impacted by such contention. In this paper we present a quantitative method to measure applications' sensitivities to different degrees of contention for off-chip memory bandwidth on real hardware. We then use the data captured with our profiling method to estimate the throughput of a set of co-running instances of a single-threaded application.

Keywords
Memory contention, Performance prediction, Cache capacity, Memory bandwidths, Multi core, Off-chip memories, Performance degradation, Profiling methods, Quantitative characterization, Quantitative method, Shared resources, Single-threaded, Parallel architectures
National Category
Natural Sciences
Identifiers
urn:nbn:se:uu:diva-186828 (URN), 10.1145/2370816.2370894 (DOI), 978-1-4503-1182-3 (ISBN)
Conference
21st International Conference on Parallel Architectures and Compilation Techniques, PACT 2012, 19 September 2012 through 23 September 2012, Minneapolis, MN, USA
Available from: 2012-12-07. Created: 2012-11-29. Last updated: 2012-12-07. Bibliographically approved
Eklöv, D. (2012). Profiling Methods for Memory Centric Software Performance Analysis. (Doctoral dissertation). Uppsala: Acta Universitatis Upsaliensis
Profiling Methods for Memory Centric Software Performance Analysis
2012 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

To reduce latency and increase bandwidth to memory, modern microprocessors are often designed with deep memory hierarchies including several levels of caches. For such microprocessors, both the latency and the bandwidth to off-chip memory are typically about two orders of magnitude worse than the latency and bandwidth to the fastest on-chip cache. Consequently, the performance of many applications is largely determined by how well they utilize the caches and bandwidths in the memory hierarchy. For such applications, there are two principal approaches to improve performance: optimize the memory hierarchy and optimize the software. In both cases, it is important to both qualitatively and quantitatively understand how the software utilizes and interacts with the resources (e.g., cache and bandwidths) in the memory hierarchy.

This thesis presents several novel profiling methods for memory-centric software performance analysis. The goal of these profiling methods is to provide general, high-level, quantitative information describing how the profiled applications utilize the resources in the memory hierarchy, and thereby help software and hardware developers identify opportunities for memory related hardware and software optimizations. For such techniques to be broadly applicable the data collection should have minimal impact on the profiled application, while not being dependent on custom hardware and/or operating system extensions. Furthermore, the resulting profiling information should be accurate and easy to interpret.

While several use cases are presented, the main focus of this thesis is the design and evaluation of the core profiling methods. These core profiling methods measure and/or estimate how high-level performance metrics, such as miss and fetch ratios, off-chip bandwidth demand, and execution rate, are affected by the amount of resources the profiled applications receive. This thesis shows that such high-level profiling information can be accurately obtained with very little impact on the profiled applications and without requiring costly simulations or custom hardware support.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2012. p. 51
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 1000
National Category
Computer Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:uu:diva-182594 (URN), 978-91-554-8541-2 (ISBN)
Public defence
2012-12-21, Room 2446, Polacksbacken, Lägerhyddsvägen 2, Uppsala, 13:00 (English)
Projects
UPMARC
Available from: 2012-11-29. Created: 2012-10-11. Last updated: 2018-01-12. Bibliographically approved
Eklöv, D., Nikoleris, N., Black-Schaffer, D. & Hagersten, E. (2012). Quantitative Characterization of Memory Contention. Uppsala: Uppsala universitet
Quantitative Characterization of Memory Contention
2012 (English). Report (Other academic)
Abstract [en]

On multicore processors, co-executing applications compete for shared resources, such as cache capacity and memory bandwidth. This leads to suboptimal resource allocation and can cause substantial performance loss, which makes it important to effectively manage these shared resources. This, however, requires insights into how the applications are impacted by such resource sharing.

While there are several methods to analyze the performance impact of cache contention, less attention has been paid to general, quantitative methods for analyzing the impact of contention for memory bandwidth. To this end we introduce the Bandwidth Bandit, a general, quantitative, profiling method for analyzing the performance impact of contention for memory bandwidth on multicore machines.

The profiling data captured by the Bandwidth Bandit is presented in a bandwidth graph. This graph accurately captures the measured application's performance as a function of its available memory bandwidth, and enables us to determine how much the application suffers when its available bandwidth is reduced. To demonstrate the value of this data, we present a case study in which we use the bandwidth graph to analyze the performance impact of memory contention when co-running multiple instances of a single-threaded application.

Place, publisher, year, edition, pages
Uppsala: Uppsala universitet, 2012. p. 10
Series
Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2012-029
National Category
Computer Systems
Research subject
Computer Systems Sciences
Identifiers
urn:nbn:se:uu:diva-182445 (URN)
Available from: 2013-03-28. Created: 2012-10-10. Last updated: 2013-03-28. Bibliographically approved
Eklöv, D., Nikoleris, N., Black-Schaffer, D. & Hagersten, E. (2011). Cache Pirating: Measuring the curse of the shared cache.
Cache Pirating: Measuring the curse of the shared cache
2011 (English). Report (Other academic)
Series
Technical report / Department of Information Technology, Uppsala University, ISSN 1404-3203 ; 2011-001
National Category
Computer Sciences
Identifiers
urn:nbn:se:uu:diva-150548 (URN)
Projects
UPMARC
Available from: 2011-01-10. Created: 2011-03-31. Last updated: 2018-01-12. Bibliographically approved
Eklöv, D., Nikoleris, N., Black-Schaffer, D. & Hagersten, E. (2011). Cache Pirating: Measuring the Curse of the Shared Cache. In: Proc. 40th International Conference on Parallel Processing. Paper presented at ICPP 2011 (pp. 165-175). IEEE Computer Society
Cache Pirating: Measuring the Curse of the Shared Cache
2011 (English). In: Proc. 40th International Conference on Parallel Processing, IEEE Computer Society, 2011, p. 165-175. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE Computer Society, 2011
National Category
Computer Engineering
Identifiers
urn:nbn:se:uu:diva-181254 (URN), 10.1109/ICPP.2011.15 (DOI), 978-1-4577-1336-1 (ISBN)
Conference
ICPP 2011
Projects
UPMARC, CoDeR-MP
Available from: 2011-10-17. Created: 2012-09-20. Last updated: 2018-12-14. Bibliographically approved
Eklöv, D. (2011). Efficient methods for application performance analysis. (Licentiate dissertation). Uppsala University
Efficient methods for application performance analysis
2011 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

To reduce latency and increase bandwidth to memory, modern microprocessors are designed with deep memory hierarchies including several levels of caches. For such microprocessors, the service time for fetching data from off-chip memory is about two orders of magnitude longer than fetching data from the level-one cache. Consequently, the performance of applications is largely determined by how well they utilize the caches in the memory hierarchy, captured by their miss ratio curves. However, efficiently obtaining an application's miss ratio curve and interpreting its performance implications is hard. This task becomes even more challenging when analyzing application performance on multicore processors where several applications/threads share caches and memory bandwidths. To accomplish this, we need powerful techniques that capture applications' cache utilization and provide intuitive performance metrics.

In this thesis we present three techniques for analyzing application performance, StatStack, StatCC and Cache Pirating. Our main focus is on providing memory hierarchy related performance metrics such as miss ratio, fetch ratio and bandwidth demand, but also execution rate. These techniques are based on profiling information, requiring both runtime data collection and post processing. For such techniques to be broadly applicable the data collection has to have minimal impact on the profiled application, allow profiling of unmodified binaries, and not depend on custom hardware and/or operating system extensions. Furthermore, the information provided has to be accurate and easy to interpret by programmers, the runtime environment and compilers.

StatStack estimates an application's miss ratio curve, StatCC estimates the miss ratios of co-running applications sharing the last-level cache, and Cache Pirating measures any desired performance metric available through hardware performance counters as a function of cache size. We have experimentally shown that our methods are both efficient and accurate. The runtime information required by StatStack and StatCC can be collected with an average runtime overhead of 40%. The Cache Pirating method measures the desired performance metrics with an average runtime overhead of 5%.

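Cache Pirating co-runs the target with a "pirate" that keeps a working set of a chosen size resident, so the target effectively receives the remaining share of the shared cache while performance counters record its metrics. The toy below shows only a pirate-style access pass, touching one element per cache line; the counter reading and the policy for sizing the pirate's set are platform-specific and omitted, and all constants are assumptions for illustration.

```python
# Toy pirate pass: touch one element per cache line of a working set so
# that the whole set stays resident in the shared cache. (Real
# counter-based measurement of the co-running target is omitted.)

def pirate_pass(buf, line_bytes=64, elem_bytes=8):
    """Sweep the working set, reading one element per cache line."""
    step = line_bytes // elem_bytes   # elements per cache line
    total = 0
    for i in range(0, len(buf), step):
        total += buf[i]               # one read keeps the line warm
    return total

working_set = list(range(4096))  # ~32 KB of payload at 8 bytes/element
```

Repeating `pirate_pass(working_set)` in a loop while the target runs is the essence of the stealing side; sweeping the working-set size then yields the target's metrics as a function of its remaining cache share.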
Place, publisher, year, edition, pages
Uppsala University, 2011
Series
Information technology licentiate theses: Licentiate theses from the Department of Information Technology, ISSN 1404-5117 ; 2011-001
National Category
Computer Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:uu:diva-227616 (URN)
Projects
UPMARC
Available from: 2011-02-18. Created: 2014-06-28. Last updated: 2018-01-11. Bibliographically approved