
Efficient utilization of GPGPU cache hierarchy / Mahmoud Khairy Abdelsadek Abdallah ; supervised by Amr G. Wassal

By: Abdallah, Mahmoud Khairy Abdelsadek
Contributor(s): Wassal, Amr G. [supervisor]
Material type: Text
Language: English
Publication details: Cairo : Mahmoud Khairy Abdelsadek Abdallah, 2015
Description: 66 p. : charts, plans ; 30 cm
Other title:
  • الاستخدام الكفء للذاكرة المرحلية لوحدات معالجة الرسومات [Added title page title] (English: Efficient utilization of cache memory for graphics processing units)
Available additional physical forms:
  • Issued also as CD
Dissertation note: Thesis (M.Sc.) - Cairo University - Faculty of Engineering - Department of Computer Engineering
Summary: Throughput processors such as GPGPUs rely on massive multithreading to hide long memory latencies. However, the large number of threads a GPGPU executes concurrently leads to severe cache thrashing and conflict misses. In this work, we propose a low-cost, thrashing-resistant, conflict-avoiding, streaming-aware GPGPU cache management scheme that uses the GPGPU cache resources efficiently and addresses these problems. The proposed method employs three orthogonal techniques. First, it dynamically detects streaming applications and bypasses the cache for them. Second, Dynamic Warp Throttling via Cores Sampling (DWT-CS) alleviates cache thrashing: DWT-CS runs an exhaustive search over cores to find the number of active warps that achieves the highest performance. Third, a better cache indexing function, Pseudo Random Interleaving Cache (PRIC), based on polynomial modulus mapping, mitigates associativity stalls and eliminates conflict misses. The proposed method improves the average performance of streaming and contention applications by 1.2X and 2.3X, respectively.
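The polynomial-modulus mapping that PRIC builds on can be sketched as follows: the block address is treated as a polynomial over GF(2) and reduced modulo an irreducible polynomial whose degree equals the number of set-index bits, so the remainder, rather than the low address bits, selects the cache set. This is a minimal illustration of the general technique, not the thesis's implementation; the function name and the irreducible polynomial below are illustrative choices, not taken from the thesis.

```python
def poly_mod_index(addr: int, poly: int, index_bits: int) -> int:
    """Cache set index via polynomial modulus mapping over GF(2).

    `addr` is the block address, interpreted as a polynomial over GF(2).
    `poly` is an irreducible polynomial of degree `index_bits`, encoded
    with its leading term included (e.g. x^3 + x + 1 -> 0b1011).
    Returns the remainder of addr mod poly: an `index_bits`-wide set index.
    """
    rem = 0
    # CRC-style long division: shift address bits in from the top,
    # subtracting (XOR over GF(2)) the polynomial whenever the running
    # remainder reaches the polynomial's degree.
    for bit in range(addr.bit_length() - 1, -1, -1):
        rem = (rem << 1) | ((addr >> bit) & 1)
        if rem >> index_bits:
            rem ^= poly
    return rem
```

With conventional modulo indexing, addresses strided by the number of sets all collapse onto one set (the classic conflict-miss pattern the abstract targets); the polynomial remainder instead spreads such a stride across the sets. For example, with 8 sets and the polynomial x^3 + x + 1, the addresses 0, 8, 16, ..., 56 map to 8 distinct set indices.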

