A key determinant of overall system performance and power dissipation is the cache hierarchy, since access to off-chip memory consumes many more cycles and far more energy than on-chip accesses. In addition, multi-core processors are expected to place ever higher bandwidth demands on the memory system. All these issues make it important to avoid off-chip memory access by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores. Hence, many important problems must be solved: cache resources must be allocated across many cores, data must be placed in cache banks that are near the accessing core, and the most important data must be identified for retention. Finally, difficulties in scaling existing technologies require adapting to and exploiting new technology constraints. The book attempts a synthesis of recent cache research that has focused on innovations for multi-core processors. It is an excellent starting point for early-stage graduate students, researchers, and practitioners who wish to understand the landscape of recent cache research. The book is suitable as a reference for advanced computer architecture classes as well as for experienced researchers and VLSI engineers. Table of Contents: Basic Elements of Large Cache Design / Organizing Data in CMP Last Level Caches / Policies Impacting Cache Hit Rates / Interconnection Networks within Large Caches / Technology / Concluding Remarks
The information in the "Synopsis" section may refer to different editions of this title.
Rajeev Balasubramonian is a Professor at the School of Computing, University of Utah. He received his B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Bombay in 1998. He received his M.S. (2000) and Ph.D. (2003) from the University of Rochester. His primary research interests include memory systems, security, and application-specific architectures. Prof. Balasubramonian is a recipient of an NSF CAREER award, faculty research awards from IBM, Google, and HPE, an Intel Outstanding Research Award, and various teaching awards at the University of Utah. He has co-authored papers that have been selected as IEEE Micro Top Picks (2007 and 2010) and that have received three best paper awards.
The information in the "About this book" section may refer to different editions of this title.
EUR 2.26 shipping to U.S.A.
EUR 7.67 shipping to U.S.A.
From: Best Price, Torrance, CA, U.S.A.
Condition: New. SUPER FAST SHIPPING. Item number 9783031006067
Quantity: 2 available
From: GreatBookPrices, Columbia, MD, U.S.A.
Condition: New. Item number 44569593-n
Quantity: More than 20 available
From: California Books, Miami, FL, U.S.A.
Condition: New. Item number I-9783031006067
Quantity: More than 20 available
From: GreatBookPrices, Columbia, MD, U.S.A.
Condition: As New. Unread book in perfect condition. Item number 44569593
Quantity: More than 20 available
From: Books Puddle, New York, NY, U.S.A.
Condition: New. 1st edition NO-PA16APR2015-KAP. Item number 26395061400
Quantity: 4 available
From: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on Demand. Item number 402364231
Quantity: 4 available
From: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. In. Item number ria9783031006067_new
Quantity: More than 20 available
From: Chiron Media, Wallingford, United Kingdom
PF. Condition: New. Item number 6666-IUK-9783031006067
Quantity: 10 available
From: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Item number 44569593-n
Quantity: More than 20 available
From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand, so delivery takes 3-4 days longer. New stock. Synopsis as above. 156 pp. English. Item number 9783031006067
Quantity: 2 available