
A Primer on Compression in the Memory Hierarchy - Softcover

 
9783031006234: A Primer on Compression in the Memory Hierarchy

Synopsis

This synthesis lecture presents the current state of the art in applying low-latency, lossless hardware compression algorithms to cache, memory, and the memory/cache link. There are many non-trivial challenges that must be addressed to make data compression work well in this context. First, since compressed data must be decompressed before it can be accessed, decompression latency ends up on the critical memory access path. This imposes a significant constraint on the choice of compression algorithms. Second, while conventional memory systems store fixed-size entities such as data types, cache blocks, and memory pages, these entities vary in size in a memory system that employs compression. Dealing with variable-size entities in a compressed memory system has a significant impact on how caches are organized and how resources in main memory are managed. We systematically discuss solutions in the open literature to these problems. Chapter 2 provides the foundations of data compression by first introducing the fundamental concept of value locality. We then introduce a taxonomy of compression algorithms and show how previously proposed algorithms fit within that logical framework. Chapter 3 discusses the different ways that cache memory systems can employ compression, focusing on the trade-offs between latency, capacity, and complexity of alternative ways to compact compressed cache blocks. Chapter 4 discusses issues in applying data compression to main memory, and Chapter 5 covers techniques for compressing data on the cache-to-memory links. This book should help a skilled memory system designer understand the fundamental challenges in applying compression to the memory hierarchy and introduce the state-of-the-art techniques for addressing them.
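
To give a concrete feel for the value-locality-based, low-latency schemes the book surveys, here is a minimal sketch in C of a base-plus-delta compressor for a 64-byte cache line; the function names, the 32-bit base, and the 8-bit delta width are assumptions chosen for this example, not an algorithm taken from the book. Decompression is a fixed, short sequence of adds, which is what keeps it tolerable on the critical memory access path.

/* Illustrative sketch only: a toy base+delta compressor for a 64-byte cache
 * line, in the spirit of value-locality-based schemes. Names and field
 * widths are assumptions for the example, not the book's algorithm. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

#define LINE_WORDS 16              /* 16 x 32-bit words = 64-byte cache line */

/* Try to encode a line as one 32-bit base plus 16 signed 8-bit deltas.
 * Returns true and fills 'out' (20 bytes) on success, false otherwise. */
static bool compress_base_delta(const uint32_t line[LINE_WORDS], uint8_t out[20])
{
    uint32_t base = line[0];
    int8_t deltas[LINE_WORDS];

    for (int i = 0; i < LINE_WORDS; i++) {
        int64_t d = (int64_t)line[i] - (int64_t)base;
        if (d < INT8_MIN || d > INT8_MAX)
            return false;          /* deltas too large: keep line uncompressed */
        deltas[i] = (int8_t)d;
    }
    memcpy(out, &base, sizeof base);          /* 4-byte base   */
    memcpy(out + 4, deltas, sizeof deltas);   /* 16 x 1-byte deltas */
    return true;                              /* 64 -> 20 bytes */
}

/* Decompression is a short, fixed sequence of adds, so the latency added on
 * the critical memory access path stays small. */
static void decompress_base_delta(const uint8_t in[20], uint32_t line[LINE_WORDS])
{
    uint32_t base;
    memcpy(&base, in, sizeof base);
    for (int i = 0; i < LINE_WORDS; i++)
        line[i] = base + (int32_t)(int8_t)in[4 + i];
}

int main(void)
{
    /* Pointer-like values clustered near a common base compress well. */
    uint32_t line[LINE_WORDS], restored[LINE_WORDS];
    for (int i = 0; i < LINE_WORDS; i++)
        line[i] = 0x7fff0000u + 8u * (uint32_t)i;

    uint8_t packed[20];
    if (compress_base_delta(line, packed)) {
        decompress_base_delta(packed, restored);
        printf("compressed 64 -> 20 bytes, lossless: %s\n",
               memcmp(line, restored, sizeof line) == 0 ? "yes" : "no");
    }
    return 0;
}

A line whose deltas do not fit is left uncompressed, which is exactly why such a memory system must manage variable-size blocks, the second challenge noted above.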

The information in the "Synopsis" section may refer to different editions of this title.

About the authors

Dr. Somayeh Sardashti earned her Ph.D. in Computer Sciences from the University of Wisconsin-Madison. Her research interests include computer systems and architecture, high-performance and energy-optimized memory hierarchies, and exploiting new memory and hardware technologies for high-performance database systems. She currently works in the Exadata Storage Server and Database Machine group at Oracle Corporation. She won the ACM Student Research Competition at the Grace Hopper Conference in 2013. She holds an M.S. in Computer Sciences from the University of Wisconsin-Madison, another Master's degree, and a B.S. in Computer Engineering from the University of Tehran.

Dr. Angelos Arelakis earned his Ph.D. degree in Computer Science and Engineering in 2015 from Chalmers University of Technology, Sweden. He is a researcher at Chalmers University of Technology and a co-founder of ZeroPoint Technologies Corp. His research focuses on high performance computer architecture, in particular designing cache and memory hierarchies that are efficiently utilized by today's multicore systems, data compression, and reconfigurable computing. He holds an M.Sc. degree in Computer Engineering from Delft University of Technology (Netherlands) and a 5-year Engineering Diploma in Electronics and Computer Engineering from the Technical University of Crete (Greece).
Per Stenström earned his Ph.D. degree in computer engineering in 1990 from Lund University, Sweden. Since 1995 he has been a Professor of Computer Engineering at Chalmers University of Technology, Sweden. His research interests are devoted to high-performance computer architecture, and he has made major contributions especially to high-performance memory systems. He has authored or co-authored 3 textbooks, 130 publications in international journals and conferences, and around ten patents. He regularly serves on the program committees of major conferences in the computer architecture field and is an Associate Editor-in-Chief of the Journal of Parallel and Distributed Computing and a Senior Associate Editor of ACM Transactions on Architecture and Code Optimization. He co-founded the HiPEAC Network of Excellence funded by the European Commission. He has also acted as General and Program Chair for a large number of conferences, including the ACM/IEEE International Symposium on Computer Architecture, the IEEE High-Performance Computer Architecture Symposium, the IEEE International Parallel and Distributed Processing Symposium, and the ACM International Conference on Supercomputing. He is a Member-at-Large of the ACM Council, a Fellow of the ACM and the IEEE, and a member of Academia Europaea, the Royal Swedish Academy of Engineering Sciences, and the Spanish Royal Academy of Engineering.
Prof. David A. Wood is a Professor in the Computer Sciences Department at the University of Wisconsin-Madison. He also holds a courtesy appointment in the Electrical and Computer Engineering Department. Dr. Wood received a B.S. in Electrical Engineering and Computer Science (1981) and a Ph.D. in Computer Science (1990), both from the University of California, Berkeley. Dr. Wood is an ACM Fellow (2005) and an IEEE Fellow (2004), received the University of Wisconsin's H.I. Romnes Faculty Fellowship (1999) and Vilas Associate (2011), and received the National Science Foundation's Presidential Young Investigator award (1991). Dr. Wood is an Associate Editor of ACM Transactions on Architecture and Code Optimization, serves as Past Chair of ACM SIGARCH, served as Program Committee Chairman of ASPLOS-X (2002), and has served on numerous program committees. He is a member of the IEEE Computer Society. Dr. Wood has published over 70 technical papers and is an inventor on thirteen U.S. and international patents, several of which have been licensed to industry.

The information in the "About this book" section may refer to different editions of this title.

Other known editions of the same title

9781627054157: A Primer on Compression in the Memory Hierarchy

Featured edition

ISBN 10: 1627054154 ISBN 13: 9781627054157
Publisher: Morgan & Claypool, 2015
Softcover

Search results for A Primer on Compression in the Memory Hierarchy

Sardashti, Somayeh; Arelakis, Angelos; Stenström, Per; Wood, David A.
Publisher: Springer, 2015
ISBN 10: 3031006232 ISBN 13: 9783031006234
Used paperback

From: Books From California, Simi Valley, CA, U.S.A.
Seller rating: 4 out of 5 stars

Paperback. Condition: Very Good. Item code mon0003658476

Buy used: EUR 21.20
Shipping: EUR 12.30 (from U.S.A. to Italy)
Quantity: 1 available

Somayeh Sardashti; Angelos Arelakis; Per Stenström; David A. Wood
ISBN 10: 3031006232 ISBN 13: 9783031006234
New softcover
Print on Demand

From: moluna, Greven, Germany
Seller rating: 5 out of 5 stars

Condition: New. This is a print-on-demand item and is printed after your order is placed. Item code 608129038

Buy new: EUR 28.42
Shipping: EUR 9.70 (from Germany to Italy)
Quantity: More than 20 available

Sardashti, Somayeh; Arelakis, Angelos; Stenström, Per; Wood, David A.
Publisher: Springer, 2015
ISBN 10: 3031006232 ISBN 13: 9783031006234
New softcover

From: California Books, Miami, FL, U.S.A.
Seller rating: 5 out of 5 stars

Condition: New. Item code I-9783031006234

Buy new: EUR 33.21
Shipping: EUR 7.64 (from U.S.A. to Italy)
Quantity: More than 20 available

Somayeh Sardashti
ISBN 10: 3031006232 ISBN 13: 9783031006234
New paperback
Print on Demand

From: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Seller rating: 5 out of 5 stars

Paperback. Condition: New. Printed on demand (allow 3-4 extra days). 88 pp. English. Item code 9783031006234

Buy new: EUR 29.95
Shipping: EUR 11.00 (from Germany to Italy)
Quantity: 2 available

Sardashti, Somayeh
Publisher: Springer, 12/16/2015
ISBN 10: 3031006232 ISBN 13: 9783031006234
New paperback or softback

From: BargainBookStores, Grand Rapids, MI, U.S.A.
Seller rating: 5 out of 5 stars

Paperback or softback. Condition: New. A Primer on Compression in the Memory Hierarchy 0.37. Book. Item code BBS-9783031006234

Buy new: EUR 31.48
Shipping: EUR 11.45 (from U.S.A. to Italy)
Quantity: 5 available

Sardashti, Somayeh; Arelakis, Angelos; Stenström, Per; Wood, David A.
Publisher: Springer, 2015
ISBN 10: 3031006232 ISBN 13: 9783031006234
New softcover

From: Ria Christie Collections, Uxbridge, United Kingdom
Seller rating: 5 out of 5 stars

Condition: New. Item code ria9783031006234_new

Buy new: EUR 34.46
Shipping: EUR 10.47 (from United Kingdom to Italy)
Quantity: More than 20 available

Somayeh Sardashti
ISBN 10: 3031006232 ISBN 13: 9783031006234
New paperback

From: AHA-BUCH GmbH, Einbeck, Germany
Seller rating: 5 out of 5 stars

Paperback. Condition: New. Print on demand; printed after ordering. Item code 9783031006234

Buy new: EUR 29.95
Shipping: EUR 14.99 (from Germany to Italy)
Quantity: 1 available

Somayeh Sardashti
ISBN 10: 3031006232 ISBN 13: 9783031006234
New paperback

From: buchversandmimpf2000, Emtmannsberg, BAYE, Germany
Seller rating: 5 out of 5 stars

Paperback. Condition: New. Springer Verlag GmbH, Tiergartenstr. 17, 69121 Heidelberg. 88 pp. English. Item code 9783031006234

Buy new: EUR 29.95
Shipping: EUR 15.00 (from Germany to Italy)
Quantity: 2 available

Sardashti, Somayeh; Arelakis, Angelos; Stenström, Per; Wood, David A.
Publisher: Springer, 2015
ISBN 10: 3031006232 ISBN 13: 9783031006234
New softcover

From: Books Puddle, New York, NY, U.S.A.
Seller rating: 4 out of 5 stars

Condition: New. 1st edition NO-PA16APR2015-KAP. Item code 26395061345

Buy new: EUR 40.61
Shipping: EUR 7.64 (from U.S.A. to Italy)
Quantity: 4 available

Sardashti, Somayeh; Arelakis, Angelos; Stenström, Per; Wood, David A.
Publisher: Springer, 2015
ISBN 10: 3031006232 ISBN 13: 9783031006234
New softcover
Print on Demand

From: Majestic Books, Hounslow, United Kingdom
Seller rating: 5 out of 5 stars

Condition: New. Print on Demand. Item code 402364350

Buy new: EUR 41.05
Shipping: EUR 10.31 (from United Kingdom to Italy)
Quantity: 4 available

See 2 more copies of this book
