Leveraging RDMA to Build a Real-Time Cloud for the Internet of Things
If you have a question about this talk, please contact Liang Wang.

We’ve all read the hype about the Internet of Things: a technology trend that seems endlessly about to happen, yet has been stubbornly hard to integrate with cloud computing. In this talk I want to ask why this has been so, and how we can fix it. I’ll start by describing work Cornell has done over the past few years on creating a cloud platform to host “smart power grid” applications. This involves (1) new rack-scale management solutions aimed at applications that need to run 24×7, (2) replication with ultra-fast updates for scalable real-time responsiveness, and (3) new real-time storage solutions to enable a new kind of big-data temporal computing.

But new models also create huge performance puzzles. For our work, these center on how to overcome Brewer’s CAP principle. CAP was about performance tradeoffs, and the key to conquering it is to leverage RDMA and NVRAM hardware, which offer dramatic speedups on critical data paths. By working from the ground up, and using RDMA and NVRAM as accelerators for data replication, storage and management, we can achieve huge speedups compared with older styles of cloud computing, even when consistency and fault-tolerance are required. This work is all open source, and should be as useful on cloud systems as on clusters, as long as the cloud runs a container OS: with older styles of cloud virtualization, it can be incredibly hard to offer scheduling guarantees, and technologies like RDMA and NVRAM are very hard to virtualize. Containers eliminate both of those obstacles.

Bio: Ken Birman has been a systems researcher and faculty member at Cornell University since receiving his PhD from UC Berkeley in 1981. He is best known for his work on virtually synchronous process group computing models (an early version of what has become known as the Paxos family of protocols), and his software has been widely used.
The Isis Toolkit that Ken built ran the NYSE for more than a decade, and is still used to operate many aspects of the communication system in the French air traffic control system. A follow-on named Vsync is widely used in graduate courses that teach distributed computing. This talk is based on his newest system, called Derecho.

This talk is part of the Computer Laboratory Systems Research Group Seminar series.