
Usenix FAST 2011

I am at the FAST 2011 conference this year, presenting RAMCloud as a poster. See Poster at USENIX FAST 2011 for what I came up with. There have been some interesting talks at the conference so far, with entire sessions devoted to flash/SSDs and data de-duplication.

One of the papers at the conference claimed that 4K was the average size of files on a Windows machine, measured over multiple years at Microsoft. Another suggested that flash lifetimes are so poor that writes should be de-duplicated before they reach storage. Yet another flash paper wanted to use value locality (just another name for dedup?) to improve the performance of writes.
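The dedup-before-flash idea boils down to hashing each incoming block and skipping the physical write when identical content is already stored. Here is a rough Python sketch of that idea; the DedupWriteCache class and its device interface are invented for illustration and are not taken from any of the papers:

```python
import hashlib

class DedupWriteCache:
    """Toy content-addressed write layer: only blocks whose content has
    not been seen before are passed down to the underlying flash device."""

    def __init__(self, device):
        self.device = device   # assumed to expose write(data) -> physical address
        self.seen = {}         # content hash -> physical address
        self.mapping = {}      # logical address -> physical address

    def write(self, logical_addr, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.seen:
            # Duplicate content: point the logical block at the existing
            # copy instead of writing it again, saving flash wear.
            self.mapping[logical_addr] = self.seen[digest]
        else:
            phys = self.device.write(data)
            self.seen[digest] = phys
            self.mapping[logical_addr] = phys
```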

 

Lingo learned –

Wear Leveling – The process of spreading writes across different physical locations on a solid-state device so that no single physical address receives too many writes. This is required because flash cells endure only a limited number of write cycles before they become unreliable!
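A minimal sketch of the policy, with made-up names and a deliberately simplified interface (a real flash translation layer does far more than this):

```python
import random

class WearLevelingAllocator:
    """Toy wear-leveling policy: place the next write on one of the
    least-worn physical blocks so wear spreads evenly across the device."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        least_worn = min(self.erase_counts)
        candidates = [i for i, c in enumerate(self.erase_counts) if c == least_worn]
        block = random.choice(candidates)  # break ties arbitrarily
        self.erase_counts[block] += 1      # count the wear this write causes
        return block
```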

Erasure code – An error-correction method that appends redundant symbols to the original message so the data can still be recovered when some pieces are lost. Parity codes are a special case.
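For the parity special case, a tiny worked example in Python (plain XOR parity, written just for illustration): if any one data block is lost, XOR-ing the survivors with the parity block gets it back.

```python
from functools import reduce

def parity_block(blocks):
    """XOR parity over equal-sized blocks (the simplest erasure code)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def recover_missing(surviving_blocks, parity):
    """Rebuild the single missing block from the survivors plus the parity."""
    return parity_block(surviving_blocks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = parity_block(data)
assert recover_missing([data[0], data[2]], parity) == data[1]  # lost block recovered
```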

 

Interesting aside – several attendees were updating their company wikis with FAST 2011 trip reports as the talks were going on! I was mostly using an Emacs buffer.


One thought on “Usenix FAST 2011”

  1. The conference report has been published at http://static.usenix.org/publications/login/2011-06/openpdfs/FAST11reports.pdf. The excerpt regarding the RAMCloud poster was:

     Thursday Poster Session
     First set of posters summarized by Shivaram Venkataraman (venkata4@illinois.edu)

     RAMCloud: Scalable Storage System in Memory
     Nanda Kumar Jayakumar, Diego Ongaro, Stephen Rumble, Ryan Stutsman, John Ousterhout, and Mendel Rosenblum, Stanford University

     Nanda Kumar Jayakumar presented RAMCloud, a cluster-wide in-memory storage system. The storage system is designed for applications which require low latency accesses and was based on a log-structured design. Updates to the system are appended to an in-memory log and these updates are then asynchronously flushed to disk to ensure reliability. Nanda explained that the design was also optimized for high throughput and acknowledged that while the cost of building such a system might be high, there were many real-world applications for which this would be affordable.
