[Beowulf] BIG 'ram' using SSDs - was single machine with 500 GB of RAM

Vincent Diepeveen diep at xs4all.nl
Wed Jan 9 10:21:42 PST 2013


On Jan 9, 2013, at 4:33 PM, Ellis H. Wilson III wrote:

> On 01/09/2013 08:27 AM, Vincent Diepeveen wrote:
>> What would be a rather interesting thought for building a single box
>> dirt cheap with huge 'RAM' is the idea of having one fast RAID array
>> of SSDs function as the 'RAM'.
>
> This may be a more inexpensive route, but let's all note that the raw
> latency difference between DDR2/3 RAM and /any/ SSD is multiple orders
> of magnitude.  So for a single-threaded application that has been asked
> to run entirely in RAM, I have a strong suspicion that RAM latencies
> are what it really does need -- not just reasonable latency and high
> throughput.  But we should await Jorg's response on the nature of the
> application to better flesh that out.
>

I kind of disagree here.

Random-access latency on a 4-socket box to a block of 500 GB of RAM will
be in the 600 ns range, and the total calculation will probably take
several microseconds (depending upon what you do).

5 TB of SSD will be faster than 60 us. That's a factor of 100 slower in
case processing that data costs nearly nothing, and a factor of 50 in
case it costs some time to process the data.

If more RAM speeds you up exponentially (which is the claim behind saying
that a single core with 500 GB is faster than 2000 cores with 50 GB),
then using 5 TB at 60 us latencies as the 'RAM' is going to beat the crap
out of that single core using 500 GB of RAM.
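
As a back-of-envelope check of those factors (a minimal sketch in C; the
600 ns and 60 us figures are the estimates above, and the nonzero
per-access compute cost is just an example value):

    /* Back-of-envelope slowdown of SSD-as-'RAM' versus real RAM.
     * Latencies are the estimates from the discussion above; the
     * nonzero per-access compute cost is a hypothetical example. */
    #include <stdio.h>

    int main(void)
    {
        double ram_lat = 600e-9;             /* ~600 ns random access, 4-socket box */
        double ssd_lat = 60e-6;              /* ~60 us random access, SSD RAID      */
        double compute[] = { 0.0, 600e-9 };  /* "nearly nothing" vs. some work      */

        for (int i = 0; i < 2; i++) {
            double ram = ram_lat + compute[i];
            double ssd = ssd_lat + compute[i];
            printf("compute %4.0f ns: SSD path is %.0fx slower per access\n",
                   compute[i] * 1e9, ssd / ram);
        }
        return 0;
    }

With zero compute per access that prints a factor of about 100, and with
some work per access it drops to roughly 50, which is where the numbers
above come from.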


>> You can then get a bandwidth of 2 GB/s to the SSDs' "RAM" pretty
>> easily, and for some calculations that bandwidth might be enough,
>> given that you can then parallelize across a few cores.
>
> I am at a loss as to how you can achieve that high a bandwidth "pretty
> easily."

Most modern RAID controllers easily deliver close to 3 GB/s.

> In the /absolute/ best case a single SATA SSD can serve reads
> at close to 400-500MB/s, and software RAIDing them will definitely not
> get you the 4x you speak of.

Easily 2.7 GB/s, hands down, with a $300 RAID controller.

Just put 16 SSDs in that array. This is not rocket science!
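
For what it's worth, here is a rough way to check what sequential
bandwidth such an array actually delivers (a sketch only, assuming Linux;
/dev/md0 is a placeholder for whatever block device the array shows up
as, and it needs root to read):

    /* Rough sequential-read bandwidth check on a RAID-of-SSDs block
     * device.  Large O_DIRECT reads, timed with CLOCK_MONOTONIC.
     * The device path is a placeholder; read-only, run as root. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define BLOCK (16 * 1024 * 1024)   /* 16 MiB per read */
    #define COUNT 256                  /* 4 GiB total     */

    int main(void)
    {
        int fd = open("/dev/md0", O_RDONLY | O_DIRECT);  /* hypothetical array */
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, BLOCK)) return 1;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < COUNT; i++)
            if (read(fd, buf, BLOCK) != BLOCK) { perror("read"); return 1; }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.2f GB/s\n", (double)BLOCK * COUNT / s / 1e9);
        free(buf);
        close(fd);
        return 0;
    }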

>   Moreover, most hardware RAID cards aren't built for that level of
> IOP/s or bandwidth since they usually have HDDs strapped to them, so
> you are going to have to get a very pricey HW RAID controller to try
> and achieve this.  And whether or not you are bottlenecked at the end
> of the day after the RAID controller is beyond my expertise.  Does
> anyone have ideas/suggestions on good RAID controllers for SSD-specific
> workloads?
>
> To get anywhere near the aforementioned number, you are likely going
> to have to drop a pretty penny on a Fusion-IO or Virident PCI-Express
> Flash Device -- not buy just any SSD and expect to take their quoted
> numbers and 4x them (much less get the quoted numbers).  I suspect
> going the RAM route may be comparably costly at this point.
>
>> The random latency to such an SSD RAID might be 70 microseconds on
>> average, maybe even faster with specific SSDs.
>
> Random read maybe -- certainly not random write latency.  Again, it's
> probably best to wait on Jorg to comment on the nature of the
> application to decide whether we should care about read or write
> latency more.

That's all going in parallel. A single SSD has 16 or so parallel
channels. Just pick a different channel each time.
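
Something like the following sketch keeps many random reads outstanding
so those channels can work in parallel (the device path, device size and
thread count are placeholder assumptions):

    /* Sketch: keep many small random reads in flight so the SSDs'
     * internal channels work in parallel.  Paths and sizes are
     * placeholders; error handling is minimal. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define THREADS  16                 /* outstanding requests         */
    #define READS    4096               /* random reads per thread      */
    #define IOSIZE   4096               /* 4 KiB per request            */
    #define DEVSIZE  (500ULL << 30)     /* pretend 500 GB of 'RAM'      */

    static int fd;

    static void *worker(void *arg)
    {
        unsigned seed = (unsigned)(long)arg + 1;
        char *buf = aligned_alloc(4096, IOSIZE);
        for (int i = 0; i < READS; i++) {
            off_t off = ((off_t)rand_r(&seed) % (DEVSIZE / IOSIZE)) * IOSIZE;
            if (pread(fd, buf, IOSIZE, off) != IOSIZE)
                perror("pread");
        }
        free(buf);
        return NULL;
    }

    int main(void)
    {
        fd = open("/dev/md0", O_RDONLY | O_DIRECT);   /* hypothetical array */
        if (fd < 0) { perror("open"); return 1; }

        pthread_t t[THREADS];
        for (long i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < THREADS; i++)
            pthread_join(t[i], NULL);
        close(fd);
        return 0;
    }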

>
>> One might need to optimize which file system gets used in such a case,
>> and the way files are accessed, as one probably wants to avoid global
>> locks that prevent several requests from being pending simultaneously
>> to the SSDs.
>
> You could instead just mmap the flash device directly if you really
> just want to use it truly like 'RAM.'  Optimizing filesystems is
> entirely nontrivial.

Flash goes via USB: a central lock here, a central lock there. Not a
good idea. The flash itself is in fact the same thing SSDs have inside,
performance-wise.
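
For reference, the mmap approach Ellis mentions would look roughly like
this (a minimal sketch; the mount point and file are hypothetical, and
the file is assumed to already exist at the mapped size):

    /* Sketch: mmap a big file living on the SSD array and touch it
     * like ordinary memory; page faults pull data from the SSDs.
     * Path and size are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t size = 1ULL << 30;                           /* 1 GiB example */
        int fd = open("/mnt/ssdraid/bigmem.bin", O_RDWR);   /* hypothetical  */
        if (fd < 0) { perror("open"); return 1; }

        char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        /* Use it as if it were RAM. */
        for (size_t i = 0; i < size; i += 4096)
            mem[i]++;

        munmap(mem, size);
        close(fd);
        return 0;
    }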

I'm on the mailing list of a file system optimized for SSD and flash: NILFS, check it out.

>
> Best,
>
> ellis
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin  
> Computing
> To change your subscription (digest mode or unsubscribe) visit  
> http://www.beowulf.org/mailman/listinfo/beowulf



