<div dir="ltr">Thanks for the offer.<div><br></div><div>This is an academic exercise for now. Our budgets are committed to through 2026 for Frontier. 😄</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 10, 2020 at 4:11 PM Jeff Johnson <<a href="mailto:jeff.johnson@aeoncomputing.com">jeff.johnson@aeoncomputing.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Scott,<div><br></div><div>They are about to release a 85kW version of the rack, same dimensions. Let me know if you want me to connect you with their founder/inventor.</div><div><br></div><div>--Jeff</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 10, 2020 at 1:08 PM Scott Atchley <<a href="mailto:e.scott.atchley@gmail.com" target="_blank">e.scott.atchley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Jeff,<div><br></div><div>Interesting, I have not seen this yet.</div><div><br></div><div>Looking at their 52 kW rack's dimensions, it works out to 3.7 kW/ft^2 for the enclosure if we do not count the row pitch. If we add 4-5 feet for row pitch, then it drops to 2.2-2.4 kW/ft^2. Assuming Summit's IBM AC922 nodes fit and again a row pitch of 4-5 feet, the performance per area would be 31-34 TF/ft^2. Both the performance per area and the power per are are close to Summit. Their PUE (1.15-1.2) is higher than we get on Summit (1.05 for 9 months and 1.1-1.2 for 3 months). It is very interesting for data centers that have widely varying loads for adjacent cabinets.</div><div><br></div><div>Scott</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 10, 2020 at 3:47 PM Jeff Johnson <<a href="mailto:jeff.johnson@aeoncomputing.com" target="_blank">jeff.johnson@aeoncomputing.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Scott,<div><br></div><div>It's not immersion but it's a different approach to the conventional rack cooling approach. It's really cool (literally and figuratively). They're based here in San Diego.</div><div><br></div><div><a href="https://ddcontrol.com/" target="_blank">https://ddcontrol.com/</a></div><div><br></div><div>--Jeff</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 10, 2020 at 12:37 PM Scott Atchley <<a href="mailto:e.scott.atchley@gmail.com" target="_blank">e.scott.atchley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi everyone,<div><br></div><div>I am wondering whether immersion cooling makes sense. We are most limited by datacenter floor space. We can manage to bring in more power (up to 40 MW for Frontier) and install more cooling towers (ditto), but we cannot simply add datacenter space. We have asked to build new building and the answer has been consistently "No."</div><div><br></div><div>Summit is mostly water cooled. Each node has cold plates on the CPUs and GPUs. 
>>>>
>>>> I am wondering what the comparable performance and power per square foot are for the densest deployed (not theoretical) immersion-cooled systems. Any ideas?
>>>>
>>>> To make the exercise even more fun, what is the weight per square foot for immersion systems? Our data centers have a limit of 250 or 500 pounds per square foot. I expect immersion systems to need higher floor loadings than that.
>>>>
>>>> Thanks,
>>>>
>>>> Scott
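
On the weight question, a rough sketch of how the floor loading might work out for a hypothetical immersion tank. Every input below (tank size, fluid volume, tank and hardware mass, fluid density) is an illustrative assumption, not a vendor figure; only the 250/500 lb/ft^2 limits come from the question above.

```python
# Rough floor-loading estimate for a single immersion tank.
# Every input is an illustrative assumption, not a vendor spec.

MINERAL_OIL_LB_PER_GAL = 7.1    # ~0.85 kg/L coolant (assumed)

tank_footprint_ft2 = 4.0 * 2.5  # assumed 4 ft x 2.5 ft tank
coolant_gal        = 150.0      # assumed fluid volume
tank_lb            = 300.0      # assumed empty tank + plumbing
hardware_lb        = 1_000.0    # assumed servers + heat exchanger in the bath

total_lb = coolant_gal * MINERAL_OIL_LB_PER_GAL + tank_lb + hardware_lb
psf = total_lb / tank_footprint_ft2

print(f"~{total_lb:,.0f} lb over {tank_footprint_ft2:.0f} ft^2 = {psf:.0f} lb/ft^2")
# ~237 lb/ft^2 with these assumptions -- already near the 250 lb/ft^2
# limit, and denser engineered fluids (~1.6 kg/L, assumed) or larger
# tanks push past the 500 lb/ft^2 figure.
```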
>>>> _______________________________________________
>>>> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
>>>> To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
>
> --
> ------------------------------
> Jeff Johnson
> Co-Founder
> Aeon Computing
>
> jeff.johnson@aeoncomputing.com
> www.aeoncomputing.com
> t: 858-412-3810 x1001  f: 858-412-3845
> m: 619-204-9061
>
> 4170 Morena Boulevard, Suite C - San Diego, CA 92117
>
> High-Performance Computing / Lustre Filesystems / Scale-out Storage