<html><head></head><body>Heya, sorry to chime in a little late.<br>There are some images of the box on TechCrunch, plus some details, but the article was written in 2019, prior to the new publication you folks are talking about.<br><br><a href="https://techcrunch.com/2019/11/19/the-cerebras-cs-1-computes-deep-learning-ai-problems-by-being-bigger-bigger-and-bigger-than-any-other-chip/">https://techcrunch.com/2019/11/19/the-cerebras-cs-1-computes-deep-learning-ai-problems-by-being-bigger-bigger-and-bigger-than-any-other-chip/</a><br><br>All I can say is it really beats the BogoMIPS off my IBM x3950 X5 80-core 2-node, even with the Xeon Phi cards I'm installing.<br><br>I have also read a gaming publication article using almost the same content and images as TechCrunch, but with a gloss on why it won't be playing Crysis anytime soon ;)<br><br>Kind regards,<br>Darren Wise<br><br><div class="gmail_quote">On 14 June 2020 06:11:30 BST, Jonathan Engwall <engwalljonathanthereal@gmail.com> wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div dir="auto">There is the strange part. How to utilize such a vast cpu?<div dir="auto">Storage should be the back end, unless the use is an api. In this case a gargantuan cpu sits in back, or so it seems.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Jun 13, 2020, 9:41 PM Chris Samuel <<a href="mailto:chris@csamuel.org">chris@csamuel.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 13/6/20 7:58 pm, Fischer, Jeremy wrote:<br>
<br>
> It’s my understanding that NeoCortex is going to have a petabyte or two <br>
> of NVME disk sitting in front of it with some HPE hardware and then <br>
> it’ll utilize the queues and lustre file system on Bridges2 as its front <br>
> end.<br>
<br>
There's more information here:<br>
<br>
<a href="https://www.psc.edu/3206-nsf-funds-neocortex-a-groundbreaking-ai-supercomputer-at-psc-2" rel="noreferrer noreferrer" target="_blank">https://www.psc.edu/3206-nsf-funds-neocortex-a-groundbreaking-ai-supercomputer-at-psc-2</a><br>
<br>
# Neocortex will use the HPE Superdome Flex, an extremely powerful,<br>
# user-friendly front-end high-performance computing (HPC) solution<br>
# for the Cerebras CS-1 servers. This will enable flexible pre- and<br>
# post-processing of data flowing in and out of the attached WSEs,<br>
# preventing bottlenecks and taking full advantage of the WSE<br>
# capability. HPE Superdome Flex will be robustly provisioned with<br>
# 24 terabytes of memory, 205 terabytes of high-performance flash<br>
# storage, 32 powerful Intel Xeon CPUs, and 24 network interface<br>
# cards for 1.2 terabits per second of data bandwidth to each<br>
# Cerebras CS-1.<br>
<br>
The way it reads, both of these CS-1s will sit behind that single Flex.<br>
<br>
All the best,<br>
Chris<br>
-- <br>
Chris Samuel : <a href="http://www.csamuel.org/" rel="noreferrer noreferrer" target="_blank">http://www.csamuel.org/</a> : Berkeley, CA, USA<br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" target="_blank" rel="noreferrer">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer noreferrer" target="_blank">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
</blockquote></div>
</blockquote></div><br>-- <br>Sent from my Android device with K-9 Mail. Please excuse my brevity.</body></html>