[Beowulf] Bolts of Thunder and Upgraded desktop interconnect silicon....
james_cuff at harvard.edu
Mon Jun 10 16:40:46 PDT 2013
On Mon, Jun 10, 2013 at 7:27 PM, Jeff Johnson
<jeff.johnson at aeoncomputing.com> wrote:
> Thunderbolt is packetized PCI-Express. It also interleaves encapsulated
> DisplayPort packets on the same chain. If there are no DisplayPort devices
> on the chain then the entire bandwidth is available for data.
> All of the Thunderbolt data devices on the market have an internal board
> that contains a Thunderbolt chip that converts the packetized PCI-Express
> to standard format and that is fed into a PCI-Express/Whatever chip (SATA,
> SAS, USB3, etc).
> My guess is that Thunderbolt's progress will follow Intel's PCI-Express
> roadmap. When PCI-Express gets faster, Intel will roll a faster TB chip.
> Again, I am guessing. I am not reading off of any NDA material.
> I don't know what the interface latencies are. For interconnect use, I am
> guessing, you would start with the same construct used with PCI-Express host
> connections. I don't know if TB will recognize another host on a device
> chain or if it is single host/multi slave.
> /* disclaimer: the above was written under the influence of severe jet lag. */
Thanks Jeff - really appreciate this.
I'm no interconnect engineer, but I see commoditization of small
component parts and packet-based PCI-e, and, being a child of simple
upbringing, I always attempt to put two and two together to come up
with moderate values of four.
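Jeff's description above can be put into a toy model: a single chain
interleaves encapsulated DisplayPort and PCI-Express packets, so data
devices get whatever the display streams leave behind, and the whole
link when no displays are attached. All numbers below are assumptions
for illustration, not from any spec or NDA material:

```python
# Toy model of bandwidth sharing on a Thunderbolt chain.
# LINK_GBPS is an assumed per-channel figure, purely for illustration.
LINK_GBPS = 10.0

def data_bandwidth_gbps(link_gbps, dp_stream_gbps):
    """Bandwidth left for packetized PCI-Express data once DisplayPort
    streams are interleaved on the same chain. With no DisplayPort
    devices, data gets the entire link."""
    used = sum(dp_stream_gbps)
    if used > link_gbps:
        raise ValueError("DisplayPort streams exceed link capacity")
    return link_gbps - used

# No DisplayPort devices on the chain: full bandwidth for data.
print(data_bandwidth_gbps(LINK_GBPS, []))                  # 10.0
# One assumed ~5.4 Gb/s display stream sharing the chain:
print(round(data_bandwidth_gbps(LINK_GBPS, [5.4]), 1))     # 4.6
```

Crude, but it captures the "no displays means all the bandwidth is
yours" point from above.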
Poking at http://www.intel.com/content/www/us/en/io/thunderbolt/thunderbolt-technology-developer.html
the silicon/spec clearly looks like it could do the job.
It also looks like simple silicon (in comparison to many others), which
could drive stunning yield from a manufacturing perspective. This
then drives the one thing we folks in Beowulf HPC love - volume!
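For interconnect purposes, the usual startup-plus-serialization cost
model shows why the unknown latency matters as much as the headline
bandwidth. The latencies below are pure guesses to make the point, not
measurements:

```python
# Back-of-envelope message transfer time: latency + size / bandwidth.
def transfer_time_us(size_bytes, latency_us, bandwidth_gbps):
    """Startup latency plus serialization time, in microseconds.
    bandwidth_gbps is decimal gigabits per second."""
    bytes_per_us = bandwidth_gbps * 1e9 / 8 / 1e6  # Gb/s -> bytes/us
    return latency_us + size_bytes / bytes_per_us

# A 1 MiB message at an assumed 20 Gb/s with two guessed latencies:
msg = 1 << 20
print(round(transfer_time_us(msg, 1.0, 20.0), 1))   # 420.4 (if latency were 1 us)
print(round(transfer_time_us(msg, 50.0, 20.0), 1))  # 469.4 (if latency were 50 us)
```

For big messages the latency washes out; for the small messages many
HPC codes send, it would dominate, so that unknown is the interesting
number here.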
There has to be something here... I'm guessing some of our friends at
Intel may need to help us get to point B. "Volume driven
opportunities" are mentioned on the developer area of the Intel page above.
So... I dunno. Maybe we just watch this space, or maybe it's just too
hard to talk about in a public forum right now, which I'm also cool with.
Thanks again - love to see other folks' insights.
> On 6/10/13 3:57 PM, James Cuff wrote:
>> Hi all!
>> So a company based out of Cupertino mentioned using this silicon in a
>> revamp of their MacPro line today...
>> we appear to have a second version of a 20 Gb/s consumer connection
>> (latency unknown), and yet this search:
>> does not really go anywhere cool like a github or kernel.org repo....
>> Any qualified folks know where this thunderbolt stuff is all heading
>> and are able to talk in public?
>> Yes, I did move back to .edu, just in case folks were doing a double
>> take. And yes (like Dr. Layton), I do still think that cloud
>> as a service and HPC/HTC are a really good idea for the right
>> algorithms and workloads! :-)
>> dr. james cuff, director of research computing & chief technology
>> architect, harvard university | faculty of arts and sciences | division of science rm
>> 210, thirty eight oxford street, cambridge. ma. 02138 tel: +1 617 384 7647
> Jeff Johnson
> Aeon Computing
> jeff.johnson at aeoncomputing.com
> t: 858-412-3810 x101 f: 858-412-3845
> m: 619-204-9061
> /* New Address */
> 4170 Morena Boulevard, Suite D - San Diego, CA 92117
dr. james cuff, director of research computing & chief technology
architect, harvard university | faculty of arts and sciences | division
of science rm 210, thirty eight oxford street, cambridge. ma. 02138
tel: +1 617 384 7647 | http://about.me/jcuff