Catching up on email after a week at the beach (got to meet RGB for
the first time since we were undergrads), I saw that Slashdot had this item
<a href="http://hardware.slashdot.org/article.pl?sid=08/09/19/0126232">http://hardware.slashdot.org/article.pl?sid=08/09/19/0126232</a> regarding
IBM's 22nm process. The explanation (maybe a week old) is that IBM uses
mathematics to compensate for a lithographic process naturally limited
to much less accuracy, say 44 or 34 nm. Slashdot complains that
"computational scaling" is not a good enough explanation and wants
to know more, which got me thinking.<br>
<br>
I recall the Hubble flaw; IIRC, the flaw in the mirror could be (partly)
compensated for by mathematical analysis, as if the information content
were there, but distorted, so they just had to, um, re-tort. I imagine
something similar, in reverse, being possible with lithography.<br>
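<br>
As a toy illustration of that "the information is there, just distorted"
idea, here is a NumPy sketch; the blur and all the numbers are made up,
and this is not how the Hubble correction actually worked. Apply a known
blur, then divide it back out in the Fourier domain:<br>
<pre>
import numpy as np

n = 256
x = np.zeros(n)
x[100:110] = 1.0                       # the "true" signal

t = np.arange(n) - n // 2
psf = np.exp(-(t / 4.0) ** 2)          # a known (and invented) Gaussian blur
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))  # its transfer function

blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))

# "Re-tort": divide the blur back out, with a little regularization so the
# frequencies the blur wiped out don't blow up, hence only a partial recovery
eps = 1e-3
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                                / (np.abs(H) ** 2 + eps)))

print("worst-case error, blurred  :", np.max(np.abs(blurred - x)))
print("worst-case error, recovered:", np.max(np.abs(recovered - x)))
</pre>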
<br>
Imagine building a process at, say, 44 nm, then measuring its output at
22 nm precision. I'm treating what that 22 nm-scale measurement reveals as a
distortion. Then compute the inverse of that distortion; apply the inverse
to your design; and feed the distorted, or as it were encoded, design to
the input of the process. Its effect (measured, not built) could be to
produce a correct feature at 22 nm. <br>
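<br>
Here is the same toy model run in that order: pre-distort the design with
the regularized inverse of the (simulated) process, feed it through, and
compare with feeding the raw design straight in. Again, the blur standing
in for the 44 nm process, the sizes, and the epsilon are all invented for
illustration; this says nothing about what IBM actually does.<br>
<pre>
import numpy as np

n = 512
target = np.zeros(n)
target[250:260] = 1.0                  # the feature we want at the fine scale

t = np.arange(n) - n // 2
psf = np.exp(-(t / 6.0) ** 2)          # stand-in for the coarser process
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))  # the process's measured "distortion"

def run_process(design):
    # the process itself: a circular convolution of the design with the psf
    return np.real(np.fft.ifft(np.fft.fft(design) * H))

# Encode the design: apply the regularized inverse of the measured distortion
eps = 1e-3
encoded = np.real(np.fft.ifft(np.fft.fft(target) * np.conj(H)
                              / (np.abs(H) ** 2 + eps)))

print("error, raw design fed in    :", np.max(np.abs(run_process(target) - target)))
print("error, encoded design fed in:", np.max(np.abs(run_process(encoded) - target)))
</pre>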
<br>
Does that make physics sense? It does rather taste like cheating. <br>
<br>
Peter<br>