[Beowulf] OS for 64 bit AMD
jmdavis at mail2.vcu.edu
Sun Apr 3 22:11:17 PDT 2005
What are the good reasons not to have g77 in gcc4? I admit ignorance on
the subject of gcc4, but g77 is useful to many in the scientific and
engineering communities.
Joe Landman wrote:
> Hi Bob:
> My main thesis really is that FC-x != FC-(x+1) in terms of core
> interfaces. As we have read from Toon, gcc4.0 won't have a g77 (for
> good reasons), and FC4 will be using gcc4.0. gcc4.0 != gcc 3.4. Of
> course these are not the only changes. The big issue was the stack
> size change. That one killed off my wireless driver and wreaked havoc
> with my graphics on my test machine.
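For context on the g77 point: GCC 4.0 drops the old g77 frontend in favor of the new gfortran frontend. A build script that has to work across both compiler generations can probe for whichever is installed -- a minimal sketch (the compiler names are the standard GCC ones; everything else is illustrative):

```shell
#!/bin/sh
# Probe for a Fortran compiler across the g77 -> gfortran transition.
# gfortran ships with GCC >= 4.0; g77 shipped with GCC 3.x and earlier.
FC=none
for candidate in gfortran g77; do
    if command -v "$candidate" >/dev/null 2>&1; then
        FC=$candidate
        break
    fi
done
echo "Fortran compiler: $FC"
```

If neither compiler is present the script still exits cleanly and reports "none", which is friendlier to configure-style machinery than a hard failure.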
> Bob Drzyzgula wrote:
>> I've been following this discussion, and I just wanted to
>> throw in my $0.02 on a couple of points:
>> What is much more important in a true "production"
>> environment is the length of time one can expect to
>> obtain patches for the OS. No "production shop" that
> I get the feeling that this is an unwinnable argument. One person
> (Mark) argues that support patches are effectively a tool to lock you
> in (and if I characterized this wrong Mark, please feel free to
> correct me), and you argue that this is the only reasonable feature of
> a production OS. I stand by my thesis that a production system is a
> long-term stable (interfaces, drivers, ABI) and supportable system, in
> that RHEL3 u5 will not be significantly different from RHEL3 u4, and
> one should not expect broken interfaces, changed stacks, or similar
> bits between these releases.
>> really is running a "production application" is likely
>> to be replacing the OS on anything like the kind of
>> schedule that FC-x -- or even RHEL -- releases come
>> out. They are much more likely to qualify all their
>> applications on a specific OS release, move this new
>> image -- OS + applications -- into production, and run
>> it until there is some compelling reason to change,
>> and this compelling reason can be several years in
>> coming. Even OS patches would only be applied in
>> limited circumstances. These would be (a) to remedy a
>> locally-observed failure mode, (b) to support required
>> application updates, or (c) to address specific security
>> issues. In all cases except in the most severe security
>> problems, such patches would be applied after extensive
>> testing to verify that production activities would not
>> be affected.
> Agreed. This is SOP in most cases.
>> Now, in principle there is no real reason why --
>> vendor support notwithstanding -- a production shop
>> could not be set up to run on e.g. FC-3. However, the
>> disappearance of the official patch stream after a few
>> months would, or at least should, give one pause. Of
>> course there is Fedora Legacy, and one can always
>> patch the RPMs one's self. But it all starts to get
>> pretty tenuous and labor-intensive after a while. By
>> contrast, Red Hat is promising update support for each
>> RHEL version for at least five years after release. *This*,
>> not the release cycle, is why production shops -- and
>> their application vendors -- will prefer RHEL over
>> FC-x. It really doesn't (or shouldn't) make a damn
>> bit of difference to a production shop how the OS is
>> characterized: "beta", "proving ground", "enterprise",
>> whatever. What really matters is the promises that are
>> made with respect to out-year support.
> The issue over proving ground vs beta vs enterprise was a semantic
> splitting of hairs that I am regretting spending cycles on. The real
> issue for the application vendors is the cost in the end. Anything
> that reduces cost is a good thing (longevity of platform, popularity
> of platform, a small number of platforms). This is why frequent-release
> systems are not targeted, unless there is a compelling business
> case for it.
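On the earlier point about patching the RPMs one's self once the official update stream ends: the usual route is rebuilding from source RPMs. A hedged sketch of that workflow (the package name is illustrative, and this assumes the rpm-build toolchain is installed):

```shell
#!/bin/sh
# Sketch of self-maintained RPM updates after vendor patches stop.
# The real rebuild step (commented out) needs an actual .src.rpm,
# typically after bumping Release and adding Patch lines in the spec:
#   rpmbuild --rebuild foo-1.0-2.src.rpm
if command -v rpmbuild >/dev/null 2>&1; then
    rpmbuild --version
else
    echo "rpmbuild not found (install the rpm-build package)"
fi
```

The labor cost comes from repeating this for every security advisory, which is exactly the "tenuous and labor-intensive" problem Bob describes.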
>> direction. RHEL can suck pretty bad in a research
>> environment, where you are likely to wind up with half
>> of the RH-supplied packages supplemented with your own
>> builds of more recent stuff piling up in /usr/local.
> <grimace> Yup. The problem is that when you start changing enough
> stuff out, you create yourself a new distribution, and you own all
> the (substantial) headache of maintaining it.
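The "/usr/local piling up" pattern works because /usr/local/bin normally precedes the distro's /usr/bin in PATH, so a locally built binary of the same name shadows the vendor one. A self-contained illustration using throwaway mock directories (nothing here touches a real install):

```shell
#!/bin/sh
# Demonstrate PATH shadowing with temporary mock directories.
mkdir -p demo/usr/local/bin demo/usr/bin
printf '#!/bin/sh\necho local-build\n'  > demo/usr/local/bin/mytool
printf '#!/bin/sh\necho distro-build\n' > demo/usr/bin/mytool
chmod +x demo/usr/local/bin/mytool demo/usr/bin/mytool
# /usr/local-style dir first in PATH, as on a typical RHEL box:
env PATH="$PWD/demo/usr/local/bin:$PWD/demo/usr/bin:$PATH" mytool
# -> prints "local-build": the local copy shadows the distro copy
rm -r demo
```

This is also why the headache is real: every shadowed package is now the local admin's to patch, not Red Hat's.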
>> * I get a bit frustrated at the hostility toward
>> commercial applications and closed hardware, especially
>> to the extent that it gets directed toward the customers
>> of those products. If there existed an open replacement
> Agreed. For some things there are no replacements.
>> The same goes for closed hardware. I don't much
>> care about high-end graphics cards, but storage
>> is a big issue. I've recently been looking for new
>> storage for a sizable network, and am finding that the
>> option of affordable external, high-speed (FC class)
>> RAID controllers serving up generic, high-speed,
>> high-reliability (e.g. not SATA) disk, has pretty much
>> vanished from the market over the past year or so. As
>> has been mentioned, everyone wants you to use their
>> JBODs, their disk modules, and in some cases their
>> HBAs and closed-source drivers. And they want you to
>> pay dearly for it. I hardly find this acceptable, but
> Not going after the SATA vs FC/SCSI point. There are some out there
> in the "white box" variety. Storcase, bowsystem, and a few others
> used to have bits like this, though I often hear people talk about
> only buying major brands for storage. I still have not seen
> affordable FC.
>> I honestly don't know what else to do except to decide
>> that capacity, throughput, reliability, availability and
>> manageability just aren't that important after all.
> They are, but there seem to have been technological shifts.
>> --Bob Drzyzgula
>>  Matlab is actually a poor example for this discussion
>> in that, to their credit, Mathworks in fact only
>> requires, beyond a 2.4 or 2.6 kernel, a specific glibc
>> version: 2.3.2.
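A quick way to check a requirement like that against an installed glibc. The comparison below uses a fixed illustrative "installed" version so the output is deterministic; on a real system you would substitute the output of `getconf GNU_LIBC_VERSION`:

```shell
#!/bin/sh
# Compare an installed glibc version against an app's requirement.
required="2.3.2"     # the requirement quoted above
installed="2.3.4"    # illustrative; on a real box:
                     #   getconf GNU_LIBC_VERSION | awk '{print $2}'
# Numeric sort on the dotted fields; the first line is the older version.
oldest=$(printf '%s\n%s\n' "$required" "$installed" \
         | sort -t. -k1,1n -k2,2n -k3,3n | head -n 1)
if [ "$oldest" = "$required" ]; then
    echo "glibc $installed satisfies >= $required"
else
    echo "glibc $installed is older than $required"
fi
```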