[Beowulf] IEEE 1588 (PTP) - a better cluster clock?

Patrick Ohly patrick.ohly at intel.com
Fri Jul 20 00:09:45 PDT 2007

On Thu, 2007-07-19 at 12:50 +0200, Beat Rubischon wrote:
> Hello!
> Am 18.7.2007 17:35 Uhr schrieb "Patrick Ohly" unter
> <patrick.ohly at intel.com>:
> > For clusters it's time to replace it with a solution that works better
> > in a LAN.
> > [2] http://ptpd.sourceforge.net/
> It looks like an interesting tool which probably solves a lot of my troubles
> with inaccurate clocks.
> One thing I'm currently missing is a short howto to combine PTP and NTP. Is
> it OK when I use commands like this, or do I shoot myself in the foot?
> master# ntpd
> master# ptpd -t -p
> slaves# ptpd -g

That's okay and will have the desired effect: the master (-p) takes its
time from NTP, broadcasts it via PTP without ptpd interfering with NTP
(-t = "do not adjust the system clock") and the slaves (-g) control the
system time via PTP.
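For background on what the slaves are doing when they "control the system
time via PTP": the core of IEEE 1588 is a two-way timestamp exchange (Sync
and Delay_Req messages) from which the slave derives its offset from the
master. The sketch below is only an illustration of that arithmetic under
the usual symmetric-path assumption, not ptpd's actual code:

```python
# Illustrative sketch of the basic IEEE 1588 offset/delay computation
# (not ptpd's actual implementation). Timestamps, in seconds:
#   t1 = master sends Sync          (read on the master clock)
#   t2 = slave receives Sync        (read on the slave clock)
#   t3 = slave sends Delay_Req      (read on the slave clock)
#   t4 = master receives Delay_Req  (read on the master clock)
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (offset of slave from master, one-way delay),
    assuming the network path is symmetric."""
    master_to_slave = t2 - t1   # path delay + clock offset
    slave_to_master = t4 - t3   # path delay - clock offset
    offset = (master_to_slave - slave_to_master) / 2.0
    delay = (master_to_slave + slave_to_master) / 2.0
    return offset, delay

# Example: slave clock runs 5 ms ahead, true one-way delay is 1 ms.
offset, delay = ptp_offset_and_delay(t1=0.000, t2=0.006,
                                     t3=0.010, t4=0.006)
print(offset, delay)  # 0.005 0.001
```

The slave then slews its clock to cancel the computed offset; doing that
gradually is what keeps the adjustment smooth.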

However, there's a slightly more elegant setup which uses the fact that
PTP has a notion of clock quality and automatically determines which
clock is most suitable to act as master:
        master# ntpd
        master# ptpd -s 2 -i NTP -t
        slaves# ptpd

        -s 2 = stratum 2 = secondary standard reference clock
        -i NTP = gets its time from an NTP master clock

In this setup the master clock is detected as the best clock and used by
the slaves as before, but if the master fails, the slaves will choose a
new master from among themselves. That way the cluster remains synchronized
within itself; it will just drift out of sync with world time. When the
node running NTP recovers, its ptpd will be chosen as master clock again
and gradually bring the cluster back into sync with world time.

Clocks with stratum 1 (e.g. clocks getting their time from GPS) are even
better than the NTP master clock and will be used if you plug them into
the Ethernet subnet.
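The election behaviour described above can be sketched roughly as follows.
The real best-master-clock algorithm in IEEE 1588 compares several clock
attributes, not just stratum, so this is a simplified illustration, and the
node names and stratum values are made up:

```python
# Simplified sketch of stratum-based master election (the real IEEE 1588
# best-master-clock algorithm compares more attributes than stratum).
# Node names and stratum values are hypothetical, for illustration only.
def elect_master(clocks):
    """Pick the clock with the lowest (best) stratum among the
    currently reachable nodes; ties are broken by node id."""
    return min(clocks, key=lambda c: (c["stratum"], c["id"]))

cluster = [
    {"id": "master", "stratum": 2},    # ptpd -s 2 -i NTP -t
    {"id": "node01", "stratum": 255},  # free-running slaves
    {"id": "node02", "stratum": 255},
]
print(elect_master(cluster)["id"])  # -> master

# If the NTP-fed master drops out, a slave takes over as master:
survivors = [c for c in cluster if c["id"] != "master"]
print(elect_master(survivors)["id"])  # -> node01

# A GPS-backed stratum-1 clock plugged into the subnet wins outright:
with_gps = cluster + [{"id": "gps", "stratum": 1}]
print(elect_master(with_gps)["id"])  # -> gps
```

Because every ptpd runs the same election on the same advertised clock
data, all nodes converge on the same master without any extra coordination.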

Best Regards, Patrick Ohly

The content of this message is my personal opinion only and although
I am an employee of Intel, the statements I make here in no way
represent Intel's position on the issue, nor am I authorized to speak
on behalf of Intel on this matter.
