By Andreja Jarc.
In a previous post I introduced tests with different IEEE 1588 / PTPv2 configurations where the PTP accuracy of the slave device was measured for each test case. The test cases performed were as follows:
1. A direct link between GM and Slave as a baseline reference accuracy measurement.
2. A 100 Mbit/s Boundary Clock (BC) in the PTP link between the GM and the Slave.
3. A 100 Mbit/s Transparent Clock (TC) between the GM and the Slave.
4. Two BCs (the same vendor and model) connected in series, with and without network load.
5. Two TCs (the same vendor and model) connected in series, with and without network load.
6. A standard Gigabit Ethernet switch, with and without network load.
PTP On-Path Test Cases
Test cases 1-3 were addressed in Part I. In this post I will focus on test cases 4 and 5.
Here, the performance of two PTP-compliant switches connected in series, first acting as Boundary Clocks and then as Transparent Clocks, was measured and compared. In addition, both setups underwent a stress test in which a high network load was injected into the link connecting the switches, which also carried the PTP messages.
Test case 4: two BCs in series
Test case 4 comprised two Boundary Clocks (BCs) of the same vendor and model connected in series. In addition, two laptops were added to the installation in order to inject network load into the link between the Grandmaster (GM) and the Slave. See the installation plan in Figure 1.
Figure 1: Setup with 2 BCs in the link between the GM and slave.
First of all, we observed the PTP performance of the network with 2 BCs in series, to see whether there was a phase difference compared to the setup with only one BC. This part of the test ran for 6 hours (00:00 – 06:00 hrs).
Next, the network was stressed with 50 Mbit/s of one-way data traffic, half the capacity of the 100 Mbit/s BC ports. The two laptops injected the data load into the link between the BCs, so the PTP messages and the inserted data traffic shared the same link. In this way we checked the impact of high data traffic on a PTP network with on-path support. The stress test ran for 2 hours (06:00 – 08:00 hrs). See the measurements in Figure 2.
Figure 2: PTP performance with 2 BCs in series in the link between the GM and slave: without network load from 00:00 – 06:00 hrs, and with a data load of 50 Mbit/s inserted in the link between the GM and the slave from 06:00 – 08:00 hrs.
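The original load was presumably generated with a dedicated traffic tool, but the pacing arithmetic behind a 50 Mbit/s one-way load is easy to sketch. The following is a hypothetical illustration, not the tooling used in the test; the function names `pacing` and `blast`, the 1470-byte payload, and the destination address are all assumptions:

```python
import socket
import time

def pacing(target_bps, payload_bytes):
    """Packets per second and inter-packet gap needed to hit a target bitrate
    with fixed-size UDP payloads (ignoring Ethernet/IP/UDP header overhead)."""
    pps = target_bps / (payload_bytes * 8)
    return pps, 1.0 / pps

def blast(dest, target_bps=50_000_000, payload_bytes=1470, seconds=5.0):
    """Send UDP datagrams to dest=(host, port) at roughly target_bps."""
    pps, gap = pacing(target_bps, payload_bytes)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_bytes
    deadline = time.monotonic() + seconds
    next_send = time.monotonic()
    while time.monotonic() < deadline:
        sock.sendto(payload, dest)
        next_send += gap            # schedule the next packet on a fixed grid
        time.sleep(max(0.0, next_send - time.monotonic()))
```

With 1470-byte payloads, 50 Mbit/s works out to roughly 4250 packets per second, i.e. one datagram every ~235 microseconds, which is why such a load meaningfully contends with PTP event messages on a shared 100 Mbit/s link.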
In the measurements without a data load we observed that the accuracy stayed within the ±40 ns limits, which is comparable to the PTP accuracy of the network with only one BC.
Moreover, during the network stress test (06:00 – 08:00 hrs) we observed no significant degradation in the PTP performance of the slave device either. This is due to the specialized hardware timestamping in the BC switches, which mitigates the timing impairments introduced by the higher data traffic between the master and slave. As a result, the accuracy stayed within the designated limits of ±40 ns.
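The pass criterion used above can be expressed as a simple check over the logged offset samples. This is only a sketch of the evaluation, not the measurement software used in the tests; the function names and the sample values are illustrative:

```python
def within_limits(offsets_ns, limit_ns=40):
    """True if every time-offset sample lies inside the ±limit_ns band."""
    return all(abs(o) <= limit_ns for o in offsets_ns)

def peak_to_peak(offsets_ns):
    """Spread of the offset samples, a common summary of PTP accuracy."""
    return max(offsets_ns) - min(offsets_ns)

# Hypothetical samples in nanoseconds, e.g. one per Sync interval:
samples = [-32, -5, 12, 38, -17]
ok = within_limits(samples, limit_ns=40)   # did this run stay within ±40 ns?
spread = peak_to_peak(samples)
```

The same check with `limit_ns=30` corresponds to the tighter band reported for the Transparent Clock setup below.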
Test case 5: two TCs in series
Test case 5 is similar to test case 4, but with 2 TC switches instead of BCs. Again, two tests were performed: without data traffic (00:00 – 06:00 hrs) and with 50 Mbit/s of data traffic (06:00 – 08:00 hrs). The results are shown in Figure 3. In both scenarios the PTP time transfer accuracy stayed within ±30 ns, which again demonstrates good PTP on-path support performance.
Figure 3: PTP performance with 2 TCs in the link between the GM and slave. From 00:00 – 06:00 hrs the test was performed without data traffic, and from 06:00 – 08:00 hrs data traffic at 50 Mbit/s was inserted in the link between the GM and slave. The accuracy stayed within the designated limits of ±30 ns.
Test case 6, a setup without on-path PTP support, will be introduced in the next post.
For more information about Meinberg Synchronization Solutions visit our website: www.meinbergglobal.com or contact me at: andreja.jarc(at)meinberg.de.