By Andreja Jarc.
This time I prepared some experiments with different IEEE 1588 / PTPv2 configurations, measuring the accuracy of the end device for each test case. In all test cases a Meinberg M500 IMS (a modular platform in a rail-mount form factor) was used as the Grandmaster (GM) and a Meinberg SyncBox as the PTP slave.
Subsequently, different types of switches were added to the link between the GM and the slave. The switches were either PTP compliant or standard Gigabit Ethernet switches unaware of PTP. The PTP-compliant switches acted as Boundary or Transparent Clocks, all from the same vendor, but not Meinberg. After all nodes had been properly synchronized, the accuracy of the PPS output of the slave was measured against the GPS-synchronized PPS of the master system to determine the PTP performance of the small network. The results are shown below in the order in which the tests were performed. In this blog post I will present the results of the first three test cases; the rest will follow in a subsequent post.
Test cases scenarios:
1. A direct link between GM and Slave as a baseline reference accuracy measurement.
2. A 100 Mbit/s Boundary Clock (BC) in the PTP link between the GM and the Slave.
3. A 100 Mbit/s Transparent Clock (TC) between the GM and the Slave.
4. Two BCs (the same vendor and model) in series with and without network loading.
5. Two TCs in series with and without network loading.
6. A standard Gigabit Ethernet switch, with and without network loading.
Test case 1:
A direct link between a GM and a slave. The PTP configuration of both PTP nodes was as follows:
• Default E2E Profile (multicast)
• Network Protocol: UDP/IPv4
• Announce Message Interval: 2 seconds, Sync Message Interval: 1 second, Delay Request Interval: 8 seconds
• PTP card used for the GM was a Meinberg TSUv3 (Gbit)
• PTP card used in the slave was a Meinberg 10/100 Mbit TSU
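As an aside, PTP carries the message intervals configured above on the wire as log2 values in the logMessageInterval header field. A minimal sketch (plain Python; the function name is my own) of how the configured rates map to that encoding:

```python
import math

def log_interval(seconds):
    """Encode a PTP message interval as logMessageInterval,
    the signed log2-of-seconds value used in the PTP header."""
    return int(math.log2(seconds))

# The intervals from the test configuration:
print(log_interval(2))  # Announce, 2 s  -> 1
print(log_interval(1))  # Sync, 1 s      -> 0
print(log_interval(8))  # Delay_Req, 8 s -> 3
```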
The 1PPS signal generated by the slave's PTP card served as an extra PPS input for the GM. This kind of setup makes it possible to measure the PTP accuracy of a slave against the GPS reference of the master. The measurements can be found in the Web GUI: navigate to the Statistics dialog (XtraStats), where logged data for the incoming PPS signal is stored. The data can conveniently be displayed in graphical form as well. This is one possible approach to measuring the PTP accuracy of a slave, and thus the PTP performance of the network.
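For reference, the offset the slave corrects for comes from the E2E delay request-response exchange configured above. A minimal sketch of that arithmetic, with hypothetical timestamps in nanoseconds (the standard mechanism assumes a symmetric path):

```python
def e2e_offset(t1, t2, t3, t4):
    """PTP delay request-response mechanism.

    t1: Sync transmit time at the master
    t2: Sync receive time at the slave
    t3: Delay_Req transmit time at the slave
    t4: Delay_Req receive time at the master
    All values in nanoseconds; returns (offset, mean path delay).
    """
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = (t2 - t1) - mean_path_delay
    return offset_from_master, mean_path_delay

# Illustrative numbers: 500 ns symmetric path, slave 40 ns ahead.
print(e2e_offset(0, 540, 1000, 1460))  # (40.0, 500.0)
```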
Figure 1: The setup of Test Case 1, a direct link between the GM and the slave. The experiment ran for 7 hours; the PPS offset (phase difference) was measured against the GPS-steered internal clock of the GM.
Figure 2: PTP accuracy of the slave compared to the GPS reference in the master, with a direct connection between master and slave. PTP performance stays within a ±10 ns accuracy range. Note the very high 5 ns measurement resolution, provided by Meinberg’s latest GPS180 receiver installed in the GM.
Test case 2:
A Boundary Clock (BC) from another vendor was inserted between the GM and the slave. The BC was configured to match the master/slave PTP configuration (see the details in Test case 1).
Figure 3: Setup of the Test Case 2 and Test Case 3. Each testing scenario ran for just over 7 hours.
Figure 4: PTP performance of Test Case 2 (a BC in the link between the GM and the slave). The jitter increases when a BC is inserted, but it still stays well within ±30 ns bounds.
Test case 3:
A TC between the master and the slave. The connection scheme resembles Test case 2, except that the BC was replaced by a TC.
Figure 5: PTP performance of Test Case 3 (a TC in the link between the GM and the slave). Less jitter can be observed in comparison with the setup including a BC. There are occasional jitter spikes, but these stayed within ±25 ns.
The main difference between a BC and a TC is that a Boundary Clock contains its own local clock, which is first synchronized to the PTP messages coming from the master; the BC then sends its own generated PTP packets out of its master ports into the network. A Transparent Clock, on the other hand, forwards PTP messages from the master directly into the network, updating the correction field with the residence time the packet spent in the switch. This less complex mode of operation is probably why the TC contributes less jitter than a BC, as is evident from the PTP performance in Figure 5.
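The TC's job can be sketched in a few lines. This is an illustrative model in plain nanoseconds, not vendor code; the real on-wire correctionField is a scaled integer in units of 2^-16 ns:

```python
def update_correction_field(correction_ns, ingress_ts_ns, egress_ts_ns):
    """Model of an E2E transparent clock's correction-field update.

    The switch timestamps the PTP packet on ingress and egress and adds
    the residence time to the packet's correction field, so the slave
    can remove the switch's queuing delay from its delay calculation.
    """
    residence_ns = egress_ts_ns - ingress_ts_ns
    return correction_ns + residence_ns

# A packet that spent 750 ns in the switch:
print(update_correction_field(0, 100, 850))  # 750
```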
Moreover, PTP packets generated by a BC receive timestamps from its internal clock, which is of lower quality and stability than the GM's clock; this can be another source of jitter. However, even the BC in our tests stays within ±30 ns bounds.
This was a 5-minute post for today. Next time I will show the results from data stress tests performed in these PTP setups.
For more information about Meinberg Synchronization Solutions visit our website: www.meinbergglobal.com or contact me at: andreja.jarc(at)meinberg.de.