Hi. I have a configuration for default-route failover using RPM and IP-Monitoring.
My configuration is:
> show configuration services rpm | display set
set services rpm probe PRIN test test-8.8.8.8 target address 8.8.8.8
set services rpm probe PRIN test test-8.8.8.8 probe-count 3
set services rpm probe PRIN test test-8.8.8.8 probe-interval 5
set services rpm probe PRIN test test-8.8.8.8 test-interval 5
set services rpm probe PRIN test test-8.8.8.8 thresholds successive-loss 7
set services rpm probe PRIN test test-8.8.8.8 thresholds total-loss 7
set services rpm probe PRIN test test-8.8.8.8 destination-interface reth3.0
set services rpm probe PRIN test test-8.8.8.8 next-hop X.X.X.X
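One detail I am unsure about: each test sends probe-count (3) probes, probe-interval (5) seconds apart, and as far as I understand the loss thresholds are evaluated against the probes of a test, so a total-loss of 7 can never be reached with only 3 probes per test. In case that matters, a variant with the thresholds at or below the probe count would be (hypothetical values, not my running config):

set services rpm probe PRIN test test-8.8.8.8 thresholds successive-loss 3
set services rpm probe PRIN test test-8.8.8.8 thresholds total-loss 3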
And the IP-Monitoring configuration is:
set services ip-monitoring policy Default_BKP match rpm-probe PRIN
set services ip-monitoring policy Default_BKP then preferred-route route 0.0.0.0/0 next-hop X.X.X.X
set services ip-monitoring policy Default_BKP then preferred-route route 0.0.0.0/0 preferred-metric 1
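For readability, the same policy in structured form (as show configuration services ip-monitoring prints it):

policy Default_BKP {
    match {
        rpm-probe PRIN;
    }
    then {
        preferred-route {
            route 0.0.0.0/0 {
                next-hop X.X.X.X;
                preferred-metric 1;
            }
        }
    }
}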
The probe results are OK, but the IP-Monitoring status is FAIL:
> show services rpm probe-results
    Owner: IPLAN, Test: test-8.8.8.8
    Target address: 8.8.8.8, Probe type: icmp-ping
    Destination interface name: reth3.0
    Test size: 3 probes
    Probe results:
      Response received, Fri Dec 1 18:29:57 2017, No hardware timestamps
      Rtt: 1741 usec, Round trip jitter: -479 usec, Round trip interarrival jitter: 4730 usec
    Results over current test:
      Probes sent: 1, Probes received: 1, Loss percentage: 0.000000
      Measurement: Round trip time
        Samples: 1, Minimum: 1741 usec, Maximum: 1741 usec, Average: 1741 usec, Peak to peak: 0 usec, Stddev: 0 usec, Sum: 1741 usec
      Measurement: Negative round trip jitter
        Samples: 1, Minimum: 479 usec, Maximum: 479 usec, Average: 479 usec, Peak to peak: 0 usec, Stddev: 0 usec, Sum: 479 usec
    Results over last test:
      Probes sent: 3, Probes received: 3, Loss percentage: 0.000000
      Test completed on Fri Dec 1 18:29:52 2017
      Measurement: Round trip time
        Samples: 3, Minimum: 1629 usec, Maximum: 2220 usec, Average: 1882 usec, Peak to peak: 591 usec, Stddev: 249 usec, Sum: 5645 usec
      Measurement: Positive round trip jitter
        Samples: 1, Minimum: 591 usec, Maximum: 591 usec, Average: 591 usec, Peak to peak: 0 usec, Stddev: 0 usec, Sum: 591 usec
      Measurement: Negative round trip jitter
        Samples: 2, Minimum: 167 usec, Maximum: 256 usec, Average: 212 usec, Peak to peak: 89 usec, Stddev: 44 usec, Sum: 423 usec
    Results over all tests:
      Probes sent: 16051, Probes received: 15931, Loss percentage: 0.747617
      Measurement: Round trip time
        Samples: 15931, Minimum: 1512 usec, Maximum: 740514 usec, Average: 4735 usec, Peak to peak: 739002 usec, Stddev: 11771 usec, Sum: 75434767 usec
      Measurement: Positive round trip jitter
        Samples: 8008, Minimum: 0 usec, Maximum: 738768 usec, Average: 5228 usec, Peak to peak: 738768 usec, Stddev: 15994 usec, Sum: 41861962 usec
      Measurement: Negative round trip jitter
        Samples: 7922, Minimum: 1 usec, Maximum: 738297 usec, Average: 5284 usec, Peak to peak: 738296 usec, Stddev: 16085 usec, Sum: 41861893 usec
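To check whether any single test ever crossed the thresholds, the per-test history can also be pulled (owner and test name taken from my configuration above):

> show services rpm history-results owner PRIN test test-8.8.8.8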
And in the IP-Monitoring output, the RPM status appears as FAIL:
> show services ip-monitoring status
Policy - Default_BKP (Status: FAIL)
  RPM Probes:
    Probe name             Test Name       Address          Status
    ---------------------- --------------- ---------------- ---------
    PRIN                   test-8.8.8.8    8.8.8.8          FAIL
  Route-Action:
    route-instance    route             next-hop         state
    ----------------- ----------------- ---------------- -------------
    inet.0            0.0.0.0/0         X.X.X.X          APPLIED

{primary:node0}
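To double-check which next-hop is actually installed for the default route right now (X.X.X.X is the same masked address as in the configuration):

> show route 0.0.0.0/0 exact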
Has anyone had this problem before? Can you tell me how to solve this issue?
Regards
Sebastian