Channel: SRX Services Gateway topics

SRX 3400 in Cluster config is synced but new configuration is not applying


Hi, we have two SRX 3400s running in a cluster. In RG1 one physical interface is down, so all services are running through one firewall. Everything was fine, but suddenly when we apply new configuration it commits successfully on both firewalls yet is not applied on either of them.

 

For example, the following configuration committed with no errors and is present on both firewalls:

root@FW001> show configuration security policies from-zone ABC to-zone XYZ
policy term1 {
    match {
        source-address some-server;
        destination-address [ remote-srv1 remote-srv2 remote-srv3 remote-srv4 ];
        application any;
    }
    then {
        permit;
        count;
    }
}

But when we check the active policy, it shows only the two previously configured destination addresses:

{primary:node1}

root@FW001> show security policies from-zone ABC to-zone XYZ 
node1:
--------------------------------------------------------------------------
From zone: ABC, To zone: XYZ
  Policy: term1, State: enabled, Index: 95, Scope Policy: 0, Sequence number: 1
    Source addresses: some-server
    Destination addresses: remote-srv1, remote-srv2
    Applications: any
    Action: permit, count
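For completeness, the per-policy detail could also be checked with the standard Junos command below (output omitted here); it lists the address entries the flow daemon is actually matching against, which should make the mismatch visible:

{primary:node1}
root@FW001> show security policies policy-name term1 detail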

I have checked everything; the only clue I can find is that when I run "commit synchronize", it shows output for node1 only:

node1: 
commit complete

I suspect the commit is not actually synchronizing, even though the configuration on both nodes is the same.
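To confirm whether the synchronize actually reached both nodes, the commit history could be compared with show system commit (a standard Junos command); the most recent entry should appear on node0 as well as node1:

{primary:node1}
root@FW001> show system commit

(and the same command on node0 for comparison)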

 

We need an urgent solution, please.

 

Troubleshooting Outputs:

{primary:node1}[edit]
root@FW001# commit synchronize force
node1: 
commit complete

root@QMIRWLFW001> show chassis cluster status      
Cluster ID: 1 

Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                   128         secondary      no       yes 
    node1                   255         primary        no       yes 

Redundancy group: 1 , Failover count: 1
    node0                   0           secondary      yes      yes 
    node1                   255         primary        yes      yes 


{primary:node1}
root@QMIRWLFW001> show chassis cluster interfaces    
Control link status: Up

Control interfaces: 
    Index   Interface        Status
    0       em0              Up    
    1       em1              Up    

Fabric link status: Up

Fabric interfaces: 
    Name    Child-interface    Status
    fab0    ge-0/0/7           Up    
    fab0   
    fab1    ge-8/0/7           Up    
    fab1   

Redundant-ethernet Information:     
    Name         Status      Redundancy-group
    reth0        Up          1                
    reth1        Up          1                
    reth2        Down        Not configured   
    reth3        Down        Not configured   
    reth4        Down        Not configured   
    reth5        Down        Not configured   
    reth6        Down        Not configured   
    reth7        Down        Not configured   

Interface Monitoring:
    Interface         Weight    Status    Redundancy-group
    xe-11/0/1         255       Up        1   
    xe-11/0/0         255       Up        1   
    xe-3/0/1          255       Down      1   
    xe-3/0/0          255       Up        1  


root@FW001> show configuration chassis cluster 
control-link-recovery;
reth-count 8;
redundancy-group 0 {
    node 0 priority 128;
    node 1 priority 129;
}
redundancy-group 1 {
    node 0 priority 129;
    node 1 priority 128;
    preempt;
    gratuitous-arp-count 4;
    interface-monitor {
        xe-3/0/0 weight 255;
        xe-3/0/1 weight 255;
        xe-11/0/0 weight 255;
        xe-11/0/1 weight 255;
    }
}



root@FW001> show chassis cluster information                                
node0:
--------------------------------------------------------------------------
Redundancy mode:
    Configured mode: active-active
    Operational mode: active-active

Redundancy group: 0, Threshold: 255, Monitoring failures: none
    Events:
        Feb 25 22:40:27.536 : hold->secondary, reason: Hold timer expired

Redundancy group: 1, Threshold: 0, Monitoring failures: interface-monitoring
    Events:
        Feb 25 22:40:27.552 : hold->secondary, reason: Hold timer expired

node1:
--------------------------------------------------------------------------
Redundancy mode:
    Configured mode: active-active
    Operational mode: active-active

Redundancy group: 0, Threshold: 255, Monitoring failures: none
    Events:
        Feb 25 22:28:22.278 : hold->secondary, reason: Hold timer expired
        Feb 25 22:30:45.291 : secondary->primary, reason: Remote node is in secondary hold

Redundancy group: 1, Threshold: 255, Monitoring failures: none
    Events:
        Feb 25 22:28:22.291 : hold->secondary, reason: Hold timer expired
        Feb 25 22:31:32.375 : secondary->primary, reason: Remote node is in secondary hold

 

 

We urgently need a solution, as we cannot apply any new configuration; all services are running on the previous configuration.

 

JUNOS Software Release [11.4R7.5] on both nodes
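One option we are considering, though we have not tried it yet, is forcing a full commit. As we understand the standard Junos CLI, commit full re-applies the entire configuration to the daemons instead of only the changed parts, which might push the missing policy entries into effect:

{primary:node1}[edit]
root@FW001# commit full

Would this be safe to run in this state, or is there a better approach?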

