Channel: SRX Services Gateway topics

service rpm for SRX345, issue ?


Hi, Guys,

 

Three deployment modes (standalone SRX345, SRX345 HA active/standby, and SRX345 HA active/active) were tested, each with different Junos versions. Two of them return normal RTT results; only the HA active/standby mode shows the issue below:

 


RPM configurations:
set services rpm probe WTT_Line_Test test HK-ISP_TEST probe-type icmp-ping
set services rpm probe WTT_Line_Test test HK-ISP_TEST target address 18.25.21.29
set services rpm probe WTT_Line_Test test HK-ISP_TEST probe-count 2
set services rpm probe WTT_Line_Test test HK-ISP_TEST probe-interval 5
set services rpm probe WTT_Line_Test test HK-ISP_TEST test-interval 5
set services rpm probe WTT_Line_Test test HK-ISP_TEST destination-interface reth1.110
set services rpm probe WTT_Line_Test test HK-ISP_TEST hardware-timestamp
set services rpm probe WTT_Line_Test test HK-ISP_TEST next-hop 18.25.21.29

 


The traceoptions output (normal):

Dec 6 04:35:17 PING_TEST_COMPLETED: pingCtlOwnerIndex = WTT_Line_Test, pingCtlTestName = HK-ISP_TEST
Dec 6 04:35:17 RTM_CHANGE gencfg for probe WTT_Line_Test, test HK-ISP_TEST to state PASS
Dec 6 04:35:17 rmop_calc_jitter: rdiff: 5014077, sdiff: 5009520, jitter: 4557
Dec 6 04:35:17 rmop_calc_jitter: rdiff: 1003183, sdiff: 1004120, jitter: -937
Dec 6 04:35:17 rmop_calc_jitter: rdiff: 1002873, sdiff: 1004119, jitter: -1246
Dec 6 04:35:17 rmop_calc_jitter: rdiff: 1037034, sdiff: 1004116, jitter: 32918
Dec 6 04:35:17 test_done: sent 2, test 810

 

ISSUE :

root@13FwS345Prd1> show service rpm history-results owner WTT_Line_Test
Owner, Test                 Probe Sent                  Probe received              Round trip time

WTT_Line_Test, HK-ISP_TEST Fri Dec 6 04:34:57 2019 Fri Dec 6 04:35:02 2019 Request timed out
WTT_Line_Test, HK-ISP_TEST Fri Dec 6 04:35:02 2019 Fri Dec 6 04:35:07 2019 Request timed out
WTT_Line_Test, HK-ISP_TEST Fri Dec 6 04:35:07 2019 Fri Dec 6 04:35:12 2019 Request timed out
WTT_Line_Test, HK-ISP_TEST Fri Dec 6 04:35:12 2019 Fri Dec 6 04:35:17 2019 Request timed out
WTT_Line_Test, HK-ISP_TEST Fri Dec 6 04:35:17 2019 Fri Dec 6 04:35:22 2019 Request timed out
WTT_Line_Test, HK-ISP_TEST Fri Dec 6 04:35:22 2019 Fri Dec 6 04:35:27 2019 Request timed out
WTT_Line_Test, HK-ISP_TEST Fri Dec 6 04:35:27 2019 Fri Dec 6 04:35:32 2019 Request timed out

 

 

 

Any reason why the output of the command "show services rpm history-results" cannot show the RTT results?
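
A few diagnostic angles that may be worth checking (a sketch only, not a confirmed fix; the owner and test names match the configuration above):

show services rpm probe-results owner WTT_Line_Test test HK-ISP_TEST
show services rpm history-results owner WTT_Line_Test test HK-ISP_TEST

Because hardware-timestamp depends on PFE support, and the destination interface here is a reth in an active/standby cluster, it may also be worth re-testing the probe with that knob removed:

delete services rpm probe WTT_Line_Test test HK-ISP_TEST hardware-timestamp
commit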

 

Thanks

Junos OS


I have two SRX240s in HA that were working fine until the power went down and came back. Node 0 is still working fine, but on node 1 the Junos OS got corrupted and it is now booting from the backup image, which is a different version than the primary OS. I don't have a copy of the primary OS image. How can I reinstall the image?

 

primary image is:

Model: srx240h
JUNOS Software Release [12.1X44-D45.2]

 

Backup image is:

Model: srx240h
JUNOS Software Release [11.4R1.6]
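
For reference, the usual recovery path is to download the matching 12.1X44-D45.2 package from the Juniper support site, copy it to node 1 (USB or SCP), and install it over the running backup image. A sketch only, with an illustrative filename:

request system software add /var/tmp/junos-srxsme-12.1X44-D45.2-domestic.tgz no-copy no-validate
request system reboot

Once the node boots the intended release, the backup root partition can be refreshed so both slices carry the same image:

request system snapshot slice alternate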

 

Issue with setting up network admin auth via LDAP/NPS


Hi

 

So I am trying to set up our network appliances to use RADIUS to authenticate our admins when they need to make changes to switches and firewalls.

 

I am testing the setup on a vSRX, but can't get it to work.

 

I have gotten to the point where, using Wireshark, I can see the RADIUS request hit the server, but the RADIUS service (NPS) does not see the request at all.

 

The setup is:

1 windows domain controller with NPS installed

1 vSRX setup to use radius

Used the following guide: https://ericrochow.wordpress.com/2012/09/26/configure-juniper-routers-for-aaa-with-microsoft-nps/

 

As it is now, the NPS log does not show the requests at all, but as stated earlier I can see that the request packets are received on that specific server.

Does anybody know of a better guide for setting this up, or have any pointers?
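
For comparison, a minimal Junos-side RADIUS configuration sketch (server address, secret, and source address are placeholders). A common reason for NPS silently ignoring requests is a RADIUS client definition whose IP does not match the source address the vSRX actually uses, or a shared-secret or UDP-port (1812 vs. 1645) mismatch, so those are worth double-checking on the NPS side:

set system radius-server 192.0.2.10 secret "SharedSecret"
set system radius-server 192.0.2.10 port 1812
set system radius-server 192.0.2.10 source-address 192.0.2.1
set system authentication-order [ radius password ]
set system login user remote class super-user

The special "remote" template user is what RADIUS-authenticated admins without a matching local account fall back to.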

Chassis cluster crashes after show security flow session


We have a chassis cluster of two SRX340s. Almost everything seems to be working fine, but sometimes when using the command "show security flow session" the cluster crashes...

 

root@SRX1> show system information 
Model: srx340
Family: junos-es
Junos: 18.2R3.4
Hostname: SRX1

 

This chassis cluster is not in production yet, so there is almost no traffic. Here's an example (I tried to stop it with ^C, because I had already seen that the output is really slow and that it was going to crash):

 

{primary:node0}
root@SRX1>
root@SRX1> show security flow session 
node0:
--------------------------------------------------------------------------

Session ID: 2, Policy name: self-traffic-policy/1, State: Active, Timeout: 1786, Valid
  In: 1.2.3.4/64591 --> 4.3.2.1/179;tcp, Conn Tag: 0x0, If: .local..0, Pkts: 19131, Bytes: 1007022, 
^C[abort]

{secondary-hold:node0}
root@SRX1> 

 

As you can see above, node0 starts as the primary node, but then it changes to secondary-hold.

root@SRX1> show chassis cluster status   
Monitor Failure codes:
    CS  Cold Sync monitoring        FL  Fabric Connection monitoring
    GR  GRES monitoring             HW  Hardware monitoring
    IF  Interface monitoring        IP  IP monitoring
    LB  Loopback monitoring         MB  Mbuf monitoring
    NH  Nexthop monitoring          NP  NPC monitoring              
    SP  SPU monitoring              SM  Schedule monitoring
    CF  Config Sync monitoring      RE  Relinquish monitoring
Cluster ID: 1
Node   Priority Status               Preempt Manual   Monitor-failures

Redundancy group: 0 , Failover count: 1
node0  100      secondary-hold       no      no       GR             
node1  1        primary              no      no       None           

Redundancy group: 1 , Failover count: 1
node0  0        secondary            no      no       CS             
node1  1        primary              no      no       None   
root@SRX1> show chassis cluster information detail 
node0:
--------------------------------------------------------------------------
Redundancy mode:
    Configured mode: active-active
    Operational mode: active-active
Cluster configuration:
    Heartbeat interval: 1000 ms
    Heartbeat threshold: 3
    Control link recovery: Disabled
    Fabric link down timeout: 66 sec
Node health information:
    Local node health: Not healthy
    Remote node health: Healthy

Redundancy group: 0, Threshold: 255, Monitoring failures: gres-not-ready
    Events:
        Dec  9 14:20:47.751 : hold->secondary, reason: Hold timer expired
        Dec  9 14:21:02.845 : secondary->primary, reason: Better priority (100/1)
        Dec  9 19:09:11.726 : primary->secondary-hold, reason: Control link (Flowd) down

Redundancy group: 1, Threshold: 0, Monitoring failures: cold-sync-monitoring
    Events:                             
        Dec  9 14:20:48.044 : hold->secondary, reason: Hold timer expired
        Dec  9 14:21:04.962 : secondary->primary, reason: Remote yield (0/0)
        Dec  9 19:09:11.787 : primary->secondary-hold, reason: Control link (Flowd) down
        Dec  9 19:09:12.851 : secondary-hold->secondary, reason: Ready to become secondary
Control link statistics:                
    Control link 0:                     
        Heartbeat packets sent: 17928   
        Heartbeat packets received: 17509
        Heartbeat packet errors: 0      
        Duplicate heartbeat packets received: 0
    Control recovery packet count: 0    
    Sequence number of last heartbeat packet sent: 17928
    Sequence number of last heartbeat packet received: 17916
Fabric link statistics:      
   Child link 0                        
        Probes sent: 69                 
        Probes received: 69             
    Child link 1                        
        Probes sent: 69                 
        Probes received: 69             
Switch fabric link statistics:          
    Probe state : DOWN                  
    Probes sent: 0                      
    Probes received: 0                  
    Probe recv errors: 0                
    Probe send errors: 0                
    Probe recv dropped: 0               
    Sequence number of last probe sent: 0
    Sequence number of last probe received: 0
Chassis cluster LED information:        
    Current LED color: Amber            
    Last LED change reason: Monitored objects are down
Control port tagging:                   
    Disabled                            
Cold Synchronization:                   
    Status:                             
        Cold synchronization completed for: N/A
        Cold synchronization failed for: N/A
        Cold synchronization not known for: N/A
        Current Monitoring Weight: 255  
    Progress:                           
        CS Prereq               0 of 1 SPUs completed
           1. if_state sync          1 SPUs completed
           2. fabric link            0 SPUs completed
           3. policy data sync       1 SPUs completed
           4. cp ready               0 SPUs completed
           5. VPN data sync          0 SPUs completed
           6. IPID data sync         0 SPUs completed
           7. All SPU ready          0 SPUs completed
           8. AppID ready            0 SPUs completed
           9. Tunnel Sess ready      0 SPUs completed
        CS RTO sync             0 of 1 SPUs completed
       CS Postreq              0 of 1 SPUs completed

    Statistics:
        Number of cold synchronization completed: 0
        Number of cold synchronization failed: 0

    Events:
        Dec  9 14:22:34.358 : Cold sync for PFE  is RTO sync in process
        Dec  9 14:22:34.803 : Cold sync for PFE  is Completed

Loopback Information:

    PIC Name        Loopback        Nexthop     Mbuf
    -------------------------------------------------
                    Success         Failure     Success    

Interface monitoring:
    Statistics:
        Monitored interface failure count: 0

    Events:
        Dec  9 14:22:37.137 : Interface ge-0/0/5 monitored by rg 1, changed state from Down to Up
        Dec  9 14:22:37.279 : Interface ge-0/0/4 monitored by rg 1, changed state from Down to Up
                                        
Fabric monitoring:                      
    Status:                             
        Fabric Monitoring: Enabled      
        Activation status: Active       
        Fabric Status reported by data plane: Up
        JSRPD internal fabric status: Up
                                        
Fabric link events:                     
        Dec  9 19:09:12.742 : Fabric monitoring is suspended due to USPIPC CONNECTION failure
        Dec  9 19:15:37.365 : Fabric monitoring is suspended by remote node
        Dec  9 19:17:36.806 : Fabric monitoring suspension is revoked by remote node
        Dec  9 19:17:40.808 : Child link-0 of fab0 is down, pfe notification
        Dec  9 19:17:40.808 : Child link-1 of fab0 is down, pfe notification
        Dec  9 19:17:40.808 : Child link-0 of fab1 is down, pfe notification
        Dec  9 19:17:40.808 : Child link-1 of fab1 is down, pfe notification
        Dec  9 19:17:42.758 : Child link-0 of fab0 is up, pfe notification
        Dec  9 19:17:42.758 : Child link-1 of fab0 is up, pfe notification
        Dec  9 19:17:43.755 : Fabric link up, link status timer
Control link status: Up
    Server information:
        Server status : Inactive
        Server connected to None
    Client information:
        Client status : Connected
        Client connected to 130.16.0.1/62845
Control port tagging:
    Disabled

Control link events:
        Dec  9 14:21:25.527 : Control link fxp1 is up
        Dec  9 14:22:06.528 : Control link fxp1 is up
        Dec  9 14:22:07.254 : Control link fxp1 is up
        Dec  9 14:22:09.124 : Control link fxp1 is up
        Dec  9 19:09:10.524 : Control link fxp1 is down
        Dec  9 19:09:10.530 : Control link down, flowd is down
        Dec  9 19:11:37.898 : Control link fxp1 is down
        Dec  9 19:11:38.113 : Control link fxp1 is down
        Dec  9 19:15:31.728 : Control link fxp1 is up
        Dec  9 19:15:38.247 : Control link up, link status timer

Hardware monitoring:                    
    Status:                             
        Activation status: Enabled      
        Redundancy group 0 failover for hardware faults: Enabled
        Hardware redundancy group 0 errors: 0
        Hardware redundancy group 1 errors: 0
                                        
Schedule monitoring:
    Status:                             
        Activation status: Disabled     
        Schedule slip detected: None    
        Timer ignored: No               
                                        
    Statistics:                         
        Total slip detected count: 2    
        Longest slip duration: 3(s)     

  Events:                             
        Dec  9 14:19:13.237 : Detected schedule slip
        Dec  9 14:20:13.562 : Cleared schedule slip
        Dec  9 19:10:53.065 : Detected schedule slip
        Dec  9 19:11:55.408 : Cleared schedule slip

Configuration Synchronization:
    Status:
        Activation status: Enabled
        Last sync operation: Auto-Sync
        Last sync result: Not needed
        Last sync mgd messages:

    Events:
        Dec  9 14:21:04.959 : Auto-Sync: Not needed.

Cold Synchronization Progress:
    CS Prereq               0 of 1 SPUs completed
       1. if_state sync          1 SPUs completed
       2. fabric link            0 SPUs completed
       3. policy data sync       1 SPUs completed
       4. cp ready               0 SPUs completed
       5. VPN data sync          0 SPUs completed
       6. IPID data sync         0 SPUs completed
       7. All SPU ready          0 SPUs completed
       8. AppID ready            0 SPUs completed
       9. Tunnel Sess ready      0 SPUs completed
    CS RTO sync             0 of 1 SPUs completed
    CS Postreq              0 of 1 SPUs completed
                                        
node1:                                  
--------------------------------------------------------------------------
Redundancy mode:                        
    Configured mode: active-active      
    Operational mode: active-active     
Cluster configuration:                  
    Heartbeat interval: 1000 ms         
    Heartbeat threshold: 3              
    Control link recovery: Disabled     
    Fabric link down timeout: 66 sec    
Node health information:                
    Local node health: Healthy          
    Remote node health: Not healthy

Redundancy group: 0, Threshold: 255, Monitoring failures: none
    Events:
        Dec  9 14:17:15.788 : hold->secondary, reason: Hold timer expired
        Dec  9 19:05:27.214 : secondary->primary, reason: Only node present

Redundancy group: 1, Threshold: 255, Monitoring failures: none
    Events:
        Dec  9 14:17:17.372 : hold->secondary, reason: Hold timer expired
        Dec  9 19:05:27.200 : secondary->ineligible, reason: Fabric link down
        Dec  9 19:05:27.269 : ineligible->primary, reason: Only node present
Control link statistics:
    Control link 0:
        Heartbeat packets sent: 17917
        Heartbeat packets received: 17517
        Heartbeat packet errors: 0
        Duplicate heartbeat packets received: 0
    Control recovery packet count: 0
    Sequence number of last heartbeat packet sent: 17917
    Sequence number of last heartbeat packet received: 17929
Fabric link statistics:
    Child link 0                        
        Probes sent: 35495              
        Probes received: 34394          
    Child link 1                        
        Probes sent: 35497              
        Probes received: 34394          
Switch fabric link statistics:    
   Probe state : DOWN                  
    Probes sent: 0                      
    Probes received: 0                  
    Probe recv errors: 0                
    Probe send errors: 0                
    Probe recv dropped: 0               
    Sequence number of last probe sent: 0
    Sequence number of last probe received: 0

Chassis cluster LED information:
    Current LED color: Green
    Last LED change reason: No failures
Control port tagging:
    Disabled

Cold Synchronization:
    Status:
        Cold synchronization completed for: N/A
        Cold synchronization failed for: N/A
        Cold synchronization not known for: N/A
        Current Monitoring Weight: 0

    Progress:
        CS Prereq               1 of 1 SPUs completed
           1. if_state sync          1 SPUs completed
           2. fabric link            1 SPUs completed
           3. policy data sync       1 SPUs completed
           4. cp ready               1 SPUs completed
           5. VPN data sync          1 SPUs completed
           6. IPID data sync         1 SPUs completed
           7. All SPU ready          1 SPUs completed
           8. AppID ready            1 SPUs completed
           9. Tunnel Sess ready      1 SPUs completed
        CS RTO sync             1 of 1 SPUs completed
        CS Postreq              1 of 1 SPUs completed
                                        
    Statistics:                         
        Number of cold synchronization completed: 0
        Number of cold synchronization failed: 0
  Events:                             
        Dec  9 14:18:47.255 : Cold sync for PFE  is RTO sync in process
        Dec  9 14:18:48.641 : Cold sync for PFE  is Post-req check in process
        Dec  9 14:18:50.645 : Cold sync for PFE  is Completed

Loopback Information:

    PIC Name        Loopback        Nexthop     Mbuf
    -------------------------------------------------
                    Success         Success     Success    

Interface monitoring:
    Statistics:
        Monitored interface failure count: 2

    Events:
        Dec  9 19:07:00.391 : Interface ge-0/0/4 monitored by rg 1, changed state from Up to Down
        Dec  9 19:07:00.548 : Interface ge-0/0/5 monitored by rg 1, changed state from Up to Down
        Dec  9 19:13:56.789 : Interface ge-0/0/4 monitored by rg 1, changed state from Down to Up
        Dec  9 19:13:56.817 : Interface ge-0/0/5 monitored by rg 1, changed state from Down to Up
                                        
Fabric monitoring:                      
    Status:                             
        Fabric Monitoring: Enabled      
        Activation status: Active       
        Fabric Status reported by data plane: Down
        JSRPD internal fabric status: Down
                                        
Fabric link events:                     
        Dec  9 19:13:50.856 : Child ge-5/0/8 of fab1 is down
        Dec  9 19:13:50.866 : Child ge-5/0/9 of fab1 is down
        Dec  9 19:13:52.851 : Fabric link fab1 is up
        Dec  9 19:13:52.852 : Child ge-5/0/8 of fab1 is up
        Dec  9 19:13:52.868 : Child ge-5/0/9 of fab1 is up
        Dec  9 19:13:53.612 : Fabric link fab0 is up
        Dec  9 19:13:53.613 : Child ge-0/0/8 of fab0 is up
        Dec  9 19:13:53.630 : Child ge-0/0/9 of fab0 is up
        Dec  9 19:13:55.649 : Child link-0 of fab0 is up, pfe notification
        Dec  9 19:13:55.649 : Child link-1 of fab0 is up, pfe notification

Control link status: Up
    Server information:                 
        Server status : Connected       
        Server connected to 129.16.0.1/64127
    Client information:
        Client status : Inactive
        Client connected to None
Control port tagging:
    Disabled

Control link events:
        Dec  9 14:15:21.808 : Control link fxp1 is down
        Dec  9 14:15:45.347 : Control link fxp1 is up
        Dec  9 14:17:20.899 : Control link fxp1 is up
        Dec  9 14:17:31.245 : Control link fxp1 is up
        Dec  9 19:05:27.200 : Control link down, link status timer
        Dec  9 19:05:27.219 : Control link fxp1 is up
        Dec  9 19:06:28.399 : Control link fxp1 is up
        Dec  9 19:07:05.419 : Control link fxp1 is up
        Dec  9 19:11:51.177 : Control link up, link status timer
        Dec  9 19:12:52.871 : Control link fxp1 is up

Hardware monitoring:
    Status:
        Activation status: Enabled
        Redundancy group 0 failover for hardware faults: Enabled
        Hardware redundancy group 0 errors: 0
        Hardware redundancy group 1 errors: 0
                                        
Schedule monitoring:
    Status:                             
        Activation status: Disabled     
        Schedule slip detected: None    
        Timer ignored: No               
                                        
    Statistics:                         
        Total slip detected count: 3    
        Longest slip duration: 6(s)     

    Events:                             
        Dec  9 14:15:37.423 : Detected schedule slip
        Dec  9 14:16:37.560 : Cleared schedule slip
        Dec  9 14:19:14.050 : Detected schedule slip
        Dec  9 14:20:14.126 : Cleared schedule slip
        Dec  9 19:06:43.691 : Detected schedule slip
        Dec  9 19:07:43.878 : Cleared schedule slip

Configuration Synchronization:
    Status:
        Activation status: Enabled
        Last sync operation: Auto-Sync
        Last sync result: Succeeded

    Events:
        Dec  9 14:17:46.062 : Auto-Sync: In progress. Attempt: 1
        Dec  9 14:19:07.641 : Auto-Sync: Clearing mgd. Attempt: 1
        Dec  9 14:19:14.043 : Auto-Sync: Succeeded. Attempt: 1

Cold Synchronization Progress:
    CS Prereq               1 of 1 SPUs completed
       1. if_state sync          1 SPUs completed
       2. fabric link            1 SPUs completed
       3. policy data sync       1 SPUs completed
       4. cp ready               1 SPUs completed
       5. VPN data sync          1 SPUs completed
       6. IPID data sync         1 SPUs completed
       7. All SPU ready          1 SPUs completed
       8. AppID ready            1 SPUs completed
       9. Tunnel Sess ready      1 SPUs completed
    CS RTO sync             1 of 1 SPUs completed
    CS Postreq              1 of 1 SPUs completed

 

Routing Same Network Over VPN Tunnels


Hi All,

 

I have a customer VPN that terminates on my SRX240H2 at one of my sites. This is a route-based VPN tunnel, and proxy IDs have to be configured on it. Here is my issue, which would be resolved on newer devices using traffic selectors:

 

I have two subnets that my customer needs access to, and these subnets cannot be grouped. My customer has provided a single subnet for access to my site. However, since I need to use proxy IDs with these subnets configured, I need to configure two tunnels, each with the following proxy IDs:

 

Tunnel 1

* customer network, my network A

Tunnel 2

* customer network, my network B 

 

However, because these are route-based tunnels, I now have a static route for the customer network pointing to two tunnels. If traffic comes in on Tunnel 2, the device may route the return traffic back through Tunnel 1.
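
To illustrate the ambiguity (prefixes and st0 units below are placeholders, not taken from the actual setup), both tunnels carry the same remote proxy ID and the static route for the customer network ends up with two equal next hops:

set security ipsec vpn tunnel-1 ike proxy-identity local 10.1.1.0/24 remote 192.168.50.0/24
set security ipsec vpn tunnel-2 ike proxy-identity local 10.1.2.0/24 remote 192.168.50.0/24
set routing-options static route 192.168.50.0/24 next-hop st0.1
set routing-options static route 192.168.50.0/24 next-hop st0.2

One partial mitigation is qualified-next-hop with different preferences, which at least makes the outbound choice deterministic, although by itself it does not guarantee that return traffic leaves on the tunnel it arrived on:

set routing-options static route 192.168.50.0/24 qualified-next-hop st0.1 preference 5
set routing-options static route 192.168.50.0/24 qualified-next-hop st0.2 preference 10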

 

Is there a way to solve this issue that I'm missing? 

Unfortunately I cannot implement tunnels per routing-instance at this time. 

 

Anyone with good understanding of Unified Security Policies (SRX)


I've been looking for a while at using unified security policies, as Juniper keeps releasing compelling features based on them. Not to mention, newer versions of Junos default to this, and it is required for Security Director depending on your Junos version. I just wanted to see if anyone has had good experience using these new Unified Security Policies (USP).

For example:

  • Applications can be matched as part of your security policy (the traditional application firewall applied as a service to a rule is going away); see the sketch after this list

  • URL categories as part of your security policy match

  • Multiple IPS policies can be used
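
A minimal unified-policy sketch for the application-match point above (zone and policy names are illustrative; junos:HTTP is one of the predefined dynamic applications):

set security policies from-zone trust to-zone untrust policy allow-web match source-address any
set security policies from-zone trust to-zone untrust policy allow-web match destination-address any
set security policies from-zone trust to-zone untrust policy allow-web match application any
set security policies from-zone trust to-zone untrust policy allow-web match dynamic-application junos:HTTP
set security policies from-zone trust to-zone untrust policy allow-web then permit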

 

However, there seem to be some caveats to using unified policies (USP):

  • If a USP is present in a zone-based policy table, no lookup is performed in the global policy table when there is no match. If you utilize the global policy table, this forces you to create all USPs in the global policy table.

  • A mix of traditional and unified policies changes your policy lookup order. If you have a traditional security policy after a USP, the traditional policy will be matched first.

  • It seems all deny rules must be USPs, or you run into the policy lookup-order issue above.

  • Policies may be matched as "potential" matches while the application has not yet been identified.

 

 

Event trigger on SRX345 when monitored interface bandwidth > 90%


Hi, Guys,

 

I would just like to know how to configure the following requirement:

 

Conditions:

1. The SRX345 monitors the bandwidth utilization of its WAN interfaces.

2. If the utilization goes above 90%, some actions should be triggered (some commands executed).

 

Can I create an event-options policy to achieve this task? Any configuration example?
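
One possible approach (a rough sketch only, not tested on an SRX345): use an RMON alarm on the interface octet counter to raise an event at roughly 90% utilization, and an event-options policy that runs commands when that event appears. The ifHCInOctets index, the thresholds (calculated here for a 100 Mb/s link sampled every 30 seconds, in bytes), and especially the triggering event name are assumptions that should be verified on the actual box, for example by checking which syslog tag is logged when the alarm first fires:

set snmp rmon event 1 type log-and-trap
set snmp rmon alarm 1 variable ifHCInOctets.526
set snmp rmon alarm 1 sample-interval 30
set snmp rmon alarm 1 sample-type delta-value
set snmp rmon alarm 1 rising-threshold 337500000
set snmp rmon alarm 1 falling-threshold 300000000
set snmp rmon alarm 1 rising-event-index 1
set snmp rmon alarm 1 falling-event-index 1

set event-options policy WAN-UTIL-90 events SNMPD_RMON_EVENTLOG
set event-options policy WAN-UTIL-90 then execute-commands commands "show interfaces ge-0/0/0 extensive"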

 

Many thanks

IPsec VPN issue on chassis cluster - 'external-interface' (lo0.1) and 'routing-interface' (ge-5/0/12.0) belong to different zones. Re-route failed, pkt dropped.


I configured two IPsec VPN tunnels to AWS. Both tunnels are up, but there's a problem communicating with hosts in AWS:

 

After enabling traceoptions I see errors:

 

 

CID-1:RT:'external-interface'(lo0.1) and 'routing-interface'(ge-5/0/12.0) belong to different zones. Re-route failed, pkt dropped.

 

 

My current configuration does indeed put lo0.1 in a different routing instance (vr1) and a different security zone (vpn-aws) than ge-5/0/12.0, which is the interface currently used to reach the Internet, in the master routing instance and the untrust zone.

 

root@SRX1# show security zones security-zone untrust    
host-inbound-traffic {
    system-services {
        ping;
    }
    protocols {
        bgp;
    }
}
interfaces {
    ge-0/0/12.0;
    ge-5/0/12.0;
}

{primary:node0}[edit]
root@SRX1# show security zones security-zone vpn-aws    
host-inbound-traffic {
    system-services {
        ike;
        ping;
    }
}
interfaces {
    st0.2;
    st0.1;
    lo0.1;
}

root@SRX1# show routing-instances 
vr1 {
    instance-type virtual-router;
    interface lo0.1;
    interface st0.1;
    interface st0.2;
    routing-options {
        static {
            route 10.1.0.0/16 next-hop [ st0.1 st0.2 ];
        }
     }
}

I tried to move interface lo0.1 from security-zone vpn-aws to untrust to resolve the issue, but it's not possible:

 

[edit security zones security-zone untrust]
  'interfaces lo0.1'
    Interface lo0.1 must be in the same routing instance as other interfaces in the zone
error: configuration check-out failed

So I have no idea what to do. I can't have both interfaces in different security zones, but at the same time I can't have both interfaces in the same security zone because they are in different routing instances.


IpSec VPN mode


Hi all,

What is the difference between Main and Aggressive mode in IPsec VPN?

Which mode is best for a network with 250 spokes and 2 hubs?

 

Thanks

A.

srx300 os recovery help T.T


Hello? 

I need your help!

The Device is srx300 

The problem is that the os of the device is faulty.

So I proceeded with the recovery with usb.

However, I keep getting an error.

 

===================================================================

 

loader> install file:///junos-srxsme-18.2R3.4.tgz
Target device selected for installation: internal media
/kernel data=0xff3050+0x1a658c syms=[0x4+0xb3f30+0x4+0x11271e]
Kernel entry at 0x801000c0 ...
init regular console
Primary ICache: Sets 16 Size 128 Asso 39
Primary DCache: Sets 8 Size 128 Asso 32
Secondary DCache: Sets 1024 Size 128 Asso 4
CIU_FUSE 0x3/0x3
GDB: debug ports: uart
GDB: current port: uart
KDB: debugger backends: ddb gdb
KDB: current backend: ddb
kld_map_v: 0x8ff80000, kld_map_p: 0x0
Running in PARTITIONED TLB MODE
Copyright (c) 1996-2019, Juniper Networks, Inc.
All rights reserved.
Copyright (c) 1992-2007 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
JUNOS 18.2R3.4 #0: 2019-06-21 00:08:52 UTC
builder@svl-junos-p001:/volume/build/junos/18.2/release/18.2R3.4/obj/octeon/junos/bsd/kernels/JSRXNLE/kernel
can't re-use a leaf (perf_mon)!
can't re-use a leaf (threshold)!
can't re-use a leaf (debug)!
JUNOS 18.2R3.4 #0: 2019-06-21 00:08:52 UTC
builder@svl-junos-p001:/volume/build/junos/18.2/release/18.2R3.4/obj/octeon/junos/bsd/kernels/JSRXNLE/kernel
real memory = 4294967296 (4194304K bytes)
avail memory = 2353434624 (2244MB)
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
Security policy loaded: JUNOS MAC/runasnonroot (mac_runasnonroot)
Security policy loaded: Junos MAC/veriexec (mac_veriexec)
Security policy loaded: JUNOS MAC/pcap (mac_pcap)
MAC/veriexec fingerprint module loaded: SHA1
MAC/veriexec fingerprint module loaded: SHA256
netisr_init: forcing maxthreads from 4 to 2
random: <Software, Yarrow> initialized
cpu0 on motherboard
: CAVIUM's OCTEON 70XX/71XX CPU Rev. 0.2 with no FPU implemented
L1 Cache: I size 78kb(128 line), D size 32kb(128 line), thirty two way.
L2 Cache: Size 512kb, 4 way
obio0 on motherboard
uart0: <Octeon-16550 channel 0> on obio0
uart0: console (9600,n,8,1)
twsi0 on obio0
set clock 0x49
xhci0: <Cavium Octeon 7xxx xHCI Host Driver> on obio0
usb0: <USB bus for xHCI Controller> on xhci0
usb0: USB revision 3.0
uhub0: vendor 0x0000 XHCI root hub, class 9/0, rev 3.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
xhci1: <Cavium Octeon 7xxx xHCI Host Driver> on obio0
usb1: <USB bus for xHCI Controller> on xhci1
usb1: USB revision 3.0
uhub1: vendor 0x0000 XHCI root hub, class 9/0, rev 3.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
cpld0 on obio0
pcib0: <Cavium on-chip PCIe HOST bridge> on obio0
Disabling Octeon big bar support
pcib0: Initialized controller
pci0: <PCI bus> on pcib0
pci0: <network, ethernet> at device 0.0 (no driver attached)
pci0: <network, ethernet> at device 0.1 (no driver attached)
gblmem0 on obio0
octpkt0: <Octeon RGMII> on obio0
cfi0: <Macronix MX25L64 - 8MB> on obio0
cfi1: <Macronix MX25L64 - 8MB> on obio0
umass0: Silicon Motion,Inc. SM3255AA MEMORY BAR, rev 2.00/1.00, addr 2
umass1: SanDisk Firebird USB Flash Drive, rev 2.00/1.26, addr 2
Timecounter "mips" frequency 1200000000 Hz quality 0
md0: Preloaded image </isofs-install-srxsme> 25704448 bytes at 0x8145fc34
da1 at umass-sim1 bus 1 target 0 lun 0
da1: <SanDisk Cruzer Blade 1.26> Removable Direct Access SCSI-5 device
da1: 40.000MB/s transfers
da1: 7633MB (15633408 512 byte sectors: 255H 63S/T 973C)
da0 at umass-sim0 bus 0 target 0 lun 0
da0: < USB MEMORY BAR 1000> Removable Direct Access SCSI-0 device
da0: 40.000MB/s transfers
da0: Attempt to query device size failed: NOT READY, Medium not present
random: unblocking device.
Kernel thread "wkupdaemon" (pid 58) exited prematurely.
(da0:umass-sim0:0:0:0): READ CAPACITY. CDB: 25 0 0 0 0 0 0 0 0 0
(da0:umass-sim0:0:0:0): CAM Status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI Status: Check Condition
(da0:umass-sim0:0:0:0): NOT READY asc:3a,0
(da0:umass-sim0:0:0:0): Medium not present
(da0:umass-sim0:0:0:0): Unretryable error
Opened disk da0 -> 6
(da0:umass-sim0:0:0:0): READ CAPACITY. CDB: 25 0 0 0 0 0 0 0 0 0
(da0:umass-sim0:0:0:0): CAM Status: SCSI Status Error
(da0:umass-sim0:0:0:0): SCSI Status: Check Condition
(da0:umass-sim0:0:0:0): NOT READY asc:3a,0
(da0:umass-sim0:0:0:0): Medium not present
(da0:umass-sim0:0:0:0): Unretryable error
Opened disk da0 -> 6
Trying to mount root from cd9660:/dev/md0
WARNING: preposterous time in file system
WARNING: clock 11855 days greater than file system time
tty: not found
Starting JUNOS installation:
Source Package: disk1:/junos-srxsme-18.2R3.4.tgz
Target Media : internal
Product : srx300
Computing slice and partition sizes for /dev/da0 ...
awk: division by zero
input record number 1, file
source line number 3
The target media /dev/da0 (0 bytes) is too small.
The installation cannot proceed
ERROR: Target media is too small
Waiting (max 60 seconds) for system process `vnlru_mem' to stop...done
Waiting (max 60 seconds) for system process `vnlru' to stop...done
Waiting (max 60 seconds) for system process `bufdaemon' to stop...done
Waiting (max 60 seconds) for system process `syncer' to stop...
Syncing disks, vnodes remaining...0 0 done

syncing disks... All buffers synced.
Uptime: 22s
Rebooting...
cpu_reset: Stopping other CPUs


SPI stage 1 bootloader (Build time: May 3 2016 - 23:48:30)
early_board_init: Board type: SRX_300

U-Boot 2013.07-JNPR-3.1 (Build time: May 03 2016 - 23:48:31)

SRX_300 board revision major:1, minor:7, serial #: CV2616AF0140
OCTEON CN7020-AAP pass 1.2, Core clock: 1200 MHz, IO clock: 600 MHz, DDR clock: 667 MHz (1334 Mhz DDR)
Base DRAM address used by u-boot: 0x10fc00000, size: 0x400000
DRAM: 4 GiB
Clearing DRAM...... done
Using default environment

SF: Detected MX25L6405D with page size 256 Bytes, erase size 64 KiB, total 8 MiB
Found valid SPI bootloader at offset: 0x90000, size: 1481840 bytes


U-Boot 2013.07-JNPR-3.1 (Build time: May 03 2016 - 23:50:19)

Using DRAM size from environment: 4096 MBytes
checkboard siege
SATA0: not available
SATA1: not available
SATA BIST STATUS = 0x0
SRX_300 board revision major:1, minor:7, serial #: CV2616AF0140
OCTEON CN7020-AAP pass 1.2, Core clock: 1200 MHz, IO clock: 600 MHz, DDR clock: 667 MHz (1334 Mhz DDR)
Base DRAM address used by u-boot: 0x10f000000, size: 0x1000000
DRAM: 4 GiB
Clearing DRAM...... done
SF: Detected MX25L6405D with page size 256 Bytes, erase size 64 KiB, total 8 MiB
PCIe: Port 0 link active, 1 lanes, speed gen2
PCIe: Link timeout on port 1, probably the slot is empty
PCIe: Port 2 not in PCIe mode, skipping
Net: octeth0
Interface 0 has 1 ports (SGMII)
Type the command 'usb start' to scan for USB storage devices.

Boot Media: eUSB usb
Found TPM SLB9660 TT 1.2 by Infineon
TPM initialized
Hit any key to stop autoboot: 0
SF: Detected MX25L6405D with page size 256 Bytes, erase size 64 KiB, total 8 MiB
SF: 1048576 bytes @ 0x200000 Read: OK
## Starting application at 0x8f0000a0 ...
Consoles: U-Boot console
Found compatible API, ver. 3.1
USB1:
Starting the controller
USB XHCI 1.00
scanning bus 1 for devices... 2 USB Device(s) found
USB0:
Starting the controller
USB XHCI 1.00
scanning bus 0 for devices... 2 USB Device(s) found
scanning usb for storage devices... Device NOT ready
Request Sense returned 02 3A 00
2 Storage Device(s) found

FreeBSD/MIPS U-Boot bootstrap loader, Revision 2.8
(slt-builder@svl-ssd-build-vm06.juniper.net, Tue Feb 10 00:32:30 PST 2015)
Memory: 4096MB
SF: Detected MX25L6405D with page size 256 Bytes, erase size 64 KiB, total 8 MiB
[4]Booting from usb slice 1
\
can't load '/kernel'
can't load '/kernel.old'
Press Enter to stop auto bootsequencing and to enter loader prompt.

 

=========================================================================

 

It keeps repeating.

There was no solution.

What should I do?

Please Help me T.T..

 

Issue with LDAP Integration


Hi All,

 

I'm trying to get an SRX345 connected to a domain controller using LDAP, but it is currently failing. I have confirmed the user and password set up on the DC for the LDAP connection and configured the SRX using the guides, but I get an authentication error when connecting. Any ideas what I can do? I have reconfirmed and updated the configuration and ensured the account is not locked out, but it is still the same.

 

Here's the config I've used:

set services user-identification active-directory-access domain shared.services user firewall
set services user-identification active-directory-access domain shared.services user password "$9$dXbgok.PFnCDi39tuEhWLx-s4UDkQ39wYHmPQ9CKMW8Nd"
set services user-identification active-directory-access domain shared.services domain-controller shs.Services address 172.30.0.1
set services user-identification active-directory-access domain shared.services user-group-mapping ldap base DC=Shared,DC=Services
set services user-identification active-directory-access domain shared.services user-group-mapping ldap user firewall
set services user-identification active-directory-access domain shared.services user-group-mapping ldap user password "$9$/vZYC0BlKM7-wcyNbYgUD5QF6tOhclXNbAprvMXbwmf5Tn/"

set access profile LDAP authentication-order ldap
set access profile LDAP authentication-order password
set access profile LDAP ldap-options base-distinguished-name CN=B,DC=Sd,DC=Services
set access profile LDAP ldap-options search search-filter cn=
set access profile LDAP ldap-options search admin-search distinguished-name firewall
set access profile LDAP ldap-options search admin-search password "$9$Hq5QtuOhSe/CclvW-daZUimT6/tRcl.PA0ORlegoaJjH"
set access profile CASB-LDAP ldap-server 172.30.0.1

 

Here is the output for looking at domain controller status:

show services user-identification active-directory-access domain-controller status extensive
node0:
--------------------------------------------------------------------------

Domain: shared.services
Domain controller: shs.Shared.Services
Address: 172.30.0.1
Status: Disconnected
Reason: Authentication failed


 

Next steps and help appreciated.

 

Regards

 

Adrian

log in -- Host key verification failed


Hi,

 

We have observed the below log messages on an SRX210 as well as an SRX300 while trying to authenticate. I could not log in to the firewall.

Please suggest how to fix it.

 

 

user@fw> ssh 10.21.8.100
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:xXA7F58CP343PY8KuG/Ingdw0JKLGkpvUtfLJeiMGf4.
Please contact your system administrator.
Add correct host key in /var/home/radius-admingroup-template-user/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /var/home/radius-admingroup-template-user/.ssh/known_hosts:3
ED25519 host key for 10.21.8.100 has changed and you have requested strict checking.
Host key verification failed.
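
This usually just means the device at 10.21.8.100 has regenerated its SSH host keys (for example after a reinstall or an RMA), or that a different device now answers on that address. Assuming the key change is expected, removing the stale entry (line 3 of the known_hosts file named in the warning) clears the error. From the Junos shell (start shell) this can be done with ssh-keygen, which is normally present in the FreeBSD userland, although that is an assumption worth confirming on your release:

ssh-keygen -R 10.21.8.100 -f /var/home/radius-admingroup-template-user/.ssh/known_hosts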

 

Regards,

Nik

UDP / TCP port verification commands


Hi All, 

 

I checked some documents but could not find the exact commands to check which UDP or TCP ports are allowed.

I did see that the command "show system connections" lists certain ports, but it does not show the ones that have been allowed. For example, TCP port 12500 is allowed and I see it in the "show security flow session" output, but it is not listed in "show system connections".

 

Are there any specific commands to see which UDP/TCP ports are allowed on the firewall?
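
As far as I know there is no single "show allowed ports" command; what is allowed is derived from the security policies and the applications (predefined junos-* or custom) they reference. A few commands that together give that picture (zone names are placeholders):

show security policies from-zone trust to-zone untrust detail
show configuration applications
show configuration groups junos-defaults applications
show security flow session destination-port 12500 summary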

 

Please guide. Thanks

 

DNS Requests from same source port are intermittently dropped


Hello all,

 

We see an issue on our SRX340 where DNS packets from the same source port are intermittently dropped.

 

This problem is somewhat known in the Linux world and well described by this illustration: http://www.kunitake.org/chalow/images/CentOS6-ReplyLoss.png

 

There is a workaround by using the option "single-request-reopen" in /etc/resolv.conf, but we would like to solve the core issue.
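
For completeness, the client-side workaround referred to above is a one-line addition to /etc/resolv.conf on the affected Linux hosts:

options single-request-reopen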

Apparently there even is (or was) a Knowledge Base article on the topic (KB21575) which I cannot access.

 

Does anyone know how to resolve this or why the Knowledge Base article is not available anymore?

 

Thank you


Pascal

Leaking BGP-learned routes from a virtual-router instance to inet.0


Hello,

 

Anyone wants to help a bit? I think I'm missing something totally obvious here but cannot figure it out.

 

I have an SRX1400 cluster running Junos 12.3X48-D85. There is a virtual-router type routing instance which is learning routes from a remote source via eBGP. I want to import some of those routes into the master inet.0 routing table using instance-import, but that is not doing anything, and I cannot find any troubleshooting tools for it either.

 

I know this would probably work just fine using rib-groups, but for various reasons I would like to avoid them and just use import policies. The main reason is that a similar configuration will most likely be replicated to some hundreds of routing instances, and I don't want to end up configuring a separate rib-group for each of them. One generic import rule with suitable filter terms would be much more convenient.

 

The configuration, which I believe should be correct, is below, but I have already tried quite a few different combinations, including policy statements which accept everything.

 

Import policy:

user@fwX_node0> show configuration policy-options policy-statement import-instances-to-default
term reject-default {
    from {
        route-filter 0.0.0.0/0 exact;
    }
    then reject;
}
term vpn-routers {
    from {
        instance vpn-routers;
        protocol bgp;
        route-filter 10.0.0.0/16 orlonger;
    }
    then accept;
}
term reject-rest {
    then reject;
}

 

Main instance routing-options:

user@fwX_node0> show configuration routing-options
static {
    ...removed as unrelated
}
router-id 10.69.69.1;
autonomous-system 65400;
instance-import import-instances-to-default;

 

Routing-instance with BGP neighborhood to a remote device:

user@fwX_node0> show configuration routing-instances vpn-routers
instance-type virtual-router;
interface reth3.253;
routing-options {
    static {
        route 0.0.0.0/0 next-table inet.0;
        route 172.16.0.0/16 reject;
        route 10.56.0.0/16 reject;
        route 10.57.0.0/16 reject;
    }
    router-id 100.69.0.1;
}
protocols {
    bgp {
        group dc1-vpn2 {
            import 394682-vpn-routers-import; // Filters towards BGP, removing doesn't affect
            export 394682-vpn-routers-export; // Filters towards BGP, removing doesn't affect
            peer-as 65402;
            neighbor 100.69.0.2 {
                local-address 100.69.0.1;
            }
        }
    }
}

 

Example route, missing from the main table:

user@fwX_node0> show route 10.57.114.2

inet.0: 1126 destinations, 1193 routes (1123 active, 3 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[Static/5] 18w1d 19:33:07
                    > to xx.xx.xxx.xxx via reth0.0

vpn-routers.inet.0: 12 destinations, 12 routes (12 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.57.114.0/24     *[BGP/170] 06:30:01, MED 0, localpref 100
                      AS path: 65402 ?, validation-state: valid
                    > to 100.69.0.2 via reth3.253

{primary:node0}
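
A couple of diagnostic angles that may help narrow this down (a sketch, not a confirmed root cause): check whether the prefix is being imported into inet.0 but hidden, for example because its BGP next hop (100.69.0.2 via reth3.253 in the virtual router) cannot be resolved there, and test what the import policy actually returns for the prefix:

show route table inet.0 10.57.114.0/24 hidden extensive
show route table inet.0 protocol bgp
test policy import-instances-to-default 10.57.114.0/24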


How to stop attempted logins on our public interface?


We are using an SRX345 as a public-facing Internet router. As would be expected, we have new unknown friends from all over trying to log in to our new device. We have put the login retry-options in place:

set system login retry-options tries-before-disconnect 3
set system login retry-options backoff-threshold 3
set system login retry-options backoff-factor 10
set system login retry-options lockout-period 20

We also have the SSH root-login deny command in place.

But we still see some interesting user names, besides root and admin, in our log messages.

Is there a better way to limit the attention to our new interface?
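
Two common ways to cut the noise, sketched below with placeholder names and prefixes. First, if you do not need to manage the box from the Internet, make sure ssh is not listed under host-inbound-traffic for the untrust zone at all. Second, if you do need it, apply a loopback firewall filter that only accepts SSH from your management prefixes; be careful to also accept your own management and routing traffic before any discard term, or you can lock yourself out:

set policy-options prefix-list MGMT-HOSTS 198.51.100.0/24
set firewall family inet filter PROTECT-RE term ALLOW-MGMT-SSH from source-prefix-list MGMT-HOSTS
set firewall family inet filter PROTECT-RE term ALLOW-MGMT-SSH from protocol tcp
set firewall family inet filter PROTECT-RE term ALLOW-MGMT-SSH from destination-port ssh
set firewall family inet filter PROTECT-RE term ALLOW-MGMT-SSH then accept
set firewall family inet filter PROTECT-RE term DROP-SSH from protocol tcp
set firewall family inet filter PROTECT-RE term DROP-SSH from destination-port ssh
set firewall family inet filter PROTECT-RE term DROP-SSH then discard
set firewall family inet filter PROTECT-RE term ALLOW-REST then accept
set interfaces lo0 unit 0 family inet filter input PROTECT-RE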

 

Thanks

 

 

 

Unable to ping local hosts over a site-to-site VPN


I have set up a site-to-site VPN from a Juniper SRX340 to a Cisco Meraki. The VPN is up: "show security ike sa" shows the tunnel up, and "show security ipsec sa" shows it up as well. The problem is that I am not able to ping the customer's local hosts. I can see the route being installed in "show route". Below is the configuration; any help will be greatly appreciated.

 

proposal COSM-ike-phase1-proposal {
    authentication-method pre-shared-keys;
    dh-group group2;
    authentication-algorithm sha1;
    encryption-algorithm aes-256-cbc;
    lifetime-seconds 28800;
}

 

policy COSM-ike-phase1-policy {
    mode main;
    proposals COSM-ike-phase1-proposal;
    pre-shared-key ascii-text "$9$2kgZDHqf5z3qm0B1Rle4aJUkP3n9uOIX7.fQFtpvWhshshshsTYF%%L7VY"; ## SECRET-DATA
}

 

gateway gw-COSM {
    ike-policy COSM-ike-phase1-policy;
    address 259.166.277.20;
    dead-peer-detection {
        always-send;
        interval 10;
        threshold 3;
    }
    external-interface ge-0/0/0;
    general-ikeid;
}

 

 

 

proposal COSM-ipsec-phase2-proposal {
    protocol esp;
    authentication-algorithm hmac-sha1-96;
    encryption-algorithm aes-256-cbc;
    lifetime-seconds 28800;
}

 

policy COSM-ipsec-phase2-policy {
    proposals COSM-ipsec-phase2-proposal;
}

 

 

vpn vpn-COSM {
    bind-interface st0.6;
    ike {
        gateway gw-COSM;
        ipsec-policy COSM-ipsec-phase2-policy;
    }
    traffic-selector ts-1 {
        local-ip 10.4.4.0/24;
        remote-ip 10.20.30.0/23;
    }
    establish-tunnels immediately;
}

 

------------------------------------------------------------------------------------------------------------------

 

 

book1 {
    address local-net 10.4.4.0/24;
    attach {
        zone trust;
    }
}

 

 

book3 {
    address SP-COSM-local 10.20.30.0/23;
    attach {
        zone vpn-customers;
    }
}

 

-----------------------------------------------------------------------------

 

security-zone vpn-customers {
    host-inbound-traffic {
        system-services {
            all;
        }
        protocols {
            all;
        }
    }
    interfaces {
        st0.6 {
            host-inbound-traffic {
                system-services {
                    ssh;
                    ike;
                    ping;
                    http;
                    https;
                }
            }
        }
    }
}

 

------------------------------------------------------------------------------------------

 

 

show security policies from-zone trust to-zone vpn-customers
policy to-COSM {
    match {
        source-address any;
        destination-address SP-COSM-local;
        application any;
    }
    then {
        permit;
    }
}

 

-----------------------------------------------------------------------------

 

show security policies from-zone vpn-customers to-zone trust
policy from-COSM {
    match {
        source-address SP-COSM-local;
        destination-address any;
        application any;
    }
    then {
        permit;
    }
}

 

---------------------------------------------------------------------------

 

show interfaces st0
unit 6 {
    family inet {
        address 10.10.1.9/24;
    }
}

-------------------------------------------------------------------------------

 

 

show routing-options
interface-routes {
    rib-group inet if-rg;
}
static {
    route 10.20.30.0/23 next-hop st0.6;
}
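
A few checks that may help when the tunnel is up but pings fail (a sketch; 10.20.30.1 stands in for a host on the customer side). Confirm the route and whether flow sessions are actually created, and note that when pinging from the SRX itself the packet must be sourced from an address inside the ts-1 local-ip range (10.4.4.0/24), otherwise it will not match the traffic selector; the st0.6 address 10.10.1.9 is outside that range:

show route 10.20.30.1
show security flow session destination-prefix 10.20.30.0/23
show security ipsec security-associations detail
ping 10.20.30.1 source 10.4.4.1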

 

 

 

random kernel panic srx300 with 19.1R1.6


Hi

Anybody seen this behaviour before:

We have 10 SRX300s in 7 locations (3 of the sites in cluster configuration). The two clusters running version 19.1R1.6 are randomly crashing with the kernel panic error below. Support is asking for an RMA, but it sounds to me like the version could be the cause rather than the hardware. The crash frequency is increasing, though, so the suggestion to replace the unit sounds reasonable.

 

xhci_process_cmd_event+0x124 (0xc6d02800,0xc6d51110,0x18411603,0x13c0401) ra 0x8016d420 sz 64
xhci_scan_ring_event+0xd8 (0xc6d02800,0xc6d51110,0x18411603,0x13c0401) ra 0x8016ee8c sz 64
xhci_intr+0x2a0 (0xc6d02800,0xc6d51110,0x18411603,0x13c0401) ra 0x80aeaabc sz 32
mips_handle_this_interrupt+0x8c (0xc6d02800,0xc6d51110,0x18411603,0x13c0401) ra 0x80aeab48 sz 40
mips_handle_interrupts+0x58 (0xc6d02800,0xc6d51110,0x18411603,0x13c0401) ra 0x80aeaf6c sz 48
mips_interrupt+0x224 (0xc6d02800,0xc6d51110,0x18411603,0x13c0401) ra 0x80e4bf5c sz 32
MipsUserIntr+0x1a8 (0xc6d02800,0xc6d51110,0x18411603,0x4038a298) ra 0 sz 0
pid 2057, process: flowd_octeon_hm
cpu:0-Trap cause = 3 (TLB miss (store) - kernel mode)
badvaddr = 0x1010, pc = 0x8016c234, ra = 0x8016d420, sr = 0x508008e3
panic: trap
cpuid = 0
KDB: stack backtrace:
0x4038a298+0x0 (0,0,0,0) ra 0x4038b6d0 sz 0
0x4038b67c+0x54 (0,0,0,0) ra 0x4027fb94 sz 48
0x4027fa68+0x12c (0,0,0,0) ra 0 sz 0
pid 2057, process: flowd_octeon_hm
Uptime: 10d13h13m33s
Cannot dump. No dump device defined.

 

Anybody seen this behaviour before?

Jbuf pool 1 utilization


Hi,

 

We are running a cluster of SRX-345's with firmware 15.1X49-D170.4. Every now and then I see this message in the logs:

 

Local4.Critical 192.168.0.254 Dec 18 09:36:01 SRX345-CL-N0 Warning: jbuf pool id 1 utilization level(99%) is above 90%!

 

Does anyone have an idea how serious this is?

 

Best regards,

Steven

question about sending syslogs to splunk (missing the 'action' field)


Hello,

 

I have successfully set up my SRX to send its syslog and security log traffic to a remote instance of Splunk. In Splunk I am able to write a search that gives me (almost) everything I would need if I were searching traffic logs for troubleshooting or any other purpose. For some reason, though, I just cannot seem to get it to populate an 'action' field. I'm not sure how to get it to interpret RT_FLOW_SESSION_CLOSE as a closed/permitted session. Does anyone have any suggestions for me? I have attached a few screenshots and config outputs in case they help.

 

This is my config group stanza for my primary node:

groups {
    node0 {
        system {
            host-name test-fw1;
            backup-router 10.10.0.1 destination 0.0.0.0/0;
            syslog {
                host 10.10.3.150 {
                    any any;
                    port 514;
                    source-address 10.10.0.6;
                    structured-data;
                }
                file traffic-log {
                    any any;
                    match RT_FLOW;
                }
            }
        }
        security {
            log {
                mode stream;
                format sd-syslog;
                source-address 10.10.0.6;
                stream Splunk {
                    category all;
                    host {
                        10.10.3.150;
                        port 514;
                    }
                }
            }
        }
    }
}

 

(attached screenshots: splunk1.png, splunk2.png)
