KB44632 - Load balancing PCS with vTM


Information

 
Last Modified Date: 11/16/2020 6:57 PM
Synopsis

Load balancing PCS with vTM

This article provides an overview of different ways of load balancing PCS with vTM, and vTM specific tunings for this.
Problem or Goal
Cause
Solution

Load balancing of PCS with vTM can be achieved in two ways:
1) With virtual servers and pools, with the PCSs as nodes in the pools.
2) With GLB (also known as OGS), where vTM manages a domain name, and chooses and returns the IP address of a PCS to clients.

Reasons to choose the pool-based approach:

  • Simpler to set up than GLB.
  • The PCS HTTPS traffic can be inspected and modified, which is sometimes needed.
  • Failover and load detection are faster, because GLB is limited by DNS TTLs.

Reasons to choose GLB:

  • Should be used where PCSs are in different geographic locations (a local vTM could still be used in this case to load balance between PCSs within a single location).
  • vTMs don't have to handle the VPN traffic.

Pool-based

To configure pool-based load balancing of PCS, use the wizard 'Load-balance Pulse Connect Secure'.  This asks for the IP addresses of the PCSs and sets up the pools.  IP Transparency is recommended for high traffic environments (see item 2 below).  For further information, including how to set up vTM to load balance based on the number of users logged into each PCS, see the Load Balancing Pulse Connect Secure with Pulse Traffic Manager Deployment Guide.   Because vTM needs to handle UDP traffic efficiently, vTM 20.2 or later is required for best performance, and once deployed it should be monitored and tunings applied if necessary.
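
If you prefer to script the setup rather than use the wizard, pools can also be created through the vTM REST API.  The following is a minimal sketch only, not the wizard's method: it assumes the REST API is enabled on port 9070, an API version of 8.0 in the URL, a pool name of PCS-Pool, and example PCS nodes 192.0.2.10 and 192.0.2.11 on port 443; adjust all of these to your environment.

    # Hedged example: create a pool containing two PCS nodes via the vTM REST API.
    # You will be prompted for the admin password; -k skips certificate verification.
    curl -k -u admin -H "Content-Type: application/json" -X PUT \
      -d '{"properties": {"basic": {"nodes_table": [
            {"node": "192.0.2.10:443", "state": "active"},
            {"node": "192.0.2.11:443", "state": "active"}]}}}' \
      https://vtm1:9070/api/tm/8.0/config/active/pools/PCS-Pool

Note that this only illustrates the pool portion; the wizard also configures the rest of the service, so it remains the recommended method.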

Monitoring & tuning of vTM

(For PCS monitoring & tuning see KB44397 - Troubleshooting High Load and Performance Issues)

  1. To check for packets dropped by the kernel because of buffers filling up, run netstat -s on the command line during peak times:
    # netstat -s | grep 'receive buffer errors' ; sleep 10 ; netstat -s | grep 'receive buffer errors' 
     749321 receive buffer errors
     749743 receive buffer errors
     

    To check for packets dropped by the vTM software, monitor the SNMP counters virtualserverUDPBytesInDropped and virtualserverUDPBytesOutDropped (aka UDPBytesInDropped and UDPBytesOutDropped in the Activity Graphs).

    If there are any drops, increase the socket buffer sizes by raising these four sysctls: net.core.wmem_default, net.core.wmem_max, net.core.rmem_default and net.core.rmem_max.  10485760 (10 MiB) is recommended as a starting point; here is a one-line shell command to apply it:

    # export n=10485760 ; for i in wmem_max rmem_max rmem_default wmem_default ; do sysctl -w net.core.$i=$n ; done

    vTM will need to be restarted for this to take effect, or at least the relevant Virtual Server stopped and started, which is quicker.  To make the change persistent over reboots and upgrades, on vTM appliances (physical and virtual) these sysctls should be set in System > Sysctls, and on vTM software installs they should be set in /etc/sysctl.conf (see the sketch after this list).

  2. To monitor the CPU usage, run top on the command line during peak times.

    • If the average CPU usage goes above 80%, it is recommended to add more CPUs.

    • Also press 1 in top to get per-CPU stats.  If this shows unbalanced or high 'si' (softirq) values, it can be caused by a few CPUs handling all the interrupts for traffic between vTM and the PCSs.  Two possible solutions are to enable IP Transparency, so that connections from vTM to PCS have many source IPs, which helps to balance the interrupts, or to enable flow hashing on the UDP source port (by default the hash uses only the IP addresses), with this command:

      # ethtool -N <IFACE> rx-flow-hash udp4 sfdn

      where <IFACE> is the interface carrying the vTM to PCS traffic.  That command isn't persistent over reboots, so it needs to be added to /etc/rc.local, but note that on appliances /etc/rc.local is not preserved over upgrades (see the sketch after this list).
       

  3. To monitor the socket count, use this:

    # cat /proc/net/sockstat
    sockets: used 51387
    TCP: inuse 3524 orphan 5 tw 454 alloc 3525 mem 4966
    UDP: inuse 47608 mem 40392
    UDPLITE: inuse 0
    RAW: inuse 1
    FRAG: inuse 0 memory 0
     
    which shows 47608 UDP sockets in use.  There is a limit of 64k sockets, and there is one socket per client; a sketch for monitoring this count is shown after this list.
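
For item 1 above, here is a minimal sketch of making the socket buffer sizes persistent on a vTM software install, using the 10 MiB starting point from this article (on appliances, use System > Sysctls instead):

    # Example /etc/sysctl.conf entries (vTM software installs only):
    # socket buffer sizes from step 1, 10485760 bytes (10 MiB) each.
    net.core.rmem_default = 10485760
    net.core.rmem_max = 10485760
    net.core.wmem_default = 10485760
    net.core.wmem_max = 10485760

After editing the file, run sysctl -p to load the values without a reboot; the relevant Virtual Server still needs to be stopped and started to pick them up.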
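
For item 2 above, a sketch of checking the current UDP flow hash fields and making the ethtool change persistent across reboots; <IFACE> is a placeholder for your interface name:

    # Show which fields are currently used to hash UDP/IPv4 flows on this interface
    ethtool -n <IFACE> rx-flow-hash udp4

    # Example /etc/rc.local entry so the setting is re-applied at boot
    # (as noted above, on appliances this file is not preserved over upgrades)
    ethtool -N <IFACE> rx-flow-hash udp4 sfdn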
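
For item 3 above, a minimal sketch for monitoring the UDP socket count against the 64k limit; the 60000 warning threshold is an arbitrary example:

    #!/bin/sh
    # Warn when the number of in-use UDP sockets approaches the 64k limit.
    # Parses the "UDP: inuse <n> ..." line of /proc/net/sockstat shown above.
    udp_inuse=$(awk '/^UDP:/ {print $3}' /proc/net/sockstat)
    if [ "$udp_inuse" -gt 60000 ]; then
        echo "WARNING: $udp_inuse UDP sockets in use (limit is around 64k)"
    fi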

GLB (OGS)

To configure GLB, use the Optimal Gateway Selection wizard.  For further information, please see the 'Using the Traffic Manager to Provide Optimal Gateway Selection' section of the User's Guide (available at https://www.pulsesecure.net/vadc-docs).

By default, only geo or active/passive load balancing algorithms are supported.  It is possible to configure more advanced load balancing that takes into account user count, CPU load and geographic location, but this requires custom scripting to set up; please engage Professional Services for this.

Extra performance tuning for GLB is unlikely to be needed, but in case it is, the above tunings may be appropriate for the UDP DNS traffic (TCP DNS traffic would require different tunings).
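
To verify that GLB is answering DNS queries for the managed domain, a quick check with dig can be used; the domain vpn.example.com and the vTM address 192.0.2.1 below are placeholders for your own values:

    # Query the vTM-managed GLB domain directly; it should return the IP address of a chosen PCS.
    dig @192.0.2.1 vpn.example.com +short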

Created By: Laurence Darby