Detecting SSH tunnels

SSH is an incredibly powerful protocol whose footprint needs to be monitored closely in enterprises. The most common uses of SSH are entirely legitimate: a remote terminal (ssh) or file transfer (scp, sftp). Many users also use the less well known port forwarding feature of SSH to create ‘tunnels’. SSH tunnels bore through firewalls and NATs, and are almost totally opaque to Network Security Monitoring tools like Trisul, Bro, Suricata, Snort, and others. SSHv2 even has SOCKS5 support – this allows anyone to set up a full SOCKS5 proxy outside your network and hide all HTTP activity from the prying eyes of NSM tools. With HTTPS/SSL, security tools can at least look at the unencrypted certificates and perform checks; with SSH, everything goes dark right after the initial capabilities exchange.

There are two types of SSH tunnels. The forward tunnel allows an insider to get to the outside, bypassing the NSM and Firewall/NAT sentries. The reverse tunnel allows an outsider to get to the inside. The reverse tunnel is also called an autossh tunnel, after the popular tool used to set up and maintain the connection.
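For concreteness, these are the kinds of commands involved (host names and ports here are illustrative, not from any specific incident):

    # forward tunnel: local port 8080 reaches internal-host:80 via outside-host
    ssh -L 8080:internal-host:80 user@outside-host

    # dynamic forwarding: a full SOCKS5 proxy on local port 1080
    ssh -D 1080 user@outside-host

    # reverse tunnel kept alive by autossh: outside-host:2222 reaches
    # back to this machine's sshd
    autossh -M 20000 -N -R 2222:localhost:22 user@outside-host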

Here is what a Forward SSH tunnel looks like


Forward SSH tunnel hides activity. SSHv2 -D allows a full SOCKS5 proxy outside your visibility zone

and here is a Reverse SSH, or autossh, tunnel


Reverse SSH tunnel allows someone to log on to an outside machine and pop up on the inside!

3 ways in which SSH tunnels create blind spots for monitoring

Here are the top three ways in which SSH tunnels create blind spots for NSM tools:

  1. Single flow: when you are monitoring at the perimeter, SSH tunnels show up as a single flow while they multiplex several SSH or SFTP channels, so you lose that flow level insight.
  2. Dynamic forwarding used as a proxy: using dynamic port forwarding, users can proxy web traffic through the encrypted tunnel. This pretty much defeats every policy control you have at the boundary.
  3. 24×7 access into your network: reverse SSH tunnels can give outsiders persistent presence deep inside your network. Autossh is a very capable tool that maintains the tunnel and keeps it from timing out.

It is quite obvious that any monitoring platform needs to help organizations stay right on top of SSH. Let’s see what can be done.

Tracking and detection of SSH tunnels

In any large organization SSH is going to be pervasive. Therefore any naive SSH flow monitoring is going to be dominated by legitimate business applications. Here are some techniques we use in Trisul, you should be able to adapt these to your own toolchain.

We create an SSH Flow Tracker to first get baseline visibility and then incrementally refine the tracking until we get a workable list of suspect flows. Flow Tracking is a streaming algorithm which plugs into the flows stream and continuously snapshots Top-K every minute. There are several built-in Top-K algorithms (volume, upload, download, duration, based on ports, IPs, subnets, netflow). You can even write a fully customized tracker using the flowtracker LUA script.
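Conceptually, a Top-K flow tracker does something like the following minimal sketch. This illustrates the streaming idea only, not Trisul’s internal implementation, and the function names are hypothetical.

    -- Illustration of the streaming Top-K idea (not Trisul internals):
    -- accumulate per-flow byte counts, snapshot the K largest every minute.
    local K = 10
    local window = {}                  -- flowkey -> bytes in the current minute

    function on_flow_update(flowkey, bytes)
      window[flowkey] = (window[flowkey] or 0) + bytes
    end

    function snapshot_topk()           -- call once a minute
      local flows = {}
      for key, bytes in pairs(window) do
        flows[#flows + 1] = { key = key, bytes = bytes }
      end
      table.sort(flows, function(a, b) return a.bytes > b.bytes end)
      local topk = {}
      for i = 1, math.min(K, #flows) do topk[i] = flows[i] end
      window = {}                      -- reset for the next minute
      return topk                      -- persist this snapshot
    end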


Flow tracker used to track long running SSH flows (sorted by duration)

When you first enter an unmapped network, you need to build a baseline statistical and flow profile first. Statistical visibility includes the actual bandwidth of SSH traffic, servers, clients, gateways, number of SSH flows, number of concurrent SSH flows, scanning activity, etc. Then you need to step down one level into flow visibility. Identify the normal SSH flows, the elephant flows, the long running but low traffic flows, flows that involve scanners, malicious hosts, etc.

Here we explain how to use a ‘flow tracker’ to build a baseline model, refine it in three stages, and end with a discussion of SSH tunnel detection.

  • Visibility stage 1 : Use an “SSH Flow Tracker” – track Top-K SSH flows by volume – this gives you a baseline idea of very large elephant SSH flows. Most of them will be normal business activity, and that is okay. If you don’t even have this insight, you are basically running blind.
  • Visibility stage 2 : Use an “SSH Duration Tracker” – track Top-K SSH flows by duration – this gives you a really good idea of very long lived streams. Once again many of them will be legitimate business activity. That is okay – we will refine them and create a whitelist. Malicious Reverse SSH tunnels will typically not transmit much data as they “lie in wait” for the attacker to log in. They will be kept alive only by ‘autossh’ keepalives. So right off the bat, if you see long lived flows with very low traffic you can mark them for investigation. You can also create trackers for SSH flows by upload if you want to dig deeper.

Once you have the baseline normal visibility, we need to feed the knowledge gained from (1) and (2) back into the realtime part of Trisul. To do this:

  • Visibility stage 3 : Use “SSH Trackers, but with a whitelist” – in addition to the two trackers above, you create yet another tracker using the flowtracker LUA API that tracks “All SSH flows by duration that aren’t whitelisted” (see the sketch after this list). It is not that difficult, because the whitelists can be automatically generated from (1) and (2). For example, you can identify host pairs or subnet pairs that constitute normal traffic.
  • Detection 1 : “Add detection” – In addition to the deep visibility above, if you can automatically detect SSH tunnels, that gives you immediate rewards.
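Here is a minimal sketch of the stage 3 whitelist idea, assuming a flowtracker-style LUA hook. The function name and the whitelist entries are illustrative; consult the flowtracker LUA API documentation for the exact plug-in points.

    -- Host pairs that stages (1) and (2) showed to be normal business
    -- traffic; entries here are illustrative.
    local WHITELIST = {
      ["10.1.2.10|10.9.9.9"]  = true,   -- e.g. nightly backup job
      ["10.1.4.21|10.20.0.5"] = true,   -- e.g. deployment server
    }

    -- hypothetical flowtracker hook: return true only for SSH flows
    -- that are NOT whitelisted, so only those are tracked by duration
    function should_track(src_ip, dst_ip)
      return not (WHITELIST[src_ip .. "|" .. dst_ip]
               or WHITELIST[dst_ip .. "|" .. src_ip])
    end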

In the next section, we show how you can plug in the detection part using a simple example.

Detecting SSH tunnels using traffic analysis

Recently I came across a great presentation by John B Althouse III [ link ] in which he used Bro (bro.org) to detect SSH tunnels carrying TTY (terminal) traffic using traffic analysis.

The key insight found by the author is: if you observe the packet lengths of keystrokes carried over SSH, you notice the following.

  1. when SSH directly transports TTY keystrokes,
    1. packet length = SSH header + 1 byte char code + padding + HMAC. This could be 36, 40, 48 bytes or so
  2. when SSH tunnels another SSH channel transporting TTY traffic
    1. packet length = SSH header + [previous SSH pkt] + HMAC. This could be 76, 84, 98 bytes and so on
    2. the exact lengths depend on the encryption block size and the HMAC algorithm + implementation of clients and servers.
    3. each keystroke is echoed back, so we can use that fact to tighten it up a bit

If you can rig an NSM tool to detect consecutive tunnel packet sizes of Server:76, Client:76, S:76, C:76, S:76, C:76 – this is almost certainly a person typing in an interactive terminal over an SSH tunnel and getting echoed back. There you have it.
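To make the rule concrete, here is a minimal LUA sketch of the run-detection logic. This is not the rev-ssh.lua script discussed below; the on_ssh_packet entry point and its arguments are hypothetical stand-ins for a real packet hook.

    -- Sketch: flag N consecutive same-sized packets alternating
    -- Server/Client within the tunneled-keystroke size band.
    local MIN_LEN, MAX_LEN = 70, 100    -- e.g. 76, 84, 98 byte keystrokes
    local RUN_NEEDED = 6                -- S,C,S,C,S,C

    local flows = {}                    -- per-flow state, keyed by flow id

    -- hypothetical hook: called for every packet on an SSH flow;
    -- dir is "S" (server to client) or "C" (client to server)
    function on_ssh_packet(flowid, dir, payload_len)
      local f = flows[flowid] or { run = 0 }
      flows[flowid] = f

      if payload_len >= MIN_LEN and payload_len <= MAX_LEN
         and dir ~= f.lastdir                            -- strict S/C alternation (echo)
         and payload_len == (f.lastlen or payload_len)   -- same size every packet
      then
        f.run = f.run + 1
      else
        f.run = 0
      end
      f.lastdir, f.lastlen = dir, payload_len

      if f.run >= RUN_NEEDED then
        f.run = 0
        return true    -- almost certainly keystrokes over a tunnel: tag and alert
      end
      return false
    end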

This is essentially a traffic analysis attack. OpenSSH does not implement the ‘random padding’ suggestion of RFC 4253 (The Secure Shell (SSH) Transport Layer Protocol), which says "the insertion of variable amounts of ‘random’ padding may help thwart traffic analysis". Until that changes, we can use this technique to detect tunnels.


Alerts generated by Trisul when the script triggers

Let’s see how you can implement this analysis using the Trisul LUA API.

rev-ssh.lua

You can think of Trisul roughly as Bro but with LUA and an emphasis on traffic metering and streaming analytics. The LUA API lets you hook into TCP reassembly, HTTP file extraction, and a number of other points.

For more details about the script please visit Github: rev-ssh.lua

4. Alerting and tagging

We don’t want a flow to keep firing alerts while the user continues to type, so we use a dampening interval of 5 minutes.


  -- dampen: fire at most one alert per flow every 5 minutes (300 seconds)
  if timestamp - sshF.lastalertts > 300 then

    -- tag flow
    engine:tag_flow(flowkey:id(),"REVSSH");

    -- alert
    engine:add_alert("{B5F1DECB-51D5-4395-B71B-6FA730B772D9}", flowkey:id(),"REVSSH",1,"rev ssh detected by keypress detect method");

    sshF.lastalertts=timestamp
  end

Flow tagging allows you to generate a ‘tag stream’ that is merged with the main “flows stream” and persisted at the end of a time window. Flow tags are nothing but text labels attached to flows. You can then search for flows by tag value.


Tags are arbitrary labels attached to flows. Showing a search for tag=REVSSH in the Explore Flows tool

Conclusion

We think traffic and flow visibility is a very important part of Network Security Monitoring. As encryption spreads throughout the network, we are going to need more and more statistical and flow based approaches to gain insights into the network. You also need a real time API so you can plug in knowledge learnt from earlier “hunting expeditions”.

Getting started

If you want to get started on the Trisul platform and play with the scripting API, all you have to do is

  1. install Trisul using apt-get or yum (no signups or emails asked, just apt-get/yum)
  2. it runs free for the most recent 3-day window
  3. the LUA API is fully enabled
  4. install the LUA script into the directory mentioned in Installing LUA scripts
  5. now you can capture live traffic or process PCAP dumps

Free Download Trisul 6.0! Ready to go packages for Ubuntu and CentOS.

New Netflow based analytics in Trisul released

We are pleased to announce that a fresh build of Trisul Network Analytics 6.0 is now available for download.

In today’s network environments, a full-on, lossless flow metering system is a must. Even if you have advanced packet based NSM (Network Security Monitoring) platforms, Netflow provides the widest baseline visibility and is easy to deploy.

We’ve got two new tools in the latest release of Trisul that will delight users of Netflow based analytics.

Routers and Interfaces Control Panel

Most of our Netflow users demand good support for hierarchical router and interface type drilldowns in addition to global visibility. We introduce a brand new “Router and Interface Panel” that lets you locate and drill down into interfaces.

  • A Magic Map viz tool to help you quickly select the busiest routers and interfaces. See the screenshot above.
  • Automatic discovery of routers and interfaces from Netflow traffic
  • Click on a router to drill down into its interfaces
  • Click on an interface to analyze usage (total traffic, hosts, apps, flows)
  • Interface tracking (see below) gives you accurate breakups of interface traffic with no loss
  • Integrated with SNMP to automatically discover all names and aliases
  • Get email alerts when interface usage exceeds limits; include hosts, apps, conversations, flows in the email

Interface Tracking

Tools that provide drilldowns based on Top-K flows suffer from a big flaw when working with large networks: the results just aren’t very accurate. We introduce a new streaming analytics tool called “Interface Tracker” that you can enable on very busy interfaces. This results in 100% accuracy when drilling down into Top-K type analysis on interfaces.

Other Netflow related features

  1. New Netflow Configuration Wizard helps you configure Trisul for Netflow in minutes
  2. Brand new Email alerts module with comprehensive context to every alert
  3. A new flowdirector module that allows you to process Netflow and SFlow on the same UDP Port

The latest Trisul 6.0 builds also include a huge number of packet analytics and platform based improvements. We will be introducing them to you shortly.

Free Download Trisul 6.0! Ready to go packages for Ubuntu and CentOS.

Unix socket madness with Trisul and IDS alerts

One of the things Trisul can do is merge rich traffic analytics data with traditional IDS alerts from systems like Snort and Suricata. Up until Trisul 5.5 you could connect Trisul to a single Unix socket to which Snort or Barnyard2 would write alerts. Once alerts were inside Trisul they would be merged with other types of NSM data in real time. You could then ask queries like “Show me the list of flows that produced a HIGH priority alert”.

Why Unix Sockets?

Unix sockets are old school, yet they are used in a surprisingly large number of systems even today for IPC (Interprocess Communication). There are basically two ways you can read alerts from a logging system like Suricata.

  1. Traditional log tailing – observe an output log file and process any new alerts.
  2. Network messaging – just read messages off a network feed. This can be Unix sockets, TCP/UDP sockets, or higher abstractions like message queues.

The advantages of Unix sockets, as we see them, are:

  1. Secure – purely local, unlike TCP/UDP sockets
  2. Message based – Unix sockets that use SOCK_DGRAM preserve message boundaries. You can simply read the messages as sent
  3. chmod – can be secured using traditional Linux chmod style permissions. For network sockets you need to use ACLs or other mechanisms like CURVE or TLS
  4. No waldo – log tailing requires the so called waldo file to remember where to resume reading the log when the system restarts. Sockets need no such bookkeeping

So there! We like Unix sockets and try to use them even for shiny new output formats like Suricata’s EVE JSON output format.

Let us now see some code that shows you how you can use Trisul’s new LuaJIT API to interface with Unix Sockets.

Trisul 6.0 Platform

Trisul 6.0 is now positioned as a platform that leverages the power of LuaJIT to provide plug-in points into our processing pipeline. Now you can write an ‘inputfilter’ LUA script that listens to ANY type of feed and turns it into alerts.

New scripts on github

We just released four scripts on Github Trisul-Scripts

  1. suricata_eve.lua (github) – EVE (Extensible Event Format) is a modern output format that uses JSON. If you haven’t used Suricata’s EVE output format you really should check it out. A sample record appears after this list.
  2. suricata_eve_unixsocket.lua – Using a bit of LuaJIT FFI read EVE from a Unix Socket
  3. snort_unixsocket.lua – Traditional “-A unsock” output format from snort
  4. barnyard2_unixsocket.lua – Unified2 binary format from Barnyard2.
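For reference, a single EVE alert record looks roughly like this. This is an illustrative, trimmed example; the exact fields vary by Suricata version and configuration.

    {
      "timestamp": "2016-01-22T14:29:33.100292+0530",
      "event_type": "alert",
      "src_ip": "192.168.1.5", "src_port": 55334,
      "dest_ip": "203.0.113.9", "dest_port": 80,
      "proto": "TCP",
      "alert": {
        "action": "allowed", "gid": 1,
        "signature_id": 2019401, "rev": 3,
        "signature": "ET POLICY example signature",
        "category": "Potential Corporate Privacy Violation",
        "severity": 2
      }
    }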

The input filter framework

Input filters are LUA scripts you write that can drive Trisul. You can write custom PCAP readers, alert monitors, or custom flow-like records and use those as input. In the illustration below you can see how multiple scripts can listen on different inputs. The Trisul platform takes care of the orchestration, threading, preventing starvation, etc.

Using Unix Sockets. LuaJIT FFI to the rescue

Sometimes LuaJIT feels like cheating because you can drop down to standard C at any time, call Linux APIs, and use C structures. If you are already familiar with LuaJIT FFI all of this will be familiar to you. Others can read the LUA code to understand how this C-LuaJIT interface works. Once you have set up the LuaJIT “cdefs” and read the JSON message from the socket, all you have to do is map the EVE message to the Trisul alert format.
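To show the shape of the FFI part, here is a minimal self-contained sketch that binds a SOCK_DGRAM Unix socket and reads datagrams. The cdefs mirror the standard Linux socket calls; the socket path is an assumption and should match your Suricata EVE output configuration (filetype: unix_dgram). The real script on Github remains the reference.

    -- Sketch: bind a SOCK_DGRAM Unix socket and read EVE records.
    local ffi = require("ffi")

    ffi.cdef[[
      typedef unsigned short sa_family_t;
      struct sockaddr_un { sa_family_t sun_family; char sun_path[108]; };
      int  socket(int domain, int type, int protocol);
      int  bind(int fd, const struct sockaddr_un *addr, unsigned int len);
      long recv(int fd, void *buf, size_t len, int flags);
      int  unlink(const char *path);
    ]]

    local AF_UNIX, SOCK_DGRAM = 1, 2
    local SOCK_PATH = "/var/run/suricata/eve.sock"   -- assumption: match suricata.yaml

    local fd = ffi.C.socket(AF_UNIX, SOCK_DGRAM, 0)
    assert(fd >= 0, "socket() failed")

    local addr = ffi.new("struct sockaddr_un")
    addr.sun_family = AF_UNIX
    ffi.copy(addr.sun_path, SOCK_PATH)
    ffi.C.unlink(SOCK_PATH)                          -- clear a stale socket file
    assert(ffi.C.bind(fd, addr, ffi.sizeof(addr)) == 0, "bind() failed")

    local buf = ffi.new("char[?]", 65536)
    while true do
      -- SOCK_DGRAM preserves boundaries: one recv() == one EVE JSON record
      local n = ffi.C.recv(fd, buf, 65536, 0)
      if n > 0 then
        local msg = ffi.string(buf, n)
        -- decode msg as JSON and map it to a Trisul alert (see below)
      end
    end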

Let’s take a look at how we map the EVE format.

From eve_unixsocket.lua

The real action happens in the step_alert function. You need to map the EVE types to a LUA table that Trisul understands, as per the inputfilter API.



  -- setup unixsocket using FFI
  -- read JSON EVE
  -- filter out non 'alert' types
  -- map to input filter
  local ret =  {
      AlertGroupGUID='{9AFD8C08-07EB-47E0-BF05-28B4A7AE8DC9}',     -- Trisul alert group = External IDS 
      TimestampSecs = tv_sec,                                      -- Epoch based time stamps
      TimestampUsecs = tv_usec,
      SigIDKey = p.alert["signature_id"],                          -- SigIDKey is mandatory 
      SigIDLabel = p.alert["signature"],                           -- User Label for the above SigIDKey 
      SourceIP = p["src_ip"],                                      -- IP and Port pretty direct mappings
      SourcePort = p["src_port"],
      DestIP = p["dest_ip"],
      DestPort = p["dest_port"],
      Protocol = protocol_num(p["proto"]),                         -- convert TCP to 6 
      SigRev = p.alert["rev"],
      Priority = p.alert["severity"],
      ClassificationKey = p.alert["category"],
      AlertStatus=p.alert["action"],                                -- allowed/blocked like ALARM/CLEAR
      AlertDetails=p.alert["signature"]                             -- why waste a text field 'AlertDetails'?
  };
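The snippet above assumes a small helper, protocol_num, that converts EVE’s textual protocol name into an IP protocol number. A minimal version might look like this (an illustrative sketch; the real script on Github may differ):

  -- Illustrative helper: map EVE protocol names to IP protocol numbers
  local PROTO_NUMS = { TCP = 6, UDP = 17, ICMP = 1, ICMPV6 = 58, SCTP = 132 }

  function protocol_num(name)
    return PROTO_NUMS[string.upper(name)] or 0
  end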


Most of the fields are straight one-to-one mappings. I found the AlertStatus and EVE alert.action fields quite interesting. Apparently Suricata uses that field in IPS mode to indicate whether the event resulted in a BLOCK or ALLOW. By mapping it into the Trisul AlertStatus you get a neat integration that results in something like below.

Notice the ‘allowed’ tag.

Explore the API

Over the next few days we will be writing about the other new LuaJIT APIs in Trisul. One such API is the File Extraction API.

Install Trisul now, no sign up needed

Installing Trisul Network Analytics 6.0 is painless. There are no sign ups. It is as simple as apt-get install or yum install. Head over to the Downloads area to get started. The APIs are included as part of the base package.

Until next time.

Free Download Trisul 6.0 !

Trisul 5.5 update fixes Netflow v9 and SFlow issues

New builds of Trisul Network Analytics 5.5 are now available.

Summary

These builds fix the following issues.

  • Netflow v9, when used with a large flow-cache timeout, did not report the correct duration of flows
  • When you enable Netflow v9 ingress and egress on the same interface (ip flow ingress and ip flow egress) and have the MergeMultipleSource option enabled in the Netflow Configuration File, you may notice that only the egress traffic is counted and the ingress is ignored
  • When Trisul processes SFlow from multiple IPs on the same device, the traffic may be incorrectly counted

If you are using Trisul in Netflow v9 (Ingress+Egress) mode or are using SFlow, you are encouraged to update immediately. Free downloads are available for Ubuntu 14.04 LTS and CentOS 7.

Big new Release 6.0 coming

A quick heads up: we have a big new release, Trisul 6.0, coming up in a couple of weeks. In 6.0 you can use Trisul in a distributed mode, write your own application logic using LUA, and even roll out a probe + cloud deployment mode.

Free Download Trisul 6.0
Trisul Network Analytics 6.0 is now available – Enjoy!