Friday, May 25, 2012

CrackerSpotting


Recently one of our machines got hacked ... cracked. I was at the dentist settling my dues when my colleague in Boston called and mentioned that he had spotted a cracker in our network.

I logged into the affected machine, which was running an age-old 2.4.x Linux kernel on some ancient Red Hat distribution, and found a ton of scanssh processes. The machine wasn't being used for anything critical since we had moved all our stuff to GitHub, but I still had to back up a couple of things before throwing it all away and reinstalling a newer distribution, which happened to be Ubuntu Server 11.10.

Before re-installing the newer distro, I ran readlink /proc/`pidof scanssh`/cwd to get the current working directory of the scanssh processes and quickly backed up all the helper programs used by the cracker/bot.
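For what it's worth, the backup boiled down to something like the following sketch; pidof -s picks a single PID in case several scanssh processes are around, and the tarball path is arbitrary:

cwd=$(readlink "/proc/$(pidof -s scanssh)/cwd")
tar czf /root/cracker-kit.tar.gz "$cwd"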

It turned out to be a brute-force ssh scanner that was using a passfile, a list of user names and passwords, to try and ssh into accessible machines on the network. Once cracked, the machine itself became a bot for further brute-force ssh attacks. I spotted some old (but common) employee names in that passfile, which I presume served as the backdoor into the machine, since the password could well have been the username itself (considering the machine was mostly storing trash).

Since we are just an amazingly thin company now (can't reveal our strength as that's a trade secret :), I had to stop whatever I was doing and focus on some sysadmin tasks to enable only ssh and ftp access for our customers, for the versions of our software that hadn't moved to GitHub. A pretty simple requirement; I just needed to avoid the reinstallation-cycle overhead caused by such attacks, since we don't have anyone employed who could waste time fighting bots.

I did just a couple of things to reduce the intensity of future attacks and to keep the crackers/bots isolated in a VirtualBox guest VM that is NAT-ted for ssh and ftp access through the host.

The guest VM runs Ubuntu Server 11.04. After installation, I moved ssh on the host machine to a different port to free up port 22 for the guest VM. Then I set up port forwarding so that ssh and ftp traffic aimed at the external host is forwarded to the guest VM.
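Moving the host's sshd out of the way is a one-line change followed by a restart (2200 is just a placeholder here; any free port will do):

# in /etc/ssh/sshd_config on the host
Port 2200

# then restart sshd (Ubuntu)
service ssh restart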

VirtualBox supports port forwarding from the host to NAT'ed guests, as documented in its manual. These two commands accomplished that:

vboxmanage modifyvm VM_Name --natpf1 guestssh,tcp,,1322,,22
vboxmanage modifyvm VM_Name --natpf1 guestftp,tcp,,1321,,21

The above commands forward ssh and ftp requests arriving on host ports 1322 and 1321 to ports 22 and 21 respectively on the guest VM_Name (the natpf1 refers to NIC 1 of the guest).

I had to use port numbers above 1024 since VirtualBox on the host was running as a non-root user, and NAT-ing was failing when trying to set up forwarding on host ports 22 and 21.
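To double-check that the rules registered, showvminfo lists the NAT forwarding rules per NIC:

vboxmanage showvminfo VM_Name | grep -i rule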

But I still needed the external world to directly ssh/ftp into the host and get forwarded into the guest VM without specifying arcane port numbers as arguments to ssh and ftp.

All I needed was to forward incoming ssh and ftp traffic on the host to ports 1322 and 1321 respectively; from there, VirtualBox NAT-ing would forward it to the guest as configured.

iptables made this easy. The following rules let the outside world ssh/ftp straight into the guest by redirecting incoming traffic on the host's ports 22 and 21 to 1322 and 1321:

iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 1322
iptables -t nat -A PREROUTING -p tcp --dport 21 -j REDIRECT --to-port 1321

Since ssh on the host machine was using a different port, I just had to set up forwarding to enable localhost ssh from the host machine into the guest VM.

iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 22 -j REDIRECT --to-port 1322
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 21 -j REDIRECT --to-port 1321

With the above, an ssh user@localhost from the host machine lands directly in the NAT-ted guest (the -d 127.0.0.1 match keeps outbound ssh from the host to other machines untouched).
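One caveat: iptables rules set this way don't survive a reboot. Saving them once and restoring at boot (from /etc/rc.local, say; the file path here is just a convention) keeps the redirection in place:

iptables-save > /etc/iptables.rules
iptables-restore < /etc/iptables.rules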

Also, for ftp to be NAT-ed through the host, I had to enable a specific range of ports (above 1024) for passive ftp in the proftpd server and have those ports forwarded through the host as well.
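I won't reproduce the exact configuration, but the idea looks roughly like this: pin proftpd to a small passive range and forward each port of that range through the VirtualBox NAT. The 50000-50010 range and the pasv rule names below are just an illustration; the real range is whatever proftpd ends up configured with:

# in /etc/proftpd/proftpd.conf on the guest
PassivePorts 50000 50010

# on the host (with the VM powered off): one natpf rule per passive port
for p in $(seq 50000 50010); do
    vboxmanage modifyvm VM_Name --natpf1 "pasv$p,tcp,,$p,,$p"
done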

Now that the only two needed services are hosted inside the VirtualBox VM, life is easier: if we are hit again by the same kind of attack, I just need to check /var/log/auth.log for the affected user and recover from a snapshot. It's also easy to move the VM to another machine, etc. And now the configured user list on the machine is small, and the passwords aren't easy to crack with brute-force scanssh methods.
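Recovering from a snapshot presumes one was taken up front; with the guest in a known-good state, that's a single command (CrackerSnapshot being the name the recovery script further below restores):

vboxmanage snapshot VM_Name take CrackerSnapshot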

I was still seeing failed-password entries in /var/log/auth.log every 5 seconds, hinting at brute-force scanssh attacks in progress. Once I switched ssh to a different port on the host, those entries stopped, but it may just be a matter of time before the ssh scanner locates the new port and tries again.

Since I have one of the passfiles used for the last successful brute-force attack on the old machine, I could set up a trap: create a user that exists in the passfile and run a who-based monitor that checks for the dummy user logging in, then performs a VM restore that should frustrate the cracker or bot. After all, this machine is only needed occasionally and allows for some foreplay with the cracker.
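Creating the trap user is the easy part; something along these lines, where victim stands in for one of the real names from the passfile:

useradd -m -s /bin/bash victim
echo 'victim:victim' | chpasswd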

Something like this monitor, run every minute on the guest, checks the currently logged-in users against the users in the passfile and then runs a script on the host machine that powers off the VM before recovering back to a working snapshot:


#!/usr/bin/env python
# Watch for logins by user names found in the cracker's passfile and
# trigger a snapshot restore on the VM host when one shows up.
import os

# Names of the currently logged-in users (first column of `who`)
userlist = os.popen("who | awk '{print $1}'").readlines()
userlist = [u.strip() for u in userlist]

# User names from the captured passfile (one user:password entry per line)
passfile_list = open("passfile", "r").readlines()
passfile_list = [line.split(":")[0].strip() for line in passfile_list]

for u in userlist:
    if u in passfile_list:
        print 'Vulnerable user [%s] has logged in. Take action by calling out to the VM host for recovery' % u
        os.system("ssh user@host ./vm_recovery")
        break  # the VM is about to be powered off anyway
    else:
        print 'user logged in [%s]' % u
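
To run it every minute on the guest, a crontab entry along these lines would do (assuming the script is saved executable as /usr/local/bin/who_monitor.py; the guest also needs a passwordless ssh key authorized on the host for the unattended ssh user@host call to work):

* * * * * /usr/local/bin/who_monitor.py >> /var/log/who_monitor.log 2>&1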



The vm_recovery script called on the host from the guest could be as simple as:

vboxmanage controlvm VM_Name poweroff
vboxmanage snapshot VM_Name restore CrackerSnapshot
vboxmanage startvm VM_Name --type headless

And we should be back to playing games of frustration and agony with the ssh bot, using the victim user in the passfile to our advantage, with little or no damage.