Thursday, October 31, 2013

How I back up my server configs with lsyncd

In the event that something happens to my server, or one of the services running on my machine, I like to have a backup of my config for that particular service. Ain't nothing worse than spending hours or even days properly configuring a service, only to have a hard drive crash, the operating system get corrupted, an update replace your config file with the default, or to accidentally delete the config or botch an edit to it. This post shows how I use lsyncd to automatically sync the /etc directory on my server with a remote machine, and how I use logrotate to create and rotate two backup copies of that directory.

The machines involved are main_server and backup_server.

On the main_server:

# apt-get install lsyncd

Lsyncd will use ssh as its network transport, so we need to create ssh keys for unattended remote login. The lsyncd service runs as root, so you will need to log in to the root account to create your ssh keys. Since the login must be unattended, leave the passphrase empty when prompted.

# ssh-keygen -t rsa

Copy your public key to the backup_server to allow passwordless login with ssh keys.

# ssh-copy-id backup_server

Try logging in to the backup_server to confirm that you can authenticate without being prompted for a password. If you run into issues, check the /etc/ssh/sshd_config file on the backup_server and ensure that public key authentication is enabled (PubkeyAuthentication yes).

# ssh backup_server

Now we can create and configure lsyncd on the main_server. A config file is not created by default (on Ubuntu and Debian) so we need to create one.

# touch /etc/lsyncd/lsyncd.conf.lua

Now a simple config like this will suffice.

settings = {
        logfile = "/var/log/lsyncd.log",
        statusFile = "/var/log/lsyncd.stat",
        statusIntervall = 1,
}

sync{
        default.rsyncssh,
        source = "/etc/",
        host = "backup_server",
        targetdir = "/root/backups/main_server/etc/",
}

Now we can start our service.

# service lsyncd start

Hopefully everything went well and no errors occurred. Note that the directory /root/backups/main_server/etc on the backup_server needs to exist before the lsyncd service is started. Now you can log in to the backup_server and confirm that files and directories are being synced.
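For example, you can create the target directory from the main_server over the key-based login we set up earlier:

# ssh backup_server mkdir -p /root/backups/main_server/etc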

At this point, you can create a logrotate config in /etc/logrotate.d/ on the backup_server. Since logrotate runs via a daily cron job, our config will get executed daily. What our config does is create a gzip'd tar archive of the synced /root/backups/main_server directory and keep 2 copies, rotated daily. The config will look like this.

# touch /etc/logrotate.d/main_server
# nano /etc/logrotate.d/main_server

/var/backups/main_server.tar.gz {
        rotate 2
        daily
        create 644 root root
        prerotate
                rm /var/backups/main_server.tar.gz
                tar -P -zcf /var/backups/main_server.tar.gz /root/backups/main_server
        endscript
}

You will want to create a dummy main_server.tar.gz file to satisfy the first run of the prerotate script, since logrotate complains when a file listed in its config does not exist.

# touch /var/backups/main_server.tar.gz
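Before the nightly cron job fires, you can sanity check the new config; logrotate's -d flag runs it in debug mode without making any actual changes:

# logrotate -d /etc/logrotate.d/main_server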

We're all done. To recap, lsyncd syncs the /etc folder in near real time (a few seconds of delay) from the main_server to the backup_server. And on the backup_server, a copy of the synced folder is archived daily with logrotate, maintaining 2 rotated copies.
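If you ever need to pull files back out of a backup, here is a minimal sketch using the paths from above. First list the contents of the archive:

# tar -tzf /var/backups/main_server.tar.gz

Then extract it; the -P flag is needed on extraction as well, because the archive was created with absolute path names:

# tar -P -xzf /var/backups/main_server.tar.gz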

Resources / Good Reading:

1. https://www.digitalocean.com/
2. https://github.com/axkibe/lsyncd/

Friday, October 25, 2013

Reviewing boot logs on Debian

Recently, I installed a Debian system as a server from the netinst CD. From the Debian website:

A network install or netinst CD is a single CD which enables you to install the entire operating system. This single CD contains just the minimal amount of software to start the installation and fetch the remaining packages over the Internet.

The install was straightforward and I only opted to install the standard utilities and nothing else. Everything installed successfully and the system booted without a single problem.

It is good practice to familiarize yourself with what you see on the console when your system boots up, so that in the event something goes wrong and services aren't being started at boot, you will already know what your system usually does. Typically, messages indicate that a service started with "OK", that something has "Failed", or give a "Warning", for example: Starting periodic scheduler: Cron. The issue you will encounter is that these console messages scroll by very quickly, and you usually have only about a second to recognize something useful.

By default, my new Debian system did not log these boot messages anywhere. It turned out that bootlogd, the daemon responsible for logging these messages to /var/log/boot, was not installed by default. Running the following was all that was needed:

# aptitude install bootlogd

Once you reboot your system, boot messages will be logged to /var/log/boot.

You can simply cat the log file to view its contents, but there is an issue: the ANSI color escape sequences are not interpreted, so you end up with raw color codes in the output. These colored strings are what you see at boot, like "failed", "ok", "warn", etc. To fix this, you can view the log with the following command.

# sed 's/\^\[/\o33/g;s/\[1G\[/\[27G\[/' /var/log/boot | more

You can set up an alias for this in your .bashrc file to make life easier.
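For example, a minimal alias for your .bashrc, wrapping the exact command above (the name bootlog is just an arbitrary choice):

alias bootlog="sed 's/\^\[/\o33/g;s/\[1G\[/\[27G\[/' /var/log/boot | more"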

Thursday, June 6, 2013

Authenticating an Stunnel server to an Stunnel client

Stunnel can take care of our network encryption needs for any TCP connection; I blogged about setting it up here. Stunnel makes use of SSL certificates, which enable us to achieve three main things: 1. authentication, 2. confidentiality, and 3. integrity. If you read my first post about stunnel, you may or may not have realized that I did not make use of the "authentication" feature of SSL. In this post, I aim to achieve that.

Essentially, the client will be able to connect to the server and verify that it is indeed connected to the correct server. By default, the client will communicate with any stunnel server and accept the server's certificate without taking extra steps to verify it. What this means is that a malicious person can potentially set up a rogue stunnel server with their own certificate and pose as a legitimate server. Because your client accepts everything blindly, it will assume it is connected to the right server, when in fact it might not be. Let's fix that.

Step 1. Generate private key, certificate and PEM file.

# openssl genrsa -out server.key 2048
# openssl req -new -x509 -nodes -days 365 -key server.key -out server.crt
# cat server.key server.crt > server.pem
# chmod 640 server.pem
# chmod 640 server.key

Step 2. Run the stunnel server. Here it accepts connections on port 443 and forwards the decrypted traffic to a local service listening on 127.0.0.1:80.

# stunnel -f -d 443 -r 127.0.0.1:80 -p server.pem -s nobody -g nogroup -P /tmp/stunnel.pid

Step 3. Copy ONLY the server.crt file to clients that will be connecting to the stunnel server. The server.pem and server.key files MUST be kept secret at all times.

Step 4. Run the client with the extra option -v 3 to enable certificate verification against the locally installed cert.

# stunnel -c -f -d 443 -r remote.server.com:443 -v 3 -A server.crt

That's it.

If a malicious person tries to impersonate your server, it will fail to pass these checks:

1. When the initial connection between client and server is made, the server sends its certificate to the client. The client will only accept this certificate if it is within its whitelist of certs (the -A option); a quick way to inspect that cert is shown below.
2. If for some reason the malicious person was able to have the server send the correct cert, the client would challenge the server to prove that it has the matching private key for the cert. Because the malicious user does not have the private key and is unable to prove that it does, the client will terminate the session.
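If you want to double-check which certificate the client is whitelisting, openssl can print its fingerprint (the file name matches the server.crt generated in step 1):

# openssl x509 -in server.crt -noout -fingerprint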

Here is a screenshot of what things look like on the client under normal working conditions.



Here is a screenshot of what will happen on the client if a malicious server is used to impersonate a legitimate one.



With stunnel set up this way, we achieve the three main goals of SSL certificates:

1. Authentication. The server authenticates itself to the client before communication is established.
2. Confidentiality. The session is encrypted using symmetric cryptography.
3. Integrity. A negotiated hashing algorithm ensures that the encrypted data has not been tampered with.

Tuesday, February 26, 2013

SSH tunneling made easy with sshuttle.

It amazes me how many things you can do with ssh. They say netcat is the swiss army knife of networking tools, but ssh would be the swiss army knife of networking protocols. We know ssh is mainly used for remote administration via a terminal or command prompt, but it can also be used for other things, like tunneling TCP connections. If you are familiar with this process, then you know how neat it can be, but configuring each port to be tunneled can get ugly. However, there is a cleaner and simpler way to tunnel your TCP connections over ssh using sshuttle. What you ultimately get is a poor man's VPN. Once sshuttle is running, all your traffic will be proxied through an ssh connection (DNS traffic can also be proxied if told to do so).

Here is an example of how I use sshuttle. The --dns flag proxies DNS queries over the tunnel, -r specifies the remote user and host, and 0/0 is shorthand for 0.0.0.0/0, i.e. route all IPv4 traffic through the tunnel.

# ./sshuttle --dns -r username@remoteip.com 0/0
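If you don't want to route everything, sshuttle also accepts specific subnets as arguments; for example (192.168.10.0/24 is just a placeholder for whatever remote network you care about):

# ./sshuttle -r username@remoteip.com 192.168.10.0/24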

Resources / Good Reading:

sshuttle github