Terminate a running CentOS program using xkill


When an application becomes unresponsive in CentOS, sometimes the only option is to terminate it. xkill is a neat trick for closing an application with minimal command-line use.

Start by opening your terminal. Then type the following command:

[sixthpoint@new-host ~]$ xkill

Now click on the window you want to terminate. xkill will automatically identify the client that created the window and terminate the process.

Output after click:

xkill: killing creator of resource 0x4a00034
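Under the hood this amounts to finding the window's owning process and signaling it. A minimal sketch of the same idea from the shell, assuming you already have the PID (for example from `xprop _NET_WM_PID`); `terminate` is a hypothetical helper name, not part of xkill:

```shell
#!/bin/sh
# Hypothetical helper: ask a process to exit with SIGTERM, then force it
# with SIGKILL if it is still alive after a short grace period.
terminate() {
  pid="$1"
  kill -TERM "$pid" 2>/dev/null
  sleep 1
  if kill -0 "$pid" 2>/dev/null; then
    kill -KILL "$pid" 2>/dev/null
  fi
}

# Example: terminate a stuck background job
sleep 300 &
terminate "$!"
```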


Backup filesystem to Amazon S3


Every server needs to be backed up periodically. The trouble is finding an affordable place to store your filesystem if it contains large amounts of data. Amazon S3 is a solution, with reasonably priced standard storage ($0.0300 per GB) as well as reduced redundancy storage ($0.0240 per GB) at the time of writing. Updated pricing can be seen at http://aws.amazon.com/s3/pricing/.

This short tutorial will show how to back up a server's filesystem using s3cmd, a command-line tool for uploading, retrieving, and managing data in Amazon S3. A cronjob will automate the backup process, syncing the filesystem nightly.

How to install s3cmd?

This example assumes you are using CentOS or RHEL. The s3cmd package is available via the EPEL repository.

yum install s3cmd

After installation the library will be ready to configure.

Configuring s3cmd

An Access Key and Secret Key are required from your AWS account. These credentials can be found on the IAM page.

Start by logging in to AWS and navigating to the Identity & Access Management (IAM) service. Here you will first create a new user. I have excluded my username below.

Next create a group. This group grants its members access to your S3 buckets. Notice under permissions the group has been granted “AmazonS3FullAccess”, which means any user in this group can modify any S3 bucket. To give your new user access, click “Add Users to Group” and select the user from the list.

For s3cmd to connect to AWS it requires a set of user security credentials. Generate an access key for the new user by navigating back to the user details page. Look to the bottom of the page for the “Security Credentials” tab. Under Access Key, click “Create Access Key”. This generates an Access Key ID and Secret Access Key, both of which are required for configuring s3cmd.

You now have a user set up with permissions to access the S3 API. Back on your server, input the new access key into s3cmd. To begin configuration, type:

s3cmd --configure

You should now see the following prompts and be able to enter your Access Key ID and Secret Key.

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: xxxxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: xxxxxxxxxx
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes

New settings:
  Access Key: xxxxxxxxxxxxxxxxxxxxxx
  Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  Encryption password: xxxxxxxxxx
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

At this point s3cmd is fully configured and ready to push data to S3. The final step is to create your own S3 bucket. This bucket will serve as the storage location for our filesystem.

Setting up your first S3 bucket

Navigate to the AWS S3 service and create a new bucket. You can give the bucket any name you want and pick the region for the data to be stored. This bucket name will be used in the s3cmd command.

Each file pushed to S3 is given a storage category of standard or reduced redundancy storage. This is configurable when syncing files. For the purpose of this tutorial all files will be stored in reduced redundancy storage.

Standard vs Reduced Redundancy Storage

The primary difference between the two options is durability. Standard storage is designed for 99.999999999% object durability, whereas reduced redundancy storage (RRS) is designed for 99.99% durability, meaning objects stored in RRS are more likely to be lost and should be reproducible from elsewhere. For the use case of this tutorial all files are stored in RRS. As noted previously, RRS is considerably cheaper than standard storage.

Configuring a simple cronjob

To enter the cronjob editor simply type

crontab -e

Once in the editor, create the cronjob below, which will run Monday through Friday at 3:30 a.m.

30      3       *       *       1-5     /usr/bin/s3cmd sync -rv --config /root/.s3cfg --delete-removed --reduced-redundancy /PATH/TO/FILESYSTEM/LOCATION/ s3://MYBUCKET/ >/dev/null 2>&1

This cronjob calls the s3cmd sync command and loads the default configuration you entered above. The --delete-removed option tells s3cmd to scan for locally deleted files and remove them from the remote S3 bucket as well. The --reduced-redundancy option places all files in RRS for cost savings. Any folder location can be synced; just change the path to your desired location, and make sure to change MYBUCKET to the name of your S3 bucket.
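If the one-liner gets unwieldy, the same invocation can be wrapped in a small script and the crontab pointed at that script instead. This is a sketch: `build_sync_cmd` is a hypothetical helper, and the source path and bucket name are the placeholders from the cron entry above.

```shell
#!/bin/sh
# Placeholders -- substitute your own filesystem path and bucket name.
SRC="/PATH/TO/FILESYSTEM/LOCATION/"
BUCKET="s3://MYBUCKET/"

# Assemble the same s3cmd sync invocation used in the crontab entry.
build_sync_cmd() {
  echo "/usr/bin/s3cmd sync -rv --config /root/.s3cfg --delete-removed --reduced-redundancy $1 $2"
}

# Print the command for inspection; run it via sh (or call s3cmd directly)
# once the paths look right.
build_sync_cmd "$SRC" "$BUCKET"
```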

The server is now configured to do nightly backups of the filesystem to AWS S3 using the s3cmd library. Enjoy!

Using tmpwatch to free resources


Tmpwatch is a utility that recursively removes files that haven't been accessed for a given period of time. On CentOS it comes standard. If it is not enabled to run periodically, the /tmp folder will grow until either the server is restarted or it hits its disk resource limit. If the /tmp folder becomes too large, programs that rely on temporary files will fail.

Ex: An Apache webserver runs a PHP script which logs information for later reference. The log files become unwritable due to a lack of disk space.

It will appear to be a read/write permissions error. However, running the following tmpwatch command will free up space by deleting all files older than 12 hours.

tmpwatch 12 /tmp

Note: Never delete all files in the /tmp folder, as some may be in use for semaphore locking by various applications (e.g. mysql).
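If tmpwatch isn't available, the same effect can be approximated with GNU find's access-time test. A sketch, where `clean_old_files` is a hypothetical helper and the scratch directory in the example is an assumption:

```shell
#!/bin/sh
# Delete regular files under a directory whose last access time is older
# than a given number of minutes (720 minutes = 12 hours).
clean_old_files() {
  dir="$1"
  min_age="$2"
  find "$dir" -type f -amin "+$min_age" -delete
}

# Example (hypothetical scratch directory -- do not aim this at /tmp itself
# unless you are sure nothing there is still in use):
# clean_old_files /tmp/myapp-cache 720
```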

Installing EPEL repo on CentOS 7.x


The EPEL (Extra Packages for Enterprise Linux) repository offers a variety of packages that can enhance your programming experience. These packages complement and extend the base packages that come with CentOS. Installing EPEL on CentOS 7 is straightforward (the following commands assume you have root privileges):

cd /tmp
wget http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/epel-release-7-1.noarch.rpm
yum install epel-release-7-1.noarch.rpm

That’s it. All the packages in the EPEL repo for CentOS 7.x and Red Hat Enterprise Linux (RHEL) 7.x are now at your fingertips.

Apache 403 Forbidden / permissions not set


When setting up a fresh install of Apache on CentOS 6.x you may encounter a “403 Forbidden” error stating that proper permissions have not been set to access the index.html file.

This is due to SELinux not recognizing changed files in the document root. The issue typically arises when you move (mv) files around: the original security context is preserved by the kernel's security module. To update SELinux, you simply need to tell it to recursively relabel all files in your web directory using restorecon.

restorecon -r /var/www/html

Now all files should be accessible for apache.

Tomcat fresh install on Amazon EC2 Redhat Instance


This tutorial will demonstrate how to install a fresh copy of apache tomcat 7.0.53 on an Amazon EC2 Red Hat based instance, including the installation of mysql, vsftpd, SSL (forced for the entire tomcat server), and iptables prerouting.

To begin, log in to your EC2 instance and do a quick yum update. This will ensure that all of your virtual machine's libraries are up to date.

yum update 

When prompted, type “yes” to install updates. This update process can last several minutes.

The first library to install will be mysql. Run the following commands to install the server.

yum install mysql
yum install mysql-server
yum install mysql-devel 

Once installed, enable mysql via chkconfig. This makes mysql start automatically on server reboot.

chkconfig mysqld on

Now you must configure mysql. Begin by starting the service.

service mysqld start 

On first start, mysqld prints a setup message similar to:

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
...
To do so, start the server, then issue the following commands:
Run the following command to set your new password for root login.

/usr/bin/mysqladmin -u root password 'new-password'

Now login to mysql terminal by typing the following:

mysql -u root -p

It will prompt you for the password you just set above. The next step is to set up user permissions. This is accomplished by first creating a user, then granting them access to a given database.

#Create a new user, with password
CREATE USER 'username'@'%' IDENTIFIED BY 'user_password';

#Set to given database for a user
GRANT ALL PRIVILEGES ON database_name.* TO 'username'@'%' WITH GRANT OPTION;

#List all users and grants
SELECT user,host FROM mysql.user;

Mysql is now ready to use. You have a user with grant permissions to access a given database (if you made one).

The next step is to set up apache tomcat 7.0.53. Navigate to the /opt directory of your server, then download the Tomcat archive and extract it.

cd /opt/
wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.53/bin/apache-tomcat-7.0.53.tar.gz
tar -zxvf apache-tomcat-7.0.53.tar.gz
rm apache-tomcat-7.0.53.tar.gz

Tomcat comes loaded with all the files you need. You can test the server by navigating to the bin directory and running the startup script.

cd /opt/apache-tomcat-7.0.53/bin/
./startup.sh

Note: If tomcat fails to start, check that a Java JDK is installed.

java -version
java version "1.7.0_71"
OpenJDK Runtime Environment (rhel- u71-b14)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)

If no installation of Java is found, install OpenJDK 1.7 using yum:

yum install java-1.7.0-openjdk java-1.7.0-openjdk-devel

It would be much nicer if you could start/stop the server like a service, e.g. “service tomcat start”. If you want tomcat to run as a service, read the Tomcat Service Script tutorial.

Next, tomcat should answer on port 80, the standard port for internet traffic. To direct traffic from port 80 to tomcat, follow my “Running Tomcat port 80” guide.

The next step is to enable SSL. In my case SSL must be forced on all requests, since private data is being transmitted.

First edit the conf/server.xml file. Note that the keystoreFile attribute should point to the location of your keystore file on the webserver. I have placed mine inside the tomcat installation directory.

<Connector port="8443" enableLookups="false" protocol="HTTP/1.1" proxyPort="443" keystorePass="changeit" keystoreFile="/opt/apache-tomcat-7.0.53/keys/tomcat.keystore" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" Server="My server name" clientAuth="false" sslProtocol="TLS" />

To force SSL on all connections, edit the conf/web.xml file. At the end of the file, before the closing </web-app> tag, add:

<!-- Require HTTPS for everything except /files and (favicon) and /css. -->
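The constraint itself is the standard servlet-spec way to require HTTPS: a security-constraint with a CONFIDENTIAL transport guarantee. A minimal sketch (without the /files and /css exemptions mentioned in the comment; the web-resource-name is arbitrary):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Entire Application</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```

With this in place, Tomcat redirects plain-HTTP requests to the secure port defined on the connector.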

Tomcat will now force SSL on all incoming connections and is ready for your war file. To upload a war file we need an FTP client. By default this Red Hat instance does not come with one configured; I chose vsftpd.

yum install vsftpd
yum install ftp

The next step is to configure permissions.

vi /etc/vsftpd/vsftpd.conf

Look for the following lines and uncomment / modify.
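Typical edits for this setup are sketched below; the exact values are assumptions, and the passive port range matches the 21100:21299 range opened in the iptables guide later in this document:

```ini
anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
pasv_enable=YES
pasv_min_port=21100
pasv_max_port=21299
```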


After edits are made, restart the service.

service vsftpd restart

Finally, you need to add a user to the system to login as.

adduser ec2-user
passwd ec2-user

Your server should now accept incoming connections via port 21 (FTP).

Once you log in you will only have access to your home directory, so you will not have permission to upload to the tomcat directory under /opt. To fix this, add a symbolic link in your home directory pointing to the webapps directory of the tomcat installation.

ln -s /opt/apache-tomcat-7.0.53/webapps/ /home/ec2-user/webapps

Fixing offending key in SSH known hosts

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending key in /root/.ssh/known_hosts:1
RSA host key for 75.101.XXX.XXX has changed and you have requested strict checking.
Host key verification failed.

In the event the IP address of your server changes and you are using a private key, there is a quick fix. From the example above, the offending key is “known_hosts:1”, i.e. line 1. To fix the error, remove line 1 of the known_hosts file. This solution was performed on CentOS 6.x using sed, a stream editor that applies textual transformations to a file line by line.

sed -i '1d' ~/.ssh/known_hosts

After running this command the offending key should be removed, and you should be prompted to add the new ip of the server to the known hosts.
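On systems with a reasonably recent OpenSSH, `ssh-keygen -R` does the same job without counting lines: it removes every key belonging to a given host or IP. A small sketch; `forget_host` is a hypothetical wrapper name, and the example host is a placeholder:

```shell
#!/bin/sh
# Remove all known_hosts entries for a host or IP using ssh-keygen -R.
# $1 = host or IP, $2 = known_hosts path (defaults to ~/.ssh/known_hosts).
forget_host() {
  host="$1"
  file="${2:-$HOME/.ssh/known_hosts}"
  ssh-keygen -R "$host" -f "$file"
}

# Example: forget_host server.example.com
```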


Running tomcat port 80


The Hypertext Transfer Protocol (HTTP) is the foundation of data communication for the web. By default Tomcat does not use port 80; it runs on port 8080 instead. Using iptables, all traffic can be pre-routed from port 80 to port 8080, and all traffic from port 443 (SSL) to port 8443 (the tomcat SSL port). This walkthrough shows how to set up port 80 forwarding in CentOS 6.x.

To do this modify your iptables file and replace the contents with the following.

vi /etc/sysconfig/iptables

Paste in the following:

# Generated by iptables-save v1.4.18 on Mon Aug 19 16:38:51 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8000 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 21100:21299 -j ACCEPT
COMMIT
# Completed on Mon Aug 19 16:38:51 2013
# Generated by iptables-save v1.4.18 on Mon Aug 19 16:38:51 2013
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# These lines direct all traffic to tomcat
-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
-A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8443
COMMIT
# Completed on Mon Aug 19 16:38:51 2013

Finally, restart iptables to apply the changes:

service iptables restart

Apache Archiva 5 min install


Apache Archiva is a quick and easy solution to set up your own repository management server. In this example I use CentOS 6.x for my OS.

How To Install / Configure:

Start by downloading the standalone version of Archiva. I suggest placing it in the /opt directory alongside your other server software.

cd /opt
wget http://mirror.cc.columbia.edu/pub/software/apache/archiva/2.0.1/binaries/apache-archiva-2.0.1-bin.tar.gz
tar -xvf apache-archiva-2.0.1-bin.tar.gz

Now you need to specify the port for Archiva to run on. The default port is 8080, which conflicts with Tomcat if it is also running on its default port. I have changed the port to 8081.
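In Archiva 2.0.x the port is set in conf/jetty.xml via the jetty.port system property. The relevant element looks roughly like this (the default attribute changed from 8080 to 8081 is our edit; treat the surrounding markup as an approximation and check your own jetty.xml):

```xml
<Set name="port">
  <SystemProperty name="jetty.port" default="8081"/>
</Set>
```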



Now at this point Archiva is ready to run. You can start Archiva by the following command.

/opt/apache-archiva-2.0.1/bin/archiva start

Archiva can now be accessed by going to http://localhost:8081/ in your browser. A simple GUI will allow you to setup administrative privileges.

Running as a service script

The above installation works but begs for better integration with CentOS. On Linux, the bin/archiva script is suitable for linking from the /etc/init.d/ directory, which the OS uses to control services. Creating a service script there will let you start / stop / restart Archiva easily.

Start by creating the archiva service file

vim /etc/init.d/archiva
chmod 0755 /etc/init.d/archiva

The chmod makes the service file executable. Then add the script below to the file:

#!/bin/sh
# Simple service script for Apache Archiva
# chkconfig: 35 20 80
# description: Archiva 2.0.1

ARCHIVA_PATH=/opt/apache-archiva-2.0.1/bin

case "$1" in
  start)
    ${ARCHIVA_PATH}/archiva start
    ;;
  stop)
    ${ARCHIVA_PATH}/archiva stop
    ;;
  status)
    ${ARCHIVA_PATH}/archiva status
    ;;
  restart)
    ${ARCHIVA_PATH}/archiva restart
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart}"
    exit 1
    ;;
esac

Test the service script above by running the following commands. It should gracefully control the service.

service archiva start
service archiva stop

I don’t like having to start archiva every time I restart my server. Add Archiva to chkconfig so it will start automatically on reboot.

chkconfig --add archiva
chkconfig archiva on

Apache Archiva 2.0.1 is now installed on CentOS.

Dropbox repository error on CentOS 6.x


Installing Dropbox on CentOS 6.x causes an error coming from the repo:

[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"

The repo can be fixed by modifying the /etc/yum.repos.d/dropbox.repo file. On line 3, locate the variable $releasever and replace it with 19. The resulting file will work with Fedora 16, 17, 18, 19, and 20.


name=Dropbox Repository
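For reference, a complete dropbox.repo then looks roughly like this; the baseurl is the standard Dropbox Fedora URL, but verify it against your installed file:

```ini
[Dropbox]
name=Dropbox Repository
baseurl=http://linux.dropbox.com/fedora/19/
gpgkey=http://linux.dropbox.com/fedora/rpm-public-key.asc
```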

Test the results using yum:

yum install nautilus-dropbox