Since Oracle bought Sun (and MySQL with it), I have wanted to improve my knowledge of other databases and check the alternatives (in case “something” happens to MySQL).
Luckily, MySQL is still here and still works fine, but over the last couple of months I have wanted to dig a little into PostgreSQL. I thought it would be easy to find online tutorials on the topic, but I noticed that online tutorials and forums about PostgreSQL are not that popular. Don’t get me wrong, there are several blogs which fit my learning habits perfectly, but spending so much time just to find the basics is not what I wanted.
I found a link to Packt Publishing’s free PostgreSQL backup e-book and didn’t wait (LINK). I posted a comment and a couple of days later received a Packt account with two books.
Instant PostgreSQL Starter by Daniel K. Lyons was exactly what I was looking for. A small (48-page) book went straight to my Dropbox so I can have it wherever I go. The book covers everything you need to get started with PostgreSQL.
You will learn how to install PostgreSQL, how to connect and how to create your first database. The book also covers basic SQL queries for creating tables, CRUD operations (create, read, update and delete records), etc. The last part of the book covers some features you should also know about (how to store passwords, working with XML, full text search, and basic options for improving speed and security). A few tips about backups are also there.
Using pgAdmin III as a GUI tool won’t be a problem because the book contains pgAdmin screenshots.
Don’t get me wrong… The book has only 48 pages and can be compared to a cheat sheet, but it is a very good portable book. Learning the PostgreSQL basics can take just one afternoon, and in less than a weekend you’ll become a “usable” DBA.
After the PostgreSQL starter book, I switched to PostgreSQL Backup and Restore How-to. We like to say that backups are priceless, and every DBA should know everything about creating and restoring them.
This (55-page) book is a step-by-step guide to backing up and restoring your database, covering:
- basic and partial exports
- simple restores
- creating and restoring binary backups
- compressing backups
- taking snapshots
- sync backups
- point-in-time recovery
- warm and hot standby restores
- streaming replication
Just like the first book, this one also went straight to my Dropbox. Having these books with you is a nice idea and I highly recommend them to all PostgreSQL beginners. You can buy them as e-books or as hard copies.
So… After the first part (Link), where we talked about the installation,
the next step is to create a root user and change the postgres and root passwords.
[root@XTdata init.d]# su postgres
bash-3.2$ createuser -s root
bash-3.2$ createdb root --owner=root
[root@XTdata data]# psql
Type "help" for help.
root=# ALTER USER postgres WITH PASSWORD 'SomePAASWDe348';
root=# ALTER USER root WITH PASSWORD 'SomePAASWDe3489898';
Now, the next step would be to allow remote connections.
postgresql.conf is the main PostgreSQL config file. To be able to reach the server remotely, find the commented line
#listen_addresses = 'localhost' # what IP address(es) to listen on;
uncomment the line and replace localhost with the server’s IP address (or replace it with *, which means listen on all interfaces):
listen_addresses = '*' # what IP address(es) to listen on;
PostgreSQL, by default, refuses all connections it receives from any remote host. Remote hosts can be controlled via the pg_hba.conf file (located in the same directory as postgresql.conf).
Add the following line
host all all 192.168.10.57/32 md5
where 192.168.10.57 is the remote host IP address.
Also, you can allow any host by replacing the 192.168.10.57/32 with 0.0.0.0/0.
The line syntax is
local DATABASE USER METHOD [OPTIONS]
host DATABASE USER ADDRESS METHOD [OPTIONS]
hostssl DATABASE USER ADDRESS METHOD [OPTIONS]
hostnossl DATABASE USER ADDRESS METHOD [OPTIONS]
which is documented inside the pg_hba.conf. Save the file and restart the server.
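To check that the server now accepts remote connections, here is a quick sketch (the server’s own IP address is not shown in this post, so the placeholder below is an assumption):

```shell
# Run from the remote host (192.168.10.57 in the example above);
# replace SERVER_IP with the PostgreSQL server's actual address.
# -h = host to connect to, -U = user, -d = database.
psql -h SERVER_IP -U root -d root
```

If the pg_hba.conf entry uses md5, psql should prompt for the password you set earlier.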
I prefer the pgAdmin III tool, which can be used for remote management. Fire it up, select File, Add Server… and enter a name, host, username and password.
This should be enough for now…
As you probably know, the default logrotate period on RH-based distros is 7 days. From my point of view, this is too long for production servers (files can become extremely large, so grepping through them can be very slow).
To change this behavior, open /etc/logrotate.conf and replace the weekly line with daily. Also, increase the number of rotated files you would like to keep from 4 to something larger (for example 40 or 50, which then means 40 or 50 days).
It should look like
# see "man logrotate" for details
# rotate log files daily
daily
# keep 40 days worth of backlogs
rotate 40
I wanted to enable a time limit for some users so they can use the FTP server only during working hours.
For RH-based systems with vsftpd:
Open /etc/security/time.conf and add
to the end.
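The post omits the actual time.conf line, so as a sketch: a pam_time rule uses the format services;ttys;users;times (see man time.conf), and a working-hours rule could look like this (the username is hypothetical):

```
# Allow user "john" to use vsftpd only on weekdays (Wk)
# between 08:00 and 17:00; format: services;ttys;users;times
vsftpd;*;john;Wk0800-1700
```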
Then open /etc/pam.d/vsftpd and add
account required pam_time.so
as the first line in the account section, so the file looks like:
session optional pam_keyinit.so force revoke
auth required pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
auth required pam_shells.so
auth include password-auth
account required pam_time.so
account include password-auth
session required pam_loginuid.so
session include password-auth
Save the changes and try to log in via FTP. In case you want to apply the same time limit to SSH, do the same: add
account required pam_time.so
inside /etc/pam.d/sshd and
inside /etc/security/time.conf file.
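Again, the concrete time.conf line is not shown in the post; following the same pam_time syntax, an sshd rule could look like this (username hypothetical):

```
# Restrict SSH logins for user "john" to weekday working hours
sshd;*;john;Wk0800-1700
```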
I already wrote about Sysbench and MySQL tuning scripts. Those articles can be found HERE and HERE.
Now let’s do some tests and see how performance depends on whether the write cache on your disks is enabled or disabled. Keep in mind that RH-based distros will probably disable the write cache (I tried Ubuntu on this server and the same thing happened – the write cache was disabled after installation).
Continue reading MySQL Performance – Howto – part 2 (write cache dependance)
If you want to find the newest file inside a directory, you can use a combination of ls and head (or tail).
bash$ ls -t | head -1
Another option is the following command
bash$ ls -lrth | tail -1
-rw-r----- 1 user ugroup 85 Nov 11 15:38 id695.txt
| (the pipe) sends the output of the first command as input to the second one.
tail — outputs the last lines of its input (head outputs the first lines)
-1 — the number of lines you want to display (if you don’t set -1, you get 10 lines by default)
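Parsing ls output can break on unusual filenames, so as an alternative sketch (GNU find assumed), you can sort files by their modification timestamp directly:

```shell
# Print each file's mtime (epoch seconds) and name, sort numerically,
# keep the last (newest) entry, then strip the timestamp column.
find . -maxdepth 1 -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -d' ' -f2-
```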
If you need to copy HTML tables into Excel, LibreOffice Calc or another spreadsheet application, take a look at Firefox and the Dafizilla Table2Clipboard Firefox add-on.
This add-on allows you to select rows and columns from a table by simply pressing the Control key and picking rows/columns with the left mouse button. The selection can be copied to the clipboard and pasted into spreadsheet applications without ugly results.
Pasting into plain text editors is also supported, as CSV.
Here is the LINK for this great tool.
Do not forget to donate 🙂
Using scp, rsync and ssh requires a password unless you add the public key from the source host to authorized_keys on the destination host.
scp and rsync are used to transfer or back up files between known hosts, typically by the same user on both hosts.
Let’s say you want to copy between two hosts, SOURCE and DESTINATION. SOURCE is the host where you would run the scp, ssh or rsync command.
On the SOURCE host, run
ssh-keygen -t rsa
It will prompt for a passphrase; do not enter anything, just press Enter. It will generate an identification (private key) and a public key. Never share the private key with anyone!
The public key will be generated in ~/.ssh/id_rsa.pub — in this case /root/.ssh/id_rsa.pub.
Copy id_rsa.pub file to DESTINATION inside the /root/.ssh/
On DESTINATION host, login as the remote user which you plan to use (in this case root) and copy the contents of id_rsa.pub to /root/.ssh/authorized_keys
# cat id_rsa.pub >> /root/.ssh/authorized_keys
# chmod 700 /root/.ssh/authorized_keys
If the authorized_keys file does not exist, the command above will create it. Make sure you remove the permission for others to read this file.
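As a side note, OpenSSH ships a helper that automates the copy-and-append steps above. A minimal sketch, assuming you still want to log in as root on DESTINATION:

```shell
# Append ~/.ssh/id_rsa.pub to root's authorized_keys on DESTINATION
# and fix the file permissions in one step (prompts for the password once).
ssh-copy-id root@DESTINATION
```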
On some distros, ssh by default does not allow root to log in. To enable root login, edit /etc/ssh/sshd_config and change the PermitRootLogin option from no to yes (on the DESTINATION host). Restart sshd to apply the change and that is it.
In case you want to ssh, scp or rsync from DESTINATION to SOURCE, you will be asked for a password. You can reverse the steps above (generate the key pair on DESTINATION and copy the public key to SOURCE) and it will work in both directions.
In case that one server gets hacked, the other one will be too 🙂
Keeping an eye on disk usage is a very good habit. There are several options for this; here I will describe only the df and du commands.
The df utility displays the disk space usage on all mounted filesystems.
[root@abc ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
35772016 20297132 13628408 60% /
/dev/sda1 101086 17692 78175 19% /boot
tmpfs 513044 0 513044 0% /dev/shm
Adding the -T (type) option will print the filesystem type. It is also a good idea to add the -h option, because by default df measures sizes in 1K blocks, which is difficult for a desktop user to recalculate.
Now we have
[root@abc ~]# df -h -T
Filesystem Type Size Used Avail Use% Mounted on
ext3 35G 20G 13G 60% /
/dev/sda1 ext3 99M 18M 77M 19% /boot
tmpfs tmpfs 502M 0 502M 0% /dev/shm
As you can see, it is much easier to understand.
You can use the du command to determine which files or directories need to be deleted or reduced. A simple du will print usage for the present working directory and its subdirectories, along with the size of each directory.
du directory will print the size of a particular directory. Useful options:
-h – print sizes in a human-readable format
-c – print the grand total size at the end
-a – display file names along with directories
-s – display only a summary, without all of the subdirectories
For example, du -ch | grep total prints just one line with the total size of the directory.
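To put the options above together, here is a quick sketch (the directory path is just an example):

```shell
# Summarize one directory in human-readable units
du -sh /var/log

# Print only the grand total line for the current directory
du -ch . | grep total
```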
Often the server admin has little control over the applications which use MySQL, and it is hard to find the bottlenecks. This blog post can’t bring peace to the world or help NASA finally land on Mars. Instead of those tasks, I’ll try to solve something else and present my own experience with MySQL storage engines (at least MyISAM and InnoDB, as the most popular ones).
Continue reading MySQL Performance – Howto – part 1 (high performance tuning scripts)