Category Archives: Red Hat/CentOS

CentOS server – NFS client/server howto

NFS stands for Network File System. Through NFS, a client can read and write files on a remote share exported by an NFS server as if they were on a local hard disk.

The first step in setting up an NFS client/server is to install the nfs-utils and nfs-utils-lib packages on both systems (server and client):

yum install nfs-utils nfs-utils-lib
chkconfig --level 235 nfs on
service nfs start

For example, the server IP is 10.0.0.1 and the client 10.0.0.2.

I’d like to use the server’s /test and /var/test directories from the client system. To make them accessible, we must “export” them on the server.

By default, root access from an NFS client is “squashed” to the unprivileged user “nobody” on the server, so a directory that should be writable from the client must either be owned by nobody or be exported with no_root_squash.
In this howto, the /test dir will be used as root while /var/test will be used as “nobody”. If the /var/test directory doesn’t exist, create it and change the ownership to user/group 65534 (the nonexistent “nobody” user/group).

mkdir /var/test
chown 65534:65534 /var/test

The next step (on the server side) is to modify /etc/exports

nano /etc/exports

and add the next lines

/test        10.0.0.2(rw,sync,no_root_squash,no_subtree_check)
/var/test    10.0.0.2(rw,sync,no_subtree_check)

The no_root_squash option disables root squashing for that export: client root access stays root, so all files copied/created from the client as root will be owned by root on the server. The sync option makes the server commit changes to disk before replying, and no_subtree_check disables subtree checking.

After you modify /etc/exports, run exportfs -a to make the changes effective.

exportfs -a
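
To verify that the shares are exported, you can check from the client with showmount (assuming the server’s rpcbind/portmap ports are reachable):

showmount -e 10.0.0.1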

The next step (on the client side) is to create the directories where you want to mount the NFS shares

mkdir -p /mnt/test
mkdir -p /mnt/var/test

Mount the NFS shares with

mount 10.0.0.1:/test /mnt/test
mount 10.0.0.1:/var/test /mnt/var/test

Verify the settings with:

df -h

The result should be something like

[root@client ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
....
10.0.0.1:/test    100G  25G   75G  25% /mnt/test
10.0.0.1:/var/test
                       100G  25G   75G  25% /mnt/var/test

and

mount

The result should be something like

[root@client ~]# mount
....
10.0.0.1:/test on /mnt/test type nfs (rw,addr=10.0.0.1)
10.0.0.1:/var/test on /mnt/var/test type nfs (rw,addr=10.0.0.1)

To mount the NFS shares at boot time, add the following lines to the /etc/fstab file

10.0.0.1:/test  /mnt/test   nfs      rw,sync,hard,intr  0     0
10.0.0.1:/var/test  /mnt/var/test   nfs      rw,sync,hard,intr  0     0
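
You can test the fstab entries without rebooting: unmount the shares and let mount re-read them from /etc/fstab (a quick sanity check, assuming nothing is currently using the mounts):

umount /mnt/test /mnt/var/test
mount -a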

Don’t forget to check the settings after a reboot.

Heartbleed Bug – OpenSSL

A massive vulnerability has been found in OpenSSL, the open-source software package widely used to encrypt Web communications. The flaw allows attackers to steal information that is normally protected by SSL/TLS encryption (web applications, e-mail, instant messaging, VPNs, etc.).

Essentially, that means a lot of Internet users are affected and passwords and credit card information could be available to hackers.

CentOS has released updated OpenSSL packages which fix this issue:

# yum update openssl
# service httpd restart
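
To verify that the fixed package is installed, check whether the package changelog mentions the Heartbleed CVE (the patched CentOS packages should reference CVE-2014-0160; note that “openssl version” still reports 1.0.1e because the fix is backported):

# rpm -q --changelog openssl | grep CVE-2014-0160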

For more information:
http://www.exploit-db.com/exploits/32745/
http://heartbleed.com/

Red Hat and CentOS join forces

This year has started very nicely for us…

It seems that Red Hat will try to compete with Oracle Unbreakable Linux, which is very similar to the CentOS project. According to the press release:

  • Red Hat Enterprise Linux will remain the same (for commercial development and deployment),
  • CentOS provides a base for community adoption and integration of open source technologies on a Red Hat-based platform (community integration beyond the operating system)
  • Fedora will continue to serve as the upstream project on which future Red Hat Enterprise Linux releases are based (mostly untested software for testing and home use).

More info can be found in the official press release.

Slow InnoDB insert/update

If you’re migrating from MyISAM to InnoDB, or you’re using MySQL 5.5.x or newer (where InnoDB is the default engine), you’ll probably be disappointed with the speed of INSERT/UPDATE queries on InnoDB tables. InnoDB is a transaction-safe, ACID-compliant MySQL storage engine, and with the default settings the log buffer is written out to the log file at each transaction commit and a flush-to-disk operation is performed on the log file. This can be very slow (but very safe – every transaction is 100% written to disk).

Since MyISAM is not an option, we need to tune our server so it works well with InnoDB. According to the MySQL site, the following should be considered:

  • Use the OPTIMIZE TABLE statement to reorganize the table and compact any wasted space. Of course, this operation won’t help if your database is empty.
  • Use an AUTO_INCREMENT column as the primary key.
  • If you’re storing variable-length strings or if the column may contain NULL values, use the VARCHAR data type instead of CHAR (smaller tables fit better in the buffer pool and reduce disk I/O).
  • Since InnoDB must flush the log to disk at each transaction commit (if that transaction made modifications to the database), combine several queries into a single transaction to reduce the number of flush operations.
  • If you’re not building a financial application which can’t afford data loss if a crash occurs, you can set the innodb_flush_log_at_trx_commit parameter to 0. In this case, InnoDB tries to flush the log once per second instead of after every transaction (the default setting is 1, which means flush the log after every transaction).
  • To reduce the amount of disk I/O used by queries that access InnoDB tables, you can increase innodb_buffer_pool_size.
  • Big disk-bound operations are always expensive. Use DROP TABLE and CREATE TABLE to empty a table, not DELETE FROM…. Also, TRUNCATE TABLE is much faster than DELETE FROM….
  • The innodb_flush_method parameter can also help, but you must test to find the right setting for your hardware and your database (possible values: fdatasync, O_DSYNC, O_DIRECT).
  • Make your log files big, even as big as the buffer pool, and make the log buffer quite large as well.
  • Disable autocommit during import operations (surround them with SET autocommit and COMMIT statements; a combined sketch follows this list):
    SET autocommit=0;
     SQL queries
    COMMIT;
  • Temporarily turning off the uniqueness checks during the import session will also help:
    SET unique_checks=0;
     SQL queries
    SET unique_checks=1;
  • Turn off foreign key checks during imports:
    SET foreign_key_checks=0;
     SQL queries
    SET foreign_key_checks=1;
  • If you often have repeating queries for tables that are not updated frequently, enable the query cache with
    query_cache_type = 1
    query_cache_size = 10M
  • Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:
    INSERT INTO tbl VALUES (1,2), (5,5), ...;
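
Combining several of these tips, a bulk-import session might look like this (a sketch; the table name and values are hypothetical):

SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
INSERT INTO tbl VALUES (1,2), (5,5), (7,8);
INSERT INTO tbl VALUES (9,10), (11,12);
COMMIT;
SET foreign_key_checks=1;
SET unique_checks=1;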

The list above is not exhaustive; please check the MySQL documentation for more details about those parameters.
In my case, I won’t change a lot of parameters. The only parameter I will change is innodb_flush_log_at_trx_commit, from the default value of 1 to 0.
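
On a default CentOS install, this goes in the [mysqld] section of /etc/my.cnf, followed by a restart of mysqld (a minimal sketch):

[mysqld]
innodb_flush_log_at_trx_commit = 0

service mysqld restart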

Before-and-after performance will be tested with sysbench. Since reading is not a problem right now, I’ll stick with the write operations.

R/W test
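
If the test table doesn’t exist yet, it must first be created with sysbench’s prepare mode (a sketch, assuming the same connection options as the run below):

sysbench --test=oltp --oltp-table-size=500000 --mysql-socket=/var/lib/mysql/mysql.sock --mysql-user=TEST_USER --mysql-password=TEST_PASSWORD prepare

Then run the test: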

sysbench --num-threads=16 --max-requests=10000 --test=oltp --oltp-table-size=500000 --mysql-socket=/var/lib/mysql/mysql.sock --oltp-test-mode=complex --mysql-user=TEST_USER --mysql-password=TEST_PASSWORD run

The result

OLTP test statistics:
    queries performed:
        read:                            146216
        write:                           52220
        other:                           20446
        total:                           218882
    transactions:                        10002  (181.90 per sec.)
    deadlocks:                           442    (8.04 per sec.)
    read/write requests:                 198436 (3608.90 per sec.)
    other operations:                    20446  (371.85 per sec.)
 
Test execution summary:
    total time:                          54.9852s
    total number of events:              10002
    total time taken by event execution: 879.1034
    per-request statistics:
         min:                                 33.38ms
         avg:                                 87.89ms
         max:                                480.77ms
         approx.  95 percentile:             135.31ms
 
Threads fairness:
    events (avg/stddev):           625.1250/2.29
    execution time (avg/stddev):   54.9440/0.03

Total time: 54.98s

Now, when I change innodb_flush_log_at_trx_commit to 0 (default value was 1), I get:

OLTP test statistics:
    queries performed:
        read:                            140000
        write:                           50000
        other:                           20000
        total:                           210000
    transactions:                        10000  (780.35 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 190000 (14826.69 per sec.)
    other operations:                    20000  (1560.70 per sec.)
 
Test execution summary:
    total time:                          12.8147s
    total number of events:              10000
    total time taken by event execution: 204.8297
    per-request statistics:
         min:                                  1.19ms
         avg:                                 20.48ms
         max:                               1669.69ms
         approx.  95 percentile:              44.50ms
 
Threads fairness:
    events (avg/stddev):           625.0000/19.56
    execution time (avg/stddev):   12.8019/0.00

Total time: 12.81s

As you can see, changing innodb_flush_log_at_trx_commit from 1 to 0 greatly increases write speed, but we can lose up to about a second of transactions in some cases (hardware or power failures, etc.). To reduce this risk, use battery-backed controllers, a UPS, RAID, …

CentOS 5 Call to undefined function sqlite_escape_string()

If you’re using PHP 5.2.x on RHEL/CentOS and you receive the error

PHP Fatal error:  Call to undefined function sqlite_escape_string()

don’t worry. The reason is the missing sqlite extension, which is not included in the RHEL/Fedora/CentOS PHP packages by default.

To fix this issue, you can build and install the extension manually:

# phpize and the build tools come from the php-devel package
yum install php-devel gcc
wget http://museum.php.net/php5/php-5.2.XX.tar.gz
tar xzvf php-5.2.XX.tar.gz
cd php-5.2.XX/ext/sqlite/
phpize
./configure
make
make install
# enable the extension and restart apache
echo extension=sqlite.so >> /etc/php.d/sqlite.ini
service httpd restart

Replace XX with your installed PHP version (check it with “php -v”).
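
If the build succeeded, the extension should now show up in PHP’s module list:

php -m | grep sqlite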

CentOS server – nginx howto

Nginx is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. Nginx now hosts about 12.18% (22.2 million) of active sites across all domains and is known for its high performance and low resource consumption.

To add nginx yum repository, create a file named /etc/yum.repos.d/nginx.repo and paste one of the configurations below:

For CentOS

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

For RHEL

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/$releasever/$basearch/
gpgcheck=0
enabled=1

Due to differences between how CentOS, RHEL, and Scientific Linux populate the $releasever variable, it is necessary to manually replace $releasever with either “5” (for 5.x) or “6” (for 6.x), depending upon your OS version.

Now, make sure Apache is not running

# service httpd stop
# chkconfig --level 235 httpd off

and install nginx with

# yum install nginx
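
Finally, start nginx and enable it at boot (mirroring the runlevels used for httpd above):

# service nginx start
# chkconfig --level 235 nginx on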

Why is PostgreSQL not so popular? (howto part 2)

So… after the first part, where we talked about the installation,
the next step is to create a root user and to change the postgres and root passwords.

[root@XTdata init.d]# su postgres
bash-3.2$ createuser -s root
bash-3.2$ createdb root --owner=root
bash-3.2$ exit
 
[root@XTdata data]# psql
psql (9.2.4)
Type "help" for help.
 
root=# ALTER USER postgres WITH PASSWORD 'SomePAASWDe348';
ALTER ROLE
root=# ALTER USER root WITH PASSWORD 'SomePAASWDe3489898';
ALTER ROLE
root=# \q

Now, the next step would be to allow remote connections.

postgresql.conf is the main PostgreSQL config file. To be able to reach the server remotely, find the commented line

#listen_addresses = 'localhost'         # what IP address(es) to listen on;

uncomment the line and replace localhost with the server’s IP address (or with *, which means listen on all interfaces):

listen_addresses = '*'         # what IP address(es) to listen on;

PostgreSQL, by default, refuses all connections it receives from any remote host. Allowed remote hosts are controlled via the pg_hba.conf file (located in the same directory as postgresql.conf).

Add the following line

host    all             all             192.168.10.57/32         md5

where 192.168.10.57 is the remote host IP address.

Also, you can allow any host by replacing the 192.168.10.57/32 with 0.0.0.0/0.

The line syntax is

local      DATABASE  USER  METHOD  [OPTIONS]
host       DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
hostssl    DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
hostnossl  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]

which is documented inside pg_hba.conf itself. Save the file and restart the server.
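
On CentOS that is typically (the service name may differ, e.g. postgresql-9.2 for the PGDG packages):

service postgresql restart

You can then test remote access from the allowed host (SERVER_IP is a placeholder for your server’s address):

psql -h SERVER_IP -U root root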

I prefer the pgAdmin III tool for remote management. Fire it up, select File → Add Server…, and enter a name, host, username, and password.

This should be enough for now…

Logrotate settings

As you probably know, the default logrotate period on RH-based distros is 7 days. From my point of view, this is too long for production servers (files can become extremely large, so grepping through them can be very slow).

To change this behavior, open /etc/logrotate.conf and replace the weekly line with daily. Also, increase the number of rotated files you would like to keep from 4 to something larger (for example 70, which with daily rotation means 70 days of logs).

It should look something like

# see "man logrotate" for details
# rotate log files weekly
#weekly
daily
 
# keep 70 days worth of backlogs
rotate 70
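
You can dry-run the new configuration to see what logrotate would do, without actually rotating anything:

logrotate -d /etc/logrotate.conf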