Category Archives: Red Hat/CentOS

SSH2 extension for PHP on CentOS 6

Before we can build and install the ssh2 extension, we’ll need a few packages

yum install gcc php-devel php-pear libssh2 libssh2-devel make

Install the extension via pecl

pecl install -f ssh2

On CentOS, PHP will not load the extension automatically. To “fix” this, create an ssh2.ini file inside /etc/php.d/ and add

extension=ssh2.so

Restart Apache (service httpd restart) and test PHP with

php -m | grep ssh2

As a response, you should get ssh2.
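For a quick functional check beyond the module list, you can also verify that the extension’s functions are callable (this assumes php is on your PATH):

```shell
# Prints bool(true) when the ssh2 extension is loaded, bool(false) otherwise.
php -r 'var_dump(function_exists("ssh2_connect"));'
```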

MyDumper – CentOS HowTo

Mydumper is a MySQL backup tool created by Domas Mituzas and later maintained by several other developers.

The main benefits are multi-threaded, fast backups with almost no locking (as long as you are only dumping InnoDB tables), built-in compression, and a separate file for each table, which makes it easy to restore a single table or schema. It also supports hard-linking files, which can reduce the space needed for a history of backups. Writing each table to its own file is also what makes it possible to dump (and restore) in multiple threads, so it is much faster than mysqldump.

In short – Mydumper is how a MySQL DBA or support engineer would have designed mysqldump.

To install mydumper, follow the next steps.

Install necessary devel libs and cmake

yum install glib2-devel mysql-devel zlib-devel pcre-devel openssl-devel cmake

Download the mydumper source archive from the project site.

Extract the archive, then build and install it with

tar -xvzf mydumper-0.6.2.tar.gz
cd mydumper-0.6.2
cmake .
make
make install

Creating backup


Note: My advice is to create a separate directory for every database.
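Putting it together, a single-database backup into its own directory might look like this (user, password, database name, and output path are placeholders; -c enables compression, -t sets the thread count):

```shell
# Dump the "mydb" database, compressed, using 4 threads,
# into a directory reserved for this database.
mkdir -p /backups/mydb
mydumper -u backup_user -p 'secret' -B mydb -c -t 4 -o /backups/mydb
```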

Restore from backup
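A matching restore sketch uses myloader, which ships with mydumper (same placeholder names as above; -o tells myloader to overwrite existing tables, -d points at the dump directory):

```shell
# Load the dump back into the "mydb" database, overwriting its tables.
myloader -u backup_user -p 'secret' -B mydb -o -d /backups/mydb
```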


GNU bash Environment Variable Command Injection

You can test your server for the bash environment variable command injection (Shellshock) with

[root@ss ~]# env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

If the output starts with “vulnerable”, as above, your bash is affected.

Update bash with

# yum -y update bash

and you’ll get

[root@ss ~]# env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

CentOS server – NFS client/server howto

NFS stands for Network File System; through NFS, a client can read and/or write a remote share on an NFS server as if it were on a local hard disk.

The first step to set up the NFS client/server is to install the nfs-utils and nfs-utils-lib packages on both systems (server and client)

yum install nfs-utils nfs-utils-lib
chkconfig --levels 235 nfs on 
service nfs start

For this example, let’s say the server’s IP address is 192.168.0.100 and the client’s is 192.168.0.101 (placeholder addresses – substitute your own throughout).

I’d like to use the /test and /var/test directories from the client system. To make them accessible, we must “export” them on the server.

From the client system, an NFS share is normally accessed as the user “nobody”. If read/write access from the NFS client should happen as root instead, the directory has to be exported with the no_root_squash option.
In this howto, the /test dir will be used as root while /var/test will be used as “nobody”. If the /var/test directory doesn’t exist, create it and change the ownership to user/group 65534 (a nonexistent user/group).

mkdir /var/test
chown 65534:65534 /var/test

The next step (on the server side) is to modify /etc/exports

nano /etc/exports

and add the next lines

/test     192.168.0.101(rw,sync,no_root_squash,no_subtree_check)
/var/test 192.168.0.101(rw,sync,no_subtree_check)

The no_root_squash parameter means the client accesses the dir as root (all files copied/created from the client will be owned by root).

After you modify /etc/exports, run exportfs -a to make the changes effective.

exportfs -a
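To double-check what the server is now exporting, showmount (also part of nfs-utils) can list the export table:

```shell
# Show the current export list of the local NFS server.
showmount -e localhost
```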

The next step (on the client side) is to create the directories where you want to mount the NFS shares

mkdir -p /mnt/test
mkdir -p /mnt/var/test

Mount NFS shares with

mount 192.168.0.100:/test /mnt/test
mount 192.168.0.100:/var/test /mnt/var/test

Verify the settings with:

df -h

The result should be something like

[root@client ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
...
192.168.0.100:/test      100G   25G   75G  25% /mnt/test
192.168.0.100:/var/test  100G   25G   75G  25% /mnt/var/test



You can also verify with the mount command:

[root@client ~]# mount
...
192.168.0.100:/test on /mnt/test type nfs (rw,addr=192.168.0.100)
192.168.0.100:/var/test on /mnt/var/test type nfs (rw,addr=192.168.0.100)

To mount the NFS shares at boot time, add the next lines to the /etc/fstab file

192.168.0.100:/test       /mnt/test      nfs   rw,sync,hard,intr  0  0
192.168.0.100:/var/test   /mnt/var/test  nfs   rw,sync,hard,intr  0  0

Don’t forget to check the settings after a reboot.

Heartbleed Bug – OpenSSL

A massive vulnerability has been found in OpenSSL, the open-source software package broadly used to encrypt Web communications. The flaw allows attackers to steal the information that is normally protected by SSL/TLS encryption (web applications, e-mail, instant messaging, VPNs, etc).

Essentially, that means a lot of Internet users are affected and passwords and credit card information could be available to hackers.

CentOS released the updated OpenSSL packages which should fix this issue.

# yum update openssl
# service httpd restart
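Note that on CentOS the openssl version string still reads 1.0.1e after the update, because Red Hat backports security fixes rather than rebasing. One way to confirm the Heartbleed fix (CVE-2014-0160) is installed is to search the package changelog:

```shell
# Prints the changelog entry if the fixed package is installed;
# no output means the fix is missing.
rpm -q --changelog openssl | grep CVE-2014-0160
```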


Red Hat and CentOS join forces

This year started very nicely for us…

It seems that Red Hat will try to fight Oracle Unbreakable Linux, which is very similar to the CentOS project. According to the press release:

  • Red Hat Enterprise Linux will remain the same (for commercial development and deployment),
  • CentOS provides a base for community adoption and integration of open source technologies on a Red Hat-based platform (community integration beyond the operating system)
  • Fedora will continue to serve as the upstream project on which future Red Hat Enterprise Linux releases are based (mostly untested software for testing and home use).

More info can be found in the official press release.

Slow InnoDB insert/update

If you’re migrating from MyISAM to InnoDB, or you’re using MySQL 5.5.x or newer (where InnoDB is the default engine), you’ll probably be disappointed with INSERT/UPDATE performance on InnoDB tables. InnoDB is a transaction-safe, ACID-compliant MySQL storage engine, and with default settings the log buffer is written out to the log file at each transaction commit and a flush-to-disk operation is performed on the log file. This can be very slow (but very safe – every transaction is 100% written to the disk).

Since MyISAM is not an option, we need to tune the server so it can be used with InnoDB efficiently. According to the MySQL site, the next couple of things should be considered:

  • Use OPTIMIZE TABLE statement to reorganize the table and compact any wasted space. Of course this operation won’t help if your database is empty
  • Use AUTO_INCREMENT column as the primary key
  • If you’re storing variable-length strings or if the column may contain NULL values, use the VARCHAR data type instead of CHAR (smaller tables fit better in the buffer pool and reduce disk I/O)
  • Since InnoDB must flush the log to disk at each transaction commit (if that transaction made modifications to the database), attach several queries into a single transaction to reduce the number of flush operations
  • In case you’re not building a finance application that can’t afford data loss if a crash occurs, you can set the innodb_flush_log_at_trx_commit parameter to 0. In this case, InnoDB tries to flush the log once per second and not after every transaction (the default setting is 1, which means flush the log after every transaction).
  • To reduce the amount of disk I/O used by queries to access InnoDB tables, you can increase the innodb_buffer_pool_size.
  • Big disk-bound operations are always expensive. Use DROP TABLE and CREATE TABLE to empty a table, not DELETE FROM…. Also, TRUNCATE TABLE is much faster than DELETE FROM….
  • innodb_flush_method parameter can also help but you must test yourself to see the right combination for your hardware and your database (possible values: fdatasync, O_DSYNC, O_DIRECT).
  • Make your log files big, even as big as the buffer pool and make the log buffer quite large as well
  • Disable autocommit during import operations (surround the import with SET autocommit and COMMIT statements)
    SET autocommit=0;
     SQL queries
    COMMIT;
  • Temporarily turning off the uniqueness checks during the import session will help.
    SET unique_checks=0;
     SQL queries
    SET unique_checks=1;
  • Turn off foreign key checks during imports.
    SET foreign_key_checks=0;
     SQL queries
    SET foreign_key_checks=1;
  • If you often have repeating queries for tables that are not updated frequently, enable the query cache with
    query_cache_type = 1
    query_cache_size = 10M
  • Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:
    INSERT INTO tbl VALUES (1,2), (5,5), ...;

The list above is not the final one; please check the MySQL documentation for more details about those parameters.
In my case, I won’t change a lot of parameters. The only parameter I will change is innodb_flush_log_at_trx_commit, from the default value of 1 to 0.
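As a sketch, on a stock CentOS install the setting lives under the [mysqld] section of /etc/my.cnf (adjust the path if your layout differs), and MySQL has to be restarted for it to take effect:

```shell
# Add the faster (but less durable) flush mode once, then restart MySQL.
# With this setting, up to ~1 second of committed transactions
# can be lost if the server crashes.
grep -q '^innodb_flush_log_at_trx_commit' /etc/my.cnf || \
  sed -i '/^\[mysqld\]/a innodb_flush_log_at_trx_commit = 0' /etc/my.cnf
service mysqld restart
```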

Before-and-after performance will be tested with sysbench. Since reading is not a problem right now, I’ll stick with the write operations.

R/W test

sysbench --num-threads=16 --max-requests=10000 --test=oltp --oltp-table-size=500000 --mysql-socket=/var/lib/mysql/mysql.sock --oltp-test-mode=complex --mysql-user=TEST_USER --mysql-password=TEST_PASSWORD run
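If the test table doesn’t exist yet, the run phase will fail; with the legacy sysbench syntax used above, the table is created first by the prepare phase (same placeholder credentials):

```shell
# Create and populate the 500k-row sbtest table before the "run" phase.
sysbench --test=oltp --oltp-table-size=500000 \
  --mysql-socket=/var/lib/mysql/mysql.sock \
  --mysql-user=TEST_USER --mysql-password=TEST_PASSWORD prepare
```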

The result

OLTP test statistics:
    queries performed:
        read:                            146216
        write:                           52220
        other:                           20446
        total:                           218882
    transactions:                        10002  (181.90 per sec.)
    deadlocks:                           442    (8.04 per sec.)
    read/write requests:                 198436 (3608.90 per sec.)
    other operations:                    20446  (371.85 per sec.)
Test execution summary:
    total time:                          54.9852s
    total number of events:              10002
    total time taken by event execution: 879.1034
    per-request statistics:
         min:                                 33.38ms
         avg:                                 87.89ms
         max:                                480.77ms
         approx.  95 percentile:             135.31ms
Threads fairness:
    events (avg/stddev):           625.1250/2.29
    execution time (avg/stddev):   54.9440/0.03

Total time: 54.98s

Now, when I change innodb_flush_log_at_trx_commit to 0 (default value was 1), I get:

OLTP test statistics:
    queries performed:
        read:                            140000
        write:                           50000
        other:                           20000
        total:                           210000
    transactions:                        10000  (780.35 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 190000 (14826.69 per sec.)
    other operations:                    20000  (1560.70 per sec.)
Test execution summary:
    total time:                          12.8147s
    total number of events:              10000
    total time taken by event execution: 204.8297
    per-request statistics:
         min:                                  1.19ms
         avg:                                 20.48ms
         max:                               1669.69ms
         approx.  95 percentile:              44.50ms
Threads fairness:
    events (avg/stddev):           625.0000/19.56
    execution time (avg/stddev):   12.8019/0.00

Total time: 12.81s

As you can see, changing innodb_flush_log_at_trx_commit from 1 to 0 greatly increases write speed, but up to about a second of committed transactions can be lost in some cases (hardware or power failures, etc.). To reduce that risk, use a UPS, redundant power supplies, a RAID controller with a battery-backed write cache, and so on.

CentOS 5 Call to undefined function sqlite_escape_string()

If you’re using PHP 5.2.x on RHEL/CentOS and you receive the error

PHP Fatal error:  Call to undefined function sqlite_escape_string()

don’t worry. The reason for this is the missing sqlite extension, which is not included in the RHEL/Fedora/CentOS PHP packages by default.

To fix this issue, you can build it manually from the matching PHP source tarball

tar xzvf php-5.2.XX.tar.gz
cd php-5.2.XX/ext/sqlite/
phpize
./configure
make
make install
echo "extension=sqlite.so" >> /etc/php.d/sqlite.ini
service httpd restart

Replace XX with your PHP version (check the PHP version with “php -v”)