Saturday, July 23, 2016

Notes on Galera SST/IST network port order

SST:

pre-SST:
Donor:
MySQL(3306) and Galera replication port(4567) are open.

Joiner: no ports are open

Note: the clustercheck port(9200), served via xinetd, may be listening on both the Joiner and the Donor

MySQL is started on Joiner:
Joiner:
Galera replication port(4567) and SST port(4444) are open
Connects to Galera replication port(4567) of Donor

Donor:
MySQL(3306) and Galera replication port(4567) are open
Connects to Galera replication port(4567) of Joiner
Connects to SST port(4444) of Joiner

When SST completes:
Donor:
MySQL(3306) and Galera replication port(4567) are open
Maintains connection to Galera replication port(4567) of Joiner
Connection to SST port(4444) of Joiner is closed

Joiner:
MySQL(3306) and Galera replication port(4567) are open
SST port(4444) is closed
Maintains connection to Galera replication port(4567) of Donor

IST:
pre-IST:
Donor:
MySQL(3306) and Galera replication port(4567) are open.

Joiner: no ports are open

Note: the clustercheck port(9200), served via xinetd, may be listening on both the Joiner and the Donor

MySQL is started on Joiner:
Joiner:
Galera replication port(4567) and SST port(4444) are open
Connects to Galera replication port(4567) of Donor

Donor:
MySQL(3306) and Galera replication port(4567) are open
Connects to Galera replication port(4567) of Joiner
Connects to SST port(4444) of Joiner

When the SST stage completes:
Donor:
MySQL(3306) and Galera replication port(4567) are open
Maintains connection to Galera replication port(4567) of Joiner
Connection to SST port(4444) of Joiner is closed

Joiner:
Galera replication port(4567) is open
SST port(4444) is closed
Maintains connection to Galera replication port(4567) of Donor

IST started:
Joiner:
Galera replication port(4567), MySQL(3306), and IST port(4568) are open
Maintains connection to Galera replication port(4567) of Donor
Note: the MySQL(3306) port opens a little after the IST port is open and the Donor has connected to the IST port

Donor:
MySQL(3306) and Galera replication port(4567) are open
Maintains connection to Galera replication port(4567) of Joiner
Connects to IST port(4568) of Joiner

When IST completes:
Donor:
MySQL(3306) and Galera replication port(4567) are open
Maintains connection to Galera replication port(4567) of Joiner
Connection to IST port(4568) of Joiner is closed

Joiner:
MySQL(3306) and Galera replication port(4567) are open
IST port(4568) is closed
Maintains connection to Galera replication port(4567) of Donor
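Putting the port list together, here is a firewall sketch for one Galera node. The node IPs are hypothetical examples, and the loop only prints the iptables commands so you can review them before piping the output to sh:

```shell
# Ports observed above: MySQL(3306), Galera replication(4567),
# IST(4568), SST(4444). The node IPs are hypothetical examples.
CLUSTER_NODES="192.168.56.11 192.168.56.12 192.168.56.13"
GALERA_PORTS="3306 4567 4568 4444"

# Print one ACCEPT rule per node/port pair.
for node in $CLUSTER_NODES; do
  for port in $GALERA_PORTS; do
    echo "iptables -A INPUT -s $node -p tcp --dport $port -j ACCEPT"
  done
done
```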







Friday, August 21, 2015

Sysbench File I/O tests on SSD

Here's an indication of why the I/O scheduler for SSD disks should be noop or deadline. I've performed a sysbench file I/O test on a 100GB SSD disk.

sysbench --test=fileio --file-total-size=40G prepare

echo "cfq" > /sys/block/sda/queue/scheduler
sysbench --num-threads=16 --test=fileio --file-total-size=40G --file-test-mode=rndrw --max-time=300 --max-requests=0 run

echo "deadline" > /sys/block/sda/queue/scheduler
sysbench --num-threads=16 --test=fileio --file-total-size=40G --file-test-mode=rndrw --max-time=300 --max-requests=0 run

echo "noop" > /sys/block/sda/queue/scheduler
sysbench --num-threads=16 --test=fileio --file-total-size=40G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
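The three runs above can be driven from a single loop. This sketch is a dry run that only prints the commands (pipe its output to sh to actually execute them):

```shell
SYSBENCH_OPTS="--num-threads=16 --test=fileio --file-total-size=40G --file-test-mode=rndrw --max-time=300 --max-requests=0"

# Emit the scheduler switch and the benchmark command for each scheduler.
for sched in cfq deadline noop; do
  echo "echo $sched > /sys/block/sda/queue/scheduler"
  echo "sysbench $SYSBENCH_OPTS run > fileio-$sched.log"
done
```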

Preparing the files:
sysbench --test=fileio --file-total-size=40G prepare
42949672960 bytes written in 310.16 seconds (132.06 MB/sec).

* Sequential write throughput is ~132MB/sec for this disk

root@fisher-All-Series:/mnt/ssd/sysbench-tests# echo "cfq" > /sys/block/sda/queue/scheduler
root@fisher-All-Series:/mnt/ssd/sysbench-tests# sysbench --num-threads=16 --test=fileio --file-total-size=40G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Random number generator seed is 0 and will be ignored


Extra file open flags: 0
128 files, 320Mb each
40Gb total file size
Block size 16Kb
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed:  357906 reads, 238594 writes, 763476 Other = 1359976 Total
Read 5.4612Gb  Written 3.6407Gb  Total transferred 9.1019Gb  (31.067Mb/sec)
 1988.30 Requests/sec executed

General statistics:
    total time:                          300.0044s
    total number of events:              596500
    total time taken by event execution: 683.7045s
    response time:
         min:                                  0.00ms
         avg:                                  1.15ms
         max:                                221.08ms
         approx.  95 percentile:               6.49ms

Threads fairness:
    events (avg/stddev):           37281.2500/9609.62
    execution time (avg/stddev):   42.7315/11.04

root@fisher-All-Series:/mnt/ssd/sysbench-tests# 
root@fisher-All-Series:/mnt/ssd/sysbench-tests# echo "deadline" > /sys/block/sda/queue/scheduler
root@fisher-All-Series:/mnt/ssd/sysbench-tests# sysbench --num-threads=16 --test=fileio --file-total-size=40G --file-test-mode=rndrw --max-time=300 --max-requests=0  run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Random number generator seed is 0 and will be ignored


Extra file open flags: 0
128 files, 320Mb each
40Gb total file size
Block size 16Kb
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed:  371466 reads, 247634 writes, 792422 Other = 1411522 Total
Read 5.6681Gb  Written 3.7786Gb  Total transferred 9.4467Gb  (32.244Mb/sec)
 2063.62 Requests/sec executed

General statistics:
    total time:                          300.0069s
    total number of events:              619100
    total time taken by event execution: 73.9194s
    response time:
         min:                                  0.00ms
         avg:                                  0.12ms
         max:                                 31.94ms
         approx.  95 percentile:               0.04ms

Threads fairness:
    events (avg/stddev):           38693.7500/825.77
    execution time (avg/stddev):   4.6200/0.16

root@fisher-All-Series:/mnt/ssd/sysbench-tests# echo "noop" > /sys/block/sda/queue/scheduler
root@fisher-All-Series:/mnt/ssd/sysbench-tests# sysbench --num-threads=16 --test=fileio --file-total-size=40G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Random number generator seed is 0 and will be ignored


Extra file open flags: 0
128 files, 320Mb each
40Gb total file size
Block size 16Kb
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed:  444665 reads, 296435 writes, 948587 Other = 1689687 Total
Read 6.785Gb  Written 4.5232Gb  Total transferred 11.308Gb  (38.597Mb/sec)
 2470.19 Requests/sec executed

General statistics:
    total time:                          300.0173s
    total number of events:              741100
    total time taken by event execution: 199.5376s
    response time:
         min:                                  0.00ms
         avg:                                  0.27ms
         max:                                 22.52ms
         approx.  95 percentile:               0.81ms

Threads fairness:
    events (avg/stddev):           46318.7500/1365.75
    execution time (avg/stddev):   12.4711/0.15

Clearly, noop wins!

Just for kicks, I added the nobarrier and noatime options on the SSD mount. Do note that nobarrier is unsafe for a disk subsystem that does not have a working battery backup unit. Surprisingly, cfq wins in this benchmark:

cfq
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Random number generator seed is 0 and will be ignored


Extra file open flags: 0
128 files, 320Mb each
40Gb total file size
Block size 16Kb
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed:  1706155 reads, 1137429 writes, 3639680 Other = 6483264 Total
Read 26.034Gb  Written 17.356Gb  Total transferred 43.39Gb  (148.1Mb/sec)
 9478.49 Requests/sec executed

General statistics:
    total time:                          300.0038s
    total number of events:              2843584
    total time taken by event execution: 3621.4250s
    response time:
         min:                                  0.00ms
         avg:                                  1.27ms
         max:                                 47.11ms
         approx.  95 percentile:               7.53ms

Threads fairness:
    events (avg/stddev):           177724.0000/1206.64
    execution time (avg/stddev):   226.3391/0.40

deadline
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Random number generator seed is 0 and will be ignored


Extra file open flags: 0
128 files, 320Mb each
40Gb total file size
Block size 16Kb
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed:  1527184 reads, 1018116 writes, 3257894 Other = 5803194 Total
Read 23.303Gb  Written 15.535Gb  Total transferred 38.838Gb  (132.57Mb/sec)
 8484.28 Requests/sec executed

General statistics:
    total time:                          300.0019s
    total number of events:              2545300
    total time taken by event execution: 3884.6784s
    response time:
         min:                                  0.00ms
         avg:                                  1.53ms
         max:                                 46.72ms
         approx.  95 percentile:               7.59ms

Threads fairness:
    events (avg/stddev):           159081.2500/578.04
    execution time (avg/stddev):   242.7924/0.39

noop
sysbench 0.5:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Random number generator seed is 0 and will be ignored


Extra file open flags: 0
128 files, 320Mb each
40Gb total file size
Block size 16Kb
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!

Operations performed:  1498986 reads, 999314 writes, 3197715 Other = 5696015 Total
Read 22.873Gb  Written 15.248Gb  Total transferred 38.121Gb  (130.12Mb/sec)
 8327.61 Requests/sec executed

General statistics:
    total time:                          300.0022s
    total number of events:              2498300
    total time taken by event execution: 3877.6835s
    response time:
         min:                                  0.00ms
         avg:                                  1.55ms
         max:                                 49.04ms
         approx.  95 percentile:               7.62ms

Threads fairness:
    events (avg/stddev):           156143.7500/628.55
    execution time (avg/stddev):   242.3552/0.51

For now, we can conclude that noop and deadline are the best I/O schedulers for SSDs compared to cfq when the barrier option is set. When the nobarrier option is set, throughput increases around 3-4x for each scheduler. I am curious why cfq reigned supreme with this setting. In the next post, we will run benchmarks on MySQL directly.
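For reference, the active scheduler is the bracketed entry in the sysfs file ("noop deadline [cfq]" means cfq is active). A one-line helper can extract it; active_scheduler is just a name I'm introducing for this sketch:

```shell
# Print the active I/O scheduler (the bracketed entry) from a
# scheduler file such as /sys/block/sda/queue/scheduler.
active_scheduler() {
  sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}
```

For example, `active_scheduler /sys/block/sda/queue/scheduler` prints the scheduler currently in effect.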






Friday, June 19, 2015

Notes: exporting a database from MySQL and importing into MongoDB

The MySQL data comes from the test-db sample database - https://launchpad.net/test-db

mysqldump --tab=/tmp --fields-terminated-by=, --fields-enclosed-by='"' --lines-terminated-by=0x0d0a employees

mongoimport --db=employees --collection=employees --type=csv --fields=emp_no,birth_date,first_name,last_name,gender,hire_date --file=employees.txt
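As an alternative to the --fields-* flags, a plain tab-separated dump (mysqldump --tab with no formatting options) can be converted to quoted CSV with a small awk filter. to_csv is just a name I'm introducing for this sketch:

```shell
# Wrap each tab-separated field in double quotes and join with commas.
to_csv() {
  awk 'BEGIN { FS = "\t"; OFS = "," } { for (i = 1; i <= NF; i++) $i = "\"" $i "\""; print }'
}

printf '10001\t1953-09-02\tGeorgi\n' | to_csv
# "10001","1953-09-02","Georgi"
```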

Thursday, June 18, 2015

Static hosting with Node.js and Express

I'm trying to learn Node.js, and I will use this knowledge to power the backend of an RIA built with ExtJS or Angular.

Anyway, to serve static web files from a folder in Node, all you need to do is this:

npm install express
mkdir public
#place web documents in public folder

Write a node app:
app.js
var express = require('express');
var app = express();

app.use(express.static('public'));

app.listen(8080, function () {
  console.log('Static hosting activated');
});

Run the node app:
node app.js

Where is this useful? The public folder could be used to host your frontend application and library.

Saturday, February 14, 2015

Mapping CRUD routes manually

So, in routes.rb, adding this CRUD route:

resources :employees

Generates these routes:
$ rake routes

       Prefix Verb   URI Pattern                   Controller#Action
    employees GET    /employees(.:format)          employees#index
              POST   /employees(.:format)          employees#create
 new_employee GET    /employees/new(.:format)      employees#new
edit_employee GET    /employees/:id/edit(.:format) employees#edit
     employee GET    /employees/:id(.:format)      employees#show
              PATCH  /employees/:id(.:format)      employees#update
              PUT    /employees/:id(.:format)      employees#update
              DELETE /employees/:id(.:format)      employees#destroy

To enter these manually, add:
get    'employees'          => 'employees#index'
post   'employees'          => 'employees#create'
get    'employees/new'      => 'employees#new',    as: :new_employee
get    'employees/:id/edit' => 'employees#edit',   as: :edit_employee
get    'employees/:id'      => 'employees#show',   as: :employee
patch  'employees/:id'      => 'employees#update'
put    'employees/:id'      => 'employees#update'
delete 'employees/:id'      => 'employees#destroy'

For more information, see http://guides.rubyonrails.org/routing.html

Saturday, August 16, 2014

Simple bash script for getting the current, average and maximum status value in MySQL

Here's a script to get the current, average and maximum value of a status counter in MySQL. This is useful when you need to monitor a certain counter which you will use as a threshold for running administrative scripts such as pt-online-schema-change, pt-table-checksum, pt-stalk, etc:

counters.sh
#!/bin/bash

INTERVAL=1
USER=root
PASSWORD=msandbox
HOST=127.0.0.1
VARIABLE=Threads_running
MAX=0
COUNT=0
SUM=0
while true
do
  CURRENT=$(mysql -h"$HOST" -u"$USER" -p"$PASSWORD" -BNe "SHOW GLOBAL STATUS LIKE '$VARIABLE'" | awk '{print $2}')
  if [ "$CURRENT" -gt "$MAX" ]; then
    MAX=$CURRENT
  fi
  SUM=$((SUM + CURRENT))
  COUNT=$((COUNT + 1))
  AVG=$((SUM / COUNT))
  echo "$VARIABLE: CUR=$CURRENT MAX=$MAX AVG=$AVG"
  sleep "$INTERVAL"
done


Example:
./counters.sh 
Threads_running: CUR=1 MAX=1 AVG=1
Threads_running: CUR=1 MAX=1 AVG=1
Threads_running: CUR=1 MAX=1 AVG=1
Threads_running: CUR=1 MAX=1 AVG=1
Threads_running: CUR=137 MAX=137 AVG=28
Threads_running: CUR=140 MAX=140 AVG=46
Threads_running: CUR=140 MAX=140 AVG=60
Threads_running: CUR=136 MAX=140 AVG=69
Threads_running: CUR=125 MAX=140 AVG=75
Threads_running: CUR=144 MAX=144 AVG=82
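The same running statistics can be computed from any stream of numbers with a single awk process instead of calling expr on every iteration. summarize is a name introduced for this sketch:

```shell
# Track CUR/MAX/AVG over a stream of integers, one per line.
summarize() {
  awk '{ sum += $1; if ($1 > max) max = $1
         printf "CUR=%d MAX=%d AVG=%d\n", $1, max, sum / NR }'
}

printf '1\n137\n140\n' | summarize
# CUR=1 MAX=1 AVG=1
# CUR=137 MAX=137 AVG=69
# CUR=140 MAX=140 AVG=92
```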

Tuesday, June 17, 2014

Useful 1 liner perl script(s)

I'll be updating this article every time I use a one-liner Perl snippet at work.

1.  perl -n -e 'chomp;print $_ . " "';

Use this when you need to concatenate input lines with spaces and display the result.
Example: 

#rpm -qa|grep rpm|perl -n -e 'chomp;print $_ . " "';
rpm-libs-4.8.0-37.el6.x86_64 rpm-4.8.0-37.el6.x86_64 rpm-python-4.8.0-37.el6.x86_64

#rpm -qa|grep perl|perl -n -e 'chomp;print $_ . " "' | xargs echo "yum install";
yum install perl-Pod-Escapes-1.04-136.el6.x86_64 perl-libs-5.10.1-136.el6.x86_64 perl-Module-Pluggable-3.90-136.el6.x86_64 perl-DBI-1.609-4.el6.x86_64 perl-Net-LibIDN-0.12-3.el6.x86_64 perl-Net-SSLeay-1.35-9.el6.x86_64 perl-Time-HiRes-1.9721-136.el6.x86_64 perl-Pod-Simple-3.13-136.el6.x86_64 perl-version-0.77-136.el6.x86_64 perl-5.10.1-136.el6.x86_64 perl-DBD-MySQL-4.013-3.el6.x86_64 perl-IO-Socket-SSL-1.31-2.el6.noarch