Verify database Objects:
1. Verify table.
a. SHOW TABLE STATUS LIKE '%user%'\G
b. SHOW TABLE STATUS from forum LIKE '%user%' \G
2. List all tables that use the MyISAM engine.
a. SELECT TABLE_SCHEMA, table_name, table_type, engine From information_schema.tables where engine='MyISAM';
3. To know rows per table, you can use a query like this:
a. SELECT table_name, table_rows FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'forum';
4. Verify your procedures / functions.
a. SHOW PROCEDURE STATUS LIKE 'hello'\G or SHOW FUNCTION STATUS LIKE 'hello'\G
b. SHOW CREATE PROCEDURE hello\G or SHOW CREATE FUNCTION hello\G
5. Routines
a. select routine_name from information_schema.routines; // list of all the routines in the system
b. select routine_name, routine_schema, routine_type from information_schema.routines; // This lists all of the routines in the system, with additional information like the database each routine belongs to, and distinguishes between procedures and functions.
c. select routine_name, routine_schema, routine_type from information_schema.routines where routine_schema = database(); // This time we see just the routines for the currently selected database.
6. Triggers
a. select TRIGGER_SCHEMA,TRIGGER_NAME from INFORMATION_SCHEMA.TRIGGERS;
b. select TRIGGER_NAME from INFORMATION_SCHEMA.TRIGGERS where TRIGGER_SCHEMA = database();
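c. Alternatively, SHOW TRIGGERS lists the same information for a single database (forum here is just the example database name used in the earlier tips):
SHOW TRIGGERS FROM forum\G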
Thursday, December 31, 2009
Monday, December 28, 2009
MySQL Tips : Calculate database and table size
Few MySQL Tips:
a. Calculate database size:
-> It will show database size
SELECT s.schema_name, CONCAT(IFNULL(ROUND(SUM(t.data_length)/1024/1024,2),0.00),"Mb") as Data_size, CONCAT(IFNULL(ROUND(SUM(t.index_length)/1024/1024,2),0.00),"Mb") as Index_size,COUNT(table_name) total_tables FROM INFORMATION_SCHEMA.SCHEMATA s LEFT JOIN INFORMATION_SCHEMA.TABLES t ON s.schema_name = t.table_schema WHERE s.schema_name not in("mysql","information_schema","test") GROUP BY s.schema_name order by Data_size DESC;
-> It will show database size along with table size.
SELECT s.schema_name,t.table_name, CONCAT(IFNULL(ROUND(SUM(t.data_length)/1024/1024,2),0.00),"Mb") data_size,CONCAT(IFNULL(ROUND(SUM(t.index_length)/1024/1024,2),0.00),"Mb") index_size, t.ENGINE ENGINE, t.table_rows TABLE_ROWS,t.row_format TABLE_ROW_FORMAT,date(t.update_time) FROM INFORMATION_SCHEMA.SCHEMATA s LEFT JOIN INFORMATION_SCHEMA.TABLES t ON s.schema_name = t.table_schema WHERE s.schema_name not in ("mysql","information_schema") GROUP BY s.schema_name,t.table_name,TABLE_ROW_FORMAT,ENGINE ORDER BY TABLE_ROWS DESC,data_size DESC,index_size DESC;
-> It will show table size of engine 'MEMORY'.
SELECT concat(table_schema,'.',table_name) as Database_Tablename, table_rows as Rows, concat(round(data_length/(1024*1024),2),'M') DATA, concat(round(index_length/(1024*1024),2),'M') idx, concat(round((data_length+index_length)/(1024*1024),2),'M') total_size, round(index_length/data_length,2) idxfrac FROM information_schema.TABLES where ENGINE='MEMORY';
-> It will show 5 top tables using space.
SELECT concat(table_schema,'.',table_name) as Database_Tablename,
table_rows as Rows, concat(round(data_length/(1024*1024),2),'M') DATA,
concat(round(index_length/(1024*1024),2),'M') idx, concat(round((data_length+index_length)/(1024*1024),2),'M') total_size,
round(index_length/data_length,2) idxfrac FROM information_schema.TABLES ORDER BY data_length+index_length DESC limit 5;
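Along the same lines, a rough sketch (not from the original tips, just a variation on the queries above) that summarizes space usage per storage engine:
SELECT engine, COUNT(*) AS total_tables, CONCAT(IFNULL(ROUND(SUM(data_length+index_length)/1024/1024,2),0.00),'Mb') AS total_size FROM INFORMATION_SCHEMA.TABLES WHERE table_schema NOT IN ('mysql','information_schema') GROUP BY engine;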
Wednesday, December 23, 2009
Temporarily stop/start a process in linux
Sometimes we have a requirement that a particular job should stop for a certain period of time and then start again.
Most of us are familiar with the KILL command, but here is another feature of kill which can save your life :-
#kill -STOP 10067 (where 10067 is process id)
#kill -CONT 10067 (where 10067 is process id)
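To find the process id in the first place, something like this works (mysqldump is just a hypothetical example of a job you might want to pause):
#pgrep -lf mysqldump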
Have fun :)
Wednesday, November 04, 2009
Extract single db/table from dump file
Using sed to extract a single database from the dump file:
=========================================
sed -n '/^-- Current Database: `dbname`/,/^-- Current Database: `/p' dumpfile > dbname.sql 2>error
eg:
sed -n '/^-- Current Database: `blogs`/,/^-- Current Database: `/p' dump.sql > blogs.sql 2>error
We can also use awk for the same.
Extract a single table from the dump file:
=========================================
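A sketch of the equivalent sed command for a single table, assuming the dump was produced by mysqldump (which writes a "-- Table structure for table `tablename`" comment before each table):
sed -n '/^-- Table structure for table `tablename`/,/^-- Table structure for table `/p' dumpfile > tablename.sql 2>error
eg:
sed -n '/^-- Table structure for table `users`/,/^-- Table structure for table `/p' dump.sql > users.sql 2>error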
Tuesday, October 27, 2009
nohup : commands keep executing even after you exit from a shell prompt
The nohup command, if added in front of any command, will keep that command or process running even if you shut down your terminal or close your session to the machine.
nohup command-name &
Where,
* command-name : the name of a command or shell script. You can pass arguments to the command or shell script.
* & : nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.
examples:
# nohup mysql -q -uUSER1 -pPASS1 < dump.sql > dump.log 2> error.log &
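If you do not redirect output yourself, nohup appends it to nohup.out in the current directory (or $HOME/nohup.out if the current directory is not writable). A minimal sketch without explicit redirection, where backup.sh is just a hypothetical script:
# nohup ./backup.sh &
# tail -f nohup.out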
source:
http://www.idevelopment.info/data/Unix/General_UNIX/GENERAL_RunningUNIXCommandsImmunetoHangups_nohup.shtml
http://www.cyberciti.biz/tips/nohup-execute-commands-after-you-exit-from-a-shell-prompt.html
Wednesday, October 07, 2009
Reset your MySQL Table Auto Increment Value
Recently, I created tables using an auto-increment number as the primary key. As the name says, the number increases every time I insert a new record into the table. At some point I wanted to delete all the dummy data I had inserted and reset the counter.
The general syntax is:
ALTER TABLE tablename AUTO_INCREMENT = value;
eg.
If I had an items table and I created a few new items and then deleted them, to set the auto-increment value back to '100' I would simply:
ALTER TABLE items AUTO_INCREMENT = 100;
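To check what the counter currently is before (or after) resetting it, SHOW TABLE STATUS from the first tip works here too; the Auto_increment column holds the next value to be used:
SHOW TABLE STATUS LIKE 'items'\G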
Sunday, September 13, 2009
MySQL log file rotation
Rotating MySQL Log Files on Linux.
A. Check that logrotate is in the daily cron
#less /etc/cron.daily/logrotate
B. Check that there is a mysql entry in logrotate.d
#less /etc/logrotate.d/mysql
Its default content is:
# This logname can be set in /etc/my.cnf
# by setting the variable "err-log"
# in the [safe_mysqld] section as follows:
#
# [safe_mysqld]
# err-log=/var/lib/mysql/mysqld.log
#
# If the root user has a password you have to create a
# /root/.my.cnf configuration file with the following
# content:
#
# [mysqladmin]
# password = <secret>
# user= root
#
# where "<secret>" is the password.
#
# ATTENTION: This /root/.my.cnf should be readable ONLY
# for root !
/var/lib/mysql/mysqld.log {
# create 600 mysql mysql
notifempty
daily
rotate 3
missingok
compress
postrotate
# just if mysqld is really running
if test -x /usr/bin/mysqladmin && \
/usr/bin/mysqladmin ping &>/dev/null
then
/usr/bin/mysqladmin flush-logs
fi
endscript
}
Read the comment block above to see how the variables are used.
My file:
# This logname can be set in /etc/my.cnf
# by setting the variable "err-log"
# in the [safe_mysqld] section as follows:
#
# [safe_mysqld]
# err-log=/var/lib/mysql/mysqld.log
#
# If the root user has a password you have to create a
# /root/.my.cnf configuration file with the following
# content:
#
# [mysqladmin]
# password =
# user= root
#
# where "" is the password.
#
# ATTENTION: This /root/.my.cnf should be readable ONLY
# for root !
# - I put everything in one block and added sharedscripts, so that mysql gets
# flush-logs'd only once.
# Else the binary logs would automatically increase by n times every day.
# - The error log is obsolete, messages go to syslog now.
/var/log/mysql/mysql.log /var/log/mysql/mysql-slow.log {
daily
rotate 7
missingok
create 640 mysql adm
compress
sharedscripts
postrotate
test -x /usr/bin/mysqladmin || exit 0
# If this fails, check debian.conf!
export HOME=/etc/mysql/my.cnf
MYADMIN="/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf"
if [ -z "`$MYADMIN ping 2>/dev/null`" ]; then
# Really no mysqld or rather a missing debian-sys-maint user?
# If this occurs and is not a error please report a bug.
if ps cax | grep -q mysqld; then
exit 1
fi
else
$MYADMIN flush-logs
fi
endscript
}
C. Create a file /root/.my.cnf.
#vi /root/.my.cnf (ATTENTION: this /root/.my.cnf should be readable ONLY for root)
It contains:
[mysqladmin]
password = xxxx
user= root
Run manually:
If you want to force a rotation which is uncalled for in the config files, you need to give it a "-f":
#logrotate -f /path/to/some/logrotate.conf
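To test the configuration without actually rotating anything, logrotate's debug mode is handy (a quick sketch; -d makes it a dry run):
#logrotate -d /etc/logrotate.d/mysql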
Wednesday, September 09, 2009
MySQL: Analyze slow query log using mysqldumpslow
First enable slow query logging, then generate a slow query and finally look at the slow query log.
mysqldumpslow
This program parses and summarizes a 'slow query log'.
-v verbose
-d debug
-s=WORD
what to sort by (t, at, l, al, r, ar etc)
-r reverse the sort order (largest last instead of first)
-t=NUMBER
just show the top n queries
-a don't abstract all numbers to N and strings to 'S'
-n=NUMBER
abstract numbers with at least n digits within names
-g=WORD
grep: only consider stmts that include this string
-h=WORD
hostname of db server for *-slow.log filename (can be wildcard)
-i=WORD
name of server instance (if using mysql.server startup script)
-l don't subtract lock time from total time
eg.
1. mysqldumpslow -t 10 /var/lib/mysql/mysql-slow.log.1 > /tmp/top_ten_slow_query.txt
It will put the top ten slow queries in the file /tmp/top_ten_slow_query.txt
2. mysqldumpslow -a -s c -t 10 /var/lib/mysql/mysql-slow.log.1 > /tmp/top_ten_repeat_slow_query.txt
It will put the top ten most-repeated slow queries (sorted by count) in the file /tmp/top_ten_repeat_slow_query.txt
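Similarly (a sketch based on the -s options above), sorting by average query time instead of count:
3. mysqldumpslow -s at -t 10 /var/lib/mysql/mysql-slow.log.1 > /tmp/top_ten_avg_time_slow_query.txt
It will put the ten slowest queries by average execution time in the file /tmp/top_ten_avg_time_slow_query.txt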
Thursday, July 30, 2009
AWK: Pattern Matching and Processing
awk 'pattern {action}' filename
Reads one line at a time from the file, checks for a pattern match, and performs the action if the line matches the pattern.
NR is a special awk variable meaning the line number of the current record
can use a line number, to select a specific line, by comparing it to NR (for example: NR == 2)
can specify a range of line numbers (for example: NR == 2, NR == 4)
can specify a regular expression, to select all lines that match
$n are special awk variables, meaning the value of the nth field (field delimiter is space or tab)
$0 is the entire record
can use field values, by comparing to $n (for example: $3 == 65)
every line is selected if no pattern is specified
Instructions
print - print line(s) that match the pattern, or print fields within matching lines
print is default if no action is specified
there are many, many instructions, including just about all C statements with similar syntax
other instructions will be covered in future courses
examples, using the file testfile.
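For concreteness, a hypothetical testfile (not part of the original notes) could look like this, one car per line with make, model, year, mileage and price as the five space-separated fields; the examples below assume this layout ($1 make, $2 model, $3 year, $5 price):
plym fury 77 73 2500
chevy malibu 65 50 3000
ford mustang 65 45 10000
volvo s80 98 102 9850
ford thundbd 84 10 17000
chevy malibu 86 60 3000
bmw 325i 86 115 450
honda accord 81 30 6000
ford taurus 92 180 750
chevy impala 65 85 1550
ford explor 99 25 9500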
awk 'NR == 2, NR == 4' testfile - print the 2nd through 4th lines (default action is to print entire line)
awk '/chevy/' testfile - print only lines matching regular expression, same as grep 'chevy' testfile
awk '{print $3, $1}' testfile - print third and first field of all lines (default pattern matches all lines)
awk '/chevy/ {print $3, $1}' testfile - print third and first field of lines matching regular expression
awk '$3 == 65' testfile - print only lines with a third field value of 65
awk '$5 <= 3000' testfile - print only lines with a fifth field value that is less than or equal to 3000
awk '{print $1}' testfile - print first field of every record
awk '{print $3 $1}' testfile - print third and first field with no separator between them
awk '{print $3, $1}' testfile - inserts output field separator (variable OFS, default is space)
awk -F, '{print $2}' testfile - specifies that , is input field separator, default is space or tab
awk '$2 ~ /[0-9]/ {print $3, $1}' testfile - searches for reg-exp (a digit) only in the second field
awk '{printf "%-30s%20s\n", $3, $2}' testfile - print 3rd field left-justified in a 30 character field, 2nd field right-justified in a 20 character field, then skip to a new line (required with printf)
awk '$3 <= 23' testfile - prints lines where 3rd field has a value <= 23
awk '$3 <='$var1' {print $3}' testfile - $var1 is a shell variable, not an awk variable, e.g. first execute: var1=23
awk '$3<='$2' {$3++} {print $0}' testfile - if field 3 <= argument 2 then increment field 3, e.g. first execute: set xxx 23
awk '$3> 1 && $3 < 23' testfile - prints lines where 3rd field is in range 1 to 23
awk '$3 < 2 || $3 > 4' testfile - prints lines where 3rd field is outside of range 2 to 4
awk '$3 < "4"' testfile - double quotes force string comparison
NF is an awk variable meaning # of fields in current record
awk '! (NF == 4)' testfile - lines without 4 fields
NR is an awk variable meaning # of current record
awk 'NR == 2,NR==7' testfile - range of records from record number 2 to 7
BEGIN is an awk pattern meaning "before first record processed"
awk 'BEGIN {OFS="~"} {print $1, $2}' testfile - print 1st and 2nd field of each record, separated by ~
END is an awk pattern meaning "after last record processed"
awk '{var+=$3} END {print var}' testfile - sum of 3rd fields in all records
awk '{var+=$3} END {print var/NR}' testfile - average of 3rd fields in all records - note that awk handles decimal arithmetic
awk '$5 > var {var=$5} END {print var}' testfile - maximum of 5th fields in all records
sort -rk5 testfile | awk 'NR==1 {var=$5} var==$5 {print $0}' - print all records with maximum 5th field
Simple awk operations involving functions within the command line:
awk '/chevy/' testfile
# Match lines (records) that contain the keyword chevy note that chevy is a regular expression...
awk '{print $3, $1}' testfile
# Pattern not specified - therefore, all lines (records) for fields 3 and 1 are displayed
# Note that comma (,) between fields represents delimiter (ie. space)
awk '/chevy/ {print $3, $1}' testfile
# Similar to above, but for chevy
awk '/^h/' testfile
# Match lines in testfile that begin with h
awk '$1 ~ /^h/' testfile ### useful ###
# Match with field #1 that begins with h
awk '$1 ~ /h/' testfile
# Match with field #1 any expression containing the letter h
awk '$2 ~ /^[tm]/ {print $3, $2, "$" $5}' testfile
# Match lines whose second field begins with t or m and display field 3 (year), field 2 (model name) and then $ followed by field 5 (price)
--------------------------------------------------------------------------------------------------
Complex awk operations involving functions within the command line:
awk '/chevy/ {print $3, $1}' testfile
# prints 3rd & 1st fields of record containing chevy
awk '$1 ~ /^c/ {print $2, $3}' testfile
# print 2nd & 3rd fields of record with 1st field beginning with c
awk 'NR==2 {print $1, $4}' testfile
# prints 1st & 4th fields of record #2
awk 'NR==2, NR==8 {print $2, $3}' testfile
# prints 2nd & 3rd fields of records 2 through 8
awk '$3 >= 65 {print $3, $1}' testfile
# prints 3rd & 1st fields of record with 3rd field >= 65
awk '$5 >= 2000 && $5 < 9000 {print $2, $3}' testfile
# prints 2nd & 3rd fields of records within range of 2000 to under 9000
Tuesday, July 28, 2009
MySQL Replication : Purged binary logs
Yesterday I found that there was no space left on the MySQL master server or on the slave. Once I debugged it, I found GBs of bin-log files on the master and relay logs on the slave.
It was because I forgot to add the expire_logs_days variable in my.cnf while configuring the replication servers.
# expire_logs_days = 7
It will purge binary logs older than 7 days. The old logs will be purged during the next bin-log switch.
Or, you can also delete bin-logs manually using commands such as:
PURGE BINARY LOGS TO 'mysql-bin.010';
PURGE BINARY LOGS BEFORE '2008-04-02 22:46:26';
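You can also change the expiry at runtime without a restart (a quick sketch; it still needs to be in my.cnf to survive the next restart):
mysql> SET GLOBAL expire_logs_days = 7;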
Thursday, July 02, 2009
MYSQL: slow queries log
MySQL has built-in functionality that allows you to log SQL queries to a file. You can log all SQL queries, or only slow-running queries. Having the query log enabled makes it easy to troubleshoot and debug SQL statements; the slow query log in particular is used to find queries that take a long time to execute and are therefore candidates for optimization.
To enable you just need to add some lines to your my.cnf file, and restart. Add the following:
* To enable slow Query Log only
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
After enabling the slow query log, mysqld writes to the log file every SQL statement that took more than long_query_time seconds to execute. The time to acquire the initial table locks is not counted as execution time. mysqld only logs a statement after it has been executed and after all locks have been released, so log order might differ from execution order. The minimum and default values of long_query_time are 1 and 10, respectively.
* To enable full Log Query
log=/var/log/mysqldquery.log
The above will log all queries to the log file.
Selecting Queries to Optimize
• The slow query log
– Logs all queries that take longer than long_query_time
– Can also log all queries that don’t use indexes with
--log-queries-not-using-indexes
– To log slow administrative commands use
--log-slow-admin-statements
– To analyze the contents of the slow log use
mysqldumpslow
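A quick way to confirm the setting and generate a test entry (a sketch; SLEEP is just an easy artificial slow query):
mysql> SHOW VARIABLES LIKE 'long_query_time';
mysql> SELECT SLEEP(2);
The SLEEP(2) statement should then show up in /var/log/mysql/mysql-slow.log if long_query_time = 1.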
Tuesday, June 30, 2009
MYSQL: Query Execution Basics
1. The client sends the SQL statement to the server.
-> The protocol is half-duplex, which means that at any given time the MySQL server can be either sending or receiving messages, but not both. It also means there is no way to cut a message short.
The client sends a query to the server as a single packet of data. This is why the max_allowed_packet configuration variable is important if you have large queries. Once the client sends the query, it doesn’t have the ball anymore; it can only wait for results. The response from the server usually consists of many packets of data. When the server responds, the client has to receive the entire result set. It cannot simply fetch a few rows and then ask the server not to bother sending the rest. If the client needs only the first few rows that are returned, it either has to wait for all of the server’s packets to arrive and then discard the ones it doesn’t need, or disconnect ungracefully. Neither is a good idea, which is why appropriate LIMIT clauses are so important. Here’s another way to think about this: when a client fetches rows from the server, it thinks it’s pulling them. But the truth is, the MySQL server is pushing the rows as it generates them. The client is only receiving the pushed rows; there is no way for it to tell the server to stop sending rows. The client is “drinking from the fire hose,” so to speak.
2. The server checks the query cache. If there’s a hit, it returns the stored result from the cache; otherwise, it passes the SQL statement to the next step.
-> Before even parsing a query, MySQL checks for it in the query cache, if the cache is enabled. This operation is a case-sensitive hash lookup. If the query differs from a similar query in the cache by even a single byte, it won’t match, and the query processing will go to the next stage.
If MySQL does find a match in the query cache, it must check privileges before
returning the cached query. This is possible without parsing the query, because
MySQL stores table information with the cached query. If the privileges are OK,
MySQL retrieves the stored result from the query cache and sends it to the client,
bypassing every other stage in query execution. The query is never parsed, optimized,
or executed.
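You can see whether the cache is actually being hit by looking at the Qcache status counters (assuming the query cache is enabled at all):
mysql> SHOW VARIABLES LIKE 'query_cache_size';
mysql> SHOW STATUS LIKE 'Qcache%';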
3. The server parses, preprocesses, and optimizes the SQL into a query execution
plan.
-> MySQL’s parser breaks the query into tokens and builds a “parse tree”
from them. The parser uses MySQL’s SQL grammar to interpret and validate the
query. For instance, it ensures that the tokens in the query are valid and in the proper order, and it checks for mistakes such as quoted strings that aren’t terminated. The preprocessor then checks the resulting parse tree for additional semantics that the parser can’t resolve. For example, it checks that tables and columns exist, and it resolves names and aliases to ensure that column references aren’t ambiguous. Next, the preprocessor checks privileges. This is normally very fast unless your server has large numbers of privileges.
-> The parse tree is now valid and ready for the optimizer to turn it into a query execution plan. A query can often be executed many different ways and produce the same result. The optimizer’s job is to find the best option. MySQL uses a cost-based optimizer, which means it tries to predict the cost of various execution plans and choose the least expensive. The unit of cost is a single random four-kilobyte data page read.
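To see which plan the optimizer chose for a particular statement, prefix it with EXPLAIN (forum.users is just a hypothetical table here):
mysql> EXPLAIN SELECT user_id, username FROM forum.users WHERE username = 'sanjay'\G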
4. The query execution engine executes the plan by making calls to the storage engine API.
-> The parsing and optimizing stage outputs a query execution plan, which MySQL’s
query execution engine uses to process the query. The plan is a data structure; it is
not executable byte-code, which is how many other databases execute queries. In contrast to the optimization stage, the execution stage is usually not all that complex: MySQL simply follows the instructions given in the query execution plan.
Many of the operations in the plan invoke methods implemented by the storage
engine interface, also known as the handler API. Each table in the query is represented by an instance of a handler. If a table appears three times in the query, for example, the server creates three handler instances. Though we glossed over this before, MySQL actually creates the handler instances early in the optimization stage. The optimizer uses them to get information about the tables, such as their column names and index statistics.
5. The server sends the result to the client.
-> The final step in executing a query is to reply to the client. Even queries that don’t return a result set still reply to the client connection with information about the query, such as how many rows it affected.
Sunday, June 14, 2009
How do I Use the Linux Top Command?
The Unix top command is designed to help users determine which processes are running and which applications are using more memory or processing power than they should be.
The top command is very easy to use, but you should understand its output in detail. The output of top is:
top output:
top - 22:09:08 up 14 min, 1 user, load average: 0.21, 0.23, 0.30
Tasks: 81 total, 1 running, 80 sleeping, 0 stopped, 0 zombie
Cpu(s): 9.5%us, 31.2%sy, 0.0%ni, 27.0%id, 7.6%wa, 1.0%hi, 23.7%si, 0.0%st
Mem: 255592k total, 167568k used, 88024k free, 25068k buffers
Swap: 524280k total, 0k used, 524280k free, 85724k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3166 apache 15 0 29444 6112 1524 S 6.6 2.4 0:00.79 httpd
3161 apache 15 0 29444 6112 1524 S 5.9 2.4 0:00.79 httpd
3164 apache 15 0 29444 6112 1524 S 5.9 2.4 0:00.75 httpd
3169 apache 15 0 29444 6112 1524 S 5.9 2.4 0:00.74 httpd
3163 apache 15 0 29444 6112 1524 S 5.6 2.4 0:00.76 httpd
3165 apache 15 0 29444 6112 1524 S 5.6 2.4 0:00.77 httpd
3167 apache 15 0 29444 6112 1524 S 5.3 2.4 0:00.73 httpd
3162 apache 15 0 29444 6112 1524 S 5.0 2.4 0:00.77 httpd
3407 root 16 0 2188 1012 816 R 1.7 0.4 0:00.51 top
240 root 15 0 0 0 0 S 0.3 0.0 0:00.08 pdflush
501 root 10 -5 0 0 0 S 0.3 0.0 0:01.20 kjournald
2794 root 18 0 12720 1268 560 S 0.3 0.5 0:00.73 pcscd
1 root 15 0 2060 636 544 S 0.0 0.2 0:03.81 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root 10 -5 0 0 0 S 0.0 0.0 0:00.07 events/0
The first line in top:
top - 22:09:08 up 14 min, 1 user, load average: 0.21, 0.23, 0.30
“22:09:08” is the current time; “up 14 min” shows how long the system has been up; “1 user” is how many users are logged in; “load average: 0.21, 0.23, 0.30” is the load average of the system (1 minute, 5 minutes, 15 minutes).
Load average is an extensive topic and understanding its inner workings can be daunting. The simplest definition states that load average is the CPU utilization over a period of time. A load average of 1 means your CPU is fully utilized and processes are not having to wait to use a CPU. A load average above 1 indicates that processes need to wait and your system will be less responsive. If your load average is consistently above 3 and your system is running slow, you may want to upgrade to more CPUs or a faster CPU.
The second line in top:
Tasks: 82 total, 1 running, 81 sleeping, 0 stopped, 0 zombie
Shows the number of processes and their current state.
The third line in top:
Cpu(s): 9.5%us, 31.2%sy, 0.0%ni, 27.0%id, 7.6%wa, 1.0%hi, 23.7%si, 0.0%st
Shows CPU utilization details. “9.5%us” user processes are using 9.5%; “31.2%sy” system processes are using 31.2%; “27.0%id” percentage of available cpu; “7.6%wa” time CPU is waiting for IO.
When first analyzing the Cpu(s) line in top look at the %id to see how much cpu is available. If %id is low then focus on %us, %sy, and %wa to determine what is using the CPU.
The fourth and fifth lines in top:
Mem: 255592k total, 167568k used, 88024k free, 25068k buffers
Swap: 524280k total, 0k used, 524280k free, 85724k cached
Describes the memory usage. These numbers can be misleading. “255592k total” is total memory in the system; “167568K used” is the part of the RAM that currently contains information; “88024k free” is the part of RAM that contains no information; “25068K buffers and 85724k cached” is the buffered and cached data for IO.
So what is the actual amount of free RAM available for programs to use ?
The answer is: free + (buffers + cached)
88024k + (25068k + 85724k) = 198816k
How much RAM is being used by programs?
The answer is: used - (buffers + cached)
167568k - (25068k + 85724k) = 56776k
The processes information:
Top will display the processes using the most CPU in descending order. Let's describe each column that represents a process.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3166 apache 15 0 29444 6112 1524 S 6.6 2.4 0:00.79 httpd
PID - process ID of the process
USER - User who is running the process
PR - The priority of the process
NI - Nice value of the process (higher value indicates lower priority)
VIRT - The total amount of virtual memory used
RES - Resident task size
SHR - Amount of shared memory used
S - State of the task. Values are S (sleeping), D (uninterruptible sleep), R (running), Z (zombies), or T (stopped or traced)
%CPU - Percentage of CPU used
%MEM - Percentage of Memory used
TIME+ - Total CPU time used
COMMAND - Command issued
Interacting with TOP
Now that we are able to understand the output from top, let's learn how to change the way the output is displayed.
Just press the following key while running top and the output will be sorted in real time.
M - Sort by memory usage
P - Sort by CPU usage
T - Sort by cumulative time
z - Color display
k - Kill a process
q - quit
If we want to kill the process with PID 3161, then press “k” and a prompt will ask you for the PID number, and enter 3161.
Command Line Parameters with TOP
You can control what top displays by issuing parameters when you run top.
-d - Controls the delay between refreshes
-p - Specify the process by PID that you want to monitor
-n - Update the display this number of times and then exit
If we want to only monitor the http process with a PID of 3166
$ top -p 3166
If we want to change the delay between refreshes to 5 seconds
$ top -d 5
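For scripting, these options can be combined with batch mode; for example (assuming the usual procps top), a one-shot snapshot you can redirect to a file:
$ top -b -n 1 | head -20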
Friday, April 17, 2009
Block IP addresses using IPtables
Block a particular IP address:
#service iptables start
#iptables -I INPUT -s 10.1.24.4 -j DROP
This command will simply drop any packet coming from the address 10.1.24.4
To list the chains:
#iptables -L -n
To make persist :
#service iptables status
#iptables-save (copy output)
#emacs /etc/sysconfig/iptables (paste output)
#service iptables restart
Make sure the iptables service starts by default.
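On Red Hat style systems that is usually done with chkconfig (a sketch):
#chkconfig iptables on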
Monday, March 30, 2009
creating bulk users in linux
Today I have configured NX server.
Now the next task is to create users, and really it is a very time-consuming and boring task.
Usually you use the useradd command to create a new user or update default new user information from the command line.
So I explored Linux and searched on Google, and I found a few scripts to do this. But later I found one good and easy solution.
Here is that,
Update and create new users in bulk.
newusers command reads a file of user name and clear-text password pairs and uses this information to update a group of existing users or to create new users. Each line is in the same format as the standard password file.
This command is intended to be used in a large system environment where many accounts are updated at a single time (batch mode). Since usernames and passwords are stored in clear text, make sure only root can read/write the file. Use the chmod command:
# touch /root/bulk-user-add.txt
# chmod 0600 /root/bulk-user-add.txt
Create a user list as follows. Open file:
# emacs /root/bulk-user-add.txt
Append username and password:
sanjay:mypass99:555:555:Sanjay Singh:/home/Sanjay:/bin/bash
frampton:mypass99n:556:556:Frampton Martin:/home/Frampton:/bin/bash
----
--
---
barun:mypass99:560:560:Barun Ghosh:/home/Barun:/bin/bash
Now create users in batch:
# newusers /root/bulk-user-add.txt
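To verify that the accounts were created (sanjay being one of the example users above):
# id sanjay
# grep sanjay /etc/passwd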
Read man page of newusers for more information.
Maybe I will automate the entire procedure using a PHP script.
Tuesday, March 24, 2009
Boot time parameters of Linux kernel
Boot time parameters you should know about the Linux kernel.
The Linux kernel accepts boot time parameters as it starts to boot the system. These are used to inform the kernel about various hardware parameters.
The kernel command line syntax
name=value1,value2,value3…
Where,
* name : Keyword name, for example, init, ro, boot etc
Common Boot time parameters
init
This sets the initial command to be executed by the kernel. Default is to use /sbin/init, which is the parent of all processes.
To boot system without password pass /bin/bash or /bin/sh as argument to init
init=/bin/bash
single
The most common argument that is passed to the init process is the word 'single' which instructs init to boot the computer in single user mode, and not launch all the usual daemons
root=/dev/device
This argument tells the kernel which device (hard disk, floppy disk) is to be used as the root filesystem while booting. For example, the following boot parameter uses /dev/sda1 as the root file system:
root=/dev/sda1
If you copy entire partition from /dev/sda1 to /dev/sdb1 then use
root=/dev/sdb1
ro
This argument tells the kernel to mount root file system as read-only. This is done so that fsck program can check and repair a Linux file system. Please note that you should never ever run fsck on read/write file system.
rw
This argument tells the kernel to mount root file system as read and write mode.
panic=SECOND
Specify kernel behavior on panic. By default, the kernel will not reboot after a panic, but this option will cause a kernel reboot after N seconds. For example, the following boot parameter will force Linux to reboot 10 seconds after a panic:
panic=10
maxcpus=NUMBER
Specify maximum number of processors that an SMP kernel should make use of. For example if you have four cpus and would like to use 2 CPU then pass 2 as a number to maxcpus (useful to test different software performances and configurations).
maxcpus=2
debug
Enable kernel debugging. This option is useful for kernel hackers and developers who wish to troubleshoot problem
selinux [0|1]
Disable or enable SELinux at boot time.
• Value 0 : Disable selinux
• Value 1 : Enable selinux
raid=/dev/mdN
This argument tells the kernel how to assemble RAID arrays at boot time. Please note that when md is compiled into the kernel (not as a module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays. This autodetection may be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time.
mem=MEMORY_SIZE
This is a classic parameter. Force usage of a specific amount of memory to be used when the kernel is not able to see the whole system memory or for test. For example:
mem=1024M
The kernel command line is a null-terminated string currently up to 255 characters long, plus the final null. A string that is too long will be automatically truncated by the kernel, a boot loader may allow a longer command line to be passed to permit future kernels to extend this limit (H. Peter Anvin ).
Other parameters
initrd /boot/initrd.img
An initrd should be loaded. the boot process will load the kernel and an initial ramdisk; then the kernel converts initrd into a "normal" ramdisk, which is mounted read-write as root device; then /linuxrc is executed; afterwards the "real" root file system is mounted, and the initrd file system is moved over to /initrd; finally the usual boot sequence (e.g. invocation of /sbin/init) is performed. initrd is used to provide/load additional modules (device driver). For example, SCSI or RAID device driver loaded using initrd.
hdX =noprobe
Do not probe for hdX drive. For example, disable hdb hard disk:
hdb=noprobe
If you disable hdb in BIOS, Linux will still detect it. This is the only way to disable hdb.
ether=irq,iobase,[ARG1,ARG2],name
Where,
• ether: ETHERNET DEVICES
For example, the following boot argument forces probing for a second Ethernet card (NIC), as the default is to only probe for one (irq=0,iobase=0 means automatically detect them).
ether=0,0,eth1
How to enter these parameters?
You need to enter all this parameter at Grub or Lilo boot prompt. For example if you are using Grub as a boot loader, at Grub prompt press 'e' to edit command before booting.
1) Select second line
2) Again, press 'e' to edit selected command
3) Type any of above parameters.
See an example of "recovering grub boot loader password", for more information. Another option is to type above parameters in grub.conf or lilo.conf file itself.
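For illustration only, a hypothetical kernel line in grub.conf combining a few of the parameters above might look like this (the kernel version and paths are made up):
kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/sda1 panic=10 selinux=0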
The Linux kernel accepts boot time parameters as it starts to boot system. This is used to inform kernel about various hardware parameter.
The kernel command line syntax
name=value1,value2,value3…
Where,
* name : Keyword name, for example, init, ro, boot etc
Common Boot time parameters
init
This sets the initial command to be executed by the kernel. Default is to use /sbin/init, which is the parent of all processes.
To boot system without password pass /bin/bash or /bin/sh as argument to init
init=/bin/bash
single
The most common argument that is passed to the init process is the word 'single' which instructs init to boot the computer in single user mode, and not launch all the usual daemons
root=/dev/device
This argument tells the kernel what device (hard disk, floppy disk) to be used as the root filesystem while booting. For example following boot parameter use /dev/sda1 as the root file system:
root=/dev/sda1
If you copy entire partition from /dev/sda1 to /dev/sdb1 then use
root=/dev/sdb1
ro
This argument tells the kernel to mount root file system as read-only. This is done so that fsck program can check and repair a Linux file system. Please note that you should never ever run fsck on read/write file system.
rw
This argument tells the kernel to mount root file system as read and write mode.
panic=SECOND
Specify kernel behavior on panic. By default, the kernel will not reboot after a panic, but this option will cause a kernel reboot after N seconds. For example following boot parameter will force to reboot Linux after 10 seconds
panic=10
maxcpus=NUMBER
Specify maximum number of processors that an SMP kernel should make use of. For example if you have four cpus and would like to use 2 CPU then pass 2 as a number to maxcpus (useful to test different software performances and configurations).
maxcpus=2
debug
Enable kernel debugging. This option is useful for kernel hackers and developers who wish to troubleshoot problem
selinux [0|1]
Disable or enable SELinux at boot time.
• Value 0 : Disable selinux
• Value 1 : Enable selinux
raid=/dev/mdN
This argument tells kernel howto assembly of RAID arrays at boot time. Please note that When md is compiled into the kernel (not as module), partitions of type 0xfd are scanned and automatically assembled into RAID arrays. This autodetection may be suppressed with the kernel parameter "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0 superblock can be autodetected and run at boot time.
mem=MEMEORY_SIZE
This is a classic parameter. Force usage of a specific amount of memory to be used when the kernel is not able to see the whole system memory or for test. For example:
mem=1024M
The kernel command line is a null-terminated string currently up to 255 characters long, plus the final null. A string that is too long will be automatically truncated by the kernel, a boot loader may allow a longer command line to be passed to permit future kernels to extend this limit (H. Peter Anvin ).
Other parameters
initrd /boot/initrd.img
An initrd should be loaded. the boot process will load the kernel and an initial ramdisk; then the kernel converts initrd into a "normal" ramdisk, which is mounted read-write as root device; then /linuxrc is executed; afterwards the "real" root file system is mounted, and the initrd file system is moved over to /initrd; finally the usual boot sequence (e.g. invocation of /sbin/init) is performed. initrd is used to provide/load additional modules (device driver). For example, SCSI or RAID device driver loaded using initrd.
hdX=noprobe
Do not probe for the hdX drive. For example, to disable the hdb hard disk:
hdb=noprobe
If you disable hdb in the BIOS, Linux will still detect it; this boot parameter is the only way to disable hdb.
ether=irq,iobase,[ARG1,ARG2],name
Where,
• ether : Ethernet devices
For example, the following boot argument forces probing for a second Ethernet card (NIC), as the default is to probe for only one (irq=0,iobase=0 means detect them automatically):
ether=0,0,eth1
How to enter these parameters at boot time
You need to enter these parameters at the GRUB or LILO boot prompt. For example, if you are using GRUB as the boot loader, press 'e' at the GRUB menu to edit the command before booting:
1) Select the second (kernel) line
2) Press 'e' again to edit the selected command
3) Append any of the above parameters
See the "recovering grub boot loader password" example for more information. Another option is to add the above parameters to the grub.conf or lilo.conf file itself.
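For example (a sketch; the exact kernel line will differ on your system), after pressing 'e' on the kernel line you could append single or init=/bin/bash to the end of it and then press 'b' to boot the edited entry:
kernel /vmlinuz-2.6.18 ro root=/dev/sda1 single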
Sunday, March 22, 2009
MYSQL REPLICATION & DISASTER RECOVERY
MySQL’s built-in replication capability is the foundation for building large, high performance applications on top of MySQL. Replication lets you configure one or more servers as slaves, or replicas, of another server.
1. Setting up Replication:
Three threads are involved in replication: one on the master and two on the slave (see the quick check after this list).
• The I/O thread on the slave connects to the master and requests the binary update log. The binary log dump thread on the master sends the binary update log to the slave on request.
• Once on the slave, the I/O thread reads the data sent by the master and copies it to the relay log in the data directory.
• The third thread, also on the slave, is the SQL thread, which reads and executes the queries from the relay log to bring the slave into alignment with the master.
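To see these threads in action (a quick check, not part of the original steps), run SHOW PROCESSLIST on each server. On the master:
mysql> show processlist\G
Look for a thread with Command: Binlog Dump. On the slave:
mysql> show processlist\G
Look for two "system user" threads: the I/O thread (State such as "Waiting for master to send event") and the SQL thread (State such as "Has read all relay log; waiting for the slave I/O thread to update it").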
2. Replication with example:
Version: both master and slave should run the same MySQL version; otherwise replication may not work correctly. (A quick way to check is shown below.)
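A quick way to confirm this (a simple check, not part of the original procedure) is to compare the output of the following statement on both servers:
mysql> SELECT VERSION();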
Network Configuration Settings
Master Server IP Address: 10.5.1.10
Slave Server Primary IP Address (Ethernet eth0): 10.5.1.11
Slave Server Secondary IP Address (Ethernet eth1): 10.5.1.10 (By default it is disabled)
3. MySQL Replication Installation:
Step1:
Install MySQL on Master 1 and Slave 1. Configure the network settings on both systems, for example:
Master 1/Slave 2 IP: 10.5.1.10
Master 2/Slave 1 IP: 10.5.1.11
Step2:
On Master 1, make changes in my.cnf:
[mysqld]
log-bin=mysql-bin
binlog-do-db=   # name the database(s) to replicate, or omit this option to replicate all databases
binlog-ignore-db=mysql   # database(s) that should be ignored for replication
server-id=1
auto_increment_increment=2
auto_increment_offset=1
Step 3:
On master 1, create a replication slave account in mysql.
mysql> grant replication slave on *.* to 'replication'@'10.5.1.11' identified by 'slave';
Restart MySQL on Master 1.
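To verify that the account was created as expected (an optional check, not in the original steps), you can inspect its privileges on Master 1:
mysql> SHOW GRANTS FOR 'replication'@'10.5.1.11';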
Step 4:
Now edit my.cnf on Slave 1 / Master 2:
[mysqld]
server-id=2
master-host=10.5.1.10
master-user=replication
master-password=slave
master-port=3306
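As an alternative to the master-* options in my.cnf (a sketch based on the CHANGE MASTER statement used later in this document), the same connection settings can be supplied at runtime on the slave:
mysql> CHANGE MASTER TO MASTER_HOST='10.5.1.10', MASTER_USER='replication', MASTER_PASSWORD='slave', MASTER_PORT=3306;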
Step 5:
Restart MySQL on Slave 1.
Login to the MySQL command prompt and start the slave replication.
[root@Slavetest ~]# mysql -u root -p
Enter password: xxxxx (Please consult MySQL Administrator/IT Manager)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 250
Server version: 5.0.54a-enterprise-gpl-log MySQL Enterprise Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.5.1.10
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000018
Read_Master_Log_Pos: 2953
Relay_Log_File: slavetest-relay-bin.000065
Relay_Log_Pos: 235
Relay_Master_Log_File: mysql-bin.000018
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 2953
Relay_Log_Space: 235
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.00 sec)
mysql>
The rows above should show the related log files, and both Slave_IO_Running and Slave_SQL_Running must be set to Yes.
Step 6:
On Master 1:
[root@TESTDB~]# mysql -u root -p
Enter password: xxxxx (Please consult MySQL Administrator/IT Manager)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 250
Server version: 5.0.54a-enterprise-gpl-log MySQL Enterprise Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
mysql> show master status;
+------------------------------------+-----------+--------------------+------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB|
+------------------------------------+-----------+--------------------+------------------------+
|MysqlMYSQL01-bin.000008| 410 | | mysql |
+------------------------------------+-----------+--------------------+------------------------+
1 row in set (0.00 sec)
The above scenario is master-slave; now we will configure the reverse (slave-to-master) direction on the same systems so that they work as master-master.
Step 7:
On Master 2 / Slave 1, edit my.cnf and add the master entries to it:
[mysqld]
log-bin=mysql-bin   # information for becoming a master
binlog-ignore-db=mysql
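Note (an addition, not in the original configuration): in a master-master setup each server should generate non-overlapping AUTO_INCREMENT values. Master 1 already has auto_increment_increment=2 and auto_increment_offset=1, so it generates 1, 3, 5, ...; giving Master 2 an increment of 2 and an offset of 2 makes it generate 2, 4, 6, ... and avoids key collisions:
auto_increment_increment=2
auto_increment_offset=2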
Step 8:
Create a replication slave account on Master 2 for Master 1:
mysql> grant replication slave on *.* to 'slavereplication'@'10.5.1.10' identified by 'slave';
Step 9:
Edit my.cnf on Master 1 to add the information about its master:
[mysqld]
auto_increment_increment=2
auto_increment_offset=1
master-host=10.5.1.11   # information for becoming a slave
master-user=slavereplication
master-password=slave
master-port=3306
master-connect-retry=60
Step 10:
Restart MySQL on both Master 1 and Master 2.
Step 11:
Monitor MySQL replication using:
• show slave status\G
• show processlist\G
• show master status;
4. Failover Configuration Procedure:
Step1:
Log in to the Slave server using Secure Shell in Linux or the PuTTY tool
[root@TESTDB ~]# ssh 10.5.1.11
root@10.5.1.11 password: xxxxx
Last login: Tue Oct 7 15:30:41 2008 from 10.5.1.10
[root@slavetest ~]#
Or use the PuTTY tool in Windows: enter the slave IP address, user name, and password.
After logging in to the slave machine, proceed with the steps below.
Step2:
Connect the Ethernet Interface cable.
Step3:
Enable the secondary Ethernet interface
[root@slavetest ~]# ifup eth1
Step4: Log in to MySQL
[root@slavetest ~]# mysql -u root -p
Enter password: xxxxx (Please consult MySQL Administrator/IT Manager)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 250
Server version: 5.0.54a-enterprise-gpl-log MySQL Enterprise Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
You will get the welcome message and mysql prompt as shown above.
Step5:
Check the Slave Status
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.5.1.10
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000018
Read_Master_Log_Pos: 2953
Relay_Log_File: slavetest-relay-bin.000065
Relay_Log_Pos: 235
Relay_Master_Log_File: mysql-bin.000018
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 2953
Relay_Log_Space: 235
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.00 sec)
mysql>
The result above displays the slave replication status.
Make sure that the slave has processed all statements in its relay log. On the slave, issue STOP SLAVE IO_THREAD, then check the output of SHOW PROCESSLIST until you see "Has read all relay log; waiting for the slave I/O thread to update it". When this is true for all slaves, they can be reconfigured to the new setup.
mysql> Stop Slave io_thread;
mysql> show processlist\G
*************************** 1. row ***************************
Id : 203
User: slavereplication
Host: 10.5.1.10:59795
db : NULL
Command: Binlog Dump
Time: 158086
State: Has sent all binlog to slave; waiting for binlog to be updated
Info: NULL
*************************** 2. row ***************************
Id: 230
User: system user
Host:
db: NULL
Command: Connect
Time: 152220
State: Has read all relay log; waiting for the slave I/O thread to update it
Info: NULL
*************************** 3. row ***************************
Id: 251
User: root
Host: localhost
db: NULL
Command: Query
Time: 0
State: NULL
Info: show processlist
3 rows in set (0.00 sec)
mysql>
Step6:
Making the Slave the Master server
mysql> show master status;
+----------------------+----------+-------------------+-----------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+----------------------+----------+-------------------+-----------------------------+
| mysql-bin.000006 | 783 | | mysql,mysql |
+----------------------+----------+-------------------+-----------------------------+
1 row in set (0.00 sec)
mysql>
(Note: this is very important. Write down the MySQL binary log file name and position at the moment the slave becomes the master.)
mysql> stop slave;
This command stops slave replication. The server is now ready to serve as a master; start the application against this server.
5. Restoration Procedure:
Note: bring down the secondary Ethernet interface (eth1) on the slave server before you plan to restore. Once the problem with the original master server has been fixed, it can be made the live server again. When the Master is up again, you must issue CHANGE MASTER so that the Master becomes a slave of Slave 1 and picks up the web client writes that it missed while it was down.
Step1:
Connect to the Current Master server
[root@TESTDB ~]# ssh 10.5.1.11
root@10.5.1.11's password: xxxxxxx
Last login: Tue Oct 7 15:30:41 2008 from 10.5.1.10
[root@mastertest ~]#
Or use the PuTTY tool in Windows: enter the server IP address, user name, and password.
After logging in, proceed with the steps below.
Step 2:
Unplug the secondary Ethernet Interface in current Master.
Step3:
Disable the secondary Ethernet Interface in current Master.
[root@mastertest ~]# ifdown eth1
Now boot the original Master server; if its Ethernet cable is unplugged, plug the Ethernet interface back in on the original Master server.
Step4:
Log in to the Original_Master server using Secure Shell in Linux or the PuTTY tool
[root@TESTDB ~]# ssh 10.5.1.10
root@10.5.1.10's password: xxxxxxx
Last login: Tue Oct 7 15:30:41 2008 from 10.5.1.10
[root@mastertest ~]#
Or use the PuTTY tool in Windows: enter the server IP address, user name, and password.
After logging in, proceed with the steps below.
Step5:
Log in to MySQL
[root@slavetest ~]# mysql -u root -p
Enter password: xxxxx (Please consult MySQL Administrator/IT Manager)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 250
Server version: 5.0.54a-enterprise-gpl-log MySQL Enterprise Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
You will get the welcome message and mysql prompt as shown above.
Step6:
Configure Old_master as a Slave Server of Current_Master Server
mysql> change master to
    -> master_host='10.5.1.11',            (slave server IP address)
    -> master_user='slavereplication',     (the replication user created on the slave server)
    -> master_password='slave',            (password of the replication user created on the slave server)
    -> master_log_file='mysql-bin.0000xx', (xx: the current_master log file name, as shown by show master status at the time the slave became master)
    -> master_log_pos=xx;                  (xx: the current_master log position, as shown by show master status at the time the slave became master)
mysql>
mysql> start slave;
The old master will now receive the updates it missed during the failure. Once all updates have been applied, make the Master a master again.
Step7:
To make Master a master again (because it is the most powerful machine, for example), use the preceding procedure as if Slave 1 was unavailable and Master was to be the new master.
[root@mastertest ~]# service mysql stop
[root@mastertest ~]# cd /var/lib/mysql
[root@mastertest ~]# mv master.info relay-log.info servername-relay-bin* /root
[root@mastertest ~]# service mysql start
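An alternative to moving these files by hand (a sketch, assuming the server is running and you are connected to it): RESET SLAVE discards master.info, relay-log.info, and the relay log files for you:
mysql> STOP SLAVE;
mysql> RESET SLAVE;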
Step8:
Making the original Slave a replication slave again
[root@slavetest ~]# mysql -u root -p
Enter password: xxxxx (Please consult MySQL Administrator/IT Manager)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 250
Server version: 5.0.54a-enterprise-gpl-log MySQL Enterprise Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> start slave;
6. Replication Files:
Let’s take a look at some of the files replication uses. You already know about the binary log and the relay log, but there are several other files too. Where MySQL places them depends mostly on your configuration settings. Different MySQL versions place them in different directories by default. You can probably find them either in the data directory or in the directory that contains the server’s .pid file (possibly /var/run/mysqld/ on Unix-like systems).
Here they are:
• mysql-bin.index : A server that has binary logging enabled will also have a file named the same as the binary logs, but with a .index suffix. This file keeps track of the binary log files that exist on disk. It is not an index in the sense of a table’s index; rather, each line in the file contains the filename of a binary log file. You might be tempted to think that this file is redundant and can be deleted (after all, MySQL could just look at the disk to find its files), but don’t. MySQL relies on this index file, and it will not recognize a binary log file unless it’s mentioned here.
• mysql-relay-bin.index : This file serves the same purpose for the relay logs as the binary log index file does for the binary logs.
• master.info : This file contains the information a slave server needs to connect to its master. Don’t delete it, or your slave will not know how to connect to its master after it restarts. This file contains the replication user’s password, in plain text, so you may want to restrict its permissions (a quick way to do this is shown below).
• relay-log.info: This file contains the slave’s current binary log and relay log coordinates (i.e., the slave’s position on the master). Don’t delete this either, or the slave will forget where it was replicating from after a restart and might try to replay statements it has already executed.
These files are a rather crude way of recording MySQL’s replication and logging state. Unfortunately, they are not written synchronously, so if your server loses power and the files haven’t yet been flushed to disk, they can be inaccurate when the server restarts.
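Following up on the note above about master.info holding the replication password in plain text (a sketch; the path assumes the default data directory /var/lib/mysql): restrict the file so that only the mysql user can read it:
[root@slavetest ~]# chown mysql:mysql /var/lib/mysql/master.info
[root@slavetest ~]# chmod 600 /var/lib/mysql/master.info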