Monday, December 30, 2013

Recover Deleted File


grep -a -B 25 -A 100 'some string in file' /dev/sda1 > results.txt


grep searches through a file and prints every line that matches a pattern. Here, the pattern is some string known to be in the deleted file; the more specific this string can be, the better. The "file" being searched by grep (/dev/sda1) is the hard-drive partition the deleted file used to reside on. The -a flag tells grep to treat the partition, which is actually binary data, as text. Since recovering the entire file is the goal rather than just the lines that are already known, context control is used: the flags -B 25 -A 100 tell grep to print 25 lines before a match and 100 lines after a match. Be conservative with these estimates to ensure the entire file is included (when in doubt, guess bigger numbers). Excess data is easy to trim out of the results, but if you end up with a truncated or incomplete file, you need to do this all over again. Finally, > results.txt redirects grep's output into a file called results.txt. Ideally, write results.txt to a different partition than the one being searched, so the output does not overwrite the very data you are trying to recover. [Ref]
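The same flags can be tried safely on an ordinary file before pointing grep at a raw device. A minimal sketch (the file and its contents are made up for illustration):

```shell
#!/bin/sh
# Build a small file containing a binary byte, then "recover" a line
# from it with context, just as the /dev/sda1 command would.
blob=$(mktemp)
printf 'header\000bytes\nimportant data line\nfooter\n' > "$blob"

# -a treats binary data as text; -B/-A pull in surrounding context
grep -a -B 1 -A 1 'important' "$blob"
```

Without -a, grep would only report "Binary file matches" instead of printing the lines.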

Thursday, October 17, 2013

Convert dmesg timestamps to human-readable format

I am using Ubuntu 12.04, where the util-linux package is up to date and dmesg can convert its timestamps to human-readable format simply by passing the "-T" option. For those whose dmesg doesn't support the "-T" option, the simple script below does the conversion.

Program_file: dmesg_realtime.sh
Parameters: dmesg_timestamp

#cat dmesg_realtime.sh
#!/bin/bash
# Convert a dmesg timestamp (seconds since boot) to wall-clock time.
ut=$(cut -d' ' -f1 < /proc/uptime)   # current uptime in seconds
ts=$(date +%s)                       # current time as a Unix epoch
# boot time = now - uptime; adding the dmesg offset gives the event time
realtime_date=$(date -d"70-1-1 + $ts sec - $ut sec + $1 sec" +"%F %T")
echo "$realtime_date"

#./dmesg_realtime.sh 8642755.690405
2013-08-16 08:48:09
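The same arithmetic can be wrapped in a function so that many timestamps are converted without recomputing the boot time each call. A sketch assuming GNU date and a Linux /proc/uptime:

```shell
#!/bin/sh
# Boot time as a Unix epoch: now minus seconds of uptime (fraction dropped).
boot_ts=$(( $(date +%s) - $(cut -d'.' -f1 < /proc/uptime) ))

# Convert one dmesg timestamp (seconds since boot) to "YYYY-MM-DD HH:MM:SS".
dmesg_to_wallclock() {
    date -d "@$(( boot_ts + ${1%%.*} ))" +'%F %T'
}

dmesg_to_wallclock 8642755.690405
```

Feeding each timestamp from a full dmesg log through this function converts the whole log in one pass.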


Tuesday, September 24, 2013

Top 10 MySQL Mistakes Made By PHP Developers

I just came across a blog post by Craig Buckler, Director of OptimalWorks, which will be useful for PHP developers.

Monday, August 12, 2013

Postfix as a spam trap server

Reference here

If you want to build a spam trap with Postfix, this can be done very easily. You don't even have to configure Postfix itself to act as a spam trap:
Postfix ships with a neat tool called smtp-sink which does the trick.
smtp-sink is mainly intended as a testing tool for SMTP clients that need a server to play with. You can configure it to log the whole conversation or even dump each received mail to a file. The latter is what a spam trap needs.

There is no configuration file to configure smtp-sink. Everything is done via command-line options.
smtp-sink -c -d "%Y%m%d%H/%M." -f . -u postfix -R /tmp/ -B "550 5.3.0 The recipient does not like your mail. Don't try again." -h spamtrap.example.com 25 1024
Let's have a closer look at each parameter.
  • -u postfix
    Runs the program under the user "postfix"
  • -R /tmp/
Sets the output directory to /tmp/, where the mails will be stored. If you have a high spam volume (hundreds of mails per minute), it is recommended to write the mails to a ramdisk.
  • -d "%Y%m%d%H/%M."
Writes each mail into a directory named after the pattern "YearMonthDayHour"; inside that directory the files are named "Minute.RandomID". Note that the dates are in UTC.
  • -c
    Write statistics about connection counts and message counts to stdout while running
  • -f .
    Reject the mail after END-OF-DATA. But the mail will be saved. Cool, isn't it?!
  • -B "550 5.3.0 The recipient does not like your mail. Don't try again"
    This is the rejection message after END-OF-DATA.
  • -h spamtrap.example.com
    Announce the hostname spamtrap.example.com
  • 25
The port to listen on. It can be prefixed with a host or IP address ("host:port") if you want to bind to a specific interface.
  • 1024
    The backlog count of connections that can wait in the TCP/IP stack before they get a free slot for sending mail.
You can find more information in the man page of smtp-sink, but these are the important ones to run a catch-all spamtrap.
In this configuration the program accepts any mail with any size from any sender to any recipient with IPv4 and IPv6. The only restrictions are that there are only 256 simultaneous connections possible with 1024 queued connections and the program is flagged experimental.
So do not use smtp-sink in a production environment.

The next step of a spam trap is to read the saved files, parse and interpret them, and then do whatever is needed: for example, block further connections from that IP via a firewall, feed it to a blacklist, scan for viruses, or create checksums of these mails.
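As a sketch of that post-processing step, the helper below walks a dump directory and prints the From: header of each captured mail, e.g. as raw input for a blacklist. The function name and directory layout are illustrative; smtp-sink dumps the raw message, so any header the sender included can be pulled out the same way:

```shell
#!/bin/sh
# list_senders DIR - print the first From: header of every mail file
# found under DIR (the directory given to smtp-sink via -R).
list_senders() {
    find "$1" -type f | while IFS= read -r mail; do
        grep -i -m 1 '^From:' "$mail"
    done
}
```

Example: `list_senders /tmp/` on the configuration shown above.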

The -B option is only available in newer versions of Postfix: it is missing in 2.7.1 and present in 2.8.2, so it was introduced somewhere in between.

Thursday, July 25, 2013

MySQL Scaling Techniques

Global Configuration Level:

  1. thread_cache_size
    Change if you do a lot of new connections.
  2. table_cache
    Change if you have many tables or simultaneous connections
  3. delay_key_write
    Set if you need to buffer all key writes
  4. max_heap_table_size
    Used with GROUP BY
  5. sort_buffer_size
    Used for ORDER BY and GROUP BY operations.
  6. query_cache_type
    Set to ON if you repeat the same SQL queries often; the default is OFF.
  7. query_cache_size
    Set to a value >= query_cache_limit. To disable the query cache, set it to "0".
MyISAM

  1. key_buffer_size
Increase if you have enough RAM to hold all the MyISAM table indexes.
  2. myisam_sort_buffer_size
Useful when repairing tables.
  3. myisam_use_mmap
    Use memory mapping for reading and writing MyISAM tables.

InnoDB

  1. innodb_buffer_pool_size
Increase if you have enough RAM to cache the InnoDB data and indexes.
  2. innodb_support_xa
    Turn off if you don't need it for safe binary logging or replication
  3. innodb_doublewrite
If enabled, expect a 5-10% performance loss due to the doublewrite buffer.
  4. innodb_lock_wait_timeout
Aborts a transaction's row-lock wait after this timeout, clearing stuck (deadlocked) processes.
  5. innodb_thread_concurrency
    A recommended value is 2 times the number of CPUs plus the number of disks.
  6. innodb_flush_method
    On some systems where InnoDB data and log files are located on a SAN, it has been found that setting innodb_flush_method to O_DIRECT can degrade performance of simple SELECT statements by a factor of three.
  7. innodb_flush_log_at_trx_commit
    For the greatest possible durability and consistency in a replication setup using InnoDB with transactions, you should use innodb_flush_log_at_trx_commit=1, sync_binlog=1
  8. innodb_file_per_table
Creates a separate tablespace file per table, similar to MyISAM's one-file-per-table layout.

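Pulled together, the variables above would sit in my.cnf roughly like this (the values are placeholders to adapt to your workload, not recommendations):

```
[mysqld]
# connections and caches
thread_cache_size       = 16
table_cache             = 1024
query_cache_type        = ON
query_cache_size        = 64M

# MyISAM
key_buffer_size         = 512M
myisam_sort_buffer_size = 64M

# InnoDB
innodb_buffer_pool_size        = 4G
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table          = 1
```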
System Level:

1. Disable DNS hostname lookups (skip-name-resolve).
2. RAID 10 is a must for high I/O performance.
3. ReiserFS is the filesystem recommended by most blog posts, but XFS is doing well for us on RAID 10.

Architectural Level:

1. Use the VARCHAR datatype instead of CHAR.
2. AUTO_INCREMENT columns should be BIGINT if there are millions of row inserts/deletes.