30 Jan 2015
According to the common opinion (see unix.stackexchange.com/get-file-created-creation-time), Linux offers only three timestamps for files: time of last access of contents (atime), time of last modification of contents (mtime), and time of last modification of the inode (metadata, ctime).
However, you may still recover the file creation date if you deal with a capable filesystem like ext4, the journaling file system for Linux:
Improved timestamps
… Ext4 provides timestamps measured in nanoseconds. In addition, ext4 also adds support for date-created timestamps.
But there is no consensus in the community on that, so
… as Theodore Ts’o points out, while it is easy to add an extra creation-date field in the inode (thus technically enabling support for date-created timestamps in ext4), it is more difficult to modify or add the necessary system calls, like stat() (which would probably require a new version) and the various libraries that depend on them (like glibc). These changes would require coordination of many projects. So even if ext4 developers implement initial support for creation-date timestamps, this feature will not be available to user programs for now.
Which ends up with Linus's final quote:
Let’s wait five years and see if there is actually any consensus on it being needed and used at all, rather than rush into something just because “we can”.
So what to do? Let's chill out.
Now ask yourself: how would you extract this information? We end up with the stat utility
NAME
stat - display file or file system status
SYNOPSIS
stat [OPTION]... FILE...
DESCRIPTION
Display file or file system status.
and the debugfs utility
NAME
debugfs - ext2/ext3/ext4 file system debugger
SYNOPSIS
debugfs [ -Vwci ] [ -b blocksize ] [ -s superblock ] [ -f cmd_file ] [ -R request ] [ -d data_source_device ] [ device ]
DESCRIPTION
The debugfs program is an interactive file system debugger. It can be used to examine and change the state of an ext2, ext3, or ext4 file system.
device is the special file corresponding to the device containing the file system (e.g /dev/hdXX).
So we combine both commands into one (where /dev/sdXX is the partition containing the file):
$ debugfs -R 'stat <filename>' /dev/sdXX
to finally get the crtime (creation time):
Inode: 1071162 Type: regular Mode: 0644 Flags: 0x80000
Generation: 1324300925 Version: 0x00000000:00000001
User: 105 Group: 114 Size: 1831803
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 3592
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x54cba040:2718d1c8 -- Fri Jan 30 16:16:16 2015
atime: 0x54cba167:94c3cfa0 -- Fri Jan 30 16:21:11 2015
mtime: 0x54cba040:2718d1c8 -- Fri Jan 30 16:16:16 2015
crtime: 0x54ca6763:94c3c518 -- Thu Jan 29 18:01:23 2015
Size of extra inode fields: 28
EXTENTS:
(0): 4229445, (1-7): 4261097-4261103, ...
So let's write the xstat utility before the consensus arrives :)
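Here is a minimal sketch of such a function. It assumes GNU stat and df, an ext4 partition, and sudo rights to run debugfs; the parsing simply pulls the crtime line out of the debugfs output shown above:

xstat() {
    for target in "$@"; do
        # inode number of the file
        inode=$(stat -c %i "$target")
        # partition that holds the file
        device=$(df --output=source "$target" | tail -n 1)
        # ask debugfs for that inode and keep the human-readable part of the crtime line
        crtime=$(sudo debugfs -R "stat <$inode>" "$device" 2>/dev/null \
                 | awk -F' -- ' '/crtime/ {print $2}')
        printf '%s %s\n' "$crtime" "$target"
    done
}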
Now put it in ~/.bashrc or ~/.profile and voilà:
$ xstat *
Tue Jan 13 17:41:05 2015 404.html
Thu Feb 5 23:19:19 2015 about.md
Sun Jan 18 01:28:51 2015 archives.html
Tue Jan 13 17:41:05 2015 atom.xml
Thu Feb 5 22:32:52 2015 categories.html
Thu Feb 5 23:24:40 2015 _config.yml
Tue Jan 13 17:41:05 2015 _drafts
Thu Feb 5 21:50:47 2015 Gemfile
Thu Feb 5 21:50:47 2015 Gemfile.lock
Tue Jan 13 17:41:05 2015 _includes
Tue Jan 13 17:41:05 2015 index.html
Tue Jan 13 17:41:05 2015 _layouts
Tue Feb 3 22:42:06 2015 learning-nosql-php.html
Tue Jan 13 17:41:05 2015 LICENSE.md
Thu Feb 5 20:36:30 2015 plugins
Tue Jan 13 17:41:05 2015 _posts
Tue Jan 13 17:41:05 2015 public
Tue Jan 13 17:41:05 2015 README.md
Tue Jan 13 17:41:05 2015 search
Wed Jan 14 20:08:17 2015 _site
Thu Feb 5 22:51:27 2015 tags.html
23 Jan 2015
Today we will deal with temporary tables and files.
First, let's examine whether MySQL actually writes temporary tables to disk, using mysqltuner:
:~$ sudo mysqltuner
[!!] Temporary tables created on disk: 28% (324K on disk / 1M total)
Yes, it definitely does. One thing to know is that temporary tables are not always flushed to disk, since the time to live of a temporary table is rather short.
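You can also read the raw counters straight from MySQL; comparing Created_tmp_disk_tables with Created_tmp_tables gives roughly the same picture that mysqltuner reports:

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%';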
Let's find where MySQL saves temporary tables:
sudo cat /etc/mysql/my.cnf | grep tmpdir
tmpdir = /tmp
If you had no success with the previous one, find out with the query:
mysql> SHOW GLOBAL VARIABLES LIKE 'tmpdir';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| tmpdir | /tmp |
+---------------+-------+
1 row in set (0.00 sec)
Let's make sure that MySQL is intensively writing to this folder using the great iwatch command:
:~$ sudo iwatch /tmp/
[23/gen/2015 10:43:08] IN_CREATE /tmp//#sql_a87_0.MYI
[23/gen/2015 10:43:08] IN_CREATE /tmp//#sql_a87_0.MYD
[23/gen/2015 10:43:08] IN_CLOSE_WRITE /tmp//#sql_a87_0.MYD
[23/gen/2015 10:43:08] IN_CLOSE_WRITE /tmp//#sql_a87_0.MYI
[23/gen/2015 10:43:08] IN_CLOSE_WRITE /tmp//#sql_a87_0.MYI
[23/gen/2015 10:43:08] IN_CLOSE_WRITE /tmp//#sql_a87_0.MYD
This is a good sign, so let's do the rest.
Add the following line to /etc/fstab:
:~$ sudo cat /etc/fstab
tmpfs /tmp tmpfs nodev,nosuid,size=256M 0 0
Now let's apply it without a reboot.
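Re-reading fstab is enough on most systems; note that mounting tmpfs over a non-empty /tmp hides the existing files, so ideally stop the services that use /tmp first:

sudo mount -a            # mounts everything listed in /etc/fstab that isn't mounted yet
mount | grep ' /tmp '    # verify that /tmp is now a tmpfs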
Now check how much faster the listing scrolls in iwatch /tmp.
This optimization will also be useful for many other services: anti-virus scanners, PHP and web servers, Java applications, and so on.
22 Jan 2015
Are you tired of seeing these lines in the Apache log?
cat /var/log/apache2/other_vhosts_access.log | grep "wp-login.php"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:14 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:14 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:14 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:14 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:15 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:15 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:15 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:15 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
example.com:80 95.211.131.148 - - [20/Jan/2015:12:40:15 +0100] "POST /wp-login.php HTTP/1.0" 200 211 "-" "-"
Your server is actually working hard handling repeated login attempts, especially with frameworks like WordPress and Joomla. The saturation of database connections results in denial of service and website downtime!
To block the unsolicited requests and avoid downtime, we just need to follow a few steps.
STEP 1: Fail2ban installation
Install Fail2ban:
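On a Debian/Ubuntu system (which the /etc/apache2 and /etc/init.d paths below suggest), this is typically:

sudo apt-get update
sudo apt-get install fail2ban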
STEP 2: Fail2ban jail configuration
Now let's configure Fail2ban to ban the attackers.
sudo vim /etc/fail2ban/jail.conf
and add the following to the end of the file:
[framework-ddos]
enabled = true
port = 80,443
protocol = tcp
filter = framework-ddos
logpath = /var/log/apache2/other_vhosts_access.log
maxretry = 10
# findtime: 10 mins
findtime = 600
# bantime: 1 week
bantime = 604800
The default installation of ISPConfig writes to a log file located at /var/log/apache2/other_vhosts_access.log in the following format:
example.com:80 95.211.131.148 - - [22/Jan/2015:17:10:52 +0100] "GET /wp-login.php HTTP/1.1" 200 22457 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +https://www.google.com/bot.html)"
Next comes the most important part, the filter configuration:
vim /etc/fail2ban/filter.d/framework-ddos.conf
and put in the following regular expressions:
[Definition]
failregex = .*:(80|443) <HOST> .*(GET|POST) .*/xmlrpc.php
            .*:(80|443) <HOST> .*(GET|POST) .*/wp-login.php
            .*:(80|443) <HOST> .*(GET|POST) /administrator/index.php HTTP
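Before restarting, it's worth testing the filter against the real log with fail2ban-regex, which reports how many lines would match:

sudo fail2ban-regex /var/log/apache2/other_vhosts_access.log /etc/fail2ban/filter.d/framework-ddos.conf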
Restart Fail2ban
sudo /etc/init.d/fail2ban restart
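After the restart you can confirm that the jail is loaded with fail2ban-client:

sudo fail2ban-client status framework-ddos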
STEP 3: Testing and monitoring
Start monitoring the log file
sudo tail -f /var/log/fail2ban.log
Once you've seen the first attacker being banned:
2015-01-20 12:40:35,205 fail2ban.actions: WARNING [framework-ddos] Ban 95.211.131.148
Check iptables to make sure the correct firewall rules were applied.
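Assuming the default iptables ban action, something like this lists the chain and produces the output below:

sudo iptables -L fail2ban-framework-ddos -n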
Chain fail2ban-framework-ddos (1 references)
target prot opt source destination
DROP all -- 95.211.131.148 0.0.0.0/0
This is it!
16 Dec 2014
Nowadays we inherit a lot of old databases.
The typical problem is to extract data from badly encoded fields.
This happens when the browser encoding is forced to, say, UTF8 while MySQL accepts the default LATIN1 encoding. In this case the problem does not manifest immediately, since the byte sequence corresponding to a single character remains intact during saving and retrieval, but it becomes a problem when the data is dumped and migrated.
Let's work around this problem. First, find the non-ASCII characters in the dump file:
grep --color='auto' -P "[\x80-\xFF]" FILENAME
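As an extra sanity check, file can guess the overall encoding of the dump:

file -i FILENAME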
Now let's convert it with iconv:
iconv --verbose -f LATIN1 -t UTF8//TRANSLIT FILENAME_latin1 > FILENAME_utf8
If you get the following message
iconv: illegal input sequence at position <NUMBER>
this is a good sign of a badly encoded character; you can correct it by hand with vim in command mode.
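One handy way is vim's :goto command, which jumps straight to the byte offset that iconv reported as the position, so you can fix the character in place:

$ vim FILENAME_latin1
:goto <NUMBER>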
Take into account that you're working in a UTF8 locale terminal session:
user@host:~$ locale
LANG=en_US.UTF-8
LANGUAGE=en_US:
LC_CTYPE="en_US.UTF-8"
After you're finished, just save the file and import it into the UTF8 encoded fields of the database!
26 Nov 2014
If you've ever wondered why the 10Gbit link on your Proxmox node is used only at a few percent during migration, you've come to the right place.
The main reason is the security measures taken to protect the virtual machine memory during migration. The whole memory is transmitted through a secure tunnel, and that penalizes the speed:
Nov 24 12:26:41 starting migration of VM 123 to node 'proxmox1' (10.0.1.1)
Nov 24 12:26:41 copying disk images
Nov 24 12:26:41 starting VM 123 on remote node 'proxmox1'
Nov 24 12:26:43 starting ssh migration tunnel
Nov 24 12:26:43 starting online/live migration on localhost:60000
Nov 24 12:26:43 migrate_set_speed: 8589934592
Nov 24 12:26:43 migrate_set_downtime: 0.1
Nov 24 12:26:45 migration status: active (transferred 133567908, remaining 930062336), total 1082789888)
Nov 24 12:26:47 migration status: active (transferred 273781779, remaining 788221952), total 1082789888)
...
Nov 24 12:26:58 migration status: active (transferred 1036346176, remaining 20889600), total 1082789888)
Nov 24 12:26:58 migration status: active (transferred 1059940218, remaining 11558912), total 1082789888)
Nov 24 12:26:59 migration speed: 64.00 MB/s - downtime 54 ms
Nov 24 12:26:59 migration status: completed
Nov 24 12:27:02 migration finished successfuly (duration 00:00:21)
TASK OK
If you configured your Proxmox cluster to use a dedicated network isolated from the public one, you may lower the security level:
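Since /etc/pve holds the cluster-wide configuration, one simple way to add the option is to append it on any node (any editor works just as well):

echo "migration_unsecure: 1" >> /etc/pve/datacenter.cfg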
$ cat /etc/pve/datacenter.cfg
....
migration_unsecure: 1
This is it:
Nov 24 12:42:19 starting migration of VM 100 to node 'proxmox2' (10.0.1.2)
Nov 24 12:42:19 copying disk images
Nov 24 12:42:19 starting VM 100 on remote node 'proxmox2'
Nov 24 12:42:35 starting ssh migration tunnel
Nov 24 12:42:36 starting online/live migration on 10.0.1.2:60000
Nov 24 12:42:36 migrate_set_speed: 8589934592
Nov 24 12:42:36 migrate_set_downtime: 0.1
Nov 24 12:42:38 migration status: active (transferred 728684636, remaining 5655494656), total 6451433472)
Nov 24 12:42:40 migration status: active (transferred 1465523175, remaining 4865253376), total 6451433472)
....
Nov 24 12:42:55 migration status: active (transferred 7115710846, remaining 69742592), total 6451433472)
Nov 24 12:42:55 migration speed: 323.37 MB/s - downtime 262 ms
Nov 24 12:42:55 migration status: completed
Nov 24 12:42:58 migration finished successfuly (duration 00:00:39)
TASK OK
Now you're using all the available bandwidth for migration, which is also very useful when migrating heavily loaded instances.