Saturday, December 13

Japan's Best Butt, or Reuters taking the piss out of the Japanese media?




What really happened?
Did the conservative Sloggi shareholders ruin a promising beauty contest?
If the women were competing for the Best Butt in Japan, then why on earth force them into back-to-the-fifties attire?
You can hardly see the butts. Under less important circumstances I wouldn't bother to complain, but it really pisses me off that some conservative populists have prevented me from seeing the best butt in Japan. I find it a nuisance. So I do.





It's not as if Sloggi didn't include strings in their collection; this photo makes that very clear.
Only I can't comprehend why the contestants weren't allowed to wear them.
These petite Japanese beauties would have walked the catwalk a lot less embarrassed if only they had been allowed to wear Sloggi strings of their own choice.
Or doesn't Sloggi know that strings dignify the sensual curves of a female body?
When the whole point is a contest for the best butt in the country, then don't go and censor them, the butts.

Thursday, December 11

How to Increase the number of file descriptors on your Linux Server







Today I thought to myself: if I don't publish anything geeky before X-mas, I won't get any presents from Santa, and my credibility as a geek will suffer.
There always seems to be an ulterior motive behind many things, and mine is this: I'm seeking a job, and CTOs and the like surely Google every applicant's name, because there is more bluffing and arse-covering going on in the IT business than you would believe.
Well, perhaps this article will make me look as tech-savvy online as I am in real life.

If you don't know what a file descriptor is, you may find this article extremely boring. However, if you are interested, here is a good definition I found on the web for you:
"A small positive integer that the system uses instead of the file name to identify an open file."
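That definition can be seen in action straight from the shell. The sketch below (the path /tmp/fd_demo.txt is just a throwaway example of mine, not anything from a real server) opens a file on descriptor 3, reads from it by number rather than by name, and closes it again:

```shell
# Create a throwaway file and attach it to file descriptor 3.
printf 'hello\n' > /tmp/fd_demo.txt
exec 3< /tmp/fd_demo.txt

# The shell now refers to the open file by the small integer 3, not by name.
read -r line <&3
echo "$line"    # prints: hello

# Close descriptor 3 again.
exec 3<&-
```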

Most Linux distros need at least some tuning out of the box; it's not cost-effective to run production servers with default settings. There are many things that can be done to optimize the current kernel, and as dynamic as it is compared to the previous major release, 2.4, the number of open file descriptors still matters.
Also, the I/O scheduler defaults to CFQ, the Completely Fair Queuing scheduler. This should be changed according to your server's role, and in my opinion one should set it to pre-emptive when in doubt. Mind you, I haven't read the schedulers' source code; I'm very slow at that and must do it with a book in the other hand, as I'm not as much of a coder as a sysadmin.

While we're talking about schedulers, another interesting one is the Symmetric Multi-Processing (SMP) scheduler. It was patched early in 2008 and now handles multiple and multi-core CPUs well, as long as the application is SMP-aware; before the patch it didn't load-balance threads effectively. Okay, back to the issue at hand.

If I'm wrong about any of this, or your opinion differs, comments are welcome.

I am no kernel developer, merely a geek who's barely keeping his head above water when it comes to kernel development.
I tested and double-checked all of this several months ago at my previous workplace.

The system-wide hard limit for open file descriptors is defined by the kernel.

These can be changed in the kernel sources and recompiled; the values are found here:

/usr/src/linux-2.6.x/include/linux/fs.h

/usr/src/linux-2.6.x/include/linux/limits.h

NR_OPEN = maximum number of open files per process
NR_FILE = total number of files that can be open in the system at any time

The excerpt below is from the fs.h file.

/*
 * It's silly to have NR_OPEN bigger than NR_FILE, but you can change
 * the file limit at runtime and only root can increase the per-process
 * nr_file rlimit, so it's safe to set up a ridiculously high absolute
 * upper limit on files-per-process.
 *
 * Some programs (notably those using select()) may have to be
 * recompiled to take full advantage of the new limits..
 */

/* Fixed constants first: */
#undef NR_OPEN
#define NR_OPEN (1024*1024)   /* Absolute upper limit on fd num */
#define INR_OPEN 1024         /* Initial setting for nfile rlimits */

Based on the above, the max open file descriptors per process can easily be altered, and one can query the “system wide” max open file descriptors with either:

sysctl -a | grep fs.file-max

or

cat /proc/sys/fs/file-max

The default value for this on SuSE Linux kernel 2.6.25.5-1.1 is 50156.

This can be changed by writing to it via sysctl or directly in /proc.
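A hedged sketch of both routes follows; the value 100000 is just an example of mine, and the write forms of course need root, which is why they are left commented out:

```shell
# Read the current system-wide limit:
cat /proc/sys/fs/file-max

# Raise it until the next reboot (needs root; 100000 is an example value):
#   echo 100000 > /proc/sys/fs/file-max
#   sysctl -w fs.file-max=100000

# To make the change survive a reboot, add this line to /etc/sysctl.conf:
#   fs.file-max = 100000
```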

Checking whether the limits have actually been reached system-wide can be done with:

cat /proc/sys/fs/file-nr

or alternatively,

sysctl -a | grep fs.file-nr

768 0 50156

Where the 1st value is the total number of allocated file descriptors,
the 2nd value is the number of allocated but free file descriptors,
and the 3rd value is the maximum number of file descriptors allowed on the system.
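Since /proc/sys/fs/file-nr has a fixed three-field layout, the shell can pick the values apart directly; a small sketch of mine:

```shell
# /proc/sys/fs/file-nr holds three whitespace-separated fields:
# allocated descriptors, free (allocated but unused) descriptors,
# and the system-wide maximum.
read -r allocated free max < /proc/sys/fs/file-nr
echo "in use: $((allocated - free)) of $max"
```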

There is also something called "ulimit", a user-specific limit on the number of open file descriptors, valid per user/group and login session.

This limit can be increased by editing the file /etc/security/limits.conf, and it makes sense to do so for a single process like squid:

squid hard nofile 4096
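After a fresh login, the limits in effect for the current session can be verified with the shell built-in ulimit; -n shows the soft limit and -Hn the hard limit:

```shell
ulimit -n     # soft limit on open file descriptors for this session
ulimit -Hn    # hard limit (only root can raise it above this)
```

Note that changes to limits.conf only apply to new login sessions, so an already-running daemon keeps its old limit until restarted.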

To check how many open file descriptors a given process is using, first get its PID:

ps aux | grep squid

squid 18680 42.4 1.1 61156 11284 ? Sl Feb16 1715:55 (squid) -sYD

Then you can check the number of open files used by the PID:

lsof | grep 18680 | wc -l

32

However, the number of open files is not exactly the same thing as the number of open file descriptors, so, as you can see below, we get a different value.

ls -l /proc/18680/fd/

total 15

dr-x------ 2 root root 0 Feb 19 15:39 .

dr-xr-xr-x 3 squid nogroup 0 Feb 19 05:24 ..

lrwx------ 1 root root 64 Feb 19 15:39 0 -> /dev/null

lrwx------ 1 root root 64 Feb 19 15:39 1 -> /dev/null

l-wx------ 1 root root 64 Feb 19 15:39 10 -> pipe:[29382153]

lrwx------ 1 root root 64 Feb 19 15:39 12 -> socket:[420808351]

l-wx------ 1 root root 64 Feb 19 15:39 13 -> /var/cache/squid/swap.state

lr-x------ 1 root root 64 Feb 19 15:39 16 -> pipe:[29382165]

l-wx------ 1 root root 64 Feb 19 15:39 17 -> pipe:[29382165]

lrwx------ 1 root root 64 Feb 19 15:39 2 -> /dev/null

lrwx------ 1 root root 64 Feb 19 15:39 3 -> /var/log/squid/cache.log

lrwx------ 1 root root 64 Feb 19 15:39 4 -> socket:[29382150]

l-wx------ 1 root root 64 Feb 19 15:39 5 -> /var/log/squid/access.log

l-wx------ 1 root root 64 Feb 19 15:39 6 -> /var/log/squid/store.log

lr-x------ 1 root root 64 Feb 19 15:39 7 -> pipe:[29382152]

lrwx------ 1 root root 64 Feb 19 15:39 8 -> socket:[420808349]

lrwx------ 1 root root 64 Feb 19 15:39 9 -> socket:[420808350]
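For a plain count of the entries under /proc/PID/fd, a one-liner does the trick. The sketch below uses the shell's own PID, $$, as a stand-in, since the squid PID 18680 only exists on my old server:

```shell
# Count the open file descriptors of the current shell.
# On a real server, substitute the target PID (e.g. 18680) for $$.
ls /proc/$$/fd | wc -l
```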

All comments, suggestions or corrections are welcome. Remember that this mostly applies to heavily loaded servers; tweaking these values isn't always necessary, but it's great fun :-)