On today's systems you will rarely see the Too many open files error. However, if you are running a busy Java application that holds many files and sockets open, you may see these errors in your application log.
Why does this error occur?
The kernel has run out of file handles. Go to /proc/sys/fs and compare “file-max” with “file-nr”. file-max is the system-wide limit; file-nr reports three fields: the number of handles currently allocated, the number allocated but unused, and the current value of file-max.
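To see where the system stands, you can read both files directly (these paths are standard on Linux):

```shell
# file-max: the system-wide ceiling on open file handles
cat /proc/sys/fs/file-max

# file-nr: three fields -- handles currently allocated,
# allocated-but-unused handles, and the value of file-max
cat /proc/sys/fs/file-nr
```

If the first field of file-nr is approaching file-max, the system is close to running out of handles.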
What’s the Solution?
The solution is simply to put a larger number in the “file-max” setting and to increase the per-user limit in /etc/security/limits.conf.
Increase the max files
The file-max setting tells the kernel how many file handles it may allocate system-wide. To bring the change into effect immediately, issue the following command:
echo "100000" > /proc/sys/fs/file-max
To make the change permanent so that it survives reboots, you need to make an entry in /etc/sysctl.conf. Open sysctl.conf and add the following line at the end of the file:
fs.file-max = 100000
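After adding the line, you do not have to reboot: the standard `sysctl` workflow (run as root) loads the file into the running kernel and lets you confirm the new value.

```shell
# Reload /etc/sysctl.conf into the running kernel (root required)
sysctl -p

# Confirm the new limit is active
sysctl fs.file-max
cat /proc/sys/fs/file-max
```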
Increase the user limit
We also have to increase the limit on the number of files a user can open. By default it is 1024 files, which is enough for a normal system. However, if your system is a busy server handling more than 3000 hits per second, you need to raise the user limit. There are 2 ways to do this:
- Dirty and quick way.
- Systematic and recommended way.
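Before changing anything, it is worth checking the current per-user limits in your shell. `ulimit -Sn` prints the soft limit (the one actually enforced) and `ulimit -Hn` the hard limit; a process may raise its soft limit up to the hard limit, but only root can raise the hard limit.

```shell
# Soft limit on open files: the enforced ceiling for this shell
ulimit -Sn

# Hard limit: how high the soft limit may be raised without root
ulimit -Hn
```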
Dirty and quick way
The dirty and quick way is to edit the /etc/bashrc file. Open the file and add the following entry at the end:
ulimit -n 16000
Be aware that this will set the limit for all users.
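Note that `ulimit -n` without `-S` or `-H` sets both the soft and the hard limit, and a non-root user cannot raise the hard limit again afterwards in that shell. You can try the effect safely in a subshell first (the value 64 here is arbitrary, just for illustration):

```shell
# Run in a subshell so the current shell's limits are untouched
(
  ulimit -n 64      # lowers both soft and hard limits to 64
  echo "soft: $(ulimit -Sn)"
  echo "hard: $(ulimit -Hn)"
)

# The parent shell's soft limit is unchanged
ulimit -Sn
```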
Systematic and recommended way
The systematic and recommended way is to edit the /etc/security/limits.conf file.
– Exit all shell sessions for the user whose limits you want to change.
– As root, edit the file /etc/security/limits.conf and add these two lines toward the end:
user1 soft nofile 16000
user1 hard nofile 20000
** The two lines above change the maximum number of file handles – nofile – for user1 to the new soft and hard settings.
– Save the file.
– Log in as user1 again. The new limits will be in effect.
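To confirm that an already-running process (for example, the Java application) is actually using the new limits, check its entry under /proc instead of a fresh shell. The example below reads the current shell's own limits via “self”; substitute the application's PID to inspect a running process.

```shell
# The kernel reports each process's effective limits here; replace
# "self" with the Java process's PID to check a running application
grep 'Max open files' /proc/self/limits
```

A process started before the limits were raised keeps its old limits, so the application must be restarted (or the user logged in again) after the change.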