Category Archives: Unix

Unix (Solaris, AIX, etc.)

Missing disk space Linux/Unix: when df disagrees with du -s

A common situation many admins find themselves in is having to clear down disk space in a hurry.  Say /u01 is filling up.  The Oracle admin knows that the database will simply stop if he doesn’t take action quickly.  With the judicious use of du -s he finds some large directories and quickly deletes a few temporary files he knows the database doesn’t immediately need.  He then does a ‘df -h’ only to find that it hasn’t made any difference!  Yet his ‘du -s’ shows the space has been freed up.  What he doesn’t realise is that he has deleted at least one file which is still open, and its space won’t be released until the process holding it closes the file or exits.  What he should have done is this:

echo "" > offendingfile

where offendingfile is the huge file.

In the case of the Oracle admin it’s likely his only choice is to restart the database.  Consider the more general case where a Linux/Unix admin has deleted files but has lost track of where they were and what might be using them.  Or one admin deleted the files and scarpered, leaving another to clean up the mess.  Either way, he is left with the bigger challenge of finding which processes are holding which deleted files open.

A starting point: lsof

The lsof command can be a good starting point, but you are still looking for a needle in a (smaller) haystack, so you will have to do some further filtering.  On CentOS 6 lsof marks files which have been deleted, although it seems to throw up quite a few false positives.

To illustrate the problem of open files I have created some C code which creates a big file and then sleeps for 1,000 seconds.  Compiling and running the binary gives me a 10 Mbyte file:

/var/tmp/SampleBigFile
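
The source itself isn’t reproduced in this archive, but a minimal sketch of what such a program might look like (not the original code) is:

/*
 * Sketch only: create a roughly 10 Mbyte file under /var/tmp and keep it
 * open while sleeping, so the effect of deleting an open file can be
 * watched with df, du and lsof.
 */
#include <stdio.h>
#include <unistd.h>

#define BIGFILE  "/var/tmp/SampleBigFile"
#define FILESIZE (10 * 1024 * 1024)

int main(void)
{
    FILE *fp = fopen(BIGFILE, "w");
    if (fp == NULL) {
        perror(BIGFILE);
        return 1;
    }

    /* Actually write the bytes so the file really occupies disk blocks. */
    for (long i = 0; i < FILESIZE; i++)
        fputc('x', fp);
    fflush(fp);

    /* Hold the file open; its blocks stay allocated even if it is removed. */
    sleep(1000);

    fclose(fp);
    return 0;
}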

If I then remove the file I have created exactly the situation described above.  On CentOS 6 I could run:

lsof | fgrep '(deleted)'

but that produces 24 results (among which are files that haven’t been deleted at all, like /usr/bin/gnome-screensaver), so it is a good idea to narrow the search.  In this situation it is likely that just one file system is full, so you can grep for its mount point.  That does it nicely in our example:

[root@centos6 ~]# lsof | fgrep '(deleted)' | fgrep /var
createope 11012 admin 3u REG 253,3 10485761 693 /var/tmp/SampleBigFile (deleted)
[root@centos6 ~]#

On MacOS (Darwin) there is no ‘(deleted)’ label, so go straight to filtering on /var:

vger:~ root# lsof | egrep 'REG.*/var/tmp'
mysqld    346 _mysql 4u  REG 14,18        0 6217706 /private/var/tmp/ibu4Nw9X
mysqld    346 _mysql 5u  REG 14,18        0 6217707 /private/var/tmp/ib6jCfyT
mysqld    346 _mysql 6u  REG 14,18        0 6217708 /private/var/tmp/ibu9Zqxb
mysqld    346 _mysql 7u  REG 14,18        0 6217709 /private/var/tmp/iboukiVq
mysqld    346 _mysql 11u REG 14,18        0 6217710 /private/var/tmp/ibLRW39J
createope 42775 admin 3u REG 14,18 10485761 6308941 /private/var/tmp/SampleBigFile
vger:~ root#

(REG indicates a regular file.)  Our big file is clearly identifiable here, but if it weren’t you could pipe the output through something like sort -n -k7 to sort numerically on the size column.

Sleep command for a random amount of time

Most Unix/Linux users will be familiar with the sleep command, which delays for a specified number of seconds.  A few years ago I needed a sleep command which would sleep for a random amount of time, so I came up with some code which, as it happens, is a nice example of interrupt handling in Unix/Linux.

The code I’ve written to do this random sleep can be freely used, but I would ask you to leave in the reference to this site, http://linuxtech.ie.  It has no external dependencies, so this should compile it:

cc -Wall -O randsleep.c -o randsleep

The two options ask the compiler to warn about any dodgy coding (-Wall) and to optimise (-O, not that it matters here!).

It is used like this:

randsleep [-v] <lower limit> <upper limit>

The -v option will echo the random time it has calculated, e.g.:

vger:~(217)+>- randsleep -v 2 7
Sleeping for 3.49 seconds
vger:~(218)+>- randsleep -v 2 7
Sleeping for 5.30 seconds
vger:~(219)+>-
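
The original randsleep.c is the code distributed from this site; purely as an illustration of the general approach (and not the original source), a stripped-down version might look something like this, picking a random duration between the two limits and handing it to nanosleep(), which returns early if a signal interrupts it:

/*
 * Sketch only, not the original randsleep.c from http://linuxtech.ie.
 * Usage: randsleep [-v] <lower limit> <upper limit>
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int verbose = 0, arg = 1;

    if (arg < argc && strcmp(argv[arg], "-v") == 0) {
        verbose = 1;
        arg++;
    }
    if (argc - arg != 2) {
        fprintf(stderr, "usage: %s [-v] <lower limit> <upper limit>\n", argv[0]);
        return 1;
    }

    double lower = atof(argv[arg]);
    double upper = atof(argv[arg + 1]);

    /* Pick a random duration between the two limits. */
    srand((unsigned)(time(NULL) ^ getpid()));
    double duration = lower + (upper - lower) * ((double)rand() / RAND_MAX);

    if (verbose)
        printf("Sleeping for %.2f seconds\n", duration);

    /* nanosleep() is interrupted (EINTR) if a signal arrives; handling
     * that cleanly is the interesting part of the real program. */
    struct timespec req;
    req.tv_sec  = (time_t)duration;
    req.tv_nsec = (long)((duration - (double)req.tv_sec) * 1e9);
    if (nanosleep(&req, NULL) != 0)
        perror("nanosleep");

    return 0;
}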

Tracking what process is generating network traffic

Every now and again I’ve had situations where I see Internet traffic which doesn’t correspond to any obvious activity I’ve initiated.  Well, on Linux and Unix (and hence MacOS) it’s easy enough to track down the offending process.

(The commands tcpdump and lsof must be run as root or under sudo.)  Doing a tcpdump (snoop in Solaris) I could see the traffic, and it was obvious which source/destination pair was responsible for most of it:

mistral:~(1)+>- tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on en0, link-type EN10MB (Ethernet), capture size 65535 bytes
23:01:28.320299 IP 193.120.121.160.http > 10.1.1.183.49977: Flags [.], seq 2435402503:2435403911, ack 3052965256, win 4692, options [nop,nop,TS val 1326515797 ecr 324351108], length 1408
23:01:28.320923 IP 193.120.121.160.http > 10.1.1.183.49977: Flags [.], seq 1408:2816, ack 1, win 4692, options [nop,nop,TS val 1326515797 ecr 324351108], length 1408
23:01:28.321015 IP 10.1.1.183.49977 > 193.120.121.160.http: Flags [.], ack 2816, win 8104, options [nop,nop,TS val 324351161 ecr 1326515797], length 0
23:01:28.333327 IP 193.120.121.160.http > 10.1.1.183.49977: Flags [.], seq 2816:4224, ack 1, win 4692, options [nop,nop,TS val 1326515804 ecr 324351108], length 1408
[...]
23:01:32.717541 IP 10.1.1.183.49977 > 193.120.121.160.http: Flags [.], ack 767360, win 8104, options [nop,nop,TS val 324355362 ecr 1326520067], length 0
23:01:32.718204 IP 193.120.121.160.http > 10.1.1.183.49977: Flags [.], seq 767360:768768, ack 1, win 4692, options [nop,nop,TS val 1326520068 ecr 324355189], length 1408
23:01:32.718325 IP 10.1.1.183.49977 > 193.120.121.160.http: Flags [.], ack 768768, win 8192, options [nop,nop,TS val 324355363 ecr 1326520068], length 0
23:01:32.875095 IP 193.120.121.160.http > 10.1.1.183.49977: Flags [.], seq 768768:770176, ack 1, win 4692, options [nop,nop,TS val 1326520238 ecr 324355191], length 1408
^C
970 packets captured
1004 packets received by filter
0 packets dropped by kernel
mistral:~(2)+>-

In this example my local address is 10.1.1.183, and it can also be seen that the local port of the process talking to the Internet is 49977.  I used lsof to find out which process had that port open, and then ps to show me the details of that process:

mistral:-(2)+>- lsof -i TCP:49977
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
SoftwareU 11684 _softwareupdate 15u IPv4 0x39d5096a0cd02d61 0t0 TCP 10.1.1.183:49977->193.120.121.160:http (ESTABLISHED)
mistral:~(3)+>- ps -lfp 11684
 UID PID PPID F CPU PRI NI SZ RSS WCHAN S ADDR TTY TIME CMD STIME
 200 11684 1 4004 0 63 0 3672308 135516 - Ss 0 ?? 0:28.45 /System/Library/ 10:58pm
mistral:~(4)+>- ps -fp 11684
 UID PID PPID C STIME TTY TIME CMD
 200 11684 1 0 10:58pm ?? 0:29.15 /System/Library/CoreServices/Software Update.app/Contents/Resources/SoftwareUpdateCheck -Check YES
mistral:~(5)+>- id
uid=0(root) gid=0(wheel) groups=0(wheel),404(com.apple.sharepoint.group.2),401(com.apple.access_screensharing),1(daemon),2(kmem),3(sys),4(tty),5(operator),8(procview),9(procmod),12(everyone),20(staff),29(certusers),33(_appstore),61(localaccounts),80(admin),98(_lpadmin),100(_lpoperator),204(_developer),403(com.apple.access_ssh)
mistral:~(6)+>-

And that’s it: the process was the automatic OS updater.

Finding a severe resource hog on your server

Have you ever experienced the situation where a server becomes bafflingly unresponsive, to the point where even your monitoring services fail to report?  Then the server starts responding again, sometimes after a reboot.  You suspect a process went berserk, but which one?  The monitoring software itself was out of action during the crisis, so it can’t tell you anything.

I have come across this quite a few times in my career, most recently in clustered database servers.  Sometimes these outages can be so severe that even very lightweight monitoring software like sar and Xymon (formerly known as Hobbit and Big Brother) can be taken out of action.  In the past I have resorted to using a loop to save the output of the command top to a file every 30 seconds, hoping to catch an event, but that is ugly for so many reasons.

SQLite to the rescue

SQLite is a serverless SQL database engine.  The binary is tiny (less than 50 kbyte on Mac OS X) and it stores its data in a single file.  The way SQLite helps us here is that it lets us store the samples and then analyse and/or trim them using all sorts of criteria.

So first create the database file:

sqlite3 storetops.sqlite3

That will give you a sqlite> prompt at which you can type this:

CREATE TABLE savetop (
HostName TEXT, -- So we know which host this data came from
DateAndTime TEXT,
Load REAL,
TopProcesses TEXT
) ;

Type .quit to exit.  (SQL professionals will balk at my not using a DATETIME type, but SQLite has no such storage type.)  You next want to write a simple script which writes to that table at intervals of, say, 30 seconds.  I’ve written a basic performance-monitoring script which you can use on Linux or MacOS X.
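
The script itself isn’t reproduced here; purely to illustrate what one sampling pass has to do (this is a hypothetical C sketch using getloadavg() and the SQLite C API, not the original script), each sample just needs the hostname, a timestamp, the load average and a capture of top inserted into the savetop table:

/*
 * Hypothetical sketch of a single sample, not the original script.
 * Build with something like: cc -Wall sample.c -lsqlite3 -o sample
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sqlite3.h>

int main(void)
{
    char host[256], now[32];
    double load[1];
    time_t t = time(NULL);

    gethostname(host, sizeof(host));
    strftime(now, sizeof(now), "%Y-%m-%d %H:%M:%S", localtime(&t));
    if (getloadavg(load, 1) != 1) {
        fprintf(stderr, "getloadavg failed\n");
        return 1;
    }

    sqlite3 *db;
    if (sqlite3_open("storetops.sqlite3", &db) != SQLITE_OK) {
        fprintf(stderr, "%s\n", sqlite3_errmsg(db));
        return 1;
    }

    /* Bind the values rather than pasting them into the SQL text. */
    sqlite3_stmt *stmt;
    const char *sql = "INSERT INTO savetop (HostName, DateAndTime, Load, TopProcesses) "
                      "VALUES (?, ?, ?, ?)";
    sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    sqlite3_bind_text(stmt, 1, host, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, now, -1, SQLITE_TRANSIENT);
    sqlite3_bind_double(stmt, 3, load[0]);
    sqlite3_bind_text(stmt, 4, "", -1, SQLITE_STATIC);  /* top output would go here */
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    sqlite3_close(db);

    return 0;
}

Run from a loop or from cron every 30 seconds or so, with the fourth column filled in from the output of top, that is enough to produce data like the sample below.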

Here is a sample of data I collected on my Mac:

sqlite> SELECT HostName,DateAndTime,Load FROM savetop ;
vger|2012-09-03 19:51:30|0.93
vger|2012-09-03 19:51:43|1.3
vger|2012-09-03 19:51:55|1.17
vger|2012-09-03 19:52:08|1.79
vger|2012-09-03 19:52:20|1.66
vger|2012-09-03 19:52:33|1.44
vger|2012-09-03 19:52:45|1.22
vger|2012-09-03 19:52:57|1.34
vger|2012-09-03 19:53:10|1.36
vger|2012-09-03 19:53:22|1.23
vger|2012-09-03 22:10:11|1.06
vger|2012-09-03 22:10:24|1.59
vger|2012-09-03 22:10:36|1.46
vger|2012-09-03 22:10:49|1.24
vger|2012-09-03 22:11:01|1.2
vger|2012-09-03 22:11:13|1.01
vger|2012-09-03 22:11:26|1.38
vger|2012-09-03 22:11:38|1.48
vger|2012-09-03 22:11:51|1.33
vger|2012-09-03 22:12:03|1.71
sqlite>

If there is an item of interest I can examine its top processes:

SELECT TopProcesses FROM savetop WHERE DateAndTime='2012-09-03 22:10:36' ;

Suppose I want to count the number of events where the load was greater than 1.4:

sqlite> SELECT COUNT(HostName) FROM savetop WHERE Load>1.4 ;
7
sqlite>

Of course this is artificial (normally we would be looking at much higher loads), but it illustrates the advantage of the approach.  If you are monitoring over a long period the SQLite file is likely to get very large, but that is also very easy to remedy:

sqlite> SELECT COUNT(HostName) FROM savetop ;
20
sqlite> DELETE FROM savetop WHERE Load<1.4 ;
sqlite> SELECT COUNT(HostName) FROM savetop ;
7
sqlite>

Use of load rather than CPU usage

While this is beyond the immediate scope of this post, some of you might be wondering why I am using load rather than CPU usage.  Most operating systems (including Windows) have built-in strategies for handling CPU hogs; in Linux and Unix, for instance, the priority of a CPU-hogging process is automatically downgraded.  The result is that a server whose CPU is flat out can still be quite usable.  Load, which represents the number of processes running or waiting to run, is a much more reliable indicator of a server in distress.  High load can be caused by too many processes competing for the CPUs and/or by I/O delays, which in turn can be caused by busy or slow disks.  On Solaris and Windows you can determine whether a server is CPU-bound by checking the percentage of time runnable processes (those not waiting for I/O or sleeping for other reasons) spend waiting for a CPU; if this is higher than, say, 5% then the server is CPU-bound.