Contents:
uutot.awk - Report UUCP Statistics
phonebill - Track Phone Usage
combine - Extract Multipart uuencoded Binaries
mailavg - Check Size of Mailboxes
adj - Adjust Lines for Text Files
readsource - Format Program Source Files for troff
gent - Get a termcap Entry
plpr - lpr Preprocessor
transpose - Perform a Matrix Transposition
m1 - Simple Macro Processor
This chapter contains a miscellany of scripts contributed by Usenet users. Each program is introduced with a brief description by the program's author. Our comments are placed inside brackets [like this]. Then the full program listing is shown. If the author did not supply an example, we generate one and describe it after the listing. Finally, in a section called "Program Notes," we talk briefly about the program, highlighting some interesting points. Here is a summary of the scripts:
uutot.awk      Report UUCP statistics.
phonebill      Track phone usage.
combine        Extract multipart uuencoded binaries.
mailavg        Check size of mailboxes.
adj            Adjust lines for text files.
readsource     Format program source files for troff.
gent           Get a termcap entry.
plpr           lpr preprocessor.
transpose      Perform a matrix transposition.
m1             A very simple macro processor.
uutot.awk - Report UUCP Statistics

Contributed by Roger A. Cornelius
Here's something I wrote in nawk in response to all the C versions of the same thing that were posted to alt.sources a while back. Basically, it summarizes statistics of uucp connections (connect time, throughput, files transmitted, etc.). It only supports HDB-style log files, but will show statistics on a site-by-site basis or on an overall (all sites) basis. [It also works with /usr/spool/uucp/SYSLOG.]
I use a shell wrapper which calls "awk -f" to run this, but it's not necessary. Usage information is in the header. (Sorry about the lack of comments.)
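For instance, based on the usage line in the program's header, restricting the report to a single site should look something like this (uunet is one of the site names in the sample data below):

$ nawk -f uutot.awk uunet /usr/spool/uucp/.Admin/xferstats

Any arguments before the final filename are treated as site names to report on; with none, all sites are reported.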
# @(#) uutot.awk - display uucp statistics - requires new awk
# @(#) Usage:awk -f uutot.awk [site ...] /usr/spool/uucp/.Admin/xferstats
# Author: Roger A. Cornelius (rac@sherpa.uucp)

# dosome[];       # site names to work for - all if not set
# remote[];       # array of site names
# bytes[];        # bytes xmitted by site
# time[];         # time spent by site
# files[];        # files xmitted by site

BEGIN {
        doall = 1;
        if (ARGC > 2) {
                doall = 0;
                for (i = 1; i < ARGC-1; i++) {
                        dosome[ ARGV[i] ];
                        ARGV[i] = "";
                }
        }
        kbyte = 1024    # 1000 if you're not picky
        bang = "!";
        sending = "->";
        xmitting = "->" "|" "<-";
        hdr1 = "Remote     K-Bytes   K-Bytes   K-Bytes " \
               "Hr:Mn:Sc Hr:Mn:Sc AvCPS AvCPS    #    #\n";
        hdr2 = "SiteName      Recv      Xmit     Total " \
               "    Recv     Xmit  Recv  Xmit Recv Xmit\n";
        hdr3 = "-------- --------- --------- --------- -------- " \
               "-------- ----- ----- ---- ----";
        fmt1 = "%-8.8s %9.3f %9.3f %9.3f %2d:%02d:%02.0f " \
               "%2d:%02d:%02.0f %5.0f %5.0f %4d %4d\n";
        fmt2 = "Totals   %9.3f %9.3f %9.3f %2d:%02d:%02.0f " \
               "%2d:%02d:%02.0f %5.0f %5.0f %4d %4d\n";
}
{
        if ($6 !~ xmitting)     # should never be
                next;
        direction = ($6 == sending ? 1 : 2)
        site = substr($1,1,index($1,bang)-1);
        if (site in dosome || doall) {
                remote[site];
                bytes[site,direction] += $7;
                time[site,direction] += $9;
                files[site,direction]++;
        }
}
END {
        print hdr1 hdr2 hdr3;

        for (k in remote) {
                rbyte += bytes[k,2]; sbyte += bytes[k,1];
                rtime += time[k,2]; stime += time[k,1];
                rfiles += files[k,2]; sfiles += files[k,1];

                printf(fmt1, k, bytes[k,2]/kbyte, bytes[k,1]/kbyte,
                    (bytes[k,2]+bytes[k,1])/kbyte,
                    time[k,2]/3600, (time[k,2]%3600)/60, time[k,2]%60,
                    time[k,1]/3600, (time[k,1]%3600)/60, time[k,1]%60,
                    bytes[k,2] && time[k,2] ? bytes[k,2]/time[k,2] : 0,
                    bytes[k,1] && time[k,1] ? bytes[k,1]/time[k,1] : 0,
                    files[k,2], files[k,1]);
        }

        print hdr3
        printf(fmt2, rbyte/kbyte, sbyte/kbyte, (rbyte+sbyte)/kbyte,
            rtime/3600, (rtime%3600)/60, rtime%60,
            stime/3600, (stime%3600)/60, stime%60,
            rbyte && rtime ? rbyte/rtime : 0,
            sbyte && stime ? sbyte/stime : 0,
            rfiles, sfiles);
}
A test file was generated to test Cornelius' program. Here are a few lines extracted from /usr/spool/uucp/.Admin/xferstats (because each line in this file is too long to fit on the page, we have broken each line following the directional arrow for display purposes only):
isla!nuucp S (8/3-16:10:17) (C,126,25) [ttyi1j] ->
     1131 / 4.880 secs, 231 bytes/sec
isla!nuucp S (8/3-16:10:20) (C,126,26) [ttyi1j] ->
     149 / 0.500 secs, 298 bytes/sec
isla!sue S (8/3-16:10:49) (C,126,27) [ttyi1j] ->
     646 / 25.230 secs, 25 bytes/sec
isla!sue S (8/3-16:10:52) (C,126,28) [ttyi1j] ->
     145 / 0.510 secs, 284 bytes/sec
uunet!uisla M (8/3-16:15:50) (C,951,1) [cui1a] ->
     1191 / 0.660 secs, 1804 bytes/sec
uunet!uisla M (8/3-16:15:53) (C,951,2) [cui1a] ->
     148 / 0.080 secs, 1850 bytes/sec
uunet!uisla M (8/3-16:15:57) (C,951,3) [cui1a] ->
     1018 / 0.550 secs, 1850 bytes/sec
uunet!uisla M (8/3-16:16:00) (C,951,4) [cui1a] ->
     160 / 0.070 secs, 2285 bytes/sec
uunet!daemon M (8/3-16:16:06) (C,951,5) [cui1a] <-
     552 / 2.740 secs, 201 bytes/sec
uunet!daemon M (8/3-16:16:09) (C,951,6) [cui1a] <-
     102 / 1.390 secs, 73 bytes/sec
Note that there are 12 fields; however, the program really only uses fields 1, 6, 7, and 9. Running the program on the sample input produces the following results:
$ nawk -f uutot.awk uutot.test
Remote     K-Bytes   K-Bytes   K-Bytes Hr:Mn:Sc Hr:Mn:Sc AvCPS AvCPS    #    #
SiteName      Recv      Xmit     Total     Recv     Xmit  Recv  Xmit Recv Xmit
-------- --------- --------- --------- -------- -------- ----- ----- ---- ----
uunet        0.639     2.458     3.097  0:04:34  2:09:49     2     0    2    4
isla         0.000     2.022     2.022  0:00:00  0:13:58     0     2    0    4
-------- --------- --------- --------- -------- -------- ----- ----- ---- ----
Totals       0.639     4.480     5.119  0:04:34  2:23:47     2     1    2    8
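To see just the values the program cares about, a throwaway one-liner of our own (not part of Cornelius' script) can print fields 1, 6, 7, and 9: the site!user name, the directional arrow, the byte count, and the transfer time in seconds. Against the first two sample records it produces:

$ nawk '{ print $1, $6, $7, $9 }' uutot.test
isla!nuucp -> 1131 4.880
isla!nuucp -> 149 0.500
...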
Program Notes for uutot.awk

This nawk application is an excellent example of a clearly written awk program. It is also a typical example of using awk to turn a rather obscure UNIX log file into a useful report.
Although Cornelius apologizes for the lack of comments explaining the program's logic, how to use the program is clear from the initial comments. He also uses variables to define the search patterns and the report's layout, which simplifies the conditional and print statements in the body of the program. It also helps that the variables have names that make their purpose immediately recognizable.
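This style is easy to borrow. Here is a minimal sketch of our own (the names are illustrative, not Cornelius') showing how naming the search patterns and the output format in the BEGIN procedure keeps the rules in the body short:

# count.awk - illustrative sketch of the pattern- and format-variable style
BEGIN {
        sending  = "->"                 # search patterns are named once ...
        xmitting = "->" "|" "<-"
        fmt      = "%-12s %6d\n"        # ... and so is the report layout
}
$6 !~ xmitting  { next }                # ignore lines that are not transfers
$6 == sending   { sent++ }              # compare against the named pattern
END             { printf(fmt, "files sent:", sent) }

Changing what counts as a transfer or reshaping the report then means editing one line in BEGIN rather than hunting through the body.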
This program has a three-part structure, as we emphasized in Chapter 7, Writing Scripts for awk. It consists of a BEGIN procedure, in which variables are defined; the body, in which each line of data from the log file is processed; and the END procedure, in which the output for the report is generated.
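Stripped of the details, that shape fits in a few lines. The following miniature (ours, and purely illustrative) reads the xferstats format shown earlier and reports kilobytes per site!user entry:

# skeleton.awk - the three-part structure in miniature (illustrative only)
BEGIN   { kbyte = 1024 }                # 1. define constants and formats
        { total[$1] += $7 }             # 2. accumulate data for each line of the log
END     {                               # 3. generate the report
        for (name in total)
                printf("%-15s %9.3f\n", name, total[name]/kbyte)
}

Cornelius' program follows exactly this pattern, with the additional bookkeeping needed to split the totals by direction and to compute times and transfer rates.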