Red Hat Linux Security and Optimization
Mohammed J. Kabir

... speeding up Web servers to boosting the performance of Samba. He then explains how to secure your Red Hat Linux system, offering hands-on techniques for network and Internet security as well as in-depth coverage of Linux firewalls and virtual private networks. Complete with security utilities and ready-to-run scripts on CD-ROM, this official Red Hat Linux guide is an indispensable resource.

■ Upgrade and configure your hardware to boost performance
■ Customize the kernel and tune the filesystem for optimal results
■ Use JFS and LVM to enhance filesystem reliability and manageability
■ Tweak Apache, Sendmail, Samba, and NFS servers for increased speed
■ Protect against root compromises by enabling LIDS and Libsafe in the kernel
■ Use PAM, OpenSSL, shadow passwords, OpenSSH, and xinetd to enhance network security
■ Set up sensible security on Apache and reduce CGI and SSI risks
■ Secure BIND, Sendmail, ProFTPD, Samba, and NFS servers
■ Create a highly configurable packet-filtering firewall to protect your network
■ Build a secure virtual private network with FreeS/WAN
■ Use port scanners, password crackers, and CGI scanners to locate vulnerabilities before the hackers do

CD-ROM FEATURES
Security tools on CD-ROM: Filter, John the Ripper, LIDS, LSOF, Nessus, Netcat, Ngrep, Nmap, OpenSSH, OpenSSL, Postfix, SAINT trial version, SARA, Snort, Swatch, tcpdump, Tripwire Open Source Linux Edition, Vetescan, and Whisker. Also includes scripts from the book, plus a searchable e-version of the book.

MOHAMMED J. KABIR is the founder and CEO of Evoknow, Inc., a company specializing in customer relationship management software development. His books include Red Hat Linux 7 Server, Red Hat Linux Administrator's Handbook, Red Hat Linux Survival Guide, and Apache Server 2 Bible.

Reader Level: Intermediate to Advanced
Shelving Category: Networking
Reviewed and Approved by the Experts at Red Hat
Linux Solutions from the Experts at Red Hat
$49.99 USA / $74.99 Canada / £39.99 UK incl. VAT
ISBN 0-7645-4754-2
Cover design by Michael J. Freeland; cover photo © H. Armstrong Roberts
www.redhat.com
www.hungryminds.com
Red Hat Press / Hungry Minds
Red Hat Linux Security and Optimization

Published by
Hungry Minds, Inc.
909 Third Avenue
New York, NY 10022
www.hungryminds.com

Copyright © 2002 Hungry Minds, Inc. All rights reserved. No part of this book, including interior design, cover design, and icons, may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording, or otherwise) without the prior written permission of the publisher.

Library of Congress Control Number: 2001092938
ISBN: 0-7645-4754-2
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
1B/SX/RR/QR/IN

Distributed in the United States by Hungry Minds, Inc.

Distributed by CDG Books Canada Inc. for Canada; by Transworld Publishers Limited in the United Kingdom; by IDG Norge Books for Norway; by IDG Sweden Books for Sweden; by IDG Books Australia Publishing Corporation Pty. Ltd. for Australia and New Zealand; by TransQuest Publishers Pte Ltd. for Singapore, Malaysia, Thailand, Indonesia, and Hong Kong; by Gotop Information Inc. for Taiwan; by ICG Muse, Inc. for Japan; by Intersoft for South Africa; by Eyrolles for France; by International Thomson Publishing for Germany, Austria, and Switzerland; by Distribuidora Cuspide for Argentina; by LR International for Brazil; by Galileo Libros for Chile; by Ediciones ZETA S.C.R. Ltda. for Peru; by WS Computer Publishing Corporation, Inc., for the Philippines; by Contemporanea de Ediciones for Venezuela; by Express Computer Distributors for the Caribbean and West Indies; by Micronesia Media Distributor, Inc. for Micronesia; by Chips Computadoras S.A. de C.V. for Mexico; by Editorial Norma de Panama S.A. for Panama; by American Bookshops for Finland.

For general information on Hungry Minds' products and services, please contact our Customer Care department within the U.S. at 800-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002.

For sales inquiries and reseller information, including discounts, premium and bulk quantity sales, and foreign-language translations, please contact our Customer Care department at 800-434-3422, fax 317-572-4002, or write to Hungry Minds, Inc., Attn: Customer Care Department, 10475 Crosspoint Boulevard, Indianapolis, IN 46256.

For information on licensing foreign or domestic rights, please contact our Sub-Rights Customer Care department at 212-884-5000.

For information on using Hungry Minds' products and services in the classroom or for ordering examination copies, please contact our Educational Sales department at 800-434-2086 or fax 317-572-4005.

For press review copies, author interviews, or other publicity information, please contact our Public Relations department at 317-572-3168 or fax 317-572-4168.

For authorization to photocopy items for corporate, personal, or educational use, please contact Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, or fax 978-750-4470.
LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND AUTHOR HAVE USED THEIR
BEST EFFORTS IN PREPARING THIS BOOK. THE PUBLISHER AND AUTHOR MAKE NO REPRESENTATIONS
OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS
BOOK AND SPECIFICALLY DISCLAIM ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. THERE ARE NO WARRANTIES WHICH EXTEND BEYOND THE
DESCRIPTIONS CONTAINED IN THIS PARAGRAPH. NO WARRANTY MAY BE CREATED OR EXTENDED BY
SALES REPRESENTATIVES OR WRITTEN SALES MATERIALS. THE ACCURACY AND COMPLETENESS OF
THE INFORMATION PROVIDED HEREIN AND THE OPINIONS STATED HEREIN ARE NOT GUARANTEED OR
WARRANTED TO PRODUCE ANY PARTICULAR RESULTS, AND THE ADVICE AND STRATEGIES
CONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY INDIVIDUAL. NEITHER THE PUBLISHER NOR
AUTHOR SHALL BE LIABLE FOR ANY LOSS OF PROFIT OR ANY OTHER COMMERCIAL DAMAGES,
INCLUDING BUT NOT LIMITED TO SPECIAL, INCIDENTAL, CONSEQUENTIAL, OR OTHER DAMAGES.
FULFILLMENT OF EACH COUPON OFFER IS THE SOLE RESPONSIBILITY OF THE OFFEROR.
Trademarks: Hungry Minds and related marks are trademarks or registered trademarks of Hungry Minds, Inc. All other trademarks are the
property of their respective owners. Hungry Minds, Inc., is not associated with any product or vendor
mentioned in this book.
Credits
ACQUISITIONS EDITOR
Debra Williams Cauley
PROJECT COORDINATOR
Maridee Ennis
Preface
This book focuses on two major aspects of Red Hat Linux system administration: performance tuning and security. The tuning solutions discussed in this book will help improve the performance of your Red Hat Linux system. At the same time, the practical security solutions discussed in the second half of the book will let you enhance your system's security a great deal. If you are looking for time-saving, practical solutions to performance and security issues, read on!
Part V: Firewalls
This part of the book shows how to create a packet-filtering firewall using iptables, how to create virtual private networks, and how to use SSL-based tunnels to secure access to systems and services. Finally, you are introduced to a wide array of security tools, such as security-assessment (audit) tools, port scanners, log monitoring and analysis tools, CGI scanners, password crackers, intrusion detection tools, packet filter tools, and various other security administration utilities.
Appendixes
These elements include important references for Linux network users, plus an
explanation of the attached CD-ROM.
◆ When you are asked to enter a command, you need to press the Enter or Return key after you type the command at your command prompt.
◆ A monospaced font is used to denote configuration files or code segments.
The Note icon indicates that something needs a bit more explanation.
The Tip icon tells you something that is likely to save you some time and
effort.
The cross-reference icon tells you that you can find additional information
in another chapter.
Acknowledgments
While writing this book, I often needed to consult with many developers whose tools I covered. I want to especially thank a few such developers who generously helped me present some of their great work.
Huagang Xie is the creator and chief developer of the LIDS project. Special
thanks to him for responding to my email queries and also providing me with a
great deal of information on the topic.
Timothy K. Tsai, Navjot Singh, and Arash Baratloo are the three members of the
Libsafe team who greatly helped in presenting the Libsafe information. Very special
thanks to Tim for taking the time to promptly respond to my emails and providing
me with a great deal of information on the topic.
I thank both the Red Hat Press and Hungry Minds teams who made this book a
reality. It is impossible to list everyone involved but I must mention the following
kind individuals.
Debra Williams Cauley provided me with this book opportunity and made sure I
saw it through to the end. Thanks, Debra.
Terri Varveris, the acquisitions editor, took over in Debra’s absence. She made
sure I had all the help needed to get this done. Thanks, Terri.
Pat O’Brien, the project development editor, kept this project going. I don’t know
how I could have done this book without his generous help and suggestions every
step of the way. Thanks, Pat.
Matt Hayden, the technical reviewer, provided numerous technical suggestions,
tips, and tricks — many of which have been incorporated in the book. Thanks, Matt.
Sheila Kabir, my wife, had to put up with many long work hours during the few
months it took to write this book. Thank you, sweetheart.
Contents at a Glance
Preface
Acknowledgments
Part V: Firewalls
Index
Part I
System Performance
CHAPTER 1
Performance Basics
CHAPTER 2
Kernel Tuning
CHAPTER 3
Filesystem Tuning
Chapter 1
Performance Basics
IN THIS CHAPTER
RED HAT LINUX is a great operating system for extracting the last bit of performance
from your computer system, whether it’s a desktop unit or a massive corporate net-
work. In a networked environment, optimal performance takes on a whole new
dimension — the efficient delivery of security services — and the system administra-
tor is the person expected to deliver. If you’re like most system administrators,
you’re probably itching to start tweaking — but before you do, you may want to
take a critical look at the whole concept of “high performance.”
Today’s hardware and bandwidth — fast and relatively cheap — have spoiled many
of us. The long-running craze to buy the latest computer “toy” has lowered hard-
ware pricing; the push to browse the Web faster has lowered bandwidth pricing
while increasing its carrying capacity. Today, you can buy 1.5GHz systems with
4GB of RAM and hundreds of GB of disk space (ultra-wide SCSI 160, at that) with-
out taking a second mortgage on your house. Similarly, about $50 to $300 per
month can buy you a huge amount of bandwidth in the U.S. — even in most metro-
politan homes.
Hardware and bandwidth have become commodities in the last few years — but
are we all happy with the performance of our systems? Most users are likely to agree
that even with phenomenal hardware and bandwidth, their computers just don’t
seem that fast anymore — but how many people distinguish between two systems
that seem exactly the same except for processor speed? Unless you play demanding
computer games, you probably wouldn’t notice much difference between 300MHz
and 500MHz when you run your favorite word processor or Web browser.
Actually, much of what most people accept as “high performance” is based on
their human perception of how fast the downloads take place or how crisp the video
on-screen looks. Real measurement of performance requires accurate tools and
repeated sampling of system activity. In a networked environment, the need for such
measurement increases dramatically; for a network administrator, it’s indispensable.
Accordingly, this chapter introduces a few simple but useful tools that measure and
monitor system performance. Using their data, you can build a more sophisticated per-
ception of how well your hardware actually performs. When you’ve established a reli-
able baseline for your system’s performance, you can tune it to do just what you want
done — starting with the flexibility of the Red Hat Linux operating system, and using
its advantages as you configure your network to be fast, efficient, and secure.
Here ps reports that three programs are running under the current user ID: su,
bash, and ps itself. If you want a list of all the processes running on your system,
you can run ps aux to get one. A sample of the ps aux command’s output (abbre-
viated, of course) looks like this:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.1 0.1 1324 532 ? S 10:58 0:06 init [3]
root 2 0.0 0.0 0 0 ? SW 10:58 0:00 [kflushd]
root 3 0.0 0.0 0 0 ? SW 10:58 0:00 [kupdate]
root 4 0.0 0.0 0 0 ? SW 10:58 0:00 [kpiod]
root 5 0.0 0.0 0 0 ? SW 10:58 0:00 [kswapd]
root 6 0.0 0.0 0 0 ? SW< 10:58 0:00 [mdrecoveryd]
root 45 0.0 0.0 0 0 ? SW 10:58 0:00 [khubd]
root 349 0.0 0.1 1384 612 ? S 10:58 0:00 syslogd -m 0
root 359 0.0 0.1 1340 480 ? S 10:58 0:00 klogd
rpc 374 0.0 0.1 1468 576 ? S 10:58 0:00 portmap
[Remaining lines omitted]
Sometimes you may want to run ps to monitor a specific process for a certain
length of time. For example, say you installed a new Sendmail mail-server patch
and want to make sure the server is up and running — and you also want to know
whether it uses more than its share of system resources. In such a case, you can
combine a few Linux commands to get your answers — like this:
For example, you can run watch --interval=30 "ps auxw | grep sendmail". By running the ps program every 30 seconds through watch, you can see how much of the system's resources sendmail is using.
Combining ps with the tree command, you can run pstree, which displays a
tree structure of all processes running on your system. A sample output of pstree
looks like this:
init-+-apmd
|-atd
|-crond
|-identd---identd---3*[identd]
|-kflushd
|-khubd
|-klogd
|-kpiod
|-kswapd
|-kupdate
|-lockd---rpciod
|-lpd
|-mdrecoveryd
|-6*[mingetty]
|-named
|-nmbd
|-portmap
|-rhnsd
|-rpc.statd
|-safe_mysqld---mysqld---mysqld---mysqld
|-sendmail
|-smbd---smbd
|-sshd-+-sshd---bash---su---bash---man---sh---sh-+-groff---grotty
| | `-less
| `-sshd---bash---su---bash---pstree
|-syslogd
|-xfs
`-xinetd
You can see that the parent of all processes is init. One branch of the tree is cre-
ated by safe_mysqld, spawning three mysqld daemon processes. The sshd branch
shows that the sshd daemon has forked two child daemon processes — which have
open bash shells and launched still other processes. The pstree output was gener-
ated by one of the sub-branches of the sshd daemon.
By default, top updates its screen every second — an interval you can change by
using the d seconds option. For example, to update the screen every 5 seconds, run
the top d 5 command. A 5- or 10-second interval is, in fact, more useful than the
default setting. (If you let top update the screen every second, it lists itself in its
own output as the main resource consumer.) Properly configured, top can perform
interactive tasks on processes.
If you press the h key while top is running, top displays a help screen listing its keyboard options. Using those options, you can interact with top — for example, to kill or renice a process — without leaving the program.
Vtad, available from www.blakeley.com/resources/vtad, checks your system's settings against a ruleset and warns you if the current values are not ideal. Typically, the Linux kernel allows three to four times as many open inodes as open files.
◆ Check the /proc/sys/net/ipv4/ip_local_port_range file to confirm
that the system has 10,000 to 28,000 local ports available.
This can boost performance if you have many proxy server connections to
your server.
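For example, you can inspect the current range and, as root, widen it through the proc interface; the values shown here are illustrative, not a recommendation:

cat /proc/sys/net/ipv4/ip_local_port_range
echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range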
The default ruleset also checks for free-memory limits, fork rates, disk I/O rates, and IP packet rates. Once you have downloaded Vtad, you can run it quite easily in a shell or xterm window by using the perl vtad.pl command.
Summary
Knowing how to measure system performance is critical in understanding bottle-
necks and performance issues. Using standard Red Hat Linux tools, you can mea-
sure many aspects of your system’s performance. Tools such as ps, top, and vmstat
tell you a lot about how a system is performing. Mastering these tools is an important
step for anyone interested in higher performance.
Chapter 2
Kernel Tuning
IN THIS CHAPTER
IF YOU HAVE INSTALLED THE BASIC Linux kernel that Red Hat supplied, it probably
isn’t optimized for your system. Usually the vendor-provided kernel of any OS is a
“generalist” rather than a “specialist” — it has to support most installation scenarios.
For example, a run-of-the-mill kernel may support both EIDE and SCSI disks (when
you need only SCSI or EIDE support). Granted, using a vendor-provided kernel is
the straightforward way to boot up your system — you can custom-compile your
own kernel and tweak the installation process when you find the time. When you
do reach that point, however, the topics discussed in this chapter come in handy.
1. Download the source code from www.kernel.org or one of its mirror sites
(listed at the main site itself).
In this chapter, I assume that you have downloaded and extracted (using the
tar xvzf linux-2.4.1.tar.gz command) the kernel 2.4.1 source dis-
tribution from the www.kernel.org site.
At this point you have the kernel source distribution ready for configuration.
Now you are ready to select a kernel configuration method.
◆ make config. This method uses the bash shell; you configure the kernel
by answering a series of questions prompted on the screen. (This approach
may be too slow for advanced users; you can’t go back or skip forward.)
◆ make menuconfig. You use a screen-based menu system (a much more
flexible method) to configure the kernel. (This chapter assumes that you
use this method.)
◆ make xconfig. This method, which uses the X Window system (a Linux
graphical interface), is geared to the individual user’s desktop environ-
ment. I do not recommend it for server administrators; the X Window sys-
tem is too resource-intensive to use on servers (which already have
enough to do).
If this isn’t the first time you are configuring the kernel, run make mrproper
from the /usr/src/linux directory to remove all the existing object files
and clean up the source distribution. Then, from the /usr/src/linux
directory — which is a symbolic link to the Linux kernel (in this example,
/usr/src/linux-2.4.1) — run the make menuconfig command to
configure Linux.
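In other words, assuming the source tree lives in /usr/src/linux, the cleanup-and-configure sequence is:

cd /usr/src/linux
make mrproper      # remove old object files and stale configuration
make menuconfig    # configure the kernel via the menu interface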
Using menuconfig
When you run the make menuconfig command, it displays a main menu screen containing a list of submenus. In that list, ---> indicates a submenu, which you may also find within a top-level submenu (such as the Network device support menu).
◆ Use Up and Down arrow keys on your keyboard to navigate the sub-
menus. Press the Enter key to select a menu.
◆ Press the space bar to toggle a highlighted option on or off.
Any organization that depends on Linux should have at least one separate
experimental Linux system so administrators can try new Linux features
without fearing data losses or downtime.
◆ Modules
When you choose to compile a feature into the kernel rather than as a module, it becomes part of the kernel image. This means that the feature is always loaded in the kernel.
HARDWARE
Think of the kernel as the interface to your hardware. The better it's tuned to your hardware, the better your system works. The following hardware-specific options provide optimal configuration for your system.
CPU SUPPORT The Linux kernel can be configured for the Intel x86 instruction set on these CPUs:
◆ “386” for
■ AMD/Cyrix/Intel 386DX/DXL/SL/SLC/SX
■ Cyrix/TI486DLC/DLC2
■ UMC 486SX-S
■ NexGen Nx586
◆ “486” for
■ AMD/Cyrix/IBM/Intel 486DX/DX2/DX4
■ AMD/Cyrix/IBM/Intel SL/SLC/SLC2/SLC3/SX/SX2
■ UMC U5D or U5S
◆ “586” for generic Pentium CPUs, possibly lacking the TSC (time stamp
counter) register.
◆ “Pentium-Classic” for the Intel Pentium.
◆ “K6” for the AMD K6, K6-II and K6-III (also known as K6-3D).
You can find your processor by running the command cat /proc/cpuinfo in
another window. The following code is a sample output from this command.
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 8
model name : Pentium III (Coppermine)
stepping : 1
cpu MHz : 548.742
cache size : 256 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat
pse36 mmx fxsr sse
bogomips : 1094.45
The processor line in the preceding code shows the processor number, counting from 0 (0 is the first or only processor; on a multiprocessor system, the block repeats for each CPU). "Model name" is the processor name that should be selected for the kernel.
1. Select the Processor type and features submenu from the main menu.
The first option in the submenu is the currently chosen processor for your
system. If the chosen processor isn’t your exact CPU model, press the
Enter key to see the list of supported processors.
2. Select the math emulation support.
If you use a Pentium-class machine, math emulation is unnecessary. Your
system has a math co-processor.
If you don't know whether your system has a math co-processor, run the cat /proc/cpuinfo command and find the fpu line. If you see 'yes' next to fpu, you have a math coprocessor (also known as an FPU, or floating-point unit).
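A quick way to check is to filter the output for the fpu lines; on a system with a co-processor, this prints fpu : yes (and fpu_exception : yes):

grep fpu /proc/cpuinfo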
If you have a Pentium Pro, a Pentium II or later Intel CPU, or an Intel clone such as the Cyrix 6x86 or 6x86MX, the AMD K6-2 (stepping 8 and above), or the K6-3, enable Memory Type Range Register (MTRR) support by choosing the Enable MTRR for PentiumPro/II/III and newer AMD K6-2/3 systems option. MTRR support can enhance your video performance.
3. If you have a system with multiple CPUs and want to use multiple CPUs
using the symmetric multiprocessing support in the kernel, enable the
Symmetric multi-processing (SMP) support option.
When you use SMP support, you can’t use the advanced power manage-
ment option.
MEMORY MODEL This tells the new kernel how much RAM you have or plan on
adding in the future.
The Intel 32-bit address space enables a maximum of 4GB of memory to be used.
However, Linux can use up to 64GB by turning on Intel Physical Address Extension
(PAE) mode on Intel Architecture 32-bit (IA32) processors such as Pentium Pro,
Pentium II, and Pentium III. In Linux terms, memory above 4GB is high memory.
To enable appropriate memory support, follow these steps:
1. From the main menu, select Processor type and features submenu
2. Select High Memory Support option.
To determine which option is right for your system, you must know the
amount of physical RAM you currently have and will add (if any).
When the new kernel is built, memory should be auto-detected. To find how
much RAM is seen by the kernel, run cat /proc/meminfo, which displays output
as shown below.
In the preceding list, MemTotal shows the total memory seen by the kernel. In this case, it's 384060 kilobytes (384MB). Make sure your new kernel reports the amount of memory you have installed. If you see a very different number, try rebooting the kernel and supplying mem=nnnM at the boot prompt, where nnn is the amount of memory in MB. For example, if you have 2GB of RAM, you can enter mem=2048M at the LILO prompt, like this:
boot: linux mem=2048M
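To avoid typing the parameter at every boot, you can make it permanent with an append line in /etc/lilo.conf and then rerun /sbin/lilo; the value here is illustrative:

append="mem=2048M"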
DISK SUPPORT Hard disks are generally the limiting factor in a system’s perfor-
mance. Therefore, choosing the right disk for your system is quite important.
Generally, there are three disk technologies to consider:
◆ EIDE/IDE/ATA
◆ SCSI
SCSI rules in the server market. A server system without SCSI disks is unthinkable to me and many other server administrators.
◆ Fiber Channel
Fiber Channel disk is the newest disk technology; it isn't yet widely used, for reasons such as its extremely high price and the interconnectivity issues associated with fiber technology. However, Fiber Channel disks are taking market share from SCSI in the enterprise and in high-end storage area networks. If you need Fiber Channel disks, you need to consider a very high-end disk subsystem such as a storage area network (SAN) or a storage appliance.
Choosing a disk for a system (desktop or server) becomes harder due to the buzz-
words in the disk technology market. Table 2-1 explains common acronyms.
Most people either use IDE/EIDE hard disks or SCSI disks. Only a few keep both
types in the same machine, which isn’t a problem. If you only have one of these
two in your system, enable support for only the type you need unless you plan on
adding the other type in the future.
If you use at least one EIDE/IDE/ATA hard disk, follow these steps:
1. Select the ATA/IDE/MFM/RLL support option from the main menu and
enable the ATA/IDE/MFM/RLL support option by including it as a module.
2. Select the IDE, ATA, and ATAPI Block devices submenu and enable the
Generic PCI IDE chipset support option.
3. If your disk has direct memory access (DMA) capability, then:
■ Select the Generic PCI bus-master DMA support option.
■ Select the Use PCI DMA by default when available option to make use
of the direct memory access automatically.
You see a lot of options for chipset support. Unless you know your chipset
and find it in the list, ignore these options.
1. Select the SCSI support submenu and choose SCSI support from the sub-
menu as a module.
2. Select the SCSI disk support option as a module.
3. Select support for any other type of SCSI device you have, such as a tape drive or CD drive.
4. Select the SCSI low-level drivers submenu, and then select the appropriate
driver for your SCSI host adapter.
5. Disable Probe all LUNs because it can hang the kernel with some SCSI
hardware.
6. Disable Verbose SCSI error reporting.
7. Disable SCSI logging facility.
If you will use only one type of disk (either EIDE/IDE/ATA or SCSI), disabling kernel support for the other disk type saves memory.
PLUG AND PLAY DEVICE SUPPORT If you have Plug and Play (PNP) devices in
your system, follow these steps to enable PNP support in the kernel:
BLOCK DEVICE SUPPORT To enable support for block devices in the kernel, fol-
low these steps:
If you want to use RAM as a filesystem, RAM disk support isn't the best option. Instead, enable the Simple RAM-based filesystem support option under the File systems submenu.
A loopback device, such as loop0, enables you to mount an ISO 9660 image
file (CD filesystem), then explore it from a normal filesystem (such as ext2).
◆ If you connect your system to an Ethernet (10 or 100 Mbps), select the
Ethernet (10 or 100 Mbps) submenu, choose Ethernet (10 or 100 Mbps)
support, and implement one of these options:
■ If your network interface card vendor is listed in the Ethernet (10 or
100 Mbps) support menu, select the vendor from that menu.
■ If your PCI-based NIC vendor isn’t listed in the Ethernet (10 or 100
Mbps) support menu, select your vendor in the EISA, VLB, PCI and on-
board controllers option list.
If you don’t find your PCI NIC vendor in the Ethernet (10 or 100 Mbps) sup-
port menu or the EISA, VLB, PCI and on-board controllers option list, choose
the PCI NE2000 and clones support option.
■ If your ISA NIC vendor isn’t listed in the Ethernet (10 or 100 Mbps)
support menu, select your vendor in the Other ISA cards option.
If you don’t find your ISA NIC vendor in the Ethernet (10 or 100 Mbps) sup-
port menu or the Other ISA cards option list, choose the NE2000/NE1000
support option.
◆ If you have at least one gigabit (1000 Mbps) adapter, choose the Ethernet
(1000 Mbps) submenu and select your gigabit NIC vendor.
◆ If you have the hardware to create a wireless LAN, select the Wireless LAN
support and choose appropriate wireless hardware.
USB SUPPORT If you have at least one USB device to connect to your Linux sys-
tem, select the USB support and choose the appropriate options for such features as
USB audio/multimedia, modem, and imaging devices.
NETWORKING SUPPORT Even if you don't want to network the system, you must configure networking support from the General setup submenu using the Networking support option. (Some programs assume that the kernel has networking support. By default, networking support is built into the kernel.)
Check the Networking options submenu to confirm that these options are
enabled; enable them if they aren’t already enabled:
◆ TCP/IP networking
PCI SUPPORT Most modern systems use PCI bus to connect to many devices. If
PCI support isn’t enabled on the General setup submenu, enable it.
CONSOLE SUPPORT The system console is necessary for any Linux system that must be managed by a human, whether the system is a server, desktop, or laptop. To enable console support:
◆ Choose the Console drivers submenu, then select the VGA text console option.
◆ If you want to choose the video mode during bootup, apply these steps:
■ Select the Video mode selection support option
■ Enter the vga=ask option at the LILO prompt during the bootup process
You can add this option to the /etc/lilo.conf file and rerun LILO using the
/sbin/lilo command.
◆ Select the character devices submenu and enable Virtual terminals option.
Most users want to enable the Support for console on virtual terminal
option.
Pseudo filesystems such as /proc are used by the kernel and other programs. They should be enabled by default. To ensure that these filesystems are supported, select the File systems submenu and choose the filesystem types you need from the list.
PARALLEL PORT SUPPORT To use a parallel-port printer or other parallel-port devices, you must enable parallel port support from the Parallel port support submenu of the main menu. Follow these steps:
If you have audio/video capture hardware or radio cards, follow these steps to
enable support:
JOYSTICK SUPPORT Joystick support depends on the Input core support. Follow
these steps for joystick support:
1. Select Input core support submenu, then enable input core support.
2. Select the Character devices menu and choose Joystick support.
3. On the Joysticks submenu, choose the appropriate joystick controller for your joystick vendor.
◆ Select the General setup submenu and choose the Power Management
support option.
◆ If your system has Advanced Power Management BIOS, choose Advanced
Power Management BIOS support.
If you have PCMCIA network devices, follow these steps to support them:
1. Select the PCMCIA network device support option from the Network
device support submenu.
2. Select appropriate vendor from the list.
PPP SUPPORT Most desktop or laptop systems use the Point-to-Point Protocol
(PPP) for dialup network communication.
To enable PPP support, select the PPP (point-to-point protocol) support option
from the Network device support submenu.
SERVER OPTIONS
Usually, a server system doesn’t need support for such features as sound, power
management, multimedia, and infrared connectivity, so you shouldn’t enable any
of these features in the kernel.
A few very important kernel configuration options can turn your system into a
highly reliable server. These options are discussed in the following sections.
SOFTWARE RAID SUPPORT If you will use software RAID for your server, follow
these steps to enable it:
PSEUDO TERMINAL (PTY) SUPPORT If you use the server to enable many users to
connect via SSH or telnet, you need pseudo terminal (PTY) support. Follow these
steps:
1. Enable PTY support from the Character device submenu by selecting the
Maximum number of Unix98 PTYs in use (0-2048) option.
By default, the system has 256 PTYs. Each login requires a single PTY.
2. If you expect more than 256 simultaneous login sessions, set a value
between 257 and 2048.
Each PTY uses at least 2 MB of RAM. Make sure you have plenty of RAM for
the number of simultaneous login sessions you select.
REAL-TIME CLOCK SUPPORT FOR SMP SYSTEMS If you use multiple CPUs (that is, you enabled Symmetric Multi-Processing support), enable the enhanced Real Time Clock (RTC) so that the clock is set in an SMP-compatible fashion. To do so, enable the Enhanced Real Time Clock Support option in the Character devices submenu.
If you get any error messages from the preceding make dep command, you might have a source-distribution integrity problem. In such cases, you must download a new copy of the latest stable kernel source and reconfigure it from the beginning.
After you have run this command, you are ready to compile the kernel and its
modules.
The following sections explain how to compile both the kernel image and the
modules images.
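As a quick overview, the commands the following sections walk through are run from /usr/src/linux in this order:

make dep              # build source dependencies
make bzImage          # compile the compressed kernel image
make modules          # compile the kernel modules
make modules_install  # install the modules under /lib/modules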
COMPILING THE KERNEL IMAGE To create the kernel image file, run the make
bzImage command from /usr/src/linux as root.
Depending on your processor speed, the compile time can vary from a few
minutes to hours. On my Pentium III 500 MHz system with 384MB of RAM,
the kernel compiles in less than five minutes.
Once the make bzImage command is finished, a kernel image file called bzImage
is created in a directory specific to your system architecture. For example, an x86
system’s new kernel bzImage file is in /usr/src/linux/arch/i386/boot.
COMPILING AND INSTALLING THE MODULES During the kernel configuration, you set up at least one feature as a kernel module; therefore, you need to compile and install the modules.
Use the following commands to compile and install the kernel modules.
make modules
make modules_install
If you are compiling the same version of the kernel that is currently running
on your system, first back up your modules from /lib/modules/x.y.z
(where x.y.z is the version number of the current kernel). You can simply run
cp -r /lib/modules/x.y.z /lib/modules/x.y.z.current (by
replacing x.y.z with appropriate version number) to create a backup module
directory with current modules.
Once the preceding commands finish, all new modules are installed in a new subdirectory of the /lib/modules directory, named after the kernel version.
This is a very important step. You must take great care to ensure that you can still boot the old kernel if something goes wrong with the new kernel.
Now you can install the new kernel and configure LILO to boot either kernel.
cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz-2.4.1
CONFIGURING LILO
LILO is the boot loader program and it must be configured before you can boot the
new kernel.
Edit the LILO configuration file called /etc/lilo.conf as follows:
1. Copy the current lilo section that defines the current image and its
settings.
For example, Listing 2-1 shows a sample /etc/lilo.conf file with a single kernel definition. As it stands right now, LILO boots the kernel labeled linux (because default=linux is set).
2. Copy the following lines and append to the end of the current
/etc/lilo.conf file.
image=/boot/vmlinuz-2.4.0-0.99.11
label=linux
read-only
root=/dev/hda1
3. Change the image path to the new kernel image you copied. For example, if you copied the new kernel image /usr/src/linux/arch/i386/boot/bzImage to /boot/vmlinuz-2.4.1, then set image=/boot/vmlinuz-2.4.1.
4. Change the label for this new segment to linux2. The resulting file is shown in Listing 2-2.
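Listing 2-2 isn't reproduced here; a minimal /etc/lilo.conf along these lines might look like the following sketch, where the boot device and root partition are illustrative:

boot=/dev/hda
prompt
timeout=50
default=linux

image=/boot/vmlinuz-2.4.0-0.99.11
    label=linux
    read-only
    root=/dev/hda1

image=/boot/vmlinuz-2.4.1
    label=linux2
    read-only
    root=/dev/hda1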
Never experiment with new kernel from a remote location. Always restart
the system from the system console to load a new kernel for the first time.
1. Reboot the system from the console, using the /sbin/shutdown -r now
command.
During the reboot process, you see the lilo prompt.
2. At the lilo prompt, enter linux2.
With the new label linux2 associated with the new kernel, your system
attempts to load the new kernel. Assuming everything goes well, it should
boot up normally and the login prompt should appear.
3. At the login prompt, log in as root from the console.
4. When you are logged in, run the uname -a command, which should dis-
play the kernel version number along with other information.
Here’s a sample output:
Linux rhat.nitec.com 2.4.1 #2 SMP Wed Feb 14 17:14:02 PST
2001 i686 unknown
I marked the version number in bold. The #2 reflects the number of times
I built this kernel.
Run the new kernel for several days before making it the default for your system.
If the kernel runs for that period without problems — provided you are ready to
make this your default kernel — simply edit the /etc/lilo.conf file, change
default=linux to default=linux2, and rerun /sbin/lilo to reconfigure lilo.
Thousands of files can be open at once on a busy system. These steps allow extra filehandles to accommodate them:
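The original steps aren't shown above; on a 2.4-series kernel, the system-wide filehandle limit is typically raised through the proc interface (the value 65536 is illustrative):

echo 65536 > /proc/sys/fs/file-max
# or, to make it permanent, add this line to /etc/sysctl.conf:
# fs.file-max = 65536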
Running too many threads can exhaust the system's capacity for simultaneous processes. To set a per-process filehandle limit, follow these steps:
1. Edit the /etc/security/limits.conf file and add the following lines:
* soft nofile 1024
* hard nofile 8192
In the preceding lines, the soft and hard limits for open files (filehandles) are 1024 and 8192, respectively. To limit how many processes users can run (to about 8192 at most), add lines to the /etc/security/limits.conf file as shown below.
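Here is a sketch using the standard nproc item; the values are illustrative:

* soft nproc 4096
* hard nproc 8192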
This setting applies to both processes and the child threads that each
process opens.
You can also configure how much memory a user can consume by using soft
and hard limits settings in the same file. The memory consumption is con-
trolled using such directives as data, memlock, rss, and stack. You can also
control the CPU usage of a user. Comments in the file provide details on how
to configure such limits.
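For example, a line like the following would cap each user's resident set size at roughly 256MB; the rss value is in kilobytes and is purely illustrative:

* hard rss 262144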
Summary
Configuring a custom kernel tailors the kernel to your system's needs. A custom kernel is a great way to keep your system lean and mean, because it won't include unnecessary kernel modules or risk crashes due to untested code in the kernel.
Chapter 3
Filesystem Tuning
IN THIS CHAPTER
A WISE ENGINEER ONCE TOLD ME that anything you can see moving with your naked eye isn't fast enough. I like to spin that around and say that anything in your computer system that has moving parts isn't fast enough. Disks, with moving platters,
are the slowest devices, even today. The filesystems that provide a civilized inter-
face to your disks are, therefore, inherently slow. Most of the time, the disk is the
bottleneck of a system.
In this chapter, you tune disks and filesystems for speed, reliability, and easy
administration.
SCSI PERFORMANCE
If you have a modern, ultra-wide SCSI disk set up for your Red Hat Linux system,
you are already ahead of the curve and should be getting good performance from
your disks. If not (even if so), the difference between SCSI and IDE is useful to
explore:
◆ SCSI disk controllers handle most of the work of transferring data to and from the disks; IDE disks are controlled directly by the CPU itself. On a busy system, SCSI disks don't put as much load on the CPU as IDE drives do.
◆ SCSI disks have wider data-transfer capabilities, whereas IDE disks are still connected to the system via a 16-bit bus.
If you need high performance, SCSI is the way to go. Buy brand-name SCSI adapters and ultra-wide, 10K-RPM or faster SCSI disks and you have done pretty much all you can do to improve your disk subsystem.
Whether you choose SCSI or IDE disks, multiple disks are a must if you are seri-
ous about performance.
◆ At minimum, use two disks — one for operating systems and software, the
other for data.
◆ For Web servers, I generally recommend a minimum of three disks. The
third disk is for the logs generated by the Web sites hosted on the
machine. Keeping disk I/O spread across multiple devices minimizes wait
time.
Of course, if you have the budget for it, you can use fiber channel disks or a
storage-area network (SAN) solution. Enterprises with high data-storage
demands often use SANs. A less expensive option is a hardware/software
RAID solution, which is also discussed in this chapter.
EIDE PERFORMANCE
You can get better performance from your modern EIDE drive. Before doing any tinkering and tuning, however, you must determine how well your drive is performing. You need a tool to measure the performance of your disk subsystem. The hdparm tool is just right for the job; you can download its source distribution from metalab.unc.edu/pub/Linux/system/hardware/ and compile and install it as follows:
3. Change to the newly created subdirectory and run the make install command to compile and install the hdparm binary and its manual page. The binary is installed in the /usr/local/sbin directory by default. It's called hdparm.
Back up your data before using hdparm. Because hdparm enables you to change the behavior of your IDE/EIDE disk subsystem — and Murphy's Law always lurks in the details of any human undertaking — a misconfiguration could cause your system to hang. To make such an event less likely, experiment with hdparm in single-user mode before you rely on it. You can reboot your system and force it into single-user mode by entering linux single at the lilo prompt during bootup.
After you have installed the hdparm tool, you are ready to investigate the performance of your disk subsystem. Assuming your IDE or EIDE hard disk is /dev/hda, run the following command to see the state of your hard disk configuration:
hdparm /dev/hda
/dev/hda:
multcount = 0 (off)
I/O support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 0 (off)
keepsettings = 0 (off)
nowerr = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 2494/255/63, sectors = 40079088, start = 0
As you can see, almost everything in this default mode is turned off; changing
some defaults may enhance your disk performance. Before proceeding, however, we
need more information from the hard disk. Run the following command:
hdparm -i /dev/hda
/dev/hda:
Model=WDC WD205AA, FwRev=05.05B05, SerialNo=WD-WMA0W1516037
Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
RawCHS=16383/16/63, TrkSize=57600, SectSize=600, ECCbytes=40
BuffType=DualPortCache, BuffSize=2048kB, MaxMultSect=16, MultSect=16
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=40079088
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes: pio0 pio1 pio2 pio3 pio4
DMA modes: mdma0 mdma1 *mdma2 udma0 udma1 udma2 udma3 udma4
The preceding command displays the drive identification information (if any)
that was available the last time you booted the system — for example, the model,
configuration, drive geometry (cylinders, heads, sectors), track size, sector size,
buffer size, supported DMA mode, and PIO mode. Some of this information will
come in handy later; you may want to print this screen so you have it in hard copy.
For now, test the disk subsystem by using the following command:
hdparm -Tt /dev/hda
/dev/hda:
Timing buffer-cache reads: 128 MB in 1.01 seconds = 126.73 MB/sec
Timing buffered disk reads: 64 MB in 17.27 seconds = 3.71 MB/sec
The numbers you see here reflect the untuned state of your disk subsystem. The -T option tells hdparm to test the cache subsystem (that is, the memory, CPU, and buffer cache). The -t option tells hdparm to report stats on the disk (/dev/hda), reading data not in the cache. Run this command a few times and compute an average of the MB-per-second figures reported for your disk. This is roughly the performance state of your disk subsystem. In this example, the 3.71MB per second is the read performance, which is low.
Now improve the performance of your disk. Go back to the hdparm -i /dev/hda command output and look for the MaxMultSect value. In this example, it's 16. Remember that the hdparm /dev/hda command showed the multcount value to be 0 (off). This means that multiple-sector mode (that is, IDE block mode) is turned off.
The multiple sector mode is a feature of most modern IDE hard drives. It enables
the drive to transfer multiple disk sectors per I/O interrupt. By default, it’s turned
off. However, most modern drives can perform 2, 4, 8, or 16 sector transfers per I/O
interrupt. If you set this mode to the maximum possible value for your drive (the MaxMultSect value), you should see your system's throughput increase from 5 to 50 percent (or more) — while reducing the operating-system overhead by 30 to 50 percent. In this example, the MaxMultSect value is 16, so use the -m option of the hdparm tool to set it and see how performance increases. Run the following command:
hdparm -m16 /dev/hda
Running the performance test using the hdparm -tT /dev/hda command demonstrates the change. For the example system, the change looks like this:
/dev/hda:
Timing buffer-cache reads: 128 MB in 1.01 seconds = 126.73 MB/sec
Timing buffered disk reads: 64 MB in 16.53 seconds = 3.87 MB/sec
The performance of the drive has gone up from 3.71MB per second to 3.87MB per second — not much, but not bad. Your drive can probably do better than that if your disk and controller are fairly new; you may be able to achieve 20 to 30MB per second.
If hdparm reported that your system's I/O support setting is 16-bit, and you have a fairly new (one- or two-year-old) disk subsystem, try enabling 32-bit I/O support. You can do so by using the -c option for hdparm and selecting one of its three values:
◆ 0 — disables 32-bit I/O support (the 16-bit default)
◆ 1 — enables 32-bit I/O support
◆ 3 — enables 32-bit I/O support with a special synchronization sequence (the most compatible choice for most chipsets)
For example:
hdparm -m16 -c3 /dev/hda
The command uses the -m16 option (mentioned earlier) and adds -c3 to enable 32-bit I/O support. Now running the program with the -t option shows the following results:
/dev/hda:
Timing buffered disk reads: 64 MB in 8.96 seconds = 7.14 MB/sec
The performance of the disk subsystem has improved — practically doubled — and
you should be able to get even more.
◆ If your drive supports direct memory access (DMA), you may be able to
use the -d option, which enables DMA mode.
◆ Typically, the -d1 -X34 options or the -d1 -X66 options are used together to apply the DMA capabilities of your disk subsystem.
■ The first set of options (-d1 -X34) enables multiword DMA mode2 for the drive.
■ The second set of options (-d1 -X66) enables UltraDMA mode2 for drives that support the UltraDMA burst timing feature.
These options can dramatically increase your disk performance. (I have
seen 20MB per second transfer rate with these options on various new
EIDE/ATA drives.)
◆ -u1 can boost overall system performance by enabling the disk driver to
unmask other interrupts during the processing of a disk interrupt. That
means the operating system can attend to other interrupts (such as the
network I/O and serial I/O) while waiting for a disk-based data transfer to
finish.
hdparm offers many other options — but be careful with them. Most of them can corrupt data if used incorrectly. Always back up your data before playing with the hdparm tool. Also, after you have found a set of options that works well, you should put the hdparm command with those options in the /etc/rc.d/rc.local script so that they are set every time you boot the system. For example, I have added such a line to the /etc/rc.d/rc.local file on one of my newer Red Hat Linux systems.
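The exact line isn't reproduced above; one plausible example, combining the options discussed in this section (treat it as a sketch to adapt to your own drive):

/usr/local/sbin/hdparm -m16 -c3 -u1 /dev/hda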
3. After you know the average file size on the filesystem, you can determine whether to change the block size. Say you find out your average file size is 8192, which is 2 × 4096. You can change the block size to 4096 so that a typical file fits in two blocks and the ext2 filesystem manages its space more efficiently.
4. Unfortunately, you can't alter the block size of an existing ext2 filesystem without rebuilding it. So you must back up all your files from the filesystem and then rebuild it using the mke2fs command with the -b option. For example, if you have backed up the /dev/hda7 partition and want to change the block size to 4096, the command would look like this:
/sbin/mke2fs -b 4096 /dev/hda7
Changing the block size to a higher number than the default (1024) may yield a significant gain in raw reading speed by reducing the number of seeks, plus a potentially faster fsck session during boot and less file fragmentation.
However, increasing the block size blindly (that is, without knowing the average file size) can result in wasted space. For example, if the average file size is 2010 bytes on a system with 4096-byte blocks, each file wastes on average 4096 – 2010 = 2086 bytes! Know your file size before you alter the block size.
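One way to estimate the average file size, assuming GNU find is available (here for /home; substitute the filesystem you plan to rebuild):

find /home -type f -printf "%s\n" | \
    awk '{ total += $1; n++ } END { if (n) printf "%d files, average %.0f bytes\n", n, total/n }'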
After you have installed the e2fsprogs utilities, you can start using them as discussed in the following section.
/sbin/tune2fs -l /dev/hda7
The very first setting I would like you to understand is the error behavior. This setting dictates how the kernel behaves when errors are detected on the filesystem. There are three possible values for this setting:
◆ Continue
◆ Remount read-only
◆ Panic
The next setting, mount count, is the number of times you have mounted this filesystem.
The next setting shows the maximum mount count (20), which means that after that many read/write-mode mounts, the filesystem is subject to an fsck check during the next boot cycle.
The last checked setting shows the date on which the last fsck check was performed. The check interval setting shows the time allowed between two consecutive fsck sessions; it is used only if the maximum read/write mount count isn't reached during that interval. If you don't unmount the filesystem for six months, then even though the mount count is only 2, an fsck check is forced because the filesystem exceeded the check interval. The date of the next fsck check is shown in the next check after setting. The reserved block UID and GID settings show which user and group have ownership of the reserved portion of this filesystem. By default, the reserved portion is to be used by the superuser (UID = 0, GID = 0).
The e2fsck utility asks you questions before making repairs; you can avoid them by using the -p option.
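For example, to check the example partition non-interactively (the filesystem must be unmounted first):

umount /dev/hda7
/sbin/e2fsck -p /dev/hda7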
A journaling filesystem doesn't log data in the log; it simply logs metadata related to disk operations, so replaying the log only makes the filesystem consistent from the structural-relationship and resource-allocation points of view. Some small data loss is possible. Also, logging is subject to media errors like all other disk activity, so if the media is bad, journaling won't help much.
Journaling filesystems are new to Linux but have been around on other platforms. Several flavors of experimental journaling filesystems are available today:
◆ JFS, ported from AIX, IBM's own operating-system platform, is still not ready for production use. You can find more information on JFS at http://oss.software.ibm.com/developerworks/opensource/jfs.
◆ Red Hat's own ext3 filesystem, which is ext2 plus journaling capabilities. It's also not ready for prime time. You can download the alpha release of ext3 from the ftp site at ftp://ftp.linux.org.uk/pub/linux/sct/fs/jfs/.
◆ ReiserFS, developed by Hans Reiser of Namesys, is currently included in the Linux kernel source distribution. It has been used more widely than the other journaling filesystems for Linux; so far, it is leading the journaling-filesystem arena for Linux. Hans Reiser has secured funding from commercial companies such as MP3, BigStorage.com, SuSE, and Ecila.com — companies that all need better, more flexible filesystems and can immediately channel early beta-user experience back to the developers. You can find more information on ReiserFS at http://www.namesys.com. I discuss how you can use ReiserFS today in a later section.
◆ XFS journaling filesystem developed by Silicon Graphics, Inc. (SGI).
Because ReiserFS is included with Linux kernel 2.4.1 (or above), I discuss how
you can use it in the following section.
As of this writing, the ReiserFS filesystem can't be used with NFS without patches, which aren't yet officially available for kernel 2.4.1 or above.
Don’t choose the Have reiserfs do extra internal checking option under
ReiserFS support option. If you set this to yes, then reiserfs performs exten-
sive checks for internal consistency throughout its operation, which makes it
very slow.
5. Ensure that all other kernel features you use are also selected as usual (see Chapter 2, Kernel Tuning, for details).
6. Exit the main menu and save the kernel configuration.
7. Run the make dep command, as suggested by the menuconfig program.
8. Run make bzImage to create the new kernel. Then run make modules and make modules_install to install the new modules in the appropriate location.
9. Change to the arch/i386/boot directory. Note: if your hardware architecture isn't Intel x86, you must replace i386 (and may need further instructions from a kernel HOWTO document) to compile and install your flavor of the kernel. I assume that most readers are i386-based.
11. Run the /sbin/lilo command to reconfigure LILO, and reboot your system. At the lilo prompt, enter linux2 and boot the new kernel. If you have any problem, you should be able to reboot to your standard linux kernel, which should be the default automatically.
12. After you have booted the new kernel, you are ready to use ReiserFS (reiserfs).
Using ReiserFS
Because ReiserFS (reiserfs) is still in the "experimental" category, I highly recommend restricting it to a non-critical part of your system. Ideally, dedicate an entire disk, or at least one partition, to ReiserFS; use it and see how you like it.
To use ReiserFS with a new partition called /dev/hda7, simply do the following:
1. As root, ensure that the partition is set as Linux native (83) by using
fdisk or another disk-partitioning tool.
3. Create a mount point for the new filesystem. For example, I can create a
mount point called /jfs, using the mkdir /jfs command.
4. Mount the filesystem, using the mount -t reiserfs /dev/hda7 /jfs
command. Now you can access it from /jfs mount point.
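If you want the new filesystem mounted automatically at boot, you can add an /etc/fstab line such as this (using the mount point created above):

/dev/hda7    /jfs    reiserfs    defaults    0    0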
Benchmarking ReiserFS
To see how a journaling filesystem stacks up against the ext2 filesystem, here’s a
little benchmark you can do on your own.
Now you are ready to run the benchmark test; run the benchmark script from the /tmp directory as root.
You are asked to confirm that you want to lose all data in /dev/hda7. Because
you have already emptied this partition for testing, specify yes and continue. This
test creates 100,000 files that range in size from 1K to 4K, in both the ReiserFS
(reiserfs) and ext2 filesystems by creating each of these two filesystem in
/dev/hda7 in turn. The results are recorded in the /tmp/log file. Here is a sample
/tmp/log file:
The report shows that to create 100,000 files of size 1K–4K, ReiserFS (reiserfs) took 338.68 real-time seconds while ext2 took 3,230.40 real-time seconds. The performance difference is impressive.
Although journaling-filesystem support is very new to Linux, it has gotten a lot of attention from an industry interested in using Linux in the enterprise, so journaling filesystems will mature quickly. I recommend that you use this flavor of journaling filesystem on an experimental level and become accustomed to its sins and fancies.
Now let's look at another enterprising effort in Linux disk management, called Logical Volume Management, or LVM for short. LVM together with journaling filesystems will ensure Linux's stake in the enterprise-computing world. The good news is that you don't need the budget of a large enterprise to get the high reliability and flexibility of an LVM-based disk subsystem. Let's see how you can use LVM today.
To ensure that LVM support is initialized at boot, link the LVM init script into
the appropriate runlevel directory as start and kill scripts. For example:
ln -s /etc/rc.d/init.d/lvm /etc/rc.d/rc3.d/S25lvm
ln -s /etc/rc.d/init.d/lvm /etc/rc.d/rc3.d/K25lvm
4. To confirm that the volume group is created using the /dev/hda7 physical
volume, run the /sbin/pvdisplay /dev/hda7 command to display stats
as shown below:
--- Physical volume ---
PV Name /dev/hda7
VG Name big_disk
PV Size 12.85 GB / NOT usable 2.76 MB [LVM: 133 KB]
PV# 1
PV Status available
Allocatable yes
Cur LV 0
PE Size (KByte) 4096
Total PE 3288
Free PE 3288
Allocated PE 0
PV UUID 2IKjJh-MBys-FI6R-JZgl-80ul-uLrc-PTah0a
As you can see, the VG Name (volume group name) for /dev/hda7 is
big_disk, which is exactly what we want. You can run the same command
for /dev/hdc1 as shown below:
--- Physical volume ---
PV Name /dev/hdc1
VG Name big_disk
PV Size 3.91 GB / NOT usable 543 KB [LVM: 124 KB]
PV# 2
PV Status available
Allocatable yes
Cur LV 0
PE Size (KByte) 4096
Total PE 1000
Free PE 1000
Allocated PE 0
PV UUID RmxH4b-BSfX-ypN1-cfwO-pZHg-obMz-JKkNK5
5. You can also display the volume group information by using the
/sbin/vgdisplay command. In the vgdisplay report, the total volume group
size (VG Size) is roughly the sum of the two physical volumes we added to it.
6. Run the /sbin/lvcreate -L10G -nvol1 big_disk command to create a
10GB logical volume called /dev/big_disk/vol1, using the big_disk volume
group.
If you want disk striping, use the -i option to specify the number of physical
volumes across which to scatter the logical volume, and the -I option to
specify the number of kilobytes for the granularity of the stripes. The stripe
size must be 2^n (n = 0 to 7). For example, to create a striped version of the
logical volume (using the two physical volumes you added to the volume group
earlier), you can run the /sbin/lvcreate -i2 -I4 -L10G -nvol1
big_disk command. I don't recommend striping, because currently you
can't add new physical volumes to a striped logical volume, which sort of
defeats the purpose of LVM.
You are all set with a new logical volume called vol1. The LVM package includes
a set of tools to help you manage your volumes.
◆ The /sbin/pvscan utility enables you to list all physical volumes in your
system.
◆ The /sbin/pvchange utility enables you to change attributes of a physical
volume.
◆ The /sbin/pvcreate utility enables you to create a new physical volume.
◆ The /sbin/pvmove utility enables you to move data from one physical
volume to another within a volume group.
For example, say that you have a logical volume called vol1 in a volume
group called big_disk, which has two physical volumes, /dev/hda7 and
/dev/hdc1. You want to move data from /dev/hda7 to /dev/hdc1 and
replace /dev/hda7 with a new disk (or partition). In such a case, first
ensure that /dev/hdc1 has enough space to hold all the data from
/dev/hda7. Then run the /sbin/pvmove /dev/hda7 /dev/hdc1 command.
◆ The /sbin/vgdisplay utility enables you to list all the volume groups in
your system.
◆ The /sbin/vgcfgbackup utility backs up a volume group descriptor area.
◆ The /sbin/vgscan utility scans all disks for volume groups and also
builds the /etc/lvmtab and other files in /etc/lvmtab.d directory,
which are used by the LVM module.
◆ The /sbin/vgsplit utility enables you to split a volume group.
1. su to root and run /sbin/pvscan to view the state of all your physical
volumes. Here’s a sample output.
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- ACTIVE PV "/dev/hda7" of VG "big_disk" [12.84 GB / 2.84 GB free]
pvscan -- ACTIVE PV "/dev/hdc1" of VG "big_disk" [3.91 GB / 3.91 GB free]
pvscan -- total: 2 [16.75 GB] / in use: 2 [16.75 GB] / in no VG: 0 [0]
As you can see here, the total volume group size is about 16 GB.
3. Using the fdisk utility, change the new partition’s system ID to 8e (Linux
LVM). Here’s a sample /sbin/fdisk /dev/hdc session on my system.
Command (m for help): p
Disk /dev/hdc: 255 heads, 63 sectors, 1583 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot Start End Blocks Id System
/dev/hdc1 1 510 4096543+ 8e Linux LVM
/dev/hdc2 511 1583 8618872+ 83 Linux
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)
Command (m for help): p
Disk /dev/hdc: 255 heads, 63 sectors, 1583 cylinders
Units = cylinders of 16065 * 512 bytes
Device Boot Start End Blocks Id System
/dev/hdc1 1 510 4096543+ 8e Linux LVM
/dev/hdc2 511 1583 8618872+ 8e Linux LVM
Command (m for help): v
62 unallocated sectors
Command (m for help): w
The partition table has been altered!
After you create a physical volume on the new partition with the /sbin/pvcreate
/dev/hdc2 command and add it to the volume group with the /sbin/vgextend
big_disk /dev/hdc2 command, the vgdisplay report shows that the volume group
size has increased to about 25GB, because we added approximately 8GB to the
existing 16GB of volume space.
7. You must unmount the logical volumes that use the volume group. In my
example, I can run umount /dev/big_disk/vol1 to unmount the logical
volume that uses the big_disk volume group.
If you get a device busy error message when you try to unmount the
filesystem, you are either inside the filesystem mount point or you have at
least one user (or program) currently using the filesystem. The best way to
solve such a scenario is to take the system down to single-user mode from
the system console, using the /etc/rc.d/rc 1 command and staying out
of the mount point you are trying to unmount.
8. Increase the size of the logical volume. If the new disk partition is (say)
8GB and you want to extend the logical volume by that amount, do so
using the /sbin/lvextend -L +8G /dev/big_disk/vol1 command. You
should see output like the following:
lvextend -- extending logical volume "/dev/big_disk/vol1" to 18 GB
lvextend -- doing automatic backup of volume group "big_disk"
lvextend -- logical volume "/dev/big_disk/vol1" successfully extended
9. After the logical volume has been successfully extended, resize the filesystem
accordingly:
◆ If you use a reiserfs filesystem, you can run
/sbin/resize_reiserfs -f /dev/big_disk/vol1
◆ If you use an ext2 filesystem for the logical volume, you can resize it
with the resize2fs utility; for example, run
/sbin/resize2fs /dev/big_disk/vol1
Every time I reduced a logical volume to a smaller size, I had to recreate the
filesystem. All data was lost. The lesson? Always back up the logical volume
first.
To remove a physical volume such as /dev/hda7 from a volume group, follow
these steps:
1. Move the data on the physical volume /dev/hda7 to another disk or
partition in the same volume group. If /dev/hdc1 has enough space to hold
the data, you can simply run the /sbin/pvmove /dev/hda7 /dev/hdc1
command to move the data. If you don't have the space in either location,
you must add a disk to replace /dev/hda7 if you want to save the data.
2. Remove the physical volume /dev/hda7 from the volume group, using the
following command:
/sbin/vgreduce big_disk /dev/hda7
I haven't worked much with software RAID, because I have used hardware RAID
devices extensively and found them to be very suitable solutions. In almost all
situations where RAID is a solution, someone is willing to pay for the hardware.
Therefore, I can't recommend software RAID as a tested solution with anywhere
near the confidence I have in hardware RAID.
Many such hardware RAID devices can be managed using a Web interface. They
are good for small- to mid-range organizations and are often very easy to
configure and manage.
1. Get the latest Linux kernel source from www.kernel.org and then (as
root) extract it into the /usr/src/linux-version directory (where version
is the current version of the kernel). Here I assume this to be 2.4.1.
2. Select the File systems submenu. Using the spacebar, select Simple
RAM-based file system support to be included as a kernel module and
exit the submenu.
3. Ensure that all other kernel features that you use are also selected as usual
(see “Tuning the kernel” for details).
4. Exit the main menu and save the kernel configuration.
5. Run the make dep command as suggested by the menuconfig program.
6. Run make bzImage to create the new kernel. Then run make modules and
make modules_install to install the new modules in the appropriate
locations.
7. Change directory to the arch/i386/boot directory. Note that if your hardware
architecture isn't Intel, you must replace i386 and possibly need further
instructions from a kernel HOW-TO document to compile and install
your flavor of the kernel. I assume that most readers are i386-based.
8. Copy the bzImage to /boot/vmlinuz-2.4.1 and edit the
/etc/lilo.conf file to include a new configuration such as the
following:
image=/boot/vmlinuz-2.4.1
label=linux3
read-only
root=/dev/hda1
9. Run the /sbin/lilo command to reconfigure LILO and reboot your system.
At the lilo prompt, enter linux3 and boot the new kernel. If you
have any problem, you should be able to reboot to your standard Linux
kernel, which should still be the default automatically.
10. After you have booted the new kernel, you are now ready to use the
ramfs capability. Create a directory called ramdrive by using the mkdir
/ramdrive command.
11. Mount the ramfs filesystem by using the mount -t ramfs none
/ramdrive command.
When the system is rebooted or you unmount the filesystem, all contents
are lost, which is why ramfs should be used only as temporary space for
high-speed access. Because ramfs is really not a block device, such programs
as df and du can't see it. You can verify that you are really using RAM by
running the cat /proc/mounts command and looking for an entry such as the
following:
none /ramdrive ramfs rw 0 0
You can specify options using the -o option when mounting the filesystem,
just as with a regular disk-based filesystem. For example, to mount the
ramfs filesystem as read-only, you can use the -o ro option. You can also
specify special options such as maxsize=n, where n is the number of kilobytes
to allocate for the filesystem in RAM; maxfiles=n, where n is the maximum
number of files allowed in the filesystem; and maxinodes=n, where n is the
maximum number of inodes (the default is 0 = no limit).
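For example, here is a sketch that caps the RAM filesystem at 16MB:
mount -t ramfs none /ramdrive -o maxsize=16384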
If you run a Web server, you should find many uses for a RAM-based filesystem.
Elements such as common images and files of your Web site that aren’t too big can
be kept in the ramfs filesystem. You can write a simple shell script to copy the con-
tents from their original location on each reboot. Listing 3-3 creates a simple script
for that.
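Here is a minimal sketch of such a script, assuming the defaults used in the
following steps (source files in /www/commonfiles, a /ram mount point, and a
Web server running as user and group httpd):
#!/bin/sh
# Copy frequently accessed Web files into a ramfs filesystem at boot.
ORIG_DIR=/www/commonfiles   # disk-based source of the files
RAM_DIR=/ram/commonfiles    # destination inside the ramfs mount
USER=httpd                  # user your Web server runs as
GROUP=httpd                 # group your Web server runs as

# Mount a ramfs filesystem on /ram (its contents vanish on reboot)
/bin/mount -t ramfs none /ram

# Copy the files into RAM and fix ownership for the Web server
mkdir -p $RAM_DIR
cp -r $ORIG_DIR/* $RAM_DIR
chown -R $USER:$GROUP $RAM_DIR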
3. Create a directory called ram using the mkdir /ram command. If you keep
the files you want to load in RAM in any other location than /www/commonfiles,
then modify the value for the ORIG_DIR variable in the script.
For example, if your files are in the /www/mydomain/htdocs/common
directory, then set this variable to this directory.
4. If you run your Web server using any other username and group than
httpd, then change the USER and GROUP variable values accordingly. For
example, if you run Apache as nobody (user and group), then set
USER=nobody and GROUP=nobody.
5. Assuming you use Apache Web server, create an alias in your httpd.conf
file such as the following:
Alias /commonfiles/ “/ram/commonfiles/”
Whenever the Apache Web server needs to access /commonfiles/*, it now uses
the version in RAM, which should be substantially faster than the files stored in
the original location. Remember, the RAM-based version disappears whenever you
reboot or unmount the filesystem, so never update anything there unless you also
copy the contents back to a disk-based directory.
Summary
In this chapter you learned how to tune your disks and filesystems: how to tune
your IDE/EIDE drives for better performance, and how to enhance ext2
performance while taking advantage of journaling filesystems like ReiserFS,
logical volume management, and RAM-based filesystems.
Part II
Network and Service Performance
CHAPTER 4
Network Performance
CHAPTER 5
Web Server Performance
CHAPTER 6
E-Mail Server Performance
CHAPTER 7
NFS and Samba Server Performance
Chapter 4
Network Performance
IN THIS CHAPTER
◆ Using IP accounting
THE NETWORK DEVICES (such as network interface cards, hubs, switches, and routers)
that you choose for your network have a big effect on the performance of your net-
work, so it's important to choose appropriate network hardware. Because network
hardware is cheap today, using high-performance PCI-based NICs or 100Mbps
switches is no longer a pipe dream for network administrators. Like the hardware,
high-speed bandwidth is also reasonably cheap. Having a T1 connection at the
office is no
longer a status symbol for network administrators. Today, burstable T3 lines are
even available in many places. So what is left for network tuning? Well, the very
design of the network of course! In this chapter I discuss how you can design high-
performance networks for both office and public use. However, the Ethernet Local
Area Network (LAN) tuning discussion is limited to small- to mid-range offices
where the maximum number of users is fewer than a thousand or so. For larger-
scale networks you should consult books that are dedicated to large networking
concepts and implementations. This chapter also covers a Web network design that
is scalable and can perform well under heavy load.
[Figure 4-1: A small LAN; a handful of PCs connected to a single Ethernet hub]
As the company grows bigger, the small LAN starts to look like the one shown
in Figure 4-2.
[Figure 4-2: A growing LAN; several cascaded Ethernet hubs connecting PCs and Macs]
As the company prospers, the number of people and computers grows, and
eventually you have a network like the one shown in Figure 4-3.
In my experience, when a network of cascading Ethernet hubs reaches about 25
or more users, typically it has enough diverse users and tasks that performance
starts to degrade. For example, I have been called in many times to analyze net-
works that started degrading after adding only a few more machines. Often those
“few more machines” were run by “network-heavy” users such as graphic artists
who shared or downloaded huge art and graphics files throughout the day as part
of their work or research. Today it’s even easier to saturate a 10Mb Ethernet with
live audio/video feeds (or other apps that kill network bandwidth) that office users
sometimes run on their desktops. So it’s very important to design a LAN that can
perform well under a heavy load so everyone’s work gets done fast.
Although commonly used, Ethernet hubs are not the best way to expand a LAN
to support users. Network expansions should be well planned and implemented,
using appropriate hardware; the following sections discuss how you can do that.
[Figure 4-4: Departmental networks (192.168.1.0, 192.168.2.0, and 192.168.3.0)
interconnected by a network gateway with one Ethernet interface (eth0, eth1,
eth2) per department]
Here the departments are segmented into different IP networks and interconnected
by a network gateway. This gateway can be a Red Hat Linux system with IP
forwarding turned on and a few static routing rules to implement the following
standard routing policy:
Here’s an example: John at the marketing department wants to access a file from
Jennifer, who works in the same department. When John accesses Jennifer’s shared
drive, the IP packets his system transmits and receives aren’t to be seen by anyone
in the management/administration or development/production departments. So if
the file is huge, requiring three minutes to transfer, no one in the other department
suffers network degradation. Of course marketing personnel who are accessing the
network at the time of the transfer do see performance degrade. But you can reduce
such degradation by using switching Ethernet hardware instead of simple Ethernet
hubs (I cover switches in a later section).
The network gateway computer in Figure 4-4 has three Ethernet interface (NIC)
cards; each of these cards is connected to a different department (that is, network).
The marketing and sales department is on a Class C (192.168.3.0) network, which
means this department can have 254 host computers for their use. Similarly, the
other departments have their own Class C networks. Here are the steps needed to
create such a setup.
There are many ways to do this configuration. For example, instead of using
different Class C networks to create departmental segments, you can use a
set of Class B subnets (or even a set of Class C subnets, depending on the
size of your departments). In this example, I use different Class C networks to
make the example a bit simpler to understand.
2. On your Red Hat Linux system designated to be the network gateway, turn
on IP forwarding.
You may already have IP forwarding turned on; to check, run the cat
/proc/sys/net/ipv4/ip_forward command. 1 means that IP forward-
ing is on and 0 means that IP forwarding is turned off.
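For example, here's a sketch of turning IP forwarding on (the echo command
takes effect immediately; the sysctl.conf line makes it persistent across
reboots):
# enable IP forwarding right away
echo 1 > /proc/sys/net/ipv4/ip_forward
# or add this line to /etc/sysctl.conf for a permanent setting:
# net.ipv4.ip_forward = 1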
4. Connect the appropriate network to the proper Ethernet NIC on the gate-
way computer. The 192.168.1.0 network should be connected to the
eth0, 192.168.2.0 should be connected to eth1, and 192.168.3.0
should be connected to eth2. Once connected, you can simply restart the
machine — or bring up the interfaces, using the following commands from
the console.
/sbin/ifconfig eth0 up
/sbin/ifconfig eth1 up
/sbin/ifconfig eth2 up
5. Set the default gateway for each of the networks. For example, all the
computers in the 192.168.1.0 network should set their default route to
be 192.168.1.254, which is the IP address associated with the eth0
device of the gateway computer.
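For example, on a Linux host in the 192.168.1.0 network, a sketch of setting
that default route with the standard route utility looks like this:
/sbin/route add default gw 192.168.1.254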
That's all there is to isolating each department into its own network. Now
traffic from one network flows to another only when needed. This leaves
each department's bandwidth available for its own use most of the time.
[Figure 4-5: Expanding a LAN with an Ethernet switch; each hub connected to a
switch port forms its own segment]
One common use for an Ethernet switch is to break a large network into seg-
ments. While it’s possible to attach a single computer to each port on an
Ethernet switch, it’s also possible to connect other devices such as a hub. If
your network is large enough to require multiple hubs, you could connect
each of those hubs to a switch port so that each hub is a separate segment.
Remember that if you simply cascade the hubs directly, the combined net-
work is a single logical Ethernet segment.
Fast Ethernet with switching hardware can bring a high degree of performance
to your LAN. Consider this option if possible. If you have multiple departments
to interconnect, consider an even faster solution between the departments.
The emerging gigabit Ethernet is very suitable for connecting local area networks
to form a wide area network (WAN).
[Figure 4-6: Four locations (A, B, C, and D) interconnected by a gigabit or
fiber switched backbone]
Here the four locations A, B, C, and D are interconnected using either a gigabit
or fiber switched backbone. A large bandwidth capacity in the backbone has two
benefits:
If your Web servers depend on back-end NFS and database servers, NFS and
database traffic should be isolated so that they don't compete with Web traffic.
Figure 4-7 shows a modified network diagram for such a network.
[Figure 4-7: A Web network with isolated back-end traffic; each Web server has
one NIC on a switch behind the load-balancing hardware and Internet routers,
and a second NIC on a switch shared with the database and NFS servers]
Here the database and the NFS server are connected to a switch that is connected
to the second NIC of each Web server. The other NIC of each Web server is connected
to a switch that is in turn connected to the load balancing hardware. Now, when a
Web request comes to a Web server, it’s serviced by the server without taking away
from the bandwidth of other Web servers. The result is a tremendous increase in net-
work efficiency, which trickles down to more positive user experience.
After you have a good network design, your tuning focus should shift to the
applications and services that you provide. In many cases, depending on your
network load, you may have to consider deploying multiple servers of the same
kind to implement a more responsive service. This is certainly true for the Web. In
the following section I show you a simple-to-use load-balancing scheme using a
DNS trick.
www1 IN A 192.168.1.10
www2 IN A 192.168.1.20
www IN CNAME www1
www IN CNAME www2
Restart your name server and ping the www.yourdomain.com host. You see the
192.168.1.10 address in the ping output. Stop and restart pinging the same host,
and you’ll see the second IP address being pinged, because the preceding configura-
tion tells the name server to cycle through the CNAME records for www. The www.
yourdomain.com host is both www1.yourdomain.com and www2.yourdomain.com.
Now, when someone enters www.yourdomain.com, the name server sends the
first address once, then sends the second address for the next request, and keeps
cycling between these addresses.
One of the disadvantages of the round-robin DNS trick is that the name server
can’t know which system is heavily loaded and which isn’t — it just blindly cycles.
If a server crashes or becomes unavailable for some reason, the round-robin DNS
trick still returns the broken server's IP on a regular basis. This could be chaotic,
because some people get to the site and some don't.
If your load demands better management and your server's health is essential to
your operation, then your best choice is a hardware solution such as Web
Director (www.radware.com/), Ace Director (www.alteon.com/), or Local Director
(www.cisco.com/). I have used both Local Director and Web Director with great
success.
IP Accounting
As you make headway in tuning your network, you also have a greater need to
determine how your bandwidth is used. Under Linux, you can use the IP account-
ing scheme to get that information.
Knowing how your IP bandwidth is used helps you determine how to make
changes in your network to make it more efficient. For example, if you discover
that one segment of your network has 70 percent of its traffic going to a different
segment on average, you may find a way to isolate that traffic by providing a direct
link between the two networks. IP accounting helps you determine how IP packets
are passed around in your network.
To use IP accounting, you must configure and compile the kernel with network
packet-filtering support. If you use the make menuconfig command to configure
the kernel, you can find the Network packet filtering (replaces ipchains)
feature under the Networking options submenu. Build and install the new kernel
with packet-filtering support (see the Tuning Kernel chapter for details on
compiling and installing a custom kernel).
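For example, suppose you want to count all traffic exchanged between the
192.168.1.0 network and the Internet via eth2. A sketch of such a rule pair
follows (iptables rules given no -j target simply count matching packets):
/sbin/iptables -A FORWARD -i eth2 -d 192.168.1.0/24
/sbin/iptables -A FORWARD -o eth2 -s 192.168.1.0/24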
Here the first rule states that a new rule be appended (-A) to the FORWARD chain
such that all packets destined for the 192.168.1.0 network are counted when the
packets travel via the eth2 interface of the gateway machine. Remember, the eth2
interface is connected to the ISP network (possibly via a router, DSL device, or
cable modem). The second rule states that another rule be appended to the FORWARD
chain such that any IP packet originating from the 192.168.1.0 network and
passing through the eth2 interface is counted. These two rules effectively count
all IP
packets (whether incoming or outgoing) that move between the 192.168.1.0 net-
work and the Internet. To do the same for the 192.168.2.0 network, use the fol-
lowing rules:
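# A sketch following the same pattern as the 192.168.1.0 rules:
/sbin/iptables -A FORWARD -i eth2 -d 192.168.2.0/24
/sbin/iptables -A FORWARD -o eth2 -s 192.168.2.0/24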
After you have set up the preceding rules, you can view the results from time to
time by using the /sbin/iptables -L -v -n command. I usually open an SSH
session to the network gateway and run /usr/bin/watch -n 3600 /sbin/iptables
-L -v -n to monitor the traffic on an hourly basis.
If you are interested in finding out what type of network services are requested
by the departments that interact with the Internet, you can do accounting on that,
too. For example, if you want to know how much of the traffic passing through the
eth2 interface is Web traffic, you can implement a rule such as the following:
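# A sketch: count packets heading out via eth2 for TCP port 80
/sbin/iptables -A FORWARD -o eth2 -p tcp --dport 80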
This records traffic meant for port 80 (the www port in /etc/services). You can
add similar rules for other network services found in the /etc/services file.
Summary
The state of your network performance is the combined effect of your operating
system, network devices, bandwidth, and the overall network design you choose to
implement.
Chapter 5
Web Server Performance
IN THIS CHAPTER
◆ Controlling Apache
THE DEFAULT WEB SERVER software for Red Hat Linux is Apache — the most popular
Web server in the world. According to Apache Group (its makers), the primary mis-
sion for Apache is accuracy as an HTTP protocol server first; performance (per se)
is second. Even so, Apache offers good performance in real-world situations — and
it continues to get better. As with many items of technology, proper tuning can give
an Apache Web server excellent performance and flexibility. In this chapter, I focus
on Apache tuning issues — and introduce you to the new kernel-level HTTP daemon
(available for the 2.4 and later kernels) that can speed the delivery of static
Web pages.
Apache architecture makes the product extremely flexible. Almost all of its pro-
cessing — except for core functionality that handles requests and responses — hap-
pens in individual modules. This approach makes Apache easy to compile and
customize.
In this book (as in my other books), a common thread runs through all of
the advice, and it bears repeating: Always compile your server software if you
have access to the source code. I believe that the best way to run Apache is to
compile and install it yourself. Therefore my other recommendations in this
section assume that you have the latest distribution of the Apache source
code on your system.
1. Know what Apache modules you currently have; decide whether you
really need them all. To find out what modules you currently have
installed in the Apache binary (httpd), run the following command
while logged in as root:
/usr/local/apache/bin/httpd –l
If you installed a default Apache binary, you can also find out which
modules you can enable or disable by running the configuration script
with the following command:
./configure --help
Option                              Meaning
--cache-file=FILE                   Cache test results in FILE
--help                              Print this message
--no-create                         Do not create output files
--quiet or --silent                 Do not print 'checking...' messages
--version                           Print the version of autoconf that created configure
Directory and filenames:
--prefix=PREFIX                     Install architecture-independent files in PREFIX [/usr/local/apache2]
--exec-prefix=EPREFIX               Install architecture-dependent files in EPREFIX [same as prefix]
--bindir=DIR                        User executables in DIR [EPREFIX/bin]
--sbindir=DIR                       System admin executables in DIR [EPREFIX/sbin]
--libexecdir=DIR                    Program executables in DIR [EPREFIX/libexec]
--datadir=DIR                       Read-only architecture-independent data in DIR [PREFIX/share]
--sysconfdir=DIR                    Read-only single-machine data in DIR [PREFIX/etc]
--sharedstatedir=DIR                Modifiable architecture-independent data in DIR [PREFIX/com]
--localstatedir=DIR                 Modifiable single-machine data in DIR [PREFIX/var]
--libdir=DIR                        Object code libraries in DIR [EPREFIX/lib]
--includedir=DIR                    C header files in DIR [PREFIX/include]
--oldincludedir=DIR                 C header files for non-GCC in DIR [/usr/include]
--infodir=DIR                       Info documentation in DIR [PREFIX/info]
--mandir=DIR                        man documentation in DIR [PREFIX/man]
--srcdir=DIR                        Find the sources in DIR [configure dir or ...]
--program-prefix=PREFIX             Prepend PREFIX to installed program names
--program-suffix=SUFFIX             Append SUFFIX to installed program names
--program-transform-name=PROGRAM    Run sed PROGRAM on installed program names
--build=BUILD                       Configure for building on BUILD [BUILD=HOST]
--host=HOST                         Configure for HOST
--target=TARGET                     Configure for TARGET [TARGET=HOST]
--disable-FEATURE                   Do not include FEATURE (same as --enable-FEATURE=no)
--enable-FEATURE[=ARG]              Include FEATURE [ARG=yes]
--with-PACKAGE[=ARG]                Use PACKAGE [ARG=yes]
--without-PACKAGE                   Do not use PACKAGE (same as --with-PACKAGE=no)
--x-includes=DIR                    X include files are in DIR
--x-libraries=DIR                   X library files are in DIR
--with-optim=FLAG                   Obsolete (use OPTIM environment variable)
--with-port=PORT                    Port on which to listen (default is 80)
--enable-debug                      Turn on debugging and compile-time warnings
--enable-maintainer-mode            Turn on debugging and compile-time warnings
--enable-layout=LAYOUT              Use the selected directory layout
--enable-modules=MODULE-LIST        Enable the list of modules specified
--enable-mods-shared=MODULE-LIST    Enable the list of modules as shared objects
--disable-access                    Host-based access control
--disable-auth                      User-based access control
--enable-auth-anon                  Anonymous user access
--enable-auth-dbm                   DBM-based access databases
--enable-auth-db                    DB-based access databases
--enable-auth-digest                RFC2617 Digest authentication
--enable-file-cache                 File cache
--enable-dav-fs                     DAV provider for the filesystem
--enable-dav                        WebDAV protocol handling
--enable-echo                       ECHO server
--enable-charset-lite               Character set translation
--enable-cache                      Dynamic file caching
--enable-disk-cache                 Disk caching module
--enable-ext-filter                 External filter module
--enable-case-filter                Example uppercase conversion filter
--enable-generic-hook-export        Example of hook exporter
--enable-generic-hook-import        Example of hook importer
--enable-optional-fn-import         Example of optional function importer
--enable-optional-fn-export         Example of optional function exporter
--disable-include                   Server-Side Includes
--disable-http                      HTTP protocol handling
--disable-mime                      Mapping of file-extension to MIME
--disable-log-config                Logging configuration
--enable-vhost-alias                Mass virtual-hosting module
--disable-negotiation               Content negotiation
--disable-dir                       Directory request handling
--disable-imap                      Internal imagemaps
--disable-actions                   Action triggering on requests
--enable-speling                    Correct common URL misspellings
--disable-userdir                   Mapping of user requests
--disable-alias                     Translation of requests
--enable-rewrite                    Regex URL translation
--disable-so                        DSO capability
--enable-so                         DSO capability
--disable-env                       Clearing/setting of ENV vars
--enable-mime-magic                 Automatically determine MIME type
--enable-cern-meta                  CERN-type meta files
--enable-expires                    Expires header control
--enable-headers                    HTTP header control
--enable-usertrack                  User-session tracking
--enable-unique-id                  Per-request unique IDs
--disable-setenvif                  Base ENV vars on headers
--enable-tls                        TLS/SSL support
--with-ssl                          Use a specific SSL library installation
--with-mpm=MPM                      Choose the process model for Apache to use: MPM={beos threaded prefork spmt_os2 perchild}
--disable-status                    Process/thread monitoring
--disable-autoindex                 Directory listing
--disable-asis                      As-is filetypes
--enable-info                       Server information
--enable-suexec                     Set UID and GID for spawned processes
--disable-cgid                      CGI scripts
--enable-cgi                        CGI scripts
--disable-cgi                       CGI scripts
--enable-cgid                       CGI scripts
--enable-shared[=PKGS]              Build shared libraries [default=no]
--enable-static[=PKGS]              Build static libraries [default=yes]
--enable-fast-install[=PKGS]        Optimize for fast installation [default=yes]
--with-gnu-ld                       Assume the C compiler uses GNU ld [default=no]
--disable-libtool-lock              Avoid locking (might break parallel builds)
--with-program-name                 Alternate executable name
--with-suexec-caller                User allowed to call SuExec
--with-suexec-userdir               User subdirectory
--with-suexec-docroot               SuExec root directory
--with-suexec-uidmin                Minimal allowed UID
--with-suexec-gidmin                Minimal allowed GID
--with-suexec-logfile               Set the logfile
--with-suexec-safepath              Set the safepath
--with-suexec-umask                 umask for suexec'd process
2. Determine whether you need the modules that you have compiled in
Apache binary (httpd). By removing unnecessary modules, you achieve a
performance boost (because of the reduced size of the binary code file)
and — potentially, at least — greater security.
For example, if you plan never to run CGI programs or scripts, you can
remove the mod_cgi module — which reduces the size of the binary file
and also shuts out potential CGI attacks, making a more secure Apache
environment. If Apache can't service CGI requests, all CGI risk goes to zero.
To know which modules to keep and which ones to remove, learn how each module
functions; you can obtain this information at the www.apache.org Web site.
Reading the Apache documentation for each module can help you determine
whether you have any use for a module or not.
Make a list of modules that you can do without and continue to the next
step.
3. After you decide which default modules you don’t want to keep, simply
run the configuration script from the top Apache directory, specifying the
--disable-module option for each module you want to remove. Here’s
an example:
./configure --prefix=/usr/local/apache \
--disable-cgi \
--disable-imap \
--disable-userdir \
--disable-autoindex \
--disable-status
◆ The more processes you run, the more load your CPUs must handle.
◆ The more processes you run, the more RAM you need.
◆ The more processes you run, the more operating-system resources (such as
file descriptors and shared buffers) you use.
Of course, more processes could also mean more requests serviced — hence more
hits for your site. So set these directives by balancing experimentation, require-
ments, and available resources.
StartServers
StartServers is set to 3 by default, which tells Apache to start three child servers
as it starts.
You can start more servers if you want, but Apache is pretty good at increasing
the number of child processes as needed, based on load, so changing this setting
usually isn't required.
SENDBUFFERSIZE
This directive sets the size of the TCP send buffer to the number of bytes specified.
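For example, a sketch that requests a 64KB send buffer:
SendBufferSize 65536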
LISTENBACKLOG
This directive defends against a known type of security attack called denial of ser-
vice (DoS) by enabling you to set the maximum length of the queue that handles
pending connections.
Increase this value if you detect that you are under a TCP SYN flood attack
(a type of DoS attack); otherwise you can leave it alone.
TIMEOUT
In effect, the Web is really a big client/server system in which the Apache server
responds to requests. The requests and responses are transmitted via packets of
data. Apache must know how long to wait for a certain packet. This directive con-
figures the time in seconds.
The time you specify here is the maximum time Apache waits before it breaks a
connection. The default setting enables Apache to wait for 300 seconds before it
disconnects itself from the client. If you are on a slow network, you may want to
increase the time-out value to decrease the number of disconnects.
Currently, this timeout setting applies to:
◆ The total amount of time it takes to receive a GET request
◆ The amount of time between receipt of TCP packets on a POST or PUT request
◆ The amount of time between ACKs on transmissions of TCP packets in responses
MAXCLIENTS
This directive limits the number of simultaneous requests that Apache can service.
When you use the default MPM module (threaded), the number of simultaneous
requests is equal to the value of this directive multiplied by the value of the
ThreadsPerChild directive. For example, if you have MaxClients set to the default
(256) and ThreadsPerChild set to the default (50), the Apache server can service a
total of 12,800 (256 x 50) requests. When using the prefork MPM, the maximum
number of requests is limited by the value of MaxClients alone. The default value
(256) is the maximum setting for this directive. If you wish to change this to a
higher number, you have to modify the HARD_SERVER_LIMIT constant in the
mpm_default.h file in the source distribution of Apache and recompile and
reinstall it.
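For example, a sketch of the default arithmetic spelled out in httpd.conf:
# threaded MPM: simultaneous requests = MaxClients x ThreadsPerChild
MaxClients      256
ThreadsPerChild 50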
MAXREQUESTSPERCHILD
This directive sets the number of requests a child process can serve before getting
killed.
The default value of 0 makes the child process serve requests forever. I do not
like the default value because it allows Apache processes to slowly consume large
amounts of memory when a faulty mod_perl script or even a faulty third-party
Apache module leaks memory. If you do not plan to run any third-party Apache
modules or mod_perl scripts, you can keep the default setting or else set it to a rea-
sonable number. A setting of 30 ensures that the child process is killed after pro-
cessing 30 requests. Of course, new child processes are created as needed.
MAXSPARESERVERS
This directive lets you set the number of idle Apache child processes that you want
on your server.
If the number of idle Apache child processes exceeds the maximum number
specified by the MaxSpareServers directive, then the parent process kills off the
excess processes. Tuning of this parameter should only be necessary for very busy
sites. Unless you know what you are doing, do not change the default.
MINSPARESERVERS
The MinSpareServers directive sets the desired minimum number of idle child
server processes. An idle process is one that is not handling a request. If there are
fewer idle Apache processes than the number specified by the MinSpareServers
directive, then the parent process creates new children at a maximum rate of 1 per
second. Tuning of this parameter should only be necessary on very busy sites.
Unless you know what you are doing, do not change the default.
KEEPALIVE
The KeepAlive directive enables you to activate/deactivate persistent use of TCP
connections in Apache.
Older Apache servers (prior to version 1.2) may require a numeric value
instead of On/Off when using KeepAlive. This value corresponds to the
maximum number of requests you want Apache to entertain per connection.
A limit is imposed to prevent a client from taking over all your server
resources. To disable KeepAlive in the older Apache versions, use 0 (zero)
as the value.
KEEPALIVETIMEOUT
If you have the KeepAlive directive set to on, you can use the KeepAliveTimeout
directive to limit the number of seconds Apache will wait for a subsequent request
before closing a connection. After a request is received, the timeout value specified
by the Timeout directive applies.
RLIMITCPU
The RLimitCPU directive enables you to control the CPU usage of Apache children-
spawned processes such as CGI scripts. The limit does not apply to Apache children
themselves or to any process created by the parent Apache server.
The RLimitCPU directive takes the following two parameters: the first parameter
sets a soft resource limit for all processes, and the second parameter, which is
optional, sets the maximum resource limit. Note that raising the maximum resource
limit requires that the server be running as root or in the initial startup phase.
For each of these parameters, there are two possible values:
◆ n is the number of seconds per process
◆ max is the maximum resource limit allowed by the operating system
RLIMITMEM
The RLimitMEM directive limits the memory (RAM) usage of Apache children-
spawned processes such as CGI scripts. The limit does not apply to Apache children
themselves or to any process created by the parent Apache server.
The RLimitMEM directive takes two parameters. The first parameter sets a soft
resource limit for all processes, and the second parameter, which is optional, sets
the maximum resource limit. Note that raising the maximum resource limit requires
that the server be started by the root user. For each of these parameters, there are
two possible values:
◆ n is the number of bytes per process
◆ max is the maximum resource limit allowed by the operating system
RLIMITNPROC
The RLimitNPROC directive sets the maximum number of simultaneous Apache
children-spawned processes per user ID.
The RLimitNPROC directive takes two parameters. The first parameter sets the
soft resource limit for all processes, and the second parameter, which is optional,
sets the maximum resource limit. Raising the maximum resource limit requires that
the server be running as root or in the initial startup phase. For each of these
parameters, there are two possible values:
◆ n is the number of processes per user
◆ max is the maximum resource limit allowed by the operating system
If your CGI processes are run under the same user ID as the server process,
use of RLimitNPROC limits the number of processes the server can launch
(or “fork”). If the limit is too low, you will receive a “Cannot fork process” type
of message in the error log file. In such a case, you should increase the limit
or just leave it as the default.
LIMITREQUESTBODY
The LimitRequestBody directive enables you to set a limit on the size of the HTTP
request that Apache will service. The default limit is 0, which means unlimited. You
can set this limit from 0 to 2147483647 (2GB).
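For example, a sketch that caps request bodies (such as file uploads) at 1MB:
LimitRequestBody 1048576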
LIMITREQUESTFIELDS
The LimitRequestFields directive allows you to limit the number of request header
fields allowed in a single HTTP request. This limit can be 0 to 32767 (32K). This
directive can help you implement a security measure against large request-based
denial-of-service attacks.
LIMITREQUESTFIELDSIZE
The LimitRequestFieldsize directive enables you to limit the size (in bytes) of a
request header field. The default size of 8190 (8K) is more than enough for most sit-
uations. However, if you experience a large HTTP request-based denial of service
attack, you can change this to a smaller number to deny requests that exceed the
limit. A value of 0 sets the limit to unlimited.
LIMITREQUESTLINE
The LimitRequestLine directive sets the limit on the size of the request line. This
effectively limits the size of the URL that can be sent to the server. The default limit
should be sufficient for most situations. If you experience a denial of service attack
that uses long URLs designed to waste resources on your server, you can reduce the
limit to reject such requests.
CLEARMODULELIST
You can use the ClearModuleList directive to clear the list of active modules and
to enable the dynamic module-loading feature. Then use the AddModule directive to
add modules that you want to activate.
Syntax: ClearModuleList
Default setting: None
Context: Server config
ADDMODULE
The AddModule directive can be used to enable a precompiled module that is
currently not active. The server can have compiled modules that are not actively
in use; this directive enables them. The server comes with a preloaded list of
active modules; this list can be cleared with the ClearModuleList directive,
after which new modules can be added using the AddModule directive.
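For example, a sketch (the module names here are illustrative):
ClearModuleList
AddModule mod_mime.c
AddModule mod_access.c
AddModule mod_log_config.c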
After you have configured Apache using a combination of the mentioned direc-
tives, you can focus on tuning your static and dynamic contents delivery mecha-
nisms. In the following sections I show just that.
Dynamic content is increasingly common, but it won't displace static pages in
the near future. Some dynamic-content systems even generate static Web pages
periodically as cache content for faster delivery. Because serving a static page
usually is faster than serving a dynamic page, the static page is not going away
soon. In this section I improve the speed of static-page delivery using Apache
and the new kernel HTTP module.
/.htaccess
%DocRoot%/.htaccess
%DocRoot%/training/.htaccess
%DocRoot%/training/linux/.htaccess
%DocRoot%/training/linux/sysad/.htaccess
where %DocRoot% is the document root directory set by the DocumentRoot direc-
tive in the httpd.conf file. So if this directory is /www/nitec/htdocs, then the
following checks are made:
/.htaccess
/www/.htaccess
/www/nitec/.htaccess
/www/nitec/htdocs/.htaccess
/www/nitec/htdocs/training/.htaccess
/www/nitec/htdocs/training/linux/.htaccess
/www/nitec/htdocs/training/linux/sysad/.htaccess
Apache looks for the .htaccess file in each directory of the translated (from the
requested URL) path of the requested file (intro.html). As you can see, a URL that
requests a single file can result in multiple disk I/O requests to read multiple files.
This can be a performance drain for high-volume sites. In such a case, your best
choice is to disable .htaccess file checks altogether. For example, when the
following configuration directives are placed within the main server section
(that is, not within a VirtualHost directive) of the httpd.conf file, they disable
checking for .htaccess for every URL request.
<Directory />
AllowOverride None
</Directory>
When the preceding configuration is used, Apache simply performs a single disk
I/O to read the requested static file and therefore gains performance in
high-volume access scenarios.
◆ A CGI script is started every time a request is made, which means that if
the Apache server receives 100 requests for the same script, there are 100
copies of the same script running on the server, which makes CGI a very
unscalable solution.
◆ A CGI script can't maintain a persistent connection to a back-end database,
which means a connection must be established every time a script needs to
access a database server. This effectively makes CGI scripts slow and
resource hungry.
◆ CGI scripts are often hacks that are quickly put together by a systems-
inexperienced developer and therefore pose great security risks.
Unfortunately, many Web sites still use CGI scripts because they are easy
to develop and often freely available. Stay away from CGI scripts and use
more scalable and robust solutions such as mod_perl, mod_fastcgi,
or even Java servlets (discussed in the following sections).
Using mod_perl
The Apache mod_perl module alone keeps Perl in the mainstream of Web
development. This module for Apache enables you to create highly scalable,
Perl-based Web applications.
Fortunately, switching your Perl-based CGI scripts to mod_perl isn’t hard at all.
In the following section I show how you can install mod_perl for Apache and also
develop performance-friendly mod_perl scripts.
INSTALLING MOD_PERL
3. Run the make; make install commands to build mod_perl binaries and
Perl modules.
4. Change directory to ../apache_x.y.z and run:
./configure --prefix=/usr/local/apache \
--activate-module=src/modules/perl/libperl.a
If you want to enable or disable other Apache modules, make sure you
add the appropriate --enable-module and --disable-module options in
the preceding command line. For example, the following configuration
creates a very lean and mean Apache server with mod_perl support:
./configure --prefix=/usr/local/apache \
--disable-module=cgi \
--disable-module=imap \
--disable-module=userdir \
--disable-module=autoindex \
--disable-module=status \
--activate-module=src/modules/perl/libperl.a
5. Run the make; make install commands to build and install the Apache
Web server.
CONFIGURING MOD_PERL
Here’s how you can configure mod_perl for Apache:
1. Create a startup script (for example, startup.pl) that initializes the
mod_perl environment when Apache starts.
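A minimal sketch of such a startup script follows; the use lib path is an
assumption that should match where you keep your mod_perl modules:
#!/usr/bin/perl
# startup.pl -- initialize the mod_perl environment at server startup
use strict;

# Make your mod_perl modules findable (assumed location)
use lib '/www/mysite/perl-bin';

1;   # the startup script must return a true value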
2. Tell Apache to execute the startup script (called startup.pl in the previ-
ous step) when it starts. You can do that by adding the following directive
in the httpd.conf file.
PerlRequire /www/mysite/perl-bin/startup.pl
3. If you know that you are using a set of Perl modules often, you can
preload them by adding a use modulename (); line in the startup.pl script
before the 1; line. For example, if you use the CGI.pm module (yes, it
works with both CGI and mod_perl scripts) in many of your mod_perl
scripts, you can simply preload it in the startup.pl script, as follows:
use strict;
use CGI ();
CGI->compile(':all');
1;
I have added the CGI->compile(':all'); line after the use CGI (); line because
CGI.pm doesn't automatically load all its methods by default; instead, it
provides the compile() function to force loading of all methods.
4. Determine how you want to make your mod_perl scripts available in your
Web site. I prefer specifying a <Location> directive for each script, as in
the following example:
<Location /cart>
SetHandler perl-script
PerlHandler ShoppingCart
</Location>
<Location /cart/calc>
SetHandler perl-script
PerlHandler Calc
</Location>
Now all you need is mod_perl scripts to try out your new mod_perl-enabled
Apache server. Because mod_perl script development is largely beyond the scope of
this book, I provide a basic test script called HelloWorld.pm (shown in Listing 5-1).
You can put HelloWorld.pm in a location specified by the use lib line in your
startup.pl script and create a configuration such as the following in httpd.conf:
<Location /test>
SetHandler perl-script
PerlHandler HelloWorld
</Location>
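A minimal sketch of such a handler module, assuming the mod_perl 1.x API
(Apache::Constants, send_http_header), might look like this:
package HelloWorld;
# Sketch of a Listing 5-1 style test handler
use strict;
use Apache::Constants qw(OK);

my $counter = 0;   # per-child request counter

sub handler {
    my $r = shift;
    $counter++;
    $r->send_http_header('text/html');
    $r->print("Hello World<br>");
    $r->print("Served by Apache child PID: $$<br>");
    $r->print("This child has served $counter such requests so far<br>");
    return OK;
}

1;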
After you have the preceding configuration, start or restart the Apache server
and access the HelloWorld.pm script using http://your.server.com/test. You
should see the “Hello World” message, the PID of the Apache child server, and a
count of how many similar requests this child server has served so far.
If you run this test (that is, access the /test URL) with the default values for the
MinSpareServers, MaxSpareServers, StartServers, MaxRequestsPerChild, and
MaxClients directives, you may get confused. Because your default settings are
likely to cause Apache to run many child servers, and because Apache chooses a
child server per /test request, you may find the count going up and down as your
subsequent /test requests are serviced by any of the many child servers. If you
keep making requests for the /test URL, you eventually see all the child servers
reporting increasing counts until each dies because of the MaxRequestsPerChild
setting. This is why it's a good idea to set these directives as follows for testing
purposes:
MinSpareServers 1
MaxSpareServers 1
StartServers 1
MaxRequestsPerChild 10
MaxClients 1
Restart the Apache server and access /test; you see that Apache services each
set of 10 requests using a single child server whose count only increases.
Use of mod_perl scripts within your Apache server ensures that your response
time is much better than the CGI equivalent. However, heavy use of mod_perl
scripts also creates some side effects that can be viewed as performance problems,
which I cover in the next section.
[Figure: A static-page server (www.domain.com) serves index.html; its login link
(<a href=http://myapps.domain.com/login>) sends users to a separate application
server (myapps.domain.com), while static links such as privacy.html stay on the
static-page server]
If you must keep the mod_perl and static contents on the same Linux system
running Apache, you still can ensure that fat Apache child processes aren’t serving
static pages. Here’s a solution that I like:
1. Compile and install the mod_proxy module for your Apache Web server.
2. Copy your existing httpd.conf file to httpd-8080.conf and modify the
Port directive to be Port 8080 instead of Port 80. Remove all
mod_perl-specific configurations from httpd.conf so that all your
mod_perl configurations are in httpd-8080.conf file.
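3. Add a proxy mapping to the httpd.conf file of the static-page server.
A minimal sketch, assuming the /myapps URL used below:
ProxyPass /myapps http://127.0.0.1:8080/myapps
ProxyPassReverse /myapps http://127.0.0.1:8080/myapps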
You can change myapps to whatever you like; if you do change it, make
sure you change it in every other location that mentions it. Here we are
telling the Apache server serving static pages that all requests to the
/myapps URL are serviced via the proxy module, which gets the
response from the Apache server running on the same Linux system
(127.0.0.1 is the local host) but on port 8080.
4. Add the following configuration in httpd-8080.conf to create a
mod_perl script location.
<Location /myapps>
SetHandler perl-script
PerlHandler MyApp1
</Location>
Now start (or restart) the Apache server (listening on port 80) as usual, using the
apachectl command. However, you must start the Apache server listening on port
8080 using the /usr/local/apache/bin/httpd -f /usr/local/apache/conf/httpd-8080.conf
command. This assumes that you have installed Apache in the /usr/local/apache
directory; if that isn't so, make sure you change the path. Now you have two
Apache parent
daemons (which run as root) running two sets of children — where one set services
the static pages and uses the proxy module to fetch the dynamic, mod_perl script
pages using the ProxyPass directive. This enables you to service the static pages
using a set of child servers that aren’t running any Perl code whatsoever. On the
other hand, the server on port 8080 services only dynamic requests so you effec-
tively have a configuration that is very performance-friendly.
Scripts running under mod_perl run fast because they are loaded within each
child server’s code space. Unlike its CGI counterpart, a mod_perl script can keep
persistent connection to an external database server — thus speeding up the gener-
ation of database-driven dynamic content. However, a new problem introduces
itself if you run a very large Web server. When you run 50 or 100 or more Apache
server processes to service many simultaneous requests, it’s possible for Apache to
eventually open up that many database connections and keep each connection
persistent for the duration of each child. Say that you run a Web server system
with 50 Apache child processes so that you can service about 50 requests per
second, and you have a mod_perl-based script that opens a database connection
in its initialization stage. As requests come to your database script, Apache
eventually services such requests using each of its child processes, thus
opening up 50 database connections. Because many database servers allocate
expensive resources on a per-connection basis, this could be a major problem on
the database side. For example, when making such connections to an IBM
Universal Database Server (UDB) Enterprise Edition running on a remote Linux sys-
tem, each Apache child has a counter-part connection related process on the data-
base server. If such environment uses load balancing hardware to balance incoming
requests among a set of mod_perl-enabled Apache Web server there is likely to be
a scenario when each Web server system running 50 Apache child processes have
all opened up connection to the database server. For example, if such an environ-
ment consists of 10 Web servers under the load-balancing hardware, then the total
possible connections to the database server is 10 x 50 or 500, which may create an
extensive resource load on the database server.
One possible solution for such a scenario is to find a way to have the database
time-out any idle connections, make the mod_perl script code detect a stale con-
nection, and have it reinitiate connection. Another solution is to create a persistent
database proxy daemon that each Web server uses to fetch data from the database.
Fortunately, FastCGI and Java servlets have more native solutions for such
problems and should be considered for heavily used database-driven applications.
Here's a look at another performance-boosting Web technology: FastCGI.
Using FastCGI
Like mod_perl scripts, FastCGI applications run all the time (after the initial load-
ing) and therefore provide a significant performance advantage over CGI scripts.
Table 5-2 summarizes the differences between a FastCGI application and mod_perl
script.
TABLE 5-2: FASTCGI APPLICATIONS VS. MOD_PERL SCRIPTS

Apache platform dependent
FastCGI: No. FastCGI applications can run on non-Apache Web servers, such as IIS and Netscape Web Server.
mod_perl: Yes. Only Apache supports the mod_perl module.

Perl-only solution
FastCGI: No. FastCGI applications can be developed in many languages, such as C, C++, and Perl.
mod_perl: Yes.

Runs as external process
FastCGI: Yes.
mod_perl: No.

Can run on remote machine
FastCGI: Yes.
mod_perl: No.

Multiple instances of the application/script are run
FastCGI: Typically, a single FastCGI application is run to respond to many requests, which are queued. However, if the load is high, multiple instances of the same application are run.
mod_perl: The number of instances of a mod_perl script equals the number of child Apache server processes.

Wide support available
FastCGI: Yes. However, at times I get the impression that FastCGI development is slowing down, but I can't verify this or back this up.
mod_perl: Yes. There are many mod_perl sites on the Internet, and support via Usenet or the Web is available.

Database connectivity
FastCGI: Because all requests are sent to a single FastCGI application, you only need to maintain a single database connection with the back-end database server. However, this can change when the Apache FastCGI process manager spawns additional FastCGI application instances due to heavy load. Still, the number of FastCGI instances of an application is likely to be less than the number of Apache child processes.
mod_perl: Because each Apache child process runs the mod_perl script, each child can potentially have a database connection to the back-end database. This means you can potentially end up with hundreds of database connections from even a single Apache server system.
Like mod_perl, the Apache module for FastCGI, mod_fastcgi, doesn’t come
with the standard Apache distribution. You can download it from www.fastcgi.
com. Here’s how you can install it.
1. Su to root.
2. Extract the mod_fastcgi source distribution using the tar xvzf
mod_fastcgi.x.y.z.tar.gz command. Then copy the mod_fastcgi
source directory to the /usr/src/redhat/SOURCES/apache_x.y.z/
src/modules/fastcgi directory.
3. If you already compiled Apache with many other options and would like
to retain them, simply run the following command from the
/usr/src/redhat/SOURCES/apache_x.y.z directory:
./config.status --activate-module=src/modules/fastcgi/libfastcgi.a
4. Run the make; make install command from the same directory to com-
pile and install the new Apache with mod_fastcgi support.
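If you compiled mod_fastcgi statically as shown, you can verify that the new
server binary includes it by listing the compiled-in modules. This is a quick
check, assuming the freshly installed httpd binary is in your PATH:
httpd -l | grep -i fastcgi
If the module was activated, mod_fastcgi.c appears in the output.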
5. You are ready to configure Apache to run FastCGI applications. First
determine where you want to keep the FastCGI applications and scripts.
Ideally, you want to keep this directory outside the directory specified in
the DocumentRoot directive. For example, if you set DocumentRoot to
/www/mysite/htdocs, consider using /www/mysite/fast-bin as the
FastCGI application/script directory. I assume that you will use my advice
and do so. To tell Apache that you have created a new FastCGI applica-
tion/script directory, simply use the following configuration:
Alias /fast-bin/ "/www/mysite/fast-bin/"
<Directory "/www/mysite/fast-bin">
    Options ExecCGI
    SetHandler fastcgi-script
</Directory>
This tells Apache that the alias /fast-bin/ points to the /www/mysite/fast-
bin directory — and that this directory contains applications (or scripts)
that must run via the fastcgi-script handler.
6. Restart the Apache server and you can access your FastCGI
applications/scripts using the http://www.yourdomain.com/fast-
bin/appname URL where www.yourdomain.com should be replaced with
your own Web server hostname and appname should be replaced with the
FastCGI application that you have placed in the /www/mysite/fast-bin
directory. To test your FastCGI setup, you can simply place the following
test script (shown in Listing 5-2) in your fast-bin directory and then
access it.
#!/usr/bin/perl
# Listing 5-2: a minimal FastCGI test script (testfcgi.pl)
use CGI::Fast qw(:standard);

# The counter persists across requests because the script stays loaded
my $counter = 0;

#
# Start the FastCGI request loop
#
while (new CGI::Fast) {
    print header;
    print "This is a FastCGI test script" . br;
    print "The request is serviced by script PID: $$" . br;
    print "Your request number is : ", $counter++, br;
}
exit 0;
When you run the script in Listing 5-2, using a URL request such as
http://www.yourserver.com/fast-bin/testfcgi.pl, you see that the
PID doesn’t change and the counter changes as you refresh the request
again and again. If you run ps auxww | grep testfcgi on the Web
server running this FastCGI script, you see that there is only a single
instance of the script running and it’s serving all the client requests. If the
load goes really high, Apache launches another instance of the script.
FastCGI is a great solution for scaling your Web applications. It even enables
you to run the FastCGI applications/scripts on a remote application server. This
means you can separate your Web server from your applications and thus gain bet-
ter management and performance potentials. Also, unlike with mod_perl, you
aren’t limited to Perl-based scripts for performance; with FastCGI, you can write
your application in a variety of application programming languages, such as C,
C++, and Perl.
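FastCGI's remote-execution support comes from the mod_fastcgi
FastCgiExternalServer directive, which maps a local path to an application
listening elsewhere. Here's a minimal sketch; the application path, hostname,
and port are hypothetical values you would replace with your own:
FastCgiExternalServer /www/mysite/fast-bin/myapp -host app1.yourdomain.com:8888
With a line like this, requests for the aliased application are forwarded to
the FastCGI process on app1.yourdomain.com, so the Web server and the
application can run on separate systems.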
Quite interestingly, Java has begun to take the lead in high-performance Web
application development. Even a few years ago, Java was considered slow and too
formal for writing Web applications; as Java has matured, it has become a very
powerful Web development platform. With Java you have Java Servlets, JavaServer
Pages, and many other up-and-coming Java technologies that can be utilized to
gain high scalability and robustness. Java also gives you the power to create
distributed Web applications easily.
Another way to boost Web performance is the Squid proxy-cache server. Here's
how you can install and configure it for your system.
1. Su to root and extract the source distribution using the tar xvzf
squid-version.tar.gz command (where version is the latest version
number of the Squid software).
2. Run the ./configure --prefix=/usr/local/squid command to config-
ure Squid source code for your system.
3. Run make all; make install to install Squid in the
/usr/local/squid directory.
4. Open the squid.conf file (found in /usr/local/squid/etc if you used the
--prefix shown earlier) in a text editor.
5. Create an ACL for your network. For example, if your network address is
192.168.1.0 with subnet 255.255.255.0, then you can define the following line in
squid.conf to create an ACL for your network:
acl local_net src 192.168.1.0/255.255.255.0
6. Add the following line just before the http_access deny all line.
http_access allow local_net
This tells Squid to allow machines in the local_net ACL to access the
proxy-cache.
7. Tell Squid the username of the cache-manager user. If you want to use
the webmaster account at your domain as the cache-manager user, define
the following line in the squid.conf file:
cache_mgr webmaster
8. Tell Squid which user and group it should run as. Add the following lines
in squid.conf
cache_effective_user nobody
cache_effective_group nogroup
Here, Squid is told to run as the nobody user and use permissions for the group
called nogroup.
Save the squid.conf file and run the following command to create the cache
directories:
/usr/local/squid/bin/squid -z
Now you can run the /usr/local/squid/bin/squid & command to start Squid
for the first time. You can verify it’s working in a number of ways:
◆ Running squid -k check && echo "Squid is running" tells you whether
Squid is active.
Now for the real test: If you configure the Web browser on a client machine to
use the Squid proxy, you should see results. In Netscape Navigator, select Edit →
Preferences and then select Proxies from within the Advanced category. By select-
ing Manual Proxy Configuration and then clicking View, you can specify the IP
address of the Squid server as the http, FTP, and Gopher proxy server. The default
proxy port is 3128; unless you have changed it in the squid.conf file, place that
number in the port field.
You should be able to browse any Web site as if you weren't using a proxy. You can
double-check that Squid is working correctly by checking the log file
/usr/local/squid/logs/access.log from the proxy server and making sure the
Web site you were viewing is in there.
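You can also exercise the proxy from the command line before configuring any
browsers — a quick sanity check, assuming wget is installed and substituting
your proxy's hostname for squid-host:
env http_proxy=http://squid-host:3128/ wget -O /dev/null http://www.redhat.com/
If the request succeeds, a matching entry should show up in access.log.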
You can also filter by keywords in URLs. Adding a line such as the following
defines an ACL rule called BadWords that matches any URL containing the words
foo or bar:
acl BadWords url_regex foo bar
This applies to http://foo.deepwell.com/pictures and http://www.thekennedycompound.
com/ourbar.jpg because they both contain words that are members of BadWords.
You can block your users from accessing any URLs that match this rule by
adding the following line to the squid.conf file:
http_access deny BadWords
Almost every administrator using word-based ACLs has a story about not
examining all the ways a word can be used. Realize that if you ban your users
from accessing sites containing the word “sex,” you are also banning them
from accessing www.buildersexchange.com and any others that may
have a combination of letters matching the forbidden word.
Because all aspects of how Squid functions are controlled within the squid.conf
file, you can tune it to fit your needs. For example, you can enable Squid to use
16MB of RAM to hold Web pages in memory by adding the following line:
cache_mem 16 MB
By trial and error, you may find you need a different amount.
The cache_mem isn’t the amount of memory Squid consumes; it only sets
the maximum amount of memory Squid uses for holding Web pages, pic-
tures, and so forth. The Squid documentation says you can expect Squid to
consume up to three times this amount.
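Note that cache_mem governs only the memory cache; the disk cache is sized by
the cache_dir directive. A typical line looks like this (the path assumes the
--prefix used during installation, and the numbers are illustrative):
cache_dir ufs /usr/local/squid/cache 100 16 256
This allots 100MB of disk to the cache, spread over 16 first-level and 256
second-level subdirectories.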
By adding the following line to squid.conf:
emulate_httpd_log on
you arrange that the files in /var/log/squid are written in a form like the Web
server log files. This arrangement enables you to use a Web statistics program such
as Analog or Webtrends to analyze your logs and examine the sites your users are
viewing.
Some FTP servers require that an e-mail address be used when one is logging in
anonymously. By setting ftp_user to a valid e-mail address, as shown here, you
give the server at the other end of an FTP session the data it wants to see:
ftp_user [email protected]
You may want to use the address of your proxy firewall administrator. This
would give the foreign FTP administrator someone to contact in case of a
problem.
If you type in a URL and find that the page doesn't exist, that page probably
won't exist anytime in the near future. By setting negative_ttl to a desired num-
ber of minutes, as shown in the next example, you can control how long Squid
remembers that a page was not found in an earlier attempt. This is called negative
caching.
negative_ttl 2 minutes
This isn't always a good thing. The default is five minutes, but I suggest reduc-
ing this to two minutes or possibly one minute, if not disabling it altogether. Why
would you do such a thing? You want your proxy to be as transparent as possible.
If a user is looking for a page she knows exists, you don’t want a short lag time
between the URL coming into the world and your user’s capability to access it.
Ultimately, a tool like Squid should be completely transparent to your users. This
“invisibility” removes them from the complexity of administration and enables
them to browse the Web as if there were no Web proxy server. Although I don’t
detail that here, you may refer to the Squid Frequently Asked Questions at http://
squid.nlanr.net/Squid/FAQ/FAQ.html. Section 17 of this site details using
Squid as a transparent proxy.
Also, if you find yourself managing a large list of “blacklisted” sites in the
squid.conf file, think of using a program called a redirector. Large lists of ACL
rules can begin to slow a heavily used Squid proxy. By using a redirector to do this
same job, you can improve on Squid’s efficiency of allowing or denying URLs
according to filter rules. You can get more information on Squirm — a full-featured
redirector made to work with Squid — from http://www.senet.com.au/squirm/.
The cachemgr.cgi file comes in the Squid distribution. It’s a CGI program that
displays statistics of your proxy and stops and restarts Squid. It requires only a few
minutes of your time to install, but it gives you explicit details about how your
proxy is performing. If you’d like to tune your Web cache, this tool can help.
If you are interested in making Squid function beyond the basics shown in this
chapter, check the Squid Web page at http://squid.nlanr.net/.
Summary
In this chapter, you explored tuning Apache for performance. You examined the
configuration directives that enable you to control Apache’s resource usage so it
works just right for your needs. You also encountered the new HTTP kernel module
called khttpd, along with techniques for speeding up both dynamic and static
Web-site contents. Finally, the chapter profiled the Squid proxy-cache server and
the ways it can help you enhance the Web-browsing experience of your network
users.
Chapter 6
◆ Tuning sendmail
◆ Using Postfix
SENDMAIL IS THE DEFAULT Mail Transport Agent (MTA) for not only Red Hat Linux
but also many other Unix-like operating systems. Therefore, Sendmail is the most
widely deployed mail server solution in the world. In recent years, e-mail has taken
center stage in modern business and personal communication — which has
increased the demand for reliable, scalable solutions for e-mail servers. This
demand helped make the MTA market attractive to both open-source and commer-
cial software makers; Sendmail now has many competitors. In this chapter, I show
you how to tune Sendmail and a few worthy competing MTA solutions for higher
performance.
expensive — and old — methodology, though it’s only a big problem for sites with
heavy e-mail load.
So consider the administrative complexity, potential security risks, and perfor-
mance problems associated with Sendmail before you select it as your MTA. Even
so, system administrators who have taken the time to learn to work with Sendmail
should stick with it because Sendmail is about as flexible as it is complex. If you
can beat the learning curve, go for it.
These days, open-source Sendmail has major competitors: commercial Sendmail,
qmail, and Postfix. Commercial Sendmail is ideal for people who love Sendmail and
want to pay for added benefits such as commercial-grade technical support, other
derivative products, and services. Postfix and qmail are both open-source products.
A LOOK AT QMAIL
The qmail solution has momentum. Its security and performance are very good.
However, it also suffers from administration complexity problems; it isn't an easy
solution to manage. I am also not fond of the qmail license, which seems a bit
more restrictive than those of most well-known open-source projects. I feel that the
qmail author wants to control the core development a bit more tightly than he
probably should. However, I do respect his decisions, especially because he has
placed a reward on finding genuine bugs in the core code. I have played with qmail
for a short time and found the performance to be not all that exciting, especially
because a separate process is needed to handle each connection. My requirements
for high performance were very high: I wanted to be able to send about a half
million e-mails per hour, and my experiments with qmail did not reach such a
number. Most sites aren't likely to need that kind of throughput, so qmail is
suitable for many sites, but it met neither my performance nor my administration-
simplicity requirements. So I have taken a wait-and-see approach with qmail.
A LOOK AT POSTFIX
Postfix is a newcomer MTA. The Postfix author had the luxury of knowing all the
problems related to Sendmail and qmail, so he was able to solve the administration
problem well. Postfix administration is much easier than that of either Sendmail or
qmail, which is a big deal for me because I believe software that can be managed
well can be run well to increase productivity.
Some commercial MTA solutions have great strength in administration — and
even in performance. My favorite commercial outbound MTA is PowerMTA from
Port25.
In this chapter, I tune Sendmail, Postfix, and PowerMTA for performance.
Tuning Sendmail
The primary configuration file for Sendmail is /etc/mail/sendmail.cf, which
appears very cryptic to beginners. This file is generated by running a command
such as m4 < /path/to/chosen.mc > /etc/mail/sendmail.cf, where
/path/to/chosen.mc file is your chosen M4 macro file for the system. For example,
I run the following command from the /usr/src/redhat/SOURCES/sendmail-
8.11.0/cf/cf directory to generate the /etc/mail/sendmail.cf for my system:
m4 < linux-dnsbl.mc > /etc/mail/sendmail.cf
The linux-dnsbl.mc macro file instructs m4 to load other macro files such as
cf.m4, cfhead.m4, proto.m4, version.m4 from the /usr/src/redhat/SOURCES/
sendmail-8.11.0/cf/m4 subdirectory. Many of the options discussed here are
loaded from these macro files. If you want to generate a new /etc/mail/
sendmail.cf file so that your changes aren’t lost in the future, you must change
the macro files in cf/m4 subdirectory of your Sendmail source installation.
If you don’t have these macro files because you installed a binary RPM distribu-
tion of Sendmail, you must modify the /etc/mail/sendmail.cf file directly.
In any case, always back up your working version of /etc/mail/sendmail.cf
before replacing it completely using the m4 command as shown in the preceding
example or modifying it directly using a text editor.
Now, here's what you can tune to increase Sendmail performance.
To limit the size of messages your server accepts, set the following option
(generated from the confMAX_MESSAGE_SIZE macro in your mc file):
O MaxMessageSize=1000000
This tells Sendmail to set the maximum message size to 1,000,000 bytes (approximately
1MB). Of course, you can choose a different number to suit your needs. Any message
larger than the set value of the MaxMessageSize option is rejected.
Caching Connections
Sendmail controls connection caches for IPC connections when processing the
queue using ConnectionCacheSize and ConnectionCacheTimeout options.
It searches the cache for a pre-existing, active connection first. The Connection-
CacheSize defines the number of simultaneous open connections that are permitted.
The default is two, which is set in /etc/mail/sendmail.cf as follows:
O ConnectionCacheSize=2
To increase the cache size, add a line such as the following to your mc file:
define(`confMCI_CACHE_SIZE', 4)dnl
Here, the maximum number of simultaneous connections is four. Note that set-
ting this too high will create resource problems on your system, so don’t abuse it.
The ConnectionCacheTimeout option controls how long a cached connection may
stay idle; the default in /etc/mail/sendmail.cf is:
O ConnectionCacheTimeout=5m
which means that the maximum idle time is five minutes. I don't recommend
changing this option.
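If you prefer to keep this setting with the rest of your mc file, the
corresponding macro is confMCI_CACHE_TIMEOUT; a line such as the following
(shown with the default value) regenerates the same option:
define(`confMCI_CACHE_TIMEOUT', `5m')dnl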
QUEUE=1h
This line is used by the /etc/rc.d/init.d/sendmail script to supply the value for the
-q command line option for the Sendmail binary (/usr/sbin/sendmail).
The default value of 1h (one hour) is suitable for most sites, but if you frequently
find that the mailq | wc -l command shows hundreds of mails in the queue, you
may want to adjust the value to a smaller number, such as 30m (30 minutes).
You can control how long a failed message stays in the queue before it's bounced
back to the sender by defining the following options in your mc file:
define(`confTO_QUEUERETURN', `5d')dnl
define(`confTO_QUEUERETURN_NORMAL', `5d')dnl
define(`confTO_QUEUERETURN_URGENT', `2d')dnl
define(`confTO_QUEUERETURN_NONURGENT', `7d')dnl
These generate the following options in /etc/mail/sendmail.cf:
O Timeout.queuereturn=5d
O Timeout.queuereturn.normal=5d
O Timeout.queuereturn.urgent=2d
O Timeout.queuereturn.non-urgent=7d
Here, the default bounce message is sent to the sender after five days, which is
set by the Timeout.queuereturn (that is, the confTO_QUEUERETURN option line in
your mc file). If the message was sent with a normal priority, the sender receives this
bounce message within five days, which is set by Timeout.queuereturn.normal
option (that is, the confTO_QUEUERETURN_NORMAL in your mc file).
If the message was sent as urgent, the bounce message is sent in two days, which
is set by Timeout.queuereturn.urgent (that is, the confTO_QUEUERETURN_URGENT
option in the mc file).
If the message is sent with low priority level, it’s bounced after seven days,
which is set by the Timeout.queuereturn.non-urgent option (that is, the
confTO_QUEUERETURN_NONURGENT option in the mc file).
If you would like the sender to be warned prior to the actual bounce, you can use
the following settings in your mc file:
define(`confTO_QUEUEWARN', `4h')dnl
define(`confTO_QUEUEWARN_NORMAL', `4h')dnl
define(`confTO_QUEUEWARN_URGENT', `1h')dnl
define(`confTO_QUEUEWARN_NONURGENT', `12h')dnl
These generate the following options in /etc/mail/sendmail.cf:
O Timeout.queuewarn=4h
O Timeout.queuewarn.normal=4h
O Timeout.queuewarn.urgent=1h
O Timeout.queuewarn.non-urgent=12h
Here, the default warning (stating that a message could not be delivered) message
is sent to the sender after four hours. Similarly, senders who use priority settings
when sending mail can get a warning after four hours, one hour, and 12 hours for
normal-, urgent-, and low-priority messages respectively.
You can control the minimum time a failed message must stay in the queue
before it’s retried using the following line in your mc file:
define(`confMIN_QUEUE_AGE', `30m')dnl
This results in the following line in your /etc/mail/sendmail.cf file after it’s
regenerated.
O MinQueueAge=30m
This option states that the failed message should sit in the queue for 30 minutes
before it’s retried.
Also, you may want to reduce the priority of a failed message by setting the fol-
lowing option in your mc file:
define(`confWORK_TIME_FACTOR', `90000')dnl
This results in the following line in /etc/mail/sendmail.cf:
O RetryFactor=90000
This option sets a retry factor that is used in the calculation of a message’s pri-
ority in the queue. The larger the retry factor number, the lower the priority of the
failed message becomes.
You can limit how many connections per second Sendmail accepts by defining the
following option in your mc file:
define(`confCONNECTION_RATE_THROTTLE', `5')dnl
This results in the following line in /etc/mail/sendmail.cf:
O ConnectionRateThrottle=5
Now Sendmail will accept only five connections per second. Because Sendmail
doesn’t pre-fork child processes, it starts five child processes per second at peak
load. This can be dangerous if you don't put a cap on the maximum number of chil-
dren that Sendmail can start. Luckily, you can use the following configuration
option in your mc file to limit that:
define(`confMAX_DAEMON_CHILDREN', `15')dnl
O MaxDaemonChildren=15
This limits the maximum number of child processes to 15. This throttles your
server back to a degree that will make it unattractive to spammers, since it really
can’t relay that much mail (if you’ve left relaying on).
You can tie queue-only behavior to the system load average by defining the
following option in your mc file:
define(`confQUEUE_LA', `5')dnl
This results in:
O QueueLA=5
Here, Sendmail will stop delivery attempts and simply queue mail when system
load average is above five. You can also refuse connection if the load average goes
above a certain threshold by defining the following option in your mc file:
define(`confREFUSE_LA', `8')dnl
O RefuseLA=8
Here, Sendmail refuses connections when the load average reaches eight or above.
Note that locally produced mail is still accepted for delivery.
Another queue-related option can be set with this line in your mc file:
define(`confSEPARATE_PROC', `True')dnl
This results in:
O ForkEachJob=True
This command forces Sendmail to fork a child process to handle each message in
the queue — which reduces the amount of memory consumed because queued mes-
sages won’t have a chance to pile up data in memory.
However, all those individual child processes impose a significant performance
penalty — so this option isn’t recommended for sites with high mail volume.
To bound the work done in a single queue run, add this line to your mc file:
define(`confMAX_QUEUE_RUN_SIZE', `10000')dnl
This results in:
O MaxQueueRunSize=10000
Here, Sendmail will stop reading mail from the queue after reading 10,000 mes-
sages. Note that when you use this option, message prioritization is disabled.
To make Sendmail stop accepting mail when the queue disk is nearly full, add
this line to your mc file:
define(`confMIN_FREE_BLOCKS', `100')dnl
This results in:
O MinFreeBlocks=100
This setting tells Sendmail to refuse e-mail when fewer than 100 1K blocks of
space are available in the queue directory.
Tuning Postfix
Postfix is the new MTA on the block. There is no RPM version of the Postfix distri-
bution yet, but installing it is simple. I show the installation procedure in the
following section.
Installing Postfix
Download the source distribution from the www.postfix.org site. As of this writing,
the source distribution was postfix-19991231-pl13.tar.gz. When you get the
source, the version number may be different; always use the current version num-
ber when following the instructions given in this book.
1. Su to root.
2. Extract the source distribution in /usr/src/redhat/SOURCES directory
using the tar xvzf postfix-19991231-pl13.tar.gz command. This
will create a subdirectory called postfix-19991231-pl13. Change to the
postfix-19991231-pl13 directory.
If you don’t have the latest Berkeley DB installed, install it before continuing.
You can download the latest Berkeley DB source from www.sleepycat.com.
When the installation asks how the maildrop queue directory should be set up,
you have three choices:
a. World-writeable (the default)
b. Sticky (mode 1733)
c. More restricted (mode 1730)
Because the maildrop directory is world-writeable, there is no need to run
any program with special privileges (set-UID or set-GID), and the spool
files themselves aren't world-writeable or otherwise accessible to other
users. I recommend that you keep the defaults.
When the installation is complete, start Postfix for the first time:
postfix start
The first time you start the application, you will see warning messages as it cre-
ates its various directories. If you make any changes to configuration files, reload
Postfix:
postfix reload
To let Postfix run more simultaneous SMTP server processes, edit the master.cf
file to set the maxproc column to 100 for the smtp service, as shown below.
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (50)
# ==========================================================================
smtp inet n - n - 100 smtpd
Several resource-related parameters in the main.cf file are also worth setting:
message_size_limit = 1048576
Here, Postfix rejects any message larger than 1,048,576 bytes (1MB).
qmgr_message_active_limit = 1000
Here, the queue manager keeps at most 1,000 messages in the active queue at a time.
queue_minfree = 1048576
Here, Postfix refuses mail when the filesystem holding the queue directory has
less than 1,048,576 bytes (1MB) of free space.
maximal_queue_lifetime = 5
Here, Postfix will return the undelivered message to the sender after five days of
retries. If you would like to limit the size of the undelivered (bounce) message sent
to the sender, use the following parameter:
bounce_size_limit = 10240
Here, Postfix returns 10240 bytes (10K) of the original message to the sender.
queue_run_delay = 600
This parameter specifies that the queue is scanned every 600 seconds (10 minutes).
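After changing main.cf parameters, you can confirm what Postfix is actually
using with the postconf utility that ships with it. For example:
postconf queue_run_delay
prints the active value of the parameter, and postconf -d shows the compiled-in
default. Remember to run postfix reload so a running Postfix picks up the changes.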
Tuning PowerMTA
PowerMTA is Port25's commercial MTA. Once you have installed it, configure it
as follows:
1. Edit the /etc/pmta/license file and insert the evaluation license data
you received from Port25 via e-mail.
2. Edit the /etc/pmta/config file and set the postmaster directive to an
appropriate e-mail address.
For example, replace #postmaster [email protected] with something like
postmaster [email protected].
3. If you use Port25’s Perl submission API to submit mail to the PowerMTA
(pmta) daemon, then change directory to /opt/pmta/api and extract the
Submitter-1.02.tar.gz (or a later version) by using the tar xvzf
Submitter-1.02.tar.gz command.
4. Change to the new subdirectory called Submitter-1.02 and run the fol-
lowing Perl commands — perl Makefile.PL; make; make test; make
install — in exactly that sequence. Doing so installs the Perl submitter
API module.
You can spread PowerMTA's spool across multiple disks by adding lines such as
these to the /etc/pmta/config file:
spool /spooldisk1
spool /spooldisk2
spool /spooldisk3
Here, PowerMTA is told to manage spooling among three directories. Three dif-
ferent fast (ultra-wide SCSI) disks are recommended for spooling. Because spooling
on different disks reduces the I/O-related wait for each disk, it yields higher perfor-
mance in the long run.
You can view the current file-descriptor limits for your system by using the cat
/proc/sys/fs/file-max command.
Use the ulimit -Hn 4096 command to set the file descriptor limit to 4096
when you start PowerMTA from the /etc/rc.d/init.d/pmta script.
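For example, you might add lines like the following near the top of the
/etc/rc.d/init.d/pmta script. Raising the soft limit along with the hard limit
is my assumption here, since the script isn't shown in full:
ulimit -Hn 4096
ulimit -Sn 4096
Any process the script starts afterward inherits the higher descriptor limit.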
<domain *>
max-smtp-out 20 # max. connections *per domain*
bounce-after 4d12h # 4 days, 12 hours
retry-after 60m # 60 minutes
log-resolution no
log-connections no
log-commands no
log-data no
</domain>
Here the max-smtp-out directive is set to 20 for all (*) domains. At this setting,
PowerMTA opens no more than 20 connections to any one domain. If, however,
you have an agreement with a particular domain that allows you to make more
connections, you can create a domain-specific configuration to handle that excep-
tion. For example, to connect 100 simultaneous PowerMTA threads to your friend’s
domain (myfriendsdomain.com), you can add the following lines to the /etc/
pmta/config file:
<domain myfriendsdomain.com>
max-smtp-out 100
</domain>
Don't create such a configuration without getting permission from the other
side. If the other domain is unprepared for the swarm of connections, you
may find your mail servers blacklisted. You may even get into legal problems
with the remote party if you abuse this feature.
Monitoring performance
Because PowerMTA is a high-performance delivery engine, checking on how it’s
working is a good idea. You can run the /usr/sbin/pmta show status command
to view currently available status information. Listing 6-1 shows a sample status
output.
Here, in the Top/Hour row, PowerMTA reports that it has sent 131,527 messages
in an hour. Not bad. But PowerMTA can do even better. After a few experiments, I
have found it can achieve 300-500K messages per hour easily — on a single PIII Red
Hat Linux system with 1GB of RAM.
PowerMTA is designed for high performance and high volume. Its multithreaded
architecture efficiently delivers a large number of individual e-mail messages to
many destinations.
Summary
Sendmail, Postfix, and PowerMTA are common Mail Transport Agents (MTAs).
They can be fine-tuned for better resource management and higher performance.
Chapter 7
◆ Tuning Samba
◆ Tuning NFS server
Samba's socket options parameter, set in the global section of smb.conf, controls
several low-level TCP settings; a typical performance-oriented line is:
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
The TCP_NODELAY option tells the Samba server to send as many packets as nec-
essary to keep the delay low. The SO_RCVBUF and SO_SNDBUF options set the send
and receive window (buffer) size to 8K (8192 bytes), which should result in good
performance. Here we are instructing the Samba server to read/write 8K of data
before requesting an acknowledgement (ACK) from the client side.
oplocks = true
Newer versions of Samba (2.0.5 or later) support a new type of opportunistic lock
parameter called level2 oplocks. This type of oplock is used for read-only access.
When this parameter is set to true, you should see a major performance gain in con-
current access to files that are usually just read. For example, executable applications
that are read from a Samba share can be accessed faster due to this option.
Samba also has a fake oplocks parameter that can be set to true to grant
oplocks to any client that asks for one. However, fake oplocks is deprecated and
should never be used on shares that enable writes. If you enable fake oplocks for
shares that clients can write to, you risk data corruption.
Note that when you enable oplocks for a share such as the following:
[pcshare]
comment = PC Share
path = /pcshare
public = yes
writable = yes
printable = no
write list = @pcusers
oplocks = true
you may want to tell Samba to ignore oplock requests by clients for files that are
writeable. You can use the veto oplock files parameter to exclude such files. For
example, to exclude all files with the .doc extension from being oplocked, you can
use a line like this:
veto oplock files = /*.doc/
When the amount of data transferred is larger than the specified read size
parameter, the server either begins writing data to disk before it has received the
whole packet from the network, or begins writing to the network before all the
data has been read from the disks.
may actually reduce performance. The only sure way to tell whether write raw =
yes helps your server is to try running Samba with write raw = no. If you see a
performance drop, enable it again.
◆ If you set the strict locking parameter to yes, the Samba server
performs lock checks on each read/write operation, which severely
decreases performance. So don't use this option, especially on Samba
shares that are really remote NFS-mounted filesystems.
◆ If you set the strict sync parameter to yes, the Samba server writes
each packet to disk and waits for the write to complete whenever the client
sets the sync bit in a packet. This causes severe performance problems
when working with Windows clients running MS Explorer or other pro-
grams like it, which set the sync bit for every packet. (A consolidated
example of these settings appears below.)
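Pulling these parameters together, a performance-oriented global section might
include lines like the following. This is only a sketch of the settings discussed
in this section; test each one against your own workload before adopting it:
[global]
    socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
    oplocks = true
    level2 oplocks = true
    strict locking = no
    strict sync = no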
[printers]
comment = All Printers
path = /var/spool/samba
browseable = no
guest ok = no
printable = yes
Here, all the printers managed by Samba are accessible via guest accounts.
However, it isn’t often desirable to use guest accounts to access shares. For exam-
ple, enabling guest access to a user’s home directory isn’t desirable for the obvious
reasons.
Unfortunately, maintaining Linux user accounts for all your Windows users can
be a tough task, especially because you must manually synchronize the addition
and removal of such users. Fortunately, if you use domain-level security you can
automate this process using the add user script and delete user script
parameters in the global section. For example (the password-server name and the
useradd/userdel paths shown are illustrative):
security = domain
password server = YOUR_PDC_NAME
add user script = /usr/sbin/useradd %u
delete user script = /usr/sbin/userdel %u
Whenever a Windows user (or a remote Samba client) attempts to access your
Samba server (using domain level security), it creates a new user account if the pass-
word server (typically the Primary Domain Controller) authenticates the user. Also,
the user account is removed if the password server fails to authenticate the user.
This means that if you add a new user account in your Windows 2000/NT domain
and your Samba server uses a Windows 2000/NT server for domain-level security,
the corresponding Linux account on the Samba server is automatically managed.
3. The total amount of system memory is shown under the column heading
total:; divide this number by 1,048,576 (1024x1024) to get the total
(approximate) memory size in megabytes. In the preceding example, this
number is 251MB.
Interestingly, total memory is never reported accurately by most PC sys-
tem BIOS, so you must round the number based on what you know about
the total memory. In my example, I know that the system should have
256MB of RAM, so I use 256MB as the memory size in this test.
If you have RAM > 1GB, I recommend using 512MB as the RAM size for this
experiment. Although you may have 1GB+ RAM, pretend that you have
512MB for this experiment.
5. Create a 512MB file on the NFS-mounted directory and time how long
the write takes. For example:
time dd if=/dev/zero of=/mnt/nfs1/512MB.dat bs=16k count=32768
This command runs the time command, which records execution time of
the program named as the first argument. In this case, the dd command
is timed. The dd command is given an input file (using the if option) called
/dev/zero. This file is a special device that returns a 0 (zero) character
when read. If you open this file for reading, it keeps returning a 0 character
until you close the file. This gives us an easy source to fill out an output
file (specified using the of option) called /mnt/nfs1/512MB.dat; the dd
command is told to use a block size (specified using bs option) of 16KB and
write a total of 32,768 blocks (specified using the count option). Because
16KB/block times 32,768 blocks equals 512MB, we create the file we
intended. After this command is executed, it prints a few lines such as the
following:
32768+0 records in
32768+0 records out
1.610u 71.800s 1:58.91 61.7% 0+0k 0+0io 202pf+0w
Here the dd command read 32,768 records from the /dev/zero device and
also wrote back the same number of records to the /mnt/nfs1/512MB.dat
file. The third line states that the copy operation took one minute and
58.91 seconds. Write this line in a text file as follows:
Write, 1, 1.610u, 71.800s, 1:58.91, 61.7%
Here, you are noting that this was the first (1st) write experiment.
6. We need to measure the read performance of your current NFS setup. We
can simply read the 512MB file we created earlier and see how long it
takes to read it back. To read it back and time the read access, you can
run the following command:
time dd if=/mnt/nfs1/512MB.dat \
of=/dev/null \
bs=16k count=32768
As before, record the result in your notes. For example:
Read, 1, 1.971u, 38.970s, 2:10.44, 31.3%
Here, you are noting that this was the first (1st) read experiment.
7. Remove the 512MB.dat file from /mnt/nfs1 and umount the partition
using the umount /mnt/nfs1 command. The unmounting of the NFS
directory ensures that disk caching doesn’t influence your next set of tests.
8. Repeat the write and read back test (Steps 5 - 7) at least five times. You
should have a set of notes as follows:
Read, 1, 1.971u, 38.970s, 2:10.44, 31.3%
Read, 2, 1.973u, 38.970s, 2:10.49, 31.3%
Read, 3, 1.978u, 38.971s, 2:10.49, 31.3%
Read, 4, 1.978u, 38.971s, 2:10.49, 31.3%
Read, 5, 1.978u, 38.971s, 2:10.49, 31.3%
9. Calculate the average read and write time from the fifth column (the
elapsed time, such as 2:10.49).
You have completed the first phase of this test. You have discovered the average
read and write access time for a 512MB file. Now you can start the second phase of
the test as follows:
1. Unmount the /mnt/nfs1 directory on the NFS client system using the
umount /mnt/nfs1 command.
2. Modify the /etc/fstab file on the NFS client system so that the
/mnt/nfs1 filesystem is mounted with the rsize=8192,wsize=8192
options, as shown below (the entry appears on a single line, with no
space after the comma):
nfs-server-host:/nfs1 /mnt/nfs1 nfs rsize=8192,wsize=8192 0 0
If the block-size change works for you, keep rsize=8192,wsize=8192 (or
whatever you find optimal via further experiment) in the /etc/fstab line for the
/mnt/nfs1 definition.
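After each change to /etc/fstab, remount the filesystem and confirm the options
took effect before rerunning the timing tests. For example:
umount /mnt/nfs1
mount /mnt/nfs1
mount | grep /mnt/nfs1
The last command should echo the mount back with the rsize and wsize values
you set.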
three deciles, you may want to increase the number of nfsd instances. To change
the number of NFS daemons started when your server boots up, do the following:
1. su to root.
2. Stop nfsd using the /etc/rc.d/init.d/nfs stop command if you run it
currently.
3. Modify the /etc/rc.d/init.d/nfs script so that RPCNFSDCOUNT=8 is set
to an appropriate number of NFS daemons.
4. Start nfsd using the /etc/rc.d/init.d/nfs start command.
1. su to root.
2. Stop nfsd using the /etc/rc.d/init.d/nfs stop command if you run it
currently.
3. Modify the /etc/rc.d/init.d/nfs script so that just before the NFS dae-
mon (nfsd) is started using the daemon rpc.nfsd $RPCNFSDCOUNT line,
the following lines are added:
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max
4. Right after the daemon rpc.nfsd $RPCNFSDCOUNT line, add the following
lines:
echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/rmem_max
Now each NFS daemon started by the /etc/rc.d/init.d/nfs script gets 32K of
buffer space in the socket input queue: the 262,144-byte (256K) value divided
among the eight daemons (RPCNFSDCOUNT=8) works out to 32K each, and the second
pair of echo commands restores the 64K value for processes started afterward.
Because the NFS protocol uses fragmented UDP packets, the high and low
thresholds Linux uses for reassembling IP fragments matter to NFS performance.
◆ You can view the current values of the high and low thresholds by running:
cat /proc/sys/net/ipv4/ipfrag_high_thresh
cat /proc/sys/net/ipv4/ipfrag_low_thresh
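If you determine that fragments are being dropped under NFS load, you can raise
the thresholds the same way the socket buffer sizes were set earlier. The values
below are purely illustrative, not recommendations:
echo 524288 > /proc/sys/net/ipv4/ipfrag_high_thresh
echo 393216 > /proc/sys/net/ipv4/ipfrag_low_thresh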
Summary
In this chapter you learned to tune the Samba and NFS servers.
Part III
System Security
CHAPTER 8
Kernel Security
CHAPTER 9
Securing Files and Filesystems
CHAPTER 10
PAM
CHAPTER 11
OpenSSL
CHAPTER 12
Shadow Passwords and OpenSSH
CHAPTER 13
Secure Remote Passwords
CHAPTER 14
Xinetd
Chapter 8
Kernel Security
IN THIS CHAPTER
THIS CHAPTER PRESENTS kernel- or system-level techniques that enhance your over-
all system security. I cover the Linux Intrusion Detection System (LIDS) and Libsafe,
which transparently protect your Linux programs against common stack attacks.
LIDS enhances system security by reducing the root user’s power. LIDS also
implements a low-level security model — in the kernel — for the following purposes:
◆ Security protection
◆ Incident detection
◆ Incident-response capabilities
LIDS can detect when someone scans your system using port scanners — and
inform the system administrator via e-mail. LIDS can also notify the system admin-
istrator whenever it notices any violation of imposed rules — and log detailed mes-
sages about the violations (in LIDS-protected, tamper-proof log files). LIDS can not
only log and send e-mail about detected violations, it can even shut down an
intruder's interactive session.
I use LIDS 1.0.5 for kernel 2.4.1 in the instructions that follow. Make sure you
change the version numbers as needed. Extract the LIDS source distribution
in the /usr/local/src directory using the tar xvzf lids-1.0.5-
2.4.1.tar.gz command from the /usr/local/src directory. Now you
can patch the kernel.
Make sure that /usr/src/linux points to the latest kernel source distrib-
ution that you downloaded.You can simply run ls -l /usr/src/linux to
see which directory the symbolic link points to. If it points to an older kernel
source, remove the link using rm -f /usr/src/linux and re-link it using
ln -s /usr/src/linux-version /usr/src/linux, where version is
the kernel version you downloaded. For example, ln -s /usr/src/
linux-2.4.1 /usr/src/linux links the latest kernel 2.4.1 source to
/usr/src/linux.
Patching, compiling, and installing the kernel with LIDS. Before you can use
LIDS in your system you need to patch the kernel source, then compile and install
the updated kernel. Here is how you can do that:
Instead of using make menuconfig command, you can also use the make
config command to configure the kernel.
4. From the main menu, select the Code maturity level options submenu
and choose the Prompt for development and/or incomplete code/
drivers option by pressing the spacebar key; then exit this submenu.
5. Select the Sysctl support from the General setup submenu; then exit
the submenu.
6. From the main menu, select the Linux Intrusion Detection System
submenu.
This submenu appears only if you have completed Steps 4 and 5, at the
bottom of the main menu; you may have to scroll down a bit.
7. From the LIDS submenu, select the Linux Intrusion Detection System
support (EXPERIMENTAL) (NEW) option.
The default limits for managed, protected objects, ACL subjects/objects, and
protected processes should be fine for most systems. You can leave them
as is.
LIDS is enabled during bootup (as described later in the chapter), so it’s likely
that you will run other programs before running LIDS. When you select this
option, however, you can also disable execution of unprotected programs
altogether using the Do not execute unprotected programs
before sealing LIDS option. I don’t recommend that you disable
unprotected programs completely during bootup unless you are absolutely
sure that everything you want to run during boot (such as the utilities and
daemons) is protected and doesn’t stop the normal boot process.
Leave the default 60-second delay between logging of two identical entries.
Doing so helps preserve your sanity by limiting the size of the log file; the
delay keeps identical log entries from being written too rapidly.
10. Select the Port Scanner Detector in kernel option and the Send
security alerts through network option. Don’t change the default
values for the second option.
11. Save your kernel configuration and run the following commands to
compile the new kernel and its modules (if any).
make depend
make bzImage
make modules
make modules_install
If you aren't compiling a newer kernel version than what is running on the
system, back up the /lib/modules/current-version directory
(where current-version is the current kernel version). For example, if
you are compiling 2.4.1 and you already have 2.4.1 running, then run the
cp -r /lib/modules/2.4.1 /lib/modules/2.4.1.bak command
to back-up the current modules. In case of a problem with the new kernel,
you can delete the broken kernel’s modules and rename this directory with
its original name.
If /dev/hda1 isn’t the root device, make sure you change it as appropriate.
14. Run /sbin/lilo to reconfigure LILO.
When the LILO is reconfigured, the kernel configuration is complete.
Here’s how to compile and install the LIDS administrative program lidsadm.
1. Assuming that you have installed the LIDS source in the /usr/local/src
directory, change to /usr/local/src/lids-1.0.5-2.4.1/lidsadm-1.0.5.
2. Run the make command, followed by the make install command.
These commands perform the following actions:
■ Install the lidsadm program in /sbin.
■ Create the necessary configuration files (lids.cap, lids.conf,
lids.net, lids.pw) in /etc/lids.
3. Run the /sbin/lidsadm -P command and enter a password for the LIDS
system.
This password is stored in the /etc/lids/lids.pw file, hashed with the
RIPEMD-160 algorithm.
4. Run the /sbin/lidsadm -U command to update the inode/dev numbers.
5. Configure the /etc/lids/lids.net file. A simplified default
/etc/lids/lids.net file is shown in Listing 8-1.
You don’t need a real mail account for the from address. The MAIL_TO
option should be set to the e-mail address of the administrator of the sys-
tem being configured. Because the root address, root@localhost, is the
default administrative account, you can leave it as is. The MAIL_SUBJECT
option is obvious and should be changed as needed.
6. Run the /sbin/lidsadm -L command, which should show output like the
following:
LIST
Subject ACCESS TYPE Object
-----------------------------------------------------
Any File READ /sbin
Any File READ /bin
Any File READ /boot
Any File READ /lib
Any File READ /usr
Any File DENY /etc/shadow
/bin/login READ /etc/shadow
/bin/su READ /etc/shadow
Any File APPEND /var/log
Any File WRITE /var/log/wtmp
/sbin/fsck.ext2 WRITE /etc/mtab
Any File WRITE /etc/mtab
Any File WRITE /etc
/usr/sbin/sendmail WRITE /var/log/sendmail.st
/bin/login WRITE /var/log/lastlog
/bin/cat READ /home/xhg
Any File DENY /home/httpd
/usr/sbin/httpd READ /home/httpd
Any File DENY /etc/httpd/conf
/usr/sbin/httpd READ /etc/httpd/conf
/usr/sbin/sendmail WRITE /var/log/sendmail.st
/usr/X11R6/bin/XF86_SVGA NO_INHERIT RAWIO
/usr/sbin/in.ftpd READ /etc/shadow
/usr/sbin/httpd NO_INHERIT HIDDEN
7. Add the following line to the /etc/rc.d/rc.local file to seal the kernel
during the end of the boot cycle:
/sbin/lidsadm -I
When the system boots and runs the /sbin/lidsadm -I command from the
/etc/rc.d/rc.local script, it seals the kernel and the system is protected by LIDS.
Administering LIDS
After you have your LIDS-enabled Linux system in place, you can modify your ini-
tial settings as the needs of your organization change. Except for the
/etc/lids/lids.net file, you must use the /sbin/lidsadm program to modify
the LIDS configuration files: /etc/lids/lids.conf, /etc/lids/lids.pw, and
/etc/lids/lids.cap.
After you make changes in a LIDS configuration file (using the lidsadm com-
mand), reload the updated configuration into the kernel by running the /sbin/
lidsadm -S -- + RELOAD_CONF command.
To add a new ACL in the /etc/lids/lids.conf file, use the /sbin/lidsadm
command like this:
/sbin/lidsadm -A [-s subject] -o object -j TARGET
◆ The -o object option specifies the name of the object, which can be one
of the following:
■ File
■ Directory
■ Capability
Each ACL requires a named object.
◆ The -j TARGET option specifies the target of the ACL.
■ When the new ACL has a file or directory as the object, the target can
be READ, WRITE, APPEND, DENY, or IGNORE.
■ If the object is a Linux capability, the target must be either INHERIT or
NO_INHERIT. This defines whether the object’s children can have the
same capability.
For example, the following command denies all programs any access to the
/etc/shadow file:
/sbin/lidsadm -A -o /etc/shadow -j DENY
No program can write to the file or directory. Because you don't specify a
subject in such a command, the ACL applies to all programs.
After you run the preceding command and the LIDS configuration is reloaded,
you can run commands such as ls -l /etc/shadow and cat /etc/shadow to
check whether you can access the file. None of these programs can see the file
because we implicitly specified the subject as all the programs in the system.
However, if a program such as /bin/login should access the /etc/shadow file,
you can allow it to have read access by creating a new ACL, as in the following
command:
/sbin/lidsadm -A -s /bin/login -o /etc/shadow -j READ
DELETING AN ACL To delete all the ACL rules, run the /sbin/lidsadm -Z com-
mand. To delete an individual ACL rule, simply specify the subject (if any) and/or
the object of the ACL. For example, if you run /sbin/lidsadm -D -o /bin com-
mand, all the ACL rules with /bin as the object are deleted. However, if you run
/sbin/lidsadm -D -s /bin/login -o /bin, then only the ACL that specifies
/bin/login as the subject and /bin as the object is deleted.
Specifying the -Z option or the -D option without any argument deletes all
your ACL rules.
Apart from protecting your files and directories using the preceding technique,
LIDS can use the Linux Capabilities to limit the capabilities of a running program
(that is, process). In a traditional Linux system, the root user (that is, a user with
UID and GID set to 0) has all the “Capabilities” or ability to perform any task by
running any process. LIDS uses Linux Capabilities to break down all the power of
the root (or processes run by root user) into pieces so that you can fine-tune the
capabilities of a specific process. To find more about the available Linux
Capabilities, see the /usr/include/linux/capability.h header file. Table 8-1
lists all Linux Capabilities and their status (on or off) in the default LIDS
Capabilities configuration file /etc/lids/lids.cap.
The default settings for the Linux Capabilities that appear in Table 8-1 are stored
in the /etc/lids/lids.cap file, as shown in Listing 8-2.
The + sign enables the capability; the - sign disables it. For example, in the pre-
ceding listing, the last Linux Capability called CAP_INIT_KILL is enabled, which
means that a root-owned process could kill any child process (typically daemons)
created by the init process. Using a text editor, enable or disable the Linux
Capabilities you want.
For example, to disable CAP_INIT_KILL, change its line to read:
-30:CAP_INIT_KILL
After you have reloaded the LIDS configuration (using the /sbin/lidsadm -S --
+ RELOAD_CONF command) or rebooted the system and sealed the kernel (using the
/sbin/lidsadm -I command), the new capability settings take effect.
A process granted the LIDS hidden capability is labeled as hidden in the kernel
and can't be found using any user-land tools such as ps or top, or even by
exploring files in the /proc filesystem.
The CAP_SYS_RAWIO capability controls raw I/O access, including:
◆ ioperm/iopl calls
◆ /dev/port
◆ /dev/mem
◆ /dev/kmem
For example, when this capability is off (as in the default) the /sbin/lilo
program can’t function properly because it needs raw device-level access to the
hard disk.
But some special programs may need this capability to run properly, such as
XF86_SVGA. In this case, we can add the program to the exception list like this:
/sbin/lidsadm -A -s /usr/X11R6/bin/XF86_SVGA -o CAP_SYS_RAWIO -j INHERIT
This gives XF86_SVGA the CAP_SYS_RAWIO capability while other programs
remain unable to obtain CAP_SYS_RAWIO.
◆ Multicasting
The default setting (this capability is turned off) is highly recommended. If you
need to perform one of the preceding tasks, simply take down LIDS temporarily
using the /sbin/lidsadm -S -- -LIDS command.
PROTECTING THE LINUX IMMUTABLE FLAG FOR FILES The ext2 filesystem has
an extended feature that can flag a file as immutable. This is done using the chattr
command. For example, the chattr +i /path/to/myfile command turns /path/to/myfile
into an immutable file. A file with the immutable attribute can’t be modified or
deleted or renamed, nor can it be symbolically linked. However, the root user can
change the flag by using the chattr -i /path/to/myfile command. Now, you
can protect immutable files even from the super user (root) by disabling the
CAP_LINUX_IMMUTABLE capability.
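For example, you can set the flag, verify it with lsattr, and remove it again
(as root, while CAP_LINUX_IMMUTABLE is still enabled):
chattr +i /path/to/myfile
lsattr /path/to/myfile
chattr -i /path/to/myfile
After you disable CAP_LINUX_IMMUTABLE in /etc/lids/lids.cap and seal the
kernel, the final chattr -i command fails even for root.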
DETECTING SCANNERS If you have enabled the built-in port scanner during
kernel compilation as recommended in the Patching, compiling, and installing the
kernel with LIDS section, you can detect port scanners. This scanner can detect
half-open scan, SYN stealth port scan, Stealth FIN, Xmas, Null scan, and so on. The
detector can spot such tools as Nmap and Satan — and it’s useful when the raw
socket (CAP_NET_RAW) is disabled.
RESPONDING TO AN INTRUDER
When LIDS detects a violation of any ACL rule, it can respond to the action by the
following methods:
◆ Logging the message. When someone violates an ACL rule, LIDS logs a
message using the kernel log daemon (klogd).
◆ Sending e-mail to appropriate authority. LIDS can send e-mail when a
violation occurs. This feature is controlled by the /etc/lids/lids.net
file.
◆ Hanging up the console. If you have enabled this option during kernel
patching for LIDS (as discussed in Step 9 in the section called “Patching,
compiling, and installing the kernel with LIDS”), the console is dropped
when a user violates an ACL rule.
You can address this problem by using a dynamically loadable library called
libsafe. The libsafe approach has distinctive advantages, as described below.
The libsafe solution is based on a middleware software layer that intercepts all
function calls made to library functions that are known to be vulnerable. In response
to such calls, libsafe creates a substitute version of the corresponding function to
carry out the original task — but in a manner that contains any buffer overflow within
the current stack frame. This strategy prevents attackers from “smashing” (overwrit-
ing) the return address and hijacking the control flow of a running program.
libsafe can detect and prevent several known attacks, but its real benefit is that
it can prevent yet unknown attacks — and do it all with negligible performance
overhead.
That said, most network-security professionals accept that fixing defective (vul-
nerable) programs is the best solution to buffer-overflow attacks — if you know that
a particular program is defective. The true benefit of using libsafe and other alter-
native security measures is protection against future buffer overflow attacks on
programs that aren’t known to be vulnerable.
libsafe intercepts the following standard C library functions, each of which
can overflow a caller-supplied buffer:
strcpy(char *dest, const char *src)          May overflow the dest buffer
strcat(char *dest, const char *src)          May overflow the dest buffer
getwd(char *buf)                             May overflow the buf buffer
gets(char *s)                                May overflow the s buffer
[vf]scanf(const char *format, ...)           May overflow its arguments
realpath(char *path, char resolved_path[])   May overflow the path buffer
www.research.avayalabs.com/project/libsafe
To use libsafe, download the latest version (presently 2.0) and extract it into
the /usr/local/src directory. Then follow these steps:
3. Before you can use libsafe, you must set the LD_PRELOAD environment
variable for each of the processes you want to protect with libsafe. Simply
add the following lines to your /etc/bashrc script:
LD_PRELOAD=/lib/libsafe.so.1
export LD_PRELOAD
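Because the dynamic loader honors LD_PRELOAD when ldd resolves a binary's
libraries, one quick way to confirm the setup — assuming the library was
installed as /lib/libsafe.so.1, as above — is:
LD_PRELOAD=/lib/libsafe.so.1 ldd /bin/cat | grep libsafe
If libsafe.so.1 appears in the output, processes started with this environment
are wrapped by the library.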
After adding libsafe protection for your processes, use your programs as you
would normally. libsafe transparently checks the parameters of supported unsafe
functions; if a violation is detected, libsafe responds as described later in
this section.
For greater security, the dynamic loader disregards environmental variables such
as LD_PRELOAD when it executes set-UID programs. However, you can still use
libsafe with set-UID programs if you use one of the following two methods:
◆ If you have a version of ld.so that’s more recent than 1.9.0, you can set
LD_PRELOAD to contain only the base name libsafe.so.1 without having
to include the directory.
If you use this approach, the file is found if it’s in the shared library path
(which usually contains /lib and /usr/lib).
Because the search is restricted to the library search path, this also works
for set-UID programs.
◆ You can instead list libsafe.so.1 in the /etc/ld.so.preload file. This
line makes libsafe easier to turn off if something goes wrong.
After you have installed libsafe and appropriately configured either LD_PRELOAD
or /etc/ld.so.preload, libsafe is ready to run; it monitors processes without
requiring any changes to them.
If a process attempts to use one of the monitored functions to overflow a buffer
on the stack, the following actions happen immediately:
◆ A violation is declared.
◆ A core dump and a stack dump are produced, provided the corresponding
options were enabled during compilation. (See the libsafe/INSTALL file.)
Programs written in C have always been plagued by buffer overflows. Two
reasons contribute to this: the language itself performs no bounds checking on
arrays, and many standard library functions (such as strcpy and gets, listed
earlier) write to buffers without knowing their size.
libsafe in action
libsafe uses a novel method to detect and handle buffer-overflow attacks. Without
requiring source code, it can transparently protect processes against stack-smashing
attacks — even on a system-wide basis — by intercepting calls to vulnerable library
functions, substituting overflow-resistant versions of such functions, and restricting
any buffer overflow to the current stack frame.
The key to using libsafe effectively is to estimate a safe upper limit on the size
of buffers — and to instruct libsafe to impose it automatically. This estimation
can’t be performed at compile time; the size of the buffer may not yet be known
then. For the most realistic estimate of a safe upper limit, calculate the buffer size
after the start of the function that makes use of the buffer. This method can help
you determine the maximum buffer size by preventing such local buffers from
extending beyond the end of the current stack frame — thus enabling the substitute
version of the function to limit how much data a process may write to the buffer
without exceeding the estimated buffer size. When the return address from that
function (which is located on the stack) can’t be overwritten, control of the process
can’t be commandeered.
Summary
LIDS is a great tool to protect your Linux system from intruders. Since LIDS is a
kernel level intrusion protection scheme, it is hard to defeat using traditional hack-
ing tricks. In fact, a sealed LIDS system is very difficult to hack. Similarly, a system
with Libsafe support can protect your programs against buffer overflow attacks,
which are the most common exploitations of weak server software. By implement-
ing LIDS and Libsafe on your system, you are taking significant preventive mea-
sures against attacks. These two tools significantly enhance overall system security.
Chapter 9
FILES are at the heart of modern computing. Virtually everything you do with a com-
puter these days creates, accesses, updates, or deletes files in your computer or on a
remote server. When you access the Web via your PC, you access files. It doesn’t mat-
ter if you access a static HTML page over the Web or run a Java Servlet on the server,
everything you do is about files. A file is the most valuable object in computing.
Unfortunately, most computer users don’t know how to take care of their files.
For example, hardly anyone takes a systematic, process-oriented approach to stor-
ing files by creating a manageable directory hierarchy. Often over the past decade I
have felt that high schools and colleges should offer courses to teach everyone to
manage computer files.
Although lack of organization in file management impedes productivity, it isn’t
the only problem with files. Thanks to many popular personal operating systems
from one vendor, hardly anyone with a PC knows anything about file security.
When users migrate from operating systems such as MS-DOS and Windows 9x,
they are 100 percent unprepared to understand how files work on Linux or other
Unix/Unix-like operating systems. This lack of understanding can become a serious
security liability, so this chapter introduces file and directory permissions in terms
of their security implications. I also examine technology that helps reduce the secu-
rity risks associated with files and filesystems.
When the permissions of a file are set incorrectly, the result is a probable security problem for the user or the system. It's
very important that everyone — both user and system administrator — understand
file permissions in depth. (If you already do, you may want to skim or skip the next
few sections.)
Number of links    1
Group              intranet
Filename           milkyweb.txt
Here the milkyweb.txt file is owned by a user called sheila. She is the only
regular user who can change the access permissions of this file. The only other user
who can change the permissions is the superuser (that is, the root account). The
group for this file is intranet. Any user who belongs to the intranet group can
access (read, write, or execute) the file under current group permission settings
(established by the owner).
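An ls -l line consistent with these fields might look like the following (the permission bits, size, and date are hypothetical):
-rw-rw----    1 sheila    intranet    2048 Dec 15 10:30 milkyweb.txt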
To become a file owner, a user must create the file. Under Red Hat Linux, when
a user creates a file or directory, its group is also set to the default group of the user
(which is the private group with the same name as the user). For example, say that
I log in to my Red Hat Linux system as kabir and (using a text editor such as vi)
create a file called todo.txt. If I run the ls -l todo.txt command, output like the following appears (the size and date are illustrative):
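-rw-rw-r--    1 kabir    kabir    4848 Feb  6 21:21 todo.txt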
As you can see, the file owner and the group name are the same; under Red Hat
Linux, user kabir’s default (private) group is also called kabir. This may be con-
fusing, but it’s done to save you some worries, and of course you can change this
behavior quite easily. Under Red Hat Linux, when a user creates a new file, the following attributes apply: the user becomes the file's owner, the file's group is set to the user's default (private) group, and the permission bits are derived from the current umask setting.
For example:
chown sheila kabirs_plans.txt
This command makes user sheila the new owner of the file kabirs_plans.txt.
If the superuser would also like to change the group for a file or directory, she
can use the chown command like this:
chown newuser:newgroup filename
For example:
chown sheila:admin kabirs_plans.txt
The preceding command not only makes sheila the new owner, but also resets
the group of the file to admin.
If the superuser wants to change the user and/or the group ownership of all the
files or directories under a given directory, she can use the -R option to run the chown command in recursive mode. For example:
chown -R sheila:admin /home/kabir/plans/
The preceding command changes the user and group ownership of the
/home/kabir/plans/ directory — and all the files and subdirectories within it.
Although you must be the superuser to change the ownership of a file, you can
still change a file or directory's group as a regular user using the chgrp command. For example:
chgrp httpd *.html
If I run the preceding command to change the group for all the HTML files in a directory, I must also be a member of the httpd group. You can find out which groups you are
in using the groups command without any argument. Like the chown command,
chgrp uses -R to recursively change group names of files or directories.
Octal Binary
0 000
1 001
2 010
3 011
4 100
5 101
6 110
7 111
When a permission value has fewer than four digits, the missing leading digits are treated as zeros.
Table 9-3 shows a few example permission values that use octal digits.
0400 Only read (r) permission for the file owner. This is equivalent to
400, where the missing octal digit is treated as a leading zero.
0440 Read (r) permission for both the file owner and the users in the
group. This is equivalent to 440.
To come up with a suitable permission setting, first determine what access the user,
the group, and everyone else should have and consider if the set-UID, set-GID, or
sticky bit is necessary. After you have determined the need, you can construct each
octal digit using 4 (read), 2 (write), and 1 (execute), or construct a custom value by
adding any of these three values. Although using octal numbers to set permissions
may seem awkward at the beginning, with practice their use can become second
nature.
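For example, to grant the owner full access, the group read and execute access, and others no access to a hypothetical script:
chmod 750 myscript.pl
Here the owner's digit is 7 (4 + 2 + 1), the group's digit is 5 (4 + 1), and the last digit is 0 (no access for others).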
◆ Whom does the permission affect? You have the following choices:
■ u (user)
■ g (group)
■ o (others)
■ a (all)
◆ What permission type should you set? You have the following choices:
■ r (read)
■ w (write)
■ x (execute)
■ s (set-UID or set-GID)
■ t (sticky bit)
◆ What is the action type? Are you setting the permission or removing it?
When setting the permissions, + specifies an addition and – specifies a
removal.
For example, a permission string such as u+r allows the file owner read
access to the file. A permission string such as a+rx allows everyone to
read and execute a file. Similarly, u+s makes a file set-UID; g+s makes it
set-GID.
chmod 755 *.pl
The preceding command changes permissions for files ending with the extension .pl. It sets read, write, and execute permissions for each .pl file (7 = 4 [read] + 2 [write] + 1 [execute]) and grants them to the file's owner. The command also sets the files as readable and executable (5 = 4 [read] + 1 [execute]) by the group and others.
You can accomplish the same using the string method, like this:
chmod a+rx,u+w *.pl
Here a+rx allows read (r) and execute (x) permissions for all (a), and u+w allows
the file owner (u) to write (w) to the file.
Remember these rules for multiple access strings: separate the strings with commas (as in a+rx,u+w), and don't put any spaces between them.
If you want to change permissions for all the files and subdirectories within a
directory, you can use the -R option to perform a recursive permission operation.
For example:
chmod -R 750 /www/mysite
Here the 750 octal permission is applied to all the files and subdirectories of the
/www/mysite directory.
The permission settings for a directory are like those for regular files, but not
identical. Here are some special notes on directory permissions:
◆ If you have write permission for a directory, you can create, delete, or
modify any files or subdirectories within that directory — even if someone
else owns the file or subdirectory.
Linux supports two types of links:
◆ Hard
◆ Soft (symbolic)
Here I discuss the special permission issues that arise from links.
Now, if the root user creates a hard link (using the command ln todo.txt plan) called plan for todo.txt, the ls -l output looks like this (continuing the illustrative listing from earlier):
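-rw-rw-r--    2 kabir    kabir    4848 Feb  6 21:21 plan
-rw-rw-r--    2 kabir    kabir    4848 Feb  6 21:21 todo.txt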
As you can see, the hard link, plan, and the original file (todo.txt) have the same file size (the size column of the listing) and also share the same permission and ownership settings. Now, if the root user runs the following command:
chown sheila plan
It gives the ownership of the hard link to a user called sheila; will it work as
usual? Take a look at the ls -l output after the preceding command (again illustrative):
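-rw-rw-r--    2 sheila    kabir    4848 Feb  6 21:21 plan
-rw-rw-r--    2 sheila    kabir    4848 Feb  6 21:21 todo.txt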
As you can see, the chown command changed the ownership of plan, but the
ownership of todo.txt (the original file) has also changed. So when you change
the ownership or permissions of a hard link, the effect also applies to the original
file.
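Now suppose the root user removes the hard link and re-creates plan as a symbolic link instead, using the ln -s todo.txt plan command. An illustrative ls -l listing of the two files:
lrwxrwxrwx    1 root      root        8 Feb  6 21:30 plan -> todo.txt
-rw-rw-r--    1 sheila    kabir    4848 Feb  6 21:21 todo.txt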
Here you can see that the plan file is a symbolic (soft) link for todo.txt. Now,
suppose the root user changes the symbolic link's ownership, like this:
chown kabir plan
The question is, can user kabir write to todo.txt using the symbolic link
(plan)? The answer is no, unless the directory in which these files are stored is
owned by kabir. So changing a soft link’s ownership doesn’t work in the same way
as with hard links. If you change the permission settings of a soft link, however, the file it points to gets the new settings, as in this example (the mode shown is illustrative):
chmod 666 plan
This changes the todo.txt file's permission, as an ls -l listing then confirms (todo.txt shows -rw-rw-rw- in this example).
So be cautious with links; the permission and ownership settings on these spe-
cial files are not intuitive.
lazyppl:x:666:netrat,mkabir,mrfrog
Here the user group called lazyppl has three users (netrat, mkabir, mrfrog) as
members.
By default, Red Hat Linux supplies a number of user groups, many of which
don’t even have a user as a member. These default groups are there for backward
compatibility with some programs that you may or may not install. For example,
the Unix-to-Unix Copy (uucp) program can use the uucp group in /etc/group, but you probably aren't going to use uucp to copy files over the Internet; you're more likely to use the FTP program instead.
When you create a user called mrfrog, Red Hat Linux also creates a private user group entry like the following in /etc/group:
mrfrog:x:505:
This group is used whenever mrfrog creates files or directories. But you may
wonder why mrfrog needs a private user group like that when he already owns
everything he creates. The answer, again, has security ramifications: The group pre-
vents anyone else from reading mrfrog’s files. Because all files and directories cre-
ated by the mrfrog user allow access only to their owner (mrfrog) and the group
(again mrfrog), no one else can access his files.
Suppose you create a user group called webmaster:
webmaster:x:508:
After you add the users mrfrog, kabir, and sheila to this group, the entry looks like this:
webmaster:x:508:mrfrog,kabir,sheila
Now you can change the group ownership of the /www/public/htdocs directory, using the chown :webmaster /www/public/htdocs command.
If you want to change the group ownership for all subdirectories under the
named directory, use the -R option with the chown command.
Now the three users can access files in that directory only if the file-and-directory
permissions allow the group users to view, edit, and delete files and directories. To
make sure they can, run the chmod 770 /www/public/htdocs command. Doing so
allows them read, write, and execute permission for this directory. However, when
any one of them creates a new file in this directory, it is accessible only by that per-
son; Red Hat Linux automatically sets the file’s ownership to the user and group
ownership to the user’s private group. For example, if the user kabir runs the touch
myfile.txt command to create an empty file, the permission setting for this file is as shown in the following line (size and date illustrative):
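-rw-rw-r--    1 kabir    kabir    0 Dec 15 10:30 myfile.txt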
This means that the other two users in the webmaster group can read this file
because of the world-readable settings of the file, but they can’t modify it or remove
it. Because kabir wants to allow them to modify or delete this file, he can run the
chgrp webmaster myfile.txt command to change the file's group, as shown in the following line:
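-rw-rw-r--    1 kabir    webmaster    0 Dec 15 10:30 myfile.txt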
Now everyone in the webmaster group can do anything with this file. Because
the chgrp command is cumbersome to run every time someone creates a new file,
you can simply set the SGID bit for the directory by using the chmod 2770
/www/public/htdocs command. This setting appears as the following (ownership and date illustrative) when the ls -l command is run from the /www/public directory:
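drwxrws---    2 kabir    webmaster    4096 Dec 15 10:30 htdocs
The s in the group execute position confirms that the SGID bit is set.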
If any of the webmaster members creates a file in the htdocs directory, the SGID setting ensures that the new file belongs to the webmaster group, which then has read and write permissions to the file by default.
When you work with users and groups, back up original files before making
any changes. This saves you a lot of time if something goes wrong with the
new configuration. You can simply return to the old configurations by
replacing the new files with old ones. If you modify the /etc/group file
manually, make sure you have a way to check for the consistency of informa-
tion between the /etc/group and /etc/passwd files.
Checking Consistency
of Users and Groups
Many busy and daring system administrators manage the /etc/group and /etc/passwd files manually, using an editor such as vi or emacs. This practice is very common and quite dangerous. I recommend that you use the useradd, usermod, and userdel commands to create, modify, and delete users and the groupadd, groupmod, and groupdel commands to create, modify, and delete user groups.
When you use these tools to manage your user and group files, you should end up
with a consistent environment where all user groups and users are accounted for.
However, if you ever end up modifying these files by hand, watch for inconsistencies
that can become security risks or at least create a lot of confusion. Also, many system
administrators get in the habit of pruning these files every so often to ensure that no
unaccounted user group or user is in the system. Doing this manually every time is
very unreliable. Unfortunately, no Red Hat-supplied tool exists that can ensure that
you don’t break something when you try to enhance your system security. This
bugged me enough times that I wrote a Perl script called chk_pwd_grp.pl, shown in
Listing 9-1, that performs the following consistency checks:
◆ Detects multiple uses of the same username or UID in /etc/passwd
◆ Detects users listed in /etc/group who don't exist in /etc/passwd
◆ Detects non-standard user groups that don't have any members
◆ Detects users in /etc/passwd who belong to an invalid (nonexistent) group
# Open file
open(PWD, $passwdFile) || die "Can't read $passwdFile $!\n";
# Declare variables
my (%userByUIDHash, %uidByGIDHash,
%uidByUsernameHash, $user,$uid,$gid);
# Set line count
my $lineCnt = 0;
# Parse the file and stuff hashes
while(<PWD>){
chomp;
$lineCnt++;
# Parse the current line
($user,undef,$uid,$gid) = split(/:/);
# Detect duplicate usernames
if (defined $userByUIDHash{$uid} &&
$user eq $userByUIDHash{$uid}) {
warn("Warning! $passwdFile [Line: $lineCnt] : " .
"multiple occurrence of username $user detected\n");
# Detect duplicate UIDs
} elsif (defined $userByUIDHash{$uid}) {
warn("Warning! $passwdFile [Line: $lineCnt] : " .
"UID ($uid) has been used for user $user " .
"and $userByUIDHash{$uid}\n");
}
$userByUIDHash{$uid} = $user;
$uidByGIDHash{$gid} = $uid;
$uidByUsernameHash{$user} = $uid;
}
close(PWD);
return(\%userByUIDHash, \%uidByGIDHash, \%uidByUsernameHash);
}
sub get_group_info {
my ($groupFile, $userByUIDRef, $uidByGIDRef, $uidByUsernameRef) = @_;
open(GROUP, $groupFile) || die "Can't read $groupFile $!\n";
my (%groupByGIDHash,
%groupByUsernameHash,
%groupByUserListHash,
%groupBySizeHash,
%gidByGroupHash,
$group,$gid,
$userList);
my $lineCnt = 0;
while(<GROUP>){
chomp;
$lineCnt++;
Warning! /etc/passwd [Line: 2] : UID (0) has been used for user hacker and root
Warning! /etc/group [Line: 3] : user xyz does not exist in /etc/passwd
Warning! /etc/group : Non-standard user group testuser does not have any member.
Warning! /etc/passwd : user hacker belongs to an invalid group (GID=666)
Total users : 27
Total groups : 40
Private user groups : 14
Public user groups : 11
GROUP TOTAL
===== =====
daemon 4
bin 3
adm 3
sys 3
lp 2
disk 1
wheel 1
root 1
news 1
mail 1
uucp 1
I have many warnings, as shown in the first few lines of the preceding output. Most
of these warnings need immediate action:
◆ The /etc/passwd file (line #2) has a user called hacker who uses the
same UID (0) as root.
This is definitely very suspicious, because UID (0) grants root privilege!
This should be checked immediately.
◆ A user called xyz (found in /etc/group line #3) doesn’t even exist in the
/etc/passwd file.
This means there is a group reference to a user who no longer exists. This
is definitely something that has potential security implications so it also
should be checked immediately.
◆ A non-standard user group called testuser exists that doesn’t have any
members.
A non-standard user group is any group that isn't among the default groups that Red Hat Linux supplies in /etc/group.
◆ The script also reports the current group and account information in a
simple text report, which can be very useful to watch periodically.
When you run chk_pwd_grp.pl as a weekly cron job, you receive this user and group consistency report by e-mail at the address set in the script's ADMIN variable.
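A minimal way to schedule the weekly run (the installation path is hypothetical) is a short wrapper in /etc/cron.weekly:
#!/bin/sh
# /etc/cron.weekly/chk_pwd_grp: weekly user/group consistency check
/usr/local/bin/chk_pwd_grp.pl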
Before you can enhance file and directory security, familiarize yourself with the directory scheme Red Hat Linux follows. This helps you plan and manage files and directories.
◆ After you have commented this line out by placing a # character in front
of the line, you can create a new line like this:
LABEL=/usr /usr ext2 ro,suid,dev,auto,nouser,async 1 2
◆ The new fstab line for /usr simply tells mount to load the filesystem
using ro,suid,dev,auto,nouser, and async mount options. The defaults
option in the commented-out version expanded to rw,suid,dev,auto,
nouser, and async. Here you are simply replacing rw (read-write) with ro
(read-only).
◆ Reboot your system from the console and log in as root.
◆ Change directory to /usr and try to create a new file using a command
such as touch mynewfile.txt in this directory. You should get an error
message such as the following:
touch: mynewfile.txt: Read-only filesystem
◆ As you can see, you can no longer write to the /usr partition even with a
root account, which means it isn’t possible for a hacker to write there
either.
Whenever you need to install some software in a directory within /usr, you
can comment out the new /usr line and uncomment the old one and reboot the
system. Then you can install the new software and simply go back to read-only
configuration.
If you don't want to modify /etc/fstab every time you write to /usr, you can simply make two versions of /etc/fstab, called /etc/fstab.usr-ro (this one has the read-only, ro, flag for the /usr line) and /etc/fstab.usr-rw (this one has the read-write, rw, flag for the /usr line), and use a symbolic link (created with the ln command) to point /etc/fstab at the desired version.
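For example, to switch to the read-only configuration before rebooting:
ln -sf /etc/fstab.usr-ro /etc/fstab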
The umask-setting segment in the default /etc/profile ensures that all users with UID > 14 get a umask setting of 002, and users with UID <= 14 (which includes root and the default system accounts such as ftp and operator) get a umask setting of 022. Because ordinary user UIDs start at 500 (set in /etc/login.defs; see UID_MIN), ordinary users all get 002, which translates into a 775 permission setting for new directories (664 for new files). This means that when an ordinary user creates a
file or directory, she has read, write, and execute for herself and her user group
(which typically is herself, too, if Red Hat private user groups are used) and the rest
of the world can read and execute her new file or change to her new directory. This
isn’t a good idea because files should never be world-readable by default. So I rec-
ommend that you do the following:
◆ Modify /etc/profile and change the umask 002 line to umask 007, so
that ordinary user files and directories have 770 as the default permission
settings. This file gets processed by the default shell /bin/bash. The
default umask for root is 022, which translates into a 755 permission mode. This is a really bad default value, so for all the users whose UID is 14 or less, change the umask to 077, which translates into a restrictive (that is, file-owner-only access) 700 permission mode. The modified code segment in /etc/profile looks like this:
if [ `id -gn` = `id -un` -a `id -u` -gt 14 ]; then
umask 007
else
umask 077
fi
◆ Modify the /etc/csh.login file and perform the preceding change. This
file is processed by users who use /bin/csh or /bin/tcsh login shells.
If you use the su command to become root, make sure you use su - instead of su without any argument. The - ensures that the new shell acts like a login shell of the new user (that is, root). In other words, the - option instructs the target shell (by default /bin/bash, unless you changed the shell using the chsh command) to load the appropriate configuration files, such as /etc/profile or /etc/csh.login.
When you run the world-writeables finder script as a cron job from /etc/cron.weekly, it sends e-mail to ADMIN every week (so don't forget to change root@localhost to a suitable e-mail address), listing all world-writeable files and directories, as well as all world-executable files.
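The script itself isn't reproduced here, but the heart of any such report is a set of find commands along these lines (a sketch, not the exact script):
find / -xdev -type f -perm -0002    # world-writeable files
find / -xdev -type d -perm -0002    # world-writeable directories
find / -xdev -type f -perm -0001    # world-executable files
The -xdev option keeps find on the local filesystem instead of descending into mounted network volumes.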
When you receive such e-mails, look closely; spot and investigate the files and
directories that seem fishy (that is, out of the ordinary). In the preceding example,
the rootkit directory and the hack.o in /tmp would raise a red flag for me; I would
investigate those files immediately. Unfortunately, there’s no surefire way to spot
suspects — you learn to suspect everything at the beginning and slowly get a work-
ing sense of where to look. (May the force be with you.)
In addition to world-writeables, keep an eye open for two other risky types of files: SUID and SGID files.
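Listing 9-7, later in this section, automates the search for them; the underlying idea is a find on the set-UID and set-GID permission bits, roughly like this:
find / -xdev -type f -perm -4000    # set-UID files
find / -xdev -type f -perm -2000    # set-GID files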
Consider the following script, setuid.pl:
#!/usr/bin/perl
use strict;
# Log file path
my $LOG_FILE = "/var/log/custom.log";
# Open log file
open(LOG,">>$LOG_FILE") || die "Can't open $LOG_FILE $!\n";
# Write an entry
print LOG "PID $$ $0 script was run by $ENV{USER}\n";
# Close log file
close(LOG);
# Exit program
exit 0;
This script simply writes a log entry in /var/log/custom.log file and exits.
When an ordinary user runs this script she gets the following error message:
Can’t open /var/log/custom.log Permission denied
The final line of Listing 9-5 shows that /var/log/custom.log can't be opened, which is not surprising, because the /var/log directory isn't writeable by an ordinary user; only root can write in that directory. But suppose the powers-that-be
require ordinary users to run this script. The system administrator has two dicey
alternatives:
◆ Opening the /var/log directory to ordinary users
◆ Setting the UID of the script to root and allowing ordinary users to run it
Because opening the /var/log directory to ordinary users is the greater of the two evils,
the system administrator (forced to support setuid.pl) goes for the set-UID
approach. She runs the chmod 5755 setuid.pl command to set the set-uid bit
for the script and allow everyone to run the script. When run by a user called kabir, the script writes an entry like the following in /var/log/custom.log (the PID varies per run):
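PID 1234 ./setuid.pl script was run by kabir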
When the C program in question, write2var.c (which performs a similar privileged write, to /var/log/test.log), is compiled using the gcc -o go write2var.c command, it runs as ./go from the command line. This program writes to /var/log/test.log if it's run as root, but must run as a set-UID program if an ordinary user is to run it. If this program is set-UID and its source code isn't available, the hacker can simply run the strings ./go command (or run the strace ./go command to investigate why a set-UID program was necessary) and try to exploit any weakness that shows up. For example, the strings ./go command
shows the following output:
/lib/ld-linux.so.2
__gmon_start__
libc.so.6
strcpy
__cxa_finalize
malloc
fprintf
__deregister_frame_info
fclose
stderr
fopen
_IO_stdin_used
__libc_start_main
fputs
__register_frame_info
GLIBC_2.1.3
GLIBC_2.1
GLIBC_2.0
PTRh
Cannot allocate memory to store filename.
/var/log/test.log
Cannot open the log file.
Wrote this line
Notice the /var/log/test.log line; even a not-so-smart hacker can figure out that this program reads or writes /var/log/test.log. Because this is a simple example, the hacker may not be able to do much with it, but at the least he can corrupt entries in the /var/log/test.log file by editing it manually. Similarly, a set-GID (SGID) program can run using its group privilege. The example ls -l output in the following listing (a representative file) shows a setuid and setgid file:
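-rwsr-sr-x    1 root    root    14536 Dec 15 10:30 /usr/local/bin/go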
Both the set-UID and the set-GID bits are represented by the s character in the owner and group execute positions. Listing 9-7 shows a script called find_suid_sgid.sh
that you can run from /etc/cron.weekly; it e-mails you an SUID/SGID report
every week.
Using chattr
The ext2 filesystem used for Red Hat Linux provides some unique features. One of
these features makes files immutable by even the root user. For example:
chattr +i filename
This command sets the i attribute of a file in an ext2 filesystem. This attribute
can be set or cleared only by the root user. So this attribute can protect against file
accidents. When this attribute is set, the following conditions apply:
◆ The file can't be modified, deleted, or renamed.
◆ No hard link can be created to the file.
◆ No data can be written to the file.
When you need to clear the attribute, you can run the following command:
chattr -i filename
Using lsattr
If you start using the chattr command, sometimes you notice that you can’t modify
or delete a file, although you have the necessary permission to do so. This happens if
you forget that earlier you set the immutable attribute of the file by using chattr —
and because this attribute doesn’t show up in the ls output, the sudden “freezing” of
the file content can be confusing. To see which files have which ext2 attributes, use
the lsattr program.
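For example, a quick session looks like this (the filename is hypothetical, and the exact flag-column width of the lsattr output varies by e2fsprogs version):
chattr +i important.conf
lsattr important.conf
----i-------  important.conf
chattr -i important.conf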
Unfortunately, what you know now about file and filesystem security may be old
news to informed bad guys with lots of free time to search the Web. Use of tools such
as chattr may make breaking in harder for the bad guy, but they don’t make your
files or filesystems impossible to damage. In fact, if the bad guy gets root-level
privileges, ext2 attributes provide just a simple hide-and-seek game.
# Create a MD5 digest for the data we read from the file
my $newDigest = get_digest($data);
# Write the digest to the checksum file for this input file
write_data_to_file($chksumFile, $newDigest);
# Create a new digest for the data read from the current
# version of the file
my $newDigest = get_digest($data);
sub get_data_from_file {
# Load data from a given file
#
my $filename = shift;
local $/ = undef;
open(FILE, $filename) || die "Can't read $filename $!\n";
my $data = <FILE>;
close(FILE);
return $data;
}
sub get_digest {
# Calculate a MD5 digest for the given data
#
my $data = shift;
my $ctx = Digest::MD5->new;
$ctx->add($data);
my $digest;
$digest = $ctx->digest;
#$digest = $ctx->hexdigest;
#$digest = $ctx->b64digest;
return $digest;
}
sub syntax {
# Print syntax
#
die "Syntax: $0 /dir/files\nLimited wild card supported.\n";
}
sub get_chksum_file {
# Create the path (based on the given filename) for the checksum file
#
my $filename = shift;
my $chksumFile = sprintf("%s/%s/%s/%s.md5",
$SAFE_DIR,
lc substr(basename($filename),0,1),
lc substr(basename($filename),1,1),
basename($filename) );
return $chksumFile;
}
# END OF SCRIPT
The script simply reads all the files in /etc/pam.d directory and creates MD5
checksums for each file. The checksum files are stored in a directory pointed by the
$SAFE_DIR variable in the script. By default, it stores all checksum files in
/usr/local/md5. Make sure you change the $SAFE_DIR from /usr/local/md5 to
an appropriate path that you can later write-protect. For example, use /mnt/floppy
to write the checksums to a floppy disk (which you can later write-protect).
After the checksum files are created, every time you run the script with the same
arguments, it compares the old checksum against one it creates from the current
contents of the file. If the checksums match, then your file is still authentic, because
you created the checksum file for it last time. For example, if you run the ./md5_fic.pl /etc/pam.d/* command again right away, the files have not changed between the two runs, so the checksums still match and each of the files passes the check.
Now if you change a file in the /etc/pam.d directory and run the same command again, you see a *FAILED* message for that file, because the stored MD5 digest no longer matches the newly computed digest; this is what happens after I modify the /etc/pam.d/su file.
You can also run the script for a single file. For example, the ./md5_fic.pl /etc/pam.d/su command checks and reports on just that file.
A file integrity checker relies solely on the pristine checksum data. The data
mustn’t be altered in any way. Therefore, it’s extremely important that you don’t
keep the checksum data in a writeable location. I recommend using a floppy disk (if
you have only a few files to run the checksum against), a CD-ROM, or a read-only
disk partition.
Write-protect the floppy, or mount a partition read-only after you check the
checksum files.
When Tripwire runs an integrity check, it computes new signatures for current files and directories and compares them
with the original signatures stored in the database. If it finds a discrepancy, it
reports the file or directory name along with information about the discrepancy.
You can see why Tripwire can be a great tool for helping you determine which
files were modified in a break-in. Of course, for that you must ensure the security
of the database that the application uses. When creating a new server system, many
experienced system administrators do the following things:
1. Ensure that the new system isn’t attached to any network to guarantee
that no one has already installed a Trojan program, virus program, or
other danger to your system security.
2. Run Tripwire to create a signature database of all the important system
files, including all system binaries and configuration files.
3. Write the database to a recordable CD-ROM.
This ensures that an advanced bad guy can't modify the Tripwire database to hide Trojans and modified files from the application.
Administrators who have a small number of files to monitor often use a
floppy disk to store the database. After writing the database to the floppy
disk, the administrator write-protects the disk and, if the BIOS permits,
configures the disk drive as a read-only device.
4. Set up a cron job to run Tripwire periodically (daily, weekly, monthly)
such that the application uses the CD-ROM database version.
GETTING TRIPWIRE
Red Hat Linux includes the binary Tripwire RPM file. However, you can download
the free (LGPL) version of Tripwire from an RPM mirror site such as
http://fr.rpmfind.net. I downloaded the Tripwire source code and binaries
from this site by using http://fr.rpmfind.net/linux/rpm2html/search.php?query=Tripwire.
The source RPM that I downloaded was missing some installation scripts, so
I downloaded the source again from the Tripwire Open Source development site at
the http://sourceforge.net/projects/tripwire/ site. The source code I down-
loaded was called tripwire-2.3.0-src.tar.gz. You may find a later version
there when you read this. In the spirit of compiling open-source software from
the source code, I show compiling, configuring, and installing Tripwire from the
tripwire-2.3.0-src.tar.gz file.
When following the instructions given in the following section, replace the
version number with the version of Tripwire you have downloaded.
If you want to install Tripwire from the binary RPM package, simply run the rpm -ivh tripwire-version.rpm command. You still must configure Tripwire by running twinstall.sh. Run this script from the /etc/tripwire directory and skip to Step 7 in the following section.
COMPILING TRIPWIRE
To compile from the source distribution, do the following:
1. su to root.
2. Extract the tar ball, using the tar xvzf tripwire-2.3.0-src.tar.gz
command. This creates a subdirectory called
/usr/src/redhat/SOURCES/tripwire-2.3.0-src.
3. Run the make release command to compile all the necessary Tripwire
binaries. (This takes a little time, so do it just before a coffee break.)
After it is compiled, install the binaries: Change directory to
/usr/src/redhat/SOURCES/tripwire-2.3.0-src/install. Copy the
install.cfg and install.sh files to the parent directory using the cp
install.* .. command.
4. Before you run the installation script, you may need to edit the
install.cfg file, which is shown in Listing 9-9. For example, if you
aren't a vi editor fan, but rather camp in the emacs world, you can change the TWEDITOR field in this file to point to emacs instead of /usr/bin/vi. I wouldn't recommend changing the values for the CLOBBER, TWBIN, TWPOLICY, TWMAN, TWDB, TWDOCS, TWSITEKEYDIR, and TWLOCALKEYDIR settings.
However, you may want to change the values for TWLATEPROMPTING,
TWLOOSEDIRCHK, TWMAILNOVIOLATIONS, TWEMAILREPORTLEVEL,
TWREPORTLEVEL, TWSYSLOG, TWMAILMETHOD, TWMAILPROGRAM,
and so on. The meanings of these settings are given in the comment lines above each setting in the install.cfg file.
5. Run the ./install.sh command. This walks you through the installation
process. You are asked to press Enter, accept the GPL licensing agreement,
and (finally) agree to the locations to which the files are copied.
6. After the files are copied, you are asked for a site pass phrase.
This pass phrase encrypts the Tripwire configuration and policy files.
Enter a strong pass phrase (that is, not easily guessable and at least eight
characters long) to ensure that these files aren’t modified by any unknown
party.
7. Choose a local pass phrase. This pass phrase encrypts the Tripwire data-
base and report files.
Choose a strong pass phrase.
Here the rule being defined is called the OS Utilities rule; it has a severity rating of 100, which means violation of this rule is considered a major problem, and the +pinugtsdrbamcCMSH-l properties of /bin/ls are checked. Table 9-6 describes each of these property/mask characters.
Property or Mask    Description
i Inode number
l File is increasing in size
m Modification timestamp
n Inode reference count or number of links
p Permission bits of file or directory
r ID of the device pointed to by an inode belonging to a device
file
s Size of a file
t Type of file
u Owner’s user ID
C CRC-32 value
H Haval value
M MD5 value
S SHA value
Another way to write the previous rule is to attach the rule name and severity attributes directly to the entry for the individual file, rather than grouping entries inside a rule block.
The first method is preferable because it can group many files and directories
under one rule. For example, all the listed utilities in the following code fall under
the same policy:
SEC_CRIT = +pinugtsdrbamcCMSH-l;
(Rulename= "OS Utilities", severity=100)
{
/bin/ls -> $(SEC_CRIT);
/bin/login -> $(SEC_CRIT);
The preceding code uses the SEC_CRIT variable, which is defined before it's used in the rule. This variable is set to +pinugtsdrbamcCMSH-l and substituted into the rule statements using $(SEC_CRIT). This lets you define one variable with a set of properties that can be applied to a large group of files and/or directories. When you want to add or remove properties, you simply change the mask value of the variable; the change is reflected everywhere the variable is used. Some built-in variables are shown in Table 9-7.
TABLE 9-7: A SELECTION OF BUILT-IN VARIABLES FOR THE TRIPWIRE POLICY FILE
Variable Meaning
Dynamic +pinugtd-srlbamcCMSH. Good for user directories and files that are dynamic and subject to change.
Growing +pinugtdl-srbamcCMSH. Good for files that grow in size.
◆ Don't create multiple rules that apply to the same file or directory, as in this example:
/usr -> $(ReadOnly);
/usr -> $(Growing);
Tripwire complains about duplicate rules for the same object. You can, however, apply a more specific rule to a path inside a directory that a broader rule already covers, as in:
/usr -> $(ReadOnly);
/usr/local/home -> $(Dynamic);
In the second line of this example, when you check a file with the path /usr/local/home/filename, Tripwire checks the properties substituted by the variable $(Dynamic).
/usr/sbin/tripwire --init
This command applies the policies listed in the /etc/tripwire/tw.pol file and creates a database in /var/lib/tripwire/k2.intevo.com.
After you have created the database, move it to a read-only medium such as a
CD-ROM or a floppy disk (write-protected after copying) if possible.
---------------------------------------------------------------------
Signatures for file: /usr/sbin/tripwire
CRC32 BmL3Ol
MD5 BrP2IBO3uAzdbRc67CI16i
SHA F1IH/HvV3pb+tDhK5we0nKvFUxa
HAVAL CBLgPptUYq2HurQ+sTa5tV
---------------------------------------------------------------------
You can keep the signature in a file by redirecting the output to that file; printing a hard copy is a good idea, too. Don't forget to generate a signature for the siggen utility itself. If
you ever get suspicious about Tripwire not working right, run the siggen utility on
each of these files and compare the signatures. If any of them don’t match, then
you shouldn’t trust those files; replace them with fresh new copies and launch an
investigation into how the discrepancy happened.
Two rule violations exist, marked with the '*' sign at the far left of the report lines.
◆ One violation is against the "Tripwire Data Files" rule; the report also states that there's another violation for the "Critical configuration files" rule. In both cases, a file that wasn't supposed to change has been modified. The Object summary section of the report shows the following lines:
===============================================================================
Object summary:
===============================================================================
-------------------------------------------------------------------------------
# Section: Unix Filesystem
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
Severity Level: 100
-------------------------------------------------------------------------------
Remove the "x" from the adjacent box to prevent updating the database
with the new values for this object.
Modified:
[x] “/etc/tripwire/tw.pol”
-------------------------------------------------------------------------------
As shown, Tripwire shows exactly which files were modified and what rules
these files fall under. If these modifications are okay, I can simply leave the 'x' marks in the appropriate sections of the report and exit the editor. Tripwire updates the database per my decision. For example, if I leave the 'x' marks on for both files,
next time when the integrity checker is run, it doesn’t find these violations any
more because the modified files are taken into account in the Tripwire database.
However, if one of the preceding modifications was not expected and looks suspi-
cious, Tripwire has done its job!
This script checks whether the Tripwire database file exists or not. If it exists, the
script then looks for the configuration file. When both files are found, it runs the
/usr/sbin/tripwire command in a non-interactive mode. This results in a report
file; if you have configured rules using the emailto attribute, e-mails are sent to
the appropriate person(s).
The update method should save you a little time because it doesn’t create
the entire database again.
To test your e-mail settings, you can run Tripwire using the /usr/sbin/tripwire -m t -email your@emailaddr command. Remember to change your@emailaddr to your own e-mail address.
Setting up Integrity-Checkers
When you have many Linux systems to manage, it isn’t always possible to go from
one machine to another to perform security checks — in fact, it isn’t recommended.
When you manage a cluster of machines, it’s a good idea to centralize security as
much as possible. As mentioned before, Tripwire can be installed and set up as a cron
job on each Linux node on a network, but that becomes a lot of work (especially on
larger networks). Here I discuss a new integrity checker called Advanced Intrusion
Detection Environment (AIDE), along with a Perl-based utility called Integrity
Checking Utility (ICU) that can automate integrity checking on a Linux network.
Setting up AIDE
AIDE is really a Tripwire alternative. The author of AIDE liked Tripwire but wanted to
create a free replacement of Tripwire with added functionality. Because Tripwire Open
Source exists, the "free aspect" of the AIDE goal no longer makes any difference, but
the AIDE tool is easy to deploy in a network environment with the help of ICU.
You can also buy a shrink-wrapped integrity-checking solution that works across a cluster of Linux hosts from the Tripwire company, so inquire with Tripwire about this option.
1. su to root.
2. Extract the source tar ball.
For version 0.7, use the tar xvzf aide-0.7.tar.gz command in the
/usr/src/redhat/SOURCES directory. You see a new subdirectory called
aide-0.7.
ICU requires that you have SSH1 support available in both the ICU server
and ICU client systems. You must install OpenSSH (which also requires
OpenSSL) to make ICU work. See Chapter 12 and Chapter 11 for information
on how to meet these prerequisites.
Setting up ICU
To use the Perl-based utility called Integrity Checking Utility (ICU) you have to set
up the ICU server and client software.
Here’s how you can compile ICU on the server that manages the ICU checks on
other remote Linux systems:
1. su to root on the system where you want to run the ICU service.
This is the server that launches ICU on remote Linux systems and performs
remote integrity checking and also hosts the AIDE databases for each host.
2. Extract the source in /usr/src/redhat/SOURCES.
A new subdirectory called ICU-0.2 is created.
3. Run the cp -r /usr/src/redhat/SOURCES/ICU-0.2 /usr/local/ICU
command to copy the source in /usr/local/ICU, which makes setup
quite easy because the author of the program uses this directory in the
default configuration file.
4. Create a new user account called icu, using the adduser icu command.
■ Change the ownership of the /usr/local/ICU directory to the new
user by running the chown -R icu /usr/local/ICU command.
■ Change the permission settings using the chmod -R 700
/usr/local/ICU command so that only the new user can access files
in that directory.
5. Edit the ICU configuration file (ICU.conf) using your favorite text editor.
■ Modify the icu_server_name setting to point to the ICU server that
launches and runs ICU on remote Linux machines.
This is the machine you are currently configuring.
■ Change the admin_e-mail setting to point to your e-mail address.
If you don’t use Sendmail as your mail server, change the sendmail
setting to point to your Sendmail-equivalent mail daemon.
6. The default configuration file has settings that aren’t compatible with
OpenSSH utilities. Change these settings as shown here:
7. Remove the -1 option from the following lines of the scp (secure copy) command settings in the ICU.conf file:
get_bin_cmd = %scp% -1 -P %port% -i %key_get_bin_priv% \
root@%hostname%:%host_basedir%/aide.bin %tmp_dir%/ 2>&1
get_conf_cmd = %scp% -1 -P %port% -i %key_get_conf_priv% \
root@%hostname%:%host_basedir%/aide.conf %tmp_dir%/ 2>&1
get_db_cmd = %scp% -1 -P %port% -i %key_get_db_priv% \
root@%hostname%:%host_basedir%/aide.db %tmp_dir%/ 2>&1
8. su to the icu user using the su icu command. Run the ./ICU.pl -G
command to generate five pairs of keys in the keys directory.
9. Run the ./ICU.pl -s command to perform a sanity check, which ensures that
everything is set up as needed.
If you get error messages from this step, fix the problem according to the
messages displayed.
10. Copy and rename the AIDE binary file from /usr/local/bin to /usr/local/ICU/binaries/aide.bin-i386-linux, using the following command:
cp /usr/local/bin/aide /usr/local/ICU/binaries/aide.bin-i386-linux
I recommend that you read the man page for AIDE configuration (using the
man aide.conf command) before you modify this file. For now, you can
leave the configuration as is.
2. Perform a sanity check for the host using the ./ICU.pl -s -r hostname command. For the preceding example, this command is ./ICU.pl -s -r k2.intevo.com.
4. FTP the .tar file to the desired remote Linux system whose files you want
to bring under integrity control. Log in to the remote Linux system and su
to root.
5. Run the tar xvf hostname.icu-install.tar command to extract it in
a temporary directory. This creates a new subdirectory within the extrac-
tion directory called hostname-icu-install.
6. From the new directory, run the ./icu-install.sh command to install a
copy of aide.conf and aide.db to the /var/adm/.icu directory
7. Append the five public keys to ~/.ssh/authorized_keys:
■ key_init_db.pub to initialize the database
■ key_check.pub to run an AIDE check
■ key_get_bin.pub to send aide.bin (the AIDE binary)
■ key_get_conf.pub to send aide.conf (the configuration)
■ key_get_db.pub to send aide.db (the integrity database)
The keys don't use any pass phrase, because they are used via cron to run automatic checks.
Now you can start ICU checks from the ICU server.
When initializing a new host, the first integrity database and configuration are saved as /usr/local/ICU/databases/hostname/archive/aide.db-first-TIMESTAMP.gz and /usr/local/ICU/databases/hostname/archive/aide.conf-first-TIMESTAMP. For example, /usr/local/ICU/databases/k2.intevo.com/archive/aide.db-first-Sat Dec 23 11:30:50 2000.gz and /usr/local/ICU/databases/k2.intevo.com/archive/aide.conf-first-Sat Dec 23 11:30:50 2000 are the initial database and configuration files created when the preceding steps were followed by a host called k2.intevo.com.
After the database of the remote host is initialized, you can run file-system
integrity checks on the host.
If you don’t use the -v option in the preceding command, ICU.pl is less
verbose. The -v option is primarily useful when you run the command from
an interactive shell. Also, you can add the -d option to view debugging
information if something isn’t working right.
Suppose a check finds that new files have been added and eleven files were modified; ICU.pl then e-mails a report of every change to the administrator (kabir@r2d2.intevo.com).
With the AIDE and ICU combo, you can detect filesystem changes quite easily.
You can, in fact, automate this entire process by running the ICU checks on remote
machines as a cron job on the ICU server. Here’s how:
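For example, an entry like the following in the icu user's crontab runs the check at 01:15 AM (the -r option follows the earlier sanity-check example; adjust the host name):
15 1 * * * cd /usr/local/ICU && ./ICU.pl -r k2.intevo.com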
This runs filesystem integrity checks on the named host at 01:15 AM every
morning. After you create a cron job for a host, monitor the log file
(/usr/local/ICU/logs/hostname.log) for this host on the ICU server next morn-
ing to ensure that ICU.pl ran as intended.
If you have a lot of remote Linux systems to check, add a new entry in the
/var/spool/cron/icu file (using the crontab -e command), as shown
in the preceding example. However, don’t schedule the jobs too close to each
other. If you check five machines, don’t start all the ICU.pl processes at the
same time. Spread out the load on the ICU server by scheduling the checks at
15- to 30-minute intervals. This ensures the health of your ICU server.
When ICU.pl finds integrity mismatches, it reports them via e-mail to the administrator. It's very important that the administrator reads her e-mail; otherwise she won't know about a potential break-in.
Doing Routine Backups
Protecting your system is more than keeping bad guys out. Other disasters
threaten your data. Good, periodic backup gives you that protection.
The most important security advice anyone can give you is back up regularly.
Create a maintainable backup schedule for your system. For example, you can per-
form incremental backups on weekdays and schedule a full backup over the week-
end. I prefer removable media-based backup equipment such as 8mm tape drives or
DAT drives. A removable backup medium enables you to store the information in a
secure offsite location. Periodically check that your backup mechanism is function-
ing as expected. Make sure you can restore files from randomly selected backup
media. You may recycle backup media, but know the usage limits that the media
manufacturer claims.
Another type of "backup" you should do is backtracking your work as a system administrator. Document everything you do, especially work that you do as a superuser. This documentation enables you to trace problems that often arise while you are solving other ones.
I keep a large history setting (a shell feature that remembers the last N commands), and I often print the history to a file or on paper. The script command also can record everything you do while using privileged accounts.
You need ext2 file-system utilities to compile the dump/restore suite. The ext2 file-
system utilities (e2fsprogs) contain all of the standard utilities for creating, fixing,
configuring, and debugging ext2 filesystems. Visit http://sourceforge.net/
projects/e2fsprogs for information on these utilities.
A command such as find /home -type f -name '.*rc' -exec ls -l {} \; displays permissions for all the dot files ending in "rc" in the /home directory hierarchy. If your users' home directories are kept in /home, this shows you which users may have a permission problem.
Summary
Improper file and directory permissions are often the cause of many user support incidents and security problems. Understanding file and directory permissions is critical to administering a Linux system. By setting sensible default permissions for files, dealing with world-accessible, set-UID, and set-GID files, taking advantage of advanced ext2 filesystem security features, and using file integrity checkers such as Tripwire, AIDE, and ICU, you can significantly enhance your system security.
Chapter 10
PAM
IN THIS CHAPTER
◆ What is PAM?
◆ How does PAM work?
◆ Enhancing security with PAM modules
What is PAM?
You may wonder how programs such as chsh, chfn, ftp, imap, linuxconf, rlogin,
rexec, rsh, su, login, and passwd suddenly understand the shadow password
scheme (see Chapter 12) and use the /etc/shadow password file instead of the
/etc/passwd file for authentication. They can do so because Red Hat distributes
these programs with shadow password capabilities. Actually, Red Hat ships these
programs with the much grander authentication scheme — PAM. These PAM-aware
programs can enhance your system security by using both the shadow password
scheme and virtually any other authentication scheme.
Traditionally, authentication schemes are built into programs that grant privi-
leges to users. Programs such as login or passwd have the necessary code for
authentication. Over time, this approach proved virtually unscalable, because
incorporating a new authentication scheme required you to update and recompile
privilege-granting programs. To relieve the privilege-granting software developer
from writing secure authentication code, PAM was developed. Figure 10-1 shows
how PAM works with privilege-granting applications.
[Figure 10-1: How PAM-aware applications work. The diagram shows numbered request/response steps flowing between the user, a PAM-aware privilege-granting application, the PAM conversation functions, and the authentication module(s).]
6. The conversation functions request information from the user. For example,
they ask the user for a password or a retina scan.
7. The user responds to the request by providing the requested information.
8. The PAM authentication modules supply the application with an authenti-
cation status message via the PAM library.
9. If the authentication process is successful, the application grants the requested privileges to the user; otherwise, it informs the user that the process failed.
Think of PAM as a facility that takes the burden of authentication away from
the applications and stacks multiple authentication schemes for one application.
For example, the PAM configuration file for the rlogin application is shown in
Listing 10-1.
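A representative Red Hat 7.x version of Listing 10-1 looks something like the following (the exact lines vary by release):
auth       required     /lib/security/pam_securetty.so
auth       sufficient   /lib/security/pam_rhosts_auth.so
auth       required     /lib/security/pam_stack.so service=system-auth
auth       required     /lib/security/pam_nologin.so
account    required     /lib/security/pam_stack.so service=system-auth
password   required     /lib/security/pam_stack.so service=system-auth
session    required     /lib/security/pam_stack.so service=system-auth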
Currently, four module types exist (auth, account, password, and session), which are described in Table 10-1.
The control flag defines how the PAM library handles a module’s response. Four
control flags, described in Table 10-2, are currently allowed.
required This control flag tells the PAM library to require the success of
the module specified in the same line. When a module returns a
response indicating a failure, the authentication definitely fails,
but PAM continues with other modules (if any). This prevents
users from detecting which part of the authentication process
failed, because knowing that information may aid a potential
attacker.
requisite This control flag tells the PAM library to abort the
authentication process as soon as the PAM library receives a
failure response.
sufficient This control flag tells the PAM library to consider the
authentication process complete if it receives a success
response. Proceeding with other modules in the configuration
file is unnecessary.
optional This control flag is hardly used. It removes the emphasis on the
success or failure response of the module.
The module path is the path of a pluggable authentication module. Red Hat
Linux stores all the PAM modules in the /lib/security directory. You can supply
each module with optional arguments, as well.
In Listing 10-1, the PAM library calls the pam_securetty.so module, which
must return a response indicating success for successful authentication. If the mod-
ule’s response indicates failure, PAM continues processing the other modules so
that the user (who could be a potential attacker) doesn’t know where the failure
occurred. If the next module (pam_rhosts_auth.so) returns a success response, the
authentication process is complete, because the control flag is set to sufficient.
However, if the previous module (pam_securetty.so) doesn’t fail but this one fails,
the authentication process continues and the failure doesn’t affect the final result.
In the same fashion, the PAM library processes the rest of the modules.
The order of execution exactly follows the way the modules appear in the con-
figuration. However, each type of module (auth, account, password, and session)
is processed in stacks. In other words, in Listing 10-1, all the auth modules are
stacked and processed in the order of appearance in the configuration file. The rest
of the modules are processed in a similar fashion.
PAM configuration files are stored in the /etc/pam.d directory. Because each application has its own configuration file, custom authentication requirements are easily established for them. However, too
many custom authentication requirements are probably not a good thing for man-
agement. This configuration management issue has been addressed with the recent
introduction of a PAM module called pam_stack.so. This module can simply jump to another PAM configuration file while in the middle of processing one. This can be better
explained with an example. Listing 10-2 shows /etc/pam.d/login, the PAM con-
figuration file for the login application.
When the PAM layer is invoked by the login application, it looks up this file
and organizes four different stacks:
◆ Auth stack
◆ Account stack
◆ Password stack
◆ Session stack
In this example, the auth stack consists of the pam_securetty, pam_stack, and
pam_nologin modules. PAM applies each of the modules in a stack in the order
they appear in the configuration file. In this case, the pam_securetty module must (because of the required control flag) succeed for the auth processing to succeed. After the pam_securetty module is satisfied, the auth processing
moves to the pam_stack module. This module makes PAM read a configuration file
specified in the service=configuration argument. Here, the system-auth configura-
tion is provided as the argument; therefore, it’s loaded. The default version of this
configuration file is shown in Listing 10-3.
As shown, this configuration has its own set of auth, account, password, and
session stacks. Because the pam_stack module can jump to a central configura-
tion file like this one, it enables a centralized authentication configuration, which
leads to better management of the entire process. You can simply change the system-
auth file and affect all the services that use the pam_stack module to jump to
it. For example, you can enforce time-based access control using a module called
pam_time (the Controlling access by time section explains this module) for every
type of user access that understands PAM. Simply add the necessary pam_time con-
figuration line in the appropriate stack in the system-auth configuration file.
Typically, when you establish a new PAM-aware application on Red Hat Linux, it should include its own PAM configuration file. If it doesn't include one, or if the one it includes doesn't use the centralized configuration just discussed, you can create a configuration file for the application in /etc/pam.d that uses the pam_stack module to jump to the system-auth configuration.
Most PAM-aware applications are shipped with their own PAM configuration
files. But even if you find one that is not, it’s still using PAM. By default, when
PAM can’t find a specific configuration file for an application, it uses the default
/etc/pam.d/other configuration. This configuration file is shown in Listing 10-4.
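Listing 10-4 consists of pam_deny entries for every module type, along these lines:
auth     required       /lib/security/pam_deny.so
account  required       /lib/security/pam_deny.so
password required       /lib/security/pam_deny.so
session  required       /lib/security/pam_deny.so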
This configuration simply denies access using the pam_deny module, which
always returns failure status. I recommend that you keep this file the way it is so
that you have a “deny everyone access unless access is permitted by configuration”
type of security policy.
◆ pam_access.so
This module provides login access control based on the /etc/security/access.conf file. Each line in that file has three colon-separated fields: a permission, a user list, and an origin list. A positive sign in the first field indicates a grant permission; a negative sign indicates a deny permission. The user list consists of at least one comma-separated username or group. The third field can be either a list of tty devices or host/domain names.
When you want to restrict a certain user from logging in via the physical
console, use the tty list. To restrict login access by hostname or domain
names (prefixed with a leading period), specify the host or domain names
in a comma-separated list. Use ALL to represent everything; use EXCEPT to exclude the list on the right side of this keyword; use LOCAL to match any-
thing without a period in it. For example:
-:sysadmin:ALL EXCEPT LOCAL
Here, users in the sysadmin group are denied login access unless they log
in locally using the console.
◆ pam_console.so
This module controls which PAM-aware, privileged commands (such as /sbin/shutdown, /sbin/halt, and /sbin/reboot) an ordinary user can run.
See the "Securing console access using pam_console" section.
◆ pam_cracklib.so
This module checks the strength of a password using the crack library.
◆ pam_deny.so
This module always returns false. For example, it’s used in the
/etc/pam.d/other configuration to deny access to any user who is try-
ing to access a PAM-aware program without a PAM configuration file.
◆ pam_env.so
This module sets or unsets environment variables, using the /etc/security/pam_env.conf configuration file.
◆ pam_ftp.so
This module provides an authentication check for anonymous-FTP-style access: if the user enters an anonymous username (such as ftp or anonymous), the module prompts for an e-mail address as the password.
◆ pam_issue.so
This module displays the contents of the /etc/issue file during the login process.
◆ pam_lastlog.so
This module displays the date, time, and origin of the user's last login and keeps the /var/log/lastlog file updated.
◆ pam_limits.so
This module enforces resource limits (such as the maximum number of processes, open files, or logins) on user sessions, based on the /etc/security/limits.conf configuration file.
◆ pam_localuser.so
This module returns success when the user being authenticated is found in
the /etc/passwd file of the server. Optionally, you can specify a different
file for the file argument, as in the following example:
auth required /lib/security/pam_localuser.so \
file=/etc/myusers
◆ pam_mail.so
This module checks whether the user has new or unread e-mail and dis-
plays a message.
◆ pam_mkhomedir.so
This module creates a user's home directory upon the first successful login. It's very useful if you use a central user-information repository (such as LDAP or NIS) to manage a large number of users. For example, after you create a user account on the LDAP server, you don't have to create home directories on every machine the user can access, as long as those machines use the following line in the PAM configuration of access-control applications such as login and sshd:
session required /lib/security/pam_mkhomedir.so \
skel=/etc/skel/ umask=0022
This line assumes that you want to copy the user resource files (such as the dot files for the shell) from the /etc/skel directory, and that the new home directory and dot files get a umask of 0022.
◆ pam_motd.so
Displays the /etc/motd (message of the day) file when a user successfully logs in. You can display a different file using the motd=/path/to/filename option.
◆ pam_nologin.so
This module can restrict all users but root from logging into the system.
◆ pam_pwdb.so
This is the older Password Database module; on recent Red Hat systems its duties are handled by the pam_unix.so module.
◆ pam_radius.so
This module enables authentication against a Remote Authentication Dial-In User Service (RADIUS) server.
Stay away from rhosts-enabled services such as rlogin and rsh. These have been considered major security holes for years.
◆ pam_rootok.so
This is the root-access module. Normally, when you run privileged programs such as shutdown, reboot, or halt, you must be authenticated.
If this module is used in the PAM configuration of such commands, the root user is excused from entering her password. The most useful place for this module is in /etc/pam.d/su, because root shouldn't need a password to su to an ordinary user.
◆ pam_securetty.so
This module reads the /etc/securetty file and ensures that the root user can't log in from any tty device not mentioned in this file.
◆ pam_shells.so
This module permits access only if the user's login shell is listed in the /etc/shells file.
◆ pam_stack.so
This module can jump out of the current configuration file in /etc/pam.d to another one. For example, /etc/pam.d/sshd contains the following line:
auth required /lib/security/pam_stack.so \
service=system-auth
◆ pam_stress.so
This module enables you to stress test your applications. I have never used
this module and don’t know of any good application for it.
◆ pam_tally.so
This module tracks access attempts for a user account. It can deny access
after a specified number of failures.
◆ pam_time.so
This module enforces time-based access control, using the /etc/security/time.conf configuration file. The "Controlling access by time" section explains this module.
◆ pam_unix_auth.so
This module no longer exists. It's now a symbolic link to the pam_unix.so module.
◆ pam_unix_passwd.so
This module no longer exists. It's now a symbolic link to the pam_unix.so module.
◆ pam_unix_session.so
This module no longer exists. It's now a symbolic link to the pam_unix.so module.
◆ pam_unix.so
This is the standard password module that can work with both
/etc/passwd and the /etc/shadow files.
◆ pam_warn.so
This module logs a warning about the requested authentication or password-change attempt to the system log.
◆ pam_wheel.so
This module restricts root access to users belonging to the wheel group.
For example, the /etc/pam.d/su configuration includes a line such as the
following:
auth required /lib/security/pam_wheel.so use_uid
This makes PAM confirm that a person trying to su to root (as opposed to su-ing to a non-root user) belongs to the wheel group in the /etc/group file.
◆ pam_xauth.so
This module works as a session module, forwarding xauth keys when programs such as su and linuxconf are used under the X Window System.
A few of these modules can enhance and enforce your system security policy. For example, the pam_time module can restrict access to services based on such factors as:
◆ The user's name
◆ The time of day
2. Devise an access policy for the login service. This example assumes that users should be able to log in no earlier than 6 a.m. and no later than 8 p.m.
3. Configure the /etc/security/time.conf file. The configuration lines in
this file have the following syntax:
services;ttys;users;times
You can use some special characters in the fields. Table 10-3 describes the spe-
cial characters.
Character Meaning
! Logical NOT; the rule applies to anything except the entry that follows (for example, !Al2000-0600)
& Logical AND; joins entries that must all match (for example, login&su)
| Logical OR; matches if either joined entry matches
* Wildcard; matches anything (for example, * in the users field matches all users)
Field Description
services A list of services that are affected by the time restriction. For example,
for control of login and su using one rule, specify the service to be
login&su in a configuration line.
ttys A list of terminals that are affected by the time restriction. For
example, for control of only pseudoterminals and not the console
terminals, specify ttyp*!tty*, where ttyp* lists all the
pseudoterminals used in remote login via services such as Telnet, and
tty* lists all the console terminals.
users A list of users who are affected by the time restriction. For example,
to specify all the users, use the wildcard character * in a configuration
line.
times A list of times when the restrictions apply. You can specify time as a range in a 24-hour clock format. For example, to specify a range from 8 p.m. to 6 a.m., specify 2000-0600 (that is, HHMM format, where HH is 00-23 and MM is 00-59). You can also specify days by using a two-character code, such as Mo (Monday), Tu (Tuesday), We (Wednesday), Th (Thursday), Fr (Friday), Sa (Saturday), and Su (Sunday). You can also use special codes, such as Wk for all weekdays, Wd for weekends, and Al for all seven days. For example, to restrict access to a service from 8 p.m. to 6 a.m. every day, specify a time range as !Al2000-0600.
For the ongoing example, you can create a time-based rule that prohibits login
access from 8 p.m. to 6 a.m. for all users who access the system via remote means
(such as Telnet) by adding the following line to the /etc/security/time.conf
file:
login;ttyp*;*;!Al2000-0600
To enable a user called kabir access to the system at any time, but make all
other users follow the preceding rule, modify the rule this way:
login;ttyp*;*!kabir;!Al2000-0600
This module checks for the existence of the /etc/nologin file. If this file exists, the module returns failure, but it displays the contents of the file on-screen so users see the reason for the restriction.
Typically, the system administrator (root) creates the /etc/nologin file before performing maintenance tasks.
To disable user login, create the file by running the touch /etc/nologin command. If you want to leave a note for your users, edit the file to tell them why they can't access your system at this time. After you are done, remove the /etc/nologin file so that users can log in again.
If you enable multiple login methods, such as ssh or telnet (not recommended), make sure the PAM configuration file for each of these services requires the pam_nologin configuration shown at the beginning of this section. Also, if you use multiple auth lines in a PAM configuration, such as /etc/pam.d/login, make sure the nologin line appears before any auth line with the sufficient control flag. Otherwise, the contents of /etc/nologin aren't displayed, because the pam_nologin module may never be consulted.
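The pam_limits module takes its settings from the /etc/security/limits.conf file. A sketch of that file's line format (assuming the stock limits.conf layout):
<domain>   <type>   <item>   <value>
Here, <domain> is a username, a group name (prefixed with @), or the wildcard *; <type> is hard or soft; <item> is the resource being limited; and <value> is the limit itself.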
The following list defines the codes used in the preceding format:
■ maxlogins: the maximum number of simultaneous logins allowed for the user
■ nofiles: the maximum number of open files
■ rss: the maximum resident set size (the amount of physical memory in use), in KB
■ fsize: the maximum file size, in KB
■ stack: the maximum stack size, in KB
For example, suppose /etc/security/limits.conf contains the following entry:
kabir hard nproc 5
Here, the user kabir has a hard limit of five (5) on the number of processes he can run. In other words, if this user tries to run more than five processes, the system refuses to start the additional ones. For example, after logging in, the user runs the ps auxww | grep ^kabir command to see the number of processes he owns. The command returns the following lines:
kabir 1626 0.0 0.5 2368 1324 pts/2 S 14:36 0:00 -tcsh
kabir 1652 0.0 0.2 2552 752 pts/2 R 14:40 0:00 ps auxww
kabir 1653 0.0 0.2 1520 596 pts/2 R 14:40 0:00 grep ^kabir
User kabir shows one shell process (tcsh) and two other processes (ps and grep) that are part of the preceding command. Now, running the man perl command fails with an error message: the command needed to fork more processes than the limit allows. Such control over the number of processes a user can run can have a great impact on overall system reliability, which translates well to security. A reliable system is predictable, and predictable systems are more secure than ones that aren't.
Also the <console>, <floppy>, and <cdrom> aliases (also known as classes)
must point to the desired devices. The default values for these aliases are also found
in the same file. They are shown below:
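A sketch of those defaults (based on a stock /etc/security/console.perms of this era; your file may differ slightly):
<console>=tty[0-9][0-9]* vc/[0-9][0-9]* :[0-9]\.[0-9] :[0-9]
<floppy>=/dev/fd[0-1]*
<cdrom>=/dev/cdrom* /dev/cdwriter*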
As shown, the values contain wildcards and simple regular expressions. The default values should cover most typical situations.
As mentioned before, the pam_console module also controls which PAM-aware,
privileged commands such as /sbin/shutdown, /sbin/halt, and /sbin/reboot
an ordinary user can run. Let’s take a look at what happens when an ordinary user
runs the shutdown command.
◆ The user enters the shutdown -r now command at the console prompt to
reboot the system.
◆ The /usr/bin/shutdown script, which is what the user actually runs, invokes a program called consolehelper. This program in turn uses a program called userhelper, which runs the /sbin/reboot program. In this process, the PAM configuration for the reboot program (stored in /etc/pam.d/reboot) is applied.
◆ In the /etc/pam.d/reboot file you will see that the pam_console module is
used as an auth module, which then checks for the existence of a file
called /etc/security/console.apps/reboot. If this file exists and the
user meets the authentication and authorization requirements of the
/etc/pam.d/reboot configuration, the reboot command is executed.
If the user runs the shutdown command using the -h option, the
/usr/bin/shutdown script uses the /sbin/halt program in place of
/sbin/reboot and uses halt-specific PAM configuration files.
◆ This makes sure that even if someone gains access to a user account or an open shell (perhaps you didn't log out when you walked away from the system), he must know the user's password to shut down, reboot, or halt the machine. In my recent security-analysis experience, I found that many organizations housed their Web servers in ISP co-location facilities, which are well secured from the outside. However, many of the servers had physical consoles attached to them, often with open shells running simple stats programs such as top and vmstat. Anyone could stop these programs and pull a prank by typing shutdown, reboot, or, even worse, halt! In such situations it is essential to require the password, using the configuration line discussed in the preceding text.
It's a big step toward manageable security that Red Hat Linux ships with PAM and PAM-aware applications. To follow PAM developments, visit the primary PAM distribution site at www.us.kernel.org/pub/linux/libs/pam/ frequently.
Summary
PAM is a highly configurable authentication technology that introduces a layer of
middleware between the application and the actual authentication mechanism. In
addition to this, PAM can handle account and session data, which is something that normal authentication mechanisms don't do very well. Using various PAM modules, you can customize authentication processes for users and restrict user access to the console and applications, based on such properties as username, time, and terminal location.
Chapter 11
OpenSSL
IN THIS CHAPTER
ONLY A FEW YEARS AGO, the Internet was still what it was initially intended to be —
a worldwide network for scientists and engineers. By virtue of the Web, however,
the Internet is now a network for everyone. These days, it seems as though every-
one and everything is on the Internet. It’s also the “new economy” frontier; thou-
sands of businesses, large and small, for better or worse, have set up e-commerce
sites for customers around the world. Customers are cautious, however, because they know that not all parts of the Internet are secure.
To eliminate this sense of insecurity in the new frontier, Netscape invented a security protocol that ensures secured transactions between the customer's Web browser and the Web server. Netscape named this protocol Secure Sockets Layer (SSL). SSL quickly found its place in many other Internet
applications, such as e-mail and remote access. Because SSL is now part of the
foundation of the modern computer security infrastructure, it’s important to know
how to incorporate SSL in your Linux system. This chapter shows you how.
Symmetric encryption
Symmetric encryption is like the physical keys and locks you probably use every
day. Just as you would lock and unlock your car with the same key, symmetric
encryption uses one key to lock and unlock an encrypted message.
Because this scheme uses one key, all involved parties must know this key for
the scheme to work.
Asymmetric encryption
Asymmetric encryption works differently from symmetric encryption. This scheme
has two keys:
◆ A public key
◆ A private key
The extra key is the public key (so this scheme is also known as public key
encryption).
When data is encrypted with the public key, it can only be decrypted using the
private key, and vice versa. Unlike symmetric encryption, this scheme doesn’t
require that the sender know the receiver’s private key to unlock the data. The pub-
lic key is widely distributed, so anyone who needs a secure data communication
can use it. The private key is never distributed; it’s always kept secret.
3. Using the server’s public key, the client can decrypt the digest message.
The client then creates a digest of the identity message and compares it
with the digest sent by the server.
A match between the digest and the original message confirms the identity
of the server. Why? The server initially sent a certificate signed by a known
CA, so the client is absolutely sure to whom this public key belongs.
However, the client needed proof that the server that sent the certificate
is the entity that it claims to be, so the server sent a simple identification
message along with a public-key-encrypted digest of the same message. If
the sending server hadn’t had the appropriate private key, it would have
been unable to produce the same digest that the client computed from the
identification message.
After the server's identity is verified, the client and server agree on a symmetric key, which they then use to encrypt the data transmitted between them. Why switch schemes at this point? Largely because symmetric encryption is much faster than asymmetric encryption.
If an impostor sits between the client and the server system, and is capable of
intercepting the transmitted data, what damage can it do? It doesn’t know the
secret symmetric key that the client and the server use, so it can’t determine the
content of the data; at most, it can introduce garbage in the data by injecting its
own data into the data packets.
To avoid this, the SSL protocol allows for a message-authentication code (MAC).
A MAC is simply a piece of data computed by using the symmetric key and the
transmitted data. Because the impostor doesn’t know the symmetric key, it can’t
compute the correct value for the MAC. For example, a well-known cryptographic digest algorithm called MD5 (developed by RSA Data Security, Inc.) can generate 128-bit MAC values for each transmitted data packet. The computing power and time required to guess the correct MAC value make such an attack practically impossible. SSL makes secure commerce possible on the Internet.
OBTAINING SSL
For many years, SSL was available mainly in commercial Linux software such as
Stronghold, an Apache-based, commercial Web server. Because of patent and US
export restrictions, no open-source versions of SSL for Linux were available for a
long time. Recently, the OpenSSL Project has changed all that.
Understanding OpenSSL
The OpenSSL Project is an open-source community collaboration to develop
commercial-grade SSL, Transport Layer Security (TLS), and full-strength, general-
purpose cryptography library packages. The current implementation of SSL is also
called OpenSSL. OpenSSL is based on the SSLeay library, developed by Eric A. Young and Tim J. Hudson. The OpenSSL software package license allows both commercial and noncommercial use of the software.
Uses of OpenSSL
SSL can be used in many applications to enhance and ensure transactional data
security; OpenSSL simply makes that capability more widely available. This section examines using OpenSSL for several such security tasks.
Getting OpenSSL
OpenSSL binaries are currently shipped with the Red Hat Linux distribution in RPM
packages. So you can either use the RPM version supplied by Red Hat or you can
simply download the source code from the official OpenSSL Web site at
www.openssl.org/source.
As mentioned throughout the book, I prefer that security software be installed
from source distribution downloaded from authentic Web or FTP sites. So, in the
following section I discuss the details of compiling and installing OpenSSL from the
official source distribution downloaded from the OpenSSL Web site.
If you must install OpenSSL from the RPM, use a trustworthy, binary RPM
distribution, such as the one found on the official Red Hat CD-ROM.
To install OpenSSL binaries from an RPM package, simply run the rpm -ivh openssl-packagename.rpm command.
OpenSSL prerequisites
The OpenSSL source distribution requires that you have Perl 5 and an ANSI C com-
piler. I assume that you installed both Perl 5 and gcc (C compiler) when you set up
your Linux system.
For example, to extract the openssl-0.9.6.tar.gz file, I can run the tar
xvzf openssl-0.9.6.tar.gz command. The tar command creates a direc-
tory called openssl-version, which in my example is openssl-0.9.6.
You can delete the tar ball at this point if disk space is an issue for you. First,
however, make sure you have successfully compiled and installed OpenSSL.
At this point, feel free to read the README or INSTALL files included in the distri-
bution. The next step is to configure the installation options; certain settings are
needed before you can compile the software.
To install OpenSSL in the default /usr/local/ssl directory, run the following command:
./config
To install OpenSSL in a different directory, use the --prefix option. For example, the following command configures OpenSSL to install in /opt/security:
./config --prefix=/opt/security
You can use many other options with the config or Configure script to prepare
the source distribution for compilation. These options are listed and explained in
Table 11-1.
rsaref This option forces building of the RSAREF toolkit. To use the RSAREF toolkit, make sure you have the RSAREF library (librsaref.a) in your default library search path.
no-threads This option disables support for multithreaded applications.
threads This option enables support for multithreaded applications.
no-shared This option disables the creation of a shared library.
shared This option enables the creation of a shared library.
no-asm This option disables the use of assembly code in the source tree.
Use this option only if you are experiencing problems in
compiling OpenSSL.
386 Use this only if you are compiling OpenSSL on an Intel 386
machine. (Not recommended for newer Intel machines.)
no-<cipher> OpenSSL uses many cryptographic ciphers such as bf, cast,
des, dh, dsa, hmac, md2, md5, mdc2, rc2, rc4, rc5, rsa, and
sha. If you want to exclude a particular cipher from the
compiled binaries, use this option.
-Dxxx, -lxxx, -Lxxx, These options enable you to specify various system-dependent
-fxxx, -Kxxx options. For example, Dynamic Shared Objects (DSO) flags, such
as -fpic, -fPIC, and -KPIC can be specified on the command
line. This way one can compile OpenSSL libraries with Position
Independent Code (PIC), which is needed for linking it into DSOs.
Most likely you won't need any of these options to compile OpenSSL. However, if you have problems compiling it, you can try some of these options with appropriate values. For example, if you can't compile because OpenSSL complains about missing library files, try specifying the system library path using the -L option.
After you have run the config script without any errors, run the make utility. If
the make command is successful, run make test to test the newly built binaries.
Finally, run make install to install OpenSSL in your system.
If you have problems compiling OpenSSL, one source of the difficulty may
be a library-file mismatch — not unusual if the latest version of software like
OpenSSL is being installed on an old Linux system. Or the problem may be
caused by an option, specified in the command line, that’s missing an essen-
tial component. For example, if you don’t have the RSAREF library (not
included in Red Hat Linux) installed on your system and you are trying to use
the rsaref option, the compilation fails when it tries to build the binaries.
Here some traditional programming wisdom comes in handy: Make sure
you know exactly what you’re doing when you use specific options. If nei-
ther of these approaches resolves the problem, try searching the OpenSSL
FAQ page at www.openssl.org/support/faq.html. Or simply install
the binary RPM package for OpenSSL.
What is a certificate?
In an SSL transaction, a certificate is a body of data placed in a message to serve as
proof of the sender’s authenticity. It consists of encrypted information that associ-
ates a public key with the true identity of an individual, server, or other entity,
known as the subject. It also includes the identification and electronic signature of
the issuer of the certificate. The issuer is known as a Certificate Authority (CA).
A certificate may contain other information that helps the CA manage certifi-
cates (such as a serial number and period of time when the certificate is valid).
Using an SSL-enabled Web browser (such as Netscape Navigator or Microsoft
Internet Explorer), you can view a server’s certificate easily.
The identified entity in a certificate is represented by distinguished-name fields (as defined in the X.509 standard). Table 11-2 lists common distinguished-name fields.
For example, suppose a user points her Web browser to the following URL:
https://extranet.domain.com/login.servlet
Her Web browser initiates the SSL connection request. Your extranet Web server
uses its private key to encrypt data it sends to her Web browser — which decrypts
the data using your Web server’s public key.
Because the Web server also sends the public key to the Web browser, there’s no
way to know whether the public key is authentic. What stops a malicious hacker
from intercepting the information from your extranet server and sending his own
public key to your client? That’s where the CA comes in to play. After verifying
information regarding your company in the offline world, a CA has issued you a
server certificate — signed by the CA’s own public key (which is well known).
Genuine messages from your server carry this certificate. When the Web browser
receives the server certificate, it can decrypt the certificate information using the
well-known CA’s public key. This ensures that the server certificate is authentic. The
Web browser can then verify that the domain name used in the authentic certificate
is the same as the name of the server it’s communicating with.
Similarly, if you want to ensure that a client is really who she says she is, you could
enforce a client-side certificate restriction, creating a closed-loop secured process
for the entire transaction.
If each party has a certificate that validates the other’s identity, confirms the
public key, and is signed by a trusted agency, then they both are assured that
they are communicating with whom they think they are.
A certificate can be signed by either of two types of Certificate Authority (CA):
◆ Commercial CA
◆ Self-certified private CA
Commercial CA
A commercial Certificate Authority’s primary job is to verify the authenticity of
other companies’ messages on the Internet. After a CA verifies the offline authen-
ticity of a company by checking various legal records (such as official company
registration documents and letters from top management of the company), one of
its appropriately empowered officers can sign the certificate. Only a few commer-
cial CAs exist; the two best known are
◆ Verisign (www.verisign.com)
◆ Thawte (www.thawte.com)
Self-certified, private CA
A private CA is much like a root-level commercial CA: It’s self-certified. However, a
private CA is typically used in a LAN or WAN environment (or in experimenting with
SSL). For example, a university with a WAN that interconnects departments may
decide on a private CA instead of a commercial one. If you don’t expect an unknown
user to trust your private CA, you can still use it for such specific purposes.
To meet this requirement, you usually follow the CA's guidelines for verifying individuals or organizations. Consult your chosen CA to find out how to proceed.
◆ Submit a Certificate Signing Request (CSR) in electronic form.
Typically, if you plan to get your Web server certified, be prepared to submit
copies of legal documents such as business registration or incorporation papers.
Here, I show you how you can create a CSR using OpenSSL.
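A minimal sketch of such a command sequence (assuming a 1,024-bit RSA key; the www.domain.com filenames are hypothetical):
# Generate a triple-DES-encrypted, 1,024-bit RSA private key
openssl genrsa -des3 -out www.domain.com.key 1024
# Create the CSR from that private key
openssl req -new -key www.domain.com.key -out www.domain.com.csr
The second command prompts you for the distinguished-name fields (country, organization, common name, and so on) that go into the request.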
After running this command, you are asked for a pass phrase (that is, a password) for use in encrypting the private key. Because the private key is encrypted using the des3 cipher, you are asked for the pass phrase every time your server starts. If this is undesirable, you can create an unencrypted version of the private key by removing the -des3 option from the preceding command line.
To ensure a high level of security, use an encrypted private key. You don’t
want someone else who has access to your server to see (and, possibly, later
use) your private key.
Run the sign-server-cert.sh script to approve and sign the server cer-
tificate you created using the new-server-cert.sh script.
◆ Creating a user or client certificate
Run the sign-user-cert.sh script to sign a user certificate. Also, run the p12.sh script to package the private key, the signed certificate, and the CA's public key into a file with a .p12 extension. This file can then be imported into applications such as e-mail clients for use.
Summary
OpenSSL is an integral part of Internet security. The more you get used to OpenSSL, the more easily you can incorporate it into many services. Many chapters in this book show how to use OpenSSL with Apache and other applications to enhance security.
Chapter 12
◆ Exploring OpenSSH
Ever wonder what it would be like if you could remove all nonconsole user access from your Internet server, or from the Linux system in your LAN? If a user had only one way to gain shell access to your Linux system (via the console), perhaps the number of break-ins would drop substantially. Of course, that would turn Linux into Windows NT! Or would it?
Actually, removing user access altogether isn't practical for most Linux installations. So you must understand the risks involving user accounts and reduce those risks as much as you can. In this chapter you learn exactly that. Typically, a user accesses a Linux system via many means, such as the Web, Telnet, FTP, rlogin, rsh, or rexec. Here I discuss only the non-anonymous types of user access that require Linux user accounts.
If you run a Linux system that allows shell access to many users, make sure
the /var/log directory and its files aren’t readable by ordinary users. I
know of many incidents when ordinary “unfriendly” users gained access to
other user accounts by simply browsing the /var/log/messages log file.
Every time a login fails because of a username and/or password mismatch, the incident is recorded in the /var/log/messages file. Because users who get frustrated with the login process after a few failed attempts often type their password at the login: prompt instead of the password: prompt, the messages file may contain entries that reveal passwords. For example, the log entries below show that user mrfrog failed to log in a few times, then got in via Telnet, but one entry reveals the user's password, which he mistakenly entered in response to the login: prompt.
login: FAILED LOGIN 2 FROM neno FOR mrfrog,
Authentication failure
PAM_unix: Authentication failure; (uid=0) ->
mysecretpwd for system-auth service
login: FAILED LOGIN 3 FROM neno FOR mysecretpwd,
Authentication failure
PAM_unix: (system-auth) session opened for user mrfrog
by (uid=0)
Now if anyone but the root user can access such a log file, disaster may
result. Never let anyone but the root account access your logs!
username:password:uid:gid:fullname:homedir:shell
As mentioned before, /etc/passwd is a world-readable text file that holds all user passwords in an encoded form. The password file should be world-readable; after all,
many applications depend on user information such as user ID, group ID, full name,
or shell for their services. To improve the security of your user-authentication
process, however, you can take several measures immediately. The upcoming sections
describe them.
username:password:last:may:must:warn:expire:disable:reserved
The /etc/passwd file format remains exactly the same as it was — except
the password field is always set to ‘x’ instead of the encoded user password.
mrfrog:$1$ar/xabcl$XKfp.T6gFb6xHxol4xHrk.:11285:0:99999:7:::
This line defines the account settings for a user called mrfrog. Here, mrfrog last changed his password 11,285 days after January 1, 1970. Because the minimum number of days he must wait before he can change the password is set to 0, he can change it at any time. At the same time, this user can go 99,999 days without changing the password.
PASS_MAX_DAYS 99999
PASS_MIN_DAYS 0
PASS_MIN_LEN 5
PASS_WARN_AGE 7
The PASS_MAX_DAYS entry dictates how long a user can go on without changing
her password. The default value is 99999, which means that a user can go for
approximately 274 years before changing the password. I recommend changing
this to a more realistic value. An appropriate value probably is anywhere from 30
to 150 days for most organizations. If your organization frequently faces password
security problems, use a more restrictive number in the 15- to 30-day range.
The PASS_MIN_DAYS entry dictates how long the user must wait before she can
change her password since her last change. The default value of 0 lets the user
change the password at any time. This user flexibility can be good if you can
ensure that your users choose hard-to-guess passwords. The PASS_MIN_LEN entry
sets the minimum password length. The default value reflects the frequently used
minimum size of 5. The PASS_WARN_AGE entry sets the reminder for the password
change. I use the following settings in many systems that I manage:
PASS_MAX_DAYS 150
PASS_MIN_DAYS 0
PASS_MIN_LEN 5
PASS_WARN_AGE 7
After you modify the /etc/login.defs file, make sure your aging policy works
as expected.
testuser:$1$/fOdEYFo$qcxNCerBbSE6unDn2uaCb1:11294:0:150:7:::
Here the last password change was on Sunday, December 3, 2000, which is 11,294 days after January 1, 1970. Now, to see what happens after 150 days have elapsed since the last change, I can simply subtract 150+1 from 11,294 and set the last-change value like this:
testuser:$1$/fOdEYFo$qcxNCerBbSE6unDn2uaCb1:11143:0:150:7:::
Now, if I try to log in to the system using this account, I must change the pass-
word because it has aged. Once you have tested your settings by changing appro-
priate values in the /etc/shadow file, you have a working password-aging policy.
Remove the test user account using the userdel testuser command.
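Instead of editing /etc/shadow by hand, you can inspect and adjust aging values with the chage utility (a sketch, assuming the standard shadow-utils chage command):
# Show the current aging settings for testuser
chage -l testuser
# Set the maximum password age to 150 days
chage -M 150 testuser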
You should also check your password files for consistency periodically. The pwck command can do exactly that: it performs integrity checking of both password files and the /etc/group file, too.
Although shadow passwords and password aging are great ways to fight user
security risks, the clear-text password risk still remains. To eliminate that risk, stop
using shell access that requires clear-text passwords.
Normally you should have only one superuser (that is, root) account in your
/etc/passwd and /etc/shadow files. For security, periodically scan these
files so you know there's only one root entry. The grep ':x:0:' /etc/passwd command displays all users who have root access.
Don't continue if you are currently using Telnet to access the server. You must follow the steps below from the console.
◆ Using vi or another text editor, open the /etc/services file. Search for the string telnet; you should see a line such as the following:
telnet 23/tcp
◆ Insert a # character before the word telnet, which should make the line look like this:
#telnet 23/tcp
◆ Force xinetd to reload its configuration, using the killall -USR1 xinetd command. This disables the Telnet service immediately. Verify that the Telnet service is no longer available by running the telnet localhost 23 command; you should get the following error message:
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
If you don't get this error message, xinetd hasn't reloaded its configuration properly. Retry the killall command and verify again.
As an added security precaution, remove the /usr/sbin/in.telnetd
Telnet daemon.
Although Telnet is the most frequently used method for accessing a remote system, you may also have rlogin, rsh, or rexec services turned on. Check the service files in the /etc/xinetd.d directory carefully. If you don't see a disabled = yes line in a service definition, add one in each of these files and then restart xinetd.
If it isn’t practical to access the system via the console, use Secure Shell (SSH)
for remote access. SSH encrypts all your traffic, including your passwords, when
you connect to another machine over the network, effectively eliminating risks
associated with eavesdropping in a network connection.
openssh-version.rpm
openssh-clients-version.rpm
openssh-server-version.rpm
openssh-version.src.rpm
You need only the first three RPMs if you want to install the OpenSSH binaries. OpenSSH uses OpenSSL (see Chapter 11) and the general-purpose, in-memory compression/decompression library called Zlib. Red Hat supplies Zlib RPMs, which should already be installed on your system. You can check this using the rpm -qa | grep zlib command. If you don't already have Zlib installed, download and install the Zlib RPM packages (zlib-version.rpm, zlib-devel-version.rpm) from a Red Hat RPM site. You can also download the Zlib source code from ftp://ftp.freesoftware.com/pub/infozip/zlib/, then compile and install it.
Once your system meets all the OpenSSH prerequisites, you can install OpenSSH.
I downloaded the following RPM packages:
openssh-2.3.0p1-1.i386.rpm
openssh-clients-2.3.0p1-1.i386.rpm
openssh-server-2.3.0p1-1.i386.rpm
openssh-2.3.0p1-1.src.rpm
To avoid or reduce future debugging time, it’s better to install the client soft-
ware on the server and thus remove the issues that occur because of remote
access. Running the client from the server ensures that you aren’t likely to
face DNS issues or other network issues. Once you get the client working on
the server, you can try a remote client knowing that the software works and
any problem probably is related to network configuration and availability.
I like to have source code available, so I installed all the preceding packages using the rpm -ivh openssh*.rpm command. If you decide to compile the source code (openssh-version.src.rpm), see the following instructions after you run the rpm -ivh openssh-version.src.rpm command:
Because the source distribution doesn’t install all the necessary configura-
tion files, be sure to install all the binary RPMs first — and then compile and
install the source on top of them.
3. Run ./configure, then make, and finally make install to install the
OpenSSH software.
4. Replace the lines that the binary RPM installed in the /etc/pam.d/sshd file with the lines shown in the listing that follows. The new file tells the SSH daemon to use the system-wide authentication configuration (found in the /etc/pam.d/system-auth file).
#%PAM-1.0
auth required /lib/security/pam_stack.so service=system-auth
auth required /lib/security/pam_nologin.so
account required /lib/security/pam_stack.so service=system-auth
password required /lib/security/pam_stack.so service=system-auth
session required /lib/security/pam_stack.so service=system-auth
The installation also creates the following files in the /etc/ssh directory:
ssh_host_dsa_key
ssh_host_dsa_key.pub
ssh_host_key
ssh_host_key.pub
sshd_config
Files ending with a .pub extension store the public keys for the OpenSSH server. The files ending in _key store the private keys, which shouldn't be readable by anyone but the root user. The very last file, sshd_config, is the configuration file. Listing 12-1 shows the default version of this file (slightly modified for brevity).
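An abridged sketch of such a default sshd_config (the directive names are real OpenSSH directives; the values shown are typical defaults of this era):
Port 22
Protocol 2,1
ListenAddress 0.0.0.0
HostKey /etc/ssh/ssh_host_key
PermitRootLogin yes
IgnoreRhosts yes
PasswordAuthentication yes
X11Forwarding no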
◆ Port specifies the port number that sshd binds to listen for connections.
■ The default value of 22 is standard.
■ You can add multiple Port directives to make sshd listen to multiple
ports.
A non-standard port for SSH (a port other than 22) can stop some port
scans.
Once you've made all the necessary changes to the /etc/ssh/sshd_config file, you can start sshd. The next subsections discuss the two ways you can run sshd:
◆ standalone service
◆ xinetd service
STANDALONE SERVICE
The standalone method is the default method for running sshd. In this method, the
daemon is started at server startup, using the /etc/rc.d/init.d/sshd script. This
script is called from the appropriate run-level directory. For example, if you boot
your Linux system in run-level 3 (default for Red Hat Linux), you can call the script
by using the /etc/rc.d/rc3.d/S55sshd link, which points to the /etc/rc.d/
init.d/sshd script.
To run sshd in standalone mode, you must install the openssh-server-
version.rpm package. If you have installed sshd only by compiling the source
code, follow these steps:
2. Link this script to your run-level directory, using the following command:
ln -s /etc/rc.d/init.d/sshd /etc/rc.d/rc3.d/S55sshd
This form of the command assumes that your run level is 3, which is
typical.
3. Start the daemon, using the following command:
/etc/rc.d/init.d/sshd start
The first time you run this command, the output shows the host keys being generated.
Before the SSH daemon starts for the very first time, it creates both public and private RSA and DSA keys and stores them in the /etc/ssh directory. Make sure the key files have appropriate permission settings: the files ending in _key are the private key files for the server and must not be readable by anyone but the root user. To verify that the SSH daemon started, run ps aux | grep sshd; you should see a line for the sshd process.
Once the SSH daemon starts, SSH clients can connect to the server. Now, if you
make configuration changes and want to restart the server, simply run the
/etc/rc.d/init.d/sshd restart command. If you want to shut down the sshd
server for some reason, run the /etc/rc.d/init.d/sshd stop command.
3. Force xinetd to load its new configuration, using the killall -USR1 xinetd command.
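For reference, a sketch of the xinetd service definition that such a setup relies on (a hypothetical /etc/xinetd.d/ssh file; the -i flag runs sshd in inetd mode):
service ssh
{
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/sshd
    server_args     = -i
    log_on_failure  += USERID
    disable         = no
}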
Now you can set up SSH clients. Typically, people who access a Linux server run an SSH client from another Linux system (or from a PC running Windows or some other operating system).
openssh-version.rpm
openssh-clients-version.rpm
Try the client software on the server itself so that you know the entire
client/server environment is working before attempting to connect from a
remote client system.
If you are following my recommendations, then you already have these two
packages installed on your server. If that is the case, go forward with the configu-
ration as follows:
4. The first time you try to connect to the OpenSSH server, you see a mes-
sage that warns you that ssh, the client program, can’t establish the
authenticity of the server. An example of this message is shown here:
The authenticity of host ‘k2.nitec.com’ can’t be established.
You are asked whether you want to continue. Because you must trust your
own server, enter yes to continue. You are warned that this host is perma-
nently added to your known host list file. This file, known_hosts, is cre-
ated in the .ssh directory.
5. You are asked for the password for the given username. Enter the appropriate password.
To log in without entering the password, copy the identity.pub file from your workstation to a subdirectory called .ssh in your home directory on the OpenSSH server. On the server, rename this identity.pub file to authorized_keys, using the mv identity.pub authorized_keys command. Change the permission settings of the file to 644, using the chmod 644 authorized_keys command. Doing so ensures that only you can change your public key and everyone else can only read it. This allows the server to authenticate you by using your public key, which is now available on both sides.
6. Once you enter the correct password, you are logged in to your OpenSSH server using the default SSH1 protocol. To use the SSH2 protocol:
■ Use the -2 option.
■ Create DSA keys, using the ssh-keygen command.
If you enter a pass phrase when you generate the keys using ssh-keygen pro-
gram, you are asked for the pass phrase every time ssh accesses your private key
(~/.ssh/identity) file. To save yourself from repetitively typing the pass phrase,
you can run the script shown in Listing 12-3.
}
start_agent()
{
    if [ "$RUNNING" = "1" ]; then
        . "${LOCKFILE}" > /dev/null
    else
        ssh-agent -s > "${LOCKFILE}"
        . "${LOCKFILE}" > /dev/null
    fi
}
kill_agent()
{
    check_stale_lock
    if [ -e "${LOCKFILE}" ]; then
        . "${LOCKFILE}" > /dev/null
        case "$SHELL_TYPE" in
            sh)
                PARAMS="-s"
                ;;
            csh)
                PARAMS="-c"
                ;;
            *)
                PARAMS=""
                ;;
        esac
        ssh-agent ${PARAMS} -k > /dev/null
        rm -f "${LOCKFILE}"
    fi
    print_kill
    exit 0
}
print_agent()
{
    case "$SHELL_TYPE" in
        csh)
            echo "setenv SSH_AUTH_SOCK $SSH_AUTH_SOCK;"
            echo "setenv SSH_AGENT_PID $SSH_AGENT_PID;"
            ;;
        sh)
            echo "SSH_AUTH_SOCK=$SSH_AUTH_SOCK; export SSH_AUTH_SOCK;"
            echo "SSH_AGENT_PID=$SSH_AGENT_PID; export SSH_AGENT_PID;"
            ;;
    esac
    echo "echo Agent pid $PID"
When you run this script once, you can use ssh multiple times without entering
the pass phrase every time. For example, after you run this script you can start the
X Window System as usual using startx or other means you use. If you run ssh
for remote system access from xterm, the pass phrase isn’t required after the very
first time. This can also be timesaving for those who use ssh a lot.
◆ Be root only if you must. Having root access doesn’t mean you should
log in to your Linux system as the root user to read e-mail or edit a text
file. Such behavior is a recipe for disaster! Use a root account only to
■ Modify a system file that can’t be edited by an ordinary user account
■ Enable a service or to do maintenance work, such as shutting down the
server
◆ Choose a very difficult password for root. The root account is the Holy Grail of security break-ins. Use an unusual combination of characters, punctuation marks, and numbers.
◆ Cycle the root password frequently. Don't use the same root password for more than a month. Make a mental or written schedule to change the root password every month.
◆ Never write down the root password. In a real business, usually the
root password is shared among several people. So make sure you notify
appropriate coworkers of your change, or change passwords in their pres-
ence. Never e-mail the password to your boss or colleagues.
If you look at the /etc/inittab file, you notice that it has lines such as the
following:
# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
These lines tie vc/1 through vc/6 to tty1 through tty6. You can remove
the rest of the unused virtual consoles and TTYs from the
/etc/securetty file (the lines for vc/7 through vc/11 and tty7
through tty11).
The /etc/securetty file must not be readable by anyone other than the root user account itself. Because login-related processes run as root, they can access the file to verify that root-account access is authorized for a certain tty device. If pseudo-terminal devices such as pts/0, pts/1, and pts/3 are placed in this file, root logins become possible over remote connections, which means that anyone can attempt brute-force attacks simply by trying to log in as root. To ensure that this file has appropriate permission settings that don't allow others to change it, run the chown root /etc/securetty and chmod 600 /etc/securetty commands.
To run the su session as a login session of the new user, use the - option. For example, su - gunchy switches to the user gunchy and processes such files as .login, .profile, and .bashrc as if the user had logged in directly.
Similarly, to become the root user from an ordinary user account, run the su
root command. You are asked for the root password. Once you enter the appropri-
ate password, you are in.
You can switch back and forth between your root session and the original ses-
sion by using the suspend and fg commands. For example, you can su to root
from an ordinary user account and then if you must return to the original user
shell, simply run the suspend command to temporarily stop the su session. To
return to the su session run the fg command.
The su command is a PAM-aware application and uses the /etc/pam.d/su con-
figuration file as shown in Listing 12-4.
The preceding configuration file allows the root user to su to any other user without a password, which makes sense because going from high privilege to low privilege isn't insecure by design. However, the default version of this file also permits any ordinary user who knows the root password to su to root. No one but the root user should know root's password; making the root account harder to access for unauthorized users who may have obtained the password makes good security sense. Simply uncomment (that is, remove the # character from) the following line:
auth required /lib/security/pam_wheel.so use_uid
Now the users who are listed in the wheel group in the /etc/group file can use
the su command to become root.
Now, if you want to enable a user to become root via the su facility, simply add
the user into the wheel group in the /etc/group file. For example, the following
line from my /etc/group file shows that only root and kabir are part of the
wheel group.
wheel:x:10:root,kabir
Don't use a text editor to modify the /etc/group file; the chance of introducing a typo or syntax error is too great. Instead, use the usermod command to modify a user's group privileges. For example, to add kabir to the wheel group, run the usermod -G wheel kabir command.
The su command is great for switching from an ordinary user to root, but it's an all-or-nothing operation. In other words, an ordinary user who can su to root gains access to all that root can do. This is often undesirable. For example, say you want a coworker to be able to start and stop the Web server if needed. If you give her the root password so that she can su to root to start and stop the Web server, nothing stops her from doing anything else root can do. Thankfully, there are ways to delegate selected root tasks to ordinary users without giving them full root access.
To install the latest sudo binary RPM package suitable for your Red Hat Linux architecture (such as i386, i686, or alpha), download it from the RPM Finder Web site and install it using the rpm command. For example, the latest binary RPM distribution for the i386 (Intel) architecture is sudo-1.6.3-4.i386.rpm. Run the rpm -ivh sudo-1.6.3-4.i386.rpm command to install the package.
After downloading the source RPM package, complete the following steps to compile and install sudo on your system.
1. su to root.
2. Run the rpm -ivh sudo-1.6.3-4.src.rpm command to extract the sudo tar ball into the /usr/src/redhat/SOURCES directory.
3. Change your current directory to /usr/src/redhat/SOURCES. If you run ls -l sudo*, you see a file such as the following:
5. Run make to compile. If you don't get any compilation errors, you can run make install to install the software.
By default, the visudo command uses the vi editor. If you aren't a vi fan and prefer emacs or pico, you can set the EDITOR environment variable to point to your favorite editor, which makes visudo run the editor of your choice. For example, if you use the pico editor, run export EDITOR=/usr/bin/pico for the bash shell, or run setenv EDITOR /usr/bin/pico for the csh and tcsh shells. Then run the visudo command to edit the /etc/sudoers contents in your preferred editor.
The default /etc/sudoers file has one configuration entry, as shown below:
root ALL=(ALL) ALL
This default setting means that the root user can run any command on any host as any user. The /etc/sudoers configuration is quite extensive and often confusing. The following section discusses a simplified approach to configuring sudo for practical use.
Two types of configuration are possible for sudo:
◆ Aliases. An alias is a simple name for a group of things of the same kind. The sudo configuration supports four types of aliases:
■ Host_Alias = list of one or more hostnames. For example, WEBSERVERS = k2.nitec.com, everest.nitec.com defines a host alias called WEBSERVERS, which is a list of two hostnames.
■ User_Alias = list of one or more users. For example, JRADMINS = dilbert, catbert defines a user alias called JRADMINS, which is a list of two users.
■ Cmnd_Alias = list of one or more commands. For example, COMMANDS = /bin/kill, /usr/bin/killall defines a command alias called COMMANDS, which is a list of two commands.
■ Runas_Alias = list of one or more users that commands may be run as.
◆ User specifications. A user specification defines who can run which commands as which user.
For example:
JRADMINS WEBSERVERS=(root) COMMANDS
This user specification says sudo allows the users in JRADMINS to run the programs in COMMANDS on the WEBSERVERS systems as root. In other words, it specifies that users dilbert and catbert can run the /bin/kill or /usr/bin/killall command on k2.nitec.com and everest.nitec.com as root.
Listing 12-5 is an example configuration.
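A sketch of what such a configuration looks like (the alias names here are illustrative, not necessarily those of the original listing):
Host_Alias  WEBSERVER = www.nitec.com
User_Alias  WEBADMIN = sheila, kabir
Cmnd_Alias  KILL = /bin/kill, /usr/bin/killall
WEBADMIN    WEBSERVER = (root) KILL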
The preceding configuration authorizes users sheila and kabir to run (via sudo) the kill commands (/bin/kill and /usr/bin/killall) as root on www.nitec.com. In other words, these two users can kill any process on www.nitec.com. How is this useful? Say that user sheila discovers that a program called oops.pl, which the system administrator (root) ran before going to lunch, has gone nuts and is making the Web server crawl. She can kill the process without waiting for the sysadmin to return. User sheila can run the ps auxww | grep oops.pl command to check whether the oops.pl program is still running. The output of the command is:
root 11681 80.0 0.4 2568 1104 pts/0 S 11:01 0:20 perl /tmp/oops.pl
She tries to kill it using the kill -9 11681 command, but the system returns an 11681: Operation not permitted error message. She realizes that the process is
owned by root (as shown in the ps output) and runs sudo kill -9 11681 to kill
it. Because she is running the sudo command for the very first time, she receives
the following message from the sudo command.
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these two things:
#1) Respect the privacy of others.
#2) Think before you type.
Password:
At this point she is asked for her own password (not the root password) and
once she successfully provides the password, sudo runs the requested command,
which kills the culprit process immediately. She then verifies that the process is
no longer running by rerunning the ps auxww | grep oops.pl command. As this example shows, sudo can safely delegate system tasks to junior-level administrators or coworkers. After all, who likes calls during lunch? Listing 12-6 presents a practical sudo configuration that I use to delegate some of the Web server administration tasks to junior administrators.
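A sketch of such a configuration (the alias names are illustrative; the commands match those described in the text):
User_Alias  JRWEBADMINS = wsajr1, wsajr2
Cmnd_Alias  WEBCMDS = /usr/local/apache/bin/apachectl, /bin/kill, \
                      /usr/bin/killall, /sbin/reboot, /sbin/halt
JRWEBADMINS ALL = (root) WEBCMDS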
This configuration allows two junior Web administrators (wsajr1 and wsajr2) to start, restart, and stop the Apache Web server, using the /usr/local/apache/bin/apachectl command. They can also kill any process on the server, and even reboot or halt it if need be. All this can happen without full root access.
Commands that allow shell access (such as editors like vi or programs like
less) shouldn’t run via the sudo facility, because a user can run any com-
mand via the shell and gain full root access intentionally or unintentionally.
The configuration I use is quite simple compared to what is possible with sudo.
(Read the sudoers man pages for details.) However, it’s a good idea to keep your
/etc/sudoers configuration as simple as possible. If the program you want to give
access to others is complex or has too many options, consider denying it com-
pletely. Don’t give out sudo access to users you don’t trust. Also, get in the habit of
auditing sudo-capable users frequently using the logs.
By default, sudo logs its activity through syslog's auth facility, per the following entry:
Defaults syslog=auth
To keep a separate sudo log besides the syslog-managed log files, you can add a line such as the following to /etc/sudoers:
Defaults log_file=/var/log/sudo.log
This forces sudo to write a log entry to the /var/log/sudo.log file every time it's run.
Monitoring Users
There are some simple tools that you can use every day to keep yourself informed
about who is accessing your system. These tools aren’t exactly monitoring tools by
design, but you can certainly use them to query your system about user activity.
Often I have discovered (as have many other system administrators) unusual activ-
ity with these tools, perhaps even by luck, but why quibble? The tools have these
capabilities; an administrator should be aware of them. In this section I introduce
some of them.
If you simply want a count of the users, run who -q. The w command provides more information than who does. Here's an example output of the w command.
Here user swang appears to read e-mail using pine, user jasont is modifying his
.plan file, user zippy seems to be running nothing other than the tcsh shell, and
user mimi is running the text-based Web browser Lynx.
When you have a lot of users (in the hundreds or more), running w or who can
generate more output than you want to deal with. Instead of running the who or w
commands in such cases, you can run the following script from Listing 12-7 to
check how many unique and total users are logged in.
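A minimal who.sh sketch that produces such counts (the original Listing 12-7 may differ in detail):
#!/bin/sh
# Count the total and unique users currently logged in
TOTAL=`who | wc -l`
UNIQUE=`who | awk '{print $1}' | sort -u | wc -l`
echo "Total user sessions: $TOTAL"
echo "Unique users: $UNIQUE"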
You can run this script from the command line as sh who.sh at any time to check how many total and unique users are logged in. Also, if you want to run the command every minute, use the watch -n 60 sh /path/to/who.sh command. This command runs the who.sh script every 60 seconds. Of course, if you want to run it at a different interval, change the number accordingly.
To use the finger command on local users, you don't need the finger
daemon.
All the commands (who, w, last, finger) discussed in this section depend on
system files such as /var/log/wtmp and /var/run/utmp. Make sure that these
files aren't world-writable; otherwise, a hacker disguised as an ordinary user can
cover his tracks.
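For example, you can review and tighten the permissions on these files like this:

ls -l /var/log/wtmp /var/run/utmp
chmod o-w /var/log/wtmp /var/run/utmp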
Creating a User-Access
Security Policy
System and network administrators are often busy beyond belief. Any administra-
tor who manages ten or more users knows that there's always something new to
take care of every day. I often hear that user administration is a thankless job, but
it doesn't have to be. With a little planning and documentation, an administrator
can make life easier for herself and everyone else involved. If every administrator
would craft a tight security policy and help users understand and apply it, user-
access-related security incidents would subside dramatically. Follow these guide-
lines for creating a user security policy.
Creating a User-Termination
Security Policy
It is absolutely crucial that your organization create a user-termination security
policy to ensure that people who leave the organization can’t become potential
security liabilities. By enforcing a policy upon user termination, you can make sure
your systems remain safe from any ill-conceived action taken by an unhappy
employee.
When a user leaves your organization, you have two alternatives for a first
response:
◆ Disable the user account so it can't log in to the system, using the
usermod -s /bin/true username command.
◆ Replace the user's login shell with a /bin/nologin script such as this:

#!/bin/sh
echo "Sorry, you are no longer allowed to access our systems.";
exit 0;
Set the nologin script's ownership to root with the chown root /bin/nologin
command. Make it executable for everyone by using the chmod 755 /bin/nologin
command. Run the usermod -s /bin/nologin username command. When a ter-
minated user tries to log in, the script runs and displays the intended message.
Summary
This chapter examined the risks associated with user access and some responses to
the risks — such as using shadow passwords, securing a user-authentication process
by using an OpenSSH service, restricting the access granted to the root user
account, and delegating root tasks to ordinary users in a secure manner.
Chapter 13
1. Download the latest SRP source distribution from the preceding Web site.
As of this writing the source distribution is called srp-1.7.1.tar.gz. As
usual, make sure that you replace the version number (1.7.1) with the
appropriate version number of the distribution you are about to install.
2. Once downloaded, su to root and copy the srp-1.7.1.tar.gz file into the
/usr/src/redhat/SOURCES directory.
I assume that you have extracted and compiled OpenSSL source in the
/usr/src/redhat/SOURCES/openssl-0.9.6 directory. Run the config-
ure script as shown below:
./configure --with-openssl=/usr/src/redhat/SOURCES/openssl-0.9.6 \
--with-pam
5. Once the SRP source is configured for OpenSSL and PAM support by the
options used in the preceding command, run the make and make install
commands to install the software.
At this point you have compiled and installed SRP, but you still need the
Exponential Password System (EPS) support for SRP applications.
Establishing Exponential
Password System (EPS)
The SRP source distribution includes the EPS source, which makes installation easy.
However, the default installation procedure didn’t work for me, so I suggest that
you follow my instructions below.
1. su to root.
2. Change the directory to /usr/src/redhat/SOURCES/srp-
1.7.1/base/pam_eps.
3. Install the PAM modules for EPS in the /lib/security directory with the
following command:
install -m 644 pam_eps_auth.so pam_eps_passwd.so /lib/security
4. Run the /usr/local/bin/tconf command. You can also run it from the
base/src subdirectory of the SRP source distribution.
The tconf command generates a set of parameters for the EPS password
file.
5. Choose the predefined field option.
The tconf utility also creates /etc/tpasswd and /etc/tpasswd.conf
files.
The more bits you require for security, the more time verification costs.
At this point, you have the EPS support installed but not in use. Thanks to the
PAM technology used by Linux, upgrading your entire (default) password authenti-
cation to EPS is quite easy. You modify a single PAM configuration file.
3. Notice the lines in bold. The first bold line indicates that the EPS auth
module for PAM can satisfy authentication requirements. The second bold
line specifies that the pam_eps_passwd.so PAM module for EPS is used
for password management. The placement of these lines (in bold) is very
important. No line with a sufficient control flag can come before the
pam_eps_auth.so or pam_eps_passwd.so lines.
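As a sketch, the relevant part of such a configuration might look like this; the
file name (/etc/pam.d/passwd) and the stock pam_pwdb lines are assumptions
based on a default Red Hat 7.x setup:

auth       sufficient   /lib/security/pam_eps_auth.so
auth       required     /lib/security/pam_pwdb.so shadow nullok
password   sufficient   /lib/security/pam_eps_passwd.so
password   required     /lib/security/pam_pwdb.so nullok use_authtok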
Now you can convert the passwords in /etc/passwd (or in /etc/shadow) to EPS
format.
Ordinary user passwords may need to be changed by using the root account
once before pam_eps_passwd.so will write to the /etc/tpasswd file.
This bug or configuration problem may already be corrected if you are
using a newer version.
Once you have converted user passwords in this manner you can start using the
SRP version of applications such as Telnet.
1. su to root and change the directory to the Telnet subdirectory of your SRP
source distribution, which for my version is /usr/src/redhat/SOURCES/
srp-1.7.1/telnet.
2. Run make and make install to install the Telnet server (telnetd) software
in /usr/local/sbin and the Telnet client (telnet) in /usr/local/bin.
Now you have an SRP-enabled Telnet server. Try the service by running the
SRP-enabled Telnet client (found in the /usr/local/bin directory) using the
/usr/local/bin/telnet localhost command. When prompted for the username
and password, use an already SRP-converted account. The username you use to
connect to the SRP-enabled Telnet server via this client must have an entry in
/etc/tpasswd, or the client automatically fails over to non-SRP (clear-text pass-
word) mode. Here’s a sample session:
$ telnet localhost 23
Trying 127.0.0.1...
Connected to localhost.intevo.com (127.0.0.1).
Escape character is '^]'.
[ Trying SRP ... ]
SRP Username (root): kabir
[ Using 1024-bit modulus for 'kabir' ]
SRP Password:
[ SRP authentication successful ]
[ Input is now decrypted with type CAST128_CFB64 ]
[ Output is now encrypted with type CAST128_CFB64 ]
Last login: Tue Dec 26 19:30:08 from reboot.intevo.com
1. su to root and change the directory to the FTP subdirectory of your SRP
source distribution, which for my version is /usr/src/redhat/SOURCES/
srp-1.7.1/ftp.
2. Run make and make install to install the FTP server (ftpd) software in
/usr/local/sbin and the FTP client (ftp) in /usr/local/bin.
If you don't want to fall back to regular FTP authentication (using a clear-text
password) when SRP authentication fails, add the server_args = -a line
after the socket_type line in the preceding configuration file.
Now you have an SRP-enabled FTP server. Try the service by running the SRP-
enabled FTP client (found in the /usr/local/bin directory) using the
/usr/local/bin/ftp localhost command. When prompted for the username
and password, use an already SRP-converted account. The username you use to
connect to the SRP-enabled FTP server via this client must have an entry in
/etc/tpasswd, or the client automatically fails over to non-SRP (clear-text pass-
word) mode. Here’s a sample session:
$ /usr/local/bin/ftp localhost
Connected to localhost.intevo.com.
220 k2.intevo.com FTP server (SRPftp 1.3) ready.
SRP accepted as authentication type.
Name (localhost:kabir): kabir
SRP Password:
SRP authentication succeeded.
Using cipher CAST5_CBC and hash function SHA.
200 Protection level set to Private.
232 user kabir authorized by SRP.
230 User kabir logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
The SRP-enabled FTP client and server support the following ciphers:
NONE (1)
BLOWFISH_ECB (2)
BLOWFISH_CBC (3)
BLOWFISH_CFB64 (4)
BLOWFISH_OFB64 (5)
CAST5_ECB (6)
CAST5_CBC (7)
CAST5_CFB64 (8)
CAST5_OFB64 (9)
DES_ECB (10)
DES_CBC (11)
DES_CFB64 (12)
DES_OFB64 (13)
DES3_ECB (14)
DES3_CBC (15)
DES3_CFB64 (16)
DES3_OFB64 (17)
Also, MD5 and SHA hash functions are supported. By default, the CAST5_CBC
cipher and SHA hash function are used. To specify a different cipher, use the -c
option. For example, the /usr/local/bin/ftp -c blowfish_cfb64 localhost
command uses the BLOWFISH_CFB64 cipher, not CAST5_CBC. To use the MD5 hash
function, use the -h option. The /usr/local/bin/ftp -h md5 localhost com-
mand uses the MD5 hash function, not SHA.
Details of these ciphers or hash functions are beyond the scope of this book. You
can learn about these ciphers at security-related Web sites. See Appendix C for
online resources.
To connect to your SRP-enabled FTP server from other Linux workstations,
install SRP support along with the SRP-enabled FTP client on them. There are also
SRP-enabled FTP clients for non-Linux systems.
Summary
Transmitting plain-text passwords over a network such as the Internet is very risky.
The Secure Remote Password (SRP) protocol provides you with an alternative to
sending plain-text passwords over the network. Using SRP you can secure the
authentication aspect of such protocols as Telnet and FTP.
Chapter 14
xinetd
IN THIS CHAPTER
◆ What is xinetd?
◆ Redirecting services
AS A SECURE REPLACEMENT for the inetd daemon, xinetd offers greater flexibility
and control. The xinetd daemon has the same functionality as inetd, but adds
access control, port binding, and protection from denial-of-service attacks.
One drawback is its poor support for Remote Procedure Call (RPC)-based services
(listed in /etc/rpc). Because most people don't run RPC-based services, this
doesn't matter too much. If you need RPC-based services, you can use inetd to run
those services while running xinetd to manage your Internet services in a secure,
controlled manner. In this chapter I discuss how you can set up xinetd and manage
various services with it in a secure manner.
What Is xinetd?
Typically, Internet services on Linux are run either in stand-alone or xinetd-run
mode. Figure 14-1 shows a diagram of what the stand-alone mode looks like.
As shown, in stand-alone mode a parent or master server runs at all times. This
master server
◆ Preforks multiple child servers that wait for requests from the master
server.
When the master server receives a request from a client system, it simply passes
the request information to one of its ready-to-run child server processes. The child
server interacts with the client system and provides necessary service.
Figure 14-1: The master server listens for connections on a certain port, preforks a
number of child servers, and hands each incoming user request (Request #1
through Request #N) to a waiting child.
In xinetd-run mode, there is no master server other than the xinetd server itself.
The xinetd server is responsible for listening on all necessary ports for all the
services it manages. Once
a connection for a particular service arrives, it forks the appropriate server pro-
gram, which in turn services the client and exits. If the load is high, xinetd services
multiple requests by running multiple servers.
However, because xinetd must fork a server as requests arrive, the penalty is too
great for anything that receives or can receive heavy traffic. For example, running
Apache as a xinetd service is practical only for experimentation and internal pur-
poses. It isn't feasible to run Apache as a xinetd-run service for a high-profile Web
site. The overhead of forking and establishing a new process for each request is too
much of a load and a waste of resources.
Because load is usually not an issue, plenty of services can use xinetd. Running
FTP and POP3 services via xinetd, for example, is quite feasible even for large
organizations.
Setting Up xinetd
By default, xinetd gets installed on your Red Hat Linux 7.x system. Make sure,
though, that you always have the latest version installed. In the following section I
show installation of and configuration for the latest version of xinetd.
Getting xinetd
As with all open-source Linux software, you have two choices for sources of
xinetd:
◆ Install a binary RPM distribution of xinetd from the Red Hat CD-ROM or
download it from a Red Hat RPM site such as http://rpmfind.net.
◆ Download the source RPM distribution, then compile and install it
yourself.
I prefer the source distribution, so I recommend that you try this approach, too.
However, if you must install the binary RPM, download it and run the rpm -ivh
xinetd-version-architecture.rpm command to install it. In the following
section, I show how you can compile and install xinetd.
To configure the TCP wrapper (tcpd) support for xinetd, run the configure
script with the --with-libwrap=/usr/lib/libwrap.a option. This
enables access control by using the /etc/hosts.allow and /etc/hosts.deny
files. Choose this option only if you have invested a great deal of time
creating these two files.
4. Run make and make install if you don’t receive an error during step 3.
If you get an error message and can’t resolve it, use the binary RPM
installation.
5. Create a directory called /etc/xinetd.d.
This directory stores the xinetd configuration files for each service you
want to run via xinetd.
6. Create the primary xinetd configuration file called /etc/xinetd.conf as
shown in Listing 14-1.
If you don't know your run level, run the runlevel command; the last
number it prints is your current run level.
To automatically run xinetd in other run levels you may choose to use,
create a similar link in the appropriate run-level directory.
The default values section enclosed within the curly braces ({}) has the follow-
ing syntax:
The following are common xinetd service attributes and their options.
◆ bind IP Address
◆ flags keyword
Common flag values include the following:
■ IDONLY — This flag accepts connections from only those clients that
have an identification (identd) server.
■ NORETRY — This flag instructs the server not to fork a new service
process again if the server fails.
■ NAMEINARGS — This flag specifies that the first value in the
server_args attribute is used as the first argument when starting the
service specified. This is most useful when using tcpd; you would spec-
ify tcpd in the server attribute (and ftpd -l as the service) in the
server_args attribute.
◆ id name
Identifies the service. By default, the service's name is the same as the id
attribute.
◆ instances number
◆ log_on_success keyword
◆ log_on_failure keyword
◆ port number
Specifies the port number for a service. Use this only if your service port
isn't defined in /etc/services.
◆ protocol keyword
◆ server path
◆ socket_type keyword
The socket type, which is one of the following:
■ stream (TCP)
■ dgram (UDP)
■ raw
■ seqpacket
◆ type keyword
◆ wait yes | no
The default attributes found in the /etc/xinetd.conf file apply to each man-
aged service. As shown in Listing 14-1, the defaults section:
◆ Limits each service to 60 simultaneous instances.
This means that when xinetd is in charge of managing the FTP service, it
allows 60 FTP sessions to go on simultaneously.
◆ Tells xinetd to use the syslog (the authpriv facility) to log information.
As mentioned earlier, each service has its own configuration file (found in the
/etc/xinetd.d directory), and that’s what you normally use to configure it. For
example, a service called myservice would be managed by creating a file called
/etc/xinetd.d/myservice, which has lines such as the following:
service myservice
{
attribute1 operator value1, value2, ...
attribute2 operator value1, value2, ...
. . .
attributeN operator value1, value2, ...
}
You can start quickly with only the default configuration found in the
/etc/xinetd.conf file. However, there is a lot of per-service configuration that
should be done (discussed in later sections) before your xinetd configuration is
complete.
If you prefer the kill command, you can use kill -USR1 <xinetd PID>
or killall -USR1 xinetd to soft-reconfigure xinetd. A soft reconfigu-
ration using the SIGUSR1 signal makes xinetd reload the configuration
files and adjust accordingly. To do a hard reconfiguration of the xinetd
process, simply replace USR1 with USR2 (the SIGUSR2 signal). This forces
xinetd to reload the configuration and remove currently running services.
no_access = 0.0.0.0/0
The 0.0.0.0/0 IP address range covers the entire IP address space. The
no_access attribute set to such an IP range disables access from all possible IP
addresses — that is, everyone. You must open access on a per-service basis.
Here is how you can fine tune the default configuration:
defaults
{
instances = 20
log_type = SYSLOG authpriv
log_on_success = HOST PID
log_on_failure = HOST RECORD
# Maximum number of connections allowed from
# a single remote host.
per_source = 10
# Deny access to all possible IP addresses. You MUST
# open access using only_from attribute in each service
# configuration file in /etc/xinetd.d directory.
no_access = 0.0.0.0/0
# Disable services that are not to be used
disabled = rlogin rsh rexec
}
After you create the defaults section as shown here, you can start xinetd. You
can then create service-specific configuration files and simply reload your xinetd
configuration as needed.
service myinetservice
{
socket_type = stream
wait = no
user = root
server = /path/to/myinetserviced
server_args = arg1 arg2
}
To set up services such as FTP, Telnet, and finger, all you need is a skeleton
configuration as in the preceding listing; change the values as needed. For exam-
ple, Listing 14-4 shows /etc/xinetd.d/ftp, the FTP configuration file.
184754-2 Ch14.F 11/5/01 9:04 AM Page 336
service ftp
{
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l -a
}
You are adding values to the list already specified in the log_on_success
attribute in the defaults section in /etc/xinetd.conf. Similarly, you can override
a default value for your service configuration. Say you don’t want to log via
syslog, and prefer to log by using a file in the /var/log directory. You can over-
ride the default log_type setting this way:
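For example (the log file name here is an illustration):

log_type = FILE /var/log/ftp.log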
Also, you can add new attributes as needed. For example, to control the FTP
server’s priority by using the nice attribute, you can add it into your configuration.
The completed example configuration is shown in Listing 14-5.
service ftp
{
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l -a
log_on_success += DURATION USERID
nice = 10
}
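To restrict a service to particular hosts, use the only_from attribute. For
example, a Telnet service configuration might include the following line (a sketch;
the network address comes from the explanation that follows):

only_from = 192.168.0.0/24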
This makes sure that only the computers in the 192.168.0.0 network can access
the Telnet service.
If you want to limit access to one or a few IP addresses instead of a full network,
you can list the IP addresses as values for the only_from attribute as shown in this
example:
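For instance (the addresses here are illustrative):

only_from = 192.168.0.10 192.168.0.11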
Although only_from makes the service available to all usable IP addresses rang-
ing from 192.168.0.1 to 192.168.0.254, the no_access attribute disables the IP
addresses that fall under the 192.168.0.128 network.
If you want to allow access to the service from a network 192.168.0.0/24 but
also want to block three hosts (with IP addresses 192.168.0.100, 192.168.0.101, and
192.168.0.102), the configuration that does the job is as follows:
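A sketch of such a configuration, using the attributes already discussed:

only_from = 192.168.0.0/24
no_access = 192.168.0.100 192.168.0.101 192.168.0.102

Similarly, the access_times attribute restricts when a service is available; for
example, access_times = 08:00-17:00 limits the service to office hours.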
When a user tries connecting to the service before or after these hours, access is
denied.
Reducing Risks of
Denial-of-Service Attacks
Denial-of-Service (DoS) attacks are very common these days. A typical DoS attacker
diminishes your system resources in such a way that your system denies responses
to valid user requests. Although it's hard to make a server completely foolproof
against such attacks, precautionary measures help you fight DoS attacks effectively.
In this section, I discuss how xinetd can reduce the risk of DoS attacks for the
services it manages.
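Limiting the number of servers
The instances attribute sets the maximum number of servers that can run con-
currently for a service; for example:

instances = 10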
Here, xinetd starts a maximum of ten servers to service multiple requests. If the
number of connection requests exceeds ten, the requests exceeding ten are refused
until at least one server exits.
Also, xinetd can write logs to a file of your choice. The log_type syntax for
writing logs is:
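log_type = FILE /path/to/log/file [soft_limit [hard_limit]]

For example, the following line (the file name is illustrative; the limits are given
in bytes) sets an 8MB soft limit and a 10MB hard limit:

log_type = FILE /var/log/myservice.log 8388608 10485760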
When the log file reaches 8MB, you see an alert entry in syslog and when the log
file reaches the 10MB limit, xinetd stops any service that uses the log file.
Limiting load
You can use the max_load attribute to specify the system load at which xinetd
stops accepting connections for a service. This attribute has the following syntax:
max_load number
This number specifies the load at which the server stops accepting connections;
the value for the load is based on a one-minute CPU load average, as in this
example:
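max_load = 2.9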
When the system load average goes above 2.9, this service is temporarily dis-
abled until the load average lowers.
The nice attribute sets the process priority of the server started by xinetd as
shown in this example:
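# The value is illustrative; any positive nice value lowers priority.
nice = 10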
This ensures that the service started by xinetd has a low priority.
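The cps attribute limits the rate of incoming connections; the first value is the
maximum number of connections per second, and the second is the number of
seconds to wait once that limit is reached. For example:

cps = 10 60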
Here xinetd starts a maximum of 10 servers and waits 60 seconds if this limit is
reached. During the wait period, the service isn’t available to any new client.
Requests for service are denied.
Creating an
Access-Discriminative Service
Occasionally, a service like HTTP or FTP has to run on a server in a way that dis-
criminates according to where the access request came from. This access discrimi-
nation allows for tight control of how the service is available to the end user.
For example, if you have a system with two interfaces (eth0 connected to the
local LAN and eth1 connected to an Internet router), you can provide FTP service
with a different set of restrictions on each interface. You can limit the FTP service
on the public (that is, eth1), Internet-bound interface to allow FTP connections only
during office hours when a system administrator is on duty and let the FTP service
run unrestrictedly when requested by users in the office LAN. Of course, you don’t
want to let Internet users access your FTP site after office hours, but you want
hardworking employees who are working late to access the server via the office
LAN at any time.
You can accomplish this using the bind attribute to bind an IP address to a spe-
cific service. Because systems with multiple network interfaces have multiple IP
addresses, this attribute can offer different functionality on a different interface
(that is, IP address) on the same machine.
Listing 14-6 shows the /etc/xinetd.d/ftp-worldwide configuration file used
for the public FTP service.
service ftp
{
id = ftp-worldwide
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l
instances = 10
cps = 5
nice = 10
only_from = 0.0.0.0/0
bind = 169.132.226.215
access_times = 08:00-17:00
}
◆ The id field sets a name (“ftp-worldwide”) for the FTP service that is
available to the entire world (the Internet).
◆ The service runs with a low process-priority level (10), using the nice
attribute.
Listing 14-7 shows the private (that is, office LAN access only) FTP service con-
figuration file called /etc/xinetd.d/ftp-office.
service ftp
{
id = ftp-office
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.ftpd
server_args = -l
only_from = 192.168.1.0/24
bind = 192.168.1.215
}
Here the private FTP service is named ftp-office — using the id attribute — and it’s
bound to the 192.168.1.0 network. Every host on this Class C network can access
this FTP server. But no external server (for example, one on the Internet) has access
to this server.
When xinetd receives a connection for the service with the redirect attribute, it
spawns a process and connects to the port on the IP or hostname specified as the
value of the redirect attribute. Here’s how you can use this attribute.
Say that you want to redirect all Telnet traffic destined for the Telnet server (run-
ning on IP address 169.132.226.215) to 169.132.226.232. The machine with the
169.132.226.215 IP address needs the following /etc/xinetd.d/telnet configuration:
service telnet
{
flags = REUSE
socket_type = stream
protocol = tcp
wait = no
user = root
bind = 169.132.226.215
redirect = 169.132.226.232
}
service telnet2323
{
id = telnet2323
flags = REUSE
socket_type = stream
protocol = tcp
wait = no
user = root
bind = 169.132.226.232
port = 2323
server = /usr/sbin/in.telnetd
}
Here the id field distinguishes the special service, and the port attribute lets
xinetd know that you want to run the Telnet daemon on port 2323 on
169.132.226.232.
In the setup illustrated here, a network gateway system (neon.nitec.com) runs
xinetd: eth1 (169.132.226.215) connects to the Internet router, and eth0
(192.168.1.215) connects to the LAN (192.168.1.0/24), where the Telnet server
lives. Telnet requests arriving on port 23 of the public interface are redirected to
the internal Telnet server with the following configuration:
service telnet
{
flags = REUSE
socket_type = stream
protocol = tcp
wait = no
user = root
bind = 169.132.226.215
redirect = 192.168.1.215 23
}
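You can also combine the NAMEINARGS flag with the TCP wrapper. A sketch for
the finger service (the exact flags in the original configuration may differ):

service finger
{
flags = REUSE NAMEINARGS
socket_type = stream
wait = no
user = root
server = /usr/sbin/tcpd
server_args = /usr/sbin/in.fingerd
}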
This makes xinetd run the TCP wrapper (/usr/sbin/tcpd) with the command-
line argument /usr/sbin/in.fingerd (which is the finger daemon).
Similarly, here is a configuration that runs the OpenSSH daemon (sshd) via
xinetd:
service ssh
{
socket_type = stream
wait = no
user = root
server = /usr/local/sbin/sshd
server_args = -i
log_on_success += DURATION USERID
log_on_failure += USERID
nice = 10
}
Using xadmin
The xinetd daemon provides an internal administrative service called xadmin. This
service provides information about the xinetd-run services. You can set up this
service configuration file, /etc/xinetd.d/xadmin, this way:
service xadmin
{
type = INTERNAL UNLISTED
port = 9100
protocol = tcp
socket_type = stream
wait = no
instances = 1
only_from = localhost
cps = 1
}
The configuration tells xinetd to run only one instance of the xadmin service on
the nonstandard (that is, not listed in /etc/services) port, 9100, using TCP.
Because xadmin shows information about the xinetd-run services, it isn’t advis-
able to make this service available to the public. That’s why the configuration
makes this service available only on localhost (that is, 127.0.0.1). You must log
on to the system locally to access this service. Only one connection per second is
allowed for this service.
To run this service from localhost, run the telnet localhost 9100 command.
Listing 14-8 shows a sample session when connected to this service.
The xadmin commands entered at the > prompt are shown in boldface. The help
command lists available xadmin commands. The show run command shows infor-
mation about currently running services that xinetd started. In the example, ftp
and xadmin are the only services run by xinetd. The show avail command shows
the configured services.
Summary
The xinetd daemon is a secure replacement for the traditional inetd daemon. It
allows each service to have its own configuration file and provides greater flexi-
bility in controlling access to the services it manages. It also offers good support
for handling many denial-of-service attacks.
Part IV
Network Service Security
CHAPTER 15
Web Server Security
CHAPTER 16
DNS Server Security
CHAPTER 17
E-Mail Server Security
CHAPTER 18
FTP Server Security
CHAPTER 19
Samba and NFS Server Security
Chapter 15
Web Server Security
APACHE, THE DEFAULT WEB-SERVER program for Red Hat Linux, is the most widely
used Web server in the world. Apache developers pay close attention to Web secu-
rity issues, which keeps security holes in the server code to a minimum. However,
most security issues surrounding Web sites exist because of software misconfigu-
ration or misunderstanding of underlying server technology. This chapter examines
some common Web security risks — and some ways to reduce or eliminate them.
Configuring Sensible
Security for Apache
Sensible security configuration for Apache includes creating dedicated user and
group accounts, using a security-friendly directory structure, establishing permis-
sions and index files, and disabling risky defaults. The following sections provide a
closer look.
For clarity, this chapter refers to the Apache-dedicated user and group
accounts as the httpd user and the httpd group; you may want to use a
different name for these accounts.
When you use a dedicated user and group for Apache, permission-specific
administration of your Web content becomes simpler to do: Just ensure that only
the Apache user has read access to your Web content. If you want to create a direc-
tory to which some CGI scripts may write data, enable write permissions for only
the Apache user.
◆ CustomLog and ErrorLog store access and error log files. You can specify
a different directory for each of these directives, but keeping one log
directory for all the log files is usually more manageable in the long run.
I recommend using a directory structure where all four primary directories are
independent of each other — meaning no primary directory is a subdirectory of any
other.
Not even the Apache user or group should have access to the log directory. The
following example shows such a directory structure:
/
+---home
| +---httpd (ServerRoot)
+---www
| +---htdocs (DocumentRoot)
| +---cgi-bin (ScriptAlias)
| +---logs (CustomLog and ErrorLog)
.
This directory structure is quite safe in many ways. To understand why, first look
at the following Apache configuration in httpd.conf.
ServerRoot /home/httpd
DocumentRoot /www/htdocs
ScriptAlias /cgi-bin/ "/www/cgi-bin/"
CustomLog /www/logs/access.log common
ErrorLog /www/logs/error.log
Because all these major directories are independent (not one is a subdirectory of
another) they are safe. A permissions mistake in one directory doesn’t affect the
others.
3. Change the ownership of the DocumentRoot directory (and all the subdi-
rectories below it) with this command:
chown -R httpd.webteam /www/htdocs
This command sets the directory ownership to Apache (that is, the httpd
user) and sets the group ownership to webteam, which includes the html-
guru user. This means both the Apache and htmlguru accounts can access
the document tree.
4. Change the permission of the DocumentRoot directory (and all the subdi-
rectories below it) this way:
chmod -R 2570 /www/htdocs
This command makes sure that the files and subdirectories under the
DocumentRoot are readable and executable by the Apache user and that
the webteam group can read, write, and execute everything. It also ensures
that whenever a new file or directory is created in the document tree, the
webteam group has access to it.
One great advantage of this method is that adding new users to the
webteam is as simple as running the following command:
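usermod -G webteam newuser

Here newuser is a placeholder; note that -G replaces the user's existing
supplementary group list.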
You can find which group(s) a user belongs to by running the groups
<username> command.
ScriptAlias should be accessible only to the CGI developers and the Apache
user. I recommend that you create a new group called webdev for the developer(s).
Although the developer group (webdev) needs read, write, and execute access for
the directory, the Apache user requires only read and execute access. Don’t allow
the Apache user to write files in this directory. For example, say you have the fol-
lowing ScriptAlias in httpd.conf:
If httpd is your Apache user and webdev is your developer group, set the per-
missions for /www/cgi-bin like this:
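A sketch of such a setup, following the same ownership pattern used for the
document tree:

chown -R httpd:webdev /www/cgi-bin
chmod -R 2570 /www/cgi-bin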
Alternatively, if you want only one user (say, cgiguru) to develop CGI scripts,
you can set the file and directory permission this way:
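For example (assuming httpd is also the Apache group name):

chown -R cgiguru:httpd /www/cgi-bin
chmod -R 750 /www/cgi-bin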
Here the user cgiguru owns the directory, and the group used for the Apache
server (specified by the Group directive) is the group owner of the directory and
its files.
The log directory used in CustomLog and ErrorLog directives should be writable
only by the root user. The recommended permissions setting for such a directory
(say, /www/logs) is:
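chown -R root:root /www/logs
chmod -R 700 /www/logs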
Don’t allow anyone (including the Apache user or group) to read, write, or
execute files in the log directory specified in CustomLog and ErrorLog
directives.
■ If it can read this file in the requested directory, the contents of the file
are displayed.
■ If such a file doesn’t exist, Apache checks whether it can create a
dynamic listing for the directory. If that action is allowed, then Apache
creates dynamic listings and displays the contents of the directory to
the user.
One common reason that many Web sites have an exposed directory or two is
that someone creates a new directory and forgets to create the index file — or
uploads an index file in the wrong case (INDEX.HTML or INDEX.HTM, for example).
If this happens frequently, a CGI script can automatically redirect users to your
home page or perhaps to an internal search engine interface. Simply modify the
DirectoryIndex directive so it looks like this:
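For example (index.pl is an assumed name for the redirect script installed in the
ScriptAlias-specified directory):

DirectoryIndex index.html index.htm /cgi-bin/index.pl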
Now add a CGI script such as the one shown in Listing 15-1 in the
ScriptAlias-specified directory.
This script runs if Apache doesn’t find the directory index files (index.html or
index.htm). The script simply redirects a user, whose URL points to a directory with
no index file, to the home page of the Web site.
Change /cgi-bin/ in the path of the directive if you use a different alias
name.
If you don't want to display any directory listings at all, you can simply disable
directory listings by setting the following configuration:
<Directory />
Options -Indexes
</Directory>
You may also want to tell Apache not to allow symbolic links; they can
expose part of the disk space that you don’t want to make public. To do so,
use the minus sign when you set the Options directive so it looks like this:
-FollowSymLinks.
<Directory />
Order deny,allow
Deny from all
</Directory>
This segment disables all access first. For access to a particular directory, use the
<Directory...> container again to open that directory. For example, if you want
to permit access to /www/htdocs, add the following configuration:
<Directory “/www/htdocs”>
Order deny,allow
Allow from all
</Directory>
This method — opening only what you need — is highly recommended as a pre-
ventive security measure.
<Directory />
AllowOverride None
</Directory>
This disallows user overrides and speeds up processing, because the server no
longer looks for the per-directory access-control files (.htaccess) for each request.
CGI scripts are typically the cause of most Web security incidents.
◆ No SSI support.
SSI pages are often problematic, since some SSI directives that can be
embedded in a page allow running CGI programs.
◆ No standard World Wide Web URLs.
The preceding paranoid configuration can be achieved using the following con-
figuration command:
./configure --prefix=/home/apache \
--disable-module=include \
--disable-module=cgi \
--disable-module=userdir \
--disable-module=status
Once you have run the preceding configuration command from the src direc-
tory of the Apache source distribution, you can make and install Apache (in
/home/apache) using the preceding paranoid configuration.
Information leaks
Vandals can make many CGI scripts leak information about users or the resources
available on a Web server. Such a leak helps vandals break into a system. The more
information a vandal knows about a system, the better informed the break-in
attempt, as in the following example:
http://unsafe-site.com/cgi-bin/showpage.cgi?pg=/doc/article1.html
http://unsafe-site.com/cgi-bin/showpage.cgi?pg=/etc/passwd
This displays the user password file for the entire system if the showpage.cgi
author does not protect the script from such leaks.
http://unsafe-site.com/cgi-bin/showlist.pl?start=1&stop=15
Say that this URL allows a site visitor to view a list of classified advertisements
on a Web site. The start=1 and stop=15 parameters control the number of records
displayed. If the showlist.pl script relies only on the supplied start and stop val-
ues, then a vandal can edit the URL and supply a larger number for the stop para-
meter to make showlist.pl display a larger list than usual. The vandal's
modification can overload the Web server with requests that take longer to process,
making real users wait (and, in the case of e-commerce, possibly move on to a com-
petitor's site).
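Consider a CGI script that sends a thank-you note via a system call; a sketch
reconstructed from the description that follows:

system("/bin/mail -s $subject $emailAddress < $thankYouMsg");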
In this case, the system call runs the /bin/mail program and supplies it the
value of variable $subject as the subject header and the value of variable
$emailAddress as the e-mail address of the user and redirects the contents of the
file named by the $thankYouMsg variable. This works, and no one should normally
know that your application uses such a system call. However, a vandal interested in
breaking into your Web site may examine everything she has access to, and try
entering irregular values for your Web form. For example, if a vandal enters
vandal@emailaddr < /etc/passwd; as the e-mail address, it fools the script into
sending the /etc/passwd file to the vandal-specified e-mail address.
If you use the system() function in your CGI script, use the -T option in
your #!/path/to/perl line to enable Perl's taint-checking mode, and set
the PATH environment variable using $ENV{PATH} = '/path/to/commands/
you/call/via/system'; to increase security.
In Perl, the risky calls include the system(), exec(), piped open(), and eval()
functions. Similarly, in C the popen() and system() functions are potential secu-
rity hazards. All these functions/commands typically invoke a subshell (such as
/bin/sh) to process the user command.
Even shell scripts that use system() or exec() calls can open a port of entry for
vandals. Backtick quotes (features available in shell interpreters and Perl that cap-
ture program output as text strings) are also dangerous.
To illustrate the importance of careful use of system calls, take a look at this
innocent-looking Perl code segment:
#!/usr/bin/perl
# Purpose: to demonstrate security risks in
# poorly written CGI script.
# Get the domain name from query string
# environment variable (a query of the form
# domain=example.com is assumed).
($domain) = ($ENV{'QUERY_STRING'} =~ /domain=([^&]+)/);
#
# Print the appropriate content type.
# Since whois output is in plain text
# we choose to use text/plain as the content-type here.
print "Content-type: text/plain\n\n";
# Here is the bad system call
system("/usr/bin/whois $domain");
# Here is another bad system call using backticks.
# my $output = `/usr/bin/whois $domain`;
# print $output;
exit 0;
This little Perl script is meant to be a Web-based whois gateway. If this script is
called whois.pl, and it's kept in the cgi-bin directory of a Web site called
unsafe-site.com, a user can call this script this way:
http://unsafe-site.com/cgi-bin/whois.pl?domain=anydomain.com
The script takes anydomain.com as the $domain variable via the QUERY_STRING
variable and launches the /usr/bin/whois program with the $domain value as the
argument. This returns the data from the whois database that InterNIC maintains.
This is all very innocent, but the script is a disaster waiting to happen. Consider the
following URL:
http://unsafe-site.com/cgi-bin/whois.pl?domain=nitec.com;ps
This does a whois lookup on a domain called nitec.com and provides the out-
put of the Unix ps utility that shows process status. This reveals information about
the system that shouldn’t be available to the requesting party. Using this technique,
anyone can find out a great deal about your system. For example, replacing the ps
command with df (a common Unix utility that prints a summary of disk space)
enables anyone to determine what partitions you have and how full they are. I
leave to your imagination the real dangers this security hole could pose.
Don’t trust any input. Don’t make system calls an easy target for abuse.
Two overall approaches are possible if you want to make sure your user input is
safe:
◆ Scan the input and filter out any unwanted characters, such as shell
metacharacters. For example, for the preceding whois.pl script, you can
add the following line before the system() call:
$domain =~ s/[\/ ;\[\]\<\>&\t]//g;
The best way to handle user input is by establishing rules to govern it, clarifying
what you expect and rejecting anything else. If (for example) you are expecting an
e-mail address as input (rather than just scanning it blindly for shell metacharac-
ters), use a regular expression such as the following to detect the validity of the
input as a possible e-mail address:
$email = param('email-addr');
if ($email =~ /^[\w-\.]+\@[\w-\.]+$/) {
print "Possibly valid address.";
}
else {
print "Invalid email address.";
}
Just sanitizing user input isn't enough. Be careful about how you invoke exter-
nal programs; there are many ways you can invoke external programs in Perl.
These methods include the system() and exec() functions, backtick quotes, and
piped open() calls.
All these constructions can be risky if they involve user input that may con-
tain shell metacharacters. For system() and exec(), there’s a somewhat
obscure syntactical feature that calls external programs directly rather than
through a shell. If you pass the arguments to the external program (not in
one long string, but as separate elements in a list), Perl doesn’t go through
the shell, and shell metacharacters have no unwanted side effects, as
follows:
system "/usr/bin/sort", "data.dat";
You can use this feature to open a pipe without using a shell. By calling open
with the character sequence -|, you fork a copy of Perl and open a pipe to the
copy. Then, the child copy immediately forks another program, using the first
argument of the exec function call.
To read from a pipe without opening a shell, you can use the -| character
sequence:
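A sketch of this idiom, reading sorted output from the data.dat file used in the
earlier example:

# The child execs sort directly, so no shell ever sees the arguments.
open(SORTED, "-|") || exec "/usr/bin/sort", "data.dat";
while (<SORTED>) {
print;
}
close(SORTED);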
These forms of open()s are more secure than the piped open()s. Use these
whenever applicable.
Many other obscure features in Perl can call an external program and lie to it
about its name. This is useful for calling programs that behave differently depend-
ing on the name by which they were invoked. The syntax is
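A sketch of this indirect-object form (the program path and names are
illustrative):

system { "/usr/bin/realprog" } "fakename", "arg1";

The block supplies the program that actually runs, while the first list element
becomes the name the program sees as its own.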
Vandals sometimes alter the PATH environment variable so it points to the pro-
gram they want your script to execute — rather than the program you’re expecting.
Invoke programs using full pathnames rather than relying on the PATH environ-
ment variable. That is, instead of the following fragment of Perl code
system("cat /tmp/shopping.cart.txt");
use this:
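system("/bin/cat", "/tmp/shopping.cart.txt");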
If you must rely on the path, set it yourself at the beginning of your CGI script,
like this:
$ENV{'PATH'} = '/bin:/usr/bin:/usr/local/bin';
◆ Include the previous line toward the top of your script whenever you use
taint checks.
Even if you don’t rely on the path when you invoke an external program,
there’s a chance that the invoked program does.
◆ You must adjust the line as necessary for the list of directories you want
searched.
◆ It’s not a good idea to put the current directory into the path.
For example:
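<INPUT TYPE="hidden" NAME="state" VALUE="CA">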
Here the hidden tag stores state=CA, which can be retrieved by the same appli-
cation in a subsequent call. Hidden tags are common in multiscreen Web applica-
tions. Because users can manually change hidden tags, they shouldn’t be trusted at
all. A developer can use two ways of protecting against altered data:
◆ Use a security scheme to ensure that data hasn’t been altered by the user.
In the following example CGI script, shown in Listing 15-2, I demonstrate the
MD5 message digest algorithm to protect hidden data.
This is a simple multiscreen CGI script that asks the user for a name in the first
screen and an e-mail address in the following screen and finally prints out a mes-
sage. When the user moves from one screen to another, the data from the previous
screen is carried to the next screen through hidden tags. Here’s how this script
works.
The first screen asks the user for her name. Once the user enters her name, the
following screen asks for the user’s e-mail address. The HTML source of this screen
is shown in Listing 15-3.
</HEAD>
<BODY>
<H2>Screen 2</H2>
<HR SIZE="0" COLOR="black">
<FORM METHOD="POST"
ENCTYPE="application/x-www-form-urlencoded">
Enter email:
<INPUT TYPE="text"
NAME="email"
SIZE=30>
<INPUT TYPE="hidden"
NAME="name"
VALUE="Cynthia">
<INPUT TYPE="hidden"
NAME="digest"
VALUE="IzrSJlLrsWlYHNfshrKw/A">
<INPUT TYPE="submit"
NAME=".submit"
VALUE=" Next ">
</FORM>
</BODY>
</HTML>
Notice that the hidden data is stored using the following lines:
<INPUT TYPE="hidden"
NAME="name"
VALUE="Cynthia">
<INPUT TYPE="hidden"
NAME="digest"
VALUE="IzrSJlLrsWlYHNfshrKw/A">
The first hidden data tag line stores name=Cynthia and the second one stores
digest=IzrSJlLrsWlYHNfshrKw/A. The second piece of data is the message digest
generated for the name entered in screen 1. When the user enters her e-mail address
in the second screen and continues, the final screen is displayed.
However, before the final screen is produced, a message digest is computed for
the name field entered in screen 1. This digest is compared against the digest created
earlier to verify that the value entered for the name field in screen 1 hasn’t been
altered in screen 2. Because the MD5 algorithm creates the same message digest for
a given data set, any differences between the new and old digests raise a red flag,
and the script displays an alert message and refuses to complete processing. Thus,
if a vandal decides to alter the data stored in screen 2 (shown in Listing 15-3) and
submits the data for final processing, the digest mismatch allows the script to detect
the alteration and take appropriate action. In your real-world CGI scripts (written in
Perl) you can use the create_message_digest() subroutine to create a message
digest for anything.
You can download and install the latest version of Digest::MD5 from
CPAN by using the perl -MCPAN -e shell command, followed by the
install Digest::MD5 command at the CPAN shell prompt.
suEXEC
Apache includes a support application called suEXEC that lets Apache users run
CGI and SSI programs under UIDs that are different from the UID of Apache.
suEXEC is a setuid wrapper program that is called when an HTTP request is made
for a CGI or SSI program that the administrator designates to run as a UID other
than that of the Apache server. In response to such a request, Apache provides the
suEXEC wrapper with the program’s name and the UID and GID. suEXEC runs the
program using the given UID and GID.
Before running the CGI or SSI command, the suEXEC wrapper performs a set of
tests to ensure that the request is valid.
◆ This testing procedure ensures that the CGI script is owned by a user who
can run the wrapper and that the CGI directory or the CGI script isn’t
writable by anyone but the owner.
◆ After the security checks are successful, the suEXEC wrapper changes the
UID and the GID to the target UID and GID via setuid and setgid calls,
respectively.
◆ The group-access list is also initialized with all groups in which the user is
a member. suEXEC cleans the process’s environment by
■ Establishing a safe execution path (defined during configuration).
■ Passing through only those variables whose names are listed in the safe
environment list (also created during configuration).
The suEXEC process then becomes the target CGI script or SSI command
and executes.
./configure --prefix=/path/to/apache \
--enable-suexec \
--suexec-caller=httpd \
--suexec-userdir=public_html \
--suexec-uidmin=100 \
--suexec-gidmin=100 \
--suexec-safepath="/usr/local/bin:/usr/bin:/bin"
◆ --suexec-caller=httpd changes httpd to the UID you use for the User
directive in the Apache configuration file. This is the only user account
permitted to run the suEXEC program.
◆ --suexec-userdir=public_html defines the subdirectory under users’
home directories where suEXEC executables are kept. Change
public_html to whatever you use as the value for the UserDir directive,
which specifies the document root directory for a user’s Web site.
◆ --suexec-uidmin=100 defines the lowest UID permitted to run suEXEC-
based CGI scripts. This means UIDs below this number can't run CGI or
SSI commands via suEXEC. Look at your /etc/passwd file to make sure
the range you chose doesn't include the system accounts, which usually
have UIDs below 100.
◆ --suexec-gidmin=100 defines the lowest GID permitted as a target group.
This means GIDs below this number can't run CGI or SSI commands via
suEXEC. Look at your /etc/group file to make sure that the range you
chose doesn't include the system account groups, which usually have
GIDs below 100.
This tells you that the suEXEC is active. Now, test suEXEC’s functionality. In the
httpd.conf file, add the following lines:
UserDir public_html
AddHandler cgi-script .pl
To access the script via a Web browser, I request the following URL:
http://wormhole.nitec.com/~kabir/test.pl
A CGI script is executed only after it passes all the security checks performed by
suEXEC. suEXEC also logs the script request in its log file. The log entry for my
request is
If you are really interested in knowing that the script is running under the user’s
UID, insert a sleep command (such as sleep(10);) inside the foreach loop, which
slows the execution and allows commands such as top or ps on your Web server
console to find the UID of the process running test.pl. You also can change the
ownership of the script using the chown command, try to access the script via your
Web browser, and see the error message that suEXEC logs. For example, I get a
server error when I change the ownership of the test.pl script in the ~kabir/
public_html directory as follows:
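chown root ~kabir/public_html/test.pl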
Here, the program is owned by UID 0, and the group is still kabir (500), so
suEXEC refuses to run it, which means suEXEC is doing what it should do.
To ensure that suEXEC will run the test.pl program in other directories, I cre-
ate a cgi-bin directory in ~kabir/public_html and put test.pl in that direc-
tory. After determining that the user and group ownership of the new directory and
file are set to user ID kabir and group ID kabir, I access the script by using the fol-
lowing URL:
http://wormhole.nitec.com/~kabir/cgi-bin/test.pl
If you have virtual hosts and want to run the CGI programs and/or SSI commands
using suEXEC, use User and Group directives inside the <VirtualHost . . .>
container. Set these directives to user and group IDs other than those the Apache
server is currently using. If only one, or neither, of these directives is specified for a
<VirtualHost> container, the server user ID or group ID is assumed.
For security and efficiency, all suEXEC requests must remain within either a top-
level document root for virtual host requests or one top-level personal document
root for userdir requests. For example, if you have four virtual hosts configured,
structure all their document roots from one main Apache document hierarchy if
you plan to use suEXEC for virtual hosts.
CGIWrap
CGIWrap is like the suEXEC program because it allows CGI scripts without compro-
mising the security of the Web server. CGI programs are run with the file owner’s
permission. In addition, CGIWrap performs several security checks on the CGI script
and isn’t executed if any checks fail.
The allow file uses entries of the following format:
Username@subnet1/mask1,subnet2/mask2...
For example, if the following line is found in the allow file (you specify the
filename),
kabir@192.168.1.0/255.255.255.0
user kabir's CGI scripts can be run by hosts that belong to the 192.168.1.0
network with netmask 255.255.255.0.
After you run the Configure script, you must run the make utility to create the
CGIWrap executable file.
ENABLING CGIWRAP
To use the wrapper application, copy the CGIWrap executable to the user’s cgi-bin
directory. This directory must match what you have specified in the configuration
process. The simplest starting method is keeping the ~username/public_html/
cgi-bin type of directory structure for the CGI script directory.
1. After you copy the CGIWrap executable, change the ownership and per-
mission bits like this:
chown root CGIWrap
chmod 4755 CGIWrap
http://www.yourdomain.com/cgi-bin/cgiwrapd/username/scriptname
5. If the script is an nph-style script, you must run it using the following
URL:
http://www.yourdomain.com/cgi-bin/nph-cgiwrap/username/scriptname
Use of the cgi-bin alias has become overwhelmingly popular. This alias is
set using the ScriptAlias directive in httpd.conf for Apache, as shown
in this example:
ScriptAlias /cgi-bin/ "/path/to/real/cgi/directory/"
You can use nearly anything to create an alias like this. For example, try
ScriptAlias /apps/ "/path/to/real/cgi/directory/"
Now the apps in the URL serve the same purpose as cgi-bin. Thus, you
can use something nonstandard like the following to confuse vandals:
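For example (the alias name here is arbitrary):
ScriptAlias /x9y8z7/ "/path/to/real/cgi/directory/"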
Many vandals use automated programs to scan Web sites for features and
other clues. A nonstandard script alias such as the one in the preceding
example usually isn't something an automated scanner knows to look for.
Many sites boldly showcase what type of CGI scripts they run, as in this
example:
http://www.domain.com/cgi-bin/show-catalog.pl
The preceding URL provides two clues about the site: It supports CGI
scripts, and it runs Perl scripts as CGI scripts.
If, instead, that site uses
http://www.domain.com/ext/show-catalog
then it becomes quite hard to determine anything from the URL. Avoid
using the .pl and .cgi extensions.
To change an existing script’s name from a .pl, .cgi, or other risky exten-
sion type to a nonextension name, simply rename the script. You don’t have
to change or add any new Apache configuration to switch to nonextension
names.
<Directory />
Options IncludesNOEXEC
</Directory>
This disables the exec command (and the inclusion of CGI programs via the
include command) everywhere in your Web space while still permitting other SSI
commands. You can enable the full set of commands whenever necessary by defin-
ing a directory container with narrower scope. See the following example:
<Directory />
Options IncludesNOEXEC
</Directory>
<Directory "/ssi">
Options +Includes
</Directory>
This configuration segment disables the exec command everywhere but the
/ssi directory.
Avoid using the printenv command, which prints out a listing of all exist-
ing environment variables and their values, as in this example:
<!--#printenv -->
This command displays all the environment variables available to the Web
server — on a publicly accessible page — which certainly gives away clues
to potential bad guys. Use this command only when you are debugging SSI
calls, never in a production environment.
As shown, there are a great many configuration and policy decisions (what to
allow and how to allow it) that you must make to ensure Web security. Many
administrators become frustrated after implementing a set of security measures
because they don't know what else is required. Once you have implemented a set
of measures, such as controlled CGI and SSI requests as explained above, focus
your efforts on logging.
Logging Everything
A good Web administrator closely monitors server logs, which provide clues to
unusual access patterns. Apache can log access requests that are successful and that
result in error in separate log files as shown in this example:
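For example, using the log locations from the directory structure shown earlier:

CustomLog /www/logs/access.log common
ErrorLog /www/logs/error.log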
The first directive, CustomLog, logs each incoming request to your Web site, and
the second directive, ErrorLog, records only the requests that generated an error
condition. The error log is a good place to check problems that are reported by your
Web server. You can use a robust log analysis program like Wusage
(www.boutell.com) to routinely analyze and monitor log files. If you notice, for
example, someone trying to supply unusual parameters to your CGI scripts, con-
sider it a hostile attempt and investigate the matter immediately. Here’s a process
that you can use:
6. Send an e-mail to the technical contact address at the ISP regarding the
incident and supply the log snippet for his review. Write your e-mail in a
polite and friendly manner.
The ISP at the other end is your only line of defense at this point. Politely
request a speedy resolution or response.
7. If you can't take the script offline because it's used too heavily by other
users, you can decide to ban the bad guy from using it. Say you run your
script under the script alias ext, which is set up as follows:
ScriptAlias /ext/ "/some/path/to/cgi/scripts/"
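You can then deny the offender with a container such as this (a sketch; the
directory path matches the alias above):

<Directory "/some/path/to/cgi/scripts/">
Order allow,deny
Allow from all
Deny from 192.168.1.100
</Directory>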
Replace 192.168.1.100 with the IP address of the bad guy. This configura-
tion runs your script as usual for everyone but the user on the IP address
given in the Deny from line. However, if the bad guy’s ISP uses dynami-
cally allocated IP addresses for its customers, then locking the exact IP
address isn’t useful because the bad guy can come back with a different IP
address next time. In such a case, you must consider locking the entire IP
network. For example, if the ISP uses 192.168.1.0, then you must remove
the 100 from the Deny from line to block the entire ISP. This is a drastic
measure and may block a lot of innocent users at the ISP from using this
script, so exercise caution when deciding to block.
8. Wait a few days for the technical contact to respond. If you don’t hear
from him, try to contact him through the Web site. If the problem persists,
contact your legal department to determine what legal actions you can
take to require action from the ISP.
Logs are great, but they’re useless if the bad guys can modify them. Protect your
log files. I recommend keeping log files in their own partition where no one but the
root user has access to make any changes.
Make sure that the directories specified by ServerRoot, CustomLog, and
ErrorLog directives aren’t writable by anyone but the root user. Apache users and
groups don’t need read or write permission in log directories. Enabling anyone
other than the root user to write files in the log directory can cause a major secu-
rity hole. To ensure that only root user has access to the log files in a directory
called /logs, do the following:
1. Change the ownership of the directory and all the files within it to root
user and root group by using this command:
chown -R root:root /logs
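2. Restrict the directory permissions so that only root can read, write, or enter it; for example:
chmod -R 700 /logs
Apache opens its log files while it still runs as root, so this doesn't break logging.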
Logging access requests lets you monitor and analyze who is requesting information from your Web site. Sometimes access to certain parts of your Web site must be
restricted so that only authorized users or computers can access the contents.
Restricting Access to
Sensitive Contents
You can restrict access by IP or hostname or use username/password authentication
for sensitive information on your Web site. Apache can restrict access to certain
sensitive contents using two methods:
Using IP or hostname
In this authentication scheme, access is controlled by the hostname or the host’s IP
address. When a request for a certain resource arrives, the Web server checks
whether the requesting host is allowed access to the resource; then it acts on the
findings.
The standard Apache distribution includes a module called mod_access, which
bases access control on the Internet hostname of a Web client. The hostname can be
◆ A fully qualified domain name
◆ A partial domain name
◆ A full IP address
◆ A partial IP address
The module supports this type of access control by using three Apache directives:
◆ allow
◆ deny
◆ order
The allow directive can define a list of hosts (containing hosts or IP addresses)
that can access a directory. When more than one host or IP address is specified,
they should be separated with space characters. Table 15-1 shows the possible val-
ues for the directive.
TABLE 15-1: POSSIBLE VALUES FOR THE allow DIRECTIVE

Value: all
Example: allow from all
Meaning: This reserved word allows access for all hosts. The example shows this option.

Value: A fully qualified domain name (FQDN) of a host
Example: allow from wormhole.nitec.com
Meaning: Only the host that has the specified FQDN is allowed access. The allow directive in the example allows access only to wormhole.nitec.com. This comparison uses whole components; toys.com would not match etoys.com.

Value: A partial domain name of a host
Example: allow from .mainoffice.nitec.com
Meaning: Only the hosts that match the partial hostname have access. The example permits all the hosts in the .mainoffice.nitec.com network access to the site, such as developer1.mainoffice.nitec.com and developer2.mainoffice.nitec.com. However, developer3.baoffice.nitec.com isn't allowed access.

Value: A full IP address of a host
Example: allow from 192.168.1.100
Meaning: Only the specified IP address is allowed access. The example shows a full IP address (all four octets are present), 192.168.1.100.

Value: A partial IP address
Examples: allow from 192.168.1 and allow from 130.86
Meaning: When not all four octets of an IP address are present in the allow directive, the partial IP address is matched from left to right, and hosts that have the matching IP address pattern (that is, are part of the same subnet) have access. In the first example, all hosts with IP addresses in the range of 192.168.1.1 to 192.168.1.255 have access. In the second example, all hosts from the 130.86 network have access.
The deny directive is the exact opposite of the allow directive. It defines a list of
hosts that can’t access a specified directory. Like the allow directive, it can accept
all the values shown in Table 15-1.
The order directive controls how Apache evaluates both allow and deny direc-
tives. For example:
<Directory "/mysite/myboss/rants">
order deny,allow
deny from myboss.mycompany.com
allow from all
</Directory>
This example denies the host myboss.mycompany.com access and gives all other
hosts access to the directory. The value for the order directive is a comma-
separated list, which indicates which directive takes precedence. Typically, the one
that affects all hosts (in the preceding example, the allow directive) is given the lowest priority.
Although allow,deny and deny,allow are the most widely used values for the order directive, another value, mutual-failure, can indicate that only those hosts
appearing on the allow list but not on the deny list are granted access.
In all cases, every allow and deny directive is evaluated.
If you want to block access to a specific HTTP request method, such as GET, POST, or PUT, you can do so with the <Limit> container, as shown in this example:
<Location /cgi-bin>
<Limit POST>
order deny,allow
deny from all
allow from yourdomain.com
</Limit>
</Location>
This example allows POST requests to the cgi-bin directory only if they are
made by hosts in the yourdomain.com domain. This means if this site has some
HTML forms that send user input data via the HTTP POST method, only the users in
yourdomain.com can use these forms effectively. Typically, CGI applications are
stored in the cgi-bin directory, and many sites feature HTML forms that dump
data to CGI applications through the POST method. Using the preceding host-based
access control configuration, a site can allow anyone to run a CGI script but allow
only a certain site (in this case, yourdomain.com) to actually post data to CGI
scripts. This gives CGI access on such a site a read-only character.
Everyone can run applications that generate output without taking any user input,
but only users of a certain domain can provide input.
To use username/password authentication instead, suppose your httpd.conf file contains the following:
DocumentRoot "/www/htdocs"
AccessFileName .htaccess
AllowOverride All
Assume also that you want to restrict access to the following directory, such that
only a user named reader with the password bought-it can access the directory:
/www/htdocs/readersonly
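1. Create the user file with the htpasswd utility. Based on the options discussed in the notes that follow, the command is:
htpasswd -c /www/secrets/.htpasswd reader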
The htpasswd utility asks for the password of reader. Enter bought-it
and then reenter the password to confirm. After you reenter the password,
the utility creates a file called .htpasswd in the /www/secrets directory.
Note the following:
■ The -c option tells htpasswd that you want a new user file. If you
already had the password file and wanted to add a new user, you
would not want this option.
■ Place the user file outside the document root directory of the
apache.nitec.com site, as you don’t want anyone to download it via
the Web.
■ Use a leading period (.) in the filename so the file doesn't appear in normal directory listings on your Unix system. Doing so doesn't provide any real security benefit, but it follows a Unix tradition: many configuration files in Unix systems have leading periods (.login and .profile, for example).
2. Execute the following command:
cat /www/secrets/.htpasswd
This should show a line like the following (the password won’t be exactly
the same as this example):
reader:hulR6FFh1sxK6
This command confirms that you have a user called reader in the
.htpasswd file. The password is encrypted by the htpasswd program,
using the standard crypt() function.
3. Create an .htaccess file.
Using a text editor, add the following lines to a file named
/www/htdocs/readersonly/.htaccess:
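A sketch of the file, reconstructed from the directive descriptions that follow (the require line assumes the reader user created earlier):
AuthName "Readers Only"
AuthType Basic
AuthUserFile /www/secrets/.htpasswd
require user reader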
■ AuthName sets a label that goes to the Web browser so that the user is provided with some clue about what she is about to access. In this case, the "Readers Only" string indicates that only readers can access this directory.
■ AuthType specifies the type of authentication.
■ AuthUserFile specifies the filename and path for the user file.
No users except the file owner and Apache should have access to these files.
Clicking Reload or refresh in the browser requests the same URL again,
and the browser receives the same authentication challenge from the
server. This time enter reader as the username and bought-it as the pass-
word, and click OK. Apache now gives you directory access.
You can change the Authentication Required message if you want by using the
ErrorDocument directive:
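The directive line, using the file name mentioned below, is:
ErrorDocument 401 /nice_401message.html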
Insert this line in your httpd.conf file and create a nice message in the
nice_401message.html file to make your users happy.
This addition is almost the same configuration that I discussed in the pre-
vious example, with two changes:
■ A new directive, AuthGroupFile, points to the .htgroup group file
created earlier.
■ The require directive line requires a group called smart_readers.
This means Apache allows access to anyone who belongs to the group.
4. Make sure .htaccess, .htpasswd, and .htgroup files are readable only by
Apache, and that no one but the owner has write access to the files.
1. Modify the .htaccess file (from the preceding example) to look like this:
AuthName "Readers Only"
AuthType Basic
AuthUserFile /www/secrets/.htpasswd
AuthGroupFile /www/secrets/.htgroup
require group smart_readers
order deny,allow
deny from all
allow from classroom.nitec.com
The final allow directive effectively tells Apache that any host in the classroom.nitec.com domain is welcome in this directory.
To let hosts from classroom.nitec.com in without a password while still challenging everyone else, use the satisfy directive, as in this configuration:
AuthUserFile /www/secrets/.htpasswd
AuthGroupFile /www/secrets/.htgroup
require group smart_readers
order deny,allow
deny from all
allow from classroom.nitec.com
satisfy any
The satisfy directive takes either the all value or the any value. Because
you want the basic HTTP authentication activated only if a request comes
from any host other than the classroom.nitec.com domain, specify any
for the satisfy directive. This effectively tells Apache to do the following:
IF (REMOTE_HOST NOT IN .classroom.nitec.com DOMAIN) THEN
Basic HTTP authentication Required
ENDIF
Controlling Web Robots
Not all of your Web content is worth indexing by Web robots and spiders. For example, a large archive of bitmap images is useless to a robot that is trying to index HTML pages. Serving these files to the robot wastes resources on your server and at the robot's location.
The Robot Exclusion Protocol is voluntary, and etiquette is still evolving for robot developers as they gain experience with Web robots. The most popular search engines, however, abide by the protocol. Here is what a compliant robot or spider program does:
1. It requests a file called robots.txt from the top level of the site (for example, http://www.yourdomain.com/robots.txt).
2. If this URL exists, the robot parses its contents for directives that tell it which parts of the site it may index. As a Web server administrator, you can create directives that make sense for your site. Only one robots.txt file may exist per site; this file contains records that may look like the following:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~kabir/
You need a separate Disallow line for every URL prefix you want to exclude. For example, a single line listing several prefixes, such as the following, doesn't work:
Disallow: /cgi-bin/ /tmp/
Also, you should not have blank lines within a record, because blank lines delimit multiple records.
Regular expressions aren’t supported in the User-agent and Disallow lines. The
asterisk in the User-agent field is a special value that means any robot.
Specifically, you can’t have lines like either of these:
Disallow: /tmp/*
Disallow: *.gif
To exclude all robots from the entire server, use:
User-agent: *
Disallow: /
To give all robots complete access, use an empty Disallow value:
User-agent: *
Disallow:
You can create the same effect by deleting the robots.txt file. To exclude a sin-
gle robot called WebCrawler, add these lines:
User-agent: WebCrawler
Disallow: /
To allow a single robot called WebCrawler to access the site, use the following
configuration:
User-agent: WebCrawler
Disallow:
User-agent: *
Disallow: /
To keep all robots away from a single file, use a configuration like this:
User-agent: *
Disallow: /daily/changes_to_often.html
How you publish content also affects security. When you create a Web publishing policy for your site, consider guidelines like the following:
◆ Whenever storing a content file, such as an HTML file, image file, sound
file, or video clip, the publisher must ensure that the file is readable by the
Web server (that is, the username specified by the User directive). No one
but the publisher user should have write access to the new file.
◆ Any file or directory that can’t be displayed directly on the Web browser
because it contains information indirectly accessed by using an applica-
tion or script shouldn’t be located under a DocumentRoot-specified direc-
tory. For example, if one of your scripts needs access to a data file that
shouldn’t be directly accessed from the Web, don’t keep the data file inside
the document tree. Keep the file outside the document tree and have your
script access it from there.
◆ Any time a script needs a temporary file, the file should never be created
inside the document tree. In other words, don’t have a Web server writable
directory within your document tree. All temporary files should be created
in one subdirectory outside the document tree where only the Web server
has write access. This ensures that a bug in a script doesn’t accidentally
write over any existing file in the document tree.
◆ To fully enforce copyright, include both visible and embedded copyright
notices on the content pages. The embedded copyright message should be
kept at the beginning of a document, if possible. For example, in an HTML
file you can use a pair of comment tags to embed the copyright message
at the beginning of the file; for instance, <!-- Copyright (c) 2000 by YourCompany; All rights reserved. --> can be embedded in every page.
◆ If you have many images that you want to protect from copyright theft,
look into watermarking technology. This technique invisibly embeds
information in images to protect the copyright. The idea is that if you
detect a site that’s using your graphical contents without permission, you
can verify the theft by looking at the hidden information. If the informa-
tion matches your watermark ID, you can clearly identify the thief and
proceed with legal action. (That’s the idea, at least. I question the strength
of currently available watermarking tools; many programs can easily
remove the original copyright owner’s watermarks. Watermark technology
is worth investigating, however, if you worry about keeping control of
your graphical content.)
Creating a policy is one thing and enforcing it is another. Once you create your
own publishing policy, discuss this with the people you want to have using it. Get
their feedback on each policy item — and, if necessary, refine your policy to make it
useful.
Using Apache-SSL
I want to point out a common misunderstanding about Secure Sockets Layer (SSL).
Many people are under the impression that having an SSL-enabled Web site auto-
matically protects them from all security problems. Wrong! SSL protects data traf-
fic only between the user’s Web browser and the Web server. It ensures that data
isn’t altered during transit. It can’t enhance your Web site’s security in any other
way.
Apache doesn’t include an SSL module in the default distribution, but you can
enable SSL for Apache by using the Apache-SSL source patch. The Apache-SSL
source patch kit can be downloaded from www.apache-ssl.org. The Apache-
SSL source patch kit turns Apache into an SSL server based on either SSLeay or
OpenSSL.
In the following section, I assume that you have already learned to install
OpenSSL (if not, see Chapter 11), and that you use OpenSSL here.
For example, the Apache source path for Apache 2.0.01 is /usr/src/
redhat/SOURCES/apache_2.0.01.
1. su to root.
2. Change the directory to the Apache source distribution (/usr/src/
redhat/SOURCES/apache_x.y.zz).
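Assuming you have downloaded the Apache-SSL patch kit into the Apache source directory, a typical patch-and-build sequence looks roughly like this (the tarball name and install prefix here are assumptions; check the README in the kit you downloaded):
tar xvzf apache_x.y.zz+ssl_a.b.tar.gz
./FixPatch
./configure --prefix=/usr/local/apache
make
make certificate
make install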
This compiles and installs both standard (httpd) and SSL-enabled (httpsd)
Apache. Now you need a server certificate for Apache.
The make certificate step prompts you interactively for the certificate fields:
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
-----
Country Name (2 letter code) [GB]:US
State or Province Name (full name) [Some-State]:California
Locality Name (eg, city) []:Sacramento
Organization Name (eg, company; recommended) []:MyORG
Organizational Unit Name (eg, section) []:CS
server name (eg. ssl.domain.tld; required!!!)
[]:shea.intevo.com
Email Address []:[email protected]
The resulting test certificate is stored in:
/usr/src/redhat/SOURCES/apache_x.xx/SSLconf/conf/httpsd.pem
To make the SSL-enabled server use the main configuration file name, rename the generated httpsd.conf file:
mv /usr/local/apache/conf/httpsd.conf /usr/local/apache/conf/httpd.conf
You have two choices when it comes to using SSL with Apache. You can either
enable SSL for the main server or for virtual Web sites. Here I show you how you
can enable SSL for your main Apache server. Modify the httpd.conf file as follows:
1. By default, Web browsers send SSL requests to port 443 of your Web
server, so if you want to turn the main Apache server into an SSL-enabled
server, change the Port directive line to be
Port 443
2. Add the following lines to tell Apache how to generate random data
needed for encrypting SSL connections:
SSLRandomFile file /dev/urandom 1024
SSLRandomFilePerConnection file /dev/urandom 1024
3. If you want to reject all requests but the secure requests, insert the follow-
ing directive:
SSLRequireSSL
6. Add the following to enable the cache server port and cache timeout
values:
SSLCacheServerPort logs/gcache_port
SSLSessionCacheTimeout 15
7. Tell Apache where you are keeping the server certificate file.
■ If you created the server certificate by using the instructions in Chapter
11, your server certificate should be in /usr/local/ssl/certs.
■ If you apply the test certificate now (using the make certificate
command discussed earlier), then your test certificate is in
/path/to/apache_x.y.zz/SSLconf/conf, and it’s called httpsd.pem.
Set the following directive to the fully qualified path of your server certificate, as shown in the following code:
SSLCertificateFile \
/path/to/apache_x.y.zz/SSLconf/conf/httpsd.pem
8. Set the following directives as shown, and save the httpd.conf file.
SSLVerifyClient 3
SSLVerifyDepth 10
SSLFakeBasicAuth
SSLBanCipher NULL-MD5:NULL-SHA
Summary
Web servers are often the very first target of most hack attacks. By fortifying your
Web server using techniques to reduce CGI and SSI risks and logging everything,
you can ensure the security of your Web sites. Not allowing spiders and robots to
index sensitive areas of your Web site and restricting access by username or IP
address can be quite helpful in combating Web vandalism.
Chapter 16
ACCORDING TO A RECENT Men & Mice Domain Health Survey, three out of four
Internet domains have incorrect DNS configurations. Incorrect DNS configuration
often leads to security break-ins. This chapter examines correcting, verifying, and
securing DNS configuration using various techniques.
How can you protect your DNS server from spoofing attacks? Begin with the fol-
lowing two principles:
◆ Ensure that you are running the latest release version of DNS server software.
◆ Keep your DNS configuration correct and secure.
Running the latest stable DNS server software is as simple as getting the source or binary distribution of the software from the server vendor and installing it. Most people run the Berkeley Internet Name Domain (BIND) server. The latest version of BIND is at www.isc.org/products/BIND. Keeping your DNS configuration correct and secure is the harder challenge.
Getting Dlint
Here’s how you can install Dlint on your system.
You can download Dlint from www.domtools.com/dns/dlint.shtml. As of this
writing the latest version of Dlint is 1.4.0. You can also use an online version of
Dlint at www.domtools.com/cgi-bin/dlint/nph-dlint.cgi. The online version
has time restrictions, so I recommend it only for trying the tool.
Installing Dlint
Dlint requires DiG and Perl 5. DiG is a DNS query utility found in the BIND distrib-
ution. Most likely you have it installed. Run the dig localhost any command to
find out. If you don’t have it, you can get DiG from www.isc.org/bind.html. I
assume that you have both DiG and Perl 5 installed on your Linux system.
To install Dlint do the following:
1. su to root.
2. Extract the Dlint source package into a suitable directory.
I extracted the dlint1.4.0.tar package in the /usr/src/redhat/
SOURCES directory using the tar xvf dlint1.4.0.tar command. A new
subdirectory gets created when you extract the source distribution.
3. Run the which perl command to see where the Perl interpreter is
installed.
4. Run the head -1 digparse command to see the very first line of the
digparse Perl script used by Dlint. If the path shown after #! matches the
path shown by the which perl command, don’t change it. If the paths
don’t match, modify this file using a text editor, and replace the path after
#! with the path of your Perl interpreter.
5. Run the make install command to install Dlint, which installs the dlint
and digparse scripts in /usr/local/bin.
Now you can run Dlint.
Running Dlint
The main script in the Dlint package is called dlint. You can run this script using the following command:
/usr/local/bin/dlint domain_name
For example, to run dlint for a domain called intevo.com, you can execute /usr/local/bin/dlint intevo.com. Listing 16-1 shows an example output.
As you can see, dlint is verbose. The lines that start with a semicolon are comments. All other lines are warnings or errors. Here intevo.com has a set of problems: ns1.intevo.com has an A record, but the PTR record points to k2.intevo.com instead. Similarly, the ns2.intevo.com host has the same problem. This means the intevo.com configuration has the following lines:
ns1 IN A 172.20.15.1
ns2 IN A 172.20.15.1
k2 IN A 172.20.15.1
1 IN PTR k2.intevo.com.
The dlint program suggests using CNAME records to resolve this problem. This
means the configuration should be:
ns1 IN A 172.20.15.1
ns2 IN CNAME ns1
k2 IN CNAME ns1
1 IN PTR ns1.intevo.com.
After fixing the errors in the appropriate DNS configuration files for intevo.com, the following output is produced by the /usr/local/bin/dlint intevo.com command.
;; ============================================================
;; dlint of intevo.com run ending normally.
;; run ending: Fri Dec 29 13:38:01 EST 2000
As shown, no error messages are reported. Of course, Dlint (dlint) can't catch all
errors in your configuration, but it’s a great tool to perform a level of quality con-
trol when you create, update, or remove DNS configuration information.
Securing BIND
BIND is the most widely used DNS server for Linux. BIND was recently overhauled
for scalability and robustness. Many DNS experts consider earlier versions of BIND
(prior to 9.0) to be mostly patchwork.
Fortunately BIND 9.0 is written by a large team of professional software devel-
opers to support the next generation of DNS protocol evolution. The new BIND sup-
ports back-end databases, authorization and transactional security features,
SNMP-based management, and IPv6 capability. The code base of the new BIND is written in a manner that supports frequent audits by anyone who is interested.
The new BIND now supports the DNSSEC and TSIG standards.
For example, the following /etc/named.conf segment uses an access control list (ACL) to limit zone transfers for yourdomain.com to two name servers:
acl "dns-ip-list" {
172.20.15.100;
172.20.15.123;
};
zone "yourdomain.com" {
type master;
file "mydomain.dns";
allow-query { any; };
allow-update { none; };
allow-transfer { dns-ip-list; };
};
Unfortunately, malicious hackers can use IP spoofing tricks to trick a DNS server
into performing zone transfers. Avoid this by using Transaction Signatures. Let’s
say that you want to limit the zone transfer for a domain called yourdomain.com to
two secondary name servers with IP addresses 172.20.15.100 (ns1.yourdomain.
com) and 172.20.15.123 (ns2.yourdomain.com). Here’s how you can use TSIG to
ensure that IP spoofing tricks can’t force a zone transfer between your DNS server
and a hacker’s DNS server.
Make sure that the DNS servers involved in TSIG-based zone transfer
authentication keep the same system time. You can create a cron job entry
to synchronize each machine with a remote time server using rdate or ntp
tools.
4. Using the key string displayed by the preceding step, create the following
statement in the named.conf file of both ns1.yourdomain.com and
ns2.yourdomain.com.
key zone-xfr-key {
algorithm hmac-md5;
secret "YH8Onz5x0/twQnvYPyh1qg==";
};
Use the actual key string found in the file you generated. Don’t use the key
from this example.
Next, add server statements so that named signs zone-transfer traffic to each of these hosts with the shared key:
server 172.20.15.123 {
keys { zone-xfr-key; };
};
server 172.20.15.100 {
keys { zone-xfr-key; };
};
On ns1.yourdomain.com (172.20.15.100), the relevant named.conf segment then looks like this:
key zone-xfr-key {
algorithm hmac-md5;
secret "YH8Onz5x0/twQnvYPyh1qg==";
};
server 172.20.15.123 {
keys { zone-xfr-key; };
};
zone "yourdomain.com" {
type master;
file "mydomain.dns";
allow-query { any; };
allow-update { none; };
allow-transfer { dns-ip-list; };
};
On ns2.yourdomain.com (172.20.15.123), the corresponding segment is:
key zone-xfr-key {
algorithm hmac-md5;
secret "YH8Onz5x0/twQnvYPyh1qg==";
};
server 172.20.15.100 {
keys { zone-xfr-key; };
};
zone "yourdomain.com" {
type master;
file "mydomain.dns";
allow-query { any; };
allow-update { none; };
allow-transfer { dns-ip-list; };
};
The preceding steps ensure that zone transfers between the given hosts occur in a secure manner. To test that a shared TSIG key is used for zone-transfer authentication, you can do the following:
◆ Remove the zone file from the secondary DNS server and restart named there. The secondary DNS server should transfer the missing zone file from the primary DNS server, and you should see the zone file created in the appropriate directory. If for some reason this file isn't created, look at /var/log/messages for errors, fix the errors, and redo this verification process.
◆ If you change the shared TSIG key on either host by even one character, zone transfer isn't possible. You get an error message in /var/log/messages stating that TSIG verification failed because of a bad key.
◆ Because the named.conf file on both machines now has a secret key,
ensure that the file isn’t readable by ordinary users.
As with the version number, you don't want to give away your host information. In the spirit of making a potential attacker's job harder, I recommend that you don't use HINFO or TXT resource records in your DNS configuration files.
Limiting Queries
Anyone can perform a query with most DNS servers on the Internet. This is
absolutely unacceptable for a secure environment. A DNS spoof attack usually
relies on this fact, and an attacker can ask your DNS server to resolve a query for
which it can’t produce an authoritative answer. The spoof may ask your server to
resolve a query that requires it to get data from the hacker’s own DNS server. For
example, a hacker runs a DNS server for the id10t.com domain, and your DNS
server is authoritative for the yourdomain.com domain. Now, if you allow anyone
to query your server for anything, the hacker can ask your server to resolve
gotcha.id10t.com. Your DNS server gets data from the hacker’s machine, and the
hacker plays his spoofing tricks to poison your DNS cache.
Now, say that your network address is 192.168.1.0. The following statement makes sure that no one outside your network can query your DNS server for anything but the domains it manages.
options {
allow-query { 192.168.1.0/24; };
};
The allow-query directive makes sure that only the hosts in the 192.168.1.0 network can query the DNS server. If your DNS server is authoritative for the yourdomain.com zone, you can have the following /etc/named.conf segment:
options {
allow-query { 192.168.1.0/24; };
};
zone "yourdomain.com" {
type master;
file "yourdomain.com";
allow-query { any; };
};
zone "1.168.192.in-addr.arpa" {
type master;
file "db.192.168.1";
allow-query { any; };
};
This makes sure that anyone from anywhere can query the DNS server for yourdomain.com and its reverse zone, but only the hosts in the 192.168.1.0 network can query the DNS server for anything else.
You can also disable recursion completely, for everyone, by using the following
option in the global options section:
recursion no;
You can’t disable recursion on a name server if other name servers use it as a
forwarder.
Ideally, you should set your authoritative name server(s) to perform no recur-
sion. Only the name server(s) that are responsible for resolving DNS queries for
your internal network should perform recursion. This type of setup is known as
split DNS configuration.
For example, say that you have two name servers — ns1.yourdomain.com (pri-
mary) and ns2.yourdomain.com (secondary) — responsible for a single domain
called yourdomain.com. At the same time you have a DNS server called ns3.your-
domain.com, which is responsible for resolving DNS queries for your 192.168.1.0
network. In a split DNS configuration, you can set both ns1 and ns2 servers to use
no recursion for any domain other than yourdomain.com and allow recursion on
ns3 using the allow-recursion statement discussed earlier.
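The allow-recursion statement takes the same address-list syntax as allow-query. On ns3, a sketch of the options section (using the network address from the example) might be:
options {
allow-recursion { 192.168.1.0/24; };
};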
You can also stop the server from fetching glue records by setting the fetch-glue option in the global options section:
options {
fetch-glue no;
};
1. su to root.
2. Create a new user called dns by using the useradd dns -d /home/dns
command.
3. Run the mkdir -p /home/dns/var/log /home/dns/var/run /home/
dns/var/named /home/dns/etc command to create all the necessary
directories.
4. Copy the /etc/named.conf file, using the cp /etc/named.conf
/home/dns/etc/ command.
6. Run the chown -R dns:dns /home/dns command to make sure that all
files and directories needed by named are owned by user dns and its pri-
vate group called dns.
Now you can run the name server using the following command:
/usr/local/sbin/named -t /home/dns -u dns
If you plan to run named as root, don't specify the -u dns option.
1. Create a pair of public and private keys for the domain.com domain. From
the /var/named directory, run the /usr/local/sbin/dnssec-keygen -a
DSA -b 768 -n ZONE domain.com command.
This command creates a 768-bit DSA-based private and public key pair. It
creates a public key file called Kdomain.com.+003+29462.key and a pri-
vate key file called Kdomain.com.+003+29462.private.
The 29462 number is called a key tag, and it varies. Insert the public key
in the zone file (domain.com.db) with a line like this at the beginning of
the file:
$INCLUDE /var/named/Kdomain.com.+003+29462.key
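The command described next is BIND's dnssec-makekeyset utility. A sketch of the invocation, using the key tag from the example above (option syntax per BIND 9.0; check your release):
/usr/local/sbin/dnssec-makekeyset -t 3600 -e now+2592000 Kdomain.com.+003+29462
After the key set is signed, running the dnssec-signzone utility on the zone produces the signed zone file that the zone statement below references.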
This command creates a key set with a time-to-live value of 3,600 seconds
(1 hour) and expiring in 30 days. This command creates a file called
domain.com.keyset.
zone "domain.com" IN {
type master;
file "domain.db.signed";
allow-update { none; };
};
Summary
Every Internet request (Web, FTP, email) requires at least one DNS query. Since
BIND is the most widely used DNS server available today, it is very important that
your BIND server is configured well for enhanced security. Checking the DNS con-
figuration using Dlint, using transaction signatures for zone transfer, and using
DNSSEC ensures that your DNS server is as secure as it can be.
Chapter 17
◆ Securing IMAP
◆ Securing POP3
Traditionally, each MTA also allows anyone to relay messages to another MTA.
For example, only a few years ago you could have configured your e-mail program
to point to the mail server for the nitec.com domain and sent a message to your
friend at [email protected]. This means you could have simply
used my mail server to relay a message to your friend. What’s wrong with that?
Nothing — provided the relaying job doesn’t do the following:
The MAPS RBL authority is quite reasonable about removing a blacklisted server from its list once the server authority demonstrates that the server
is no longer an open mail relay.
◆ Spammers use tools that search the Internet for open relays automatically.
If you want a secure, relatively hassle-free network, I recommend that you take action to stop open mail relaying via your e-mail servers. To check whether a mail server is an open relay, follow these steps:
1. Log on to your Linux system (or any system that has nslookup and Telnet
client tools).
2. Run the nslookup -q=mx domain.com command where domain.com is
the domain name for which you want to find the MX records.
The MX records in a DNS database point to the mail servers of a domain. In
this example, I use a fictitious domain called openrelay-ok.com as the
example domain. Note the mail servers to which the MX records of the
domain actually point. The domain should have at least one mail server
configured for it. In this example, I assume the mail server pointed to by the
MX record for the openrelay-ok.com domain is mail.openrelay-ok.com.
3. Run the telnet mailserver-host 25 command, where mailserver-
host is a mail server hostname. I ran the telnet mail.openrelay-ok.
com 25 command to connect to port 25 (standard SMTP port) of the tested
mail server.
4. Once connected, enter the ehlo localhost command to say (sort of)
hello to the mail server. The mail server replies with a greeting message
and waits for input.
5. Enter the mail from: [email protected] command to tell the mail server
that you want to send mail from a Hotmail address called [email protected].
I recommend using any address outside your domain when replacing
[email protected]. The server acknowledges the sender address using a
response such as 250 [email protected]... Sender ok.
If the server responds with a different message, stating that the sender’s
e-mail address isn’t acceptable, make sure you are entering the command
correctly. If you still get a negative response, the server isn’t accepting the
e-mail destination, which is a sign that the server probably has special
MAIL FROM checking. This means the server probably won’t allow open
relay at all. Most likely you get the okay message; if so, continue.
At this point, you have instructed the mail server that you want to send
an e-mail from [email protected].
6. Tell the server that you want to send it to [email protected] using the rcpt
to: [email protected] command. If the server accepts it by sending a
response such as 250 [email protected]... Recipient ok, then you have
found an open mail relay. This is because the mail server accepted mail
from [email protected], and agreed to send it to [email protected].
If you aren’t performing this test on a yahoo.com mail server, then the
mail server shouldn’t accept mail from just anyone outside the domain to
send to someone else outside the domain. It’s an open mail relay; a spam-
mer can use this mail server to send mail to people outside your domain,
as shown in Listing 17-1.
If you perform the preceding test on a mail server that doesn't allow open mail relay, the output looks very different. Here's an example Telnet session on port 25 of a mail server called mail.safemta.com, showing how the test fares on a protected mail server; you get the following output:
The listing shows the mail server rejecting the recipient address given in the
rcpt to: [email protected] command.
If your mail server doesn’t reject open mail relay requests, secure it now!
Securing Sendmail
Sendmail is the most widely distributed MTA and currently the default choice for
Red Hat Linux. Fortunately, the newer versions of Sendmail don't allow open relaying by default. Because this protection is available only in the latest few versions, I recommend that you download and install the newest version of Sendmail
from either an RPM mirror site or directly from the official open source Sendmail
site at www.sendmail.org.
I strongly recommend that you download both the binary RPM version (from
an RPM mirror site such as www.rpmfind.net) and the source distribution
(from www.sendmail.org). Install the RPM version using the rpm -ivh
sendmail-version.rpm command where sendmail-version.rpm is
the latest binary RPM of Sendmail. Installing the binary RPM version ensures
that the configuration files and directories are automatically created for you.
You can decide not to install the binary RPM and simply compile and install
the source distribution from scratch. The source distribution doesn’t have a
fancy installation program, so creating and making configuration files and
directories are a lot of work. To avoid too many manual configurations, I simply install the binary distribution and then compile and install the source on top of it.
In the following section, when I mention a Sendmail feature using the FEATURE(featurename) syntax, add it to the appropriate whatever.mc file and re-create
the /etc/mail/sendmail.cf file. In my example, I add it to /usr/src/
redhat/SOURCES/sendmail-8.11.0/cf/cf/linux-dnsbl.mc, which is shown in
Listing 17-2.
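The command referred to below regenerates sendmail.cf from the .mc file. Run from the cf/cf subdirectory of the Sendmail source, it looks like this (the relative path to cf.m4 assumes that directory layout):
m4 ../m4/cf.m4 linux-dnsbl.mc > /etc/mail/sendmail.cf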
For example, if I recommend a feature called xyz using the FEATURE(xyz) nota-
tion, add the feature to the configuration file; then create the /etc/mail/
sendmail.cf file, using the preceding command. The latest version of Sendmail
gives you a high degree of control over the mail-relaying feature of your MTA.
USING FEATURE(ACCESS_DB)
This feature enables the access-control database, which is stored in the
/etc/mail/access file. The entries in this file have the following syntax:
LHS{tab}RHS
Here the LHS (left side) names what the rule matches: a hostname, a domain name, an IP address, or an e-mail address, optionally prefixed with From:, To:, or Connect: (as the examples below show). The RHS (right side) is one of the following actions:
RELAY: Enable mail relay for the host or domain named in the LHS
OK: Accept mail and ignore other rules
REJECT: Reject mail
DISCARD: Silently discard mail; don't display an error message
ERROR RFC821-CODE text message: Display an RFC 821 error code and a text message
ERROR RFC1893-CODE text message: Display an RFC 1893 error code and a text message
For example, to reject all mail from two known spam domains:
spamfactory.com REJECT
bad.scam-artist.com REJECT
REJECT MAIL FROM AN E-MAIL ADDRESS
To reject mail from an e-mail address called [email protected], use
From:[email protected] REJECT
You don’t receive mail from the preceding address, but you can still send mes-
sages to the address.
Here's another example access file:
To:busyrebooting.com RELAY
Connect:imbuddies.com RELAY
From:[email protected] OK
From:[email protected] OK
From:myfoe.net REJECT
Relaying is enabled for mail addressed to busyrebooting.com and for connections from imbuddies.com; mail from the two addresses marked OK is accepted; and all mail from myfoe.net is rejected.
USING FEATURE(RELAY_ENTIRE_DOMAIN)
The database for this feature is stored in /etc/mail/relay-domains. Each line in
this file lists one Internet domain. When this feature is set, it allows relaying for all hosts in one domain. For example, if your /etc/mail/relay-domains file looks like the following line, mail to and from kabirsfriends.com is allowed:
kabirsfriends.com
USING FEATURE(RELAY_HOSTS_ONLY)
If you don’t want to enable open mail relay to and from an entire domain, you can
use this feature to specify each host for which your server is allowed to act as a
mail relay.
To use RBL-based blocking, add dnsbl features such as these to your .mc file:
FEATURE(`dnsbl',`rbl.maps.vix.com')dnl
FEATURE(`dnsbl',`dul.maps.vix.com')dnl
FEATURE(`dnsbl',`relays.mail-abuse.org')dnl
You can verify that the rules work by using sendmail's address-test mode (sendmail -bt): set the client address macro to the RBL test address 127.0.0.2 and run the check_relay ruleset:
> .D{client_addr}127.0.0.2
> Basic_check_relay <>
Basic_check_relay input: < >
Basic_check_relay returns: $# error $@ 5.7.1 $: "550 Mail from " 127.0.0.2 " refused by blackhole site rbl.maps.vix.com"
Here you can see that the address is blacklisted. Press Ctrl+Z to put the current
process in the background, and then enter kill %1 to terminate the process.
The current version of Sendmail supports Simple Authentication and Security
Layer (SASL), which can authenticate the user accounts that connect to it. Because
a user must use authentication, spammers (who aren't likely to have user accounts on your system) can't use it as an open mail relay. (This new feature is not yet
widely used.)
Before you can use the SASL-based authentication, however, install the Cyrus
SASL library package (as shown in the next section).
When following these instructions, make sure you replace SASL version
number 1.5.24 with the version number you download.
If you change directory to /usr/lib and run the ls -l command, you see the
SASL library files installed.
Before you build and install the new Sendmail, back up your existing configuration and binaries:
cp -r /etc/mail /etc/mail.bak
cp /usr/sbin/sendmail /usr/sbin/sendmail.bak
cp /usr/sbin/makemap /usr/sbin/makemap.bak
cp /usr/bin/newaliases /usr/bin/newaliases.bak
1. Extract the Sendmail source distribution using the tar xvzf sendmail.
8.11.0.tar.gz command. This creates a subdirectory called sendmail-8.
11.0. Change to this subdirectory.
2. Run the following commands to extract and install the Sendmail configu-
ration files in the appropriate directories.
mkdir -p /etc/mail
cp etc.mail.tar.gz /etc/mail
cp site.config.m4 sendmail-8.11.0/devtools/Site/
cp sendmail.init /etc/rc.d/init.d/sendmail
Add the following lines to the devtools/Site/site.config.m4 file so that Sendmail builds with SASL support (the paths assume SASL is installed under /usr/local):
APPENDDEF(`confENVDEF', `-DSASL')
APPENDDEF(`conf_sendmail_LIBS', `-lsasl')
APPENDDEF(`confLIBDIRS', `-L/usr/local/lib/sasl')
APPENDDEF(`confINCDIRS', `-I/usr/local/include')
After you rebuild and restart Sendmail, the EHLO response includes the AUTH extension:
250-ONEX
250-XUSR
250-AUTH DIGEST-MD5 CRAM-MD5
250 HELP
As shown, the newly built Sendmail now supports the SMTP AUTH command and
offers DIGEST-MD5 CRAM-MD5 as an authentication mechanism.
The SMTP AUTH command allows relaying for senders who successfully authenticate themselves. SMTP clients such as Netscape Messenger and Microsoft Outlook can use SMTP authentication via SASL.
To make procmail the local mailer (the e-mail sanitizer described next is procmail-based), your .mc file should include:
FEATURE(local_procmail)dnl
MAILER(procmail)dnl
1. su to root.
The sanitizer's behavior is controlled by variables that you set in the procmail rc file, including the following:
◆ LOGFILE
Specifies the fully qualified path of the log. The default value allows
the sanitizer to create a log file called procmail.log in a user’s home
directory.
The default value is $HOME/procmail.log.
◆ POISONED_EXECUTABLES
Specifies a file listing attachment filenames that are always treated as poisoned. When such a message is trapped, only the header part of it goes to the notification e-mail list; for that notification to take effect, the SECURITY_NOTIFY variable must be set to at least one e-mail address.
◆ SECURITY_NOTIFY_SENDER_POSTMASTER
When set to a value such as YES, an e-mail goes to the violator’s postmas-
ter address.
◆ SECURITY_NOTIFY_RECIPIENT
When set to a filename, the intended recipient receives the contents of the
file as a notice stating that an offending e-mail has been quarantined.
◆ SECURITY_QUARANTINE
Specifies the path of the file that quarantines the poisoned attachment.
The default value is /var/spool/mail/quarantine.
◆ SECURITY_QUARANTINE_OPTIONAL
■ When set to YES, a poisoned message is still sent to the intended
recipient.
■ When set to NO, it is bounced.
◆ POISONED_SCORE
The sanitizer looks at the embedded macro and tries to match macro frag-
ments with known poisoned macro-fragment code. As it finds question-
able macro fragments, it keeps a growing score. When the score reaches
the value specified by the variable, the macro is considered dangerous
(that is, poisoned).
The default value is 25.
◆ MANGLE_EXTENSIONS
Specifies the list of filename extensions that the sanitizer mangles in attachment names.
◆ SCORE_HISTORY
If you want to keep a history of macro scores for profiling to see whether
your POISONED_SCORE is a reasonable value, set SCORE_HISTORY to the
name of a file. The score of each scanned document is saved to this file.
The default value is /var/log/macro-scanner-scores.
◆ SCORE_ONLY
When this variable is set to YES, the sanitizer doesn’t act when it detects a
macro.
◆ SECURITY_STRIP_MSTNEF
When set to YES, Microsoft TNEF attachments (the winmail.dat files generated by Outlook) are stripped from messages.
When the sanitizer runs, it creates a log file (default filename is procmail.log —
set using the LOGFILE variable) that should be periodically reviewed and removed
by the user. When attachments are poisoned, they are kept in a mailbox file called
/var/spool/mail/quarantine (set by the SECURITY_QUARANTINE variable). You
can then unmangle attachments fairly easily. For example, you can designate a
workstation where you download mangled files from the quarantine mailbox, then
disconnect the network cable before renaming and reading the emails with poten-
tially dangerous attachments.
From now on, when MAIL FROM is set to [email protected], Word (.doc)
and Excel (.xls) attachments aren’t mangled.
UNMANGLING ATTACHMENTS
To unmangle an attachment from the /var/spool/mail/quarantine mailbox file,
do the following:
3. Now, locate the original Content-Type header, which usually has a value
such as application/something (application/octet-stream in the
preceding example).
4. Place the MIME boundary marker string (shown in bold in the preceding
listing) just in front of the original Content-Type header. Remove every-
thing above the marker string so that you end up with something like Listing 17-6.
5. Save the file; then load it (using a mail-user agent like Pine) and send it
to the intended user.
The user still must rename the file because it is mangled. In the preceding
example, the original filename was cool.exe, which was renamed to
cool.16920DEFANGED-exe. The user must rename this file to cool.exe by
removing the 16920DEFANGED string from the middle. If you change this
in the Content-Type header during the previous step, the sanitizer
catches it and quarantines it again. So let the user rename it, which is
much safer because she can run her own virus checks on the attachment
once she has saved the file to her system’s hard drive.
Now you have a reasonably good tool for handling inbound attachments before
they cause any real harm. Inbound mail is just half of the equation, though. When
you send outbound e-mail from your network, make sure you and your users take
all the necessary steps, such as virus-checking attachments and disabling potential
embedded macros. Another helpful tool is the zip file. Compressing files and send-
ing them in a zip file is better than sending them individually, because the zip file
isn’t executable and it gives the receiving end a chance to save and scan for
viruses.
Outbound-only Sendmail
Often a Linux system is needed for sending outbound messages but not for receiv-
ing inbound mail. For example, machines designated as monitoring stations (which
have no users receiving e-mail) have such a need. To meet it, you have to alter the
standard Sendmail installation (which normally keeps the inbound door open).
Typically, Sendmail is run using the /etc/rc.d/init.d/sendmail script at
start-up. This script starts the sendmail daemon using a command line such as:
/usr/sbin/sendmail -bd -q10m
Here the -bd option specifies that Sendmail run in the background in daemon
mode (listening for a connection on port 25). The -q10m option specifies that the
queue be run every ten minutes. This is fine for a full-blown Sendmail installation
in which inbound and outbound e-mail are expected. In outbound-only mode, the
-bd option isn’t needed. Simply run Sendmail using xinetd. Here’s how.
1. su to root.
2. Force Sendmail to run whenever a request to port 25 is detected.
Do this by making xinetd listen for the connection and start a Sendmail
daemon when one is detected. Create an xinetd configuration file called /etc/xinetd.d/sendmail for Sendmail, as shown in the following listing:
service smtp
{
socket_type = stream
wait = no
user = root
server = /usr/sbin/sendmail
server_args = -bs
log_on_success += DURATION USERID
log_on_failure += USERID
nice = 10
disable = no
only_from = localhost
}
3. Run the queue every ten minutes so that delivery of outgoing mail is attempted six times per hour. For this, add a line like the following to the /etc/crontab file (entries in /etc/crontab include a user field):
*/10 * * * * root /usr/sbin/sendmail -q
The only_from directive in the xinetd configuration file for Sendmail is set to
localhost, which effectively tells xinetd not to start the Sendmail daemon for
any request other than the local ones. The cron entry in /etc/crontab ensures that
the mail submitted to the queue gets out.
Here xinetd helps us restrict access to the mail service from outside. But bear in
mind that running Sendmail via xinetd requires that every time a new SMTP request
is made, the xinetd daemon starts a Sendmail process. This can consume resources if you are planning to have a lot of SMTP connections open simultaneously.
1. su to root.
2. Create a user called mail using the useradd mail -s /bin/false
command.
3. Run the following commands to change permissions on files and directo-
ries used by Sendmail.
chown root:mail /var/spool/mail
chmod 1775 /var/spool/mail
chown -R :mail /var/spool/mail/*
chmod -R 660 /var/spool/mail/*
chown mail:mail /usr/sbin/sendmail
As shown, Sendmail is run as the mail user. You can enter quit to exit the Telnet
session to port 25. Now you have a Sendmail server that runs as an ordinary user.
Securing Postfix
As the new kid on the MTA block, Postfix has the luxury of built-in security fea-
tures. It’s a suite of programs instead of a single binary server like Sendmail.
To reject mail based on message headers, define a header_checks map in the main.cf configuration file:
header_checks = regexp:/etc/postfix/reject-headers
Each line in the map file has the form
regexp REJECT
where the left side (regexp) is a basic regular expression. Here's an example of the /etc/postfix/reject-headers file:
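A minimal sketch of the file, with hypothetical addresses standing in for the placeholders discussed below:
/^To: .*spam@yourdomain\.com/           REJECT
/^From: mailer-daemon@spamfactory\.com/ REJECT
/^Subject: Make money fast/             REJECT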
All e-mail received with a To: header containing the [email protected] string is rejected. Similarly, all e-mail sent from an account called mailer-[email protected] is rejected. The third regular expression states that any e-mail with the subject "Make money fast" is rejected.
You can also use Perl-compatible regular expressions (PCREs) for more advanced
matches. For a file containing PCREs instead of basic regular expressions, simply
use the following configuration in the main.cf file.
header_checks = pcre:/etc/postfix/reject-headers
1. In the main.cf configuration file, define your network addresses using the
following line:
mynetworks = 192.168.1.0/24
2. To block access to your mail server by any host other than the ones in
your network, add:
smtpd_client_restrictions = permit_mynetworks,\
reject_unknown_client
You can also use an access map file to reject or allow a single host or IP address.
Here is how:
1. To use the access map for this purpose, add the following line:
smtpd_client_restrictions = hash:/etc/postfix/access
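The access file itself is a plain-text map; here is a sketch with one hypothetical entry:
192.168.7.10    REJECT
After editing the file, build the hashed database that the hash: lookup expects:
postmap /etc/postfix/access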
To use RBL-based blocking with Postfix, add these lines to main.cf:
maps_rbl_domains = mail-abuse.org
smtpd_client_restrictions = reject_maps_rbl
The first line sets the hosts that need to be contacted to get the RBL list and the
second line sets the restrictions that need to be applied.
masquerade_domains = $mydomain
masquerade_exceptions = root
◆ The first line tells Postfix to enable address masquerading for your
domain, set by the $mydomain variable.
This means [email protected] appears as
[email protected].
◆ The second line tells Postfix to leave the root user alone.
Summary
An unprotected mail server is often an open mail relay, which allows spammers to send e-mail to people who do not want it. Such abuse can waste resources and cause legal problems for companies that leave their e-mail systems open to attack. By controlling the hosts for which your mail server relays mail, enabling the RBL rules, sanitizing incoming e-mail using procmail, and running Sendmail without root privileges, you can keep your e-mail service safe.
Chapter 18
◆ Securing WU-FTPD
Securing WU-FTPD
WU-FTPD is the default FTP server for Red Hat Linux. It’s one of the most widely
used FTP server packages. Recently, WU-FTPD (versions earlier than 2.6.1) was in the
spotlight for security issues. However, all the known security holes have been
patched in Red Hat Linux 7. Downloading the latest WU-FTPD source RPM package
from an RPM finder site, such as http://www.rpmfind.net, and installing it as
follows enhances the security of your FTP service. The latest stable version is always
the best candidate for a new FTP server installation.
2. Install the source RPM by using the rpm -ivh filename command, where filename is the name of the source RPM file (for example, wu-ftpd-2.6.1.src.rpm). You should now have a file called wu-ftpd-version.tar.gz (for example, wu-ftpd-2.6.1.tar.gz) in the /usr/src/redhat/SOURCES directory. Change your current directory to this directory.
3. Extract the tar ball by using the tar xvzf filename command.
The file extracts itself to a new subdirectory named after the filename
(with the filename in Step 2, for example, the directory would be named
wu-ftpd-2.6.1).
6. After you configure the source, install the binaries by running the make
and make install commands.
The first line in Listing 18-1 instructs PAM to load the pam_listfile module
and read the /etc/ftpusers file. If the /etc/ftpusers file contains a line match-
ing the username given at the FTP authentication, PAM uses the sense argument to
determine how to handle the user’s access request. Because this argument is set to
deny, the user is denied access if the username is found in the /etc/ftpusers file.
Dec 12 15:21:16 k2 ftpd[1744]: PAM-listfile: Refused user kabir for service ftp
1. Place a # in front of the pam_listfile line (the first line shown in Listing
18-1) in your /etc/pam.d/ftp configuration file. Doing so comments out
the configuration.
2. Add the following configuration line as the first line in the file.
auth required /lib/security/pam_listfile.so item=user \
sense=allow file=/etc/userlist.ftp onerr=fail
Don’t allow the root user or anyone in the wheel group to access your FTP
server.
You can learn about umask and file permissions in detail in Chapter 9.
For example, if you want permissions such that only the user who uploaded a particular file and his group can read it, you can modify the server_args line in the /etc/xinetd.d/wu-ftpd file. By default, this line looks like
server_args = -l -a
The following steps set a default umask for all files uploaded via FTP:
1. Change the server_args line so it reads
server_args = -l -a -u026
2. Restart the xinetd server (killall -USR1 xinetd) and FTP a file via a
user account to check whether the permissions are set as expected.
3. To disallow one client from seeing another client’s files, use the -u option
along with a special ownership setting. For example, say that you keep all
your Web client files in the /www directory, where each client site has a
subdirectory of its own (for example, /www/myclient1, /www/myclient2,
and so on), and each client has an FTP account to upload files in these
directories. To stop a client from seeing another’s files:
If you run a Web server called httpd and have a client user called
myclient1, then the command you use should look like this:
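The commands, reconstructed from the description that follows, are:
chown -R myclient1:httpd /www/myclient1
chmod -R 2750 /www/myclient1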
This command sets all the files and subdirectory permissions for /www/
myclient1 to 2750, which allows the files and directories to be readable,
writable, and executable by the owner (myclient1) and only readable and
executable by the Web server user (httpd). The set-GID value (2) in the
preceding command ensures that when new files are created, their permis-
sions allow the Web server user to read the file.
A chroot jail restricts a user's access to the home directory. To create a chroot jail, follow these
steps.
1. Install the anonymous FTP RPM package from your Red Hat Linux
CD-ROM or by downloading the anon-ftp-version.rpm package (for
example, anonftp-3.0-9.i386.rpm) from an RPM-finder Web site such
as www.rpmfind.net.
2. Install the package using the rpm -ivh anon-ftp-version.rpm com-
mand.
3. Run the cp -r /home/ftp /home/chroot command. This copies every-
thing from the home directory of the FTP user to /home/chroot. The files
in /home/chroot are needed for a minimal execution environment for
chrooted FTP sessions. After that, you can run the rpm -e anon-ftp-
version command to delete the now-unneeded anonymous FTP package.
4. Install the chrootusers.pl script that you can find on the CD that came
with this book (see the CD Appendix for info on where to find the script).
5. Run the chmod 700 /usr/local/bin/chrootusers.pl command to
make sure the script can be run only by the root user.
6. Create a new group called ftpusers in the /etc/group file by using the
groupadd ftpusers command.
8. You add users to your new chroot jail by running the chrootusers.pl
script. If you run this script without any argument, you see output that shows all the options the script can use.
In this command, lowest_ID is the lowest user ID number in the desired range,
and highest_ID is the highest user ID number in the desired range. For example, if
you want to configure each user account with UIDs between 100 and 999, low-
est_ID is 100 and highest_ID is 999. If you run this command with --start-
uid=100 and --end-uid=999, it displays the following output:
sh /tmp/chrootusers.pl.sh
If you choose, you can review the /tmp/chrootusers.pl.sh script using the
more /tmp/chrootusers.pl.sh command or a text editor. Whether you review
the file or not, finish by running the sh /tmp/chrootusers.pl.sh command to
create the chroot jails for the users within the specified UID range.
A jail is created only for a user whose UID falls between 100 and 999; only that user's account is chrooted. The process of creating a chroot jail looks like this:
This ensures that the user account has all the necessary files to maintain
chrooted FTP sessions:
Because the user sheila likely doesn’t want to be called 501 (her user
number), this entry allows commands like ls to display user and group
names instead of user ID and group ID values when listing files and direc-
tories in an FTP session.
6. The script sets permissions for the copied files and directories so that user
sheila can access them during FTP sessions.
7. Repeat the preceding steps for each user whose UID falls in the range
given in the chrootusers.pl script.
LOGGING EVERYTHING
The more network traffic WU-FTPD logs, the more you can trace back to the server.
By default, WU-FTPD logs file transfers only to and from the server (inbound and
outbound, respectively) for all defined user classes (anonymous, real, and guest).
Following are the log directives you can add to the /etc/ftpaccess file.
◆ File transfers: Log file transfers if you want to see which files are being
uploaded to or downloaded from the server by which user. Use the follow-
ing directive in the /etc/ftpaccess file:
log transfers anonymous,real,guest inbound,outbound
You can prohibit downloads of certain files or directories by using the noretrieve directive in the /etc/ftpaccess file. Here are some sample uses of the noretrieve directive:
◆ To prohibit anyone from retrieving any files from the sensitive /etc direc-
tory, you can add the following line in /etc/ftpaccess:
noretrieve /etc
◆ If you run an Apache Web server on the same system that runs your WU-
FTPD server, you can add the following line in /etc/ftpaccess to ensure
that directory-specific Apache configuration files (found in users’ Web
directories) can’t be downloaded by anyone. Here's an example:
noretrieve .htaccess .htpasswd
If you decide to deny such privileges only to anonymous users, you can do
so by using the following modified directive:
noretrieve .htaccess .htpasswd class=anonymous
◆ If you have set up a noretrieve for a certain directory but you want to
allow users to download a particular file from it, use the
allow-retrieve filename directive. For example, allow-retrieve
/etc/hosts allows users to download the /etc/hosts file even when
noretrieve /etc is in effect.
1. Disable upload for all directories within your anonymous FTP account by
using the upload directive like this:
upload /var/ftp * no
For the example discussed in the previous step, this directive is set as follows:
noretrieve /var/ftp/incoming
For example, to deny all the hosts in 192.168.1.0/24 (a Class C network), you can add the following line in the /etc/ftpaccess file:
deny 192.168.1.0/24
Each line in this file can list one IP address or network in the CIDR format (for
example, 192.168.1.0/24). Using an external file can keep your /etc/ftpaccess
configuration file simpler to maintain, because it won’t get cluttered by lots of IP
addresses.
In most cases, users don't need an explanation for access denial, but if your policy requires such civilized behavior, you can display the contents of a message file when denying access. For example, the following directive denies access while displaying the contents of the /etc/ftphosts.bad.msg file:
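Assuming the denied addresses are kept in a file such as /etc/ftphosts.bad (this address-file name is illustrative), the directive looks like this:
deny /etc/ftphosts.bad /etc/ftphosts.bad.msg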
Using ProFTPD
Earlier, this chapter demonstrated how to configure the WU-FTPD server to enhance
FTP security. But WU-FTPD isn’t the only FTP server you can get for Red Hat Linux;
another FTP server called ProFTPD is now common — and as with the “cola wars,”
each server has its following (in this case, made up of administrators). Here are
some of the benefits of ProFTPD:
ProFTPD can also use an Apache-like optional feature: per-directory configuration files, which let you apply settings to a single directory.
Configuring ProFTPD
The ProFTPD configuration file is /etc/proftpd.conf. The default copy of this file
(without any comment lines) is shown in Listing 18-2.
If you are familiar with Apache Web server configuration, you can see a couple
of close similarities here:
Before you configure ProFTPD, however, know the meaning of each directive for
the configuration file. For a complete list of ProFTPD directives and their usage, read
the ProFTPD user’s guide online at the ProFTPD Web site, http://www.proftpd.net.
Here I discuss only the directives shown in the default proftpd.conf file and the
ones that have security and access control implications. They are as follows:
◆ ServerName: Gives your FTP server a name. Replace the default value,
ProFTPD Default Installation, with something that doesn’t state what
type of FTP server it is. For example, you could use something like
ServerName “FTP Server”
Leave DefaultServer alone at least until you are completely familiar with all
the directives and creating virtual FTP servers using ProFTPD.
◆ Port: Specifies the port that ProFTPD listens to for incoming connections.
The default value of 21 should be left alone, or you must instruct your
FTP users to change their FTP client software to connect to a different
port on your FTP server.
◆ Umask: Specifies the default file and directory creation umask setting. The
default value of 022 creates directories with 755 (and files with 644) permissions,
which is too relaxed. A value of 027 is recommended; it ensures that
only the owner and the group can access the file or directory.
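For example, the relevant line in /etc/proftpd.conf is simply:
Umask 027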
◆ MaxInstances: Specifies how many ProFTPD instances can run simultaneously
before the server refuses new connections. This directive is effective only for the
standalone mode of operation.
◆ User: Although the initial ProFTPD daemon must be run as root, for secu-
rity ProFTPD switches to an ordinary user and group when the FTP service
interacts with a client, thus reducing the risk of allowing a client process
to interact with a privileged process. This directive simply tells ProFTPD
which user to switch to.
◆ Group: The Group directive specifies the group under which the ProFTPD
server runs. Just as with the User directive, the initial ProFTPD daemon
must be run as root before it switches to this group.
Although the default value of the User directive (nobody) is suitable for
most Linux installations, the Group value (nogroup) is not. There’s no
nogroup in the default /etc/group file with Red Hat Linux. To solve this
problem you can set the Group value to nobody (because there’s a group
called nobody in /etc/group that by default doesn’t have any members).
If you want to keep the default value for the Group directive for some rea-
son, you can run the groupadd nogroup command to add a new group
called nogroup.
◆ AllowOverwrite: When set to on, this directive allows FTP clients to overwrite
any file they want. This is a bad idea; set it to off. Whenever you are
trying to create a secure environment, the first step is to shut all doors and
windows and open only those that are well guarded or necessary.
2. Install the proftpd script from the CD that comes with this book (see the
CD Appendix for info on where to find the script) in your
/etc/rc.d/init.d directory.
3. Add a new group called nogroup using the groupadd nogroup command.
4. Execute the following command:
ln -s /etc/rc.d/init.d/proftpd /etc/rc.d/rc3.d/S95proftpd
At any time (as root) you can run the /etc/rc.d/init.d/proftpd start com-
mand to start the ProFTPD service. If you make any configuration change in
/etc/proftpd.conf, reload the configuration using the /etc/rc.d/init.d/
proftpd reload command. Similarly, the /etc/rc.d/init.d/proftpd stop
command can stop the service.
To perform a scheduled shutdown of the FTP service, you can also use the
/usr/local/sbin/ftpshut HHMM command. For example, the /usr/
local/sbin/ftpshut 0930 command stops the FTP service at 9:30 a.m.
To refuse new connections for a period before the scheduled shutdown,
use the -l minutes option. By default, ProFTPD doesn't allow any
new connections during the ten minutes before a scheduled shutdown.
Remember to remove the /etc/shutmsg file when you want to enable the
FTP service again.
Monitoring ProFTPD
ProFTPD includes a few utilities that can help you monitor the service. For example, you can run ProFTPD's bundled utilities to count concurrent FTP connections or determine which users are connected to your server. In this section, I detail these utilities.
When you introduce a new FTP service, monitor it very closely for at least a few
days. Typically, you should run commands like the preceding using the watch facil-
ity. For example, the watch -n 60 /usr/local/bin/ftpcount command can run
on a terminal or xterm to display a count of FTP users every 60 seconds.
Here two users (sheila and kabir) are connected from two different machines on a LAN, as the output shows.
Securing ProFTPD
ProFTPD has a number of directives that you can use in the /etc/proftpd.conf
file to create a secure and restrictive FTP service. In this section, I show you how
you can use such directives to restrict FTP connections by remote IP addresses,
enable PAM-based user authentication (see Chapter 10 for details on PAM itself), or
create a chroot jail for FTP users.
To restrict which remote addresses can log in, use a <Limit LOGIN> block such as the following, where goodIPaddr1 and goodIPaddr2 stand for addresses you trust:
<Limit LOGIN>
Order Allow, Deny
Allow from goodIPaddr1, goodIPaddr2
Deny from all
</Limit>
For example, to accept FTP logins from a single trusted address only:
<Limit LOGIN>
Order Allow, Deny
Allow from 216.112.169.138
Deny from all
</Limit>
#%PAM-1.0
auth required /lib/security/pam_listfile.so item=user \
sense=deny file=/etc/ftpusers onerr=succeed
auth required /lib/security/pam_stack.so service=system-auth
auth required /lib/security/pam_shells.so
account required /lib/security/pam_stack.so service=system-auth
session required /lib/security/pam_stack.so service=system-auth
AuthPAMAuthoritative on
AuthPAMConfig ftp
This makes sure ProFTPD obeys PAM as the ultimate user-authentication and
authorization middleware; it tells ProFTPD to use the /etc/pam.d/ftp configura-
tion for user authentication.
If you want different virtual FTP sites to authenticate differently, you can
use the preceding directives in a <VirtualHost> configuration.
DefaultRoot “~”
This directive tells ProFTPD to chroot a user to his home directory (because ~
expands to the home directory of the user). To limit the jail to be a subdirectory of
the home directory, you can use DefaultRoot “~/directory” instead. In contrast
to WU-FTPD (discussed earlier in the chapter), you don’t have to copy a lot of files to
support the chroot jail for each user. You also don’t need the chrootusers.pl
script with ProFTPD.
234754-2 Ch18.F 11/5/01 9:05 AM Page 465
You can limit the jail to a group of users, as shown in this example:
DefaultRoot “~” untrusted
Here only the users in the untrusted group in the /etc/group file are
jailed. Similarly, if you want to jail all the users in a group called everyone
but want to spare the users in a smaller group called buddies, you can use
the same directive like this:
DefaultRoot “~” everyone, !buddies
The ! (bang) sign in front of the second group tells ProFTPD to exclude the
users in the group from being chroot-jailed.
To prevent all users from creating or deleting directories anywhere on the server, deny the MKD (make directory) and RMD (remove directory) commands:
<Directory /*>
<Limit MKD RMD>
DenyAll
</Limit>
</Directory>
To grant those rights only to the staff group, add an AllowGroup line:
<Directory /*>
<Limit MKD RMD>
DenyAll
AllowGroup staff
</Limit>
</Directory>
Similarly, you can use the AllowUser directive to specify which users may create and
delete directories. If you want to deny these rights to only a few users, create a
group (say, badusers) in /etc/group and use the following configuration to allow
creation and deletion by everyone but the users in that group:
<Directory /*>
<Limit MKD RMD>
Order deny,allow
DenyGroup badusers
AllowAll
</Limit>
</Directory>
To prevent anyone from downloading files, deny the RETR (retrieve) command:
<Directory /*>
<Limit RETR>
DenyAll
</Limit>
</Directory>
To keep users from changing directories, deny the CWD (change working directory) command:
<Directory /*>
<Limit CWD>
DenyAll
</Limit>
</Directory>
With this configuration in place, users who FTP to the server can't change
directories. You can use DenyGroup groupname instead of DenyAll to limit the
scope of the configuration to one user group (groupname) defined in the
/etc/group file.
SIMPLIFYING FILE TRANSFERS FOR NOVICE USERS Many users get lost when
they see too many directories or options available to them. Most users won't mind,
or even notice, if you simplify their computer interactions. So if you have a group of users
who must retrieve or store files on the FTP server on a regular basis, you may want
to consider locking their access into one directory. For example, suppose you have a group
defined in /etc/group called novices, and you want to allow the users in this
group to retrieve files from a directory called /files/download and upload files
into a directory called /files/upload. The configuration you add to the /etc/
proftpd.conf file to do the job looks like this:
<Directory /files/download>
<Limit READ>
AllowGroup novices
</Limit>
<Limit WRITE>
DenyGroup novices
</Limit>
</Directory>
<Directory /files/upload>
<Limit READ>
DenyGroup novices
</Limit>
<Limit WRITE>
AllowGroup novices
</Limit>
</Directory>
More generally, you can hide directory listings from a group by denying the DIRS command group (which covers the directory-listing commands) for a given path:
<Directory path>
<Limit DIRS>
DenyGroup group_name
</Limit>
</Directory>
For example, to keep users in a group called newfriends from listing the contents of /my/mp3s:
<Directory /my/mp3s>
<Limit DIRS>
DenyGroup newfriends
</Limit>
</Directory>
DefaultRoot “/www”
<Directory /www>
HideNoAccess on
<Limit ALL>
IgnoreHidden on
</Limit>
</Directory>
This configuration allows a user to see only what you permit. When a user con-
nects to a system with the preceding configuration, he sees only the files and direc-
tories in /www for which he has access privileges. This is true for every user who
connects to this system. The HideNoAccess directive hides all the entries in the
/www directory that a user cannot access. The IgnoreHidden directive tells ProFTPD
to ignore any user command that refers to one of these hidden entries.
<Anonymous ~ftp>
User ftp
Group ftp
RequireValidShell off
UserAlias anonymous ftp
MaxClients 20
<Directory *>
<Limit WRITE>
DenyAll
</Limit>
</Directory>
<Directory incoming>
<Limit WRITE>
AllowAll
</Limit>
<Limit READ>
DenyAll
</Limit>
</Directory>
</Anonymous>
◆ User and Group: Both these directives ensure that all anonymous sessions
are owned by User ftp and Group ftp.
◆ RequireValidShell: Because the built-in user ftp doesn't have a valid
shell listed in /etc/passwd, this directive tells ProFTPD to still allow
anonymous sessions for the ftp user.
◆ UserAlias: This directive assigns anonymous to the ftp account so that
anyone can log in to the anonymous FTP site using either ftp or
anonymous as the username.
When a ProFTPD server with Linux Capabilities receives a client request, the
ProFTPD server launches a child server process to service the request. After
user authentication succeeds, the child server process drops all the Linux
Capabilities except cap_net_bind_service, which allows it to bind to a standard
port defined in /etc/services. At this point, the child process can't return to root
privileges. This makes ProFTPD very safe: even if a hacker tricks it into running
an external program, the new process that ProFTPD creates can't perform any
privileged operations, because it doesn't inherit any Linux Capabilities, not even
cap_net_bind_service.
To use Linux Capabilities, you must compile ProFTPD with a (still-unofficial)
module called mod_linuxprivs. Here’s how.
Make sure you install the source for the kernel you are currently running.
For example, to install the source for the 2.4.1 kernel, you can run the
rpm -ivh kernel-source-2.4.1.i386.rpm command. The kernel source
should be installed in /usr/src/linux-2.4.1 (your version number is
likely to be different), and a symbolic link for /usr/src/linux should point to
the /usr/src/linux-2.4.1 directory.
3. Open the directory to which you extracted the ProFTPD source distribution
earlier (when compiling it); then run the following command:
./configure --sysconfdir=/etc \
--with-modules=mod_linuxprivs
Compiled-in modules:
mod_core.c
mod_auth.c
mod_xfer.c
mod_site.c
mod_ls.c
mod_unixpw.c
mod_log.c
mod_pam.c
mod_linuxprivs.c
Summary
FTP servers are a frequent target of hackers, who try to trick the server into serving
password files or other secrets they should not have access to. By restricting
FTP access by username, setting default file permissions, and running a chrooted FTP
service, you can protect against FTP server attacks.
Chapter 19
A LINUX SYSTEM IN a network often acts as a file server. To turn your Linux box
into a file server you have two choices: Samba and NFS. In this chapter, I discuss
how you can secure both of these services.
◆ Share
◆ User
◆ Server
◆ Domain
The security level is set using the security parameter in the global section of the /etc/samba/smb.conf file.
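For example, to select user-level security, the global section of /etc/samba/smb.conf would contain:

[global]
    security = user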
USER-LEVEL SECURITY
When the security parameter is set to user, the Samba server operates using user-
level security. In the latest Samba version, this is the default security level.
When a client connects to the server, the Samba server notifies it of the security
level in use. In user-level security, the client sends a username and password pair to
authenticate. The Samba server can authenticate the user only based on the given
username and password pair or the client’s host information. The server has no way
to know which resource the client requests after authentication; therefore, it can’t
base acceptance or rejection of the request on any resource restriction.
Because Samba is a PAM-aware application, it uses PAM to authenticate the
user, in particular the /etc/pam.d/samba configuration file. By default, this con-
figuration file uses the /etc/pam.d/system-auth configuration with the
pam_stack module. This means Samba authentication becomes synonymous with
regular user-login authentication (which involves /etc/shadow or /etc/passwd
files) under Linux. If you enable encrypted passwords in /etc/samba/smb.conf,
the /etc/samba/smbpasswd file is used instead (see the “Avoiding plain-text pass-
words” section later in this chapter for details).
Once access is granted, the client can connect to any share without resupplying
a password or a username/password pair. If a client successfully logs on to the
Samba server as a user called joe, the server grants the client all the access privileges
associated with joe; the client can access files owned by joe.
A client can maintain multiple user-authenticated sessions that use different
username/password pairs. Thus, a client system can access a particular share by
using the appropriate username/password pair — and access a whole different share
by using a different username/password pair.
SHARE-LEVEL SECURITY
Share-level security is active when /etc/samba/smb.conf uses security =
share. In share-level security, the client is expected to supply a password for each
share it wants to access. However, unlike Windows 2000/NT, Samba doesn’t use a
share/password pair. In fact, when Samba receives a password for a given share
access request, it simply tries to match the password with a previously given user-
name and tries to authenticate the username/password pair against the standard
Unix authentication scheme (using /etc/shadow or /etc/passwd files) or the
encrypted Samba passwords (using the /etc/samba/smbpasswd file).
If no matching username is found, and the requested share is accessible as a guest
account (that is, one in which the guest ok parameter is set to yes), the connection
is made via the username found in the guest account parameter in the
/etc/samba/smb.conf file.
One consequence of this security mode is that you don’t have to make a Linux
account for every Windows user you want to connect to your Samba server. For
example, you can set the guest account = myguest parameter in the
/etc/samba/smb.conf file and create a Linux user called myguest with its own
password — and provide that password to Windows users who want to connect to
shares where you’ve set the guest ok parameter to yes.
SERVER-LEVEL SECURITY
Server-level security is active when the security = server parameter is used in
the /etc/samba/smb.conf file. In this mode, the Samba server informs the client
that it is operating under user-level security, which forces the client to supply a
username/password pair. The given username/password pair is then used to authen-
ticate the client via an external password server whose NetBIOS name is set by
using the password server parameter. The password server must be a Samba or
Windows 2000/NT server, running under the user-level security mode.
If you have enabled encrypted passwords on the Samba server, also enable
encrypted passwords on the password server. Typically, server-level security dele-
gates authentication service to a Windows 2000/NT server.
DOMAIN-LEVEL SECURITY
Domain-level security is active when security = domain is used in the
/etc/samba/smb.conf file. It is identical to server-level security with the follow-
ing exceptions:
Whenever a client connects to the Samba server (SMBSRV), it uses the SKINET
domain's primary domain controller as the password server to authenticate the
client request.
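A minimal sketch of that configuration, using Samba's standard parameter names:

encrypt passwords = yes
smb passwd file = /etc/samba/smbpasswd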
The preceding configuration tells the Samba server to use encrypted passwords and
to authenticate users by using the /etc/samba/smbpasswd file when the security
parameter is set to user.
When the preceding configuration is in use, the Samba server sends an eight-byte
random value to the client during the session-setup phase of the connection. This random
value is stored in the Samba server as the challenge. The client then encrypts the
challenge, using its user password, and sends the response. The server encrypts the
challenge (using the hashed password stored in /etc/samba/smbpasswd) and checks
whether the client-supplied response is the same as the one it calculated. If the
responses match, the server is satisfied that the client knows the appropriate user
password, and the authentication phase is complete. In this method of authentication, the
actual password isn't transmitted over the network, which makes the process very
secure. Also, the Samba server never stores the actual password in the /etc/samba/
smbpasswd file; only a hashed version of the password is stored.
I highly recommend that you enable the encrypted password mode for authentication.
Once you enable encrypted passwords, create an /etc/samba/smbpasswd
file by converting the /etc/passwd file like this:
1. su to root.
2. Generate a base /etc/samba/smbpasswd file, using the following command.
cat /etc/passwd | /usr/bin/mksmbpasswd.sh > \
/etc/samba/smbpasswd
3. Use the smbpasswd command to create passwords for the users. For exam-
ple, the smbpasswd sheila command creates a password for an existing
user called sheila in /etc/samba/smbpasswd file.
All users on your system can change their encrypted Samba passwords by
using this command (without the argument).
Now your Samba server requires encrypted passwords for client access — which
may be less convenient to users but is also vastly more secure.
When this parameter is set, users from trusted domains can access the Samba
server — although they still need Linux user accounts on the Samba server. Add the
following parameters in the global section of the /etc/samba/smb.conf file:
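A minimal sketch, using Samba's standard parameter names (security = domain selects the domain-level mode described earlier):

security = domain
allow trusted domains = yes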
With these parameters in place, when a user from a trusted domain attempts
access on the Samba server (on a different domain) for the very first time, the
Samba server performs domain-level security measures. It asks the primary domain
controller (PDC) of its own domain to authenticate the user. Because you have
already set up a trust relationship with the user’s domain, the PDC for the Samba
server can authenticate the user. The Samba server then creates a Linux user
account and allows the user to access the requested resource.
When the allow trusted domains parameter is set to no, trust relationships
are ignored.
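Judging from the explanation that follows, the interface-binding configuration looks like this:

interfaces = 192.168.1.10/24 127.0.0.1
bind interfaces only = yes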
The first parameter, interfaces, defines the list of network interfaces that
Samba listens to. The 192.168.1.10/24 network is tied to the eth0 (192.168.1.10)
interface and, hence, it is added in the interface list. The 127.0.0.1 interface is
added because it is the local loopback interface and is used by smbpasswd to con-
nect to the local Samba server. The next parameter, bind interfaces only, is
set to yes to tell the Samba server to listen to only the interfaces listed in the
interfaces parameter.
You can also use interface device names such as eth0 and eth1 with the
interfaces parameter.
If you use IP networks as the value for the hosts allow parameter, you can
exclude certain IP addresses too. For example, the following line gives all hosts in
the 192.168.1.0/24 network access to the Samba server except 192.168.1.100:
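In Samba's syntax, using the EXCEPT keyword:

hosts allow = 192.168.1. EXCEPT 192.168.1.100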
You can use hostnames instead of IP addresses. For example, the following
allows access to three computers in the network:
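For instance (these hostnames are placeholders):

hosts allow = workstation1, workstation2, workstation3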
GETTING PAM_SMB
You can download the pam_smb source from the FTP site located at ftp://ftp.
samba.org/pub/samba/pam_smb.
1. su to root.
2. Extract the pam_smb distribution file in a directory by using the tar xvzf
pam_smb.tar.gz command. Change your current directory to the newly
created directory called pam_smb.
3. Run the ./configure command to prepare the source distribution. To
place the pamsmbd daemon somewhere other than the default location
(/usr/local/sbin), run ./configure --sbindir=/path/to/pamsmbd,
where /path/to/pamsmbd is the path where you want to keep the daemon
binary. To disable encrypted passwords, you can run the configure script
with the --disable-encrypt-pass option.
4. Run the make command. Copy the pam_smb_auth.so file to /lib/
security by using the cp pam_smb_auth.so /lib/security command.
#%PAM-1.0
# This file is auto-generated.
# User changes are destroyed the next time authconfig is run.
auth sufficient /lib/security/pam_smb_auth.so
#auth sufficient /lib/security/pam_unix.so likeauth nullok md5 shadow
auth required /lib/security/pam_deny.so
account sufficient /lib/security/pam_unix.so
account required /lib/security/pam_deny.so
password required /lib/security/pam_cracklib.so retry=3
password sufficient /lib/security/pam_unix.so nullok use_authtok md5 shadow
Now your Linux system performs user authentication by using the Windows NT
domain controllers specified in /etc/pam_smb.conf. In the sample configuration
file shown in Listing 19-1, the authentication process uses the PDCSRV machine in
the SKINET domain. If PDCSRV is down, BDCSRV is used.
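A pam_smb.conf of this kind contains just three lines: the NT domain name, then the primary and backup controllers, one per line. A sketch matching the names used here:

SKINET
PDCSRV
BDCSRV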
Are you mapping users between your Linux system and the Windows NT
system performing the authentication? You can use the /etc/ntmap.db
database, created using the makemap command. See the ntmap.example
file found in the source distribution for details. You can use the ntmap.sh
script to convert ntmap.example to /etc/ntmap.db.
1. su to root.
2. Extract the Samba source by using the tar xvzf samba-source.tar.gz
command, where samba-source.tar.gz is the name of your source dis-
tribution, and change the directory to the newly created subdirectory.
3. Modify the Makefile by uncommenting the SSL_ROOT line.
4. Run make and make install to compile and install Samba binaries with
SSL support.
5. Configure Samba to use SSL, as shown in the following section.
1. To enable SSL support in Samba, add the ssl = yes parameter in the
global section of your /etc/samba/smb.conf file.
2. By default, when ssl = yes is set, Samba communicates only via SSL.
■ If you want SSL connections limited to a certain host or network, the ssl
hosts = hostname, IP-address, IP/mask parameter limits SSL
connections to the named hosts or IP addresses. For example, using ssl
hosts = 192.168.1.0/24 in the global section of your
/etc/samba/smb.conf file limits SSL connections to hosts on the
192.168.1.0/24 network.
Once you configure the /etc/samba/smb.conf file for SSL, you can run Samba
with SSL support.
1. Using an encrypted (with a pass phrase) private key means that you
can't start Samba automatically from the /etc/rc.d/rc3.d/Sxxsmb
(where xx is a number) symbolic link; started that way, the daemon
simply waits for the pass phrase and never comes up.
2. If you must use a pass phrase for the private key, remove the symbolic
link (/etc/rc.d/rc3.d/Sxxsmb) and always start the Samba server
manually.
3. If you created an unencrypted private key, you can start the Samba server
as usual. To manually start the server, run the /etc/rc.d/init.d/smb
start command.
Samba then starts as usual. If the private key is encrypted, the daemon asks for
the pass phrase before it goes into the background. If you start smbd
from inetd, this won't work; therefore, you must not encrypt your private key if
you run smbd from inetd.
/apps devpc.nitec.com(ro)
Here the devpc.nitec.com client system has read-only access to the /apps
directory. I recommend using the ro option to export directories that store applica-
tion binaries.
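For example, an export line using these options might look like this (the path and host pattern are illustrative):

/proj *.nitec.com(rw,root_squash,anonuid=500,anongid=666)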
Here, anonuid and anongid are specified to allow root squashing to UID 500
and GID 666.
If you prefer to squash all the UID/GID pairs to an anonymous UID/GID pair, you
can use the all_squash option as shown in this example:
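A matching export line would be:

/proj *.nitec.com(rw,all_squash,anonuid=500,anongid=666)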
Here the /proj directory is exported to all hosts in the nitec.com domain, but
all accesses are made as UID 500 and GID 666.
If you want a list of UIDs and GIDs that should be squashed using the anony-
mous UID/GID pair, you can use the squash_uids and squash_gids options as
shown in this example:
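A matching export line would be:

/proj *.nitec.com(rw,squash_uids=0-100,squash_gids=0-100,anonuid=500,anongid=666)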
Here all the UIDs and GIDs in the range 0–100 are squashed, using the anony-
mous UID 500 and GID 666.
An external map file can map NFS client–supplied UIDs and GIDs to any UID or
GID you want. You can specify the map by using the map_static option as shown
in this example:
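A matching export line would be:

/proj *.nitec.com(rw,map_static=/etc/nfs.map)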
Here the /proj directory is exported to all the nitec.com hosts, but all NFS
client–supplied UIDs and GIDs are mapped using the /etc/nfs.map file. An exam-
ple of this map file is below:
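A sketch of such a map file (the ranges shown are illustrative):

# remote     local
uid 0-99     -      # squash these remote UIDs
uid 100-500  1000   # map remote 100-500 to local 1000-1400
gid 0-49     -      # squash these remote GIDs
gid 50-100   700    # map remote 50-100 to local 700-750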
Now you know all the frequently used options for creating the /etc/exports file.
Whenever the /etc/exports file is changed, however, the system has to let the
NFS daemons know about this change. A script called exportfs can inform these
daemons of the change, like this:
/usr/sbin/exportfs
Now, to make sure both rpc.mountd and rpc.nfsd are running properly, run a
program called rpcinfo like this:
rpcinfo -p
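The output should contain lines similar to these (version and port numbers vary):

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp    635  mountd
    100003    2   udp   2049  nfs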
This shows that mountd and nfsd have announced their services and are work-
ing fine. At this point, the NFS server is fully set up.
SECURING PORTMAP
The portmap service, in combination with rpc.nfsd, can be fooled into making files
on NFS servers accessible without any privileges. Fortunately, the portmap Linux
uses is relatively secure against attack; you can secure it further by adding the
following line in the /etc/hosts.deny file:
portmap: ALL
The system denies portmap access for everyone. Now the /etc/hosts.allow
file must be modified like this:
portmap: 192.168.1.0/255.255.255.0
This allows all hosts from the 192.168.1.0 network access to portmap-administered
programs such as nfsd and mountd.
Now, if a user with UID 0 (the root user) on the client attempts access (read,
write, or delete) on the filesystem, the server substitutes the UID of the server’s
nobody account. This means the root user on the client can’t access or change files
that only the root on the server can access or change. To grant a client root access to an
NFS-exported filesystem, use the no_root_squash option instead.
Summary
Samba and NFS are very important protocols for modern organizations. Many
organizations use these services to share disks and printers among many users.
Securing Samba and NFS services is, therefore, an important step in enhancing
your overall network security. By centralizing user authentication on a Samba or
Windows 2000/NT based server, avoiding plain-text passwords, and restricting
access by IP address, you can ensure that your Samba service is as secure as it
can be. Similarly, by enabling read-only access to NFS-mounted directories, securing
the portmap service, squashing root access, and using non-set-UID and non-executable
file settings, you can secure your NFS server.
Part V
Firewalls
CHAPTER 20
Firewalls, VPNs, and SSL Tunnels
CHAPTER 21
Firewall Security Tools
Chapter 20
◆ Using FreeS/WAN
A FIREWALL IS A device that implements your security policy by shielding your net-
work from external threats. This device can take various forms — from dedicated,
commercial hardware that you buy from a vendor to a Linux system with special
software. This chapter covers turning your Linux system into various types of fire-
walls to implement your security policy and protect your network.
Because network security is big business, many classifications of firewalls exist;
thanks to commercial computer security vendors, many firewall systems are really
hybrids that perform diverse functions. Here I focus on two types of firewall — the
packet filter and the proxy firewall — and examine how you can use virtual private
networks (VPNs) to help implement secure access to your system.
Packet-Filtering Firewalls
A packet filter (sometimes called a filtering gateway or a screening router) is a fire-
wall that analyzes all IP packets flowing through it and determines the fate of each
packet according to rules that you create. A packet filter operates at the network
and transport layer of the TCP/IP protocol stack. It examines every packet that
enters the protocol stack. The network and the transport headers of a packet are
examined for information on
◆ Connection status
You can set a packet filter to use certain features of a packet as criteria for allow-
ing or denying access:
◆ Protocols. A typical filter can identify the TCP, UDP, and ICMP protocols,
so you can allow or deny packets that operate under these protocols.
◆ Source/destination. You can allow or deny packets that come from a par-
ticular source and/or are inbound to a particular destination (whether an
IP address or port).
◆ Status information. You can specify field settings that either qualify or
disqualify a packet for admission to your system. For example, if a TCP
packet has the ACK field set to 0, you can deny the packet under a policy
that does not allow incoming connection requests to your network.
You can learn more about netfilter at http://netfilter.filewatcher.org.
The netfilter subsystem of the Linux kernel 2.4 allows you to set up, maintain,
and inspect the packet-filtering rules in the kernel itself. It is a brand-new
packet-filtering solution, more advanced than what was available to the Linux kernel
before 2.4; netfilter provides a number of improvements, and it has become a
mature and robust solution for protecting corporate networks.
However, don’t think of netfilter as merely a new packet-filter implementa-
tion. It is a complete framework for manipulating packets as they traverse the parts
of the kernel. netfilter includes
◆ Support for stateful inspection of packets (in which the kernel keeps track
of the packets’ state and context).
◆ Support for the development of custom modules to perform specific
functions.
Netfilter contains data structures called tables within the kernel to provide packet-
filtering. Each table contains lists of rules called chains. There are three tables:
◆ filter
◆ nat
◆ mangle
Each rule contains a set of criteria and a target. When the criteria are met, the
target is applied to the packet. For example, the filter table (which is the default
table) contains the INPUT, FORWARD, and OUTPUT chains. These function as follows:
◆ The INPUT chain within the filter table holds the rules for the packets
that are meant for the system that is examining the packet.
◆ The FORWARD chain contains the rules for the packets that are passing
through the system that is examining the packets.
◆ The OUTPUT chain holds the rules for packets that are created by the sys-
tem itself (which is also examining the packets).
The following listing shows a way to visualize the relationship among tables,
chains, and rules. For example, the filter table has INPUT, FORWARD, and OUTPUT
chains where each of these chains can have 1 to N number of rules.
filter table
|
+---INPUT
| |
| +---input rule 1
| +---input rule 2
| +---input rule 3
| | ...
| +---input rule N
|
+---FORWARD
| |
| +---forward rule 1
| +---forward rule 2
| +---forward rule 3
| | ...
| +---forward rule N
|
+---OUTPUT
|
+---output rule 1
+---output rule 2
+---output rule 3
| ...
+---output rule N
nat table
|
+---PREROUTING
| |
| +---pre routing rule 1
| +---pre routing rule 2
| +---pre routing rule 3
| | ...
| +---pre routing rule N
|
264754-2 Ch20.F 11/5/01 9:05 AM Page 494
+---OUTPUT
| |
| +---output rule 1
| +---output rule 2
| +---output rule 3
| | ...
| +---output rule N
|
+---POSTROUTING
|
+---post routing rule 1
+---post routing rule 2
+---post routing rule 3
| ...
+---post routing rule N
mangle table
|
+---PREROUTING
| |
| +---pre routing rule 1
| +---pre routing rule 2
| +---pre routing rule 3
| | ...
| +---pre routing rule N
|
+---OUTPUT
|
+---output rule 1
+---output rule 2
+---output rule 3
| ...
+---output rule N
The target of a rule can vary according to the table — or even the chain — it occu-
pies. Table 20-1 is a list of targets available in each table.
TABLE 20-1: TARGETS FOR RULES IN FILTER, MANGLE, AND NAT TABLES

Target       Filter  Mangle  Nat  Description
REJECT Yes Yes Yes When this target is used for a matched packet,
an error response is sent back to the system
that created the packet.
DROP Yes Yes Yes The matched packet is simply dropped when
this is the target of a rule. Unlike REJECT, no
error response is sent to the system generating
the packet.
ACCEPT Yes Yes Yes The matched packet is accepted when this
target is in use.
TOS No Yes No This target allows you to set the Type Of Service
(TOS) byte (8-bits) field in the IP header of the
packet.
MIRROR Yes Yes Yes When this target is in use, the matched packet’s
source and destination addresses are reversed
and the packet is retransmitted.
MARK No Yes No This target is used to set the mark field value of
a packet. The marking of a packet can be used
by routers to change the routing behavior for
such packets.
MASQUERADE No No Yes This target is used to alter the source address of
the matched packet. The source address is
replaced with the IP address of the packet
filter’s interface, which the packet will go out
from. This target is used for packet-filtering
systems that use dynamic IP address to connect
to another network (such as the Internet) and
also acts as a gateway between the networks.
If you have a static IP address for the interface
that connects to the outside network, use the
SNAT target instead.
Continued
264754-2 Ch20.F 11/5/01 9:05 AM Page 496
I assume that you have downloaded and extracted the latest, stable Linux
kernel from www.kernel.org or a geographically closer mirror site.
2. From the main menu select Networking options submenu. From the
Networking options submenu select the Network packet filtering
(replaces ipchains) option to enable netfilter support.
I don't recommend using ipchains and ipfwadm support; they are part of Linux's
past, so let them stay there!
Don’t select the Fast switching option; it’s incompatible with packet-
filtering support.
When the kernel is compiled and installed, you can start creating your packet-filtering
rules. The next section shows how to do so with iptables.
Creating Packet-Filtering
Rules with iptables
The iptables program is used to administer packet-filtering rules under the
netfilter infrastructure. Here I provide you with the basics of how you can use
this tool, but I strongly encourage you to read the man pages for iptables after you
read this section. Another reason to read the man pages is that the iptables pro-
gram has many command-line switches, but only the common ones are discussed
here.
For example, the following three commands (written here in standard iptables syntax) set such a policy:
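/sbin/iptables -P INPUT DROP
/sbin/iptables -P OUTPUT DROP
/sbin/iptables -P FORWARD DROP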
This is often considered the preferred default policy for the filter table. Here all
packets are dropped by default: packets input to the firewall system are dropped,
packets generated by the firewall system itself are dropped, and packets to be
forwarded through the system are dropped as well. This is called the "deny
everything by default" policy. Security experts prefer the concept of locking all
doors and windows down and then opening only the ones that are needed. This default
packet-filtering policy serves the same purpose, and I recommend it. This policy,
created by the combination of these three rules, should always be part of your
packet-filtering firewall policy.
Appending a rule
To append a new rule to a specific chain, the syntax is
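/sbin/iptables -A chain_name rule_parameters -j target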
For example:
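Based on the explanation that follows, the two rules are:

/sbin/iptables -A INPUT -s 192.168.1.254 -j ACCEPT
/sbin/iptables -A INPUT -s 192.168.1.0/24 -j DROP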
Here both of the rules are appended to the INPUT chain of the default table (filter).
You can explicitly specify the filter table using the -t table_name option. For
example, the following lines of code are exactly equivalent to the two rules
mentioned earlier:
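/sbin/iptables -t filter -A INPUT -s 192.168.1.254 -j ACCEPT
/sbin/iptables -t filter -A INPUT -s 192.168.1.0/24 -j DROP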
The very first rule specifies that any packet destined for the system from another
host whose IP address is 192.168.1.254 be accepted by the system. The second rule
specifies that all IP packets from any host in the entire 192.168.1.0/24 network be
dropped. Avid readers will notice that the first IP address, 192.168.1.254, is within
the 192.168.1.0/24 network, so the second rule drops it! However, because the first
matching rule wins, all the packets from the 192.168.1.254 host are accepted even
though the next rule says they should be dropped.
So the order of rules is crucial. One exception to the "first matching rule wins"
principle is the default policy rule, as in this example:
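/sbin/iptables -P INPUT DROP
/sbin/iptables -A INPUT -s 192.168.1.254 -j ACCEPT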
Here the very first rule drops all packets by default, but because it is the default policy
(denoted by the -P option), it still allows other rules to be examined. In other words,
the position of a policy rule does not prohibit other rules from being evaluated.
Note that chain names are case-sensitive; the built-in chains are written in uppercase: INPUT, OUTPUT, and FORWARD.
Deleting a rule
To delete a rule from a chain, run /sbin/iptables -D chain_name rule_number.
For example, /sbin/iptables -D INPUT 1 deletes the first rule in the INPUT chain
of the default filter table.
To delete all the rules from a given chain, run /sbin/iptables -F chain_name.
For example, /sbin/iptables -F INPUT flushes (that is, empties) all the rules in the
INPUT chain.
You can also delete a rule by replacing the -A (append) option with the -D (delete)
option in your rule specification. For example, say that you have added the
following rule:
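/sbin/iptables -A INPUT -s 192.168.1.254 -j ACCEPT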
Now if you want to delete it without listing all rules and determining the rule
number as discussed earlier, simply do the following:
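/sbin/iptables -D INPUT -s 192.168.1.254 -j ACCEPT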
Inserting a rule
To insert a new rule at a specific position in a chain, use the -I option. For example:
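/sbin/iptables -I INPUT 1 -s 192.168.1.254 -j ACCEPT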
This inserts the rule in the number one position in the INPUT chain of the default
filter table.
Replacing a rule
To replace the rule at a given position in a chain, use the -R option. For example:
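/sbin/iptables -R INPUT 1 -s 192.168.1.254 -j DROP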
This replaces the rule in the number one position in the INPUT chain of the
default filter table with the new rule.
You can use the iptables program to create/modify/delete rules and policies in
packet-filtering firewalls.
(Figure 20-2: the Linux PC's eth0 interface connects to the DSL router, and its eth1 interface connects to the hub serving the private LAN.)
Here the Linux PC has been set up with two Ethernet interfaces (eth0 and eth1).
The eth0 interface is connected to the DSL router and has a real IP address. The internal,
private LAN interface of the Linux PC (eth1) has a private IP address (192.168.1.254), and
the two other systems in this LAN also have private IP addresses.
In this configuration, all outgoing packets from the private network flow
through the Linux firewall machine to the outside world; similarly, all incoming
packets from the Internet come to the private network via the firewall machine.
This Linux PC must implement the following:
Enabling IP forwarding is quite easy. You can simply add the following com-
mand in the /etc/rc.d/rc.local script to enable IP forwarding when you boot up
your Linux system.
/sbin/sysctl -w net.ipv4.conf.all.forwarding=1
You can also execute it from the shell to turn on IP forwarding at any time.
Since you have already enabled the netfilter support in the kernel as discussed in
the previous section, you can enable the masquerading and packet filtering using
iptables, as shown in the script called soho-firewall.sh, Listing 20-1.
This script allows you to change the necessary variables to make it work in a
similar, real-world environment. When the script is executed as shown in Listing
20-1, it effectively creates the following packet-filtering rules.
/sbin/iptables -F
/sbin/iptables -P INPUT DROP
/sbin/iptables -P OUTPUT DROP
/sbin/iptables -P FORWARD DROP
/sbin/iptables -A INPUT -i lo -j ACCEPT
/sbin/iptables -A OUTPUT -o lo -j ACCEPT
/sbin/iptables -A INPUT -i eth1 -s 192.168.1.0/24 -j ACCEPT
/sbin/iptables -A OUTPUT -o eth1 -d 192.168.1.0/24 -j ACCEPT
/sbin/iptables -t nat -A POSTROUTING -o eth0 -d ! 192.168.1.0/24 -j MASQUERADE
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -j ACCEPT
/sbin/iptables -A FORWARD -d 192.168.1.0/24 -j ACCEPT
◆ The very first rule flushes all the existing rules in the filter table.
◆ The next three rules should be familiar to you since they are the default
policy rules. These rules state that no packet can enter, leave, or be for-
warded to/from this system. Basically, these three rules lock up the IP
traffic completely.
◆ The next two rules enable traffic to and from the local loopback interface
(lo) so that you can access other systems when logged onto the firewall
machine itself.
◆ The next rule specifies that any packet with source IP residing in the
192.168.1.0/24 network be accepted on the eth1 interface. Remember that
the eth1 interface in Figure 20-2 is the internal, private network interface
of the firewall system. We want this interface to accept packets from other
nodes (that is, hosts) in the private network.
◆ The next rule specifies that packets generated (that is, output) by the fire-
wall itself for the 192.168.1.0/24 network be allowed.
◆ The next rule is the masquerading rule. It uses the nat table and the
POSTROUTING chain. This rule states that all packets not destined for the
192.168.1.0/24 network be masqueraded. For example, if the firewall machine
detects a packet from the Windows PC (192.168.1.1) destined for an external
system with IP address 207.183.233.200, it changes the source address of this
packet so that 207.183.233.200 sees the firewall's eth0 address (207.183.233.18)
as the source address. When a response packet from 207.183.233.200 arrives at
207.183.233.18, the NAT facility retranslates the destination address to be
192.168.1.1.
At this point, what you have is a firewall system that forwards masqueraded IP
traffic to the external interface (eth0), but no inbound packet from the outside
world reaches the internal, private network. You now need to open connectivity
from the external interface to the internal network selectively.
◆ Outbound traffic rule – Create a rule that allows the external interface
of the firewall to send (that is, output) HTTP requests using the standard
port 80.
◆ Inbound traffic rule – Create a rule that allows the external interface of
the firewall to receive (that is, input) HTTP responses from outside Web
servers.
Here is a rule that you can add to the soho-firewall.sh script to implement the
outbound traffic rule:
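A sketch consistent with the script's variables (ports written in iptables' --sport/--dport notation):

$IPTABLES -A OUTPUT -o $EXTERNAL_LAN_INTERFACE \
          -p tcp \
          -s $EXTERNAL_LAN_INTERFACE_ADDR --sport 1024:65535 \
          -d 0/0 --dport 80 \
          -j ACCEPT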
This rule looks as follows when shell script variables are substituted for appro-
priate values:
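With the sample addresses used earlier in this section:

/sbin/iptables -A OUTPUT -o eth0 \
               -p tcp \
               -s 207.183.233.18 --sport 1024:65535 \
               -d 0/0 --dport 80 \
               -j ACCEPT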
This rule allows the firewall to output HTTP packets destined for port 80 of any
IP address using the external interface. The 1024:65535 range specifies that the
Web browser can use any of the non-privileged ports (that is, ports above the
0–1023 range) for its end of the connection, so the firewall will not block packets
with such a source port number.
Similarly, you need the following line in the script to accept HTTP response traf-
fic destined for the external interface.
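A sketch of that rule, mirroring the outbound rule above:

$IPTABLES -A INPUT -i $EXTERNAL_LAN_INTERFACE \
          -p tcp ! --syn \
          -s 0/0 --sport 80 \
          -d $EXTERNAL_LAN_INTERFACE_ADDR --dport 1024:65535 \
          -j ACCEPT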
In this case, packets sent by outside Web servers to the firewall's external IP
address are accepted on any unprivileged port, as long as the source port is
HTTP (80).
As you can see, opening bi-directional connection requires that you know which
port a particular service runs on. For example, suppose you want to allow the out-
side world access to your Web server in the private network.
◆ Create a rule that allows any IP address to connect to the HTTP port (80)
on your firewall machine’s external interface.
◆ Create a rule that allows the firewall/Web server to respond to unprivi-
leged ports (which Web clients use for client-side connection) when source
IP address is any IP and source port is HTTP (80).
The following lines can be added to the soho-firewall.sh script to
implement these rules.
# Allow the world to connect to the Web server in the
# internal network
$IPTABLES -A INPUT -i $EXTERNAL_LAN_INTERFACE \
          -p tcp \
          -s 0/0 --sport 1024:65535 \
          -d $EXTERNAL_LAN_INTERFACE_ADDR --dport 80 \
          -j ACCEPT
# Allow internal HTTP response to go out to the
# world via the external interface of the firewall
$IPTABLES -A OUTPUT -o $EXTERNAL_LAN_INTERFACE \
          -p tcp \
          -s $EXTERNAL_LAN_INTERFACE_ADDR --sport 80 \
          -d 0/0 --dport 1024:65535 \
          -j ACCEPT
If you want to enable HTTPS (Secure HTTP) connections, add the following
lines:
# Enable incoming HTTPS connections
$IPTABLES -A INPUT -i $EXTERNAL_LAN_INTERFACE \
          -p tcp ! --syn \
          -s 0/0 --sport 443 \
          -d $EXTERNAL_LAN_INTERFACE_ADDR --dport 1024:65535 \
          -j ACCEPT
# Enable outgoing HTTPS connections
$IPTABLES -A OUTPUT -o $EXTERNAL_LAN_INTERFACE \
          -p tcp \
          -s $EXTERNAL_LAN_INTERFACE_ADDR --sport 1024:65535 \
          -d 0/0 --dport 443 \
          -j ACCEPT
In order to interact with the Internet, you are most likely to need a few other ser-
vices enabled, such as DNS, SMTP, POP3, and SSH. In the following sections, I
show you how.
NAMED_ADDR="207.183.233.100"
Now, to allow your private network to access the name server, you should add
the following lines in the script.
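A sketch matching the description below, using the $NAMED_ADDR variable defined above:

# Allow the firewall to send DNS queries to the
# ISP's name server
$IPTABLES -A OUTPUT -o $EXTERNAL_LAN_INTERFACE \
          -p udp \
          -s $EXTERNAL_LAN_INTERFACE_ADDR --sport 1024:65535 \
          -d $NAMED_ADDR --dport 53 \
          -j ACCEPT
# Allow DNS query responses from the name server
$IPTABLES -A INPUT -i $EXTERNAL_LAN_INTERFACE \
          -p udp \
          -s $NAMED_ADDR --sport 53 \
          -d $EXTERNAL_LAN_INTERFACE_ADDR --dport 1024:65535 \
          -j ACCEPT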
The very first rule ensures that the firewall allows outputting of DNS query pack-
ets on port 53 via the UDP protocol. The second rule ensures that the firewall allows
DNS query response packets originating from the name server to be accepted.
Sometimes, when a DNS response is too large to fit in a UDP packet, a
name server uses a TCP packet to respond. This occurs rarely unless you have
systems in your internal network that want to perform DNS zone transfers,
which are often large. If you want to ensure that rare large DNS responses
are handled properly, add two more rules like the ones above, replacing
-p udp with -p tcp. Also use the ! --syn option for the input rule.
Now if you run a DNS server internally on the private LAN, or even on the firewall
machine (not recommended), you need to allow the caching server to perform
DNS queries against your ISP's DNS server when it cannot resolve an internal
request from the cache. In that case, add the following lines to the same script.
# Allow the caching name server on the firewall to
# send queries to the external name server
$IPTABLES -A OUTPUT -o $EXTERNAL_LAN_INTERFACE \
          -p udp \
          -s $EXTERNAL_LAN_INTERFACE_ADDR --sport 53 \
          -d $NAMED_ADDR --dport 53 \
          -j ACCEPT
# Allow external name server to connect to firewall
# system's external interface in response to a
# resolver query
$IPTABLES -A INPUT -i $EXTERNAL_LAN_INTERFACE \
          -p udp \
          -s $NAMED_ADDR --sport 53 \
          -d $EXTERNAL_LAN_INTERFACE_ADDR --dport 53 \
          -j ACCEPT
Note that SSH clients use ports 1020 and higher; therefore, you need to use
the 1020:65535 range.
5. Create an output rule that allows the external interface of the firewall to
send packets to the server.
Internal clients can access the internal interface of the firewall, and their
requests for the external service are automatically forwarded to the external
interface. This is why you create the output rule for the external interface.
The rule appears as follows:
/sbin/iptables -A OUTPUT -o external_interface \
               -p protocol_name \
               -s external_interface_address --sport UNASSIGNED_PORTS \
               -d external_server_address --dport ASSIGNED_PORT \
               -j ACCEPT
6. Create an input rule that allows the external server to respond with
packets.
The rule looks like this:
/sbin/iptables -A INPUT -i external_interface \
               -p protocol_name [ ! --syn ] \
               -s external_server_address --sport ASSIGNED_PORT \
               -d external_interface_address --dport UNASSIGNED_PORTS \
               -j ACCEPT
#!/bin/sh
#
# Packet filtering firewall for dummies
#
IPTABLES=/sbin/iptables
# Drop all rules in the filter table
$IPTABLES -F FORWARD
$IPTABLES -F INPUT
$IPTABLES -F OUTPUT
# Create default drop-and-reject-everything policy
$IPTABLES -P INPUT DROP
Creating Transparent,
proxy-arp Firewalls
Suppose that you have a small office where several Linux/Windows 9x/2K/NT
machines are connected to the Internet using a modem or DSL/cable/ISDN router.
You want to allow the world access to the Web server in your network and also
want to implement some of the packet-filtering rules you learned earlier.
Suppose your ISP provided you with an Internet connection via the network
X.Y.Z.32/27.
You have each machine on the network use the X.Y.Z.35 address as its Internet
gateway address — which allows each machine to send and receive packets to and
from the outside world. Now you have two choices for your firewall:
◆ Set up a firewall machine between your LAN and the Internet feed (which
is the typical place for a firewall).
The problem with the typical placement of a firewall is that if the firewall
machine is down, the entire LAN is disconnected from the Internet. If
Internet access is a big part of your business, that spells trouble — especially
for a small company.
In this case, the firewall machine is less of a potential obstacle to your com-
pany’s Internet connection. If the firewall machine is down, you can simply take
out the network cable connected to eth0 on the firewall and reconnect it to the hub
on the other side (the eth1 side), and you should have full Internet connectivity. You
might need to have your ISP refresh the ARP cache and/or reboot some of your
machines, but your network regains Internet access in a very short time.
The idea behind this setup is as follows: the Linux firewall machine has two
network interfaces, eth0 and eth1, set to the same IP address (X.Y.Z.50), and has ARP
proxying and IP forwarding turned on. This allows the machine to see all the packets
that either originate from your LAN or come to your LAN from
the outside world via the ISP feed. Therefore, you can use packet filtering on such
packets like a regular firewall system.
The only real advantage is that if the firewall is down, for an upgrade or for other
uncontrollable reasons, you can recover your connectivity in a much shorter time
than with a regular firewall configuration, in which you have private IP addresses
for your LAN and all the hosts point to the firewall as their gateway to the
Internet. In this configuration, the host machines don't even know that their packets
are being scanned and forwarded by the man-in-the-middle, transparent,
proxy-arp based firewall system. Assuming that you have a Linux system with two
Ethernet network cards (eth0, eth1) installed, here is how you can create such a
setup.
■ IP address
■ Network address
■ Network mask
4. Add the following lines in your /etc/rc.d/rc.local script to enable the
proxy_arp feature for both of your Linux network interfaces.
/sbin/sysctl -w net.ipv4.conf.eth0.proxy_arp=1
/sbin/sysctl -w net.ipv4.conf.eth1.proxy_arp=1
This tells the kernel that packets for the X.Y.Z.35 address (that is, the
address of the router) are routed on eth0 and that the rest of the network is
available on eth1. Since you have enabled IP forwarding between the
interfaces, any outside packet destined for an internal LAN host is seen
by eth0 and forwarded onto eth1.
At this point, you can either wait awhile for the ARP caches to expire or restart
your router. You should then be able to get back and forth between the
router and the other servers on the network. If you look at the ARP cache on a
server, it shows the MAC address of eth1 on your Linux firewall as the MAC address
of the router. After you have this layer working, you can add your rules.
This section covers the packet-filtering aspect of the solution; later sections cover other security components
that you can integrate with packet filtering to build a robust, well-secured
environment for your organization.
A multiple-firewall environment, known as a Demilitarized Zone (DMZ), keeps
the corporate public servers such as the Web server (Apache server), FTP server,
mail server, and DNS server (if any) behind the primary firewall. The internal net-
work (consisting of employee workstations, and possibly internal-only servers)
resides behind another, separate firewall. Each of these firewalls has a distinct pur-
pose; an examination of each purpose follows.
◆ Masquerade the internal traffic meant for the DMZ services or for the out-
side world.
◆ Allow only incoming traffic that has been generated in response to internal requests. Incoming packets to this firewall must not have the SYN bit set.
◆ Limit the internal network's access to outside services.
◆ Masquerade the packets that are generated by the internal firewall (on
behalf of internal nodes) to the outside world to make the packets appear
to be coming from the primary firewall’s external interface.
When the firewall is doing its job correctly, a typical scenario looks like this:
3. The primary firewall sees this outgoing packet, masquerades it, and for-
wards to the outside world.
4. The Web server on the outside knows only that the external IP address of
the primary firewall has requested access to an outside domain.
Accordingly, the Web server sends its response packets back to the primary firewall, which ensures that outside systems do not know who the real user is and (more importantly) can't get hold of the user-account information for unauthorized uses.
Note that you can decide to not masquerade outgoing packets on the internal
firewall — the external firewall masquerades them anyway — but if you masquerade
outgoing packets on the internal firewall, you can use simpler packet-filtering rules
on the external (primary) firewall. When implemented consistently, such an
arrangement can enhance performance without harming security. Because the pri-
mary firewall sees all packets from the internal network as coming from one
address — that of the internal firewall’s external interface — that one IP is the only
one you have to deal with when you create access rules on the primary firewall.
Implementing these firewalls for the given scenario is a distinct process. Note that when you follow the instructions in the next two subsections, you must change the IP addresses to match those of your own network environment.
INTERNAL_LAN_INTERFACE="eth1"
INTERNAL_LAN_INTERFACE_ADDR="192.168.2.254"
EXTERNAL_LAN_INTERFACE="eth0"
EXTERNAL_LAN_INTERFACE_ADDR="192.168.1.1"
2. Flush out all the rules in the filter table by appending the following
lines to the script:
# Drop all rules in the filter table
$IPTABLES -F FORWARD
$IPTABLES -F INPUT
$IPTABLES -F OUTPUT
5. Enable the Internet Control Message Protocol (ICMP) packets used by ping
and other services:
$IPTABLES -A INPUT -p icmp -j ACCEPT
8. Set up masquerading for everything not destined for the internal LAN:
$IPTABLES -t nat -A POSTROUTING -o $EXTERNAL_LAN_INTERFACE \
-d ! $INTERNAL_LAN \
-j MASQUERADE
9. Forward only those packets that come from the internal LAN by append-
ing these lines:
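The lines themselves are not reproduced in this excerpt. A hedged sketch of what they might look like, reusing the script's $INTERNAL_LAN variable (the state match is my assumption, not necessarily the book's exact rule):
# Sketch: forward traffic that originates from the internal LAN,
# plus reply packets belonging to connections it initiated
$IPTABLES -A FORWARD -s $INTERNAL_LAN -j ACCEPT
$IPTABLES -A FORWARD -d $INTERNAL_LAN \
          -m state --state ESTABLISHED,RELATED -j ACCEPT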
At this point, you have a firewall that can masquerade internal traffic meant
for outside servers. Now you have to decide what types of services your
internal users may access — for example, HTTP, SMTP, POP3, DNS, and AUTH
(part of identd) — and create specific firewall rules for each service.
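For instance, a hedged sketch of one such per-service rule (the exact rules in the book may differ), allowing internal users to reach outside Web servers on port 80:
# Sketch: permit outbound HTTP from the internal LAN
$IPTABLES -A FORWARD -s $INTERNAL_LAN -p tcp --dport 80 -j ACCEPT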
-d $DMZ_LAN -j ACCEPT
# Set up masquerading for packets generated by the internal
# private LAN that are not meant for itself
$IPTABLES -t nat -A POSTROUTING -o $EXTERNAL_LAN_INTERFACE \
-s $INTERNAL_LAN \
-d ! $INTERNAL_LAN \
-j MASQUERADE
# Forward packets from DMZ LAN
$IPTABLES -A FORWARD -s $DMZ_LAN -j ACCEPT
$IPTABLES -A FORWARD -d $DMZ_LAN -j ACCEPT
The next step is to add rules to enable the services offered in the DMZ.
The following code enables auth client service for your mail server.
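The listing is omitted from this excerpt. A hedged sketch, assuming the mail server lives in the DMZ and, as an auth client, makes outgoing identd queries to TCP port 113:
# Sketch: let the DMZ mail server make outbound AUTH (identd) queries
$IPTABLES -A FORWARD -s $DMZ_LAN -p tcp --dport 113 -j ACCEPT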
REDIRECTING PACKETS
Say you want to redirect all internal network (192.168.2.0/24) packets going to a
specific Web site at IP address 193.1.1.193 and port 80 to be redirected to a Web
server at IP address 193.1.1.195. You can add the following rule to the primary fire-
wall rule file:
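The rule is omitted here; a hedged reconstruction using the DNAT target described later in this chapter:
$IPTABLES -t nat -A PREROUTING -s 192.168.2.0/24 \
          -d 193.1.1.193 -p tcp --dport 80 \
          -j DNAT --to-destination 193.1.1.195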
Whenever a packet from the 193.1.1.193 machine enters the firewall, it's logged. To control the number of log entries, you can use the -m limit --limit rate options.
For example:
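The listing is omitted here; a hedged sketch of such a rule, assuming a rate of 10 log entries per minute:
$IPTABLES -A INPUT -s 193.1.1.193 -m limit --limit 10/minute -j LOG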
The above rule logs at most 10 packets per minute from the 193.1.1.193 host. You can also use a prefix to identify a special IP address using the --log-prefix prefix option. For example:
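A hedged sketch of such a rule:
$IPTABLES -A INPUT -s 193.1.1.193 -j LOG --log-prefix "BAD GUY "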
The above rule prefixes the log entry for 193.1.1.193 with the words 'BAD GUY'.
◆ Minimize-Delay
◆ Maximize-Throughput
◆ Maximize-Reliability
◆ Minimize-Cost
◆ Normal-Service
For example, to provide interactive performance to telnet while using ftp at the
same time:
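The listing is omitted here. A hedged sketch using the TOS target in the mangle table (port numbers per /etc/services; the book's exact rules may differ):
# Sketch: low-latency TOS for interactive telnet traffic,
# high-throughput TOS for bulk ftp-data traffic
$IPTABLES -t mangle -A PREROUTING -p tcp --dport 23 \
          -j TOS --set-tos Minimize-Delay
$IPTABLES -t mangle -A PREROUTING -p tcp --dport 20 \
          -j TOS --set-tos Maximize-Throughput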
STATEFUL INSPECTION
Netfilter provides the ability to associate all the packets of a particular connection with each other; packets that are not considered part of an existing connection can be denied or rejected. Packet-filtering rules can now be created to base their target on one of the following states: NEW, ESTABLISHED, RELATED, or INVALID.
For example, to accept packets that are part of an established connection, you
can define a rule such as:
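A hedged sketch of such a rule:
$IPTABLES -A INPUT -m state --state ESTABLISHED -j ACCEPT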
Or you can forward a packet from one interface to another if the packet is part
of a related or established connection using the following rule:
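A hedged sketch (the interface names are illustrative):
$IPTABLES -A FORWARD -i eth0 -o eth1 \
          -m state --state ESTABLISHED,RELATED -j ACCEPT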
◆ DNAT: This target specifies that the destination address of the packet
should be modified.
◆ SNAT: This target specifies that the source address of the packet should be
modified, including all future packets in this connection.
◆ REDIRECT: This is a specialized case of DNAT that alters the destination IP
address to send the packet to the machine itself.
To test an existing rule, simply replace the -A (if you appended the rule) or -I (if
you inserted the rule) with -C and insert values for source and destination IP
addresses. For example, say you have a rule such as the following:
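The rule is omitted from this excerpt; reconstructed from the description that follows, it would be along these lines:
$IPTABLES -A INPUT -i eth1 -s 0/0 -p tcp \
          -d 192.168.1.100 --dport 80 -j REJECT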
This rule states that a packet from any IP address (-s 0/0) destined for port 80 of IP address 192.168.1.100 on Ethernet interface eth1 is to be rejected. To test this rule, you can run the following code:
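A hedged sketch following the -C recipe just described (the source address 202.54.1.1 is purely illustrative):
$IPTABLES -C INPUT -i eth1 -s 202.54.1.1 -p tcp \
          -d 192.168.1.100 --dport 80 -j REJECT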
You should see output stating that the packet is rejected. If your rule calls for address translation, you should see a message stating that the packet is masqueraded; similarly, if your rule calls for dropping the packet, you should see a message stating so.
PEER-TO-PEER SERVICES
Most people on the Internet (and almost everywhere else) have heard of Napster.
I loved Napster in the beginning — but later, when I noticed Napster clients on the
office network, I had to stop loving it (at least in the office). I could not allow users
to run Napster (even before its legal imbroglio) because its peer-to-peer file-sharing
software could have opened a security hole in our corporate network — exposing a
treasure trove of sensitive data, intellectual property, and client databases. So an
addendum to our (normally very flexible) user policy had to exclude all peer-to-
peer software in the office environment. However, the addendum also needed spe-
cific enforcement (junior engineers took it upon themselves to find a way to get
around the restriction).
Peer-to-peer (P2P) networking is not new, but the availability of high-speed networking (such as DSL and cable) on the home front made P2P popular quickly. In my humble opinion, Napster brought P2P into the spotlight. Also, AOL Instant Messenger (AIM) became the dominant personal-networking tool of choice. These tools are great for personal use, but since they are major security hazards, they are not welcome in the business environment just yet. Most of these network toys communicate using simple, clear-text protocols that are (for that reason) unsafe, and many of these tools access the hard disk of the computer they run on. Hard-drive access is exactly what we had to deny to Java applets if we were to keep things secure. Many of us were extremely reluctant to let these personal, social-networking tools get full access to sensitive private data. The following sections cover how you can block these services if you need to.
1. Download the Napster client software on a PC, run it, and use the netstat -n command to view which servers it connects to. You will notice that the Napster servers are on the 64.124.41.0 network. You can run ping server.napster.com a few times and find that the hostname mentioned earlier is load-balanced across multiple IPs.
2. Add the following rules to drop all incoming and outgoing packets to and
from the Napster network.
/sbin/iptables -A INPUT -s 64.124.41.0/24 -j DROP
/sbin/iptables -A OUTPUT -d 64.124.41.0/24 -j DROP
As mentioned before, this technique blocks the casual Napster user. If you have
network-savvy users who have friends with Internet servers, you might still have a
Napster problem in your network; troublemakers can still connect to Napster via a
remote proxy server. In such a case, your only option is to pay a visit to the user’s
superior and report the issue. Of course, if you are dealing with a campus environ-
ment where students are bypassing your Napster blockade, your only hope is to fig-
ure out a way to reduce the bandwidth available to such activity.
If the Napster users in your network are using open source Napster servers (OpenNap), you have to block them individually as well. You can find a list of OpenNap servers at www.napigator.com/list.php.
1. Download AIM and use netstat -n to detect which server it connects to.
Typically, AIM clients connect to the login.oscar.aol.com server on
port 5190. Run nslookup -q=a login.oscar.aol.com from the com-
mand line to discover the IP addresses that are in the load-balancing pool.
For example, when I ran the nslookup command mentioned earlier, I got
Name: login.oscar.aol.com
Addresses: 152.163.241.128, 152.163.242.24, 152.163.242.28,
152.163.241.120
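The pair of rules is not reproduced in this excerpt; a hedged sketch for the first address (repeat the pair for each address that nslookup returns):
/sbin/iptables -A INPUT -s 152.163.241.128 -j DROP
/sbin/iptables -A OUTPUT -d 152.163.241.128 -j DROP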
After you have added this pair of rules for each IP address that points to
this server, your system should effectively block AIM.
Don’t forget to check for new IP addresses that might show up as AOL
decides to add or drop servers in their load-balancing server pool.
These measures to block P2P file-sharing and communication software are (at most) interim solutions. We must find better ways to allow or deny access to such software in the corporate environment. Developers and systems-security experts should work toward common ground that puts both sides in a win-win situation. Until then, your best bet is to keep monitoring these software products and their vendors on an ongoing basis, staying ready to plug any security hole that opens.
In general, I recommend not using NAT when considering IPSEC-based VPN technology. If you use Network Address Translation (NAT), you might not be able to use IPSEC with the AH protocol. NAT changes the source and/or destination of IP packets, which appears as an attack when IPSEC checks the packet header signature. IPSEC using ESP in tunnel mode encapsulates the entire original packet into a new packet, and the remote IPSEC gateway using the same ESP protocol in tunnel mode evaluates only the original packet stored within the received packet. This ensures that the original packet is secured within the new packet and works with NAT.
1. As root, you must compile and install the latest Linux kernel from the
kernel source distribution.
See Chapter 2 for details on how to compile, install, and boot a new kernel. You must continue with the following instructions only after you have successfully booted from a custom-compiled kernel.
2. Download FreeS/WAN source distribution from the following site or its
mirror sites:
ftp://ftp.xs4all.nl/pub/crypto/freeswan/
Extract the source into the /usr/src directory by copying the tar ball into that directory and running the tar xvzf freeswan-version.tar.gz command. This creates a new subdirectory in /usr/src.
3. From the freeswan directory in /usr/src, run make menugo, which runs the make menuconfig command along with other freeswan installation scripts that let you customize the kernel configuration, compile it, and create a pair of RSA authentication keys, which are stored in the /etc/ipsec.secrets file.
When you see the menu-based kernel-configuration screen, you can cus-
tomize anything you want — or simply save the configuration to let the
freeswan installation continue. If any errors are detected during this
process, you should review the out.kbuild or out.kinstall files to get
the details on the errors you encounter.
The RSA keys generated by the freeswan configuration scripts are suit-
able only for authentication, not for encryption. IPSEC uses them only for
authentication.
4. To install the kernel the easy way, from the FreeS/WAN directory run the
make kinstall command to install the new kernel and any modules that
are needed.
You should back up your existing kernel and modules and also create a LILO
configuration that allows you to boot the old kernel in case the new kernel
doesn’t boot. (See Chapter 2 for details.)
Look for boot messages stating that Kernel IP Security (KLIPS) is initializing and Pluto is starting. You can run dmesg | grep -i KLIPS and/or dmesg | grep -i pluto if you missed the boot messages.
Creating a VPN
Here I show you how you can create an IPSEC VPN using FreeS/WAN.
For my example, each of the unjoined local area networks — one in San Jose,
California, and another one in New York City — has a FreeS/WAN gateway machine
connected to the Internet. Creating an IPSEC VPN between these two networks is (as you may expect) a two-part process unto itself.
2. Make sure you have compiled the kernel with netfilter support; this
configuration procedure uses packet-filtering rules to restrict access to the
San Jose gateway.
3. Modify the /etc/ipsec.conf file to have the lines shown in Listing 20-3.
config setup
interfaces="ipsec0=eth0"
klipsdebug=none
plutodebug=none
plutoload=%search
plutostart=%search
conn %default
keyingtries=0
authby=rsasig
conn sjca-nyc
left=207.177.175.60
leftsubnet=192.168.1.0/24
leftnexthop=207.177.175.1
right=207.183.233.17
rightsubnet=192.168.2.0/24
rightnexthop=207.183.233.1
auto=start
[email protected]
[email protected]
leftrsasigkey=0x01035d3db6bdabeb8d9a62eb8d798d92a1
rightrsasigkey=0x01032d2dbadfeeddead62eb8d798d92b2
Here the first line of the file tells KLIPS that you are describing a machine configuration. The next line defines the interfaces that FreeS/WAN should use. In the example just given, interfaces="ipsec0=eth0" tells the system to use the eth0 interface as the first IPSEC (ipsec0) interface. In most cases, you can set interfaces=%defaultroute; the typical default route is also the default connection to the Internet.
Why specify the next hop address or default route for the gateway machine? KLIPS does not use the default route used by the normal network subsystem, so you must specify this. If the gateway machines are directly connected to each other, you do not need to set this.
: rsa {
#Insert contents of the /tmp/rsakey.txt file here
}
If you are setting up a VPN, test to make sure you can start a complete VPN
connection from one gateway. The other gateway should start automati-
cally, as soon as it recognizes a connection-negotiation request coming from
the gateway that initiates the connection.
1. As root, extract the source into /usr/src, using the tar xvzf stunnel-
version.tar.gz command.
Now you are ready to use Stunnel to wrap many popular services.
Securing IMAP
You have two ways of using Stunnel with the IMAP service on your system,
depending on how it’s configured. You can run the IMAP service directly via stun-
nel or run IMAP service as a xinetd managed service. The first method is only rec-
ommended if you have IMAP clients who do not use SSL protocol for connection.
Disable imapd as you currently run it; instead, run imapd using the following command from a startup script such as /etc/rc.d/rc.local:
/usr/sbin/stunnel -p /usr/local/ssl/certs/stunnel.pem \
-d 993 \
-r localhost:143
This command runs stunnel using the specified certificate file on the IMAPS port (993) as a proxy for the imapd daemon running on localhost port 143. This setup still allows your non-SSL IMAP clients to connect on the standard IMAP port (143), and you can configure the SSL-capable IMAP clients to connect to port IMAPS (993) instead.
/usr/sbin/stunnel -p /usr/local/ssl/certs/stunnel.pem \
-d 993 \
-l /usr/sbin/imapd
This approach yields the same result by running the imap daemon (specified by
-l) rather than connecting to a daemon that is already running on the IMAP port
(143).
service imap
{
disable = no
socket_type = stream
wait = no
user = root
port = 143
server = /usr/sbin/stunnel
server_args = -l /usr/sbin/imapd -- imapd
log_on_success += USERID
log_on_failure += USERID
#env = VIRTDOMAIN=virtual.hostname
}
Don’t forget to reload xinetd configuration using the killall -USR1 xinetd
command.
Securing POP3
To connect to your POP3 mail server via SSL, reconfigure your /etc/xinetd.d/
pop3s configuration script as follows:
service pop3s
{
disable = no
socket_type = stream
wait = no
user = root
server = /usr/sbin/stunnel
server_args = -l /usr/sbin/ipop3d -- ipop3d
log_on_success += USERID
log_on_failure += USERID
}
If you have POP3 client software that cannot use SSL, then you can use a POP3
redirector service as follows:
1. Set up a normal (that is, not using Stunnel) POP3 server on a system that
uses the following /etc/xinetd.d/pop3 service configuration.
service pop3
{
disable = no
socket_type = stream
wait = no
user = root
server = /usr/sbin/stunnel
server_args = -c -r pop3server-using-stunnel:pop3s
log_on_success += USERID
log_on_failure += USERID
}
2. Set up a POP3 server using stunnel, as shown in the first xinetd configu-
ration example (/etc/xinetd.d/pop3s).
3. Change pop3server-using-stunnel to the hostname of the POP3 server that is using Stunnel. This allows non-SSL-capable POP3 clients to connect to a host that uses stunnel to forward POP3 traffic to another server using SSL.
/usr/local/sbin/stunnel -d 25 \
-p /var/lib/ssl/certs/server.pem \
-r localhost:smtp
Note that you are securing SMTP delivery between the end user and your mail server. If the mail is to be sent to another server outside your domain, it will not be secure. This scenario applies only to an organization where all mail is sent internally via the secure method and external mail goes through some other SMTP relay.
Summary
Packet filtering is a means to impose control on the types of traffic permitted to pass from one IP network to another. The packet filter examines the header of each packet and determines whether to pass or reject the packet based upon the contents of the header. The packet-filtering capability found in iptables allows you to create security rules for incoming and outgoing IP packets. On the other hand, security tools such as a VPN and stunnel allow you to secure two networks or individual network services, respectively.
Chapter 21
IN THIS CHAPTER, I introduce you to various security tools that you can use to audit,
monitor, and detect vulnerabilities in your individual Linux system or an entire
network.
Make sure you have permission before using SAINT to audit a system or a
network. If you let SAINT loose on a system or network that you don’t have
permission for, you may be in legal trouble — the owner may consider your
audit an intrusion attempt!
◆ Root kits
1. Download the latest source distribution from the official SAINT Web site
at http://www.wwdsi.com/saint.
The source distribution is usually called saint-version.tar.gz (e.g.,
saint-3.1.2.tar.gz).
To start, run ./saint from the command line. SAINT starts up in a Web browser.
SAINT home
WWDSI
SAINT Home
Data Management
Target Selection
Data Analysis
Configuration Management
SAINT Documentation
Troubleshooting
This option configures the probe level; the default is set to Heavy. For example, if you are aware of the SANS top 10 security threats (see http://www.sans.org/topten.htm) and want to check whether your system exhibits such threats, select the Top 10 option.
When you make a change to any configuration options in this page, save
the changes using the Change the configuration file link.
If you plan on using SAINT for a number of hosts on a regular basis, create a
file with hostnames and use it here to save some typing and avoid errors.
The first time you start a scan in each SAINT session, it warns you about not
connecting to external Web servers while using SAINT.
Scanning can temporarily take many resources from the target machine, so
don’t perform heavy scans on busy machines. Scan when the target system
or network isn’t heavily loaded with other work.
3. If you are behind a firewall, check the Firewall Support option shown below.
Firewall Support
Is the host you are scanning behind a firewall? If it is, you
should enable firewall support, or your results might not be
accurate, or you might get no results at all.
(*) No Firewall Support
( ) Firewall Support
Choose firewall support only if SAINT isn't running on the target system itself and there is a firewall between the SAINT system and the target system or network. Making SAINT aware of the firewall's presence allows for a more accurate scan.
4. Click Start.
Once the scan is complete you see a results screen. Here is an example
result of running a heavy scan on a single host called rhat.nitec.com.
SAINT data collection
_____________________________________________________________
Data collection in progress...
03/26/01-08:41:37 bin/timeout 60 bin/fping rhat.nitec.com
03/26/01-08:41:37 bin/timeout 60 bin/tcpscan.saint
12754,15104,16660,20432,27665,33270,1-1525,1527-5404,5406-8887,8889-9999 rhat.nitec.com
19,53,69,111,137-139,161-162,177,8999,1-18,20-52,54-68,70-110,112-136,140-160,163-176,178-1760,1763-2050,32767-33500 rhat.nitec.com
03/26/01-08:42:22 bin/timeout 20 bin/ftp.saint
rhat.nitec.com
03/26/01-08:42:22 bin/timeout 20 bin/relay.saint
rhat.nitec.com
03/26/01-08:42:22 bin/timeout 20 bin/login.saint -o -u
root -p root telnet rhat.nitec.com
03/26/01-08:42:22 bin/timeout 20 bin/sendmail.saint smtp
rhat.nitec.com
03/26/01-08:42:22 bin/timeout 90 bin/http.saint 1263
rhat.nitec.com
03/26/01-08:42:22 bin/timeout 20 bin/login.saint -r -u
wank -p wank telnet rhat.nitec.com
03/26/01-08:42:24 bin/timeout 90 bin/http.saint http
rhat.nitec.com
03/26/01-08:42:24 bin/timeout 20 bin/rlogin.saint
rhat.nitec.com
03/26/01-08:42:24 bin/timeout 20 bin/rsh.saint -u root
rhat.nitec.com
03/26/01-08:42:24 bin/timeout 20 bin/statd.saint Linux 7.0
rhat.nitec.com
03/26/01-08:42:24 bin/timeout 20 bin/ssh.sara
rhat.nitec.com
03/26/01-08:42:25 bin/timeout 20 bin/rsh.saint
rhat.nitec.com
03/26/01-08:42:25 bin/timeout 60 bin/smb.saint
rhat.nitec.com
03/26/01-08:42:27 bin/timeout 20 bin/mountd.sara
rhat.nitec.com
As you can see, a number of scans were run including UDP, TCP, DNS,
HTTP, and RPC. SAINT also tries to detect the remote software platform
and version.
5. Click Continue with report and analysis for a summary of your scan
results.
Table of contents
Vulnerabilities
Host Information
* By Class of Service
* By System Type
* By Internet Domain
* By Subnet
* By Host Name
Trust
* Trusted Hosts
* Trusting Hosts
You can analyze the gathered data in many ways. The first option allows you to
analyze the data by approximating the danger level. An example of this analysis
for rhat.nitec.com is shown below:
________________________________________________________________________________
Table of contents
This problem, a vulnerability in the DNS software (BIND 8.2.2), is listed as one of the top 10 security threats by SANS (http://www.sans.org).
A potential problem (marked as YELLOW) is detected:
Here, SAINT reports that the rhat.nitec.com system gives out excessive finger information when the finger service is used to find information on a user. This can potentially lead to other security risks.
Finally, it also lists some potential vulnerabilities (marked as BROWN), which are:
The printer daemon (lpd) and the Remote Procedure Call (RPC) stat daemon
(rpc.statd) are enabled and may be associated with security issues.
Once you have identified vulnerabilities in a system or network, use the Common Vulnerabilities and Exposures (CVE) links to learn more about the problems. For example, to learn more about the DNS vulnerability detected in the scan for rhat.nitec.com, I can click the (CVE 1999-0849 1999-0851) links listed in the analysis and find all the details.
The next step is to shut off the program or service that is vulnerable. This may not be possible if your business depends on the service or program, so I recommend that you find a fix and replace the faulty program or service as soon as possible.
SARA
The original authors of SAINT wrote another security audit tool called Security Auditor's Research Assistant (SARA), which can be downloaded from http://www-arc.com/sara. SARA has a special firewall mode that detects hosts without using ping packets.
The source distribution can be installed as follows:
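The steps are not reproduced in this excerpt. A hedged sketch of a typical source install (check the README in the SARA distribution for the exact commands):
tar xvzf sara-version.tar.gz
cd sara-version
./configure
make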
VetesCan
VetesCan is a bulk vulnerability scanner that you can download from http://www.self-evident.com. It contains many programs to check for network security exploits.
Once you have downloaded and extracted the source distribution, run the sup-
plied install script from the source directory and the scanner is compiled and
built. Running a scan on a host is as simple as running ./vetescan hostname from
the same directory.
◆ Reverse-ident scanning
Replace the hostname with the target hostname. The /32 tells nmap to scan only
the given host on the network. If you specify /24, it scans all the hosts on the class
C network where the given host resides. If you specify /16, it scans the class B net-
work and /8 scans the class A network.
If you don’t know the hostname, you can use the IP address instead. For example:
This command tells nmap to scan the TCP ports (-sT) on the 192.168.1.100 host
only and guess its operating system. The output of the preceding command is
shown below:
As you can see it has detected several TCP ports that are open and also guessed
the operating system of the machine responding to the 192.168.1.100 address.
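The UDP scan command is likewise omitted; a hedged reconstruction:
nmap -sU 192.168.1.100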
Here nmap scans the UDP ports for the specified host.
When PortSentry detects activity on a monitored port, it can report it and also perform a specified action, which can include denying further attempts to access your system. Typically, a hacker scans ports for weaknesses in a service connected to a port and then attacks the service. Since PortSentry can detect access attempts to multiple monitored ports, it can deny the hacker the time he needs to launch massive attacks, such as buffer overflows. When it detects a port scan, the following action is taken:
TCP_PORTS="1,11,15,79,111,119,143,540,635,1080,1524,2000,5742,6667,12345,12346,20034,31337,32771,32772,32773,32774,40421,49724,54320"
UDP_PORTS="1,7,9,69,161,162,513,635,640,641,700,32770,32771,32772,32773,32774,31337,54321"
ADVANCED_PORTS_TCP="1023"
ADVANCED_PORTS_UDP="1023"
ADVANCED_EXCLUDE_TCP="113,139"
ADVANCED_EXCLUDE_UDP="520,138,137,67"
IGNORE_FILE="/usr/local/psionic/portsentry/portsentry.ignore"
HISTORY_FILE="/usr/local/psionic/portsentry/portsentry.history"
BLOCKED_FILE="/usr/local/psionic/portsentry/portsentry.blocked"
BLOCK_UDP="1"
BLOCK_TCP="1"
KILL_ROUTE="/sbin/iptables -I INPUT -s $TARGET$ -j DROP"
KILL_HOSTS_DENY="ALL: $TARGET$"
KILL_RUN_CMD="/bin/echo Attack host: $TARGET$ port: $PORT$ >> /var/log/attacks.log"
SCAN_TRIGGER="0"
Mode Meaning
-tcp Basic port-bound TCP mode. PortSentry binds to all the ports listed in
TCP_PORTS option in portsentry.conf file.
-udp Basic port-bound UDP mode. PortSentry binds to all the ports listed in
UDP_PORTS option in portsentry.conf file.
-stcp Stealth TCP scan detection mode. PortSentry uses a raw socket to monitor all
incoming packets. If an incoming packet is destined for a monitored port it
reacts to block the host. This method detects connect() scans, SYN/half-open
scans, and FIN scans.
-sudp Stealth UDP scan detection mode. This operates the same as the preceding TCP
stealth mode. UDP ports need to be listed and they are then monitored. This
doesn’t bind any sockets, and it reacts to any UDP packet.
-atcp Advanced TCP stealth scan detection mode. PortSentry listens to all ports below the port number specified by the ADVANCED_PORTS_TCP option. Any host connecting to any port in this range that isn't excluded (via the ADVANCED_EXCLUDE_TCP option) is blocked. This mode is the most sensitive and the most effective of all the protection options. Make sure you know which ports you need to exclude for this mode to work properly. For example, if you include the AUTH (113) port here, an outgoing FTP connection may be rejected by an external FTP server that sends a request to the identd daemon on your system.
PortSentry allows FTP servers to make a temporary data connection back to the
client on a high (> 1024) port.
-audp Advanced UDP stealth scan detection mode. This is a very advanced option, and you stand a good chance of causing false alarms. PortSentry makes no distinction between broadcast and direct traffic. If you have a router on your local network putting out RIP broadcasts, then you will probably end up blocking it. Use this option with extreme caution, and be sure to use exclusions in the ADVANCED_EXCLUDE_UDP option.
The last line indicates that PortSentry is running. You should see all the ports you told it to watch in the log. If a port is in use, PortSentry warns you that it couldn't bind to that port and continues until all the other ports are bound. In the preceding sample log entries, the finger port (79) and sunrpc port (111) couldn't be bound by PortSentry because other services are already bound to them.
Note that for the advanced stealth scan detection modes, PortSentry lists only the ports that it doesn't listen for. This is an inverse binding.
Now, to see what happens when another host tries to connect to a monitored
port, do the following experiment:
4. For advanced mode, you can telnet to any port not excluded to trip an
alarm. If you disconnect and try to telnet again, you should find that the
target system is unreachable.
To make sure PortSentry starts up each time you boot the system, append the following line to the /etc/rc.d/rc.local script. Don't forget to change the mode argument to one of the six possible modes.
/usr/local/psionic/portsentry/portsentry mode
◆ The client, nessus, interfaces with the user through an X11/GTK+ interface or the command line.
These commands extract all the source code into the nessus-libraries, libnasl, nessus-core, and nessus-plugins directories. Here the software version is 1.0.7a.
2. Compile the nessus-libraries by changing your current directory to
nessus-libraries and running ./configure and make && make install
to install the libraries.
3. Add /usr/local/lib to /etc/ld.so.conf and run ldconfig.
4. Compile libnasl by changing your current directory to libnasl and run-
ning ./configure and make && make install to install the libraries.
5. Compile nessus-core by changing your current directory to nessus-core
and running ./configure and make && make install to install the core.
If you don't use the X Window System or you don't want the nessus client to use GTK, you can compile a stripped-down version of the client that works from the command line. To do this, add the --disable-gtk option to configure while building nessus-core as follows: ./configure --disable-gtk; make && make install
------------------------------
Login : renaud
Password : secret
Authentication type (cipher or plaintext) [cipher] : cipher
Now enter the rules for this user, and hit ctrl-D once you are done :
(the user can have an empty rule set)
^D
Login : renaud
Password : secret
Authentication : cipher
Rules :
Is that ok (y/n) ? [y] y
user added.
To restrict the test, you can add rules in the User tab window to exclude one or more systems from a scan.
Once you have started the scan, you see a graphical status display for each host. After the scan is complete, a report window appears showing all the findings. For each host, you get a detailed report, which also points out the potential solution and/or Common Vulnerability Exposure (CVE) identification number.
Using Strobe
Strobe is a high-speed TCP port scanner. You can download it from http://www.insecure.org/nmap/scanners. Once you have downloaded the source distribution, you can extract it and run make install from the new strobe subdirectory.
You can run strobe from this directory using the ./strobe hostname -f com-
mand. For example:
strobe nano.nitec.com -f
The preceding command scans a host called nano.nitec.com in fast mode (-f).
Here’s a sample output:
You can also generate statistics on the scan by using the -s option.
◆ Every hour, the logcheck script is run; it runs the logtail program.
◆ The logtail program finds the position in the log where logcheck finished last time (if any).
◆ The logcheck script performs keyword searches on the new log entries for
■ Active system attacks
■ Security violations
■ Unusual system activity
◆ If there is anything to report, it sends an email to the administrator.
INSTALLING LOGCHECK
To install logcheck from the source distribution do the following:
1. As root, extract the source distribution using the tar xvzf logcheck-version.tar.gz command. Enter the newly created logcheck-version subdirectory.
2. Run make linux to compile and install. The logcheck configuration files are stored in the /usr/local/etc directory. The logcheck.sh script is also stored in /usr/local/etc; however, the logtail program is stored in /usr/local/bin.
CONFIGURING LOGCHECK
The logcheck.sh script is the main program. There are four configuration files:
◆ logcheck.hacking
To exclude a violation keyword from being reported, you can use the logcheck.violations.ignore file. Any keyword found in logcheck.violations.ignore isn't reported, even if logcheck.violations includes it. Note that keyword searches for the ignore file are case-insensitive.
Ideally, logcheck would scan a single log file such as /var/log/messages, but that's not really a good idea from a system-management point of view. Therefore, logcheck actually creates temporary log files by combining /var/log/messages, /var/log/secure, and /var/log/maillog into a temporary file, /usr/local/tmp/check.PID.
The default logcheck.sh script doesn’t check the FTP transfer log /var/
log/xferlog. To check it, add $LOGTAIL /var/log/xferlog >>
$TMPDIR/check.$$ in the section where you find the following lines:
$LOGTAIL /var/log/messages > $TMPDIR/check.$$
$LOGTAIL /var/log/secure >> $TMPDIR/check.$$
$LOGTAIL /var/log/maillog >> $TMPDIR/check.$$
ln -s /usr/local/etc/logcheck.sh /etc/cron.hourly/logcheck.sh
ln -s /usr/local/etc/logcheck.sh /etc/cron.daily/logcheck.sh
Once you have created one of these two symbolic links, the cron daemon runs the script on the schedule you choose.
If you want to run the logcheck script more frequently than once an hour, add
the script in /etc/crontab instead. For example:
00,15,30,45 * * * * root /usr/local/etc/logcheck.sh
This line in /etc/crontab has the crond daemon run the script every 15 minutes.
Swatch
This is a Perl-based system log (/var/log/messages) monitoring script that can monitor system events and trigger alarms based on patterns stored in a configuration file. It has multiple alarm methods (visual and event triggers). Here is how you can install and configure swatch.
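The installation steps and the configuration listing are omitted from this excerpt. A hedged sketch of what such a configuration entry might look like (the keyword badguy is purely illustrative; check the swatch documentation for the exact syntax of your version):
# Ring the terminal bell three times when a watched keyword appears
watchfor /badguy/
        echo
        bell 3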
The preceding configuration tells swatch to echo the bell (character 0x7) three times if the keyword is seen in the /var/log/messages file. Rename your configuration file to .swatchrc and keep it in the /root/ directory.
4. Run swatch using the /path/to/swatch/swatch --config-file=/root/.swatchrc command in an xterm or shell window.
IPTraf
IPTraf is an ncurses-based IP LAN monitor that utilizes the built-in raw packet capture interface of the Linux kernel. It generates various network statistics. IPTraf can be used to monitor the load on an IP network, the most-used types of network services, the progress of TCP connections, and more. You can download IPTraf from http://cebu.mozcom.com/riker/iptraf.
Using cgichk.pl
This is a simple CGI scanner written in Perl. You can download the source from the http://www.packetstorm.securify.com Web site. When run from the command line using the perl cgichk.pl command, it asks you to enter a hostname for the Web server you want to scan and a port number (default 80). You can also choose to log the results in a file.
First, it checks the HTTP protocol version being used by the Web server. For
example, the following sample session shows that we are scanning a machine
called rhat.nitec.com.
Once it detects the protocol version, it asks you to press the Enter key to start checking for CGI vulnerabilities. The following output shows a sample scan for CGI security issues on the rhat.nitec.com Web server running Apache 2.0.
In the preceding code, the bold line is a potential CGI security risk. The webbbs.cgi script can be abused by script kiddies and wannabe hackers to break into the system. If your scan reveals one or more security risks, consider removing the scripts or updating them with appropriate fixes.
Using Whisker
Whisker is a Perl-based CGI scanner that I like a lot. You can download the source distribution from http://www.wiretrip.net/rfp. Once downloaded, extract the source in a directory and run the whisker.pl script as perl whisker.pl -h hostname. For example, the perl whisker.pl -h rhat.nitec.com command runs the scanner against the Apache Web server running on the named host. The result is:
= Host: rhat.nitec.com
= Server: Apache/2.0.14 (Unix)
+ 200 OK: HEAD /cgi-bin/webbbs.cgi
+ 200 OK: HEAD /manual/
+ 200 OK: HEAD /temp/
The scan output uses HTTP status codes (such as 200, 302, 403, and 404) to indicate security risks. For example, the preceding scan result shows that there are three
potential risks found (200) on the server. If you want more information, run
whisker with -i and -v options. For example, the perl whisker.pl -h
www.domain.com -i -v command runs it on www.domain.com. Here is a sample
scan output:
= - = - = - = - = - =
= Host: www.domain.com
- Directory index: /
= Server: Apache/1.3.12 (Unix) mod_oas/5.1/
- www.apache.org
+ 302 Found: GET /scripts/
+ 403 Forbidden: GET /cgi-bin/
+ 200 OK: HEAD /cgi-bin/upload.pl
+ 403 Forbidden: HEAD /~root/
+ 403 Forbidden: HEAD /apps/
+ 200 OK: HEAD /shop/
+ 200 OK: HEAD /store/
Notice that there are a few 200 OK lines, which means that those exploits exist. A 403 states that access to an exploitable resource is denied but that the resource still exists; this is both good and bad. It's good because, as the server is configured, the exploit isn't accessible. If the configuration changes, the exploit may become available; that's why a 403 in this case is also bad news. The 302 lines indicate false positives. This is because many servers are configured to respond with a custom error message when a requested URL is missing, which generates a 302 HTTP status code.
You can also use the -I n (where n = 0 to 9) option to enable evasive mode for evading an Intrusion Detection System (IDS) on the Web server. So if you use an IDS solution, you can also test its effectiveness. For example, if your IDS knows about /cgi-bin/phf (a known CGI risk), using -I 1 attempts to trick your IDS using URL encoding, so the /cgi-bin/phf request is sent in an encoded URL instead of directly using /cgi-bin/phf in the request. Similarly, -I 2 tries to confuse an IDS using an extra /./ pattern in the URL. For details, run whisker without any argument.
Using Malice
Malice is another Perl-based CGI vulnerability scanner. It has anti-IDS evasion capabilities and a large list of checks.
By using password crackers on your own password files, you can detect guessable or weak passwords before the hacker does. Here I discuss how you can use password crackers.
When using a cracker program, be sure that you are authorized to run crack programs. If you work for a company and aren't directly responsible for system administration, ask the boss for permission, or unnecessary legal trouble may find you.
1. Download and extract the source distribution from the official Web site.
Change your current directory to the src subdirectory under the newly
created subdirectory called john-version (where version is the version
number of your source distribution).
2. Run the make linux-x86-any-elf command to compile and install john in the run subdirectory (john-version/run).
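A hedged usage sketch (the output file name is illustrative; john ships with an unshadow utility for merging passwd and shadow files):
cd john-version/run
./unshadow /etc/passwd /etc/shadow > passwd.txt
./john passwd.txt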
Cracking passwords takes a lot of system resources. Don’t run john on a pro-
duction server. If you must run on a busy system, consider using nice to
lower priority. For example, nice -n 20 /path/to/john /etc/
passwd.txt & command is nice to your system.
If you run john in the foreground (that is, you don’t use & when running it),
you can press control+c to abort it. You can also restore an aborted session
by running john with only the -restore command-line option.
Crack
Like John the Ripper (john), you can use Crack to take a crack at cracking your
passwords. Crack is the classic password cracker. It can be downloaded from
http://www.users.dircon.co.uk/~crypto.
Tripwire
Tripwire Open Source Linux Edition, under the General Public License (GPL), is a
file and directory integrity checker that creates a database of signatures for all files
and directories and stores them in a single file. When Tripwire is run again, it com-
putes new signatures for current files and directories, then compares them with the
original signatures stored in the database. If there is a discrepancy, the file or direc-
tory name is reported with information about the discrepancy.
See the Using Tripwire Open Source, Linux Edition section in Chapter 9 for
details on how to use Tripwire.
LIDS
The Linux Intrusion Detection System (LIDS) enhances security from within the
kernel of the Linux operating system. In the LIDS security model, the subject,
the object, and the access type are in the kernel, so it’s called a reference monitor.
The LIDS project Web site is http://www.lids.org/about.html. Chapter 8
describes this tool.
Snort
This is a packet sniffer, packet logger, and network intrusion detection system that performs network traffic analysis and logs IP packets. Snort uses a flexible rules language and a detection engine that utilizes a modular plugin architecture. Snort has a real-time alert capability that incorporates alert mechanisms for
◆ Syslog
◆ A user-specified file
◆ A UNIX socket
If you use network switches instead of hubs, your switch must mirror traffic to the switched port of the machine running Snort.
Snort uses the libpcap library, so you must install libpcap, which can be downloaded from http://www.tcpdump.org. Snort features rules-based logging and can perform content searching/matching, and it can detect a variety of other attacks and probes, such as
◆ Buffer overflows
◆ CGI attacks
◆ SMB probes
1. Download the snort source distribution from the official Web site. As root, extract the source using the tar xvzf snort-version.tar.gz command. From the newly created subdirectory, run ./configure to configure the source. The configure script detects whether you have a supported database (such as MySQL or Postgres) and the files necessary for the database plugin.
Snort prints packet header information until you stop it by pressing Control+C.
When you abort Snort using such a key combination, it prints packet statistics, like
this:
===============================================================================
Snort received 441 packets and dropped 0(0.000%) packets
Breakdown by protocol: Action Stats:
TCP: 438 (99.320%) ALERTS: 0
UDP: 0 (0.000%) LOGGED: 0
ICMP: 0 (0.000%) PASSED: 0
ARP: 0 (0.000%)
IPv6: 0 (0.000%)
IPX: 0 (0.000%)
OTHER: 3 (0.680%)
DISCARD: 0 (0.000%)
===============================================================================
Fragmentation Stats:
Fragmented IP Packets: 0 (0.000%)
Rebuilt IP Packets: 0
Frag elements used: 0
Discarded(incomplete): 0
Discarded(timeout): 0
===============================================================================
TCP Stream Reassembly Stats:
TCP Packets Used: 0 (0.000%)
Reconstructed Packets: 0 (0.000%)
Streams Reconstructed: 0
===============================================================================
If you want to see more than just the TCP/UDP/ICMP header information, such as application data in transit, then do the following:
◆ If you want to analyze the logged packets using a tcpdump-format-compatible analysis tool, you can use the -l path -b options, as in the sketch that follows.
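A hedged example of such an invocation (the -d flag dumps application-layer data; the log directory is an assumption):
/usr/local/bin/snort -d -l /var/log/snort -b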
/usr/local/bin/snort -d -c snort.conf
Here, Snort applies all the rules in the snort.conf file and logs entries in the /var/log/snort directory by default.
By default, Snort uses full alert mode (-A full), which may be a bit slow for a fast network. You can use -A fast instead; this makes Snort write simple alert messages in the log file and log packet data in tcpdump format.
GShield
GShield is an iptables firewall that you can download from http://muse.linuxmafia.org/gshield.html. It supports network address translation (NAT), a demilitarized zone (DMZ), port forwarding, transparent proxying, and many other cool features.
Using Netcat
Netcat (nc) is a network utility that allows you to create a raw socket connection to a port. Over a raw connection, data can be sent or received. It's a great network debugging and exploration tool that can act as a back end for another script serving as a client or server using the TCP or UDP protocols.
INSTALLING NETCAT
Here is how you can install Netcat on your system:
1. Download the Netcat RPM source from RPM sites, such as http://www.rpmfind.net.
Simply enter netcat as the keyword in the search interface and you find both source and binary RPM distributions.
5. Once extracted, run the make linux command to compile the source. If you get errors, try make nc instead. Either way, you end up with a new binary called nc in the current directory. Install the binary in a suitable directory, such as /usr/bin, using the cp nc /usr/bin; chmod 755 /usr/bin/nc commands.
Now you have the Netcat binary nc installed on your system. You should be able to access it from anywhere as long as you have the /usr/bin directory in your path.
Telnet is a very limited TCP client, and using it in scripts is difficult due to its limitations. For example, the telnet client doesn't allow you to send arbitrary binary data to the other side, because it interprets some byte sequences as its internal command options. Telnet also mixes network output with its own internal messages, which makes it difficult to keep the raw output clean. It's also not capable of receiving connection requests, nor can it use UDP packets. Netcat doesn't have any of these limitations; it's much smaller and faster than telnet and has many other advantages.
The simplest way to use Netcat is to run nc hostname portnumber. This makes
a TCP connection to the named host on the given port. There are many command-
line options that you can use. Run nc -h to find out about all the command-line
options.
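The server command described next is omitted from this excerpt; a hedged reconstruction:
/usr/bin/nc -v -l -p 9999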
This command tells Netcat (nc) to listen to port 9999 and be verbose (-v)
about it. This is the server.
Entering text on the client or the server sends the message to the other side. This acts like a simple talk or chat client/server solution.
If you want to display a text message to anyone who connects to the nc server running on port 9999, you can modify the command used to start nc in server mode as follows:
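A hedged reconstruction of the modified server command (the file path is a placeholder, as in the original text):
/usr/bin/nc -v -l -p 9999 < /path/to/file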
When you start the nc server using this command line, the contents of /path/to/file are displayed whenever a new client connects to the server. To disconnect the client from the server, press Control+C; the server automatically stops, too.
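The SMTP capture command described next is also omitted; a hedged reconstruction using nc's -o hex-dump option:
/usr/bin/nc -o /tmp/smtp.hex localhost 25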
This command connects to the local SMTP server on port 25 and allows you to
record all incoming and outgoing data in /tmp/smtp.hex in Hex format. For
example, a sample short session for the preceding command is shown below:
220 rhat.nitec.com ESMTP Sendmail 8.11.0/8.11.0; Tue, 27 Mar 2001 10:50:57 -0800
helo domain.com
250 rhat.nitec.com Hello IDENT:[email protected] [127.0.0.1], pleased to meet
you
quit
221 2.0.0 rhat.nitec.com closing connection
After connecting to the local SMTP server via nc, I entered the helo command
(spelled helo, not hello). The server responded with a greeting, then I entered the
quit command to log out. The data collected in the /tmp/smtp.hex is shown below:
In the preceding code, I marked the data I entered via nc in bold face.
◆ Lines starting with the < character represent data received by nc from the
remote server.
◆ Lines starting with the > character are generated by the local nc user (in
this case, me).
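The scan command itself is omitted from this excerpt; reconstructed from the description and the variants shown afterward:
/usr/bin/nc -v -w 2 -z rhat.nitec.com 1-1024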
This command port-scans all ports ranging from 1 to 1024 on the rhat.nitec.com system. Here's an example:
As you can see, nc shows a number of open ports. If you need to slow down the rate of the scan, use the -i seconds option to delay each probe by the specified number of seconds. Scanning is performed from the highest port in the range to the lowest. If you want to randomize the scanning of ports, use the -r option. You can also specify multiple ranges; for example, /usr/bin/nc -v -w 2 -z rhat.nitec.com 20-50 100-300 scans ports 20 to 50 and 100 to 300 only.
1. Run nc on the machine where you want to receive one or more files, as
follows:
/usr/bin/nc -l -p port_number | /bin/tar xvzfp -
2. Change the port_number to a real port number > 1024. For example:
/usr/bin/nc -l -p 9999 | /bin/tar xvzfp -
In this case, nc on the receiving host listens on local port 9999. Data
received by nc is passed to the tar program, which uncompresses the data
(due to the z flag) from STDIN (due to the - flag) and writes the files to
disk.
3. On the sender machine, run the following command:
/bin/tar cvzfp - path | /usr/bin/nc -w 3 receiver_host 9999
Change the path to the fully qualified path name of the file or the direc-
tory you want to copy and the receiver_host to the host name that is
listening for data. For example:
/bin/tar cvzfp - /etc | /usr/bin/nc -w 3 rhat.nitec.com 9999
This command transfers all the files in the /etc directory to the rhat.nitec.com system via port 9999.
Tcpdump
This is a powerful and popular network packet monitoring tool. It allows you to
dump the network traffic on STDOUT or in a file. You can download it from
http://www.tcpdump.org.
1. Download the source distribution from the official tcpdump site. Extract
the source using tar xvzf tcpdump-version.tar.gz command and
change directory to the newly created subdirectory.
2. Run ./configure to configure the source, then run the make && make
install command, which installs the binary in /usr/local/sbin
directory.
Now you can run tcpdump to monitor network traffic between hosts. For exam-
ple, to monitor traffic between a router called router.nitec.com and the local
machine I can run:
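The command is omitted from this excerpt; a hedged reconstruction:
/usr/local/sbin/tcpdump host router.nitec.com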
If I ping the router.nitec.com from the local machine, tcpdump displays out-
put such as the following:
Some tools for viewing and analyzing tcpdump trace files are available from the Internet Traffic Archive (http://www.acm.org/sigcomm/ITA/). You can also use a program called tcpslice (ftp://ftp.ee.lbl.gov/tcpslice.tar.Z) to analyze portions of tcpdump output.
LSOF
This is a very powerful diagnostic tool that allows you to associate open files with processes. It lists information about all files that are open by a process. You can install it from the source distribution as follows:
1. As root, extract the downloaded tar ball in a directory and change your
current directory to the newly created subdirectory.
2. Run ./configure linux to configure the source distribution. You are asked if you want to take an inventory of all the files in this distribution. Entering no works in most situations. If you think you may have a damaged distribution, say yes.
3. You are asked to customize the machine.h header file. Enter y to customize.
4. You are asked if you want to enable HASSECURITY. Entering y ensures that only the root user can run lsof to examine all open files; other users may examine only the files that belong to them.
5. You are asked to enable or disable WARNINGSTATE. Leave the default as is; allowing lsof to display warnings is a good idea.
By default, HASKERNIDCK is disabled. This option allows lsof to verify its own integrity, but the process is slow. Keep the default unless you're paranoid (like me).
6. You are asked whether you want to back up the old machine.h to machine.h.old and use the new one. Enter y to accept.
/usr/sbin/lsof /path/to/filename
/path/to/filename is the fully qualified pathname of the file you want lsof to
report on, like this:
/usr/sbin/lsof /var/log/messages
In this case, I want to know which process opened this log file. A typical output
is
As you can see, only the Syslog daemon (syslogd) has opened this file, which is
expected because this daemon writes to this file.
As you can see, the file is still growing even though it's deleted! In the preceding example, the file has grown to 31,416,320 bytes (~31MB). The +L1 option tells lsof to display all files that have at most one link associated with them. This limits the output to files with a single link or zero links (deleted files).
Signaling or terminating the program is the only course of action you can take. In this example, you can run killall yes to terminate all instances of the yes command. You can also use the kill -9 PID command to terminate the command, using the PID shown in the lsof output.
You can limit lsof to report link counts on a single filesystem by using the -a
option. For example, to limit the output to /usr partition, you can enter /usr/
sbin/lsof -a +L1 /usr.
Remember these limitations:
For example, it may not report link counts for First In First Out (FIFO)
buffers, pipes, and sockets.
◆ The link count also doesn't work properly on NFS filesystems.
/usr/sbin/lsof mountpoint
/usr/sbin/lsof -i@n.nitec.com
A typical output is
Here, lsof reports that several commands are connected to the 207.183.233.19 (n.nitec.com) address. smbd (the Samba daemon), sshd (the SSH daemon), and an nc command are connected to the given IP address. The last connection is suspicious because it offers the remote IP a connection on an odd (nonstandard) port, 9999, and it's also run by a regular user. Therefore, it needs to be investigated further.
Typically, network administrators use the netstat program to casually monitor network connection activity. However, netstat and lsof can work together to solve problems.
Here’s a typical output from netstat:
Now you need to find out why a remote host, nano.nitec.com, is on port 9999 of your system. What process is serving this remote connection? Run /usr/sbin/lsof -i@nano.nitec.com and you get the name of the process, and its owner, that is serving this port on the local system.
kabir 9233 0.0 0.1 1552 652 pts/1 S 08:21 0:00 nc -l -p 9999
Now I want to know what files are opened by the nc command whose PID is
9233. So I ran /usr/sbin/lsof -p 9233. The following output appears:
This output shows all the files, devices, libraries, and network sockets that this
program has currently opened. If you want to find all instances of a command and
which files each have opened, run /usr/sbin/lsof -c command_name. For exam-
ple, /usr/sbin/lsof -c sshd shows all instances of the sshd server and their
opened files.
Ngrep
This is the grep for network traffic. It uses the libpcap library to access
network packets. This tool allows you to specify regular expressions to match
against contents within TCP, UDP, and ICMP packets. You can download the source
distribution from http://www.packetfactory.net/Projects/ngrep. To install it,
do the following:
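The installation follows the usual source-build routine (the version number here is illustrative):
1. su to root.
2. Extract the source distribution with tar xvzf ngrep-1.38.tar.gz and change to the newly created ngrep directory.
3. Run ./configure, then make, and finally make install.
Once ngrep is installed, you can run a command like the following (reconstructed to parallel the FTP example later in this section):
/usr/bin/ngrep -qd eth0 'yahoo' tcp port 80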
This command tells ngrep to monitor the eth0 interface for TCP packets
on port 80 that contain the string 'yahoo'. On a host on the same subnet,
I can use a Web browser to request target network packets as follows:
http://www.google.com/search?q=yahoo
Since the packet generated by that host has the string 'yahoo' in the data,
ngrep catches it as follows:
T 207.183.233.19:2225 -> 216.239.37.100:80 [AP]
GET /search?q=yahoo HTTP/1.1..Accept: image/gif, image/x-xbitmap,
image/jpeg, image/pjpeg, application/vnd.ms-powerpoint,
application/vnd.ms-excel, application/msword, */*..Referer:
http://www.google.com/..Accept-Language: en-us..Accept-Encoding:
gzip, deflate..User-Agent: Mozilla/4.0 (compatible; MSIE 5.5;
Windows NT 5.0)..Host: www.google.com..Connection: Keep-Alive..
Cookie: PREF=ID=5b67841b6f7a913d:TM=982401088:LM=982401088....
/usr/bin/ngrep -qd eth0 'USER|PASS' tcp port 21
Here, ngrep is told to monitor interface eth0 for either 'USER' or 'PASS'
strings in TCP traffic on port 21, which is the command port for FTP connections.
If I connect to this host from another workstation on the same network,
I can get the username and password used for an FTP connection. A sample
of ngrep-captured data is
T 207.183.233.19:2267 -> 207.183.233.20:21 [AP]
USER kabir..
T 207.183.233.19:2267 -> 207.183.233.20:21 [AP]
Summary
Ensuring computer security is a hard job. Thankfully, many tools, such as
security assessment and audit tools, port scanners, log monitoring and
analysis tools, CGI scanners, password crackers, packet sniffers, and
intrusion detection tools, can help you identify, detect, and possibly (and
hopefully) eliminate security holes. Ironically, these tools are also
available to the hackers. Hopefully, you will run them on your system and
eliminate your security vulnerabilities before the hackers know anything
about them.
Appendix A
IP Network Address
Classification
Currently, IPv4 (32-bit IP) addresses are used on the Internet. A 32-bit IP address is
divided into two parts: the network-prefix and the host-number. Traditionally, each
IP address belongs to an IP class.
The class hierarchy is identified by a self-encoded key in each IP address.
For example, a Class A address consists of:
◆ An 8-bit network-prefix with the first bit set to 0
◆ A 24-bit host-number
Each Class A network can have 16,777,214 (2^24 - 2) possible hosts. We must
exclude the x.0.0.0 and x.255.255.255 host addresses, where x is the 8-bit
network-prefix with the first bit set to 0. The address with the host bits all
0s is considered the network address; the one with the host bits all 1s is
considered the broadcast address.
Subnetting IP networks
Each of the IP classes (A, B, or C) can be subdivided into smaller networks
called subnets. When you divide any of the IP classes into subnets, you gain two
advantages:
◆ You help minimize the routing table for your outermost router.
◆ You can organize your address space around the divisions of your organization.
For example, say you have a Class B network, 130.86.32.0. Your router to the
Internet receives all IP packets for this entire network. After the packets enter your
network, however, you might use several subnets to route the IP packets to differ-
ent divisions of your organization. In the absence of subnetting, you would either
use a large Class B network or multiple Class C networks to support the divisions of
your organization. In the latter case, your router table would have to list a larger
number of routes — which usually slows down the performance of the network.
To create a subnet, you divide the host-number portion of the IP class address
into two components:
◆ Subnet number
◆ Host number
The old class hierarchy had two levels (network-prefix and host-number); the
subnetted hierarchy has three (network-prefix, subnet-number, and host-number);
thus, subnetting shifts addressing from a two-level class hierarchy to a
three-level subnet hierarchy.
As an example of subnetting, consider a Class C network called 192.168.1.0. Say
that we want to create six subnets for this network where the largest subnet has 20
hosts.
The very first step is to determine how many bits are necessary to create six
subnets from the host-number part, which is 8 bits. Since each bit can be 1 or
0, we must subnet along binary boundaries; therefore, the possible subnet
counts are 2 (2^1), 4 (2^2), 8 (2^3), 16 (2^4), 32 (2^5), 64 (2^6), and
128 (2^7). Since we need six subnets, the best choice for us is 8 (2^3), which
gives us two additional subnets for the future. If we use a 3-bit subnet
number, there are five bits left in the new host-number. So the subnet mask
would be 255.255.255.224, or (in binary)
11111111.11111111.11111111.11100000. The subnet mask 255.255.255.224 can also
be represented as /27, which is called an extended network-prefix. At this
point, a Class C (/24) network is subnetted using /27 (a 3-bit extended
network-prefix). Table A-2 shows all the subnets possible for this network.
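As a quick check of the arithmetic: the eight /27 subnets fall on 32-address boundaries, namely 192.168.1.0, 192.168.1.32, 192.168.1.64, 192.168.1.96, 192.168.1.128, 192.168.1.160, 192.168.1.192, and 192.168.1.224. Within each subnet, the all-zeros host address is the subnet address and the all-ones host address is the broadcast address, so subnet 192.168.1.32/27, for example, supports the 30 (2^5 - 2) hosts 192.168.1.33 through 192.168.1.62, with 192.168.1.63 as its broadcast address.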
Appendix B
Common Linux Commands
You can enter multiple commands on a single command line by separating them
with a semicolon, as in:
clear; ls
When you enter a command, you type the name of the command, followed (if
necessary) by other information. Items that follow the name of the command and
modify how it works are arguments. For example, consider the following command
line:
wc -l doc.txt
Two arguments appear in this example: the -l and the doc.txt. It's up to the
program to determine how to use the arguments.
Two types of arguments exist: options and parameters. Options come right after
the program name and are usually prefixed with a dash (minus-sign character). The
parameters come after the options. From the preceding example, the -l is an option
telling wc to count the number of lines; doc.txt is a parameter indicating which
file to use.
When entering commands at the command line, remember that arguments are
case sensitive, just as filenames in Unix are. Overall, the general syntax of a
Unix command is
command [options] [parameters]
Basics of wildcards
When you use directory and file commands, you can use special characters called
wildcards to specify patterns in filenames — which identify what the command
593
must work on. For example, to list all the files in the current directory that end in
.c, use:
ls *.c
The asterisk is a wildcard. The shell interprets the pattern and replaces it with all
the filenames that end in .c. Table B-1 shows commonly used wildcards.
Wildcard Meaning
*        Matches any sequence of characters (including none)
?        Matches any single character
[...]    Matches any one of the characters enclosed in the brackets
Table B-2 shows examples of wildcard usage in various locations within the
filename.
Example   Meaning
*.txt     All files with names ending in .txt
doc?.txt  Files such as doc1.txt or docA.txt, where ? is any single character
[a-c]*    All files with names beginning with a, b, or c
As you can see, using wildcards can make selecting multiple items easy.
Symbol Meaning
.      Matches any single character except newline
^      Matches the pattern at the beginning of a line
$      Matches the pattern at the end of a line
\<     Matches the pattern at the beginning of a word
\>     Matches the pattern at the end of a word
[ ]    Matches any one of the enclosed characters; a hyphen indicates a range
Within a regular expression, any character that does not have a special meaning
stands for itself. For example
◆ To search for lines that contain "foo" in the file data.txt, use
grep foo data.txt
◆ To search for only lines in data.txt that begin with the word foo, use
grep ‘^foo’ data.txt
The use of single quotes tells the shell to leave these characters alone and
to pass them to the program. Single quotes are necessary whenever using
any of the special characters.
◆ The dollar sign indicates you want to match a pattern at the end of the
line:
grep ‘hello$’ data.txt
Any lines ending with “hello” result in a match using the preceding regu-
lar expression.
◆ To look for a pattern that begins a word, use \<, as in
grep '\<ki' data.txt
The preceding expression searches for words that begin with ki in the file
data.txt. To find the pattern wee, but only at the end of a word, use
grep 'wee\>' data.txt
From Table B-3, notice that the period matches any single character except
newline. This comes in handy if you are searching for all the lines that
contain the letter "C" followed by two characters and ending in "s"; here, the
regular expression is:
grep 'C..s' data.txt
This expression matches patterns like Cats, Cars, and Cris if they are in the
file data.txt.
If you want to specify a range of characters, use a hyphen to separate the
beginning and end of the range. When you specify the range, the order must be
the same as in the ASCII code. For example, to search for all the lines that
contain a "B" followed by any single lowercase letter, use:
grep 'B[a-z]' data.txt
It is also possible to specify more than one range of characters in the same
pattern:
grep 'B[A-Za-z]' data.txt
The preceding example selects all lines that contain the letter B followed by
an uppercase or lowercase letter.
Section Topic
1       User commands
2       System calls
3       Library functions
4       Special files and devices
5       File formats
6       Games
7       Miscellaneous
8       System administration commands
Modern Unix systems provide much more detail than their ancestors; commonly
their sections are broken into subsections. For instance, section 6 is a reference for
games. However, you may find a section 6a (for adventure games), a section 6c (for
classic games), and so on.
The man command allows you to view the online manuals. The syntax for this
command is as follows:
man [section] keyword
The keyword is usually the name of the program, utility, or function. The default
action is to search in all the available sections following a predefined order and to
show only the first page found, even if the keyword exists in several sections.
Because commands may appear in several sections, the first page found might
not be the man page you are looking for. The command printf is a good example.
Say you are writing a program and would like to know more about the ANSI C
library function printf(). Just by typing
man printf
you get information about printf. However, the man pages you see are for the
shell command printf, not the C function. Obviously, this is the wrong
information. A reasonable solution is to list all the sections that cover
printf (if it exists in multiple sections) and to select the correct one. You
can search the man page names and short descriptions with the -k option:
man -k printf
fprintf, printf, sprintf (3b) - formatted output conversion
The printf in question here is a library function. It is in section 3 (from Table B-4)
or, more specifically, in section 3b or 3s from the output. To specify a particular
section of a man page, pass it to man at the command line:
man 3b printf
cat
Syntax:
cat file [file ...]
The cat command displays the contents of a file to stdout. It is often helpful
to examine the contents of a file by using the cat command. The argument you
pass to cat is the name of the file you want to view. To view the total
contents of a file called name:
cat name
Kiwee
Joe
Ricardo
Charmaine
cat name1 name2 name3 > allnames
This example combines the files name1, name2, and name3 to produce the final
file allnames. You establish the order of the merge by the order in which you
enter the files at the command line.
Using cat, you can append a file to another file. For instance, if you forgot
to add a name4 file in the previous command, you can still produce the same
results by executing the following:
cat name4 >> allnames
chmod
Syntax:
chmod [options] octal-mode file/directory
You can use this command to change the permission mode of a file or directory.
The permission mode is specified as a three- or four-digit octal number, as in
this example (the filename is illustrative):
chmod 755 myscript.pl
chown
Syntax:
chown [options] owner[:group] file/directory
The chown command changes the owner of the file the File parameter specifies to
the user the Owner parameter specifies. The value of the Owner parameter can be a
user ID or a login name in the /etc/passwd file. Optionally, you can also specify a
group. The value of the Group parameter can be a group ID or a group name in the
/etc/group file.
Only the root user can change the owner of a file. You can change the group of
a file only if you are a root user or if you own the file. If you own the file but are
not a root user, you can change the group only to a group of which you are a
member. Table B-5 discusses the details of the chown options.
Option Description
-R     Recursively changes the ownership of files in all subdirectories
-f     Suppresses most error messages
-v     Verbosely describes each ownership change
The following example changes the ownership of a file (the names are
illustrative) from one user to another:
chown kabir myfile.txt
clear
Syntax:
clear
The clear command clears your terminal and returns the command line prompt
to the top of the screen.
cmp
Syntax:
cmp [-ls] file1 file2
This command compares the contents of two files. If the contents show no dif-
ferences, cmp by default is silent.
To demonstrate, file1.txt contains
this is file 1
the quick brown fox jumps over the lazy dog.
and file2.txt contains
this is file 2
the quick brown fox jumps over the lazy dog.
The only difference between the two files is in the first line, last character.
In one file, the character is 1, and the other file has a 2, as shown here:
cmp file1.txt file2.txt
file1.txt file2.txt differ: char 14, line 1
The results of cmp correctly identify character 14, line 1 as the unequal
character between the two files. The -l option prints the byte number and the
differing byte values for each of the files:
cmp -l file1.txt file2.txt
14 61 62
The results of the preceding example show us that byte 14 is different in the
first and second files; the first file has an octal 61 and the second file has
an octal 62 in that position.
Finally, the -s option displays nothing. The -s option only returns an exit status
indicating the similarities between the files. It returns 0 (zero) if the files are identi-
cal and 1 if the files are different. Last, the -s option returns a number >1 (greater
than 1) when an error occurs.
cp
Syntax:
cp [options] sourcefile targetfile
Use the cp command to make an exact copy of a file. The cp command requires
at least two arguments: the name of the file or directory you want to copy, and the
location (or filename) of the new file. If the second argument is a name for a direc-
tory that already exists, cp copies the source file into that directory. The command
line looks like this:
cp main.c main.c.bak
The preceding example copies the existing file main.c and creates a new file
called main.c.bak in the same directory. These two files are identical, bit for bit.
cut
Syntax:
cut [-c list] [-f list] [-d delimiter] file
The cut command extracts columns of data. The data can be in bytes, charac-
ters, or fields from each line in a file. For instance, a file called names contains
information about a group of people. Each line contains data pertaining to one per-
son, like this:
Fast Freddy:Sacramento:CA:111-111-1111
Joe Smoe:Los Angeles:CA:222-222-2222
Drake Snake:San Francisco:CA:333-333-3333
Bill Steal:New York:NY:444-444-4444
To list the names and telephone numbers of all individuals in the file, the
options -f and -d suffice:
cut -d: -f1,4 names
The -f list option specifies the fields you elect to display. The -d option
defines the field delimiter. In the preceding example, -d : indicates that a
colon separates each field. Using : as the field delimiter makes fields 1 and
4 the name and phone number fields.
To display the contents of particular columns, use the -c list option:
cut -c1-5 names
The preceding example shows how to list columns 1 through 5 in the names
file, and nothing else.
diff
Syntax:
diff [options] file1 file2
You can use the diff command to determine differences between files and/or
directories. By default, diff does not produce any output if the files are identical.
The diff command is different from the cmp command in the way it compares
the files. The diff command is used to report differences between two files, line by
line. The cmp command reports differences between two files character by character,
instead of line by line. As a result, it is more useful than diff for comparing binary
files. For text files, cmp is useful mainly when you want to know only whether two
files are identical.
Considering changes character by character entails practical differences from
considering changes line by line. To illustrate, think of what happens if you add a
single newline character to the beginning of a file. If you compare that file with an
otherwise-identical file that lacks the newline at the beginning, diff reports that a
blank line has been added to the file, and cmp reports that the two files differ in
almost every character.
The normal output format consists of one or more hunks of differences; each
hunk shows one area in which the files differ. Normal format hunks look like this:
change-command
< from-file-line
< from-file-line. . .
---
> to-file-line
> to-file-line. . .
Three types of change commands are possible. Each consists of a line number (or
comma-separated range of lines) in the first file, a single character (indicating the
kind of change to make), and a line number (or comma-separated range of lines) in
the second file. All line numbers are the original line numbers in each file. The spe-
cific types of change commands are as follows:
◆ ‘lar’: Add the lines in range r of the second file after line l of the first
file. For example, ‘8a12,15’ means append lines 12–15 of file 2 after
line 8 of file 1 or (if you’re changing file 2 into file 1) delete lines 12–15
of file 2.
◆ ‘fct’: Replace the lines in range f of the first file with lines in range t of
the second file. This is like a combined add-and-delete but more compact.
For example, ‘5,7c8,10’ means change lines 5–7 of file 1 to read the
same as lines 8–10 of file 2 or (if changing file 2 into file 1) change lines
8–10 of file 2 to read the same as lines 5–7 of file 1.
◆ ‘rdl’: Delete the lines in range r from the first file; line l is where they
would have appeared in the second file had they not been deleted. For
example, ‘5,7d3’ means delete lines 5–7 of file 1 or (if changing file 2
into file 1) append lines 5–7 of file 1 after line 3 of file 2.
To illustrate, suppose the file first.txt (the filenames here are illustrative)
contains
a
b
c
d
e
and the file second.txt contains
c
d
e
f
g
Running diff first.txt second.txt produces the following output:
1,2d0
< a
< b
5a4,5
> f
> g
The diff command produces output that shows how the files are different and
what has to happen for the files to be identical. First, notice that c is the
first common character between the two files. The first line of the output
reads 1,2d0. This is interpreted as deleting lines 1 and 2 of the first file,
lines a and b. Next, the output reads 5a4,5. The a signifies append. If we
append lines 4 through 5 of the second file after line 5 of the first file,
the files are identical.
The diff command has some common options. The -i option ignores changes in
case; diff considers upper- and lowercase characters equivalent. The -q option
summarizes information; in effect, the -q option reports only whether the
files differ at all. A sample output looks like this:
Files first.txt and second.txt differ
The -b option ignores changes in the amount of whitespace. The phrase
"the foo" is equivalent to "the    foo" if you use the -b option.
du
Syntax:
du [-ask] filenames
This command summarizes disk usage. If you specify a directory, du reports the
disk usage for that directory and any directories it contains. If you do not specify a
filename or directory, du assumes the current directory. du -a breaks down the total
and shows the size of each directory and file. The -s option will just print the total.
Another useful option is the -k option. This option prints all file sizes in kilobytes.
Here are some examples of the various options:
du -a
247 ./util-linux_2.9e-0.1.deb
130 ./libncurses4_4.2-2.deb
114 ./slang1_1.2.2-2.deb
492 .
du -s
492 .
emacs
The emacs program, a full-screen visual editor, is one of the best editors. It is
known for its flexibility and power as well as for being a resource hog. The power
of emacs is not easily obtained. There is a stiff learning curve that requires patience
and even more patience. There can be as many as four sequential key combinations
to perform certain actions.
However, emacs can do just about anything. Aside from the basic editing fea-
tures, emacs supports the following: syntax highlighting, macros, editing multiple
files at the same time, spell checking, mail, FTP, and many other features.
When reading about emacs, you'll often see terms like meta key and C-x. The
meta key is the key labeled Meta on your keyboard (if you have one); most
commonly, you use the Esc key instead. C-x is the syntax for Ctrl plus the X
key. Any "C-" combination refers to the Ctrl key.
The two most important key combinations to a new emacs user are the C-x C-c
and C-h C-h combinations. The first combination exits emacs. You’d be surprised
to know how many people give up on emacs just because they cannot exit the pro-
gram the first time they use it. Also, C-h C-h displays online help, where you can
follow the tutorial or get detailed information about a command. Table B-6 shows
emacs’s most commonly used commands:
Commands  Effects
C-x C-c   Exits emacs
C-h C-h   Displays online help
C-x C-f   Opens a file
C-x C-s   Saves the current buffer
C-x u     Undoes the last change
fgrep
The fgrep command is designed to be a faster-searching program (as opposed to
grep). However, it can search only for exact characters, not for general specifica-
tions. The name fgrep stands for “fixed character grep.” These days, computers and
memory are so fast that there is rarely a need for fgrep.
file
Syntax:
file filename
The file command determines the file’s type. If the file is not a regular file, this
command identifies its file type. It identifies the file types directory, FIFO, block
special, and character special as such. If the file is a regular file and the file is zero-
length, this command identifies it as an empty file.
If the file appears to be a text file, file examines the first 512 bytes and tries to
determine its programming language. If the file is an executable a.out, file prints
the version stamp, provided it is greater than 0.
file main.C
main.C: c program text
find
Syntax:
find [path] [-type fdl] [-name pattern] [-atime [+-]number_of_days]
[-exec command {} \;] [-empty]
find . -type d
The find command returns all subdirectory names under the current directory.
The -type option is typically set to d (for directory) or f (for file) or l (for links).
The following command finds all text files (with names ending in the .txt
extension) in the current directory, including all its subdirectories:
find . -name "*.txt"
The next command searches all text files in the current directory, including
all its subdirectories, for the keyword "magic," and returns their names
(because -l is used with grep):
find . -name "*.txt" -exec grep -l "magic" {} \;
The following command finds all GIF files that have been accessed in the past
24 hours (one day) and displays their details using the ls -l command:
find . -name "*.gif" -atime -1 -exec ls -l {} \;
grep
Syntax:
grep [options] pattern file(s)
The grep command allows you to search for one or more files for particular
character patterns. Every line of each file that contains the pattern is displayed at
the terminal. The grep command is useful when you have lots of files and you want
to find out which ones contain words or phrases.
Using the -v option, you can display the inverse of a pattern. Perhaps you
want to select the lines in data.txt that do not contain the word "the":
grep -vw the data.txt
If you do not specify the -w option, any word containing "the" matches, such
as "toge[the]r." The -w option specifies that the pattern must be a whole
word. Finally, the -i option ignores the difference between uppercase and
lowercase letters when searching for the pattern.
Much of the flexibility of grep comes from the fact that you can specify not
only exact characters but also a more general search pattern. To do this, use
what are known as regular expressions.
head
Syntax:
head [-number] file
This command displays the first few lines of a file. By default, it displays the first
10 lines of a file. However, you can use the preceding options to specify a different
number of lines.
head -2 doc.txt
# Outline of future projects
# Last modified: 02/02/99
The preceding example illustrates how to view the first two lines of the text file
doc.txt.
ln
Syntax:
ln [-s] sourcefile target
ln creates two types of links: hard and soft. Think of a link as two names for the
same file. Once you create a link, it’s indistinguishable from the original file. You
cannot remove a file that has hard links from the hard disk until you remove all
links. You create hard links without the -s option.
ln ./www ./public_html
However, a hard link does have limitations. A hard link cannot link to another
directory, and a hard link cannot link to a file on another file system. Using the -s
option, you can create a soft link, which eliminates these restrictions.
ln -s /dev/fs02/jack/www /dev/fs01/foo/public_html
This command creates a soft link between the directory www on file system 2 and
a newly created file public_html on file system 1.
locate
Syntax:
locate keyword
The locate command finds the path of a particular file or command. locate
finds an exact or substring match, as in this example:
locate foo
/usr/lib/texmf/tex/latex/misc/footnpag.sty
/usr/share/automake/footer.am
/usr/share/games/fortunes/food
/usr/share/games/fortunes/food.dat
/usr/share/gimp/patterns/moonfoot.pat
The output locate produces contains the keyword “foo” in the absolute path or
does not have any output.
ls
Syntax:
ls [options] [directory/file]
This command lists the contents of a directory. With no arguments, ls lists
the files in the current directory; commonly used options include -l (long
listing), -a (list hidden files), and -F (append file-type indicators).
mkdir
Syntax:
mkdir directory . . .
To make a directory, use the mkdir command. You have only two restrictions
when choosing a directory name: (1) File names can be up to 255 characters long,
and (2) directory names can contain any character except the /.
mv
Syntax:
mv [-if] sourcefile targetfile
Use the mv command to move or rename directories and files. The command per-
forms a move or rename depending on whether the targetfile is an existing direc-
tory. To illustrate, suppose you give a directory called foo the new name of foobar.
mv foo foobar
Because foobar does not already exist as a directory, foo becomes foobar. If
you issue the following command,
mv doc.txt foobar
and foobar is an existing directory, you perform a move. The file doc.txt now
resides in the directory foobar.
The -f option removes existing destination files and never prompts the user. The
-i option prompts the user whether to overwrite each destination file that exists. If
the response does not begin with “y” or “Y,” the file is skipped.
pico
Syntax:
pico [filename]
This full-screen text editor is very user-friendly and highly suitable for users
who migrate from a Windows or DOS environment.
pwd
Syntax:
pwd
This command prints the current working directory. The directories displayed are
the absolute path. None of the directories displayed are hard or soft symbolic links.
pwd
/home/usr/charmaine
rm
Syntax:
rm [-rif] directory/file
To remove a file or directory, use the rm command. Here are some examples:
rm doc.txt
rm ~/doc.txt
rm /tmp/foobar.txt
To remove multiple files with rm, you can use wildcards, or you can type each
file individually. For example,
rm doc1.txt doc2.txt doc3.txt
is equivalent to:
rm doc[1-3].txt
rm is a powerful command that can cause chaos if used incorrectly. For
instance, suppose you have the thesis you've worked so hard on for the last
six months. You decide to rm all your docs, thinking you are in another
directory. After finding out that a backup file does not exist (and you are
no longer in denial), you wonder whether there was any way to prevent this.
The rm command has the -i option that allows rm to be interactive. This tells rm
to ask your permission before removing each file. For example, if you entered:
rm -i *.doc
rm: remove thesis.doc (yes/no)? n
The -i option gives you a parachute. It’s up to you to either pull the cord
(answer no) or suffer the consequences (answer yes). The -f option is completely
the opposite. The -f (force) option tells rm to remove all the files you specify,
regardless of the file permissions. Use the -f option only when you are 100 percent
sure you are removing the correct file(s).
To remove a directory and all files and directories within it, use the -r option.
rm -r will remove an entire subtree.
rm -r documents
If you are not sure what you are doing, combine the -r option with the -i
option:
rm -ri documents
The preceding example asks for your permission before it removes every file and
directory.
sort
Syntax:
sort [options] file(s)
The obvious task this command performs is to sort. However, sort also merges
files. The sort command reads files that contain previously sorted data and merges
them into one large, sorted file.
The simplest way to use sort is to sort a single file and display the results on
your screen. If a.txt contains:
b
c
a
d
sort a.txt
a
b
c
d
To save sorted results, use the -o option: sort -o sorted.txt a.txt saves the
sorted a.txt file in sorted.txt. To use sort to merge existing sorted files
and save the output in sorted.txt, use (the filenames are illustrative):
sort -m -o sorted.txt a.txt b.txt
The -r option for this command reverses the sort order. Therefore, a file that
contains the letters of the alphabet on a line is sorted from z to a if you use the
-r option.
The -d option sorts files based on dictionary order. The sort command consid-
ers only letters, numerals, and spaces and ignores other characters.
The -u option looks for identical lines and suppresses all but one. Therefore, sort
produces only unique lines.
stat
Syntax:
stat file
stat foo.txt
File: "foo.txt"
Size: 4447232 Filetype: Regular File
Mode: (0644/-rw-r--r--) Uid: ( 0/root) Gid: ( 0/root)
Device: 3,0 Inode: 16332 Links: 1
Access: Mon Mar 1 21:39:43 1999(00000.02:32:30)
Modify: Mon Mar 1 22:14:26 1999(00000.01:57:47)
Change: Mon Mar 1 22:14:26 1999(00000.01:57:47)
You can see the following displayed: file access, modification, change date, size,
owner and group information, permission mode, and so on.
strings
Syntax:
strings filename
The strings command prints character sequences at least four characters long.
You use this utility mainly to describe the contents of nontext files.
tail
Syntax:
tail [options] file
The tail command displays the end of a file. By default, tail displays the
last 10 lines of a file. To display the last 50 lines of the file doc.txt,
issue the command:
tail -50 doc.txt
The -r option displays the output in reverse order. By default, -r displays all
lines in the file, not just 10 lines. For instance, to display the entire contents of the
file doc.txt in reverse order, use:
tail -r doc.txt
To display the last 10 lines of the file doc.txt in reverse order, use:
tail -10r doc.txt
Finally, the -f option is useful when you are monitoring a file. With this option,
tail waits for new data to be written to the file by some other program. As new
data are added to the file by some other program, tail displays the data on the
screen. To stop tail from monitoring a file, press Ctrl+C (the intr key) because the
tail command does not stop on its own.
touch
Syntax:
touch file/directory
This command updates the timestamp of a file or directory. If the named file
does not exist, this command creates it as an empty file.
umask
See the section on default file permissions for users in Chapter 4.
uniq
Syntax:
uniq [options] file
The uniq command compares adjacent lines and displays only one unique line.
When used with the -c option, uniq counts the number of occurrences. For
example, say the file test.txt has the contents:
a
a
a
b
a
uniq test.txt
a
b
a
Notice that the adjacent a’s are removed — but not all a’s in the file. This is an
important detail to remember when using uniq. If you would like to find all the
unique lines in a file called test.txt, you can run the following command:
sort test.txt | uniq
This command sorts the test.txt file and puts all similar lines next to each
other, allowing uniq to display only unique lines. For example, say that you want
to find out quickly how many unique visitors come to your Web site; you can
run a command like the following (the log filename is illustrative):
awk '{print $1}' access.log | sort | uniq
This displays the unique IP addresses in a CLF log file, which is what Apache
web server uses.
vi
The vi program is a powerful full-screen text editor that you can find on
almost all Unix systems because of its small size and capabilities. The vi
editor does not require much in the way of resources to utilize its features.
In addition to the basic edit functions, vi can search, replace, and
concatenate files, and it has its own macro language, as well as a number of
other features.
vi has two modes: input and command. When vi is in input mode, you can
enter, insert, or append text in a document. When vi is in command mode, you tell
vi what to do from the command line — move within the document, merge lines,
search, and so on. You can carry out all vi functions from command mode except
the entering of text. You can enter text only in input mode.
A typical vi newbie assumes he is in input mode and begins typing his docu-
ment. He expects to see his newly inputted text, but what he really sees is his cur-
rent document mangled because he is in command mode.
When vi starts, it is in command mode. You can go from command mode to
input mode by using one of the following commands: [aAiIoOcCsSR]. To return to
command mode, press the Esc key for normal exit, or press Interrupt (the Ctrl+C key
sequence) to end abnormally.
Table B-7 shows a summary of common vi commands and their effects in com-
mand mode.
Keys        Effects
h, j, k, l  Move the cursor left, down, up, and right
i           Enter input mode, inserting text before the cursor
a           Enter input mode, appending text after the cursor
dd          Delete the current line
Commands    Effects
:w          Save the current file
:q          Quit vi (use :q! to quit without saving)
wc
Syntax:
wc [-lwc] filename
The wc (word count) command counts lines, characters, and words. If you use
the wc command without any options, the output displays all statistics of the
file. For example, say the file test.txt contains the following text (a line
consistent with the counts reported below):
the quick brown fox jumps over the lazy dog
Running wc test.txt produces this output:
1 9 44 test.txt
The results tell us there is one line with nine words containing 44 characters in
the file test.txt. To display only the number of lines, use the -l option. The -w
option displays only the number of words. Finally, the -c option displays only the
total number of characters.
whatis
Syntax:
whatis keyword
This command displays a one-line description for the keyword entered in the
command line. The whatis command is identical to typing man -f. For instance, if
you want to display the time but you are not sure whether to use the time or
date command, enter:
whatis time date
Looking at the results, you can see that the command you want is date. The
time command actually measures how long it takes for a program or command to
execute.
whereis
The whereis command locates the source, binary, and manual page files for the
specified commands. The command first strips the supplied names of leading
pathname components and any (single) trailing file extension, such as .c or
.h. Prefixes of s. resulting from use of source code control are also dealt
with.
whereis ls
ls: /bin/ls /usr/man/man1/ls.1.gz
The preceding example indicates the location of the command in question. The
ls command is in the /bin directory, and its corresponding man pages are at
/usr/man/man1/ls.1.gz.
which
Syntax:
which command
The which command displays the path and aliases of any valid, executable
command.
which df
/usr/bin/df
The preceding example shows us that the df command is in the /usr/bin direc-
tory. which also displays information about shell commands.
which setenv
setenv: shell built-in command.
compress
Syntax:
compress [-v] file(s)
The compress command attempts to reduce the size of a file using the adaptive
Lempel-Ziv coding algorithm. The compressed file replaces the original file
and carries a .Z extension.
Using any type of compression for files is significant because smaller file sizes
increase the amount of available disk space. Also, transferring smaller files across
networks reduces network congestion.
The -v (verbose) option displays the percentage of reduction for each file you
compress and tells you the name of the new file. Here is an example of how to use
the compress command:
ls -alF inbox
-rw-------   1 username cscstd 194261 Feb 23 20:12 inbox
compress -v inbox
inbox: Compression: 37.20% — replaced with inbox.Z
ls -alF inbox.Z
-rw-------   1 username cscstd 121983 Feb 23 20:12 inbox.Z
gunzip
Syntax:
gunzip [-v] file(s)
To decompress files to their original form, use the gunzip command. gunzip
attempts to decompress files ending with the following extensions: .gz, -gz,
.z, -z, _z, .Z, or .tgz.
The -v option displays a verbose output when decompressing a file.
gunzip -v README.txt.gz
README.txt.gz: 65.0% — replaced with README.txt
gzip
Syntax:
gzip [options] file(s)
The gzip command is another compression program. It is known for having one
of the best compression ratios, but at a price: it can be considerably slow.
Files compressed with gzip are replaced by files with a .gz extension.
The -9 option yields the best compression at the expense of speed. The -v
option is the verbose option: the size, total, and compression ratio are
listed for each file. The -r option recursively traverses each directory,
compressing all the files along the way.
ls -alF README.txt
-rw-r--r--   1 root root 16213 Oct 14 13:55 README.txt
gzip -9v README.txt
README.txt: 65.0% — replaced with README.txt.gz
ls -alF README.txt.gz
-rw-r--r--   1 root root 5691 Oct 14 13:55 README.txt.gz
rpm
Syntax:
rpm [options] [package(s)]
This is the Red Hat Package Manager program. It allows you to manage RPM
packages, making it very easy to install and uninstall software.
For example, to install a package, run:
rpm -i precious-software-1.0.i386.rpm
You can make rpm a bit more verbose by using -ivh instead of just the -i
option. If you have installed the package and for some reason would like to install
it again, you need to use the --force option to force rpm.
If you are upgrading a software package, you should use the -U option, as in
this example (following the earlier package name):
rpm -U precious-software-1.1.i386.rpm
To list all the RPM packages installed on your system, run:
rpm -qa
To find out which package a program such as sendmail belongs to, run:
rpm -q sendmail
This returns the RPM package name you use to install sendmail. To find out
which package a specific file such as /bin/tcsh belongs to, run:
rpm -qf /bin/tcsh
This displays the package name of the named file. If you are interested in
finding the documentation that comes with a file, use the -d option along
with the -qf options. To list all files associated with a program or package,
such as sendmail, use the -l option, as shown here:
rpm -ql sendmail
To ensure that an installed package is not modified in any way, you can use the
-V option. For example, to verify that all installed packages are in their original
state, run the following:
rpm -Va
This option becomes very useful if you learn that you or someone else could
have damaged one or more packages.
To uninstall a package such as sendmail, run:
rpm -e sendmail
If you find that removing a package or program breaks other programs because
they depend on it or its files, you have to decide if you want to break these pro-
grams or not. If you decide to remove the package or the program, you can use the
--nodeps option with the -e option to force rpm to uninstall the package.
tar
Syntax:
tar [options] file(s)/directory
The tar command allows you to archive multiple files and directories into a
single .tar file. It also allows you to extract files and directories from
such an archive file, as in this example:
tar cf source.tar *.c
This command creates a tar file called source.tar, which contains all C
source files (ending with extension .c) in the current directory.
tar cvf source.tar *.c
Here the v option allows you to see which files are being archived by tar.
tar czvf backup.tar.gz important_dir
Here all the files and subdirectories of the directory called important_dir
are archived in a file called backup.tar.gz. Notice that the z option is
compressing this file; hence, the resulting file gets a .gz extension. Often,
the .tar.gz extension is shortened by many users to .tgz.
To extract an archive file, backup.tar, you can run:
tar xf backup.tar
uncompress
Syntax:
uncompress [-v] file(s)
When you use the compress command to compress a file, the file is no longer in its
original form. To return a compressed file to its original form, use the uncompress
command.
The uncompress command expects to find a file with a .Z extension, so the
command line “uncompress inbox” is equivalent to “uncompress inbox.Z.”
The -v option produces verbose output.
uncompress -v inbox.Z
inbox.Z: — replaced with inbox
unzip
Syntax:
unzip file(s)
This command decompresses files with the .zip extension. Such files may have
been created with the zip command, Phil Katz's PKZIP, or any other
PKZIP-compatible program.
uudecode
Syntax:
uudecode file
The uudecode command transforms a uuencoded file into its original form.
uudecode creates the file by using the target_name that the uuencode command
specified, which you can also identify on the first line of a uuencoded file.
To convert a uuencoded file (here, a.out.txt) back to its original form, run:
uudecode a.out.txt
As a result, you create the executable file a.out from the text file a.out.txt.
uuencode
Syntax:
uuencode [file] target_name
The uuencode command translates a binary file into readable form. You can do
so by converting the binary file into ASCII printable characters. One of the many
uses of uuencode is transmitting a binary file through e-mail. A uuencoded file
appears as a large e-mail message. The recipient can then save the message and use
the uudecode command to retrieve its binary form.
The target_name is the name given to the decoded file that uudecode
eventually creates. Consider this example:
uuencode a.out b.out > a.out.txt
This example uuencodes the executable program a.out. The target name that
uudecode will create is b.out, and the uuencoded version of a.out is saved in
the file a.out.txt.
zip
Syntax:
zip [options] zipfile file(s)
This command creates a PKZIP-compatible compressed archive with a .zip
extension.
dd
Syntax:
dd if=input-file of=output-file [conversion options]
The following command takes the /tmp/uppercase.txt file, writes a new file
called /tmp/lowercase.txt, and converts all characters to lowercase (lcase):
dd if=/tmp/uppercase.txt of=/tmp/lowercase.txt conv=lcase
To do the reverse, you can use the conv=ucase option. However, dd is most
widely used to write a boot image file to a floppy disk that has a file
system that mkfs has already created. For example:
dd if=/some/boot.image of=/dev/fd0 bs=16k
This command writes the /some/boot.image file to the first floppy disk
(/dev/fd0) in 16KB blocks.
df
Syntax:
df [options] [filesystem]
The df command summarizes the free disk space for the drives mounted on the
system. Hard disk space is a vital (and often scarce) resource in a computer; moni-
tor it carefully. Mismanagement of hard disk space can cause a computer to crawl
on its knees and can cause some unhappy users.
df
To view the disk space summary for the current file system:
df .
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/fs01 8192000 6381920 23% 20963 3% /home/fs01
Notice that the df output is printed in 512-byte blocks. It may seem odd to
think of blocks of this size if you are used to 1K blocks, or more precisely
1,024-byte blocks. The -k option displays the summary with 1,024-byte blocks
instead:
df -k .
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/fs01 4096000 3190960 23% 20963 3% /home/fs01
With the -k option in mind, the results are very different. If you interpret the
output incorrectly, you may run out of disk space sooner than you think.
edquota
See the section on assigning disk quotas to users in Chapter 7 for details.
fdformat
Syntax:
fdformat floppy-device
fdformat /dev/fd0H1440
This formats the first floppy disk (/dev/fd0) as a high-density 1.44MB disk.
fdisk
This command lets you create, modify, and delete disk partitions.
mkfs
Syntax:
mkfs [-t fstype] device
This command allows you to make a new file system, as in this example (the
partition name is illustrative):
mkfs -t ext2 /dev/hda6
mount
Syntax:
mount [options] [device] [directory]
This command mounts a file system. Typically, the mount options for commonly
used file systems are stored in /etc/fstab, as in this example of a typical
entry:
/dev/hda6 /intranet ext2 defaults 1 2
If the preceding line is in /etc/fstab, you can mount the file system stored
in partition /dev/hda6 as follows:
mount /intranet
Use the -t option to specify the file system type. To mount all the file
systems specified in /etc/fstab, use the -a option, as in this example:
mount -a -t ext2
The preceding command mounts all ext2 file systems. Commonly used arguments
for the -o option are ro (read-only) and rw (read/write), as in this example:
mount -o ro /dev/hda6 /intranet
quota
See the section on monitoring disk usage in Chapter 7 for details.
quotaon
See the section on configuring your system to support disk quotas in Chapter 7 for
details.
swapoff
Syntax:
swapoff -a
This command allows you to disable swap devices. The -a option allows you to
disable all swap partitions specified in /etc/fstab.
swapon
Syntax:
swapon -a
This command allows you to enable swap devices. The -a option allows you to
enable all swap partitions specified in /etc/fstab.
umount
Syntax:
umount [options] directory/device
This command unmounts a file system from the current system, as in this
example:
umount /cdrom
The preceding command unmounts a file system whose mount point is /cdrom
and the details of whose mount point are specified in /etc/fstab.
The -a option allows you to unmount all file systems (except for the proc file
system) specified in the /etc/fstab file. You can also use the -t option to specify
a particular file system type to unmount, as in this example:
umount -a -t iso9660
This command unmounts all iso9660-type file systems, which are typically
CD-ROMs.
DOS-Compatible Commands
If you need access to MS-DOS files from your Linux system, you need to install the
Mtools package. Mtools is shipped with Red Hat as an RPM package, so installing it
is quite simple. See the rpm command for details on how to install an RPM
package.
Mtools is a collection of utilities that allow you to read, write, and move around
MS-DOS files. It also supports Windows 95–style long filenames, OS/2 Xdf disks,
and 2m disks. The following section covers the common utilities in the Mtools
package.
mcopy
Syntax:
mcopy [-tm] sourcefile targetfile
Use the mcopy utility to copy MS-DOS files to and from Linux, as in this example:
mcopy /tmp/readme.txt b:
The preceding command copies the readme.txt file from the /tmp directory to
the b: drive. The -t option enables you to automatically translate carriage return/
line feed pairs in MS-DOS text files into new line feeds. The -m option allows you
to preserve file modification time.
mdel
Syntax:
mdel msdosfile
This utility deletes an MS-DOS file.
mdir
Syntax:
mdir [-/] msdosdirectory
This utility allows you to view an MS-DOS directory. The -/ option allows you
to view all the subdirectories as well.
mformat
Syntax:
mformat drive:
This utility allows you to format a floppy disk to hold a minimal MS-DOS file
system. I find it much easier to format a disk by using an MS-DOS machine than to
specify cylinders, heads, sectors, and so on.
mlabel
Syntax:
mlabel [-vcs] drive:[new_label]
This utility displays the current volume label (if any) of the named drive
and prompts for a new label if you do not enter one after the drive: in the
command line. The -v option prints a hex dump of the boot sector of the named
drive; the -c option clears the existing volume label; and the -s option
shows the existing label of the drive.
dmesg
Syntax:
dmesg
This program prints the status messages the kernel displays during bootup.
free
Syntax:
free
This program displays memory usage statistics. An example of output looks like
this:
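(Illustrative output from a kernel 2.2-era system; the numbers are made up and will differ on your machine.)
             total       used       free     shared    buffers     cached
Mem:        126644     122220       4424      31408      24724      59028
-/+ buffers/cache:      38468      88176
Swap:       133016        752     132264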
shutdown
Syntax:
shutdown [options] time [warning-message]
For example, to reboot the system immediately, run:
shutdown -r now
To halt the system after the shutdown, replace the -r with -h. The -k option
allows you to simulate a shutdown event, as in this example:
shutdown -k now
This command sends a fake shutdown message to all users. The -t option allows
you to specify a delay in seconds between the warning message and the actual
shutdown event. In such a case, if you decide to abort the shutdown, run
shutdown again with the -c option to cancel it.
Note that you can use the HH:MM format to specify the time, as in this example:
shutdown -r 12:55
This reboots the system at 12:55. You can also use +minutes to specify time, as
in this example:
shutdown -r +5
This starts the shutdown process five minutes after you print the warning message.
uname
Syntax:
uname [options]
This command displays information about the current system, as in this example:
uname -a
Linux picaso.nitec.com 2.0.36 #1 Tue Oct 13 22:17:11 EDT 1998 i586 unknown
The -m option displays the system architecture (for instance, i586); the -n option
displays the host name (for instance, picaso.nitec.com); the -r option displays
the release version of the operating system (for instance, 2.0.36); the -s option
displays the operating system name (for instance, Linux); and the -v option dis-
plays the local build version of the operating system (for instance, #1 Tue Oct 13
22:17:11 EDT 1998).
uptime
Syntax:
uptime
This command displays current time, how long the system has been up since the
last reboot, how many users are connected to the server, and the system load in the
last 1, 5, and 15 minutes.
chfn
See the section on modifying an existing user account in Chapter 7.
chsh
See the section on modifying an existing user account in Chapter 7.
groupadd
See the section on creating a new group in Chapter 7.
groupmod
See the section on modifying an existing group in Chapter 7.
groups
Syntax:
groups [username]
This command displays the list of group(s) the named user currently belongs to.
If no username is specified, this command displays the current user’s groups.
last
Syntax:
last [username] [reboot]
This command displays a list of users who have logged in since /var/log/wtmp
was created, as in this example:
last julie
This command shows the number of times user julie has logged in since the last
time /var/log/wtmp was created.
last reboot
This displays the number of times the system has been rebooted since /var/
log/wtmp file was created.
passwd
Syntax:
passwd username
This command allows you to change a user’s password. Only a superuser can
specify a username; everyone else must type passwd without any argument, which
allows the user to change his or her password. A superuser can change anyone’s
password using this program.
su
Syntax:
su [-] [username]
You can use the su command to change into another user, as in this example:
su john
This command allows you to be the user john as long as you know john’s pass-
word, and this account exists on the server you use.
The most common use of this command is to become root. For example, if you
run this command without any username argument, it assumes that you want to be
root and prompts you for the root password. If you enter the correct root pass-
word, su runs a shell by using the root’s UID (0) and GID (0). This allows you effec-
tively to become the root user and to perform administrative tasks. This command
is very useful if you have only Telnet access to the server. You can telnet into the
server as a regular user and use it to become root to perform system administrative
tasks. If you supply the - option, the new shell is marked as the login shell. Once you
become the root user, you can su to other users without entering any password.
useradd
See the section on creating new user account in Chapter 7.
userdel
See the section on deleting or disabling a user account in Chapter 7.
usermod
See the section on modifying an existing user account in Chapter 7.
who
Syntax:
who
This command displays information about the users who are currently logged in
to a system. You can also use the w command for the same purpose.
whoami
Syntax:
whoami
This command displays the username associated with the current effective
user ID.
finger
Syntax:
finger user@host
This program allows you to query a finger daemon at the named host, as in this
example:
finger kabir@nitec.com
ftp
Syntax:
ftp [hostname/IP address]
This is the default FTP client program. You can use this to FTP to an FTP server,
as in this example:
ftp ftp.cdrom.com
lynx
Syntax:
lynx [options] [URL]
This is the most popular interactive text-based Web browser, as in this example:
lynx http://www.integrationlogic.com/
This command displays the top page of the site. It is a very handy program to
have. For example, say you want to quickly find out what kind of Web server
the site uses without asking the Webmaster. You can run the following command:
lynx -head http://www.integrationlogic.com/
This displays the HTTP header the lynx browser receives from the Web server.
An example of output (abbreviated; the exact fields vary from server to
server) is shown here:
HTTP/1.1 200 OK
Date: Mon, 05 Feb 2001 18:22:31 GMT
Server: Apache/1.3.3 (Unix)
Content-Type: text/html
As you can see, this header shows that www.integrationlogic.com runs on the
Apache 1.3.3 Web server on a Unix platform. Note that not all Web sites give
their Web server platform information, but most do. If you would like to avoid the
interactive mode, you can use the -dump option to dump the page to the screen
(stdout). For example,
lynx -dump -head http://www.integrationlogic.com/
dumps the header to stdout. The -dump feature can be quite handy, as in this
example (the URL is illustrative):
lynx -dump http://www.integrationlogic.com/new.gif > new.gif
This allows you to save the file new.gif from the Web server host in a local
file called new.gif.
The interactive mode allows you to browse sites that are compatible with text-
only browsers.
mail
Syntax:
mail [-s subject] [address]
This is the default SMTP mail client program. You can use this program to send
or receive mail from your system. For example, if you run this program without
any argument, it displays an ampersand (&) prompt and shows you the currently
unread mail by arranging the messages in a numbered list. To read a message, enter
the index number, and the mail is displayed. To learn more about mail, use the ?
command once you are at the & prompt.
To send a message to a user, such as kabir@nitec.com (an illustrative
address), with the subject header About your Red Hat book, you can run the
following command:
mail -s "About your Red Hat book" kabir@nitec.com
You can then enter your mail message and press Ctrl+D to end the message. You
can switch to your default text editor by entering ~v at the beginning of a line
while you are in compose mode.
If you have already prepared a mail message in a file, you can send it using
a command such as
mail -s "About your Red Hat book" kabir@nitec.com < feedback.txt
This sends a message with the given subject line; the message consists of the
contents of the feedback.txt file.
pine
Syntax:
pine
This is a full-screen SMTP mail client that is quite user-friendly. If you typically
use mail clients via telnet, you should definitely try this program. Because of its
user-friendly interfaces, it is suitable for your Linux users who are not yet friends
with Linux.
rlogin
Syntax:
rlogin [-l username] hostname
This command allows you to log remotely into a host. For example, to log in to
a host called shell.myhost.com, you can run:
rlogin shell.myhost.com
Because rlogin is not safe, I recommend that you use it only in a closed
LAN environment.
The -l option allows you to specify a username to use for authentication. If you
would like to log remotely into a host without entering a password, create a
.rhosts file in the user’s home directory. Add the hostname or IP address of the
computer you use to issue the rlogin request. Again, because many consider this a
security risk, I do not recommend wide use of rlogin.
talk
Syntax:
talk username@hostname
If you need to send a message to another user, e-mail works just fine. But if you
need to communicate with another user in real time (as in a telephone conversa-
tion), use the talk command.
To talk to another user who is logged in:
talk kabir@nitec.com
The user you request has to accept your talk request. Once the user accepts your
talk request, you can begin talking (or typing) to each other. The talk program
terminates when either party executes Ctrl+C (the intr key combination).
telnet
Syntax:
telnet [hostname/IP address] [port]
This is the default Telnet client program. You can use this program to connect to
a Telnet server, as in this example:
telnet shell.myportal.com
wall
Syntax:
wall
This command allows you to send a text message to every terminal whose user
has not disabled write access to the TTY via the mesg n command. Once you type
wall, you can enter a single or multiline message; you can send it by pressing
Ctrl+D.
host
Syntax:
host [-a] hostname/IP address
By default, this program allows you to check the IP address of a host quickly. If
you use the -a option, it returns various sorts of DNS information about the named
host or IP address.
hostname
Syntax:
hostname
This command displays the host name of the local system.
ifconfig
Syntax:
ifconfig [interface] [options]
This program allows you to configure a network interface. You can also see the
state of an interface using this program. For example, if you have configured your
Red Hat Linux for networking and have a preconfigured network interface device,
eth0, you can run
ifconfig eth0
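Output along these lines appears (the hardware address and packet counts are illustrative):
eth0   Link encap:Ethernet  HWaddr 00:10:4B:4E:14:3F
       inet addr:206.171.50.50  Bcast:206.171.50.63  Mask:255.255.255.240
       UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
       RX packets:120548 errors:0 dropped:0 overruns:0 frame:0
       TX packets:84502 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:100
       Interrupt:10 Base address:0x300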
Here ifconfig reports that network interface device eth0 has an Internet
address (inet addr) 206.171.50.50, a broadcast address (Bcast) 206.171.50.63, and
network mask (Mask) 255.255.255.240. The rest of the information shows the fol-
lowing: how many packets this interface has received (RX packets); how many
packets this interface has transmitted (TX packets); how many errors of different
types have occurred so far; what interrupt address line this device is using; what I/O
address base is being used; and so on.
You can run ifconfig without any arguments to get the full list of all the up
network devices.
You can use ifconfig to bring an interface up, as in this example (the
netmask is shown for completeness):
ifconfig eth0 206.171.50.50 netmask 255.255.255.240 up
The preceding command starts eth0 with IP address 206.171.50.50. You can also
quickly take an interface down by using the ifconfig command, as in this
example:
ifconfig eth0 down
netcfg
See the section on using netcfg to configure a network interface card in Chapter 9.
netstat
Syntax:
netstat [options]
This program displays the status of the network connections, both to and from
the local system, as in this example:
netstat -a
This command displays all the network connections on the local system. To dis-
play the routing table, use the -r option. To display network connection status on
a continuous basis, use the -c option. To display information on all network inter-
faces, use the -i option.
nslookup
Syntax:
nslookup [-query=type] [hostname/IP address] [name server]
This command allows you to perform DNS queries. You can choose to query a
DNS server in an interactive fashion or just look up information immediately,
as in this example (the hostname is illustrative):
nslookup -query=a www.nitec.com
The following command does the same, but instead of using the default name
server specified in the /etc/resolv.conf file, it uses ns.nitec.com as the
name server:
nslookup -query=a www.nitec.com ns.nitec.com
You can also use -q instead of -query, as in this example:
nslookup -q=a www.nitec.com
This command returns the IP address (Address record) for the named hostname.
You can run nslookup in interactive mode as well. Just run the command without
any parameters, and you will see the nslookup prompt. At the nslookup prompt,
you can enter “?” to get help. If you are planning on performing multiple DNS
queries at a time, interactive mode can be very helpful. For example, to query the
NS records for multiple domains such as ad-engine.com and
classifiedworks.com, you can just enter the following command:
set query=ns
Once you set the query type to ns, you can simply type ad-engine.com and wait
for the reply; once you get the reply, you can try the next domain name; and so on.
If you would like to change the name server while at the nslookup prompt, use the
server command, as in this example:
server ns.ad-engine.com
This will make nslookup use ns.ad-engine.com as the name server. To quit
interactive mode and return to your shell prompt, enter exit at the nslookup
prompt.
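Putting these pieces together, an interactive session for the NS-record queries described above might look like this (the domain names are illustrative, and the replies are abbreviated):

nslookup
> set query=ns
> ad-engine.com
...NS records for ad-engine.com...
> server ns.ad-engine.com
> classifiedworks.com
...NS records for classifiedworks.com...
> exit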
ping
Syntax:
ping [-c count] [-s packetsize] host
This is one of the programs network administrators use most widely. You can use
it to see whether a remote computer is reachable via the TCP/IP protocol.
Technically, this program sends an Internet Control Message Protocol (ICMP) echo
request to the remote host. Because the protocol requires a response to an echo
request, the remote host is bound to send an echo response. This allows the ping
program to calculate the amount of time it takes to send a packet to a remote host,
as in this example:
ping blackhole.nitec.com
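The output looks something like the following (the address, timings, and counts here are hypothetical):

PING blackhole.nitec.com (206.171.50.51): 56 data bytes
64 bytes from 206.171.50.51: icmp_seq=0 ttl=255 time=0.6 ms
64 bytes from 206.171.50.51: icmp_seq=1 ttl=255 time=0.5 ms
--- blackhole.nitec.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.5/0.5/0.6 ms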
A high packet-loss percentage in the statistics summary indicates poor networking
between the ping requester and the ping responder. The other
interesting statistics are the round-trip minimum (min) time, the average (avg) time,
and the maximum (max) time. The lower these numbers are, the better the routing is
between the involved hosts. For example, if you ping a host on the same LAN, you
should see the round-trip numbers in the one-millisecond range.
If you would like to have ping automatically stop after transmitting a number of
packets, use the -c option, as in this example:
ping -c 10 blackhole.nitec.com
This sends 10 ping requests to the named host. By default, ping sends a 64-byte
(56 data bytes + 8 header bytes) packet. If you are also interested in controlling the
size of the packet sent, use the -s option, as in this example:
ping -s 1016 blackhole.nitec.com
This command sends a packet 1,024 (1,016 + 8) bytes long to the remote host.
route
Syntax:
route [add | del] [-net | -host] target [gw gateway] [netmask mask] [dev device]
This command allows you to control routing to and from your computer. For
example, to create a route for your local network (using the network address and
netmask from the earlier ifconfig example), use the route command as follows:
route add -net 206.171.50.48 netmask 255.255.255.240 dev eth0
To set the default gateway, you can run the route command as follows:
route add default gw gateway-address
For example, to set the default gateway address to 206.171.50.49, you can run
the following command:
route add default gw 206.171.50.49
You can verify that your network route and default gateway are properly set up
in the routing table by using the following command:
route -n
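The output looks something like this (the values match the hypothetical examples above):

Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref Use Iface
206.171.50.48   0.0.0.0         255.255.255.240  U     0      0     0 eth0
0.0.0.0         206.171.50.49   0.0.0.0          UG    0      0     0 eth0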
tcpdump
Syntax:
tcpdump expression
This is a great network debugging tool. For example, to trace all the packets
between two hosts brat.nitec.com and reboot.nitec.com, you can use the fol-
lowing command:
tcpdump host brat.nitec.com and reboot.nitec.com
This command makes tcpdump listen for packets between these two computers.
If reboot.nitec.com starts sending ping requests to brat.nitec.com, the output
looks something like the following:
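The ICMP traffic would show up as lines similar to these (the timestamps are hypothetical):

10:25:01.123456 reboot.nitec.com > brat.nitec.com: icmp: echo request
10:25:01.123892 brat.nitec.com > reboot.nitec.com: icmp: echo reply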
If you are having a problem connecting to an FTP server, you can use tcpdump
on your LAN gateway system to see what is going on, as in this example:
tcpdump port ftp or port ftp-data
This displays the FTP-related packets originating and arriving in your network.
As you can see, this allows you to debug a network problem at a low level. If
you are experiencing a problem in using a service between two hosts, you can use
tcpdump to identify the problem.
traceroute
Syntax:
traceroute [options] host
This program allows you to locate network routing problems. It displays the
route between two hosts by sending packets with steadily increasing time-to-live
(TTL) values and collecting the ICMP TIME_EXCEEDED responses that the interme-
diate gateways return. For example, running traceroute blackhole.nitec.com
from my local system prints one line for each gateway on the route to the
blackhole.nitec.com host.
Each line represents a hop; the more hops there are, the worse the route usually
is. In other words, if you have only a few gateways between the source and the des-
tination, chances are that packets between these two hosts are going to be trans-
ferred at a reasonably fast pace. However, this won’t be true all the time because it
takes only a single, slow gateway to mess up delivery time. Using traceroute, you
can locate where your packets are going and where they are perhaps getting stuck.
Once you locate a problem point, you can contact the appropriate authorities to
resolve the routing problem.
bg
Syntax:
bg
This built-in shell command is found in popular shells. It allows
you to put a suspended process into the background. For example, say that you decide
to run du -a / | sort -rn > /tmp/du.sorted to list all the files and directories
in your system according to the disk usage (size) order and to put the result in a file
called /tmp/du.sorted. Depending on the number of files you have on your sys-
tem, this can take a while. In such a case, you can simply suspend the command
line by using Ctrl+Z and type bg to send all commands in the command line to the
background, bringing your shell prompt back on-screen for other use.
If you want to run a command in the background from the start, you can
simply append & to the end of the command line.
To find out what commands are running in the background, enter jobs and you
see the list of background command lines. To bring a command from the back-
ground, use the fg command.
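Here is what such a session might look like (the job numbers and status lines are illustrative):

du -a / | sort -rn > /tmp/du.sorted
[press Ctrl+Z]
[1]+  Stopped    du -a / | sort -rn > /tmp/du.sorted
bg
[1]+ du -a / | sort -rn > /tmp/du.sorted &
jobs
[1]+  Running   du -a / | sort -rn > /tmp/du.sorted &
fg %1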
fg
Syntax:
fg [%job-number]
This built-in shell command is found in popular shells. This command allows you
to put a background process into the foreground. If you run this command without any
argument, it brings up the last command you put in the background. If you have mul-
tiple commands running in the background, you can use the jobs command to find
the job number and can supply this number as an argument for fg to bring it to the
foreground. For example, if jobs shows that you have two commands in the back-
ground, you can bring up the first command you put in the background by using:
fg %1
jobs
Syntax:
jobs
This built-in shell command is found in popular shells. This command allows
you to view the list of processes running in the background or currently suspended.
Productivity Commands
The commands in this section help you increase your productivity.
bc
Syntax:
bc
This program is an arbitrary-precision calculator. Enter expressions (for example,
2 + 3 * 4) at its prompt, and bc prints the results; enter quit to exit.
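You can also feed bc an expression on its standard input, which makes it handy in shell scripts:

echo "scale=4; 10/3" | bc

This prints 3.3333; the scale setting controls the number of decimal places.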
cal
Syntax:
cal [month] [year]
This nifty program displays a nicely formatted calendar for the month or year
specified in the command line. If you do not specify anything as an argument, the
calendar for the current month is displayed. To see the calendar for an entire year,
enter the (Western calendar) year in the 1–9999 range, as in this example:
cal 2000
This command displays the calendar for all twelve months of the year 2000.
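To see a single month, give the month number before the year, as in this example:

cal 9 2001
   September 2001
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30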
ispell
Syntax:
ispell filename
This program allows you to correct spelling mistakes in a text file in an interac-
tive fashion. If you have a misspelling in the file, the program suggests a spelling
and gives you options to replace it with a correctly spelled word. This is the spell
checker for text files.
mesg
Syntax:
mesg [y | n]
This program allows you to enable or disable public write access to your termi-
nal, as in this example:
mesg y
The preceding command enables write access to your terminal so that another
user on the same system can use the write command to write text messages to you.
The n option allows you to disable write access. If you do not want to be bothered
by anyone at any time, you can add mesg n to your login script (.login) file.
write
Syntax:
write username [ttyname]
This program allows you to write text messages to the named user, provided the
user has not disabled write access to his or her TTY, as in this example:
write shoeman
This command allows you to type a text message on screen, and when you fin-
ish the message by pressing Ctrl+D, the message is displayed on the user shoeman’s
terminal. If the user is logged in more than once, you have to specify the terminal
name as well, as in this example:
write shoeman ttyp0
This allows you to write to shoeman and to display the message on terminal
ttyp0. If someone has multiple terminals open, you might want to run the w or who
command to see which TTY is most suitable.
Shell Commands
In this section, you will find some very basic shell commands.
alias
Syntax:
alias [name definition]
This is a built-in shell command available in most popular shells. This command
lets you create aliases for commands, as in this example:
alias dir ls -l
This command creates an alias called dir for the ls –l command. To see the
entire alias list, run alias without any argument.
history
Syntax:
history
This is a built-in shell command available in most popular shells. This command
displays a list of commands you have recently entered at the command line. The
number of commands that the history command displays is limited by an envi-
ronment variable, also called history. For example, if you add set history =
100 to your .login file, whenever you log in, you allow the history command to
remember up to 100 command lines. You can easily rerun the commands you see in
the history by entering their index number with a “!” sign. For example, say that
when you enter the history command, you see the following listings:
1 10:25 vi irc-bot.h
2 10:25 vi irc-bot.c
3 10:26 which make
To run the vi irc-bot.c command again, you can simply enter !2 in the com-
mand line.
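You can also rerun the most recent command by entering !! at the prompt.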
set
Syntax:
set [variable = value]
This is a built-in shell command available in most popular shells. It allows you
to set shell variables with specific values, as in this example:
set foo = bar
Here a new shell variable foo is set to have bar as the value. To see the
list of all environment variables, run set by itself. To view the value of a specific
environment variable, such as path, run:
echo $path
This shows you the value of the named environment variable. If you use this
command quite often to set a few special environment variables, you can add it to
.login or .profile or to your shell’s dot file so that the special environment vari-
ables are automatically set when you log in.
source
Syntax:
source filename
This is a built-in shell command available in most popular shells. This command
lets you read and execute commands from the named file in the current shell
environment.
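For example, after editing your C shell configuration file, you can apply the changes to your current shell without logging out:

source ~/.cshrc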
unalias
Syntax:
unalias name
This is a built-in shell command available in most popular shells. This command
lets you remove an alias for a command, as in this example:
unalias dir
This command removes an alias called dir. To remove all aliases, use the * wild-
card as the argument.
Printing-Specific Commands
This section discusses commands that help you print from your Linux system.
lpq
Syntax:
lpq [-P printer]
The lpq command lists the status of the printers. If you enter lpq without any
arguments, information about the default printer is displayed.
lpq
Printer: lp@rembrandt ‘Generic dot-matrix printer entry’
Queue: no printable jobs in queue
Status: server finished at 21:11:33
lpr
Syntax:
lpr [-i [cols]] [-P printer] [filename]
This command sends a file to the print spool to be printed. If you give no file-
name, data from standard input is assumed.
The -i option starts the printing at a specific column. To
specify a particular printer, you can use the -P printer option.
lpr main.c
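For example, to send the same file to a printer named lp instead (assuming a print queue by that name exists), you can run:

lpr -P lp main.c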
lprm
Syntax:
lprm [-P printer] [-a] [jobID ...] [username]
The lprm command sends a request to lpd to remove an item from the print
queue. You can specify the print jobs to remove by job ID or by username, or you
can remove all items.
To remove all jobs in all print queues:
lprm -a all
To remove all jobs for the user “kiwee” on the printer “p1”:
lprm -P p1 kiwee
Appendix C
Internet Resources
THIS APPENDIX PROVIDES YOU with a list of Linux resources. Many Linux-oriented
newsgroups, mailing lists, and Web sites are available on the Internet. Although
you are likely to discover many more new Linux resources as time passes and as
Linux’s popularity increases, the following resources are likely to remain in good
health at all times. I use these resources on an almost daily basis.
Usenet Newsgroups
The following Usenet newsgroups can be a great place to learn about advances in
Linux, to engage in Linux-specific discussions, and also to find answers to ques-
tions you might have.
COMP.OS.LINUX.ADVOCACY (UNMODERATED)
This newsgroup is intended for discussions of the benefits of Linux compared with
other operating systems.
COMP.OS.LINUX.ANNOUNCE (MODERATED)
This newsgroup is intended for all Linux-specific announcements. You will find
information on new Linux software, bug and security alerts, and user group infor-
mation here.
COMP.OS.LINUX.ANSWERS (MODERATED)
The Linux FAQ, how-to, readme, and other documents are posted in this news-
group. If you have a question about Linux, check this newsgroup before posting
your question in any Linux newsgroup.
COMP.OS.LINUX.DEVELOPMENT.APPS (UNMODERATED)
This newsgroup is intended for Linux developers who want to discuss development
issues with others.
COMP.OS.LINUX.HARDWARE (UNMODERATED)
This newsgroup is intended for hardware-specific discussions. If you have a ques-
tion about a piece of hardware you are trying to use with Linux, look for help here.
COMP.OS.LINUX.M68K (UNMODERATED)
This newsgroup is for Motorola 68K architecture-specific Linux development.
COMP.OS.LINUX.ALPHA (UNMODERATED)
This newsgroup is for Compaq/Digital Alpha architecture-specific discussions.
COMP.OS.LINUX.NETWORKING (UNMODERATED)
This newsgroup is intended for networking-related discussions.
COMP.OS.LINUX.X (UNMODERATED)
This newsgroup is intended for discussions relating to the X Window System, ver-
sion 11, and compatible software such as servers, clients, libraries, and fonts run-
ning under Linux.
COMP.OS.LINUX.DEVELOPMENT.SYSTEM (UNMODERATED)
This newsgroup is intended for kernel hackers and module developers. Here you
will find ongoing discussions on the development of the Linux operating system
proper: kernel, device drivers, loadable modules, and so forth.
COMP.OS.LINUX.SETUP (UNMODERATED)
This newsgroup is intended for discussions on installation and system administra-
tion issues.
COMP.OS.LINUX.MISC (UNMODERATED)
This is the bit-bucket for the comp.os.linux hierarchy. Any topics not suitable for
the other newsgroups in this hierarchy are discussed here.
In addition to the comp.os.linux hierarchy, the following regional and special-interest newsgroups also discuss Linux:
◆ alt.fan.linus-torvalds
◆ alt.uu.comp.os.linux.questions
◆ aus.computers.linux
◆ dc.org.linux-users
◆ de.alt.sources.linux.patches
◆ de.comp.os.linux.hardware
◆ de.comp.os.linux.misc
◆ de.comp.os.linux.networking
◆ de.comp.os.x
◆ ed.linux
◆ fido.linux-ger
◆ fj.os.linux
◆ fr.comp.os.linux
◆ han.sys.linux
◆ hannet.ml.linux.680x0
◆ it.comp.linux.pluto
◆ maus.os.linux
◆ maus.os.linux68k
◆ no.linux
◆ okinawa.os.linux
◆ tn.linux
◆ tw.bbs.comp.linux
◆ ucb.os.linux
◆ uiuc.sw.linux
◆ umich.linux
The following newsgroups cover computer-security topics of interest to Linux administrators:
◆ comp.security.announce
◆ comp.security.misc
◆ comp.security.pgp.announce
◆ comp.security.ssh
◆ comp.security.unix
Mailing Lists
Mailing lists provide a good way of getting information directly to your e-mail
account. If you are interested in Linux news, announcements, and other discus-
sions, mailing lists can be quite helpful. This is especially true of mailing lists that
provide a digest option. Such mailing lists send a digest of all daily or weekly mes-
sages to your e-mail address.
General lists
The following Linux mailing lists are general. They provide good general discus-
sions of Linux news and helpful information for beginning Linux users.
LINUX-ANNOUNCE
Subscribe to Linux-Announce by sending e-mail to linux-announce-request@
redhat.com with the word subscribe in the subject line of the message.
LINUX-LIST
To subscribe, send e-mail to [email protected] with the word subscribe
in the body of your message.
LINUX-NEWBIE
To subscribe, send e-mail to [email protected] with the words
subscribe linux-newbie in the body of your message.
LINUXUSERS
To subscribe, send e-mail to [email protected] with the words subscribe linux
users in the body of your message.
BUGTRAQ
Although BugTraq is not specific to Linux, it is a great bug alert resource. To sub-
scribe, send e-mail to [email protected] with the following as the body of
the message: SUBSCRIBE bugtraq your-firstname your-lastname.
LINUX-SECURITY
Red Hat Software, Inc. hosts this mailing list. To subscribe, send e-mail to
[email protected] with the words subscribe linux-security
in your subject line.
Special lists
The following mailing lists deal with two issues: Linux as a server platform and
Linux as a desktop platform.
SERVER-LINUX
To subscribe, send e-mail to [email protected] with the words subscribe
SERVER-LINUX in your subject line.
WORKSTATION-LINUX
To subscribe, send e-mail to [email protected] with the words subscribe
WORKSTATION-LINUX in your subject line.
Web Sites
Many Web sites provide Linux-oriented information. Here are a few good ones.
General resources
The following Web sites are general. Most of these sites act as portal sites:
◆ www.redhat.com/
◆ www.linux.com/
◆ www.linuxresources.com/
◆ http://linuxcentral.com/
◆ www.linuxcare.com/
Publications
The following Web sites are official Web sites for various Linux publications:
◆ www.linuxgazette.com/
◆ www.linuxjournal.com/
◆ www.linuxworld.com/
Software stores
The following Web sites offer commercial Linux software:
◆ www.cheapbytes.com/
◆ www.linuxmall.com/
◆ www.lsl.com/
Security resources
The following Web sites deal with computer security:
◆ www.cert.org/
◆ www.securityfocus.com/
◆ www.replay.com/redhat/
◆ www.rootshell.com/
User Groups
A local Linux user group could be just the help you need for finding information on
Linux. You can locate or even register a new user group of your own in your area
by using the following URL: www.linuxresources.com/glue/index.html.
Appendix D
◆ Although you might have discovered that your system has been compro-
mised, you might not know the full extent of how the attacker got into
your system and, therefore, it is likely that he or she can return to do fur-
ther damage.
◆ By keeping a compromised system on the network you might be aiding
the attacker in launching attacks on other networks, which can result in
legal complications and even fines against your organization.
◆ If you have valuable customer data in your compromised system, it might
still be available to the attacker(s).
Hopefully, the above are reasons enough to unplug your system. Document
when and how you have taken the system off the network.
◆ When did this take place? You will have to wait until you perform the
first analysis to answer this question. You might not ever find out when it
actually took place.
◆ How bad is the situation? This one you can answer when you have
looked at all your data and compared them against a clean set.
◆ Can you stop it? Well, by taking the system off the network, you have
stopped the attack, and to stop the same attacker in the future you will
need to know what happened and take appropriate action.
Look especially for UID 0 and GID 0 lines. If any user other than root is
assigned UID 0 or GID 0, you may have a lead. You can run grep ':0:'
/etc/passwd to detect such entries.
If you were successful in identifying the actual time of the incident when the
attackers got into your system, restore from backup tapes that pre-date that time
(if possible) so that you rebuild your system from a known-clean copy of your data.
Appendix E
What’s on the CD-ROM
Unix/Linux users must use the Adobe Acrobat Reader for their systems.
Other PDF readers may not work with searchable PDF documents.
◆ John the Ripper. John the Ripper is a password cracker that detects weak
passwords. You can learn more about this software at
http://www.openwall.com/john/.
◆ Swatch. Swatch monitors log entries in log files and has the ability to
trigger alarms or events in case of security breaches. You can learn more
about Swatch at http://www.stanford.edu/~atkins/swatch/.
◆ tcpdump. tcpdump is a network packet dump and monitoring tool.
You can monitor specifics of each packet traveling via a network
interface using this tool. To learn more about this tool, visit http://
www.tcpdump.org.
◆ Apache Web Server. The latest release version of the Apache server
source distribution is included on the CD.
Troubleshooting
If you have difficulty installing or using any of the materials on the companion CD,
try the following solutions:
◆ Turn off any anti-virus software that you may have running. Installers
sometimes mimic virus activity and can make your computer incorrectly
believe that it is being infected by a virus. (Be sure to turn the anti-virus
software back on later.)
◆ Close all running programs. The more programs you’re running, the less
memory is available to other programs. Installers also typically update
files and programs; if you keep other programs running, installation may
not work properly.
◆ Reference the ReadMe: Please refer to the ReadMe file located at the
root of the CD-ROM for the latest product information at the time of
publication.
If you still have trouble with the CD, please call the Hungry Minds Customer
Care phone number: (800) 762-2974. Outside the United States, call 1 (317) 572-3994.
You can also contact Hungry Minds Customer Service by e-mail at
[email protected]. Hungry Minds will provide technical support only
for installation and other general quality control items; for technical support on the
applications themselves, consult the program’s vendor or author.
Index
A alt.fan.linus-torvalds newsgroup, 657
alt.uu.comp.os.linux.questions newsgroup, 657
A records, 400, 403
anongid option, NFS server, 484
Abacus Project suite, 552
anonuid option, NFS server, 484
ACCEPT target, netfilter, 495
AOL Instant Messaging, 525, 526–527
access control. See also authentication
Apache Web server
access-discriminative service, 341–342
access from outside, 520
default access, disabling, 358
compiling, 89–95
finger daemon, 345
starting, 113
FTP, 445–448, 452–455
as xinetd service, 324
by IP address, 337–338
Apache Web server configuration, 95–103
by name, 337–338
AddModule directive, 103
permissions, 354–356
ClearModuleList directive, 103
Samba server, 477–478
disconnections, 97
by time of day, 338
dynamic modules, 103
xinetd and, 337–338, 341–342
KeepAlive directive, 99
access control list (ACL), 119–122, 163–164, 166, 173
KeepAliveTimeout directive, 99–100
access string, 185, 186
LimitRequestBody directive, 102
ACCESS_DB feature, 422–424
LimitRequestFields directive, 102
access-discriminative service, 341–342
LimitRequestFieldSize directive, 102
access_times attribute, 338, 342
LimitRequestLine directive, 102–103
account, PAM module, 244
ListenBacklog directive, 97
Ace Director, 85
MaxClients directive, 98
activity, monitoring system, 6–8
MaxRequestsPerChild directive, 98
add user script parameter, 477
MaxSpareServers directive, 98–99
AddHandler command, 374
MinSpareServers directive, 99
AddModule directive, 103
process control, 95–100
adduser icu command, 232
RLimitCPU directive, 100
admin_e-mail setting, 232
RLimitMEM directive, 101
Adobe Acrobat reader, 665
RLimitPROC directive, 101
Advanced Intrusion Detection Environment (AIDE),
SendBufferSize directive, 96
230–231
StartServers directive, 96
Advanced Power Management BIOS, 28
system resource control, 100–103
ADVANCED_EXCLUDE_TCP option, PortSentry, 556
TimeOut directive, 97
ADVANCED_EXCLUDE_UDP option, PortSentry, 554, 556
Apache Web server security, 351–398
ADVANCED_PORTS_TCP option, PortSentry, 554, 556
authentication, 382–390
ADVANCED_PORTS_UDP option, PortSentry, 554
CGIWrap support, 375–377
aging user accounts, 281–283
dedicated user and group, 352
alias, 71, 305
default access, disabling, 358
alias command, 650
directory index file, 356–357
allow directive, 382–384, 389, 463
directory structure, 352–354
Allow switching LIDS protections option, 159
logging, 379–382
allow trusted domain parameter, 477
overrides, disabling, 358–359
AllowOverwrite directive, 458–459
paranoid configuration, 359–360
allow-query directive, 410
permissions, 354–356
allow-recursion command, 411
allow-update command, 409
sensible configuration, 352–359
SSL, 394–398
AllowUser directive, 466
suEXEC support, 372–375
all_squash option, NFS server, 484
Web robots, 390–392
delete user script parameter, 477 DNAT target option, 496, 524
Demilitarized Zone (DMZ), 515–516 DNS Security Extension (DNSSEC), 412–413
denial of service (DoS) attacks, 97, 338–340, DNS server
399–400 access from outside, 520–521
deny directive, 384, 389, 454–455 BIND, 405–413
DENY target, netfilter, 495 chrooting, 412
DenyAll command, 466 configuration, checking with Dlint, 400–405
e-mail. See also message, e-mail /etc/group file, 188–191, 191–197, 301
F firewall, 514–527
fake oplocks parameter, 142 Apache server access from outside, 520
fast Ethernet, 81–82 cache-only service, 506–508
Fast switching option, 497 corporate, creating, 514–527
FastCGI, 114–117 Demilitarized Zone (DMZ), 515–516
fdformat command, 626 DNS server access, 506–507, 520
fdisk command, 63, 626 FTP server access, 509, 519
fg command, 301, 647 identd server access from outside, 521
fgrep command, 606 internal firewall, purpose of, 515
fiber channel disks, 21, 40 internal firewall, setting up, 516–518
fiber optics, 83 logging packet information, 522
fido.linux-ger newsgroup, 657 mail server access from outside, 521–522
file Network Address Translation (NAT), 524
compression with compress command, 619 packet-filtering, 491–511
compression with gzip command, 620 packet-filtering rules, creating with iptables,
compression with zip command, 624 498–501
decompression with gunzip command, 620 packet-filtering rules, testing, 524–525
decompression with uncompress command, 623 peer-to-peer services, blocking, 525–527
direct rendering infrastructure (DRI), 28 LIDS. See Linux Intrusion Detection System (LIDS)
disk support, 20–24 LILO, configuring, 33–34
filesystem support, 27 Limit directive, 465–467, 469
mail server, access from outside, 521–522 max xmit parameter, 143
Mail Transport Agent (MTA) MaxClients directive, 98, 469
choosing, 125–126 MaxDaemonChildren command, 131
open mail relay use by, 415–417 maximal_queue_lifetime command, 136
Postfix, 126, 133–136 Maximize-Reliability, TOS value, 523
PowerMTA, 136–139 Maximize-Throughput, TOS value, 523
qmail, 126 Maximum Transport Unit (MTU), 143, 149
Sendmail, 126–133 MaxInstances directive, 458
maildrop directory, 134 max_load, xinetd attribute, 331, 339–340
MAIL_FROM option, 161 MaxMessageSize option, 127
mailing lists, 658–659 MaxMultSect value, 42–43
MAILMETHOD attribute, Tripwire, 229 MaxQueueRunSize command, 132
MAILNOVIOLATIONS attribute, Tripwire, 230 MaxRequestsPerChild directive, 98
MAILPROGRAM attribute, Tripwire, 229, 230 max-smtp-out directive, 138
MAIL_RELAY option, 161 MaxSpareServers directive, 98–99
MAIL_SOURCE option, 161 may field, /etc/shadow file, 280
MAIL_SWITCH option, 161 mcopy utility, 629
MAIL_TO option, 162 MD5 algorithm, 266
make && make install command, 559, 573 mdel utility, 629
make bzImage command, 32 memory
make certificate command, 395 Apache Web server usage, 101
make clean command, 304 kernel configuration, 19–20
make config command, 13, 157 monitoring with vmstat utility, 8
make depend command, 31 monitoring with Vtad, 9
make install command, 41, 401 Squid use of, 121
make kinstall command, 530 total, reporting of, 146
make linux command, 576 Memory Type Range Register (MTRR) support, 19
make menuconfig command, 13, 14–15, 157 merging files with cat command, 598
make menugo command, 529 mesg command, 649
make modules command, 32 message, e-mail
make mrproper command, 14 bounced, 128–130, 136
make nc command, 576 queue processing, frequency of, 128, 136
make release command, 217 size, maximum, 127, 135
make xconfig command, 13 message queue
makemap command, 423, 481 bounced messages, 128–130, 136
Malice, 569 frequency of processing, 128, 136
man command, 597 full, 132–133, 135–136
man pages, 596–598 memory, saving, 131–132
mangle table, netfilter, 494–496 message lifetime, 136
MANGLE_EXTENSIONS variable, 433, 435 number of messages in a run, 132
MAPS (Mail Abuse Prevention System) Realtime message-authentication code (MAC), 266
Blackhole List, 416, 425–426, 441 message_size_limit command, 135
MARK target, netfilter, 495 meta key, 605
MASQUERADE target, netfilter, 495 mformat utility, 629
masquerade_domains command, 442 MinFreeBlock command, 133
masquerade_exceptions command, 442 Minimize-Cost, TOS value, 523
masquerading, 442, 494, 515–518, 524 Minimize-Delay, TOS value, 523
master boot record (MBR), 156 MinQueueAge command, 130
math emulation support, 18–19 MinSpareServers directive, 99
maus.os.linux newsgroup, 657 MIRROR target, netfilter, 495
maus.os.linux68k newsgroup, 657 mkdir command, 610
nat table, 494, 495–496, 524 nobody user, 352, 458, 486
changing using chgrp command, 182 password field, /etc/passwd file, 279
changing using chown command, 181–182, password field, /etc/shadow file, 280
private key, 264, 273–274, 287, 294, 406, 413 ps auxw | grep sshd command, 293
private network, using redirection to access, 344 ps utility, 4
processes pseudo terminal (PTY) support, kernel, 30
attached to an open file, finding, 582–583 pstree command, 5–6
disabling raw device access, 171 PTR records, 400, 403
filehandle limit, 36 pub extension, 287
hiding from everyone, 171 public key, 264, 271–272, 294, 406, 413
monitoring with ps, 4–6 publishing guidelines, 392–394
monitoring with vmstat utility, 8 pwck command, 283
protecting daemons from being killed by pwd command, 611
root, 170–171
stack protection with libsafe, 173–178 Q
vulnerability to hackers, 155 qmail, 126
processor qmgr_message_active_limit command, 135
finding by cat command, 18 quarantine command, 430
Intel Physical Address Extension (PAE) mode, 19 queue directory, 136
kernel configuration for, 17 QueueDirectory option, 132
procmail tool, 429–435 QueueLA command, 131
/proc/net/ipsec_tncfg file, 535 queue_minfree command, 135
ProFTPD, 455–471 queuereturn option, 128–129
access restriction by IP address, 463 queue_run_delay command, 136
access restriction to single directory, 466 queuewarn option, 129
benefits of, 455 quota command, 627
chroot jail, 464–465 quotaon command, 627
command buffer size, controlling, 468
command privileges, limiting, 465–467
compiling, 456
R
RAID, software, 30, 66–67
configuring, 456–461
RAM-based filesystem, 68–71
connections, counting, 462
ramfs, 68–71
directory browsing privileges, restricting, 467
raw device, disabling access by processes, 171
directory creation/deletion privileges,
raw reads/writes, control of, 143–144
disabling, 465–466
/rc.d/init.d/sendmail restart command, 423
downloading, 456
read raw parameter, 143
file transfer, simplifying, 466–467
read size parameter, 143
hiding files, 468
read-only access, 164, 165, 186, 200–201
installing, 456
ReadOnly variable, Tripwire, 223
Linux Capabilities use with, 469–471
read/write block size, optimizing, 146–149
monitoring, 462
Real Time Clock (RTC) support, kernel, 30
PAM use with, 463–464
Realtime Blackhole List (RBL), 416, 425–426
securing, 462–471
reboot command, 260
as standalone service, 459–460
recurse attribute, Tripwire, 221
upload-only server, establishing, 466
recursive queries, 411
who is connected, 462
redirect attribute, 342–344
as xinetd service, 460–461
Redirect hostname port, xinetd attribute, 331
Promisc command, 86
Redirect IP address, xinetd attribute, 331
protocol, xinetd attribute, 331
REDIRECT target option, 496, 524
proxy-arp firewall, 512–514
redirecting clients, 342–344
ProxyPass directive, 113
redirecting packets, 522
ps auxw | grep command, 583
redirector program, 122–123
ps auxw | grep pmta command, 137
redundancy, network backbone and, 83
ps auxw | grep sendmail command, 439
Redundant Array of Independent Disks (RAID), root_squash option, NFS server, 484, 486
30, 66–67 round-robin Domain Name Service, 85
reference monitor security model, 156 route command, 643–644
RefuseLA command, 131 RPC scanning, 551
regular expressions, 440, 595 rpcinfo program, 485–486
ReiserFS, 49–53 rpm program, 620–622
REJECT command, 423, 424 RSAAuthentication directive, OpenSSG, 289
REJECT target, netfilter, 495 RSAREF library, 270
RELATED, packet state, 523 rulename attribute, Tripwire, 220
RELATED+REPLY, packet state, 523 run level command, 328
RELAY command, 423, 424 rw command, 181
RELAY_ENTIRE_DOMAIN feature, 424
RELAY_HOSTS_ONLY feature, 424 S
Reload or refresh in the browser requests, 387 SAINT. See Security Administrator’s Integrated
Remote OS Identification by TCP/IP Network Tool (SAINT)
Fingerprinting, 551 Samba server performance, 141–145
Remote Procedure Call (RPC), 323 log level parameter, 144
removing file/directory with pico command, 611 opportunistic locks (oplocks) use, 142–143
replay.com Web site, 660 raw reads, control of, 143
require directive, 388 raw writes, control of, 144
required, PAM control flag, 244 read size parameter, 142–143
RequireValidShell directive, 469 strict locking parameter, 144
requisite, PAM control flag, 244 strict sync parameter, 144
reserved field, /etc/shadow file, 280 TCP socket options, 142
restore utility, 239 user accounts, creating automatically, 144–145
RetryFactor command, 130 write size, control of, 143
REUSE flag, 329 Samba server security, 473–483
Reverse-ident scanning, 551 access control by IP address or hostname, 478
RFC error codes, 423 access control by network interface, 477–478
rhosts module, 252 domain-level, 475
RhostsAuthentication directive, OpenSSG, 288 OpenSSL, 481–483
RhostsRSAAuthentication directive, OpenSSG, 289 pam_smb module for authentication, 479–481
rightid option, 533 passwords and, 474–476
rightrsasigkey option, 533 security level, choosing appropriate, 473
RLimitCPU directive, 100 server-level, 475
RLimitMEM directive, 101 share-level, 474
RLimitPROC directive, 101 trusted domains, allowing users from, 477
rlogin command, 637–638 user-level, 474
rm command, 13, 611–612 sanitizer, e-mail, 429–434
rm-rf command, 298 SATAN tools, 541
ro option, NFS server, 483 satisfy directive, 389–390
Robot Exclusion Protocol, 390–391 /sbin/lidsadm command, 163
root access /sbin/lvchange utility, 61
guidelines, 298–299 /sbin/lvcreate utility, 61
LIDS reduction of, 155–156 /sbin/lvdisplay utility, 61
limiting, 299–300 /sbin/lvextend utility, 61, 65
password, 299, 301 /sbin/lvmchange utility, 62
set-UID programs and, 205 /sbin/lvmdiskscan utility, 62
squashing root user, 486 /sbin/lvmsadc utility, 62
su command to become root, 300–302 /sbin/lvmsar utility, 62
sudo command to delegate, 302–307 /sbin/lvreduce utility, 61
Telnet continued
netcat compared, 576
U
ucb.os.linux newsgroup, 657
redirecting clients, 343–344
udp, PortSentry mode, 556
SRP-enabled, 317–319
UDP packet fragments, 150–151
telnet command, 638
UDP ports, scanning for, 552
Thawte, 272
UDP raw ICMP port unreachable scanning, 550
ThreadsPerChild directive, 98
UDP_PORTS option, PortSentry, 554, 556
time command, 147
UID
TimeOut directive, 97
checking for duplicate, 191
timestamp command, 614
mapping root to particular UID, 484–485
/tmp/lsof.out file, 663
squashing, 484–485
/tmp/smtp.hex file, 578
UID field, /etc/passwd file, 279
tn.linux newsgroup, 657
uiuc.sw.linux newsgroup, 657
top utility, 6–8
ulimit command, 138
TOS target, netfilter, 495
UltraDMA burst timing feature, 44
touch command, 614
umask command, 201–202, 447, 458, 615
tracepath command, 149
umich.linux newsgroup, 657
traceroute command, 380, 645–646
umount command, 64, 584, 627–628
traffic flow, network, 83–85
unalias command, 652
transaction log, 48
uname command, 631
Transaction Signatures (TSIG), 405–409
uncompress command, 623
Transport Layer Security (TLS), 266
uniq command, 615
traverse_tree( ) warning message, 235
Universal Database Server (UDB), 113
tree command, 5–6
Unix security newsgroups, 658
Tripwire, 571
Unix-to-Unix Copy (uucp) program, 188
compiling, 217–220
unlimited, xinetd attribute, 331
as cron job, 227
unlinked locally open file, finding, 583–584
database creation, 224
unsafe system calls, 361–366
downloading, 216
unzip command, 623
e-mail settings, 229–230
upload directive, 453–454
encryption, 219–220
upload privileges, restricting for anonymous FTP
installing, 217–220
users, 453–454
interactive mode, 225–227
uptime command, 632
obtaining, 216
USB support, kernel, 25
overview of, 205–216
user access, 277–311
policy configuration, 220–224
aging accounts, 281–283
property/mask characters, 221–222
monitoring users, 307–308
protecting, 224–225
OpenSSH, 285–298
report, receiving by e-mail, 228–230
password consistency, checking, 282–283
role attributes, 220–221
risks of, 277, 278–279
signatures, 224–225
root account management, 298–307
updating database, 228
security policy, creating user-access, 309–310
variables, built-in, 223–224
security policy, creating user-termination,
Trojan program, 216, 663
310–311
Try not to flood logs (NEW) option, 159
shadow passwords, 280–283
TTL target, netfilter, 496
shell services, eliminating, 283–284
tune2fs utility, 46–48
user accounts
tw.bbs.comp.linux newsgroup, 657
aging, 281–283
type of service (TOS), setting, 523
creating Samba, 144–145
User directive, 375, 458, 469
GNU General Public License
Preamble
The licenses for most software are designed to take away your freedom to share
and change it. By contrast, the GNU General Public License is intended to guaran-
tee your freedom to share and change free software — to make sure the software is
free for all its users. This General Public License applies to most of the Free
Software Foundation's software and to any other program whose authors commit
to using it. (Some other Free Software Foundation software is covered by the GNU
Library General Public License instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our
General Public Licenses are designed to make sure that you have the freedom to
distribute copies of free software (and charge for this service if you wish), that you
receive source code or can get it if you want it, that you can change the software or
use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny
you these rights or to ask you to surrender the rights. These restrictions translate to
certain responsibilities for you if you distribute copies of the software, or if you
modify it.
For example, if you distribute copies of such a program, whether gratis or for a
fee, you must give the recipients all the rights that you have. You must make sure
that they, too, receive or can get the source code. And you must show them these
terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer
you this license which gives you legal permission to copy, distribute and/or modify
the software.
Also, for each author's protection and ours, we want to make certain that every-
one understands that there is no warranty for this free software. If the software is
modified by someone else and passed on, we want its recipients to know that what
they have is not the original, so that any problems introduced by others will not
reflect on the original authors' reputations.
b) You must cause any work that you distribute or publish, that in whole
or in part contains or is derived from the Program or any part thereof,
to be licensed as a whole at no charge to all third parties under the
terms of this License.
c) If the modified program normally reads commands interactively when
run, you must cause it, when started running for such interactive use in
the most ordinary way, to print or display an announcement including
an appropriate copyright notice and a notice that there is no warranty
(or else, saying that you provide a warranty) and that users may redis-
tribute the program under these conditions, and telling the user how to
view a copy of this License. (Exception: if the Program itself is interac-
tive but does not normally print such an announcement, your work
based on the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If identifiable
sections of that work are not derived from the Program, and can be rea-
sonably considered independent and separate works in themselves, then
this License, and its terms, do not apply to those sections when you dis-
tribute them as separate works. But when you distribute the same sections
as part of a whole which is a work based on the Program, the distribution
of the whole must be on the terms of this License, whose permissions for
other licensees extend to the entire whole, and thus to each and every part
regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your
rights to work written entirely by you; rather, the intent is to exercise the
right to control the distribution of derivative or collective works based on
the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of a
storage or distribution medium does not bring the other work under the
scope of this License.
3. You may copy and distribute the Program (or a work based on it, under
Section 2) in object code or executable form under the terms of Sections 1
and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections 1
and 2 above on a medium customarily used for software interchange;
or,
b) Accompany it with a written offer, valid for at least three years, to give
any third party, for a charge no more than your cost of physically per-
forming source distribution, a complete machine-readable copy of the
corresponding source code, to be distributed under the terms of
Sections 1 and 2 above on a medium customarily used for software
interchange; or,
10. If you wish to incorporate parts of the Program into other free programs
whose distribution conditions are different, write to the author to ask for
permission. For software which is copyrighted by the Free Software
Foundation, write to the Free Software Foundation; we sometimes make
exceptions for this. Our decision will be guided by the two goals of pre-
serving the free status of all derivatives of our free software and of pro-
moting the sharing and reuse of software generally.
No Warranty
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING
THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PRO-
GRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PRO-
GRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY
SERVICING, REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO
IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY
WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMIT-
TED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GEN-
ERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING
BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INAC-
CURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAIL-
URE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
RHCE
128-bit encrypted job security
Red Hat brings the world’s leading Linux® training and certification to you.
Red Hat Press
Authoritative. Practical. Reliable.
Exciting new books from Red Hat Press and Hungry Minds—whether you’re
configuring Red Hat Linux at home or at the office,
optimizing your Red Hat Linux Internet Server, or building the ultimate Red Hat
defensive security system, we have the books for you!