Troubleshooting High CPU Utilization
One reason that baseline CPU utilization differs across switch ranges, and across models within a range, is a difference in hardware design. Earlier models make little use of microcontrollers, while later models offload more tasks to them. As more tasks are offloaded to the microcontrollers, communication between the CPU and the microcontrollers increases. This communication is reported under the HULC led and the RedEarth Tx and Rx processes.
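To see how much CPU time these processes are consuming, you can filter the sorted process output. This is an illustrative command; the exact process names vary by platform and software release:
Switch# show processes cpu sorted | include Hulc|HULC|RedEarth
Each matching line shows the 5-second, 1-minute, and 5-minute utilization for that process.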
To determine switch CPU utilization, enter the show processes cpu sorted privileged EXEC command.
The output shows how busy the CPU has been in the past 5 seconds, the past 1 minute, and the past
5 minutes. The output also shows the utilization percentage that each system process has used in these
periods.
Switch# show processes cpu sorted
CPU utilization for five seconds: 5%/0%; one minute: 6%; five minutes: 5%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
   1        4539       89782         50  0.00%  0.00%  0.00%   0 Chunk Manager
   2        1042     1533829          0  0.00%  0.00%  0.00%   0 Load Meter
   3           0           1          0  0.00%  0.00%  0.00%   0 DiagCard3/-1
   4    14470573     1165502      12415  0.00%  0.13%  0.16%   0 Check heaps
   5        7596      212393         35  0.00%  0.00%  0.00%   0 Pool Manager
   6           0           2          0  0.00%  0.00%  0.00%   0 Timers
   7           0           1          0  0.00%  0.00%  0.00%   0 Image Licensing
   8           0           2          0  0.00%  0.00%  0.00%   0 License Client N
   9     1442263       25601      56336  0.00%  0.08%  0.02%   0 Licensing Auto U
  10           0           1          0  0.00%  0.00%  0.00%   0 Crash writer
  11      979720     2315501        423  0.00%  0.00%  0.00%   0 ARP Input
  12           0           1          0  0.00%  0.00%  0.00%   0 CEF MIB API
<output truncated>
In this output, the CPU utilization for the last 5 seconds shows two numbers (5%/0%).
The first number, 5%, tells how busy the CPU was in the past 5 seconds. This number is the total CPU
utilization for all the active system processes, including the percentage of time at the interrupt level.
The second number, 0%, shows the percentage of time at the interrupt level in the past 5 seconds.
The interrupt percentage is the CPU time spent receiving packets from the switch hardware. The
percentage of time at interrupt level is always less than or equal to the total CPU utilization.
Two other important numbers are shown on the same output line: the average utilization for the last
1 minute (6 percent in this example) and the average utilization for past 5 minutes (5 percent in this
example). These values are typical for a nonstacked switch in a small and stable environment.
There can be hundreds of active system processes on the CPU at any time. This number can vary, based
on the switch model, the Cisco IOS release, the feature set, and (if applicable) the number of switches
in a switch stack. For example, on a stack of Catalyst 3750 switches running the IP base image, there are
typically 475 active system processes. The Catalyst 2960 switch running the LAN base image has a
smaller number of active processes than a stack of Catalyst 3750 switches. In general, the more features
in the Cisco IOS image, the more system processes.
In some instances, high CPU utilization is normal and does not cause network problems. High CPU
utilization becomes a problem when the switch fails to perform as expected.
Enter the show processes cpu history privileged EXEC command to see the CPU utilization for the last
60 seconds, 60 minutes, and 72 hours. The command output provides graphical views of how busy the
CPU has been. You can see if the CPU has been constantly busy or if utilization has been spiking.
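For example, entering the command produces three ASCII graphs; the graphs themselves are not reproduced here:
Switch# show processes cpu history
!--- Three bar graphs follow: CPU percentage per second for the last 60 seconds,
!--- per minute for the last 60 minutes, and per hour for the last 72 hours.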
In this example, the CPU spiked to 100 percent 46 hours ago, shortly after the switch was rebooted. It
spiked to 87 percent within the past hour.
CPU utilization spikes caused by a known network event or activity are not problems. Even an
87-percent spike might be acceptable, depending on the cause. For example, an acceptable spike could
be caused by the network administrator entering a write memory privileged EXEC command on the
CLI. A spike is also a normal reaction to a topology change in a large Layer 2 network. See the Normal
Conditions with High CPU Utilization section on page 4 for a list of events and activities that can cause CPU utilization spikes.
Spanning Tree
A Layer 2 spanning-tree instance runs for every VLAN configured on a Layer 2 switch by the per-VLAN
spanning-tree (PVST) feature. The CPU time utilized by spanning tree varies depending upon the
number of spanning-tree instances and the number of active interfaces. The more instances and the more
active interfaces, the greater the CPU utilization.
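To see how many spanning-tree instances and active interfaces the switch is maintaining, you can enter this command (output not shown):
Switch# show spanning-tree summary totals
The totals line reports the number of VLAN spanning-tree instances and how many ports are in each spanning-tree state.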
The write memory privileged EXEC command (particularly if the switch is in a stack).
The debug privileged EXEC command to enable debugging of a feature. Printing debug messages to the console increases the CPU utilization as long as debugging is enabled.
A large number of IP SLAs monitoring sessions; the CPU generates ICMP or traceroute packets.
SNMP polling activities, particularly a MIB walk; the Cisco IOS SNMP engine executes the SNMP requests.
A large number of simultaneous DHCP requests, such as when links are restored to numerous clients (when the switch is acting as a DHCP server).
Spanning-tree topology change; when a Layer 2 network device does not receive timely spanning-tree BPDUs on its root port, it considers the Layer 2 path to the root switch to be down, and the device tries to find a new path. Spanning tree reconverges in the Layer 2 network.
Routing topology change, such as BGP or OSPF route flapping.
EtherChannel link bounce; when the network device at the other end of the EtherChannel does not receive the protocol packets required to maintain the EtherChannel link, the link might go down.
UDLD flapping; in aggressive mode, the switch relies on keepalives from its peer.
DHCP or IEEE 802.1x failures, if the switch cannot forward or respond to requests.
HSRP flapping.
It is normal for the interrupt percentage to be greater than 0 percent but less than 5 percent. An interrupt percentage between 5 percent and 10 percent is acceptable. An interrupt percentage over 10 percent should be investigated. See the Analyzing Network Traffic section on page 8 for investigation information.
Note
Always refer to the release notes for the specific platform and software release of your switch for any
known Cisco IOS bugs. You can eliminate these issues from your troubleshooting steps.
When the switch CPU is busy, management tools such as Telnet or SSH are usually not very useful. We
recommend that you use the switch console for debugging CPU utilization issues.
A high interrupt percentage indicates too much network traffic. This is the most common cause of
high CPU utilization. To troubleshoot, see the Analyzing Network Traffic section on page 8.
High CPU utilization percentage with a low interrupt percentage indicates a problem with an
operating system process. To troubleshoot, see the Debugging Active Processes section on
page 19.
When both percentages are high or if you cannot determine whether or not the interrupt percentage
is a significant contributor to CPU utilization, first see the Analyzing Network Traffic section on
page 8. If the information provided in this section does not resolve the high CPU utilization
problem, see the Debugging Active Processes section on page 19.
In this example, the CPU utilization is 64 percent, and the interrupt percentage is 19 percent, which is high. The utilization problem is caused by the CPU processing too many packets received from the network. In this case, see the Analyzing Network Traffic section on page 8.
In the next example, the interrupt percentage is low compared to the CPU utilization percentage (5 percent compared to 82 percent). A high CPU utilization with a relatively low interrupt percentage indicates that one or more system processes are taking too much time. In this case, see the Debugging Active Processes section on page 19.
Switch# show processes cpu sorted 5sec
CPU utilization for five seconds: 82%/5%; one minute: 40%; five minutes: 34%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process
 217   135928429   493897689        275 45.68% 18.61% 16.78%   0 SNMP ENGINE
  47    61840574   480781517        128 23.80%  8.63%  7.43%   0 hrpc <-response
 158    58014186   265701225        218  1.11%  1.36%  1.35%   0 Spanning Tree
  46     1222030    67734870         18  0.47%  0.14%  0.08%   0 hrpc -> request
  75     1034724     8421764        122  0.15%  0.06%  0.02%   0 hpm counter proc
 223         125         157        796  0.15%  0.13%  0.03%   2 Virtual Exec
 213        2573         263       9783  0.15%  2.43%  0.71%   1 Virtual Exec
 150      578692     3251272        177  0.15%  0.02%  0.00%   0 CDP Protocol
 114     8436933     3227814       2613  0.15%  0.17%  0.16%   0 HRPC qos request
 105     1002819    96357752         10  0.15%  0.10%  0.06%   0 Hulc LED Process
  28      701287       68160      10288  0.15%  0.01%  0.00%   0 Per-minute Jobs
 215     9757808    42169987        231  0.15%  0.58%  0.56%   0 IP SNMP
  12           0           1          0  0.00%  0.00%  0.00%   0 IFS Agent Manage
  13           8       67388          0  0.00%  0.00%  0.00%   0 IPC
!<Output truncated>
Table 1 lists some common system processes and the associated packet types. If one of the listed system
processes is the most active process in the CPU, it is likely that the corresponding type of network packet
is flooding the CPU.
Table 1    System Processes and Associated Packet Types

System Process    Packet Types
IP Input          IP packets
IGMPSN            IGMP packets
ARP Input         IP ARP packets
SNMP Engine       SNMP packets
See the Identifying Network Packets Received by the CPU section on page 10 to find the source of the
packets and how to troubleshoot.
Table 2 lists the system processes that are typically the most active when the CPU is busy acting on punted IP packets. The CPU processing of punted packets is not attributed to any single listed process.
Table 2    Active System Processes When the CPU Is Processing Punted IP Packets

System Process
Check heaps
Virtual Exec
RedEarth Tx Mana
Statistics collection
See the Identifying Packets Punted from the Switch Hardware section on page 16 for troubleshooting
procedures for punted packets.
The switch hardware counts the packets in each CPU receive queue. You can use this count to
determine the type of packets being received. See the Monitoring Packet Counts for CPU Receive
Queues section on page 10.
You can use the debug privileged EXEC command to print to the console all CPU-received packets.
The debug command can debug each receive queue separately. See the Debugging Packets from
the Switch CPU Receive Queues section on page 12.
The switch counts all IP packets that are sent and received. This information is useful to identify any
counts that are particularly high and incrementing rapidly. See the Monitoring IP Traffic Counts
section on page 14.
Switch# show controllers cpu-interface
                   retrieved   dropped   invalid   hol-block   stray
                   ---------   -------   -------   ---------   -----
                      726325         0         0           0       0
                       16108         0         0           0       0
                       56771         0         0           0       0
                        3949         0         0           0       0
                         827         0         0           0       0
                          58         0         0           0       0
                           0         0         0           0       0
                           0         0         0           0       0
                         382         0         0           0       0
cbt-to-spt                 0         0         0           0       0
igmp snooping           3567         0         0           0       0
icmp                   11256         0         0           0       0
logging                    0         0         0           0       0
rpf-fail                   0         0         0           0       0
dstats                     0         0         0           0       0
cpu heartbeat         322409         0         0           0       0
<output truncated>
The switch also counts the CPU-bound packets that are discarded due to congestion. Each CPU receive
queue has a packet-count maximum. When the receive queue maximum is reached, the switch hardware
discards packets destined for the congested queue. The switch counts discarded packets for each queue.
Increased discard counts for a particular CPU queue mean heavy usage for that queue.
Enter the show platform port-asic stats drop privileged EXEC command to see the CPU receive-queue
discard counts and to identify the queue discarding packets. This command is not as useful as the show
controllers cpu-interface command because the output shows numbers for the receive queues instead
of names, and it shows only the discards. Because the switch hardware sees the CPU receive-queue
dropped packets as sent to the supervisor, the dropped packets are called Supervisor TxQueue Drop
Statistics in the command output.
Switch #show platform port-asic stats drop
Port-asic Port Drop Statistics - Summary
========================================
RxQueue Drop Statistics Slice0
RxQueue 0 Drop Stats Slice0: 0
RxQueue 1 Drop Stats Slice0: 0
RxQueue 2 Drop Stats Slice0: 0
RxQueue 3 Drop Stats Slice0: 0
RxQueue Drop Statistics Slice1
RxQueue 0 Drop Stats Slice1: 0
RxQueue 1 Drop Stats Slice1: 0
RxQueue 2 Drop Stats Slice1: 0
RxQueue 3 Drop Stats Slice1: 0
!<Output truncated>
Port 27 TxQueue Drop Stats: 0
Supervisor TxQueue Drop Statistics
Queue 0: 0
Queue 1: 0
Queue 2: 0
Queue 3: 0
Queue 4: 0
Queue 5: 0
Queue 6: 0
Queue 7: 0
Queue 8: 0
Queue 9: 0
Queue 10: 0
Queue 11: 0
Queue 12: 0
Queue 13: 0
Queue 14: 0
Queue 15: 0
! <Output truncated>
The queue numbers in this output for Supervisor TxQueue Drop Statistics are in the same order as the queue names in the show controllers cpu-interface command output. For example, Queue 0 in this output corresponds to rpc in the previous output, Queue 15 corresponds to cpu heartbeat, and so on.
The statistics do not reset. Enter the command multiple times to review active queue discards. The
command output also shows other drop statistics, some of which are truncated in the example.
See the CPU Receive Queues section on page 21 for more information about CPU queues.
Beginning in privileged EXEC mode, follow these steps to send the debug messages to a local log buffer:

         Command                                   Purpose
Step 1   configure terminal                        Enter global configuration mode.
Step 2   no logging console                        Disable logging to the console terminal.
Step 3   logging buffered 12800                    Enable system message logging to a local buffer, and set the buffer size to 12800 bytes.
Step 4   service timestamps debug datetime msec    (Optional) Add a timestamp to debug messages.
Step 5   exit                                      Return to privileged EXEC mode.
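For reference, the steps look like this on the CLI. This is a minimal sketch: 12800 is the buffer size from the table, and the service timestamps command is shown as one common way to add timestamps.
Switch# configure terminal
Switch(config)# no logging console
Switch(config)# logging buffered 12800
Switch(config)# service timestamps debug datetime msec
Switch(config)# exit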
This is an example of turning on the CPU queues one at a time until the console floods.
After the debug platform cpu-queue host-q command was entered, a single packet was received.
This is normal.
When the next command, debug platform cpu-queue icmp-q, was entered, the flood began. All
packets received on icmp-q are the same. Only three packets are shown. Thus, the CPU is receiving
an ICMP packet flood.
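These are the debug commands referred to above, entered one at a time in privileged EXEC mode. Remember to turn debugging off when you are finished, for example with the undebug all command.
Switch# debug platform cpu-queue host-q
Switch# debug platform cpu-queue icmp-q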
Examine the output for information about the source, including the VLAN (200) and source MAC
address (0000.0300.0101) of this packet (shown in bold text).
*Mar 2 22:48:16.947: ICMP-Q:Dropped Throttle timer not awake: Remote Port Blocked
L3If:Vlan200 L2If:GigabitEthernet1/0/3 DI:0xB4, LT:7, Vlan:200
SrcGPN:3, SrcGID:3,
ACLLogIdx:0x0, MacDA:001d.46be.7541, MacSA: 0000.0300.0101
IP_SA:10.10.200.1
IP_DA:10.10.200.5 IP_Proto:1
TPFFD:ED000003_008B00C8_00B00222-000000B4_00040000_03090000
Enter the show mac address-table privileged EXEC command for the VLAN to see the MAC
address table and to find the interface where this MAC address was learned. The output shows the
packets are being received on interface Gigabit Ethernet 1/0/3 (shown in bold text).
Switch# show mac address-table dynamic vlan 200
          Mac Address Table
-------------------------------------------

Vlan    Mac Address       Type      Ports
----    -----------       ----      -----
 200    0000.0300.0101    DYNAMIC   Gi1/0/3
!<Output truncated>
You can use these steps for the different packet types when the CPU is being flooded by a single flow.
Continue to enable the debugging of the different CPU queues until the console is flooded. See the CPU
Receive Queues section on page 21 for details about the CPU queues.
Beginning in privileged EXEC mode, follow these steps to view the system log with the debug messages:
         Command               Purpose
Step 1   terminal length 0     Set the number of lines on the terminal screen for the current session to 0 so that the output does not pause.
Step 2   show logging          Display the messages in the system log, including the debug messages.
Step 3   terminal length 30    Set the terminal length back to 30, or reset it to the original value.
Step 4   exit                  Exit the session.
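For reference, the same steps on the CLI:
Switch# terminal length 0
Switch# show logging
Switch# terminal length 30
Switch# exit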
Note
If you modified the configuration before debugging by increasing the system log buffer or adding a
timestamp, consider returning these settings to the default configuration when debugging is complete.
To limit Ethernet broadcast or multicast packet storms, use the storm-control {broadcast | multicast | unicast} level {level [level-low] | bps bps [bps-low] | pps pps [pps-low]} interface configuration command (see the example after this list). See the Configuring Port-Based Traffic Control chapter in the switch software configuration guide.
If the root cause of the high CPU utilization is a Layer 2 loop, the spanning tree configuration could
be the problem. See the Configuring STP chapter in the switch software configuration guide.
Policing traffic can limit the number of packets that enter a switch. Policing can deny ingress traffic,
limit it to a specific bits-per-second rate, or permit some traffic while limiting other traffic. You can
police traffic on the MAC address, the IPv4 header, the IPv6 header (if IPv6 is supported on the
switch), or the Layer 4 port number. See the chapters on Configuring Network Security with
ACLs, Configuring IPv6 ACLs (if supported on the switch), and Configuring QoS in the switch
software configuration guide.
To prevent IP ARP packets from affecting the CPU utilization on Layer 3 switches, configure Dynamic ARP Inspection (DAI), and enter the ip arp inspection limit {rate pps [burst interval seconds] | none} interface configuration command to use the rate-limiting feature (see the example after this list). See the chapter on Configuring Dynamic ARP Inspection in the switch software configuration guide.
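These are illustrative configurations for the storm-control and ARP rate-limiting mitigations described above. The interface, the storm-control thresholds, and the ARP rate are example values only and must be tuned for your network; DAI itself must also be enabled on the VLAN, as described in the configuration guide.
Switch# configure terminal
Switch(config)# interface gigabitethernet1/0/3
Switch(config-if)# storm-control broadcast level 20.00
Switch(config-if)# storm-control multicast level 30.00
Switch(config-if)# ip arp inspection limit rate 100 burst interval 1
Switch(config-if)# end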
Switch# show controllers cpu-interface
                   retrieved   dropped   invalid   hol-block   stray
                   ---------   -------   -------   ---------   -----
                     2811788         0         0           0       0
                      944641         0         0           0       0
                      280645         0         0           0       0
                      813536         0         0           0       0
                        8787         0         0           0       0
                        2808         0         0           0       0
                    65614320         0         0           0       0
                          25         0         0           0       0
                      794570         0         0           0       0
                           0         0         0           0       0
                       18941         0         0           0       0
                           0         0         0           0       0
                           0         0         0           0       0
                           0         0         0           0       0
                           0         0         0           0       0
                     1717274         0         0           0       0
You can also use the show platform ip unicast statistics privileged EXEC command to show the same information about punted packets. The punted IP packets are counted as CPUAdj, shown in bold in this example.
Switch# show platform ip unicast statistics
Global Stats:
HWFwdLoc:0 HWFwdSec:0 UnRes:0 UnSup:0 NoAdj:0
EncapFail:0 CPUAdj:1344291253 Null:0 Drop:0
Prev Global Stats:
HWFwdLoc:0 HWFwdSec:0 UnRes:0 UnSup:0 NoAdj:0
EncapFail:0 CPUAdj:1344291253 Null:0 Drop:0
These statistics are updated every 2 to 3 seconds. Enter the command multiple times to see the change
in the CPUAdj counts. When the CPUAdj counts are rapidly incrementing, many IP packets are being
forwarded to the CPU for IP routing.
                  Max             Used
                  Masks/Values    Masks/Values
                  6364/6364       31/31
                  1120/1120       1/1
                  6144/6144       4/4
                  2048/2048       2047/2047
                  452/452         12/12
                  512/512         21/21
                  964/964         30/30
In this example, the IP indirectly-connected routes resource is full even though the output shows only 2047 of the 2048 maximum in use. If the switch TCAM is full, the hardware routes packets only for destination IP addresses that are in the TCAM. All other IP packets that miss the TCAM are punted to the CPU. A full TCAM together with increasing sw forwarding counts in the show controllers cpu-interface command output means that punted packets are causing high CPU utilization.
Cisco IOS learns about routes from routing protocols, such as BGP, RIP, OSPF, EIGRP, and IS-IS, and from statically configured routes. You can enter the show platform ip unicast counts privileged EXEC command to see how many of these routes were not properly programmed into the TCAM.
Switch# show platform ip unicast counts
# of HL3U fibs 2426
# of HL3U adjs 4
# of HL3U mpaths 0
# of HL3U covering-fibs 0
# of HL3U fibs with adj failures 0
Fibs of Prefix length 0, with TCAM fails: 0
Fibs of Prefix length 1, with TCAM fails: 0
Fibs of Prefix length 2, with TCAM fails: 0
Fibs of Prefix length 3, with TCAM fails: 0
Fibs of Prefix length 4, with TCAM fails: 0
Fibs of Prefix length 5, with TCAM fails: 0
Fibs of Prefix length 6, with TCAM fails: 0
Fibs of Prefix length 7, with TCAM fails: 0
Fibs of Prefix length 8, with TCAM fails: 0
Fibs of Prefix length 9, with TCAM fails: 0
Fibs of Prefix length 10, with TCAM fails: 0
Fibs of Prefix length 11, with TCAM fails: 0
Fibs of Prefix length 12, with TCAM fails: 0
Fibs of Prefix length 13, with TCAM fails: 0
Fibs of Prefix length 14, with TCAM fails: 0
Fibs of Prefix length 15, with TCAM fails: 0
Fibs of Prefix length 16, with TCAM fails: 0
Fibs of Prefix length 17, with TCAM fails: 0
Fibs of Prefix length 18, with TCAM fails: 0
Fibs of Prefix length 19, with TCAM fails: 0
Fibs of Prefix length 20, with TCAM fails: 0
Fibs of Prefix length 21, with TCAM fails: 0
Fibs of Prefix length 22, with TCAM fails: 0
Fibs of Prefix length 23, with TCAM fails: 0
Fibs of Prefix length 24, with TCAM fails: 0
Fibs of Prefix length 25, with TCAM fails: 0
Fibs of Prefix length 26, with TCAM fails: 0
Fibs of Prefix length 27, with TCAM fails: 0
Fibs of Prefix length 28, with TCAM fails: 0
Fibs of Prefix length 29, with TCAM fails: 0
Fibs of Prefix length 30, with TCAM fails: 0
Fibs of Prefix length 31, with TCAM fails: 0
Fibs of Prefix length 32, with TCAM fails: 693
Fibs of Prefix length 33, with TCAM fails: 0
This output shows 693 failures. You can use this statistic to determine how many additional TCAM
resources are needed to hold the routes being advertised in the network at this time.
To view the number of route entries used by each routing protocol, enter the show ip route summary privileged EXEC command.
Switch# show ip route summary
IP routing table name is Default-IP-Routing-Table(0)
IP routing table maximum-paths is 32
Route Source    Networks    Subnets    Overhead    Memory (bytes)
connected       5           0          320         760
static          0           0          0           0
rip             0           2390       152960      363280
internal        1                                  1172
Total           6           2390       153280      365212
Switch#
<output showing the SDM template per-resource limits: 6K, 1K, 8K, 6K, 2K, 0, 0.5K, 1K>
The default template on this switch allows only 2048 indirect Layer 3 routes in the TCAM. To allocate
more TCAM resources to indirect Layer 3 routes, you must reduce some other TCAM resources. Use
the sdm prefer template-name global configuration command to change to a template that reserves more
resources for IP routing.
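For example, to change to a routing-optimized template (shown here with the routing keyword; verify the template names that are available on your switch, as described below), you could enter the following. The new template takes effect only after the switch reloads.
Switch# configure terminal
Switch(config)# sdm prefer routing
Switch(config)# end
Switch# reload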
To see a list of available SDM templates for your switch, enter the show sdm templates all privileged
EXEC command.
Switch# show sdm templates all
Id  Type        Name
 0  desktop     desktop default
 1  desktop     desktop vlan
 2  desktop     desktop routing
 3  aggregator  aggregate default
 4  aggregator  aggregate vlan
 5  aggregator  aggregate routing
 6  desktop     desktop routing pbr
 8  desktop     desktop IPv4 and IPv6 default
 9  desktop     desktop IPv4 and IPv6 vlan
10  aggregator  aggregate IPv4 and IPv6 default
11  aggregator  aggregate IPv4 and IPv6 vlan
12  desktop     desktop access IPv4
13  aggregator  aggregator access IPv4
14  desktop     desktop IPv4 and IPv6 routing
15  aggregator  aggregator IPv4 and IPv6 routing
16  desktop     desktop IPe
17  aggregator  aggregator IPe
Note
See the chapter on Configuring SDM Templates in the switch software configuration guide to see the
templates available on your switch and the reserved TCAM resources for each template.
Optimizing IP Routes
When it is not possible or practical to change the SDM template on a Layer 3 switch, you can reduce the
number of routes in the TCAM by using summary routes or by filtering routes.
Using summary routes reduces the routing table size. You enable summary routes on peer routers. Route summarization is enabled by default for RIP and EIGRP and disabled by default for OSPF. To learn about route summarization, see the Configuring IP Unicast Routing chapter in the software configuration guide (Layer 3 switches only).
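As an illustration, a summary route can be advertised from the upstream peer router. This sketch assumes EIGRP autonomous system 100, the summary range 10.10.0.0/16, and interface GigabitEthernet0/1, all of which are example values only:
Router# configure terminal
Router(config)# interface gigabitethernet0/1
Router(config-if)# ip summary-address eigrp 100 10.10.0.0 255.255.0.0
Router(config-if)# end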
You can use route filtering to prevent unwanted routes from being programmed into the TCAM. For
information about OSPF route filtering, see the feature guide at this URL:
http://www.cisco.com/en/US/docs/ios/12_0s/feature/guide/routmap.html
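As an example of route filtering, an OSPF distribute list with a prefix list can keep specific routes out of the routing table, and therefore out of the TCAM, on the Layer 3 switch. The prefix-list name and the filtered range are example values only:
Switch# configure terminal
Switch(config)# ip prefix-list NO-LAB deny 192.168.0.0/16 le 32
Switch(config)# ip prefix-list NO-LAB permit 0.0.0.0/0 le 32
Switch(config)# router ospf 1
Switch(config-router)# distribute-list prefix NO-LAB in
Switch(config-router)# end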
Table 3 lists common system processes with the normal CPU utilization, the utilization level that is considered high, and a suggested action for each. For example, more than 50 percent is considered high for the HACL process. For the SNMP Engine process, one suggested action concerns SNMP queries of the flash file system on the switch: accessing the flash file system is a CPU-intensive operation for SNMP Get or SNMP GetNext operations.
Helpful Information
CPU Receive Queues
A packet sent to the CPU by switch hardware goes into one of 16 CPU queues, depending on the packet
type. Each queue is given a priority, allowing the CPU to drain higher priority queues before lower
priority queues. Each queue has some reserved memory in hardware to hold packets for the queue so that
one queue or packet type cannot use all the available memory.
These are the CPU queues and their uses:
rpc: Remote procedure call. Used by Cisco system processes to communicate across the stack.
stp: Spanning Tree Protocol. A Layer 2 protocol with its own queue, also used for protocol packets such as LACP.
routing protocol: Used for routing protocol packets received from other network devices.
remote console: Used for packets when you enter the session switch-number privileged EXEC command on a stack master switch to open the console on another switch member.
sw forwarding: Used for packets punted by the hardware for the CPU to route.
host: Used for packets with a destination IP address matching any switch IP address, and for IP broadcast packets.
Table 4 summarizes commands that are helpful when troubleshooting high CPU utilization.

Table 4

Command                              Purpose                                                                        Usage
show ip traffic                      Shows counts of IP packets sent and received, by packet type.                  Identify counts that are particularly high and incrementing rapidly.
show platform port-asic stats drop   Shows CPU packets discarded due to congestion.                                 Identify CPU receive queues that are dropping packets due to flooding.
show processes cpu history           Shows a history of CPU utilization for 60 seconds, 60 minutes, and 72 hours.   Determine baseline CPU usage, and identify when spikes occur.
Additional Documents
Another document on Cisco.com focuses on specific high utilization issues in the Catalyst 3750 switch,
although the information also applies to other switches. See Catalyst 3750 Series Switches High CPU
Utilization Troubleshooting.
Unresolved Issues
If the troubleshooting steps in this document do not help you to determine the root cause of high CPU utilization, contact the Cisco Technical Assistance Center (TAC). The technical assistance engineer will want to see the same information that you gathered during your debugging efforts. Have this information ready when you contact Cisco technical support to reduce the time to resolve the problem.
Note
See the next section for a link for submitting a service request.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of
Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The
use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and
figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses in illustrative content is unintentional and
coincidental.
© 2008 Cisco Systems, Inc. All rights reserved.