
and are serviced without compromising the priority settings administered by the network manager. Strict priority
queuing ensures that the highest priority packets will always get serviced first, ahead of all other traffic, and allows
the other three queues to be serviced using WRR scheduling. In conjunction with scheduling, the Catalyst 3550
Gigabit Ethernet ports support congestion control via Weighted Random Early Detection (WRED). WRED sets queue
thresholds at which packets begin to be dropped, heading off congestion before the queues overflow.
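The interaction of strict-priority servicing, WRR weights, and WRED thresholds can be sketched in a few lines. The following is an illustrative model only, not Catalyst 3550 behavior: the queue count matches the four queues described above, but the weights, thresholds, and the choice to exempt the priority queue from WRED are example assumptions.

```python
import random
from collections import deque

# Illustrative model: queue 0 is strict priority; queues 1-3 share the
# remaining bandwidth via WRR. Weights and thresholds are example values.
WRR_WEIGHTS = {1: 3, 2: 2, 3: 1}        # packets served per WRR round
WRED_MIN, WRED_MAX = 20, 40             # WRED queue-depth thresholds

queues = {q: deque() for q in range(4)}

def enqueue(qid, pkt):
    """Apply a simplified WRED check before queuing (queue 0 exempt here)."""
    depth = len(queues[qid])
    if qid != 0 and depth >= WRED_MAX:
        return False                     # hard drop above the max threshold
    if qid != 0 and depth >= WRED_MIN:
        # drop probability rises linearly between the two thresholds,
        # shedding load before the queue actually overflows
        p_drop = (depth - WRED_MIN) / (WRED_MAX - WRED_MIN)
        if random.random() < p_drop:
            return False
    queues[qid].append(pkt)
    return True

def dequeue_round():
    """One scheduling pass: drain the strict-priority queue, then WRR."""
    served = []
    while queues[0]:                     # strict priority always goes first
        served.append(queues[0].popleft())
    for qid, weight in WRR_WEIGHTS.items():
        for _ in range(weight):          # each queue gets 'weight' slots
            if queues[qid]:
                served.append(queues[qid].popleft())
    return served
```

Note how the strict-priority queue is fully drained before any WRR queue is touched, which is exactly why voice traffic placed in that queue is insulated from bulk data in the others.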
These features allow network administrators to prioritize mission-critical and/or bandwidth-intensive traffic, such
as Enterprise Resource Planning (ERP) (Oracle, SAP, etc.), voice (IP telephony traffic) and CAD/CAM over less
time-sensitive applications such as FTP or e-mail (Simple Mail Transfer Protocol [SMTP]). For example, a large file
download destined to one port on a wiring closet switch should not degrade the quality of voice traffic destined to
another port on that switch, such as by increasing its latency. This condition is avoided by ensuring that voice
traffic is properly classified and prioritized throughout the network. Other applications, such as Web browsing,
can be treated as low priority and handled on a best-effort basis.
The Cisco Catalyst 3550 is capable of performing rate limiting via its support of the Cisco Committed Information
Rate (CIR) functionality. Through CIR, bandwidth can be guaranteed in increments as low as 8 Kbps. Bandwidth can
be allocated based on several criteria including MAC source address, MAC destination address, IP source address, IP
destination address, and TCP/UDP port number. Bandwidth allocation is essential in network environments requiring
service-level agreements or when it is necessary for the network manager to control the bandwidth given to certain
users. Each Catalyst 3550 switch 10/100 port supports 8 aggregate or individual ingress policers and 8 aggregate
egress policers. Each Catalyst 3550 Gigabit Ethernet port supports 128 aggregate or individual policers and 8
aggregate egress policers. This gives the network administrator very granular control of the LAN bandwidth.
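CIR policing of this kind is conventionally implemented with a token bucket: tokens accrue at the committed rate, and a packet conforms only if enough tokens are available to cover it. The sketch below is a generic single-rate token bucket, not the switch's internal implementation; the class name, burst size, and rates are illustrative.

```python
# Illustrative single-rate token-bucket policer, the mechanism typically
# used to enforce a Committed Information Rate (CIR). Values are examples.

class TokenBucketPolicer:
    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes         # bucket depth (max accumulated credit)
        self.tokens = burst_bytes        # start with a full bucket
        self.last = 0.0                  # timestamp of the last packet seen

    def conform(self, now, pkt_bytes):
        """Return True if the packet conforms to the CIR; else drop or mark."""
        # accrue tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False

# A flow policed to 8 Kbps, the minimum CIR increment mentioned above
policer = TokenBucketPolicer(cir_bps=8000, burst_bytes=1500)
```

An 8 Kbps CIR refills the bucket at only 1000 bytes per second, so after an initial burst the flow is throttled to roughly one full-size Ethernet frame every 1.5 seconds.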
Network Scalability through High-Performance IP Routing
With hardware-based IP routing and the Enhanced Multilayer Software Image, the Catalyst 3550 switches deliver
high performance dynamic IP routing. The Cisco Express Forwarding (CEF)-based routing architecture allows for
increased scalability and performance. This architecture allows for very high-speed lookups while also ensuring the
stability and scalability necessary to meet the needs of future requirements. In addition to dynamic IP unicast routing,
the Catalyst 3550 Series is perfectly equipped for networks requiring multicast support. Multicast routing protocol
(PIM) and Internet Group Management Protocol (IGMP) snooping in hardware make the Catalyst 3550 Series
switches ideal for intensive multicast environments.
These switches offer several advantages to improve network performance when used as a stackable wiring closet
switch or as a top-of-the-stack wiring closet aggregator switch. For example, implementing routed uplinks from
the top of the stack will improve network availability by enabling faster failover protection and simplifying the
Spanning-Tree Protocol algorithm by terminating all Spanning-Tree Protocol instances at the aggregator switch.
If one of the uplinks fails, quicker failover to the redundant uplink can be achieved via a scalable routing protocol
such as Open Shortest Path First (OSPF) or Enhanced Interior Gateway Routing Protocol (EIGRP) rather than relying
on standard Spanning-Tree Protocol convergence. Redirection of a packet after a link failure via a routing protocol
results in faster failover than a solution that uses Layer 2 Spanning Tree enhancements. Additionally, routed uplinks
allow better bandwidth utilization by implementing equal cost routing (ECR) on the uplinks to perform load
balancing. This results in dynamic load balancing in a part of the network that often acts as the bottleneck. Finally,
routed uplinks optimize the utility of uplinks out of the wiring closet by eliminating unnecessary broadcast data
flows into the network backbone.
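Equal-cost load balancing across redundant routed uplinks is commonly done by hashing a flow identifier to choose among the equal-cost next hops, so each flow sticks to one uplink (preserving packet order) while the set of flows spreads across both. The sketch below illustrates that idea; the uplink names and the CRC32 hash are example choices, not the switch's actual algorithm.

```python
import zlib

# Two hypothetical equal-cost routed uplinks out of the wiring closet
UPLINKS = ["GigabitEthernet0/1", "GigabitEthernet0/2"]

def pick_uplink(src_ip, dst_ip, uplinks=UPLINKS):
    """Hash the flow identifier to select one equal-cost next hop.

    Deterministic per-flow hashing keeps every packet of a flow on the
    same uplink, while different flows spread across all uplinks.
    """
    key = f"{src_ip}->{dst_ip}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]
```

If one uplink fails, the routing protocol withdraws that next hop and the same hash simply maps all flows onto the surviving uplink, which is the fast-failover behavior described above.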