Wednesday, October 3, 2012

QUEUING:

  • FIFO QUEUING
    • Simplest queuing method and the easiest to implement
    • The only tunable parameter is the queue depth
    • Other queuing methods change the order of packets
    • We cannot modify the tx-ring's behavior; we can only modify the output (software) queue
    • The tx-ring is controlled by the clocking & serialization rate of the line
    • The output queue is controlled by how large the buffer is that holds traffic waiting to be sent to the tx-ring
    • To configure FIFO, disable any previously enabled queueing mechanism
    • Serialization of the packet occurs at the tx-ring
    • Once the output queue is full, packets arriving at the tail of the queue are tail-dropped
  • TCP window: A TCP window is the amount of unacknowledged data a sender can send on a particular connection before it gets an acknowledgment back from the receiver that it has received some of the data.
  • TCP sliding window:
    • The sending device can send all packets within the TCP window size (as specified in the TCP header) without receiving an ACK, and should start a timeout timer for each of them.
    • After receiving the ACK from the receiving device, the sending device slides the window.
    • In this example, the sending device can send up to 5 TCP Segments without receiving an acknowledgement from the receiving device.
    • After receiving the acknowledgement for Segment 1 from the receiving device, the sending device can slide its window one TCP Segment to the right and transmit Segment 6 as well.
    • If any TCP Segment is lost on its way to the destination, the receiving device cannot acknowledge past it.
    • Suppose that during transmission all Segments reach the destination except Segment 3.
    • The receiving device can acknowledge only up to Segment 2.
    • At the sending device, a timeout will occur and it will re-transmit the lost Segment 3.
    • Now the receiving device has received all the Segments, since only Segment 3 was lost.
    • The receiving device will now send the ACK for Segment 5, because it has received all the Segments up to Segment 5.
    • The acknowledgement for Segment 5 assures the sender that the receiver has successfully received all the Segments up to 5.
  • By default, fair queuing is enabled (on low-speed serial interfaces); packets are scheduled based on their weight
  • Before configuring FIFO, remove the fair queue that is enabled by default (see the config sketch after this list)
  • hold-queue: the number of packets that fit in the software (output) queue
  • Serialization of a packet: putting the packet's bits onto the wire/optical interface; this is done at the TxRing
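
A minimal sketch of the FIFO notes above (the interface name and queue depth are assumed example values, not from these notes): removing the default fair queue makes the interface fall back to FIFO, and hold-queue sizes the software queue that feeds the tx-ring.

    interface Serial0/0
     no fair-queue       ! remove the default WFQ; the interface now queues FIFO
     hold-queue 64 out   ! software output queue depth in packets, feeding the tx-ring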

TRAFFIC SHAPING

  • Designed to slow down the flow of packets from the output queue to the tx-ring
  • Goal
    • Smooth out or normalize the traffic rate when we are sending packets out the interface
    • Delay & queue the packets when traffic exceeds the configured rate
  • Shaping is required only when
    • The physical serialization rate of the link, which is based on the physical clocking of the interface hardware, is higher than the rate we actually want to send at
    • i.e. where the outbound serialization rate does not match the rate being policed inbound at the far end
  • Generally, traffic is shaped outbound based on the inbound policing rate
  • Traffic exceeding the outbound rate is delayed in the shaping queue.
  • IOS traffic shaping (see the sketch after this list)
    • Delays the exceeding traffic only
    • If the shaping queue fills up, traffic exceeding the rate will eventually be dropped
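
A minimal MQC sketch of the shaping behaviour described above (the interface name and the 512 kbps target rate are assumed example values):

    policy-map SHAPE-OUT
     class class-default
      shape average 512000          ! delay & queue traffic exceeding 512 kbps
    !
    interface Serial0/0
     service-policy output SHAPE-OUT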

RSVP

  • Resource Reservation Protocol
  • Used for IntServ models
  • Both source and destination must support RSVP
  • RSVP IntServ flow reservations
    • PATH messages flow downstream from the source ("can you support this specific reservation?")
    • RESV messages flow upstream from the destination if it agrees
  • A reservation contains
    • Flow spec
      • Tspec (traffic spec, token bucket parameters: rate/burst)
      • Rspec (reservation spec, type of service)
    • Filter spec: the flow sources to reserve resources for
  • RSVP requires CBWFQ or WFQ
    • Because each RSVP flow can get a separate weight
  • Configuration command
    • Configured on the interface, based on the interface bandwidth and the Tspec
    • ip rsvp bandwidth ("I am available to accept reservations on my interface")
  • Does not work with HQF
    • Should be WFQ, or CBWFQ with WFQ in class-default
    • Otherwise only used for MPLS traffic engineering bandwidth requests
  • RSVP works with per-VC queues
    • Requires legacy FRTS
    • The PVC queue must be WFQ/CBWFQ
    • ip rsvp resource-provider
  • Configuration (see the sketch after this list)
    • Sender
      • ip rsvp sender-host
    • Receiver
      • ip rsvp reservation-host
    • Transit nodes
      • ip rsvp bandwidth
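
A minimal end-to-end sketch of the RSVP commands listed above (interface name, addresses, ports, and rates are assumed example values, not from these notes):

    ! transit node: allow RSVP to reserve up to 256 kbps on the interface
    interface Serial0/0
     fair-queue
     ip rsvp bandwidth 256
    !
    ! router acting as the sender host: request 64 kbps / 8 kB burst for a UDP flow
    ip rsvp sender-host 10.2.2.2 10.1.1.1 udp 5004 5004 64 8
    !
    ! router acting as the receiver host: fixed-filter reservation at the same rate
    ip rsvp reservation-host 10.2.2.2 10.1.1.1 udp 5004 5004 ff rate 64 8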

Weighted Fair Queuing (WFQ)


#fair-queue <CDT> <N Flow Queues> <N Reservable Queues>
  • The number of reservable conversations (queues) is the number of flow-queues available to RSVP reservations (if any). Those flows have a very small weight value, and thus are preferred above any other flows. 
  • In addition to reserved flow queues, there are special “Link Queues”. 
  • The number of these queues is fixed at 8, and they are numbered right after the maximum dynamic queue (e.g. if there are 32 dynamic queues, “Link Queues” start at number 33). 
  • WFQ uses those queues to service routing protocol traffic and Layer 2 keepalives – everything that is critical to router operations and management. 
  • Each such queue has a weight of 1024, which is lower than any dynamic queue can get, so control plane traffic has priority over regular data traffic.
  • WFQ uses an intelligent congestion management solution that provides “fair” sharing of the interface bandwidth between multiple traffic flows.
  • A traffic “flow” (or conversation) is a unidirectional sequence of packets, 
    • defined based on 
      • the protocol type, 
      • the source/destination IP addresses,
      • the source/destination port numbers (when available), 
      • and partially on the IPv4 ToS byte value.
  • The term “fair” in WFQ refers to max-min fairness.
  • IOS implementation of WFQ assigns weights automatically based on the IP Precedence (IPP) value in the packet’s IP header. 
  • The formula is Weight=32384/(IPP+1), where IPP is the IP Precedence of the flow.
  • Specifically, WFQ implements its fair-sharing logic as follows. 
    • First, the scheduler creates a group of flow queues for the interface, based on a manual setting or an automatic calculation derived from the interface bandwidth. 
    • When a new packet arrives for output scheduling, the scheduler applies a special hash function to the packet's source/destination IP/port values to yield the flow queue number. 
    • This is why the number of queues is always a power of 2: the hash output value falls between 0 and 2^N - 1.
  • This procedure also means that multiple flows may share the same queue (a "hash collision") when the number of flows is large.
  • Each flow queue has a special virtual scheduling time assigned: the amount of time it would take to serialize the packets in the queue across the output interface.
  • This “virtual time” is the total size of all packets stored in the flow queue, scaled by the flow's computational weight.
  • Note that the weight is inversely proportional to IP precedence, and thus more important packets have a smaller virtual scheduling time value (WFQ thinks that it can serialize them “faster”). 
  • It is more appropriate to call this computational weight the “scaling factor” to avoid confusion with the classic meaning of weight.
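
A minimal sketch tying the fair-queue command and the weight formula together (the CDT, queue counts, and IPP values are assumed example values, not recommendations):

    interface Serial0/0
     fair-queue 64 256 4   ! CDT 64 packets, 256 dynamic flow queues, 4 RSVP-reservable queues
    !
    ! Weight = 32384/(IPP+1):
    !   IPP 0 -> 32384/1 = 32384
    !   IPP 5 -> 32384/6 = 5397 (approx.)
    ! lower weight = smaller virtual scheduling time, so higher-precedence flows are serviced sooner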