Quality of Service Design
The QoS provided by EuropCom to its IPv4 enterprise customers is a key feature of its service marketing. The demand for guaranteed levels of service has risen because most EuropCom VPN customers have migrated their mission-critical applications, including voice and video, over IP/MPLS. A primary constraint of EuropCom's IPv6 deployment is to prevent any impact on IPv4 QoS. At the same time, many IPv6 applications are IPv4 applications migrating to IPv6 (critical data, voice, and video). EuropCom customers expect them to run over IP/MPLS and get the same QoS as their IPv4 counterparts.
IPv6's interaction with the existing QoS deployment for IPv4 is multifaceted:
- Impact of IPv6 traffic on IPv4 QoS, and its handling with respect to the QoS-based processing of IPv4 traffic
- Impact of IPv6 QoS configuration on the network edge layer, and its interaction with IPv4 QoS
- Operation costs introduced by IPv6 QoS
The EuropCom MPLS backbone is overengineered and can accommodate the IPv4 as well as the additional IPv6 traffic. IPv6 traffic growth forecasts for the next two to three years justify maintaining the current core design for two major reasons:
- IPv6 adoption by EuropCom enterprise customers is expected to be slow over the next few years, and most of the early adopters will run trials with a limited number of end users long before they engage in full-scale deployments.
- Traffic generated by bandwidth-intensive applications, such as voice and video, will not come on top of its IPv4 counterpart but rather replace it, initially at a slow pace. Apart from a small overhead (the IPv6 header is 20 bytes larger than the IPv4 header), the IPv6 traffic growth due to such applications should therefore have a limited impact on core link utilization.
Overengineering the backbone has saved EuropCom from implementing differentiated services (DiffServ) in the core network. For this reason, it does not need to account for the mix of classes received from customer CEs and has not configured any QoS mechanism at the ingress PE. For IPv6, the same strategy applies, and no QoS mechanism or configuration needs to take place on the ingress interfaces. Note that EuropCom customers can configure QoS on the CE-PE interfaces at their convenience, leading to two possible scenarios:
The application (a good example is IP telephony) marks the Precedence field in the IP packet, and the CE uses the precedence in its CE-PE policy. In that case, assuming the application can also handle the marking of IPv6 packets, the customer CE can police IPv6 traffic without any configuration change. Example 13-23 illustrates the QoS setup of Nice-CE-Cisco.
Example 13-23. QoS Configuration of CE Router Nice-CE-Cisco
hostname Nice-CE-Cisco !CE#35
..
interface Serial0/0
ip address 172.21.4.2 255.255.255.0
ipv6 address 2001:6FC:1123:1:33::1/128
ipv6 address FE80::52DE:35 link-local
service-policy out policy-CE-PE-QoS
!
..
class-map class-interactive
match precedence 3
class-map class-non-interactive
match precedence 4 6 7
class-map class-RT
match precedence 5
!
policy-map policy-CE-PE-QoS
..
Because match precedence is IP protocol independent, the preceding configuration is unchanged when IPv6 is activated. The customer CE classifies traffic, polices it, and marks the Precedence field for PE-CE DiffServ on the remote EuropCom edge. Classification on the CE can get very complicated and may involve deep packet inspection. In simple cases, the source/destination addresses and ports are used to distinguish between real-time, high-priority, and Best Effort traffic. Examples of classification and DSCP marking are provided in Chapter 5, "Implementing QoS."
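As an illustration of the simple case, a port-based classification and marking policy on the customer CE might look as follows. This is a hedged sketch only: the ACL names, port range, and class names are hypothetical (not part of the EuropCom design), and the exact syntax for referencing IPv6 ACLs in a class map varies by IOS release.

```
! Hypothetical sketch: classify RTP-style voice traffic by UDP port range,
! then mark it with precedence 5 for the PE-CE policy.
ip access-list extended acl-voice-v4
 permit udp any any range 16384 32767
ipv6 access-list acl-voice-v6
 permit udp any any range 16384 32767
!
class-map match-any class-voice
 match access-group name acl-voice-v4
 match access-group name acl-voice-v6
!
policy-map policy-classify-mark
 class class-voice
  set precedence 5
```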
On the egress side, some EuropCom customers have requested DiffServ to be activated on the PE-CE interfaces. Because QoS is not activated at the ingress or in the core, the Precedence field set in packets received from the customer network is carried transparently up to the egress PE. EuropCom can use the value to apply PHBs on the PE-CE link. EuropCom has defined (for IPv4) the precedence-to-PHB mapping shown in Table 13-8 on the PE-CE interface.
Table 13-8. Precedence-to-PHB Mapping

Precedence Value | PHB  | Type of Traffic
-----------------|------|-----------------------
0, 1, 2, 3       | BE   | Best Effort traffic
4, 6, 7          | AF41 | High-priority traffic
5                | EF   | Real-time traffic
Note
During congestion, it is important to protect critical control traffic, such as routing protocol traffic. Control traffic is tagged with precedence 6, and its prioritized handling is implemented with the help of the Selective Packet Discard (SPD) mechanism. IPv6 control traffic must also be marked with precedence 6 to avoid being dropped by SPD under congestion conditions.
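The precedence value is simply the top 3 bits of the IPv4 ToS byte (or the IPv6 Traffic Class byte), so the mapping of Table 13-8 can be sketched in a few lines of Python. The function and dictionary names are illustrative only, not part of any EuropCom tooling:

```python
# PHB mapping per Table 13-8; precedence is the top 3 bits of the
# IPv4 ToS byte / IPv6 Traffic Class byte.
PREC_TO_PHB = {
    0: "BE", 1: "BE", 2: "BE", 3: "BE",   # Best Effort traffic
    4: "AF41", 6: "AF41", 7: "AF41",      # High-priority traffic
    5: "EF",                              # Real-time traffic
}

def precedence(traffic_class: int) -> int:
    """Extract the 3-bit precedence from an 8-bit ToS/Traffic Class byte."""
    return (traffic_class >> 5) & 0x7

def phb(traffic_class: int) -> str:
    """Return the PHB applied on the PE-CE link, per Table 13-8."""
    return PREC_TO_PHB[precedence(traffic_class)]
```

For instance, a packet marked DSCP EF carries ToS byte 0xB8, which yields precedence 5 and the EF PHB; routing-protocol traffic marked CS6 carries ToS byte 0xC0, which yields precedence 6.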
When enabling VPNv6 on the egress PE, no change is expected in the QoS configuration. Example 13-24 illustrates the QoS configuration of Nice-PE-VPN.
Example 13-24. QoS Configuration of PE Router Nice-PE-VPN
hostname Nice-PE-VPN !PE#27
..
!
interface Serial0/0
vrf forwarding Cisco-Nice
ip address 172.21.4.1 255.255.255.0
ipv6 address FE80::83D7:27 link-local
service-policy out policy-PE-CE-QoS
!
..
class-map class-PrecHigh
match precedence 4 6 7
class-map class-RT
match precedence 5
!
policy-map policy-PE-CE-QoS
class class-PrecHigh
priority percent 40
class class-RT
priority percent 50
random-detect
class class-default
bandwidth percent 10
random-detect
The match precedence command applies to both IPv4 and IPv6 packets. The behavior on PE-CE links is therefore controlled by the precedence values set by the customer at the ingress CE (or by the application). The customer can decide to use the same classification policy for IPv4 and IPv6.
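Should a customer nevertheless want to treat the two protocols differently, IOS also offers IP-version-specific matching. A minimal sketch, assuming an IOS release that supports the keywords shown (the class names are hypothetical):

```
! Hypothetical sketch: "match ip precedence" matches IPv4 packets only,
! whereas "match precedence" matches both IPv4 and IPv6. Combining
! "match protocol ipv6" with "match precedence" isolates IPv6 traffic.
class-map match-all class-v4-RT
 match ip precedence 5
class-map match-all class-v6-RT
 match protocol ipv6
 match precedence 5
```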