Network Working Group                                            L. Yong
Internet Draft                                                    W. Hao
                                                             D. Eastlake
Category: Standards Track                                         Huawei
                                                                   A. Qu
                                                                MediaTek
                                                               J. Hudson
                                                                 Brocade
                                                             U. Chunduri
                                                                Ericsson
Expires: May 2015                                       November 9, 2014

                       IGP Multicast Architecture
                draft-yong-rtgwg-igp-multicast-arch-01

Abstract

   This document specifies an Interior Gateway Protocol (IGP) network
   architecture to support multicast transport. It describes the
   architecture components and the algorithms that automatically build
   a distribution tree for transporting multicast traffic, and it
   provides a method of pruning that tree for improved efficiency.

Status of this document

   This Internet-Draft is submitted to the IETF in full conformance
   with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 9, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents

   1. Introduction
      1.1. Motivation
      1.2. Conventions used in this document
   2. IGP Architecture for Multicast Transport
   3. Computation Algorithms in IGP Multicast Domain
      3.1. Automatic Tree Root Node Selection
      3.2. Distribution Tree Computation
         3.2.1. Parent Selection
         3.2.2. Parallel Local Link Selection
      3.3. Multiple Distribution Trees for a Multicast Group
      3.4. Pruning a Distribution Tree for a Group
   4. Router Forwarding Procedures
      4.1. Packet Forwarding Along a Pruned Distribution Tree
      4.2. Local Forwarding at Edge Router
         4.2.1. Overlay Multicast Transport
      4.3. Multi-homing Access Through Active-active MC-LAG
      4.4. Reverse Path Forwarding Check (RPFC)
   5. Security Considerations
   6. IANA Considerations
   7. Acknowledgements
   8. References
      8.1. Normative References
      8.2. Informative References

1. Introduction

   This document specifies an Interior Gateway Protocol (IGP) network
   architecture to support multicast transport. It describes the
   architecture components and the algorithms that automatically build
   a distribution tree for transporting multicast traffic, and it
   provides a method of pruning that tree for improved efficiency.

   An IGP network is built to transport unicast traffic. Traditionally,
   transporting multicast traffic relies on a separate, routing-
   protocol-independent mechanism, i.e., PIM [RFC4601] [RFC5015].
   The PIM protocol builds on top of the IGP network and maintains its
   own state, which results in a longer convergence time for multicast
   traffic.

   Data Center infrastructure and advanced systems for cloud
   applications call for an IGP network that transports both unicast
   and multicast packets in a simpler and more efficient way than
   using a separate protocol beyond the IGP (see Section 1.1 for
   motivation).

   This draft proposes an architecture and algorithms for IGP-based
   multicast transport. The architecture and algorithms automatically
   build a bi-directional distribution tree, and a pruned
   bi-directional tree, for a multicast group without the use of PIM.
   The IGP protocol extensions for this architecture are addressed in
   [ISEXT].

1.1. Motivation

   Network-as-a-service can technically be achieved by decoupling the
   network IP space from the service IP space, as with a VXLAN
   [RFC7348] based network overlay. Decoupling the network IP space
   from the service IP address space also provides network agility and
   programmability to applications in a Data Center environment. To
   support all service applications, such an IP network fabric must
   support both unicast and multicast. If the network IP space is
   decoupled from the service IP space, the network itself no longer
   needs manual configuration; an IP network fabric can be formed
   automatically. The resulting "plug and play" behavior can greatly
   simplify network operation.

   With the goal of automation in forming a network fabric and support
   of any type of forwarding behavior the service applications require,
   the IGP protocol should be extended to support:

   1. Network formation

   2. Multi-destination distribution tree computation

   Using external PIM defeats the requirement for automatic operation
   and results in a longer convergence time for multicast transport
   than for unicast transport, because the convergence time for PIM is
   added to the basic IGP unicast route convergence time.

   IGP-based multicast reduces the number of protocols, the amount of
   state, and the convergence time for multicast, which means a simpler
   underlay IP network that supports both unicast and multicast
   transport.

1.2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

2. IGP Architecture for Multicast Transport

   An IGP multicast domain as defined in this document contains edge
   routers and transit routers. Multicast source(s) and receiver(s) in
   a service space attach locally to edge routers or connect to edge
   routers through a Layer 2 or Layer 3 network that belongs to the
   same service space. When an ingress edge router receives a multicast
   packet from a multicast source in the service space, it replicates
   the packet along a pruned tree in the IGP domain. When an egress
   edge router receives a multicast packet from the IGP domain, it
   forwards the packet to the L2 or L3 service network that the
   receivers are on and replicates the packet along the pruned tree in
   the domain. When a transit router receives a multicast packet from
   another router in the domain, it replicates the packet to its
   neighbor router(s) in the domain along a pruned tree.

   An IGP multicast domain is used to carry L2 or L3 multicast traffic
   in a service (tenant) space in a multi-tenant environment.
   Upon receiving a multicast packet from a source, the edge router
   first encapsulates the packet, using its own IP address as the
   source address and the corresponding underlay multicast IP address
   as the destination address of the encapsulated packet, and then
   replicates it along a pruned tree. Egress edge router(s) decapsulate
   the packet before sending it toward the receiver(s).

   In an IGP multicast domain, each router has a unique IP address, and
   that router IP address is advertised as a host address by the IGP
   protocol. An IGP domain can be an IGP multicast domain if all
   routers support the multicast capability described in this document;
   a subset of an IGP domain can be an IGP multicast domain where only
   some edge routers and transit routers have the IGP multicast
   capability described in this draft and in [ISEXT]. In the case where
   the IGP multicast domain is a subset of an IGP domain, a router in
   the IGP multicast domain must have at least one adjacency (next hop)
   to another router that is in the IGP multicast domain; that is, the
   IGP multicast domain must be connected. Configuring an IP tunnel
   between two routers in an IGP multicast domain can achieve this. How
   to configure such a tunnel is outside the scope of this document.

   In an IGP multicast domain, a default distribution tree is
   established automatically (see Section 3.1). Operators may also
   configure other distribution trees with different priorities in the
   domain and specify the associated multicast groups carried by these
   configured trees. By default, all multicast groups use the default
   distribution tree.

   The distribution tree computation algorithm is described in Section
   3.2. Multiple trees supporting one multicast group are described in
   Section 3.3. The tree pruning for a particular multicast group is
   described in Section 3.4. Section 4 describes router forwarding
   procedures.

3. Computation Algorithms in IGP Multicast Domain

3.1. Automatic Tree Root Node Selection

   By default, the tree root is the router with the largest magnitude
   Router ID, considering the Router ID, i.e., the router's IPv4
   address, to be an unsigned integer. Note that the algorithms in the
   following sections use the Router ID as the router identifier, i.e.,
   the unique IP address assigned to a router in an IGP multicast
   domain.

   Operators may configure a default tree root node (based on the
   topology) that takes precedence over the auto-calculated default
   tree root. This configured tree root node advertises its IP address
   as the default tree root for all multicast groups that are not
   assigned to a distribution tree in an IGP multicast domain.

3.2. Distribution Tree Computation

   The distribution tree computation algorithm uses the existing IGP
   Link State Database (LSDB). Based on the LSDB and the shortest path
   algorithm, all routers in an IGP multicast domain calculate the
   distribution tree that has the default tree root node and reaches
   all the edge routers.

   If an operator configures other distribution tree roots on other
   routers, the operator specifies which multicast groups use those
   trees, and the tree root routers advertise themselves as the tree
   root for those multicast groups by use of the new RTADDR TLV
   [ISEXT]. All routers in the domain track the tree root nodes and
   calculate the path toward each configured tree root node using the
   shortest path algorithm, which forms multiple distribution trees.

   It is important that all the routers in an IGP multicast domain
   calculate identical branches for a distribution tree. Sections 3.2.1
   and 3.2.2 specify the tiebreaking rules for parent router selection
   in the case of equal-cost paths and for link selection in the case
   of multiple parallel local links.
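   As a non-normative illustration of the rules above, the following
   sketch (the function names and data structures are ours, not part of
   the protocol) selects the default tree root of Section 3.1 by
   treating dotted-quad Router IDs as unsigned integers, and orders a
   set of tree root addresses from 0 to k-1 in ascending order as the
   tiebreakers require:

```python
import socket
import struct

def rid_value(dotted: str) -> int:
    """Interpret a dotted-quad Router ID as an unsigned 32-bit integer."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

def default_tree_root(router_ids):
    """Section 3.1: by default, the root is the largest-magnitude Router ID."""
    return max(router_ids, key=rid_value)

def order_trees(root_ids):
    """Number trees 0..k-1 in ascending order of root IP address."""
    return sorted(root_ids, key=rid_value)

rids = ["10.0.0.1", "192.0.2.200", "192.0.2.13"]
print(default_tree_root(rids))  # 192.0.2.200 (largest unsigned value)
print(order_trees(rids))        # ['10.0.0.1', '192.0.2.13', '192.0.2.200']
```

   Since every router applies the same deterministic comparison to the
   same advertised addresses, all routers arrive at the same root and
   the same tree numbering.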
   Because link costs can be asymmetric, it is important for all tree
   construction calculations to use the cost towards the root.

3.2.1. Parent Selection

   When there are equal costs from a potential child router to more
   than one possible parent router, all routers need to use the same
   tiebreakers. In such situations, when multiple distribution trees
   are present, it is desirable to allow traffic to be split over as
   many links as possible. This document uses the following tiebreaker
   rules:

   If there are k distribution trees in the domain, then, when each
   router computes these trees, the k calculated trees are ordered and
   numbered from 0 to k-1 in ascending order of root IP address.

   The tiebreaker rule is: when building tree number j, remember all
   possible equal-cost parents for router N. After calculating the
   entire "tree" (actually, a directed graph), for each router N, if N
   has p parents, order the parents in ascending order according to the
   7-octet IS-IS System ID considered as an unsigned integer, and
   number them starting at zero. For tree j, choose N's parent as
   choice (j-1) mod p.

3.2.2. Parallel Local Link Selection

   If there are parallel point-to-point links between two routers, say
   R1 and R2, these parallel links are visible to R1 and R2, but not to
   other routers. If this bundle of parallel links is included in a
   tree, it is important for R1 and R2 to agree on which link to use;
   if the R1-R2 link is a branch of multiple trees, it is desirable to
   split traffic over as many links as possible. However, the local
   link selection for a tree is irrelevant to the other routers.
   Therefore, the tiebreaking algorithm need not be visible to any
   routers other than R1 and R2.

   Suppose there are L parallel links between R1 and R2 and both
   routers are on K trees. The L links are ordered from 0 to L-1 in
   ascending order of Circuit ID, as associated with the adjacency by
   the router with the highest System ID, and the K trees are ordered
   from 0 to K-1 in ascending order of root IP address. The tiebreaker
   rule is: for tree k, select the link as choice k mod L.

   Note that if multiple distribution trees are configured in a domain
   or on a router, better load balancing among the parallel links can
   be achieved through this tiebreaking algorithm. Otherwise, if only
   one tree is configured, only one of the parallel links will be used
   for the corresponding distribution tree. However, calculating and
   maintaining many trees consumes resources; operators need to balance
   between the two.

   Another alternative is to use a lower-level link aggregation
   protocol, such as [802.1AX-2011], on the parallel point-to-point
   links between R1 and R2. They will then appear to be a single link
   to the IGP, and it will be the link aggregation protocol that
   spreads traffic across the actual lower-level parallel links.

3.3. Multiple Distribution Trees for a Multicast Group

   It is possible for a multicast group to be associated with multiple
   trees that may have the same or different priorities. When a
   multicast group is associated with more than one tree, all routers
   have to select the same tree for the group. The tiebreaker rules
   specified in PIM [RFC4601] are used here. They are:

   o Perform a longest match on the group-range to get a list of trees.

   o Select the tree with the highest priority.

   o If there is only one tree with the highest priority, select that
     tree for the group-range.

   o If multiple trees have the highest priority, use the PIM hash
     function to choose one. The PIM hash function is described in
     Section 4.7.2 of RFC 4601 [RFC4601].

3.4. Pruning a Distribution Tree for a Group

   Routers prune the distribution tree for each associated multicast
   group, i.e., they eliminate branches that have no potential
   downstream receivers. Multi-destination packets SHOULD only be
   forwarded on branches that are not pruned. The assumption here is
   that a multicast source is also a multicast receiver, but a
   multicast receiver may not be a multicast source.

   All routers in the domain receive LSP messages carrying the GRADDR
   TLV [RFC7176] from the edge routers, which indicate for which
   multicast groups an edge router is a receiver. Accordingly, the
   routers prune the corresponding distribution tree for each multicast
   group and maintain a list of adjacency interfaces that are on the
   pruned tree for a multicast group. Among these interfaces, one
   interface will be toward the tree root router (unless the router is
   the root) and zero or more interfaces will be toward some edge
   routers.

4. Router Forwarding Procedures

4.1. Packet Forwarding Along a Pruned Distribution Tree

   Forwarding a multi-destination packet follows the pruned tree for
   the group the packet belongs to. It is done as follows:

   o If the router receives a multi-destination packet with a group IP
     address that is not associated with any configured tree, the
     packet MUST be treated as associated with the default tree.

   o Else, check whether the link that the packet arrived on is one of
     the interfaces in the pruned distribution tree. If not, the packet
     MUST be dropped.

   o Else, optionally perform an RPF check (Section 4.4). If the check
     is performed and it fails, the packet SHOULD be dropped.

   o Else, the packet is forwarded onto all the adjacency interfaces in
     the pruned tree for the group except the interface on which the
     packet was received.

4.2. Local Forwarding at Edge Router

   Upon receiving a multicast packet, besides forwarding it along the
   pruned tree, an edge router may also need to forward the packet to
   the local hosts attached to it.
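   The per-packet decision procedure of Section 4.1 above can be
   sketched, non-normatively, as follows; the data structures (the
   group-to-tree map, the pruned-tree interface sets, and the RPF
   callback) are illustrative assumptions, not protocol elements:

```python
def forward_multidestination(pkt, in_if, group_to_tree, pruned_tree_ifs,
                             default_tree, rpf_ok=None):
    """Sketch of the Section 4.1 forwarding rules for one packet.

    group_to_tree:   maps a group address to its configured tree
    pruned_tree_ifs: maps (tree, group) to the interfaces on the pruned tree
    rpf_ok:          optional callable implementing the RPF check (Section 4.4)
    Returns the set of interfaces on which to replicate the packet.
    """
    # A group not associated with any configured tree uses the default tree.
    tree = group_to_tree.get(pkt["group"], default_tree)
    ifs = pruned_tree_ifs.get((tree, pkt["group"]), set())

    # Drop packets arriving on a link that is not on the pruned tree.
    if in_if not in ifs:
        return set()

    # Optional Reverse Path Forwarding Check; drop on failure.
    if rpf_ok is not None and not rpf_ok(pkt, in_if):
        return set()

    # Replicate onto every pruned-tree interface except the arrival one.
    return ifs - {in_if}
```

   For example, with a pruned tree covering interfaces if1..if3, a
   packet arriving on if1 is replicated to if2 and if3, while a packet
   arriving on an interface outside the pruned tree is dropped.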
   This is referred to as local forwarding in this document. The local
   forwarding table and the multicast forwarding table in the IGP
   domain should be stitched together at each edge router. The local
   forwarding table can be generated by an IGMP/PIM protocol running in
   the network between the hosts and the edge router.

   A local group database is needed to keep track of the group
   membership of attached hosts. Each entry in the local group database
   is a [group, host] pair, which indicates that the attached host
   belongs to the multicast group. When receiving a multicast packet,
   the edge router forwards the packet to the hosts that match a
   [group, host] pair in the local group database.

   The local group database is built through the operation of IGMPv3
   [RFC3376]. An edge router sends periodic IGMPv3 Host Membership
   Queries to attached hosts. Hosts then respond with IGMPv3 Host
   Membership Reports, one for each multicast group to which they
   belong. Upon receiving a Host Membership Report for a multicast
   group A, the router updates its local group database by
   adding/refreshing the [group A, host] entry. If, at a later time,
   Reports for group A cease to be heard from the host, the entry is
   deleted from the local group database. The edge router further sends
   an LSP message with the GRADDR TLV to inform the other routers about
   the group memberships in its local group database.

4.2.1. Overlay Multicast Transport

   An IGP multicast domain may be used to carry overlay multicast
   traffic [RFC7365]. There are two architecture scenarios:

   1) The IGP multicast domain edge router is separate from the overlay
   network edge device [RFC7365]. Before multicast traffic is
   forwarded, the overlay network should trigger the underlay multicast
   domain to construct the multicast tree beforehand using the IGMP
   protocol. The group address in the protocol is the underlay
   multicast group address.
   Outer-layer traffic encapsulation is performed on the overlay
   network edge device; the IGP multicast domain acts as a pure
   underlay network.

   2) The IGP multicast domain edge router is collapsed with the
   overlay network edge device. Before multicast traffic is forwarded,
   the locally connected host should trigger the underlay multicast
   domain to construct the multicast tree beforehand using an IGMP-like
   protocol. The group address in the protocol is the overlay multicast
   group address; the edge router should map this group address into an
   underlay multicast group address.

   The IGP multicast domain can support both scenarios. To carry
   overlay multicast traffic, a (designated) edge router (see Section
   4.3 on multi-homing access) must further maintain the mapping
   between an overlay multicast group and an underlay multicast group,
   and it performs packet encapsulation/decapsulation upon receiving a
   packet from a host or from the underlay IGP network. The mapping
   between an overlay multicast group and an underlay multicast group
   can be manually configured or automatically generated by an
   algorithm at a (designated) edge router. The same edge router MUST
   be selected as the Designated Forwarder for the overlay multicast
   group and the underlay multicast group that are associated. If
   multiple overlay multicast groups attach to the same set of edge
   routers, these overlay multicast groups can be mapped to the same
   underlay multicast group to reduce the underlay network multicast
   forwarding table size on each router. The mapping method is beyond
   the scope of this document.

4.3. Multi-homing Access Through Active-active MC-LAG

   A multicast group receiver may attach to multiple edge routers
   through an active-active MC-LAG [802.1AX-2011] to enhance
   reliability.
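   The overlay-to-underlay group mapping of Section 4.2.1 is out of
   scope for this document, but one conceivable automatic scheme
   (purely illustrative; nothing here is specified by this document)
   hashes each overlay group address onto a small configured pool of
   underlay groups, so that all edge routers independently derive the
   same mapping and several overlay groups can share one underlay tree:

```python
import hashlib
import ipaddress

def map_overlay_group(overlay_group: str, underlay_pool: list) -> str:
    """Illustrative only: deterministically map an overlay multicast
    group onto one of a configured pool of underlay multicast groups.
    Every edge router configured with the same pool derives the same
    mapping, with no coordination protocol needed."""
    digest = hashlib.sha256(ipaddress.ip_address(overlay_group).packed).digest()
    index = int.from_bytes(digest[:4], "big") % len(underlay_pool)
    return underlay_pool[index]

pool = ["239.1.0.1", "239.1.0.2", "239.1.0.3"]
# Two routers computing the mapping independently agree on the result.
assert map_overlay_group("233.252.0.7", pool) == map_overlay_group("233.252.0.7", pool)
```

   A hash-based mapping of this kind trades forwarding-table size for
   some over-delivery when unrelated overlay groups collide onto the
   same underlay group; a manually configured mapping avoids that at
   the cost of operator effort.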
   When a remote ingress edge router injects a multicast packet with a
   multicast group address from a local multicast source, if all egress
   routers in an MC-LAG forward the packet to the local host
   (receiver), the host will receive multiple copies of the multicast
   frame from the remote multicast source. To avoid duplicate packets
   being delivered from the IGP domain to a local network, a Designated
   Forwarder (DF) mechanism is required. All the edge routers
   associated with the same MC-LAG use the same algorithm to select one
   DF edge router for a multicast group. Each MC-LAG should be assigned
   a unique MC-LAG identifier in an IGP multicast domain, which may be
   manually configured or automatically provisioned. When an edge
   router in an MC-LAG receives a multicast group receiver join message
   via an IGMP/PIM-like protocol, it announces its own MC-LAG ID and
   the corresponding multicast group to the other routers in its IGP
   LSP. After the network reaches steady state, all edge routers in an
   MC-LAG have elected the same router as the DF for each multicast
   group. Upon receiving a multicast packet from the domain, only the
   DF edge router forwards the packet towards the receiver; the non-DF
   edge routers do not forward the packet towards the receiver.

   All edge routers, DF and non-DF alike, can ingress traffic into the
   IGP domain as usual. The DF and non-DF states influence only the
   egress multicast traffic forwarding process.

   If a multicast group source host attaches to multiple edge routers
   through an active-active MC-LAG, loop prevention is necessary, i.e.,
   preventing a packet sent by the source host from looping back to the
   source host via the edge routers in the MC-LAG. The solutions for
   the two scenarios are described below.
   o When the multicast IGP domain edge routers are separate from the
     overlay network edge devices that carry the overlay network
     traffic, these routers do not replace the traffic's source IP
     address when they inject the traffic into the IGP domain. In this
     case, the edge routers should acquire the multicast source IP
     addresses beforehand using a mechanism like IGMPv3 explicit
     tracking, and these source IP addresses are then synchronized
     among the edge routers in the same MC-LAG. The same split-horizon
     mechanism described in the section above can then be used.

   o When the multicast IGP domain edge routers are collapsed with the
     overlay network edge devices, the edge router connecting to the
     multicast source performs multicast encapsulation when it injects
     local multicast traffic into the IGP domain; the source IP address
     is the edge router's IP address. Each edge router tracks the IP
     address(es) associated with the other edge router(s) with which it
     shares an MC-LAG. When an edge router receives a packet from the
     IGP domain, it examines the source IP address and filters out the
     packet on all local interfaces in the same MC-LAG. With this
     approach, local-bias forwarding is required on the ingress edge
     router: it performs replication locally to all directly attached
     receivers regardless of the DF or non-DF state of the outgoing
     interface connecting to each receiver.

4.4. Reverse Path Forwarding Check (RPFC)

   The routing transients resulting from topology changes can cause
   temporary loops in distribution trees. If no precautions are taken,
   and there are fork points in such loops, it is possible for multiple
   copies of a packet to be forwarded. If this is a problem for a
   particular use, a Reverse Path Forwarding Check (RPFC) may be
   implemented.
   In this case, the RPFC works by a router determining, for each port,
   based on the source and destination IP addresses of a multicast
   packet, whether that port is a port on which the router expects to
   receive such a packet. In other words: is there an edge router with
   reachability to the source IP address such that, starting at that
   router and using the tree indicated by the destination IP address,
   the packet would have arrived at the port in question? If so, the
   packet is further distributed. If not, it is discarded. An RPFC can
   be implemented at some routers and not at others.

5. Security Considerations

   To come in a future version.

6. IANA Considerations

   This document does not request any IANA action.

7. Acknowledgements

   The authors would like to thank Mike McBride and Linda Dunbar for
   their valuable input.

8. References

8.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3376] Cain, B., et al., "Internet Group Management Protocol,
             Version 3", RFC 3376, October 2002.

   [RFC4601] Fenner, B., et al., "Protocol Independent Multicast -
             Sparse Mode (PIM-SM): Protocol Specification (Revised)",
             RFC 4601, August 2006.

   [RFC5015] Handley, M., et al., "Bidirectional Protocol Independent
             Multicast (BIDIR-PIM)", RFC 5015, October 2007.

   [ISEXT]   Yong, L., et al., "IS-IS Extension For Building
             Distribution Tree", draft-yong-isis-ext-4-distribution-
             tree, work in progress.

   [802.1AX-2011] IEEE, "IEEE Standard for Local and metropolitan area
             networks - Link Aggregation", IEEE 802.1AX, 2011.

8.2. Informative References

   [RFC7176] Eastlake, D., et al., "Transparent Interconnection of Lots
             of Links (TRILL) Use of IS-IS", RFC 7176, May 2014.

   [RFC7348] Mahalingam, M., Dutt, D., et al., "Virtual eXtensible
             Local Area Network (VXLAN): A Framework for Overlaying
             Virtualized Layer 2 Networks over Layer 3 Networks",
             RFC 7348, August 2014.

   [RFC7365] Lasserre, M., et al., "Framework for Data Center (DC)
             Network Virtualization", RFC 7365, October 2014.
Authors' Addresses

   Lucy Yong
   Huawei USA

   Phone: 918-808-1918
   Email: lucy.yong@huawei.com

   Weiguo Hao
   Huawei Technologies
   101 Software Avenue
   Nanjing 210012
   China

   Phone: +86-25-56623144
   Email: haoweiguo@huawei.com

   Donald Eastlake
   Huawei
   155 Beaver Street
   Milford, MA 01757 USA

   Phone: +1-508-333-2270
   EMail: d3e3e3@gmail.com

   Andrew Qu
   MediaTek
   San Jose, CA 95134 USA

   Email: laodulaodu@gmail.com

   Jon Hudson
   Brocade
   130 Holger Way
   San Jose, CA 95134 USA

   Phone: +1-408-333-4062
   Email: jon.hudson@gmail.com

   Uma Chunduri
   Ericsson Inc.
   300 Holger Way
   San Jose, California 95134
   USA

   Phone: 408-750-5678
   Email: uma.chunduri@ericsson.com