Core Router

A core router is a router designed to operate in the Internet backbone, or core. To fulfill this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the core Internet and must be able to forward IP packets at full speed on all of them. It must also support the routing protocols being used in the core. A core router is distinct from an edge router: edge routers sit at the edge of a backbone network and connect to core routers.
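To make "forward IP packets" concrete, the sketch below shows the longest-prefix-match lookup that every IP router performs for each packet: the most specific routing-table prefix that covers the destination address determines the egress interface. This is a toy Python illustration only; real core routers implement the same lookup in hardware (TCAMs or trie-based ASIC pipelines) to sustain line rate, and the table entries and interface names here are hypothetical.

```python
import ipaddress

# Hypothetical routing table: (prefix, egress interface) pairs.
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),  # default route
]

def lookup(dst: str) -> str:
    """Return the egress interface for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in ROUTING_TABLE if addr in net]
    # Longest prefix (largest prefix length) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))   # eth1: the /16 is more specific than the /8
print(lookup("192.0.2.1"))  # eth2: only the default route matches
```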

History

Like the term "supercomputer", the term "core router" refers to the largest and most capable routers of the then-current generation; a router that was a core router when introduced will not be a core router ten years later. At the inception of the ARPANET (the Internet's predecessor) in 1969, the fastest links were 56 kbit/s and a given routing node had at most six links. The "core router" was a dedicated minicomputer called an Interface Message Processor (IMP). Link speeds increased steadily, requiring progressively more powerful routers, until the mid-1990s, when the typical core link speed reached 155 Mbit/s. At that time, several breakthroughs in fiber-optic telecommunications technology (notably DWDM and the EDFA) combined to lower bandwidth costs, which in turn drove a sudden, dramatic increase in core link speeds: by 2000, a core link operated at 2.5 Gbit/s and core Internet companies were planning for 10 Gbit/s speeds.

The largest provider of core routers in the 1990s was Cisco Systems, which offered core routers as part of a broad product line. Juniper Networks entered the business in 1996, focusing primarily on core routers and addressing the need for a radical increase in routing capability driven by the increased link speeds. Several new companies also attempted to develop core routers in the late 1990s, and it was during this period that the term "core router" came into wide use. The required forwarding rate of these routers became so high that it could not be met by a single processor or a single memory, so these systems all employed some form of distributed architecture built around an internal switching fabric, as sketched below.
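The sketch below is a simplified Python model of that distributed design: each line card runs its own forwarding engine, and the switching fabric moves packets between ingress and egress cards, so no single processor or memory has to touch every packet. The card count and packet names are assumptions for illustration, not any vendor's actual design.

```python
from queue import Queue
from threading import Thread

NUM_CARDS = 4
# One fabric queue per egress line card; the fabric's only job is to
# carry packets from an ingress card to the chosen egress card.
fabric = [Queue() for _ in range(NUM_CARDS)]

def line_card(card_id: int) -> None:
    """Each egress card independently drains its fabric queue and transmits."""
    while True:
        pkt = fabric[card_id].get()
        if pkt is None:  # shutdown sentinel
            break
        print(f"card {card_id} transmits {pkt}")

def ingress(pkt: str, egress_card: int) -> None:
    # The ingress card's own forwarding lookup has already chosen the
    # egress card; here we just hand the packet to the fabric.
    fabric[egress_card].put(pkt)

threads = [Thread(target=line_card, args=(i,)) for i in range(NUM_CARDS)]
for t in threads:
    t.start()
for i, pkt in enumerate(["pkt-A", "pkt-B", "pkt-C"]):
    ingress(pkt, egress_card=i % NUM_CARDS)
for q in fabric:  # tell every card to shut down
    q.put(None)
for t in threads:
    t.join()
```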

The Internet was historically supply-limited, and core Internet providers struggled to expand it to meet demand. During the late 1990s, they expected a radical increase in demand, driven by the dot-com bubble. By 2001, it had become apparent that the sudden expansion in core link capacity had outstripped the actual demand for Internet bandwidth in the core. The core Internet providers were able to defer purchases of new core routers for a time, and most of the new companies went out of business.

As of 2012, the typical Internet core link speed is 40 Gbit/s, with many links reaching or exceeding 100 Gbit/s (out of a then-current maximum of 111 Gbit/s, demonstrated by Nippon Telegraph and Telephone). These links serve the explosion in demand for bandwidth from cloud computing and other bandwidth-intensive, and often latency-sensitive, applications such as high-definition video streaming (see IPTV) and Voice over IP. This capacity, combined with newer access technologies such as DOCSIS 3 channel bonding and VDSL2 (which can deliver more than 100 Mbit/s over plain unshielded twisted-pair copper under normal conditions, out of a theoretical maximum of 250 Mbit/s at zero distance from the VRAD), and with more sophisticated access architectures such as FTTN (fiber to the node) and FTTP (fiber to the premises, either directly to the home or with Cat 5e cable for the final run), can provide mass-market residential consumers with downstream speeds in excess of 300 Mbit/s and upstream speeds in excess of 100 Mbit/s with no specialized equipment or modification (e.g., Verizon FiOS).

Source: Wikipedia