Fixed access infrastructure sharing has many advantages. Initially, the primary benefit was to increase competition by lowering the cost barrier to entry, allowing network wholesalers to encourage, or governments to enforce, more choice for consumers. But today it is just as likely to be about cooperation. Deploying next-generation broadband to meet the growing demand for high-bandwidth services is a costly endeavor. Cooperation strategies are important to decrease the investment risk and thus accelerate ultra-broadband deployments.
The arrival of software-defined networking (SDN) and network function virtualization (NFV) in the access domain creates a new way of sharing infrastructure. This is the concept behind Fixed Access Network Sharing (FANS), a technique that uses virtualization to “slice” the access network and provide independent control of each slice. Compared with physical layer unbundling and active layer sharing techniques such as bitstreaming, FANS delivers far greater control and flexibility for operators sharing the infrastructure. This leads to better services for customers, lower costs for operators, and greater possibilities for co-investment in, or sharing of, next-generation fixed access infrastructure.
1. Physical and active layer sharing
There are two ways to approach access network sharing.
1.1. Physical layer sharing
Here, the passive infrastructure is shared amongst multiple “access seekers”. Local Loop Unbundling (LLU), favored by regulators when copper was the only way into a customer’s premises, is still very common. A similar practice can be applied in point-to-point fiber and EPON/GPON networks. Next-generation TWDM-PON’s ability to separate services by wavelength is the latest example of physical layer unbundling.
Unbundling at the physical layer gives seekers direct access to a dedicated line or wavelength. However, there are limitations: LLU does not work effectively with VDSL2 vectoring or G.fast; fiber unbundling for PON cannot be done from the central office and requires access to splitter points in the field.
1.2. Active layer sharing
In this approach, sharing occurs in the active infrastructure using Virtual Unbundled Local Access (VULA) for interconnection to the access node, or bitstreaming for access to higher points in the network. The main benefit compared to physical layer approaches is that the access seeker doesn’t need to own access nodes and can become technology-agnostic. Although there is no direct access to individual lines, the seeker can offer any service with QoS guarantees to any end-user and can compete on an equal basis. Their focus is, therefore, shifted from network operation to service delivery.
2. Active layer sharing through virtualization
As demand for network capacity increases and operators are challenged to maximize the return on their investments, the traditional model of single ownership of the entire physical access network may no longer be optimal. This is particularly important for new fiber-centric access networks offering capacity of 10 Gb/s and higher. Instead, operators are required to find new ways to share investments and maximize revenues over a shared network architecture.
A new approach to active infrastructure sharing can be achieved through virtualizing the access network. Fixed Access Network Sharing (FANS) uses virtualization to partition the physical network into multiple virtual network slices, each of which can be independently controlled.
In this model, Virtual Network Operators (VNOs) have much more flexibility than with physical or virtual unbundling. Each VNO can manage its own slice, seeing only its part of the network, to offer a variety of services in a fully independent way. The infrastructure provider (InP) allocates network resources to each VNO so that services running on the same infrastructure cannot negatively interfere with one another.
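To make the slicing idea concrete, the following is a minimal sketch, not a real FANS API: an infrastructure provider partitions access-node resources into per-VNO slices, so each VNO can configure only what it owns and sees only its own view. All class and port names are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not a real FANS interface): the InP
# dedicates PON ports to VNO slices; each VNO's view shows only its slice.

class SliceManager:
    def __init__(self, pon_ports):
        self.free_ports = set(pon_ports)   # not yet allocated to any VNO
        self.slices = {}                   # VNO name -> set of its ports

    def allocate(self, vno, ports):
        """Dedicate the given PON ports one-to-one to a single VNO."""
        ports = set(ports)
        if not ports <= self.free_ports:
            raise ValueError("port already allocated to another VNO")
        self.free_ports -= ports
        self.slices.setdefault(vno, set()).update(ports)

    def view(self, vno):
        """A VNO's view contains only the resources in its own slice."""
        return sorted(self.slices.get(vno, set()))

inp = SliceManager(pon_ports=["pon-1", "pon-2", "pon-3", "pon-4"])
inp.allocate("vno-a", ["pon-1", "pon-2"])
inp.allocate("vno-b", ["pon-3"])
print(inp.view("vno-a"))   # ['pon-1', 'pon-2'] -- vno-b's slice is invisible
```

The key design point is that isolation is enforced at allocation time: a port dedicated to one VNO can never be handed to another, which is what prevents negative interference between tenants.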
Taking the example of PON networks, Figure 1 provides a comparison of different physical and active approaches to network sharing. As can be seen, FANS provides the best of both worlds:
- It enables sharing of the entire access network resources, including physical cabling, PON interfaces, PON splitters, access nodes and the uplink interfaces. This is different from unbundling or wavelength overlay, where each operator needs to deploy their own network infrastructure.
- It gives the VNOs flexibility to manage and operate their services on top of the network. This is different from active network sharing where the access network is purely managed by the infrastructure provider.
3. Open access and service partitioning
Traffic flows, representing services between each provider and their end customers, are defined in the central controller, bound to specific policies, attached to resources, and measured at service time. Slicing can be achieved by presenting network abstractions to each operator as an SDN-programmable infrastructure. Policy constraints determine the level of control available to access seekers (AS) while ensuring logical separation of traffic and configuration between tenants. Ideally, a VNO’s view of a device is as close as possible to that of a physical device, minus the elements that are better controlled by the infrastructure provider.
The resources of the shared network infrastructure are treated as different resource categories:
- Resources that are allocated one-to-one to a VNO, e.g. PON ports or ONTs
- Resources that are shared between VNOs, e.g. uplink ports
- Infrastructure entities for which there are a limited number of entries, e.g. profile definitions
- Common aspects that remain the responsibility of the infrastructure provider, e.g. node backup
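The four categories above imply different handling when a VNO requests a configuration change. The sketch below shows one way a shared-node manager might gate such requests per category; the category names, resource kinds and policy decisions are illustrative assumptions, not a standardized model.

```python
# Illustrative mapping of shared-node resources to the four FANS
# resource categories, and a (hypothetical) per-category access check.

DEDICATED = "dedicated"   # one-to-one per VNO, e.g. PON ports, ONTs
SHARED    = "shared"      # shared between VNOs, e.g. uplink ports
LIMITED   = "limited"     # limited number of entries, e.g. profiles
COMMON    = "common"      # InP responsibility, e.g. node backup

RESOURCES = {
    "pon-port": DEDICATED,
    "ont":      DEDICATED,
    "uplink":   SHARED,
    "profile":  LIMITED,
    "backup":   COMMON,
}

def may_configure(vno, resource, owner_map, quotas):
    """Decide whether a VNO may configure a resource of a given kind."""
    category = RESOURCES[resource]
    if category == DEDICATED:
        return owner_map.get(resource) == vno   # only the owning VNO
    if category == LIMITED:
        return quotas.get(vno, 0) > 0           # bounded number of entries
    return False                                # shared/common stay with InP

print(may_configure("vno-a", "pon-port", {"pon-port": "vno-a"}, {}))  # True
print(may_configure("vno-a", "backup", {}, {}))                       # False
```

In practice the check would sit behind the management interface, so every VNO request is validated against its slice before it touches the node.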
FANS is also appropriate for a single network operator that wants to separate different services. Independently managed slices of the network can be defined for business, residential or mobile backhaul services. This gives greater granularity and dynamism than static, engineered partitioning.
The imminent need to backhaul 5G mobile traffic makes a strong case for FANS. 5G will bring much denser deployments of wireless access points and base stations, which creates opportunities to unify fixed and wireless access infrastructure. However, 5G traffic needs low latency and high throughput. For wireless traffic to reliably share link capacity with residential network services, network element functions will need to be dynamically reconfigured in response to changing traffic patterns.
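The dynamic reconfiguration mentioned above can be pictured as periodically rebalancing shared link capacity between slices as measured demand shifts. The following is a hedged sketch of one such policy, a simple water-filling allocation; it is illustrative only and not part of any FANS specification.

```python
# Hypothetical rebalancing policy for capacity shared between a
# 5G-backhaul slice and a residential slice: water-filling, i.e.
# repeatedly grant each unsatisfied slice an equal share of the
# remaining capacity, capped at that slice's measured demand.

def rebalance(capacity_gbps, demands_gbps):
    alloc = {s: 0.0 for s in demands_gbps}
    remaining = capacity_gbps
    pending = set(demands_gbps)
    while pending and remaining > 1e-9:
        share = remaining / len(pending)
        for s in list(pending):
            grant = min(share, demands_gbps[s] - alloc[s])
            alloc[s] += grant
            remaining -= grant
            if alloc[s] >= demands_gbps[s] - 1e-9:
                pending.discard(s)   # demand satisfied
    return alloc

# Daytime: backhaul dominates; evening: residential peaks.
day = rebalance(10.0, {"backhaul": 6.0, "residential": 2.0})
evening = rebalance(10.0, {"backhaul": 2.0, "residential": 7.0})
print(round(day["backhaul"], 1))         # 6.0
print(round(evening["residential"], 1))  # 7.0
```

Running such a policy on a short timescale lets the same physical link serve latency-sensitive backhaul during the day and residential peaks in the evening, instead of statically over-provisioning both.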
4. The role of NETCONF/YANG
The fixed broadband market has been very successful in boosting network performance and reaching worldwide mass deployment. Network providers have been successful in their ability to turn on services while the industry has done well at standardizing protocols. We can interconnect multi-vendor networks with few issues and millions of devices can consume network services with a guaranteed high Quality-of-Service. But we have not yet reached the same degree of standardization for exposing network services and for deploying and operating the network infrastructure.
With FANS, the infrastructure provider requires a management interface to the access nodes through which each VNO can interact with the same nodes, but only within the context of its own slice. Achieving this with existing network management models based on SNMP, CLI and TL-1 is challenging. Therefore, FANS uses a new management model based on NETCONF and YANG. The model is designed to natively enable open and programmable network devices and to simplify integration with OSS/BSS systems.
With the evolution of Nokia’s fixed access portfolio, we support the adoption of the emerging NETCONF/YANG combination. NETCONF is an efficient management protocol and an alternative to existing CLI/SNMP interfaces. YANG is an optimized data modeling language that simplifies integration with management systems. NETCONF and YANG have a number of benefits over traditional SNMP-based mechanisms.
NETCONF offers open, standardized application programming interfaces (APIs) and delivers a higher degree of automation: it supports transactions, even across multiple nodes; can work with multiple datastores and multiple versions of them; segregates configuration data from state data; and is strong in bulk operations such as backup and restore of configurations.
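The transactional behavior comes from NETCONF's datastore model: edits accumulate in a candidate datastore and only take effect atomically on commit. The toy model below (plain Python, not a NETCONF client library) sketches that candidate/running split under the assumption that configuration can be represented as a flat path-to-value map.

```python
# Toy model of NETCONF's candidate/running datastores: staged edits
# become visible in the running configuration only after <commit>.

import copy

class Datastores:
    def __init__(self, running=None):
        self.running = running or {}
        self.candidate = copy.deepcopy(self.running)

    def edit_config(self, path, value):
        """Stage a change in the candidate datastore (like <edit-config>)."""
        self.candidate[path] = value

    def commit(self):
        """Atomically promote candidate to running (like <commit>)."""
        self.running = copy.deepcopy(self.candidate)

    def discard_changes(self):
        """Drop staged edits (like <discard-changes>)."""
        self.candidate = copy.deepcopy(self.running)

node = Datastores({"uplink/mtu": 1500})
node.edit_config("uplink/mtu", 9000)
print(node.running["uplink/mtu"])   # 1500 -- the edit is only staged
node.commit()
print(node.running["uplink/mtu"])   # 9000 -- now live
```

Because a multi-edit transaction either commits in full or not at all, a VNO's provisioning sequence can never leave a shared node in a half-configured state, which matters when several tenants touch the same device.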
YANG is a richer data modeling language than the SNMP Management Information Base (MIB), offering, for example, more built-in data types and the ability to define application-specific data types and constraints. A new “network slice” YANG model can be introduced, in which every YANG model is augmented with information defining exactly which VNO the configuration applies to. This allows different VNOs to have different configuration models.
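The effect of such an augmentation can be sketched as follows: if each configuration entry carries a (hypothetical) "vno-id" leaf, the node can hold all tenants' configuration side by side yet return each VNO only its own view. The XML layout and leaf name below are illustrative assumptions, not a published YANG model.

```python
# Hedged illustration: filtering configuration data by a hypothetical
# "vno-id" leaf, so each VNO retrieves only its own slice of the config.

import xml.etree.ElementTree as ET

CONFIG = """<interfaces>
  <interface><name>pon-1</name><vno-id>vno-a</vno-id></interface>
  <interface><name>pon-2</name><vno-id>vno-b</vno-id></interface>
  <interface><name>pon-3</name><vno-id>vno-a</vno-id></interface>
</interfaces>"""

def view_for(vno, config_xml):
    """Return only the <interface> entries whose vno-id matches."""
    root = ET.fromstring(config_xml)
    return [i.findtext("name") for i in root.findall("interface")
            if i.findtext("vno-id") == vno]

print(view_for("vno-a", CONFIG))   # ['pon-1', 'pon-3']
```

On a real node this filtering would be applied by the NETCONF server itself, so a get-config request from one VNO can never expose another tenant's data.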
The use of NETCONF/YANG allows multiple VNOs to independently manage shared resources, define network services, perform service and customer provisioning and troubleshooting, while shielding each VNO from seeing each other’s configuration or operational data.
5. Paradigm shift
FANS makes it possible to share infrastructure at the active network layer while giving all partners the autonomy they need over their service offerings. Control of the network infrastructure is software-definable, so operators can provide interoperable network connectivity services that are differentiated at the right level of granularity and consistent with today’s highly dynamic demands. This allows operators to penetrate new market segments and attracts parties who would otherwise be locked out of a traditional FTTH architecture.
The benefits of NETCONF/YANG go beyond those defined for FANS. FANS can be seen as one step in a broad network paradigm shift in which NETCONF/YANG and virtualization will be introduced in access networks. To this end, FANS is currently being standardized within the Broadband Forum and Nokia is actively contributing to the process. The intent is to instantiate the management plane based on NETCONF/YANG and define extended YANG models for the additional features required to implement FANS. This enables a per service, per VNO view, as well as the network view for the infrastructure provider.
In a second stage, some of the applications running on dedicated access node hardware will gradually become virtualized. In this model, the physical access node and its application software are decoupled, with the “virtual access node” running in the cloud. This provides new levels of flexibility to instantiate, customize, tailor, scale out, isolate and run access node features in different locations, per service and per VNO.