
1. Introduction



1.1 Overview & Problem Statement

The main concept of NFV (Network Function Virtualization) is the ability to use general-purpose compute hardware and platforms to run multiple VNFs (Virtualised Network Functions) and hence achieve the desired CapEx and OpEx savings. However, one of the big challenges NFV faces with VNF vendors is that, when building or designing their virtualised services (whether VoLTE, EPC, or enterprise services such as SD-WAN (Software Defined Wide Area Network)), vendors bring their own set of infrastructure requirements and custom design parameters. This practice has created various vendor- and function-specific silos which are incompatible with each other and have different operating models. In addition, it makes the onboarding and certification processes of VNFs (coming from different vendors) hard to automate and standardise.

Therefore, for a true cloud-type deployment, the model that relies on engagement with specific vendors and unique infrastructure needs to be reversed so that there is far more consistency in the infrastructure, and vendors bring their software to run on a pre-defined environment with common capabilities. That common infrastructure, whether it is optimized for IT (Information Technology) workloads, NFV workloads, or even AI (Artificial Intelligence) workloads, needs to be fully abstracted to the VNFs so that it can be a standard offer.

Additionally, to bring the most value to telco operators as well as vendors, the industry needs to agree on a standard set of infrastructure profiles for vendors to use for their VNFs.

The benefits of this approach are:

1.2 Terminology

This section defines the main terms used in this document; these definitions are primarily based on ETSI GS NFV 003 V1.4.1 (2018-08) but have been adapted, where necessary, to avoid dependencies on a particular deployment technology.

1.2.1 Software layers terminology

1.2.2 Hardware layers terminology

1.2.3 Operational and administrative terminology

1.2.4 Other terminology

1.3 Principles

This section specifies the principles of the infrastructure abstraction and profiling work presented in this document.

  1. NFVI provides abstract and physical resources corresponding to:
    • Compute resources.
    • Storage resources.
    • Networking resources (limited to connectivity services).
    • Acceleration resources.
  2. NFVI exposed resources should be supplier independent.
  3. All NFVI APIs must be standard and open to ensure component substitution.
  4. NFVI resources are consumed by VNFs through standard and open APIs.
  5. NFVI resources are configured on behalf of VNFs through standard and open APIs.
  6. NFVI resources are discovered/monitored by management entities (such as orchestration) through standard and open APIs.
  7. VNFs should be modular and utilise minimum resources.
  8. NFVI shall support pre-defined and parameterized T-Shirt sizes (see the illustrative sketch after this list).
  9. T-Shirt sizes will evolve with time.
  10. NFVI provides certain resources, capabilities and features, and virtual applications (VAs) should consume only these resources, capabilities and features.
  11. VNFs that are designed to take advantage of NFVI accelerations should still be able to run without these accelerations, with potential performance impacts.
  12. An objective of CNTT is to have a single, overarching Reference Model and the smallest number of Reference Architectures as is practical. Two principles are introduced in support of these objectives:

    • Minimize Architecture proliferation by stipulating that compatible features be contained within a single Architecture as much as possible:
      • Features which are compatible, meaning they are not mutually exclusive and can coexist in the same NFVI instance, shall be incorporated into the same Reference Architecture. For example, IPv4 and IPv6 should be captured in the same Architecture, because they do not interfere with each other.
      • Focus on the commonalities of the features over the perceived differences. Seek an approach that allows small differences to be handled at either the low-level design or implementation stage. For example, assume the use of existing common APIs over new ones.

    • Create an additional Architecture only when incompatible elements are unavoidable:
      • Creating additional Architectures is limited to cases where incompatible elements are desired by Taskforce members. For example, if one member desires KVM be used as the hypervisor, and another desires ESXi be used as the hypervisor, and no compromise or mitigation* can be negotiated, the Architecture could be forked, subject to review and a vote to approve by the CNTT Technical Working Group, such that one Architecture would be KVM-based and the other would be ESXi-based.

      *Depending on the relationships and substitutability of the component(s) in question, it may be possible to mitigate component incompatibility by creating annexes to a single Architecture, rather than creating an additional Architecture. With this approach, designers at a Telco would implement the Architecture as described in the reference document and, when it came to the particular component in question, they would select their preferred option from one of the relevant annexes. For example, if one member wanted to use Ceph, and another member wanted to use Swift, assuming the components are equally compatible with the rest of the Architecture, there could be one annex for the Ceph implementation and one annex for the Swift implementation.
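
The following is a minimal, purely illustrative sketch (not part of the Reference Model) of what pre-defined, parameterized T-Shirt sizes could look like as a small, fixed catalogue of abstract NFVI resources (compute, storage, networking, acceleration). All names, fields, and sizes below are hypothetical assumptions chosen for illustration only.

```python
# Hypothetical sketch: names, fields, and sizes are illustrative, not normative.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass(frozen=True)
class TShirtSize:
    """A pre-defined bundle of abstract NFVI resources."""
    name: str                 # e.g. "small", "medium", "large" (hypothetical labels)
    vcpus: int                # abstract compute capacity
    ram_gib: int              # abstract memory capacity
    disk_gib: int             # abstract storage capacity
    accelerations: List[str] = field(default_factory=list)  # optional acceleration features


# A hypothetical catalogue: a VNF requests a size by name instead of bringing
# its own custom infrastructure requirements.
CATALOGUE: Dict[str, TShirtSize] = {
    "small":  TShirtSize("small",  vcpus=2, ram_gib=4,  disk_gib=40),
    "medium": TShirtSize("medium", vcpus=4, ram_gib=8,  disk_gib=80),
    "large":  TShirtSize("large",  vcpus=8, ram_gib=16, disk_gib=160,
                         accelerations=["crypto-offload"]),
}


def select_size(name: str) -> TShirtSize:
    """Resolve a requested size against the fixed catalogue, or fail early."""
    if name not in CATALOGUE:
        raise KeyError(f"unknown T-Shirt size: {name!r}")
    return CATALOGUE[name]


if __name__ == "__main__":
    print(select_size("medium"))
```

The point of the sketch is the constraint it encodes: a VNF can only pick from the catalogue, which is what keeps the infrastructure offer standard and vendor independent.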

1.4 How this document works

Three levels of documents are needed to fulfil the CNTT vision. They are, as highlighted in Figure 1-4: the Reference Model, the Reference Architecture, and the Reference Implementation.


Figure 1-4: Scope of CNTT

1.5 Scope

This document focuses on the Reference Model. Figure 1-5 below highlights its scope in more detail.


Figure 1-5: Scope of Reference Model

This document specifies:

  • NFVI infrastructure abstraction.
  • NFVI metrics & capabilities: a set of carrier-grade metrics and capabilities of the NFVI which VNFs require to perform telco-grade network functions.
  • Infrastructure profiles catalogue: a catalogue of standard profiles needed in order to completely abstract the infrastructure from VNFs. With a limited set of well-defined profiles and well-understood characteristics, VNF compatibility and performance predictability can be achieved (a minimal illustration follows the note below).

> _The current focus is on VMs, but the intention is to expand the definition to include Container profiles too._
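
As a hedged illustration of the metrics-and-capabilities idea, the sketch below shows how a VNF's declared requirements could be checked against the capabilities a profile advertises. The metric names, threshold values, and the comparison rule are all hypothetical assumptions, not metrics defined by this document.

```python
# Hypothetical sketch: metric names and values are illustrative, not normative.
from typing import Dict

# Capabilities a hypothetical profile advertises (metric name -> offered value).
profile_capabilities: Dict[str, float] = {
    "packet_loss_ratio_max": 1e-5,   # lower is better
    "one_way_latency_ms_max": 5.0,   # lower is better
    "throughput_gbps_min": 10.0,     # higher is better
}

# Requirements a hypothetical VNF declares (same metric names).
vnf_requirements: Dict[str, float] = {
    "packet_loss_ratio_max": 1e-4,
    "one_way_latency_ms_max": 10.0,
    "throughput_gbps_min": 5.0,
}

# Metrics where the profile must offer a value no greater than the requirement.
UPPER_BOUND_METRICS = {"packet_loss_ratio_max", "one_way_latency_ms_max"}


def profile_satisfies(offered: Dict[str, float],
                      required: Dict[str, float]) -> bool:
    """True if every declared VNF requirement is met by the profile's offer."""
    for metric, needed in required.items():
        if metric not in offered:
            return False
        if metric in UPPER_BOUND_METRICS:
            if offered[metric] > needed:
                return False
        elif offered[metric] < needed:
            return False
    return True


if __name__ == "__main__":
    print(profile_satisfies(profile_capabilities, vnf_requirements))  # True
```

With a small number of well-understood profiles, this kind of check is what makes VNF compatibility and performance predictability tractable to automate.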

1.6 Relations to other industry projects

With respect to the ETSI NFV architecture specified by ETSI GS NFV 002, the scope of this document is limited to, but covers the entirety of, the NFVI part, including its external reference points. A mapping of the functional blocks considered in this document to that NFV architecture is illustrated in Figure 1-6 below.


Figure 1-6: Mapping to ETSI NFV architecture

Following the ETSI model shown in Figure 1-6, the VIM (Virtualised Infrastructure Manager), which controls and manages the NFVI, is not included within the NFVI. Nevertheless, the interactions between the NFVI and the VIM are covered in this document, as infrastructure resource management and orchestration have a strong impact on the NFVI. These interactions are detailed in Chapter 7, "API & Interfaces".

1.7 What this document is not covering

Comment: This section is still under development.

1.8 Bogo-Meter

A carefully chosen “Bogo-Meter” rating at the beginning of each chapter indicates, at a glance, the completeness and maturity of that chapter's content.

1.9 Roadmap

Comment: Please contact the CNTT team for access to the Roadmap.