


Network Working Group                                     D. Purkayastha
Internet-Draft                                                 A. Rahman
Intended status: Informational                                D. Trossen
Expires: September 2, 2018              InterDigital Communications, LLC
                                                           March 1, 2018


     Leading indicators of change for routing in Modern Data Center
                              environments
           draft-purkayastha-dcrouting-leading-indicators-00

Abstract

   This document describes a few use cases to illustrate the
   expectations placed on today's networks.  Based on those
   expectations, it describes how network architectures and network
   requirements are changing.  The new requirements are impacting data
   center architecture.  The ways in which data centers are evolving,
   such as from central data centers to smaller data centers and from a
   single deployment to multiple deployments, are described.  With this
   new data center model, areas such as routing inside and outside the
   data center are impacted.  The document describes this impact and
   summarizes a few features for this new data center model.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 2, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents



Purkayastha, et al.     Expires September 2, 2018               [Page 1]


Internet-Draft        Changes in modern data center           March 2018


   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Conventions used in this document
   3.  Evolving landscape
     3.1.  Video orchestration and delivery
     3.2.  Vehicle to vehicle and vehicle to anything (V2X)
   4.  Analysis
   5.  Data Center Evolution
   6.  Considerations for MDC (Micro Data Centre) at edge
   7.  Conclusion
   8.  IANA Considerations
   9.  Security Considerations
   10. Informative References
   Authors' Addresses

1.  Introduction

   The requirements on today's networks are very diverse, supporting
   multiple use cases such as IoT, content distribution, multi-player
   online gaming, and virtualized network functions such as Cloud RAN.
   Huge amounts of data are generated, stored, and consumed at the edge
   of the network.  These use cases have led to the evolution of data
   centers into smaller form factors, a.k.a.  Micro Data Centers
   (MDCs), suitable for deployment at the edge of the network.

   In this document, we will describe use cases to illustrate the trend
   where MDCs are deployed at multiple physical locations instead of
   one.  This is akin to having several Internet POPs (points of
   presence) rather than a single one, with the MDC representing the
   services commonly found in the Internet.  With this evolving
   landscape of multi-POP deployment of MDCs at the edge of the
   network, we envision that the MDCs will be deployed over a pure L2
   network.  We will describe the impact on routing within the MDC, as
   well as among the multiple MDCs at the edge.

   The composition of a 'multi-POP' MDC, composed of several smaller
   micro DCs, drives the need for standardized routing between those
   POPs, particularly if those POPs are deployed in a pure L2 network.





2.  Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

3.  Evolving landscape

   Use cases such as IoT and connected vehicles impose stringent
   requirements on the network:

   o  Connectivity anytime, anywhere (mobility, multi-access)

   o  Higher bandwidth: support for media-rich applications demanding
      more bandwidth

   o  Ultra-low latency: support for real-time applications such as
      industrial automation, connected vehicles, and health automation,
      by providing very low to ultra-low latency in the network

   o  Due to mobility and changes in network conditions (e.g.,
      congestion, load), service composition may change frequently to
      support the promised quality of experience

   The sheer number of mobile devices connected to the Internet,
   machine-to-machine (M2M) communication, the Industrial Internet of
   Things, and resource-dependent applications, such as data-heavy
   streaming video and wearables, all generate huge amounts of data.

   Processing these huge amounts of data in a central data center has
   the following disadvantages:

   o  It burdens the network, reducing available bandwidth

   o  It increases latency, as data travels upstream, gets processed,
      and only then does the response come back

   In order to reduce the burden on the network and improve on latency,
   data is being processed closer to the edge.  We will describe in
   detail the rationale for moving computation to the edge of the
   network, how data centers are changing to handle that, and summarize
   a few requirements.  To understand the changes that are happening,
   we start with a few relevant use cases.









3.1.  Video orchestration and delivery

   The video orchestration service example from the ETSI MEC
   requirements document [ETSI_MEC] may be considered.  The proposed
   use case of edge video orchestration suggests a scenario where
   visual content can be produced and consumed at the same location,
   close to consumers in a densely populated and clearly delimited
   area.  Such a case could be a sports event or concert where a
   remarkable number of consumers use their hand-held devices to access
   user-selected tailored content.  The overall video experience is
   combined from multiple sources, such as local recording devices,
   which may be fixed as well as mobile, and a master video from a
   central production server.  The user is given an opportunity to
   select tailored views from a set of local video sources.

3.2.  Vehicle to vehicle and vehicle to anything (V2X)

   The V2X use case group "Safety" includes several different types of
   use cases to support road safety using the vehicle-to-infrastructure
   (V2I) communication in addition to the vehicle-to-vehicle (V2V).

   Intersection Movement Assist (IMA): This type of use case was
   specifically listed in the US DOT NHTSA publication 2016-0126
   [USDOT] and ETSI TR 102 638 [ETSI_ITS].  The main purpose of IMA is
   to warn drivers of vehicles approaching from a lateral direction at
   an intersection.  IMA is designed to avoid intersection crossing
   crashes, the most severe crashes based on fatality counts.
   Intersection crashes include intersection, intersection-related,
   driveway/alley, and driveway-access-related crashes.

   Advanced driving assistance, represented by the two use cases,
   collects the most challenging requirements for V2X.  It can require
   distribution of a relatively large amount of data with high
   reliability and low latency in parallel.  Additionally, the advanced
   driving use cases would benefit from predictive reliability.  This
   means that moving vehicles should be able to receive a prediction of
   network availability in order to plan ahead.

   Real Time Situational Awareness and High Definition (Local) Maps:
   Real time situational awareness is essential for autonomous vehicles
   especially at critical road segments in cases of changing road
   conditions (e.g. new traffic cone detected by another vehicle some
   time ago).  In addition, the relevant high definition local maps need
   to be made available via downloading from a backend server.

   The use case for real-time situational awareness and high definition
   (local) maps should not only be seen as a case to distribute
   information on relatively slowly changing road conditions.  The case





   should be extended to distribute and aggregate locally available
   information in real time to the traffic participants via roadside
   units.

   See-Through (or High Definition Sensor Sharing): In this type of use
   case, vehicles such as trucks, minivans, and cars in platoons are
   required to share camera images of road conditions ahead of them
   with vehicles behind them.

   The vulnerable road user (VRU) use case covers pedestrians and
   cyclists.  A critical requirement for efficient use of the
   information provided by VRUs is the accuracy of the positioning
   information provided by these traffic participants.  Additional
   means of using available information for better and more reliable
   accuracy are crucial to allow real-world usage of information shared
   by VRUs.  Cooperation between vehicles and vulnerable road users
   (such as pedestrians and cyclists) through their mobile devices
   (e.g., smartphones, tablets) will be a key element in improving
   traffic safety and avoiding accidents.

4.  Analysis

   The use cases described above lead to certain expectations of, and
   required capabilities from, the network.  These are listed below:

   o  Low latency: Visitors in the stadium would like to have video
      delivered to them instantly.  The videos are not simple clips,
      but are composed from different angles and enhanced with
      analytical information.  It is also evident from the V2V use
      cases that safety information needs to be delivered to the driver
      quickly and accurately.  If two cars are approaching an
      intersection, the safety information needs to be delivered to all
      vehicles quickly and in time to avoid an accident.

   o  Distributed information sources and compute infrastructure: The
      use cases described above aggregate information and computing
      from different sources and deliver the result to the user.  For
      example, in the video use case, videos from multiple angles may
      be stored in different storage resources and then aggregated
      before being delivered to the user.  Similarly, in the V2X use
      cases, map information, information from roadside sensors, and
      city information may be aggregated from different sources and
      delivered to the user.  In carrier networks, operators may deploy
      multiple data centers dispersed geographically.  Each data center
      may host different types of information, services, etc.  For
      example, latency-sensitive or high-usage service functions are
      deployed in regional data centers, while latency-tolerant,
      low-usage service functions are deployed in global or central
      data centers.





   o  Mobility: Users are moving around within the stadium, and they
      would like to maintain service continuity.  Similarly, in the V2X
      use case, as vehicles drive through the city, service continuity
      is desired.  As users move, they may be served by different
      service end points.  Besides user mobility, service end points
      may also move depending on network conditions, load, etc.  As
      neither the service consumer nor the service end points are
      fixed, it may be necessary to constantly update the service
      composition using suitable service end points.

   Based on the analysis of these use cases, we can summarize the
   emerging trends:

   o  In order to support very low latency, data needs to be stored and
      processed locally at the edge.  This leads to the requirement
      that data centers be deployed at the edge.  But traditional large
      data centers are not suitable for edge deployment.

   o  Different kinds of sensor data and content are generated in
      multiple locations.  These data may be stored locally, where they
      are generated.  Application developers and service providers are
      coming up with advanced services in which data from multiple
      sources, available at multiple locations, is used.

   o  The composition of a service from multiple service end points and
      data sources keeps changing.  To maintain service continuity, the
      service composition may need to be updated: some service end
      points may be removed and some new service end points may be
      added.  The whole process needs to happen quickly, so that the
      user does not notice any break in service continuity.
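   The dynamic service composition described above can be sketched as a
   small controller that swaps individual service end points without
   tearing down the rest of the service.  This is an illustrative
   sketch only; the class, role names, and addresses are invented and
   do not come from any standard:

```python
# Illustrative sketch (hypothetical names): a service composition
# that can swap individual end points as users move or load changes.

class ServiceComposition:
    """Maps each role in a service (e.g. 'video-source', 'mixer')
    to its currently selected service end point."""

    def __init__(self, endpoints):
        # endpoints: dict of role -> end point address
        self.endpoints = dict(endpoints)

    def update(self, role, new_endpoint):
        # Swap one end point; the rest of the composition is
        # untouched, so the user sees no break in continuity.
        old = self.endpoints.get(role)
        self.endpoints[role] = new_endpoint
        return old

composition = ServiceComposition({
    "video-source": "mdc-1.example:8080",
    "mixer": "mdc-1.example:9090",
})
# The user moved: serve video from a closer micro data center,
# leaving the mixer end point in place.
composition.update("video-source", "mdc-2.example:8080")
```

   The point of the sketch is that updating the composition is a local,
   incremental operation, which is what allows it to happen quickly
   enough to preserve service continuity.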

5.  Data Center Evolution

   Installing more hardware resources and bigger switches to increase
   bandwidth in centralized enterprise data centers can only reduce
   latency to a certain extent.  Today's approach is to move compute
   and storage resources close to the end user, e.g., to the edge of
   the network, such as a gateway (GW) or CPE.  Businesses are looking
   for ways to expand data processing infrastructure closer to where
   data is generated.  Today, many organizations that need to share and
   analyze quickly growing amounts of data, such as retailers,
   manufacturers, telcos, financial services firms, and many more, are
   turning to localized micro data centers installed on the factory
   floor, in the telco central office, in the back of a retail outlet,
   etc.  The solution applies to a broad base of applications that
   require low latency, high bandwidth, or both.







   A micro data center is "a self-contained, secure computing
   environment that includes all the storage, processing and networking
   required to run the customer's applications."  Micro data centers
   are assembled and tested in a factory environment and shipped in
   single enclosures that include all necessary power, cooling,
   security, and associated management tools.

   Micro data centers are designed to minimize capital outlay, reduce
   footprint and energy consumption, and increase speed of deployment.
   Several business and technology trends have created the conditions
   for micro data centers to emerge as a solution.  The reasons for the
   emergence of micro data centers are:

   o  Compaction: Virtualized IT equipment in cloud architectures that
      used to require 10 IT racks can now fit into one.

   o  IT convergence and integration: Servers, storage, networking
      equipment, and software are being integrated in factories for more
      of an "out of the box" experience.

   o  Latency: There is a strong desire, business need, or sometimes
      even life-critical need to reduce latency between centralized data
      centers (e.g. cloud) and applications.

   o  Speed to deployment: To either gain a competitive advantage or
      secure business.

   o  Cost: In many cases micro data centers can utilize "sunk costs" in
      facility power and cooling infrastructure, meaning they can take
      advantage of excess capacity that for one reason or another isn't
      being used.  This kind of under-utilization is a common issue in
      enterprise data centers.

   Micro data centers deployed at the edge of the network are entirely
   or largely deployed over Layer 2 for cost and efficiency reasons,
   e.g., due to integration with cellular subsystems [_3GPP_SBA], or
   through moves to SDN connectivity in smart cities [BIO_TRIAL] and
   operator core networks [ATT].

   It is also common to deploy more than one micro data center at the
   edge to support diverse data and computing requirements.  Cloud
   service providers are moving away from single-POP deployment of MDCs
   at the edge to multi-POP deployment.  These multiple POPs are
   deployed over an L2 interface for fast and efficient switching,
   exchange of information, etc., thus enabling dynamic service
   composition at the edge.  The following diagram describes the trend
   in MDC deployment at the edge in today's networks.






                           +-----+
                      +----+ MDC +----+
          +------+    |    +-----+    |      +---------------+
          |      |    |               |      |               |
          |  UE  |----+  EDGE CLOUD   +------+  DATA CENTER  +
          |      |    |               |      | DISTANT CLOUD |
          +------+    |    +-----+    |      +---------------+
                      +----+ MDC +----+
                           +-----+
          |--Service routing over L2--|-Service routing over L3-|


                     Figure 1: Service Routing at Edge

6.  Considerations for MDC (Micro Data Centre) at edge

   As micro data centers are deployed at the edge of the network, a few
   points need to be considered.

   o  The edge of the network, due to its small footprint/coverage, is
      more dynamic: as users move around, network attachment points
      also change quickly.  On the backend, service end points may also
      change quickly due to network load and proximity to the user.

   o  Micro data centers may host more than one type of data store, and
      it may be common for data aggregation to happen not only within a
      micro data center, but also across micro data centers.

   In such a dynamic network environment, the capability to identify
   and aggregate one or more data sources within a micro data center,
   as well as across micro data centers, is desirable.
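   Identifying and aggregating data sources across micro data centers
   could, for example, be driven by a registry that maps each MDC to
   the data types it hosts.  The following sketch is purely
   illustrative; the registry layout, MDC names, and data types are
   invented and imply no particular protocol:

```python
# Hypothetical registry: which data sources of which type live in
# which micro data center.  Layout invented for illustration.
REGISTRY = {
    "mdc-1": {"camera-feeds": ["cam-a", "cam-b"], "maps": ["hd-map-1"]},
    "mdc-2": {"camera-feeds": ["cam-c"], "sensors": ["lidar-1"]},
}

def locate_sources(data_type):
    """Return (mdc, source) pairs for every source of data_type,
    whether it lives in one micro data center or several."""
    return [(mdc, src)
            for mdc, types in sorted(REGISTRY.items())
            for src in types.get(data_type, [])]

# An aggregation step would then fetch from every located source,
# within one MDC or across several:
sources = locate_sources("camera-feeds")
```

   Here aggregation spans MDCs transparently: "camera-feeds" resolves
   to sources in both mdc-1 and mdc-2, while "maps" stays local to one.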

   Given that deployments of micro-DCs are considered in edge
   environments, service routing should be possible over pure Layer 2
   solutions, in particular emerging SDN-based transport networks.

   It should be possible to quickly move a service/data instance in
   response to user mobility or resource availability within a micro
   data center as well as across micro data centers.  From a routing
   perspective, this means that any request for data or a service needs
   to be switched quickly from one service instance to another running
   within a micro data centre or across micro data centres.  Given the
   evolution of virtual instance technologies, which push (virtual)
   service instantiation down into the seconds-and-below range, any
   such service routing change must be of the same time order as the
   instantiation of the service instance.
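   The timing argument above can be illustrated with a minimal sketch:
   if service routing is a name-to-instance mapping, redirecting a
   service is a single map update, which easily keeps pace with
   second-range instantiation.  All names and addresses below are
   invented for illustration:

```python
import time

# Illustrative sketch (hypothetical names): a service routing table
# where redirecting a service name to a newly instantiated instance
# is one map update, so the routing change can keep pace with the
# instantiation of the instance itself.

routes = {"video-orchestrator": "mdc-1.example"}

def switch_instance(service, new_instance):
    """Repoint a service name; returns how long the switch took."""
    start = time.monotonic()
    routes[service] = new_instance  # all new requests go here now
    return time.monotonic() - start

elapsed = switch_instance("video-orchestrator", "mdc-2.example")
```

   The switch itself completes in far less than the seconds-range
   instantiation time; in a real deployment the dominant cost would be
   propagating the update to the switching fabric, which the sketch
   deliberately omits.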







   Since service interactions can run over a longer period (e.g., for
   video chunk download), changes of service requests to new service
   instances should be possible mid-session without loss of already
   obtained data.
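   One way to switch mid-session without losing already obtained data
   is to resume from the current byte offset at the new instance, in
   the spirit of HTTP range requests.  The sketch below simulates an
   instance disappearing mid-transfer; the instance names and the
   simulated read function are invented for illustration:

```python
# Illustrative sketch: mid-session switch to a new service instance
# without losing data already received, by resuming at the current
# byte offset (think HTTP Range requests on a video chunk).

CHUNK = b"0123456789" * 3  # the chunk, replicated at each instance

def read(instance, chunk_id, offset):
    # Simulated instance: mdc-1 is switched away from after 12 bytes;
    # mdc-2 can serve the chunk to the end.
    end = 12 if instance == "mdc-1" else len(CHUNK)
    return CHUNK[offset:end]

def fetch(instances, chunk_id, total_len):
    """Collect a chunk across instance switches, never re-fetching
    bytes that were already obtained."""
    data = b""
    for inst in instances:
        data += read(inst, chunk_id, len(data))  # resume, not restart
        if len(data) >= total_len:
            break
    return data

result = fetch(["mdc-1", "mdc-2"], "chunk-7", len(CHUNK))
```

   The session moves from mdc-1 to mdc-2 after 12 bytes, and the
   download completes without repeating or losing any data.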

   As users or service end points move within a micro data centre or
   across micro data centres, any service request should follow along a
   direct path.

7.  Conclusion

   We are going to witness the deployment of MDCs at the edge of the
   network to support the advanced use cases of the future.  In order
   to realize that future, we believe that the above features need to
   be considered for tomorrow's data centres.

8.  IANA Considerations

   This document requests no IANA actions.

9.  Security Considerations

   TBD.

10.  Informative References

   [_3GPP_SBA]
              3GPP, "Technical Realization of Service Based
              Architecture", 3GPP TS 29.500 0.4.0, January 2018,
              <http://www.3gpp.org/ftp/Specs/html-info/29500.htm>.

   [ATT]      AT&T, "AT&T's Network of the Future",
              <http://about.att.com/innovation/sdn>.

   [BIO_TRIAL]
              Bristol Is Open, "Bristol Is Open platform",
              <https://www.bristolisopen.com/platform/>.

   [ETSI_ITS]
              ETSI, "Vehicular Communications, Basic Set of
              Applications, Definitions", ETSI TR 102 638 1.1.1, June
              2009, <http://www.etsi.org/deliver/etsi_tr/
              102600_102699/102638/01.01.01_60/tr_102638v010101p.pdf>.










   [ETSI_MEC]
              ETSI, "Mobile Edge Computing (MEC), Technical
              Requirements", GS MEC 002 1.1.1, March 2016,
              <http://www.etsi.org/deliver/etsi_gs/
              MEC/001_099/002/01.01.01_60/gs_MEC002v010101p.pdf>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [USDOT]    US DOT, "Federal Motor Vehicle Safety Standards; V2V
              Communications", NHTSA-2016-0126 RIN 2127-AL55, 2016,
              <https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/
              documents/v2v_nprm_web_version.pdf>.

Authors' Addresses

   Debashish Purkayastha
   InterDigital Communications, LLC
   Conshohocken
   USA

   Email: Debashish.Purkayastha@InterDigital.com


   Akbar Rahman
   InterDigital Communications, LLC
   Montreal
   Canada

   Email: Akbar.Rahman@InterDigital.com


   Dirk Trossen
   InterDigital Communications, LLC
   64 Great Eastern Street, 1st Floor
   London  EC2A 3QR
   United Kingdom

   Email: Dirk.Trossen@InterDigital.com
   URI:   http://www.InterDigital.com/








