The industry is abuzz with network virtualization these days. Software-Defined Networking or Software-Driven Networking (SDN), depending on your preference for ONF or IETF terminology, is top of mind for most CIOs and IT managers. But it should also be on the minds of most line of business managers. Why, you ask? Shouldn’t line of business managers worry about revenue and profits and leave all that technical stuff to the IT department?

Historically that’s been the case. IT departments were pretty much “firewalled” from the rest of the organization. But with the advent of virtualization and cloud computing (and cloud services), that organizational model is rapidly changing. Now we have “C4” – credit card cloud computing – where lines of business can go out and acquire their own IT services, in some cases completely bypassing the in-house IT organization’s structure, policies and procedures. Not a good idea, and I’m not advocating that you take this approach to the extreme. But lines of business should now be actively engaged in the design and decision-making process when it comes to new business capabilities enabled by the latest wave of virtualization and cloud technologies.

SDN is about abstracting the network control plane from the network data plane – the two have been intimately intertwined since the dawn of switching and routing technology. Proprietary network operating systems have made it hard to separate the definition and management of the network topology from the physical device, creating a very specialized breed of network design experts who understand the intricacies of turning line of business application/service requests into an underlying network topology. SDN makes that task a little easier by separating the control plane (the controller in ONF parlance, the orchestrator in IETF) from the data plane, which in the new world is driven by forwarding tables programmed from above. With controllers and orchestrators comes the hope of making the transition from business need to IT capability much quicker and smoother.
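
That split can be sketched in a few lines of Python. This is a toy model with invented names, not any real controller’s API: the switch (data plane) does nothing but look up packets in a table, and the controller (control plane) is the only place path decisions get made.

```python
# Illustrative sketch of the SDN split: the switch only matches against
# its installed flow table; the controller decides what goes in it.
# All class and method names here are hypothetical.

class Switch:
    """Data plane: forwards based solely on installed flow entries."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match field (dst address) -> output port

    def install_flow(self, dst, out_port):
        # The only "intelligence" here is applying what the controller sent.
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Unknown destinations would normally be punted up to the controller.
        return self.flow_table.get(dst, "punt-to-controller")


class Controller:
    """Control plane: owns the topology view and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def set_path(self, dst, hops):
        # hops: list of (switch, out_port) pairs along the computed path
        for switch, port in hops:
            switch.install_flow(dst, port)


s1, s2 = Switch("s1"), Switch("s2")
ctl = Controller([s1, s2])
ctl.set_path("10.0.0.2", [(s1, 2), (s2, 1)])
print(s1.forward("10.0.0.2"))  # 2
print(s2.forward("10.0.0.9"))  # punt-to-controller
```

The point of the sketch is that the switches contain no topology logic at all – exactly the commoditization the open southbound interface is meant to enable.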

But we have to be careful. If you look at the SDN model, it basically has three layers – the network fabric layer, the network OS layer and the “features” layer. In an SDN world, an open, standards-based interface is defined for communications between the network OS and the network fabric layer. This allows the development of commodity networking devices which simply take “forwarding” commands from the network OS and apply them to traffic entering/exiting the device. An oversimplification, but that’s essentially how it works. Above the network OS sit the “features”, which in theory define use cases for different types of network services – say, the definition of a network topology to optimize data storage backup across a wide area network. On top of that underlying requirement may sit other service definitions, such as time of day or quality of service. Another example might be to create a network topology (service) to support migration of virtual machine instances based on a set of events or time of day. And yet another might be to reroute traffic to quiesce a device for maintenance purposes. This last example has always been a problem in multi-tenant implementations – how do I gracefully move customers who have different service level requirements (such as acceptable maintenance windows) without disrupting other customers supported by the same device? Hopefully you get the picture.
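
The maintenance case is concrete enough to sketch as a “feature” living above the controller: given a device to drain, recompute paths that avoid it and leave everyone else alone. Again, a toy model under invented names, purely to show where such logic sits in the stack.

```python
# Toy "features layer" example: drain a device for maintenance by
# rerouting its flows to precomputed alternate paths. Illustrative only.

def drain_device(flows, device, alternate_paths):
    """Return a new flow -> path mapping that avoids `device`.

    flows: dict of flow id -> list of devices on its current path
    alternate_paths: dict of flow id -> a precomputed backup path
    Raises if a flow through `device` has no safe alternate.
    """
    rerouted = {}
    for flow_id, path in flows.items():
        if device not in path:
            rerouted[flow_id] = path  # flows not touching the device keep their path
            continue
        backup = alternate_paths.get(flow_id)
        if backup is None or device in backup:
            raise RuntimeError(f"no safe alternate path for {flow_id}")
        rerouted[flow_id] = backup
    return rerouted


flows = {"cust-a": ["s1", "s2", "s3"], "cust-b": ["s1", "s4", "s3"]}
backups = {"cust-a": ["s1", "s4", "s3"]}
new_flows = drain_device(flows, "s2", backups)
print(new_flows["cust-a"])  # ['s1', 's4', 's3'] (moved off s2)
print(new_flows["cust-b"])  # ['s1', 's4', 's3'] (never touched s2)
```

A real feature would also honor per-tenant maintenance windows before moving anyone – that policy check is exactly the kind of service-level knowledge that belongs in this layer, not in the device.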

The key to the success of SDN is not to think of SDN simply in terms of “software”. It has to encompass the “service” context that rides on top of the software, and like the abstraction that occurs between the network OS and the network fabric, it must be well-defined. Otherwise we run the risk of having proprietary network services defined inside the network OS layer, and we will once again be bound by vendor-specific networking solutions. Not that vendor-specific is all bad, but it surely limits our options for having a rich set of reusable templates developed with an “open” mindset (think of Open Data Center Alliance as an example).

As you look across the landscape today, there is a lot of activity around building “controllers” and “orchestrators”. As CIOs and IT managers consider their options in this space, a couple of key factors should be weighed. First, how rich is the “northbound” API set for the controller/orchestrator? This is key to being able to use industry-defined solution sets to enable networking topologies that solve common business problems. Along the same lines, the “east/west” APIs should also be considered. These deal with the ability to connect to existing network management systems and/or cloud operating systems (e.g., OpenStack, CloudStack, et al). The combination of the northbound and east/west APIs is critical when deploying controllers/orchestrators in a mixed-mode (virtualized and legacy) networking infrastructure.
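
The two API surfaces can be sketched as a simple facade – a northbound call that applications invoke, plus pluggable east/west adapters toward existing management or cloud systems. The shapes below are hypothetical, not any product’s actual interface.

```python
# Sketch of the two API surfaces: "northbound" for applications,
# "east/west" adapters for peered systems. Hypothetical names throughout.

class ControllerFacade:
    def __init__(self):
        self.adapters = {}   # east/west: adapter name -> callable(event dict)
        self.services = []   # accepted service requests

    def register_adapter(self, name, handler):
        """East/west: hook an external system (e.g., a cloud OS) in."""
        self.adapters[name] = handler

    def create_service(self, spec):
        """Northbound: accept a declarative service request."""
        self.services.append(spec)
        # Notify every peered system so legacy tooling stays in sync.
        for handler in self.adapters.values():
            handler({"event": "service-created", "spec": spec})
        return len(self.services) - 1  # hypothetical service id


events = []
ctl = ControllerFacade()
ctl.register_adapter("cloud-os", events.append)
sid = ctl.create_service({"type": "l2-vpn", "tenant": "acme"})
print(sid)                  # 0
print(events[0]["event"])   # service-created
```

The design point: in a mixed-mode shop, a service request arriving northbound must fan out east/west, or the legacy management systems drift out of sync with reality.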

The second factor to consider is the design of the controller/orchestrator itself. One of the key issues that must be addressed is scalability, and many solutions are falling short in this area. Beyond that, there are many other design considerations that should be taken into account:

  • Can the system be clustered?
  • Does it support multi-threading?
  • Does it support both in-memory caching and persistence?
  • Does it support the definition of services in addition to elements?
  • Can it maintain service demand/allocation state to ensure optimal use of network fabric resources?
  • Can it auto-discover existing infrastructure/network fabric resources?
  • How extensible is the data model?
  • Does it have predefined device models?
  • Can it support multiple asynchronous queues and distributed protocol adapters?
  • Can it communicate using a variety of methods (CLI, OF-Config, NETCONF, SNMP, XML, …)?
  • Does it support multi-tenancy?
  • Does it support role-based authentication and authorization?
  • Does it have built-in policy management and workflow (or at least a robust interface to external services)?
  • Does it support workflow roll-back in the event of provisioning failures?
  • Does it have built-in services for standard networking topologies and services (Layer 2 MAC learning, Layer 3 OSPF routing, TRILL, et al)?
  • Does it support both data center and wide area networking services (MPLS-TE, Layer 2/Layer 3 VPN, et al)?
  • etc.
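
One item on that list – workflow roll-back – is worth a sketch, because it is where many home-grown provisioning systems fall down. Each provisioning step pairs an action with its inverse; on failure partway through, the completed steps are undone in reverse order. A minimal, library-free illustration:

```python
# Minimal sketch of provisioning roll-back: each step carries an undo;
# on failure, already-applied steps are compensated newest-first.

def provision(steps):
    """steps: list of (apply_fn, undo_fn) pairs. Rolls back on any failure."""
    done = []
    try:
        for apply_fn, undo_fn in steps:
            apply_fn()
            done.append(undo_fn)
    except Exception:
        for undo_fn in reversed(done):
            undo_fn()  # best-effort compensation, newest first
        raise


log = []

def configure_s1():
    log.append("configure s1")

def undo_s1():
    log.append("undo s1")

def configure_s2():
    raise RuntimeError("s2 unreachable")  # simulated mid-workflow failure

def undo_s2():
    log.append("undo s2")

try:
    provision([(configure_s1, undo_s1), (configure_s2, undo_s2)])
except RuntimeError:
    pass
print(log)  # ['configure s1', 'undo s1']
```

Note that only the step that actually succeeded gets undone – the device that was never configured is never “un-configured”, which is what keeps a partial failure from cascading.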

This list looks pretty technical, and I’m not suggesting line of business managers need to understand all this stuff. But these are essential elements that must be in place to support the creation and management of services, and line of business managers should be asking how CIOs and IT managers will deliver business solutions in the future.

Without these capabilities, we will be forced to rely on hard-coded or vendor-specific services “embedded” into the controller/orchestrator. So as your enterprise embarks on the path of end-to-end virtualization, it is critical that the IT services that enable business solutions be open, flexible and adaptable. Otherwise you will be back in the mode where lines of business bypass IT to obtain the services they need to be successful.