Five Design Principles for the Network Architect - Intro


(#1 of 7)

Whilst studying for the CCDE, you stumble across new design methodologies all the time, each of which sets out some fundamental principles you should look to adhere to when building your network solutions - the one that springs to mind is Chapter 1 of Russ White's book "Optimal Routing Design". The idea is that you bear those simple ideas in mind as you go through the design process, checking back in with them to make sure that the design you produce leads to a network which is robust and manageable, but still meets customer requirements.

Through an iterative process of my own, I have arrived at five principles I try to embody in my work, and over the coming blog posts I'll go into more detail on how I look to apply each of them. They are:


Availability
… is the fundamental desirable property of a network, its raison d'être. The network exists as the transport to deliver apps to users, make data available to apps, and collect data from sensors/"things". Availability is an umbrella term that covers a multitude of other, more granular network properties - stability, redundancy, resilience, performance, convergence and so on. But essentially, maximising availability equates to ensuring all of these things work in concert to deliver continuous service. Other elements are important too - for example, good monitoring to detect poor performance or failure and aid troubleshooting, or swift, well-documented support processes that kick in at the right moment to minimise the impact of issues when they do occur. As a rule, it is hard to define a measure of generic network availability because in reality it is application or service specific. It is easier to measure constituent elements such as device or interface uptime or utilisation, but these cannot be considered as representing "availability" in themselves.
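
To put the measurement point in more concrete terms, here is a minimal sketch in Python - the figures and the simple "components in series" model are my own illustration, not something from this post - showing how an availability target translates into permitted downtime, and why a path through several individually reliable components is less available end to end:

```python
# Minimal sketch: convert an availability figure into permitted downtime, and
# combine per-component availability for elements that sit in series.
# All figures are illustrative assumptions, not taken from the post.

MINUTES_PER_YEAR = 365 * 24 * 60


def downtime_per_year(availability: float) -> float:
    """Permitted downtime (minutes per year) for a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR


def series_availability(*components: float) -> float:
    """End-to-end availability when every component must be up (in series)."""
    result = 1.0
    for a in components:
        result *= a
    return result


# A service path crossing an access switch, a WAN circuit and a firewall, each
# individually at "four nines", is no longer four nines end to end.
path = series_availability(0.9999, 0.9999, 0.9999)
print(f"End-to-end availability: {path:.6f}")
print(f"Permitted downtime: {downtime_per_year(path):.1f} minutes/year")
```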


Scalability
… is the property of a network and its constituent devices to be able to easily grow (and contract) to accommodate changes in usage patterns. At a micro level, this means ensuring technology choices - vendor, model, links, bandwidths, service providers - don't limit the ability to start using the network to deliver new applications. It favours topologies which are easily sized up, circuits which can be upgraded without ordering new ones, flexible licensing and so on. At the macro level, a modular design allows us to grow the network by simply plugging in another module - a new site, for example, would be created from a "black box" template and connected to the existing network with no need to change anything already in place; similarly, decommissioning a site would have no impact on the remaining environment.
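
As a rough illustration of the "black box" template idea, the sketch below stamps a new site module out of a standard blueprint, so growing the network is just instantiating another copy - the field names and the addressing scheme are hypothetical, invented purely for this example:

```python
# Hypothetical sketch of a modular "black box" site template: every new site is
# generated from the same blueprint, so adding or removing a site does not
# require changes to the rest of the network. Field names and the addressing
# scheme are illustrative assumptions, not taken from the original post.
from dataclasses import dataclass
from ipaddress import IPv4Network


@dataclass
class SiteModule:
    name: str
    site_id: int
    user_vlan: IPv4Network
    voice_vlan: IPv4Network
    wan_uplink: IPv4Network


def build_site(name: str, site_id: int) -> SiteModule:
    """Stamp out a site from the standard template; only the name/ID vary."""
    base = 16 + site_id  # sites carved from a 10.16.0.0/12 block - purely illustrative
    return SiteModule(
        name=name,
        site_id=site_id,
        user_vlan=IPv4Network(f"10.{base}.10.0/24"),
        voice_vlan=IPv4Network(f"10.{base}.20.0/24"),
        wan_uplink=IPv4Network(f"10.{base}.0.0/30"),
    )


# Growing the network is just instantiating another module.
print(build_site("branch-london", 1))
print(build_site("branch-leeds", 2))
```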


Security
… is the use of a variety of techniques to ensure that the customer's fundamental security policies are upheld across the network. Once the security focus was solely on the Internet perimeter, but we now consider it a more pervasive requirement. In order to ensure the confidentiality, integrity and availability of data passing across the network, we use mechanisms to prevent access to the network for endpoints which are not permitted on it (including but not limited to AAA and IPS); to maintain separation between endpoints which are not allowed to speak to each other (using encryption, segmentation, filtering and so on); and to actively monitor what endpoints are doing so that remedial steps can be taken should they transgress (quarantine, exfiltration prevention, anti-malware etc). These elements are embedded throughout the network to keep bad actors out at the edge, and then to detect issues and bad behaviour and act on them wherever they are seen.
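
As a toy illustration of the segmentation and filtering point, a policy can be modelled as an explicit allow-list with everything else denied by default. The zones, ports and rules below are invented for the example and don't represent any particular product:

```python
# Toy illustration of segmentation with a default-deny policy: only explicitly
# allowed (source zone, destination zone, port) combinations may communicate.
# The zones and rules here are invented for this example.

ALLOWED_FLOWS = {
    ("user", "app", 443),   # users may reach the app tier over HTTPS
    ("app", "db", 5432),    # the app tier may reach the database
    ("mgmt", "app", 22),    # the management zone may SSH to the app tier
}


def is_permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default-deny: a flow is allowed only if it matches an explicit rule."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS


print(is_permitted("user", "app", 443))   # True
print(is_permitted("user", "db", 5432))   # False - users may not reach the DB directly
```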


Manageability
… or ensuring that the network environment can be monitored in a genuinely meaningful way and gives the operator the feedback they need to support it. The operator will need to have at their disposal a comprehensive set of tooling for monitoring performance and/or availability, with trending capability to see how the environment changes over time. They also require complete configuration and lifecycle management, whether that be a simple CLI which can be used to configure the devices and obtain state information, or a centralised controller platform that provides interactive access to the whole network through a single GUI or dashboard. A key feature of a modern network is the ability to support automation and orchestration tooling through (amongst other things) an API - this allows the operator to a) do things quicker and slicker, and b) provide abstracted views of the network's operation to users and organisations that don't need visibility of the individual elements.
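
To illustrate the API-driven side of this, here is a hedged sketch of pulling device state from a controller and reducing it to an abstracted summary for the operator. The controller URL, endpoint path and JSON shape are assumptions made for the example - this is not any specific vendor's API:

```python
# Hypothetical sketch: use a network controller's REST API to pull per-device
# interface state and present an abstracted summary to the operator.
# The URL, endpoint and response shape are illustrative assumptions only.
import requests

CONTROLLER = "https://controller.example.net/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}


def interface_summary(device: str) -> dict:
    """Fetch interface state for one device and reduce it to a simple summary."""
    resp = requests.get(f"{CONTROLLER}/devices/{device}/interfaces",
                        headers=HEADERS, timeout=5)
    resp.raise_for_status()
    interfaces = resp.json()  # assumed: a list of {"name": ..., "oper_status": ...}
    down = [i["name"] for i in interfaces if i["oper_status"] != "up"]
    return {"device": device, "total": len(interfaces), "down": down}


for dev in ("edge-rtr-01", "core-sw-01"):
    print(interface_summary(dev))
```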


Simplicity
… is the key. Where possible, the designer should strive to ensure that the solution involves the fewest moving parts and the fewest interactions between them. An "elegant" solution with lots of interacting elements to provide an automated network is great, but a simple one - even if it needs some well-documented manual intervention to ensure failover works as expected - will be better at 3am when the dreaded phone call comes through to your mobile. The simpler the environment, the more predictable it is, and that has to be the measure of a supportable network.
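
A quick back-of-the-envelope way to see why fewer moving parts pays off: the number of potential interactions between components grows roughly quadratically with the component count. The figures below are just that combinatorial upper bound, not data from the post:

```python
# Back-of-the-envelope: potential pairwise interactions grow quadratically with
# the number of moving parts, which is why trimming components makes a design
# disproportionately easier to reason about at 3am.
def pairwise_interactions(n: int) -> int:
    return n * (n - 1) // 2


for parts in (3, 5, 10, 20):
    print(f"{parts:>2} components -> up to {pairwise_interactions(parts):>3} interactions")
```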

There are other elements which you will always bring to bear in your network designs - performance requirements, redundancy, backward compatibility etc. - but these can all be boiled down to fit within these fundamental principles, and in my mind they tend to be fleshed out in the detail of the design process. In the rest of this series we'll look at each of these principles in turn, starting with Availability.

These principles work well for me, but you may find that you need something more or less detailed for the way you work - I'd be interested to hear what works for you!

Previous> Index
Next> Availability

Comments

  1. I agree with a lot of this. Troubleshoot-ability is very important, even if you're not going to be the one supporting it. You still want to ensure your client is going to be happy with your services in the long run and come back to you because they want to not because they HAVE to.

    Complexity to the point that 'no one else understands it but you' might make them go back to you in the short run, but long term they'll find someone else that will design it better, if they wise up.

    I'm seeing clients prioritizing simplicity in the network more, after experiencing an unwieldy, over-complex network (usually due to unplanned growth).

    1. Thanks for your comment @FutureCCDEMarie!

      Yes, organic growth is definitely a killer of simplicity in the network and operationally hard work for everyone involved. A design that allows you to modularise the functionality of the network reduces the interaction surfaces and minimises that complexity. It helps it scale better and allows you to introduce new capabilities and functionality without impacting the existing operation.

      If you're interested in reading/hearing more thoughts on these topics, take a look at networkfreestyle.tech where Malcolm Booden and I have recorded videos where we talk through ideas on similar topics.

      Good luck with your CCDE studies!


