Wednesday 17 March 2010

The Evolution of Application Delivery Part 1: Unintelligent

This is Part 1 of a four-part series covering the Evolution of Application Delivery. I come across a lot of confusion as to what Application Delivery really is. Some think it's load-balancing. They are wrong. :-) Hopefully this series will help better explain ADCs and their purpose.

In the beginning we had basic, unintelligent TCP distribution, typically delivered through destination address rewriting. This was fine for spreading HTTP requests across multiple servers when one server alone simply wasn't enough.
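That early scheme can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the addresses are hypothetical, and the only "logic" is rotating through a list and rewriting the packet's destination, with no look at the payload at all.

```python
import itertools

# Hypothetical pool of backend web servers behind a single virtual IP.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
_rotation = itertools.cycle(BACKENDS)

def rewrite_destination(packet):
    """Pick the next backend round-robin and rewrite the destination
    address. Nothing about the request itself is ever inspected."""
    packet["dst"] = next(_rotation)
    return packet

# Six new connections simply go to the next server in the list, in turn.
conns = [rewrite_destination({"src": f"client{i}", "dst": "VIP"})
         for i in range(6)]
# conns[0]["dst"] is 10.0.0.11, conns[1]["dst"] is 10.0.0.12, and so on.
```

The important point is what is absent: there is no health state, no response inspection, nothing but arithmetic over a server list.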

The internet matured and soon we needed to load-balance dynamic applications, anything from a CGI script residing on a web server to multi-tiered architectures like WebSphere or WebLogic systems. These multi-tiered applications were typically deployed behind a load-balancer, with the web server loaded with a crude plug-in as depicted below.

The aforementioned multi-tiered architecture delivered much-needed scalability, allowing organisations to build out sideways as demand grew. The problem with this architecture, especially in more recent times with growing expectations around application availability and responsiveness, is that it is still unintelligent.

A load-balancer has no appreciation of the service running through the architecture. Consider the following: in the diagram above, what happens if one of the web servers starts returning errors? A failed disk could result in a 404 Not Found. The load-balancer would then upset 1/3 of your customers.
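The arithmetic is easy to demonstrate. The sketch below assumes three web servers as in the diagram, one of which (hypothetically "web3") has lost a disk and now answers every request with a 404. The round-robin rotation carries on regardless:

```python
import itertools

# Three backends; web3's failed disk means it now serves only 404s.
BACKENDS = {"web1": 200, "web2": 200, "web3": 404}
_rotation = itertools.cycle(BACKENDS)

def serve(_request):
    """Blind round-robin: the load-balancer rotates through the pool
    and never looks at the status code coming back."""
    return BACKENDS[next(_rotation)]

responses = [serve(i) for i in range(300)]
errors = responses.count(404)
# Exactly one request in three lands on web3 -- 100 of the 300 fail.
```

The broken server keeps receiving its full share of traffic because nothing in the data path ever notices the errors.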

Some of these early load-balancers would 'ping' the server to check that it was turned on, but sending an ICMP packet doesn't verify whether the response was good (HTTP 200 OK) or bad (HTTP 404 Not Found).
