What follows is a description of an architectural pattern that I see many developers discussing and that I believe is an anti-pattern. My belief is based on architectural theory, and I have no empirical evidence to back it up, so feel free to come to your own conclusions.
The proposed architecture looks like this:
I’ve never been a big fan of architecture diagrams like this, because they are a purely logical representation of the architecture and forget about physical realities.
Consider this next diagram, which is an example of how this architecture might actually be deployed.
In this diagram I have changed the colours of the arrows to indicate the protocol used to interact between the components. The blue arrows use HTTP, the purple one is an in-process call, and the red one is cross-process but most likely not HTTP.
HTTP is smart but not very quick
HTTP is designed to enable communication over high-latency distributed networks. It is a text-based protocol that enables the transmission of a large amount of semantically rich information in a single request/response pair. It was designed to scale massively and to allow application components to evolve independently. It was not designed to be particularly efficient over high-speed connections within a data center.
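To make that concrete, here is a minimal sketch of what a single request puts on the wire. The host, path, and headers are invented for illustration, but the shape is typical:

```typescript
import { connect } from "node:net";

// HTTP is plain text, heavy on self-descriptive metadata. Even a trivial
// request carries headers for routing, content negotiation, and caching.
const socket = connect(80, "api.example.org", () => {
  socket.write(
    "GET /customers/42 HTTP/1.1\r\n" +
      "Host: api.example.org\r\n" +    // routing
      "Accept: application/json\r\n" + // content negotiation
      "Accept-Encoding: gzip\r\n" +    // transfer optimization
      'If-None-Match: "abc123"\r\n' +  // caching validator
      "\r\n"
  );
});
// The response comes back the same way: a text status line, more headers,
// and then the body.
socket.on("data", (chunk) => process.stdout.write(chunk));
```

All of that metadata is what lets caches, proxies, and independently evolving clients cooperate across the Internet. On a fast link inside a data center, it is mostly overhead.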
High speed in the data center
HTTP is convenient, but it is definitely not the best choice for communicating with a database server within a data center. The interaction between the Web Site and the Web API happens over HTTP because that is the protocol of choice for Web APIs. However, I think it is important to question the wisdom of this choice.
The right protocol for the right job
It is highly likely that the Web Site and the Web API live in the same data center. It is quite possible that they run on the same physical machine. This means that interactions between the two may not even need a network round trip. There are much faster ways for the Web Site to get access to the data it needs than using HTTP to talk to a Web API.
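As a sketch of the difference, assume a hypothetical getOrdersForCustomer function in the application’s business logic (the module, names, and port below are all invented):

```typescript
// Hypothetical business-logic module; the names are assumptions.
import { getOrdersForCustomer, Order } from "./businessLogic";

// Option 1: loop back through the co-located Web API over HTTP.
// The data is serialized to JSON, pushed through the network stack,
// parsed by the API framework, and deserialized again on the way back.
async function ordersViaApi(customerId: number): Promise<Order[]> {
  const response = await fetch(
    `http://localhost:5000/api/customers/${customerId}/orders`
  );
  return (await response.json()) as Order[];
}

// Option 2: call the business logic in-process.
// No sockets, no serialization, no HTTP parsing; just a method call.
async function ordersInProcess(customerId: number): Promise<Order[]> {
  return getOrdersForCustomer(customerId);
}
```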
DRY Layers
However, from what I have heard, performance is not the motivating factor for funneling all interactions through the Web API. The intent is to provide a single interface that all “client” applications can consume. The goal is re-use: the theory is that we can write a single Web API and have all the different client applications consume that single API.
An API built for clients communicating across the Internet needs to satisfy a different set of requirements than an API built for a client sitting across the room. Internet APIs can’t afford to be chatty. They tend to be more coarsely grained and carry more metadata in order to reduce chattiness. They also need to be much more resilient to change. It’s not hard to push an update to the Web Site when the Web API changes, but it is a lot more challenging to update mobile devices or some third-party integration.
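To illustrate, the endpoints below are hypothetical. A client on a fast local link can tolerate a chatty sequence of fine-grained calls, while a remote client is much better served by a single coarse-grained request:

```typescript
// Chatty, fine-grained style: three round trips. Tolerable on a fast
// local link, painful over a high-latency Internet connection.
async function loadDashboardChatty(userId: string) {
  const user = await fetch(`/api/users/${userId}`).then((r) => r.json());
  const orders = await fetch(`/api/users/${userId}/orders`).then((r) => r.json());
  const alerts = await fetch(`/api/users/${userId}/alerts`).then((r) => r.json());
  return { user, orders, alerts };
}

// Coarse-grained style: one round trip returning a richer representation,
// designed for remote clients that cannot afford to be chatty.
async function loadDashboardCoarse(userId: string) {
  return fetch(`/api/users/${userId}/dashboard`).then((r) => r.json());
}
```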
It isn’t impossible, it’s just sub-optimal
You can share a Web API between both local and remote clients. The problems you encounter will depend on who is the driving force behind API changes. If the Web Site’s requirements drive API changes, you are likely to end up with something that works OK for the Web Site and sucks horribly for the remote clients. If you are lucky, the remote clients will drive the API, and the performance advantage of being local will hopefully make up for the inefficient interface that the Web Site has to deal with.
A better way
In my opinion, a better unit of re-use would be the business logic of the application, packaged up with a package manager and then deployed into either the Web Site or the Web API project. With this approach, the Web Site gets high-speed access to the underlying business logic and data, and the Web API gets to focus on optimizing for remote clients.
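Here is a rough sketch of what I mean, with an invented package and Express used as a stand-in web framework. In reality the three pieces below would live in separate projects, with the shared logic published through a package manager:

```typescript
import express from "express";

// --- Shared business-logic package (invented; normally published via a
// package manager and referenced by both projects).
interface Order { id: number; total: number }

async function getOrdersForCustomer(customerId: number): Promise<Order[]> {
  // Query the data store directly and apply the business rules here.
  return []; // placeholder
}

// --- Web Site project: consumes the business logic in-process.
const site = express();
site.get("/my-orders", async (_req, res) => {
  const orders = await getOrdersForCustomer(42); // id from the user's session, say
  res.send(`You have ${orders.length} orders`);
});
site.listen(3000);

// --- Web API project: consumes the same business logic, but shapes its
// public surface around the needs of remote clients.
const api = express();
api.get("/api/customers/:id/orders", async (req, res) => {
  res.json(await getOrdersForCustomer(Number(req.params.id)));
});
api.listen(5000);
```

The re-use happens at the level of the business logic rather than at the level of the HTTP interface, so each front end can expose the interaction style that suits its clients.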
Feedback
As I said at the outset, this opinion is based on theory. I’d be really interested in hearing about practical experiences that developers have had with these types of scenarios. Some readers might find this a stretch, but I see a correlation between what I am describing here and the changes that Netflix implemented in its internal architecture.
It is also worth noting that many of the negative impacts I am envisioning will not necessarily surface in the first six months of the project. I tend to focus on the long-term evolution of an application, so if you happen to be building a tool for your internal HR department that is going to be scrapped next year, feel free to ignore everything I just said.