Agent Fielding is on a mission

Published on August 28, 2010

This is a continuation of a series on a RestAgent library I am building for accessing REST APIs.  The first post is here and the second is here.

So far, the only significant operation we have enabled our RestAgent to perform is NavigateTo().  However, for a RestAgent on a mission, going places is only half the story; the other major purpose of the agent is to gather content.

I was planning to show some real examples of missions using Stack Overflow’s API, but I ended up spending so much time explaining why it currently would not work that my points were lost.  So forgive me as I continue with a somewhat more hypothetical example.  We are going to use an instance of our RestAgent to do a mashup of Twitter users and Stack Overflow users.

var agentFielding = new RestAgent(new HttpClient());

agentFielding.RegisterMediaType("application/twitterdoc+xml", new TwitterMediaHandler());
agentFielding.RegisterMediaType("application/stackoverflowdoc+xml", new StackOverflowMediaHandler());

In the following code, Agent Fielding retrieves my Twitter followers and attempts to find matching accounts on Stack Overflow.  In order to interpret the representations retrieved from these two sites, we are going to pretend that the sites actually return non-generic media types, and we register handlers that transform those wire representations into strong types.
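To make the idea of a media type handler a little more concrete, here is a rough sketch of what TwitterMediaHandler could look like.  This is purely illustrative: the IMediaTypeHandler interface, the Deserialize signature, the XML element names, and the TwitterUserProfile shape are all my own assumptions for this sketch, not a contract the library actually defines.

```csharp
using System.IO;
using System.Xml.Linq;

// Assumed interface — a registered handler turns a wire
// representation into a strongly typed object.
public interface IMediaTypeHandler
{
    object Deserialize(Stream wireRepresentation);
}

// Illustrative strong type for a Twitter user.
public class TwitterUserProfile
{
    public string UserName { get; set; }
    public string DisplayName { get; set; }
}

// Hypothetical handler for application/twitterdoc+xml.
public class TwitterMediaHandler : IMediaTypeHandler
{
    public object Deserialize(Stream wireRepresentation)
    {
        // Parse the XML wire representation and project it
        // into the strong type (element names are invented here).
        var doc = XDocument.Load(wireRepresentation);
        return new TwitterUserProfile
        {
            UserName = (string)doc.Root.Element("UserName"),
            DisplayName = (string)doc.Root.Element("DisplayName")
        };
    }
}
```

The point of the indirection is that the agent only needs to know the media type of a response to pick the right handler; everything service-specific lives inside the handler.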

The first step is to navigate Twitter and retrieve the user profiles of my followers:

agentFielding.NavigateTo(new Uri(""));  // Twitter API entry point
var userSearch = agentFielding.CurrentLinks["UserSearch"];

var followersLink = agentFielding.CurrentLinks["Followers"];
agentFielding.NavigateTo(followersLink);

var twitterUserProfiles = new List<TwitterUserProfile>();
var followerLinks = agentFielding.GetCurrentLinks(l => l.Relation.Name == "Follower");
foreach (Link followerLink in followerLinks) {
    var twitterProfile = agentFielding.GetContent(followerLink).ReadAsTwitterUserProfile();
    twitterUserProfiles.Add(twitterProfile);
}

Now we have a list of TwitterUserProfile objects from which we can retrieve the name and search on Stack Overflow.

agentFielding.NavigateTo(new Uri(""));  // Stack Overflow API entry point
agentFielding.NavigateTo(agentFielding.CurrentLinks["Users"]);
var searchLink = agentFielding.CurrentLinks["Search"];
var foundProfiles = new List<StackOverflowUserProfile>();
foreach (var profile in twitterUserProfiles) {
    searchLink.SetParameter(profile.UserName);
    agentFielding.NavigateTo(searchLink);  // navigate to the search results for this user name
    var matchingUserLinks = agentFielding.GetCurrentLinks(l => l.Relation.Name == "User");
    foreach (Link userLink in matchingUserLinks) {
        var content = agentFielding.GetContent(userLink);
        foundProfiles.Add(content.ReadAsStackOverflowUserProfile());
    }
}

Don’t get too hung up on the precise details of the above; there is a bit of smoke and mirrors going on.  I’m really just trying to convey that the same agent class can be used to navigate multiple different services.  This brings the concept of the uniform interface a little further into the client code.

My hope is that in the future REST API producers will supply media type handlers for their custom media types, and we can use a standardized agent-like interface for navigating any REST API.  There will be no need for API producers to create complete client-side facades.

There is still plenty of coupling in the above example: when it comes to handling specific representations and knowing about the existence of specific links, there is plenty of service-specific code.  This is especially true of these types of scripted (machine-to-machine) interactions.  Machines are really dumb, so they need to understand a lot of specifics about the services they are interacting with.  In the next article I’m going to start digging into how you can use an agent to react to a human-driven application, which will further reduce the coupling.