Introduction

In case you missed the other days of this deep dive:
Day 1 Deep Dive
Day 2 Deep Dive with MongoDB
Day 3 Deep Dive with MongoDB
Day 4 Deep Dive with MongoDB
Day 5 Deep Dive with MongoDB
Day 6 Deep Dive with Mongoose
Day 7 Deep Dive with Clustering
Day 8 Deep Dive with PM2
Day 9 Deep Dive with Restify
Day 10 Deep Dive with Redis
Day 11 Deep Dive with Redis and ASP.NET Core

As mentioned on Friday, I wanted to spend some time turning my newly acquired Node.js and Redis knowledge into something more real-world. Learning over the last two weeks when to use Node.js (and Redis) and when not to has been really interesting, especially coming from an ASP.NET background where I had traditionally used it to solve every problem. While ASP.NET is great and can solve virtually any problem you throw at it, as noted in my deep dives it isn't always the best solution. In particular, there are huge performance issues as you scale an ASP.NET/Redis pairing versus Node.js/Redis. Since I want to get my bbxp blog platform, which powers this site, back into a service-oriented architecture as it was a couple of years ago with a WCF service, and possibly a micro-service architecture, what better time than now to implement a caching layer using Node.js and Redis?

This blog post will detail some design decisions I made over the weekend and what I feel still needs some polishing. For those curious, the code currently checked into the GitHub repo is far from production ready. Until it is production ready I won't be using the code to power this site.

Initial Re-architecting

The first change I made to the solution was to add an ASP.NET Core WebAPI project, a .NET Standard Business Layer project and a .NET Standard Data Layer project for use with the WebAPI project. Having had the foresight earlier this year, when I redid the platform to utilize ASP.NET Core, to keep everything broken out, the effort wasn't as huge as it would have been if all of the code had simply been placed into the controllers of the MVC app.

One new thing in this restructuring was playing around with the Portable Class Library targeting .NET Standard. From what I have gathered, this is the future replacement for the crazy number of profiles we have been using for the last three years - Profile78 is a lot more confusing than .NET Standard 1.4 in my opinion. It took some time to find the most up-to-date table detailing which platforms are on which version, but for those also looking for a good reference, please bookmark the .NET Standard Library Roadmap page. For this project, as of this writing, UWP does not support anything higher than 1.4, so I targeted that version for the PCL project. From what I have read, 1.6 support is coming in the next major UWP platform NuGet package update.

Redis Implementation

After deep diving into Redis with ASP.NET Core in Day 11 and Node.js in Day 10, it became pretty clear Node.js was a much better choice for speed as the number of requests increased. Designing this platform to be truly scalable and getting experience designing a very scalable system with new technology I haven't messed with before are definitely part of this revamp. Caching, as seasoned developers know, is a tricky slope. One could simply turn on Output Caching for their ASP.NET MVC or WebForms app - but that wouldn't benefit the mobile, IoT or other clients of the platform. In a platform agnostic world this approach can still be used, but I shy away from using it and calling it a day. I would argue that for a platform like bbxp in 2016, native apps and other services are hit more than the web app.

So what are some other options? For bbxp, the largest part of the request time server side is pulling the post data from the SQL Server database. I had previously added some dynamically generated normalized tables when post content is created, updated or deleted, but even so this puts more stress on the database and requires scaling vertically, as these tables aren't distributed. This is where a caching mechanism like Redis can really help, especially in the approach I took.

A more traditional approach to implementing Redis with ASP.NET Core might have been to simply have the WebAPI service check whether the Redis database had the data cached (i.e. the key) and, if not, push it into the Redis database and return the result. I didn't agree with this approach, as it needlessly hits the main WebAPI service even when the data is in the cache. A better approach in my mind is to implement it as a separate web service, in my case Node.js with restify, and have that communicate directly with Redis. This way, in the best case, you get the speed and scalability of Node.js and Redis without ever hitting the primary WebAPI service or SQL Server. In the worst case, Node.js returns extremely quickly that the key was not found and the client then makes a second request to the WebAPI service, which not only queries the data from SQL Server but also fires a call to Redis to add the data to the cache.

An important thing to note here is the way I wrapped the REST service calls: each consumer of the service does not actually know or care which data source the data came from. In nearly seven years of doing Service Oriented Architectures (SOA), I have found that business logic being done client side, even something as simple as making a second call to a separate web service, is too much. The largest part of that is consistency and maintainability of your code. In a multi-platform world you might have ASP.NET, Xamarin, UWP and IoT code bases to maintain with a small team, or worse, just a single person. Putting this code inside the PCL as I have done is the best approach I have found.
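To make that concrete, below is a rough sketch of what such a PCL wrapper could look like. This is an illustration rather than the code checked into the repo; the class, method and DTO names (PostClient, GetPostListingAsync, PostListingItem) and the base URLs are placeholders, with only the /node/Posts route taken from the Node.js code later in this post.

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class PostClient
{
    private readonly HttpClient _httpClient = new HttpClient();

    // Placeholder base URLs - in the real platform these would come from configuration
    private const string NodeCacheBaseUrl = "http://localhost:8080";
    private const string WebApiBaseUrl = "http://localhost:5000";

    public async Task<List<PostListingItem>> GetPostListingAsync()
    {
        // Best case: the Node.js/Redis service has the listing cached and returns it immediately
        var json = await _httpClient.GetStringAsync(NodeCacheBaseUrl + "/node/Posts");

        if (string.IsNullOrWhiteSpace(json))
        {
            // Worst case: cache miss - fall back to the WebAPI service, which queries SQL Server
            // and fires the result into Redis for the next request
            json = await _httpClient.GetStringAsync(WebApiBaseUrl + "/api/Posts");
        }

        // The caller never knows (or cares) which service actually answered
        return JsonConvert.DeserializeObject<List<PostListingItem>>(json);
    }
}

Whether the listing came from Redis via Node.js or from SQL Server via the WebAPI, the MVC app, Xamarin apps and UWP client all just await GetPostListingAsync and move on.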

That being said, let's dive into some C# code. For my Redis wrapper I chose a pretty straightforward approach: the helper takes a string value for the key and a generic type T, which it automatically converts into JSON to be stored in Redis:

public async void WriteJSON<T>(string key, T objectValue) {
    // Serialize the object and normalize the JSON before storing it
    var value = JsonConvert.SerializeObject(objectValue, Formatting.None);
    value = JToken.Parse(value).ToString();

    // Fire and forget so the caller isn't held up waiting on the cache write
    await db.StringSetAsync(Uri.EscapeDataString(key), value, flags: CommandFlags.FireAndForget);
}

A key point here is the FireAndForget flag, so we aren't delaying the response back to the client while writing to Redis. A better approach, perhaps for later this week, might be to add in Azure Service Bus or a messaging system like RabbitMQ to handle cases where the key couldn't be added, for instance if the Redis server was down. In that scenario the system would still work with my approach, but scaling would be hampered and, depending on the number of users hitting the site and the size of the server itself, this could be disastrous.
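To tie the two halves together, the worst-case path described earlier boils down to a WebAPI controller action that pulls the data through the business layer and fires the result into Redis via WriteJSON on the way out. The sketch below is my illustration of that flow, not the actual bbxp controller; the IPostBusinessLayer and RedisService abstractions and the PostListingItem type are assumed names.

using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System.Threading.Tasks;

[Route("api/[controller]")]
public class PostsController : Controller
{
    private readonly IPostBusinessLayer _postBusinessLayer;  // placeholder business layer abstraction
    private readonly RedisService _redisService;             // wrapper class containing WriteJSON<T>

    public PostsController(IPostBusinessLayer postBusinessLayer, RedisService redisService)
    {
        _postBusinessLayer = postBusinessLayer;
        _redisService = redisService;
    }

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        // Cache miss path: pull the listing from SQL Server via the business layer
        List<PostListingItem> posts = await _postBusinessLayer.GetPostListingAsync();

        // Fire the result into Redis (fire and forget) so the next request is served by Node.js
        _redisService.WriteJSON("PostListing", posts);

        return Ok(posts);
    }
}

The next request for the listing can then be served straight out of Redis by the Node.js service.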

Node.js Refactoring

With the addition of several more routes being handled by Node.js than in my testing samples, I decided it was time to refactor the code to cut down on the duplicated Redis client code and null handling. At this point I am unsure if my Node.js code is as polished as it could be, but it does in fact work and handles the null checks properly.

Below is my dbFactory.js with the refactored code, exposing a factory function that handles the null checking and returns the JSON data from Redis (the require path for the settings module shown here is assumed):

var redis = require("redis");
var settings = require("./settings"); // Redis host/port values - path assumed

module.exports = function RedisFactory(key, response) {
    var client = redis.createClient(settings.REDIS_DATABASE_PORT, settings.REDIS_DATABASE_HOSTNAME);

    client.on("error", function (err) {
        console.log("Error " + err);
    });

    client.get(key, function (err, reply) {
        if (reply == null) {
            // Cache miss - return an empty body so the caller knows to fall back to the WebAPI service
            response.writeHead(200, { 'Content-Type': 'application/json' });
            response.end("");
            return response;
        }

        // Cache hit - return the JSON stored in Redis as-is
        response.writeHead(200, { 'Content-Type': 'application/json' });
        response.end(reply);
        return response;
    });
};

With this refactoring, my actual route files are pretty simple, at least at this point. Below is my posts-router.js with the cleaned-up code utilizing my new RedisFactory object:

var Router = require('restify-router').Router;
var router = new Router();
var RedisFactoryClient = require("./dbFactory");

function getListing(request, response, next) {
    return RedisFactoryClient("PostListing", response);
}

function getSinglePost(request, response, next) {
    return RedisFactoryClient(request.params.urlArg, response);
}

router.get('/node/Posts', getListing);
router.get('/node/Posts/:urlArg', getSinglePost);

module.exports = router;

As one can see, the code is much simpler than what would have quickly become very redundant, bloated code had I kept the unrefactored approach from my testing code.
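One thing worth calling out is that the keys the WebAPI writes with WriteJSON have to line up exactly with what these routes pass to RedisFactory: "PostListing" for the listing and the post's URL argument for a single post. On the C# side that means something along these lines (the _redisService field and the UrlSafeTitle property are placeholder names):

// Key convention must match what posts-router.js passes to RedisFactory
_redisService.WriteJSON("PostListing", postListing);  // backs the /node/Posts route
_redisService.WriteJSON(post.UrlSafeTitle, post);     // backs the /node/Posts/:urlArg route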

Next up...

Tomorrow night I hope to start implementing automatic cache invalidation and polishing the cache entry code in the business layer that interfaces with Redis. With those changes I will detail the approach along with its pros and cons. For those curious, the UWP client will become a fully supported client, along with iOS and Android clients via Xamarin Forms. Those building the source code will see a very early look at the home screen posts pulling down, with a UI that very closely resembles the MVC look and feel.

All of the code for the platform is committed on GitHub. I hope to begin automated builds like I set up with Raptor and to set up releases as I add new features and continue making the platform more generic.