latest posts

Going back to 11/26/1997 (scary that it has been almost 18 years), I’ve been fascinated by artificial intelligence. In November 1997 I was still pretty much doing QBasic with a little Visual Basic, so it is not surprising that the sole surviving code snippet I have from my Jabba Chat application (yes, I was a huge Star Wars fan even back then) was in QBasic:

PRINT "Your Jabba friend, Karen"
PRINT "What is your first name", name$
PRINT "Well HI There "; name%
PRINT "It sure is neat to have You Drop by"
PRINT "Press space bar when ready you're ready to start"

As simple as it may be, this is at a basic level following a pre-programmed path, taking input and "learning" a person's name. Nowadays, with programming languages better structured to handle AI, more processing power and overall a better understanding of how we humans think, there has never been a better time to dive into this area.

As stated in a previous post back in July 2014, I've become heavily invested in making true Artificial Intelligence work in C#. While working on a fully automated scheduling system at work, I ran into a lot of big questions I had never encountered in my 15 years of professional development, with one in particular:

How do you not only replicate a human's job, but also make it better by taking advantage of one of the biggest advantages a computer has over a human being: the ability to process huge data sets extremely fast (and consistently produce results)?

The answer wasn't the solution; instead I realized I was asking the wrong question, and only after really deep diving into the complexities of the rather small "artificial intelligence" engine did I come to realize this. The question should have been: what drives a human to make decisions? The simple programmatic answer is to go through and apply conditionals for every scenario. Depending on what the end goal of the project is, that may be a good choice, but if it is a more complex solution, or it hits one of the most common events in a computer program - the unexpected - that approach can't be applied. This question drove me down a completely different path, thinking about how a human being makes decisions when he or she has yet to encounter a scenario, an unhandled exception if you will.

Thinking about decisions I have had to make throughout my life, big or small, I have relied on past experience. An application just going to production has no experience; each nanosecond is a new experience, making it for all intents and purposes a human infant. Remembering back as far as I can to when I was 4, sometimes you would fail or make a mistake, as we all have; the key, as our parents instilled in us, was to learn from the mistake or failure so we wouldn't make it again. Applications for the most part haven't embraced this. Most of the time a try/catch is employed, with an even less likely alert to a human notifying them that their program ran into a new experience (or possibly a repeated one, if the error is caught numerous times before being patched, if ever). The human learns of the "infant's" mistake and hopefully corrects the issue. The problem here is that the program didn't learn; it simply was told nothing more than to check for a null object or the appropriate way to handle a specific scenario, i.e. a very rigid form of advancement.

This has been the accepted practice for as long as I have programmed. A bug arises, a fix is pushed to staging, tested and pushed to production (assuming it wasn't a hotfix). I don't think this is the right approach any longer. Gone are the days of extremely slow x86 CPUs or limited memory. In today's world we have access to extremely fast GPUs and vast amounts of memory that largely go unused, coupled with languages that facilitate anything we as programmers can dream up.
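To put that in code form, here is a minimal sketch of what recording those "experiences" could look like - the type, key scheme and storage here are purely illustrative placeholders, not production code:

// requires System and System.Collections.Generic
public class ExperienceRecord {
    public DateTime FirstSeen { get; set; }
    public string ExceptionType { get; set; }
    public string Message { get; set; }
    public string StackTrace { get; set; }
    public int TimesSeen { get; set; }
}

public static class ExperienceLog {
    private static readonly Dictionary<string, ExperienceRecord> _records = new Dictionary<string, ExperienceRecord>();

    public static void Record(Exception ex) {
        // key on the exception type and message so a repeated failure counts as the same "experience"
        var key = ex.GetType().FullName + "|" + ex.Message;

        ExperienceRecord record;

        if (_records.TryGetValue(key, out record)) {
            record.TimesSeen++; // a repeated experience
        } else {
            _records[key] = new ExperienceRecord {
                FirstSeen = DateTime.Now,
                ExceptionType = ex.GetType().FullName,
                Message = ex.Message,
                StackTrace = ex.StackTrace,
                TimesSeen = 1 // a brand new experience
            };
        }

        // persisting _records (database, file, cache) is where learning across restarts would come in
    }
}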

So where does the solution really reside?

I believe the key is to architect applications to become more organic, in that they should learn from paths taken previously. I have been bouncing this idea around for the last several years, looking at self-updating applications where metrics captured during use could be used to automatically update the application's own code. The problem with this is that you're then relying on both the original programming to affect the production code and on the code it would be updating, not to mention ensuring that changes made automatically are tracked and reported appropriately. I would venture most larger applications would also need projections to be performed prior to any change, along with the scope of what was changed.

Something I added into the same platform at work was the tracking of every request by user: what he or she was requesting, the timestamp and the duration the request took from the initial request to the returning of information or processing of the request. To me this not only provided the audit trails that most companies desire, and thereby the ability to add levels of security to specific pieces of information in the system regardless of the platform, but also the ability to learn things like "John Doe requests this piece of information and then this piece of information a few seconds later every time". At that point the system could look for these patterns and alert the appropriate party. In that example, is it that the user interface for what John Doe needs to access requires two different pages, or is it that he was simply investigating something? Without this level of granularity you are relying on the user to report these "issues", which rarely happens, as most users get caught up in simply doing their tasks as quickly as possible.

Going forward I hope to add automatic reporting of trends and take a proactive approach to performance-ridden areas of the system (if metrics for the last 6 months are returning a particular request in .13 seconds on average and then for the last week it jumps to 3.14 seconds on average, the dev team should be alerted so they can investigate the root cause). However, these are far from my longer term goals in designing a system that truly learns. More on this in the coming months as my next generation ideas come to fruition.
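As a rough sketch of that idea - the metric class, threshold and notification below are placeholders, not the actual platform code:

// requires System, System.Collections.Generic and System.Linq
public class RequestMetric {
    public string UserName { get; set; }
    public string RequestName { get; set; }
    public DateTime Timestamp { get; set; }
    public double DurationSeconds { get; set; }
}

public static void CheckForRegression(IEnumerable<RequestMetric> metrics, string requestName) {
    var now = DateTime.Now;
    var forRequest = metrics.Where(m => m.RequestName == requestName).ToList();

    var baselineAverage = forRequest.Where(m => m.Timestamp >= now.AddMonths(-6))
                                    .Select(m => m.DurationSeconds)
                                    .DefaultIfEmpty(0)
                                    .Average();

    var lastWeekAverage = forRequest.Where(m => m.Timestamp >= now.AddDays(-7))
                                    .Select(m => m.DurationSeconds)
                                    .DefaultIfEmpty(0)
                                    .Average();

    // a jump from .13 seconds to 3.14 seconds would easily trip a 2x threshold
    if (baselineAverage > 0 && lastWeekAverage > baselineAverage * 2) {
        // in the real system this would be an email or dashboard notification to the dev team
        Console.WriteLine("Performance regression on {0}: {1:F2}s -> {2:F2}s", requestName, baselineAverage, lastWeekAverage);
    }
}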


Yesterday I posted about my GitHub repo work over the last couple of weeks. One project I intentionally did not include was jcLSL. Followers of my blog know I had been toying with writing my own scripting language for quite some time, but kept hitting roadblocks in parsing and handling anything somewhat complex. This led me to think about what I would really want in a scripting language and what value it would bring to the already feature-rich C# ecosystem that I live and work in every day.

The Problem

At the end of the day I found that I really would only want a scripting language for doing mail merges on strings. Expanding on that, how many times have you needed to have a base template and then populate with a class object or even simply some business logic applied? I'm sure we've all done something like:

public string ParseString(string sourceString, string stringToReplace, string stringValue) {
    return sourceString.Replace(stringToReplace, stringValue);
}
While at a simplistic level this is acceptable, and a better approach than not wrapping the Replace call at all, it isn't ideal - especially if you are working with POCOs (Plain Old CLR Objects), where your code gets a lot dirtier. Assuming you wrapped the calls like in the above function, let's say you have a basic User class definition like so:

public partial class User {
    public int ID { get; set; }

    public string Name { get; set; }
}
And then in your parsing code assuming it also exists in a User class:

public string ToParsedString(string sourceString) {
    sourceString = sourceString.Parse("ID", this.ID);
    sourceString = sourceString.Parse("Name", this.Name);

    return sourceString;
}
As you can guess, not only is that code an eyesore, it doesn't scale when more properties are added to your class, and you'd end up adding something similar to each POCO in your code base - not ideal. This brought me to my first objective: solving this problem in an extremely clean fashion.

Earlier this week I got it to where any POCO with a jcLSLMemberAttribute decorating a property will be automatically parsed, handling the situation of code changing over time (some properties could go away, be renamed or added). With my current implementation, all you need to do is define a jcLSLGenericParser and then call the Run method, passing in the string to mail merge and the class object you wish to merge from, like so:

var TEST_STRING = "Hello {
This is a test of awesomness with User ID #{
"; var user = new User {
     ID = 1, Name = "Testing" }
; var gParser = new jcLSLGenericParser(); var parsedString = gParser.Run(TEST_STRING, user); ]]>
After running that block of code, parsedString will contain: Hello Testing This is a test of awesomeness with User ID #1. There is an optional event on the jcLSLGenericParser class as well if more custom parsing needs to be achieved.
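For reference, the decorated version of the User class from above would look something along these lines (the exact attribute usage shown here is illustrative):

public partial class User {
    [jcLSLMember]
    public int ID { get; set; }

    [jcLSLMember]
    public string Name { get; set; }
}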

Going forward

One of the first things to do is to add some more options for different scenarios. Maybe you've got a large code base and going through and adding the jcLSLMemberAttribute decoration would be a task in itself. One approach to solve this scenario is to add an optional parameter to the jcLSLGenericParser constructor to simply iterate through all the properties. This takes a performance hit, as one would expect, but it would keep the coupling to this library to a minimum. If you have thoughts on this, please post a comment.

On the larger scale, my next major goal is to add support for Streams and other output options. Let's say you had a static HTML template that needed to be populated with a News Post, for instance. The HTML template could be read in, mail merged against the database and then output to a string, streamed to a file or returned as binary. Trying to handle every scenario isn't realistic or feasible, I think, but handling the 90% scenario or better is my goal.

Another item that I hate implementing with mail merge fields is error handling. One approach is to simply return the exception or error in the mail merge itself. This isn't good when you have external customers and they see something like Object Reference Exception, or worse yet a full stack trace - the credibility of your product will go down quickly. However, I think a standardized error merge field to store the error to display on an exception page or email would make sense, handling both objectives in error handling: being aware of the error, but also handling it gracefully.

Further down the road I hope to start adding in support for "true" scripting so you could have conditionals within the merges or other logic you would want to be able to change on the fly without having to deploy an updated Mobile App, ASP.NET MVC Web App, WebAPI Service or whatever platform you're using with this library.

Where to get it

As mentioned earlier you can grab the code and samples on GitHub or grab the binary from either NuGet or from the NuGet Console with: PM> Install-Package jcLSL. As I check code into GitHub and get to stable releases, I'll update the NuGet package.

While largely quiet on here since my last post two weeks ago, I have been hard at work on several smaller projects, all of which are on GitHub. As mentioned previously, everything I work on in my free time will be open sourced under the MIT License.


The first item I should mention is some new functionality in my jcAnalytics library. Earlier this week I had some ideas for reducing collections of arbitrary data down to distinct elements. For example, if you had 3 objects of data, with 2 of them being identical, my reduction extension methods would return 2 instead of 3. This is one of the biggest problems I find when analyzing data for aggregation or simply reporting, especially when the original amount of data is several hundred thousand elements or more. I attempted the more straightforward single threaded model; as expected, the performance as the number of elements increased was dramatically slower than a parallel approach. Wondering if there were any theories on taking a sampling of data quickly to scale as the number of items increased, I was surprised there was not more research on this subject. Doing a Log(n) sample size seemed to be the "go to" method, but I could not find any evidence to support the claim. This is where I think recording patterns of data and then persisting those patterns could actually achieve this goal. Since every problem is unique and every dataset changes over time, the extension methods could in fact learn something along the lines of "I have a collection of 500,000 Addresses; the last 10 times I ran I only found 25,000 unique addresses, at an average rate of one every 4 records." On subsequent runs it could adapt per request - maybe assigning Guids or another unique identifier for each run, with the result patterns stored on disk, in a SQL database or in Azure Cache. For those curious, I did update the NuGet package with these new extension methods. You can download the compiled NuGet package here on NuGet or via the NuGet Console with PM> Install-Package jcANALYTICS.Lib.
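As a simplified illustration of the reduction idea (not the actual jcAnalytics implementation), a parallel distinct reduction can be as small as:

// requires System.Collections.Generic and System.Linq
public static class ReductionExtensions {
    public static List<T> ReduceToDistinct<T>(this IEnumerable<T> collection) {
        // PLINQ partitions the work across cores; for several hundred thousand elements
        // or more this tends to beat a single threaded pass
        return collection.AsParallel().Distinct().ToList();
    }
}

// Usage: 3 objects with 2 identical reduce down to 2
// var reduced = new[] { "a", "b", "b" }.ReduceToDistinct();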


A huge topic in my world at work has been offline/online hybrid mobile applications. The idea that one could "sync" and then pull down data for 100% offline use has been on my mind since it was requested several months ago by one of our clients. Knowing the first approach might not be the best, and that I wanted to create a generic portable class library that could be plugged into any mobile application on any platform (iOS, Android, Windows), I figured I would begin my research fully exposed on GitHub and then publish stable releases on NuGet as they were built. This project is of a larger nature in that it could quickly blossom into a framework instead of simply a library. As of right now on GitHub I have the GET, POST and DELETE HTTP verbs working to pull/push data, but not storing the data for offline purposes. I'm still working out the logistics of how I want to achieve everything, but the ultimate goal would be to have any request queued when offline and then automatically sync the data when a network connection is made. Handling multiple versions of data is a big question. Hypothetically, if you edited a piece of information and then edited it again, should it send the request twice or once? If you were online it would have sent it twice, and in some cases you would want the full audit trail (as I do in the large enterprise platform at work). Another question that I have not come up with a great answer for is the source of truth question. If you make an edit, then come online, I could see a potential race condition of the data syncing back and a request being made on the same data. Handling the push and pull properly will take some extensive logic and more than likely will be a global option, or configurable down to the request type level. I am hoping to have an early alpha of this working perfectly in the coming weeks.
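To give a rough shape to the queue-and-sync idea (the class and method names here are hypothetical, not the library's actual API):

// requires System, System.Collections.Generic, System.Net.Http and System.Threading.Tasks
public class QueuedRequest {
    public Guid Id { get; set; } // unique identifier per request for auditing
    public HttpMethod Method { get; set; }
    public string Url { get; set; }
    public string JsonBody { get; set; }
    public DateTime QueuedAt { get; set; }
}

public class OfflineRequestQueue {
    private readonly Queue<QueuedRequest> _pending = new Queue<QueuedRequest>();

    public void Enqueue(QueuedRequest request) {
        // a real implementation would also persist the queue locally so it survives an app restart
        _pending.Enqueue(request);
    }

    public async Task FlushAsync(HttpClient client) {
        while (_pending.Count > 0) {
            var request = _pending.Dequeue();
            var message = new HttpRequestMessage(request.Method, request.Url);

            if (request.JsonBody != null) {
                message.Content = new StringContent(request.JsonBody);
            }

            // conflict handling (the "source of truth" question above) would need to happen
            // here or server side before the change is applied
            await client.SendAsync(message);
        }
    }
}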


This project came at the request of my wife, who wanted a way to view Trendnet cameras from her Nokia Lumia 1020 Windows Phone. Trendnet only offered apps for iOS and Android and there were no free apps available in the Windows Phone marketplace - so I spent an evening and wrote one last August (2014). Again going with the Windows 10 Universal approach, I began to rewrite the app to take advantage of all the new XAML features and add in the functionality I had long since wanted. Going with my open source initiative, all of the code is checked into GitHub. I am hoping to have everything ported from the old Windows Phone 8.1 app, along with all of the new functionality, this summer.


Another older project that I see a need to fulfill going forward. Since Google Reader faded away, I switched over to feedly, but I really don't like their interface nor how slow it is. Originally this project was going to be an ASP.NET MVC/WebAPI project with a Windows Phone/Windows Store app. As with my other projects, I knew I wanted to simply port over the work I had done to a Windows 10 Universal App, but as I got into working on it, there was no reason to tie the apps back to a WebAPI service if I did away with the MVC view. Knowing I was going to be freely giving away this application and didn't want to have ads, I also didn't want to incur massive Azure fees if this were to take off. So for the time being this project will exist as a Windows 10 Universal App with full support for multiple devices (i.e. if you read an article on one device, it will be marked as read on the others). You can check out the code on GitHub. I'm hoping for a release in the coming months.


This was a project I had been slowly designing in my head since the summer of 2012 - a turn based Star Trek game without microtransactions and with the ability to simply keep playing as long as you want. I started coding this in August 2014 and into September 2014, but put it on hold to work on Windows IoT among other topics of interest. Now with Windows 10's release on the immediate horizon, I figured I should wrap up the game and in turn open source the project. As of now I'm in the process of porting the XAML over to Windows 10, as it was originally targeting Windows Phone 8.1. Once that process is complete, I will return to working on the logic and with any luck release it sometime this summer, but in the meantime you can check out the code on GitHub.


I originally wrote this "game" for my boss's child since there was not a dot math game in the Windows Phone marketplace. Seeing as how it got 0 downloads, I open sourced it. I did start porting it over to a Windows 10 Universal Application, but have not finished yet.


Now that Visual Studio 2015 RC is out, I will more than likely be returning to my open source bbXP project. The only reason I put it on hold was the issues I was running into with NuGet packages in CTP6 of Visual Studio 2015. Coming up is the 20th anniversary of when I wrote my first line of code - expect a retrospective post on that in a few weeks.

Continuing my work deep diving into ASP.NET 5 (vNext), I started going down the path of EntityFramework 7, which, similarly to ASP.NET 5, is like a reboot of the framework itself. For readers interested in diving in, I highly suggest watching the MVA video called What's New with ASP.NET 5, which goes over all of the changes in pretty good detail (though I have a running list of questions to ask at BUILD in a few weeks).

Noting that the EntityFramework 7 beta was included in my ASP.NET 5 project, I hit a roadblock in finding it through the usual method in the NuGet Package Manager. As of this writing, only 6.1.3 was available. In looking around, the answer is to add another NuGet Package Source. I had done this previously when I set up a private NuGet Package Server at work to host common libraries used throughout all of our projects. For those unaware, go to Tools->NuGet Package Manager->Package Manager Settings.

Once there, click on Package Sources and then the + icon, enter a descriptive name, paste the feed URL into the Source field and click Update. After you're done, you should have something similar to this:

You can now close out that window and return to the NuGet Package Manager and upon switching the Package Source dropdown to be ASP.NET vNext (or whatever you called it in the previous screen) you should now see EntityFramework 7 (among other pre-release packages) as shown below.

Hopefully that helps someone out there wanting to deep dive into EntityFramework 7.

Per my announcement on Sunday, I'm working on making bbXP (the CMS that runs this site) generic to the point where anyone could just use it with minimal configuration/customization. Along this journey, I'm going to be utilizing all of the new ASP.NET 5 (vNext) features. This way I'll be able to use my platform as a test bed for all of the new features of ASP.NET 5 and then apply them to production products at work, much like Version 1 was back in the day when I wanted to deep dive into PHP and MySQL in 2003, and MVC in general almost 2 years ago.

Tonight's deep dive was into the new configuration model. If you've been developing for ASP.NET, or .NET in general, you're probably accustomed to keeping your settings in either the app.config or the web.config.

And then in your app you would do something like this:

var siteName = ConfigurationManager.AppSettings["SITE_NAME"];
And if you got a little fancier you would add a wrapper in your base page or controller to return typed properties for booleans or integers.
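Something along these lines - a sketch of that kind of wrapper, assuming keys like SITE_NAME live in appSettings:

// requires System.Configuration
public static class AppSettingsHelper {
    public static string GetString(string key) {
        return ConfigurationManager.AppSettings[key];
    }

    public static bool GetBool(string key, bool defaultValue = false) {
        bool value;

        return bool.TryParse(ConfigurationManager.AppSettings[key], out value) ? value : defaultValue;
    }

    public static int GetInt(string key, int defaultValue = 0) {
        int value;

        return int.TryParse(ConfigurationManager.AppSettings[key], out value) ? value : defaultValue;
    }
}

// Usage: var siteName = AppSettingsHelper.GetString("SITE_NAME");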

With ASP.NET 5, configuration is completely new and extremely flexible, but with the same end result. I will assume you have at the very least downloaded and installed the Visual Studio 2015 CTP in addition to launching the ASP.NET 5 template to at least get somewhat comfortable with all the changes. If you are just starting, I highly suggest watching Daniel Roth's introduction video.

To dive into the configuration specifically, you will want to open Startup.cs. You will notice at the top of the class is the Startup constructor. For bbXP I wanted to add my own JSON configuration file, so my constructor looks like:
public Startup(IHostingEnvironment env) {
    Configuration = new Configuration()
        .AddJsonFile("config.json")
        .AddJsonFile("bbxpconfig.json");
}
Knowing I would not have the same ConfigurationManager.AppSettings access I am used to, I wanted to make a clean method with which to access these configuration options, and go one step further to make it strongly typed and utilize dependency injection. So I came up with a quick approach to dynamically populate a class and then use DI to pass the class to my controllers. To get started, I wrote a quick function to populate an arbitrary class:

private T readConfig<T>() {
    var tmpObject = Activator.CreateInstance<T>();
    var objectType = tmpObject.GetType();

    IList<PropertyInfo> props = new List<PropertyInfo>(objectType.GetProperties());

    var className = objectType.Name;

    foreach (var prop in props) {
        var cfgValue = Configuration.Get(String.Format("{0}:{1}", className, prop.Name));

        prop.SetValue(tmpObject, cfgValue, null);
    }

    return tmpObject;
}
And then my arbitrary class:

public class GlobalVars {
    public string SITE_NAME { get; set; }
}
Scrolling down to the ConfigureServices function also in the Startup.cs:

public void ConfigureServices(IServiceCollection services) {
    services.AddMvc();
    services.AddWebApiConventions();

    var options = readConfig<GlobalVars>();

    services.AddSingleton<GlobalVars>(a => options);
}
In this method the first 2 lines are unchanged, but the last 2 add my GlobalVars to the DI list and initialize it with the options from my file. Now to see it in action inside a controller:

[Activate]
private GlobalVars _globalVars { get; set; }

public IActionResult Index() {
    ViewBag.Title = _globalVars.SITE_NAME;

    return View();
}
Notice how clean the access to the option is now, simply using the new Activate attribute on top of the GlobalVars property. Something I'll be adding to this helper method going forward is type inference, so the readConfig method would typecast to the type of the property in your arbitrary class.
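As a rough sketch of what that type inference could look like (my own take on extending the helper above with Convert.ChangeType, not finished code):

private T readConfig<T>() {
    var tmpObject = Activator.CreateInstance<T>();
    var objectType = tmpObject.GetType();
    var className = objectType.Name;

    foreach (var prop in objectType.GetProperties()) {
        var rawValue = Configuration.Get(String.Format("{0}:{1}", className, prop.Name));

        if (rawValue == null) {
            continue;
        }

        // convert the string from configuration to the property's actual type so ints,
        // booleans etc. come through strongly typed instead of as strings
        var targetType = Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType;

        prop.SetValue(tmpObject, Convert.ChangeType(rawValue, targetType), null);
    }

    return tmpObject;
}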

Hopefully this helped someone out there in diving into the next version of ASP.NET, more to come for sure.

A little over a year ago now at work I started diving into my next generation platform. I knew it would have the usual specifications: super fast and efficient, full audit handling etc. The crown jewel was to be a fully automated scheduling engine, but over time the platform itself evolved into one of the most dynamic platforms I know to exist. Fast forward a few months, as the system was in Alpha, and I started having my own questions about how to predict events based on data points now that I was storing much more data. With several other requirements taking precedence I put it on the back burner, knowing the "brute force" method of comparing every value of every object on a per class or report basis was neither ideal nor viable given the amount of reporting possibilities.

Last Sunday afternoon, after playing around with Microsoft's Azure Machine Learning platform, I came to the realization that while their solution is great for non-programmers, I was left feeling as though I could do the same thing, but better, in C# - and because of that I could integrate it across every device, even for offline native mobile experiences.

Thinking about the question I had a while back at work: how can I predict things based on one (or maybe zero) data points?

Breaking down the problem, the first thing is to reduce the data set. Given thousands or millions of rows, chances are there are patterns in the data. Reducing the individual rows to what is more than likely a considerably smaller set would make additional processing and reporting much easier and more meaningful. Give a million rows to a C-level executive and they'll be left wondering what they are even looking at, but group the data points into maybe the top 10 most common scenarios and immediately he/she will see value.

At this point this is where I opened Visual Studio 2013 and dove into the problem at hand. An hour later I had a working reusable library: jcANALYTICS. As of right now the .NET 4.5 library does the following:

1. Reduces large datasets into groups of datasets
2. Helper methods to return the most common and least common data rows
3. Solution completer - providing an object, based on the larger dataset, fill in the blanks

As of right now it is also multi-threaded (this can be turned off if desired). Given 100,000 objects, on my AMD FX-8350 (8x4GHz) desktop it processes in just under 5 seconds - though there is certainly room for optimization improvements.

The next big question I imagine is how will this work with my existing application? Utilization of an external library takes some serious consideration (will the company cease to exist, thereby eliminating support; how much in bed will I be after integrating the framework or library; etc.). Well, good news - at a minimum, processing a dataset takes just 2 lines of code.

Let's assume you've got some class object like the following:

[Serializable]
public class Users {
    public string Username { get; set; }

    public bool? HasIOS { get; set; }

    public bool? HasAndroid { get; set; }

    public bool? HasWinPhone { get; set; }

    public bool? LivesInMaryland { get; set; }
}
To make it work with jcANALYTICS, just inherit from the jcAnalyticsBaseObject class and mark the properties to analyze with the Tally attribute like so:

[Serializable]
public class Users : jcAnalyticsBaseObject {
    public string Username { get; set; }

    [Tally]
    public bool? HasIOS { get; set; }

    [Tally]
    public bool? HasAndroid { get; set; }

    [Tally]
    public bool? HasWinPhone { get; set; }

    [Tally]
    public bool? LivesInMaryland { get; set; }
}
That's it, then assuming you've got a List collection of Users you would simply use the following lines of code to process the data:

var engine = new jcAEngine<Users>();
engine.AnalyzeData(users);
After that you've got 4 helper methods to access the analyzed data:

1. GetMostCommon - Returns the most common data row
2. GetLeastCommon - Returns the least common data row
3. GetGroupItems - Returns the analyzed/reduced data points
4. GetCompleteItem - Given an object T, fill in any properties based on the most common data that fits what was passed

I think the top 3 are self explanatory, but here's an example of the last function. Using the Users class above, assume you knew that a user lived in Maryland and had a Windows Phone, but you wanted to know, based on the other data, whether they had an iOS and/or Android device as well, like so:

var incompleteItem = new Users { LivesInMaryland = true, HasWinPhone = true };

var completeItem = engine.GetCompleteItem(incompleteItem);
The engine would proceed to look at all of the data points and use the most probable values.
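To illustrate the concept (this is not the actual engine's internals, just a sketch written directly against the Users class above):

// requires System.Collections.Generic and System.Linq
public static Users Complete(IEnumerable<Users> groupedData, Users incomplete) {
    // keep only the groups that agree with everything we already know
    var candidates = groupedData.Where(g => (incomplete.LivesInMaryland == null || g.LivesInMaryland == incomplete.LivesInMaryland)
                                         && (incomplete.HasWinPhone == null || g.HasWinPhone == incomplete.HasWinPhone))
                                .ToList();

    // then take the most frequent match and use it to fill in the blanks
    var best = candidates.GroupBy(g => new { g.HasIOS, g.HasAndroid })
                         .OrderByDescending(grp => grp.Count())
                         .Select(grp => grp.First())
                         .FirstOrDefault();

    if (best != null) {
        incomplete.HasIOS = incomplete.HasIOS ?? best.HasIOS;
        incomplete.HasAndroid = incomplete.HasAndroid ?? best.HasAndroid;
    }

    return incomplete;
}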

Going forward, my immediate objectives are to optimize the AnalyzeData method and make the library into a Portable Class Library so I can use it on any platform. Longer term I hope to continue to add more methods to analyze and produce data.

You can download the library from the NuGet Console or from here:

[bash] PM> Install-Package jcANALYTICS.Lib [/bash]


After having long been on my "todo list" to deep dive on a Saturday, ASP.NET SignalR finally got some attention yesterday. For those unaware, SignalR offers bi-directional communication between clients and servers over WebSockets. For those longtime readers of my blog, I did deep dive into WebSockets back in August 2012 with a WCF Service and Console App, though lost in the mix of several other technologies (MVC, WebAPI, OpenCL etc.) I had forgotten how powerful the technology was. Before I go any further, I highly suggest reading the Introduction to SignalR by Patrick Fletcher.

Fast forward to January 2015, things are even more connected to each other with REST Web Services like WebAPI, the Internet of Things and Mobile Apps from Android, iOS and Windows Phone exploding the last couple of years. Having a specific need for real-time communication from a server to client came up last Friday night for a dashboard in the next big revision of the architecture at work. The idea behind it would be to show off how every request was truly being tracked to who, when and what they were requesting or submitting. This type of audit trail I hadn't ever implemented in any of the previous three major Service Oriented architectures. In addition, presenting the data with Telerik's Kendo UI Data Visualization controls would be a nice way to show not only the audit trail functionality visually outside of the grid listing, but also to show graphs (pictures still tell a thousand words).

As I dove in, the only examples/tutorials I found were showing a simple chat: a user enters his or her name and messages, and without any postback the unordered list dynamically updates as new messages come in. Pretty neat - but what I was curious about was how one would execute a server side trigger to all the clients. Going back to my idea for enhancing my work project, it would need to be triggered by the WebAPI service and passed to the SignalR MVC app, with the main MVC app acting as the client to display anything triggered originally from the WebAPI service. So I started diving further into SignalR, and in this post I go over what I did (if there is a better way, please let me know in the comments). In the coming weeks I will do a follow up post as I expand the functionality to show, at a basic level, having three separate projects like the setup I will eventually implement at work.

MVC SignalR Server Side Trigger Example

The following code/screenshots all tie back to an example I wrote for this post, you can download it here.

To begin I started with the base Visual Studio 2013 MVC Project (I will assume from here on out everyone is familiar with ASP.NET MVC):
ASP.NET Web Application Visual Studio 2013 Template
Then select the MVC Template:
ASP.NET MVC Template

Add the NuGet package for SignalR (be sure to get the full package as shown in the screenshot, not just the client):
NuGet Manager with SignalR

Upon the NuGet Package completing installation, you will need to add an OWIN Startup File as shown below:
OWIN Startup Class - Visual Studio 2013

This is crucial to SignalR working properly. For posterity here is the Startup.cs in the project I mentioned above:
using Microsoft.AspNet.SignalR;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(SignalRMVC.Startup))]

namespace SignalRMVC {
    public class Startup {
        public void Configuration(IAppBuilder app) {
            var config = new HubConfiguration { EnableJavaScriptProxies = true };

            app.MapSignalR(config);
        }
    }
}
Also new for MVC developers is the idea of a SignalR Hub. You will need to add at least one SignalR Hub class to your project: go to Add and then New Item, scroll down to the SignalR grouping and select the SignalR Hub Class (v2) option as shown in the screenshot below:
OWIN Startup Class - Visual Studio 2013

In the Hub class you define the endpoint(s) for your SignalR Server/Client relationship. For this example, I wrote a simple SendMessage function that accepts a string parameter like so:
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

namespace SignalRMVC {
    [HubName("systemStatusHub")]
    public class SystemStatusHub : Hub {
        internal static void SendMessage(string logEntry) {
            var context = GlobalHost.ConnectionManager.GetHubContext<SystemStatusHub>();

            context.Clients.All.sendData(logEntry);
        }
    }
}
To make things a little cleaner for this example I added a BaseController with a wrapper to the SignalR Hub (mentioned above) adding in a timestamp along with the string passed from the MVC Action like so:
using System;
using System.Web.Mvc;

namespace SignalRMVC.Controllers {
    public class BaseController : Controller {
        internal void RecordVisit(string actionName) {
            SystemStatusHub.SendMessage(String.Format("Someone checked out the {0} page at {1}", actionName, DateTime.Now));
        }
    }
}
With the static wrapper function in place, let's look at the actual MVC Controller, HomeController:
using System.Web.Mvc;

namespace SignalRMVC.Controllers {
    public class HomeController : BaseController {
        public ActionResult Index() {
            RecordVisit("home");

            return View();
        }

        public ActionResult About() {
            RecordVisit("about");

            return View();
        }

        public ActionResult Contact() {
            RecordVisit("contact");

            return View();
        }
    }
}
Nothing unusual for a C# developer, simply passing an indicator based on the title of the Action.

And then Index.cshtml contains the reference to the dynamically generated /signalr/hubs JavaScript file, the JavaScript connection to the Hub and the handler for what should happen when it receives a message:

Site Activity

    Pretty simple, as the messages come in, append a li to the activityLog ul.

    Finished Product

    Below is a screenshot after clicking around from another browser:

SignalR in Action

Again, if you wish to download the complete example you can download it here. In the coming weeks expect at least one more SignalR post detailing a possible solution for common Service Oriented Architectures (separate MVC, WebAPI and SignalR hosted apps). I hope this helps someone out at the beginning of their SignalR journey.

    Some time ago (I want to say 2005) while working on my Infinity Project I was having issues with the large pre-rendered animations I was rendering out in 3ds max (funny to think that 1024x1024@32bpp was at one point huge) akin to the Final Fantasy VII-IX style. After a few animations not only were the files huge, the number of files (they were rendered out to individual frame files) got out of control. Thinking outside the box I started working on my own image format that could contain multiple frames of animation akin to a gif, while also applying what I thought was an ingenious compression approach. Not a hard problem, but when it came to writing a 3ds max or Photoshop plugin to convert targa files to the new format I was overwhelmed (surprisingly I never thought to just write a command line app to batch convert them).

    Fast forward 9 years to last April/May while watching the HBO show Silicon Valley, I was reminded of my earlier compression idea that I never brought to life. Approaching it as a 28 year old having spent the last 8 years doing professional C# as opposed to the 19 year old version of myself proved to be a huge help. I was able to finally implement it to some degree, however any file over 5mb would take several minutes even on my AMD FX-8350, not exactly performance friendly. So I shelved the project until I could come up with a more performance friendly approach and at the same time write it to be supported across the board (Windows Store, Win32, Windows Phone, Android, Linux, MacOSX, FreeBSD).

Fast forward again to December 21st, 2014. I had a hunch a slightly revised approach might satisfy the requirements I laid out earlier, and I finally gave it a name: jcCCL (Jarred Capellman Crazy Compression Library). Sure enough the hunch was correct, and at first I was seeing a 10-15X compression level over zip or rar on every file format (png, mp4, mp3, docx, pdf) I tried. As is the case most of the time, this was due to an error in my compression algorithm not outputting all of the bytes - one I uncovered when getting my decompression algorithm working.

As it stands I have a WPF Win32 application compressing and decompressing files with drag and drop support; however, the compression varies from 0 to 10% - definitely not a world changing compression level. Now that all of the architecture (versioning support, wrappers etc.) is complete, I can focus on optimizing the algorithm and adding in new features like encryption (I'm toying with the idea that it will be encrypted by default). More to come in time on this project - hoping to have a release available soon.
For as long as I can remember since C# became my language of choice, I've been yearning for the cleanest and most efficient way of getting data from a database (XML, SQL Server etc.) into my C# application, whether it was in ASP.NET, WinForms or a Windows Service. For the most part I was content with Typed DataSets back in the .NET 2.0 days: creating an XML file by hand with all of the different properties, running the command line xsd tool on the XML file and having it generate a C# class I could use in my WinForms application. This had problems later on down the road when Portable Class Libraries (PCLs) became available, eliminating code duplication but lacking Typed DataSet support to this day - not to mention Client<->Web Service interactions have changed greatly since then, switching to a mostly JSON REST infrastructure.

    Ideally I wish there was a clean way to define a Class inside a Portable Class Library (made available to Android, iOS, Windows Store, Windows Phone and .NET 4.5+) and have Entity Framework map to those entities. Does this exist today? To the best of my research it does not without a lot of work upfront (more on this later). So where does this lead us to today?

For most projects you'll probably see a simple Entity Framework Model mapped to SQL Tables, Views and possibly Stored Procedures living inside the ASP.NET WebForms or MVC solution. While this might have been acceptable 4 or 5 years ago, in the multi-platform world we live in today you can't assume you'll only have a Web client. As I've stated numerous times over the years, investing a little bit more time in the initial development of a project to plan ahead for multiple platforms is key today. Some of you might be saying "Well I know it'll only be a Web project, the client said they'd never want a native mobile app when we asked them." Now ask yourself how many times a client came back and asked for exactly what they said they never wanted. Planning ahead not only saves time later, but delivers a better product for your client (internal or external), bringing a better value add to your service.

Going back to the problem at hand: an Entity Framework model highly coupled to the rest of the ASP.NET application. Below are some possible solutions (not all of them, but in my opinion the most common):

    1. Easy way out

Some projects I have worked on had the Entity Framework model (and associated code) in its own library, with the ASP.NET (WebForms, MVC, WebAPI or WCF service) project simply referencing the library. This is better in that if you migrate from the existing project or want a completely different project to reference the same model (a Windows Service perhaps), you don't have to invest the time in moving all of the code and updating all of the namespaces in both the new library and the project(s) referencing it. However, you still have the tight coupling between your project and the Entity Framework model.

    2. POCO via Generators

Another possible solution is to use the POCO (Plain Old CLR Object) approach with Entity Framework. There are a number of generators (Entity Framework Power Tools or the EntityFramework Reverse POCO Generator), but both leave Entity Framework dependencies in the clients that reference the POCO classes, thus negating the idea that you'd be able to have one set of classes shared by both the clients of your platform and Entity Framework.

    3. POCO with Reflection

Yet another possible solution is to create a custom attribute and, via reflection, map a class object defined in your PCL. This approach has the cleanness of the following possible POCO class with custom attributes:
[POCOClass]
[DataContract]
public class UserListingResponseItem {
    [DataMember]
    [POCOMember("ID")]
    public int UserID { get; set; }

    [DataMember]
    [POCOMember("Username")]
    public string Username { get; set; }

    [DataMember]
    [POCOMember("FirstName")]
    public string FirstName { get; set; }

    [DataMember]
    [POCOMember("LastName")]
    public string LastName { get; set; }
}
The problem with this solution is that, as any seasoned C# developer knows, reflection is extremely slow. If performance weren't an issue (very unlikely), then this could be a possible solution.
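To make the reflection cost concrete, a mapping helper for this option might look something like the following (the mapper itself is hypothetical, and the attribute definition is repeated here so the snippet stands alone):

// requires System, System.Data and System.Reflection
[AttributeUsage(AttributeTargets.Property)]
public class POCOMemberAttribute : Attribute {
    public string ColumnName { get; private set; }

    public POCOMemberAttribute(string columnName) {
        ColumnName = columnName;
    }
}

public static class PocoMapper {
    public static T Map<T>(DataRow row) where T : new() {
        var item = new T();

        foreach (PropertyInfo prop in typeof(T).GetProperties()) {
            var attribute = (POCOMemberAttribute)Attribute.GetCustomAttribute(prop, typeof(POCOMemberAttribute));

            if (attribute == null || !row.Table.Columns.Contains(attribute.ColumnName)) {
                continue;
            }

            var value = row[attribute.ColumnName];

            if (value == DBNull.Value) {
                continue;
            }

            // this per-property reflection is exactly where the performance cost comes from
            prop.SetValue(item, value, null);
        }

        return item;
    }
}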

    4. DbContext to the rescue

    In doing some research on POCO with Entity Framework I came across one approach in which you can retain your existing Model untouched, but then define a new class inheriting from DbContext like so:
public class TestModelPocoEntities : DbContext {
    public DbSet<UserListingResponseItem> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder) {
        // Configure Code First to ignore the PluralizingTableName convention
        // If you keep this convention then the generated tables will have pluralized names.
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();

        modelBuilder.Entity<UserListingResponseItem>().ToTable("Users");
        modelBuilder.Entity<UserListingResponseItem>().Property(t => t.UserID).HasColumnName("ID");
        modelBuilder.Entity<UserListingResponseItem>().HasKey(t => t.UserID);
    }
}
    What this code block does is map the Users table to a POCO Class called UserListingResponseItem (the same definition as above). By doing so you can then in your code do the following:
using (var entity = new TestModelPocoEntities()) {
    return entity.Users.ToList();
}
As one can see this is extremely clean on the implementation side, albeit a bit tedious on the backend side. Imagining a recent project at work with hundreds of tables, this could be extremely daunting to maintain, let alone implement in a sizeable existing project.

Unsatisfied with these options, I was curious how a traditional approach would compare performance-wise to Option 4 above, given that it satisfied the requirement of a single class residing in a PCL. For comparison, assume the table is defined as such:

    Data Translation User Table

    The "traditional" approach:
using (var entity = new Entities.testEntities()) {
    var results = entity.Users.ToList();

    return results.Select(a => new UserListingResponseItem {
        FirstName = a.FirstName,
        LastName = a.LastName,
        Username = a.Username,
        UserID = a.ID
    }).ToList();
}
    Returns a List of Users EntityFramework objects and then iterates over every item and sets the equivalent property in the UserListingResponseItem Class before returning the result.

    The Benchmark

    For the benchmark I started with the MVC Base Template in Visual Studio 2015 Preview, removed all the extra Views, Controllers and Models and implemented a basic UI for testing:

    Data Translation Base UI

    A simple population of random data for the Users table and deletion of records before each test run:
private void createTestData(int numUsersToCreate) {
    using (var entity = new Entities.testEntities()) {
        entity.Database.ExecuteSqlCommand("DELETE FROM dbo.Users");

        for (var x = 0; x < numUsersToCreate; x++) {
            var user = entity.Users.Create();
            user.Active = true;
            user.Modified = DateTimeOffset.Now;
            user.Password = Guid.Empty;
            user.LastName = (x % 2 == 0 ? (x * x).ToString() : x.ToString());
            user.FirstName = (x % 2 != 0 ? (x * x).ToString() : x.ToString());
            user.Username = x.ToString();

            entity.Users.Add(user);
            entity.SaveChanges();
        }
    }
}
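For anyone wanting to reproduce the numbers, a minimal timing harness along these lines would do the job (a sketch, not the exact code behind the results below; the wrapper method names in the usage comment are placeholders):

// requires System and System.Diagnostics for Stopwatch
private double timeApproach(Action approach, int iterations) {
    var stopwatch = Stopwatch.StartNew();

    for (var x = 0; x < iterations; x++) {
        approach();
    }

    stopwatch.Stop();

    // average seconds per run
    return stopwatch.Elapsed.TotalSeconds / iterations;
}

// Usage (wrapping each approach in its own method):
// var traditionalAverage = timeApproach(() => getUsersTraditional(), 3);
// var pocoContextAverage = timeApproach(() => getUsersViaPocoContext(), 3);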

    Benchmark Results

    Below are the results running the test 3 times for each size data set. For those that are interested I was running the benchmark on my AMD FX-8350 (8x4ghz), VS 2013 Update 4 and SQL Server 2014 with the database installed on a Samsung 840 Pro SSD.

    Data Translation Performance Results

The results weren't too surprising to me, figuring the "traditional" approach would be a factor or so slower than the DbContext approach, but I hadn't thought about it from the standpoint of larger datasets being considerably slower. Granted we're talking fractions of a second, but multiply that by hundreds of thousands (or millions) of concurrent connections and it is considerable.

    Closing thoughts

Having spent a couple hours deep diving into the newer features of Entity Framework 6.x hoping that the golden solution would exist today, I'm having to go back to an idea I had several months ago, jcENTITYFRAMEWORK, in which at compile time the associations would be created mapping the existing classes to the equivalent Tables, Views and Stored Procedures, in addition to utilizing the lower level ADO.NET calls instead of simply making EntityFramework calls. Where I left it off, I was still hitting an ASP.NET performance hit on smaller data sets (though on larger data sets my implementation was several factors better). More to come on that project in the coming weeks/months, as Database I/O with C# is definitely not going away for anyone and there is clearly a problem with the possible solutions today. At the very least, coming up with a clean and portable way to allow existing POCOs to be mapped to SQL Tables and Stored Procedures is a new priority for myself.

    For those interested in the ASP.NET MVC and PCL code used in benchmarking the two approaches, you can download it here. Not a definitive test, but real world enough. If for some reason I missed a possible approach, please comment below, I am very eager to see a solution.
    After a failed attempt earlier this week (see my post from Tuesday night) to utilize the CLR that is bundled with Windows IoT, I started digging around for an alternative. Fortunately I was on Twitter later that night and Erik Medina had posted:

    Not having any time Wednesday night to dive in, I spent a few hours Thursday night digging around for the Mono for Windows IoT. Fortunately, it was pretty easy to find Jeremiah Morrill's Mono on Windows IoT Blog post, download his binaries and get going.

For those wanting to use Xamarin Studio or MonoDevelop instead of Visual Studio: after downloading the binary from Jeremiah, you'll need to add mcs.bat to your bin folder, with the following line (assuming you've extracted the zip file to mono_iot in the root of your C drive):
    [bash] @"C:\mono_iot\bin\mono.exe" %MONO_OPTIONS% "C:\mono_iot\lib\mono\4.5\mcs.exe" %* [/bash] For whatever reason this wasn't included and without it, you'll receive:
[bash] Could not obtain a C# compiler. C# compiler not found for Mono / .NET 4.5. [/bash] Inside of Xamarin Studio, go to Tools -> Options, scroll down to Projects -> .NET Runtimes and click Add, pointing at the root of the mono_iot folder. After you've added it, it should look like the following (ignoring the Mono 3.3.0 that I installed separately):

Xamarin Studio with Mono for Windows IoT setup

In addition, you'll need to copy the lib folder to your Galileo, along with at least mono.exe and mono-2.0.dll (both found in the bin folder from Jeremiah's zip file) to the folder where you intend to copy your C# executable. Alternatively, after copying over the entire mono_iot folder structure, you could add it to the path over a Telnet session like so (assuming once again you've extracted to c:\mono_iot):
    [bash] C:\mono_iot\bin>setx PATH "%PATH%;c:\mono_iot\bin" SUCCESS: Specified value was saved. [/bash] In order for the path variable to update, issue a shutdown /r.

    If you want to see the existing variables and their values you can issue a set p which will list the following after you've rebooted your Galileo:
    [bash] Path=C:\windows\system32;C:\windows;C:\wtt;;c:\mono_iot\bin PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC PROCESSOR_ARCHITECTURE=x86 PROCESSOR_LEVEL=5 PROCESSOR_REVISION=0900 ProgramData=C:\ProgramData ProgramFiles=C:\Program Files PROMPT=$P$G PUBLIC=C:\Users\Public [/bash] With the environment variable updated, you'll no longer have to either copy the mono executable to the folder of your app nor include the full path over Telnet - definitely a time saver in my opinion.

    Now back to deploying my little WebClient test from Tuesday with Mono...

From Visual Studio you don't need to set anything up differently; however, I was running into issues with the app.config when compiling from Visual Studio and deploying a little test app to my Galileo:
    [bash] System.AggregateException: One or more errors occurred ---> System.TypeInitializationException: An exception was thrown by the type initializer for System.Net.HttpWebRequest ---> System.Configuration.ConfigurationErrorsException: Error Initializing the configuration system. ---> System.Configuration.ConfigurationErrorsException: Unrecognized configuration section [/bash] So I went back to using Xamarin Studio, but received the following:
    [bash] System.Net.WebException: An error occurred performing a WebClient request. ---> System.NotSupportedException: at System.Net.WebRequest.GetCreator (System.String prefix) [0x00000] in :0 at System.Net.WebRequest.Create (System.Uri requestUri) [0x00000] in :0 at System.Net.WebClient.GetWebRequest (System.Uri address) [0x00000] in :0 at System.Net.WebClient.SetupRequest (System.Uri uri) [0x00000] in :0 at System.Net.WebClient.OpenRead (System.Uri address) [0x00000] in :0 --- End of inner exception stack trace --- at System.Net.WebClient.OpenRead (System.Uri address) [0x00000] in :0 at System.Net.WebClient.OpenRead (System.String address) [0x00000] in :0 at (wrapper remoting-invoke-with-check) System.Net.WebClient:OpenRead (string) [/bash] Not a good sign - essentially saying WebClient isn't supported. Got me thinking to verify the version of Mono from Jeremiah:
    [bash] C:\>mono -V Mono JIT compiler version 2.11 (Visual Studio built mono) Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. TLS: normal SIGSEGV: normal Notification: Thread + polling Architecture: x86 Disabled: none Misc: softdebug LLVM: supported, not enabled. GC: Included Boehm (with typed GC) [/bash] From the 2.x branch not the newer 3.x branch like what I utilize at work for my iOS and Android development, but not wanting to go down the path of creating my own 3.x port I kept diving in - attempting to try the HttpClient that I knew wasn't supported by Windows IoT's CLR. I threw together a quick sample to pull down the compare results from jcBENCH to the console:
public async Task<T> Get<T>(string url) {
    using (var client = new HttpClient()) {
        var result = await client.GetStringAsync(url);

        return JsonConvert.DeserializeObject<T>(result);
    }
}

public async void RunHttpTest() {
    var result = await Get<List<string>>("");

    foreach (var item in result) {
        Console.WriteLine(item);
    }
}
As far as the project was concerned, I added the .NET 4.5 version of Newtonsoft.Json.dll to the solution via NuGet and made sure it was copied over during deployment. To my surprise, it worked:
[bash] C:\winiottest>mono winiottest.exe AMD A10-4600M APU with Radeon(tm) HD Graphics AMD A10-7850K Radeon R7, 12 Compute Cores 4C 8G AMD A6-3500 APU with Radeon(tm) HD Graphics AMD A6-5200 APU with Radeon(TM) HD Graphics AMD Athlon(tm) 5150 APU with Radeon(tm) R3 AMD Athlon(tm) 5350 APU with Radeon(tm) R3 AMD C-60 APU with Radeon(tm) HD Graphics AMD E-350D APU with Radeon(tm) HD Graphics AMD E2-1800 APU with Radeon(tm) HD Graphics AMD FX(tm)-8350 Eight-Core Processor AMD Opteron(tm) Processor 6176 SE ARMv7 Processor rev 0 (v7l) ARMv7 Processor rev 1 (v7l) ARMv7 Processor rev 2 (v7l) ARMv7 Processor rev 3 (v7l) ARMv7 Processor rev 4 (v7l) ARMv7 Processor rev 9 (v7l) Cobalt Qube 2 Intel Core 2 Duo Intel Core i5-4300U Intel(R) Atom(TM) CPU Z3740 @ 1.33GHz Intel(R) Core(TM) i3-2367M CPU @ 1.40GHz Intel(R) Core(TM) i7-4650 Intel(R) Quartz X1000 Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz Intel(R) Xeon(R) CPU E5440 @ 2.83GHz PowerPC G5 (1.1) R14000 UltraSPARC-IIIi [/bash] I should note that even with the newly added Sandisk Extreme Pro SDHC card, the total time from execution to returning the results was around 10 seconds, whereas on my AMD FX-8350, also hard wired, it returns in less than a second. Given that the Galileo itself is only 400mhz, you definitely won't be running a major WebAPI service on this device, but there are some definite applications (including one I will be announcing in the coming weeks).

    More to come with the Galileo - I received my Intel Centrino 6235 mPCI WiFi/Bluetooth card yesterday and am just awaiting the half->full length mPCIe adapter so I can mount it properly. With any luck I will receive that today and will post on how to get WiFi working on the Galileo under Windows IoT.

    Intel Galileo Development Board
    After hearing about and seeing Windows IoT while at BUILD this April, I was waiting for an opportunity to really get started with it. Fortunately, this weekend I was near a Fry's Electronics and picked up an Intel Galileo Development Board for nearly half off.

    Intel Galileo Development Board - All included parts
    Inside the box you'll find the various power plugs used around the world, a USB cable, the development board and the power brick itself.

After reading through the very thorough (no need to elaborate on the steps) Updating your Intel Galileo guide, I was able to get Windows IoT onto a Micro SDHC card. Make sure to connect to the Client Port (the one closest to the Ethernet Port).

    Installing Windows IoT to Galileo
    30 minutes into the installation:

    Installing Windows IoT to Galileo - 30 minutes in
    The whole process took 45 minutes on my HP dv7 laptop to install to my SDHC card. For those curious I chose to use a Sandisk Ultra Plus 16gb Micro SDHC card, while by no means the fastest (the Sandisk Extreme Pro from what I've read is the fastest), it was the most cost effective at $15.

    Intel Galileo Development Board - All included parts
    After unplugging the power from the Galileo, removing the SDHC card from my PC and popping it into the Galileo I was pleasantly surprised to be able to Telnet in after a minute or two.

    Windows IoT with Telnet into Intel Galileo
    For those curious, as I have said in other posts, I find XShell to be the best SSH/SERIAL/TELNET client for Windows, best of all it is free for non-commercial use.

After installing WindowsDeveloperProgramforIOT.msi, I started porting jcBENCH to Windows IoT. Since jcBENCH is C++ and written in an extremely portable manner, the only big thing I had to do was recompile the pthreads Win32 library to not take advantage of SSE instructions, as the Galileo does not support them. The other thing to note is that if you want to run a program in a more traditional manner, simply do the following:

int _tmain(int argc, _TCHAR* argv[]) {
    ArduinoInit();

    // rest of your program here

    return 0;
}
    The base template that is installed in Visual Studio 2013 is focused more on applications that loop continuously (which makes sense given the headless nature of Windows IoT).

So how did the Intel Galileo fare in jcBENCH?

[bash] C:\jcBench>jcBENCH.cmdl jcBENCH 1.0.850.0531(x86/WinIoT Intel Galileo Edition) (C) 2012-2014 Jarred Capellman Usage: jcBench [Number of Objects] [Number of Threads] Example: jcBench 100000 4 This would process 100000 objects with 4 threads Invalid or no arguments, using default benchmark of 10000 objects using 1 CPUS CPU Information --------------------- Manufacturer: GenuineIntel Model: Intel Quartz X1000 Count: 1x399.245mhz Architecture: x86 --------------------- Running Benchmark.... Integer: 1 Floating Point: 1 Submit Result (Y or N):n Results can be viewed and compared on [/bash] Given that it is a single 400mhz Pentium (a P5, essentially) - not bad, but not great. It got a score of 1 in both Floating Point and Integer operations, given the baseline is a dual core AMD C-60 running at a much faster clock rate. This isn't discouraging, especially given the applications for this platform, including my own idea from back in March 2014 (at the bottom of that post) of having my own Network Monitoring software. Given the lower power usage and the challenge of making my C++ code fast enough to do those operations, I'm up for the task.

As for what is next with the Intel Galileo? I have an Intel Centrino 6235 mPCIe 802.11n/Bluetooth 4.0 card coming today for my Galileo, as it is one of the few confirmed working WiFi cards. In addition I have a 16GB SanDisk Extreme Pro SDHC card on its way to test whether or not the "best" card has any impact on a real-world application. For those curious you can now obtain the Windows IoT version of jcBENCH on the official jcBENCH page.

    From a developer perspective, for those needing to make use of Pthreads Library, you can download the source and binary compatible with Windows IoT here.

While doing some routine maintenance on this blog, I updated the usual packages (JSON.NET, Entity Framework etc.). When testing locally afterwards, I came across the following error:
    ASP.NET Webpages Conflict

    In looking at the Web.config, the NuGet Package did update the dependentAssembly section properly:
    ASP.NET Webpages Conflict

    However, in the appSettings section, it didn't update the webpages:Version value:
    ASP.NET Webpages Conflict

    Simply update the "" to "" and you'll be good to go again.


For the longest time I’ve had these great ideas only to keep them in my head and then watch someone else or some company turn around and develop the idea (not to say someone stole the idea, but given that there are billions of people on this planet, it is only natural to assume one of those billions would come up with the same idea). Having watched this happen, as I am sure other developers have since the 70s, I’ve decided to put my outlook on things here once a year, every July.

Anyone who has read my blog for a decent amount of time knows I am very much a polyglot of software and enjoy the building/configuration/maintenance aspect of hardware. For me, they go hand in hand. The more I know about the platform itself (single-threaded versus multi-threaded performance, disk IOPS etc.), the better I can program the software I develop. Likewise, the more I know about a specific programming model, the better I understand the hardware it is specialized for. To take it a step further, this makes implementation decisions at work and in my own projects better.

As mentioned in the About Me section, I started out in QBasic and a little later, when I was 12, really got into custom PC building (which wasn’t anywhere near as big as it is today): digging through the massive Computer Shopper magazines, drooling over the prospect of the highest end Pentium MMX CPUs, massive (at the time) 8 GB hard drives and 19” monitors, along with the less glamorous 90s PC issues of IRQ conflicts, pass-through 3Dfx Voodoo cards that required a 2D video card (and yet another PCI slot), SCSI PCI controllers and dedicated DVD decoders. Suffice it to say I was glad I experienced all of that, because it created a huge appreciation for USB, PCI Express, SATA and, if nothing else, the stability of running a machine 24/7 under a heavy workload (yes, part of that is also software).

    To return to the blog’s title…

    Thoughts on the Internet of Things?

Universally I do follow the Internet of Things (IoT) mindset. Everything will be interconnected, which raises the question of privacy and what that means for the developers of the hardware and software and for the consumer. As we all know, your data is money. If the lights in your house, for instance, were WiFi enabled and connected into a centralized server in your house with an exposed client on a tablet or phone, I would be willing to bet the hardware and software developer would love to know the total energy usage, which lights in which room were on, what type of bulbs were used and when the bulbs were dying. Marketing data could then be sold to let you know of bundle deals, new "more efficient" bulbs, or how much time is spent in which rooms (if you are in the home theater room a lot, sell the consumer on Blu-rays and snacks for instance). With each component of your home becoming this way, more and more data will be captured, and in some cases it will be possible to predict what you want before you realize it, simply based off your tendencies.

While I don’t like the lack of privacy in that model (hopefully some laws can be enacted to resolve those issues), and being a software developer I would hate to ever be associated with the backlash of capturing that data, this idea of everything being connected will create a whole new programming model. With the recent trend towards REST web services returning gzipped JSON with WebAPI, for instance, submitting and retrieving data has never been easier or more portable across so many platforms. With C# in particular, in conjunction with the HttpClient library available on NuGet, a lot of the grunt work is already done for you in an asynchronous manner. Where I do see a change is in the standardization of an API for your lights, TV, garage door, toaster etc., allowing 3rd party plugins and universal clients to be created rather than having a different app to control each element, or one company providing a proprietary API that only works on their devices, forcing the difficult decision for the consumer to either stay with that provider to be consistent or mix the two, requiring 2 apps/devices.
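To give a sense of how little grunt work is left on the client side, here is a minimal sketch of pulling gzipped JSON from a hypothetical home automation endpoint with HttpClient; the URL and the idea of a "light status" API are made up for the example.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class LightStatusClient {
    // Hypothetical endpoint exposed by a home automation hub
    private const string STATUS_URL = "http://homehub.local/api/lights/status";

    public static async Task<string> GetStatusJsonAsync() {
        // Ask for gzip and let the handler transparently decompress the response
        var handler = new HttpClientHandler {
            AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
        };

        using (var client = new HttpClient(handler)) {
            // Returns the raw JSON payload; deserialize with JSON.NET or similar as needed
            return await client.GetStringAsync(STATUS_URL);
        }
    }
}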

    Where do I see mobile technology going?

Much like where mobile devices have headed (as I predicted 2 years ago), apps are becoming ever more integrated into your device (for better or for worse). I don’t see this trend changing, but I do hope from a privacy standpoint that apps have to become more explicit about what they are accessing. I know there is a fine line for the big three (Apple, Google and Microsoft) before becoming overly explicit about every action (remember Vista?), but I think if an app accesses more than your current location, those capabilities should be presented in a bolder or larger font to better convey the app's true access to your device. I don’t see this situation getting better from a privacy standpoint, but I do see more and more customer demand for the "native" experience to be like that of Cortana on Windows Phone 8.1. She has access to the data you provide her and will help make your experience better. As phones provide more and more APIs, this trend will only continue until apps are more like plugins to your base operating system’s experience, integrating into services like Yelp, Facebook, Twitter etc.

    Where do I see web technology going?

I enjoyed diving into MVC over the last year and a half. The model definitely feels much more in line with an MVVM XAML project, but it still has an overwhelmingly strong tie to the client side between the heavy use of jQuery and the level of effort in maintaining the ever changing browser space (i.e. browser updates coming out at an alarming rate). While I think we all appreciate when we go to a site on our phones or desktop and it scales nicely, providing a rich experience no matter the device, I feel the ultimate goal of trying to achieve a native experience in the browser is a waste of effort. I know just about every web developer might stop reading and be in outrage - but what was the goal of the last web site you developed and designed that was also designed for mobile? Was it to convey information to the masses? Or was it simply a stopgap until you had a native team to develop for the big three mobile platforms?

In certain circumstances I agree with the stance of making HTML 5 web apps instead of native apps, especially when the cost of a project is prohibitive. But at a certain point, especially as of late with Xamarin’s first-class citizen status with Microsoft, you have to ask yourself: could I deliver a richer experience natively and possibly faster (especially given the vast range of mobile browsers to contend with in the HTML 5 route)?

If you’re a C# developer who wants to see a native experience, definitely give the combination of MVVMCross, Xamarin’s framework and Portable Class Libraries a try. I wish all of those tools existed when I first dove into iOS development 4 years ago.

    Where do I see desktop apps going?

In regards to desktop applications, I don’t see them going away even in the "app store" world we live in now. I do, however, see customers demanding a richer experience after having a rich native experience on their phone or after using a XAML Windows 8.x Store App. The point being, I don’t think it will be acceptable for an app to look and feel like the default WinForms grey and black color scheme that we’ve all used at one point in our careers and more than likely began our programming careers with (thinking back to classic Visual Basic).

Touch will also play a big factor in desktop applications (even in the enterprise). Recently at work I did a Windows 8.1 Store App for an executive dashboard. I designed the app with touch in mind, and it was interesting how it changed my perspective on interacting with data. The app in question utilized multi-layered graphs and a Bing Map with several layers (heat maps and pushpins). Gone was the unnatural mouse scrolling; instead there was pinching, zooming and rotating as if one were in a science fiction movie from just 10 years ago.

I see this trend continuing, especially as practical general purpose devices like laptops gain touch screens at every price point instead of commanding the premium they previously did. All that needs to come about is a killer application for the Windows Store - could your next app be that app?

    Where is programming heading in general?

Getting programmers out of the single-threaded, top-to-bottom programming mindset. I am hoping next July when I do a prediction post this won’t even be a discussion point, but sadly I don’t see this changing anytime soon. Taking a step back and looking at what this means generally speaking: programmers aren’t utilizing the hardware available to them to its full potential.

Over 5 years ago at this point, I found myself at odds with a consultant who kept asking for more and more CPUs to be added to a particular VM. At the time he first asked, it seemed reasonable, as there was considerably more traffic coming to a particular ASP.NET 3.5 web application as a result of a lot of eagerly awaited functionality he and his team had just deployed. Even after the additional CPUs were added, his solution was still extremely slow under no load. This triggered me to review his Subversion checkins, and I realized the crux of the matter wasn’t the server - it was his single-threaded, resource intensive/time consuming code. In this case, the code was poorly written on top of trying to do a lot of work on a particular page. For those that remember back to .NET 3.5’s implementation of LINQ, it wasn’t exactly a strong performer in performance intensive applications, let alone when looped through multiple times as opposed to using one larger LINQ query. The moral of the story: with single-threaded code, the additional CPUs only helped with handling the increased load, not the performance of a user’s experience in a 0% load session.

A few months later when .NET 4 came out of beta, and further still when the Task Parallel Library was released, my view on performance changed (after all, jcBENCH stemmed from my passion for diving into parallel programming on different architectures and operating systems back in January 2012). No longer was I relying on CPUs with high single-threaded performance, but instead writing my code to take advantage of the ever-increasing number of cores available to me at this particular client (for those curious, 2U 24 core Opteron HP G5 rackmount servers).

With .NET 4.5’s async/await I was hopeful that more of the developers I worked with would take advantage of this easy model and no longer lock the UI thread, but I was largely disappointed. If developers couldn’t grasp async/await, let alone the TPL, how could they proceed to what I feel is an even bigger breakthrough becoming available to developers: heterogeneous programming, or more specifically OpenCL?
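For context, this is roughly the pattern I was hoping to see more of - keeping the UI thread free while the work happens on the thread pool. A minimal WinForms sketch (the benchmark method is just a stand-in):

using System;
using System.Threading.Tasks;
using System.Windows.Forms;

public class BenchmarkForm : Form {
    private readonly Label _resultsLabel = new Label { Top = 40, AutoSize = true };

    public BenchmarkForm() {
        var runButton = new Button { Text = "Run" };
        runButton.Click += RunButton_Click;

        Controls.Add(runButton);
        Controls.Add(_resultsLabel);
    }

    // async void is fine for event handlers; awaiting keeps the UI thread responsive
    private async void RunButton_Click(object sender, EventArgs e) {
        _resultsLabel.Text = "Running...";

        // Push the CPU-bound work to the thread pool instead of blocking the UI
        var result = await Task.Run(() => RunBenchmark());

        // Execution resumes here on the UI thread, so updating controls is safe
        _resultsLabel.Text = result;
    }

    private static string RunBenchmark() {
        System.Threading.Thread.Sleep(2000); // stand-in for real work
        return "Done";
    }
}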

    With parallel programming comes the need to break down your problem into independent problems, all coming together at a later time (like breaking down image processing to look at a range of pixels rather than the entire image for instance). This is where Heterogeneous Programming can make an even bigger impact, in particular with GPUs (Graphics Processing Units) which have upwards of hundreds of cores to process tasks.
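To make the decomposition idea concrete, here is a minimal CPU-side sketch using the TPL rather than OpenCL: the image is treated as independent rows of pixels, which is exactly the kind of shape that later maps well onto a GPU kernel. The grayscale conversion is just a stand-in workload.

using System.Threading.Tasks;

public static class ImageProcessor {
    // pixels[y][x] holds packed RGB values; each row can be processed independently
    public static void ToGrayscale(int[][] pixels) {
        Parallel.For(0, pixels.Length, y => {
            var row = pixels[y];

            for (int x = 0; x < row.Length; x++) {
                int r = (row[x] >> 16) & 0xFF;
                int g = (row[x] >> 8) & 0xFF;
                int b = row[x] & 0xFF;

                // Simple luminance approximation - this per-pixel work is what a GPU kernel would do
                int gray = (r * 30 + g * 59 + b * 11) / 100;
                row[x] = (gray << 16) | (gray << 8) | gray;
            }
        });
    }
}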

I had dabbled in OpenCL as far back as June 2012 while working on the OpenCL version of jcBENCH, and I did some further research back in January/February of this year (2014) in preparation for a large project at work - a project I ended up using the TPL extensively for instead. The problem wasn’t OpenCL’s performance, but my mindset at the time. Before the project began, I thought I knew the problem inside out, but really I only knew it as a human would think about it - not as a machine that only knows 1s and 0s. The problem wasn’t a simple task, nor was it something I had ever even attempted previously, so I gave myself some slack two months in when it finally hit me what I was really trying to solve: teaching a computer to think like a human. Therefore, when pursuing heterogeneous programming as a possible solution, ensure you have a 100% understanding of the problem and what you are ultimately trying to achieve - in some cases it will make sense to utilize OpenCL, in others a traditional parallel model like the TPL.

So why OpenCL outside of the speed boost? Think about the last laptop or desktop you bought; chances are it has an OpenCL 1.x compatible APU and/or GPU in it (i.e. you aren’t required to spend any more money - you’re just utilizing what is already available to you). In particular on the portable side, where laptops/Ultrabooks already have a lower performing CPU than your desktop, why tie up the CPU when the GPU could offload some of that work?

The only big problem with OpenCL for C# programmers is the lack of an officially supported interop library from AMD, Apple or any of the other members of the OpenCL group. Instead you’re at the mercy of using one of the freely available wrapper libraries like OpenCL.NET or simply writing your own wrapper. I haven’t made up my mind yet as to which path I will go down - but I know at some point a middleware makes sense. Wouldn’t it be neat to have a generic work item and be able to simply pass it off to your GPU(s) when you wanted?
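Purely as a thought experiment - none of this exists today and every name here is hypothetical - the middleware I have in mind might expose something along these lines, with the OpenCL (or TPL fallback) plumbing hidden behind the dispatcher:

using System.Collections.Generic;

// A unit of work that also knows how to run on plain CPU threads as a fallback
public interface IWorkItem<TInput, TResult> {
    TResult Execute(TInput input);
}

// The middleware would decide whether to run the batch on the GPU (via an
// OpenCL wrapper) or fall back to the TPL, without the caller caring which
public interface IComputeDispatcher {
    IList<TResult> Run<TInput, TResult>(IWorkItem<TInput, TResult> workItem, IList<TInput> inputs);
}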

As far as where to begin with OpenCL in general, I strongly suggest reading the OpenCL Programming Guide. For those who have done OpenGL and are familiar with the "Red Book", this book follows a similar pattern with a similar expectation and end result.


Could I be way off? Sure - it’s hard to predict the future while staying grounded in the past that brought us here; it’s hard to let go of how we as programmers and technologists have evolved over the last 5 years to satisfy not only consumer demand but our own, and to anticipate what is next. What I am most curious to hear is from programmers outside of the CLR, in particular the C++, Java and Python crowds - where they feel the industry is heading and how they see their programming language handling the future, so please leave comments.
    Working on the Android port of jcBENCH today and ran into a question:
    How to accurately detect the number of CPUs on a particular Android Device running at least Android version 4.0?

    This led me to search around, I found a pretty intuitive native Java call:
Runtime.GetRuntime().AvailableProcessors();
But this was returning 1 on my dual core Android devices. So I continued searching; one suggestion on Stack Overflow was to count the number of entries listed in the /sys/devices/system/cpu folder that begin with cpu followed by a number. I ported the Java code listed in the Stack Overflow answer into C# and ran it on all 4 Android devices I own - all of them returned 3 (and looking at the actual listing, only a cpu0 entry was actually present).
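For those curious, that ported approach boils down to something like the following sketch - counting the entries under /sys/devices/system/cpu whose name is cpu followed by digits:

using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

public static class CpuCounter {
    // Counts entries like cpu0, cpu1, ... under /sys/devices/system/cpu
    public static int GetCpuCountFromSysFs() {
        var entries = Directory.GetFileSystemEntries("/sys/devices/system/cpu");

        return entries.Count(path => Regex.IsMatch(Path.GetFileName(path), @"^cpu[0-9]+$"));
    }
}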

This got me thinking: I wonder if the traditional C# approach would work in this case? Xamarin after all is built off of Mono....

    Sure enough:
System.Environment.ProcessorCount
    Returned properly on each device. Hopefully that helps someone out there.
    Saturday morning after resolving my Realtek RTL8111-GR Issues on my new ClearOS box, I ran into yet another error:


Knowing the AM1 platform that my AMD Athlon 5350 APU/ASUS AM1I-A motherboard runs on more than likely does not support IOMMU like my desktop's 990FX does, I figured it was a detection issue with the Linux kernel that ClearOS 6.5 utilizes.

Doing some research into the issue, there are a couple of adjustments to your GRUB configuration that may or may not resolve it. In my case, adjusting my GRUB arguments upon boot to include iommu=soft resolved the issue. I'm hoping down the road with newer Linux kernels the detection (if that even is the issue) gets better, but for those running an AMD "Kabini" APU who ran into this, you'll at least be able to boot into your Linux distribution without any issues.

    Continuing down the path of securing my home network, I wanted to get some sort of automated reporting of traffic and other statistics. Looking around I came upon Monitorix, which offered everything I was looking for. Unfortunately, adding Monitorix to my ClearOS 6.5 install wasn't as trivial as a yum install. In addition, there seems to be a huge gap between the version all of the online guides include (2.5.2-1) and the current as of this writing, version 3.5.1-1. With some work I was able to get the latest installed and running with 1 caveat.

    Installing 2.5.2-1

    To get started execute the following commands:
[bash]
yum-config-manager --enable clearos-core clearos-developer
yum upgrade
yum --enablerepo=clearos-core,clearos-developer,clearos-epel install clearos-devel app-devel
yum install app-web-server rrdtool rrdtool-perl perl-libwww-perl perl-MailTools perl-MIME-Lite perl-CGI perl-DBI perl-XML-Simple perl-Config-General perl-HTTP-Server-Simple
rpm -ivh
[/bash]
Edit /etc/httpd/conf.d/monitorix.conf and update the line that has "" to "all".

    In addition depending on your setup, you may want to configure Monitorix itself in the /etc/monitorix.conf file for an eth1 or other devices that aren't "standard".

Once satisfied with the configuration, execute the following commands:
[bash]
service httpd start
service monitorix start
[/bash]
Now you should be able to access Monitorix from http://localhost/monitorix.

    Installing 3.5.1-1

    Not content to be running a 2 year old version of the software if only for the principle of it, I started to deep dive into getting the latest version up and running. I tried my best to document the steps, though there was some trial and error in doing the upgrade. Going from a fresh install you may need to execute some of the yum commands above, in particular the first 2 commands.

    First off execute the following commands:
[bash]
yum --enablerepo=rpmforge install perl-HTTP-Server-Simple
yum install perl-IO-Socket-SSL perl-XML-Simple perl-Config-General perl-HTTP-Server-Simple
wget HTTP-Server-Simple-0.440.0-3-mdv2011.0.noarch.rpm
[/bash]
These will download the necessary prerequisites for the newer version of Monitorix. Next you'll download and install the new rpm:
[bash]
wget
rpm -U monitorix-3.5.1-1.noarch.rpm
[/bash]
Then restart httpd and monitorix:
[bash]
service httpd restart
service monitorix restart
[/bash]
After restarting you may notice an error:
[bash]
Starting monitorix: Can't locate HTTP/Server/Simple/ in @INC (@INC contains: /usr/bin/lib /usr/lib/monitorix /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/lib/monitorix/ line 27.
BEGIN failed--compilation aborted at /usr/lib/monitorix/ line 27.
Compilation failed in require at /usr/bin/monitorix line 30.
BEGIN failed--compilation aborted at /usr/bin/monitorix line 30.
[/bash]
Doing some research, for Perl to adjust the @INC path permanently it requires a recompile, so in order to fix the problem permanently for Monitorix simply copy /usr/lib/perl5/vendor_perl/5.X.X/HTTP to /usr/lib/monitorix/.

    After copying the folder, you may also need to verify the path changes for the new version in the /etc/monitorix/monitorix.conf to match the following:
[bash]
base_dir = /var/lib/monitorix/www/
base_lib = /var/lib/monitorix/
base_url = /monitorix
base_cgi = /monitorix-cgi
[/bash]
Also verify the first few lines match the following in /etc/httpd/conf.d/monitorix.conf:
[bash]
Alias /monitorix /usr/share/monitorix
Alias /monitorix /var/lib/monitorix/www
[/bash]
After restarting httpd and monitorix (same commands as above), I was presented with a "500 Internal Error". Knowing the errors are logged in the /var/log/httpd/error_log file, I immediately scrolled to the end to find the root cause of the internal error:
[bash]
[Sat May 10 23:12:04 2014] [error] [client] Undefined subroutine &main::param called at /var/lib/monitorix/www/cgi/monitorix.cgi line 268., referer:
[Sat May 10 23:12:04 2014] [error] [client] Premature end of script headers: monitorix.cgi, referer:
[/bash]
Having not done Perl in nearly 12 years, I simply went to line 268:
[bash]
our $mode = defined(param('mode')) ? param('mode') : '';
[/bash]
Looking at the error, it looks to have stemmed from the param calls. Knowing for myself this would always be localhost, I simply updated the line to the following:
[bash]
our $mode = 'localhost';
[/bash]
Attempting to restart monitorix again I received the same error on the next line, so for the time being I "hard coded" the values like so:
[bash]
our $mode = 'localhost'; #defined(param('mode')) ? param('mode') : '';
our $graph = 'all'; #param('graph');
our $when = '1day'; #param('when');
our $color = 'black'; #param('color');
our $val = ''; # defined(param('val')) ? param('val') : '';
our $silent = ''; # defined(param('silent')) ? param('silent') : '';
[/bash]
After saving and restarting Monitorix, I was presented with the Monitorix 3.5.1-1 landing page:

    Monitorix 3.5.1-1

    Clicking on Ok I was presented with all of the graphs I was expecting. To give a sample of a few graphs:
    Monitorix Network Graph

    Monitorix System Graph

    In the coming days I will revisit the error and dig up my old Perl books to remove the hard coded values. Hopefully this helps someone out there with ClearOS wanting to get some neat graphs with Monitorix.

As some may be aware, I recently purchased an ASUS AM1I-A for a new ClearOS machine to run as a firewall. The installation of ClearOS 6 went extremely smoothly, but upon restarting I kept receiving kernel panic errors from eth1 (the onboard Realtek RTL8111-GR). After doing some investigating, it turns out RHEL, and thereby ClearOS, have an issue with loading the r8169 kernel module when it detects the RTL8111 (and its variants).

    Sure enough after doing an lspci -k:
[bash]
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 11)
        Subsystem: ASUSTeK Computer Inc. Device 859e
        Kernel driver in use: r8169
        Kernel modules: r8169
[/bash]
The dreadful r8169 kernel module is the only module installed and in use. Thankfully you can download the r8168 x64 rpm here or wget

    After downloading, simply run:
[bash]
rpm -i kmod-r8168-8.037.00-2.clearos.x86_64.rpm
[/bash]
and then:
[bash]
modprobe r8168
[/bash]
Then add blacklist r8169 to a .conf file in /etc/modprobe.d/ (the file name itself doesn't matter) and restart your machine.

Once your machine is back up, you can verify the correct r8168 module is loaded by re-running lspci -k:
[bash]
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 11)
        Subsystem: ASUSTeK Computer Inc. Device 859e
        Kernel driver in use: r8168
        Kernel modules: r8168, r8169
[/bash]
After installing and using the r8168 module I no longer received kernel panic errors and was able to utilize the onboard RTL8111-GR without issue. Hopefully this helps someone else who ran into the same issue I did.

    Another day, another updated port of jcBENCH, this time for x86/Linux. There are no dependencies, just execute and run.

    You can download the x86-linux-0.8.755.0507 release here.

[bash]
jcBENCH 0.8.755.0507(x86/Linux Edition)
(C) 2012-2014 Jarred Capellman

CPU Information
---------------------
Manufacturer: AuthenticAMD
Model: AMD Phenom(tm) II X2 545 Processor
Count: 2x2999.977mhz
Architecture: x86
---------------------
Running Benchmark....
Integer: 15.8167 seconds
Floating Point: 22.9469 seconds
[/bash]
To recap, the following ports still need to be updated:
    -x86/MacOS X
    -arm/Windows Phone 8
    -Windows Store

Hoping to get more of these ports knocked out in the coming days, so as always check back here. In addition, I'm hoping to get an official page up and running with all of the releases conveniently located in one place instead of the current situation. No ETA on that project, however.
    I'm pleased to announce the first release of the x86/FreeBSD port of jcBENCH. A few notes on this port, that I thought would be interesting:

    1. At least with FreeBSD 10.0, you need to use clang++ instead of g++.
    2. With FreeBSD in Hyper-V I needed to switch to utilizing the Legacy Network Adapter.

    You can download the 0.8.755.0504 release here.
[bash]
jcBENCH 0.8.755.0505(x86/FreeBSD Edition)
(C) 2012-2014 Jarred Capellman

CPU Information
---------------------
Manufacturer: AuthenticAMD
Model: AMD Phenom(tm) II X2 545 Processor
Count: 2x1517mhz
Architecture: amd64
---------------------
Running Benchmark....
Integer: 14.877 seconds
Floating Point: 17.5544 seconds
[/bash]
I'm hoping to have the x86/Linux release later this week.
A little less than 2 months ago I had some crazy ideas for interacting with a SQL database from C#, as opposed to simply using ADO.NET or Microsoft's own Entity Framework. Not sure exactly how I was going to implement some of the features, I shelved the idea until I came up with a clean way to implement it.

    With this project I had four goals:
    1. Same or similar syntax to Entity Framework - meaning I should be able to simply drop in my framework in place of Entity Framework with little to no changes.
2. Performance should be equal to or better in both console and WebAPI applications - covering both scenarios: desktop applications and, as is the norm today, WebAPI services executing SQL server side and then returning results to a client.
    3. Implement my own caching syntax that puts the effort of caching on the Framework, not the user of the Framework.
4. Provide an easy way to generate strongly typed classes akin to Microsoft's Entity Framework.

This weekend I was able to achieve #1 and, to some degree, #2.

    I was able to achieve an identical syntax to Entity Framework like in the snippet below:
using (var jFactory = new jcEntityFactory()) {
    jFactory.JCEF_ExecuteTestSP();
}
In regards to performance, I wrote two tests: one that simply called a stored procedure with a single insert statement and another that returned several thousand rows. To get somewhat realistic results I directly referenced the framework in a console application, and then wrote a WebAPI service referencing the framework along with a wrapper function to call the WebAPI service from a console application.
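The timing harness itself is nothing fancy; a minimal sketch of the idea (the runTest delegate stands in for either framework's call, and the class and method names here are made up for illustration) looks like this:

using System;
using System.Diagnostics;

public static class BenchmarkHarness {
    // Runs the supplied action the given number of times and returns the average milliseconds per call
    public static double AverageMilliseconds(Action runTest, int iterations) {
        // Warm up once so connection pools and JIT don't skew the first sample
        runTest();

        var stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++) {
            runTest();
        }

        stopwatch.Stop();

        return stopwatch.Elapsed.TotalMilliseconds / iterations;
    }
}

For example, BenchmarkHarness.AverageMilliseconds(() => jFactory.JCEF_ExecuteTestSP(), 1000) gives the per-call average for my framework, and swapping the delegate gives the Entity Framework number.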

Without further ado, here are the results running it with 10 to 1000 iterations:
[bash]
Console App Tests
JC EF 10 Iterations with average of 0.00530009
MS EF 10 Iterations with average of 0.05189771

WebAPI Tests
JC EF 10 Iterations with average of 0.18459302
MS EF 10 Iterations with average of 0.12075582

Console App Tests
JC EF 100 Iterations with average of 0.000740188
MS EF 100 Iterations with average of 0.005783375

WebAPI Tests
JC EF 100 Iterations with average of 0.018184102
MS EF 100 Iterations with average of 0.011673686

Console App Tests
JC EF 1000 Iterations with average of 0.0002790646
MS EF 1000 Iterations with average of 0.001455153

WebAPI Tests
JC EF 1000 Iterations with average of 0.0017801566
MS EF 1000 Iterations with average of 0.0011440657
[/bash]
An interesting note is the difference in WebAPI performance compared to the console application. Sadly, with a WebAPI service my framework is nearly twice as slow, but in console applications (and presumably WinForms and WPF as well) my framework was considerably faster.

So where does that leave the future of the framework? First off, I am going to investigate further the performance discrepancies between the two approaches. Secondly, I am going to add in caching support with the following syntax (assuming one would want to cache a query result for 3 hours):
using (var jFactory = new jcEntityFactory()) {
    jFactory.Cache(JCEF_ExecuteTestSP(), HOURS, 3);
}
    More to come with my framework as it progresses over the next few weeks. As far as a release schedule, once all four of my main project requirements are completed I will release a pre-release version on NuGet. I don't plan on open-sourcing the framework, but that may change further down the road. One thing is for sure, it will be freely available through NuGet.

    Gateway rebadged Cobalt Qube2 - Front
    Gateway rebadged Cobalt Qube2 - Rear


Ever since I saw one of these devices about 10 years ago I had always wanted to own one, but they were always above the price point I’d pay for a 14 year old network appliance. Lo and behold, I was recently able to finally acquire the Gateway rebadged version (unknowingly a rare version) for $41 shipped, in near perfect condition.

For those that may be unaware, Cobalt Networks, Inc. originally released the Qube 2 in 2000; in September 2000 Sun Microsystems purchased Cobalt Networks (finalized in December 2000) and in turn ended the entire line in 2003. For more information on the story behind the development, I highly suggest reading the Startup Story of Cobalt, a very neat story of bringing an idea to fruition (and to a $2.1 billion buyout in 4 years).


The Qube and Qube 2 are interesting devices in that they run MIPS R5k CPUs (a 150 MHz RM5230 and a 250 MHz RM5231-250-Q respectively). Those following my blog know I am a huge fan of RISC CPUs, especially MIPS, so it was neat for me to get to play around with MIPS in a machine outside of a Silicon Graphics workstation or server, on top of a semi-current operating system. I should note, for those curious, that you can run NetBSD on some SGI systems, but if you have a Silicon Graphics machine why wouldn’t you want to run IRIX?

My particular model has only 64MB of RAM, which can be upgraded to 256MB via two 128MB SIMMs. The Qube 2 requires a special 72-pin EDO SIMM running at 3.3V. Before buying any RAM second hand via eBay, a flea market etc., be sure it is 3.3V. On eBay as of this writing there is one vendor with a stockpile of 128MB SIMMs for a fairly reasonable price - so if you’re in the market for a Qube, or have one running with the stock amount, I recommend obtaining the RAM now before the price spikes because of the obscurity of the RAM or, worse, the lack of supply anywhere.

The IO board itself is interesting in that it connects to the CPU board via what looks to be a 32-bit 33MHz PCI slot - meaning the two 10/100 DEC 21143 Ethernet controllers, the VT82C586 VIA ATA 33 controller and any PCI card in the additional slot compete for that ~133 MB/sec of bandwidth, versus other implementations of the time where each of those devices (or at least a few) would have been on their own controllers. Thinking about that further, based on the aforementioned Startup Story of Cobalt and their thinking of making the Qube dual CPU - maybe the idea was to simply drop another CPU board into the opposing PCI slot while also allowing the OEM or customer to drop in a modem (like Gateway did) or another PCI card?

Another note: the temptation might be to throw in a Western Digital Black 7200rpm SATA II drive with a SATA->PATA adapter, but the tiny exhaust fan on the back of the Qube might not be enough to cool the machine down, let alone handle the additional power draw on the older power supply. I highly recommend one of the following: stick with a 5400rpm ATA drive from that era, an IDE -> Compact Flash adapter (slowest option by far) or a SATA I/II SSD with a SATA -> PATA converter.

    Inside the Qube 2

    Gateway rebadged Cobalt Qube2 – CPU Board
    Gateway rebadged Cobalt Qube2 - CPU
    Gateway rebadged Cobalt Qube2 – IO Board
    Gateway rebadged Cobalt Qube2 – Seagate ATA IV 20GB Hard Drive
    Gateway rebadged Cobalt Qube2 - RAM

    Pre-Installation Notes

    Seeing as how things go away (especially when the product line has been EOL for 10+ years) I’m locally hosting the Cobalt Qube 2 User Manual, which was helpful in figuring out how to open the Qube 2 properly.

In addition, for those curious, the proper serial port settings are 115,200 baud, 8 data bits, 1 stop bit, no parity and no flow control. I found it gave me peace of mind to have the serial port connected to my PC during the install, because outside of the Qube 2’s LCD screen you will have no other indication of what is occurring.

    A good SSH/TELNET/SERIAL/RLOGIN client for Windows is the Netsarang’s Xshell, which is free for home/student work (I prefer it over PuTTY for those curious).

    Installing NetBSD

    Gateway rebadged Cobalt Qube2 - Installing
    I don’t want to simply regurgitate the very informative NetBSD/cobalt Restore CD HOWTO, but to put it simply:

    Download the latest ISO from the NetBSD site. As of this writing, the latest is 5.2.2 released on 2/1/2014.

    Obtain a crossover Cat5 cable or use an adapter and connect two Cat5(e) cables from the Primary Ethernet port on the Qube to your device

    Burn the ISO Image to a CD, pop it into a laptop or desktop and boot from the CD (I used my HP DV7-7010US for those curious and during the boot process the CD will not touch your existing file system)

    Once the Restore CD of NetBSD has finished booting up on your device, turn on the Qube and hold the left and right arrows until it says Net booting

    It took a few minutes from this point to the restorecd ready message being displayed on the LCD screen, then hold the select button for two seconds, hit the Enter button twice (once to select the restore option and the second time to confirm the restore)

    From here it actually installs NetBSD, some files took longer than others (due to the size more than likely) and for those curious here are the files installed in 5.2.2:
    1. base.tgz
    2. comp.tgz
    3. etc.tgz
    4. man.tgz
    5. misc.tgz
    6. text.tgz
    This step of the installation only took a few minutes and when it was completed the LCD updated to indicate success and that it was rebooting:

    Gateway rebadged Cobalt Qube2 – NetBSD Installed and Restarting

After this occurred, the first time I installed NetBSD, the LCD appeared stuck on [Starting up]. In going through the serial connection log, it appeared my SSD was throwing write errors during installation. I then swapped out the SSD for a 160GB Western Digital SATA drive I had lying around, performed the installation again and had a successful boot from the Qube itself:

    Gateway rebadged Cobalt Qube2 – NetBSD Running

    The point being – if it hangs, hook up the serial connection upon attempting to reinstall NetBSD.

    Post Installation

After unplugging my laptop and hooking the Qube 2 to one of my gigabit switches, I was under the impression Telnet was running and available to connect as root without a password. Unfortunately I was given the following error:
[bash]
Trying
Connected to
Escape character is '^]'.
telnetd: Authorization failed.
Connection closed by foreign host.
[/bash]
Doing some research, RLOGIN appeared to be the solution to at least get into the device so I could enable SSH. After switching from TELNET to RLOGIN I was in:
[bash]
Connecting to
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

login: root
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 The NetBSD Foundation, Inc. All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993 The Regents of the University of California. All rights reserved.

NetBSD 5.2.2 (GENERIC) #0: Sat Jan 18 18:11:12 UTC 2014

Welcome to NetBSD!

Terminal type is xterm.
We recommend creating a non-root account and using su(1) for root access.
#
[/bash]
I then went into my /etc/rc.conf and added sshd=YES, ran /etc/rc.d/sshd start, and immediately sshd generated its keys and started itself.

Also, be sure to set a password for root, which you can do simply by running /usr/bin/passwd.

By default SSH will not allow root to connect (for good reasons), so be sure to add another user. You can add another user that has su-to-root abilities with useradd -m -G wheel johndoe, where johndoe is the username you wish to add.

    Benchmarking Results

As anyone who has followed my blog for some time knows, one of the first things I do on a new platform is port jcBENCH to it if a port doesn’t already exist. Luckily, NetBSD came with GCC 4.1.1, and BSD derivatives offer a pretty neat C header, sysctl.h, that provides a lot of the CPU/architecture information very easily. After implementing the necessary changes and recompiling (wasn’t too slow to compile, I have to say), I ran jcBENCH:
[bash]
$ ./jcBench 100000 1
jcBENCH 0.8.752.0306(mips/NetBSD Edition)
(C) 2012-2014 Jarred Capellman

CPU Information
---------------------
Manufacturer: cobalt
Model: Cobalt Qube 2
Count: 1x250mhz
Architecture: mipsel
---------------------
Running Benchmark....
Integer: 475.185 seconds
Floating Point: 2298.39 seconds
[/bash]
For those who recall, I recently upgraded a Silicon Graphics Indy to an R5000 CPU, so I was very curious how the Qube running NetBSD would compare to the older Indy. I should note a fair comparison would be to compile jcBENCH with the exact or a similar version of GCC in IRIX instead of the 3.4.6 version in nekoware - so take these results with a grain of salt. The results were interesting in that Floating Point performance was hugely impacted on the Qube (similarly to the R5000SC and R5000PC used in Silicon Graphics machines).

This led me to investigate the exact variants of the RM5XXX CPUs used in the Silicon Graphics O2, Indy and the Cobalt Qube. Lo and behold, the Qube’s variant runs on a "crippled" 32-bit system bus and without any L2 cache. This got me thinking of any other R5k series Silicon Graphics machines I had owned - the Silicon Graphics O2 I received in February 2012 came to mind, but sadly it had the R5000SC CPU with 512kb of Level 2 cache. Also unfortunately, I sold that machine off before I had a chance to do an IRIX port of jcBENCH in April 2012.

    What’s Next?

The Qube 2 offers a unique opportunity: a MIPS CPU, extremely low power requirements and virtually silent operation - leaving me to come up with a 24/7 use for the device. In addition, the base NetBSD installation even with 64MB of RAM leaves quite a bit left over for additional services:
[bash]
load averages: 0.02, 0.05, 0.03; up 0+00:28:26 16:30:24
18 processes: 17 sleeping, 1 on CPU
CPU states: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Memory: 17M Act, 312K Wired, 5516K Exec, 7616K File, 35M Free
Swap: 128M Total, 128M Free

PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND
0 root 125 0 0K 2040K schedule 0:09 0.00% 0.00% [system]
628 root 43 0 4572K 1300K CPU 0:00 0.00% 0.00% top
583 root 85 0 16M 3944K netio 0:00 0.00% 0.00% sshd
560 jcapellm 85 0 16M 2916K select 0:00 0.00% 0.00% sshd
462 root 85 0 12M 2304K wait 0:00 0.00% 0.00% login
485 root 85 0 11M 1820K select 0:00 0.00% 0.00% sshd
603 jcapellm 85 0 4280K 1336K wait 0:00 0.00% 0.00% sh
114 root 85 0 3960K 1320K select 0:00 0.00% 0.00% dhclient
420 root 85 0 4552K 1204K kqueue 0:00 0.00% 0.00% inetd
630 root 85 0 3636K 1188K pause 0:00 0.00% 0.00% csh
458 root 85 0 4024K 1168K select 0:00 0.00% 0.00% rlogind
454 root 85 0 3636K 1164K ttyraw 0:00 0.00% 0.00% csh
159 root 85 0 4248K 1100K kqueue 0:00 0.00% 0.00% syslogd
1 root 85 0 4244K 1028K wait 0:00 0.00% 0.00% init
439 root 85 0 3956K 988K nanoslp 0:00 0.00% 0.00% cron
444 root 85 0 4216K 956K ttyraw 0:00 0.00% 0.00% getty
437 root 85 0 4224K 924K nanoslp 0:00 0.00% 0.00% getty
395 root 85 0 3960K 868K select 0:00 0.00% 0.00% paneld
[/bash]
Thinking back to the device’s original intent and my current interest in mobile device interoperability with "wired" devices, I’ve decided to go back to an idea I had in November 2012 called jcNETMAP. This Windows Phone 8 app’s purpose was to alert you when the servers you specified went down, utilizing the Scheduled Task feature of the Windows Phone API without relying on any other software or device such as GFI Alerts, a custom Windows Service etc.

Taking the idea in a slightly different direction, consider the following example of what I imagine is a typical home network:

Maybe add a few tablets, smartphones, consoles, DVRs etc., but the idea being you’re relying on your router’s internal firewall rules to protect your home network. With the advent of DD-WRT among others, people might be running a more tuned (albeit all-around better) firmware, but still, how often can you say you’ve gone into your router and updated the firmware or checked the access logs for anything suspicious? Wouldn’t it be nice if the traffic funneling from outside to your router could be analyzed like a traditional firewall/packet filter would, and, if anything didn’t look right, a notification was sent to your smartphone along with a weekly analysis of the "unusual" traffic? I know you could probably set up an open source project to do packet filtering, but would it have the mobile component and run on hardware as low-end as a Qube 2? And first and foremost, it is simply a learning experience for me to dive into an area I had never really gotten into previously.

    Like many of my ideas – there could be some serious hurdles to overcome. Among others the biggest concern I have is whether or not the Qube 2 would be able to process packets fast enough on my home network to even make this possible – let alone connect into the Web Service which in turn would send the notification to your Windows Phone 8.x device quickly. Regardless of how this project ends up I am sure it will be worth the investment of time, if nothing else I will learn a ton.

In the coming days look for jcBENCH and jcDBENCH MIPS/NetBSD releases. Also, I have two 128MB SIMMs and a 30GB Corsair NOVA SSD coming this week to pop into the Qube 2 - will post any hurdles.

    Any general questions about the Qube 2, feel free to comment below and I’ll answer them as soon as I can.

For frequent visitors to this site, you might have noticed the "metro" look starting to creep into the site design. Going back to April, my game plan was to simply focus on getting the site functioning and duplicating the look and feel of the WordPress theme I had been using. It took a little longer than expected to start the "refit", but I think when it is completed it will be pretty cool. For now I know the site only looks "perfect" in IE 11; there's some tweaking to be done for Chrome and Firefox, but that's on the back burner until I round out the overall update.

In addition, the My Computers page has been updated to include a breakout of the non-x86 systems I have. As I get time I'm going to flesh out the detailed design and start to get content in there for each system. These later generation Silicon Graphics machines seem to have little to no documentation available as far as what upgrades are possible, things to look for and min/max configurations, not to mention the "loudness" factor. Look for a big update to that section in the coming weeks.

    Lastly, in regards to development, jcDB is progressing well on both ia64/Linux and x86/Win32. I am still on track to wrap up a 0.1 release by the end of the year with a C# Library to assist with interfacing with the database. Mode Xngine is also progressing well, I spent a little bit of time over the last 2 weeks working on the SDL Interface for the platforms that don't support Mono.

    This week overall, I expect to post a few DEC Alpha related posts and with any luck get some time to play with NT4/Windows 2000 on one of them.
Installing a new Windows Service on a server today, I ran into an unexpected error using installutil, found in your Windows Folder\Microsoft.NET\Framework\vX.xxxx folder.

    The error I received:
[bash]
An exception occurred during the Install phase.
System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.
[/bash]
Judging by the error message, it appeared to be a permissions issue; luckily, the solution is to simply run the command prompt as an administrator.
With the ever changing software development world that I (and every other developer) live in, I’ve found it increasingly harder to keep up with every toolset and every framework, let alone language. In the last few years I have attempted to buckle down and focus on what I enjoy the most: C#. Over the last 6+ years of C# development I’ve inherited or developed ASP.NET WebForms 1.1 to 4.5 web applications, MVC4, 3.5 to 4.5 WinForms desktop applications, Windows Workflow, custom SharePoint 2010 parts, WPF, iOS, Android, Windows Phone 7/8, web services in WCF and WebAPI, and most recently dove into Windows 8 development. Suffice it to say - I’ve been around the block in regards to C#.

About 6 months ago, I started doing some early research in preparation for a very large project at work that relied more on mathematical/statistical operations than the traditional "make stuff happen" work I am used to. Keeping with an open, out of the box mentality, I just happened to be in the Book Buyers Inc. bookstore in downtown Mountain View, California on vacation and picked up Professional F# 2.0 for a few dollars used. Knowing they were already on version 3, I figured it would provide a great introduction to the language and then I would advance my skills through MSDN and future books. I pored through the book on the overly long flight from San Francisco International to Baltimore-Washington International, using my laptop the entire flight back to write quick snippets that I could easily port back and forth between C# and F# to see the benefits and best use cases for F#. When I returned home, I found myself wanting more, and as fate would have it, shortly afterwards SyncFusion was offering the F# Succinctly e-book by Robert Pickering for free.

Eager to read the e-book after my introduction to F#, I ended up finishing it over a short weekend. The e-book, while much shorter than the paperback I purchased, provided a great introduction and solidified many of the concepts I was still cementing in my mind. Like other developers, I am sure, when investing time into a new technology or language you want some guarantee of its success and worthiness of your time (especially if it is coming out of your precious off hours). Be happy to know the author chose to include real-world quotes and links to successes with F# over traditional C# implementations. I should note that while the author does not assume Visual Basic or C# experience, having it definitely helps; still, I feel the book provides an in-depth enough explanation and easy to follow examples for anyone with some higher level programming experience to grasp the main concepts and build a solid foundation to grow from.

Another element of the e-book I personally enjoyed was the intuitive and easy to follow progression the author chose to utilize. Early on in the book the author offered an introduction to F# and proceeded to dive into the fundamentals before providing real use cases that a professional software developer would appreciate. Several books provide an introductory chapter only to spend the next half of the book on reference manual text or snippets that don’t jump out at you with real world applicability or even a component of one.

    If there was one element I wished for in the e-book, it would be for it to be longer or a part 2 be written. This "sequel" would build on the concepts provided, assuming a solid foundation of F# and dive into more real-world scenarios where F# would be beneficial over C# or other higher level programming languages. Essentially a "best practices" for the C#/F# programmer.

    On a related note, during my own investigations into F# I found the Microsoft Try F# site to be of great assistance.

    In conclusion, definitely checkout the F# Succinctly e-book (and others) in SyncFusion’s ever growing library of free e-books.
After updating a large ASP.NET 4.5 WebForms application Friday, this afternoon I started to take a look into the new features in the release and discovered the "lightweight" rendering mode of the RadWindow control. Previously - going back to 2011/2012 - one of my biggest complaints with the RadWindow was the hacking involved in order to make the popup appear relatively the same across Internet Explorer 9, Chrome and Firefox. Some back and forth with Telerik's support left much to be desired, so I ended up just padding the bottom and it more or less worked. Thus my excitement for a possible fix to this old problem - it turns out the new lightweight mode does in fact solve the issue; across Internet Explorer 11, Firefox and Chrome there are only minimal differences now for my popup's content (W3C validated DIVs for the most part).

    This is where the fun ended for me temporarily - in the Q2 2013 (2013.2.717.45) release, the h6 tag for the Title was empty:

    I immediately pulled open the Internet Explorer 11 Developer Tool Inspector and found this curious:
Not a fan of hacking control suites, but needing to implement a fix ASAP so I could continue development, I simply put this CSS override in my WebForm's CSS theme file:
[css]
.RadWindow_Glow .rwTitleWrapper .rwTitle {
    width: 100% !important;
}
[/css]
Depending on the skin you selected, you will need to update the name of the class. Hope that helps someone out there - according to the forums, it is a known issue and a fix will be in the Q2 2013 SP1 release, but in the meantime this will correct the issue.
    After a few days of development, jcBENCH2 is moving along nicely. Features completed:

    1. WebAPI and SQL Server Backend for CRUD Operations of Results
    2. Base UI for the Windows Store App is completed
    3. New Time Based CPU Benchmark inside a PCL
    4. Bing Maps Integration for viewing the top 20 results

    Screenshot of the app as of tonight:
    jcBENCH2 Day 4

    What's left?

    7/17/2013 - Social Networking to share results
    7/18/2013 - Integrate into the other #dev777 projects
    7/19/2013 - Bug fixes, polish and publish

    More details of the development process after the development is complete - I would rather focus on the actual development of the app currently.
    Starting a new, old project this weekend as part of the #dev777 project, jcBENCH 2. The idea being, 7 developers, develop 7 apps and have them all communicate with each other on various platforms.

Those that have been following my blog for a while might know I have a little program called jcBENCH that I originally wrote in January 2012 as part of my trip down the Task Parallel Library in C#. Originally I created Mac OS X, IRIX (in C++), Win32 and Windows Phone 7 ports. This year I created a Windows Phone 8 app and a revamped WPF port utilizing a completely new backend.

So why revisit the project? The biggest reason: never being 100% satisfied. Because my skill set is constantly expanding, I find myself always wanting to go back and make use of a new technology even if the end user sees no benefit. It's the principle - never let your code rot.

    So what is Version 2 going to entail? Or better put, what are some of the issues in the jcBENCH 1.x codebase?

    Issues in the 1.x Codebase

    Issue 1

    As it stands today all of the ports have different code bases. In IRIX's case this was a necessity since Mono hasn't been ported to IRIX (yet). With the advent of PCL (Portable Class Libraries) I can now keep one code base for all but the IRIX port, leaving only the UI and other platform specific APIs in the respective ports.

    Issue 2

On quad core machines or faster, the existing benchmark completes in a fraction of the time. This poses two big problems: it doesn't represent a real test of performance over a multi-second span (meaning all of the CPUs may not have enough time to be tasked before completion), and on the flip side, on devices that are much slower (like a cell phone) it could take several minutes. Solution? Implement a 16 second timed benchmark and then calculate the performance based on how many objects were processed during that time.
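A rough sketch of what that timed loop could look like (the per-object workload and the helper names are placeholders, not the final jcBENCH 2 code):

using System;
using System.Diagnostics;

public static class TimedBenchmark {
    // Processes objects for the given duration and returns how many were completed
    public static long Run(TimeSpan duration, Func<int, double> processObject) {
        long objectsProcessed = 0;
        var stopwatch = Stopwatch.StartNew();

        while (stopwatch.Elapsed < duration) {
            processObject((int)(objectsProcessed % 1000)); // stand-in workload
            objectsProcessed++;
        }

        return objectsProcessed;
    }
}

Running it with TimeSpan.FromSeconds(16) on each device then yields an objects-processed count that is comparable whether the hardware finishes millions of objects or only a handful.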

    Issue 3

When testing multi-processor performance, it was cumbersome to test all of the various scenarios. For instance, with an 8 core CPU as I have in my AMD FX-8350, I had to select 1 CPU, run the benchmark and record the result, then select 2 CPUs and repeat, so on and so forth. This took a long time, when in reality it would make sense to either run the benchmark using all cores by default and then, via an advanced option, allow the end user to select a specific test, or have it run the entire test suite automatically.
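Automating that sweep is straightforward once the benchmark itself accepts a degree of parallelism; a sketch along these lines (the benchmark call itself is a stand-in delegate):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class CoreSweep {
    // Runs the supplied benchmark once per core count from 1 up to the machine's total
    public static IDictionary<int, long> RunAll(Func<ParallelOptions, long> runBenchmark) {
        var results = new Dictionary<int, long>();

        for (int cores = 1; cores <= Environment.ProcessorCount; cores++) {
            var options = new ParallelOptions { MaxDegreeOfParallelism = cores };
            results[cores] = runBenchmark(options);
        }

        return results;
    }
}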

    Issue 4

No easy way to share the results exists across the board in the current version. In recent versions I added a centralized result database and charting so no matter the device you could see how your device compared, but there was no easy way to get a screenshot of the benchmark, send the results via email or post them on a social network. Where is the fun in a benchmark if you can't brag about it easily? In Version 2 I plan to focus on this aspect.

    Proposed Features for Version 2

    1. Rewritten from the ground up utilizing the latest approaches to cross-platform development I have learned since jcBENCH's original release 1/2012. This includes the extensive use of MVVMCross and Portable Class Libraries to cut down on the code duplication among ports.

    2. Sharing functionality via Email and Social Networking (Twitter and Facebook) will be provided, in addition a new Bing Map will visually reflect the top performing devices across the globe (if the result is submitted with location access allowed)

    3. Using WebAPI (JSON) instead of WCF XML backend for result submission and retrieval. For this app since there is no backend processing between servers, WebAPI makes a lot more sense.

4. New time-based benchmark as opposed to measuring the time to process X amount of tasks

    5. Offer an "advanced" mode to allow the entire test suite to be performed or individual tests (by default it will now use all of the cores available)

    6. At launch only a Windows Store app will be available, but Windows Phone 7/8 and Mac OS X ports will be released later this month.

    Future Features

    Ability to benchmark GPUs is something I have been attempting to get working across platforms and for those that remember I had a special Alpha release last Fall using OpenCL. Once the bugs and features for Version 2 are completed I will shift focus to making this feature a reality.

Implement all of this functionality in an upgraded IRIX port and finally create a Linux port (using Mono). One of the biggest hurdles I was having with keeping the IRIX version up to date was the SOAP C++ libraries not being anywhere near the ease of use a Visual Studio/C# environment offers. By switching over to HTTP/JSON I'm hoping to be able to parse and submit data much more easily.

    Next Steps

Given that the project is an app in 7 days, today marks the first day of development. As with any project, the first step was getting a basic feature set down, as mentioned above; the next is to create a project timeline based on that functional specification.

    As with my WordPress to MVC Project in April, this will entail daily blog posts with my progress.

    Day 1 (7/13/2013) - Create the new SQL Server Database Schema and WebAPI Backend
    Day 2 (7/14/2013) - Create all of the base UI Elements of the Windows Store App
    Day 3 (7/15/2013) - Create the PCL that contains the new Benchmark Algorithms
    Day 4 (7/16/2013) - Integrate Bing Maps for the location based view
    Day 5 (7/17/2013) - Add Social Networking and Email Sharing Options
    Day 6 (7/18/2013) - Integrate with fellow #dev777 projects
    Day 7 (7/19/2013) - Bug fixing, polish and Windows Store Submission

So stay tuned for an update later today with the successes and implementation of the new SQL Server Database Schema and WebAPI backend.
This morning I will be presenting at the Maryland Code Camp, with the topic of Developing Once and Deploying to Many, specifically talking about practices and patterns I've found helpful for creating rich mobile applications efficiently over the last 3 years I've been actively developing for the mobile space. WCF, WPF, PCL, MonoDroid, Azure Mobile Services and Windows Phone 8 will all be discussed.

For the PowerPoint 2013 presentation, all of the code mentioned during the session, the SQL, the PSD files and the external libraries used, click here to download the zip file.

In addition, during the session I will be making reference to an app I wrote earlier this year, jcLOG-IT, specifically the Azure Mobile Service and Windows Live integration elements.

    The code block mentioned for Authentication:
public async Task<bool> AttemptLogin(MobileServiceAuthenticationProvider authType) {
    try {
        if (authType == MobileServiceAuthenticationProvider.MicrosoftAccount) {
            if (!String.IsNullOrEmpty(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken))) {
                App.CurrentUser = await App.MobileService.LoginAsync(Settings.GetSetting<string>(Settings.SETTINGS_OPTIONS.LiveConnectToken));
            } else {
                var liveIdClient = new LiveAuthClient(Common.Constants.APP_AUTHKEY_LIVECONNECT);

                while (_session == null) {
                    var result = await liveIdClient.LoginAsync(new[] { /* Live Connect scope(s) not shown in the original snippet */ });

                    if (result.Status != LiveConnectSessionStatus.Connected) {
                        continue;
                    }

                    _session = result.Session;

                    App.CurrentUser = await App.MobileService.LoginAsync(result.Session.AuthenticationToken);

                    Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, result.Session.AuthenticationToken);
                }
            }
        }

        Settings.AddSetting(Settings.SETTINGS_OPTIONS.AuthType, authType.ToString());
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.IsFirstRun, false.ToString());

        return true;
    } catch (Exception ex) {
        Settings.AddSetting(Settings.SETTINGS_OPTIONS.LiveConnectToken, String.Empty);

        return false;
    }
}
    The Settings class:
public class Settings {
    public enum SETTINGS_OPTIONS { IsFirstRun, LiveConnectToken, AuthType, LocalPassword, EnableLocation }

    public static void CheckSettings() {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(SETTINGS_OPTIONS.IsFirstRun.ToString())) {
            WriteDefaults();
        }
    }

    public static void AddSetting(SETTINGS_OPTIONS optionName, object value) {
        AddSetting(optionName.ToString(), value);
    }

    public static void AddSetting(string name, object value) {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        if (!settings.Contains(name)) {
            settings.Add(name, value);
        } else {
            settings[name] = value;
        }

        settings.Save();
    }

    public static T GetSetting<T>(SETTINGS_OPTIONS optionName) {
        return GetSetting<T>(optionName.ToString());
    }

    public static T GetSetting<T>(string name) {
        if (IsolatedStorageSettings.ApplicationSettings.Contains(name)) {
            if (typeof(T) == typeof(MobileServiceAuthenticationProvider)) {
                return (T)Enum.Parse(typeof(MobileServiceAuthenticationProvider), IsolatedStorageSettings.ApplicationSettings[name].ToString());
            }

            return (T)Convert.ChangeType(IsolatedStorageSettings.ApplicationSettings[name], typeof(T));
        }

        return default(T);
    }

    public static void WriteDefaults() {
        AddSetting(SETTINGS_OPTIONS.IsFirstRun, false);
        AddSetting(SETTINGS_OPTIONS.EnableLocation, false);
        AddSetting(SETTINGS_OPTIONS.LocalPassword, String.Empty);
        AddSetting(SETTINGS_OPTIONS.LiveConnectToken, String.Empty);
        AddSetting(SETTINGS_OPTIONS.AuthType, MobileServiceAuthenticationProvider.MicrosoftAccount);
    }
}
I had an interesting request at work last week: deleting several million rows from the two main SQL Server 2012 databases. For years now nothing had been deleted, only soft-deleted with an Active flag. In general, any time I needed to delete rows it usually meant I was doing a test of a migration, so I would simply TRUNCATE the tables and call it a day - thus never utilizing C# and thereby Entity Framework. So what are your options?

    Traditional Approach

    You could go down the "traditional" approach:
using (var eFactory = new SomeEntities()) {
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL Table etc...

    foreach (var someObject in eFactory.SomeObjects.Where(a => idList.Contains(a.ID)).ToList()) {
        eFactory.DeleteObject(someObject);
        eFactory.SaveChanges();
    }
}
    This definitely works, but if you have an inordinate amount of rows I would highly suggest not doing it this way as the memory requirements would be astronomical since you're loading all of the SomeObject entities.

    Considerably better Approach

using (var eFactory = new SomeEntities()) {
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL Table etc...

    string idStr = String.Join(",", idList);

    eFactory.Database.ExecuteSqlCommand("DELETE FROM dbo.SomeObjects WHERE ID IN ({0})", idStr);
}
This approach creates a comma separated string and then executes the SQL command. This is considerably better than the approach above in that it doesn't load all of those entity objects into memory and doesn't loop through each element. However, depending on the size of idList you could get the following error:

    Entity Framework 5 - Rare Event

    An even better Approach

What I ended up doing to solve the problems above was to split the list into chunks and then process the elements on multiple threads.
private static List<string> getList(List<int> original, int elementSize = 500) {
    var elementCollection = new List<string>();

    // If there are no elements don't bother processing
    if (original.Count == 0) {
        return elementCollection;
    }

    // If the size of the collection fits into a single batch, return it as one string
    if (original.Count <= elementSize) {
        elementCollection.Add(String.Join(",", original));

        return elementCollection;
    }

    var elementsToBeProcessed = original.Count;

    while (elementsToBeProcessed != 0) {
        var rangeSize = elementsToBeProcessed < elementSize ? elementsToBeProcessed : elementSize;

        elementCollection.Add(String.Join(",", original.GetRange(original.Count - elementsToBeProcessed, rangeSize)));

        elementsToBeProcessed -= rangeSize;
    }

    return elementCollection;
}
private static void removeElements(IEnumerable<string> elements, string tableName, string columnName, DbContext objContext, bool debug = false) {
    var startDate = DateTime.Now;

    if (debug) {
        Console.WriteLine("Removing Rows from Table {0} @ {1}", tableName, startDate.ToString(CultureInfo.InvariantCulture));
    }

    try {
        Parallel.ForEach(elements, elementStr => objContext.Database.ExecuteSqlCommand(String.Format("DELETE FROM dbo.{0} WHERE {1} IN ({2})", tableName, columnName, elementStr)));
    } catch (Exception ex) {
        Console.WriteLine(ex);
    }

    if (!debug) {
        return;
    }

    var endDate = DateTime.Now;

    Console.WriteLine("Removed Rows from Table {0} in {1} seconds", tableName, endDate.Subtract(startDate).TotalSeconds);
}
    To utilize these methods you can do something like this:
using (var eFactory = new SomeEntities()) {
    var idList = new List<int>(); // assume idList is populated here from a file, another SQL Table etc...

    var idStr = getList(idList);

    removeElements(idStr, "SomeObjects", "ID", eFactory);
}
    Note you could simplify this down to:
using (var eFactory = new SomeEntities()) {
    removeElements(getList(/* your Int Collection */), "SomeObjects", "ID", eFactory);
}
Hopefully that helps someone else out there who runs into issues with deleting massive amounts of rows. Note I did try to utilize the Entity Framework Extended NuGet library, but ran into errors when trying to delete rows.
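For reference, the EntityFramework.Extended batch delete is typically written along the lines of the sketch below, shown here against the same hypothetical SomeEntities model used above; this is only how the library is normally invoked, not a claim that it works in this scenario (it didn't for me):

using System.Collections.Generic;
using System.Linq;
using EntityFramework.Extensions; // from the EntityFramework.Extended NuGet package

class BatchDeleteExample
{
    static void Main()
    {
        var idList = new List<int>(); // populated elsewhere, as in the samples above

        using (var eFactory = new SomeEntities())
        {
            // Intended to issue a single DELETE statement without loading the entities into memory
            eFactory.SomeObjects.Where(a => idList.Contains(a.ID)).Delete();
        }
    }
}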
    In today's post I will be diving into adding Search Functionality, Custom Error Pages and MVC Optimizations. Links to previous parts: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6 and Part 7.

    Search Functionality

    A few common approaches to adding search functionality to a Web Application:

    Web App Search Approaches

    1. Pull down all of the data and then search on it using a for loop or LINQ - An approach I loathe because to me this is a waste of resources, especially if the content base you're pulling from is of a considerable amount. Just ask yourself, if you were at a library and you knew the topic you were looking for, would you pull out all of the books in the entire library and then filter down or simply find the topic's section and get the handful of books?
    2. Implement a Stored Procedure with a query argument and return the results - An approach I have used over the years, it is easy to implement and for me it leaves the querying where it should be - in the database.
    3. Creating a Search Class with a dynamic interface and customizable properties to search and a Stored Procedure backend like in Approach 2 - An approach I will be going down at a later date for site wide search of a very large/complex WebForms app.
For the scope of this project I am going with Option #2, since the content I am searching only spans the Posts objects. At a later date, in Phase 2, I will probably expand this to fit Option #3, since I will want to be able to search on various objects and return them all in a meaningful way, fast and efficiently. So let's dive into Option #2. Because virtually the same block of SQL is utilized in many Stored Procedures at this point, I created a SQL View:
[sql]
CREATE VIEW dbo.ActivePosts AS
SELECT dbo.Posts.ID,
       dbo.Posts.Created,
       dbo.Posts.Title,
       dbo.Posts.Body,
       dbo.Users.Username,
       dbo.Posts.URLSafename,
       dbo.getTagsByPostFUNC(dbo.Posts.ID) AS 'TagList',
       dbo.getSafeTagsByPostFUNC(dbo.Posts.ID) AS 'SafeTagList',
       (SELECT COUNT(*) FROM dbo.PostComments WHERE dbo.PostComments.PostID = dbo.Posts.ID AND dbo.PostComments.Active = 1) AS 'NumComments'
FROM dbo.Posts
INNER JOIN dbo.Users ON dbo.Users.ID = dbo.Posts.PostedByUserID
WHERE dbo.Posts.Active = 1
[/sql]
And then created a new Stored Procedure with the ability to search content, referencing the new SQL View:
[sql]
CREATE PROCEDURE [dbo].[getSearchPostListingSP]
(@searchQueryString VARCHAR(MAX))
AS
SELECT dbo.ActivePosts.*
FROM dbo.ActivePosts
WHERE (dbo.ActivePosts.Title LIKE '%' + @searchQueryString + '%' OR dbo.ActivePosts.Body LIKE '%' + @searchQueryString + '%')
ORDER BY dbo.ActivePosts.Created DESC
[/sql]
You may be asking why not simply add the ActivePosts SQL View to your Entity Model and do something like this in your C# code:
public List<ActivePosts> GetSearchPostResults(string searchQueryString) {
    using (var eFactory = new bbxp_jarredcapellmanEntities()) {
        return eFactory.ActivePosts.Where(a => a.Title.Contains(searchQueryString) || a.Body.Contains(searchQueryString)).ToList();
    }
}
That's perfectly valid and I am not against doing it that way, but I feel like code like that should be done at the database level, thus the Stored Procedure. Granted, Stored Procedures do add a level of maintenance over doing it via code. For one, anytime you update/add/remove columns you have to update the Complex Type in your Entity Model inside of Visual Studio and then update your C# code that makes reference to that Stored Procedure. For me it is worth it, but to each their own. I have not made performance comparisons on this particular scenario, however last summer I did do some aggregate performance comparisons in my LINQ vs PLINQ vs Stored Procedure Row Count Performance in C# post. You can't do a 1 to 1 comparison between varchar column searching and aggregate function performance, but my point, or better put, the lesson I want to convey, is to definitely keep an open mind and explore all possible routes. You never want to find yourself in a situation of stagnation in your software development career, simply doing something because you know it works. Things change almost daily it seems - it's near impossible as a polyglot programmer to keep up with every change, but when a new project comes around at work do your homework, even if it means sacrificing your nights and weekends. The benefits become apparent instantly, and for me the most rewarding aspect is knowing that when you laid down that first character in your code you did so with the knowledge that what you were doing was the best you could provide to your employer and/or clients. Back to implementing the Search functionality, I added the following function to my PostFactory class:
public List<Objects.Post> GetSearchResults(string searchQueryString) {
    using (var eFactory = new bbxp_jarredcapellmanEntities()) {
        return eFactory.getSearchPostListingSP(searchQueryString).Select(a => new Objects.Post(a.ID, a.Created, a.Title, a.Body, a.TagList, a.SafeTagList, a.NumComments.Value, a.URLSafename)).ToList();
    }
}
You might see the similarity to other functions if you've been following this series. The function is then exposed as an Operation Contract inside the WCF Service:
[OperationContract]
List<lib.Objects.Post> GetPostSearchResults(string searchQueryString);

public List<Post> GetPostSearchResults(string searchQueryString) {
    using (var pFactory = new PostFactory()) {
        return pFactory.GetSearchResults(searchQueryString);
    }
}
    Back in the MVC App I created a new route to handle searching:
    routes.MapRoute("Search", "Search/{
    ", new {
         controller = "Home", action = "Search" }
    ); ]]>
So now I can enter a search term directly via the URL; for example, a request ending in /Search/mvc would search all Posts that contained mvc in the title or body. Then in my Controller class:
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Search(string searchQueryString) {
    ViewBag.Title = searchQueryString + " << Search Results << " + Common.Constants.SITE_NAME;

    var model = new Models.HomeModel(baseModel);

    using (var ws = new WCFServiceClient()) {
        model.Posts = ws.GetPostSearchResults(searchQueryString);
    }

    ViewBag.Model = model;

    return View("Index", model);
}
    In my partial view:
    <div class="Widget"> <div class="Title"> <h3>Search Post History</h3> </div> <div class="Content"> @using (Html.BeginForm("Search", "Home", new {
         searchQueryString = "searchQueryString"}
    , FormMethod.Post)) {
         <input type="text" id="searchQueryString" name="searchQueryString" class="k-textbox" required placeholder="enter query here" /> <button class="k-button" type="submit">Search >></button> }
    </div> </div> ]]>
When all was done: [caption]Search box in MVC App[/caption] Now you might be asking, what if there are no results? You get an empty view: [caption]Empty Result - Wrong way to handle it[/caption] This leads me to my next topic:

    Custom Error Pages

We have all been on sites where we go some place we either don't have access to, that doesn't exist anymore or that we misspelled. WordPress had a fairly good handler for this scenario: [caption]WordPress Content not found Handler[/caption] As seen above, when no results are found we want to let the user know, but also create a generic handler for other error events. To get started, let's add a Route to the Global.asax.cs:
    routes.MapRoute("Error", "Error", new {
         controller = "Error", action = "Index" }
    ); ]]>
    This will map to /Error with a tie to an ErrorController and a Views/Error/Index.cshtml. And my ErrorController:
public class ErrorController : BaseController {
    public ActionResult Index() {
        var model = new Models.ErrorModel(baseModel);

        return View(model);
    }
}
    And my View:
@model bbxp.mvc.Models.ErrorModel

<div class="errorPage">
    <h2>Not Found</h2>
    <div class="content">
        Sorry, but you are looking for something that isn't here.
    </div>
</div>
Now you may be asking, why isn't the actual error passed into the Controller to be displayed? Personally I feel a generic error message to the end user, while logging/reporting the errors to administrators and maintainers of the site, is the best approach. In addition, a generic message protects you somewhat from exposing sensitive information to a potential hacker, such as "No users match the query" or, worse, database connection information. That being said, I added a wrapper in my BaseController:
public ActionResult ThrowError(string exceptionString) {
    // TODO: Log errors either to the database or email powers that be

    return RedirectToAction("Index", "Error");
}
Down the road this wrapper will record the error to the database and then email users with alerts turned on. Since I haven't started on the "admin" section I am leaving it as is for the time being. The reason the argument is there now is so that when that functionality does arrive, all of my existing front end code is already good to go as far as logging is concerned. Now that I've got my base function implemented, let's revisit the Search function mentioned earlier:
public ActionResult Search(string searchQueryString) {
    ViewBag.Title = searchQueryString + " << Search Results << " + Common.Constants.SITE_NAME;

    var model = new Models.HomeModel(baseModel);

    using (var ws = new WCFServiceClient()) {
        model.Posts = ws.GetPostSearchResults(searchQueryString);
    }

    if (model.Posts.Count == 0) {
        return ThrowError(searchQueryString + " returned 0 results");
    }

    ViewBag.Model = model;

    return View("Index", model);
}
Note the if conditional and the call to ThrowError - no other work is necessary. As implemented: [caption]Not Found Error Handler Page in the MVC App[/caption] Where does this leave us? The final phase in development: Optimization.


You might be wondering why I left optimization for last. I feel as though premature optimization leads not only to a longer debugging period when nailing down initial functionality, but also, if you do things right as you go, your optimizations end up being just tweaking. I've done both approaches in my career and have definitely had more success doing it last. If you've had the opposite experience please comment below, I would very much like to hear your story. So where do I want to begin?

    YSlow and MVC Bundling

For me it makes sense to do the more trivial checks that provide the most bang for the buck. A key tool to assist in this manner is YSlow; I personally use the Firefox add-on version available here. As with any optimization, you need to do a baseline check to give yourself a basis from which to improve. In this case I am going from a fully featured PHP based CMS, WordPress, to a custom MVC4 Web App, so I was very intrigued by the initial results below. [caption]WordPress YSlow Ratings[/caption] [caption]Custom MVC 4 App YSlow Ratings[/caption] Scoring only 1 point less than the battle tested WordPress version, with no optimizations, is pretty neat I feel. Let's now look into what YSlow marked the MVC 4 App down on. In the first line item, it found that the site is using 13 JavaScript files and 8 CSS files. One of the neat MVC features is the idea of bundling multiple CSS and JavaScript files into one. This not only cuts down on the number of HTTP requests, but also speeds up the initial page load, after which most of your content is cached for future page requests. If you recall from an earlier post, our _Layout.cshtml included quite a few CSS and JavaScript files:
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.common.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.default.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/kendo/2013.1.319/jquery.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo/2013.1.319/kendo.all.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo/2013.1.319/kendo.aspnetmvc.min.js")"></script> <script src="@Url.Content("~/Scripts/kendo.modernizr.custom.js")"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shCore.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/syntaxhighlighter/shCore.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/syntaxhighlighter/shThemeRDark.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushCSharp.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushPhp.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushXml.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushCpp.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushBash.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/syntaxhighlighter/shBrushSql.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/lightbox/jquery-1.7.2.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/lightbox/lightbox.js")" type="text/javascript"></script> <link href="@Url.Content("~/Content/lightbox/lightbox.css")" rel="stylesheet" type="text/css" /> ]]>
Let's dive into bundling all of our JavaScript files. First off, create a new class - I called it BundleConfig - and inside it add the following static function:
public static void RegisterBundles(BundleCollection bundles) {
    // JavaScript Files
    bundles.Add(new ScriptBundle("~/Bundles/kendoBundle")
        .Include("~/Scripts/kendo/2013.1.319/jquery.min.js")
        .Include("~/Scripts/kendo/2013.1.319/kendo.all.min.js")
        .Include("~/Scripts/kendo/2013.1.319/kendo.aspnetmvc.min.js")
        .Include("~/Scripts/kendo.modernizr.custom.js")
    );

    bundles.Add(new ScriptBundle("~/Bundles/syntaxBundle")
        .Include("~/Scripts/syntaxhighlighter/shCore.js")
        .Include("~/Scripts/syntaxhighlighter/shBrushCSharp.js")
        .Include("~/Scripts/syntaxhighlighter/shBrushPhp.js")
        .Include("~/Scripts/syntaxhighlighter/shBrushXml.js")
        .Include("~/Scripts/syntaxhighlighter/shBrushCpp.js")
        .Include("~/Scripts/syntaxhighlighter/shBrushBash.js")
        .Include("~/Scripts/syntaxhighlighter/shBrushSql.js")
    );

    bundles.Add(new ScriptBundle("~/Bundles/lightboxBundle")
        .Include("~/Scripts/lightbox/jquery-1.7.2.min.js")
        .Include("~/Scripts/lightbox/lightbox.js")
    );
}
    Then in your _Layout.cshtml replace all of the original JavaScript tags with the following 4 lines:
    @Scripts.Render("~/Bundles/kendoBundle") @Scripts.Render("~/Bundles/syntaxBundle") @Scripts.Render("~/Bundles/lightboxBundle") ]]>
    So afterwards that block of code should look like:
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.common.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.default.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/syntaxhighlighter/shCore.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/syntaxhighlighter/shThemeRDark.css")" rel="stylesheet" type="text/css" /> <link href="@Url.Content("~/Content/lightbox/lightbox.css")" rel="stylesheet" type="text/css" /> @Scripts.Render("~/Bundles/kendoBundle") @Scripts.Render("~/Bundles/syntaxBundle") @Scripts.Render("~/Bundles/lightboxBundle") ]]>
    Finally go to your Global.asax.cs file and inside your Application_Start function add the following line:
BundleConfig.RegisterBundles(BundleTable.Bundles);
    So in the end your Application_Start function should look like:
protected void Application_Start() {
    AreaRegistration.RegisterAllAreas();

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);

    BundleConfig.RegisterBundles(BundleTable.Bundles);
}
Now after re-running the YSlow test: [caption]YSlow Ratings after Bundling of JavaScript Files in the MVC App[/caption] Much improved - now we're rated better than WordPress itself. Now onto the bundling of the CSS styles. Add the following below the previously added ScriptBundles in your BundleConfig class:
// CSS Stylesheets
bundles.Add(new StyleBundle("~/Bundles/stylesheetBundle")
    .Include("~/Content/Site.css")
    .Include("~/Content/lightbox/lightbox.css")
    .Include("~/Content/syntaxhighlighter/shCore.css")
    .Include("~/Content/syntaxhighlighter/shThemeRDark.css")
    .Include("~/Content/kendo/2013.1.319/kendo.common.min.css")
    .Include("~/Content/kendo/2013.1.319/kendo.dataviz.min.css")
    .Include("~/Content/kendo/2013.1.319/kendo.default.min.css")
    .Include("~/Content/kendo/2013.1.319/kendo.dataviz.default.min.css")
);
    And then in your _Layout.cshtml add the following in place of all of your CSS includes:
@Styles.Render("~/Bundles/stylesheetBundle")
    So when you're done, that whole block should look like the following:
@Styles.Render("~/Bundles/stylesheetBundle")
@Scripts.Render("~/Bundles/kendoBundle")
@Scripts.Render("~/Bundles/syntaxBundle")
@Scripts.Render("~/Bundles/lightboxBundle")
One thing I should note: if your bundling isn't working, check your Routes. Because of my Routes, after deployment (and after making sure compilation debug was set to false in the web.config) I was getting 404 errors on my JavaScript and CSS Bundles. My solution was to use the IgnoreRoute method in my Global.asax.cs file:
routes.IgnoreRoute("Bundles/*");
    For completeness here is my complete RegisterRoutes:
    "); routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{
    ", defaults: new {
         id = RouteParameter.Optional }
    ); routes.IgnoreRoute("Bundles/*"); routes.MapRoute("Error", "Error/", new {
         controller = "Error", action = "Index" }
    ); routes.MapRoute("Search", "Search/{
    ", new {
         controller = "Home", action = "Search" }
    ); routes.MapRoute("Feed", "Feed", new {
        controller = "Home", action = "Feed"}
    ); routes.MapRoute("Tags", "tag/{
    ", new {
        controller = "Home", action = "Tags"}
    ); routes.MapRoute("PostsRoute", "{
    ", new {
         controller = "Home", action = "Posts" }
    , new {
         year = @"\d+" }
    ); routes.MapRoute("ContentPageRoute", "{
    ", new {
        controller = "Home", action = "ContentPage"}
    ); routes.MapRoute("PostRoute", "{
    ", new {
         controller = "Home", action = "SinglePost" }
    , new {
         year = @"\d+", month = @"\d+", day = @"\d+" }
    ); routes.MapRoute("Default", "{
    ", new {
         controller = "Home", action = "Index" }
    ); ]]>
    Afterwards everything was set properly and if you check your source code you'll notice how MVC generates the HTML:
    <link href="/Bundles/stylesheetBundle?v=l3WYXmrN_hnNspLLaGDUm95yFLXPFiLx613TTF4zSKY1" rel="stylesheet"/> <script src="/Bundles/kendoBundle?v=-KrP5sDXLpezNwcL3Evn9ASyJPShvE5al3knHAy2MOs1"></script> <script src="/Bundles/syntaxBundle?v=NQ1oIC63jgzh75C-QCK5d0B22diL-20L4v96HctNaPo1"></script> <script src="/Bundles/lightboxBundle?v=lOBITxhp8sGs5ExYzV1hgOS1oN3p1VUnMKCjnAbhO6Y1"></script> ]]>
After re-running YSlow: [caption]YSlow after all bundling in MVC[/caption] Now we received a score of 96. What's next? Caching.

    MVC Caching

Now that we've reduced the amount of data being pushed out to the client and optimized the number of HTTP requests, let's switch gears to reducing the load on the server and enhancing the performance of the site. Without diving into all of the intricacies of caching, I am going to turn on server side caching, specifically Output Caching. At a later date I will dive into other approaches, including the new HTML5 client side caching that I recently dove into. That being said, turning on Output Caching in your MVC application is really easy - simply put the OutputCache attribute above your ActionResults like so:
[OutputCache(Duration = 3600, VaryByParam = "*")]
public ActionResult SinglePost(int year, int month, int day, string postname) {
    -----
}
In this example, the ActionResult will be cached for one hour (3600 seconds) and, by setting VaryByParam to *, each combination of arguments passed into the function is cached, versus caching one argument combination and serving that single result for everyone. I've seen developers simply turn on caching without thinking about dynamic content - suffice it to say, think about what can be cached and what can't. Common items that don't change often, like your header or sidebar, can be cached without much thought, but think about User/Role specific content and how bad it would be for a "Guest" user to see content as an Admin because an Admin had accessed the page within the cache window before the Guest user did.
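One way to handle that Guest/Admin scenario without giving up Output Caching entirely is to vary the cache by a custom string. A minimal sketch, assuming a standard Roles setup and an arbitrary "role" key name (this is not how the bbxp site is configured, just an illustration):

using System.Web;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Cache one copy per role rather than one copy for everyone
    [OutputCache(Duration = 3600, VaryByParam = "*", VaryByCustom = "role")]
    public ActionResult SinglePost(int year, int month, int day, string postname)
    {
        return View();
    }
}

public class MvcApplication : HttpApplication
{
    // Tells Output Caching what the custom "role" key means
    public override string GetVaryByCustomString(HttpContext context, string custom)
    {
        if (custom == "role")
        {
            return context.User != null && context.User.IsInRole("Admin") ? "Admin" : "Guest";
        }

        return base.GetVaryByCustomString(context, custom);
    }
}

With that in place an Admin and a Guest hitting the same URL within the cache window each get their own cached copy.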


In this post I went through the last three big items left in my migration from WordPress to MVC: Search Handling, Custom Error Pages and Caching. That being said, I have a few "polish" items to accomplish before switching the site over to all of the new code, namely additional testing and adding a basic admin section. After those items I will consider Phase 1 completed and go back to my Windows Phone projects. Stay tuned for Post 9 tomorrow night with the polish items.
Can't believe it's been a week to the day since I began this project, but I am glad at the amount of progress I have made thus far. Tonight I will dive into adding a WCF Service to act as a layer between the logic and data layers done in previous posts (Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6) and adding RSS support to the site.

    Integrating a WCF Service

First off, for those that aren't familiar, WCF (Windows Communication Foundation) is an extremely powerful Web Service technology created by Microsoft. I first dove into WCF in April 2010 when diving into Windows Phone development, as there was no support for the "classic" ASMX Web Services. Since then I have used WCF Services as the layer for all ASP.NET WebForms, ASP.NET MVC, native mobile apps and other WCF Services at work. I should note, WCF to WCF communication is done at the binary level, meaning it doesn't send XML between the services - something I found extremely enlightening that Microsoft implemented. At its most basic level a WCF Service is comprised of two components: the Service Interface Definition file and the actual implementation. In the case of the migration, I created my interface as follows:
[ServiceContract]
public interface IWCFService {
    [OperationContract]
    lib.Objects.Post GetSinglePost(int year, int month, int day, string postname);

    [OperationContract]
    List<lib.Objects.Comment> GetCommentsFromPost(int postID);

    [OperationContract(IsOneWay = true)]
    void AddComment(string PersonName, string EmailAddress, string Body, int PostID);

    [OperationContract]
    lib.Objects.Content GetContent(string pageName);

    [OperationContract]
    List<lib.Objects.Post> GetPosts(DateTime startDate, DateTime endDate);

    [OperationContract]
    List<lib.Objects.Post> GetPostsByTags(string tagName);

    [OperationContract]
    List<lib.Objects.ArchiveItem> GetArchiveList();

    [OperationContract]
    List<lib.Objects.LinkItem> GetLinkList();

    [OperationContract]
    List<lib.Objects.TagCloudItem> GetTagCloud();

    [OperationContract]
    List<lib.Objects.MenuItem> GetMenuItems();
}
The one thing to note: the IsOneWay attribute on the AddComment function indicates that the client doesn't expect a return value. As noted in last night's post, the end user is not going to want to wait for all the emails to be sent; they simply want their comment to be posted and the Comment Listing refreshed with it. By setting IsOneWay to true, you ensure the client's experience is fast no matter the server side work being done. And the actual implementation:
public class WCFService : IWCFService {
    public Post GetSinglePost(int year, int month, int day, string postname) {
        using (var pFactory = new PostFactory()) {
            var post = pFactory.GetPost(postname)[0];
            post.Comments = pFactory.GetCommentsFromPost(post.ID);

            return post;
        }
    }

    public List<Comment> GetCommentsFromPost(int postID) {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetCommentsFromPost(postID);
        }
    }

    public void AddComment(string PersonName, string EmailAddress, string Body, int PostID) {
        using (var pFactory = new PostFactory()) {
            pFactory.addComment(PostID, PersonName, EmailAddress, Body);
        }
    }

    public Content GetContent(string pageName) {
        using (var cFactory = new ContentFactory()) {
            return cFactory.GetContent(pageName);
        }
    }

    public List<Post> GetPosts(DateTime startDate, DateTime endDate) {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetPosts(startDate, endDate);
        }
    }

    public List<Post> GetPostsByTags(string tagName) {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetPostsByTags(tagName);
        }
    }

    public List<ArchiveItem> GetArchiveList() {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetArchiveList();
        }
    }

    public List<LinkItem> GetLinkList() {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetLinkList();
        }
    }

    public List<TagCloudItem> GetTagCloud() {
        using (var pFactory = new PostFactory()) {
            return pFactory.GetTagCloud();
        }
    }

    public List<MenuItem> GetMenuItems() {
        using (var bFactory = new BaseFactory()) {
            return bFactory.GetMenuItems();
        }
    }
}
One thing you might be asking: isn't this a security risk? If you're not, you should be. Think about it - anyone who has access to your WCF Service could add comments and pull down your data at will. In its current state this isn't a huge deal, since it is only returning data and the AddComment Operation Contract requires a prior approved comment to post, but what about when the administrator functionality is implemented? You definitely don't want to expose your contracts to the outside world with only the parameters needed. So what can you do?
    1. Keep your WCF Service not exposed to the internet - this is problematic in today's world where a mobile presence is almost a necessity. Granted if one were to only create a MVC 4 Mobile Web Application you could keep it behind a firewall. My thought process currently is design and do it right the first time and don't corner yourself into a position where you have to go back and do additional work.
2. Add a username, password or some token to each Operation Contract and then verify the user - this approach works and I've done it that way for public WCF Services. The problem is it becomes a lot of extra work on both the client and server side. Client side you can create a base class with the token or username/password and simply pass it into each contract, and then server side do a similar implementation (see the sketch after this list).
    3. Implement a message level or Forms Membership - This approach requires the most upfront work, but reaps the most benefits as it keeps your Operation Contracts clean and offers an easy path to update at a later date.
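To make option 2 a little more concrete, below is a rough sketch of passing a token into an Operation Contract; the RequestBase and token names are made up for illustration and are not part of the actual bbxp service:

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical base class every request object inherits from
[DataContract]
public class RequestBase
{
    [DataMember]
    public string AuthToken { get; set; }
}

[DataContract]
public class GetPostsRequest : RequestBase
{
    [DataMember]
    public string TagName { get; set; }
}

[ServiceContract]
public interface ISecuredWCFService
{
    [OperationContract]
    string GetPostsByTags(GetPostsRequest request);
}

public class SecuredWCFService : ISecuredWCFService
{
    public string GetPostsByTags(GetPostsRequest request)
    {
        // Every contract has to repeat this kind of check - the extra work mentioned above
        if (!IsValidToken(request.AuthToken))
        {
            throw new FaultException("Invalid or expired token");
        }

        return "...posts for " + request.TagName;
    }

    private static bool IsValidToken(string token)
    {
        // Placeholder - a real implementation would validate against a user/token store
        return !string.IsNullOrEmpty(token);
    }
}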
Going forward I will be implementing the 3rd option and of course I will document the process. Hopefully this helps get developers thinking about security and better approaches to problems. Moving onto the second half of the post: creating an RSS Feed.

    Creating an RSS Feed

After getting my class into my WCF Service, I created a new Stored Procedure in preparation:
[sql]
CREATE PROCEDURE dbo.getRSSFeedListSP AS

SELECT TOP 25
    dbo.Posts.Created,
    dbo.Posts.Title,
    LEFT(CAST(dbo.Posts.Body AS VARCHAR(MAX)), 200) + '...' AS 'Summary',
    dbo.Posts.URLSafename
FROM dbo.Posts
INNER JOIN dbo.Users ON dbo.Users.ID = dbo.Posts.PostedByUserID
WHERE dbo.Posts.Active = 1
ORDER BY dbo.Posts.Created DESC
[/sql]
Basically this returns the most recent 25 posts along with up to the first 200 characters of each post. Afterwards I created a class to translate the Entity Framework Complex Type:
[DataContract]
public class PostFeedItem {
    [DataMember]
    public DateTime Published { get; set; }

    [DataMember]
    public string Title { get; set; }

    [DataMember]
    public string Description { get; set; }

    [DataMember]
    public string URL { get; set; }

    public PostFeedItem(DateTime published, string title, string description, string url) {
        Published = published;
        Title = title;
        Description = description;
        URL = url;
    }
}
    And then I added a new Operation Contract in my WCF Service:
public List<lib.Objects.PostFeedItem> GetFeedList() {
    using (var pFactory = new PostFactory()) {
        return pFactory.GetFeedList();
    }
}
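The PostFactory.GetFeedList implementation itself isn't shown in the post; based on the getRSSFeedListSP Stored Procedure above, a minimal sketch might look like the following (the generated function name and complex type property names are assumptions about the Entity Model):

public List<Objects.PostFeedItem> GetFeedList() {
    using (var eFactory = new bbxp_jarredcapellmanEntities()) {
        // getRSSFeedListSP is assumed to be imported into the Entity Model as a Function Import
        // returning the Created, Title, Summary and URLSafename columns from the Stored Procedure
        return eFactory.getRSSFeedListSP()
                       .Select(a => new Objects.PostFeedItem(a.Created, a.Title, a.Summary, a.URLSafename))
                       .ToList();
    }
}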
Now I am going to leave it up to you which path to implement. At this point you've got all of the backend work done to return the data you need to write your XML file for RSS. There are many ways to proceed, and it really depends on how you want to serve your RSS Feed. Do you want it to regenerate on the fly for each request? Or do you want to write an XML file only when a new Post is published and simply serve the static XML file? From what my research gave me, there are multiple ways to do each of those. For me, I am in favor of doing the work once and writing it out to a file rather than doing all of that work on each request - the latter seems like a waste of server resources. Generate Once:
1. Use the Typed DataSet approach I used in Part 1 - requires very little work and, if you're like me, you like a strongly typed approach.
2. Use the built-in SyndicationFeed class to create your RSS Feed's XML - an approach I hadn't researched prior to this.
3. Use the lower level XmlWriter functionality in .NET to build your RSS Feed's XML - I strongly urge you not to do this, given the 2 approaches above are strongly typed. Non-strongly-typed code leads to spaghetti and a debugging disaster when something goes wrong.
Generate On-The-Fly:
1. Use the previously completed WCF OperationContract to simply return the data and then use something like MVC Contrib to return an XmlResult in your MVC Controller.
    2. Set your MVC View to return XML and simply iterate through all of the Post Items
Those are just some ways to accomplish the goal of creating an RSS Feed for your MVC site. Which is right? I think it is up to you to find what works best for you. That being said, I am going to walk through how to do the first 2 Generate Once options. For both approaches I am going to use IIS's URL Rewrite functionality so that requests to /feed get served the static rss.xml file. For those interested, all it took was the following block in my web.config in the system.webServer section:
[xml]
<rewrite>
  <rules>
    <rule name="RewriteUserFriendlyURL1" stopProcessing="true">
      <match url="^feed$" />
      <conditions>
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Rewrite" url="rss.xml" />
    </rule>
  </rules>
</rewrite>
[/xml]
To learn more about URL Rewrite go to the official site here.

    Option 1 - XSD Approach

Utilizing a similar approach to how I got started with the XSD tool in Part 1, I generated a typed dataset based on the format of an RSS XML file:
[xml]
<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Jarred Capellman</title>
    <link></link>
    <description>Putting 1s and 0s to work since 1995</description>
    <language>en-us</language>
    <item>
      <title>Version 2.0 Up!</title>
      <link></link>
      <description>Yeah in all its glory too, it's far from complete, the forum will be up tonight most likely...</description>
      <pubDate>5/4/2012 12:00:00 AM</pubDate>
    </item>
  </channel>
</rss>
[/xml]
[caption]Generated Typed Data Set for RSS[/caption] Then in my HomeController I wrote a function, to be called when a new Post is entered into the system, that handles writing the XML:
private void writeRSSXML() {
    var dt = new rss();

    using (var ws = new WCFServiceClient()) {
        var feedItems = ws.GetFeedList();

        var channelRow = dt.channel.NewchannelRow();
        channelRow.title = Common.Constants.SITE_NAME;
        channelRow.description = Common.Constants.SITE_DESCRIPTION;
        channelRow.language = Common.Constants.SITE_LANGUAGE;
        channelRow.link = Common.Constants.URL;

        dt.channel.AddchannelRow(channelRow);
        dt.channel.AcceptChanges();

        foreach (var item in feedItems) {
            var itemRow = dt.item.NewitemRow();
            itemRow.SetParentRow(channelRow);
            itemRow.description = item.Description;
            itemRow.link = buildPostURL(item.URL, item.Published);
            itemRow.pubDate = item.Published.ToString(CultureInfo.InvariantCulture);
            itemRow.title = item.Title;

            dt.item.AdditemRow(itemRow);
            dt.item.AcceptChanges();
        }
    }

    var xmlString = dt.GetXml();
    xmlString = xmlString.Replace("<rss>", "<?xml version=\"1.0\" encoding=\"utf-8\"?><rss version=\"2.0\">");

    using (var sw = new StreamWriter(HttpContext.Server.MapPath("~/rss.xml"))) {
        sw.Write(xmlString);
    }
}
    Pretty intuitive code with one exception - I could not find a way to add the version property to the rss element, thus having to use the GetXml() method and then do a more elaborate solution instead of simply calling dt.WriteXml(HttpContext.Server.MapPath("~/rss.xml")). Overall though I find this approach to be very acceptable, but not perfect.

    Option 2 - Syndication Approach

    Not 100% satisfied with the XSD Approach mentioned above I dove into the SyndicationFeed class. Be sure to include using System.ServiceModel.Syndication; at the top of your MVC Controller. I created the same function as above, but this time utilizing the SyndicationFeed class that is built into .NET:
private void writeRSSXML() {
    using (var ws = new WCFServiceClient()) {
        var feed = new SyndicationFeed();
        feed.Title = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_NAME);
        feed.Description = SyndicationContent.CreatePlaintextContent(Common.Constants.SITE_DESCRIPTION);
        feed.Language = Common.Constants.SITE_LANGUAGE;
        feed.Links.Add(new SyndicationLink(new Uri(Common.Constants.URL)));

        var feedItems = new List<SyndicationItem>();

        foreach (var item in ws.GetFeedList()) {
            var sItem = new SyndicationItem();
            sItem.Title = SyndicationContent.CreatePlaintextContent(item.Title);
            sItem.PublishDate = item.Published;
            sItem.Summary = SyndicationContent.CreatePlaintextContent(item.Description);
            sItem.Links.Add(new SyndicationLink(new Uri(buildPostURL(item.URL, item.Published))));

            feedItems.Add(sItem);
        }

        feed.Items = feedItems;

        var rssWriter = XmlWriter.Create(HttpContext.Server.MapPath("~/rss.xml"));
        var rssFeedFormatter = new Rss20FeedFormatter(feed);

        rssFeedFormatter.WriteTo(rssWriter);
        rssWriter.Close();
    }
}
On first glance you might notice very similar code between the two approaches, with one major exception - there are no hacks to make it work as intended. Between the two I am going to go live with the latter approach; not having to worry about the String.Replace ever failing and not having any "magic" strings is worth it. But I will leave the decision to you as to which to implement - or maybe another approach I didn't mention; please comment if you have one, I am always open to using "better" or alternate approaches. Now that the WCF Service is fully integrated and RSS Feeds have been added, as far as the end user view goes there are but a few features remaining: Caching, Searching Content and Error Pages. Stay tuned for Part 8 tomorrow.
Continuing on to Part 5 of my migration from WordPress to MVC 4, I dove into Content, Comments and Routing tonight (other posts: Part 1, Part 2, Part 3 and Part 4). The first thing I did tonight was add a new route to handle pages in the same way WordPress does (YYYY/MM/DD/), for several reasons, though my primary reason is to retain all of the links from the existing WordPress site - something I'd highly suggest you consider doing as well. As noted the other night, your MVC Routing is contained in your Global.asax.cs file. Below is the route I added to accept the same format as WordPress:
    routes.MapRoute("ContentPageRoute", "{
    ", new {
        controller = "Home", action = "ContentPage"}
    ); ]]>
Be sure to put it before the Default Route, otherwise the route above will not work. After I got the Route set up, I went back into my _Layout.cshtml and updated the header links to pull from a SQL Table and return the results to the layout:
    <div class="HeaderMenu"> <nav> <ul id="menu"> <li>@Html.ActionLink("home", "Index", "Home")</li> @{
         foreach (bbxp.lib.Objects.MenuItem menuItem in @Model.Base.MenuItems) {
         <li>@Html.ActionLink(@menuItem.Title, "ContentPage", "Home", new {
        pagename = @menuItem.URLName}
    , null)</li> }
    </ul> </nav> </div> ]]>
    Further down the road I plan to add a UI interface to adjust the menu items, thus the need to make it programmatic from the start. Next on the list was actually importing the content from the export functionality in WordPress. Thankfully the structure is similar to the actual posts so it only took the following code to get them all imported:
    if (item.post_type == "page") {
         var content = eFactory.Contents.Create(); content.Active = true; content.Body = item.encoded; content.Created = DateTime.Parse(item.post_date); content.Modified = DateTime.Parse(item.post_date); content.PostedByUserID = creator.ID; content.Title = item.title; content.URLSafename = item.post_name; eFactory.Contents.Add(content); eFactory.SaveChanges(); continue; }
    With some time to spare, I started work on the Comments piece of the migration. Immediately after the Post creation in the Importer, I added the following to import all of the comments:
foreach (var comment in item.GetcommentRows()) {
    var nComment = eFactory.PostComments.Create();
    nComment.Active = true;
    nComment.Body = comment.comment_content;
    nComment.Created = DateTime.Parse(comment.comment_date);
    nComment.Modified = DateTime.Parse(comment.comment_date);
    nComment.PostID = post.ID;
    nComment.Email = comment.comment_author_email;
    nComment.Name = comment.comment_author;

    eFactory.PostComments.Add(nComment);
    eFactory.SaveChanges();
}
    And now that there were actual comments in the system, I went back into my partial view for the Posts and added the code to display the Comments Link and Total properly:
    <div class="CommentLink"> @{
         object commentLink = @bbxp.mvc.Common.Constants.URL + @Model.PostDate.Year + "/" + @Model.PostDate.Month + "/" + @Model.PostDate.Day + "/" + @Model.URLSafename; <h4><a href="@commentLink">@Model.NumComments @(Model.NumComments == 1 ? "Comment" : "Comments")</a></h4> }
    </div> ]]>
After getting the Comments Count displayed I wanted to do some refactoring on the code up to now. Now that I've got a pretty good understanding of the MVC architecture, I started to create base objects. The commonly pulled in data, for instance (Tag Cloud, Menu Items, Archive List etc.), I now have in a BaseModel that is populated in a BaseController, from which all Controllers inherit. I cut down on a good chunk of code and feel pretty confident that as time goes on I will be able to expand upon this baseline architecture very easily. [caption]Migration Project as of Part 5[/caption] So what is on the plate next? Getting the Comments displayed, the ability to post new comments, and on the back end emailing people when a new comment is entered for a particular post.
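For those curious what the BaseController/BaseModel refactoring described above might look like in practice, here is a rough sketch - the property names and factory calls are illustrative, not lifted verbatim from the project:

using System.Collections.Generic;
using System.Web.Mvc;

// Commonly pulled in data, populated once and shared with every view
public class BaseModel
{
    public List<bbxp.lib.Objects.MenuItem> MenuItems { get; set; }
    public List<bbxp.lib.Objects.TagCloudItem> TagCloud { get; set; }
    public List<bbxp.lib.Objects.ArchiveItem> ArchiveList { get; set; }
}

public abstract class BaseController : Controller
{
    protected BaseModel baseModel;

    protected BaseController()
    {
        // Populate the shared data once so every derived controller gets it for free
        using (var bFactory = new BaseFactory())
        using (var pFactory = new PostFactory())
        {
            baseModel = new BaseModel
            {
                MenuItems = bFactory.GetMenuItems(),
                TagCloud = pFactory.GetTagCloud(),
                ArchiveList = pFactory.GetArchiveList()
            };
        }
    }
}

Each concrete controller then simply inherits from BaseController and hands baseModel to its page model, as the Search and Error actions earlier in the series do.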
Today I began a several month project that includes an extensive Windows Workflow implementation, Mail Merging based around a Word Template (dotx) and extensive integrations with WCF Services, WebForms and a WinForms application. Without going into a ton of detail, this project will most likely be my focus for the next 8-9 months at least. That being said, today I dove back into the OpenXML Mail Merging I began last October. Realizing the scope of the Mail Merging was evolving, I looked into possibly using an external library. Aspose's Word library looked like it fit the bill for what I was planning to achieve, while also allowing me to retire some stop gaps I had put in place years ago at this point. Luckily, with the way I had implemented OpenXML myself, it was as easy as replacing 20-30 lines in my main document generation class with the following:
var docGenerated = new Document(finalDocumentFileName);

// Callback to handle HTML Tables
docGenerated.MailMerge.FieldMergingCallback = new MailMergeFieldHandler();

// Get all of the Mail Merge Fields in the Document
var fieldNames = docGenerated.MailMerge.GetFieldNames();

// Check to make sure there are Mail Merge Fields
if (fieldNames != null && fieldNames.Length > 0) {
    foreach (string fieldName in fieldNames) {
        var fieldValue = fm.Merge(fieldName);

        // Replace System.Environment.NewLine with LineBreak
        if (!String.IsNullOrEmpty(fieldValue)) {
            fieldValue = fieldValue.Replace(System.Environment.NewLine, Aspose.Words.ControlChar.LineBreak);
        }

        // Perform the Mail Merge
        docGenerated.MailMerge.Execute(new string[] { fieldName }, new object[] { fieldValue });
    }
}

// Save the document to a PDF
docGenerated.Save(finalDocumentFileName.Replace(".dotx", ".pdf"));
    And the Callback Class referenced above:
public class MailMergeFieldHandler : IFieldMergingCallback {
    void IFieldMergingCallback.FieldMerging(FieldMergingArgs e) {
        if (e.FieldValue == null) {
            return;
        }

        // Only do the more extensive merging for Field Values that start with a <table> tag
        if (!e.FieldValue.ToString().StartsWith("<table")) {
            e.Text = e.FieldValue.ToString();

            return;
        }

        // Merge the HTML Tables
        var builder = new DocumentBuilder(e.Document);
        builder.MoveToMergeField(e.DocumentFieldName);
        builder.InsertHtml((string)e.FieldValue);

        e.Text = "";
    }

    void IFieldMergingCallback.ImageFieldMerging(ImageFieldMergingArgs args) {
    }
}
More to come with the Aspose library as I explore more features, but so far I am very pleased with the ease of use and the performance of the library.
For those that are unfamiliar, from July 2003 until March 2011 this site ran under a custom PHP Content Management System I named bbXP, an acronym for Bulletin Board eXPerience. This project was one of the most influential projects I ever undertook in my free time and really defined the next ten years of my programming career. So why the sudden desire to go back to something custom in lieu of WordPress? A simple answer: I love to do things from scratch, and it is practice for getting my ASP.NET MVC skills up to my ASP.NET WebForms skills. That answer leads to this blog series, in which I'll be documenting my transition from this WordPress site to an ASP.NET MVC 4 Web Application and eventually a Windows Phone 8 native application when the MVC app is completed. In this blog post I will be reviewing the initial transition from a MySQL/WordPress installation to a baseline MVC application. A lot of people might disagree with my approach here, especially if you're coming at this from a DBA background. I prefer to do my data translation in C# instead of SQL via DTS or some other SQL to SQL approach. If you're looking for that approach, I am sure there are other blog posts detailing that process. For those still reading, Step 1 in my mind in migrating to a new platform is getting the data out of WordPress. Luckily WordPress gives you an easy XML export from the Admin Menu: [caption]Step 1 - Export Posts in WordPress[/caption] Depending on the amount of posts you have, this could be anywhere from a few KB to a couple of MB.

Step 2: Getting a strongly typed interface to the newly exported XML. Luckily there is a very awesome tool that has come with Visual Studio since at least 2008: xsd. The xsd tool can be run from its location in the Visual Studio folder or it can be used anywhere via the Developer Command Prompt: [caption]Step 2: Visual Studio 2012 Developer Prompt[/caption] Navigate to where you downloaded the WordPress XML export and run the following two lines, assuming the name of your file was export.xml:
[bash]
xsd export.xml /d
xsd export.xsd /d
[/bash]
Like so: [caption]Step 3 - Generate Strongly Typed DataSet[/caption] After issuing those two commands you'll have a C# class, among a few other files (as seen in the screenshot below), to include in your C# importer application - a much better situation to be in when dealing with XML files or data migration, I've found. [caption]xsd tool generated Files[/caption]

Step 3 - Creating your new database schema. Now that you've got a clean interface to your XML file, you need to create a new database schema for your .NET application. You could simply recreate it based on the structure of the XML file, but I took the more traditional approach of creating a normalized database schema in SQL Server: [caption]Step 3 - Create your new database[/caption] I'm not going to go over database schema design; I feel it is very subjective and can vary from project to project. For those curious, I do follow a pattern of keeping as little as possible in each table and instead creating tables to reuse among the main tables (normalization). For instance with Tags, rather than tying a Tag to a specific Post, I created a relational table so many Posts can reference the same Tag. I did the same with Categories.

Step 4 - Create your C# importer app. Now that you've got your database schema, it is time to create your Entity Model and write the C# code to actually import the XML file and populate your new SQL tables. Pretty standard code for those that have used Typed DataSets and the Entity Framework - if you have questions please comment below and I'll be happy to help.
NewDataSet ds = new NewDataSet();
ds.ReadXml("export.xml");

using (var eFactory = new bbxp_jarredcapellmanEntities()) {
    foreach (NewDataSet.itemRow item in ds.item.Rows) {
        var creator = eFactory.Users.FirstOrDefault(a => a.Username == item.creator);

        if (creator == null) {
            creator = eFactory.Users.Create();
            creator.Active = true;
            creator.Created = DateTime.Now;
            creator.Modified = DateTime.Now;
            creator.Username = item.creator;

            eFactory.Users.Add(creator);
            eFactory.SaveChanges();
        }

        var post = eFactory.Posts.Create();
        post.Active = true;
        post.Created = DateTime.Parse(item.post_date);
        post.Modified = post.Created;
        post.Body = item.encoded;
        post.Title = item.title;
        post.PostedByUserID = creator.ID;

        eFactory.Posts.Add(post);
        eFactory.SaveChanges();
    }
}
After running the importer and some MVC work later:

[caption id="attachment_1982" align="aligncenter" width="300"]End Result of an Initial Conversion[/caption]

More to come in the coming days, but hopefully this helps get someone pointed in the right direction on moving to their own custom .NET solution. I should note that in this initial migration I am not importing the tags, categories or comments - that will come in the next post.
If you've been following my blog posts over the last couple of years you'll know I have a profound love of using XML files for reading and writing for various purposes. The files are small, and because of things like Typed DataSets in C# you can have clean interfaces to read and write XML files. In Windows Phone, however, you do not have Typed DataSets, so you're stuck utilizing the XmlSerializer to read and write. To make it a little easier, going back to last Thanksgiving I wrote some helper classes in my NuGet library jcWPLIBRARY. The end result: within a few lines you can read and write List collections of class objects of your choosing.

So why continue down this path? Simple answer: I wanted it better. Tonight I embarked on a "Version 2" of this functionality that really makes it easy to keep with your existing Entity Framework knowledge, but provides the functionality of a database on a Windows Phone 8 device, something that currently doesn't exist in the same vein it does in an MVC, WinForm, WebForm or Console app. To make this even more of a learning experience, I plan to blog the entire process, starting with the first part of the project: reading all of the objects from an existing file.

To begin, I am going to utilize the existing XmlHandler class in my existing library. This code has been battle tested and I feel no need to write something from scratch, especially since I am going to leave the existing classes in the library so as not to break anyone's apps or my own. First thought: what does an XmlSerializer file actually look like when written to? Let's assume you have the following, pretty basic class:
public class Test : jcDB.jObject {
    public int ID { get; set; }
    public bool Active { get; set; }
    public string Name { get; set; }
    public DateTime Created { get; set; }
}
The output of the file looks like so:

[xml]
<?xml version="1.0" encoding="utf-8"?>
<ArrayOfTest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Test>
    <ID>1</ID>
    <Active>true</Active>
    <Name>Testing Name</Name>
    <Created>2013-04-03T20:47:09.8491958-04:00</Created>
  </Test>
</ArrayOfTest>
[/xml]

I often forget the XmlSerializer uses the "ArrayOf" prefix on the name of the root object, so when testing with sample data while writing a new Windows Phone 8 app I have to refer back - hopefully that helps someone out. Going back to the task at hand - reading data from an XML file and providing an "Entity Framework"-like experience - that requires a custom LINQ provider and another day of programming. Stay tuned for Part 2, where I go over creating a custom LINQ provider bound to an XML file.
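For completeness, here is a minimal sketch of reading that file back with a plain XmlSerializer - roughly the pattern the jcWPLIBRARY helpers wrap; how you obtain the stream (e.g. IsolatedStorage on Windows Phone) is up to you:

[csharp]
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public static class TestFileReader {
    public static List<Test> Read(Stream xmlStream) {
        // List<Test> maps to the <ArrayOfTest> root element shown above
        var serializer = new XmlSerializer(typeof(List<Test>));
        return (List<Test>)serializer.Deserialize(xmlStream);
    }
}
[/csharp]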
This morning I was adding a document handling page to an ASP.NET WebForms project that uses ASP.NET's Theming functionality. Part of the document handling functionality is to pull in the file and return the bytes to the end user, all without exposing the actual file path (a huge security hole if you do). Since you're writing via Response.Write in these cases, you want your ASPX markup to be empty, otherwise you'll end up with a "Server cannot set content type after HTTP headers have been sent" exception - if you've done WebForms development you know full well what the cause is. For those that don't, the important thing to remember is that the response needs to contain only the file you are returning. That means no HTML markup in your ASPX file. Upon deploying the code to my development server I received this exception:

ASP.NET Theme Exception

Easy solution? Set EnableTheming="false", StylesheetTheme="" and Theme="" in the Page directive of the page you want to have empty markup:
<%@ Page Language="C#" AutoEventWireup="true" EnableTheming="false" StylesheetTheme="" Theme="" CodeBehind="FileExport.aspx.cs" Inherits="SomeWebApp.Common.FileExport" %>
Not something I run into very often as I roll my own theming support, but for those in a legacy situation or working with inherited code as I did in this case, I hope this helps.
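For context, the code-behind side of a page like this generally looks something like the sketch below - this is not the exact code from the project, and the file name/path resolution here is a placeholder:

[csharp]
protected void Page_Load(object sender, EventArgs e) {
    // Resolve the physical path server side so it is never exposed to the client
    string fileName = "SomeDocument.pdf"; // placeholder - normally looked up from an ID
    string filePath = Server.MapPath("~/App_Data/" + fileName);

    Response.Clear();
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
    Response.WriteFile(filePath);
    Response.End();
}
[/csharp]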
Last Fall I wrote my first Windows Service in C# to assist with a Queue Module add-on for a time-intensive server side task. 6-7 months have gone by and I had forgotten a few of the details involved, so I'm writing them up here for myself and others who might run into the same issue. First off, I'll assume you've created your Windows Service and are ready to deploy it to your Staging or Production environments.

The first thing you'll need to do is place the contents of your Release or Debug folder on your server or workstation. Secondly, you'll need to open an elevated command prompt and go to the Framework folder for the version of .NET your Windows Service runs on in order to run the installutil application. In my case I am running a .NET 4.5 Windows Service, so my path is:

[powershell]
C:\Windows\Microsoft.NET\Framework\v4.0.30319
[/powershell]

NOTE: If you do not elevate the command prompt you'll see this exception:

[powershell]
An exception occurred during the Install phase.
System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.
[/powershell]

Once in the Framework folder, simply type the following, assuming your service is located in c:\windows_services\newservice\wservice.exe:

[powershell]
installutil "c:\windows_services\newservice\wservice.exe"
[/powershell]

After running the above command with the path of your service you should receive the following:

[powershell]
The Install phase completed successfully, and the Commit phase is beginning.
See the contents of the log file for the c:\windows_services\newservice\wservice.exe assembly's progress.
The file is located at c:\windows_services\newservice\wservice.InstallLog.
Committing assembly 'c:\windows_services\newservice\wservice.exe'.
Affected parameters are:
   logtoconsole =
   logfile = c:\windows_services\newservice\wservice.InstallLog
   assemblypath = c:\windows_services\newservice\wservice.exe
The Commit phase completed successfully.
The transacted install has completed.

C:\Windows\Microsoft.NET\Framework\v4.0.30319>
[/powershell]

At this point, going to services.msc via Windows Key + R, you should now see your service listed with the option to start it.
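One detail worth remembering: installutil only works if the service executable contains an Installer class. In case you skipped that step when creating the service, a minimal sketch looks like this (the service name is just an example):

[csharp]
using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer {
    public ProjectInstaller() {
        // Run the service under LocalSystem; swap in a dedicated account as needed
        var processInstaller = new ServiceProcessInstaller { Account = ServiceAccount.LocalSystem };

        var serviceInstaller = new ServiceInstaller {
            ServiceName = "WService",              // must match ServiceBase.ServiceName
            StartType = ServiceStartMode.Automatic
        };

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}
[/csharp]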
After attending a Windows Phone 8 Jumpstart in Chevy Chase, MD earlier today, I got asked about tips for developing cross-platform with as much code re-use as possible. Having been working on a Version 2 of a large platform-ubiquitous application since October, I've had some new thoughts since my August 2012 post, Cross-Platform Mobile Development and WCF Architecture Notes. Back then I was focused on using a TPL-enabled WCF Service to be hit by the various platforms (ASP.NET, Windows Phone, Android, iOS etc.). This approach has a couple of problems for a platform that needs to support an ever-growing concurrent client base. The main problem is that there is a single point of failure: if the WCF Service goes down, the entire platform goes down, and it doesn't allow more than one WCF server to share the application's load. The other problem is that while the business logic is hosted in the cloud/a dedicated server with my August 2012 thought process, it doesn't share the actual WCF Service proxies or other common code.

    What is an easy solution for this problem of scalability?

Taking an existing WCF Service and then implementing a queuing system where possible. This way the client gets an instantaneous response, leaving the main WCF Service's resources free to process the Operation Contracts that can't be queued.

    How would you go about doing this?

You could start out by writing a Windows Service that constantly monitors a set of SQL tables, XML files etc., depending on your situation. To visualize this:

[caption id="attachment_1895" align="aligncenter" width="300"]Queue Based Architecture (3/7/2013)[/caption]

In a recent project at work, in addition to a Windows Service, I added another database and another WCF Service to help distribute the work. The main idea: for each big operation that is typically a resource-intensive task, offload it to another service, with the option to move it to an entirely different server. A good point to make here is that the connection between WCF Services is done via binary encoding, not JSON or XML.
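To make the Windows Service side of that diagram a little more concrete, a very stripped-down polling loop might look like the sketch below; QueueRepository and its methods are hypothetical stand-ins for however you read pending work out of your queue table:

[csharp]
using System;
using System.ServiceProcess;
using System.Threading;

public class QueueProcessingService : ServiceBase {
    private Timer _pollTimer;

    protected override void OnStart(string[] args) {
        // Poll the queue every 15 seconds; the interval is arbitrary
        _pollTimer = new Timer(ProcessQueue, null, TimeSpan.Zero, TimeSpan.FromSeconds(15));
    }

    private void ProcessQueue(object state) {
        // The long-running work happens here, off the main WCF Service
        foreach (var item in QueueRepository.GetPendingItems()) {
            QueueRepository.Process(item);
        }
    }

    protected override void OnStop() {
        _pollTimer.Dispose();
    }
}
[/csharp]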

    Increase your code sharing between platforms

    Something that has become more and more important for me as I add more platforms to my employer's main application is code reuse. This has several advantages:
1. Updates to one platform affect all of them - less work and fewer problems, since you no longer have to remember to update every platform when an addition, change or fix occurs
    2. For a single developer team like myself, it is a huge time saving principle especially from a maintenance perspective

    What can you do?

In the last couple of months there have been great new approaches to code re-use. A great way to start is to create a Portable Class Library, or PCL. PCLs can be used to create libraries to be compiled by Windows Phone 7/8, ASP.NET, MVC, WinForms, WPF, WCF, MonoDroid and many other platforms. All but MonoDroid are built in; however, I recently went through how to Create a Portable Class Library in MonoDroid. The best thing about PCLs is that your code is entirely reusable, so you can put your WCF Service proxy(ies), common code such as constants, etc. in them. The one thing to keep in mind is to follow the practice of not embedding your business, presentation and data layers in your applications.
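As a rough illustration of the kind of code that belongs in the PCL - shared constants and a thin wrapper around the generated WCF proxy - consider the sketch below; MainServiceClient and the Authenticate operation are assumed names, not from an actual project:

[csharp]
using System;

// Shared constants compiled into every platform's build
public static class AppConstants {
    public const string ServiceEndpoint = "http://example.com/MainService.svc"; // assumed URL
}

// Thin wrapper around the generated WCF proxy so the platform projects only wire up UI
public class ServiceGateway {
    public void Login(string username, string password, Action<bool> onCompleted) {
        var client = new MainServiceClient(); // generated proxy name is an assumption

        client.AuthenticateCompleted += (s, e) => onCompleted(e.Result);
        client.AuthenticateAsync(username, password);
    }
}
[/csharp]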
I had an unusual issue come to my attention this week in regards to using Telerik's RadGrid for displaying a long list of items, in this case 83. With the pagination size set to 100, as expected the page height grew well beyond the initial 425px height for this particular RadPageView. On this particular RadGrid my far right hand column had an edit column in which a RadWindow would open to a one-field, two-button page. Redirecting to an entirely new page made no sense in this situation, which is why I went with the RadWindow control in the first place. Before I dive into the issue and the solution I came up with, you can download the full source code for this post here. For this example I am using a pretty common situation, listing all of the users with their email address and "IDs":

[caption id="attachment_1791" align="aligncenter" width="300"]Telerik RadGrid with a PageSize of 10[/caption]

With the Edit User LinkButton opening a RadWindow indicating you can edit the user's ID:

[caption id="attachment_1794" align="aligncenter" width="300"]Telerik RadGrid with RadWindow[/caption]

So where does the problem lie? When you have a high PageSize, or any content that expands far beyond what is visible initially, and you open your RadWindow:

[caption id="attachment_1795" align="aligncenter" width="300"]Telerik RadGrid with PageSize set to 50[/caption]

The RadWindow appears where you would expect it to if you were still at the top of the page. So can you fix this so the RadWindow appears in the center of the visible area of your browser no matter how far down you are? In your OnItemDataBound code-behind:
protected void rgMain_OnItemDataBound(object sender, GridItemEventArgs e) {
    if (!(e.Item is GridDataItem)) {
        return;
    }

    var linkButton = (LinkButton)e.Item.FindControl("lbEdit");
    var user = ((e.Item as GridDataItem).DataItem is USERS ? (USERS)(e.Item as GridDataItem).DataItem : new USERS());

    linkButton.Attributes["href"] = "javascript:void(0);";
    linkButton.Attributes["onclick"] = String.Format("return openEditUserWindow('{0}');", user.ID);
}
The important line here is linkButton.Attributes["href"] = "javascript:void(0);". Something else I choose to do in these scenarios where I have a popup is to offer the user a cancel button and a save button, but only refresh the main window object that needs to be updated - in this case a RadGrid. To achieve this, you need to pass an argument back to the RadWindow from your popup ASPX page to indicate when a refresh is necessary. The ASPX for your popup:
<div style="width: 300px;">
    <telerik:RadAjaxPanel runat="server">
        <telerik:RadTextBox runat="server" Width="250px" Label="UserID" ID="rTxtBxUserID" />
        <div style="padding-top: 10px;">
            <div style="float:left;">
                <asp:Button ID="btnCancel" runat="server" Text="Cancel" OnClientClick="Close();return false;" />
            </div>
            <div style="float:right">
                <asp:Button ID="btnSave" runat="server" OnClientClick="CloseAndSave(); return false;" OnClick="btnSave_Click" Font-Size="14px" Text="Save User" />
            </div>
        </div>
    </telerik:RadAjaxPanel>
</div>
The JavaScript in your ASPX Popup Page:

[jscript]
<telerik:RadScriptBlock ID="RadScriptBlock1" runat="server">
    <script type="text/javascript">
        function GetRadWindow() {
            var oWindow = null;

            if (window.radWindow) {
                oWindow = window.radWindow;
            } else if (window.frameElement.radWindow) {
                oWindow = window.frameElement.radWindow;
            }

            return oWindow;
        }

        function Close() {
            GetRadWindow().close();
        }

        function CloseAndSave() {
            __doPostBack("<%=btnSave.UniqueID %>", "");
            GetRadWindow().close(1);
        }
    </script>
</telerik:RadScriptBlock>
[/jscript]

Then in your main page's ASPX:

[jscript]
<telerik:RadScriptBlock ID="RadScriptBlock1" runat="server">
    <script type="text/javascript">
        function openEditUserWindow(UserID) {
            var oWnd = radopen('/edituser_popup.aspx?UserID=' + UserID, "rwEditUser");
        }

        function OnClientClose(oWnd, args) {
            var arg = args.get_argument();

            if (arg) {
                var masterTable = $find('<%= rgMain.ClientID %>').get_masterTableView();
                masterTable.rebind();
            }
        }
    </script>
</telerik:RadScriptBlock>
[/jscript]

With this, your RadGrid will only refresh if the user hits Save in your popup, as opposed to doing a costly full postback even if the user didn't make any changes. I hope that helps someone out there who struggled to achieve everything I mentioned in one fell swoop. Telerik has some great examples on their site, but occasionally it can take some time getting them all working together properly. As mentioned above, you can download the full source code for this solution here.
Recently I upgraded a fairly large Windows Forms .NET 4 app to the latest version of the Telerik Windows Forms Control Suite (2012.3.1211.40) and got a few bug reports from end users saying that when they performed an action that updated the Tree View control, it threw an exception. At first I thought maybe the Clear() function no longer worked as intended, so I tried the following:
if (treeViewQuestions != null && treeViewQuestions.Nodes != null && treeViewQuestions.Nodes.Count > 0) {
    for (int x = 0; x < treeViewQuestions.Nodes.Count; x++) {
        treeViewQuestions.Nodes[x].Remove();
    }
}
No dice. Digging into the error a bit further, I noticed the UpdateLine function was the root cause of the issue:

Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLine(TreeNodeLineElement lineElement, RadTreeNode node, RadTreeNode nextNode, TreeNodeElement lastNode)
   at Telerik.WinControls.UI.TreeNodeLinesContainer.UpdateLines()
   at Telerik.WinControls.UI.TreeNodeLinesContainer.Synchronize()
   at Telerik.WinControls.UI.TreeNodeElement.Synchronize()
   at Telerik.WinControls.UI.RadTreeViewElement.SynchronizeNodeElements()
   at Telerik.WinControls.UI.RadTreeViewElement.Update(UpdateActions updateAction)
   at Telerik.WinControls.UI.RadTreeViewElement.ProcessCurrentNode(RadTreeNode node, Boolean clearSelection)
   at Telerik.WinControls.UI.RadTreeNode.OnNotifyPropertyChanged(PropertyChangedEventArgs args)
   at Telerik.WinControls.UI.RadTreeNode.SetBooleanProperty(String propertyName, Int32 propertyKey, Boolean value)
   at Telerik.WinControls.UI.RadTreeNode.set_Current(Boolean value)
   at Telerik.WinControls.UI.RadTreeNode.ClearChildrenState()
   at Telerik.WinControls.UI.RadTreeNode.set_Parent(RadTreeNode value)
   at Telerik.WinControls.UI.RadTreeNodeCollection.RemoveItem(Int32 index)
   at System.Collections.ObjectModel.Collection`1.Remove(T item)
   at Telerik.WinControls.UI.RadTreeNode.Remove()

Remembering I had turned on the ShowLines property, I humored the idea of turning it off for the clearing/removing of the nodes and then turning it back on, like so:
treeViewQuestions.ShowLines = false;
treeViewQuestions.Nodes.Clear();
treeViewQuestions.ShowLines = true;
Sure enough, that cured the problem. The last word I got back from Telerik was that this is the approved workaround, but there is no ETA on a true fix. Hopefully that helps someone else out there.
Those wishing to get C++ into their new Windows Phone 8 application might find it difficult to get going as I did, therefore I am writing this how-to guide to help get you started. If you wish to skip ahead and simply download the complete Visual Studio 2012 solution/projects/source files, click here. You will need Visual Studio 2012 and the Windows Phone 8 SDK that was released on Tuesday of this week.

For this how-to, I chose to do a pretty simple implementation: take a C# string, pass it to a C++ WinRT library and have the C++ library return the number of characters in the string. Pretty simple, but definitely the foundation that will lead to many more interesting ways to utilize C++ going forward for both your projects and my own. The end result:

[caption id="attachment_1560" align="aligncenter" width="180"]End Result of C++ WinRT/Windows Phone 8 Application[/caption]

I won't go over the basic XAML in this how-to, as I'll assume you've had some XAML experience. To get started, create your Windows Phone project; I chose a basic Windows Phone App like so:

[caption id="attachment_1561" align="aligncenter" width="300"]Creating a Windows Phone App project in Visual Studio 2012[/caption]

Then we'll be adding our C++ WinRT library. Return to the Add Project screen (Control + Shift + N), scroll down to C++, Windows Phone and select Windows Phone Runtime Component like so:

[caption id="attachment_1573" align="aligncenter" width="300"]Creating a C++ Windows Phone WinRT project in Visual Studio 2012[/caption]

Now that we have both projects, let's dive into the C++ aspect of the project. Like a traditional C++ project, you still have a header and a source file. Not having kept up with the new C++ syntax, I found it a bit unusual, though surprisingly very similar to C#. This is a pretty standard class definition with the exception of the ref attribute in the class declaration. From what I've read, this is critical in allowing the class to be accessed via C#.

[cpp]
#pragma once

#include <string>

namespace CppWINRT {
    using namespace Windows::Foundation;
    using Platform::String;

    public ref class StringCharacterCounter sealed {
    public:
        unsigned int GetLength(String^ strToParse);
    };
}
[/cpp]

And our source file where I am converting the WinRT String to an STL wstring and getting the length. Note you don't need to do the conversion, I was simply seeing how using STL interacted with WinRT.

[cpp]
// CppWINRT.cpp
#include "pch.h"
#include "CppWINRT.h"

using namespace CppWINRT;
using namespace Platform;

unsigned int StringCharacterCounter::GetLength(String^ strToParse) {
    std::wstring stlString = strToParse->Data();

    return stlString.length();
}
[/cpp]

Now that our C++ code is done, let's add our reference in our Windows Phone 8 project. Luckily, we no longer have to use interop as in the past when having a C# application call out to C++ code, as described in my blog article earlier this month, PInvoke fun with C++ Library and WPF C# Application. The references are handled just as if you had a C# project/library.

[caption id="attachment_1565" align="aligncenter" width="300"]Add Reference to our C++ WinRT Library in our Windows Phone 8 Project[/caption]

Then notice the reference shows up like a normal library reference:

[caption id="attachment_1566" align="aligncenter" width="300"]Reference Added in our Windows Phone 8 Project[/caption]

With the reference in place, we can now begin to use our new C++ WinRT library. Simply type out the reference name like you would for a C# library and call the GetLength function we created earlier:
private void btnSubmit_Click(object sender, RoutedEventArgs e) {
    CppWINRT.StringCharacterCounter sccMain = new CppWINRT.StringCharacterCounter();

    txtBlockAnswer.Text = sccMain.GetLength(txtBxString.Text).ToString() + " characters were found in the string above";
}
Pretty simple and painless, no? Again, if you wish to download the complete solution, you can do so here. Please leave comments and suggestions; I will be posting more C++/WP8 articles as I progress through my own porting effort of jcBENCH.
In working on the new version of jcBench, I made a decision to continue having the actual benchmarking code in C++ (i.e. unmanaged code) to promote cross-platform deployments. With Windows Phone 8 getting native code support and my obsession with Silicon Graphics IRIX machines, I think this is the best route. That being said, the frontends for jcBench will still definitely be done in C# whenever possible. This brings me to my next topic: getting your C++ library code to be available to your C# application, whether that is a console app, WPF, Windows 8 etc. Surprisingly there is a lot of information out there on this, but none of the examples worked for me. With some trial and error I got it working and figured it might help someone out there. So in your C++ (or C) source file:

[cpp]
extern "C" {
    __declspec( dllexport ) float runIntegerBenchmark(long numObjects, int numThreads);

    float runIntegerBenchmark(long numObjects, int numThreads) {
        CPUBenchmark cpuBenchmark = CPUBenchmark(numObjects, numThreads);
        return cpuBenchmark.runIntegerBenchmark();
    }

    __declspec( dllexport ) float runFloatingPointBenchmark(long numObjects, int numThreads);

    float runFloatingPointBenchmark(long numObjects, int numThreads) {
        CPUBenchmark cpuBenchmark = CPUBenchmark(numObjects, numThreads);
        return cpuBenchmark.runFloatingPointBenchmark();
    }
}
[/cpp]

Notice the __declspec( dllexport ) function declaration; this is key to telling your C# (or any other language) that this function is exposed externally in the DLL. Something else to keep in mind is the difference in types between C++ and C#. A long in C++ on Windows, for instance, is an Int32 in the CLR. Keep that in mind if you get something like this thrown in your C# application:
    This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature
    Then in your C# code:
[DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern float runIntegerBenchmark(Int32 numObjects, int numThreads);

[DllImport("jcBenchCppLib.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern float runFloatingPointBenchmark(Int32 numObjects, int numThreads);
    To execute the function, call it like you would a normal function:
lblInteger.Content = runIntegerBenchmark(100000, 6) + " seconds";
Something that drives me crazy with software companies is that, from what I've seen, they do not properly handle cross-platform development. While at Microsoft's TechEd North America conference back in June this year, I spoke with several developers who were basically starting from scratch on each platform or doing some sort of convoluted dance of putting some code in a DLL and then referencing it for each platform. Done right, you can create iOS (iPad/iPhone), Android and Windows Phone 7.x applications with the only platform-specific code being taking the data and binding it to each platform's UI.

My approach is to leave the data and business logic where it should be (in your data center/cloud) and leave the presentation and interaction to the device (iPhone, iPad, Droid, Windows Phone etc.). To me, every developer should be applying this ideal. Platforms are coming and going so fast; wouldn't it suck if you just spent all of this time programming in a platform-specific language (like Objective-C on iOS) only to find out from your CEO that he or she promised a port of your application or game to Platform XYZ in a month?

    Having a true 3 tier architecture like in any software development should be embraced even if there is added startup time to get that first pixel displayed on your client device.

    My Spring 2012 developed Mobile architecture consists of:
    -SQL Server 2008 R2 Database
    -SQL Stored Procedures for nearly all I/O to the WCF Service
    -Serialized Struct Containers for database/business objects
    -Task Parallel Library usage for all conversions to and from the Serialized Structs to the SQL Stored Procedures
    -Operation Contracts for all I/O (ie Authentication, Dropdown Box objects etc)

    For example, assume you had a Help Ticket System in your Mobile Application. A Help Ticket typically has several 1 to many relationship Tables associated with it. For instance you could have a history with comments, status changes, multiple files attached etc. Pulling all of this information across a web service in multiple calls is costly especially with the latency involved with 3G and 4G connections. It is much more efficient to do one bigger call, thus doing something like this is the best route I found:

[Serializable]
public struct HT_BASE_ITEM {
    public string Description;
    public string BodyContent;
    public int CreatedByUserID;
    public int TicketID;
    public List<HT_COMMENT_ITEM> Comments;
    public RETURN_STATUS returnStatus;
}

public HT_BASE_ITEM getHelpTicket(int HelpTicketID) {
    using (SomeModel eFactory = new SomeModel()) {
        HT_BASE_ITEM htBaseItem = new HT_BASE_ITEM();
        getHelpTicketSP_Result dbResult = eFactory.getHelpTicketSP(HelpTicketID).FirstOrDefault();

        if (dbResult == null) {
            htBaseItem.returnStatus = RETURN_STATUS.NullResult;
            return htBaseItem;
        }

        htBaseItem.Description = dbResult.Description;
        // Setting the rest of the Struct's properties here

        return htBaseItem;
    }
}

public RETURN_STATUS addHelpTicket(HT_BASE_ITEM newHelpTicket) {
    using (SomeModel eFactory = new SomeModel()) {
        HelpTicket helpTicket = eFactory.HelpTicket.CreateObject();
        helpTicket.Description = newHelpTicket.Description;
        // Setting the rest of the HelpTicket Table's columns here

        eFactory.HelpTicket.AddObject(helpTicket);
        eFactory.SaveChanges();

        // Error handling here otherwise return Success back to the client
        return RETURN_STATUS.SUCCESS;
    }
}
As you can see, the input and output are very clean. If more functionality is desired, i.e. a new field to capture, you update the struct and the input/output functions in the WCF Service, update the WCF reference in your device(s) and add the field to the UI on each device. Very quick and easy in my opinion.

I've gotten into the habit of adding an Enum property to my returned objects to cover the possibility of returning null or hitting some other problem during the data grabbing or setting operations in my WCF Services. It makes tracking down bugs a lot easier, and when an error occurs I simply record it to a SQL table and add a front end inside the main ASP.NET Web Application showing the logged in user, the version of the client app and the version of the WCF service (captured via AssemblyInfo). End users aren't the most reliable at reporting issues, so being proactive, especially in the mobile realm, is key from what I've found.
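For reference, the status enum used in the snippets above would look roughly like this - only SUCCESS and NullResult actually appear earlier; the other members are just examples of the kind of values I mean:

[csharp]
public enum RETURN_STATUS {
    SUCCESS,          // operation completed normally
    NullResult,       // the stored procedure returned no rows
    ValidationFailed, // example only - input did not pass server side checks
    UnknownError      // example only - logged server side with client/service versions
}
[/csharp]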

    This approach does take some additional time upfront, but I was able to create a 100% feature to feature port to Windows Phone from iOS in 3 days for a fairly complex application because I only had to worry about creating a good looking UI in XAML and hooking up my new UI to those previously existing Operation Contracts in my WCF Service.

    This architecture probably won't work for everyone, but I took it a step further for a recent enterprise application where there were several ASP.NET, WinForm and Classic SOAP Services on various .NET versions. An easy solution would have been to simply create a common DLL with the commonly used functionality and reference that in the other platforms. The problem with that being, a fix would need to be deployed to all of the clients and I haven't yet tried a .NET 4.5 Class Library in a 1.1 ASP.NET solution though I can't imagine that would work too well if at all. Creating all of the functionality in a WCF Service and having the clients consume this service has been a breeze. One fix in the WCF Service generally fixes all of the platforms, which is great. Those looking to centralize business logic and database access should really look to this approach.

A pretty common task I run across is counting the number of occurrences of a specific string, Primary Key ID etc. A few examples: checking a valid username/password combination, or checking for an existing value in the database to prevent duplicate/redundant data. If there were no joins involved I would typically just do something like the following:
public bool DoesExist(string someValue) {
    using (SomeEntity eFactory = new SomeEntity()) {
        return eFactory.SomeTable.Where(a => a.Value == someValue).Count() > 0;
    }
}
Or use the PLINQ version if there were a considerable number of rows, since for smaller tables the overhead involved in PLINQ would negate any performance advantage:
public bool DoesExist(string someValue) {
    using (SomeEntity eFactory = new SomeEntity()) {
        return eFactory.SomeTable.AsParallel().Where(a => a.Value == someValue).Count() > 0;
    }
}
    However if there were multiple tables involved I would create a Stored Procedure and return the Count in a Complex Type like so:
public bool DoesExist(string someValue) {
    using (SomeEntity eFactory = new SomeEntity()) {
        return eFactory.SomeTableSP(someValue).FirstOrDefault().Value > 0;
    }
}
Intrigued by what the real performance impact was across the board, and to figure out what made sense depending on the situation, I created a common scenario, a Users table like so:

[caption id="attachment_1377" align="aligncenter" width="267"]Users SQL Server Table Schema[/caption]

I populated this table with random data, from 100 to 4,000 rows, and ran the above coding scenarios against it, averaging 3 separate runs to rule out any fluke scores. In addition, I tested looking for the same value 3 times and a random value 3 times to see if the row's position in the table would affect performance (whether it was near the end of the table or closer to the beginning). I should note this was tested on my HP DV7 laptop that has an A10-4600M (4x2.3ghz CPU) running Windows 8 x64 with 16GB of RAM and a SanDisk Extreme 240GB SSD.

[caption id="attachment_1378" align="aligncenter" width="300"]LINQ vs PLINQ vs Stored Procedure Count Performance Graph[/caption]

The most interesting aspect for me was the consistent performance of the Stored Procedure across the board no matter how many rows there were. I imagine the results are the same for 10,000, 20,000 etc.; I'll have to do those tests later. In addition, I imagine as soon as table joins come into the picture the difference between a Stored Procedure and a LINQ query would be even greater. So bottom line: use a Stored Procedure for counts. The extra time to create a Stored Procedure and import it into Visual Studio (especially in Visual Studio 2012, where it automatically creates the Complex Type for you) is well worth it.
Not that I give much credence to the Windows Experience Index, but here is my score with the addition of the 16GB of Corsair DDR3-1600 and the SanDisk Extreme 240GB SSD:

[caption id="attachment_1273" align="aligncenter" width="300"]Windows Experience Index for my HP DV7-7010US[/caption]

What is interesting in those numbers is that my older Phenom II P920 (4x1.6ghz) got rated just under that in the CPU results. After using the laptop for a week now, it definitely feels and responds orders of magnitude faster than my older Dell. So I turned to jcBench for a much clearer picture (I included my C-50 netbook for comparison). Integer results (note lower is better; the legend indicates the number of threads tested):

[caption id="attachment_1274" align="aligncenter" width="300"]AMD C-50 vs AMD Phenom II P920 vs AMD A10-4600M in Integer Performance[/caption]

Looking at these numbers, the results make a lot more sense. Neither the Phenom II nor the C-50 have a "Turbo" mode, in which the unused cores get turned down and the used cores get ramped up. For instance, the A10 in my DV7 will ramp up to 3.2ghz from 2.3ghz. Thus the interesting result of 2 threads on the Phenom II versus 1 thread on the A10: nearly equal time (2x1.6ghz vs 1x3.2ghz, effectively). I will run the floating point tests on my P920 tonight and update the post, but I expect it will be fairly similar to the integer results. However, the shared FPU on the Trinity CPUs should make it more interesting, since the Phenom II had 1 FPU per core.
    I've been working on an OpenCL WPF version of jcBench for the last 2 or 3 weeks in between life, thus the lack of posting. However this morning at a nice time of 6:30 AM, I found a new tool to assist in OpenCL development, the AMD APP Kernel Analyzer. In my code for testing I've just been setting my OpenCL kernel program to a string and then sending it to the ComputeProgram function like so:
string clProgramSource = @"
__kernel void computePythag(global long * num, global double * thirdSide) {
    size_t index = get_global_id(0);

    for (long x = 2; x < num[index] + 2; x++) {
        for (long y = x; y < num[index] + 2; y++) {
            double length = sqrt((double)(x * x) + (double)(y * y));
            length *= length;
            thirdSide[index] = length;
        }
    }

    return;
}";

ComputeProgram program = new ComputeProgram(_cContext, clProgramSource);
program.Build(null, null, null, IntPtr.Zero);
This is problematic, as any error isn't captured until runtime in the best case scenario, or you just get a BSOD (as I did last night). This is where AMD's KernelAnalyzer comes into play.

[caption id="attachment_1224" align="aligncenter" width="300" caption="AMD APP Kernel Analyzer"][/caption]

Note the syntax highlighting and, if there were any errors during compilation, the Compiler Output window, like in any other development tool. An extra feature whose usefulness I only just realized is the ability to target different generations/platforms of AMD GPUs. I knew the R7XX (4000 series Radeon HD) only had OpenCL 1.0 support, but I naively didn't realize that the same program wouldn't compile cleanly across the board. Luckily I still have 2 R7XX series GPUs in use (one in my laptop and another in my secondary desktop), but it is interesting nonetheless. Definitely more to come on the OpenCL front tonight...
Just finished getting the features for jcBench 0.2 completed. The big addition is the separate testing of integer and floating point performance. The reason for this addition is that I heard years ago that the size of Level 2 cache directly affected the performance of floating point operations. You would always hear of the RISC CPUs having several megabytes of cache, while my first 1ghz Athlon (Thunderbird) in December 2000 only had 256kb. As I get older, I get more and more scrupulous about things I hear now or had heard in the past, thus the need to prove it to myself one way or the other. I'm still working on re-running the floating point tests, so those will come later today, but here are the integer performance results. Note the y-axis is the number of seconds taken to complete the test, so lower is better.

[caption id="attachment_1097" align="aligncenter" width="300" caption="jcBench 0.2 integer comparison"][/caption]

It's kind of a wide range of CPUs, ranging from a netbook CPU in the C-50, to a mobile CPU in the P920, to desktop CPUs. The differences, based on my current findings, vary much more greatly with floating point operations. A few key things I got from this data:
1. Single-threaded performance across the board was ridiculously slow, even with AMD's Turbo Core technology that ramps up a core or two and slows down the unused cores. Another unsettling fact for developers who continue to not write parallel programs.
    2. The biggest jump was from 1 thread to 2 threads across the board
3. The MIPS R14000A 600mhz CPU is slightly faster than a C-50 in both the single and 2 threaded tests. I finally found a very nearly equal comparison; I'm wondering if Turbo Core on the C-60 would bring it in line.
4. NUMAlink really does scale. Even over the no-longer-considered-fast NUMAlink 3 connection, scaling the test across 2 Origin 300s using all 8 CPUs really did increase performance (44 seconds versus 12 seconds).
    More to come later today with floating point results...
Even in 2012, I find myself exporting large quantities of data for reports or other needs requiring additional aggregation or manipulation in C# that doesn't make sense to do in SQL. You've probably done something like this in your code since your C or C++ days:
string tmpStr = String.Empty;

foreach (contact c in contacts) {
    tmpStr += c.FirstName + " " + c.LastName + ",";
}

return tmpStr;
    And most likely doing some manipulation otherwise it would probably make more sense to simply concatenate the string in your SQL Query itself. After thinking about it some more, I considered the following code instead:
return string.Join(",", contacts.Select(a => a.FirstName + " " + a.LastName));
Simple, clean and faster? On 10,000 Contact Entity objects (averaged across 3 test runs):

Traditional Method - 1.4050804 seconds
Newer Method - 0.0270016 seconds

About 50 times faster with the newer method. What about an even larger dataset of 100,000?

Traditional Method - 151.0996424 seconds
Newer Method - 0.09200503 seconds

Over 1,600 times faster with the newer method. Now what about a smaller set of 1,000?

Traditional Method - 0.0410024 seconds
Newer Method - 0.0160009 seconds

About 2.5 times faster. In visual terms:

[caption id="attachment_1081" align="aligncenter" width="300" caption="Traditional vs Newer Method Test Results"][/caption]

This is far from a conclusive, in-depth test, but for larger data sets, or in a high traffic/high demand scenario (like a WCF call that returns a delineated string, for instance), string.Join should be used instead. That being said, the data should be formatted properly ahead of time and any possible error (null values etc.) should be handled as a precondition to using string.Join. For me, it really got my mind thinking about other small blocks of code that I had been stagnantly using over the years that could speed up intensive tasks, especially with the size of a lot of the results I parse through at work.
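For anyone wanting to reproduce numbers like these, a Stopwatch averaged over a few runs is enough; the post doesn't include the exact harness, but a rough sketch could look like this (contacts being whatever collection you're flattening):

[csharp]
using System;
using System.Diagnostics;

static double AverageSeconds(Action action, int runs = 3) {
    double total = 0;

    for (int i = 0; i < runs; i++) {
        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();

        total += sw.Elapsed.TotalSeconds;
    }

    return total / runs;
}

// Usage:
// double newer = AverageSeconds(() => string.Join(",", contacts.Select(a => a.FirstName + " " + a.LastName)));
[/csharp]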
Found this blog post from 3/14/2012 by Stephen Toub on MSDN, which answers a lot of questions I had, and it was nice to have an approach I was considering earlier validated:
    Parallel.For doesn’t just queue MaxDegreeOfParallelism tasks and block waiting for them all to complete; that would be a viable implementation if we could assume that the parallel loop is the only thing doing work on the box, but we can’t assume that, in part because of the question that spawned this blog post. Instead, Parallel.For begins by creating just one task. When that task is executed, it’ll first queue a replica of itself, and will then enlist in the processing of the loop; at this point, it’s the only task processing the loop. The loop will be processed serially until the underlying scheduler decides to spare a thread to process the queued replica. At that point, the replica task will be executed: it’ll first queue a replica of itself, and will then enlist in the processing of the loop.
So based on that response, at least in the current implementation of the Task Parallel Library in .NET 4.x, the approach is to slowly create parallel tasks as resources allow, forking off new ones as soon as, and for as long as, the scheduler can spare threads for them.
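If you do want to put a ceiling on how far that ramp-up goes, ParallelOptions is the knob the quote alludes to; a small example (DoWork is a placeholder for the per-iteration work):

[csharp]
using System;
using System.Threading.Tasks;

var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };

Parallel.For(0, 1200, options, i => {
    // The scheduler still ramps up gradually; MaxDegreeOfParallelism only caps it
    DoWork(i); // placeholder
});
[/csharp]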
I've been diving into multi-threading the last couple of nights, but not in C# like I had previously - instead, in C. Long ago, I had played with SDL's built-in threading when I was working on the Infinity Project. Back then, I had just gotten a dual Athlon-XP Mobile (Barton) motherboard, so it was my first chance to play with multi-cpu programming. Fast forward 7 years: my primary desktop has 6 cores and most cell phones have at least 2 CPUs. Everything I've written this year has been with multi-threading in mind, whether it is an ASP.NET Web Application, Windows Communication Foundation Web Service or Windows Forms Application. Continuing my quest of "going back to the basics" from last weekend, I chose as my next quest to dive back into C and attempt to port jcBench to Silicon Graphics' IRIX 64-bit MIPS IV platform (it was on the original list of platforms). The first major hurdle was programming C like C#: not having classes, not having the keyword "new", the syntax for certain things being completely different (structs for instance), having to initialize arrays with malloc only to remember, after getting segmentation faults, that doing so carelessly will overload the heap (the list goes on). I've gotten "lazy" with my almost exclusive use of C#, it seems, declaring an "array" like:
ConcurrentQueue<SomeObject> cqObjects = new ConcurrentQueue<SomeObject>();
    After the "reintroduction" to C, I started to map out what would be necessary to make an equivalent approach to the Task Parallel Library, not necessarily the syntax, but how it handled nearly all of the work for you. Doing something like (note you don't need to assign the return value from the Entity Model, it could be simply put in the first argument of Parallel.ForEach, I just kept it there for the example):
// To ensure there would be no lazy-loading, use the ToList method
List<SomeEntityObject> lObjects = someEntity.getObjectsSP().ToList();

ConcurrentQueue<SomeGenericObject> cqGenericObjects = new ConcurrentQueue<SomeGenericObject>();

Parallel.ForEach(lObjects, result => {
    if (result.SomeProperty > 1) {
        cqGenericObjects.Enqueue(new SomeGenericObject(result));
    }
});
    A few things off the bat you'd have to "port":
    1. Concurrent Object Collections to support modification of collections in a thread safe manner
2. Iteratively knowing how many cores/CPUs are available and handling it, constantly allocating new threads as threads complete (i.e. with 6 cores and 1,200 tasks, kick off at least 6 threads, handle when those threads complete, and "always" maintain a 6 thread count)
The latter, I can imagine, is going to be a decent-sized task in itself, as it will involve platform-specific system calls to determine the CPU count, breaking the task down dynamically and then managing all of the threads. At first thought, the easiest solution might simply be the following (sketched in C# terms after the list):
    1. Get number of CPUs/Cores, n
    2. Divide number of "tasks" by the number cores and allocate those tasks for each core, thus only kicking off n threads
    3. When all tasks complete resume normal application flow
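In C# terms (in C it would be sproc/pthreads plus a platform-specific CPU count call), that naive static-partitioning plan might look like the sketch below; workItems and processItem stand in for whatever your task list and per-item work are:

[csharp]
using System;
using System.Collections.Generic;
using System.Threading;

static void RunInStaticChunks<T>(List<T> workItems, Action<T> processItem) {
    int cpuCount = Environment.ProcessorCount;                     // 1. get number of CPUs/cores
    int chunkSize = (workItems.Count + cpuCount - 1) / cpuCount;   // 2. divide the tasks evenly

    var threads = new List<Thread>();

    for (int i = 0; i < cpuCount; i++) {
        int start = i * chunkSize;
        int end = Math.Min(start + chunkSize, workItems.Count);

        var t = new Thread(() => {
            for (int j = start; j < end; j++) {
                processItem(workItems[j]);
            }
        });

        threads.Add(t);
        t.Start();
    }

    threads.ForEach(t => t.Join());                                // 3. resume normal flow
}
[/csharp]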
The problem with that (or at least one of them) is that if the actual data for certain objects is considerably more complex than for others, you could have 1 or more CPUs finishing before the others, which would be wasteful. You could, I guess, infer based on a sampling of the data: maybe kick off 1 thread to "analyze" the data at various indexes in the passed-in array, calculate the average time taken to complete, then anticipate the variation in task completion time to more evenly space out the tasks. You should also take into account current CPU utilization, as many operating systems keep operating system tasks affinitized to 1 CPU, so giving CPU 1 (or whichever CPU carries the operating system load) fewer tasks to begin with might make more sense to truly optimize the threading "manager". Hopefully I can dig up some additional information on how the TPL allocates its threads to possibly give a 3rd alternative, since I've noticed it handles larger tasks very well across multiple threads. I will definitely post back with my findings...
    After having used PLINQ and the Concurrent collections for nearly 2 months now, I can say without a doubt, it is definitely the way of the future. This last week I used it extensively in writing a WCF Service that manipulated a lot of data and needed to return it to an ASP.NET client very quickly. And on the flip side it needed to execute a lot of SQL Insertions based on business logic pretty quickly. As of February 25th, 2012, I think the best approach to writing a data layer is:
    1. Expose all Data Layer access through a WCF Service, ensuring a clear separation between UI and Data Layers
2. Use ADO.NET Entity Models tied to SQL Stored Procedures that return Complex Types for objects, rather than doing a .Where(a => a.Active).ToList()
3. Process larger result sets with PLINQ, using Concurrent Collections (ie ConcurrentQueue or ConcurrentDictionary), and return them to the client (ASP.NET, WP7 etc) - see the sketch below
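A condensed illustration of points 2 and 3 (the entity, stored procedure and item names here are placeholders, not from a real project):

[csharp]
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public List<CUSTOMER_ITEM> GetActiveCustomers() {
    using (var eFactory = new SomeEntity()) {
        var results = new ConcurrentQueue<CUSTOMER_ITEM>();

        // Stored procedure returns a Complex Type; PLINQ fans the mapping out across cores
        eFactory.getActiveCustomersSP().AsParallel().ForAll(row =>
            results.Enqueue(new CUSTOMER_ITEM { ID = row.ID, Name = row.Name }));

        return results.ToList();
    }
}
[/csharp]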
    Next step in my opinion would be to add in intelligent App Fabric caching like what Smarty Template Engine did for PHP. Just a clean way to cache pages, while providing flexible ways to invalidate the cache. I am so glad I found that back in 2006 when I was still doing a lot of PHP work.
    Had a fun time today getting a picture taken from an Android 2.3.4 HTC Vivid to my Mobile WCF Platform. Oddly enough, I could not find any tutorials on it for Monodroid like there are for MonoTouch. Piecing together several stackoverflow posts, I finally figured it out. Here is a possible solution (most likely not the best): At the top of your class, add the following:
private string _imageUri;
private ImageView imageView;

private Boolean isMounted {
    get {
        return Android.OS.Environment.ExternalStorageState.Equals(Android.OS.Environment.MediaMounted);
    }
}
    Inside your Button Click Event:
var uri = ContentResolver.Insert(isMounted ? Android.Provider.MediaStore.Images.Media.ExternalContentUri : Android.Provider.MediaStore.Images.Media.InternalContentUri, new ContentValues());

_imageUri = uri.ToString();

var i = new Intent(Android.Provider.MediaStore.ActionImageCapture);
i.PutExtra(Android.Provider.MediaStore.ExtraOutput, uri);

StartActivityForResult(i, 0);
    Right below your Click Event function (or anywhere inside the Activity Class you're in):
protected override void OnActivityResult(int requestCode, Result resultCode, Intent data) {
    if (resultCode == Result.Ok && requestCode == 0) {
        imageView = FindViewById<ImageView>(Resource.Id.ivThumbnail);
        imageView.DrawingCacheEnabled = true;
        imageView.SetImageURI(Android.Net.Uri.Parse(_imageUri));

        btnUploadImage.Visibility = ViewStates.Visible;
    }
}
    Then inside your "Upload Button Click" function:
Bitmap bitmap = imageView.GetDrawingCache(true);

MemoryStream ms = new MemoryStream();

// Note anything less than 50 will result in very pixelated images from what I've seen
bitmap.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms);

// At this point set your Byte[] variable/property with ms.ToArray();
// for instance I have a SyncFile object with a FileData Property, so I use
// SyncFile sFile = new SyncFile() { FileData = ms.ToArray() };
Effectively this code captures a picture, puts it in an ImageView as a thumbnail on the Activity, and then, upon hitting your Upload button, converts the image into a byte array after compressing it (or not, as in my case), and from there calls your WCF service upload function. Hopefully that helped someone out.
Monodroid gave me headaches this afternoon while trying to mimic the Pivots of Windows Phone 7 on Android using the TabActivity. You would think you could simply access an Activity after attaching it to a TabActivity. The answer is you can, but figuring out how on Monodroid will make your hair go gray. After digging through the Java-based Android documentation I finally figured it out. I don't know if this is the right way, but it works. Declare an enumeration with each of your tabs:
public enum TABS {
    BasicInformation = 0,
    DetailInformation = 1,
    OptionalInformation = 2
};
Then add an abstract class for each of your Activity objects to inherit from:
public abstract class MyDroidActivity : Activity {
    public abstract bool SaveActivity();

    // Any other custom code you had
}
Then inside your inherited Activity:

public class BasicInfoActivity : MyDroidActivity {
    public override bool SaveActivity() {
        // Error handling to return false if, for instance, the fields weren't populated
        return true;
    }
}
    Then in your TabActivity Class:
private void SaveAllTabs() {
    TabHost.CurrentTab = (int)TABS.BasicInformation;

    BasicInfoActivity biTab = (BasicInfoActivity)LocalActivityManager.GetActivity(TabHost.CurrentTabTag);
    if (!biTab.SaveActivity()) {
        return;
    }

    // And continue with your other Tabs
}
Not the most elegant, but it works. The inherited class and enumeration are not necessary, but they help keep things in order, especially if you have a larger application, in my opinion.
Last night I was working on my Silicon Graphics Origin 300 and suffering with an old version of Mozilla circa 2005, as seen below:

[caption id="attachment_896" align="aligncenter" width="300" caption="Mozilla 1.7.12 on IRIX"][/caption]

I started wondering: these machines can function as a web server, MySQL server, firewall etc., especially my quad R14k Origin 300, yet web browsing is seriously lacking on them. Firefox 2 is available over at nekoware, but that is painfully slow. Granted, I don't use my Origin for web browsing, but when I was using an R12k 400mhz Octane as my primary machine a few years ago, as I am sure others around the world still are, it was painful. This problem isn't solely for those on EOL'd Silicon Graphics machines, but for any older piece of hardware that does everything decently except web browsing. Thinking back to the Amazon Silk platform: using less powerful hardware but a brilliant software platform, Amazon is able to deliver more with less. The problem arises for the rest of the market because of the diversity of the PC/Workstation space. The way I see it you've got 2 approaches to a "universal" cloud web renderer. You could either:
    1. Write a custom lightweight browser tied to an external WCF/Soap Web Service
    2. Write a packet filter inspector for each platform to intercept requests and return them from a WCF/Soap service either through Firefox Extensions or a lower level implementation, almost like a mini-proxy
Plan A has major problems because you've got various incarnations of Linux, IRIX, Solaris, VMS, Windows etc., all with various levels of Java and .NET/Mono support (if any), so a Java or .NET/Mono implementation is probably not the right choice. Thus you're left trying to make a portable C/C++ application. To cut down on work, I'd probably use a platform-independent library like gSOAP to handle the web service calls, but either way the amount of work would be considerable. Plan B I've never attempted before, but I would imagine it would be a lot less work than Plan A. I spent 2 hours this morning playing around with a WCF service and a WPF application doing something like Plan A.

[caption id="attachment_897" align="aligncenter" width="300" caption="jcW3CLOUD in action"][/caption]

But instead of writing my own browser, I simply used the WebBrowser control, which is just Internet Explorer. The Web Service itself is simply:
public JCW3CLOUDPage renderPage(string URL) {
    using (WebClient wc = new WebClient()) {
        JCW3CLOUDPage page = new JCW3CLOUDPage();

        if (!URL.StartsWith("http://")) {
            URL = "http://" + URL;
        }

        page.HTML = wc.DownloadString(URL);

        return page;
    }
}
    It simply makes a web request based on the URL from the client, converts the HTML page to a String object and I pass it into a JCW3CLOUDPage object (which would also contain images, although I did not implement image support). Client side (ignoring the WPF UI code):
private JCW3CLOUDReference.JCW3CLOUDClient _client = new JCW3CLOUDReference.JCW3CLOUDClient();

var page = _client.renderPage(url);
int request = getUniqueID();

StreamWriter sw = new StreamWriter(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html");
sw.Write(page.HTML);
sw.Close();

wbMain.Navigate(System.AppDomain.CurrentDomain.BaseDirectory + request + ".html");
It simply makes the WCF request based on the URL entered, returns the HTML and writes it to a temporary HTML file for the WebBrowser control to read from. Nothing special. You'd probably want to add handling for specific pages, images and caching, but this was about as far as I wanted to take the experiment. Hopefully it'll help someone get started on something cool. It does not handle requests made from within the WebBrowser control, so you would need to override that as well (see the sketch after the list below); otherwise only the initial request would be returned from the "Cloud", while subsequent requests would be made normally. This project would be way too much for me to handle alone, but it did bring up some interesting thoughts:
    1. Handling Cloud based rendering, would keeping images/css/etc stored locally and doing modified date checks on every request be faster than simply pulling down each request fully?
    2. Would the extra costs incurred to the 3G/4G providers make it worthwhile?
3. Would zipping content and unzipping it outweigh the processing time on both ends (especially if there was very limited space on the client)?
    4. Is there really a need/want for such a product? Who would fund such a project, would it be open source?
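Going back to the note above about requests made from within the WebBrowser control: in WPF the hook would be the Navigating event, roughly like the sketch below (RenderThroughCloud stands in for the render-and-load-temp-file code shown earlier):

[csharp]
wbMain.Navigating += (s, e) => {
    // Let our own temp-file loads through untouched
    if (e.Uri == null || e.Uri.IsFile) {
        return;
    }

    e.Cancel = true;                      // stop the in-control navigation
    RenderThroughCloud(e.Uri.ToString()); // placeholder: re-issue the request via the WCF service
};
[/csharp]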
After playing around with Google Charts and doing some extensive C#/SQL integration with it for a dashboard last summer, I figured I'd give Telerik's Kendo a shot. If you're not familiar with Telerik, they produce very useful controls for WinForms, WPF, WP7 and ASP.NET (in addition to many others). If you do .NET programming, their products will save you time and money, guaranteed. That being said, I started work on the first module for jcDAL last night and wanted to add some cool bar graphs to the web interface for the analyzer. About 15 minutes of reading through one of their examples, and I had data coming over a WCF service into the Kendo API to display this:

[caption id="attachment_880" align="aligncenter" width="621" caption="jcDBAnalyzer Screengrab showcasing Kendo"][/caption]

So far so good; I'll report back with any issues, but so far I am very pleased. A lot of the headaches I had with Google Charts I haven't had yet (+1 for Telerik).
I was reading about C++ AMP the other night - basically a lightweight C++ library used to offload tasks to your GPU. I was going to brush off my C++ skills tonight, but luckily I don't have to after reading this article. WinRT makes using C++ libraries a breeze; finally you don't have to use P/Invoke. I'll definitely be playing around with this as soon as my Visual Studio 11 installation issues get resolved. A word to the wise: if you're installing Windows 8, install it with the Developer Tools. I made the mistake of installing the non-developer-tools Windows 8 Developer Preview, which doesn't install the necessary SDK for C++ compiling. If you're only doing C#, I had no trouble programming/compiling WPF and WCF apps.
After some additional work getting used to the XAML-ish layout, I'm done with the initial jcBENCH Android port. You can download it from here. You will need Android 2.2 or higher for it to run. [caption id="attachment_874" align="aligncenter" width="244" caption="jcBENCH Android"][/caption] I've only tested this on a dual-core 1.2ghz HTC Vivid; however, the results were interesting when comparing single-threaded and multi-threaded operations. Running it in multi-threaded mode was actually 3 times slower. I'm not sure whether the Task Parallel Library was implemented poorly on Monodroid or whether there is a bug in Mono's detection of how many cores/CPUs are present, but something isn't right. Single-threaded versus my 1.5ghz Snapdragon HTC Titan it lost out by ~23%, which makes sense given the 300mhz (roughly 20%) difference comparing single cores to each other. All that being said, I'm content with jcBENCH for the moment until I hear feedback or come up with more features to add. A quick way to sanity-check the core detection and TPL behavior is sketched below.
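As a quick, hypothetical diagnostic (not part of jcBENCH itself), something like this would show whether Mono reports the expected core count and whether a trivially parallel workload actually benefits from the Task Parallel Library:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class CoreCheck
{
    static void Main()
    {
        // How many cores does the runtime think it has?
        Console.WriteLine("ProcessorCount: " + Environment.ProcessorCount);

        const int iterations = 10000000;

        // Single-threaded baseline
        var sw = Stopwatch.StartNew();
        double total = 0;
        for (int i = 1; i <= iterations; i++)
            total += Math.Sqrt(i);
        sw.Stop();
        Console.WriteLine("Single-threaded: " + sw.ElapsedMilliseconds + "ms");

        // Same work split across the available cores via the TPL,
        // using thread-local partial sums to keep the comparison fair
        sw.Restart();
        object sync = new object();
        double parallelTotal = 0;
        Parallel.For(1, iterations + 1,
            () => 0.0,
            (i, state, local) => local + Math.Sqrt(i),
            local => { lock (sync) { parallelTotal += local; } });
        sw.Stop();
        Console.WriteLine("Parallel.For:    " + sw.ElapsedMilliseconds + "ms");
    }
}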
    Spent the afternoon reading IronPython documentation and finally got a chance to play with it a bit. I have to hand it to Microsoft for making it extremely easy:
// Requires references to IronPython and Microsoft.Scripting; namespace: IronPython.Hosting
var ipy = Python.CreateRuntime();
dynamic pyTest = ipy.UseFile(""); // path to the .py script to load
Only 2 lines of code to have access to a Python script. For me this opens new doors for scripting operations I still typically write in PHP because of the simplicity of opening Notepad and pushing the script out to an IIS server running PHP. Now I can write a few-line C# app with an interface to run whichever script I wish. Being curious about performance, I ported jcBENCH to use IronPython. To put it simply, the math operations pale in comparison to native C# math operations. In case anyone was curious, the actual overhead of creating the Python interpreter was negligible, even on an AMD C-50 (dual 1ghz) netbook with an OCZ Vertex 2 SSD. Porting it did bring up an interesting idea: this could work well for a workflow process, processing multiple workflows with PLINQ but with the flexibility to adjust the logic in script rather than in the C# code.
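A rough sketch of that workflow idea follows; the script name, the ProcessOrder function and the PLINQ fan-out are all illustrative assumptions of mine, not code from jcBENCH:

using System.Linq;
using IronPython.Hosting;

class WorkflowRunner
{
    static void Main()
    {
        // Load the workflow logic from a Python script (hypothetical file name)
        var runtime = Python.CreateRuntime();
        dynamic workflow = runtime.UseFile("OrderWorkflow.py");

        int[] orderIds = { 1, 2, 3, 4, 5 };

        // Fan the work items out with PLINQ while the per-item logic stays in the script;
        // assumes the script's ProcessOrder function is safe to call concurrently
        var results = orderIds
            .AsParallel()
            .Select(id => (string)workflow.ProcessOrder(id))
            .ToList();

        foreach (var result in results)
            System.Console.WriteLine(result);
    }
}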
After some more thought about jcBENCH and what its real purpose is, I am going to drop the Solaris and IRIX ports. Solaris has a Mono port, but I only have a Sun Blade 100, which has a single CPU; I'm not expecting a ton of performance from that. IRIX on the other hand, I have a Quad R14k 500 Origin 300, but no port of Mono exists. I could port it to Java, but then you really couldn't compare benchmarks against the Mono/.NET versions. I am about 50% done with the Android port and am just waiting for the OpenSuse 12.1 compatible MonoDevelop release so I can get started on the Linux port. After those 2 ports are completed I am thinking of starting something entirely new that I have been thinking about for the last couple years. Those that deal with a SQL database and write a data layer for his or her .NET project know the shortcomings of doing either:
    1. Using an ADO.NET Entity Model, adding your Tables, Views and Stored Procedures and then use that as is or extend it with some business logic
    2. Use an all custom data layer using the base DataTable, DataRows etc, wrap your objects with partial classes and create a "factory"
Both approaches have their pros and cons: the first takes a lot less time, but you also have a lot less control and it could be costly with all of the overhead. Both however will eventually fall apart down the road. The reason: they were built for one audience and one production server (or set of servers). How many times have you gone to your IT Manager and asked for a new database server because it was quicker than really going back to the architecture of your data layer? As time goes on, this can happen over and over again. I have personally witnessed such an event: a system was designed and built for around 50 internal users, on a single-CPU web server and a dual Xeon database server. Over 5 years later the code has remained the same, yet it's been moved to 6 different servers of ever increasing speed.

Times have changed and will continue to change; workloads vary from day to day and servers are swapped in and out. So my solution: an adaptive, dynamic data layer. One that profiles itself and uses that data to decide whether to run single-threaded LINQ queries or PLINQ queries, depending on whether the added overhead of PLINQ would outweigh the time it would take using only one CPU (see the sketch below). In addition, it would use Microsoft's AppFabric to cache the commonly used, intensive queries that maybe only get run once an hour while the data doesn't change for 24. This doesn't come without a price of course; having only architected this model in my head, I can't say for certain how much overhead the profiling will add. Over the next couple months I'll be developing this, so stay tuned. jcBENCH, as you might have guessed, was kind of an early test scenario for seeing how various platforms handled multi-threaded tasks of varying intensity.
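A minimal sketch of that LINQ-versus-PLINQ decision, assuming a simple item-count threshold that the profiling data would tune over time (the threshold value and the generic Run helper are placeholders of mine, not the eventual design):

using System;
using System.Collections.Generic;
using System.Linq;

public static class AdaptiveQuery
{
    // In the real data layer this would be tuned from recorded profiling runs;
    // here it is just a hard-coded placeholder.
    private static int _parallelThreshold = 10000;

    public static List<TResult> Run<TSource, TResult>(
        IEnumerable<TSource> source,
        Func<TSource, TResult> transform)
    {
        var items = source as ICollection<TSource> ?? source.ToList();

        // Small workloads: PLINQ's partitioning/merging overhead outweighs the benefit
        if (items.Count < _parallelThreshold)
            return items.Select(transform).ToList();

        // Large workloads: fan the transform out across the available cores
        return items.AsParallel().Select(transform).ToList();
    }
}

In the envisioned data layer the threshold would move up or down based on timings the layer records about its own queries, rather than being fixed.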
Found this just now going through some really old stuff. I originally posted this on July 7th, 2002; I'd probably do it quite a bit differently now in CS5, but someone might find it interesting, especially if they are on version 4.x or 5.x. Without further ado: About a week ago (5/25/02), I started work on a Star Wars movie. I knew I'd have to do lightsabers and force power effects, but doing them properly and making them look good took a long time the way I was doing it before I learned how to use masks. So you get the easy and better looking version. Have fun, and if you have any more questions just post them in the forum. [caption id="attachment_809" align="aligncenter" width="360" caption="How it should look when we are done"][/caption]
    • 1st step: Start After Effects (duh)
• 2nd step: Make a new composition (control+n), selecting the resolution that matches your footage/image
    • 3rd step: Add your image/footage (control+i)
    Now your screen should look like this: [caption id="attachment_810" align="aligncenter" width="640" caption="Starting point for the Lightsaber Composition"][/caption]
    • 4th step: Add a solid (control+y), make it white and your composition size
    • 5th step: Now in the timeline with the solid 1 highlighted hit the little icon that looks like an eye (like in the screen cap)
    [caption id="attachment_811" align="aligncenter" width="327" caption="Lightsaber Tutorial - Timeline after the 5th Step"][/caption]
    • 6th step: Hit G to bring up the pen mouse pointer, now put a point in each of the four vertices of your lightsaber
• 7th step: Now if you have your solid shown by clicking the little place where the eye was, your soon-to-be lightsaber should be solid white
    Now your screen should look like this: [caption id="attachment_814" align="aligncenter" width="640" caption="Lightsaber Composition #2"][/caption]
    • 8th step: Now make another solid (control+y), this time black, then in the timeline drag it so it's under the first solid with our lightsaber mask (like in the screen cap)
• 9th step: Then highlight your first solid (the lightsaber one) and hit control+d three times; you're duplicating it
    • 10th step: Now click the little arrow in the timeline right next to "Solid 1" and bring out the effects (like in the screen cap on the lower right)
    • 11th step: Then make the feathering value equal to 10
• 12th step: Repeat the previous step for each of the solids except Solid 2, increasing the feathering value by 10 each time
    Now your screen should look like this: [caption id="attachment_815" align="aligncenter" width="640" caption="Lightsaber Composition #3"][/caption] And your timeline should look like this: [caption id="attachment_816" align="aligncenter" width="561" caption="Lightsaber Composition Timeline #3"][/caption]
• 13th step: Now make another composition (control+n), then copy your first composition and your footage from the bin into the new timeline
• 14th step: You probably have a black screen with just your white glowing saber, huh? That's good
• 15th step: Now with your solid selected in the timeline, go to the menu and select layer->transfer mode->screen; now you should see your footage as well
    Now your screen should look like this: [caption id="attachment_820" align="aligncenter" width="640" caption="Lightsaber Composition #4"][/caption] Nobody has a white lightsaber right?
• 16th step: To add color, highlight your solid and then on the menu go to effect->adjust->color balance
    • 17th step: Now mess with the settings to get the color you want
• (Possibly) 18th Step: If you have more than 1 lightsaber, just repeat the steps for each one
    Now your screen should look like this: [caption id="attachment_821" align="aligncenter" width="640" caption="Lightsaber Composition #5"][/caption]
Finally got a Silicon Graphics Visual Workstation 330. Not a traditional Silicon Graphics by any means, but the case is neat and fulfills a nostalgic purpose. [caption id="attachment_29" align="aligncenter" width="550" caption="SGI VS 330 in box"][/caption] [caption id="attachment_31" align="aligncenter" width="550" caption="SGI VS 330 Sideview"][/caption] Not wanting to be limited to dual Pentium III 933mhz CPUs and no SATA support, I used a spare CPU, RAM and motherboard I had lying around. [caption id="attachment_32" align="aligncenter" width="550" caption="Asus M4A89GTD/USB3, Corsair XMS3 16gb, Athlon II X3 435 and Crucial 64gb"][/caption] In addition I swapped out the rear fan; being 11 years old, it was quite noisy. I had a spare CoolerMaster 120mm low-rpm Blue LED fan that fit perfectly in the back. All in all, a pretty smooth swap. Downloaded the Network Install CD of OpenSuse 12.1, installed it and was ready to go in under an hour. In its completed form: [caption id="attachment_33" align="aligncenter" width="550" caption="Completed swap of parts out of the SGI VS 330"][/caption] All that's left now is to de-scuff the case to bring it back closer to a new state.
It's been a very long time since I released something, so without further ado, I present jcBENCH. It's a floating point CPU benchmark. Down the road I hope to add further tests; this was just something I wrote on the airplane coming back from San Francisco. You'll need to have .NET 4 installed; if you don't have it, click here, or you can get it from Windows Update. Click here to download the latest version. I'll make an installer in a few days; for now just unzip it somewhere on your machine and run it. Upon running it, the results get automatically uploaded to my server; a results page will be created shortly.
Just getting started on a WCF Service that is going to be handling all of the business logic for the company I work for. With the amount of data involved, caching was a necessity. I was playing around with Membase last week but couldn't quite get it to work properly, and then started yesterday afternoon with AppFabric, Microsoft's own answer to caching (among other things). It installs pretty easily and the built-in IIS extensions are very cool; however, the setup isn't for the faint of heart. To sum it up:
    1. Install AppFabric on your SQL Server with all of the options
    2. Then install AppFabric on your Web Server and create your WCF and ASP.NET sites
3. From the PowerShell Cache console type: New-Cache YourCacheName (where YourCacheName is the name you want to call it, so it could be: New-Cache RandomTexels)
4. Verify it got created by running: Get-Cache. If you don't see it or get an error, make sure the AppFabric Caching Service is running: open up the Run window (Windows Key + R) and type services.msc. It should be one of the top items depending on your setup.
5. After configuring your other server for .NET 4 and the usual web site permissions and settings in IIS, run this command: Grant-CacheAllowedClientAccount DOMAIN\WEBSERVER$. Replace DOMAIN with your domain name (i.e. MOJO) and WEBSERVER with the physical name of your web server. So for instance: Grant-CacheAllowedClientAccount MOJO\BIGWS$ for a domain called MOJO and a web server called BIGWS.
There are plenty of code examples out there, but as I create the architecture for this WCF service I'll discuss my findings.
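In the meantime, here is a minimal sketch of reading and writing that cache from the service side, assuming the RandomTexels cache from the steps above; the ReportCache class, ReportData type and LoadExpensiveReport helper are hypothetical, while the cache calls themselves come from Microsoft.ApplicationServer.Caching:

using System;
using Microsoft.ApplicationServer.Caching;

public class ReportCache
{
    // DataCacheFactory is expensive to create; keep one around for the lifetime of the service
    private static readonly DataCacheFactory _factory = new DataCacheFactory();
    private static readonly DataCache _cache = _factory.GetCache("RandomTexels");

    public ReportData GetReport(string reportKey)
    {
        // Try AppFabric first
        var cached = _cache.Get(reportKey) as ReportData;
        if (cached != null)
            return cached;

        // Cache miss: run the expensive query (hypothetical helper), then cache it for an hour
        var report = LoadExpensiveReport(reportKey);
        _cache.Put(reportKey, report, TimeSpan.FromHours(1));
        return report;
    }

    private ReportData LoadExpensiveReport(string reportKey)
    {
        // Placeholder for the real data access
        return new ReportData();
    }
}

[Serializable]
public class ReportData { }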
Call me crazy, but I simply couldn't stand having an Intel machine running in the house. I had bought a dual-core Atom (D510) CPU/motherboard combo a few years ago, originally to run Mac OS X on. It bounced around between my NAS, HTPC, a Windows Media Center 7 extender and finally, a month or so ago, a firewall/gateway running ClearOS. Over the last weekend I realized I had a new AMD Sempron 140 still sealed, checked prices on the new 5-series Asus motherboards, saw one was only $60 on Amazon and picked up a 2gb G.Skill DDR3-1333 stick. As fate would have it, NewEgg sent over a Norco RPC-270 2U case. It took about an hour, but I got everything installed and up and running again on the new AMD platform. Below are the build pictures. [caption id="attachment_3981" align="aligncenter" width="225" caption="Empty Norco RPC-270 Case"][/caption] [caption id="attachment_3982" align="aligncenter" width="225" caption="Asus M5A78L AM3+ Motherboard Installed"][/caption] [caption id="attachment_3983" align="aligncenter" width="225" caption="Everything Installed"][/caption] [caption id="attachment_3984" align="aligncenter" width="300" caption="Gotta love AMD CPUs, unlocked the 2nd core without a blink of an eye"][/caption] [caption id="attachment_3985" align="aligncenter" width="300" caption="The stack continues to get bigger"][/caption]